

That's the opposite of what I'm saying. DeepSeek is the one under scrutiny, yet they are the only one to publish source code and training procedures of their model.
this has absolutely fuck all to do with anything i've said in the slightest, but i guess you gotta toss in the talking points somewhere
e: it's also trivially disprovable, but i don't care if it's actually true, i only care about headlines negative about AI
i can admit it's possible i'm being overly cynical here and it is just sloppy journalism on Raffaele Huang/his editor/the WSJ's part. but i still think it's a little suspect on the grounds that we have no idea how many times they had to restart training due to the model borking, plus other experiments and hidden costs, even before things like the necessary capex (which goes unmentioned in the original paper, though they note using a 2048-GPU cluster of H800s, which would put them at around $40m). i'm thinking in the mode of "the whitepaper exists to serve the company's bottom line"
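for the curious, the napkin math behind that ~$40m figure. the per-unit H800 price of roughly $20k is my assumption, not something the paper states:

# rough capex estimate for the cluster described in the DeepSeek paper
# assumes ~$20,000 per H800; actual procurement prices aren't public
gpus = 2048
price_per_gpu = 20_000  # USD, assumed
capex = gpus * price_per_gpu
print(f"estimated cluster capex: ${capex:,}")  # -> $40,960,000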
btw announcing my new V7 model that i trained for the $0.26 i found on the street just to watch the stock markets burn