
Yeah, it’s a copy of o1. Copying is easier than doing SOTA work.


How do you "copy" something like that if OpenAI did not disclose any of the details?


Use OAI to create synthetic data for your training, which is clearly what they are doing. This is why their models claim to be ChatGPT when asked.
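
To make that concrete, here's a minimal sketch of what a distillation-by-synthetic-data pipeline looks like (assuming the official openai Python SDK; the teacher model name and seed prompts below are placeholders, not anything DeepSeek has disclosed):

    # pip install openai
    import json
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Hypothetical seed prompts; a real pipeline would use millions.
    seed_prompts = [
        "Explain why the sky is blue to a 10-year-old.",
        "Write a Python function that merges two sorted lists.",
    ]

    with open("synthetic_train.jsonl", "w") as f:
        for prompt in seed_prompts:
            resp = client.chat.completions.create(
                model="gpt-4o",  # placeholder teacher model
                messages=[{"role": "user", "content": prompt}],
            )
            # Save (prompt, teacher answer) pairs for supervised
            # fine-tuning of the student model.
            f.write(json.dumps({
                "prompt": prompt,
                "completion": resp.choices[0].message.content,
            }) + "\n")

The student model is then fine-tuned on those pairs, which is also how the teacher's "I am ChatGPT" self-identification can leak into the student's training data.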


xAI did/does the same, but Grok is nowhere near as good. Perhaps a measure of talent is required to "copy" as well as DeepSeek does.


That's not how this works. o1's thinking trace is hidden, and that's what's valuable here, not the output.


So? Every other model maker is doing that, including OAI.

There's a lot more to making foundation models, and DeepSeek is very much punching above its weight.



