One point is reliability, as others have mentioned. Another important point for me is censorship. Due to their political nature, the model seemed to be heavily censored on topics such as the CCP and Taiwan (R.O.C.).
"ChatGPT reveals in its responses that it is aligned with American culture and values, while rarely getting it right when it comes to the prevailing values held in other countries. It presents American values even when specifically asked about those of other countries. In doing so, it actually promotes American values among its users," explains researcher Daniel Hershcovich, of UCPH’s Department of Computer Science."
I was recently trying to use the ChatGPT API to build a tiny dataset for a small NLP classifier model and was surprised to find that even relatively benign words like "escort" are censored in their API. TBF, Anthropic seems to be a bit better in this regard.
Although I haven’t used these new models. The censorship you describe hasn’t historically been baked into the models as far as I’ve seen. It exists solely as a filter on the hosted version. IOW it’s doing exactly what Gemini does when you ask it an election related question: it just refuses to send it to the model and gives you back a canned response.
This is incorrect - while it's true that most cloud providers have a filtering pass on both inputs and outputs these days, the model itself is also censored via RLHF, which can be observed when running locally.
That said, for open-weights models, this is largely irrelevant because you can always "uncensor" it simply by starting to write its response for it such that it agrees to fulfill your request (e.g. in text-generation-webui, you can specify the prefix for response, and it will automatically insert those tokens before spinning up the LLM). I've yet to see any locally available model that is not susceptible to this simple workaround. E.g. with QwQ-32, just having it start the response with "Yes sir!" is usually sufficient.
it's 2025 and this is what we are still reading on HN forums lmao... if you are not a historian trying to get this model to write a propaganda paper that will earn you a spot in an establishment backed university then I see no reason why would this be a problem for anyone. Imagine that OpenAI finally reach AGI with o-99 and when you ask chatgpt-1200 about deepseek it spits out garbage about some social credit bullshit because that's what supposedly intelligent creatures lurking HN forums do!