Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

One point is reliability, as others have mentioned. Another important point for me is censorship. Due to their political nature, the model seemed to be heavily censored on topics such as the CCP and Taiwan (R.O.C.).


To be fair, anthropic and openai censor heavily on a lot of subjects

1. profanity 2. slightly sexual content 3. "bad taste" joke

that is heavily linked to the fact that they are US-based company, so I guess all AI companies produce a AI model that is politically correct.


"ChatGPT reveals in its responses that it is aligned with American culture and values, while rarely getting it right when it comes to the prevailing values held in other countries. It presents American values even when specifically asked about those of other countries. In doing so, it actually promotes American values among its users," explains researcher Daniel Hershcovich, of UCPH’s Department of Computer Science."

https://di.ku.dk/english/news/2023/chatgpt-promotes-american...

So I don't see much difference, to be honest...


I was recently trying to use the ChatGPT API to build a tiny dataset for a small NLP classifier model and was surprised to find that even relatively benign words like "escort" are censored in their API. TBF, Anthropic seems to be a bit better in this regard.


Yeah. And Deepseek denies that a country (Taiwan) exists.

I'm not sure I can trust a model that has such a focused political agenda.


Although I haven’t used these new models. The censorship you describe hasn’t historically been baked into the models as far as I’ve seen. It exists solely as a filter on the hosted version. IOW it’s doing exactly what Gemini does when you ask it an election related question: it just refuses to send it to the model and gives you back a canned response.


This is incorrect - while it's true that most cloud providers have a filtering pass on both inputs and outputs these days, the model itself is also censored via RLHF, which can be observed when running locally.

That said, for open-weights models, this is largely irrelevant because you can always "uncensor" it simply by starting to write its response for it such that it agrees to fulfill your request (e.g. in text-generation-webui, you can specify the prefix for response, and it will automatically insert those tokens before spinning up the LLM). I've yet to see any locally available model that is not susceptible to this simple workaround. E.g. with QwQ-32, just having it start the response with "Yes sir!" is usually sufficient.


Not ideal but the use cases that info pop quizzes about the ccp aren’t exactly many

I’d prefer it rather not be censored out of principle but practically it’s a non issue


Chinese censorship is less than American censorship.

Have you tried asking anything even slightly controversial to ChatGPT?


it's 2025 and this is what we are still reading on HN forums lmao... if you are not a historian trying to get this model to write a propaganda paper that will earn you a spot in an establishment backed university then I see no reason why would this be a problem for anyone. Imagine that OpenAI finally reach AGI with o-99 and when you ask chatgpt-1200 about deepseek it spits out garbage about some social credit bullshit because that's what supposedly intelligent creatures lurking HN forums do!


It will then become the truth, unless the US and EU starts to loosen copyright, which is going to allow higher quality datasets to be ingested.




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: