This tells me that either there were very few commercial users of fine-tuned models, or they need to decommission the infrastructure to free up GPUs for more valuable projects.


The former seems very believable. And I bet a lot of the fine-tuned models still active are part of prototypes or experiments.

I assume if you reach out, they'll throw some credits at you.


If it really was a tiny number of users, they would publicly make a really good offer - for example: "Unfortunately, you will need to retune your models on top of GPT-4. OpenAI will do this for you for free, refund all the money you paid tuning your original model, and offer the new model at the same price as the original."

The extra trust gained by seeing another customer treated that way easily pays for a few credits for a small number of users.


OpenAI probably doesn't feel the need to pay to win publicity right now—they've been in the spotlight for as long as LLMs have been a thing, and GPT-4 is far ahead of competitors' offerings.


It’s about trust - not publicity. Trust is hard to earn back once broken, and there will be multiple offerings eventually.

For example, AWS was one of the first cloud providers. Now there are alternatives, but I still pick AWS because I trust them not to break my dependencies way more than, say, Google.


Yeah, but that sets a precedent.


It's just that the models available for fine-tuning are way behind GPT-4.

I get much better performance with "prompt tuning": when a question comes in, I search for the 30 most similar examples in the training set, send them to a non-tuned GPT along with the question, and get much better results than from the fine-tuned older models.
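
For anyone curious, here's a minimal sketch of that setup, assuming the openai Python client (>= 1.0) and embedding-based nearest-neighbor search. Only the "retrieve ~30 similar training examples and few-shot prompt a non-tuned model" idea comes from the comment above; the function names, placeholder data, and embedding model choice are illustrative:

```python
# Sketch of retrieval-based "prompt tuning": embed the training set once,
# then at question time retrieve the most similar examples and pack them
# into a few-shot prompt for a non-tuned chat model.
import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def embed(text: str) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-ada-002", input=text)
    return np.array(resp.data[0].embedding)

# Replace with your real (question, answer) training pairs.
train_examples = [
    ("What does error E123 mean?", "E123 means the widget is misconfigured."),
    ("How do I reset a widget?", "Hold the reset button for five seconds."),
]
train_vectors = np.stack([embed(q) for q, _ in train_examples])  # embed once

def answer(question: str, k: int = 30) -> str:
    # Cosine similarity between the question and every training example.
    q_vec = embed(question)
    sims = (train_vectors @ q_vec) / (
        np.linalg.norm(train_vectors, axis=1) * np.linalg.norm(q_vec)
    )
    top = np.argsort(sims)[-k:]  # indices of the k most similar examples

    # Present the retrieved pairs as few-shot demonstrations, then ask.
    messages = [{"role": "system", "content": "Answer in the style of the examples."}]
    for i in top:
        q, a = train_examples[i]
        messages.append({"role": "user", "content": q})
        messages.append({"role": "assistant", "content": a})
    messages.append({"role": "user", "content": question})

    resp = client.chat.completions.create(model="gpt-4", messages=messages)
    return resp.choices[0].message.content
```

The nice property versus fine-tuning is that the "training set" stays hot-swappable: you can add or remove examples without retraining anything, and the base model can be upgraded for free.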


There’s also the possibility that they weren’t seeing much ongoing usage of existing fine-tuned models, e.g., users tuning a model, running some batch of inputs, then abandoning the fine-tuned weights.



