
This is honestly pretty disappointing.

* No function calling support yet

* Only a 4k token context, so you can't use the full 16k context length.

I really wish they'd share some info on whether we'll be able to fine-tune the multimodality of GPT-4 as well.
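
For anyone who hasn't tried the new endpoint yet, here's a minimal sketch of kicking off a 3.5-turbo fine-tune with the pre-1.0 openai Python SDK (the API key and training file path are placeholders):

    import openai

    openai.api_key = "sk-..."  # placeholder key

    # Upload a JSONL file of chat-formatted training examples.
    # (The file may take a moment to process before it's usable.)
    training = openai.File.create(
        file=open("train.jsonl", "rb"),  # placeholder path
        purpose="fine-tune",
    )

    # Kick off the fine-tuning job. The resulting model keeps
    # gpt-3.5-turbo's 4k context window; there's no 16k variant yet.
    job = openai.FineTuningJob.create(
        training_file=training["id"],
        model="gpt-3.5-turbo",
    )
    print(job["id"], job["status"])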



> multimodality of GPT-4

Is there a multimodal GPT-4 model in the wild? All I saw was that one example at launch.


Yes, but only via Bing.com for now. The only party with official access through OpenAI's API is the BeMyEyes app for visually impaired users.

Otherwise, InstructBLIP is the SOTA open-source model.


Yes, on Bing.


> I really wish they'd share some info on whether we'll be able to fine-tune the multimodality of GPT-4 as well.

Literally the first sentence in the article:

> Fine-tuning for GPT-3.5 Turbo is now available, with fine-tuning for GPT-4 coming this fall.


Yes, but nowhere does it say anything about MULTIMODAL fine-tuning being supported or unsupported once GPT-4 fine-tuning becomes available.

This was my entire point. I did read the article.



