* No function calling support yet
* Only 4k tokens, so can't use the full 16k token length.
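For anyone curious what fine-tuning gpt-3.5-turbo actually takes: the training data is a JSONL file where each line is a chat transcript in the same `messages` format the chat API uses. A minimal sketch of building one example (the file contents and filename here are illustrative, not from the announcement):

```python
import json

# One training example for chat fine-tuning: a "messages" list with
# system / user / assistant turns, serialized as a single JSONL line.
example = {
    "messages": [
        {"role": "system", "content": "You are a terse assistant."},
        {"role": "user", "content": "What is 2 + 2?"},
        {"role": "assistant", "content": "4"},
    ]
}

with open("train.jsonl", "w") as f:
    f.write(json.dumps(example) + "\n")
```

The file is then uploaded and referenced when creating a fine-tuning job against `gpt-3.5-turbo` (omitted here since it needs an API key).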
I really wish they'd share some info about whether we'll be able to fine-tune the multimodal capabilities of GPT-4 as well.
Is there a multimodal GPT-4 model in the wild? All I saw was the one example at launch.
Otherwise, InstructBLIP is the SOTA open-source model.
Literally the first sentence in the article:
> Fine-tuning for GPT-3.5 Turbo is now available, with fine-tuning for GPT-4 coming this fall.
This was my entire point. I did read the article.