Hacker News

Can it call anything self hosted or ollama?


It should be possible using LiteLLM and a patch or a proxy.

https://github.com/BerriAI/litellm


That's on the roadmap


Can one enter their own OpenAI base URL and API key? (so we can use OpenAI-compatible things like OpenRouter or LM Studio)


Doesn't look like it: https://gitlab.com/literally-useful/voxos/-/blob/dev/voxos/s...

edit: shouldn't be hard to enable though


Yes, you can define your own key in the .env file, as a CLI argument to run.sh, or in your environment.

https://gitlab.com/literally-useful/voxos/-/blob/dev/.env?re...


That doesn't let me send requests to my local LiteLLM instance, though. You have to be able to configure the endpoint that requests are sent to as well, not just the key.
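For what it's worth, once the base URL is configurable this is all it takes: an OpenAI-compatible backend just needs the standard chat-completions path plus a bearer token. A minimal stdlib sketch (the localhost port, model name, and dummy key are assumptions, and the request is only built here, not sent):

```python
import json
import urllib.request

def chat_request(base_url: str, api_key: str, model: str, prompt: str) -> urllib.request.Request:
    """Build (but do not send) an OpenAI-compatible chat completion request."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",          # standard OpenAI-style path
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",   # local proxies often ignore the key
            "Content-Type": "application/json",
        },
    )

# Example: point the same request shape at a hypothetical local endpoint.
req = chat_request("http://localhost:11434", "dummy-key", "llama3", "hello")
```

Swapping between OpenAI, OpenRouter, LM Studio, or a LiteLLM proxy is then just a different `base_url` and key; the payload shape doesn't change.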


Nice. LiteLLM was just the thing I've been looking for and hoping to integrate.


Hell yeah. Good luck!


Do you know if there's anything out there like LiteLLM that includes OpenAI's whisper model? I took a look at the litellm package and it doesn't appear they support the audio module. :/


I'm not sure if it is _fully_ openai compatible, but whispercpp has a server bundled that says it is "OAI-like": https://github.com/ggerganov/whisper.cpp/tree/master/example...

I don't have any direct experience with its server, though; I've only played around with whisper locally, using scripts.
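For anyone trying it, the whisper.cpp server's examples use a multipart file upload. Here's a stdlib sketch that builds such a request (the `/inference` path, port, and field name are assumptions taken from the server's examples; verify against its README, and note the request is constructed but not sent):

```python
import urllib.request
import uuid

def transcribe_request(base_url: str, wav_bytes: bytes,
                       filename: str = "audio.wav") -> urllib.request.Request:
    """Build (but do not send) a multipart/form-data upload for a
    whisper.cpp-style transcription server."""
    boundary = uuid.uuid4().hex
    body = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="file"; filename="{filename}"\r\n'
        f"Content-Type: audio/wav\r\n\r\n"
    ).encode() + wav_bytes + f"\r\n--{boundary}--\r\n".encode()
    return urllib.request.Request(
        f"{base_url}/inference",  # path assumed from whisper.cpp server examples
        data=body,
        headers={"Content-Type": f"multipart/form-data; boundary={boundary}"},
    )

# Example with placeholder bytes standing in for a real WAV file.
req = transcribe_request("http://localhost:8080", b"RIFF....placeholder")
```

If the server really is OpenAI-compatible, the same upload against `/v1/audio/transcriptions` with a `model` field should also work, but that's worth testing rather than assuming.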



I think anything compatible with either the chat completions or the legacy completions API should work.



