
It's interesting to think about this form of computation (LLM + function call) in terms of circuitry. It is still unclear to me, however, whether the sequential form of reasoning imposed by a sequence of chat messages is the right model here. LLM decoding, and also higher-level "reasoning algorithms" like tree of thought, are not that linear. A rough sketch of what I mean is below.
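As a sketch only (llm() and score() are hypothetical helpers, not any particular API), a tree-of-thought-style procedure is a branching search over LLM calls rather than one flat message list:

    # Hypothetical helpers: llm(prompt) -> str completion, score(text) -> float.
    def tree_of_thought(question, depth=2, branch=3):
        frontier = [question]
        for _ in range(depth):
            candidates = []
            for partial in frontier:
                for _ in range(branch):
                    # Each expansion is an independent LLM call, not the next
                    # turn appended to a single linear chat transcript.
                    candidates.append(partial + "\n" + llm("Continue reasoning: " + partial))
            # Keep the best partial chains; the control flow is a tree search,
            # not a sequence of chat messages.
            frontier = sorted(candidates, key=score, reverse=True)[:branch]
        return max(frontier, key=score)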

Ever since we started working on LMQL, the overarching vision has been to arrive at a form of language model programming in which LLM calls are just the smallest primitive of the "text computer" you are running on. It will be interesting to see what kinds of patterns emerge now that this smallest primitive is becoming more robust and reliable, at least in terms of the interface.
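To make the "smallest primitive" idea concrete, here is a minimal sketch (hypothetical call_llm() and tools, not LMQL's or any vendor's actual API) of one LLM call that may delegate to a function, wrapped so larger programs can compose it like any other function:

    # Hypothetical: call_llm(messages) returns either {"content": ...} or
    # {"function": name, "arguments": {...}}; tools maps names to Python callables.
    def llm_step(messages, tools):
        reply = call_llm(messages)
        if "function" in reply:
            result = tools[reply["function"]](**reply["arguments"])
            # Feed the function result back and let the model finish its answer.
            messages = messages + [{"role": "function", "content": str(result)}]
            return call_llm(messages)["content"]
        return reply["content"]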


