Exactly this. I would argue that I believe doing it efficiently is "just engineering", but I would not claim we know that with any reasonable degree of certainty.
I hold beliefs about what LLMs may be capable of that are far stronger than what I argued, but I stated only what the facts support, for a reason:
Absent evidence that we can exceed the Turing computable, we have no reason to believe LLMs can't be trained to "represent ideas that [they have] not encountered before" or "come up with truly novel concepts".