And if so, does that make us (via prompting) a left-brain?
LLMs are fantastic at generating believable nonsense, a creative stream of expression that superficially resembles reality. This is not a criticism; what LLMs can do is a superpower. But the left-brain of AGI is still MIA. Are model builders aiming to fill that void, or do we need a different mechanism? If so, what might that be?
Self-learn (i.e., unsupervised) patterns from a large volume of input using the multi-head attention mechanism, and then generate new patterns based on the prompt.
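Roughly, in code: here's a minimal sketch of that loop, assuming PyTorch. The model, names, and sizes (TinyLM, vocab of 100, 4 heads) are illustrative toys, not any real architecture.

    import torch
    import torch.nn as nn

    class TinyLM(nn.Module):
        def __init__(self, vocab_size=100, embed_dim=32, num_heads=4):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            # Multi-head attention: each head learns a different pattern of
            # token-to-token relevance from the raw input stream.
            self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
            self.out = nn.Linear(embed_dim, vocab_size)

        def forward(self, tokens):
            x = self.embed(tokens)
            # Causal mask: each position may only attend to earlier positions,
            # so training reduces to "predict the next token" -- no labels needed.
            T = tokens.size(1)
            mask = torch.triu(torch.ones(T, T, dtype=torch.bool), diagonal=1)
            x, _ = self.attn(x, x, x, attn_mask=mask)
            return self.out(x)

    model = TinyLM()
    tokens = torch.randint(0, 100, (1, 8))  # stand-in for "a large volume of input"
    logits = model(tokens)
    # Self-supervised objective: the targets are just the input shifted by one.
    loss = nn.functional.cross_entropy(
        logits[:, :-1].reshape(-1, 100), tokens[:, 1:].reshape(-1))
    loss.backward()  # learn patterns from the raw stream itself

Generation is then just repeatedly sampling the next token from the last position's logits, seeded with the prompt.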