
If you tell me that trees are big, and trees are made of hard wood, I as a human am capable of asking whether trees feel pain. I don't think what you said is false, and I am not familiar enough with computational theory to debate it. People occasionally have novel creative insights that do not derive from past experience or knowledge, and that is what I think of when I think of creativity.

Humans created novel concepts like writing literally out of thin air. I like how the book "Guns, Germs, and Steel" describes that novel creative process and contrasts it with the derivative process of dissemination.





> People occasionally have novel creative insights that do not derive from past experience or knowledge, and that is what I think of when I think of creativity.

If they are not derived from past experience or knowledge, then unless humans exceed the Turing computable, they would need to be the result of randomness in one form or another. There's absolutely no reason why an LLM cannot do that. The only reason a far "dumber" string generator driven by a pure random number generator "can't" do that is that it would take too long to chance on something coherent, but it most certainly would keep spitting out novel things. The only difference is how coherent the novel things are.
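
To make that concrete, here is a throwaway sketch of such a generator (plain standard-library Python, nothing LLM-specific; the names are just illustrative):

    import random, string

    def random_text(length=80):
        # Uniformly random printable characters: the output is almost
        # always novel, and almost never coherent.
        return "".join(random.choice(string.printable) for _ in range(length))

    print(random_text())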


> If they are not derived from past experience or knowledge

Every animal is born with intuition; you missed that part.


So knowledge encoded in the physical structure of the brain.

You're missing the part where, unless there is unknown physics going on in the brain that breaks maths as we know it, there is no mechanism for a brain to exceed the Turing computable, in which case any Turing complete system is computationally equivalent to it.


Turing machines are deterministic; the brain might not be, because of quantum mechanics. Of course there is no proof that this is related to creativity.

Turing machines are deterministic only if all their inputs are deterministic, which they do not need to be. Indeed, LLMs are not deterministic by default, because we intentionally inject randomness.
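
For example, a typical decoding step does something along these lines (a rough sketch of temperature sampling, not any particular library's API):

    import math, random

    def sample_next_token(logits, temperature=0.8):
        # Scale logits by temperature, softmax, then draw at random.
        # With temperature > 0, the same prompt can yield different tokens.
        scaled = [l / temperature for l in logits]
        m = max(scaled)
        exps = [math.exp(s - m) for s in scaled]
        total = sum(exps)
        r = random.random()
        cumulative = 0.0
        for i, e in enumerate(exps):
            cumulative += e / total
            if cumulative >= r:
                return i
        return len(exps) - 1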

It doesn't mean we can accurately simulate the brain by swapping its source of nondeterminism with any other PRNG or TRNG. It might just so happen that to simulate ingenuity you have to simulate the universe first.

If the brain does not exceed the Turing computable, then it does mean it is possible to accurately simulate the brain. Not only that, but in that case the brain itself is existence proof that doing so efficiently is possible.

If the brain exceeds the Turing computable, then all bets are off, but we have no evidence to suggest it does, nor that doing so is possible. This was in fact my original argument.

The only viable counter to my argument is demonstrating that there are functions outside the Turing computable that humans can nevertheless compute.


This Turing completeness equivalence is misleading. While all Turing-complete systems can theoretically compute the same class of functions, this says nothing about computational complexity, physical constraints, practical achievability in finite time, or the actual algorithms required. That a Turing machine can theoretically simulate a brain does not mean we know how to do it, or that it is even feasible. This is like arguing that because weather systems and computers both follow physical laws, you should be able to perfectly simulate weather on your laptop.

Additionally, "No mechanism to exceed Turing computable" is a non-sequitur. Even granting that brains do not perform hypercomputation, this does not support your conclusion that artificial systems are "computationally equivalent" to brains in any practical sense. We would need: (1) complete understanding of brain algorithms, (2) the actual data/weights encoded in neural structures, (3) sufficient computational resources, and (4) correct implementation. None of these follow from Turing completeness alone, I believe.

More importantly, you completely dodged the actual point about intuition. Jensson's point is about evolutionary encoding vs. learned knowledge. Intuition represents millions of years of evolved optimization encoded in brain structure and chemistry. You acknowledge this ("knowledge encoded in physical structure") but then pivot to an irrelevant theoretical CS argument rather than addressing whether we can actually replicate such evolutionary knowledge in artificial systems.

Your original claim was "If they are not derived from past experience or knowledge", which creates a false dichotomy. Animals are born with innate knowledge encoded through evolutionary optimization. This is not learned from individual experience, yet it is still knowledge: specifically, millions of years of selection pressure encoded in neural architecture, reflexes, instincts, and cognitive biases.

So, for example: a newborn animal has never experienced a predator but knows to freeze or flee from certain stimuli. It has built-in heuristics for threat assessment, social behavior, spatial reasoning, and countless other domains that took generations of survival pressure to develop.

Current AI systems lack this evolutionary substrate. They are trained on human data over weeks or months, not evolved over millions of years. We do not even know how to encode this type of knowledge artificially or even fully understand what knowledge is encoded in biological systems. Turing completeness does not bridge this gap any more than it bridges the gap between a Turing machine and actual weather.

Correct me if I'm misinterpreting your argument.


I...I am very interested in this subject. There's a lot to unpack in your comment, but I think it's really pretty simple.

> this does not support your conclusion that artificial systems are "computationally equivalent" to brains in any practical sense.

You're making a point about engineering or practicality, and in that sense, you are absolutely correct.

That's not the most interesting part of the question, however.

> This is like arguing that because weather systems and computers both follow physical laws, you should be able to perfectly simulate weather on your laptop.

Yes, that's exactly what I'd argue, and...hm.. yes, I think that's clearly true. Whether it takes 10 minutes or 10^100 minutes, ~1 or 10^100 human lifetimes to do so, it's irrelevant. Units (including human lifetimes) are arbitrary, and I think fundamental truths probably won't depend on such arbitrary things as how long a particular collection of atoms in a particular corner of the universe (i.e. humans) happens to be stable for. Ratios are closer to being fundamental, but I digress.

To put it a different way - we think we know what the speed of light is. Traveling at v = 0.1c and traveling at v = (1 - 10^(-100))c are equivalent in a fundamental sense; it's an engineering problem. Now, traveling at v = c...that's very different. That's interesting.


Exactly this. I would argue that I believe doing it efficiently is "just engineering", but I would not claim we know that to any reasonable amount of certainty.

I hold beliefs about what LLMs may be capable of that are far stronger than what I argued, but stated only what can be supported by facts for a reason:

That absent evidence we can exceed the Turing computable, we have no reason to believe LLMs can't be trained to "represent ideas that it has not encountered before" or "come up with truly novel concepts".


> While all Turing-complete systems can theoretically compute the same class of functions, this says nothing about computational complexity, physical constraints, practical achievability in finite time, or the actual algorithms required.

True. But if the brain is limited to the Turing computable, then the brain itself is existence proof it is possible to do so efficiently. It might require a different architecture, but that is a detail.

Personally, I think the fact that we have gotten this far this quickly with brute force suggests that the problem is fairly tractable, but it may in fact turn out to be much harder than we think.

The point is that when people dismiss it as impossible, that is a belief not backed up by any evidence.

> Additionally, "No mechanism to exceed Turing computable" is a non-sequitur. Even granting that brains do not perform hypercomputation, this does not support your conclusion that artificial systems are "computationally equivalent" to brains in any practical sense. We would need: (1) complete understanding of brain algorithms, (2) the actual data/weights encoded in neural structures, (3) sufficient computational resources, and (4) correct implementation. None of these follow from Turing completeness alone, I believe.

Computationally equivalent here refers to any two Turing complete systems being able to compute all functions that the other can, and so on that basis all four of your points are irrelevant to the question I addressed.

> yet it is still knowledge

You claim my statement creates a false dichotomy, but here you concede that it does not.

> Current AI systems lack this evolutionary substrate.

That is irrelevant to the question of whether it is possible. That's an engineering problem, not a fundamental limitation.

> Correct me if I'm misinterpreting your argument.

It seems you're arguing difficulty and complexity, while I argued over possibility. Your argument is mostly not relevant to mine for that reason. Most of it is not unreasonable; it just does not say anything about the possibility.


You write (as a response to someone else in this thread): "If the brain is limited to the Turing computable, then the brain itself is existence proof it is possible to do so efficiently."

No. The brain is existence proof that that particular physical substrate can achieve intelligence efficiently. A bird is existence proof that flight is possible efficiently, but not that elephants can fly. You are claiming "computational equivalence" means any Turing-complete system can efficiently replicate any other, but this does not follow from Turing's thesis at all.

You say: "Computationally equivalent here refers to any two Turing complete systems being able to compute all functions that the other can."

But then you make claims about replicating brain capabilities. These are different things. A Python interpreter and raw transistors are Turing-equivalent, but we do not conclude Python can efficiently do what transistors do. The abstraction layers, the architecture, the implementation: these all matter for the actual question at hand.

You dismiss the evolutionary substrate: "That is irrelevant to the question of whether it is possible. That's an engineering problem, not a fundamental limitation."

This concedes the key point. You are now admitting current AI systems lack something the brain has (millions of years of encoded optimization), then handwaving it away as "just engineering". But the original discussion was whether LLMs as currently implemented can represent truly novel ideas. You have retreated to arguing about theoretical possibility with complete knowledge and arbitrary resources.

Finally: "It seems you're arguing difficult and complexity, while I argued over possibility."

Exactly. Your argument has contracted from making claims about actual LLM capabilities to an unfalsifiable position about theoretical possibility. In the sense you are now defending, it is "possible" that monks with abacuses could run Crysis given infinite time and perfect execution. This tells us nothing interesting about whether current LLMs have unbounded creativity.

Perhaps I am misunderstanding your original argument. Could you clarify what your argument is exactly? I want to make sure we are not talking past each other.


> No. The brain is existence proof that that particular physical substrate can achieve intelligence efficiently.

So in other words, it is existence proof that it can be done efficiently. You arbitrarily applied your false beliefs about what that statement implied.

If you want to claim that we don't have any evidence that it can be done in an arbitrary substrate, then you'd be right, but that is an entirely separate argument I have no interest in.

> You are claiming "computational equivalence" means any Turing-complete system can efficiently replicate any other, but this does not follow from Turing's thesis at all.

I have never in my life made that claim.

I have at times argued I believe that efficiency is "just" an engineering problem, but I have certainly not ever argued that computational equivalence proves that.

Again you are falsely attributing opinions to me that I do not hold, and it's frankly offensive that you keep attributing to me things I not only have not said, but do not agree with.

> The abstraction layers, the architecture, the implementation: these all matter for the actual question at hand.

They do not at all matter for the question of whether one architecture is theoretically capable of computing the same functions as the other, which is what I have argued.

> This concedes the key point.

It concedes nothing. It pointed out that my argument was about whether LLMs can be made to "represent ideas that it has not encountered before" and "come up with truly novel concepts".

Those were the claims I stated have no evidence in favour of them. Nothing you have written in any of your responses has any relevance to that.

As you concede:

> Exactly.

Then you go on to make another false assertion about what I have said:

> Your argument has contracted from making claims about actual LLM capabilities to an unfalsifiable position about theoretical possibility.

It has done nothing of the sort. You have repeatedly tried to argue against a position I did not take, by repeatedly misrepresenting what I have claimed, as this quoted statement also does.

There is also nothing unfalsifiable about my claim:

Show that humans can compute even a single function outside the Turing computable, and my argument is proven false.

> In the sense you are now defending, it is "possible" that monks with abacuses could run Crysis given infinite time and perfect execution. This tells us nothing interesting about whether current LLMs have unbounded creativity.

This is the only thing I have been defending. It may not be interesting to you, but to me it matters, because without it being possible, there is no point in even arguing over whether it is practical.

If said Crysis-executing monks were fundamentally limited in a way that made it impossible for them to execute the steps, then it would be irrelevant whether or not there were ways for them to speed it up (say, by building computers...).

Since I was arguing against someone who denied the possibility, that is the only argument I had any reason to make.

> Perhaps I am misunderstanding your original argument. Could you clarify what your argument is exactly? I want to make sure we are not talking past each other.

I told you how you misunderstood my original argument: I've argued over possibility. I've not made any argument about difficulty or complexity.

You've gone on to falsely and rudely claim that my argument has shifted, but it has not.

Here is my first comment in this sub-thread, where I state there is no evidence to support a claim that LLMs "will not be able to represent ideas that it has not encountered before" and won't be able to "come up with truly novel concepts". My original claim didn't even extend to claiming full computational equivalence, because it was not necessary.

https://news.ycombinator.com/item?id=45996749


I will get back to this later, but I literally quoted you and replied to what you said, so you cannot say that I made it up myself when I quoted you verbatim and then responded to that.

In one instance you did say "If the brain is limited to the Turing computable, then the brain itself is existence proof it is possible to do so efficiently.", for example, and I explained why it is not the proof you thought it was.

In any case, no hard feelings. I will get back to you in a minute.


Wouldn't this insight derive from many past experiences of feeling pain yourself and the knowledge that others feel it too?


