
Are they? Not conscious?

If you list out every prominent theory of consciousness, you'd find that about a quarter rules out LLMs, a quarter tentatively rules LLMs in, and what remains is "uncertain about LLMs". And, of course, we don't know which theory of consciousness is correct - or if any of them is.

So, what is it that makes you so sure, oh so very certain, that LLMs just "feel" conscious but aren't?




Because they don't _understand_ things. If I teach an LLM that 3+5 is 8, it doesn't "get" that 4+5 is 9 (leave aside the details here, as I'm explaining for effect). It needs to be taught that as well, and so on. We understand exactly everything that goes into how LLMs generate answers.

The line of consciousness, as we understand it, is understanding. And as far as what actually constitutes consciousness, we're not even close to understanding. That doesn't mean that LLMs are conscious. It just means we're so far from the real answers to what makes us what we are that it's inconceivable to think we could replicate it.


> Because they don't _understand_ things. If I teach an LLM that 3+5 is 8, it doesn't "get" that 4+5 is 9 (leave aside the details here, as I'm explaining for effect). It needs to be taught that as well, and so on. We understand exactly everything that goes into how LLMs generate answers.

What you're saying just isn't true, even directionally. Deployed LLMs routinely generalize outside of their training set to apply patterns they learned within the training set. How else, for example, could LLMs be capable of summarizing new text they didn't see in training?


How is it not true? There's a world of difference between predicting the next word of a sentence in a summary and understanding the tenets of mathematics. You're mistaking memorization of mathematical outcomes for general application of mathematical knowledge.

Leave aside "the details" like you being obviously, provably wrong?

We've known for a long while that even basic toy-scale AIs can "grok" and attain perfect generalization of addition that extends to unseen samples.

Humans generalize faster than most AIs, but AIs generalize too.
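
This is also easy to check yourself. Below is a minimal sketch (assuming PyTorch; the architecture and hyperparameters are illustrative, not taken from any particular paper) of the kind of experiment the grokking results describe: train a small network on modular addition using only half of all (a, b) pairs, then measure accuracy on the pairs it never saw. In setups like this, the published results report held-out accuracy eventually climbing to near 100%, which a pure lookup table of memorized sums could not do.

    # Minimal sketch of a grokking-style experiment (assumes PyTorch is installed).
    # Train a small network on modular addition, (a + b) mod P, using only half of
    # all pairs, then check accuracy on the pairs it never saw during training.
    import torch
    import torch.nn as nn

    P = 97  # modulus; the full task has P*P possible (a, b) pairs

    pairs = torch.cartesian_prod(torch.arange(P), torch.arange(P))
    labels = (pairs[:, 0] + pairs[:, 1]) % P
    perm = torch.randperm(len(pairs))
    split = len(pairs) // 2
    train_idx, test_idx = perm[:split], perm[split:]

    def encode(idx):
        # One-hot encode both operands and concatenate them into a single input.
        a = nn.functional.one_hot(pairs[idx, 0], P).float()
        b = nn.functional.one_hot(pairs[idx, 1], P).float()
        return torch.cat([a, b], dim=1), labels[idx]

    x_train, y_train = encode(train_idx)
    x_test, y_test = encode(test_idx)

    model = nn.Sequential(nn.Linear(2 * P, 256), nn.ReLU(), nn.Linear(256, P))
    opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)
    loss_fn = nn.CrossEntropyLoss()

    for step in range(20001):
        opt.zero_grad()
        loss = loss_fn(model(x_train), y_train)
        loss.backward()
        opt.step()
        if step % 1000 == 0:
            with torch.no_grad():
                acc = (model(x_test).argmax(dim=1) == y_test).float().mean().item()
            print(f"step {step}: train loss {loss.item():.4f}, unseen-pair accuracy {acc:.3f}")

How long the generalization takes (and whether it shows the sudden "grokking" transition) depends heavily on the weight decay and the train/test split, but the end state is a network that computes sums it was never shown rather than looking them up.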


Then prove I'm wrong. Prove that an LLM can in fact solve completely novel arithmetic.
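
Concretely, such a test might look like the rough sketch below: sample sums of random 12-digit operands, which are vanishingly unlikely to appear verbatim in any training corpus, and score the replies. (`ask_model` here is a placeholder for whatever model interface is under test, not a real API.)

    # Sketch of a novel-arithmetic probe. `ask_model` is a stand-in for whatever
    # model interface you want to test; wire it up yourself.
    import random

    def ask_model(prompt: str) -> str:
        raise NotImplementedError("connect this to the model under test")

    def novel_addition_probe(n_trials: int = 100, digits: int = 12) -> float:
        correct = 0
        for _ in range(n_trials):
            a = random.randrange(10 ** (digits - 1), 10 ** digits)
            b = random.randrange(10 ** (digits - 1), 10 ** digits)
            reply = ask_model(f"What is {a} + {b}? Reply with the number only.")
            # Strip commas and whitespace before comparing against the true sum.
            if reply.strip().replace(",", "") == str(a + b):
                correct += 1
        return correct / n_trials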

> The line of consciousness, as we understand it, is understanding.

Is it? I'm no expert, by any stretch, but where does this theory come from?

I don't think anyone knows what consciousness is, or why we appear to have it, or even if we do have it. I don't even know that you're conscious. I could be the only conscious being in the universe and the rest of you are just zombies, with all the right external outputs to fool me, but no actual consciousness.


Well, we're not. Theory of mind is _understanding_ that you're not.

We're not what?

Can you please elaborate a bit more because your comment in isolation is meaningless.


This isn't meant to be an answer that would satisfy everyone, but in my opinion consciousness is a specific biological / evolutionary adaptation that has to do with managing status, relationships, and caring for young. It's about having an identity and an ego and building mental models of the egos / identities / etc of others.

I don't think there's any reason we couldn't in principle attach this sort of concept to an LLM, but it's not something we've actually done. (and no, prompting an LLM to act as if it has an identity does not count)


The fact that it's a box with a plug and a state that can be fully known. A conscious entity has a state that cannot be fully known. Far smarter people than me have made this argument, and in a much more eloquent way.

Turing aimed too low.


And the chatbots don't even pass the Turing test.

I've never had a normal conversation with one. It's always prompt => lengthy, cocksure, and somewhat autistic response. They are very easily distinguishable.


They are distinguishable because they know too much. Their knowledge base has surpassed that of humans. We have also instructed them to interact with us in a certain manner. They certainly are able to understand and use human language, which I think was Turing's point.

Purely rhetorical, but would you be able to distinguish a chatbot from an autistic human?


> So, what is it that makes you so sure, oh so very certain, that LLMs just "feel" conscious but aren't?

Because we know what they actually are on the inside. You're talking as if they're an equivalent to the human brain, the functioning of which we're still figuring out. They're not. They're large language models. We know how they work. The way they work does not result in a functioning consciousness.


I think that the interior structure doesn't necessarily matter—the problem here is that we don't know what consciousness is, or how it interacts with the physical body. We understand decently well how the brain itself works, which suggests that consciousness is some other layer or abstraction beyond the mechanism.

That said, I think that LLMs are not conscious and are more like p-zombies. It can be argued that an LLM has no qualia and is thus not conscious, due to having no interaction with an outside world or anything "real" other than user input (mainly text). Another reason driving my opinion is because it is impossible to explain "what it is like" to be an LLM. See Nagel's "What Is It Like to Be a Bat?"

I do agree with the parent comment's pushback on any sort of certainty in this regard—with existing frameworks, it is not possible to prove anything is conscious other than oneself. The p-zombie will, obviously, always argue that it is a truly conscious being.



