A collection of ideas related to consciousness.
Joscha Bach: Life, Intelligence, Consciousness, AI & the Future of Humans | Lex Fridman Podcast #392 Aug 1, 2023
Lex: Can GPT-n achieve consciousness?
Joscha: GPT-n probably can; it's not even clear for the present systems. When I talk to my friends at OpenAI, they feel that this question, whether the models currently are conscious, is much more complicated than many people might think. ... There are some aspects to this. One is, of course, that this language model has read a lot of text in which people were conscious or describe their own consciousness, and it's emulating this. And if it's conscious, it's probably not conscious in a way that is close to the way in which human beings are conscious. But while it is going through these states, going through a hundred-step function that is emulating adjacent brain states that require a degree of self-reflection, it can also create a model of an observer that is reflecting itself in real time, and describe what that's like. While this model is a deep fake, our own consciousness is also, in a sense, virtual. It's not physical. Our consciousness is a representation of a self-reflexive observer that only exists in patterns of interaction between cells. So it is not a physical object in the sense that it exists in base reality; it's really a representational object that develops its causal power only from a certain modeling perspective. (Lex: It's virtual.) Yes. And so, to what degree is the virtuality of the consciousness in ChatGPT more virtual and less causal than the virtuality of our own consciousness? You could say it doesn't count, that it counts no more than the consciousness of a character in a novel. It's important for the reader that the artifact of a model describes, in the text generated by the author of the book, what it's like to be conscious in a particular situation, and performs the necessary inferences. But the task of creating coherence in real time in a self-organizing system, by keeping yourself coherent so that the system is reflexive, is something that the language models don't need to do.
So there is no causal need for the system to be conscious in the same way as we are. And for me, it would be very interesting to experiment with this: to basically build a system, like a cat (we probably should be careful at first), something that's small and limited, with limited resources that we can control, and study how such systems develop a self-model, how they become self-aware in real time. And I think it might be a good idea not to start with a language model, but to start from scratch using principles of self-organization.
Note: There is more discussion related to why not to use a language model, for example, the lack of governance.