A Proposed Test for Consciousness in AI Systems
I recently had the chance to talk to Maxim Perumal, Founder of Unai, a VR headset company. During our conversation we talked about the progress in language models such as GPT-3, whether they pass the Turing Test (I argued that they do), and discussed whether these large models are conscious or have elements of consciousness.
He then asked me the question, “how would you design a test for consciousness in an AI system?”
Here’s how I answered:
Any AI system is conscious if it:
Can identify when its state has changed from some previous state.
The intuition here is that of meta-cognition: can the model identify that it is thinking/learning, i.e. that its state has changed?
Can go back to a previous state t0 from some current state t1 without any external intervention.
I think that being conscious implies that you continuously keep adding ‘things you become conscious of’ to some store in the brain and have access to them. If you are just hit by some stimulus and do not store the information that you were hit by that stimulus, were you conscious of that stimulus? I don’t think so. Hence, storing that stimulus as an element of some state is being conscious of that state.
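To make the two conditions concrete, here is a minimal toy sketch (my own illustration, not from the original post; all class and method names are hypothetical). It shows an agent that keeps a store of its past states, can detect that its state has changed, and can revert to the previous state using only that internal store:

```python
# Toy illustration of the two conditions. The "store in the brain" is
# modeled as a simple history list; every name here is hypothetical.

class StatefulAgent:
    def __init__(self, initial_state):
        self.state = initial_state
        # Store of states the agent has "become conscious of".
        self.history = [initial_state]

    def perceive(self, stimulus):
        # A stimulus changes the state, and the agent itself stores
        # the new state (otherwise, per the argument above, it was
        # never conscious of the stimulus).
        self.state = self.state + (stimulus,)
        self.history.append(self.state)

    def state_has_changed(self):
        # Condition 1: identify that the current state differs from
        # the previous one (a crude stand-in for meta-cognition).
        return len(self.history) > 1 and self.history[-1] != self.history[-2]

    def revert(self):
        # Condition 2: go back to the previous state t0 from the
        # current state t1 using only the internal store -- no
        # external intervention.
        if len(self.history) > 1:
            self.history.pop()
            self.state = self.history[-1]
            return True
        return False


agent = StatefulAgent(initial_state=())
agent.perceive("stimulus")
print(agent.state_has_changed())  # True: the agent detects t0 -> t1
agent.revert()
print(agent.state)                # (): back to t0 without external help
```

Of course, a mechanism that trivially satisfies these conditions is not thereby conscious; the sketch only pins down what the two conditions require of a system's state machinery.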
Recently I have been thinking about whether I should add a third condition to the test:
Can the model go forward to a new state t3, which it has never been in, from some current state t2, without any external intervention?
I am not so sure about this one yet. My hesitancy is that I currently think conscious systems cannot move to a new, unknown state that has nothing to do with their set of previous states without any external stimulus/intervention.
A version of this question is: can a human become conscious of something that cannot be derived from all that the human has previously been conscious of, without external intervention?
I am still thinking about this.
March 5, 2022
PS: If you are interested in this problem too, I think Tononi and Koch’s Integrated Information Theory might be worth looking into.