It was pretty frightening, I must tell you.
Is reality observer dependent?
You said yes.
Reality is not only observer dependent, it is created by us observers.
And it's frightening because you want to know that what happened happened, and whoever you are, you are, and whoever your wife is, is your wife, and your home is your home, and so on and so forth.
So now we are dealing...
I don't see that any of this is challenged.
Everything you said is still there. It's just the mechanism of how to reach these outcomes is mediated through the collective, the hive mind of humanity.
It's not a single mind that creates reality, it's the consensus of minds.
I also think it's life, not only human minds.
I think all life is.
Yes, as we said in one of our talks, you also believe that not only human beings but nature, life in general, creates reality.
And our topic today: is artificial intelligence indistinguishable from the human variety? We're getting to a point where artificial intelligence is ubiquitous.
For instance, I use Waze, and I understand that whoever tells me where to go is not a human being; it's just a synthesized voice, cut into phrases and words, telling me where to go, and I do.
I think there are three intermixed issues here.
Issue number one, we confuse language with intelligence. The use of language can be a facet of intelligence, but doesn't have to be.
There's a famous thought experiment, called the Chinese Room argument. Searle said, imagine that you put someone in a room and you provide him with a manual or a dictionary of Chinese. And the manual says, if you get this sequence of ideograms, you react with this sequence of ideograms.
You don't think, you just operate. You get this, you respond with that.
So the guy is in a room, or the girl is in a room, there's a dictionary, and someone slips under the door a sequence of ideograms. The guy takes the sequence, opens the dictionary, and responds with a sequence of ideograms.
According to the dictionary.
Actually, he doesn't understand the ideograms. The pictograms.
The pictograms. He doesn't understand them. He just knows that this sequence of pictograms, or ideograms, has to be answered with that sequence.
Actually, what he's doing, he's courting a woman. He's telling her he loves her. She's the center of his life.
But of course, he's not aware that he's doing this.
Searle says there's a huge difference between syntax and semantics. Or semiotics. Semantics. Syntax can be machine-learned. Semiotics or semantics? Never.
This guy who is sending in and out pictograms is the machine.
He's good with the syntax.
With the syntax. He has a dictionary, and essentially operates a structure. Syntax.
But he has no idea what he's doing. There's no semantics there.
He says, machines, computers, artificial intelligence, this is the maximum they can reach. They can develop such dictionaries and give us the illusion that they are having a love affair with us. But they don't.
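The Chinese Room can be sketched as a pure lookup table. A minimal illustration in Python, where the rulebook and the Chinese phrases are invented for the example, not taken from Searle:

```python
# Toy Chinese Room: a pure syntax machine.
# The "dictionary" maps incoming symbol sequences to scripted replies.
# The operator (or program) never consults meaning, only symbol shapes.

RULEBOOK = {
    "你好吗": "我很好",        # a scripted greeting exchange
    "你爱我吗": "我爱你",      # the operator never knows this says "I love you"
    "你是谁": "我是你的朋友",
}

def chinese_room(incoming: str) -> str:
    """Look up the incoming sequence and emit the scripted response.
    Syntax only: no understanding is involved at any point."""
    return RULEBOOK.get(incoming, "请再说一遍")  # fallback: "please say that again"

print(chinese_room("你爱我吗"))
```

From the outside, the room appears to hold a conversation, even a love affair; inside, there is nothing but string matching. That asymmetry is exactly Searle's point.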
He said the Turing test is this: if a machine or a computer can imitate sentience, can imitate intelligence to a sufficient degree, then it's indistinguishable from a human being. That's the Turing test.
Searle says this is nonsense. The fact that you have input and give the right output doesn't prove you're intelligent or aware or conscious or anything.
He said the use of language is not proof of intelligence. And I fully agree. That's the first mistake we make.
The second mistake we make, in my view, is the differentiation between artificial and natural intelligence. Intelligence is intelligence. Whether it's embedded in silicon, whether it's embedded in carbon, intelligence is intelligence.
But intelligence requires an inner experience, intentionality. It requires the ability to access internal objects.
And this is where we come across a wall. Can machines develop internal objects and then access them with intentionality?
There is no reason in principle why this would be impossible.
I cannot come up with a cogent, rigorous argument that machines will never have internal objects and will never develop a process of intentionality.
But then we are going down the rabbit hole.
What if a machine develops internal objects? Memories, values, beliefs, love, and ethics. And accesses, intentionally accesses these internal objects.
Okay, so suppose we are able to prove it. We're able to prove that there are internal objects and that the machine accesses them. How do we know how the machine experiences this process? Does the machine experience it the way we do?
This is the famous problem posed by Wittgenstein. Wittgenstein's was the lion; the bat was another philosopher, Thomas Nagel.
If you have a bat, and the bat were able to use language, could the bat communicate bat-ness to you? Is it possible? And if a lion could speak, Wittgenstein said, would you understand the lion?
So I think we are confusing the ability to simulate to perfection the human experience, including internal objects, including intentionality, including access, including use of language, including everything.
Simulate to perfection, one to one, a hundred percent. And we confuse that with the experience of that perfection.
So: you experience your consciousness as a human being. Will a machine that is a replica of Beni Hendel... I would even go one step further. Forget machines: will a copy of Beni Hendel experience his consciousness the same way you do? Suppose we reach a stage where we can replicate body and mind, clone Beni Hendel, clone people, to perfection: atom for atom, elementary particle for elementary particle, everything.
The second Beni Hendel sitting here, will you share the same experience?
And this is of course what is known as the intersubjective problem. How do you know that I experience my consciousness the way you experience your consciousness?
I have no idea. You don't know. I see this as black and you say that it's black, but I have no clue whether you see black as I see black.
Exactly.
So, the only mind accessible to me, the only mind accessible to me, is mine. We are totally solipsistic, utterly.
But we have an agreement. It's probabilities.
So we say, because you look the same as me, we start with visuals. You look the same as me: two hands, two legs, and so on. And because you tend to use the same vocabulary, and because you behave in similar ways (for example, you are sad, you cry; I'm sad, I cry), what we do is create probabilistic maps or probabilistic trees. We say it stands to reason that we share the same experience of consciousness, because we are so similar in so many other ways.
But of course it's nonsense. Of course it's utter nonsense. Because I can create a simulation of you to perfection. And it doesn't mean that the simulation has the same experience.
Actually, there's a high probability that it doesn't.
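The probabilistic map described here can be reduced to a toy similarity score. A sketch in Python, where the features, weights, and entities are all invented for illustration:

```python
# Toy "probabilistic map": estimate how likely another entity shares my
# experience of consciousness, from observable similarities alone.
# Features are invented; real inference would be far richer.

FEATURES = ["same body plan", "same vocabulary", "cries when sad"]

def shared_experience_score(observed: dict) -> float:
    """Fraction of observable similarities that are present.
    Note: this measures similarity of appearance and behavior only.
    It never touches experience itself, which is the speaker's objection."""
    return sum(bool(observed.get(f, False)) for f in FEATURES) / len(FEATURES)

human = {"same body plan": True, "same vocabulary": True, "cries when sad": True}
machine = {"same body plan": False, "same vocabulary": True, "cries when sad": False}

print(shared_experience_score(human))    # high score, yet proves nothing inner
print(shared_experience_score(machine))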
So where does this take us? Where are we going with this topic?
Is artificial intelligence indistinguishable from the human variety?
I think we should not confuse intelligence with language. And we should not confuse intelligence with consciousness. And we should not confuse intelligence with experience of consciousness. And we are doing all three.
Today the debate, even among scholars, involves these three confusions. The use of language is not intelligence. Is not intelligence.
And machines can use language as well as humans. There are machines that write poetry now.
So the use of language is not a distinguishing feature.
Can we create intelligence that is identical to natural intelligence, ours?
Absolutely yes.
The problem is not to create intelligence.
What is the problem?
The problem is consciousness. And even more so, the experience of how we experience our consciousness.
What is experience and why is it so important?
You're intelligent.
How am I intelligent?
Well, I know that you're intelligent because, for example, your verbal output indicates certain structures, and so on. That's what I'm trying to tell you: I can say that you're intelligent.
You can.
I can say that you're intelligent.
But I cannot say that you're conscious. And I cannot say how you experience your consciousness, if it does exist.
That's where it stops.
But intelligence is a low-level function. It's structured. It's very easy to tell that something or someone is intelligent.
The focus on artificial versus natural intelligence is a decoy. It's a wrong turn in the road. That's not the main problem. And it's totally solvable.
Of course artificial intelligence is indistinguishable from natural intelligence.
So what is the experience of consciousness or even the experience of your own intelligence?
You have intelligence.
But how do you experience it? How are you aware of it?
This is inaccessible to me. And it's inaccessible to me not only in machines. It's inaccessible to me in Beni Hendel. It's inaccessible in principle.
So why make this distinction between other human beings and machines?
For me, you are no different from a computer as far as my access to your mind goes.
In your view, does that have anything to do with, sorry to use the term, happiness? The way you experience things: is it conducive to being depressed or being happy?
I think the more you experience yourself in ways which are not mediated by language and by intelligence, the happier you are. I think intelligence is a disruptor, disrupts. Its intention is to disrupt because it forces you to see the world in new ways so that you can survive better.
But we're not happy with change and we're not happy with instability and unpredictability and so on, which are the hallmarks of intelligence.
A truly intelligent person is an observer who creates models of the world which are disruptive in the sense that they rearrange the world. If you're incapable of rearranging the elements of the world, you will not survive.
I must tell you that one researcher of the brain told me once that by his research, the mind changes all the time. To change one's mind doesn't mean that I've made a decision and now I've changed my mind.
With every entry that enters the mind, the mind totally changes. This is how he expressed it.
Well, we know from memory. We have studies by Loftus and others. We know there's no such thing as memory. It's a lie.
Every time we try to remember something, we have to pull all the elements from the brain together again.
And we don't know how we do it.
And we don't know how we do it and we get it wrong.
There was a famous experiment, people who witnessed 9-11. They were interviewed. A year later they were interviewed. 90% of the second interview did not match the first. 9-0. That's a year later.
They said totally different.
Totally different. For example, one of them said, I was next to the building, and then a year later he said, I was sitting at a restaurant far from the building. Dead wrong.
So even memories are nonsense. It's all on the fly. That's why I'm saying observation is critical.
We are creating internal reality as much as we are creating external reality. It's all in the mind.
But coming back to the topic of this conversation.
The focus on intelligence is wrong. Intelligence is detectable, analyzable, discernible and definitely you can tell that a machine is intelligent.
Where we part ways is that I can never access your experience of your intelligence. Also known as consciousness. I cannot access this.
But then I cannot access not only a machine but I cannot access you.
Can you access yourself?
Myself to some extent, yes. But definitely not you. Definitely not you.
So what's the difference if you are sitting here and you are allegedly human, or if a machine is sitting here? You are talking to me, the machine is talking to me, and people say no,
Beni Hendel has an advantage over the machine: he has natural intelligence. It's nonsense. Either way, I cannot access your mind.
As much as I cannot access the machine's mind.
So you are indistinguishable to me. Where does it lead you?
That we should stop this nonsense of trying to say will machines ever be like humans? Because they won't. Because humans are already machines.
As far as I am concerned there is no difference between you and a machine. Because I cannot access you.
We are machines you say.
I cannot access you and I cannot access the machine.
So you are the same to me. I have no privileged position by virtue of the fact that you are human. The fact that you are human doesn't endow me with a special power to access your mind.
So you are like a machine. The fact that it's a machine doesn't endow me with special power and the fact that you are human doesn't endow me with a special power.
So you are all facades.
I have to rely on your self-reporting. I have to ask you. Are you sad? You say yes I am sad. How do I know you are not lying?
There is no way, there is no protocol, for me to determine your inner state. Like: are you in pain? There is only one person who can say he or she is in pain. And even that is dubious.
Because how do you experience pain? And how do you define pain?
So why make a distinction between you and a machine? If a machine gives me therapy or writes poetry or tries to... Or tries to beat me in chess. Or tries to fall in love with me or whatever.
And you are doing the same. How can I tell the difference between you?
I don't have a privileged access to your mind. No more than I have to the machine's mind. If it has a mind. I don't even know if you have a mind.
Maybe you are a super-programmed machine from the future.
So it is ridiculous, all this argument. Simply ridiculous.