Background

Is Artificial Intelligence Fully Human? (with Benny Hendel)

Uploaded 9/2/2022, approx. 13-minute read

It was pretty frightening, I must tell you.

Is reality observer dependent?

You said yes.

Reality is not only observer dependent, it is created by us observers.

And it's frightening because you want to know that what happened happened, and whoever you are, you are, and whoever your wife is, is your wife, and your home is your home, and so on and so forth.

So now we are dealing...

I don't see that any of this is challenged.

Everything you said is still there. It's just the mechanism of how to reach these outcomes is mediated through the collective, the hive mind of humanity.

It's not a single mind that creates reality, it's the consensus of minds.

I also think it's life, not only human minds.

I think all life is.

Yes, as we said in one of our talks, you also believe that not only human beings but nature, life in general, participates in creating reality.

And our topic today is: is artificial intelligence indistinguishable from the human variety? We're getting to a point where artificial intelligence is ubiquitous.

For instance, I use Waze, and I understand that whoever tells me where to go is not a human being; it's just a synthesized voice, cut into phrases and words, telling me where to go, and I do.

I think there are three intermixed issues here.

Issue number one, we confuse language with intelligence. The use of language can be a facet of intelligence, but doesn't have to be.

There's a famous thought experiment called the Chinese Room argument. Searle said: imagine that you put someone in a room and you provide him with a manual, or a dictionary, of Chinese. And the manual says: if you get this sequence of ideograms, you respond with this sequence of ideograms.

You don't think, you just operate. You get this, you give that.

So the guy is in a room, or the girl is in a room, there's a dictionary, and someone slips a sequence of ideograms under the door. The guy takes the sequence, opens the dictionary, and responds with a sequence of ideograms.

According to the dictionary.

Actually, he doesn't understand the ideograms. The pictograms.

The pictograms. He doesn't understand them. He just knows that this sequence of pictograms, or ideograms, has to be responded to with that one.

Actually, what he's doing, he's courting a woman. He's telling her he loves her. She's the center of his life.

But of course, he's not aware that he's doing this.

Searle says there's a huge difference between syntax and semantics. Or semiotics. Semantics. Syntax can be machine learned. Semiotics or semantics? Never.

This guy who is sending in and out pictograms is the machine.

He's good with the syntax.

With the syntax. He has a dictionary and essentially creates a structure. Syntax.

But he has no idea what he's doing. There's no semantics there.

He says, machines, computers, artificial intelligence, this is the maximum they can reach. They can develop such dictionaries and give us the illusion that they are having a love affair with us. But they don't.
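The dictionary mechanism Searle describes reduces to a trivial lookup table. As a minimal sketch, with invented placeholder symbols and rules (not real Chinese, not Searle's own examples):

```python
# Searle's Chinese Room reduced to its mechanism: a rulebook mapping
# incoming symbol sequences to outgoing ones. The operator (or program)
# applies the rules with zero understanding of what the symbols mean.
# The symbols and rules below are invented placeholders.

RULEBOOK = {
    "ideograms-A": "ideograms-B",  # "if you get this sequence, respond with this one"
    "ideograms-C": "ideograms-D",
}

def chinese_room(incoming: str) -> str:
    """Pure syntax: look up a reply; no semantics anywhere."""
    return RULEBOOK.get(incoming, "")  # unknown input: no rule, no reply

# The room passes messages back and forth without understanding any of them.
print(chinese_room("ideograms-A"))  # prints "ideograms-B"
```

However fluent the exchange looks from the outside, the room manipulates structure only, which is exactly Searle's point about syntax without semantics.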

He said the Turing test is this: if a machine or a computer can imitate sentience, can imitate intelligence to a sufficient degree, then it's indistinguishable from a human being. That's the Turing test.

Searle says this is nonsense. The fact that you have input and give the right output doesn't prove you're intelligent or aware or conscious or anything.

He said the use of language is not proof of intelligence. And I fully agree. That's the first mistake we make.

The second mistake we make, in my view, is the differentiation between artificial and natural intelligence. Intelligence is intelligence. Whether it's embedded in silicon, whether it's embedded in carbon, intelligence is intelligence.

But intelligence requires an inner experience, intentionality. It requires the ability to access internal objects.

And this is where we come across a wall. Can machines develop internal objects and then access them with intentionality?

There is no reason, in principle, why this would be impossible.

I cannot come up with a cogent, rigorous argument that machines will never have internal objects and will never develop a process of intentionality.

But then we are going down the rabbit hole.

What if a machine develops internal objects? Memories, values, beliefs, love, and ethics. And accesses, intentionally accesses these internal objects.

Okay, so, suppose we are able to prove it. We're able to prove that there are internal objects and that the machine accesses them. How do we know whether the machine experiences this process the way we do?

This is the famous problem posed by Wittgenstein: the lion. Or the bat; the bat was presented by another philosopher, Thomas Nagel.

If you have a bat, and the bat were able to use language, could the bat communicate bat-ness to you? Is it possible? If a lion could speak, Wittgenstein said, would you understand the lion?

So I think we are confusing the ability to simulate to perfection the human experience, including internal objects, including intentionality, including access, including use of language, including everything.

Simulate to perfection, one to one, a hundred percent. And the experience of that perfection.

So, you experience your consciousness as a human being. Will a machine that is a replica of Benny Hendel, I would even go one step further, will a copy of Benny Hendel, forget machines, a copy of Benny Hendel, experience his consciousness the same way you do? Suppose we come to a stage where we can replicate body and mind, clone Benny Hendel, clone people, to perfection: atom for atom, elementary particle for elementary particle, everything.

A second Benny Hendel sitting here: will you share the same experience?

And this is of course what is known as the intersubjective problem. How do you know that I experience my consciousness the way you experience your consciousness?

I have no idea. You don't know. I have no idea. I see this as black and you say that it's black, but I have no clue whether you see black as I see black.

Exactly.

So, the only mind accessible to me is mine. It's mine. We are totally solipsistic, utterly.

But we have an agreement. It's probabilities.

So we say: because you look the same as me (we start with visuals), two hands, two legs, and so on. And because you tend to use the same vocabulary, and because you behave in similar ways (for example, you are sad, you cry; I'm sad, I cry), what we do is create probabilistic maps, or probabilistic trees. We say it stands to reason that we share the same experience of consciousness, because we are so similar in so many other ways.

But of course it's nonsense. Of course it's utter nonsense. Because I can create a simulation of you to perfection. And it doesn't mean that the simulation has the same experience.

Actually, there's a high probability that it doesn't.
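The "probabilistic map" of other minds described above can be caricatured in a few lines. The traits, and the idea of reading trait overlap as a naive probability, are invented here purely for illustration:

```python
# Toy sketch of how we infer other minds: the more observable traits
# you share with me, the higher my estimate that you experience
# consciousness the way I do. Traits are invented for illustration;
# the score proves nothing about actual inner experience.

def shared_experience_score(mine: set, yours: set) -> float:
    """Jaccard similarity of observable traits, read as a naive probability."""
    union = mine | yours
    if not union:
        return 0.0
    return len(mine & yours) / len(union)

me      = {"two hands", "two legs", "uses language", "cries when sad"}
you     = {"two hands", "two legs", "uses language", "cries when sad"}
machine = {"uses language"}

print(shared_experience_score(me, you))      # 1.0  -- and yet proves nothing
print(shared_experience_score(me, machine))  # 0.25
```

A perfect score, including one produced by a perfect simulation, still says nothing about whether the other party experiences anything at all, which is the point being made here.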

So where does this take us? Where are we going with this topic?

Is artificial intelligence indistinguishable from the human variety?

I think we should not confuse intelligence with language. And we should not confuse intelligence with consciousness. And we should not confuse intelligence with experience of consciousness. And we are doing all three.

Today the debate, even among scholars, involves these three confusions. The use of language is not intelligence. Is not intelligence.

And machines can use language as well as humans. There are machines that write poetry now.

So the use of language is not a distinguishing feature.

Can we create intelligence that is identical to natural intelligence, ours?

Absolutely yes.

The problem is not to create intelligence.

What is the problem?

The problem is consciousness. And even more so, the experience of how we experience our consciousness.

What is experience and why is it so important?

You're intelligent.

How do you know that I'm intelligent?

Well, I know that you're intelligent because, for example, your verbal output indicates certain structures and so on. That's what I'm trying to tell you: I can say that you're intelligent.

You can.

I can say that you're intelligent.

But I cannot say that you're conscious. And I cannot say how you experience your consciousness, if it does exist.

That's where it stops.

But intelligence is a low-level function. It's structured. It's very easy to tell that something or someone is intelligent.

The focus on artificial versus natural intelligence is a decoy. It's a wrong turn in the road. That's not the main problem. And it's totally solvable.

Of course artificial intelligence is indistinguishable from natural intelligence.

So what is the experience of consciousness or even the experience of your own intelligence?

You have intelligence.

But how do you experience it? How are you aware of it?

This is inaccessible to me. And it's inaccessible to me not only in machines; it's inaccessible to me in Benny Hendel. It's inaccessible in principle.

So why make this distinction between other human beings and machines?

For me, you are no different from a computer as far as my access to your mind is concerned.

In your view, does this have anything to do with, sorry to use the term, happiness? The way you experience things, is it conducive to being depressed or being happy?

I think the more you experience yourself in ways which are not mediated by language and by intelligence, the happier you are. I think intelligence is a disruptor; it disrupts. Its function is to disrupt, because it forces you to see the world in new ways so that you can survive better.

But we're not happy with change and we're not happy with instability and unpredictability and so on, which are the hallmarks of intelligence.

A truly intelligent person is an observer who creates models of the world which are disruptive in the sense that they rearrange the world. If you're incapable of rearranging the elements of the world, you will not survive.

I must tell you that a brain researcher once told me that, according to his research, the mind changes all the time. To change one's mind doesn't mean that one has made a decision and now changed it.

With every entry that enters the mind, the mind totally changes. This is how he expressed it.

Well, we know this from memory research. We have studies by Loftus and others. We know there's no such thing as memory. It's a lie.

Every time we try to remember something, we have to pull all the elements from the brain together again.

And we don't know how we do it.

And we don't know how we do it and we get it wrong.

There was a famous experiment with people who witnessed 9/11. They were interviewed, and a year later they were interviewed again. 90% of the second interviews did not match the first. Nine-zero. That's a year later.

They said totally different things.

Totally different. For example, one of them said, "I was next to the building," and then a year later he said, "I was sitting at a restaurant far from the building." Dead wrong.

So even memories are nonsense. It's all on the fly. That's why I'm saying observation is critical.

We are creating internal reality as much as we are creating external reality. It's all in the mind.

But coming back to the topic of this conversation.

The focus on intelligence is wrong. Intelligence is detectable, analyzable, discernible, and you can definitely tell that a machine is intelligent.

Where we part ways is that I can never access your experience of your intelligence, also known as consciousness. I cannot access this.

But then I cannot access not only a machine; I cannot access you either.

Can you access yourself?

Myself to some extent, yes. But definitely not you. Definitely not you.

So what's the difference if you are sitting here and you are allegedly human, or if a machine is sitting here? You are talking to me, the machine is talking to me, and people say no.

Benny Hendel has an advantage over the machine: he has natural intelligence. It's nonsense. Anyhow, I cannot access your mind.

As much as I cannot access the machine's mind.

So you are indistinguishable to me. Where does it lead you?

That we should stop this nonsense of trying to say will machines ever be like humans? Because they won't. Because humans are already machines.

As far as I am concerned there is no difference between you and a machine. Because I cannot access you.

We are machines you say.

I cannot access you and I cannot access the machine.

So you are the same to me. I have no privileged position by virtue of the fact that you are human. The fact that you are human doesn't endow me with a special power to access your mind.

So you are like a machine. The fact that it's a machine doesn't endow me with special power and the fact that you are human doesn't endow me with a special power.

So you are all facades.

I have to rely on your self-reporting. I have to ask you: Are you sad? You say, yes, I am sad. How do I know you are not lying?

There is no way, there is no protocol, for me to determine your inner state. Like, are you in pain? There is only one person who can say he or she is in pain. And even that is dubious.

Because how do you experience pain? And how do you define pain?

So why make a distinction between you and a machine? If a machine gives me therapy, or writes poetry, or tries to beat me in chess, or tries to fall in love with me, or whatever.

And you are doing the same. How can I tell the difference between you?

I don't have a privileged access to your mind. No more than I have to the machine's mind. If it has a mind. I don't even know if you have a mind.

Maybe you are a super-programmed machine from the future.

So it is ridiculous, all this argument. Simply ridiculous.

If you enjoyed this article, you might like the following:

Are YOU a simulation? (with Benny Hendel)

Professor Sam Vaknin discusses philosopher David Chalmers' view that simulations are as real as reality and that reality may be a simulation. Vaknin disagrees with Chalmers on two main points: 1) Vaknin believes that there will always be a conscious act of will required to switch between reality and simulations, and 2) even if our reality is a simulation, it is still our privileged frame of reference and cannot be escaped. Vaknin argues that Chalmers' view requires an impossible vantage point outside of both reality and simulations to compare them.


Anti-vaxxers: Mentally Ill Victimhood Conspiracists (References in Description)

Vaccination is a moral obligation to protect others, and while vaccine hesitancy is a legitimate concern that encourages critical thinking, anti-vaccine beliefs stem from mental illness. The psychology behind anti-vaxxers includes traits such as conspiracism, grandiosity, and impaired reality testing, leading to delusions and a rejection of expert knowledge. Many individuals in these movements exhibit characteristics of narcissism and psychopathy, often feeling victimized and superior due to their beliefs. This mindset can result in dangerous societal outcomes, as those who believe in conspiracy theories may make poor medical decisions and engage in anti-social behavior. Ultimately, the anti-vaccine movement represents a form of mass psychosis that poses risks to public health and safety.


Nature: Grandiose Delusion (with Benny Hendel)

Professor Sam Vaknin discusses the concept of nature and how humans relate to it. He argues that the traditional ways of relating to nature, such as religious domination, romanticism, and decoupling, are all dysfunctional and fail to recognize that humans are part of nature. Vaknin suggests that everything humans create is natural and that nature will use humans as agents to limit their activities if necessary. He concludes that humans need to accept that they are part of nature and act accordingly.


Are You Sure You Are Human?

The lecture explores the complex question of what it means to be human, highlighting the inadequacy of circular definitions that rely on negation, such as being human because one is not an animal or a machine. It discusses the continuum between humans and other life forms, particularly in light of advancements in artificial intelligence, which may challenge traditional notions of uniqueness and personhood. The speaker emphasizes two key characteristics that may define humanity: behavioral unpredictability and awareness of mortality, while also questioning the implications of merging human and machine traits. Ultimately, the discourse reflects on the shifting perspectives of humanity in the context of science, technology, and societal changes, suggesting an ongoing struggle to define and recognize what constitutes humanness.


Do We Create Reality, Is It a Hive Mind? (with Benny Hendel)

Professor Sam Vaknin discusses the idea that reality is observer-dependent, and that the mind creates reality via the process of intentionality. He suggests that the observer is not naive and does not collapse the wave function, but rather, the observer is not capable of seeing anything else but the collapsed state. Vaknin proposes that the universe has a DNA of order and structure, and that the role of human beings is to observe the universe and via the act of observation, to collapse it, creating order and structure. He suggests that with every act of collective observation, we are cementing the past of the universe, not just the present.


Upload Mind to Cloud, All Life Conscious, Sentient (Dinner with R)

Life is a diverse and complex phenomenon, with all forms of life sharing fundamental elements, suggesting that differences among species are merely a matter of degree rather than essence. Evolutionary theory, while effective in predicting biological dynamics, struggles to explain phenomena like altruism and the role of medicine, indicating a potential misinterpretation of its principles. The concept of consciousness and the mind is questioned, as it may not be universally applicable across species, and our understanding of other minds remains fundamentally limited. The potential for uploading and downloading consciousness raises profound philosophical questions about identity, purpose, and the implications of technology on our existence and future as a species.


Chance And Generational Trauma Pandemic Settles Nature Vs. Nature Debate

Professor Sam Vaknin discusses two new factors that influence who we become: chance or randomness and generational trauma. Recent research suggests that random molecular fluctuations in developing brain cells, especially in the womb, can influence the brain's wiring and have lifelong consequences. Additionally, generational trauma, such as the COVID-19 pandemic, can have a significant impact on mental health and personality development. These factors are considered more important than the traditional nature versus nurture debate in determining our identities.


Suicide: Why Choose Life, Not Death!

Professor Sam Vaknin discusses the rising tide of suicidal ideation among people of all ages and cultures and provides a philosophical foundation for why people should choose life. He argues that existence is always richer in potential than non-existence and that life is full of potentials because it is complex and because of other people. He also criticizes modern society for presenting falsities, lies, manipulations, and life substitutes that limit people's promise and suppress their free will. Ultimately, he urges people to choose themselves and realize that their existence alone suffices to steer everyone and everything in another direction and towards an alternative destiny.


Vaccine Defiance is Psychopathic, Narcissistic, Paranoid, Intellectually Challenged

The lecture discusses the ethical implications of vaccination, emphasizing the obligation individuals have to protect others from harm, particularly in the context of infectious diseases like COVID-19. It argues that while individuals have rights to their own bodies and choices, these rights do not extend to actions that could endanger others, such as refusing vaccination. The speaker highlights the moral calculus involved in balancing individual freedoms against the collective right to life and public health, ultimately asserting that the right to not be harmed by others supersedes the right to refuse vaccination. The conclusion posits that refusing to get vaccinated can be seen as a form of recklessness that endangers the community, warranting societal intervention.


Transhumanism: Culture Replaces Evolution (with Benny Hendel)

Culture serves as a form of evolution for humans, extending beyond genetic and biological adaptation to include the totality of human creativity, which influences both mental and physical environments. This dual inheritance theory posits that culture exerts selective pressure, shaping not only societal structures but also gene expression, thereby impacting future generations. As humans design their environments, they can trigger genetic adaptations, allowing for rapid evolution compared to other species. The potential for a transhumanist future raises concerns about societal stratification and the ethical implications of technology's role in shaping human evolution, emphasizing the responsibility humans have in directing their own evolutionary path.

Transcripts Copyright © Sam Vaknin 2010-2024, under license to William DeGraaf
Website Copyright © William DeGraaf 2022-2024