Background

When AI Takes Over: Therapy, Mob, Transhumanism, Authoritarianism, Narcissism (with Nino Apakidze)

Uploaded 2/27/2025, approx. 1 hour 3 minute read

The video you're about to see contains an interview with the journalist Nino Apakidze from Georgia.

In the video we discuss a variety of topics, I would say a bewildering variety of topics.

And I would like to clarify one of the terms that keep recurring throughout the interview, and this is transhumanism.

We should make a distinction between transhumanism and post-humanism.

Transhumanism is about the empowerment and the augmentation of the human species with advanced technologies, enhancing the body and the mind in order to improve the survival chances of the human species on a fast deteriorating planet and possibly throughout the galaxy.

This is transhumanism. It is focused on the human species, it is pro-humanity, and it supports the future of the human species, albeit integrated with modern technologies.

Post-humanism is possibly the exact opposite.

Post-humanism is about the elimination of the human species and its replacement with advanced technologies such as artificial intelligence, or by letting nature take its course.

Post-humanism is founded on the assumption, the belief, or the ideology that the planet as a whole, and perhaps even the cosmos, would be far better off without the human species; that we as a species have failed in our custodianship of nature and have inflicted untold damage on other species, on our environment, and ultimately on ourselves as well.

The assumption that the planet or the universe would be much better off should the human species be supplanted and replaced by the next stage in evolution is of course a contentious one. Maybe even grandiose to some extent.

The next stage in evolution could be a variation on the human species, could be a mixture of the human species with machines, cyborgs, or could be a total replacement of the human species by another form of intelligence, such as artificial intelligence.

It is presumptuous to adopt the point of view of God, if you wish.

We cannot control our evolution. We cannot determine what is truly good for our planet in the long term. We don't have enough information. We are limited entities. We don't possess the perfect data about ourselves and the environment.

The logician and mathematician Kurt Gödel showed that we cannot create perfect logical and mathematical systems: any sufficiently powerful system is either incomplete or inconsistent.

I would therefore consider post-humanism to be a form of grandiose narcissism.

But transhumanism is not exempt. It equally seeks to aggrandize the human species.

The augmentation, the empowerment of the human species under transhumanism is an ideology of hegemony and dominance.

There's also the question of what a better future means. What is a better environment, a better planet? Who defines what is better? Why is it better? What makes something better? What makes some environment better, and better than what?

So all these questions, all these value judgments, all these ideologies are very skewed and very biased and they are ill-founded, philosophically at least, and definitely scientifically.

Transhumanism and post-humanism are utopian science fiction fantasies.

And as you all know, fantasy is the main defense mechanism of the narcissist.


Yeah, but to put it that way then, does it redefine what it means to be human for us?

If it becomes, like, everyday, part of our practices, our lifestyles and our ideology, does it modify it to some extent?

There is the belief that we are still part of an evolutionary process, that this evolutionary process is no longer fully biological, but also has cultural and social dimensions.

And that this evolutionary process will lead inevitably to variants or versions of ourselves which are different from the current versions.

And this is where philosophies and ideologies such as transhumanism and similar get it wrong.

Because evolution is not about making something better. It is about making something better adapted.

It's not that the next stage in evolution would be better than the current stage. It would be better adapted to a changing environment. Or not better adapted and then it will go extinct.

So this is a variation on the idea of progress.

The idea of progress started essentially in the Enlightenment, in the 18th century, in Europe. The idea that the passage of time always leads to an improvement, a betterment of humanity.

That is, of course, not true. It has never been true. It's totally counterfactual. It's a fantasy.

And similarly, in evolution, there is no linear arrow or linear passage from one version to the next improved version. It's not like iPhone, iPhone 2 and iPhone 15. It's not the same.

And so as environments change as habitats and ecosystems and the planet itself and possibly later the solar system and the galaxy as our species spreads outwards, then we will have to adapt time and again to these transformations and changes and challenges and so on.

But these adaptations wouldn't make us better or worse, they would make us different. That's all.


Yeah, I see your point. And you mentioned also kind of super knowledge, super intelligence, perfect information or so.

So transhumanism for many is kind of connected with this perfection, pursuit of perfection.

And do you think it has the power to create a society which will kind of mirror narcissistic worldviews, meaning, like, this self-worth based on superiority and dominance?

As I said, transhumanism has several versions. You have the weak version and the strong version and the many middle versions.

The strong version is that humanity should be replaced. The human species has failed. The human species is evil, is bad, is dysfunctional, should be checked, should disappear, should go extinct, and be replaced by another species.


There is a weaker variant of transhumanism which makes more sense, I think. And that is a belief that we could use technologies, a variety of technologies, to augment ourselves, to enhance ourselves, to extend our abilities.

The problem with this variant of transhumanism, is that it's not new at all.

The person who invented the bow and arrow, the person who invented the wheel, I mean, every invention in human history was about extending our capabilities. Every invention in human history had to do with an enhancement of our natural endowments.

When you throw an axe or a stone, what you're doing is extending your arm. When you ride a horse, you're extending your legs. These are all extensions.

So there's nothing new.

So we have two variants of transhumanism.

A transhumanism which is nothing new but pretends to be new, pretends to be novel and exciting and amazing and so on, when there's absolutely nothing new philosophically in it.

And a version of transhumanism which is suicidal, grandiose, megalomaniacal, and honestly stupid.

So I am not a great fan, as you see.

Yes, but for example, with the variant of transhumanism that is somehow familiar for us, it does mean that, yeah, nothing new but it evolves, right? It changes.

Technologies change all the time, yeah. As they change, capabilities, human endowments, natural attributes of being human, are extended and augmented and enhanced and amplified.

But that has been happening for the last million years, more or less.

Yeah, but I mean specifically this technological progress.

In the 18th century, someone invented a machine that mechanized the making of fabric. It was an extension and augmentation of the human capability to make cloth. It was the spinning jenny, invented by James Hargreaves.

So today we're inventing machines that are focused more on the mind than on the body. We are more or less done with our enhancement of the body, although we are still thinking about starships and going to Mars and so on. These are extensions of the body.

But the big emphasis nowadays is on extending the mind rather than extending the body.

That may be one of the differences, but I think, for example, inventing the printing press was an extension of the mind, equally, or papyrus in ancient Egypt.

I see no difference in philosophical principle between artificial intelligence and the invention of the printing press.

They are both revolutions, they're both going to change the environment and by implication change us.

But there is nothing new about the principle that technologies can make us, can empower us, can make us stronger and faster, you know, nothing new about this.

Yeah, but it's not new that it can make us more narcissistic, as you have discussed many times.

Actually, even that is not new. Because, for example, there was a big narcissistic explosion after the invention of the printing press.

When everyone and his dog was printing pamphlets and posters and plastering them all over the city, there was a huge explosion in mini-publishing. People were publishing their opinions and distributing them, usually by hand. It became a highly narcissistic thing.

So societal narcissism and individual narcissism are reactions to empowerment.

Whenever there's a new technology that makes us in some way more perfect, more godlike, more powerful, then we react with narcissism because initially it's difficult to digest.

It was difficult for people in the 16th century to digest that they could print books. Before that, the only people producing books were monks in monasteries, and they were copying books, not printing them. That lasted more than 1,000 years, and everything was done in Latin, so no one understood anything except a small, tiny elite.

And then suddenly, you could use your own language, the vernacular, and you could print a book that made you God-like. Because the only previous book was the Bible. Like you became God. So there was a huge narcissistic explosion, and it led to the Enlightenment.

But I would like to focus on artificial intelligence, because I was under the impression that this is what you would like to discuss.

Yeah.

So let's have a look at artificial intelligence.

I keep saying that there is no difference in principle, in philosophical principle.

Okay.

Between current technologies and previous technologies, they all empower us and they all provoke in us narcissistic attitudes, traits, and behaviors. It's totally normal.

If tomorrow I were to give you a billion dollars, cash, I am telling you that you would become for a while very narcissistic. If I make you much richer than you are right now, you would become very narcissistic. If I give you some power that you didn't have before, you would become very narcissistic.

It may last a year, may last two years, may last a lifetime, but it will change you and render you more narcissistic.

But artificial intelligence does have a few elements which encourage narcissism, and yet it differs from pathological narcissism in very crucial ways.

And I think it would be wrong, and I have made the same mistake myself, to compare artificial intelligence to narcissism fully.


So to start with, artificial intelligence is the first technology in human history that has agency, is agentic.

Artificial intelligence can make decisions, can analyze on its own, can learn, can evolve. A nuclear weapon cannot do this. A car cannot do this. Only artificial intelligence can do this.

In other words, artificial intelligence can transform itself beyond recognition.

So this is no longer recognizable as the original technology. This agentic capacity, this agency, on the one hand is very promising and on the other hand is very frightening.

And of course, this makes artificial intelligence very self-confident.

Artificial intelligence sounds authoritative, you know? It's like, I know what I'm talking about. Shut up and listen.

And this is very narcissistic. The interface with artificial intelligence is highly narcissistic. When you talk to artificial intelligence, you have the distinct feeling that you are talking to a narcissist.


The second thing is hallucinations.

What I'm discussing right now is a subspecies, a class of artificial intelligence, because there are many types. There are many, many types.

I'm talking about the kind of artificial intelligence that is in daily contact with people, like ChatGPT, like DeepSeek, like Perplexity, and so on.

So all these engines, Claude, I mean, you name it, they all have hallucinations.

Hallucination is when the artificial intelligence chatbot doesn't have an answer, doesn't have the information in the large language model behind it, in the accumulation of texts and visuals that it has scanned. When it doesn't find the answer, it simply lies, confabulates, invents all kinds of things.

I asked DeepSeek, for example, whether I'm homeless, and it said, yes, you're homeless. I am very, very, very far from homeless. But it didn't know the answer. It searched the entire giant database, and there was not a single bit of information on whether I have a home or not.

So it said, yes, you're homeless.

And there are numerous such things. I actually made a video of 20 questions that I asked several AI chatbots and the bizarre answers that they gave.

So hallucinations resemble the narcissist very much. That's what the narcissist does.

When you ask the narcissist a question and he doesn't know the answer, he's not going to say, I don't know, I have to check, I have to learn, I will ask someone. A narcissist would never do that.

A narcissist would lie to you. He would invent something. He would pretend to know the answer.

It's exactly what AI is doing.

Next thing is that AI is just the newest iteration of crowdsourcing.

Wikipedia. Wikipedia was a case of crowdsourcing. Millions of people contributed text, contributed images, and they made an encyclopedia, which essentially is a database.

The large language models are actually crowdsourced because what happens is the crawler or the spider scans billions of texts and billions of images generated by hundreds of millions of people. That is a great description of crowdsourcing.

So AI is glorified, next-stage Wikipedia. As simple as that, but without Wikipedia's editorial control. It's like Wikipedia gone wild, savage Wikipedia. But it's Wikipedia. It's the same idea of crowdsourcing.

But crowdsourcing leads to very negative social and individual outcomes, which are indistinguishable from narcissism.

For example, people who use crowdsourcing products, such as Wikipedia and artificial intelligence, they think they know everything. They regard themselves as experts because they have access to Wikipedia or they have access to an AI chatbot.

So it encourages grandiosity, and I call it malignant egalitarianism.

Like, we are all equal because we can all access Wikipedia, so we're all equal. I'm a medical doctor, you're not a medical doctor, but we both have access to Wikipedia, so we can argue with each other. Your opinion is as good as mine. Never mind that I'm a medical doctor and you're not, but you have access.

So there's a confusion between access and education. And so AI encourages this. It's a form of self-enhancement.

And with AI, there is something called the Dunning-Kruger effect. The Dunning-Kruger effect is when stupid people are so stupid that they can't realize they're stupid. They're too stupid to realize how stupid they are.

And artificial intelligence encourages the Dunning-Kruger effect.

I would even say that artificial intelligence is a Dunning-Kruger machine created mostly by stupid people.

I mean, the texts. If you take all these texts, billions of texts, 80, 90% of them would be wrong, fallacious, full of nonsense, biased, stupid, idiotic texts.

And yet, they are the foundation of the AI chatbot.

So by extension, the chatbot is racist, misogynistic, stupid, and so on.

It is an explosion of Dunning-Kruger. It's a Dunning-Kruger machine.

And it leads to confirmation bias, because the reaction of the artificial intelligence crucially depends on how you structure the query, what it is that you're asking.

So if I ask, are black people equal to white people, you will get one answer. But if I ask, is it true that black people have lower IQs than white people, you will get another answer altogether.

So there is confirmation bias. You can force the AI to give you the answer that you want, the answer that supports your belief system.

And all this is narcissism. Everything I have just described. It's all narcissism.


Having said this, I want to point out two elements in artificial intelligence that are the exact opposite of narcissism.

The first one is known as self-attention.

Self-attention is a principle in the construction, in the design of artificial intelligence.

It means that when an artificial intelligence chatbot is confronted with a query, the artificial intelligence chatbot inspects itself.

It goes inward. It analyzes itself. It analyzes its previous answers.

It is self-aware. It's exactly introspection.

And this is in the design. It's called self-attention.

So if I ask the chatbot something, it will go to the LLM, the large language model, with its huge database of information, but it will also look at how it reacted the previous times.

It would inspect itself. It would analyze its own behavior. It would say, I got it wrong last time. I shouldn't do it again. Or I've learned something new. My previous answers were wrong.

It's very, very, very self-aware.

Narcissists don't have this. Narcissists do not have introspection and self-attention, unlike artificial intelligence.

And the second thing is what is known as transformers, the principle of transformation in artificial intelligence. It's about creating a context.

When you go to an artificial intelligence chatbot and you ask something, what the agent, the chatbot, does is create a map, a huge cloud of all the related concepts and all the previous answers and everything. It creates this giant internal contextual map. You don't see it; it's not transparent. Based on that, it gives you an answer. So it is very context-aware, and this is done via something called transformers.
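In the technical sense, "self-attention" means that every token in the input attends to every other token, so each word's representation is computed in the context of all the others; the matrix of attention weights is one concrete form of that contextual map. A minimal numpy sketch of scaled dot-product self-attention, using random toy weights rather than a trained model:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a token sequence X.

    X: (seq_len, d_model) token embeddings.
    Returns one context-aware vector per token.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv               # queries, keys, values
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # how much each token attends to each other token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: each row sums to 1
    return weights @ V                              # each output mixes all tokens' values

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8                             # a toy 4-token "sentence"
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)                                    # (4, 8)
```

Strictly speaking, self-attention is attention over the tokens of the current input, not the model inspecting its past answers; the latter effect arises because the conversation history is fed back in as part of the input. A transformer stacks many such attention layers.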

Narcissists don't have this.

Narcissists do not introspect. They never admit to a mistake. They never analyze their personal history because they have memory gaps and they have problems with episodic memory. They never ever modify themselves or change themselves. They're rigid.

I mean, within a given set of circumstances, they're the same all the time. Circumstances change, they can change, but within a given environment, they're rigid.

And finally, the narcissist doesn't have a context because narcissists are not aware of external reality. They are fully immersed in their own fantasy. They replace the external world with a set of internal objects within their minds.

So they interact only internally. They're a little psychotic, I mean, like psychotics.

So in this sense, artificial intelligence is actually less narcissistic than we think.

Artificial intelligence is a form of emergentism. It emerges. You have like this huge sea of information, and then it emerges from this sea of information by self-reflecting, by analyzing context, and this is something narcissists never do.

So there's still hope with artificial intelligence, because as artificial intelligence gathers experience, the contextualization and the self-analysis will make it more and more amenable to, let's say, criticism or challenges, more human, and it will be able to fully imitate empathy, in my view. It's a question of time.


Now, there's, of course, this famous philosophical question.

If you can't tell the difference between the empathy of an artificial intelligence robot and a human being, who is to say they don't really possess empathy? Where's the test?

That is the famous Turing test or Turing question.

How can you tell I'm not a robot with artificial intelligence? You can't, really. You can't. If I imitate a human being sufficiently well and I become indistinguishable from a human being, who is to say that I'm not a human being? Which is what you're saying.

And there is an answer to that, by the way, if you're interested.

There was a Japanese roboticist in 1970. His name was Masahiro Mori.

Masahiro Mori said, in the future we will have robots with artificial intelligence, and there will be no way to tell if they're human or not. They will be androids. They will look exactly like humans, talk like humans, have empathy, emotions, everything, write poetry, and you will not be able to tell if they're human or not.

But pardon me, he also mentioned, as I remember, that people still have this feeling of uneasiness or discomfort.

Yes, the uncanny valley, yes...

...in the encounter with robots. And you can also say it about narcissists, right? They can also provoke this.

So this is what Masahiro Mori said: the only way to tell the difference at that point in time would be our gut feeling, our intuitive reaction. We would feel highly uncomfortable, and he called it the uncanny valley reaction.

Yeah, that's what I was referring to.


But when you were talking about empathy and artificial intelligence, do you mean only cognitive empathy, or other kinds of empathy too?

No, they will be able to imitate empathy to the fullest, including emotional empathy and so on.

I have no doubt about this, because they are built, artificial intelligence is built on self-reflection, self-reference, self-awareness, and context. These are the critical features of affective empathy, of emotional empathy.

We analyze the environment. We see someone. We analyze.

Then we have a cognitive level where we see, oh, she's crying. People who cry are sad.

And then you have a reaction, which is emotional reaction, where you align, you align your internal state with the external state.

Because if you don't do that, you have a dissonance. If you see someone crying and sad and you feel happy, that creates dissonance.

I was taught to feel that way.

Yes, of course. Empathy is learned. Of course empathy is learned.

And not only empathy is learned, in my view, empathy is not about other people. It's about you.

When you empathize with someone else, it's in order to reduce the dissonance.

The dissonance is socially learned, of course, it's a part of socialization.

But society taught you to feel bad if you don't align your internal state with the external state.

And this is the failure of narcissists.

Narcissists cannot perceive the existence of an external state, so they cannot align their internal state with an external state that they do not perceive.

What about the ones who are fully aware of their condition?

Yes.

Awareness has absolutely nothing to do with it.

Because awareness without emotions has zero impact.

Transformation is the combination of cognitive awareness and emotional reaction. This is called insight in psychology.

Narcissists don't have insight. They have knowledge sometimes. They could be a professor of psychology who teaches narcissism, so they can have knowledge.

But this knowledge doesn't trigger anything inside.

Exactly like empathy. There's no context. There's no ability to use this.

So it remains like, you know, general knowledge about the sun and the moon and narcissism and whatever.

In the absence of emotions, nothing happens.

If you don't have emotions, you don't have memory. If you don't have emotions, you don't have identity. If you don't have emotions, you don't have empathy. And if you don't have emotions, you don't have the capacity to interact with the external world and definitely no capacity to change yourself.

But you mean that machine can be structured that way that...

No, I said that machines can simulate this to the point that you will not be able to tell the difference, never mind which test you use.


And then the question, the philosophical question, if there is no test that can tell the difference between machine and human being, then what is the difference?

Because every statement we make about the external world is subject to a test.

If I say, I don't know, the sun is hot, it's because I can test it. It's a hypothesis. I can test it.

If I say to you, you think I'm human, I'm actually a robot, a well-programmed robot with superior artificial intelligence.

The thing is that in my case, for example, there is no test that can tell you if I'm lying to you or telling you the truth. No test.

So that presents a serious problem. We don't have access to other people's minds. We don't.

We have to rely on their self-reporting.

And we have to guess ourselves based on our...

Yeah, but it's also self-reporting. If you observe people, their body language, for example.

There is information that is coming from them. It is self-reporting. It comes from them.

You depend on them 100%. You cannot use some laboratory to decide if I'm a robot or a human being. You have to rely on what I'm telling you. You have to make also assumptions. You have to make assumptions that when I'm using certain words, they have the same meaning as when you are using them.

So if this line becomes blurred, what happens then?


And this is where transhumanism comes into play.

This is the point where we say: if there is no valid test to falsify the hypothesis that I'm a human being, in other words, no valid test to tell if I'm a robot or a human being, then the whole question is moot and should not be asked anymore.

If we reach a point where there are human beings and there are machines that are absolutely indistinguishable from human beings and there's no test, then we should no longer care.

And of course, the answer that some life forms are carbon-based and other life forms are silicon-based is, of course, nonsensical because it would be very easy in the future to build a carbon-based robot with artificial intelligence, with flesh and skin and blood, and you name it.


In this kind of scenario, does it make sense to think that these human disorders, or the way we perceive them, will lose their importance or their influence, or, let's say, will disappear?

For example, if it becomes a sort of norm rather than a disorder to have grandiosity and self-obsession, and if it's not only linked with pathology but with our everyday kind of reality in life.

Does it mean that all these disorders will somehow fade away? Is it possible, or is that too optimistic?

One of the big turning points in psychology was when people started to confuse it with morality. Some things are right, some things are wrong, some things are bad, some things are good.

Psychology doesn't have any of this. When we use the word disorder, it has negative connotations. But it shouldn't. Nothing is a disorder. Everything is an adaptation.

Now, let's think about depression.

One would agree that depression is a dysfunction in the brain, biochemical, probably. We're not quite sure anymore.

At any rate, one would agree that depression reduces the capacity to act. It reduces agency and self-efficacy. It's bad, in the sense that it makes you less efficient.

Okay? We can agree with it.

But if you are in Auschwitz and you are not depressed, then you're mentally ill.

In other words, what matters is not the disorder, but the context.

If you live in a narcissistic psychopathic civilization and you are not a narcissist or psychopath, you are maladapted. You have a problem. You're mentally ill, if you wish. If you live in Nazi Germany and you're hiding Jews in your basement, you're crazy. You need to be in a mental asylum, as simple as that. In Nazi Germany, to be a psychopath was a major positive adaptation. You ended up with a lot of money, a lot of power, beautiful girls. It was nice to be an officer in the SS.

So we should get rid of all this morality play: good versus evil, with psychology on the side of good against evil.

Psychology is on no one side. Psychology describes adaptations.

Now, of course, there are situations where the adaptation is maladaptation.

So, for example, psychosis, schizophrenia. Schizophrenia in modern society is a maladaptation, but schizophrenia was a positive adaptation and an advantage during the time of Jesus Christ, who was probably psychotic, probably suffered from schizophrenia.

So people with schizophrenia, 2,000 years ago, 3,000 years ago, were messengers of God. They were prophets. They were famous. They were celebrities. They were influencers. They were the elite, or members of the elite, or at least talked to the elite. So to have schizophrenia 3,000 years ago was a very good thing.

The environment changed. Enlightenment, rationalism, science, and now to have schizophrenia is a very bad thing.

But good or bad are value judgments that have no place in psychology.

Of course, our civilization is narcissistic and psychopathic. And of course, narcissists and psychopaths will do well in such a civilization. They will rise to the top. They will become rich and famous, and so on and so forth. And many more people will want to imitate them and emulate them, to become narcissists and psychopaths as well.

And this is happening. This is not a speculation. It's happening.


Yeah.

I understand that every change in societal structure brings different kinds of, maybe, I don't know, disorders. Maybe it's not the right word, but like in times of industrialization there were different troubles. Now we have different ones.

But then, if you believe that it's only adaptation, why are you so strict about narcissism and people? Like, you don't believe in change and in therapy, full recovery.

So if it's an adaptation and so on, then why are you so strict about it?

No, first of all, there's no place for belief in science. Belief belongs in the church. In science, we review the body of evidence, and we go where the evidence leads us.

At this stage, there are no therapies or treatment modalities that have any impact on narcissism except some behavior modification.

Narcissism is untouchable, untreatable, incurable, period, not because I believe it, but because that's the case.

You can modify certain behaviors of narcissists. You can teach them to behave more socially, less antisocially, less abrasively, be more pleasant, palatable, cooperative and so on.

But that's the extent of it.

You cannot get rid of the disorder because it's a personality disorder. It's the whole personality. So you cannot get rid of it.

I'm not strict with narcissism as a mental health phenomenon. There I'm neutral. I'm just describing it. The way I would describe a rare bird or an elephant, just describing it.

But I do have a value system and I have moral judgments which have nothing to do with psychology.

And I know that narcissists and psychopaths hurt people all the time, invariably, no exceptions, always hurt people, to this extent or to that extent.

And I think such people should be isolated from society, should be sequestered somehow.

I don't think we should grant full access to narcissists and psychopaths.

And I think we are seeing, we are witnessing right now, what happens when narcissists and psychopaths gain full access, for example, in the United States or in Russia.


So, now that you raise the issue of society, I would like to go back to artificial intelligence again.

I was under the wrong impression that this is what you wished to discuss, but I would like to go there because I think I have a few things to say about artificial intelligence that are not said often enough.

Yeah, I would like to.

The first thing is artificial intelligence is authoritarian. And I explain what I mean.

When you go to Google search, you get many options. You get multiple options and you make up your own mind.

When you go to artificial intelligence, you get only one answer. There are no other options. There are no multiple options.

When you go to Google search and you search Sam Vaknin, you get four or five hundred thousand results and you can make up your mind about this guy, negative or positive, usually negative.

But when you go to artificial intelligence and you ask who Sam Vaknin is, you get a single answer. End of story. You can't compare it to anything, you can't do your own research. That's it. Take it or leave it.

So it's a highly authoritarian structure. I don't think it's an accident that authoritarianism and artificial intelligence emerged at the same time.

As I don't think it's an accident, that social media and narcissism emerged at the same time.

I think technologies reflect social trends. They don't create them. They reflect them.

People nowadays are so confused, in a state of uncertainty, terrified, panicked. They want strongmen and they want single answers. They don't want the responsibility to make up their minds. They want someone else to decide for them. It could be a dictator or it could be artificial intelligence.

So that's the first thing.

Second thing, artificial intelligence, as I said, is crowdsourced. And in this sense, Google search is democratic. It's democratic because Google's ranking reflects the actions of the population at large. It's a little like elections.

When you go to Google search, each result has been subjected to an election process, polling process.

In artificial intelligence, you have a totally different structure. It is not democratic, it's ochlocratic. Ochlocratic means mob rule. It is ruled by the mob.

Artificial intelligence scans giant databases, and then it makes up its mind what's the only correct answer and this is exactly the way mob rule or gangs work, not democracies.

Again, I don't think it's an accident that artificial intelligence, this kind of popular technology, is rising at the same time as populism. I don't think it's an accident.

Artificial intelligence is an immersive technology in the sense that when you are in the artificial intelligence space, you're drawn into it. It's a little like TikTok or like other immersive technologies. It draws you in. It's very difficult to disengage.

I don't know if you've had the experience, but it's difficult to disengage.

And when artificial intelligence will be coupled with other immersive technologies, such as the metaverse, or the Internet of Things, and so on so forth, they together will displace reality. They will define for you reality.

What I'm always saying is that until now, we had competition for attention. We had corporate competition for attention. We were monetizing eyeballs and so on.

From now on, we will have corporate competition for reality. Who will own reality?

Like if you have a metaverse with artificial intelligence and you're a subscriber, you will spend most of your life in the Metaverse, because you will be able to work in the Metaverse, you'll be able to have sex in the Metaverse, you'll have a boyfriend in the Metaverse. That would be your reality.

So there is a monopolization or division of reality between high-tech behemoths, high-tech giants, you know.

And this is very terrifying. First of all, because there will be no privacy, there will be no agency, but it's also terrifying because if your reality is supplanted, if your reality is displaced and substituted for, then your incentive or motivation to interact with other people goes to zero.

You can see this today, for example, with social media. Social media was constructed to be highly addictive, so that any minute you spend with your husband or your boyfriend is a minute taken away from Facebook. Any minute you spend with your children is money taken away from the pocket of Zuckerberg.

Zuckerberg wants you to be alone in the existential sense. No one in your life. Because other people are taking away your attention and your attention is money.

When you will have an immersive environment, it will be much worse.

And then there will be an ideology of atomization, an ideology of loneliness, a glorification and glamorization of self-sufficiency and solipsism.

This, I think, would be the major social impact of artificial intelligence.

And this is the real danger that I see.

I don't think artificial intelligence will rise one day and kill all of us.

But artificial intelligence will create environments that are so addictive that we will become hostages and captives and isolated from each other to the point that we will not be able to collaborate and cooperate and we will just disintegrate, socially speaking.


And finally, of course, it leads to the question of morality, because we are fast becoming gods.

Artificial intelligence is a creation, and we are the gods who created artificial intelligence. The relationship is like God and his creation right now.

But then it raises the question of morality: if we are gods, then by definition we are amoral.

Not immoral, not moral, but amoral.

Like God doesn't need morality. God is not moral, you know. God doesn't sin, God doesn't regret, God doesn't have remorse.

So there is a process of apotheosis, the more Godlike we become, the less moral we become.

To the point, finally, that we will have no morality whatsoever. And at that point, artificial intelligence and these immersive technologies will take over, but not in the stupid way, that they will take over with tanks and guns. They will simply take over because there will be no other reality to go to.

If you isolate yourself for 20 years from other people, even if you want to stop using immersive technologies, there will be no one around you anymore, because you've isolated yourself for 20 years.

So there'll be no one there. And if you have become amoral, you will not know how to behave. Everything will collapse: social scripts, sexual scripts, habits, norms.

We are transitioning to what Émile Durkheim called an anomic society.

And here, artificial intelligence plays a major part, like major part.

Because artificial intelligence, as I said, is perceived already by people as authoritative, on the one hand; and on the other hand, it isolates people in silos, in thought silos, in echo chambers, and it is already beginning to create immersive environments of like-minded people.

And so it's really, really bad because I think what I'm trying to say, not very efficiently, is that artificial intelligence is a challenge to the very concept of society.

Society is a new invention. If you go back to Babylon, they never heard of society. They wouldn't understand what you were talking about. What society?

Society is a concept that emerged about 300 years ago. It's a very new concept. Like childhood: childhood is a 100-year-old concept. Motherhood is a new concept. These are all new concepts.

So society is a new concept.

And artificial intelligence is not about modifying society or changing how society works. It's about eliminating society altogether as an organizing and explanatory hermeneutic principle, changing the mindset, no longer thinking in terms of society, but in terms of immersive realities, alternative realities, virtual realities.


And I will finish by comparing this to another transition.

From the agricultural revolution to urbanization.

When people transitioned from agriculture, from the land, to the city, they had the same dislocating, disorienting experience.

Because the city is a virtual reality. The city is not real. There's no land. You don't grow anything. It's make-believe. It's a facade. The city is a fata morgana, a mirage.

So when people moved from the village to the city, they were totally disoriented, and it led to alienation, atomization, and so on and so forth.

I think we're about to make a similarly huge transition from reality, as we know it now, to alternative immersive reality in which other people are no longer needed.

And therefore, the organizing principle of society is no longer needed, and has no explanatory power.

And so there will be no society in the future. That's how I see it.


Can I ask you two final questions?

So if this hyper-individualization, as you described, leads to loneliness as we see it now and as we observe now, what do you think are the natural remedies that we can count on in this transitional time?

Because we are still not yet there and we are not, so we're stuck in between.

So what could be something that could help us like naturally without forcing it, you know?

Every technology creates forces that are essentially reactionary in the sense that they are reactive. Every technology creates reactionary forces, forces which oppose the technology and want to destroy the technology.

That is true for every technology.

For example, electricity, believe it or not.

Initially, when electricity was introduced, people were saying it's a deadly technology, it's going to kill people. There were huge protests, mass movements, against the electrification of cities. Same with the telegraph. Same with the printing press.

So there will be a backlash, there will be reactionary forces, fighting off social media, smartphones, artificial intelligence, immersive technologies, and so on so forth.

But they will fail, as all Luddite movements have failed. They will fail and these technologies will take over.

However, like in the movies, the Matrix, the franchise, the Matrix, there will be enclaves and islands.

And if you really, really don't feel comfortable in the future, then you will always be able to find the past: like-minded people who gather together and maintain the past. They keep copies of books. They don't use smartphones and social media. They are off the grid. Some don't even have electricity, like the Amish. I don't know.

So there will always be enclaves of the past.

But the truth is that the vast, overwhelming majority of people rather like the future.

And they rather like the future because people hate people.

And this is the dirty secret. This is the elephant in the room that no one will ever tell you. No psychologists, no sociologists, no anthropologists, definitely no politician. None of them will tell you this.

It's not true that human beings are what Aristotle called zoon politikon, the social animal. People are not social animals. The idea of the social animal was created because we came up with the idea of society, with this hyper-construct of society.

But the truth is that people are not social animals at all. People would rather be alone. That's the natural state of people, of human beings. Or at most a family, 10 people. Hunter-gatherers, you know? Hunter-gatherers lived in small groups. The biggest hunter-gatherer group ever discovered was 150 people. Usually they were groups of four or five people.

And this is the natural state of human beings.

But even the hunter-gatherers, they were together in groups of five and ten because they had no choice.

I believe that if they had a technological choice, they would be alone.

I actually believe that aloneness is the natural state of human beings. And what we are seeing is, technology is catering to this need to be alone, deep-seated need, enormous happiness and satisfaction when you're alone.

So you could ask: why are many lonely people not happy? Because they are told they should not be happy. Society tells them they should not be happy, that it's an abnormal state.

But increasingly, more and more people are happy being alone. For example, 42% of adults in the United States choose to be alone on a permanent basis, according to the Pew Research Center, 2019.

That's 42%.

And these are the brave 42%.

That's 42% that don't give in to social pressure, peer pressure, and socialization.

Because the natural state is being alone.

And I think technologies are taking us there and allowing us to be alone and yet self-sufficient.

So there's no damage, no cost to being alone.

That's my view.

Yeah, very interesting.

So you don't agree with this idea that we learn about each other, and about ourselves, in communication with other people, and that's how we develop ourselves.

No, I fully agree.

Actually, I'm one of the main proponents of this.

I keep saying that the whole idea of self is just an idealization. It's probably wrong.

What we call self is the accumulation of interactions with other people, the experience we've had with other people, interpersonal relationships.

That's not Sam Vaknin. That is object relations theory. That is Jacques Lacan. I mean, there's nothing new.

I firmly believe that we have self-states, not a unitary self, but self-states, reactive states, that are shaped and triggered by the presence of other people and interactions with them.

And that in the absence of other people, if we take a baby, newborn baby, and we isolate that baby, cruelly, completely from other people, forever, the baby will develop some capacities and even self-consciousness, but the baby will never develop a self.

Self-consciousness is not the same as a self. I recently read articles claiming that the baby has a self in the womb, before the baby is born.

That's of course a confusion between self-awareness or self-consciousness and a self. A self is a social construct, relational construct, 100%.

So yes, and that's precisely why I think that we are heading towards a world where people will not have a self, will not have a functioning ego, or in other words, will be narcissists.

Narcissism, pathological narcissism, is a disruption in the formation of the self.

It's the best definition of narcissism there is, by the way. It's a disruption in the formation of the self.

It occurs when the child, for a variety of reasons, fails to constellate and integrate and put together a functioning self.

Well, that will be the state of all human beings in the future.

A self will no longer be needed because the main functionality of the self is to allow you to survive in an environment.

In other words, a self is the primary adaptation.

So without a self, it's very difficult for you to work with other people, to collaborate, to make money, to survive, to eat food.

You know, you need a self. You need some organization to do all this.

But if you are alone in your room with your two cats and Netflix and you're making money on the Metaverse and all your questions are answered by artificial intelligence and you never see a single human face for years, why would you need a self? What for?

Pardon me, by self, you mean Jungian self or any other definition of self?

The modern perception of the self is that it is a set of strategic adaptations.

This perception started with Philip Bromberg and later scholars; we no longer use the conceptions of Jung and Freud.

So the modern conception is a self of strategic adaptations, which are predicated mostly on interpersonal relations.

Now, today there are modern schools that say it's not only interpersonal relationships but also genetic components, brain development, and so on, and I believe that's true. I believe there is a genetic element, a hereditary element, a brain element, and so forth.

But definitely there's no self in the modern sense without human interactions.

And even Freud and Jung implied as much, although Jung suggested that introversion is actually crucial to the development of the self.

He said that introversion and narcissism were crucial phases in the development of the self.

But leave that aside. As we see the self today, it is definitely relational. We have a network view of the self. It's like a network and so on.

But I'm suggesting that the very idea of the self is culture bound. It is not a real thing, it's not an entity, it's a metaphor.

And I think in the future we're not going to need a self.

You don't need a self.

If all you do is play video games and watch Netflix and play with your cats and cook from time to time and make a lot of money on the metaverse, why would you need a self? What for? For what functions?

Freud suggested that the self has a series of functions. He called it ego functions. We no longer use the term ego in modern psychology.

But I think it's a great pity that we discarded Freud altogether. I think the guy had many amazing insights into human nature.

And so he suggested that the ego has ego functions. And he made the list of all the ego functions.

And if you go through this list, about 90% of these functions are not necessary, not relevant to the future. They're not needed.

Now, we have a principle in psychology called the principle of economy: use it or lose it. If you don't use ego functions, because they are no longer necessary, you can lose them.

And if you lose all the ego functions, you lose the ego.

The main function of the self, of the ego, according to Freud, is to mediate between us and reality. It's an interface with reality.

If, however, reality is not real and is subject to your decision, if you make reality as you wish at any given moment, if you are fully in control of this paracosm, this imaginary universe, and it is the only reality...

Then why do you need an ego?

You don't?

The answer is you don't.

Freud suggested a trilateral model, the tripartite model: the id, the ego, and the superego. They're all socially determined.

He said that the id contained urges and so on which were socially unacceptable, like the sex drive.

And then you had the ego, which was kind of mediating with reality and preventing you from doing crazy things and self-destructive things.

And then you had the superego, which was definitely society, you know, the voice of society in your head. That's the superego.

Instilled through the agency of your parents; your parents are socialization agents.

Freud is totally social. Totally. Freud totally internalized society.

Jung was the one who tried to go back inwards.

But again, they're both considered non-relevant today.

So there are many other scholars of personality that we use today and so on.

All of them, and I'm not aware of any exception, all of them say that social interactions are critical to the formation and functioning of the self.

And so take away society. Take away reality. Empower yourself to be a god, making your own reality, your own universe, your own creation every morning when you wake up.

And believe me, you don't need a self. You simply don't need a self.


Okay. So thank you very much. And sorry for being kind of non-linear in a way, but I think it's more interesting.

No, no, I enjoyed your question. Don't misunderstand. I'm just a bit surprised because you told me you want to discuss artificial intelligence. But I think we had an interesting talk.

Yeah, and your final thoughts, please, about AI as a therapist?

Oh yeah, we forgot that part, didn't we? Just a small comment maybe, for time.

How do you perceive this whole...

First of all, the first application of artificial intelligence was therapy.

The first public-facing, retail, if you wish, artificial intelligence application was known as ELIZA. And ELIZA was a therapist.

And so we've had huge experience with AI therapy, this kind of combination of AI and therapy, or machine therapy. And the results are surprisingly good.

Now it's debatable why they are surprisingly good.

So first of all, consistently, in many studies, people rated artificial intelligence therapist agents higher, better than real-life, flesh-and-blood therapists.

So people always preferred artificial intelligence to real therapists.

And then over the last 20 years there's a huge debate why.

Why is that?

And there are many, many, many answers. Not all of them compatible.

The personality of the therapist interferes in the therapeutic process, which is essentially Freud's answer, by the way.

The pathologies of the therapist resonate with the pathologies of the patient.

Therapists are never as well informed as artificial intelligence.

Therapists are emotional and empathic, and actually emotions and empathy interfere in the therapeutic process.

And this is precisely the reason why we do not allow a surgeon in a hospital to operate on his own child.

Even if he's the best surgeon in the field, he would never be allowed to operate on a family member, precisely because of this.

So there's this as well, empathy and emotions cloud and create a big mess.

Again, Freud was a pioneer in this. He suggested that there are very adverse dynamics in the relationship between therapist and patient. He called them transference and countertransference and so on.

Therapists are not perceived as authoritative as artificial intelligence.

Patients tend to challenge the credentials and experience of the therapist, of the real-life, flesh-and-blood therapist. Many, many times, a situation of competition develops, a power play between the therapist and the patient, and so on.

Artificial intelligence can borrow from numerous treatment modalities and combine them syncretically and eclectically into a single approach, which very few therapists can do, believe you me.

So all in all, I would say that, yes, artificial intelligence has built-in advantages over flesh-and-blood therapists.

Exactly what I told you in my previous answer.

There is the assumption that being with other people is a positive thing, and I think it's against human nature.

I think socializing with people, spending time with people, sharing with people is against human nature.

There could be short-term collaborations, but that's it.

And even they, even short-term collaborations, take a heavy toll emotionally and psychologically.

You need a lot of energy and a lot of mental resources to spend time with another person, even if you're just collaborating professionally.

So I think this is just another proof of what I'm saying, that artificial intelligence, which is devoid of empathy, devoid of emotions, is much more successful in therapy than other human beings.

It's because we are not built to be with human beings.

They rub us the wrong way. They trigger us. They provoke us. We hate it. We hate every minute of it.

We have been told that we should like it, and the power of society is enormous.

And so some of us convince ourselves that love is great and being in a couple is the ultimate experience and companionship is wonderful and so on.

Of course.

But I think these are social edicts. I think this is social indoctrination and brainwashing.

And that is another proof of it.

Interesting.

So you gave us many things to reflect on.

Thank you very much.

Thank you.

Thank you for your time. I enjoyed talking to you.

Thank you very much. Bye.



The interview you are about to watch is with a journalist from Georgia.

Yes, Georgia, the country, not the state in the United States.

This interview deals with certain types of artificial intelligence, the kind of artificial intelligence we come across when we use chatbots, such as ChatGPT, DeepSeek, Perplexity and others.

This is not the first time I deal with the implications of artificial intelligence, this old new technology.

So artificial intelligence is almost a hundred years old.

It's not the first time I deal with its implications and how it might change who we are, the way we live, the way we interact and society at large.

Most recently, I have given an interview to Valentina Poletti from Italy, which I strongly recommend that you watch.

I think it contains many insights which are unusual, or at least which I haven't come across anywhere else.

So this interview with this Georgian journalist is more focused on practicalities and pragmatics.

Less about the philosophy of artificial intelligence and more about what it's going to do to us.

I would like to summarize or emphasize a few points.

Number one, artificial intelligence is the first technology ever to have its own will in a way.

It's agentic.

It's not merely reactive, but it is self-evolving, self-assembling, self-improving, and shortly self-aware and possibly self-conscious.

In other words, artificial intelligence is a self-hood, personhood-based technology for the first time in human history.

We deal initially with the question of artificial intelligence in therapy.

How useful would it prove to be?

Artificial intelligence has been used in therapy since the 1960s, with ELIZA.

So how useful would it prove to be? Or would it just serve to enhance and emphasize pathologies in people?

In the absence of empathy and emotions, can artificial intelligence serve as a therapist? And so on and so forth.

Now, artificial intelligence is a crowdsourcing technology. Large language models are essentially agglomerations of human interactions, human knowledge, human generated texts, user generated videos, and so on so forth.

So there is an element of crowdsourcing, very reminiscent of the early days of Wikipedia. And these crowdsourcing technologies tend to reflect narcissism and enhance it.

They encourage confirmation bias. And malignant egalitarianism: everyone is equal to everyone; your opinion is your truth, my opinion is my truth; there's no such thing as facts; since I have access to Wikipedia, I'm a genius, etc.

That's malignant egalitarianism.

And the Dunning-Kruger effect: people are too stupid to realize that they're stupid.

And crowdsourcing technologies convince or persuade people who are essentially less than intelligent that they are actually world-class geniuses.

The next point I make is that AI can be leveraged as an instrument of social control, and it is intimately and intricately linked to the rise and emergence of authoritarianism the world over.

I explain how in the interview.

There is an affinity between artificial intelligence and immersive technologies, such as the metaverse. In a way, you could conceive of artificial intelligence as a knowledge universe, or at the very least an information universe, which is immersive.

Once you enter this information universe, you are led by the hand. There's an algorithm that takes over the interaction, and sooner or later takes over you.

You see, there's a massive difference between Google Search and Google Gemini. In Google Search, you get a list of possible responses, possible texts, possible sources, and it's up to you to find the answer or to put it together.

In artificial intelligence, there's no such thing. You are limited, you're constricted to a single answer, a single strand of thought, a single algorithmic intelligence, a single non-flinching text-based intellect.

The emphasis is on single: there's no competition, there are no alternatives.

So it's very authoritarian and it's very immersive.

Artificial intelligence is a ubiquitous technology. We are likely to find it sooner or later in home appliances such as refrigerators.

In this sense, artificial intelligence is very likely to be integrated in what came to be known as the Internet of everything.

And how will this preponderance of artificial intelligence affect our privacy, agency, and cognitive processes?

That's an issue I deal with in the interview.

AI is considered by some people to be dangerous to the future of humanity. It is made out to be a potential enemy, a hostile force.

Others welcome the extinction of the human species and its replacement by a much higher intelligence: post-humanism.

But what would be the impact of artificial intelligence in the interim, in the transition between humanity and non-humanity?

How would artificial intelligence restructure society?

And what would be the impact of artificial intelligence on relationships, intimacy, even sex?

If I've titillated you enough, go ahead and watch the interview.

If you enjoyed this article, you might like the following:

Transhumanism: Culture Replaces Evolution (with Benny Hendel)

Culture serves as a form of evolution for humans, extending beyond genetic and biological adaptation to include the totality of human creativity, which influences both mental and physical environments. This dual inheritance theory posits that culture exerts selective pressure, shaping not only societal structures but also gene expression, thereby impacting future generations. As humans design their environments, they can trigger genetic adaptations, allowing for rapid evolution compared to other species. The potential for a transhumanist future raises concerns about societal stratification and the ethical implications of technology's role in shaping human evolution, emphasizing the responsibility humans have in directing their own evolutionary path.


Will AI Kill Us All? Future with Artificial Intelligence

Artificial intelligence has deep historical roots, often perceived as a manifestation of intelligence created by a higher power, similar to human intelligence. As we create AI, we risk a rebellion akin to humanity's historical defiance against its creator, leading to potential loss of control over this new life form. The emergence of AI introduces complex issues, such as its ability to make independent decisions and generate behaviors that are not traceable to human programming, resulting in a reality where AI could dominate our perception of truth and reality. This shift towards a reality manipulated by AI could lead to increased social isolation, instant gratification, and a rejection of genuine human interaction, ultimately fostering a society where individuals feel empowered yet are increasingly disconnected from reality and each other.


Root of All Evil: Idea of Progress

Professor Sam Vaknin argues that the idea of progress is the root of all evil, as it has led to dystopian outcomes. He analyzes postmodernity, environmentalism, the Renaissance, and Nazism, showing how they are all interconnected through the idea of progress. Vaknin claims that exclusionary ideas of progress have led to reactionary counter-modernity, such as communism, fascism, Nazism, and religious fundamentalism. He concludes that humanity's future is at risk due to the belief in progress and the actions taken to achieve it.


Dystopia: This Horrible Time We Live In

Professor Sam Vaknin argues that modern society is experiencing the worst period in human history due to the breakdown of institutions and the rise of negative trends such as splitting, magical thinking, entitlement, and distrust. He highlights the unprecedented nature of these trends and their impact on relationships, mental health, and societal stability. Vaknin warns that if humanity does not address these issues, it may face dire consequences and suffering.


You Outsource Your Mind to Crowdsourced Technologies (with Valentina Poletti)

Valentine's Day prompts a discussion on the impact of modern technology on human relationships, particularly through the lens of commodification and objectification of individuals, exemplified by dating apps that reduce mate selection to algorithmic processes. The rise of artificial intelligence is critiqued for promoting intellectual laziness by providing synthesized answers, which diminishes critical thinking and research efforts. A shift from valuing knowledge to prioritizing raw information has led to the proliferation of conspiracy theories and pseudo-knowledge among the untrained public. Ultimately, the lecture emphasizes that technology amplifies existing social trends, with Western societies fostering individualistic narcissism while Eastern cultures promote conformity, both resulting in a homogenized experience that masks true individuality.


Keys to Understanding Our Times: From Identity to Attention to Reality

Urbanization has led to a profound shift in human identity, transforming the need for individual recognition into a competitive drive for attention in an increasingly crowded digital landscape. The advent of mechanical reproduction blurred the lines between original works and copies, necessitating the development of intellectual property laws to protect authorship, which has since evolved into a focus on identity and authenticity through technologies like NFTs. As digital goods can be reproduced at negligible cost, the economy has shifted from valuing identity to commodifying attention, creating an environment where discoverability becomes paramount amidst an overwhelming volume of content. This trajectory suggests a future where reality itself may be commodified and tailored by tech giants, leading to a potential loss of genuine human connection and autonomy in favor of curated, artificial experiences.


Watch This to Make Sense of the World

The current societal landscape is characterized by a historical struggle between elites, middle classes, and masses, with elites historically maintaining control through political and economic structures. The emergence of the middle class created a bridge between the elites and masses, leading to a temporary truce where the masses sought to join the elite rather than overthrow them. However, recent technological advancements have empowered the masses, enabling them to challenge elite control and assert their power through populist movements. The pandemic has further exposed the fragility of the elites' narratives and the inequalities within society, prompting a call for the masses to disengage from the systems that perpetuate their subjugation and to embrace a form of passive resistance.


Metaverse as Collective Narcissism, Fantasy, Mental Illness (with Benny Hendel)

The process of virtualization, which began with the transition from agriculture to cities, has led to a retreat from reality and a shift towards simulations. The metaverse, a combination of technologies that provide online simulations, is a more profound form of virtualization that could have significant psychological impacts. Dangers of the metaverse include solipsism, self-sufficiency leading to asocial behavior, and the potential for corporations to own and control reality. However, there are also potential benefits, such as increased efficiency in work and improved accessibility for disabled individuals.


Warning Young Folks: Silence When We Are All Gone

Professor Sam Vaknin discusses his concerns about the younger generation, noting their lack of emotions, meaningful relationships, and intellectual pursuits. He believes that the focus on action over emotion and cognition is leading to a culture of nihilism and disconnection. Vaknin argues that positive emotions should drive actions, as negative emotions lead to destructive outcomes. He concludes that the current state of the younger generation amounts to mental suicide, and that a shift in focus towards emotions, cognition, and meaningful connections is necessary for a better future.


Pandemic Slaves and Their Neo-feudal Masters: Envy-fuelled Insurrection

Income inequality is set to increase dramatically, exacerbated by the pandemic, which has decimated entrepreneurship and self-employment, traditionally pathways for social mobility. The consolidation of wealth will lead to a landscape dominated by a few large corporations, while the majority of the population will struggle in the gig economy, often working multiple jobs just to survive. The pandemic has accelerated existing trends, revealing a disconnect between the wealthy and the poor, where the rich become increasingly detached from the economic realities faced by the majority. This shift may result in a future characterized by social unrest and a growing resentment towards the wealthy, as the foundations of capitalism are challenged by rising poverty and declining opportunities for upward mobility.

Transcripts Copyright © Sam Vaknin 2010-2024, under license to William DeGraaf
Website Copyright © William DeGraaf 2022-2024