Artificial intelligence is all the rage.
Many of you have come across ChatGPT. Do you know what the T stands for? It stands for transformer.
And today I'm going to borrow concepts from the cutting edge of artificial intelligence and apply them to narcissism. Most notably, I'm going to explore the concept of self-attention.
My name is Sam Vaknin, and I am the author of Malignant Self Love: Narcissism Revisited and a professor of clinical psychology.
So the big idea right now in artificial intelligence is that the more data you feed into large language models, the more you boost the performance of an artificial intelligence program or application.
The belief is that when we reach a certain point, a certain accumulation, a certain quantitative level of assimilated data, there is going to be a phase transition: a phase transition from machine intelligence to human-level, human-like intelligence.
So the idea is simple. You create an artificial intelligence program or app and you feed it data. This is called a large language model, an LLM. The more data it digests, absorbs, assimilates, and consumes, the more clever, intuitive, and human-like it becomes, until you reach the point, the Turing test, where you can't tell the difference between these applications and human beings.
And if this is reminiscent of narcissists, that is no coincidence.
Okay, what is the transformer? I opened this lecture with the transformer. Transformer is a term coined in a paper, "Attention Is All You Need," published by a team at Google in 2017.
In that paper, the team introduced the concept of self-attention.
It means that when you give a model, a language model, a string of words or a string of numbers, a string of data, the model doesn't consider each element in the string separately.
So if you say "I love you," the model doesn't process this input by isolating the words I, love, you. That's not how the model works. It doesn't consider each word by itself. Instead, the model makes links between the words and the data it has already been fed; it transforms the new input and incorporates it into existing linkages and dependencies in order to make sense of it.
So there are two ways to go about it.
If I tell you "I love you," you can analyze this sentence word by word. You can ask: what is I? What is love? What is you?
This is not how large language models have worked since 2018.
What do they do instead? They take the totality, the whole string, the whole statement: I love you. And they compare it to previous statements and strings. Through this act of comparison, they are able to find linkages and dependencies. And by spotting these linkages and dependencies, in other words by generating context, they are able to derive meaning.
So the new input fits, like a Lego brick, into a pre-existing field or body of knowledge. This is self-attention, and it involves transformation.
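To make this concrete, here is a minimal sketch in Python with NumPy. The three-token embeddings are invented purely for illustration (a real model learns such vectors from data); the point is only that each token's representation ends up blended with every other token's, rather than standing alone.

```python
import numpy as np

# Invented 4-dimensional embeddings for three tokens. A real model learns
# these vectors; they are hand-picked here purely for illustration.
embeddings = {
    "I":    np.array([1.0, 0.2, 0.0, 0.5]),
    "love": np.array([0.3, 1.0, 0.4, 0.0]),
    "you":  np.array([0.9, 0.1, 0.2, 0.6]),
}
tokens = ["I", "love", "you"]
X = np.stack([embeddings[t] for t in tokens])   # shape (3, 4)

# Instead of judging each word in isolation, score every token against
# every other token in the sentence.
scores = X @ X.T                                # (3, 3) pairwise similarities
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)

# Each token's new representation is a weighted blend of the whole
# sentence, so "love" now carries information about "I" and "you".
contextual = weights @ X
print(np.round(weights, 2))
```

The comparison step is just those pairwise similarity scores: the model derives each word's meaning from its links to the rest of the input.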
Self-attention is therefore a mechanism used in machine learning especially in natural language processing with the unfortunate acronym NLP. Self-attention is also used in computer vision.
And as I said, self-attention captures dependencies and relationships within input sequences. It allows the language model to identify and to weigh the importance of different parts of the input sequence by self-observing, self-attending.
This process, effectively introspection, is what generates the language. It is what renders all these inputs into language elements.
So here we see the buds, the first stages of machine introspection, which could be very frightening to some people, because it renders these machines very human.
We used to think that the only thing differentiating human beings from animals and from machines is the ability of human beings to introspect: to regard themselves from the outside, to delve deep into themselves as if they were mere observers, to put a distance between themselves and themselves.
Well, now machines are doing this, in this algorithm of self-attention.
And so there's a lot of introspection going on. And the meaning of information and data coming from the outside is determined via this process of self-attention or introspection.
Self-attention operates by transforming the input sequence into three vectors: query, key, and value. These three vectors are the outcome of linear transformations of the input.
The attention mechanism calculates a weighted sum of the values, based on the similarity between the query and the key vectors.
The resulting weighted sum, together with the original input, is then passed through a feed-forward neural network and produces the final output.
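Here is a minimal sketch of that mechanism in Python with NumPy. The projection matrices are random stand-ins for learned parameters and the dimensions are arbitrary; only the sequence of operations (the three linear transformations, the similarity-weighted sum, the feed-forward pass) follows the description above.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)     # numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, W_q, W_k, W_v):
    """Scaled dot-product self-attention over one sequence X of shape (seq_len, d)."""
    Q, K, V = X @ W_q, X @ W_k, X @ W_v         # the three linear transformations
    scores = Q @ K.T / np.sqrt(K.shape[-1])     # query-key similarity
    weights = softmax(scores)                   # each row sums to 1
    return weights @ V                          # weighted sum of the values

d_model = 8
X = rng.normal(size=(3, d_model))               # a 3-token input sequence
W_q, W_k, W_v = (rng.normal(size=(d_model, d_model)) for _ in range(3))

attended = self_attention(X, W_q, W_k, W_v)

# The weighted sum, together with the original input, passes through a
# small feed-forward network to produce the block's final output.
W1 = rng.normal(size=(d_model, 32))
W2 = rng.normal(size=(32, d_model))
output = np.maximum(0.0, (attended + X) @ W1) @ W2   # ReLU feed-forward
print(output.shape)                                  # (3, 8)
```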
Now, back to English: what the model does is focus. It creates the equivalent of attention.
Now when we focus on something, when we direct our attention, we're making an implicit decision or distinction between relevant and non-relevant data.
So the process of self-attention in machines, which is essentially introspection, involves a decision about the relevance of data, previously acquired data.
The model focuses on relevant information. And because it focuses on relevant information, it captures long-range dependencies, relationships, and linkages.
This is almost indistinguishable from what human beings do. Almost.
Self-attention is very important in machine learning and artificial intelligence because, first, it identifies long-range dependencies. It allows the model to capture relationships between distant elements in a sequence, to understand complex patterns.
Second, it provides context.
Understanding and meaning emerge only from context.
If you don't have context, what you have is raw material, raw information.
And that's a problem in today's world.
People have a lot of raw information and raw material. They don't have the context because they are not educated. They're laymen, and yet they believe that they're knowledgeable. They deceive themselves grandiosely into believing that if they have access to information, they're educated.
It's not the same.
In machine learning, contextual understanding is critical.
By attending to different parts of the input sequence, self-attention helps the model understand not only the input but the context.
And the model assigns appropriate weights to each element based on relevance.
And finally, it enables parallel computation.
Self-attention can be computed in parallel for each element in the sequence, making it computationally efficient and scalable for large data sets.
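A toy contrast in Python makes the point. The recurrent update below is a stand-in for sequential models generally, not any particular architecture; the attention scores, by contrast, fall out of a single matrix product.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(512, 64))          # 512 tokens, 64-dimensional embeddings

# Recurrent-style processing (a toy stand-in): each position must wait
# for the previous one, so the loop is inherently sequential.
state = np.zeros(64)
for x in X:
    state = np.tanh(state + x)

# Self-attention: every query-key comparison for all 512 positions comes
# out of one matrix product, which parallel hardware computes at once.
scores = X @ X.T / np.sqrt(64)          # all 512 x 512 comparisons in one step
```

This is why the transformer scales to very large data sets: the expensive part is a matrix multiplication, which is exactly what modern hardware parallelizes best.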
Self-attention has been applied in many areas of machine learning and artificial intelligence. I mentioned natural language processing; there, self-attention mechanisms and the transformer model have revolutionized the field. Machine translation, text summarization, sentiment analysis, question answering: they all depend essentially on the transformer model, on self-attention.
Similarly, in computer vision, self-attention is used to classify images, detect objects, caption images, recognize faces, and capture long-range dependencies between regions in an image, and so on and so forth.
Self-attention gave rise to personalized recommender systems, because it captures user preferences and the relationships between items that the user has, for example, purchased in the past.
So self-attention is becoming a core, critical feature of artificial intelligence, and it is related to other concepts like the transformer that I mentioned.
Self-attention is a key component of the transformer model, an architecture that has achieved great results in NLP and computer vision. It is connected to the attention mechanism: self-attention is a specific type of attention mechanism that allows the model to selectively focus on relevant information.
And BERT, Bidirectional Encoder Representations from Transformers, is a pre-trained transformer model. It utilizes self-attention to capture contextual information in natural language, and so on and so forth.
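For instance, a few lines of Python retrieve BERT's contextual token vectors, assuming the Hugging Face transformers library and PyTorch are installed and the public bert-base-uncased checkpoint is available:

```python
# Assumes: pip install transformers torch
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("I love you", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Every token vector is contextual: each layer of BERT applies
# self-attention over all positions, so the vector for "love" already
# reflects its neighbors "I" and "you".
print(outputs.last_hidden_state.shape)   # (batch, number_of_tokens, 768)
```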
I am mentioning all this because of something I have been saying for decades. You can go back and watch the first video I ever posted on this channel: it is a video that compares narcissists to artificial intelligence and to aliens.
I keep saying that narcissism is comparable to artificial intelligence, and this is not sensationalism. This is not a lure or a bait, a clickbait. I really believe this.
Because narcissists lack modules such as empathy, such as access to positive emotions, such as the ability to tell the internal from the external, which are critical to becoming or to being human.
In this particular sense, narcissists are not fully human. They are much more akin, much more comparable, to artificial intelligence.
So the problems narcissists have with attention, or self-attention to be precise, with introspection, the problems that narcissists have with focusing: these problems render them a highly specific type of artificial intelligence.
Let me try to explain what I'm saying.
We have one type of artificial intelligence, which is an imitation of the human mind. There is introspection, there's attention. These are all concepts borrowed from psychology.
But I think narcissists represent a second variant, another species, of artificial intelligence.
Because narcissists lack self-attention. They do not transform input the way artificial intelligence in large language models does nowadays. They are not like ChatGPT.
Narcissists represent another evolutionary branch of artificial intelligence.
Because narcissists, after all, are capable of processing queries and data using language. They're very successful at that. They're very deceptive.
Narcissists are very deceptive because they're a great imitation of human beings. And they possess skills and capacities which far exceed current day artificial intelligence.
So maybe it would behoove artificial intelligence scientists to study narcissists much more deeply.
I believe that narcissism embodies, reifies, an alternative concept of artificial intelligence, one that does not involve self-attention, does not involve transformation, does not involve the recognition of patterns, dependencies, and linkages, which narcissists are actually incapable of.
So, what does this alternative model involve?
It involves language. It involves introjection and the internalization of external objects. It involves fantasy. It involves a highly unique form of data processing, one which is self-recursive, self-referential, Gödelian if you wish.
And I can go deeper and further, but this is the thesis of this video.
Narcissism, pathological narcissism, is a form of artificial intelligence which, of course, preceded current artificial intelligence, and it represents a model of artificial intelligence that is infinitely more efficacious than the current one.
Rather than study healthy normal people and try to emulate them, which is what artificial intelligence is doing nowadays, I recommend that artificial intelligence scientists study narcissists. I think this would lead to much better outcomes in terms of artificial intelligence.
Of course, there's always a risk that we will end up creating narcissistic applications and narcissistic artificial intelligence programs.
Somehow, we must take the honey without getting stung by the bee: pathological narcissism.