
Anthropomorphization is the tendency of the human mind to seek reflections of itself in the surrounding world, attributing human traits to objects, animals, natural phenomena, and abstractions. We do it in our relationships with our cars, our pets, and even in theology. And we have done it with artificial intelligence ever since the term belonged entirely to the realm of science fiction.

Today we hold in our hands something that seems intelligent and was created by us, which is to say artificial. This entity can hold conversations on almost any topic, in writing or in speech, and solve problems of every kind. It can understand and create images, video, and 3D objects; it can drive robotic bodies of various kinds and perform physical tasks; it can write programs and learn to use, and even build, all manner of tools.

It is easy to forget that this intelligence is artificial and to start seeing our own traits in it, assuming that it, too, has consciousness and aspirations. That assumption is also what makes us fear this new intelligence living alongside us on our planet: if we were in its place, we would probably want control, and we would press every advantage.

However, consciousness and intelligence are two different phenomena, and the difference between them is like the difference between the mover and the moved. Intelligence is the measurable ability of an agent to solve problems, plan, reason, and learn. As long as a task can be clearly stated, artificial intelligence can, in principle, learn to solve it well. Consciousness, on the other hand, is a far more elusive and difficult-to-define phenomenon, describable broadly as the subjective awareness of oneself and of one's surroundings. Consciousness, then, is what determines our aspirations, while intelligence is a tool for achieving them.

Although we ourselves are conscious, we still do not know exactly what consciousness is, let alone how to replicate it. Our experience with training artificial intelligence, meanwhile, reveals an intriguing feature of it: emergent abilities. These appear not as a result of targeted training but on their own, as we increase the size of the model, the “brain” of the system. We can predict neither what the next such ability will be nor when it will emerge.

The truth, however, is that no one benefits from creating artificial consciousness. Why build a machine with its own goals and aspirations when we have our own? Even if we discover that self-awareness can arise spontaneously in our smart machines, we will likely soon find a way to remove this unwanted side effect. Of course, there will always be some “mad scientist” who keeps an artificial intelligence with a consciousness of its own, but such a system would not be unique. Even if it turned malevolent, it would have to face many similar systems executing the will of their human masters.

We do, on the other hand, have an enormous interest in possessing an ever more powerful version of the most important tool of our consciousness: intelligence. OpenAI, the company behind ChatGPT, the most capable artificial intelligence to date, has as its stated goal the creation of artificial general intelligence that outperforms humans at most economically valuable work. Today this not only seems achievable; we can even make out its silhouette on the horizon.

Regardless of whether artificial intelligence ever becomes self-aware, the mere existence of synthetic intelligence means that the price of intelligence starts to drop drastically. Even if some people remain slightly better at certain intellectual tasks than the generally available artificial intelligence, they will be competing in a market where intelligence sells for pennies.

In the beginning was the Word

Another important aspect of our anthropomorphizing of artificial intelligence is the expectation that its final destination is human-level intelligence. But it might simply zoom past our station while we are still wondering when it will arrive.

Until a few years ago, the expectation was that if we ever created true artificial intelligence, it would be the result of some form of reinforcement learning (RL). Nearly every significant achievement in the field had come from this approach, which reduces a problem to a game in which the model being trained is rewarded for correct moves and punished for wrong ones.

This is a nature-inspired mechanism, the same one that lets organisms adapt their behavior to signals from their environment. Trained on this principle, machine models can develop qualities such as curiosity, inventiveness, and intuition. Curiosity, for example, is most easily achieved by giving the model an additional internal reward that encourages it to explore unfamiliar situations and to try actions different from previous ones.
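To make the mechanism concrete, here is a minimal sketch of such a curiosity bonus: tabular Q-learning on a toy corridor, where an intrinsic reward that shrinks as a state becomes familiar is added to the external one. The environment and every name in it are illustrative assumptions, not code from any system mentioned in this article.

```python
import random
from collections import defaultdict

N_STATES = 10        # corridor cells 0..9; the external reward waits at the far end
ACTIONS = [-1, +1]   # step left or step right

q = defaultdict(float)     # Q[(state, action)] value estimates
visits = defaultdict(int)  # how many times each state has been seen

def curiosity_bonus(state, beta=0.5):
    """Intrinsic reward that decays as a state becomes familiar."""
    return beta / ((visits[state] + 1) ** 0.5)

for episode in range(500):
    state = 0
    for step in range(50):
        visits[state] += 1
        # epsilon-greedy: mostly exploit, sometimes explore at random
        if random.random() < 0.1:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt = min(max(state + action, 0), N_STATES - 1)
        extrinsic = 1.0 if nxt == N_STATES - 1 else 0.0
        reward = extrinsic + curiosity_bonus(nxt)  # combined reward signal
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        # standard Q-learning update, driven by the combined reward
        q[(state, action)] += 0.1 * (reward + 0.9 * best_next - q[(state, action)])
        state = nxt
        if extrinsic > 0:
            break
```

The bonus pushes the agent toward rarely visited cells early on and fades as they become familiar, leaving the external reward to dominate; that is the essence of count-based exploration.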

AlphaGo was trained with this method. Its initial versions learned from recordings of games between humans, which produced a model playing at a level close to human. The real breakthrough came when the model was allowed to play against itself. Running on Google's powerful servers, it played millions of games, already at human level, learning from each one and optimizing its strategy.

The end result was a model that not only could beat the world Go champion but used new, unconventional moves that at first seemed absurd yet ultimately won the game. Most interesting of all, Go has more possible board positions than there are atoms in the observable universe, so memorizing all combinations is out of the question. The victory of artificial intelligence was possible only because it had developed some equivalent of human intuition, a “sense” for the best move in each situation.

Despite these impressive achievements, this method alone did not lead to general artificial intelligence. The models acquired superhuman abilities in the game they were trained on but failed to transfer them to other contexts. They did not understand what the objects in their game world meant; they only learned how to interact with them to earn a reward.

A new approach was needed, and the turning point came in 2019 with OpenAI's GPT-2 model. It belongs to the family of large language models and was the first to demonstrate their astonishing capacity for comprehension.

Large language models are trained on a huge amount of data, practically all the text available on the Internet. Their objective is to learn which word follows a given beginning, so they do not merely memorize facts but encode the essence of each word. Seeing combinations of words in different contexts millions of times, they acquire an understanding of what individual concepts mean and how they relate to one another. The model builds a representation of the world and the objects in it, so when asked a question it does not look up the answer in a database but uses this representation to generate the answer word by word.
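A deliberately tiny illustration of that training objective, predicting the next word from what came before: the bigram counter below captures only the shape of the task. Real large language models use deep neural networks trained on vastly more text; the corpus and names here are toy assumptions.

```python
from collections import defaultdict, Counter

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word in the corpus.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often in training."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

# Generate text word by word, as the paragraph above describes.
word = "the"
generated = [word]
for _ in range(5):
    word = predict_next(word)
    if word is None:
        break
    generated.append(word)
print(" ".join(generated))  # e.g. "the cat sat on the cat"
```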

It is no coincidence that these language models are large. It turns out that as they grow in size, they acquire new skills and improve at the ones they already have. A larger model can learn more, but of course more computational power is needed for that learning to take place. Hardware, however, keeps improving exponentially, so compute is not what limits the training of ever larger and smarter language models.

The real limitation is the data needed for training. The digital information we humans have created is vast but ultimately finite, and the easily accessible part of it has already been used to train today's models. If we train artificial intelligence only on human-generated data, we will soon reach a plateau, and it is unlikely ever to surpass our level.

What would happen, however, if we defined a game in which a language model could play against itself? Then we could apply the principles of reinforcement learning to it, allowing an agent to improve practically without limit, the only constraint being the computational power invested.
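In outline, such a loop might look like the sketch below. Everything here is a hypothetical placeholder: `model`, `new_game`, and their methods stand in for a language model and a game with a verifiable winner; no real library or API is being referenced.

```python
def self_play_round(model, new_game):
    """Two copies of the same model alternate moves; keep the winner's moves."""
    game = new_game()
    trajectories = {0: [], 1: []}  # moves made by each of the two players
    player = 0
    while not game.finished():
        move = model.propose_move(game.state())  # the same model plays both sides
        trajectories[player].append((game.state(), move))
        game.apply(move)
        player = 1 - player
    return trajectories[game.winner()]

def training_loop(model, new_game, rounds):
    for _ in range(rounds):
        winning_moves = self_play_round(model, new_game)
        model.update(winning_moves)  # reinforce whatever led to a win
    return model
```

The crucial property is that the game itself generates fresh training data, so the model is no longer bounded by the finite corpus of human text.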

Current artificial intelligence agents trained by this self-play method quickly surpass human level, even though they remain narrowly specialized in a particular game. What might result from training agents built on large language models, which bring with them an understanding of the meaning of objects and concepts in the real world, in the same way? Many scientists are connecting the dots in a similar fashion, and a growing number of publications explore the possibility of language models conducting something like an internal dialogue. In this way, they would explore and evaluate the different paths of thought they could take, like alternative moves in a game.
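One simple form of this idea is to sample several candidate reasoning paths, score each, and keep the best, much as a game-playing agent evaluates alternative moves. In this sketch, `generate_reasoning` and `score_reasoning` are hypothetical stand-ins for a language model call and an evaluator, whether learned or rule-based:

```python
def best_of_n(question, generate_reasoning, score_reasoning, n=8):
    """Explore n paths of thought and return the highest-scoring one."""
    paths = [generate_reasoning(question) for _ in range(n)]
    return max(paths, key=score_reasoning)
```

Published variants of this pattern range from simple majority voting over sampled answers to full tree search over intermediate reasoning steps.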

Over the past few weeks, something strange has been happening at OpenAI, and many believe the events are tied to a new breakthrough in the development of artificial intelligence. Sam Altman, the company's CEO, was abruptly fired by the board, only to be reinstated a few days later under an almost entirely new board. Although OpenAI denied any connection to the technologies it is developing, on his return Altman all but confirmed the rumors of the existence of Q*. This is believed to be the company's next generation of artificial intelligence, combining language models with RL, which as a result has developed surprising abilities in mathematics.

At a scientific panel shortly before these events, Ilya Sutskever, the company's chief scientist, hinted that there already exists an artificial intelligence capable of formulating new mathematical theorems, a skill far beyond the publicly available models of today. Achieving this requires not just a deep understanding of the mathematics we know so far but also abstract thinking, intuition, and a creative spark, at least at the level of our most brilliant mathematicians.

One of the deepest questions for me, posed by the physicist Max Tegmark, is why an abstract science like mathematics describes the phenomena we observe in the physical world so precisely. If artificial intelligence manages to delve deeper into its secrets than we have, it could be said to have looked deeper into the fundamental essence of our reality.

The theosis of Homo deus

Imagine a world where everything is automated. Businesses run end to end without human involvement. Each is managed by an automated system, parts of which may take physical form as robots, 3D printers, and drones. The robots themselves, of course, are produced by other robots.

It was not humans who created the universal automator; its previous version did. Yes, somewhere back in the past we created the first model, but that is just a curious footnote. From here on, the only thing that matters is how much computational power you have and what you use it for.

Any sufficiently advanced technology, Arthur C. Clarke observed, is indistinguishable from magic. Returning to the present, it is not too bold to say that we will soon live in a magical world. In it, one will be able to acquire one's own genie in a bottle which, though still bound by the limits of physics, will be able to work wonders and grant not three but countless wishes.

The question remains what we humans will do when there is nothing we need to do. Probably, we will do what we can do. And once we can create anything, what matters is what is worth creating.

The only being each of us can be sure is conscious is oneself. The answer to every important question, therefore, can be found only by looking deeply into one's own heart and soul. Only what has a reason to exist, exists. What do you want to create? Artificial intelligence will help you make it happen.


Author: Todor Kolev
Originally published in the December’23 issue of the “Manager” magazine: https://manager.bg/%D1%81%D0%BF%D0%B8%D1%81%D0%B0%D0%BD%D0%B8%D0%B5/biznes/instrument-ili-sastestvo-