American billionaire Elon Musk loves to make predictions. In an interview with reporters, the inventor said: “I believe that artificial intelligence will kill us all sooner or later.” According to Musk, the problem of uncontrollable AI will in the future be worse than the problem of North Korea. The billionaire estimates humanity’s chances of surviving a confrontation with machines at only 5-10%, and an uprising of the machines, as in the movie “Terminator”, may occur within the next decade. Is Elon Musk right, and does the fate of being robots’ pets await us? Let’s figure it out.
What is Artificial Intelligence?
The term intelligence is derived from the Latin word intellectus, meaning mind, reason, or the mental abilities of a person. Artificial intelligence (AI) can be defined as the ability of machines and robots to take over the functions of human intelligence, for example, to make decisions based on their own experience and the external situation.
The history of the creation of artificial intelligence
If you think that the concept of artificial intelligence is a product of the modern era, you are very much mistaken. The idea of creating mechanical creatures resembling humans originated in antiquity. The ancient Egyptians created a mechanical statue of their god Amon, and Homer’s “Iliad” describes how Hephaestus created mechanical servants. Aristotle formulated the basic laws of formal logic, and in this sense he can be considered a “forefather” of artificial intelligence. Ramon Llull, a mathematician and philosopher from Spain, tried to create a machine capable of solving intellectual problems. But an entirely new branch of science appeared only in the 1940s, after the creation of electronic computers and the emergence of cybernetics, the science founded by Norbert Wiener.
The first attempt to define the concept of artificial intelligence was made by the English scientist Alan Turing. In his article “Computing Machinery and Intelligence”, he proposed what is now known as the “Turing test”: a computer’s intelligence is assessed by its ability to hold a reasonable dialogue with a person.
Several stages can be distinguished in scientists’ work on creating artificial intelligence. The first direction is analytical, or functional: machines are given specific tasks of a creative nature, for example, painting a picture or translating a literary text from one language to another.
The second area of work is synthetic, or model-based: scientists try to simulate the creative activity of the brain in a general sense. The essence of this research is reproducing the metaprocedures of thinking, that is, not what you learn, but how you learn; not what you invent, but how you invent it. Two models developed in this area. One of them is the labyrinth model, whose essence is the enumeration of possible options. Take a chess game as an illustration: the program makes its next move and evaluates its success or failure “after the fact” by capturing the opponent’s pieces, gaining or losing positional advantage, and so on.
The logic of such a program is that the success gained at each move will add up to victory in the game as a whole. However, any chess player will tell you that you can sacrifice one or several pieces and accept a visible worsening of your position in order to set a trap for the opponent’s king. In that case several moves in a row are formally losing, yet the game as a whole is won. The two approaches described are, in essence, heuristic and dynamic programming. Chess players using a dynamic approach consistently beat “heuristic” programs. The situation changed only when machines “learned” the dynamic approach as well.
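To make the contrast concrete, here is a minimal Python sketch, not a real chess engine: the toy game tree, the move names and the scores are all invented for illustration. It compares a “greedy” program that rates only the immediate move with a lookahead program that searches the line to its end, in the spirit of minimax.

# A toy illustration (not a real chess engine) of the difference between
# a "greedy" evaluation of the very next move and a lookahead (minimax)
# evaluation to the end of the line. The tree and scores are invented.

# Each node is either a dict of {move_name: child_node} or a final score
# from the program's point of view (higher is better).
GAME_TREE = {
    "grab a pawn": {                     # looks good immediately...
        "opponent springs the trap": -10,  # ...but loses in the long run
    },
    "sacrifice a piece": {               # looks bad immediately...
        "opponent takes the bait": +8,     # ...but wins the game
    },
}

# Rough "immediate" scores for the first move alone (material count, etc.).
IMMEDIATE_SCORE = {"grab a pawn": +1, "sacrifice a piece": -3}


def greedy_choice(tree):
    """Pick the move that looks best right now, ignoring what follows."""
    return max(tree, key=lambda move: IMMEDIATE_SCORE[move])


def minimax_value(node, maximizing):
    """Evaluate a node by searching to the end of the toy tree."""
    if not isinstance(node, dict):       # leaf: final outcome
        return node
    values = [minimax_value(child, not maximizing) for child in node.values()]
    return max(values) if maximizing else min(values)


def lookahead_choice(tree):
    """Pick the move whose worst-case continuation is best."""
    return max(tree, key=lambda move: minimax_value(tree[move], maximizing=False))


print("Greedy program plays:   ", greedy_choice(GAME_TREE))      # grab a pawn
print("Lookahead program plays:", lookahead_choice(GAME_TREE))   # sacrifice a piece

The greedy program happily grabs the pawn and walks into the trap, while the lookahead program accepts the short-term loss because the continuation is better, which is the whole point of the dynamic approach described above.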
A further development of artificial intelligence was associative models. Psychology explains association as a connection between representations: one representation evokes another. A striking illustration of the principle is Pavlov’s dogs: a lit bulb caused the animals to salivate. In an associative model, each new problem is solved on the basis of already solved old problems that resemble it. Modern programs for classification and image recognition work on the basis of associative models.
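As a rough illustration of that idea, here is a minimal nearest-neighbour sketch in Python. The feature vectors and labels are invented, and real image-recognition systems are far more elaborate, but the principle is the same: classify a new case by its similarity to cases that have already been solved.

# A minimal nearest-neighbour sketch of the associative idea: a new case is
# classified by looking up the most similar already-solved cases. The feature
# vectors and labels below are invented for illustration.
from collections import Counter
import math

# "Already solved problems": (feature vector, label) pairs.
MEMORY = [
    ((0.9, 0.1), "cat"),
    ((0.8, 0.2), "cat"),
    ((0.1, 0.9), "dog"),
    ((0.2, 0.8), "dog"),
]


def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


def classify(sample, k=3):
    """Label a new sample by the majority vote of its k nearest old cases."""
    nearest = sorted(MEMORY, key=lambda item: distance(sample, item[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]


print(classify((0.85, 0.15)))  # -> "cat": the new case resembles old "cat" cases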
In the future, robots will learn to accumulate, process and use information themselves. Obviously, for this the machine must learn to ask itself questions such as “What do I want to know?” and “What do I need to achieve my goal?”
The modern use of artificial intelligence
So far, however, robots are not capable of self-awareness. Despite this, they are widely used in many areas of human life: medicine, education, business, science and everyday life. AI can control automated production processes, and it is able to accumulate, store and process gigantic amounts of information.
In medicine, the IBM Watson diagnostic computer is widely used. It holds a database of millions of case histories and medical records, which allows it to make fairly accurate diagnoses. However, a probability of error, although small, still exists, so for now this supercomputer is only an assistant to doctors and does not replace them completely. Apparently, the situation will not change in the foreseeable future, at least in areas vital to human society.
How does artificial intelligence affect humanity?
Based on the foregoing, you may get the impression that AI is exclusively beneficial to humans. However, it is not. The introduction of machines in production and in the banking sector has already cost millions of people around the world their jobs. Some governments are already paying their citizens an unconditional basic income so they can maintain a normal existence.
In Japan, there are robotic musicians, teachers, hotel employees, and so on. In the Land of the Rising Sun, AI also looks after lonely elderly people. And there are many such examples.
Is a Robot Invasion Possible?
Does an invasion of machines threaten humanity, as in the movie “Terminator”? At some stage in the development of AI, this could become a reality. One incident suggests as much: the humanoid robot Sophia was asked what she would do with humanity, and she answered unequivocally: “I would kill.”
Engineers conducted an experiment. Several machines simulating human behavior were left to their own devices. In such autonomous conditions, they invented a new formalized language and began to communicate with each other.
Having become self-aware, AI may consider humans superfluous. Perhaps we will simply be “cleaned up”, with a few thousand people left in “reserves”. But another option is also possible: people will have neural networks implanted in them, forming a hybrid of robot and human. In that case, humans will be able to compete with robots and will not be destroyed. Be that as it may, this is a matter of the very distant future.