
Turing machine and recursive functions: A textbook for universities

FEDERAL AGENCY FOR EDUCATION
STATE EDUCATIONAL INSTITUTION OF HIGHER PROFESSIONAL EDUCATION "VORONEZH STATE UNIVERSITY"

T.K. Katsaran, L.N. Stroeva

TURING MACHINE AND RECURSIVE FUNCTIONS
A textbook for universities

Publishing and Printing Center of Voronezh State University, 2008

Approved by the scientific and methodological council of the Faculty of PMM on May 25, 2008, protocol No. 9. Reviewer: Doctor of Technical Sciences, Prof. T.M. Ledeneva, Department of Mathematical Methods of Operations Research. The textbook was prepared at the Department of Nonlinear Oscillations, Faculty of Mechanics and Mathematics, Voronezh State University. Recommended for 1st-year students of the Faculty of Applied Mathematics, Informatics and Mechanics (PMM) of VSU and of the Starooskolsky and Liskinsky branches of VSU. For specialty 010500 - Applied Mathematics and Computer Science.

INTRODUCTION

The word "algorithm" comes from algorithmi, the Latin spelling of the name of the mathematician and astronomer Muhammad ben Musa al-Khwarizmi, who lived in the 8th-9th centuries (783-850). This greatest mathematician from Khorezm (a region in modern Uzbekistan) was known under this name in medieval Europe. In his book "On Indian Counting" he formulated the rules for writing natural numbers with Arabic numerals and the rules for operating on them. Later the concept of an algorithm came to be used in a broader sense, and not only in mathematics.

The concept of an algorithm is important for both mathematicians and practitioners. Thus, we can say that an algorithm is a precise prescription for performing a certain system of operations in a certain order in order to solve all problems of a given type; it defines a sequence of actions that guarantees obtaining the required result from the initial data. Note that this is not a definition of the concept "algorithm" but only a description of it, its intuitive meaning. An algorithm may be designed to be executed either by a person or by an automatic device.

This idea of an algorithm is not rigorous from a mathematical point of view, since it relies on such notions as "precise prescription" and "initial data", which, generally speaking, are not strictly defined. A feature of any algorithm is its ability to solve a certain class of problems: for example, an algorithm for solving systems of linear equations, for finding the shortest path in a graph, and so on.

Creating an algorithm, even the simplest one, is a creative process. It is available exclusively to living beings, and for a long time it was believed that only to humans. Carrying out an existing algorithm is another matter: it can be entrusted to a subject or object that is not obliged to grasp the essence of the matter and perhaps is not even able to understand it. Such a subject or object is usually called a formal executor. An example of a formal executor is an automatic washing machine, which strictly performs the actions prescribed to it even if you forgot to put detergent into it. A person can also act as a formal executor, but first of all formal executors are various automatic devices, including the computer.

Each algorithm is created with a very specific executor in mind. The actions that the executor can perform are called its permissible actions, and the set of permissible actions forms the executor's system of commands. An algorithm must contain only actions that are permissible for the given executor. For this reason, several general properties are usually formulated that distinguish algorithms from other kinds of instructions.
An algorithm must have the following properties.

Discreteness (discontinuity, separateness): the algorithm must represent the process of solving a problem as a sequential execution of simple (or previously defined) steps. Each action provided for by the algorithm is executed only after the previous one has finished.

Definiteness: each rule of the algorithm must be clear and unambiguous, leaving no room for arbitrariness. Thanks to this property, execution of the algorithm is mechanical in nature and requires no additional instructions or information about the problem being solved.

Effectiveness (finiteness): the algorithm must lead to a solution of the problem in a finite number of steps.

Generality (mass character): the algorithm is developed in a general form, so that it is applicable to a whole class of problems that differ only in their initial data. The initial data may be chosen from a certain domain, which is called the domain of applicability of the algorithm.

The theory of algorithms is the branch of mathematics that studies the general properties of algorithms. There are qualitative and metric theories of algorithms. The main problem of the qualitative theory is the construction of an algorithm with specified properties; such a problem is called an algorithmic problem. The metric theory of algorithms examines algorithms in terms of their complexity; this branch is also known as algorithmic complexity theory.

For some problems the search for a suitable algorithm went on for a long time. Examples of such problems are: (a) give a method by which, for any formula of the predicate calculus, one can decide in a finite number of operations whether it is identically true; (b) decide whether a given Diophantine equation (an algebraic equation with integer coefficients) is solvable in integers. Since no algorithms for solving these problems could be found, the conjecture arose that such algorithms do not exist at all, and this was eventually proved: the first problem was settled by A. Church, the second by Yu.V. Matiyasevich and G.V. Chudnovsky. It is impossible in principle to prove such a statement using only the intuitive concept of an algorithm. Therefore attempts were made to give a precise mathematical definition of the concept of an algorithm.

In the mid-1930s S.C. Kleene, A.A. Markov, E. Post, A. Turing, A. Church and others proposed various mathematical definitions of the concept of an algorithm. It was later proved that these different formal definitions are in a certain sense equivalent: they compute the same set of functions. This suggests that the main features of the intuitive concept of an algorithm are apparently captured correctly by these definitions. In what follows we consider the mathematical refinement of the notion of an algorithm proposed by A. Turing, which is called the Turing machine.

1. THE TURING MACHINE

§ 1. Mathematical model of the Turing machine

The idea of the Turing machine, proposed by the English mathematician A. Turing in the 1930s, is connected with his attempt to give a precise mathematical definition of the concept of an algorithm. A Turing machine (TM) is a mathematical model of an idealized digital computer. A Turing machine is a mathematical object of the same kind as a function, a derivative, an integral or a group.
Like other mathematical concepts, the concept of a Turing machine reflects objective reality and models certain real processes. To describe the algorithm of a TM it is convenient to imagine a device consisting of four parts: a tape, a read head, a control device and an internal memory.

1. The tape is assumed to be potentially infinite and is divided into equal cells. If necessary, an empty cell is attached to the first or last cell that holds a symbol. The machine operates in time, which is considered discrete, and its moments are numbered 1, 2, 3, .... At any moment the tape contains a finite number of cells. At a given moment a cell may hold only one symbol (letter) from the external alphabet A = {Λ, a_1, a_2, ..., a_{n-1}}, n ≥ 2. An empty cell is designated by the symbol Λ; the symbol Λ itself is called blank, while the remaining symbols are called non-blank. In the alphabet A the information supplied to the TM is encoded as a word (a finite ordered sequence of symbols). The machine "processes" the information presented as a word into a new word.

2. The read head (a certain reading element) moves along the tape so that at each moment of time it scans exactly one cell of the tape. The head can read the contents of the cell and write a new symbol from the alphabet A into it. In one cycle of operation it can move only one cell to the right (R), one cell to the left (L) or stay in place (N). Denote the set of head movements (shifts) by D = {R, L, N}. If at the moment t the head is over an outermost cell and moves toward a missing cell, then a new empty cell is added, over which the head will be at the moment t + 1.

3. The internal memory of the machine is a certain finite set of internal states Q = {q_0, q_1, q_2, ..., q_m}, m ≥ 1. We will assume that |Q| ≥ 2. Two states of the machine have a special meaning: q_1 is the initial internal state (there can be several initial internal states), and q_0 is the final state, or stop state (there is always exactly one final state). At each moment of time the TM is characterized by the position of the head and by its internal state; the internal state is written under the cell over which the head is located. For example:

   ↓
  a_2  a_1  Λ  a_2  a_3
  q_1

4. At each moment t the control device, depending on the symbol being read on the tape at that moment and on the internal state of the machine, performs the following actions: 1) changes the symbol a_i read at the moment t to a new symbol a_j (in particular, it may leave it unchanged, i.e. a_i = a_j); 2) moves the head in one of the directions N, L, R; 3) changes the internal state q_i of the machine at the moment t to a new state q_j, in which the machine will be at the moment t + 1 (it may happen that q_i = q_j). Such an action of the control device is called a command, and it can be written in the form

  q_i a_i → a_j D q_j,   (1)

where q_i is the internal state of the machine at the given moment; a_i is the symbol being read at that moment; a_j is the symbol into which a_i is changed (possibly a_i = a_j); the symbol D is N, L or R and indicates the direction in which the head moves; q_j is the internal state of the machine at the next moment (possibly q_i = q_j). The expressions q_i a_i and a_j D q_j are called the left-hand and right-hand sides of this command, respectively. The number of commands with pairwise distinct left-hand sides is finite, since the sets Q \ {q_0} and A are finite. There are no commands with identical left-hand sides, i.e.
if the program of a machine T contains the commands q_i a_i → a_j D q_j and q_t a_t → a_k D q_k, then q_i ≠ q_t or a_i ≠ a_t (here D ∈ {R, L, N}).

The set of all commands is called the program of the Turing machine. The maximum number of commands in a program is n · m, where n = |A| and m = |Q \ {q_0}|. The final state q_0 may appear only in the right-hand side of a command; the initial state q_1 may appear both in the left-hand and in the right-hand side of a command. The execution of one command is called a step. A computation (the operation) of a Turing machine is a sequence of steps performed one after another without omissions, starting from the first.

Thus, a TM is specified if four finite sets are known: the external alphabet A, the internal alphabet Q, the set D of head movements, and the program of the machine, which is a finite set of commands.

§ 2. Operation of a Turing machine

The operation of the machine is completely determined by specifying, at the first (initial) moment: 1) the word on the tape, i.e. the sequence of symbols written in the cells of the tape (the word is obtained by reading these symbols cell by cell from left to right); 2) the position of the head; 3) the internal state of the machine. The combination of these three conditions (at a given moment) is called the configuration (at that moment).

Typically, at the initial moment the internal state of the machine is q_1, and the head is over either the leftmost or the rightmost cell of the tape. A word written on the tape, together with the initial state q_1 and the head positioned over the first cell of the word, is called the initial configuration. In other words, at the initial moment the configuration can be represented as follows: on a tape consisting of a certain number of cells, one of the symbols of the external alphabet A is written in each cell, the head is over the leftmost or the rightmost cell of the tape, and the internal state of the machine is q_1. If in the course of its operation the machine reaches the stop state q_0, the word then written on the tape, together with the head position, is called the final configuration; otherwise we say that the Turing machine is not applicable to the word of the initial configuration.

For example, if at the initial moment the word a_1 Λ a_2 a_1 a_1 is written on the tape, then the initial configuration looks like this (the internal state of the machine is written under the cell over which the head is located):

   ↓
  a_1  Λ  a_2  a_1  a_1
  q_1
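
To make the formal definitions above concrete, here is a minimal sketch of a Turing machine simulator in Python (an illustration added to this text, not part of the original textbook). The program format follows the command notation q_i a_i → a_j D q_j; a space plays the role of the blank symbol Λ, and the unary-increment program at the end is an invented example.

# A minimal Turing machine simulator.  A program is a dictionary mapping
# (state, symbol) -> (new_symbol, move, new_state), i.e. one entry per
# command q_i a_i -> a_j D q_j with D in {"R", "L", "N"}.
BLANK = " "

def run_turing_machine(program, word, start_state="q1", stop_state="q0", max_steps=10_000):
    tape = dict(enumerate(word))        # potentially infinite tape as a sparse dict
    head, state = 0, start_state        # head over the leftmost cell of the word
    for _ in range(max_steps):
        if state == stop_state:         # final configuration reached
            break
        symbol = tape.get(head, BLANK)
        if (state, symbol) not in program:
            raise ValueError("machine is not applicable: no command for " + repr((state, symbol)))
        new_symbol, move, state = program[(state, symbol)]
        tape[head] = new_symbol
        head += {"R": 1, "L": -1, "N": 0}[move]
    else:
        raise RuntimeError("step limit exceeded; the machine may not halt on this word")
    cells = range(min(tape), max(tape) + 1)
    return "".join(tape.get(i, BLANK) for i in cells).strip(BLANK)

# Invented example: append one more "1" to a number written in unary notation.
increment = {
    ("q1", "1"):   ("1", "R", "q1"),    # move right over the existing 1s
    ("q1", BLANK): ("1", "N", "q0"),    # write 1 in the first empty cell and stop
}
print(run_turing_machine(increment, "111"))   # prints "1111"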

There is probably no one today who has not heard at least once of the Turing test, yet most people are far from understanding what this testing procedure actually is. So let us dwell on it in a little more detail.

What is the Turing Test: Basic Concept

Back in the late 1940s many scientific minds were occupied with the problems of the first computers. It was then that a member of the Ratio Club, an informal group of researchers in cybernetics, asked a perfectly logical question: is it possible to create a machine that would think like a person, or at least imitate human behavior?

Is it necessary to say who invented the Turing test? Apparently not. The initial basis of the whole concept, which is still relevant today, was the following principle: after some time spent communicating on completely arbitrary topics with an interlocutor it cannot see, will a person be able to determine who is on the other side, a real human or a machine? In other words, the question is not only whether a machine can imitate the behavior of a real person, but also whether it can think for itself. This issue still remains controversial.

History of creation

In general, if we consider the Turing test as a kind of empirical system for determining the "human" capabilities of a computer, it is worth saying that an indirect basis for its creation was provided by the curious statements of the philosopher Alfred Ayer, formulated back in 1936.

Ayer compared, so to speak, the life experiences of different people and on this basis expressed the opinion that a soulless machine would not be able to pass any such test, since it cannot think; at best it would produce pure imitation.

In essence, that is true. Imitation alone is not enough to create a thinking machine. Many scientists cite the example of the Wright brothers, who built the first airplane by abandoning the tendency to imitate birds, a tendency that, incidentally, was characteristic of such a genius as Leonardo da Vinci.

History is silent on whether Turing himself (1912-1954) knew of these postulates, but in 1950 he compiled a whole system of questions that could determine the degree of "humanization" of a machine. This development remains one of the fundamental ones, although it is now used mainly for testing, for example, computer bots. In reality the principle turned out to be such that only a few programs have managed to pass the Turing test, and even "pass" is a stretch, since the test result has never reached 100 percent; at best it has been a little over 50.

At the very beginning of his research the scientist relied on his own earlier invention, the Turing machine. Since all conversations were to be entered exclusively in printed form, he set several basic directives for writing responses, such as moving the tape to the left or right, printing a specific character, and so on.

Programs ELIZA and PARRY

Over time the programs became more complex, and two of them showed results that were stunning for their time when the Turing test was applied: ELIZA and PARRY.

As for ELIZA, created in 1966: based on the question, the machine had to pick out a keyword and build a reply around it. This is what made it possible to deceive real people. If there was no such word, the machine returned a generalized answer or repeated one of its previous ones. However, whether ELIZA really passed the test is still in doubt, since the real people who communicated with the program had been psychologically primed in advance to believe they were talking to a person and not to a machine.
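
A minimal sketch of this keyword-and-template idea (the keywords and canned replies below are invented for illustration and are not taken from the original ELIZA script):

import random

# Keyword -> list of template replies; a generalized fallback is used when
# no keyword matches, just as described above.
RULES = {
    "mother": ["Tell me more about your mother.", "How do you feel about your family?"],
    "dream":  ["What does that dream suggest to you?"],
    "always": ["Can you think of a specific example?"],
}
FALLBACKS = ["Please go on.", "Why do you say that?", "I see."]

def reply(utterance):
    words = utterance.lower().split()
    for keyword, templates in RULES.items():
        if keyword in words:                 # first matching keyword wins
            return random.choice(templates)
    return random.choice(FALLBACKS)          # generalized answer

print(reply("My mother never listens to me"))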

The PARRY program is somewhat similar to ELIZA, but it was created to simulate the speech of a paranoid patient. Most interestingly, real clinic patients were used in testing it. After transcripts of the conversations, conducted via teletype, had been recorded, they were assessed by professional psychiatrists, and only in 48 percent of cases were the psychiatrists able to tell correctly which participant was the person and which the machine.

In addition, almost all programs of that time were given a certain amount of extra time to respond, since in those days a person thought much faster than a machine. Now it is the other way around.

Supercomputers Deep Blue and Watson

IBM's developments looked quite interesting: they not only "thought", they also possessed incredible computing power.

Many people probably remember how in 1997 the supercomputer Deep Blue won a six-game chess match against the then reigning world champion Garry Kasparov. Strictly speaking, the Turing test is applicable to this machine only very conditionally: it had been loaded in advance with a vast number of game patterns and possible continuations, and it could evaluate about 200 million positions of the pieces on the board per second!
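
The core idea behind such chess machines, searching a game tree and backing up a numeric evaluation of positions, can be sketched as follows. This is a toy illustration only; Deep Blue's actual search and evaluation function were incomparably more elaborate, and the tiny "game" below is invented purely so the code runs.

# A toy sketch of game-tree search: explore continuations to a fixed depth
# and back up a numeric evaluation of the resulting positions.  The "game"
# (a counter each player may raise by 1 or 2, scored modulo 4) is invented.
def legal_moves(position):
    return [position + 1, position + 2]

def evaluate(position):
    return position % 4                       # stand-in for a chess evaluation function

def minimax(position, depth, maximizing):
    if depth == 0:
        return evaluate(position)
    scores = [minimax(p, depth - 1, not maximizing) for p in legal_moves(position)]
    return max(scores) if maximizing else min(scores)

# The first player picks the move whose subtree backs up the best score.
best = max(legal_moves(0), key=lambda p: minimax(p, depth=3, maximizing=False))
print("best first move:", best)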

The Watson computer, consisting of 360 processors and 90 servers, won the American television quiz show Jeopardy!, outperforming the other two participants in every respect and earning a $1 million prize for it. Again, the question is moot, because the machine had been loaded with an enormous amount of encyclopedic data and simply analyzed each question for keywords, synonyms or general matches, and then gave the appropriate answer.

Eugene Goostman emulator

One of the most interesting events in this area was the Eugene Goostman program, which imitated the personality of a 13-year-old boy from Odessa. It was created by a team including the Russian engineer Vladimir Veselov, who now lives in the United States.

On June 7, 2014 the Eugene program demonstrated its full capabilities. Interestingly, 5 bots and 30 real people took part in the testing, and in 33% of the conversations the jury took the program for a human. The judges' task was complicated by the fact that a child has less knowledge and a lower level of intelligence than an adult.

The Turing test questions were of the most general kind, but for Eugene there were also specific questions about events in Odessa that no resident of the city could have failed to notice. Yet the answers still led the jury to believe that the interlocutor was a child. For example, the program answered the question about its place of residence immediately. When asked whether it had been in the city on a certain date, the program said it did not want to talk about it, and when the interlocutor tried to press the point about what exactly had happened that day, Eugene brushed the question off, replying that the questioner should know that himself, so why ask? On the whole, the child emulator turned out to be extremely convincing.

However, this is still an emulator, not a thinking creature. So the machine uprising will not happen for a very long time.

But on the other hand

Finally, it remains to add that so far there are no signs that thinking machines will be created in the near future. Nevertheless, whereas recognition used to be a problem for machines, now almost every one of us has to prove that we are not machines: just think of entering a captcha on the Internet to gain access to some action. So far it is believed that no electronic device has yet been created that can recognize distorted text or sets of characters as well as a person can. But who knows, everything is possible...

ARTIFICIAL INTELLIGENCE

The Turing test is known to everyone interested in artificial intelligence. It was formulated by Alan Turing in 1950 in the article "Can a Machine Think?". The test is as follows: the experimenter communicates with an interlocutor he cannot see (for example, over a computer network), typing phrases on the keyboard and receiving text responses on the monitor. He then tries to determine whom he was talking to. If the experimenter mistakes a computer program for a person, the program has passed the Turing test and can be considered intelligent.

A person will still receive a gold medal

The most famous program to show a real possibility of passing this test back in the 1960s was the legendary ELIZA, created in 1966 by Joseph Weizenbaum. ELIZA found keywords in a phrase (for example, "mother") and issued a template reply that reacted mechanically to those words ("Tell me more about your mother"). Later Terry Winograd, building on ELIZA, created a more advanced "Psychotherapist". The appearance of ELIZA went down in the history of artificial intelligence along with such events as the release of the first industrial robot in 1962 and the start of Pentagon funding for developments in image and speech recognition in 1975-1976.

In 1991 the first private but highly respected Turing test tournament was held, to which the authors of suitable computer programs (called bots) were invited. The tournament was founded by Hugh Loebner (www.loebner.net/Prizef/loebner-prize.html); for winning it, a prize of $100,000 and a gold medal were offered.

So far, no one has won the main prize. However, in 1994 Loebner made a major change to the rules, requiring the program to communicate with the judge not only in text form but also by generating the image of a virtual person; it must also be able to synthesize and recognize speech. Many considered these conditions extremely difficult, and so far no contenders for the main prize have appeared under the new rules. For winning in the old, "text" mode, 25 thousand dollars and a silver medal are now promised. It should be noted that the probability of a judge's subjective error when communicating with a program under the old rules is quite high. In addition, bots are improving quite quickly, and a winner of the Turing test will probably appear within the next few years.

Judging at the competition is very strict. The experts prepare for the tournament in advance and select very tricky questions in order to work out whom they are communicating with; their conversation with the programs resembles an investigator's interrogation. The judges like, for example, to repeat certain questions after a while, since weak bots do not keep track of the history of the dialogue and can be caught giving identical answers.
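
A small sketch of that dialogue-history point (all replies are invented): a bot that remembers what it has already been asked can avoid the give-away of repeating the same canned answer word for word.

class HistoryAwareBot:
    """Remembers past questions so a repeated question gets a varied reply."""

    def __init__(self):
        self.seen = {}                                  # normalized question -> times asked

    def answer(self, question):
        key = " ".join(question.lower().split())        # normalize case and whitespace
        count = self.seen.get(key, 0)
        self.seen[key] = count + 1
        if count == 0:
            return "The weather has been lovely this week."
        if count == 1:
            return "You already asked me that - still lovely, as I said."
        return "Asking a third time? Are you testing my memory?"

bot = HistoryAwareBot()
print(bot.answer("How is the weather?"))
print(bot.answer("How is the weather?"))                # repeated question, different reply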

But the developers do not waste time either, setting up their virtual charges so that, in response to provocative questions like "Are you a machine or a person?", the program answers very convincingly: "A person, of course!"

On November 8, 1991, at the Boston Computer Museum, eight programs went head to head with live experts, and the result was stunning: five out of ten judges decided that a person was hiding behind the pseudonym Whimsical Conversation. In fact it was the Therapist program, running on an IBM PC 386 and created in 1986 on the basis of the legendary ELIZA by Joseph Weintraub, a specialist in psychology. However, this result owed much to the skepticism and prejudice of the experts, who believed that programs communicate at a far more primitive level than they actually did. Since then the judges have grown accustomed to the computer style of communication, and the bots have not managed to repeat such a success.

Therapist originally consisted of seven modules written in C, Turbo-Prolog and QuickBasic. Later it was supplemented with a speech synthesis system and a knowledge base. The program uses a dictionary of 70,000 words and remembers everything that is told to it, using this information in subsequent dialogues and adapting to the interests of a particular interlocutor. Therapist won the Loebner Prize back in 1992, 1993 and 1995. It was then enhanced with specialized modules capable of supporting conversations on “man versus woman” and “liberal versus conservative” topics.

Here is an example from one of its winning dialogues:

Judge: Do you think that American soldiers should go to Bosnia?

Therapist: I think that what Clinton has done in sending 20,000 peacekeepers to bosnia will ensure a lasting peace.

Judge: Do you think most Americans feel the way you do?

Therapist: What did you mean when you said do you think that american soldiers should go to bosnia? Did anyone ever tell you that you think too much? Oh, I think most Americans feel the way I do?

In 1994 the tournament was won by Thomas Whalen, a doctor of experimental psychology specializing in the study of natural languages. His TIPS program was designed to store and analyze phrases (there were several thousand of them in its database) and to search for lexical patterns.

In 1996 the bronze medal went to Jason Hutchens and his SEPO bot, which communicated something like this:

Judge: I think. I do research. I write. I love. I Live.

SEPO: I wonder whether you think sometimes! I am feeling like the judge here!

In 1997 David Levy became the champion with the Converse bot. In 1998 and 1999 victory went to Robby Garner, who believed that passing the Turing test does not require real intelligence. It is interesting that almost all the authors, while mentioning scientific approaches to learning (neural networks, genetic algorithms), used fairly simple heuristic methods in their actual solutions.

Garner entered the competition with his Computational Behaviorist program, based on a stimulus-response principle similar to that of TIPS and ELIZA, except that his bot looked for not one but up to three keywords in a phrase. At the same time, realizing that the program needed something more than monotonous answers to questions, he built into it a number of additional heuristic algorithms that created a fuller illusion of communicating with a person.
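
A sketch of that multi-keyword idea (the trigger sets and replies are invented): instead of firing on the first keyword found, each canned response is scored by how many of its up-to-three trigger words occur in the phrase, and the best match wins.

RULES = [
    ({"school", "teacher", "exam"},  "Exams again? Teachers never go easy on us."),
    ({"football", "match", "score"}, "I missed the match - what was the score?"),
    ({"weather", "rain"},            "The rain here never seems to stop."),
]
FALLBACK = "Hmm, tell me more about that."

def best_reply(phrase):
    words = set(phrase.lower().replace("?", " ").replace(",", " ").split())
    score, reply = max((len(triggers & words), text) for triggers, text in RULES)
    return reply if score > 0 else FALLBACK              # fallback when nothing matches

print(best_reply("Did your teacher give you an exam at school today?"))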

During the development of Behaviorist, technical difficulties arose with implementing knowledge search in what were, for that time, large databases; the resulting delays in responding immediately betrayed the computer interlocutor. Garner therefore combined two publicly available bots, Albert, written in C++, and one of the Pascal versions of ELIZA, and implemented them in the Visual DataFlex development environment, which allowed standard database query algorithms to be used.

In 2000 and 2001 the minor prize went to Richard Wallace's ALICE program. Today the ALICE AI Foundation (http://alice.sunlitsurf.com/) has been organized on the basis of ALICE and is working on standardizing the creation of bots. In particular, ALICE has been supplemented with tools for maintaining its knowledge base in the AIML (Artificial Intelligence Markup Language) format, a subset of XML intended to formalize the representation of key phrases and answers. Now anyone unfamiliar with programming can take a basic version of ALICE and fill it with their own knowledge base in any language using an ordinary editor.
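
A minimal illustration of what an AIML knowledge base looks like and how a bot might use it. The categories below are invented; real AIML and ALICE support much richer wildcards and recursion (for example <srai>), which this toy loader ignores.

import xml.etree.ElementTree as ET

# Key phrases and answers expressed as AIML <category> elements,
# each with a <pattern> and a <template>.
AIML_SOURCE = """
<aiml version="1.0">
  <category>
    <pattern>WHAT IS YOUR NAME</pattern>
    <template>My name is Alice.</template>
  </category>
  <category>
    <pattern>*</pattern>
    <template>Interesting. Tell me more.</template>
  </category>
</aiml>
"""

def load_categories(source):
    root = ET.fromstring(source)
    return {c.findtext("pattern"): c.findtext("template") for c in root.iter("category")}

def respond(categories, utterance):
    key = utterance.upper().strip("?!. ")                # exact-match lookup only
    return categories.get(key, categories.get("*"))      # "*" acts as the catch-all

categories = load_categories(AIML_SOURCE)
print(respond(categories, "What is your name?"))         # -> My name is Alice.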

Unfortunately, this summer, as Wired reported, Mr. Wallace began to have mental problems: he threatened one of his fellow professors with physical harm, claiming that corruption was rampant in a number of American universities and that the teaching staff was plotting a large-scale conspiracy against him. The scientist is currently under investigation.

One of the most likely candidates for victory this year (the tournament will be held in October) is Joshua Smith, the author of the Anna program (an AIML extension of ALICE, freely available at http://annabot.sourceforge.net/). Mr. Smith notes that, unlike his colleagues, he set out from the very beginning to create a bot that pretends to be a human during communication: Anna really considers herself a living being, has a set of individual traits and is quite lively in conversation.

Are there similar Russian developments, bots capable of communicating in Russian? The PC Week/RE editors are ready to hold a Russian competition for passing the Turing test. Write to the author at: [email protected].

TURING

TURING (Turing) Alan (1912-54), English mathematician and logician who formulated theories that later became the basis of computer technology. In 1936-37 he conceived the Turing machine, a hypothetical machine capable of transforming sets of input instructions; it was the forerunner of modern computers. Turing also used the idea of a computer to give an alternative and simpler proof of Gödel's incompleteness theorem. He played a major role in breaking Enigma, a complex cipher system used by Germany during World War II. In 1948 he participated in the creation of one of the world's first computers. In 1950 he proposed the Turing test, intended as a test of a computer's ability to "think"; in essence, it states that a machine may be regarded as thinking if a person cannot distinguish a dialogue with it from a dialogue with another person. This work paved the way for the creation of ARTIFICIAL INTELLIGENCE. Turing was also involved in theoretical biology: in his work "The Chemical Basis of Morphogenesis" (1952) he proposed a model describing the origin of various structural patterns in living organisms. Since then such models have often been used to describe and explain many systems observed in nature. Turing committed suicide after being prosecuted for homosexuality.


Scientific and technical encyclopedic dictionary.


Artificial intelligence (AI) is the science and technology of creating intelligent machines, especially intelligent computer programs. AI is related to the similar task of using computers to understand human intelligence, but it is not necessarily limited to biologically plausible methods.

What is artificial intelligence

Intelligence (from the Latin intellectus: sensation, perception, understanding, concept, reason), or mind, is a quality of the psyche consisting of the ability to adapt to new situations, to learn and remember on the basis of experience, to understand and apply abstract concepts, and to use one's knowledge to manage one's environment. Intelligence is the general capacity for cognition and for coping with difficulties, which unites all human cognitive abilities: sensation, perception, memory, representation, thinking and imagination.

In the early 1980s the computer scientists Barr and Feigenbaum proposed the following definition of artificial intelligence (AI):


Later, a number of algorithms and software systems began to be classified as AI, the distinctive property of which is that they can solve some problems in the same way as a person thinking about their solution would do.

The main properties of AI are understanding language, learning and the ability to think and, importantly, act.

AI is a complex of related technologies and processes that are developing qualitatively and rapidly, for example:

  • natural language text processing
  • expert systems
  • virtual agents (chatbots and virtual assistants)
  • recommendation systems.

National strategy for the development of artificial intelligence

  • Main article: National strategy for the development of artificial intelligence

AI Research

  • Main article: Artificial Intelligence Research

Standardization in AI

2019: ISO/IEC experts supported the proposal to develop a standard in Russian

On April 16, 2019 it became known that the ISO/IEC subcommittee on standardization in the field of artificial intelligence had supported the proposal of the Technical Committee "Cyber-Physical Systems", created on the basis of RVC, to develop the standard "Artificial intelligence. Concepts and terminology" in Russian in addition to the basic English version.

The terminological standard "Artificial intelligence. Concepts and terminology" is fundamental to the entire family of international regulatory and technical documents in the field of artificial intelligence. In addition to terms and definitions, the document contains conceptual approaches and principles for constructing systems with AI elements, a description of the relationship between AI and other end-to-end technologies, and basic principles and framework approaches to the regulatory and technical regulation of artificial intelligence.

Following the meeting of the relevant ISO/IEC subcommittee in Dublin, ISO/IEC experts supported the proposal of the delegation from Russia to simultaneously develop a terminological standard in the field of AI not only in English, but also in Russian. The document is expected to be approved in early 2021.

The development of products and services based on artificial intelligence requires an unambiguous interpretation of the concepts used by all market participants. The terminology standard will unify the “language” in which developers, customers and the professional community communicate, classify such properties of AI-based products as “security”, “reproducibility”, “reliability” and “confidentiality”. A unified terminology will also become an important factor for the development of artificial intelligence technologies within the framework of the National Technology Initiative - AI algorithms are used by more than 80% of companies in the NTI perimeter. In addition, the ISO/IEC decision will strengthen the authority and expand the influence of Russian experts in the further development of international standards.

During the meeting, ISO/IEC experts also supported the development of the draft international document Information Technology - Artificial Intelligence (AI) - Overview of Computational Approaches for AI Systems, for which Russia acts as co-editor. The document provides an overview of the current state of artificial intelligence systems, describing the main characteristics of the systems, algorithms and approaches, as well as examples of specialized applications in the field of AI. The draft will be developed by Working Group 5, "Computational approaches and computational characteristics of AI systems" (SC 42 Working Group 5), specially created within the subcommittee.

As part of their work at the international level, the Russian delegation managed to achieve a number of landmark decisions that will have a long-term effect on the development of artificial intelligence technologies in the country. The development of a Russian-language version of the standard, even from such an early phase, is a guarantee of synchronization with the international field, and the development of the ISO/IEC subcommittee and the initiation of international documents with Russian co-editing is the foundation for further promoting the interests of Russian developers abroad,” he commented.

Artificial intelligence technologies are in wide demand in a variety of sectors of the digital economy. Among the main factors hindering their full-scale practical use is the underdevelopment of the regulatory framework. At the same time, it is the well-developed regulatory and technical framework that ensures the specified quality of technology application and the corresponding economic effect.

In the area of ​​artificial intelligence, TC Cyber-Physical Systems, based on RVC, is developing a number of national standards, the approval of which is planned for the end of 2019 - beginning of 2020. In addition, work is underway together with market players to formulate a National Standardization Plan (NSP) for 2020 and beyond. TC "Cyber-physical systems" is open to proposals for the development of documents from interested organizations.

2018: Development of standards in the field of quantum communications, AI and smart city

On December 6, 2018, the Technical Committee "Cyber-Physical Systems", based on RVC, together with the Regional Engineering Center "SafeNet", began developing a set of standards for the markets of the National Technology Initiative (NTI) and the digital economy. By March 2019 it was planned to develop technical standardization documents in the fields of quantum communications, artificial intelligence and the smart city, RVC reported.

Impact of artificial intelligence

Risk to the development of human civilization

Impact on the economy and business

  • The impact of artificial intelligence technologies on the economy and business

Impact on the labor market

Artificial Intelligence Bias

At the heart of everything that today passes for AI in practice (machine translation, speech recognition, natural language processing, computer vision, automated driving and much more) is deep learning. It is a subset of machine learning characterized by the use of neural network models; such models can be said to mimic the workings of the brain only with a considerable stretch, so classifying them as AI is itself debatable. Any neural network model is trained on large data sets and thus acquires certain "skills", but how it uses them remains unclear to its creators, which ultimately becomes one of the most important problems for many deep learning applications. The reason is that such a model works with images formally, without any understanding of what it is doing. Is such a system AI, and can systems built on machine learning be trusted? The implications of the answer to the last question extend beyond the scientific laboratory, which is why media attention to the phenomenon known as AI bias has noticeably intensified.
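
As a minimal sketch of what "trained on data, with the acquired skill encoded in opaque weights" means in practice (synthetic data, plain numpy, invented architecture):

import numpy as np

# A tiny one-hidden-layer network learns to separate two synthetic clusters
# by gradient descent.  The "skill" ends up encoded in numeric weight
# matrices that, by themselves, explain nothing about why the model works.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 0.5, (50, 2)), rng.normal(1, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50).reshape(-1, 1)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(2000):                                   # plain gradient descent
    h = np.tanh(X @ W1 + b1)                            # hidden layer
    p = sigmoid(h @ W2 + b2)                            # predicted probabilities
    grad_out = (p - y) / len(X)                         # gradient of cross-entropy loss
    grad_h = (grad_out @ W2.T) * (1 - h ** 2)           # backpropagate to the hidden layer
    W2 -= 0.5 * h.T @ grad_out;  b2 -= 0.5 * grad_out.sum(axis=0)
    W1 -= 0.5 * X.T @ grad_h;    b1 -= 0.5 * grad_h.sum(axis=0)

print("training accuracy:", ((p > 0.5) == y).mean())    # the acquired "skill"
print("one learned weight vector:", W1[:, 0])           # numbers, not an explanation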

Artificial Intelligence Technology Market

AI market in Russia

Global AI market

Areas of application of AI

The areas of application of AI are quite wide, covering both technologies that are already familiar and emerging new areas that are still far from mass application; in other words, the whole range of solutions from vacuum cleaners to space stations. Their diversity can be divided up according to the criterion of key points of development.

AI is not a monolithic subject area. Moreover, some technological areas of AI appear as new sub-sectors of the economy and separate entities, while simultaneously serving most areas in the economy.

The development of the use of AI leads to the adaptation of technologies in classical sectors of the economy along the entire value chain and transforms them, leading to the algorithmization of almost all functionality, from logistics to company management.

Using AI for Defense and Military Affairs

Use in education

Using AI in business

AI in the fight against fraud

On July 11, 2019 it became known that within just two years artificial intelligence and machine learning will be used to combat fraud three times more often than in July 2019. Such data were obtained in a joint study by SAS and the Association of Certified Fraud Examiners (ACFE). As of July 2019, such anti-fraud tools were already in use at 13% of the organizations that took part in the survey, and another 25% said they planned to introduce them within the next year or two.

AI in the electric power industry

  • At the design level: improved forecasting of generation and demand for energy resources, assessment of the reliability of power generating equipment, automation of increased generation when demand surges.
  • At the production level: optimization of preventive maintenance of equipment, increasing generation efficiency, reducing losses, preventing theft of energy resources.
  • At the promotion level: optimization of pricing depending on the time of day and dynamic billing.
  • At the level of service provision: automatic selection of the most profitable supplier, detailed consumption statistics, automated customer service, optimization of energy consumption taking into account the customer’s habits and behavior.

AI in manufacturing

  • At the design level: increasing the efficiency of new product development, automated supplier assessment and analysis of spare parts requirements.
  • At the production level: improving the process of completing tasks, automating assembly lines, reducing the number of errors, reducing delivery times for raw materials.
  • At the promotion level: forecasting the volume of support and maintenance services, pricing management.
  • At the level of service provision: improving planning of vehicle fleet routes, demand for fleet resources, improving the quality of training of service engineers.

AI in banks

  • Pattern recognition: used, among other things, to recognize customers in branches and present them with specialized offers.

AI in transport

  • The auto industry is on the verge of a revolution: 5 challenges of the era of unmanned driving

AI in logistics

AI in brewing

AI in the judiciary

Developments in the field of artificial intelligence will help radically change the judicial system, making it fairer and free from corruption schemes. This opinion was expressed in the summer of 2017 by Vladimir Krylov, Doctor of Technical Sciences, technical consultant at Artezio.

The scientist believes that existing solutions in the field of AI can be successfully applied in various spheres of the economy and public life. The expert points out that AI is successfully used in medicine, but in the future it can completely change the judicial system.

“Looking at news reports every day about developments in the field of AI, you are only amazed at the inexhaustible imagination and fruitfulness of researchers and developers in this field. Messages about scientific research are constantly interspersed with publications about new products breaking into the market and reports of amazing results obtained through the use of AI in various fields. If we talk about expected events, accompanied by noticeable hype in the media, in which AI will again become the hero of the news, then I probably won’t risk making technological forecasts. I can imagine that the next event will be the emergence somewhere of an extremely competent court in the form of artificial intelligence, fair and incorruptible. This will happen, apparently, in 2020-2025. And the processes that will take place in this court will lead to unexpected reflections and the desire of many people to transfer to AI most of the processes of managing human society.”

The scientist regards the use of artificial intelligence in the judicial system as a "logical step" toward the development of legislative equality and justice. Machine intelligence is not subject to corruption or emotions, can adhere strictly to the legislative framework and can make decisions taking into account many factors, including data characterizing the parties to a dispute. By analogy with the medical field, robot judges could operate with big data from government service repositories. It can be assumed that...

Music

Painting

In 2015 the Google team tested neural networks to see whether they could create images on their own. The artificial intelligence was first trained on a large number of different pictures. However, when the machine was "asked" to depict something on its own, it turned out that it interpreted the world around us in a rather strange way. For example, when given the task of drawing dumbbells, the developers received an image in which the weights were joined by human hands. This probably happened because, at the training stage, the analyzed pictures of dumbbells usually included hands, and the neural network interpreted this incorrectly.

On February 26, 2016, at a special auction in San Francisco, Google representatives raised about $98,000 from psychedelic paintings created by artificial intelligence; the funds were donated to charity. One of the machine's most successful pictures is shown below.

A painting painted by Google's artificial intelligence.
