More than 60 years ago, humanity realized that machine logic could aid its own development. Below, see how artificial intelligence emerged, what its first applications were, and how much it has developed over the decades. Its origin lay in the need to speed up known processes, freeing humans to focus on more abstract thinking.
How is the area defined?
Artificial intelligence, or AI, is a discipline roughly sixty years old: a collection of sciences, theories, and techniques (including mathematical logic, statistics, probability, computational neurobiology, and computer science) that aims to imitate the cognitive abilities of a human being.
Begun at the height of World War II, its development is closely linked to that of computing and has led computers to perform increasingly complex tasks that previously only a human being could carry out.
The beginnings (1940 – 1960)
It is not possible to separate the origin of artificial intelligence from the evolution of computing. From this starting point, we cannot avoid mentioning Alan Turing, the great father of computing, whose codebreaking machine helped the Allies win the war sooner.
Speaking of this computing genius: many experts now argue that the Turing test is not a good measure of artificial intelligence so much as of an efficient chatbot. Even so, it strongly inspired the concepts that would give rise to artificial intelligence.
The period between 1940 and 1960 was marked by technological development — with the Second World War being an accelerator — and the desire to understand how to bring the operation of machines and organic beings closer together.
The term “AI” can be attributed to John McCarthy of MIT (Massachusetts Institute of Technology), who defined it as the construction of computer programs that engage in tasks performed more satisfactorily by human beings because they require high-level mental processes, such as perceptual learning, memory organization, and critical reasoning.
Around 1960, artificial intelligence cooled down due to technical limitations at the time, such as the lack of computer memory.
The second era (1972 – 1997)
During this period, the technical limitations of computers were partially resolved as memory expanded. What revived the technology in the public imagination was art and cinema: individuals who had encountered these concepts in their youth unleashed their creativity.
In the technical sphere, it was in fact microprocessors that made the idea viable again. Even so, little evolved in a palpable, widely visible way; advances remained restricted to researchers.
A first big step was taken at Stanford University in 1972 with MYCIN, a system specializing in the diagnosis of blood infections and the prescription of drugs. This system was based on an “inference engine”, programmed to be a logical mirror of human reasoning: when the data was entered, the engine provided answers with a high level of expertise.
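The core mechanism of such an inference engine can be sketched in a few lines. The following is a minimal illustration of forward chaining over if-then rules, in the spirit of expert systems like MYCIN; the rules and facts are invented for illustration and are not MYCIN's actual medical knowledge base.

```python
# Toy forward-chaining inference engine. Each rule pairs a set of
# premises with a conclusion; the engine keeps firing rules whose
# premises are already known until no new facts can be derived.
# (Rules below are hypothetical examples, not real medical knowledge.)

RULES = [
    ({"fever", "low_white_cell_count"}, "possible_infection"),
    ({"possible_infection", "positive_blood_culture"}, "bacteremia_suspected"),
]

def infer(initial_facts):
    """Derive all conclusions reachable from the initial facts."""
    facts = set(initial_facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"fever", "low_white_cell_count", "positive_blood_culture"}))
```

Note that all the "expertise" lives in the hand-written rules; the engine itself only applies them mechanically, which is exactly the limitation the later, data-driven approach would address.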
In 1997, the computer Deep Blue beat chess grandmaster Garry Kasparov. Despite this, the IBM computer was an expert in a limited universe, not capable of modeling and calculating an entire world.
The current artificial intelligence (2010 – present)
Two main factors triggered this new era. The first was access to large volumes of data. The second was the discovery of the very high efficiency of graphics card processors (GPUs) in accelerating the calculation of learning algorithms.
- In 2012, Google X (Google's research lab) managed to make an AI recognize cats in videos: a machine learned to distinguish something;
- In 2016, AlphaGo (Google's AI specializing in the game of Go) beat the European champion (Fan Hui) and the world champion (Lee Sedol). Go has absurdly more possible variations than the chess played by Deep Blue.
How was this possible? Through a complete paradigm shift away from expert systems. The approach became inductive: instead of coding rules by hand, computers are allowed to discover them on their own through correlation and classification, based on large amounts of data and their own answers.
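The contrast with the rule-based approach can be made concrete with a tiny learning-by-example sketch: a 1-nearest-neighbor classifier, one of the simplest inductive methods. No rule is ever written down; the answer is derived from labeled data. The data points and labels below are invented purely for illustration.

```python
# Minimal inductive sketch: classify a new point by copying the label
# of the closest labeled example (1-nearest-neighbor). The "knowledge"
# lives in the data, not in hand-coded rules.
# (Features and labels are hypothetical, for illustration only.)

def nearest_neighbor(train, query):
    """Return the label of the training example closest to `query`."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(train, key=lambda example: sq_dist(example[0], query))
    return label

# Labeled examples: (feature vector, label)
train = [((1.0, 1.0), "cat"), ((1.2, 0.9), "cat"),
         ((5.0, 5.0), "dog"), ((4.8, 5.2), "dog")]

print(nearest_neighbor(train, (1.1, 1.0)))  # falls in the "cat" cluster
```

Real systems such as AlphaGo use vastly more sophisticated models (deep neural networks trained by reinforcement learning), but the underlying shift is the same: the program generalizes from data rather than executing rules an expert wrote.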
Almost overnight, the vast majority of research teams turned to this technology, with immense benefits.
This type of learning has also enabled considerable advances in text recognition, but there is still some way to go before true text-comprehension systems exist: AI still cannot fully contextualize what we write or infer our intentions from certain ways of expressing ourselves.