The Story of Artificial Intelligence

The story of artificial intelligence dates back to antiquity. Classical philosophers, who sought to explain human thinking as a mechanical process of manipulating symbols, planted the idea of AI long before the technology existed. Significantly, with the invention of programmable computers in the 1940s, scientists took this philosophy a step further and began to research whether it was possible to build an electronic brain that functions like the human brain. The period of tremendous technological development accelerated by World War II lasted roughly two decades after 1940 and was the most important era for the birth of AI.

During this period, important work relating machine and human functions was put forward, and cybernetics played a central role in it. According to the leader of the field, Norbert Wiener, the aim of cybernetics was to create a theory that could be used to understand the control and communication mechanisms of both animals and machines. Moreover, in 1943, Warren McCulloch and Walter Pitts created the first mathematical and computational model of the biological neuron. By analyzing networks of these idealized artificial neurons, they showed that such networks could compute logical functions. This work was the foundation of today's neural networks.
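The McCulloch-Pitts idea can be sketched in a few lines: a unit fires when the weighted sum of its binary inputs reaches a fixed threshold, and different weight/threshold choices realize different logical functions. The code below is a minimal illustrative sketch, not a reproduction of the 1943 formalism; the function name and the specific weights are chosen for the example.

```python
def mcp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts-style unit: fire (1) if the weighted sum of
    binary inputs meets the threshold, otherwise stay silent (0)."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Logical AND: both inputs must be active to reach the threshold of 2.
AND = lambda x, y: mcp_neuron([x, y], [1, 1], 2)
# Logical OR: a single active input is enough to reach the threshold of 1.
OR = lambda x, y: mcp_neuron([x, y], [1, 1], 1)

print(AND(1, 1), AND(1, 0))  # 1 0
print(OR(0, 1), OR(0, 0))    # 1 0
```

Chaining such units into networks is what lets them compute arbitrary Boolean functions, which is the sense in which this model anticipated modern neural networks.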

“Computing Machinery and Intelligence” by Alan Turing.



Well-known works such as “Computing Machinery and Intelligence” by Alan Turing, which questioned whether a machine could be intelligent, appeared at the beginning of the 1950s. Turing answered this question in his paper with a test now called the Turing Test. The test suggested that if computers reach a point where they cannot be distinguished from humans in a conversation, then it can be said that they are thinking like humans. Even though the test has been much debated, it is known as the first serious philosophical claim about AI. Alan Turing's work alongside John von Neumann also had a significant influence on AI's future. Although their work was not referred to as AI, it carried the main logic behind it: they put forward the decimal and binary logic of computers and showed that computers can be used universally for the execution of programs.

The term and the discipline of 'AI' were founded at the Dartmouth Summer Conference at Dartmouth College in 1956, specifically in a workshop organized during the conference. The participants of the workshop, including John McCarthy and Marvin Minsky, became the leaders of the AI discipline for the following years. They foresaw that a machine that thinks like a human could be developed in a short time, and they were funded on the strength of that vision. After this significant workshop, important work was put forward, such as programs for reasoning as search, natural language, and micro-worlds; sophisticated new programs led computers to solve mathematical and geometrical problems and to learn languages. These influential works increased optimism about AI's future. According to Marvin Minsky, one of the AI leaders of the era, artificial intelligence would be solved to a great extent within a single generation.

The future leaders of AI at the Dartmouth Summer Conference, 1956.



However, the optimism did not last long. Criticism of the field of AI arose quickly, especially at the beginning of the 1970s, and concentrated mainly on the relatively slow progress being made in an era of over-anticipation. Eventually, government funding for research was cut, and a serious slowdown in AI advancement began, known as the 'First AI Winter'. After this period of slow progress, in the 1980s the advancement of expert systems (computers that hold the knowledge of a human expert on a subject) and the invention of microprocessors accelerated AI development again, and funding was redirected, especially toward knowledge-based expert systems. However, even though such projects were significant in the history of artificial intelligence, a 'Second AI Winter' began due to similar criticisms and irrational over-hype. Funding was cut again in the late 1980s and early 1990s, which were financially difficult times for AI researchers. For instance, the number of AI-related articles in the New York Times began to decrease in 1987 and reached its lowest point in 1995.
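The expert systems of the 1980s encoded a human expert's knowledge as if-then rules and derived conclusions from facts by repeatedly applying those rules. The following is a toy sketch of that forward-chaining idea; the rules and fact names are invented for illustration and do not come from any real system.

```python
# Each rule pairs a set of required facts with a conclusion to add.
# These medical-style rules are purely hypothetical examples.
RULES = [
    ({"has_fever", "has_cough"}, "possible_flu"),
    ({"possible_flu", "short_of_breath"}, "see_doctor"),
]

def infer(facts, rules):
    """Forward chaining: apply every rule whose conditions are met,
    and repeat until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = infer({"has_fever", "has_cough", "short_of_breath"}, RULES)
print(derived)  # includes "possible_flu" and "see_doctor"
```

Real systems of the era, such as MYCIN-style rule bases, were far larger and added features like certainty factors, but the derive-until-fixpoint loop above is the core mechanism.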

Deep Blue vs. Garry Kasparov.



Even in such difficult times, development in the field continued. Thanks to the progress described by Moore's Law, computers gained much higher capacity while working faster than ever. The application of other concepts from computer science and mathematics, such as probability, decision theory, and Bayesian networks, also strongly influenced AI's development. Eventually, in 1997, IBM's chess computer Deep Blue defeated chess grandmaster Garry Kasparov. This victory was an important milestone in AI history, especially for restoring public anticipation.
After that, as is well known, the advancements of the 2000s and especially the 2010s were exponential, driven by tremendous amounts of data and much faster processing systems. As of 2020, a flood of new articles about AI research and developments is published every day. The ups and downs traced through the history of AI created the advanced technology it is today.