Tracing the evolution of artificial intelligence through its history
Thomas Bayes
Artificial intelligence requires the ability to learn and make decisions, often based on incomplete information.
In 1763, a framework developed by Thomas Bayes for reasoning about the probability of events was published posthumously; it uses mathematics to update the probability of a hypothesis as more information becomes available.
Thanks to his work, Bayesian inference would become an important approach in machine learning, and it marks one of the earliest milestones on our artificial intelligence timeline.
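In modern notation, the rule at the heart of Bayesian inference can be written as:

P(H | E) = P(E | H) × P(H) / P(E)

Here P(H) is the prior belief in a hypothesis H, P(E | H) is how likely the evidence E would be if the hypothesis were true, and P(H | E) is the updated (posterior) belief, which in turn serves as the prior when the next piece of evidence arrives.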
English mathematician Ada Lovelace
In 1842, English mathematician Ada Lovelace was working with Charles Babbage to publish the first algorithm intended to be carried out by his Analytical Engine, the first design for a general-purpose mechanical computer.
Yet Lovelace saw opportunities beyond the math. She envisioned a computer that could not just crunch numbers but solve problems of any complexity.
At the time, the idea that machines could have applications beyond pure calculation was revolutionary. She called the idea Poetical Science.
Czech writer Karel Čapek introduces the word "robot"
In his 1920 play R.U.R. (Rossum's Universal Robots), Čapek introduces the word "robot", derived from the Czech "robota" (forced labour or drudgery).
Warren S. McCulloch and Walter Pitts publish “A Logical Calculus of the Ideas Immanent in Nervous Activity”
This influential 1943 paper, in which they discuss networks of idealized and simplified artificial “neurons” and how they might perform simple logical functions, will become the inspiration for computer-based “neural networks” (and later “deep learning”) and for their popular description as mimicking the brain.
This marks a critical point in our artificial intelligence timeline, even though deep learning will still take decades to reach mainstream popularity.
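To make the idea concrete, here is a minimal Python sketch of such an idealized neuron: a unit that fires (outputs 1) when the weighted sum of its binary inputs reaches a threshold. The function names, weights, and thresholds are illustrative choices for this article, not values taken from the 1943 paper.

```python
# A minimal sketch of an idealized threshold neuron in the spirit of
# McCulloch and Pitts: binary inputs, a weighted sum, and a threshold.
# Weights and thresholds here are illustrative, not from the 1943 paper.

def mcp_neuron(inputs, weights, threshold):
    """Return 1 if the weighted sum of binary inputs reaches the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Simple logical functions, the kind of computation the paper argues
# networks of such units can perform:
def AND(a, b):
    return mcp_neuron([a, b], weights=[1, 1], threshold=2)

def OR(a, b):
    return mcp_neuron([a, b], weights=[1, 1], threshold=1)

def NOT(a):
    # Inhibition modelled here as a negative weight.
    return mcp_neuron([a], weights=[-1], threshold=0)

if __name__ == "__main__":
    for a in (0, 1):
        for b in (0, 1):
            print(f"a={a} b={b}  AND={AND(a, b)}  OR={OR(a, b)}")
    print(f"NOT(0)={NOT(0)}  NOT(1)={NOT(1)}")
```

Chaining such units together yields more complex logical circuits, which is the sense in which the paper connects networks of neurons to computation.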
In 1949, Edmund Berkeley publishes Giant Brains: Or Machines That Think, in which he writes:
“Recently there have been a good deal of news about strange giant machines that can handle information with vast speed and skill…. These machines are similar to what a brain would be if it were made of hardware and wire instead of flesh and nerves… A machine can handle information; it can calculate, conclude, and choose; it can perform reasonable operations with information. A machine, therefore, can think.”
The debate over machine intelligence, including what constitutes thinking, creativity, autonomy, and even consciousness, rages on today.
In 1950, Isaac Asimov published I, Robot, a collection of science fiction short stories.
Asimov was one of several science fiction writers who picked up the idea of machine intelligence, and imagined its future. His work was popular, thought-provoking and visionary, helping to inspire a generation of roboticists and scientists.
The term 'artificial intelligence' is coined for a summer conference at Dartmouth College
The 1956 conference was organised by a young computer scientist, John McCarthy.
Top scientists debated how to tackle AI. Some, like influential academic Marvin Minsky, favoured a top-down approach: pre-programming a computer with the rules that govern human behaviour.
Arthur Samuel coins the term “machine learning”
In 1959, Samuel reports on programming a computer “so that it will learn to play a better game of checkers than can be played by the person who wrote the program.”
This marks a historic point in our artificial intelligence timeline, with the coining of a phrase that will come to embody an entire field within AI.
Minsky influenced science fiction too. He advised Stanley Kubrick on the film 2001: A Space Odyssey, which features an intelligent computer, HAL 9000. In one scene, HAL is interviewed on the BBC about the mission and says that he is "foolproof and incapable of error."
Developed at the Stanford Research Institute between 1966 and 1972, Shakey was the first general-purpose mobile robot able to make decisions about its own actions by reasoning about its surroundings.
Since 1973, Harold Cohen—a painter, a professor at the University of California, San Diego, and a onetime representative of Britain at the Venice Biennale—has been collaborating with a program called AARON.
AARON has been making pictures autonomously for decades; even in the late 1980s, Cohen could joke that he was the only artist who would ever be able to have a posthumous exhibition of new works created entirely after his own death.