History of Artificial Intelligence (AI)

Beginning: 1943 – 1952

  • 1943: W. McCulloch & W. Pitts: a mathematical model of the artificial neuron
  • 1949: D. Hebb: a rule for modifying the connection strength between two neurons (both ideas are sketched below)
  • 1950: A. Turing: the Turing test; anticipates machine learning, genetic algorithms, reinforcement learning
  • 1951: Minsky and Edmonds: SNARC, the first neural network – 40 neurons built from vacuum tubes
[Image: Alan Turing]
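To make the first two items concrete, here is a minimal Python sketch of a McCulloch-Pitts threshold neuron combined with a Hebbian weight update. The weights, threshold, and learning rate are illustrative assumptions, not historical values.

    # McCulloch-Pitts neuron: fire (1) iff the weighted input sum reaches a threshold.
    def mcculloch_pitts(inputs, weights, threshold):
        return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

    # Hebb's rule: strengthen a connection when input and output are active together.
    def hebb_update(weights, inputs, output, lr=0.1):
        return [w + lr * x * output for w, x in zip(weights, inputs)]

    weights = [0.5, 0.5]
    x = [1, 1]
    y = mcculloch_pitts(x, weights, threshold=1.0)  # fires: 0.5 + 0.5 >= 1.0
    weights = hebb_update(weights, x, y)            # both active -> weights grow
    print(y, weights)                               # 1 [0.6, 0.6]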

1952 – 1969: Early enthusiasm, high hopes 

  • 1952: A. Samuel: a checkers-playing program that learns from experience
  • 1956: Newell, Shaw and Simon: Logic Theorist (LT) – proved theorems from Principia Mathematica, including a shorter proof of one theorem than the original
  • 1957: Newell & Simon: GPS (General Problem Solver), the first program that embodied the human way of thinking
  • 1958: J. McCarthy: LISP
  • 1960 – 1962: Widrow and Hoff: Adaline
  • 1962: F. Rosenblatt: proof of perceptron convergence (the learning rule is sketched below)
  • 1965: Joseph Weizenbaum – ELIZA chatterbot
  • 1965: Robinson – the resolution rule for automated reasoning
  • 1966: Quillian – semantic networks
  • 1969: Minsky & Papert: "Perceptrons" – fundamental limitations of single-layer neural networks (e.g., the inability to represent XOR)
[Image: Minsky]
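Below is a minimal Python sketch of the perceptron learning rule whose convergence Rosenblatt proved for linearly separable data. The dataset, learning rate, and epoch count are illustrative assumptions.

    import numpy as np

    # Perceptron rule: adjust weights only on misclassified examples.
    def train_perceptron(X, y, lr=0.1, epochs=20):
        w, b = np.zeros(X.shape[1]), 0.0
        for _ in range(epochs):
            for xi, target in zip(X, y):
                pred = 1 if xi @ w + b >= 0 else 0
                w += lr * (target - pred) * xi
                b += lr * (target - pred)
        return w, b

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    w, b = train_perceptron(X, np.array([0, 0, 0, 1]))  # AND is linearly separable
    print([1 if x @ w + b >= 0 else 0 for x in X])      # [0, 0, 0, 1]
    # XOR (targets [0, 1, 1, 0]) is NOT linearly separable, so no single-layer
    # perceptron can represent it – the limitation highlighted in "Perceptrons".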

1966 – 1973: Sobering up

  • Early systems performed poorly when applied to a wider range of problems or to more difficult problems
  • Early systems contained little or no knowledge; their output was the result of relatively simple syntactic manipulations

First failure of machine translation (1957)
Machine translation (financed to speed up the translation of Russian scientific papers after the launch of Sputnik) was based on syntactic transformations and word substitution using English and Russian grammars. The result:
     “The spirit is willing but the flesh is weak”
      → “The vodka is good but the flesh is rotten”

1966 – 1973: Sobering up

  • Another big problem – intractability of many problems that AI was trying to solve
  • Initial success was possible because the problems were reduced to “microworlds” with only a handful of combinations
  • Before the development of the theory of computational complexity, it was believed that scaling up to larger problems could be accomplished simply by using faster hardware
  • 1969, Minsky and Papert: Perceptrons – discouraged further research in neural networks for more than a decade

1970 – 1979: Knowledge-based systems

  • DENDRAL, Feigenbaum and Buchanan (Stanford) – a knowledge-based system that reasons about the molecular structure of organic compounds based on mass spectroscopy – about 450 rules
  • MYCIN, Shortliffe (Stanford) – diagnosis of blood infections, about 550 rules; differs from DENDRAL in having no theoretical model as a foundation and in introducing "certainty factors" to reason under uncertainty
  • Advances in natural language processing
  • PROLOG – a logic programming language, popular in Europe
  • 1975, Minsky: frame theory

1980 – 2010

  • 1980 – AI becomes an industry! (from several million dollars in 1980 up to a billion dollars in 1988)
  • 1982: McDermott – R1, an expert system for configuring computer orders at DEC
  • 1980s – comeback of neural networks (Werbos's backpropagation algorithm; sketched below)
  • Intelligent agents (an agent perceives its environment through sensors and acts upon it through actuators)
  • Robotics
  • Machine learning
[Image: Robotics]
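To illustrate the backpropagation idea behind the neural network comeback, the following Python sketch trains a tiny two-layer network on XOR, the very function a single-layer perceptron cannot represent. The layer sizes, learning rate, and iteration count are illustrative assumptions, not Werbos's original formulation.

    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)  # hidden layer
    W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)  # output layer

    def sigmoid(z):
        return 1 / (1 + np.exp(-z))

    for _ in range(5000):
        h = sigmoid(X @ W1 + b1)             # forward pass
        out = sigmoid(h @ W2 + b2)
        d_out = (out - y) * out * (1 - out)  # backward pass: chain rule
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(0)
        W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(0)

    print(out.round(2).ravel())              # should approach [0, 1, 1, 0]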

2010 – today

  • The era of deep learning
  • Deep learning – machine learning of multiple layers of data abstraction (see the sketch below)
  • Typically realized with neural networks trained on large amounts of data
  • Stunning advances in computer vision and promising improvements in natural language processing

[Image: Deep Learning]
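As a minimal illustration of these "multilayered abstractions", here is a sketch using PyTorch (the choice of framework, layer sizes, and MNIST-style input shape are assumptions for illustration): each layer transforms the previous layer's representation into a more abstract one.

    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Linear(784, 256), nn.ReLU(),  # raw pixels -> low-level features
        nn.Linear(256, 64), nn.ReLU(),   # low-level  -> higher-level features
        nn.Linear(64, 10),               # features   -> class scores
    )

    x = torch.randn(32, 784)             # a batch of 32 flattened 28x28 images
    print(model(x).shape)                # torch.Size([32, 10])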

For Reference:

I have five of the best books on the topic that you can go through. Below are the links:

