





Chapter 2. History of artificial intelligence research

In the middle of the 20th century, a handful of scientists began a new approach to building intelligent machines, based on recent discoveries in neurology, a new mathematical theory of information, an understanding of control and stability called cybernetics, and, above all, the invention of the digital computer, a machine based on the abstract essence of mathematical reasoning.

The field of modern AI research was founded at a conference on the campus of Dartmouth College in the summer of 1956. Those who attended would become the leaders of AI research for many decades, especially John McCarthy, Marvin Minsky, Allen Newell and Herbert Simon, who founded AI laboratories at MIT, CMU and Stanford. They and their students wrote programs that were, to most people, simply astonishing: computers were solving word problems in algebra, proving logical theorems and speaking English. By the mid-1960s their research was heavily funded by the U.S. Department of Defense, and they were optimistic about the future of the new field:

1965, H. A. Simon: "Machines will be capable, within twenty years, of doing any work a man can do."

1967, Marvin Minsky: "Within a generation ... the problem of creating 'artificial intelligence' will substantially be solved."

These predictions, and many like them, would not come true: researchers had failed to recognize the difficulty of some of the problems they faced. In 1974, in response to the criticism of England's Sir James Lighthill and ongoing pressure from Congress to fund more productive projects, the U.S. and British governments cut off funding for undirected, exploratory research in AI. This was the first AI Winter.

In the early 1980s, AI research was revived by the commercial success of expert systems (a form of AI program that simulated the knowledge and analytical skills of one or more human experts, as sketched below), and by 1985 the market for AI had reached more than a billion dollars. Minsky and others warned the community that enthusiasm for AI had spiraled out of control and that disappointment was sure to follow. Beginning with the collapse of the Lisp machine market in 1987, AI once again fell into disrepute, and a second, longer-lasting AI Winter began.
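To give a flavor of how such systems worked, here is a minimal sketch of a forward-chaining rule engine in Python. The car-diagnosis rules and fact names are invented purely for illustration; the commercial expert systems of that era were written largely in Lisp and were far larger and more sophisticated.

# Minimal forward-chaining rule engine (illustrative sketch only).
# Each rule maps a set of required facts to one new conclusion.
RULES = [
    ({"engine does not start", "battery is dead"}, "recharge or replace the battery"),
    ({"engine does not start", "fuel tank is empty"}, "refuel the car"),
    ({"lights are dim"}, "battery is dead"),
]

def infer(facts):
    """Repeatedly apply rules until no new conclusions can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            # Fire a rule when all its conditions are known facts.
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"engine does not start", "lights are dim"}))

Given the observations "engine does not start" and "lights are dim", the engine first concludes that the battery is dead, and that intermediate conclusion in turn triggers the recommendation to recharge or replace it, mimicking (in miniature) how a human expert chains pieces of knowledge together.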

In the 1990s and early 21st century, AI achieved its greatest successes, albeit somewhat behind the scenes. Artificial intelligence was adopted throughout the technology industry, providing the heavy lifting for logistics, data mining, medical diagnosis and many other areas. This success was due to several factors: the vastly increased power of computers, a greater emphasis on solving specific subproblems, the creation of new ties between AI and other fields working on similar problems, and above all a new commitment by researchers to solid mathematical methods and rigorous scientific standards.


Summary

As we can see, AI research has produced many useful inventions that can be applied in different branches of modern life. At the same time, the field still faces a large number of unsolved problems. Further progress in AI is impossible without scientists interacting with one another and combining the achievements of different areas.



 


