







History of AI at Edinburgh

The Department of Artificial Intelligence can trace its origins to a small research group established in a flat at 4 Hope Park Square in 1963 by Donald Michie, then Reader in Surgical Science. During the Second World War, through his membership of Max Newman's code-breaking group at Bletchley Park, Michie had been introduced to computing and had come to believe in the possibility of building machines that could think and learn. By the early 1960s, the time appeared to be ripe to embark on this endeavour. Looking back, there are four discernible periods in the development of AI at Edinburgh, each of roughly ten years' duration.

The first covers the period from 1963 to the publication of the Lighthill Report by the Science Research Council in 1973. During this period, Artificial Intelligence was recognised by the University, first by the establishment of the Experimental Programming Unit in January 1965 with Michie as Director, and then by the creation of the Department of Machine Intelligence and Perception in October 1966. By then Michie had persuaded Richard Gregory and Christopher Longuet-Higgins, then at Cambridge University and planning to set up a brain research institute, to join forces with him at Edinburgh. Michie's prime interest lay in the elucidation of design principles for the construction of intelligent robots, whereas Gregory and Longuet-Higgins recognised that computational modelling of cognitive processes by machine might offer new theoretical insights into their nature. Indeed, Longuet-Higgins named his research group the Theoretical Section and Gregory called his the Bionics Research Laboratory.

During this period there were remarkable achievements in a number of sub-areas of the discipline, including the development of new computational tools and techniques and their application to problems in such areas as assembly robotics and natural language. The POP-2 symbolic programming language, which supported much subsequent UK research and teaching in AI, was designed and developed by Robin Popplestone and Rod Burstall. It ran on a multi-access interactive computing system, only the second of its kind to be opened in the UK. By 1973, the research in robotics had produced the FREDDY II robot, which was capable of assembling objects automatically from a heap of parts.

Unfortunately, from the outset of their collaboration these scientific achievements were marred by significant intellectual disagreements about the nature and aims of research in AI, and by growing disharmony between the founding members of the Department. When Gregory resigned in 1970 to go to Bristol University, the University's reaction was to transform the Department into the School of Artificial Intelligence, which was to be run by a Steering Committee. Its three research groups (Jim Howe had taken over responsibility for leading Gregory's group when he left) were given departmental status; the Bionics Research Laboratory's name was retained, whereas the Experimental Programming Unit became the Department of Machine Intelligence, and (much to the disgust of some local psychologists) the Theoretical Section was renamed the Theoretical Psychology Unit! At that time, the Faculty's Metamathematics Unit, which had been set up by Bernard Meltzer to pursue research in automated reasoning, joined the School as the Department of Computational Logic.

Meanwhile, the high level of discord between the senior members of the School had become known to its main sponsors, the Science Research Council, whose reaction was to invite Sir James Lighthill to review the field. His report was published early in 1973. Although it supported AI research related to automation and to computer simulation of neurophysiological and psychological processes, it was highly critical of basic research in foundational areas such as robotics and language processing. Lighthill's report provoked a massive loss of confidence in AI within the academic establishment in the UK (and to a lesser extent in the US), and this persisted for a decade - the so-called "AI Winter".



Since the new School structure had failed to reduce tensions between senior staff, the second ten-year period began with an internal review of AI by a Committee appointed by the University Court. Under the chairmanship of Professor Norman Feather, it consulted widely, both inside and outside the University. Reporting in 1974, it recommended the retention of a research activity in AI but proposed significant organisational changes. The School structure was scrapped in favour of a single department, now named the Department of Artificial Intelligence; a separate unit, the Machine Intelligence Research Unit, was set up to accommodate Michie's work, and Longuet-Higgins opted to leave Edinburgh for Sussex University. The new Department's first head was Meltzer, who retired in 1977 and was replaced by Howe, who led it until 1996. Over the next decade, the Department's research was dominated by work on automated reasoning, cognitive modelling, children's learning and computation theory (until 1979, when Rod Burstall and Gordon Plotkin left to join the Theory Group in Computer Science). Outstanding achievements included the design and development of the Edinburgh Prolog programming language by David Warren, which strongly influenced the Japanese Government's Fifth Generation Computing Project in the 1980s; Alan Bundy's demonstrations of the utility of meta-level reasoning to control the search for solutions to maths problems; and Howe's successful development of computer-based learning environments for a range of primary and secondary school subjects, working with both normal and handicapped children.

Unlike its antecedents, which undertook teaching only at Masters and Ph.D. levels, the new Department had committed itself to becoming more closely integrated with the other departments in the Faculty by contributing to undergraduate teaching as well. Its first course, AI2, a computational modelling course, was launched in 1974/75. This was followed by an introductory course, AI1, in 1978/79. By 1982, it was able to launch its first joint degree, Linguistics with Artificial Intelligence. There were no blueprints for these courses: in each case, the syllabus had to be carved out of the body of research. It was during this period that the Department also agreed to join forces with the School of Epistemics, directed by Barry Richards, to help it introduce a Ph.D. programme in Cognitive Science. The Department provided financial support in the form of part-time seconded academic staff and studentship funding; it also provided access to its interactive computing facilities. From this modest beginning there emerged the Centre for Cognitive Science, which was given departmental status by the University in 1985.

The third period of AI activity at Edinburgh began with the launch of the Alvey Programme in advanced information technology in 1983. Thanks to the increasing number of successful applications of AI technology to practical tasks, in particular expert systems, the negative impact of the Lighthill Report had dissipated. AI was now seen as a key information technology, to be fostered through collaborative projects between UK companies and UK universities. The effects on the Department were significant. By taking full advantage of the various funding initiatives provoked by the Alvey Programme, the Department increased its academic staff complement rapidly from 4 to 15. The accompanying growth in research activity was focused in four areas: Intelligent Robotics, Knowledge Based Systems, Mathematical Reasoning and Natural Language Processing.

During the period, the Intelligent Robotics Group undertook collaborative projects in automated assembly, unmanned vehicles and machine vision. It proposed a novel hybrid architecture for the hierarchical control of reactive robotic devices, and applied it successfully to industrial assembly tasks using a low-cost manipulator. In vision, work focused on 3-D geometric object representation, including methods for extracting such information from range data; achievements included a working range sensor and a range-data segmentation package. Research in Knowledge Based Systems included design support systems, intelligent front ends and learning environments. The Edinburgh Designer System, a design support environment for mechanical engineers started under Alvey funding, was successfully generalised to small-molecule drug design. The Mathematical Reasoning Group prosecuted its research into the design of powerful inference techniques, in particular the development of proof plans for describing and guiding inductive proofs, with applications to problems of program verification, synthesis and transformation, as well as to areas outside mathematics such as computer configuration and playing bridge. Research in Natural Language Processing spanned projects in the sub-areas of natural language interpretation and generation. Collaborative projects included the implementation of an English-language front end to an intelligent planning system, an investigation of the use of language generation techniques in hypertext-based documentation systems to produce output tailored to the user's skills and working context, and an exploration of semi-automated editorial assistance such as massaging a text into house style.

In 1984, the Department combined forces with the Department of Linguistics and the Centre for Cognitive Science to launch the Centre for Speech Technology Research, under the directorship of John Laver. Major funding over a five-year period was provided by the Alvey Programme to support a project demonstrating real-time continuous speech recognition.

By 1989, the University's reputation for research excellence in natural language computation and cognition enabled it to secure, in collaboration with a number of other universities, one of the major Research Centres which became available at that time, namely the Human Communication Research Centre, sponsored by the ESRC.

During this third decade, the UGC/UFC started the process of assessing research quality. In 1989, and again in 1992, the Department shared a "5" rating with the other departments making up the University's Computing Science unit of assessment.

The Department's postgraduate teaching also expanded rapidly. A master's degree in Knowledge Based Systems, which offered specialist themes in Foundations of AI, Expert Systems, Intelligent Robotics and Natural Language Processing, was established in 1983, and for many years was the largest of the Faculty's taught postgraduate courses, with 40-50 graduates annually. Many of the Department's complement of about 60 Ph.D. students were drawn from its ranks. At undergraduate level, the most significant development was the launch, in 1987/88, of the joint degree in Artificial Intelligence and Computer Science, with support from the UFC's Engineering and Technology initiative. Subsequently, the modular structure of the course material enabled the introduction of joint degrees in AI and Mathematics and in AI and Psychology. At that time, the Department also shared an "Excellent" rating awarded by the SHEFC's quality assessment exercise for its teaching provision in the area of Computer Studies.

The start of the fourth decade of AI activity coincided with the publication in 1993 of "Realising our Potential", the Government's new strategy for harnessing the strengths of science and engineering to the wealth creation process. For many departments across the UK, the transfer of technology from academia to industry and commerce was uncharted territory. However, from a relatively early stage in the development of AI at Edinburgh, there had been strong interest in putting AI technology to work outside the laboratory. With financial backing from ICFC, in 1969 Michie and Howe had established a small company, Conversational Software Ltd (CSL), to develop and market the POP-2 symbolic programming language. Probably the first AI spin-off company in the world, CSL supplied POP-2 systems that supported work in UK industry and academia for a decade or more, long after the company itself had ceased to trade; as is so often the case with small companies, its development costs had outstripped market demand. The next exercise in technology transfer was a more modest affair, concerned with broadcasting some of the computing tools developed for the Department's work with schoolchildren. In 1981 a small firm, Jessop Microelectronics, was licensed to manufacture and sell the Edinburgh Turtle, a small motorised cart that could be moved around under program control, leaving a trace of its path. An excellent tool for introducing programming, spatial and mathematical concepts to young children, the Turtle sold over 1,000 units to UK schools (including 100 supplied to special schools under a DTI initiative). At the same time, with support from Research Machines, Peter Ross and Ken Johnson re-implemented the children's programming language LOGO on Research Machines microcomputers. Called RM Logo, it was supplied to educational establishments throughout the UK by Research Machines for a decade or more.

As commercial interest in IT exploded into life in the early 1980s, the Department was bombarded with requests from UK companies for various kinds of technical assistance. For a variety of reasons, not least the Department's modest size at that time, the most effective way of providing this was to set up a separate, non-profit-making organisation to support applications-oriented R&D. In July 1983, with the agreement of the University Court, Howe launched the Artificial Intelligence Applications Institute. At the end of its first year of operations, Austin Tate succeeded Howe as Director. Its mission was to help its clients acquire know-how and skills in the construction and application of knowledge-based systems technology, enabling them to support their own product or service developments and so gain a competitive edge. In practice, the Institute was a technology transfer experiment: there was no blueprint, no model to specify how the transfer of AI technology could best be achieved, so much time and effort was given over to conceiving, developing and testing a variety of mechanisms through which knowledge and skills could be imparted to clients. A ten-year snapshot of its activities revealed that it employed about twenty technical staff, had an annual turnover just short of £1M, and had broken even financially from the outset. Overseas, it had major clients in Japan and the US. Its work focused on three sub-areas of knowledge-based systems: planning and scheduling systems, decision support systems and information systems.

Formally, the Department of Artificial Intelligence disappeared in 1998 when the University conflated the three departments, Artificial Intelligence, Cognitive Science and Computer Science, to form the new School of Informatics.

 

Text 2

A gift of tongues

Troy Dreier

PC MAGAZINE July 2006.

1. Jokes about the uselessness of machine translation abound. The Central Intelligence Agency was said to have spent millions trying to program computers to translate Russian into English. The best it managed to do, so the tale goes, was to turn the famous Russian saying "The spirit is willing but the flesh is weak" into "The vodka is good but the meat is rotten." Sadly, this story is a myth. But machine translation has certainly produced its share of howlers. Since its earliest days, the subject has suffered from exaggerated claims and impossible expectations.

2. Hype still exists. But Japanese researchers, perhaps spurred on by the linguistic barrier that often seems to separate their country's scientists and technicians from those in the rest of the world, have made great strides towards the goal of reliable machine translation—and now their efforts are being imitated in the West.

3. Until recently, the main commercial users of translation programs have been big Japanese manufacturers. They rely on machine translation to produce the initial drafts of their English manuals and sales material. (This may help to explain the bafflement many western consumers feel as they leaf through the instructions for their video recorders.) The most popular program for doing this is e-j bank, which was designed by Nobuaki Kamejima, a reclusive software wizard at AI Laboratories in Tokyo. Now, however, a bigger market beckons. The explosion of foreign languages (especially Japanese and German) on the Internet is turning machine translation into a mainstream business. The fraction of web sites posted in English has fallen from 98% to 82% over the past three years, and the trend is still downwards. Consumer software, some of it written by non-Japanese software houses, is now becoming available to interpret this electronic Babel to those who cannot read it.

Enigma variations

4. Machines for translating from one language to another were first talked about in the 1930s. Nothing much happened, however, until 1940 when an American mathematician called Warren Weaver became intrigued with the way the British had used their pioneering Colossus computer to crack the military codes produced by Germany's Enigma encryption machines. In a memo to his employer, the Rockefeller Foundation, Weaver wrote: "I have a text in front of me which is written in Russian but I am going to pretend that it is really written in English and that it has been coded in some strange symbols. All I need to do is to strip off the code in order to retrieve the information contained in the text."

5. The earliest "translation engines" were all based on this direct, so-called "transformer" approach. Input sentences of the source language were transformed directly into output sentences of the target language, using a simple form of parsing. The parser did a rough analysis of the source sentence, dividing it into subject, object, verb, etc. Source words were then replaced by target words selected from a dictionary, and their order rearranged so as to comply with the rules of the target language.
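
To make the transformer approach concrete, here is a minimal sketch, assuming a toy bilingual dictionary, a crude part-of-speech table and a single reordering rule, all invented for illustration; real engines of the period depended on much larger dictionaries and hand-crafted rule sets.

```python
# Minimal sketch of a direct "transformer" engine: substitute each source
# word from a bilingual dictionary, then apply a crude reordering rule of
# the target language. The dictionary, tags and rule below are toy
# examples (English -> French-like target), not real product data.

DICTIONARY = {
    "the": "le", "red": "rouge", "pen": "stylo",
    "is": "est", "in": "dans", "box": "boite",
}
# Very rough part-of-speech tags, used only by the reordering rule.
TAGS = {"red": "ADJ", "pen": "NOUN", "box": "NOUN"}

def transform(sentence):
    tokens = sentence.lower().rstrip(".").split()
    # 1. Word-for-word substitution; unknown words pass through unchanged,
    #    a classic source of machine-translation "howlers".
    out = [(DICTIONARY.get(t, t), TAGS.get(t, "OTHER")) for t in tokens]
    # 2. Reordering: this toy target language puts adjectives after nouns.
    i = 0
    while i < len(out) - 1:
        if out[i][1] == "ADJ" and out[i + 1][1] == "NOUN":
            out[i], out[i + 1] = out[i + 1], out[i]
            i += 2
        else:
            i += 1
    return " ".join(word for word, _ in out)

if __name__ == "__main__":
    print(transform("The red pen is in the box."))
    # -> "le stylo rouge est dans le boite": word order is patched up, but
    #    there is no gender agreement and no sense disambiguation, exactly
    #    the weaknesses the following paragraphs describe.
```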

6. It sounds simple, but it wasn't. The problem with Weaver's approach was summarized succinctly by Yehoshua Bar-Hillel, a linguist and philosopher who wondered what kind of sense a machine would make of the sentence "The pen is in the box" (the writing instrument is in the container) and the sentence "The box is in the pen" (the container is in the [play]pen).

7. Humans resolve such ambiguities in one of two ways. Either they note the context of the preceding sentences or they infer the meaning in isolation by knowing certain rules about the real world—in this case, that boxes are bigger than pens (writing instruments) but smaller than pens (play-pens) and that bigger objects cannot fit inside smaller ones. The computers available to Weaver and his immediate successors could not possibly have managed that.
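
How such real-world rules might be encoded can be sketched in a few lines; the sense inventory, the relative sizes and the containment test below are all invented for illustration and stand in for the far larger knowledge bases a practical system would need.

```python
# Sketch of Bar-Hillel's "pen" example: pick the sense of "pen" that keeps
# the larger object on the outside. Senses and size scores are invented
# for illustration only.

SENSES = {"pen": [("writing instrument", 1), ("playpen", 10)]}
SIZES = {"box": 5}  # crude relative size of the other noun in the sentence

def resolve_pen(container, contents):
    """Return the sense of 'pen' consistent with 'contents fits inside container'."""
    for sense, size in SENSES["pen"]:
        if container == "pen" and size > SIZES[contents]:
            return sense  # "The box is in the pen" -> the pen must be bigger
        if contents == "pen" and size < SIZES[container]:
            return sense  # "The pen is in the box" -> the pen must be smaller
    return "unresolved"

if __name__ == "__main__":
    print(resolve_pen(container="box", contents="pen"))  # writing instrument
    print(resolve_pen(container="pen", contents="box"))  # playpen
```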

8. But modern computers, which have more processing power and more memory, can. Their translation engines are able to adopt a less direct approach, using what is called "linguistic knowledge". It is this that has allowed Mr. Kamejima to produce e-j bank, and has also permitted NeocorTech of San Diego to come up with Tsunami and Typhoon - the first Japanese-language-translation software to run on the standard (English) version of Microsoft Windows.

9. Linguistic-knowledge translators have two sets of grammatical rules—one for the source language and one for the target. They also have a lot of information about the idiomatic differences between the languages, to stop them making silly mistakes.

10. The first set of grammatical rules is used by the parser to analyze an input sentence ("I read The Economist every week"). The sentence is resolved into a tree that describes the structural relationship between the sentence's components: "I" (subject), "read" (verb), "The Economist" (object) and "every week" (phrase modifying the verb). Thus far, the process is like that of a Weaver-style transformer engine. But then things get more complex. Instead of working to a pre-arranged formula, a generator (i.e., a parser in reverse) is brought into play to create a sentence structure in the target language. It does so using a dictionary and a comparative grammar—a set of rules that describes the difference between each sentence component in the source language and its counterpart in the target language. Thus a bridge to the second language is built on deep structural foundations.
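
The parse-transfer-generate pipeline just described can be illustrated with a short sketch. Everything in it is an assumption made for illustration: the parse is hard-wired to the article's example sentence, the lexicon is a toy one, and the subject-modifier-object-verb target order merely imitates Japanese; it does not reflect how Tsunami, Typhoon or e-j bank actually work.

```python
# Sketch of a linguistic-knowledge ("transfer") engine: parse the source
# sentence into labelled components, map each component into the target
# language, then let a generator linearise the result using the target
# language's own word order. All data below are toy examples.

LEXICON = {
    "i": "watashi-wa",
    "read": "yomu",
    "the economist": "The Economist",  # proper noun kept as-is
    "every week": "maishuu",
}

def parse(sentence):
    """Source-side parser: resolve the sentence into labelled components.
    Hard-wired to the article's example sentence for brevity."""
    return {"subject": "I", "verb": "read",
            "object": "The Economist", "modifier": "every week"}

def transfer(tree):
    """Comparative-grammar step: translate each component and record how
    the target language arranges the corresponding components."""
    translated = {role: LEXICON[phrase.lower()] for role, phrase in tree.items()}
    target_order = ["subject", "modifier", "object", "verb"]  # SOV-style
    return translated, target_order

def generate(translated, target_order):
    """Target-side generator (a parser in reverse): build the output
    sentence from the translated tree, not from the source word order."""
    return " ".join(translated[role] for role in target_order)

if __name__ == "__main__":
    tree = parse("I read The Economist every week")
    print(generate(*transfer(tree)))
    # -> "watashi-wa maishuu The Economist yomu"
```

The point of the separation is the one the article makes next: because the source analysis and the target generation are independent, the same structural machinery can, in principle, be run in either direction.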

11. Apart from being much more accurate, such linguistic-knowledge engines should, in theory, be reversible—you should be able to work backwards from the target language to the source language. In practice, there are a few catches which prevent this from happening as well as it might - but the architecture does at least make life easier for software designers trying to produce matching pairs of programs. Tsunami (English to Japanese) and Typhoon (Japanese to English), for instance, share much of their underlying programming code.

12. Having been designed from the start for use on a personal computer rather than a powerful workstation or even a mainframe, Tsunami and Typhoon use memory extremely efficiently. As a result, they are blindingly fast on the latest PCs—translating either way at speeds of more than 300,000 words an hour. Do they produce perfect translations at the click of a mouse? Not by a long shot. But they do come up with surprisingly good first drafts for expert translators to get their teeth into. One mistake that the early researchers made was to imagine that nothing less than flawless, fully automated machine translation would suffice. With more realistic expectations, machine translation is, at last, beginning to thrive.




