The past, present and future of AI

24th Oct 2009 | 09:00

Amazing milestones in the field of artificial intelligence

A history of AI machines

All being well, IBM plans to enter its Watson computer into the US gameshow Jeopardy! in 2010. In order to win, the machine will not only have to understand the questions, but dig out the correct answers and speak them intelligibly.

After all the broken promises from the over-optimistic visionaries of the '50s and '60s, are we finally moving towards a real-life HAL? If we are, why has it taken so long?

We caught up with some of the leading minds in artificial intelligence to find out. We've also looked at some of the most amazing milestones in the field, including a robot scientist that can make discoveries on its own.

CLEVER WATSON: IBM is so confident in Watson's abilities that the machine will be entered into the US game show Jeopardy!

The roots of AI

It's been 41 years since Stanley Kubrick directed 2001: A Space Odyssey, but even in 2009 the super-intelligent HAL still looks like the stuff of sci-fi.

Despite masses of research into artificial intelligence, we still haven't developed a computer clever enough for a human to have a conversation with. Where did it all go wrong?

"I think it's much harder than people originally expected," says Dr David Ferrucci, leader of the IBM Watson project team.

The Watson project isn't a million miles from the fictional HAL project: it can listen to human questions, and even respond with answers. Even so, it's taken us a long time to get here. People have been speculating about 'thinking machines' for millennia.

The Greek god Hephaestus is said to have built two golden robots to help him move because of his lameness, and the monster in Mary Shelley's Frankenstein popularised the idea of creating a being capable of thought back in the nineteenth century.

Once computers arrived, the idea of artificial intelligence was bolstered by early advances in the field.

The mathematician Alan Turing started writing a computer chess program as far back as 1948 – even though he didn't have a computer powerful enough to run it. In 1950, Turing wrote 'Computing Machinery and Intelligence' for the journal Mind, in which he outlined the necessary criteria for a machine to be judged as genuinely intelligent.

THE TURING TEST: Alan Turing (seen in this statue at Bletchley Park) brought us the Turing Test, still the holy grail of AI researchers

As you may well know, this was called the Turing Test, and it stated that a machine could be judged as intelligent if it could comprehensively fool a human examiner into thinking the machine was human.

The Turing Test has since become the basis for some of the AI community's challenges and prizes, including the annual Loebner Prize, in which the judges quiz a computer and a human being via another computer and work out which is which. The most convincing AI system wins the prize.
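
To give a flavour of the format, here's a minimal Python sketch of a single Loebner-style judging round. The ask_human and ask_machine functions are made-up stand-ins for the two hidden conversation partners; the real contest runs to far stricter rules.

import random

# Hypothetical stand-ins for the two hidden conversation partners; the
# judge never knows in advance which terminal is which.
def ask_human(question):
    return input(question + "\n(hidden human types a reply) > ")

def ask_machine(question):
    return "That's an interesting question."     # placeholder chatbot

def judging_round(questions):
    partners = {"A": ask_human, "B": ask_machine}
    if random.random() < 0.5:                    # shuffle which terminal is which
        partners = {"A": ask_machine, "B": ask_human}
    for question in questions:
        for terminal, partner in partners.items():
            print("Terminal %s: %s" % (terminal, partner(question)))
    guess = input("Which terminal was the machine, A or B? ").strip().upper()
    return partners.get(guess) is ask_machine    # True if the judge spotted it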

Turing also gave his name to the annual Turing Award, which Professor Ross King, who heads the Department of Computer Science at Aberystwyth University, described as the computing equivalent of the Nobel Prize.

Turing aside, there were also plenty of other advances in the 1950s. Professor King cites the Logic Theorist program as one of the earliest milestones. Developed between 1955 and 1956 by JC Shaw, Allen Newell and Herbert Simon, Logic Theorist introduced the idea of solving logic problems with a computer via an automated reasoning system that searched decision trees.

Not only that, but it also brought us heuristics: rules of thumb for pruning branches of the tree that were unlikely to lead to a satisfactory solution.
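
Newell, Shaw and Simon actually wrote Logic Theorist in their own IPL language, so the snippet below is only a rough Python sketch of the general idea: a best-first search over a tree of possible proof steps, with a crude pruning rule standing in for their heuristics. The expand and score functions are hypothetical placeholders supplied by the caller.

import heapq
from itertools import count

def heuristic_search(start, is_goal, expand, score, beam_width=5):
    # Best-first search over a tree of states, pruning unpromising branches.
    #   start      -- the initial state (say, the theorem to be proved)
    #   is_goal    -- returns True when a state counts as a solution
    #   expand     -- returns the child states reachable from a state
    #   score      -- heuristic estimate; lower means more promising
    #   beam_width -- how many candidates to keep; the rest are discarded
    tie = count()                         # tie-breaker so states are never compared
    frontier = [(score(start), next(tie), start)]
    while frontier:
        _, _, state = heapq.heappop(frontier)
        if is_goal(state):
            return state                  # found a proof / solution
        for child in expand(state):
            heapq.heappush(frontier, (score(child), next(tie), child))
        # Prune: keep only the most promising branches, a crude stand-in
        # for Logic Theorist's heuristics for discarding unlikely subtrees.
        frontier = heapq.nsmallest(beam_width, frontier)
        heapq.heapify(frontier)
    return None                           # search exhausted without success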

Logic Theorist was demonstrated in 1956 at the Dartmouth Summer Research Project on Artificial Intelligence, organised by computer scientist John McCarthy, which saw the first use of the term 'artificial intelligence'.

The conference bravely stated the working principle that 'every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it'.

The AI revolution had kicked off with a bang, and these impressive early breakthroughs led many to believe that fully fledged thinking machines would arrive by the turn of the millennium. In 1967, Herman Kahn and Anthony J Wiener's predictive tome The Year 2000 stated that "by the year 2000, computers are likely to match, simulate or surpass some of man's most 'human-like' intellectual abilities."

Meanwhile, Marvin Minsky, one of the organisers of the Dartmouth AI conference and winner of the Turing Award in 1969, suggested in 1967 that "within a generation ... the problem of creating 'artificial intelligence' will substantially be solved".

You can see why people were so optimistic, considering how much had been achieved already. But why are we still so far from these predictions?

More than chess players

"The artificial intelligence community was so impressed with the really cool algorithms they were able to come up with and these toy prototypes in the early days," explains Ferrucci.

"They were very inspiring, innovative and extremely suggestive. However, the reality of the engineering requirements and what it really takes to make this work was much harder than anybody expected."

The word 'toy' is the key one here. Ferrucci refers to a paper from 1970 called 'Reviewing the State of the Art in Automatic Questioning and Answering', which concluded that "all the systems at the time were toy systems. The algorithms were novel and interesting, but from a practical perspective they were ultimately unusable."

For example, by the 1970s computers could play chess reasonably well, which rapidly led to false expectations about AI in general. "We think of a great chess player as being really smart," says Ferrucci. "So, we then say that we have an artificially intelligent program if it can play chess."

However, Ferrucci also points out that a human characteristic that marks us out as intelligent beings is our ability to communicate using language. "Humans are so incredibly good at using context and cancelling out noise that's irrelevant and being able to really understand speech," says Ferrucci, "but just because you can speak effectively and communicate doesn't make you a super-genius."

IBM's Deep Blue computer might have beaten chess champion Garry Kasparov back in 1997, but even now computers struggle to communicate with a human through natural language.

Thinking robots

Language isn't everything when it comes to AI, though. Earlier this year, Ross King's department at Aberystwyth University demonstrated an incredible robotic machine called Adam that could make scientific discoveries by itself.

"Adam can represent science in logic," explains King, "and it can infer new hypotheses about what can possibly be true in this area of science. It uses a technique called abduction, which is like deduction in reverse. It's the type of inference that Sherlock Holmes uses when he solves problems – he thinks [about] what could possibly be true to explain the murder, and once he's inferred that then he can deduce certain things from what he's observed.

ALMOST AUTONOMOUS: Ross King's Adam machine can make scientific discoveries on its own

"Adam can then abduce hypotheses, and infer what would be efficient experiments to discriminate between different hypotheses, and whether there's evidence for them," King expands. "Then it can actually do the experiments using laboratory automation, and that's where the robots come in. It can not only work out what experiment to do; it can actually do the experiment, and it can look at the results and decide whether the evidence is consistent with the hypotheses or not."

Adam has already successfully performed experiments on yeast, in which it discovered the purpose of 12 different genes. The full details can be found in a paper called 'The Automation of Science' in the journal Science.

King's team are now working on a new robot called Eve that can do similar tasks in the field of drug research.

Understanding language

Adam is an incredible achievement, but as King says, "the really hard problems you see are to do with humans interacting. One of the advantages with science as a domain is that you don't have to worry about that. If you do an experiment, it doesn't try to trick you on purpose."

Getting a computer to communicate with a human is a definite struggle, but it's a field that's progressing. As a case in point, the chatbot Jabberwacky gets better at communicating every day. I log into it, and it asks if I like Star Wars. I tell it that I do, and ask the same question back. Jabberwacky tells me that it does like Star Wars. "Why?" I ask.

"It's a beautiful exploration, especially for the mainstream, of dominance and submission," it says. I think I smell a rat, and I ask Jabberwacky's creator Rollo Carpenter what's going on. "None of the answers are programmed," claims Carpenter. "They're all learned."

Jabberwacky thrives on constant input from users, which it can then analyse and store in its extensive database. "The first thing the AI said was what I had just said to it," explains Carpenter, but 12 years later it now has over 19 million entries in its database.

With more input, Jabberwacky can use machine learning to discover more places where certain sentences are appropriate. Its opinion on Star Wars was a response from a previous user that it quoted verbatim at the appropriate time.

The smart part here isn't what it says, but understanding the context. However, Carpenter is confident that it will soon evolve beyond regurgitating verbatim sentences.
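
Carpenter hasn't published Jabberwacky's internals, but the behaviour he describes – remember every line users type, then replay the stored reply whose original context best matches what has just been said – can be imitated with a toy Python class. Simple word overlap stands in here for Jabberwacky's far richer contextual scoring.

class LearningChatbot:
    # Toy context-retrieval bot: it only ever replays lines it has heard.

    def __init__(self):
        self.memory = []          # list of (context, reply) pairs learned from users

    def respond(self, utterance):
        if not self.memory:
            return utterance      # with nothing learned yet, simply echo the user
        # Replay the stored reply whose original context shares the most words
        # with what was just said -- a crude stand-in for Jabberwacky's scoring.
        words = set(utterance.lower().split())
        context, reply = max(
            self.memory,
            key=lambda pair: len(words & set(pair[0].lower().split())),
        )
        return reply

    def learn(self, context, reply):
        # Called once the user answers, so their line can be reused later.
        self.memory.append((context, reply))

bot = LearningChatbot()
print(bot.respond("Do you like Star Wars?"))                # echoes at first
bot.learn("Do you like Star Wars?", "Yes. It's a beautiful exploration.")
print(bot.respond("Star Wars - do you like it?"))           # replays the learned reply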

"The generation of all sentences will come quite soon," says Carpenter. "It's already in use in our commercial AI scripting tools, and will be applied to the learning AI."

Carpenter's latest project is Cleverbot, which uses a slightly different technique for understanding language.

HIT AND MISS: Cleverbot sometimes says inappropriate things, but on occasion it's indistinguishable from a human

"Jabberwacky uses search techniques," explains Carpenter, "whittling down selections ever-smaller for numerous and ever-more contextual reasons until a final decision is made. Cleverbot uses fuzzy string comparison techniques to look into what's been said and their contexts in more depth. When responses appear planned or intelligent, it's always because of these universal contextual techniques, rather than programmed planning or logic."

So convincing is Cleverbot that Carpenter regularly gets emails from people thinking that the chatbot is occasionally switched with a real person. Cleverbot's answers aren't always convincing, but Carpenter's techniques have managed to secure him the Loebner Prize for the 'most humanlike' AI in 2005 and 2006.

It's elementary

However, perhaps the biggest milestone when it comes to natural language is IBM's massive Watson project, which Ferrucci says uses "about 1,000 compute nodes, each of which has four cores".

The huge amount of parallelisation is needed because of the intensive searches Watson initiates to find its answers. Watson's knowledge comes from dictionaries, encyclopedias and books, but IBM wanted to shift the focus away from databases and towards processing natural language.

"The underlying technology is called Deep QA," explains Ferrucci. "You can do a grammatical parse of the question and try to identify the main verb and the auxiliary verbs. It then looks for an answer, so it does many searches."

Each search returns a long list of possibly relevant passages, documents and facts, and each of those could contain several possible answers. This means Watson can end up with hundreds of candidate answers to a single question.

Watson then has to analyse them using statistical weights to work out which answer is most appropriate. "With each one of those answers, it searches for additional evidence from existing structured or unstructured sources that would support or refute those answers, and the context," says Ferrucci.

Once it has its answer, Watson speaks it back to you with a form of voice synthesis, putting together the various sounds of human speech (phonemes) to make the sound of the words that it's retrieved from its language documents. In order to succeed in the Jeopardy! challenge, Watson has to buzz in and speak its answer intelligibly before its human opponents.

Not only that, but it has to be sufficiently confident in its answer – if it isn't, it won't buzz in.
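
IBM hasn't published DeepQA itself, so the following is only a schematic Python sketch of the stages Ferrucci describes: run several searches over the source material, pool the candidate answers, weigh each against the supporting evidence, and only buzz in when the best candidate clears a confidence threshold. The searchers and gather_evidence arguments are hypothetical placeholders for Watson's far more elaborate machinery.

def deep_qa(question, searchers, gather_evidence, confidence_threshold=0.8):
    # Schematic candidate-generation-and-scoring loop.
    #   searchers       -- functions that each return candidate answers,
    #                      standing in for Watson's many parallel searches
    #   gather_evidence -- returns a 0-1 score for how strongly the available
    #                      evidence supports a candidate answer in context
    candidates = set()
    for search in searchers:                     # 1. generate candidates
        candidates.update(search(question))
    if not candidates:
        return None                              # nothing found: stay silent

    # 2. Weigh every candidate against supporting (or refuting) evidence.
    scored = {answer: gather_evidence(question, answer) for answer in candidates}
    best = max(scored, key=scored.get)

    # 3. Only answer -- "buzz in" -- when confident enough.
    if scored[best] >= confidence_threshold:
        return "What is %s?" % best              # Jeopardy!-style phrasing
    return None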

Watson doesn't always get it right, but it's close. On CNN, the computer was asked which desert covers 80 per cent of Algeria. Watson replied "What is Sahara?" The correct answer is in there, and intelligible, but it was inappropriately phrased.

The future

As you can see, we're still a long way from creating HAL, or even passing the Turing Test, but the experts are still confident that this will happen. Ross King says that this is 50 years away, but David Ferrucci says that 50 years would be his "most pessimistic" guess.

His optimistic guess is 10 years, but he adds that "we don't want a repeat of when AI set all the wrong expectations. We want to be cautious, but we also want to be hopeful, because if the community worked together it could surprise itself with some really interesting things."

The AI community is currently divided into specialist fields, but Ferrucci is confident that if everyone worked together, a realistic AI that could pass the Turing Test would certainly arrive much quicker.

"We need to work together, and hammer out a general-purpose architecture that solves a broad class of problems," says Ferrucci. "That's hard to do. It requires many people to collaborate, and one of the most difficult things to do is to get people to decide on a single architecture, but you have to because that's the only way you're going to advance things."

The question is whether that's a worthwhile project, given everybody's individual goals, but Ferrucci thinks that's our best shot. Either way, although the timing of the early visionaries' predictions was off by a fair way, the AI community still looks set to meet those predictions later this century.

-------------------------------------------------------------------------------------------------------

First published in PC Plus Issue 286
