Computer learns human language to teach itself to play games
Researchers at MIT have developed a machine-learning system that lets a computer read the instructions for playing Civilization - written in any one of several languages - and use them to improve its play.
Notably, game manuals don't give specific instructions for winning - just very general advice. Even so, once the computer was given the manual, its rate of victory jumped from 46 percent to 79 percent.
"Games are used as a test bed for artificial-intelligence techniques simply because of their complexity," says SRK Branavan, a graduate student at MIT's Computer Science and Artificial Intelligence Laboratory.
"Every action that you take in the game doesn't have a predetermined outcome, because the game or the opponent can randomly react to what you do. So you need a technique that can handle very complex scenarios that react in potentially random ways."
The system begins with virtually no prior knowledge about either the task or the language in which the instructions are written. It has a list of actions it can take - like right-clicks, left-clicks or moving the cursor.
It also has access to the information displayed on screen, and some way of gauging its success. But it doesn't know what actions correspond to what words in the instruction set, and it doesn't know what the objects in the game world represent.
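The setup described above can be sketched as a simple agent interface. This is a hypothetical illustration, not the researchers' actual code: the action names, screen words, and scoring are placeholders, and the random score stands in for the unpredictable reactions of the game and opponent.

```python
import random

# Illustrative action vocabulary: the system starts with a fixed list
# of low-level actions it can take, per the article.
ACTIONS = ["left_click", "right_click", "move_cursor"]

class GameInterface:
    """Stand-in for the game: exposes on-screen text and a success score.
    All specifics here are invented for illustration."""

    def __init__(self):
        self.screen_words = ["settler", "city", "grassland"]
        self.score = 0.0

    def step(self, action):
        # The outcome of an action isn't predetermined: the game or the
        # opponent may react randomly, modeled here as random reward.
        self.score += random.uniform(-1.0, 1.0)
        return self.screen_words, self.score
```

The key point is what the agent does *not* receive: no mapping from manual words to actions, and no model of what the on-screen objects mean.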
Initially, then, its behavior is almost totally random. But as it takes various actions, different words appear on screen, and it can look for instances of those words in the instruction set. It can also search the surrounding text for associated words, and develop hypotheses about what actions those words correspond to. Hypotheses that consistently lead to good results are given greater credence, while those that consistently lead to bad results are discarded.
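The learning loop described above - form hypotheses linking on-screen words to actions, then strengthen or weaken them according to results - can be sketched as a weighted-association update. This is a minimal illustration under invented assumptions (the manual text, action names, and update rule are all placeholders), not the actual Monte-Carlo method used in the research.

```python
import random
from collections import defaultdict

# A toy stand-in for the instruction text.
MANUAL = "build a city on grassland then click the settler unit"

ACTIONS = ["left_click", "right_click", "move_cursor"]

# weights[(word, action)]: credence that this manual word suggests this action.
weights = defaultdict(float)

def choose_action(screen_words, epsilon=0.2):
    """Pick the action whose manual-word associations score highest.
    With no learned weights yet, behavior is essentially random."""
    if random.random() < epsilon or not weights:
        return random.choice(ACTIONS)
    # Only words that also appear in the instructions carry evidence.
    relevant = [w for w in screen_words if w in MANUAL.split()]
    return max(ACTIONS, key=lambda a: sum(weights[(w, a)] for w in relevant))

def update(screen_words, action, reward, lr=0.1):
    """Hypotheses that lead to good results gain credence;
    those that lead to bad results lose it."""
    for w in screen_words:
        if w in MANUAL.split():
            weights[(w, action)] += lr * reward
```

A usage pattern would be to call `choose_action` each turn, observe the resulting score change, and feed it back through `update`, so that word-action pairings that consistently pay off come to dominate the policy.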
The researchers also tested the approach on written software-installation instructions; there, the system reproduced 80 percent of the steps that a human reading the same instructions would execute. In the case of Civilization, it won 79 percent of the games it played, while a version that didn't rely on the written instructions won only 46 percent.
"If you'd asked me beforehand if I thought we could do this yet, I'd have said no," says Eugene Charniak, University Professor of Computer Science at Brown University. "You are building something where you have very little information about the domain, but you get clues from the domain itself."
Most complex computer games include built-in algorithms that let players compete against the computer rather than against other people, which means programmers have to devise strategies for the computer to follow and then write the code that executes them. Branavan says the MIT system could make that job much easier by automatically generating better-performing algorithms.