Machine-learning system learns language by playing games
July 13, 2011
Researchers at University College London have augmented a machine-learning system so that it could use a player’s manual to guide the development of a game-playing strategy for the game “Civilization,” causing its rate of victory to jump from 46 percent to 79 percent.
The machine-learning system began with virtually no prior knowledge about the task it was intended to perform or the language in which the instructions were written.
It had a list of actions it could take: for example, right-clicking, left-clicking, or moving the cursor. It had access to the information displayed on-screen, and it had some way of gauging its success: whether the software had been installed, or whether it won the game. But it didn’t know which actions corresponded to which words in the instruction set, and it didn’t know what the objects in the game world represented.
Initially, its behavior was almost totally random. But as it took various actions, different words appeared on screen, and it could look for instances of those words in the instruction set. It could also search the surrounding text for associated words and develop hypotheses about which actions those words corresponded to. Hypotheses that consistently led to good results were given greater credence, while those that consistently led to bad results were discarded.
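The trial-and-error loop described above can be sketched in miniature. The following is a toy illustration, not the researchers’ actual algorithm: it maintains weighted hypotheses linking instruction words to actions, tries actions, and reinforces hypotheses that precede good outcomes. The word list, action names, multiplicative update factors, and the simulated "true" word-to-action mapping are all assumptions invented for this example.

```python
import random

random.seed(0)  # for reproducibility of this toy run

# Hypothetical vocabulary and action set (assumptions for illustration).
ACTIONS = ["left_click", "right_click", "move_cursor"]
INSTRUCTION_WORDS = ["click", "install", "move"]

# weights[(word, action)]: credence that this word corresponds to this action.
weights = {(w, a): 1.0 for w in INSTRUCTION_WORDS for a in ACTIONS}

def choose_action(word):
    # Pick the action with the highest current credence for this word,
    # breaking ties randomly (initial behavior is therefore near-random).
    best = max(weights[(word, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if weights[(word, a)] == best])

def update(word, action, reward):
    # Strengthen hypotheses that led to good results; weaken bad ones.
    weights[(word, action)] *= 1.1 if reward > 0 else 0.9

# Toy "environment": the mapping the system is supposed to discover.
TRUE_MAPPING = {"click": "left_click",
                "install": "right_click",
                "move": "move_cursor"}

for _ in range(200):
    word = random.choice(INSTRUCTION_WORDS)
    action = choose_action(word)
    update(word, action, 1 if action == TRUE_MAPPING[word] else -1)

# After training, the highest-credence action for each word matches the
# environment's true mapping.
learned = {w: max(ACTIONS, key=lambda a: weights[(w, a)])
           for w in INSTRUCTION_WORDS}
print(learned)
```

Because a wrong guess shrinks a hypothesis’s weight while a right guess grows it, the greedy choice locks onto the correct action after only a few trials per word, which mirrors the article’s description of random behavior giving way to instruction-guided play.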
In the case of software installation, the system was able to reproduce 80 percent of the steps that a human reading the same instructions would execute. In the case of the computer game, it won 79 percent of the games it played, while a version that didn’t rely on the written instructions won only 46 percent.
The researchers have begun to adapt their meaning-inferring algorithms to work with robotic systems.