Virtual character with child-like reasoning abilities enters Second Life

  • Troy (MI) – Interacting with avatars in a virtual world may soon change dramatically: Researchers are trying to find ways to create lifelike synthetic characters that act just like humans and would make it difficult for players to determine whether they are dealing with a real or a simulated person. Scientists from the Rensselaer Polytechnic Institute claim to have completed a critical first step, injecting Eddie, a virtual character with the reasoning abilities of a 4-year-old child, into Second Life.

    Eddie is part of a project whose goal is to engineer characters that hold their own beliefs and can form opinions about the beliefs of others. Eventually, the researchers believe, such synthetic characters will be able to predict and manipulate the behavior of their human co-players: This would allow them to simulate a scenario in which human players could not tell whether they are dealing with a human or a synthetic character.


    “Current avatars in massively multiplayer online worlds - such as Second Life - are directly tethered to a user’s keystrokes and only give the illusion of mentality,” said Selmer Bringsjord, head of Rensselaer’s Cognitive Science Department and leader of the research project. “Truly convincing autonomous synthetic characters must possess memories; believe things, want things, remember things.”

    Bringsjord’s group believes such characters can only be created with logic-based artificial intelligence and computational cognitive modeling techniques backed by the processing power of a supercomputer. “The principles and techniques that humans deploy in order to understand, predict, and manipulate the behavior of other humans are collectively referred to as a theory of mind,” they said. According to a press release, the research is now starting to engineer part of that theory, which would enable artificial agents to understand, predict, and manipulate the behavior of other agents, “in order to be genuine stand-ins for human beings or autonomous intellects in their own right.”

    Such a simulation would, of course, include complex behavior – and not just its good side. According to Bringsjord, the work also covers “declarative definitions of all of the concepts central to a theory of the mind, including lying, betrayal, and even evil.”

    Eddie apparently still has some way to go before reaching that goal. At this time, the researchers say, the character has the reasoning abilities of a 4-year-old child – including the ability to draw the line between his own beliefs and what others may think and do. In a demonstration video, Bringsjord’s group showed that Eddie can in fact predict simple actions of other characters, even when those actions contradict his own knowledge.

    “Our aim is not to construct a computational theory that explains and predicts actual human behavior, but rather to build artificial agents made more interesting and useful by their ability to ascribe mental states to other agents, reason about such states, and have — as avatars — states that are correlates to those experienced by humans,” Bringsjord said. “Applications include entertainment and gaming, but also education and homeland defense.”