Scientists give computer schizophrenia

Researchers at Yale and the University of Texas say they’ve made a computer schizophrenic – causing it to claim responsibility for a terrorist incident.

The team was examining the ‘hyperlearning’ theory of schizophrenia, which suggests that sufferers’ brains lose the ability to forget or ignore as much as they normally would. Unable to extract what’s meaningful from the immensity of stimuli the brain encounters, sufferers start making connections that aren’t real.

“The hypothesis is that dopamine encodes the importance – the salience – of experience,” says Uli Grasemann of the University of Texas.

“When there’s too much dopamine, it leads to exaggerated salience, and the brain ends up learning from things that it shouldn’t be learning from.”

The team used a neural network called DISCERN to simulate the excessive release of dopamine in the brain, and found that the network recalled memories in a distinctly schizophrenia-like fashion.

They started by teaching it a series of simple stories, which were assimilated into DISCERN’s memory in much the same way the human brain stores information – not as distinct units, but as statistical relationships between words, sentences, scripts and stories.
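
A toy associative memory gives a loose sense of what ‘not as distinct units’ means. The sketch below is our own illustration, not DISCERN’s architecture: it superimposes several patterns onto a single Hopfield-style weight matrix, so every ‘story’ is spread across all of the weights rather than filed in its own slot, yet each can still be recalled from a partial cue. The encoding, sizes and seed are all made up for the demo.

```python
import numpy as np

rng = np.random.default_rng(1)

# Three toy "stories" encoded as +/-1 vectors. The encoding and
# dimensions are our own illustration, not DISCERN's representation.
stories = rng.choice([-1.0, 1.0], size=(3, 32))

# Hebbian storage: all three stories are superimposed on ONE weight
# matrix, so no individual weight holds any one story.
W = sum(np.outer(s, s) for s in stories) / stories.shape[1]
np.fill_diagonal(W, 0.0)

# Recall from a corrupted cue: flip a few elements of story 0, then
# let the network settle for a few update steps.
cue = stories[0].copy()
cue[:3] *= -1
state = cue
for _ in range(5):
    state = np.where(W @ state >= 0, 1.0, -1.0)

# An overlap of 1.0 means story 0 was recovered exactly.
print("overlap with story 0:", float(state @ stories[0]) / stories.shape[1])
```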

Then, in order to model hyperlearning, they ran the system through its paces again, but with one key parameter altered. They simulated an excessive release of dopamine by increasing the system’s learning rate — essentially telling it to stop forgetting so much.
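
The study’s actual retraining procedure isn’t reproduced here, but the effect of this one parameter is easy to demonstrate on a much smaller model. In the sketch below – our own toy, with made-up sizes and rates, standing in for DISCERN’s training – a linear autoassociative memory is trained with the delta rule. At a modest learning rate it stores its ‘stories’ faithfully; past a stability threshold, the same training loop makes every update overshoot, and recall degrades instead of improving.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy story patterns; encoding and dimensions are ours, not the study's.
stories = rng.choice([-1.0, 1.0], size=(3, 16))

def train(patterns, lr, epochs=100):
    """Delta-rule training of a linear autoassociative memory W,
    nudging W @ p toward p for each stored pattern."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for _ in range(epochs):
        for p in patterns:
            W += lr * np.outer(p - W @ p, p)
    return W

def recall_error(W, patterns):
    """Mean squared error between the patterns and their recall."""
    return float(np.mean((patterns - patterns @ W.T) ** 2))

# A modest rate stores the stories cleanly; an excessive one (the
# "hyperlearning" analogue) makes the very memories the network is
# supposed to consolidate come apart.
for lr in (0.005, 0.2):
    W = train(stories, lr)
    print(f"learning rate {lr}: recall error = {recall_error(W, stories):.3g}")
```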

“It’s an important mechanism to be able to ignore things,” says Grasemann. “What we found is that if you crank up the learning rate in DISCERN high enough, it produces language abnormalities that suggest schizophrenia.”

And, after the retraining, DISCERN began putting itself at the center of fantastical, delusional stories that incorporated elements from other stories it had been told. In one answer, for instance, it claimed responsibility for a terrorist bombing.

In another instance, it began showing evidence of ‘derailment’, replying to requests for a specific memory with a jumble of dissociated sentences, abrupt digressions and constant leaps from the first person to the third and back again.

“Information processing in neural networks tends to be like information processing in the human brain in many ways,” says Grasemann. “So the hope was that it would also break down in similar ways. And it did.”

The results aren’t absolute proof that the hyperlearning hypothesis is correct, says Grasemann – but they do support it.

“We have so much more control over neural networks than we could ever have over human subjects,” he says. “The hope is that this kind of modeling will help clinical research.”