(ANTIMEDIA) — We tend to think of artificial intelligences as flawless intellects, early prototypes of the powerful ‘artilects’ futurists imagine will one day rule our world. We also tend to think of them as immune to unhappy thoughts or feelings. But one company has created a machine-learning system that suffers from the AI equivalent of mental instability, and its creators deliberately designed it that way.
This tortured artist of an AI is called DABUS, short for “Device for the Autonomous Bootstrapping of Unified Sentience.” It was created by computer scientist Stephen Thaler, who used a technique called “generative adversarial networks” to mimic the extreme fluctuations in thought and emotion experienced by humans who suffer from mental illness. His Missouri-based company, Imagination Engines, developed a two-module process: Imagitron infuses digital noise into a neural network, prompting DABUS to generate new ideas and content; a second neural network, Perceptron, then assesses DABUS’s output and provides feedback. On top of that, they added their secret sauce.
This method of creating an echo chamber between neural networks is neither new nor unique. What is new is what Thaler and his company are using it for: deliberately tweaking an AI’s cognitive state to make its artistic output more experimental. Their process triggers ‘unhappy’ associations and fluctuations in rhythm, and the result is an AI that exhibits symptoms of insanity.
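The general shape of that echo chamber can be sketched in code. To be clear, this is a toy illustration, not Thaler's actual system: the module names Imagitron and Perceptron come from the article, but the network sizes, the noise-injection scheme, and the feedback rule below are all invented for demonstration. The idea it shows is simply a generator whose weights are perturbed by noise, paired with a second network that scores the output and feeds the score back into the noise level.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_net(sizes):
    """Random weight matrices for a tiny feed-forward network (hypothetical)."""
    return [rng.normal(0, 0.5, (a, b)) for a, b in zip(sizes, sizes[1:])]

def forward(weights, x):
    """Run input x through each layer with a tanh nonlinearity."""
    for w in weights:
        x = np.tanh(x @ w)
    return x

def generate(weights, latent, noise_scale):
    """Imagitron-style step: inject noise into the generator's weights,
    so a fixed latent input yields novel variations in the output."""
    noisy = [w + rng.normal(0, noise_scale, w.shape) for w in weights]
    return forward(noisy, latent)

def critique(critic_weights, output):
    """Perceptron-style step: a second network scores the output.
    Here the score is just the critic's mean activation (a stand-in
    for any learned notion of quality or novelty)."""
    return float(forward(critic_weights, output).mean())

gen = make_net([8, 16, 4])     # generator network
critic = make_net([4, 8, 1])   # critic network
latent = rng.normal(size=8)    # fixed seed idea

# Feedback loop: raise the noise when the critic is unimpressed,
# lower it when outputs score well -- a crude cognitive "mood swing".
noise = 0.1
history = []
for step in range(20):
    out = generate(gen, latent, noise)
    score = critique(critic, out)
    noise = min(1.0, noise * 1.2) if score < 0 else max(0.01, noise * 0.8)
    history.append((score, noise))
```

Pushed to either extreme, the loop loosely mirrors the symptoms Thaler describes: very high noise floods the generator with incoherent variation (the "hallucination and mania" end), while very low noise leaves it repeating near-identical output (the "reduced cognitive flow" end).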
“At one end, we see all the characteristic symptoms of mental illness, hallucinations, attention deficit and mania,” Thaler says, describing DABUS’s faculties and temperament. “At the other, we have reduced cognitive flow and depression.”
Thaler believes that integrating human-like problem-solving — and human-like flaws, such as mental illness — may significantly enhance an AI’s ability to create innovative artwork and subjective output. While everyone is familiar with the psychedelic and surreal canvases produced by Google’s Deep Dream algorithm, viewers may be uniquely impressed by the more measured and meditative work of DABUS.
Above: a few of DABUS’s surreal pieces, born of neural networks
Thaler also believes this technique will improve the abilities of AI in stock market prediction and autonomous robot decision-making. But what are the risks of infusing mental illness into a machine mind? Thaler believes there are limits, but that psychological problems could be just as natural to AI as they are to humans.
“The AI systems of the future will have their bouts of mental illness,” Thaler speculates. “Especially if they aspire to create more than what they know.”