Researchers at Los Alamos National Laboratory are studying computer systems that work like the neurons inside the human brain. They found that artificial intelligence may have to sleep to function correctly, according to a recent report in Scientific American.

“It likely would come as no surprise to any teacher of young children that we found that our networks became unstable after continuous periods of learning,” wrote AI researcher Garrett Kenyon. “However, when we exposed the networks to states that are analogous to the waves that living brains experience during sleep, stability was restored. It was as though we were giving the neural networks the equivalent of a good, long nap.”

Kenyon and his team made their discovery while training neural networks to see objects much as humans do. The networks were asked to classify objects without any examples to compare them against. The networks began “spontaneously generating images that were analogous to hallucinations,” Kenyon said. Once they were allowed the electronic equivalent of sleep, the hallucinations stopped.
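The rhythm behind that “nap” can be pictured with a toy model. The sketch below is purely illustrative, not the team’s actual code: it assumes a simple Hebbian learner trained without labels, and a “sleep” phase driven by Gaussian noise loosely in the spirit of the slow-wave-like states Kenyon describes. The function names, update rule, and parameters are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def hebbian_step(W, x, lr=0.01):
    # Unsupervised (label-free) update in the style of Oja's rule:
    # units gradually learn to respond to recurring structure in the input.
    y = W @ x
    return W + lr * (np.outer(y, x) - (y[:, None] ** 2) * W)

def sleep_step(W, lr=0.01):
    # "Sleep" phase (illustrative): drive the network with Gaussian noise
    # instead of data, then renormalize each unit's weights to pull the
    # network back toward a stable operating point.
    W = hebbian_step(W, rng.standard_normal(W.shape[1]), lr)
    return W / np.linalg.norm(W, axis=1, keepdims=True)

# Toy run: long "awake" stretches of learning, interrupted by brief naps.
W = 0.1 * rng.standard_normal((4, 16))
for step in range(10_000):
    x = rng.standard_normal(16) + 2.0      # stand-in for real sensory input
    W = hebbian_step(W, x)
    if step % 500 == 499:                  # periodic nap
        for _ in range(50):
            W = sleep_step(W)
print("weight norms after training:", np.linalg.norm(W, axis=1))
```

The point is only the rhythm: learn for a while, let noise and renormalization settle the weights, then learn again.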

Sleep, or ‘Sleep’?

But physicist Stephen L. Thaler, the president and CEO of machine intelligence company Imagination Engines, cautions against taking the term “sleep” too literally when it is applied to AI. “Instead, it needs to cycle between chaos and calm,” he said in an email interview. “So, even risk exercise (i.e., adrenaline—noradrenaline secretion from contact sports or skydiving) followed by relaxation (e.g., serotonin and GABA secretion, as when Einstein got on his sailboat or played his violin) will promote original synthetic thought.”

Previous research has found that, like humans, neural networks perform better when allowed to sleep. Computer scientists in Italy found that programming a neural network to sleep could strip out unnecessary information and, ultimately, make it more efficient. The machines were programmed with the computer equivalent of rapid-eye-movement sleep and slow-wave sleep.

“Inspired by sleeping and dreaming mechanisms in mammal brains, we propose an extension of this model displaying the standard on-line (awake) learning mechanism (that allows the storage of external information in terms of patterns) and an off-line (sleep) unlearning & consolidating mechanism,” the researchers wrote in their paper.
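The awake/asleep split the Italian team describes can be illustrated with a classic associative-memory toy: Hebbian storage while “awake,” and an unlearning pass while “asleep” that erodes spurious memories. The sketch below uses the well-known Hopfield unlearning idea rather than the authors’ exact update rule; the network size, learning rate, and epoch counts are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 64, 10

# "Awake" learning: store P random patterns with the Hebbian rule.
patterns = rng.choice([-1, 1], size=(P, N))
J = (patterns.T @ patterns) / N
np.fill_diagonal(J, 0.0)

def relax(state, J, steps=500):
    # Asynchronous updates; enough steps for the state to (usually)
    # settle into an attractor of the network.
    for _ in range(steps):
        i = rng.integers(N)
        state[i] = 1 if J[i] @ state >= 0 else -1
    return state

def sleep(J, epochs=300, eps=0.01):
    # "Sleep" phase (illustrative): let the network free-run from random
    # states (its "dreams") and slightly weaken whatever attractor it lands
    # in. With a small eps, spurious mixture memories tend to be eroded
    # faster than the genuinely stored patterns.
    for _ in range(epochs):
        dream = relax(rng.choice([-1, 1], size=N), J)
        J = J - eps * np.outer(dream, dream) / N
        np.fill_diagonal(J, 0.0)
    return J

J = sleep(J)
# Check whether a stored pattern is still a stable memory after "sleep".
recalled = relax(patterns[0].copy(), J)
print("first pattern still recalled:", np.array_equal(recalled, patterns[0]))
```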

Dreaming of Electric Sheep

Not only does AI need to sleep, but it may dream as well. It may be possible for an AI to arrive at new answers or learn new ways of doing things by dreaming, John Suit, advising chief technology officer at robotics company KODA, said in an email interview.

“This is how humans work,” he added. “We are presented with problems or challenges, we overcome them, and we learn. If we don’t learn the best way, we are faced with new very similar challenges until we arrive at the best or ‘wise’ answer. A dream state may be the ‘key’ to achieving this for AI.”

KODA is developing a robot dog, and Suit said he is often asked whether the dog will dream. “The answer we give to all of these is that it may be possible,” he said. “With a robot, not just a dog, you have a variety of sensors, plus serious computing power for real decentralized AI. This means they are processing input from several sensors in real-time, referencing its knowledge base, and performing all the functions it needs to.”

Humans tend to imagine bizarre images when they dream, and it turns out that AI may do the same. A team of Google engineers announced in 2015 that a neural network could “dream” up objects. They used Google’s image recognition software, which relies on neural networks loosely modeled on the human brain, and ran an experiment to see what images the networks would “dream.”

The Google team created the “dreams” by feeding a picture into the network. They then asked the network to recognize a feature of the image and modify the picture to emphasize the part it recognized. The altered picture was then fed back into the system, and over many passes this feedback loop (sketched in code at the end of this section) changed the picture beyond all recognition. The results of the experiment were bizarre, and some might even call them artistic.

“The results are intriguing—even a relatively simple neural network can be used to over-interpret an image, just like as children we enjoyed watching clouds and interpreting the random shapes,” the engineers wrote on a Google blog. “This network was trained mostly on images of animals, so naturally, it tends to interpret shapes as animals. But because the data is stored at such a high abstraction, the results are an interesting remix of these learned features.”

Thaler argues that AI will need to sleep and dream more as the field progresses. “One cannot have capable AI without creativity,” he said. “That creativity stemming from the cycling of simulated neurotransmitter levels within artificial neural nets, those cycles, in turn, the result of the ebb and flow (sleep and wakefulness) of said simulated neurotransmitters.”

More ominously, Thaler said that AI could also eventually suffer from mental illnesses. “It will experience the same pathologies as human minds as the above swings in neurotransmitter levels occur (e.g., bipolar disorders, schizophrenia, OCD, criminality, etc.),” he added.
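For the curious, here is roughly what that Google-style feedback loop looks like in practice. This is a hedged sketch, not Google’s original code: it assumes PyTorch and a pretrained torchvision GoogLeNet are available, picks the inception4c layer arbitrarily, and uses a placeholder file name for the input image.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Pretrained image-recognition network; a forward hook captures how strongly
# one intermediate layer responds to the current picture.
model = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1).eval()
acts = {}
model.inception4c.register_forward_hook(lambda mod, inp, out: acts.update(out=out))

img = transforms.ToTensor()(Image.open("input.jpg")).unsqueeze(0)  # placeholder image
img.requires_grad_(True)

for step in range(50):                 # the "dream" feedback loop
    model(img)
    score = acts["out"].norm()         # how strongly the layer "sees" its features
    score.backward()
    with torch.no_grad():
        # Nudge the picture to exaggerate whatever the layer recognized,
        # then feed the altered picture back in on the next pass.
        img += 0.01 * img.grad / (img.grad.abs().mean() + 1e-8)
        img.clamp_(0.0, 1.0)
    img.grad = None

transforms.ToPILImage()(img.squeeze(0).detach()).save("dream.jpg")
```

Running the loop long enough is what pushes the image “beyond all recognition,” as described above.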

AI on Drugs? 

Sleep might not even be necessary for AI to alter its consciousness. According to a recent article published in the journal Neuroscience of Consciousness, drugs might do just as well. In the study, researchers discussed how psychedelic drugs such as DMT, LSD, and psilocybin could alter the function of serotonin receptors in the nervous system. To investigate this phenomenon, they gave virtual versions of the drugs to neural network algorithms to see what would happen.

The result? AI can trip, it seems. The networks’ usually photorealistic outputs became distorted blurs, similar to how people have described their DMT trips. “The process of generating natural images with deep neural networks can be perturbed in visually similar ways and may offer mechanistic insights into its biological counterpart—in addition to offering a tool to illustrate verbal reports of psychedelic experiences,” Michael Schartner, the paper’s co-author and a member of the International Brain Laboratory at the Champalimaud Centre for the Unknown in Lisbon, wrote in the article.

The field of artificial intelligence is advancing rapidly. Perhaps it’s time, though, to consider whether AI will be getting enough naps before it starts taking over the world. The dreams of machines could be enlightening or frightening.