A woman walking to a bus stop realizes that she forgot her keys; she suddenly turns around and runs home. Such spontaneous activities are hallmarks of animal behavior. Eager to capture the essence of the human brain, roboticists have tried to imitate these sorts of actions. It’s a daunting challenge. But a recent study in Science Advances offers a simple approach for mastering this feat, inducing a computerized neural network to spontaneously switch between several activities using algorithms that mimic the controlled chaos of the animal brain.
“Basically, our work is about how to design spontaneous behavior switching by exploiting chaotic dynamics,” says coauthor Kohei Nakajima, an applied mathematician at the University of Tokyo in Japan. Typically, engineers would design a robot to walk and run; the experimenter would use an external handheld controller to toggle between those behaviors. But to make the leap from such a controlled setting to one in which the robot can switch behaviors autonomously, the researchers sought to emulate chaotic itinerancy. Frequently observed in the animal brain and other dynamic systems, chaotic itinerancy occurs when a system unpredictably, but deterministically, switches between several stereotypical patterns, whether walking, running, or any number of other behaviors.
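The "deterministic yet unpredictable" ingredient of chaotic itinerancy can be seen in even the simplest chaotic system. The snippet below is not from the study; it is a standard textbook illustration, the logistic map, showing how a fully deterministic rule still defies long-range prediction:

```python
# The logistic map x -> r*x*(1-x) is fully deterministic, yet two
# trajectories that start almost identically soon become unrecognizably
# different. That sensitivity is what makes chaotic switching
# unpredictable even though no randomness is involved.
r = 3.99
a, b = 0.400000, 0.400001   # initial conditions differing by one millionth
gap = 0.0
for _ in range(60):
    a = r * a * (1 - a)
    b = r * b * (1 - b)
    gap = max(gap, abs(a - b))
print(gap)                  # the tiny initial difference has been amplified
```

A system exhibiting chaotic itinerancy harnesses exactly this kind of amplification to decide, on its own, when to hop from one stereotypical pattern to another.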
Roboticists have strived to mimic chaotic itinerancy before, notes lead author Katsuma Inoue, a PhD student at the University of Tokyo. One robot designed in 2006 simulated a human infant with somatosensory systems and hundreds of motors representing muscles in the body, each connected to multiple chaotic oscillators—the rough equivalent of motor neurons. The somatosensory systems communicated with the chaotic oscillators, which then signaled the “muscles” to move. Designed to mimic early human motor development, the system reproduced motions similar to chaotic itinerancy by alternating among several stereotypical behaviors, including crawling and rolling over.
Other work has sought to design spontaneous switching of behaviors in robots using a hierarchical structure with a higher-level neural network governing lower-level modules that correspond to each behavior, Nakajima says. In those experiments, however, “the learning should take lots of time in general,” he says.
To overcome those challenges, the new study skipped the hierarchical design. Instead, in a three-step method using a machine-learning framework, the researchers first defined several possible behaviors and trained a neural network to reproduce them according to commands. Then the researchers trained the network to switch between these behaviors in a specific order, and finally devised probabilistic transitions between these behaviors using chaotic dynamics. The result was a system with features of chaotic itinerancy.
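The flavor of those steps can be sketched with a small echo state network. Everything below is an illustrative assumption rather than the authors' implementation: the network size, the two sine-wave "behaviors," and a logistic map that merely stands in for the learned chaotic transitions of the paper's final step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy "behaviors": sine waves of different frequencies (an assumption;
# the study's behaviors are more elaborate).
T = 400
t = np.arange(T)
behaviors = [np.sin(2 * np.pi * t / 25), np.sin(2 * np.pi * t / 60)]

# A fixed random recurrent network (echo-state style).
N = 200
W = rng.normal(0, 1, (N, N))
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()  # spectral radius below 1
W_in = rng.normal(0, 0.5, (N, 2))              # inputs: [command, feedback]

def run(commands, targets=None, readout=None):
    """Drive the network; teacher-force the feedback when training."""
    x = np.zeros(N)
    fb = 0.0                            # feedback = previous output
    states, outs = [], []
    for k, c in enumerate(commands):
        x = np.tanh(W @ x + W_in @ np.array([c, fb]))
        states.append(x.copy())
        if readout is None:
            fb = targets[k]             # training: feed the true signal back
        else:
            fb = float(readout @ x)     # closed loop: feed the output back
            outs.append(fb)
    return np.array(states), np.array(outs)

# Step 1: collect states while each command (-1 or +1) drives its behavior.
X, Y = [], []
for cmd, target in zip([-1.0, 1.0], behaviors):
    s, _ = run(np.full(T, cmd), targets=target)
    X.append(s[50:])                    # drop the initial transient
    Y.append(target[50:])
X, Y = np.vstack(X), np.concatenate(Y)

# Train a linear readout by ridge regression.
readout = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ Y)

# Steps 2 and 3 of the study train the switching itself; here a logistic
# map simply stands in for chaotic dynamics, flipping the command every
# 100 steps in a deterministic but unpredictable order.
z, cmds = 0.4, []
for _ in range(6):
    z = 3.99 * z * (1 - z)
    cmds.extend([1.0 if z > 0.5 else -1.0] * 100)

_, out = run(np.array(cmds), readout=readout)
print(len(out), bool(np.isfinite(out).all()))
```

The design choice mirrored here is the one the article highlights: rather than a hierarchy of controllers, a single network produces every behavior, and chaotic dynamics, not an external handheld controller, determine when it switches.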
The study’s key contribution, Nakajima says, is a simpler, more elegant way to design chaotic itinerancy. “The final goal is to somehow realize animal-like actions, and animals have spontaneity,” he says.
The recent work was, however, limited to a neural network on a computer. This “lack of embodiment” points to a clear next step, says computer scientist Alexandre Pitti at CY Cergy Paris University, in Cergy-Pontoise, France, who was not involved in the study. “The true challenge is now on its embodiment and if they can obtain similar results,” he says. Indeed, the researchers now plan to move from computers into physical robots in hopes of eventually creating machines that behave autonomously and spontaneously.
A hallmark of the human brain, mental plasticity enables people to “acquire new knowledge without destroying old memories,” Pitti notes. He sees this paper as a step toward the ultimate goal of constructing “a synthetic brain that can have memory that can interact with the environment through an artificial body.”
Other recent papers recommended by Journal Club panelists: