Not of this world: scientists have created an artificial brain out of silver and taught it to learn

A tiny self-organized network of artificial synapses remembers its experiences and can solve simple problems. Its creators hope that devices based on this artificial brain will one day rival the computational efficiency of the real thing. Brains, whatever their achievements in thinking and problem solving, are also marvels of efficiency: a working brain consumes about as much energy as a 20-watt incandescent light bulb. Meanwhile one of the fastest and most powerful supercomputers in the world, the K computer in Kobe, Japan, consumes 9.89 megawatts of energy, roughly as much as 10,000 homes. Yet in 2013, even with all that power, the machine needed 40 minutes to simulate just 1 second of the activity of 1% of the human brain.
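Taken at face value, those numbers give a rough sense of the gap. A back-of-envelope sketch, using only the figures quoted in this article and nothing else:

```python
# Order-of-magnitude comparison of the figures above; nothing here is measured data.
brain_watts = 20            # a working brain: about a 20-watt bulb
k_computer_watts = 9.89e6   # the K computer in Kobe, Japan

power_ratio = k_computer_watts / brain_watts
print(round(power_ratio))   # ~500,000 times the brain's power draw

# 40 minutes of compute to simulate 1 second of 1% of the brain's activity:
slowdown = (40 * 60) * 100  # seconds of compute per second of whole-brain activity
print(slowdown)             # 240000, i.e. ~240,000x slower than real time
```

The two ratios compound: the supercomputer draws half a million times more power while running hundreds of thousands of times slower than the organ it imitates.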

That is why engineers and researchers at the California NanoSystems Institute at the University of California, Los Angeles, hope to rival the brain's computational and energy efficiency with systems that mirror the brain's structure. They are building a device, perhaps the first of its kind, that is “inspired by the brain to generate the properties that enable the brain to do what it does,” says Adam Stieg, a research scientist and associate director of the institute, who leads the project together with Jim Gimzewski, a professor of chemistry at UCLA.

Their device is utterly unlike conventional computers, which are built from tiny wires printed on silicon chips in highly ordered patterns. The current experimental version is a 2 mm × 2 mm mesh of silver nanowires connected by artificial synapses. Unlike silicon circuitry, with its geometric precision, this device is tangled like a “well-mixed plate of spaghetti,” Stieg says. And its fine structure emerges from random chemical and electrical processes rather than from careful design.

In its complexity, the silver network resembles the brain. Each square centimeter of mesh contains a billion artificial synapses, within a few orders of magnitude of the real thing. And the network's electrical activity displays a property unique to complex systems like the brain: “criticality,” a state between order and chaos that indicates maximum efficiency.

This densely intertwined network of nanowires may look chaotic and random, but its structure and behavior resemble those of neurons in the brain. Researchers at the NanoSystems Institute are developing it as a brain-like device for learning and computing.

Moreover, preliminary experiments suggest that this neuromorphic (that is, brain-like) silver wire mesh has great functional potential. It can already perform simple learning and logic operations. It can clean unwanted noise from a signal, an important capability for voice recognition and similar tasks that give traditional computers trouble. And its very existence proves the principle that it may one day be possible to build devices whose energy efficiency approaches that of the brain.

These advantages look especially attractive as silicon microprocessors approach the limits of miniaturization and efficiency. “Moore's law is dead, semiconductors can't get any smaller, and people are wailing, ‘What are we going to do?’” says Alex Nugent, CEO of the neuromorphic computing company Knowm, who is not involved in the UCLA project. “I love this idea, this direction. Conventional computing platforms are a billion times less efficient.”

Switches as synapses

When Gimzewski began working on the silver mesh project some 10 years ago, it wasn't energy efficiency that interested him. He was bored. After 20 years of using scanning tunneling microscopes to study electronics at the atomic scale, he finally said, “I got tired of perfection and precise control, and a little tired of reductionism.”

Reductionism, it should be said, underlies all modern microprocessors, whose complex behavior and circuits can be explained in terms of simple events and elements.

In 2007, he was invited to study individual atomic switches developed by Masakazu Aono's group at the International Center for Materials Nanoarchitectonics in Tsukuba, Japan. These switches contain the same ingredient that turns a silver spoon black when it touches an egg: silver sulfide, sandwiched between layers of solid metallic silver.

Applying a voltage to the device pushes positively charged silver ions out of the silver sulfide and toward the silver cathode layer, where they are reduced to metallic silver. Atom-thin filaments of silver grow, eventually closing the gap between the metallic silver sides. The switch is on, and current can flow. Reversing the current has the opposite effect: the silver bridges shrink, and the switch turns off.
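As an illustration only (a caricature I am assuming, not the Aono group's actual device physics), the filament dynamics described above can be modeled with a single bounded state variable that grows under forward bias and shrinks under reverse bias:

```python
# Toy model of a single atomic switch (an assumed caricature, not the
# Aono group's device physics). A state variable w in [0, 1] stands for
# how complete the silver filament is: forward bias grows it, reverse
# bias dissolves it.

def step(w, voltage, rate=0.2, dt=1.0):
    """Advance the filament state by one time step and clamp to [0, 1]."""
    w += rate * voltage * dt
    return min(1.0, max(0.0, w))

def conductance(w, g_off=1e-6, g_on=1e-2):
    """Conductance interpolates between OFF and ON as the filament grows."""
    return g_off + (g_on - g_off) * w

w = 0.0
for _ in range(10):           # forward bias: the filament bridges the gap
    w = step(w, +1.0)
on_g = conductance(w)

for _ in range(10):           # reverse bias: the bridge dissolves again
    w = step(w, -1.0)
off_g = conductance(w)

print(on_g > 100 * off_g)     # True: a clear ON/OFF conductance contrast
```

Real atomic switches are far richer, with thresholds, nonlinear ion transport, and spontaneous decay, but even this caricature captures the key point: the device's conductance depends on the history of the voltages applied to it.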

Soon after developing the switch, however, Aono's group began to observe unusual behavior. The more often a switch was used, the more easily it turned on. And if it went unused for a while, it would eventually turn itself off. In effect, the switch remembered its history. Aono and his colleagues also found that the switches seemed to interact with one another, so that turning one switch on would sometimes block or turn off others nearby.

Most of Aono's group wanted to engineer these odd properties out of the switches. But Gimzewski and Stieg (who had just earned his doctorate in Gimzewski's group) were reminded of synapses, the switches between nerve cells in the human brain, which also change their connections with experience and interaction. And so an idea was born. “We thought: why not try to embed all of this in a structure reminiscent of the mammalian cortex, and study it?” Stieg says.

Building such an intricate structure was certainly a challenge, but Stieg and Audrius Avizienis, who had just joined the group as a graduate student, developed a protocol. By pouring silver nitrate onto tiny copper spheres, they could induce the growth of microscopically thin intersecting silver wires. They could then expose the mesh to sulfur gas to create a layer of silver sulfide between the silver wires, just as in the Aono group's original atomic switch.

Self-organized criticality

When Gimzewski and Stieg told others about their project, nobody believed it would work. Some said the device would show a single type of static activity and settle there, Stieg recalls. Others suggested the opposite: “They said the switching would cascade and the whole structure would simply burn out,” Gimzewski says.

But the device did not melt. On the contrary, when Gimzewski and Stieg watched it through an infrared camera, the input current kept changing the paths it took through the device, proof that activity in the network was not localized but distributed, as it is in the brain.

Then, one fall day in 2010, while Avizienis and his fellow graduate student Henry Sillin were increasing the input voltage to the device, they suddenly noticed the output voltage begin to fluctuate, as if the mesh of wires had come to life. “We sat there watching it, stunned,” Sillin says.

They suspected they had found something interesting. When Avizienis analyzed several days' worth of monitoring data, he found that the network stayed at any given level of activity for short stretches more often than for long ones. They later found that small regions of activity were more common than large ones.

“My jaw dropped,” Avizienis says, because that was the moment they realized their device was obeying a power law. Power laws describe mathematical relationships in which one variable varies as a power of another. They apply to systems in which larger, longer events are rarer than smaller, shorter ones, yet still common, and not by accident. Per Bak, the Danish physicist who died in 2002, first proposed power laws as a defining feature of all kinds of complex dynamical systems that can organize themselves across large scales and long distances. Power-law behavior, he said, indicates that a complex system is balanced and operating in the middle ground between order and chaos, in a state of “criticality,” with all its parts interacting and connected for maximum efficiency.
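A power law of event sizes is easy to see in miniature. The sketch below (synthetic data under an assumed exponent of 2, not measurements from the silver network) draws random “event sizes” from a power-law distribution and checks the signature Bak described: every tenfold increase in size brings roughly a tenfold drop in frequency.

```python
# Toy sample illustrating power-law statistics (synthetic data, not
# measurements from the silver network). For exponent alpha = 2 the
# tail obeys P(size >= s) ~ 1/s: events ten times larger are about
# ten times rarer.
import random

random.seed(0)

def sample_event_size(alpha=2.0, s_min=1.0):
    """Inverse-transform sampling from a continuous power law."""
    u = random.random()  # uniform in [0, 1)
    return s_min * (1.0 - u) ** (-1.0 / (alpha - 1.0))

sizes = [sample_event_size() for _ in range(100_000)]

n10 = sum(s >= 10 for s in sizes)    # events at least 10x the minimum size
n100 = sum(s >= 100 for s in sizes)  # events at least 100x the minimum size
print(round(n10 / n100))             # each decade in size is ~10x rarer
```

Unlike a bell curve, this distribution has no typical scale: huge events are rare but never negligible, which is exactly the statistical fingerprint the UCLA team saw in their activity data.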

As Bak predicted, power-law behavior has been observed in the human brain: in 2003, Dietmar Plenz, a neurophysiologist at the National Institutes of Health, observed that groups of nerve cells activated others, which in turn activated others, often setting off systemwide cascades of activation. Plenz found that the sizes of these cascades follow a power-law distribution, and that the brain really does operate so as to maximize activity without losing control over its spread.

The fact that the UCLA device also demonstrated a power law in action is very important, Plenz says. It implies that, as in the brain, a delicate balance between activation and inhibition keeps the sum of the parts working: activity never overwhelms the network, but it never dies out either.

Later, Gimzewski and Stieg found another similarity between the silver network and the brain: just as the sleeping human brain shows fewer short activation cascades than the waking brain, short cascades in the silver network become less common at lower input energies. In a way, reducing the power supplied to the device can produce a state resembling the sleeping human brain.

Training and computing

Which raises the question: if the silver wire network has brain-like properties, can it solve computational problems? Preliminary experiments suggest the answer is yes, although the device is, of course, nothing remotely like a conventional computer.

For one thing, there is no software. Instead, the researchers exploit the fact that the network can distort an incoming signal in many different ways, depending on where the output is measured. That suggests possible uses in voice or image recognition, since the device should be able to clean up noisy input signals.

It also suggests that the device could be used for so-called reservoir computing. Because a single input can, in principle, generate millions of different outputs (hence the “reservoir”), users can choose or combine outputs so that the result is a desired computation of the inputs. For example, if you stimulate the device in two different places at once, there is a chance that one of the millions of outputs will represent the sum of the two inputs.

The challenge is to find the right outputs and decode them, and to figure out how best to encode information so that the network can understand it. The way to do that is to train the device: to run a task hundreds or thousands of times, first with one type of input and then with another, and to compare which outputs handle the task best. “We don't program the device; we select the best way to encode the information so that the behavior of the network is useful and interesting,” Gimzewski says.
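In software terms, the scheme in the last two paragraphs resembles an echo state network: a fixed random nonlinear system produces many outputs, and only a simple readout over those outputs is trained. Here is a minimal sketch of that idea (all sizes and parameters are illustrative assumptions, not the UCLA setup), where the trained readout learns to recover the sum of two inputs:

```python
# Minimal reservoir-computing sketch (an echo-state-style toy, not the
# UCLA device): a fixed random nonlinear "reservoir" expands two inputs
# into many outputs; only a linear readout over those outputs is
# trained, here to recover the sum of the two inputs.
import math
import random

random.seed(1)

N = 30  # number of fixed random nonlinear units in the reservoir
# Input weights are random and never trained, like the physical network.
w_in = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(N)]

def reservoir(x1, x2):
    """Fixed nonlinear expansion of the two inputs into N outputs."""
    return [math.tanh(a * x1 + b * x2) for a, b in w_in]

# Train only the linear readout, by plain stochastic gradient descent.
w_out = [0.0] * N
data = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(500)]
for _ in range(60):  # repeated passes over the training inputs
    for x1, x2 in data:
        h = reservoir(x1, x2)
        err = sum(w * v for w, v in zip(w_out, h)) - (x1 + x2)
        for i in range(N):
            w_out[i] -= 0.02 * err * h[i]

def predict(x1, x2):
    """Apply the trained combination of reservoir outputs."""
    return sum(w * v for w, v in zip(w_out, reservoir(x1, x2)))

print(round(predict(0.3, 0.4), 1))  # close to 0.7
```

The reservoir itself is never adjusted, which mirrors the researchers' situation: the physics of the silver mesh is fixed, and all the learning happens in how its outputs are selected and combined.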

In a paper soon to be published, the scientists describe how they trained a network of wires to perform simple logic operations. And in unpublished experiments, they trained the network to solve a simple memory task usually given to rats, the T-maze. In a T-maze test, a rat is rewarded if it makes the correct turn in response to a light. With its own version of the training, the network could make the correct choice 94% of the time.

Still, these results are little more than a proof of principle, Nugent says. “A little rat making a decision in a T-maze comes nowhere near anything in machine learning that could evaluate such a system” against a traditional computer, he says. He doubts the device will yield a useful chip within the next few years.

But the potential, he says, is enormous. That is because the network, like the brain, does not separate processing from memory. Traditional computers have to shuttle information back and forth between the different regions that handle those two functions. “All that extra communication adds up, because the wires take energy,” Nugent says. With traditional computers, you would need the power output of France to simulate a full human brain at decent resolution. If devices like the silver network can solve problems with the efficiency of machine-learning algorithms running on traditional computers, they could do so using a billion times less energy. And that is no small thing.

The scientists' findings also support the view that, given the right conditions, an intelligent system can emerge through self-organization, without any template or process designed for its development. The silver network “arose spontaneously,” says Todd Hylton, the former manager of the DARPA program that supported the project in its early stages.

Gimzewski believes that the network of silver wires, and devices like it, may prove better than traditional computers at predicting complex processes. Traditional computers model the world with equations that often only approximate complex phenomena. Neuromorphic atomic-switch networks map their own internal structural complexity onto the phenomena they model. And they do it quickly: the state of the network can change at up to tens of thousands of transitions per second. “We are using a complex system to understand complex phenomena,” Gimzewski says.

Earlier this year, at a meeting of the American Chemical Society in San Francisco, Gimzewski, Stieg and their colleagues presented the results of an experiment in which they fed the device the first three years of a six-year data set of Los Angeles road traffic, in the form of a series of pulses indicating the number of cars passing per hour. After hundreds of hours of training, the output finally predicted the statistical trend of the second half of the data set quite well, even though the device had never seen it.

Perhaps one day, Gimzewski jokes, he will use the network to predict the stock market.

Ilya Hel

