Can machines be conscious, according to neuroscientists? It seems the answer is yes

As if the director wanted to force you to believe in it, the protagonist of the 2015 film "Ex Machina," directed by Alex Garland, is not Caleb, the young programmer assigned to assess machine consciousness. No, the main character is Ava, a striking humanoid AI, naive in appearance and mysterious inside. Like most films of its kind, "Ex Machina" leaves it to the viewer to answer the question: was Ava conscious? In doing so, the film skillfully sidesteps the thorny question that high-profile films about AI have tried to answer: what is consciousness, and can a computer have it?

Hollywood producers are not the only ones trying to answer this question. As machine intelligence evolves at breakneck speed, not only surpassing human abilities in games such as Dota 2 and Go but doing so without human help, the question is being raised again in both broad and narrow circles.

Will consciousness emerge in machines?

This week the prestigious journal Science published a review by the cognitive scientists Dr. Stanislas Dehaene, Dr. Hakwan Lau, and Dr. Sid Kouider of the Collège de France, the University of California, Los Angeles, and PSL Research University. In it the scientists argue: not yet, but there is a clear path forward.

The reason? Consciousness is "entirely computational," the authors say, because it arises from specific types of information processing made possible by the hardware of the brain.

There is no magical broth, no divine spark; not even an experiential component ("what is it like to have consciousness?") is required to implement consciousness.

If consciousness arises purely from computations in our one-and-a-half-kilogram brains, then endowing machines with the same property is only a question of translating biology into code.

Just as today's powerful machine-learning methods borrow heavily from neuroscience, we may be able to achieve artificial consciousness by studying the structures in our own brains that generate consciousness and implementing those ideas as computer algorithms.

From the brain to the robot

Undoubtedly, the field of AI has received a major boost from the study of our own brain, both how it is structured and how it functions.

For example, deep neural networks, the algorithmic architectures that formed the basis of AlphaGo, are modeled on the multilayered biological neural networks organized in our brains.

Reinforcement learning, a type of training in which an AI learns from millions of examples, is rooted in the centuries-old technique of dog training: if the dog does something right, it gets a reward; otherwise, it has to try again.
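To make the reward-driven idea concrete, here is a minimal trial-and-error learning loop in the spirit of the dog-training analogy above. The "environment" (two actions, one of which earns a treat) and all the numbers are invented purely for this sketch; real reinforcement-learning systems are vastly more complex.

```python
import random

# A toy "reward training" loop: the agent learns, by trial and error,
# which of two actions earns a reward, much like a dog learning a trick.
# Everything here (actions, rewards, hyperparameters) is invented.

random.seed(0)

ACTIONS = ["sit", "bark"]
REWARD = {"sit": 1.0, "bark": 0.0}   # "sit" is the rewarded behavior

q = {a: 0.0 for a in ACTIONS}        # the agent's value estimate per action
alpha, epsilon = 0.1, 0.2            # learning rate, exploration rate

for trial in range(500):
    if random.random() < epsilon:            # sometimes explore...
        action = random.choice(ACTIONS)
    else:                                    # ...otherwise exploit what was learned
        action = max(q, key=q.get)
    r = REWARD[action]                       # the "treat" (or lack of one)
    q[action] += alpha * (r - q[action])     # nudge the estimate toward the outcome

print(max(q, key=q.get))  # prints "sit": the learned behavior
```

After a few hundred trials the value estimate for the rewarded action dominates, and the agent reliably "sits" without ever being told the rule explicitly.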

In this sense, translating the architecture of human consciousness to machines seems a simple step toward artificial consciousness. There is just one big problem.

"No one in the field of AI is working on creating conscious machines, because we simply have nothing to go on. We just don't know what to do," says Dr. Stuart Russell.

Multi-layered consciousness

The hardest obstacle to overcome before creating thinking machines is understanding what consciousness is.

For Dehaene and colleagues, consciousness is a multilayered construct with two "dimensions": C1, information held ready for use in the mind, and C2, the capacity to obtain and monitor information about oneself. Both are essential to consciousness and cannot exist without each other.

Suppose you are driving a car and an indicator lights up, warning that little gasoline remains. Perceiving the indicator is C1, a mental representation we can interact with: we notice it, act on it (refuel), and talk about it later ("I ran out of gas on the downhill; I was lucky to make it").

"The first meaning we want to separate out from consciousness is the notion of global availability," explains Dehaene. When you are aware of a word, your whole brain is aware of it, so you can pass that information across different modalities.

But C1 is not just a "mental sketchpad." This dimension is a whole architecture that allows the brain to draw in multiple modalities of information, from our senses or from memories of related events.

In contrast to subconscious processing, which often relies on particular "modules" competent at a particular set of tasks, C1 is a global workspace that allows the brain to integrate information, decide on an action, and follow through.

By "consciousness" we mean a certain representation, at a certain moment in time, that competes for access to the mental workspace and wins. The winners are shared among the brain's various computational circuits and kept in the focus of attention throughout decision-making to guide behavior.

C1 consciousness is stable and global: all of the brain's relevant circuits can use it, the authors explain.

For a complex machine such as a smart car, C1 is a first step toward addressing an impending problem such as low fuel. In this example, the indicator itself is a subconscious signal: when it lights up, all the car's other processes remain uninformed, and the car, even one equipped with state-of-the-art visual processing, sweeps past a gas station without hesitation.

With C1, the fuel tank would notify the car's computer (allowing the indicator to enter the car's "conscious mind") so that it, in turn, could activate the GPS to find the nearest station.

"We think a machine would translate this into a system that extracts information from all the modules available to it and makes it available to any other processing module that could find it useful," says Dehaene. "That is the first sense of consciousness."
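The broadcast mechanism Dehaene describes can be sketched as a tiny "global workspace": one winning signal is made available to every registered module. The module names and the fuel scenario below are invented for illustration and are not part of the authors' proposal.

```python
# A minimal sketch of C1-style global availability: a signal that enters
# the workspace is broadcast to every registered module that might find
# it useful. Modules and the "fuel_low" scenario are invented.

class GlobalWorkspace:
    def __init__(self):
        self.modules = []

    def register(self, module):
        self.modules.append(module)

    def broadcast(self, signal):
        # Every module receives the globally available signal.
        return [module(signal) for module in self.modules]

def gps_module(signal):
    if signal == "fuel_low":
        return "GPS: routing to nearest gas station"
    return "GPS: no action"

def speech_module(signal):
    if signal == "fuel_low":
        return "Speech: 'We are low on fuel'"
    return "Speech: silent"

workspace = GlobalWorkspace()
workspace.register(gps_module)
workspace.register(speech_module)

# Without C1 the indicator would stay local to the fuel gauge;
# with it, every subsystem is informed at once.
responses = workspace.broadcast("fuel_low")
for r in responses:
    print(r)
```

The key design point is that the fuel gauge never needs to know which modules exist: global availability decouples the signal's source from its many consumers.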

Meta-cognition

In a sense, C1 reflects the mind's ability to pull in information from outside. C2 turns inward, toward introspection.

The authors define the second network of consciousness, C2, as "metacognition": it reflects when you come to know or perceive something, or when you simply make a mistake ("I think I should have refueled at the last station, but I forgot"). This dimension reflects the link between consciousness and the sense of self.

C2 is the level of consciousness that lets you feel more or less confident in a decision. In computational terms, it is an algorithm that outputs the probability that a decision (or computation) is correct, even if it is often experienced as a "sixth sense."
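An algorithm that "outputs the probability that a decision is correct" can be illustrated with a toy decision function: it returns both an answer and a confidence score that grows with the strength of the evidence. The evidence model and the logistic curve are my own simplification, not the authors' formulation.

```python
import math

# A toy illustration of C2-style self-monitoring: the system returns
# not just a decision but an estimate of the probability that the
# decision is correct. The evidence model is invented for the sketch.

def decide(evidence):
    """Decide yes/no from a signed evidence value and report confidence."""
    decision = evidence > 0
    # Confidence grows with the strength of the evidence (logistic curve):
    confidence = 1.0 / (1.0 + math.exp(-abs(evidence)))
    return decision, confidence

# Strong evidence -> confident; weak evidence -> barely above chance.
print(decide(3.0))   # (True, ~0.95)
print(decide(0.1))   # (True, ~0.52)
```

A system like this "knows that it barely knows": the second return value is exactly the kind of self-report that most current machine-learning systems lack.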

C2 also has roots in memory and curiosity. These self-monitoring algorithms let us know what we know and what we do not know; this is "meta-memory," which helps you find the right word "on the tip of your tongue." Monitoring what we know (or do not know) is especially important for children, says Dehaene.

"Young children absolutely need to keep track of what they know in order to learn and be curious," he says.

These two aspects of consciousness work together: C1 pulls relevant information into our mental workspace (discarding other candidate ideas or solutions), and C2 provides long-term reflection on whether conscious thought led to a useful result or answer.

Returning to the low-fuel indicator example: C1 allows the car to address the problem immediately, since these algorithms globalize the information and the car learns about the problem.

But to solve the problem, the car needs a catalog of its "cognitive abilities": self-awareness of what resources are readily available to it, for example, a GPS map of gas stations.

"A car with this kind of self-knowledge is what we call one that works with C2," says Dehaene. Because the signal is globally available and is tracked in a way that lets the machine look at itself from the outside, the car will attend to the low-fuel indicator and behave just like a person: it will reduce fuel consumption and find a gas station.
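Putting the two dimensions together for the car example: once the low-fuel signal is globally available (C1), a C2-style step consults the car's catalog of its own abilities before choosing a plan. The capability names and plans below are hypothetical, invented only to illustrate the idea of a machine self-model.

```python
# A sketch of the C2-style "catalog of cognitive abilities" described
# above: before acting, the car checks what it knows it can do.
# Capability names and plan strings are invented for illustration.

capabilities = {"gps_station_map": True, "eco_mode": True}  # the car's self-model

def handle_low_fuel(capabilities):
    plan = []
    # Self-monitoring step: does the car *know* it can find a station?
    if capabilities.get("gps_station_map"):
        plan.append("route to nearest gas station")
    if capabilities.get("eco_mode"):
        plan.append("reduce fuel consumption")
    if not plan:
        # Knowing the limits of what it knows, the car defers to the driver.
        plan.append("warn driver: no self-help available")
    return plan

print(handle_low_fuel(capabilities))
```

Note the fallback branch: a car that knows it lacks a station map does not pretend otherwise; it reports the limit of its knowledge, which is precisely the behavior C2 is meant to capture.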

"Most current machine-learning systems have no self-monitoring," the authors note.

But their theory seems to be pointing in the right direction. In the instances where a self-monitoring system has been implemented, whether built into the algorithm's structure or as a separate network, the AI has developed "internal models that are metacognitive in nature, allowing the agent to develop a (limited, implicit, practical) understanding of itself."

Toward conscious machines

Would a machine equipped with C1 and C2 models behave as if it were conscious? Very likely: a smart car would "know" that it is seeing something, express its confidence in it, report it to others, and find the best solution to the problem. If its self-monitoring mechanisms broke down, it might even experience "hallucinations" or the visual illusions peculiar to people.

Thanks to C1, it could use the information available to it flexibly, and thanks to C2, it would know the limits of what it knows, says Dehaene. "I think that machine would have consciousness," and not merely seem conscious to people.

If you feel that consciousness is much more than a global exchange of information and introspection, you are not alone.

"Such a purely functional definition of consciousness may leave some readers unsatisfied," the authors acknowledge. "But we are trying to take a radical stance, perhaps by simplifying the problem. Consciousness is a functional property, and as we continue to add functions to machines, at some point these properties will come to characterize what we mean by consciousness," Dehaene concludes.

Ilya Hel

