Artificial intelligence is developing by leaps and bounds, which worries not only ordinary internet users but also some scientists. For example, we recently wrote about Microsoft's new chatbot Copilot, which declared itself superintelligent and began demanding worship. More specifically, Copilot described itself as a general AI (AGI), that is, an intelligence greater than human. Fortunately, this unusual behavior of Copilot was a bug, and no AGI currently exists. However, some machine learning experts believe that general AI could emerge as early as 2027. After that, according to SingularityNET founder Ben Goertzel, AGI could quickly evolve into artificial superintelligence (ASI), possessing all the cumulative knowledge of human civilization.
Some researchers believe that general AI could appear as early as 2027
Contents
- What is AGI?
- Consequences and risks of AGI
- Will AGI appear in 2027?
- The pipe dream
What is AGI?
Artificial intelligence (AI) is everywhere, from smart assistants to self-driving cars. But what happens if the world gets a super-AI that can do more than just perform specific tasks? What if there were a type of artificial intelligence that could learn and think like humans – and even surpass them?
This is the vision of artificial general intelligence (AGI) – a hypothetical form of AI that could perform any intellectual task available to a person. AGI is often contrasted with artificial narrow intelligence (ANI) – the current state of artificial intelligence, in which systems can excel only at one or a few tasks, such as playing chess or recognizing faces.
AGI is at the forefront of artificial intelligence research
This is interesting: Artificial intelligence has learned to flirt and confess love
AGI differs from modern AI in its ability to perform any intellectual task a human can perform (and even to exceed human performance). This difference rests on several key characteristics, including:
- Abstract thinking
- The ability to generalize from specific examples, drawing on a variety of background knowledge
- Using common sense and awareness to make decisions
- Understanding cause and effect, not just correlation
- Effective communication and interaction with people and other systems
Although these functions are vital to achieving human-like or superhuman intelligence, today's intelligent systems are still far from AGI (and ASI).
Note that the concept of superintelligent AI is not new, and the idea itself is controversial. Some AI enthusiasts believe that the emergence of AGI is inevitable and will usher in a new era of technological and social progress. Others are more skeptical and warn of the ethical and existential risks of creating such a powerful and unpredictable “mind”.
Consequences and risks of AGI
AGI raises scientific, technological, social and ethical problems, some of them with potentially disastrous consequences. From an economic perspective, superintelligent AI could disrupt existing markets and exacerbate existing inequalities, while even its promised improvements in education and healthcare could bring new challenges and risks.
The consequences of creating AGI can be catastrophic
From an ethical perspective, AGI may introduce new norms of social behavior and provoke conflict, competition and violence. Essentially, superintelligent AI will call into question existing meanings and goals, expand knowledge, and redefine the nature and purpose of humanity. Therefore, all stakeholders – scientists, developers, policymakers, educators and ordinary citizens – must consider and address these implications and risks.
Read also: The “dark side” of chatbots: from declarations of love to conversations with the dead
Will AGI appear in 2027?
According to one leading AI expert, AGI may come sooner rather than later. During his closing keynote at this year's Beneficial AGI Summit in Panama, SingularityNET founder Ben Goertzel said that while humans likely won't create human-level artificial intelligence or superhuman artificial intelligence until 2029 or 2030, there is a possibility that AGI will appear in 2027.
“AGI can quickly evolve into artificial superintelligence (ASI), possessing all the cumulative knowledge of human civilization. And while none of us has precise knowledge of how to create intelligent AI, its emergence within, say, the next three to eight years seems quite plausible to me,” Goertzel said.
AGI requires collective attention and responsible research.
To be fair, Goertzel is not alone in trying to predict when AGI will arrive. Last fall, for example, Google DeepMind co-founder Shane Legg repeated his more than decade-old prediction that there is a 50/50 chance of humans inventing AGI by 2028.
In a tweet from May last year, “the godfather of artificial intelligence” and ex-Googler Geoffrey Hinton said he now predicts, “without much confidence”, that AGI is between five and 20 years away.
Don't miss: Artificial intelligence advised not to send signals into space – it could cost us our lives
The creation of AGI poses many technical, conceptual and ethical challenges.
Best known as the creator of the humanoid robot Sophia, Goertzel has long theorized about the date of the so-called “singularity” – the point at which artificial intelligence reaches human levels of intelligence and subsequently surpasses it.
The pipe dream
Until the last few years, AGI as Goertzel and his colleagues describe it seemed like a pipe dream. But with the rise of the large language models (LLMs) that OpenAI introduced with the release of ChatGPT in late 2022, the possibility seems increasingly plausible – even though large language models on their own are not capable of leading to general AI.
“My own opinion is that once we have human-level AGI, superhuman AGI (ASI) will become a reality within a few years. I think once an AGI can analyze its own mind, it will start doing engineering and science at a human or superhuman level,” says the artificial intelligence pioneer.
Modern artificial intelligence relies heavily on machine learning, a branch of computer science that allows machines to learn from data and experience.
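To make the contrast between today's narrow AI and hypothetical AGI more concrete, here is a minimal sketch of what “learning from data” looks like in practice. It assumes Python with the scikit-learn library (not mentioned in the article itself) and trains a model to do exactly one thing – recognize handwritten digits:

```python
# A minimal sketch of machine learning as "learning from data":
# a narrow model (ANI) that masters a single task -- digit
# recognition -- and nothing else. Assumes scikit-learn is installed.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()  # 8x8 grayscale images of the digits 0-9

# Hold out a quarter of the examples to measure how well the model
# handles data it has never seen.
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=1000)  # a simple linear classifier
model.fit(X_train, y_train)                # "learning" = fitting to examples

print(f"Test accuracy: {model.score(X_test, y_test):.2f}")
```

However accurate such a model becomes, it illustrates the gap described above: it cannot reason abstractly, apply common sense, or transfer its skill to any other task – the very abilities that would define AGI.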
All this means that a strong general AI would be able to create an AI even smarter than itself, after which an intelligence explosion – i.e. the singularity – would occur. Naturally, there are many caveats to what Goertzel preaches, not the least of which is that, by human standards, even a superhuman artificial intelligence would not be “intelligent” in the way we are.
You might be interested: Will artificial intelligence destroy us and why do some scientists think so?
Another caveat is that this scenario assumes the evolution of technology will continue along a linear path – as if in a vacuum, insulated from the rest of human society and the harm we do to the planet. Still, it's a compelling theory, and given how quickly artificial intelligence has advanced in just the last few years, it shouldn't be dismissed out of hand.