While machine learning experts debate the future of artificial intelligence, a former employee of OpenAI, the company that gave the world ChatGPT, claims that the world is in a race for artificial general intelligence (AGI). In his voluminous report, researcher Leopold Aschenbrenner writes that today only a few hundred people have situational awareness of the technology and of how its advances will shape the future, and that the thirst for profit is pushing developers to pursue AGI at a pace that ignores all the risks and limitations. “We are creating systems that can think and justify their actions. By 2025/26, they will be ahead of many university graduates, and by the end of the decade they will be smarter than you and me,” the report says. Are we really about to witness the emergence of superintelligence? Let's figure it out!
Artificial general intelligence (AGI) is a field of theoretical artificial intelligence research that seeks to create software with human-like intelligence and the ability to self-learn.
Contents
- 1 In Pursuit of AGI
- 1.1 What Do Corporations Want?
- 1.2 The Aschenbrenner Report, the Main Points
- 2 What Will Happen in 2027?
- 3 The Transition to Superhuman AI
- 4 The Intelligence Explosion and Its Consequences
In pursuit of AGI
Predictions of the death of cinema at the hands of generative AI pale in comparison to warnings about the fate of humanity if scientists succeed in creating artificial general intelligence (AGI), a still hypothetical system capable of operating at human or superhuman levels. Opinions among scientists, however, vary widely: some believe AGI is unattainable, others predict its appearance in decades, and still others are confident that AGI will appear before 2030.
Recently, Leopold Aschenbrenner, a former employee of OpenAI, the company behind ChatGPT, published a 165-page report describing the development of generative AI systems over the next 10 years. It may seem that Aschenbrenner is exaggerating, but his words make you think: the scale of AI funding already exceeds the wildest expectations.
It is important to note that Aschenbrenner is not a member of some sect or an alarmist preaching the end of the world. Like Elon Musk, he agrees with those who call for more situational awareness of the potential of AI and for government intervention to limit the power of the companies developing it.
You may be interested in: Neural networks will destroy humanity. True or not?
While the report doesn't name names, it's clear the company in question is OpenAI, where Aschenbrenner was responsible for safeguards around AGI development. He was fired in April of this year, apparently for criticizing management, which he said was “ignoring security for the sake of money.” OpenAI officials, for their part, said Aschenbrenner was fired for leaking sensitive information about the company's readiness to implement AGI.
In his defense, the former employee said the information he shared was “completely normal” because it was based on publicly available data. He suspects the company was simply looking for a way to get rid of him. Interestingly, the heads of the department where Aschenbrenner worked also quit.
Another former OpenAI employee, Daniel Kokotajlo, agrees with Aschenbrenner, saying that OpenAI is developing increasingly powerful AI systems with the goal of eventually surpassing human intelligence in every way. In his opinion, AGI will be either the best or the worst thing that has ever happened to humanity, and trust in OpenAI’s leadership has been eroding, mostly due to irresponsible behavior and a disregard for safety.
The race for AGI has already begun. By the end of the decade, we will have superintelligence in the true sense of the word. If we are lucky, we will enter an all-out race with other countries, and if not, a world war may break out, the report says.
What do corporations want?
To better understand the current situation, recall that generative AI now occupies a central place in the technology landscape. Artificial intelligence has made companies like Microsoft the most valuable in the world, with a market value of over $3 trillion. Market analysts attribute this rapid growth to how quickly the company capitalized on the technology.
Apple, meanwhile, is on the verge of bringing AI to its iPhones, while NVIDIA recently overtook it to become the world's second-most valuable company on the back of high demand for its GPUs for AI development, experts say.
Microsoft and OpenAI appear to be among the leading tech companies investing heavily in artificial intelligence. However, their partnership has sparked controversy, with insiders noting that Microsoft has become a “glorified IT department for promising startups.”
Billionaire and SpaceX founder Elon Musk even says that OpenAI appears to have “essentially become a closed-source subsidiary of Microsoft.”
It’s no secret that the companies have a complicated partnership, and the recent controversy surrounding OpenAI doesn’t help matters. As noted above, several high-level employees left OpenAI after GPT-4 launched. While the reasons for their departures are unclear at best, Jan Leike, Aschenbrenner’s former team leader, has said he’s concerned about the company’s AI development.
All of this means that it is extremely difficult to predict the trajectory of artificial intelligence in the next few years. However, NVIDIA CEO Jensen Huang has indicated that “we may be on the cusp of the next wave of artificial intelligence.” He also claims that robotics is the next big industry, dominated by self-driving cars and humanoid robots.
Don’t miss: Neural networks have learned to lie, and they do it on purpose
Fortunately, Aschenbrenner’s report gives some insight into what the future holds. However, his predictions are quite alarming, and some of them are hard to believe.
Aschenbrenner’s report: the main points
Here are the key findings of the five-chapter, 165-page report:
- Artificial intelligence is advancing by leaps and bounds, and by 2027, instead of a chatbot, we will have something more like a colleague. By that time, neural networks will be able to do the work of an AI researcher/engineer.
- AI progress will not stop at the human level, and we will quickly move from AGI to completely superhuman AI systems. This superintelligence will likely appear by 2030.
- No team of professionals can handle superhuman AI. Right now, there are only a few hundred people in the world who understand what awaits us and how crazy the upcoming events can become.
- Necessary steps to take: immediate and radical strengthening of AI lab security; building AGI computing clusters in the US; readiness of AI labs to cooperate with the military.
What will happen in 2027?
“AGI by 2027 looks astonishingly plausible. You don't have to believe in science fiction to see it; you just have to believe the straight lines on the graph of AI growth,” writes the former OpenAI employee.
So, according to Aschenbrenner's report, AI is developing by leaps and bounds, and if this trend holds, then by 2027 artificial intelligence will become strikingly similar to humans. His point carries weight: in just four years, from GPT-2 to GPT-4, AI went from a preschooler to a smart high school student. “If we follow the trends in computing and algorithmic efficiency, we can confidently say that by 2027 we will see another qualitative transition from ‘preschooler to high school student,’” writes Aschenbrenner.
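Aschenbrenner's argument rests on extrapolating order-of-magnitude (OOM) growth in "effective compute." The toy calculation below is only an illustration of how that kind of trend-following reasoning works; the per-year growth rates are assumptions chosen for the sketch, not figures taken from the report.

```python
# Toy extrapolation of "effective compute" growth, measured in orders of
# magnitude (OOMs). The per-year rates below are illustrative assumptions.

def effective_compute_ooms(years, compute_ooms_per_year=0.5, algo_ooms_per_year=0.5):
    """Total OOMs gained over `years` if hardware scale-up and algorithmic
    efficiency each contribute a fixed number of OOMs per year."""
    return years * (compute_ooms_per_year + algo_ooms_per_year)

# From GPT-4 (2023) to 2027 is roughly four years:
ooms = effective_compute_ooms(4)
print(f"Assumed gain by 2027: {ooms:.1f} OOMs = {10**ooms:,.0f}x effective compute")
```

Under these assumed rates, four years compound to four OOMs, a ten-thousand-fold increase, which is the scale of jump the report compares to the GPT-2-to-GPT-4 leap.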
Moreover, according to the former OpenAI employee, AI progress will not stop at the human level, as hundreds of millions of AIs will be able to automate research in this area in the near future, reducing a decade of algorithmic progress to a year. This means that the transition from human-level to completely superhuman AI systems will happen very quickly, and the power — and danger — of superintelligence will be colossal.
With AI revenues growing rapidly, trillions of dollars will be poured into GPUs, data centers, and more computing power by the end of the decade in an “extraordinary technocratic acceleration,” the report says.
It’s hard to call that far-fetched, since GPT-4 (which has been described as mildly scary at best) already outperforms professional analysts and advanced AI models in predicting future revenue trends, even without access to quality data. However, there are serious concerns about energy supply, which is why OpenAI sees nuclear fusion as a likely alternative for the foreseeable future.
The report also suggests that more corporations will join the race for AGI and invest trillions of dollars in developing these intelligent systems. This comes amid reports that Microsoft and OpenAI are investing over $100 billion in a project called Stargate to reduce their over-reliance on NVIDIA GPUs.
Read also: How do neural networks affect the climate and the environment?
The transition to superhuman AI
Aschenbrenner is confident that once humanity gets its hands on AGI, it will not stop and will begin to create even more powerful algorithms that will be able to not only match human capabilities, but also surpass them many times over. This transition will likely occur after 2029, as AI will be able to automate and accelerate its research and development. And as more countries and institutions around the world prepare for the implementation of AGI and the emergence of superhuman AI, the sector will receive more corporate and government funding.
Before we know it, we will have superintelligence – artificial intelligence systems far smarter than humans, capable of new, creative, complex behavior that we cannot even understand. Perhaps even a small civilization with billions of such systems. Their power will also be enormous. Extremely complex scientific and technological problems that would have stymied humans for decades will seem obvious to them. We will be like high school students stuck on Newtonian physics while they are learning quantum mechanics, the author of the report warns.
Finally, such superintelligences will be able to train even more complex artificial intelligence themselves. “They will be able to write millions of lines of complex code with ease, keep the entire code base in context, and will not spend decades (or more) checking and rechecking each line of code for errors and optimization. They will be superbly competent in all aspects of their work,” the former OpenAI employee states.
This is interesting: How to find love with ChatGPT and artificial intelligence?
All this means that all the scientific and technological progress that humanity has made in the 20th century will be surpassed in just one decade. This, in turn, will provide decisive and overwhelming military superiority to the country leading the AI arms race.
Using superintelligence against weapons that existed before it would be like a modern military clashing with 19th-century cavalry. The time immediately following the emergence of superintelligence will be among the most unstable, tense, dangerous, and unruly periods in human history, concludes the former OpenAI employee.
The author of the report rightly believes that countries will take more stringent national security measures to manage and control developments in the field of artificial intelligence. However, international competition, especially between the US and China, could intensify and lead to a “full-scale war.”
We talked in more detail about what will happen when artificial intelligence reaches the peak of its development here, don’t miss it!
The intelligence explosion and its consequences
As the report draws to a close, Aschenbrenner’s concerns become increasingly clear: the nation’s leading AI labs treat security as an afterthought and are essentially handing over the key secrets of AGI on a silver platter.
Reliably managing AI systems that are much smarter than us is an unsolved technical problem. And while it is a solvable problem, things can easily go awry during a rapid intelligence explosion.
In fact, the situation that the former OpenAI employee describes in his report can be compared to the creation of an atomic bomb, since the survival of modern civilization will be at stake in the race to create AGI. Ultimately, no one knows whether we will be able to avoid catastrophe and self-destruction along the way.
Fortunately (and Aschenbrenner admits this), modern developments in AI should help scientists understand what exactly is happening in the algorithms and try to make them “interpretable enough” to ensure safety.
More on the topic: Will artificial intelligence destroy us and why some scientists think so?
At the same time, many experts and researchers in the field of machine learning do not share the former OpenAI employee's fears, believing that AGI will not arrive so quickly and will take a different direction. Still, few experts doubt that artificial intelligence will eventually become smarter than people, take their jobs, and turn work into a hobby. For these reasons (and many others), there is growing concern in academic circles about the implications AI may have for humanity.