A recent and relatively unnoticed announcement from OpenAI has unsettled those who grasped its implications. Unveiled with less fanfare than many high-profile AI reveals, this disclosure deserves serious attention because of the magnitude of its subject: artificial superintelligence.
On the Threshold of an AI Evolution
OpenAI revealed a strategic effort it calls ‘superalignment,’ arguing that new scientific and technical breakthroughs are urgently needed to steer and control AI systems that could one day eclipse human intelligence. The initiative, co-led by Ilya Sutskever and Jan Leike, is to be allocated 20% of OpenAI’s available compute resources.
To keep humanity safe, OpenAI is recruiting top machine learning researchers and engineers. Think of it as assembling a team of intellectual titans, an Avengers of sorts, with the ambitious objective of anticipating, understanding, and controlling the coming of digital superintelligence.
Unmasking Superintelligence
As we dive into the implications of superintelligence, it becomes clear that the challenge at hand is far from trivial. For a moment, envision superintelligence as an advanced form of artificial intelligence that significantly outstrips the human intellect. Such a system could acquire knowledge at an extraordinary pace, address complex problems, and formulate innovative ideas beyond human comprehension.
Picture a computer that could swiftly and accurately answer any question, comprehend any concept, and devise solutions to world-threatening problems such as climate change or incurable diseases. Yet the same superintelligence that could resolve our most pressing global issues also harbors risks that could lead to human disempowerment or even extinction.
A Stark Wake-up Call
The alarm bells really start to ring when OpenAI suggests superintelligence could arrive within this decade, possibly by 2030. That leaves a rapidly narrowing window of six to seven years to prepare for a new era in which digital superintelligence becomes part of our reality.
A superintelligent AI could potentially help solve global challenges like climate change, world hunger, and currently incurable diseases. If it were to figure out the secret to halting the aging process, the implications would be earth-shattering. However, can we anticipate having control over such a powerful system?
Controlling the Uncontrollable
OpenAI’s announcement explicitly states the problem: “Currently, we don’t have a solution for steering or controlling a potentially superintelligent AI and preventing it from going rogue.” The existing alignment techniques, which rely on human supervision, will not scale up to superintelligence, given that humans won’t be able to supervise AI systems that are far smarter than us.
To tackle this issue, OpenAI plans to develop a “human-level automated alignment researcher” using massive computing power. The goal is to create an AI system that can supervise superintelligent AI effectively and reliably, a task beyond human capabilities.
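To make the scaling problem concrete, here is a deliberately toy Python sketch of one ingredient of scalable oversight: an automated supervisor that spot-checks a random sample of a stronger model's outputs rather than reviewing everything by hand. All of the function names, the 10% error rate, and the arithmetic tasks are illustrative assumptions, not OpenAI's actual method.

```python
import random

def worker_answer(task):
    """Hypothetical 'strong model': answers addition tasks, occasionally wrong."""
    a, b = task
    answer = a + b
    if random.random() < 0.1:  # simulate a 10% error rate (assumed for illustration)
        answer += 1
    return answer

def supervisor_check(task, answer):
    """Hypothetical automated overseer: verifies an answer it can check cheaply."""
    a, b = task
    return answer == a + b

def estimate_reliability(n_tasks=1000, sample_rate=0.2, seed=0):
    """Spot-check only a fraction of the worker's outputs and estimate how
    often it is correct -- oversight that scales because the supervisor
    never has to review the full workload."""
    random.seed(seed)
    tasks = [(random.randint(0, 99), random.randint(0, 99)) for _ in range(n_tasks)]
    answers = [worker_answer(t) for t in tasks]
    sampled = random.sample(range(n_tasks), int(n_tasks * sample_rate))
    checked = [supervisor_check(tasks[i], answers[i]) for i in sampled]
    return sum(checked) / len(checked)

print(f"estimated worker reliability: {estimate_reliability():.2f}")
```

The catch, and the reason superalignment is hard, is that this sketch assumes the supervisor can verify answers at all; for a genuinely superintelligent system, building a checker we can trust is itself the open research problem.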
Lessons from AlphaZero
DeepMind, Google’s AI research lab, developed AlphaZero, an AI system that revolutionized how chess is played. It did not learn from human data but taught itself through self-play, discovering strategies humans had never considered and defeating the strongest existing chess engines. If a system of this kind were turned toward problems like quantum mechanics or currently incurable diseases, the results could be groundbreaking.
The introduction of Gemini, an AI system from Google DeepMind, is on the horizon. Unlike ChatGPT, Gemini is reportedly designed to learn from its own mistakes and improve continuously, meaning the system would keep getting smarter over time, an exhilarating yet somewhat alarming prospect.
Gearing Up for the Unknown
The emergence of superintelligence is a prospect few people seriously ponder, perhaps because it sounds like science fiction. Yet the accelerating pace of AI research suggests superintelligence could become a reality within the next five years.
In summary, while superintelligence promises unprecedented advancements and solutions to global issues, it also brings potential risks. The key takeaway from OpenAI’s announcement is the urgency to prepare for this new era. As we venture into the unknown, remaining informed and vigilant becomes our best defense.