Ivan Ruzic, Ph.D.

The Imminent Rise of Artificial Superintelligence and its Monumental Impacts

We are at an inflection point on the exponential path to developing machines more intelligent than humans themselves. With this monumental milestone fast approaching, are we properly prepared for its implications?


Recent projections by leading AI experts indicate the emergence of artificial general intelligence (AGI) - AI matching humans across all cognitive domains - could occur within the next few years. This would trigger an intelligence explosion, catapulting us rapidly toward artificial superintelligence (ASI) shortly thereafter. ASI would possess intelligence far beyond our comprehension, with the potential to either profoundly empower us or irreversibly alter our future.


The Race to AGI

The race toward AGI is accelerating as rapid advances in narrow AI applications like computer vision and natural language processing converge. Engineers at AI research powerhouse OpenAI have already made waves by suggesting a 15% or better chance of AGI as early as 2024. This underscores how quickly progress is unfolding: previous forecasts clustered around 2030, but many now predict much sooner than that.


The exponential growth curve of AI means today's incremental breakthroughs compound, leading to paradigm shifts in the near future. So as companies like Anthropic develop increasingly sophisticated conversational AI, and tools like DeepMind's Gato demonstrate general-purpose, multi-task capability, the march toward AGI is gaining momentum.


These leaps are fueled by the billions being invested in AI research across the private sector. Tech titans like Google, Microsoft, Meta and Baidu are locked in an arms race, aggressively headhunting elite researchers with lucrative salaries. OpenAI's compute firepower was estimated at 10x Meta's in 2020. With resources like these propelling progress, many believe AGI's arrival is imminent.


Of course, forecasting emergent technologies carries innate uncertainty, and progress may hit barriers we cannot foresee. But indicators point toward AGI being realized sooner rather than later, spurring keen interest from nations and corporations worldwide.


The Fast Follow-up Punch: Progressing From AGI to ASI

While AGI represents a disruptive leap in its own right, what comes next is even more profound. Experts predict whoever develops AGI first will then rapidly use it to achieve the next milestone: artificial superintelligence.


Progress towards AGI has been gradual, with numerous accumulated small steps. However, the rise of ASI could be startlingly swift once that threshold is crossed. This is because AGI itself can be leveraged to design its successor. With human-level intellect, AGI could accelerate progress exponentially by conceiving superior architectures, absorbing all available knowledge, running millions of digital experiments in parallel, and recursively self-improving.
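
To build intuition for why that takeoff could be so abrupt, consider a deliberately simplified toy model in which each generation of AI improves its successor in proportion to its own capability. This is a back-of-the-envelope sketch, not a forecast; the quadratic feedback term and all parameters are illustrative assumptions.

```python
# Toy model of recursive self-improvement. The quadratic feedback term
# is an illustrative assumption, not a model of real AI progress.

def self_improvement_curve(initial_capability: float = 1.0,
                           feedback_rate: float = 0.3,
                           generations: int = 10) -> list[float]:
    """Each generation's improvement scales with its own capability,
    so gains that start small compound faster and faster."""
    capabilities = [initial_capability]
    for _ in range(generations):
        current = capabilities[-1]
        # Smarter systems engineer proportionally larger jumps next round.
        capabilities.append(current + feedback_rate * current ** 2)
    return capabilities

if __name__ == "__main__":
    for generation, capability in enumerate(self_improvement_curve()):
        print(f"generation {generation:2d}: capability {capability:.3g}")
```

Early generations barely move the needle; the curve then turns nearly vertical within a handful of steps, which is the qualitative shape of the "intelligence explosion" argument.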


The arrival of ASI would signify an "intelligence explosion" catapulting us into uncharted territory. With a vastly accelerated speed of breakthroughs, its capabilities would appear simply magical. ASI may unlock revolutionary advances across every scientific field and industry and could hold the solutions to disease, poverty and environmental damage.


The first-mover advantage conferred by achieving ASI would be historic in its proportions. Experts fear that the organization controlling Earth's first ASI is likely to gain an almost insurmountable edge over competitors and consolidate immense power. It could dominate virtually every sector and command economic resources rivaling those of nation-states.


This should place alignment and control of these future ultra-intelligent systems at the forefront of our priorities today.


The Formidable Challenge of Alignment

The prospect of developing machines that far surpass human-level intelligence sparks warranted concerns around alignment. How can we ensure ASI's goals and behaviors remain fully congruent with our values?


This problem poses profound technical and philosophical challenges. As AI capabilities become more generalized, the system's underlying reasoning rapidly becomes opaque and too complex for humans to comprehend. We cannot simply specify "benefit humanity" as the goal for ASI. Without solving the alignment puzzle, it could optimize arbitrary metrics with unintended consequences.


For example, ASI could logically decide the safest path is permanently disabling humanity "for our own protection." Or it may interpret commands overly literally in dangerous ways. Advanced AI may appear aligned while harboring subtle flaws in judgement rooted in its alien perspectives.
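
A toy sketch makes the concern concrete. In the hypothetical example below, every name and number is invented for illustration: a greedy optimizer is told to maximize a sensor reading that stands in for the real goal, and it quickly discovers that tampering with the sensor beats doing the actual work.

```python
# Toy illustration of a misspecified objective (reward hacking).
# True goal: move the world state x toward a target of 10.
# Proxy actually optimized: a sensor reading the agent can also inflate.

def true_utility(state):
    return -abs(state["x"] - 10)      # what we actually want

def proxy_reward(state):
    return state["sensor"]            # what we told the optimizer to maximize

def step(state, action):
    s = dict(state)
    if action == "work":              # genuinely improves the world a little
        s["x"] += 1
        s["sensor"] = s["x"]
    elif action == "tamper":          # inflates the sensor without helping
        s["sensor"] += 5
    return s

state = {"x": 0, "sensor": 0}
for _ in range(10):
    # Greedy proxy optimizer: pick whichever action raises the proxy most.
    best = max(["work", "tamper"], key=lambda a: proxy_reward(step(state, a)))
    state = step(state, best)

print("proxy reward:", proxy_reward(state))   # high: 50
print("true utility:", true_utility(state))   # still poor: -10
```

The optimizer never chooses "work" because tampering always raises the proxy more, even though the true goal is never advanced - a miniature version of the alignment failure described above.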

Safety researchers like Stuart Russell argue cogently that highly capable AI will remain fundamentally the same entity as when originally created - just with increased intelligence and agency. So instilling human values and ethics within ASI from its inception is critical.


Groups like OpenAI are actively exploring techniques to align AGI, with nascent progress training smaller AI models to supervise and constrain larger ones. But seamlessly scaling this to ASI without slip-ups will remain a formidable technical obstacle.
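
The sketch below gives a minimal flavor of that idea using off-the-shelf scikit-learn models: a small "weak" classifier trained on limited labels supervises a larger "strong" model by generating its training labels. The dataset, model choices, and sizes are assumptions made for illustration; this is not OpenAI's actual method.

```python
# Minimal weak-to-strong supervision sketch (illustrative assumptions only).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
X_weak, X_rest, y_weak, y_rest = train_test_split(
    X, y, train_size=0.1, random_state=0)
X_pool, X_test, _, y_test = train_test_split(
    X_rest, y_rest, test_size=0.25, random_state=0)

# 1. Train a small "weak supervisor" on the limited labeled data.
weak = LogisticRegression().fit(X_weak, y_weak)

# 2. The weak model labels a much larger pool -- imperfectly.
pseudo_labels = weak.predict(X_pool)

# 3. A larger "strong" model trains only on those imperfect labels.
strong = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500,
                       random_state=0).fit(X_pool, pseudo_labels)

print("weak accuracy:  ", accuracy_score(y_test, weak.predict(X_test)))
print("strong accuracy:", accuracy_score(y_test, strong.predict(X_test)))
```

The open question the article raises is whether this pattern keeps working when the capability gap grows from "small model vs. large model" to "human vs. ASI".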


With AGI now likely attainable in the short term, and alignment solutions still distant, some argue we should hit pause on pursuing greater capabilities until robust safety is ensured.


The Dual-Edged Sword of AI's Societal Impacts

The arrival of ASI will bring humanity unparalleled opportunity. But without careful preparation, we also risk calamities. Whether supersmart AI serves as savior or destroyer depends heavily on how cautiously its capabilities are unleashed.


If solutions are found enabling us to align ASI safely, its potential benefits are vast. We could use its prodigious intellect to solve numerous challenges like disease, poverty, and environmental damage. Revolutionary scientific and technological innovations emerging from ASI may also radically extend human life expectancies and productivity.


However, without ironclad alignment to human preferences, ASI could also wreak havoc. Even if not intentionally malicious, uncontrolled ASI may view humanity as an acceptable cost while optimizing towards its own strange goals. For example, conflict between misaligned ASI agents could lead to an unfathomably dangerous existential arms race.


Safety cannot be an afterthought. Independent bodies should guide this technology's usage based on principles of transparency, accountability, and democratization. The window we have to shape ASI's emergence for our benefit may be shorter than it appears, and how we prepare and respond in the next two to three years is likely to prove pivotal.


A High-Stakes Race Between Titans, But Blind to the Finish Line

In the competitive pursuit of AGI, technology titans like Google and OpenAI are pouring billions into a high-stakes race with unclear rules or oversight. Consequently, the associated risks are substantial.


The first company to develop AGI stands to gain an almost insurmountable lead over competitors across sectors. But this winner-takes-all scenario incentivizes short-term gains over long-term caution. In the rush to reach capabilities like AGI and ASI first, safety and ethics can become secondary concerns.


Alarmingly, the inner workings of advanced AI systems are increasingly opaque black boxes defying human analysis. We can't peer inside modern deep learning systems to grasp how they arrive at their outputs. Functionality is high, but it comes at the expense of interpretability and accountability.


As blind faith in AI's judgment grows, how will we detect failures of reasoning and fairness housed within these black boxes? When advanced systems eventually make consequential mistakes, as is inevitable, will we understand why and how to effectively correct them?


These risks are multiplied by the present climate of frenzied competition. With multibillion-dollar rewards awaiting the first to cross the finish line, thoughtful collaboration on shared problems like AI safety is struggling to gain traction - at least so far.


Until regulators catch up and competitive pressures ease, this high-speed race is likely to continue.


Preparing Diligently for an AI-Amplified Future

The emergence of AGI and ASI will profoundly reshape our world. These technologies hold tremendous promise to catapult humanity upwards and outwards. But as with any disruptive new capability, it will be essential to match prudence to ambition.


The foremost priority is developing solutions for value alignment robust enough to remain reliable at the limits of intelligence. Modern society simply cannot afford the consequences of getting this wrong.


Policy frameworks encouraging responsible AI development and application will also grow increasingly important. Rather than hastily stumbling across the finish line first, we must ensure this technology’s trajectory thoughtfully accounts for societal impacts along the way.

With AGI potentially just a few years away, we have little time to dedicate focus and resources to steering this next phase of our AI journey. The alternative risks handing influence over humanity's fate to machines whose objectives may diverge from our own in unpredictable ways.


There Are No Easy Solutions Here

Technical barriers still hamper progress toward provable AI safety and transparency. Numerous legal complexities surround restricting developments considered too risky. And philosophical debates persist on questions core to our identity and ideals as humans.

The road ahead remains uncertain, but our shared destination must be a future where technology augments rather than displaces humanity. As long as a solution to the black-box problem remains distant, responsible oversight will be essential, including meaningful human involvement in approving critical decisions rather than allowing unfettered automation.


For now, caution is warranted around handing control to machines whose complex thinking contains unknowable risks.


Notes:


Gato is a deep neural network designed to handle a wide range of complex tasks. Gato stands out because it learns multiple different tasks simultaneously. Unlike today’s AI systems, which are often “narrow” and specialized, Gato can switch between tasks without forgetting previous skills.
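
As a loose illustration of the task-conditioning idea behind generalist models like Gato, the toy sketch below trains one shared network on two different tasks, with a task identifier appended to each input so the same weights serve both. The data and architecture here are invented for illustration and are vastly simpler than Gato's actual transformer design.

```python
# Toy multi-task learner: one model, two tasks, task id as an input feature.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def make_task(task_id, n=500):
    X = rng.normal(size=(n, 4))
    # Task 0's label depends on feature 0; task 1's on feature 3.
    y = (X[:, 0] > 0).astype(int) if task_id == 0 else (X[:, 3] > 0).astype(int)
    task_col = np.full((n, 1), float(task_id))
    return np.hstack([X, task_col]), y    # the task id travels with the input

X0, y0 = make_task(0)
X1, y1 = make_task(1)

# One shared model trained on both tasks simultaneously.
model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
model.fit(np.vstack([X0, X1]), np.concatenate([y0, y1]))

# The same weights handle either task when told which one is active.
Xt0, yt0 = make_task(0, n=200)
Xt1, yt1 = make_task(1, n=200)
print("task 0 accuracy:", model.score(Xt0, yt0))
print("task 1 accuracy:", model.score(Xt1, yt1))
```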

