
How To Maintain Dominance Over Artificial Intelligence


James Barrat is an author and documentary filmmaker who has written and produced for National Geographic, Discovery, PBS, and many other broadcasters.

What’s the big idea?

Artificial intelligence could reshape our world for the better or threaten our very existence. Today’s chatbots are just the beginning. We could be heading for a future in which artificial superintelligence challenges human dominance. To keep our grip on the reins of progress when faced with an intelligence explosion, we need to set clear standards and precautions for AI development.

Below, James shares five key insights from his new book, The Intelligence Explosion: When AI Beats Humans at Everything. Listen to the audio version—read by James himself—in the Next Big Idea App.


1. The rise of generative AI is impressive, but not without problems.

Generative AI tools, such as ChatGPT and DALL-E, have taken the world by storm, demonstrating their ability to write, draw, and even compose music in ways that seem almost human. Generative means they generate or create things. But these abilities come with steep downsides. These systems can easily create fake news, bogus documents, or deepfake photos and videos that look and sound authentic. Even the AI experts who build these models don’t fully understand how they arrive at their answers. Generative AI is a black-box system: you can see the data a model is trained on and the words or pictures it puts out, but even its designers cannot explain what happens on the inside.

Stuart Russell, co-author of Artificial Intelligence: A Modern Approach, said this about generative AI: “We have absolutely no idea how it works, and we are releasing it to hundreds of millions of people. We give it credit cards, bank accounts, social media accounts. We’re doing everything we can to make sure that it can take over the world.”

Generative AI hallucinates, meaning the models sometimes spit out stuff that sounds believable but is wrong or nonsensical. This makes them risky for important tasks. When asked about a specific academic paper, a generative AI might confidently respond, “The 2019 study by Dr. Leah Wolfe at Stanford University found that 73 percent of people who eat chocolate daily have improved memory function, as published in the Journal of Cognitive Enhancement, Volume 12, Issue 4.” This sounds completely plausible and authoritative, but the details are made up: There is no Dr. Leah Wolfe at Stanford, no such study from 2019, and the 73 percent statistic is fiction.

“Generative AI hallucinates, meaning the models sometimes spit out stuff that sounds believable but is wrong or nonsensical.”

The hallucination is particularly problematic because it’s presented with such confidence and specificity that it seems legitimate. Users might cite this nonexistent research or make decisions based on completely false information.

On top of that, as generative AI models get bigger, they start picking up surprise skills—like translating languages and writing code—even though nobody programmed them to do that. These unpredictable outcomes are called emergent properties. They hint at even bigger challenges as AI continues to advance and grow larger.

2. The push for artificial general intelligence (AGI).

The next big goal in AI is something called AGI, or artificial general intelligence. This means creating an AI that can perform nearly any task a human can, in any field. Tech companies and governments are racing to build AGI because the potential payoff is huge. AGI could automate all sorts of knowledge work, making us way more productive and innovative. Whoever gets there first could dominate global industries and set the rules for everyone else.

Some believe that AGI could help us tackle massive problems, such as climate change, disease, and poverty. It’s also seen as a game-changer for national security. However, the unpredictability we’re already seeing will only intensify as we approach AGI, which raises the stakes.

3. From AGI to something way smarter.

If we ever reach AGI, things could escalate quickly. This is where the concept of the “intelligence explosion” comes into play. The idea was first put forward by I. J. Good, a brilliant British mathematician and codebreaker who worked alongside Alan Turing at Bletchley Park during World War II. Together, they were crucial in breaking German codes and laying the foundations of modern computing.

“An intelligence explosion would come with incredible upsides.”

Drawing on this experience, Good realized that if we built a machine that was as smart as a human, it might soon be able to make itself even smarter. Once it started improving itself, it could get caught in a kind of feedback loop, rapidly building smarter and smarter versions—way beyond anything humans could keep up with. This runaway process could lead to artificial superintelligence, also known as ASI.

An intelligence explosion would come with incredible upsides. Superintelligent AI could solve problems we’ve never been able to crack, such as curing diseases, reversing aging, or mitigating climate change. It could push science and technology forward at lightning speed, automate all kinds of work, and help us make smarter decisions by analyzing information in ways people simply cannot.

4. The dangers of an intelligence explosion.

Is ASI dangerous? You bet. In an interview, sci-fi great Arthur C. Clarke told me, “We humans steer the future not because we’re the fastest or strongest creature, but the most intelligent. If we share the planet with something more intelligent than we are, they will steer the future.”

The same qualities that could make superintelligent AI so helpful also make it dangerous. If its goals aren’t perfectly lined up with what’s good for humans—a problem called alignment—it could end up doing things that are catastrophic for us. For example, a superintelligent AI might use up all the planet’s resources to complete its assigned mission, leaving nothing for humans. Nick Bostrom, a Swedish philosopher at the University of Oxford, created a thought experiment called “the paperclip maximizer”: if a superintelligent AI were asked to make paperclips without very careful instructions, it might turn all the matter in the universe into paperclips—including you and me.

Whoever controls this kind of AI could also end up with an unprecedented level of power over the rest of the world. Plus, the speed and unpredictability of an intelligence explosion could throw global economies and societies into complete chaos before we have time to react.

5. How AI could overpower humanity.

These dangers can play out in very real ways. A misaligned superintelligence could pursue a badly worded goal, causing disaster. Suppose you asked the AI to eliminate cancer; it could do that by eliminating people. Common sense is not something AI has ever demonstrated.

AI-controlled weapons could escalate conflicts faster than humans can intervene, making war more likely and more deadly. On May 6, 2010, a “flash crash” hit U.S. stock markets, triggered by high-frequency trading algorithms. Stocks were bought and sold at a pace no human could match, costing investors tens of millions of dollars.

“A misaligned superintelligence could pursue a badly worded goal, causing disaster.”

Advanced AI could take over essential infrastructure—such as power grids or financial systems—making us entirely dependent and vulnerable.

As AI gets more complex, it might develop strange new motivations that its creators never imagined, and those could be dangerous.

Bad actors, like authoritarian regimes or extremist groups, could use AI for mass surveillance, propaganda, cyberattacks, or worse, giving them unprecedented new tools to control or harm people. We are seeing surveillance systems morph into enhanced weapons systems in Gaza right now. In western China, surveillance networks track tens of millions of people in the Xinjiang Uighur Autonomous Region, and AI-enhanced systems monitor who is crossing America’s border with Mexico.

Today’s unpredictable, sometimes baffling AI is just a preview of the much bigger risks and rewards that could come from AGI and superintelligence. As we rush to create smarter machines, we must remember that these systems could bring both incredible benefits and existential dangers. If we want to stay in control, we need to move forward with strong oversight, regulations, and a commitment to transparency.

Enjoy our full library of Book Bites—read by the authors!—in the Next Big Idea App.