The Myth of AGI: A Modern Conspiracy Theory

Exploring the rise of Artificial General Intelligence (AGI) as a modern myth and its implications in technology and society.

In recent years, Artificial General Intelligence (AGI) has been attributed almost mythical powers: some believe it will cure diseases, save the planet, and usher in an era of human prosperity; others warn it could bring ultimate disaster and end human civilization. Whether viewed as utopia or apocalypse, AGI has become a mainstream narrative, dominating capital flow, tech policy, and public imagination.

But when we peel back these layers of narrative, we must ask: is what we so fervently pursue a concrete technological future, or a carefully woven modern myth? A growing number of signs suggest that the collective fervor surrounding AGI bears the hallmarks of a conspiracy theory—a grand conspiracy theory of our time.

How Silicon Valley Got Brainwashed by AGI

Back in 2007, AI was far from the glamorous field it is today. Amazon and Netflix had ventured into machine learning, but only to recommend books and movies.

Ben Goertzel, however, was not satisfied with this. A decade or so earlier, this AI researcher had founded a startup called Webmind, attempting to cultivate his envisioned “digital baby brain” in the early internet environment. But for lack of funding, the company quickly went bankrupt.

Goertzel is a central figure in a niche tech circle that has long dreamed of creating intelligence that can think like humans or even better. However, he needed a catchy name to distinguish it from the somewhat mundane AI.

At that time, Shane Legg, who had worked at Webmind, proposed the term “AGI” (Artificial General Intelligence). It sounded a little far out, but it was accurate. Goertzel decided that was the name.

Years later, Legg went on to co-found DeepMind with Demis Hassabis and Mustafa Suleyman—a company itself rooted in the pursuit of AGI.

At that time, most serious researchers viewed the notion that AI would eventually mimic human abilities as a joke. So what happened? How did AGI evolve from absurdity to widespread recognition in just over a decade?

Last month, I interviewed Goertzel and posed my questions to him. He said, “I consider myself a researcher of complex chaotic systems, so I hold a conservative attitude toward truly understanding the nonlinear dynamics of memory spaces.” (In simpler terms: things are complicated, and I can’t say for sure.)

Goertzel believes several factors helped promote this idea.

First is the AGI conference series—often held alongside top mainstream academic gatherings, such as the annual meeting of the Association for the Advancement of Artificial Intelligence (AAAI) and the International Joint Conference on Artificial Intelligence (IJCAI).

“If I had just published a book titled ‘AGI,’ it might have quietly disappeared,” Goertzel stated. “But the conference is held annually, attracting more and more students to participate.”

“Secondly, credit goes to Legg, who brought the term AGI to DeepMind. I believe they were the first company in the mainstream business world to discuss AGI. Although it was not their repeatedly emphasized core topic, it undoubtedly legitimized the field.”

When I first discussed AGI with Legg five years ago, he admitted that talking about it in the early 2000s would have gotten you written off as crazy. “Even when DeepMind was founded in 2010, we still faced a lot of skepticism at conferences,” he said. “But by 2020 the winds had changed. Some people still felt uneasy about the term, but it was gradually emerging from obscurity.”

Goertzel pointed out a third factor: the overlap between early AGI advocates and power brokers at the tech giants. Goertzel had collaborated with PayPal co-founder Peter Thiel. “We talked a lot,” Goertzel recalled. He remembers spending an entire day with Thiel at the Four Seasons Hotel in San Francisco. “I was trying desperately to instill the AGI concept in him.”

At that time, Goertzel was unaware that he was not “fighting alone.” Another person was also pushing the AGI wave forward.

The Doomsayers Emerge

This person is Eliezer Yudkowsky, whose contributions to promoting the AGI concept are at least on par with Goertzel’s, perhaps even more prominent. However, unlike Goertzel’s optimistic vision of AGI, Yudkowsky believes AGI will only lead humanity to disaster.

Initially, Yudkowsky’s views did not attract widespread attention. At the time, AGI was still essentially a science-fiction notion. It wasn’t until 2014, when Oxford philosopher Nick Bostrom published “Superintelligence,” that the concept of AGI truly broke into the mainstream.

Tech figures like Bill Gates and Elon Musk read the book and were influenced by it. Whether or not they accepted its pessimistic doomsday narrative, the book laid out Yudkowsky’s ideas systematically and compellingly.

“All of this gave AGI publicity,” Goertzel added, “and it was no longer an abstract or absurd concept.”

Today, Yudkowsky’s views are more popular than ever, attracting young doomsayers like David Krueger, a researcher at the University of Montreal.

“I believe we are steadily heading towards creating a superhuman AI system that will kill everyone,” Krueger said. “We must halt this immediately.”

The media has also begun to report on this, with Yudkowsky even being dubbed the “Silicon Valley doomsday preacher” by The New York Times.

He seized the moment and co-authored a new book with Nate Soares, director of the Machine Intelligence Research Institute, titled “If Anyone Builds It, Everyone Dies,” which presents a series of startling claims, offered without evidence: that AGI in the near future will trigger a global apocalypse.

The two hold extreme positions: they advocate for an international ban at all costs, even resorting to nuclear retaliation if necessary. After all, Yudkowsky and Soares wrote, “The death toll from data centers could exceed that of nuclear weapons.”

Upon its release, the book became a New York Times bestseller and drew endorsements from numerous prominent figures in the U.S., including politicians, scientists, and public intellectuals. AGI was now commanding serious public attention, and capital and policy began to bet on it.

In 2023, OpenAI CEO Sam Altman posted on X: “In my view, Eliezer’s contributions to accelerating AGI development far exceed anyone else’s. Undoubtedly, he sparked interest in AGI among many people.” Altman added that Yudkowsky might one day deserve a Nobel Peace Prize for it.

The irony is that this influence runs directly counter to what Yudkowsky wants. But whether he accepts it or not, AGI has quietly permeated the mainstream and firmly taken root.

But What is AGI? No One Knows

In 1950, just five years after ENIAC, the first general-purpose electronic computer, came to life, Alan Turing posed the famous question: “Can machines think?” Soon after, in 1951, he put it more plainly in a radio broadcast: “Once machines learn to think, they will quickly surpass our limited capabilities. Machines do not die, can communicate with each other, and continuously evolve. So one day, we must be prepared to hand over control to them.”

A few years later, in 1955, computer scientist John McCarthy and his colleagues applied for funding to run a research project they presciently named “artificial intelligence.”

The name seemed almost fantastical at the time—after all, computers then were room-sized machines with roughly the capabilities of a thermostat. But McCarthy wrote in the proposal: “An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.”

It is these early prophecies that planted the seeds of today’s AGI myth. The notion of machines being smarter and omnipotent is less a technical goal than a fantasy detached from reality.

Despite massive investments and heated debates, no one truly knows how to create AGI.

What’s more troubling is that there is no consensus on what it actually is—which explains why some can say it will save the world while others claim it will destroy humanity, without either side feeling any contradiction.

Most definitions revolve around the same idea: machines can achieve human-level performance across a wide range of cognitive tasks. But this definition itself is untenable: which humans? Which cognitive tasks? How broad is “broad”?

“It has no precise definition,” pointed out Christopher Simmons, former head of the computer department at Oak Ridge National Laboratory. “If we take human level as the benchmark—intelligence itself takes countless forms, and it varies from person to person.”

This, Simmons believes, leaves us in a strange kind of race: what exactly are we trying to create? “What do you want it to do?”

In 2023, the Google DeepMind team (including Legg, who had helped name the field) attempted to sort through the various definitions of AGI. Some insist it must be able to learn; others emphasize that it must create economic value; still others argue it must have a physical body capable of acting in the real world (making coffee, say).

Legg told me that when he proposed the term for the book title, the ambiguity was the point. “I didn’t have a particularly clear definition at the time and didn’t think it was necessary to define it,” he said. “In fact, it’s not so much a specific thing as a field of research.”

So will we simply know it when we see it? The problem is that some people believe AGI has already arrived.

In 2023, a Microsoft research team published a paper describing its experience testing a pre-release version of OpenAI’s large language model GPT-4. The team described what it saw as “sparks of AGI”—an assertion that set off intense debate in the industry. Many researchers were stunned and scrambled to explain what they were observing within existing theoretical frameworks.

“This thing performed even better than we expected,” Goertzel stated. “AGI seems no longer so unattainable.” Nevertheless, Goertzel still believes that while large language models exhibit remarkable text processing capabilities, they have not truly touched the core of general intelligence.

“What surprised me is that some technical experts with a deep understanding of the underlying mechanisms of these tools still believe they could evolve into human-level AGI,” he said. “But on the other hand, you really cannot completely deny that possibility.”

The fact is: you cannot prove it is impossible. Everyone is also guessing when it will be realized—5 years? 10 years? 25 years? No one knows.

This is where it starts to resemble a conspiracy theory: predictions about when AGI will arrive have roughly the accuracy of astrologers predicting the end of the world. Missed deadlines carry no consequences, fresh excuses are always at hand, and the timeline simply gets reset.

This summer’s highly anticipated GPT-5, which fell well short of the hype, is a case in point.

However, this has not been seen as evidence that AGI is unattainable—believers simply keep postponing their predictions. It will come eventually—only, you know, always “next time.”

Believe It or Not

Whenever I talk to these researchers and engineers, they discuss AGI casually, as an established fact, as if they possess some secret I do not know. Yet no one can truly tell me what that secret is.

The truth is out there; you just need to know where to look. Jeremy Cohen once told me that the core of conspiracy theories lies in “revealing hidden truths”: “This is indeed a fundamental characteristic of conspiracy thinking, and we can clearly see this trait in discussions about AGI.”

Last year, 23-year-old former OpenAI employee and current investor Leopold Aschenbrenner published a widely discussed 165-page manifesto titled “Situational Awareness.”

You don’t even need to read the entire text to grasp its core idea: you either see the impending truth or remain in ignorance forever.

This cognition does not even require cold hard facts—intuitive perception is enough. And those who have yet to “awaken” are merely those who have not grasped the underlying truth.

Similar viewpoints subtly permeated my conversation with Goertzel. When I asked him why some people hold skeptical attitudes toward AGI, he replied: “Throughout history, every major technological breakthrough—from human flight to the widespread use of electricity—has seen many smart critics assert that it was impossible. The fact is, most people only want to believe in what they have seen with their own eyes.”

This makes AGI sound more like a belief system. I put this to Krueger, who firmly believes AGI could arrive within five years. He dismissively replied, “I think you’ve got it completely wrong.”

In his view, the real “belief” is the conviction that AGI will not be realized—those who still deny the “obvious” facts are the truly deluded.

The hidden truth attracts self-proclaimed “truth seekers” who are obsessed with revealing what they believe has always existed but remains unseen. However, for AGI, merely “revealing” is far from sufficient. It also requires an unprecedented act of creation, which is a significant reason for its allure.

“The notion of nurturing a ‘machine god’ clearly satisfies some people’s egos,” the philosopher Shannon Vallor pointed out. “Imagining oneself laying the groundwork for such a transcendent being—this idea has an irresistible allure.”

This overlaps with conspiracy thinking. In a world that seems chaotic and meaningless, people yearn to matter—to be the crucial person who can change everything.

Krueger, who conducts research at Berkeley, noted that he knows some individuals working in AI who view these technologies as humanity’s natural successors. “They regard these technologies almost as if they were their children,” he said, “By the way, these people usually don’t have children.”

A New Spiritual Utopia

Jeremy Cohen found similarities between many modern conspiracy theories and the New Age movement, which peaked in the 1970s and 80s.

Believers in that movement were convinced that humanity stood on the threshold of a new era of spiritual well-being, in which expanded consciousness would lead the world into a more peaceful and prosperous phase. The core belief, in simple terms, was that through a set of pseudo-religious practices—astrology, carefully chosen crystals, and the like—humanity could transcend its limitations and enter a “hippie-style” utopia.

Though today’s tech industry is built on computation and algorithms rather than crystals or zodiac signs, its understanding of certain fundamental propositions exhibits a similar mystical quality. “You know, that kind of imagination about a complete transformation—like we are about to usher in some millennial turning point, ultimately stepping into a technological utopia in the future,” Cohen pointed out, “and the belief that AGI will help humanity overcome all dilemmas is precisely a manifestation of this core imagination.”

In many people’s minds, the arrival of AGI will be sudden. The gradual development of AI will accumulate until one day its capabilities are strong enough to autonomously create even more powerful AI. By then—it will evolve at an astonishing speed, achieving AGI breakthroughs through what is known as an “intelligence explosion,” ultimately reaching an irreversible critical point referred to as the “singularity.” This special term, circulated in AGI circles for years, is still widely used today.

Science fiction writer Vernor Vinge was the first to borrow the concept of “singularity” from physics to describe this theoretical threshold of technological development. As early as the 1980s, he proposed that there exists an “event horizon” on the path of technological progress—once crossed, humanity will be rapidly surpassed by the machines it has created, which evolve at an exponential rate.

Shannon Vallor believes that the most significant feature of this belief system is that faith in technology has replaced faith in humanity itself. She points out that although New Age movement ideas carry mystical overtones, they at least retain a belief that humanity can change the world by unleashing its potential. However, in the pursuit of AGI, we are discarding this belief in “humanity” in favor of the notion that “only technology can save humanity.”

For many, this idea is extremely appealing, even comforting. “We are in an era,” she said, “where other avenues for human life and social material progress seem to have been exhausted.”

Technology was once seen as a ladder to a better future—steadily leading humanity toward social prosperity. But Vallor points out: “We have crossed that peak. Now it seems to me that the only thing that can rekindle hope and restore optimism about the future for many is AGI.”

She further states that if this logic is pushed to the extreme, AGI may ultimately be shaped into some kind of “deity”—an entity believed to bring ultimate relief from all the world’s suffering.

Sociologist Kelly Joyce from the University of North Carolina primarily studies how cultural, political, and economic beliefs shape people’s understanding and use of technology. In her view, all the fervent predictions about AGI are just another instance of the tech industry’s long-standing pattern of overpromising. “Interestingly, we always fall into this trap,” she said. “People seem to always believe that technology is superior to humanity.”

Joyce believes this is why people tend to believe in hype whenever it arises. “It’s like a religion,” she said, “We believe in technology. Technology is our god. Resisting this notion is incredibly difficult—because people simply don’t want to hear otherwise.”

The Cost Behind the Final Fantasy

The fantasy that “computers will eventually complete all human tasks” is undoubtedly alluring. But like many widely circulated conspiracy theories, this fantasy brings real and heavy consequences.

It distorts our understanding of the true costs behind the current technological boom (and its potential bust) and may even lead the entire industry astray—drawing resources away from more urgent and pragmatic applications. More critically, it lets us off the hook with a clear conscience. It lures us into believing that we can bypass the challenges that demand global cooperation, political compromise, and real expense—the climate crisis, public health, systemic inequality. Since machines will soon solve everything for us, why bother?

The costs behind this development are rarely questioned.

Consider the resources being poured into this gamble. OpenAI and NVIDIA recently announced a partnership worth up to $100 billion, and just meeting the operational needs of models like ChatGPT is expected to require at least 10 gigawatts of electricity—comparable to the output of a large nuclear power plant. Sustained at that level, the data centers would draw roughly the energy of a large lightning bolt every second.

For reference, the “flux capacitor” that powers the DeLorean time machine in “Back to the Future” needs only 1.21 gigawatts. And just two weeks later, OpenAI announced another partnership, with AMD, adding several more gigawatts of power demand.
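To make these scale comparisons concrete, here is a minimal back-of-the-envelope sketch. Only the 10-gigawatt figure and the 1.21-gigawatt flux capacitor come from the text above; the lightning-bolt energy is a rough order-of-magnitude assumption on my part, not a number from the article’s sources.

```python
# Rough scale check for the power figures quoted above.
# Assumed ballpark value (not from the article's sources):
#   a large lightning bolt releases on the order of 5 billion joules.

GW = 1e9  # watts

data_center_power = 10 * GW       # reported OpenAI-NVIDIA buildout
flux_capacitor_power = 1.21 * GW  # the "Back to the Future" figure
lightning_bolt_energy = 5e9       # joules, order-of-magnitude assumption

# Energy drawn each second, expressed in "large lightning bolts"
bolts_per_second = data_center_power / lightning_bolt_energy
print(f"{bolts_per_second:.1f} lightning bolts of energy per second")     # ~2.0

# How many DeLorean time jumps that much power could drive at once
print(f"{data_center_power / flux_capacitor_power:.1f} flux capacitors")  # ~8.3

# Energy over a full year, in terawatt-hours (1 TWh = 3.6e15 joules)
seconds_per_year = 365 * 24 * 3600
print(f"{data_center_power * seconds_per_year / 3.6e15:.0f} TWh per year")  # ~88
```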

While promoting the NVIDIA deal, Sam Altman said that without more data centers, society would have to make a brutal choice between “curing cancer” and “providing free education.” “No one wants to face such a choice,” he said. (Ironically, just weeks later, OpenAI announced that ChatGPT would be allowed to generate adult content.) More disturbingly, this pursuit of a distant myth is crowding out investments that could genuinely improve lives today.

“In my view, this is a massive missed opportunity,” said Christopher Simmons, chief AI scientist at AI healthcare company Lirio. “This is a severe misallocation of resources. Countless real and urgent problems need solving, yet we are pouring vast amounts of money into a vaguely defined, uncertain goal.”

“But when companies like OpenAI hold hundreds of billions in funding, they don’t have to make pragmatic choices,” Simmons added. “The scale of capital itself allows them to detach from the gravitational pull of real needs.”

This distorted narrative is also infiltrating policy domains. Tina Law, a tech policy researcher at the University of California, Davis, worries that policymakers are being swayed by lobbying forces, overly focusing on the distant hypothetical of “AI will eventually destroy humanity,” while neglecting the real harms of algorithmic bias, labor displacement, and surveillance expansion. The grand debate over “existential risk” marginalizes pressing issues like structural inequality and the digital divide.

“Hype is a profitable business for tech companies,” Law pointedly noted. “Its core strategy is to create an ‘inevitable’ narrative: if we don’t develop, others will take the lead. Once a technology is framed as a historical inevitability, people will not only hesitate to resist but will also doubt their ability to resist.”

Milton Mueller, a tech policy scholar at Georgia Tech, compared the AGI race to the nuclear arms race of the past: “It is built on a dangerous logic—whoever masters this technology first can control everyone. This mindset will completely distort our foreign policy and international relations.”

Mueller further pointed out that the enthusiasm of enterprises and even governments to promote the AGI myth is driven by clear commercial and strategic motives. The key is that this race has no recognized finish line. As long as the myth can attract investment and attention, it can be told indefinitely.

The conclusion of this story may not be complicated. It is neither utopia nor hell—more likely, before reaching any so-called “singularity,” OpenAI and its peers will have made a fortune in the process of chasing the myth.

Meanwhile, many real problems facing the world remain.

The New “AGI” is Still on the Way

So far, we have not discussed the most typical feature of all: conspiracy theories usually presuppose a power group manipulating events behind the scenes, and believers think that by relentlessly pursuing the “truth,” they can unmask this elite.

Of course, those wary of AGI do not publicly accuse the Illuminati or the World Economic Forum of obstructing AGI’s realization or hiding its secrets.

But what if the real manipulators are not preventing AGI but are instead long-term advocates of the AGI narrative? Silicon Valley giants are investing massive resources to develop AGI—after all, it is primarily a business. For them, the myth of AGI holds immense commercial and strategic value.

As one AI-company executive recently admitted in private: “AGI must always be described as ‘6 to 12 months away from realization.’ If it seems too distant, we cannot attract talent from top institutions; but if it seems too close… then how will the story be told?”

Shannon Vallor sharply pointed out: “If OpenAI openly stated they are just building machines to make the company more profitable, the public would not buy it.” You create a god, and you must also become a god.

As David Krueger observed, Silicon Valley is steeped in a deeply rooted logic: developing AI is the path to ultimate power (which is also one of the core arguments in Leopold Aschenbrenner’s “Situational Awareness”). “We are about to possess god-like power,” Krueger said, “but our consciousness, ethics, and institutions are not prepared. Many believe that whoever realizes AGI first will essentially control the world.”

“They invest tremendous energy in selling a vision of a future filled with AGI, and with their influence, they have achieved considerable success,” he added.

Ben Goertzel himself sounds almost sorrowful about how successful this crowd has been. He has begun to miss the days when AGI was still on the fringes, unnoticed. “We, the generation engaged in AGI research, needed both foresight and stubbornness—even a certain degree of recklessness,” he said. “But now? It has become the thing your grandmother advises you to do instead of studying philosophy—a proper job.”

“This idea has become so mainstream that it is genuinely disorienting,” he admitted. “It almost makes me want to switch to something truly obscure—fields that have not yet been drowned out by the crowd.” He added, half-jokingly (I think): “Obviously, getting AGI finished matters more than indulging my personal taste for exploring the frontier.”

But I still do not understand: what exactly are they finishing? If we are so obsessed with this technological fairy tale, what does it mean for genuine technological development? In many ways, I believe the entire AGI concept is built on distorted expectations of what artificial intelligence can do—even on misconceptions about the nature of “intelligence” itself.

Ultimately, the core of the AGI argument is that artificial intelligence has made rapid progress and will continue to improve. But setting aside technical doubts—what if it fails to continue improving?—the remaining claims are merely: as long as we have enough data, computing power, or neural networks, intelligence can be infinitely acquired like a commodity.

But that is not how it works. Intelligence is not a single quantifiable metric that can be increased without limit. A smart person may excel in one area yet be mediocre in others. Some Nobel laureates, for example, are poor pianists or poor parents, while some so-called “smart people” insist AGI will arrive next year.

We cannot help but ask: what will be the next myth to hook us?

Before ending the call, Goertzel casually mentioned that he had just attended an event in San Francisco, “the theme was about extrasensory perception, predicting the future… those things.”

“That was AGI’s situation 20 years ago,” he said. “Back then, everyone thought this idea was simply absurd.”
