Humans Are a Bigger Existential Risk Than AI
Artificial Intelligence, Future, Philosophy, Tesla

Elon Musk continues to warn us of the potential dangers of AI, from debating the topic with Mark Zuckerberg to saying it’s more dangerous than North Korea. He’s called for regulating AI, just as we regulate other industries that can be dangerous to humans. However, Musk and the other voices in the AI debate underestimate the biggest threat to humanity in the AI era: humans.

For the purposes of the current debate, those arguing about artificial intelligence propose three potential outcomes:

  1. AI is the greatest invention in human history and could lead to prosperity for all.
  2. A malevolent AI could destroy humanity.
  3. An “unwitting” AI could destroy humanity.

There are few arguments in between worth considering. If the first possibility were not the ultimate benefit, then developing AI wouldn’t be worth exploring given the ultimate risks (2 and 3).

There’s certainly a non-zero chance that a malevolent AI destroys humanity if one were to develop; however, malevolence requires intent, which would require at least human-level intelligence (artificial general intelligence, or AGI), and that is probably several decades away.

There’s also a non-zero chance that a benign AI destroys humanity while pursuing some goal that conflicts with human survival. In other words, the AI destroys humanity as collateral damage. We’ve seen early AI systems begin to act on their own in benign ways, and humans were able to stop them. A more advanced AI with a survival instinct might be more difficult to stop.

There’s also a wild card relative to the first outcome that both sides of the AI debate overlook. On the road to scenario one, the positive and probably most likely outcome, humans will need to adapt to a new world where jobs are scarce or radically different from work as we know it today. Humans will need to find new purpose outside of work, likely in the uniquely human capabilities of creativity, community, and empathy, the things that robots cannot authentically provide. This radical change will likely scare many. They may rebel with hate toward robots and the humans who embrace them. They may band behind leaders who promise to keep the world free of AI. This could leave us with a world looking more like The Walking Dead than utopia.

Since the advent of modern medicine, humans have been the most probable existential threat to humanity. The warning bells on AI are valid given the severity of the potential negative outcomes (even if unlikely), and some form of AI regulation makes sense, but it must be paired with plans to address the human element of the technology as well. We need to prepare humans for a post-work world in which different skills are valuable. We need to consider how to distribute the benefits of AI to the broader population via a basic income. We need to transform how people think about their purpose. These are the biggest problems we face as we prepare to enter the Automation Age, perhaps even bigger than the technical challenges of creating the AI that will take us there.

Disclaimer: We actively write about the themes in which we invest: artificial intelligence, robotics, virtual reality, and augmented reality. From time to time, we will write about companies that are in our portfolio. Content on this site including opinions on specific themes in technology, market estimates, and estimates and commentary regarding publicly traded or private companies is not intended for use in making investment decisions. We hold no obligation to update any of our projections. We express no warranties about any estimates or opinions we make.
