Some Myths around Artificial General Intelligence

Recently, I finished reading the great book “Life 3.0” by the MIT professor Max Tegmark. It is mainly about how our lives (or life in general) will change with the emergence of Artificial General Intelligence (AGI). Many parts of it inspired me, so I decided to write something about it.

This article is about some common myths around AGI mentioned in the book. But first let’s make clear what we’re talking about. AGI is not the same as AI. When experts speak about AGI they usually mean the ability to accomplish any cognitive task at least as well as humans. Nowadays, we already see AI that outperforms humans at many narrow tasks such as games (see AlphaGo). This is not AGI but rather Narrow Artificial Intelligence. An AGI would be capable of beating us at Go, driving our cars, trading stocks for us and much more. To my knowledge, we’re not there yet. Still, there are already lots of misconceptions.

#1 AGI by 2100 is possible/impossible

Regarding the timeline of Artificial General Intelligence, opinions range from “It’s almost here!” to “Humanity will never get there!”. Well, the truth is that there is no consensus among the world-leading AI experts. Even the brightest minds in the field can’t tell if it will happen at all. At one AI conference, the median answer to the question of when AGI will emerge was 2055. But there’s no science behind extrapolating from past AI advancements. We just don’t know!

Elon Musk publicly states a rather early timeline for AGI adoption.

#2 AI might turn “evil”

This is probably the most common myth. Technically speaking, there is no such thing as an “evil AI”. The keyword here is Value Alignment. We humans consider something evil if it doesn’t align with our most basic values, such as not harming or killing other humans. If an AI starts killing people, it is not because it is evil in itself; it simply follows goals which are misaligned with ours. Imagine an AI gets a positive reward for generating energy. To make this AI much safer, the utility function must return an incredibly high negative reward if a human is harmed (if this sounds interesting, check out intelligent agent theory). Otherwise, the AI will do everything to produce energy, even if that includes burning humans to convert heat into electricity. An evil AGI is basically a misaligned one which pursues its goals!
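The idea can be sketched as a toy reward function. This is a purely illustrative example, not code from the book or from any real RL framework; the names (energy_produced, humans_harmed, HARM_PENALTY) are hypothetical:

```python
# Toy illustration of value alignment in a reward (utility) function.
# All names here are made up for the example.

HARM_PENALTY = 1e9  # overwhelmingly large negative reward per harmed human


def misaligned_reward(energy_produced: float, humans_harmed: int) -> float:
    """Rewards energy output only -- harm is invisible to the agent."""
    return energy_produced


def aligned_reward(energy_produced: float, humans_harmed: int) -> float:
    """Same goal, but harming a human outweighs any possible energy gain."""
    return energy_produced - HARM_PENALTY * humans_harmed


# An agent maximizing the misaligned reward sees no downside to harm:
print(misaligned_reward(500.0, humans_harmed=2))  # 500.0
# Under the aligned reward, the same outcome is catastrophic for the agent:
print(aligned_reward(500.0, humans_harmed=2))     # -1999999500.0
```

The point is not the specific numbers but the structure: whatever the agent’s main objective is, violations of our basic values must dominate the reward signal, or a sufficiently competent agent will trade them away.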

#3 AI becomes “conscious”

The author of “Life 3.0” himself conducts research on consciousness and it is obvious there’s still a lot of work in answering the big questions. Instead of worrying that an AI becomes conscious, we should rather think about the case where an AI becomes competent. In this context competency refers to the ability to achieve a certain goal. If an AI is very competent at achieving goals AND is misaligned with our values, we might get into trouble because the AI will reach its goals even if we’re in the way. Conscious AI might be one of the problems for the far far future.

#4 HELP! Killer AI Robots will vaporize humans!

For now, let’s not worry too much about this.

When it comes to intelligent killer robots, people show endless imagination. In fact, there are threatening developments in the field of Autonomous Weapon Systems (AWS). The AI Ethics community is very concerned about this and proposed the AI Open Letter, signed by some of the brightest minds of our time (Hawking, Musk, Russell, Norvig, …). But Artificial General Intelligence doesn’t need to be deployed onto AWS to become dangerous. An internet connection is all a super-human intelligence would need to succeed. Just think of how an AI might find its own way through the internet by crawling the web, which contains almost all of human knowledge. Although I’m personally very concerned about using AI for warfare, I think that a competent, misaligned AI with internet access could cause much greater harm.

TL;DR: There are a lot of myths around Artificial General Intelligence such as heart-eating, flying, conscious killer robots with extraordinary chess skills.

AGI in the next 100 years? We have no idea! Not even AI experts know.
Evil AI: AI is not evil in itself. It might be misaligned with human values, so we perceive its actions as evil.
Conscious AI: Competency in achieving goals is much more likely than consciousness.
Killer AI Robots: An internet connection is enough to cause tremendous harm (also applies to some presidents).