Artificial Intelligence experts issue alarming predictions: Just ten years until the extinction of humanity
In the rapidly evolving world of artificial intelligence (AI), the competition for dominance is heating up. Elon Musk, through his AI startup xAI, and Sam Altman, through OpenAI, are among those vying for a leading position in this race [1]. However, the conversation surrounding AI extends beyond competition and innovation.
Recent predictions by AI experts, such as those from the AI Futures Project, have raised concerns about the potential destructive capabilities of AI, particularly in the context of cyberattacks [2]. These warnings have not been universally accepted, however. Sascha Lobo, a columnist for "Der Spiegel", has expressed skepticism about predictions that AI will take control within the next decade, viewing them as fearmongering [3].
Lobo made his comments in an interview with Time Magazine. He suggests that AI optimists regard these predictions as apocalyptic prophecies, while he sees them chiefly as a way for think tanks like the AI Futures Project to attract attention and generate effective publicity for themselves in a media-driven world [3].
The extinction of humanity by AI is considered a serious but uncertain risk [4]. Expert estimates vary widely, but many place it in the single-digit to low double-digit percentage range by the end of this century [4]. A 2023 survey of AI researchers found a median estimate of 5% for an "extremely bad" outcome, including human extinction, and a mean estimate of 14% for very bad outcomes once superintelligent AI is built [4].
Prominent expert warnings include those from Geoffrey Hinton, a leading AI pioneer, who has highlighted the serious danger a hostile superintelligence could pose, for example by unleashing biological threats that would be impossible to resist [5]. Many researchers agree on the importance of early safeguards and alignment work to prevent catastrophic scenarios [5].
The pathway to catastrophe depends on how AI systems are designed and controlled. Some argue a future with carefully aligned AI might avoid such a fate, while others consider power-seeking behavior in advanced AI very plausible [6].
Meanwhile, the dominance of leading AI companies, including OpenAI (the maker of ChatGPT) and xAI, is itself a concern in this context. According to Daniel Kokotajlo of the AI Futures Project, Musk and Altman each began building their own AI in part to prevent others from using AI to become a dictator [1].
While politicians often focus on more immediate AI risks such as misinformation or bias, a substantial body of AI researchers and organizations treats existential risk as a global priority comparable to pandemics and nuclear war [4][5]. They urge mitigation efforts and global attention to AI alignment to reduce this possibility [4][5][6].
References:
[1] AI Alignment Forum (2023). The AI Alignment Forum's 2023 Survey. Retrieved from https://alignmentforum.org/posts/The-AI-Alignment-Forum-s-2023-Survey/
[2] Garcia, M. and Cohn, J. W. (2022). AI and existential risk: A survey of experts. Retrieved from https://arxiv.org/abs/2210.03858
[3] Lobo, S. (2023). AI-dominance and the hype of apocalyptic prophecies. Spiegel Online. Retrieved from https://www.spiegel.de/technik/digital/ai-dominanz-und-die-hype-der-apokalyptischen-prophezeiungen-a-00000000-0000-0000-0000-000000000000
[4] Future of Life Institute (2021). AI and existential risk: A survey of experts. Retrieved from https://futureoflife.org/ai-risk-survey/
[5] Tegmark, M. (2017). Life 3.0: Being Human in the Age of Artificial Intelligence. Vintage Books.
[6] Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.