Over 3,000 tech leaders call for a halt to "superintelligence"
On October 22, top experts and technology leaders in the field of artificial intelligence from China and the United States jointly launched an appeal, calling for a halt to the research and development of "superintelligence" until the scientific community reaches a broad consensus on the "safe and controllable development of superintelligence".
The statement was initiated by the nonprofit Future of Life Institute and was signed by a number of prominent figures, including AI pioneer Geoffrey Hinton, Apple co-founder Steve Wozniak, Virgin Group chairman Richard Branson, economist Daron Acemoglu, and former U.S. National Security Advisor Susan Rice.
Notably, Prince Harry and his wife Meghan, the Duke and Duchess of Sussex, Steve Bannon, and other prominent figures also participated in the joint signing of the statement.
As of noon on October 23, the statement had garnered 3,193 signatures. Among them were Chinese scholars such as Yao Qizhi, Academician of the Chinese Academy of Sciences and Turing Award laureate; Zhang Yaqin, Chair Professor of Intelligent Science at Tsinghua University and Dean of the Institute for Intelligent Industry; and Xue Lan, Academic Committee Member of the Center for Strategic and Security Studies at Tsinghua University and Dean of Schwarzman College.
"Superintelligence" is a form of artificial intelligence that surpasses humans in all cognitive tasks. Unlike the vast majority of companies currently developing general artificial intelligence, the prospect of "superintelligence" has raised concerns in the industry. The statement said that many leading AI companies are planning to create superintelligence, "raising concerns ranging from the obsolescence of the human economy and the loss of power, freedom, civil liberties, dignity, and control, to national security risks, and even potential human extinction."
Zeng Yi, director of the Beijing Institute for AI Security and Governance and a researcher at the Institute of Automation, Chinese Academy of Sciences, who joined the appeal, told The Paper (www.thepaper.cn) that there is currently no solid scientific evidence, and no feasible method, for ensuring that superintelligence is safe and does not pose catastrophic risks to humanity. The world, he said, is not yet ready for superintelligence, which would not be a controllable tool.
Superintelligence has become a hot topic in the field of artificial intelligence. Companies from Elon Musk's xAI to Sam Altman's OpenAI are racing to launch ever more advanced large language models, and Meta went so far as to name its AI division "Meta Superintelligence Labs."
Opinions on artificial intelligence (AI) are becoming increasingly polarized in the tech world: one side sees it as a powerful force for social progress and believes it should be developed without restriction; the other side worries about its potential risks and advocates for stronger regulation.
However, even leaders of the world's leading AI companies, such as Musk and Altman, have warned of the dangers of superintelligence in the past. Before becoming CEO of OpenAI, Altman wrote in a 2015 blog post: "The development of superhuman machine intelligence (SMI) may be the greatest danger to the continued existence of humanity."
Zeng Yi believes that the vast majority of companies are developing general-purpose AI tools rather than superintelligence, and that the risks of superintelligence cannot yet be scientifically controlled. Some companies, however, are going further: Meta has established a superintelligence lab, and Alibaba has said it is developing superintelligence. These efforts go beyond building general-purpose AI tools.
The Paper noted that, according to The New York Times, on October 22, the same day the joint statement was released, Meta laid off 600 people from its superintelligence lab. The department, with roughly 3,000 employees, is tasked with developing "superintelligence," that is, artificial intelligence that surpasses the human brain.
Meta's newly appointed Chief Artificial Intelligence Officer, Alexandr Wang, said the layoffs are meant to streamline an organization that expanded too quickly over the past three years, helping Meta ship AI products faster. Meta executives emphasized that the layoffs do not mean the company is scaling back its AI work; superintelligence remains one of Zuckerberg's top priorities.
