Friday, September 20, 2024

AI Leaders Warn Congress: AI Could Soon Enable Creation of Bioweapons

At a recent congressional hearing, three influential figures in artificial intelligence (AI) voiced concerns over the accelerating pace of AI development, warning that rogue states or terrorists could leverage the technology to create bioweapons within the next few years.

The Testimonies: Risks of Uncontrolled AI

Yoshua Bengio, a pioneering AI professor from the University of Montreal, urged the United States to advocate for international cooperation in controlling AI development. He suggested a system similar to the international regulations surrounding nuclear technology. Meanwhile, Dario Amodei, CEO of AI start-up Anthropic, feared that AI could facilitate the creation of dangerous viruses and bioweapons in as little as two years. Stuart Russell, a computer science professor at the University of California, Berkeley, argued that the unique nature of AI makes it harder to fully understand and control than other technologies.

Rising Concerns About Superintelligent AI

These testimonies highlight how fears of AI surpassing human intelligence and becoming uncontrollable have shifted from science fiction to mainstream concern. Prominent AI researchers such as Bengio have recently revised their predictions for the advent of “supersmart” AI, shortening the expected timeline from decades to potentially just a few years.

These fears now resonate with Silicon Valley, the media, and politicians, with lawmakers citing the threats as motivation to pass AI regulation.

Potential Monopolies in AI Development

The hearing also touched on potential antitrust issues in the AI industry. Sen. Josh Hawley (R-Mo.) argued that allowing tech giants such as Microsoft and Google to monopolize AI development could itself present risks.

Bengio, who made foundational contributions to the science underlying AI technologies such as OpenAI’s ChatGPT and Google’s Bard, expressed concern about the potential impact of the technology he helped create.

Regulatory Measures Proposed for AI

The hearing, as framed by Sen. Richard Blumenthal (D-Conn.), was intended to generate ideas for regulating AI, and the three leaders offered their suggestions. Bengio called for international cooperation and a network of globally distributed labs researching methods to ensure AI benefits humans without spiraling out of control.

Russell proposed the establishment of a new regulatory agency focused specifically on AI, while Amodei emphasized the need for standard tests to evaluate AI technologies for potential harms.

AI Fears Persist

As the AI industry advances at a lightning pace, experts are raising concerns about the risks of uncontrolled development. The recent congressional hearing demonstrated a growing consensus that AI regulation is necessary and that cooperation and foresight will be key to ensuring humanity's safety.
