What Dangers May Sophisticated AI Models Bring If Misused?

According to sources, the Biden administration is preparing to launch a new front in its fight against China and Russia by erecting barriers around the most sophisticated AI models.

Researchers in the public and private sectors are concerned that adversaries of the United States could use the models—which mine enormous volumes of text and image data to generate content and summarise information—to launch aggressive cyberattacks or even develop potent biological weapons.

Threats presented by AI include the following:

Deepfakes and Misinformation

In the divisive arena of American politics, artificial intelligence (AI) algorithms trained on vast amounts of online footage are producing realistic-looking but phoney videos known as “deepfakes,” which are making their way onto social media.

While this type of synthetic media has been around for years, over the past year a number of new “generative AI” tools, such as Midjourney, have given it a major boost by making convincing deepfakes cheap and easy to produce.

Researchers noted in a paper published in March that although OpenAI and Microsoft have policies prohibiting the creation of misleading content, their AI-powered image-generation tools can still be used to produce images that could fuel misinformation about elections or voting.

Some disinformation campaigns simply harness AI’s ability to mimic genuine news articles as a means of spreading false information.

Although major social media platforms such as Facebook, Twitter, and YouTube have tried to prohibit and remove deepfakes, their effectiveness at policing such content varies.

For instance, the Department of Homeland Security (DHS) stated in its 2024 homeland threat assessment that a Chinese government-controlled news site, using a generative AI platform, last year amplified a previously circulated false claim that the United States was running a lab in Kazakhstan to create biological weapons for use against China.

Speaking at an AI event in Washington on Wednesday, National Security Advisor Jake Sullivan stated that the problem is difficult to solve because it combines AI’s potential with “the intent of state, non-state actors, to use disinformation at scale, to disrupt democracies, to advance propaganda, to shape perception in the world.”

“Right now the offence is beating the defence big time,” he stated.

Bioweapons

The hazards of foreign entities acquiring advanced AI capabilities are causing concern among academics, think tanks, and the U.S. intelligence establishment. Researchers at Rand Corporation and Gryphon Scientific have observed that sophisticated AI models can yield information that could be used to develop biological weapons.

Gryphon conducted a study to investigate the potential uses of large language models (LLMs) by hostile actors in the life sciences. LLMs are computer programmes that generate responses to queries by mining vast amounts of text. Gryphon discovered that LLMs “can provide information that could aid a malicious actor in creating a biological weapon by providing useful, accurate and detailed information across every step in this pathway.”

For instance, they found that an LLM could offer post-doctoral-level expertise for troubleshooting problems when working with a virus capable of causing a pandemic.

According to the Rand study, LLMs could assist in planning and carrying out a biological attack. The researchers found that an LLM could, for example, suggest aerosol delivery methods for botulinum toxin.

Cyberweapons

In its 2024 homeland threat assessment, DHS stated that cyber actors will probably use AI to “develop new tools” that “enable larger-scale, faster, efficient, and more evasive cyber attacks” against critical infrastructure, including pipelines and railroads.

According to DHS, China and other adversaries are developing AI technologies—including generative AI programmes that facilitate malware attacks—that could undermine U.S. cyber defences.

In a February report, Microsoft said it had tracked hacking groups affiliated with the Chinese and North Korean governments, as well as Russian military intelligence and Iran’s Revolutionary Guard, as they tried to refine their hacking campaigns using large language models.

The company disclosed the findings as it announced a blanket ban on state-backed hacking groups using its AI products.

New Efforts to Address Threats

A bipartisan group of U.S. lawmakers introduced a bill that would make it easier for the Biden administration to impose export controls on AI models, an effort to protect the prized U.S. technology from foreign adversaries.

The bill, sponsored in the House by Democrats Raja Krishnamoorthi and Susan Wild and Republicans Michael McCaul and John Molenaar, would also explicitly give the Commerce Department the authority to bar Americans from working with foreign parties to develop AI systems that endanger U.S. national security.

According to Tony Samp, an AI policy advisor at DLA Piper in Washington, policymakers are attempting to manage the technology’s myriad risks while seeking to “foster innovation and avoid heavy-handed regulation that stifles innovation.”

However, he issued a warning, saying that “stifling AI development through regulation could inhibit potential breakthroughs in areas like infrastructure, national security, drug discovery, and others, and cede ground to competitors overseas.”

(Adapted from Reuters.com)
