The EU AI Act: What It Covers And When The Regulation Is Likely To Be Enforced

The first-ever artificial intelligence (AI) regulations being negotiated by the European Union appeared to be heading towards a dramatic conclusion on Wednesday as legislators began what many believe will be the last round of talks on the historic legislation.

The decisions made could serve as a model for other governments creating regulations for their own artificial intelligence sectors.

Governments and parliamentarians were unable to reach a consensus on a number of important topics prior to the meeting, including how to regulate the rapidly expanding field of generative AI and how law enforcement should use it.

Here is what is known so far:

How Was The AI Act Derailed by ChatGPT?

The primary problem stems from the fact that the initial draft of the rule was written in early 2021, nearly two years before OpenAI released ChatGPT, one of the fastest-growing software applications ever.

Regulators have been rushing to write legislation while businesses like Microsoft-owned OpenAI are finding new applications for their technology.

Computer scientists and industry figures, including OpenAI chief executive Sam Altman, have also sounded the alarm about the risk of building extremely intelligent, potent machines that could endanger humankind.

Legislators back in 2021 concentrated on certain use cases, classifying AI tools according to risk levels and regulating them according to the functions for which they were intended.

AI applications in biometric surveillance, aviation, and education have all been classified as high risk, either because of possible human rights violations or as an extension of current product safety regulations.

The introduction of ChatGPT in November 2022 compelled legislators to reconsider that.

This so-called “General Purpose AI System” (GPAIS) is capable of completing a wide range of activities, including writing computer code, composing sonnets, and having human-like conversations. It was not designed with a specific use-case in mind.

The act’s original risk categories did not clearly apply to ChatGPT and other generative AI tools, which has led to a continuing debate about how they should be governed.

What is being proposed?

Developers can build new applications "on top of" general-purpose AI systems, commonly referred to as foundation models.

Researchers have occasionally been caught off guard by AI's behaviour, such as ChatGPT's tendency to "hallucinate": the underlying model is trained to predict the most likely next words, so it sometimes generates answers that sound convincing but are in fact false. Additionally, any quirks hidden in the code of a foundation model may have unexpected consequences when it is applied in different contexts.

The European Union has proposed regulations pertaining to foundation models, which require enterprises to provide a comprehensive record of their system’s training data and capabilities, attest to the fact that they have taken precautions against potential hazards, and submit to audits by outside experts.

The EU's most powerful nations, France, Germany, and Italy, have pushed back against that proposal in recent weeks.

Rather than imposing strict regulations, the three countries prefer that creators of generative AI models be allowed to regulate themselves.

Strict laws, they claim, will make it harder for European businesses to compete with big American firms like Microsoft and Google.

Critics of that approach say it could leave providers like OpenAI facing lighter obligations than the smaller businesses developing tools on top of OpenAI's technology.

What Does Law Enforcement Have a Problem With?

According to sources who spoke to Reuters, lawmakers disagree on the deployment of AI systems by law enforcement to identify people biometrically in areas that are open to the public.

Legislators in the European Union seek regulations that safeguard citizens' fundamental rights, but member states also want certain latitude so that the technology can be utilised for national security purposes, such as by border protection agencies or the police.

According to one source, MEPs might drop their proposed ban on remote biometric identification if the exemptions for its use were specific, limited, and well-defined.

What Is The Most Likely Result?

The bill might potentially be voted into law by the EU Parliament later this month if a final version is agreed upon on Wednesday. Even then, it might not take effect for nearly two years.

Governments and lawmakers in the EU may alternatively negotiate a “provisional agreement” in the absence of a final accord, with the details worked out over the course of several weeks of technical talks. That could spark old arguments again.

They would still need to prepare a deal for a spring vote. Without it, there’s a chance that the legislation would be shelved until after the June parliamentary elections, which would mean that the 27-member EU would forfeit its first-mover advantage in regulating the technology.

(Adapted from Reuters.com)


