
What can India and the US learn from the EU's approach to AI regulation?

In many ways, the EU is a pioneer in the field of AI regulation.

Major countries such as India and the US are paying close attention to the EU's AI Act, and many experts say the upcoming law has global implications. India, for example, is already working on its own responsible AI framework.

India

The "Brussels effect" has long described how EU regulations shape standards in countries outside Europe, influencing trade and technology cooperation - and India is no exception. Indian companies serving EU customers may face compliance challenges and the added burden of risk-assessing their AI models. Although the exact impact remains uncertain, the Act opens up the following possibilities:

1) given the size of the EU market, India may consider aligning some of its AI policies with the Act to avoid barriers to trade;

2) the Act could become an opportunity for cooperation on the responsible development and deployment of AI;

3) it could facilitate data sharing and joint work in the field of AI;

4) the policy lessons learned from implementing the EU measures offer India a path toward effective AI regulation.

What about general-purpose AI?

To summarize the General-Purpose AI (GPAI) provisions of the AI Act: AI models capable of performing a wide range of tasks are divided into standard models, open-licence models, and models posing systemic risk. Providers of GPAI models, and likely those who modify existing ones, must maintain detailed documentation, allow downstream users to understand the capabilities and limitations of the model, and compile and publish a summary of the content used for training. In addition, providers of models posing systemic risk must implement cybersecurity measures, conduct model evaluations, assess and mitigate risks, and document and report incidents.

Compliance will be monitored by the new AI Office, with non-compliance punishable by fines of up to 3% of total global turnover. Other relevant laws also apply to GPAI, including the GDPR for privacy, copyright law for text- and data-mining exceptions, and intellectual property law for disputes over training data and model output. The GPAI provisions enter into force 12 months after the Act's enactment.

USA

What can the US learn? In the City Journal, Samuel Hammond, senior economist at the Foundation for American Innovation, warned the US against following the EU's AI Act, arguing that its risk-based approach to AI deployment focuses too much on questions of fairness and too little on catastrophe.

The Act imposes strict obligations on high-risk AI systems, including pre-market approval in some cases, and creates legal exposure for non-compliance. Hammond considers the attention paid to general-purpose AI developers the Act's most sensible and targeted provision. However, he argues that the Act's unrealistic requirements and potential fines may deter American developers from releasing their latest models in the EU at all, and suggests that a smarter approach would focus only on truly catastrophic risks and on oversight of AI laboratories.

Maria Villegas Bravo, a research fellow at EPIC, has written an article that assesses the AI Act's strengths and weaknesses, breaking them down into "good," "bad," and "ugly."

In the "Good" section, Bravo praises the Act's bans on various intrusive AI applications and its recognition of the risks that certain algorithms pose to fundamental rights.

Meanwhile, the "Bad" highlights the gaps in regulating General-Purpose AI (GPAI) models and open-source software that can lead to violations.

Finally, the "Ugly" highlights the shortcomings in the treatment of biometric identification systems, particularly the exceptions carved out for use by law enforcement agencies. In light of this, Bravo suggests that the US take a different approach: because it lacks a robust fundamental-rights framework, it should avoid a harm-based model. Instead, she advocates comprehensive privacy legislation as the foundation for any future US AI regulation.

If you have questions about the legal aspects of AI regulation, please contact Avitar.


Contact us:

business@avitar.legal

Authors: Serhii Floreskul, Violetta Loseva

6.9.2024