
How AI is regulated in different countries

Generative artificial intelligence is developing so quickly that the lack of legal and regulatory frameworks has become an acute problem. Countries around the world have stepped up efforts to develop AI legislation, but different regions are at different stages of regulatory development.

We recently wrote about how Ukraine's Ministry of Digital Transformation is implementing a new approach to regulating AI technologies and has presented a road map for how artificial intelligence will be regulated in Ukraine.

So what is happening with the regulation of generative AI in other countries?

European Union

The European Parliament is currently working on an AI bill (the AI Act), and efforts to regulate artificial intelligence in the region have been underway for several years.

The AI regulatory system proposed by the European Commission provides for four levels of risk:

Minimal or no risk - systems such as AI-enabled video games or spam filters, which may use generative AI.

Limited risk - systems such as chatbots; users must be clearly informed from the outset that they are interacting with such a system.

High risk - the use of generative AI in critical infrastructure such as transportation, as well as in education, law enforcement, recruitment and selection, and robotics in healthcare.

Unacceptable risk - all systems that pose a clear threat to the safety and rights of citizens, such as social scoring and voice assistants that encourage harmful behavior.

California, USA

Regulating AI innovation in Silicon Valley will be an ongoing challenge for regulators, as California is home to companies such as OpenAI and Google and attracts major AI investors such as Microsoft.

To address this pressing problem, state regulators plan to develop a sweeping AI proposal.

Building on the national AI Bill of Rights, the proposed legislation aims to prevent discrimination and harm in sectors such as education, utilities, healthcare, and financial services. As a safeguard, developers and users would submit annual impact assessments to the state's Civil Rights Department (California Civil Rights Department), detailing the types of automated tools involved. In addition, developers would be required to implement a governance framework describing how the technologies are used and their possible consequences.

United Kingdom

For now, the regulation of generative AI in the UK remains a matter for regulators in the industries in which it is used, and there are no plans for a general law that goes beyond the GDPR. In official statements on the subject, the government has chosen a "pro-innovation approach," and the country aims to take a leading position in the global AI race. However, questions remain about the extent to which generative AI remains exposed to risks such as system failure, misinformation, and bias.

To help reduce these risks, the UK government has published an Impact Assessment document that aims to identify appropriate and fair regulation of AI developers. This measure is part of the wider National AI Strategy, whose summary acknowledges a number of market failures (information asymmetry, misaligned incentives, negative externalities, regulatory inefficiencies) that mean the risks associated with AI are not being adequately addressed. The government intends to propose a corresponding cross-sectoral regulatory regime. Its stated goals include stimulating the growth of small and medium-sized AI enterprises, increasing public trust, and maintaining or improving the country's position in the Stanford Global AI Index.

Canada

AI in Canada is currently governed by a patchwork of data privacy, human rights, and intellectual property laws that vary by province. The Artificial Intelligence and Data Act (AIDA), however, is not expected to come into force before 2025.

In the meantime, Canada is developing a framework for managing the risks and "pitfalls" of generative AI, as well as other applications of the technology, aimed at encouraging a responsible approach.

According to the Canadian government, this risk-based approach is aligned with similar rules in the US and the EU; the government plans to rely on existing Canadian consumer protection and human rights legislation, while recognizing that "high-impact" AI systems must comply with sector-specific legislation on human rights and safety.

Six main areas of obligation have been identified that high-impact systems must meet:

  • accountability;
  • fairness and equity;
  • human oversight and monitoring;
  • safety;
  • transparency;
  • validity and robustness.

Australia

The 2023 federal budget announced the creation of the Responsible AI Network, as well as the allocation of AU$41.2 million (US$26.9 million) for the responsible deployment of various AI technologies in the country. Regulators are seeking to allay concerns that ChatGPT and Bard could "cause harm to society."

In addition, regulators are debating whether to amend broader data protection law to address the lack of transparency that can arise when training AI models. The use of analytics and biometric data to train models is also under discussion and may require additional privacy rules.

Brazil

AI regulation in Brazil has moved forward with the development of a legislative framework that was approved by the government in September 2022 but has been widely criticized for being too vague.

Following ChatGPT's release in November 2022, and after discussions among stakeholders, a report detailing recommendations for AI regulation was sent to the Brazilian government. The study, prepared by legal and academic experts, company executives, and members of the national data protection authority ANPD, covered three main areas:

  • citizens' rights - including "preventing discrimination and correcting direct, indirect, unlawful, or abusive discriminatory biases" and providing clarity about when users are interacting with AI;
  • risk categorization - establishing and recognizing levels of risk to citizens, with "high risk" covering essential services, biometric verification, and employment, and "excessive risk" covering the exploitation of vulnerable groups and social scoring;
  • governance measures and administrative sanctions - how exactly companies that break the rules will be punished, with recommended fines of 2% of revenue for moderate non-compliance and fines of up to US$9 million for serious harm.

China

The Cyberspace Administration of China (CAC) is currently drafting rules to regulate consumer-facing AI-based services, including bots. Under the proposal, Chinese technology companies would have to register their generative AI models before releasing products to the public.

According to the draft, the evaluation of these products will take into account the "legitimacy of the source of pre-training data," and developers must demonstrate that their products comply with the "core values of socialism." Products will be prohibited from using personal data for training purposes and must require users to verify their real identity. In addition, AI models that disseminate extremist, violent, or pornographic content, or messages calling for the "overthrow of state power," will be considered in violation.

India

The Indian government has announced that it will take a "soft approach" to AI regulation in order to support innovation in the country, and there are currently no plans to regulate AI specifically. At the same time, the Ministry of Electronics and Information Technology, while declining to regulate AI development, called the technology sector "significant and strategic" and said it would put in place policies and infrastructure measures to combat bias, discrimination, and ethical issues.

South Korea

South Korea's AI law is currently in its final stages of development. The bill as it stands would allow anyone to create new AI models without prior government approval, while systems deemed "high-risk" to the lives of citizens would be required to earn lasting trust.

The draft law emphasizes domestic innovation while taking ethical standards into account, and businesses that use generative AI would receive state support for taking a responsible approach to system development.

In addition, the Personal Information Protection Commission announced plans to create a working group to rethink the protection of biometric data in light of the development of generative AI.

We continue to monitor developments in AI and its regulation and will keep you informed about legislative news and details from various countries.

If you have any questions about AI regulation, please contact us.


Contact us:

business@avitar.legal

Authors: Serhii Floreskul, Violetta Loseva

6.4.2024 18:35