The argument that regulation is lagging behind rapid technological development is best illustrated by the debate surrounding the EU's Artificial Intelligence Act (AI Act). Like many other laws governing digital products, services, and technologies, the AI Act is being debated "on the fly", while programs and systems that use artificial intelligence (AI) are already operating actively in the market.
The AI Act is the first comprehensive AI law proposed by a major regulator anywhere in the world. The draft law assigns AI applications and systems to risk categories. Applications and systems that pose an unacceptable risk (such as China's state-run social scoring system) are banned outright. High-risk applications (such as a resume-scanning tool that ranks job applicants) are subject to specific legal requirements. Applications that are neither banned nor classified as high-risk remain largely unregulated for now.
The draft would ban applications that use subliminal techniques, exploit vulnerable groups, or enable remote biometric identification. Its central idea is that an AI system should be transparent enough for the user to understand what happens to their data, how the system is used, what benefits it brings, and how it affects their security.
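The draft's tiered logic can be pictured as a simple classification scheme. The sketch below is purely illustrative, assuming a minimal three-tier mapping based on the examples above; it is not the Act's actual legal taxonomy, and the example applications are the ones named in this article.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative risk tiers loosely modeled on the draft AI Act."""
    UNACCEPTABLE = "banned outright"
    HIGH = "subject to specific legal requirements"
    MINIMAL = "largely unregulated for now"

# Hypothetical mapping of example applications to tiers,
# following the article's examples rather than the legal text.
EXAMPLES = {
    "state-run social scoring system": RiskTier.UNACCEPTABLE,
    "resume-scanning tool that ranks applicants": RiskTier.HIGH,
    "spam filter": RiskTier.MINIMAL,
}

for app, tier in EXAMPLES.items():
    print(f"{app}: {tier.value}")
```

The point of the tiered design is that obligations scale with risk: the same law can prohibit one system outright while leaving another untouched.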
Of course, AI systems influence what information a user sees online, predict what content may be of interest, collect and analyze data about users to enforce laws or personalize advertising, and are used to diagnose and treat diseases. In other words, AI affects many aspects of the user's life, and in this regard, the bill requires companies to publish instructions and technical documentation before the system is launched.
Like the EU General Data Protection Regulation (GDPR), which took effect in 2018, the AI Act could become a global standard defining the extent to which AI is allowed to affect a user's life, regardless of where that user is located.
The AI Act is already causing an international stir. In late September 2021, the Brazilian Congress passed a bill creating a legal framework for artificial intelligence, and in March of this year ChatGPT suffered a large-scale data leak, after which the Italian data protection regulator blocked ChatGPT and raised claims against OpenAI. Both claims (unlawful data collection and the absence of an age-verification system for minors) concern serious violations of the GDPR. This and other high-profile cases have prompted EU watchdogs to coordinate their efforts more closely and take another look at the provisions of the proposed AI Act.
Civil society organizations, lawyers, business representatives, and virtually all other participants in the discussion agree on one thing: the proposal contains loopholes and exceptions that limit its ability to ensure AI benefits users. For example, facial recognition by the police is prohibited unless the images are captured with a delay or the technology is used to find missing children. The law is also inflexible: if a dangerous AI application appears in an unforeseen sector two years from now, the law provides no mechanism to designate it as high-risk.
The Future of Life Institute (FLI), an independent non-profit that aims to maximize the benefits of technology and reduce its risks, has shared its recommendations on the AI Act with the European Commission. It argues that the Act should ensure AI providers consider the impact of their applications on society as a whole, not just on individuals: applications that cause little harm to any one person can cause significant harm at the societal level. For example, a marketing application used to influence citizens' voting behavior could affect the outcome of an election.
The Centre for the Future of Intelligence and the Centre for the Study of Existential Risk, two leading institutions at the University of Cambridge, have also submitted feedback on the proposed bill to the European Commission. They hope the Act will help set international standards that secure the benefits and reduce the risks of AI. One of their recommendations is to allow changes to be proposed to the list of restricted and high-risk systems, which would make the regulation more flexible.
Access Now, an organization that protects and extends users' digital rights, has also provided feedback on the AI Act. It is concerned that the law in its current form will not achieve its goal of protecting users' fundamental rights. Although the current draft imposes transparency obligations on biometric applications such as emotion recognition and AI polygraphs, Access Now recommends stricter measures to reduce the risks.
The Future Society, a non-profit organization registered in Estonia that advocates the responsible adoption of AI for the benefit of humanity, has presented its feedback to the European Commission. One suggestion is to ensure that governance keeps pace with technological developments, which can be achieved by improving the exchange of information between national and European institutions and by systematically collecting and analyzing incident reports from Member States.
Experts note that the AI Act does not always accurately identify the errors and harms associated with different types of AI systems, and does not properly allocate responsibility for them. They also argue that the proposal provides no effective basis for enforcing legal rights and obligations, and that it pays too little attention to transparency, accountability, and public participation rights.
The Center for Data Innovation, a non-profit organization dedicated to data-driven innovation, has published a report claiming that the AI Act will cost €31 billion over the next five years and reduce investment in AI by almost 20%.
The AI Act is part of the new European digital strategy, which, in addition to AI, covers other aspects of the digital world. In particular, the AI Act provides for the creation of new regulators, though it is not yet clear how the new and existing regulators will interact.
Like the GDPR, the AI Act will apply to any company planning to enter the EU market. Fines of up to EUR 30 million or up to 6% of annual turnover, whichever is greater, are envisaged (mirroring the GDPR's approach).
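The "whichever is greater" rule means the effective fine ceiling depends on the company's size. A minimal sketch of that arithmetic, assuming the figures quoted above (EUR 30 million or 6% of annual turnover):

```python
def ai_act_fine_cap(annual_turnover_eur: float) -> float:
    """Upper bound of a fine under the draft AI Act:
    EUR 30 million or 6% of annual turnover, whichever is greater."""
    return max(30_000_000.0, 0.06 * annual_turnover_eur)

# Below EUR 500 million in turnover, the fixed EUR 30 million cap dominates;
# above it, the 6% share takes over.
print(ai_act_fine_cap(100_000_000))    # EUR 100M turnover -> 30000000.0
print(ai_act_fine_cap(1_000_000_000))  # EUR 1B turnover  -> 60000000.0
```

The crossover point sits at EUR 500 million in turnover, where 6% equals the EUR 30 million floor, so for large companies the percentage-based cap is the binding one.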
There is also a view that the calls and open letters from some leaders of the digital world to pause or slow AI development are the "voice" of those who have not yet managed to profit from AI systems, and that this is why these companies (unlike in other situations) are now calling for stricter regulation of AI. One could, of course, remind them whose side they were on when they violated the GDPR and fought the fines imposed on them. But…
In any case, the EU Parliament, Council, and Commission plan a final debate on the AI Act as early as May of this year. Time will tell whether it will really be "final".
If you have questions about AI regulation, please contact Avitar.
Contact us:
business@avitar.legal
Violetta Loseva
Serhii Floreskul