Royzz & Co

Setting the Stage: European Artificial Intelligence Act 2024 Draws Global Attention

ORIGIN: 

 

Since 2016, numerous AI ethics guidelines have been published with the aim of maintaining social control over the technology. The Artificial Intelligence Act was proposed by the European Commission on 21 April 2021, principally to foster AI innovation while managing the risks posed by AI.

 

EVOLUTION: 

 

In February 2020, the European Commission published the "White Paper on Artificial Intelligence – A European approach to excellence and trust." This was followed by debates among EU leaders in the European Council in October 2020. In April 2021, the European Commission proposed the Artificial Intelligence Act. On December 6, 2022, the Council adopted its general approach to the AI Act. In June 2023, the European Parliament adopted its negotiating position on the draft AI Act, leading to final negotiations between the Council, Commission, and Parliament from June to December 2023, with the Council and Parliament reaching a provisional agreement on the proposed rules on December 9, 2023. The Act was subsequently passed by the European Parliament on March 13, 2024, and approved by the EU Council on May 21, 2024.

 

SALIENT FEATURES: 

 

1. Enhancing Transparency and Accountability. 

The EU AI Act is designed to ensure that AI systems provide transparency and accountability. This helps AI developers conduct safe and secure risk assessments, thereby preventing the misuse of AI technologies.

 

2. Future-Proof Legislation. 

The EU AI Act is designed to be future-proof, allowing it to adapt to the rapid pace of technological change in the AI field. It offers numerous advantages for innovation by creating a harmonized regulatory framework, supporting SMEs and start-ups, encouraging investment, promoting ethical AI, facilitating international collaboration, incentivizing high-quality AI development, reducing barriers to market entry, and enhancing transparency and accountability.

 

3. This Act prohibits AI systems that:

  • Deploy subliminal, manipulative, or deceptive techniques to distort behaviour. 

  • Exploit vulnerabilities related to age or disability, causing significant harm.

  • Perform social scoring by evaluating the social behaviour of individuals.

  • Assess the risk of an individual committing criminal offenses solely on the basis of profiling or personality traits.

  • Compile or expand facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.

  • Infer emotions in workplaces or educational institutions, except for medical use or safety systems. 

  • Utilize biometric categorization systems based on sensitive attributes.

  • Utilize 'real-time' remote biometric identification (RBI) in publicly accessible spaces for law enforcement.

 

4. This Act categorizes AI systems by the level of risk they generate:

  • Unacceptable risk – This represents the highest level of risk. AI applications in this category are prohibited because they manipulate human behaviour or use real-time remote biometric identification, such as facial recognition in public spaces.

  • High risk – AI applications in this category pose significant threats to health, safety, the fundamental rights of persons, or the environment.

  • Limited risk – AI systems in this category have transparency obligations, ensuring users are informed that they are interacting with an AI system and allowing them to make informed choices. 

  • Minimal risk – This is the lowest risk level defined by the EU AI Act. AI systems in this category do not fall into any of the above categories and have no specific restrictions or mandatory obligations. However, they must adhere to general principles such as human oversight, non-discrimination, and fairness. Examples include AI-enabled video games and spam filters.

 

5. Harmonized Regulatory Framework: 

The EU AI Act establishes a harmonized set of rules and standards across all EU member states, encouraging innovation in a consistent manner. It ensures that AI developers can focus on innovation rather than navigating a patchwork of national regulations.

 

6. Regulatory Sandboxes: 

The introduction of regulatory sandboxes for AI innovation is a major advantage of the EU AI Act. These sandboxes provide a controlled environment in which AI developers can test their technologies under real-world conditions without the immediate burden of full regulatory compliance. This approach particularly benefits start-ups and small and medium-sized enterprises, allowing them to experiment with and refine their AI models.

 

7. Support for SMEs and Start-Ups: 

The EU AI Act contains specific provisions aimed at supporting SMEs and start-ups. Recognizing the financial and operational constraints faced by these smaller entities, the Act mandates that national authorities provide testing environments and other resources to help them develop and refine their AI technologies. These measures support smaller entities in AI innovation and reduce their administrative and financial burdens.

 

8. Encouraging Investment in AI: 

The Act creates a harmonized regulatory environment that encourages investment in AI technologies under a clear set of rules. This clarity benefits investors considering funding AI start-ups and projects. The Act also promotes a trustworthy and ethical AI ecosystem, reducing risks and enhancing investor confidence in AI.

 

9. Promoting Ethical and Trustworthy AI: 

The EU AI Act plays a strong role in promoting the ethical and trustworthy development of AI technologies. This is achieved by imposing strict requirements for transparency and accountability. The Act aims to ensure that AI systems are developed in a manner that respects fundamental rights and societal values, thereby enhancing public trust in AI technologies.

 

10. Facilitating International Collaboration: 

The EU AI Act aims to position the EU as a global leader in AI governance. This approach facilitates international collaboration on AI research and development, enabling the EU to play a proactive role in shaping global AI standards and practices. 

 

11. Incentivizing High-Quality AI Development: 

The EU AI Act strongly encourages the development of high-quality AI through strict requirements for high-risk AI applications. These requirements promote the use of high-quality datasets, robust risk-management systems, and the comprehensive documentation necessary for the advancement of AI technologies.

 

12. Reducing Barriers to Market Entry: 

The Act provides a clear set of rules and regulations to assist new AI developers entering the market. It lowers barriers to market entry for new developers, stimulating competition and innovation in the AI market.

 

CONCLUSION: 

 

The European Commission aims to balance the promotion of AI development and innovation with the effective management of emerging risks. The Act seeks to ensure that AI systems placed on the EU market are safe and respect fundamental rights and values. It provides legal certainty to facilitate investment and innovation in AI systems, while striving to develop a unified EU market for lawful, safe, trustworthy, and transparent AI applications, thus preventing market fragmentation.

 

The EU AI Act uniquely classifies AI systems based on risk and seeks to reduce that risk by prohibiting AI systems posing unacceptable risk. Each level of risk is subject to a different degree of regulation and obligations. The Act also aims to prevent the use of algorithmic video surveillance techniques that might influence or distort a person's behaviour to their detriment or that of others, minimizing unfair or inadequate outcomes.
