
The upcoming Artificial Intelligence law

Advances in artificial intelligence (AI) systems will have a major economic and social impact and will bring benefits to every industry. However, these benefits and strengths of AI are not free of potential risks.

Take advantage of the opportunities of Artificial Intelligence

In this global context, Europe is firmly committed to seizing the opportunities of AI and addressing the challenges it presents, promoting its development and adoption in line with European values.

Given Europe's proven ability to create secure, reliable and sophisticated products, its strong position in digitised industry, its academic strength and its position in quantum computing, as well as the expected new wave of data (from 33 zettabytes in 2018 to a forecast of 175 zettabytes in 2025), this opens up the opportunity to turn Europe into a world leader in innovation in the data economy and its applications, able to produce reliable systems within an ecosystem of excellence across the entire value chain.

The ecosystem of trust that the Union wants to achieve in artificial intelligence rests on the ethical development of AI, grounded in respect for fundamental rights.

Achieving this ecosystem involves regulating the design, manufacture and sale of AI, guaranteeing a framework of legal certainty that benefits both users and the companies that produce, market and deploy these systems.

The complexity of the AI regulatory framework lies in the need to leave room for its future development while minimising the risks of harm that its use may entail. These risks mainly concern the application of rules designed to protect fundamental rights, cybersecurity with an impact on the physical world, and certain issues relating to civil liability.

While EU law is, in principle, fully applicable regardless of whether AI is involved, it is important to assess whether it can be adequately enforced to address the risks generated by AI systems, or whether specific legal instruments need to be adapted.

The Artificial Intelligence law in Europe

Improve the regulatory framework

The Commission considers it appropriate to improve the regulatory framework to address the following risks and situations:

  • Effective application and enforcement of existing EU and national legislation: The aim is to overcome the lack of transparency of AI that makes it difficult to detect and prove non-compliance with regulations.
  • Limitations on the scope of existing EU legislation: The lack of transparency (AI opacity) makes it difficult to detect and prove possible breaches of legislation, especially legal provisions that protect fundamental rights, assign responsibilities and allow claims for compensation.
  • Changes in the functionality of AI systems: The existence of updates may give rise to new risks that did not exist at the time the system was introduced to the market. These risks are not adequately addressed in current legislation, which focuses on safety risks at the time of marketing.
  • Uncertainty regarding the attribution of responsibilities between the different economic operators in the supply chain: In general, EU product safety legislation allocates responsibility to the producer of the product placed on the market, including all of its components, such as AI systems. However, these rules may become unclear when AI is added to the product, after it has been placed on the market, by someone other than the producer. Furthermore, EU product liability legislation regulates the liability of producers and leaves national liability rules to govern the other participants in the supply chain.
  • Changes in the concept of safety: The use of AI in products and services can give rise to risks that EU legislation does not explicitly address at present. These risks may be linked to cyber threats, personal safety (for example, in connection with new uses of AI, such as in home appliances), loss of connectivity, and so on, and may exist at the time the products are placed on the market or arise as a result of software updates and machine learning while the product is in use.

Specific legislation on Artificial Intelligence

In addition to adapting the existing framework, the Commission considers that new legislation specific to AI may be required in order to adapt the EU legal framework to current and future technological and commercial developments. For this new regulatory framework to be effective in achieving its objectives without being overly prescriptive, the Commission considers that a risk-based approach should be followed.

Thus, it considers that AI systems should be classified as high risk depending on what is at stake, taking into account whether both the sector and the intended use involve significant risks, particularly from the perspective of safety, consumer rights and fundamental rights.

Risks with Artificial Intelligence applications

More specifically, an AI application should be considered high risk when it meets both of the following criteria:

  • That the AI application is used in a specific sector where, given the nature of the activities normally carried out, significant risks can be expected to arise. The sectors covered should be specifically and exhaustively listed in the new regulatory framework; for example, healthcare, transport, energy and certain areas of the public sector. This list should be reviewed periodically and amended where appropriate in light of relevant developments in practice.
  • That the AI application in the sector in question is, in addition, used in such a way that significant risks are likely to arise. This second criterion reflects the recognition that not every use of AI in the sectors mentioned necessarily involves significant risks. For example, although healthcare may be a relevant sector, a failure in a hospital's appointment-scheduling system will not, in principle, entail a risk significant enough to justify legislative intervention. The assessment of the level of risk of a given use could be based on the impact on the affected parties: for example, AI applications that produce legal or similarly significant effects on the rights of an individual or a company; applications that pose a risk of injury, death or significant material or immaterial damage; applications whose effects natural or legal persons cannot reasonably avoid.

Notwithstanding the above, there may also be exceptional cases where, given what is at stake, the use of AI applications for certain purposes is considered high risk in itself, regardless of the sector concerned, and where the requirements set out below would still apply; for example, the use of AI applications in recruitment processes and in situations affecting workers' rights, or for remote biometric identification.
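To make the logic of this risk-based approach easier to follow, the sketch below expresses it as a small Python function: an application is treated as high risk when it falls within a listed sector and its specific use poses significant risks, or when its purpose is one of the exceptional per-se high-risk uses. This is a minimal illustration only; the sector list, the per-se uses and every name in the code are assumptions made for the example, since the Commission defines these criteria in legal rather than technical terms.

```python
# Purely illustrative sketch of the risk-based approach described above.
# The sector list, the per-se high-risk uses and all names are hypothetical
# assumptions for illustration; the actual criteria are defined legally,
# not programmatically, in the Commission's proposal.

# Hypothetical, exhaustively listed sectors where significant risks are foreseeable
HIGH_RISK_SECTORS = {"healthcare", "transport", "energy", "public sector"}

# Hypothetical uses considered high risk in themselves, regardless of sector
PER_SE_HIGH_RISK_USES = {"recruitment", "remote biometric identification"}


def is_high_risk(sector: str, use: str, use_poses_significant_risk: bool) -> bool:
    """Return True if an AI application would be treated as high risk.

    Two cumulative criteria: (1) the application is used in a listed
    high-risk sector, and (2) its specific use in that sector may give
    rise to significant risks. Exceptionally, some uses are high risk
    in themselves, whatever the sector.
    """
    if use in PER_SE_HIGH_RISK_USES:
        return True
    return sector in HIGH_RISK_SECTORS and use_poses_significant_risk


# A hospital appointment-scheduling system is in a listed sector, but its use
# does not in principle entail significant risk -> not high risk.
print(is_high_risk("healthcare", "appointment scheduling", False))  # False
print(is_high_risk("healthcare", "diagnosis support", True))        # True
print(is_high_risk("retail", "recruitment", False))                 # True (per-se use)
```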

Out of conviction about the importance and potential of AI to make Europe a world leader in innovation in the data economy and its applications, on May 7, 2021 the Commission published a proposal for a European Regulation establishing the first legal framework on AI. This is very good news for the scientific community and for European innovation.

You can become a professional in regulatory compliance and learn more about this area by training with the Master in Compliance and Data Protection Management at EIP. In just 11 months, you will be a highly qualified professional.

Lawyer specialized in IT/IP at Grupo SIA
