
IS IT NECESSARY TO REGULATE AI?

Artificial Intelligence is already impacting multiple areas of our daily lives, from voice recognition on our phones to data analysis in medical research. Its advanced technology offers countless advantages, such as task automation, efficiency, the ability to process large volumes of data, and the personalization of multiple services. However, these advantages come with challenges related to privacy, algorithmic bias, liability in the event of errors, and the impact on employment.


One of the most significant dangers of unregulated Artificial Intelligence is bias. We have already seen incidents that led to discrimination based on gender or race, among other factors. These systems learn from data, and if that data reflects existing prejudices in society, they can perpetuate or even exacerbate those biases. This can show up in hiring, lending, and judicial systems. Furthermore, without proper regulation, this technology could be used to collect, analyze, and share personal data without proper consent.
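
As a rough illustration of how such a bias can be surfaced, the sketch below computes the selection rate of a hypothetical hiring model for two demographic groups and compares them. The toy data, the group labels, and the 0.8 "four-fifths" threshold are illustrative assumptions for this sketch, not anything prescribed by a regulation.

```python
# Minimal sketch: measuring a simple fairness gap (selection-rate disparity)
# on a hypothetical hiring dataset. Data and threshold are illustrative only.
from collections import defaultdict

# Each record: (group, model_decision) where decision 1 = shortlisted
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

totals, positives = defaultdict(int), defaultdict(int)
for group, decision in decisions:
    totals[group] += 1
    positives[group] += decision

rates = {g: positives[g] / totals[g] for g in totals}
print("Selection rates:", rates)

# Ratio of the lowest to the highest selection rate; values well below 1.0
# suggest the model treats the groups very differently.
ratio = min(rates.values()) / max(rates.values())
print(f"Selection-rate ratio: {ratio:.2f} (flag if below ~0.8)")
```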

On the other hand, the question of liability arises in the event of errors or accidents caused by AI systems. Without a clear legal framework, determining who is responsible can be a challenge. Another problem arising from the use and development of this technology is malicious attacks or manipulation, which could have serious consequences in areas such as national security or critical infrastructure.



When it comes to combating these challenges, AI is no different from other high-impact technologies, such as the automobile or the internet. These also required regulation to prevent negative consequences. In this case, and given its rapid evolution, dynamic and adaptive regulation is undoubtedly required. In any case, in most countries, AI regulation is in its early stages. For example, in the United States, regulation has been more sectoral and largely dependent on individual states, although some federal frameworks exist in specific areas, such as privacy and discrimination.


At the global level, the challenge lies in balancing innovation with citizen protection. Excessive regulation could stifle innovation, while a lack of regulation could leave individuals unprotected. In this regard, the EU has been developing the so-called AI ACT for several years now, a joint effort between European regulatory bodies, businesses, Artificial Intelligence experts, and civil society. Its objective is to protect people's fundamental rights, ensure transparency in decision-making by AI systems, and establish adequate accountability and human oversight mechanisms, among other key issues.

The regulation does not intend to regulate the technology itself, as that would hamper its implementation and development in EU industry, but rather specific use cases that could pose a risk. To do so, it establishes a kind of "traffic light" system.

The AI Act traffic light system

In red, "prohibited," are systems used to randomly scan biometric data from social media or surveillance cameras to create or expand facial recognition databases.

Also, biometric categorization systems that use "sensitive characteristics" such as gender, race, ethnicity, religion, or political orientation, except for "therapeutic" use; those systems used for social scoring by public authorities (as is the case in China); predictive surveillance systems to assess the risk of an individual or group of individuals committing a crime or offense (based on profiling, the location of said individuals, or their criminal history); or remote "real-time" biometric identification systems in publicly accessible spaces for law enforcement, unless... Missing children, terrorist attacks, arrest warrant, prior judicial authorization.



The "orange traffic light" (the focus of most of the AI ACT) includes high-risk systems, considerations, and requirements. This would be any system whose implementation or development could have a negative impact on fundamental human rights, the health and safety of citizens, or the environment. This includes, for example, Generative Artificial Intelligence, if it has a significant impact on people's lives.

High-risk uses include Artificial Intelligence in:

  • Education and vocational training
  • Employment, workforce management, and access to self-employment
  • Access to and enjoyment of essential private services and public services and benefits
  • Migration, asylum, and border control management
  • Administration of justice and democratic processes, including systems that can influence voters in political campaigns
  • Recommendation systems used by social platforms

All of them must meet certain requirements, including:

  • High-quality data sets feeding the system to minimize risks and discriminatory outcomes
  • Activity logs to ensure traceability of results (see the sketch after this list)
  • Appropriate human supervision measures to minimize risk
  • A high level of robustness, safety, and accuracy, among other requirements.
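
As a rough illustration of the traceability and human-oversight points above, the sketch below wraps a hypothetical high-risk model so that every prediction is written to an append-only log with a timestamp, a model version, and a fingerprint of the input, and borderline scores are flagged for human review. The `credit_model` function, the log format, and the review threshold are illustrative assumptions, not requirements spelled out in the AI Act.

```python
# Minimal sketch of an audit trail for a hypothetical high-risk model.
# The model, fields, and log format are illustrative assumptions.
import hashlib
import json
import time

LOG_PATH = "prediction_audit.log"
MODEL_VERSION = "credit-scoring-0.1"  # hypothetical model identifier

def credit_model(features: dict) -> float:
    """Stand-in for a real model: returns a score between 0 and 1."""
    return min(1.0, max(0.0, features.get("income", 0) / 100_000))

def predict_with_audit(features: dict) -> float:
    score = credit_model(features)
    record = {
        "timestamp": time.time(),
        "model_version": MODEL_VERSION,
        # Hash of the input so the exact case can be traced later
        # without storing raw personal data in the log itself.
        "input_fingerprint": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
        "score": score,
        # Flag borderline scores for human review (human oversight).
        "needs_human_review": 0.4 <= score <= 0.6,
    }
    with open(LOG_PATH, "a") as log_file:
        log_file.write(json.dumps(record) + "\n")
    return score

print(predict_with_audit({"income": 52_000}))
```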

Those who fail to comply will face fines of up to €30 million or 6% of their global annual turnover, whichever is higher.

The "yellow light" would include systems with limited risk. For example, those designed to interact with individuals (chatbots) or even deepfakes. Here, we would have transparency obligations to avoid creating consumer confusion.


Finally, the "green light" would be reserved for purely automated systems that pose no risk, such as spam filters or the AI used in video games.
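
To tie the traffic-light idea together, here is a minimal sketch of how an organization might triage its own AI use cases against the four tiers described above. The example use cases and their tier assignments are simplifications for illustration only; the AI Act defines the categories in far more detail, and a real assessment would need legal review.

```python
# Minimal sketch: triaging AI use cases against the AI Act "traffic light" tiers.
# The mapping below is a simplified illustration, not the legal classification.
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "red"       # banned practices
    HIGH_RISK = "orange"     # allowed, but subject to strict requirements
    LIMITED_RISK = "yellow"  # transparency obligations
    MINIMAL_RISK = "green"   # no specific obligations

EXAMPLE_TIERS = {
    "social scoring by public authorities": RiskTier.PROHIBITED,
    "untargeted scraping of facial images": RiskTier.PROHIBITED,
    "cv screening for hiring": RiskTier.HIGH_RISK,
    "credit scoring for essential services": RiskTier.HIGH_RISK,
    "customer support chatbot": RiskTier.LIMITED_RISK,
    "spam filter": RiskTier.MINIMAL_RISK,
    "video game npc behaviour": RiskTier.MINIMAL_RISK,
}

def triage(use_case: str) -> RiskTier:
    # Unknown use cases default to high risk until properly assessed
    # (a conservative assumption for this sketch, not an AI Act rule).
    return EXAMPLE_TIERS.get(use_case.lower(), RiskTier.HIGH_RISK)

for case in ("spam filter", "CV screening for hiring", "customer support chatbot"):
    print(f"{case}: {triage(case).value}")
```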

Impact of the AI Act on companies

One of the main concerns about European regulation of this type of technology is its potential negative impact on initiatives by small and medium-sized enterprises. The legal requirements for high-risk systems are not easy to meet for companies with limited resources. Large companies, by contrast, are expected to hire specialized consulting firms to handle this task.

With the aim of helping Spanish SMEs, the State Secretariat for Digitalization and AI, as part of the National Strategy for AI (ENIA), is developing a Regulatory Sandbox that will produce specific technical guides for each article of the AI Act. These guides will help SMEs that use or develop high-risk AI systems understand and comply with the regulations more easily, using open-source tools. In any case, as explained above, only high-risk AI systems will have to meet these requirements.


In short, Artificial Intelligence presents immense transformative potential. However, like any powerful tool, it entails both opportunities and risks. Although efforts are underway to develop robust regulation, there is still a long way to go to ensure the safe and ethical use of this type of technology in our society. In any case, we must remember that the danger lies not in the tool itself (at least for now), but in the use we, as humans, make of it. The key is to increase public and professional awareness of the risks, and to use tools, from the very beginning, to prevent them.

The point is that, today, whether to use AI is no longer the question; what should be encouraged is its ethical and responsible use and development, by design, from the outset.

If you want to know more about Artificial Intelligence and cybersecurity, visit our blog.

