The future of Artificial Intelligence
Artificial Intelligence (AI) is on the front page of newspapers almost daily. It is the trend of the moment, the latest development that every provider and market player feels compelled to mention. The AI hype, we might say, provokes excitement and fear at the same time. And while this author believes AI will be (and, in fact, already is) fundamental to the future of any profession, both for understanding behavioral patterns and for accelerating and multiplying what our human limits allow us to achieve, it is also true that we must attend to the needs of the societies where we live and work, and celebrate some of the decisions the world's governments are making.
A few months ago, the sitting President of the United States, Joe Biden, signed an Executive Order (EO) to establish "new standards" for AI safety. In practice, this means that developers must notify the government about their AI models and share the results of all safety tests before the algorithm and AI in question ever see the light of day. This paves the way for a rapidly evolving market and represents a kind of "voluntary" code of conduct for providers of this technology.
There are other initiatives that aim to bring peace of mind and security to citizens around the world. For example, the United Nations (UN) announced a new meeting to explore AI governance (that is, direction, strategy, metrics, performance, and so on), and the European Union has drafted the Artificial Intelligence Act with members of the European Parliament, agreeing on an AI rulebook and a list of high-risk AI vendors to be monitored.
The goal of all this is to protect citizens from the potential risks of AI systems; in the United States, for example, the measure aligns with (and extends) the Defense Production Act of 1950. The recent Executive Order explicitly states that "the measures will ensure that AI systems are safe and trustworthy before companies make them public."

Society already uses AI (through one of its subsets, Machine Learning, and more specifically unsupervised and reinforcement learning) in a wide range of industries and initiatives. Not only are we as a society better able to detect threats, but different industries will amplify their reach by using algorithms designed not as multipurpose tools but for a single task. The same is true of the application security market, where AI improves the detection of errors in code and suggests more robust and capable applications. Furthermore, the use of AI/ML in privacy and data protection to "automagically" protect data, depending on a series of variables and sources of information, is already a reality. The use of Large Language Models (LLMs) is imperative in this new era of technology, and many companies are strongly embracing this dimension.
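To make the unsupervised-learning idea mentioned above a bit more concrete, here is a minimal, illustrative sketch of anomaly-based threat detection using scikit-learn's IsolationForest. The feature choices (request rate and payload size), the synthetic data, and the thresholds are assumptions for illustration only, not a description of any particular vendor's system.

```python
# Minimal sketch: unsupervised anomaly detection with scikit-learn's IsolationForest.
# All feature names and numbers below are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated "normal" activity: e.g. requests per minute and payload size per session.
normal_traffic = rng.normal(loc=[100, 500], scale=[10, 50], size=(1000, 2))

# A handful of outliers standing in for suspicious sessions.
suspicious = np.array([[400.0, 2500.0], [5.0, 9000.0], [350.0, 3000.0]])

# Fit on traffic assumed to be mostly benign; no labels are required.
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal_traffic)

# predict() returns 1 for inliers and -1 for anomalies.
print(model.predict(suspicious))          # expected: [-1 -1 -1]
print(model.predict(normal_traffic[:3]))  # mostly 1s
```

The point of the sketch is that the model learns what "normal" looks like from unlabeled data and flags deviations, which is the essence of how unsupervised learning supports threat detection.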
Ultimately, regulating AI is imperative because of the speed, variety, and volume of change it represents, and because of the need to protect and defend the rights of all people with regard to data collection and labeling, as well as access to truthful, complete, and unmanipulated information. That is the regulators' commitment, and we certainly hope they know how to fulfill it, and that they do.
Find out all the latest news about artificial intelligence on our Blog and train in Artificial Intelligence with our Master in Artificial Intelligence.