Talking about artificial intelligence is commonplace. ChatGPT, Open Assistant, Bard and other closely related models constantly appear in our daily conversations. However, just because it is trendy does not mean that AI is something new. In fact, it has been around for many years, albeit in less impressive implementations than in large language models.
What is new is the exponential explosion in these systems’ capabilities and the democratization of artificial intelligence that comes with their emergence, from the dual perspective of users and developers. This is why the recent call by Elon Musk and other technologists to halt the development of “systems more capable than GPT-4” has caught me off guard. The Nuclear Non-Proliferation Treaty, which aims to prevent the spread of nuclear weapons, makes sense in a world where one cannot go to the mall and buy centrifuges to enrich uranium. However, anyone with minimal technical knowledge can rent space on a cloud server and start developing AI systems based on open-source libraries and publicly accessible training datasets. Therefore, banning the development of AI applications, leaving aside whether it would even be a good idea, does not seem feasible to me. Banning or restricting certain uses of AI is another question, and that is precisely what a future European law on AI intends, among other things.
Rather than trying to put gates on an open field, what needs to be done is to organize that field and draw red lines that cannot be crossed, while strengthening the innovative capacity of European universities and industries everywhere else. I realize that today there are more questions than answers in this field, and that determining what can and cannot be done is therefore a daunting task. However, although it is not easy and we still have much to learn about AI, the regulator’s task is to define the responsibilities of each of the actors involved in the life cycle of an AI system (because responsibility cannot be offloaded onto the machines), so that oversight agencies can intervene quickly and remove from circulation those who intend to use this technology irresponsibly. Regulation must combine a solid foundation with the flexibility to adapt to the rapid developments that are happening, and will continue to happen, in this area.
For this reason, my suggestion is that we approach the governance of AI in the same way the regulation of commercial aviation was approached in its day: with stringent international safety standards, whatever the cost, and with a continuous process of improvement and modernization in which professionals learn not only from accidents (which, fortunately, are increasingly rare in commercial aviation) but from every minor incident or error.
The Chicago Convention, which created the International Civil Aviation Organization (ICAO) nearly 80 years ago, established a framework for international governance and rigorous technical standards, which ICAO member states must transpose into law in their jurisdictions and with which airlines must comply to the letter if they want to fly across international borders. The regional or national supervisory authorities of ICAO member states (in the case of the European Union, the European Aviation Safety Agency, EASA) only grant their authorizations after a long series of certifications.
Pilots only obtain their licenses after very rigorous training, and the same is true, at their respective levels, of the mechanics who inspect the planes and the air traffic controllers. Airplanes can only be sold and put into operation after passing countless tests that check every nut and bolt. Even the solvency and competence of airline management teams are reviewed and approved, because an activity as high-risk as this cannot be left in the hands of a team that does not demonstrate sufficient ability and experience. This is pure common sense.
Obviously, an aviation enthusiast who builds his own plane is not the same as Airbus or Boeing manufacturing the commercial aircraft in which millions of passengers fly; a computer enthusiast who builds an AI system for their own use is not the same as a company or country deploying an AI system that will affect the lives of thousands or millions of people. In the latter case, regulation and the existence and adoption of standards are particularly important.
Comprehensive review of algorithms
It is worth pointing out that the commercial aviation approach is data-driven, since, of course, no one waits for a plane to crash to take a look at its black box. Airlines, under the supervision of inspectors and authorities, carefully analyze the slightest noise or anomaly in the data recorded by the systems during a flight. In other words, everything is processed and compared, over and over, so that flying is a very low-risk activity, and we all benefit from this virtuous ecosystem: those of us who board an airplane reach our destination safely, and the companies and professionals in the sector make a decent living. Note that, contrary to the usual rhetoric of technology companies, in commercial aviation safety is never treated as a cost or as a barrier to innovation, but as a condition without which the business itself would immediately cease to exist.
In this sense, I welcome the fact that the proposed regulation on artificial intelligence is based on the HACCP system and stipulates a series of requirements applicable to high-risk AI systems, and in particular to their providers, such as the obligation to prepare an EU declaration of conformity and to affix the CE marking. These certificates should, logically, complement data protection certificates, seals and labels, which must of course be applied with the same rigor to systems that process personal data. In addition, just as aircraft undergo mechanical inspections periodically or when certain parameters are flagged, AI systems must undergo regular mandatory audits in which the algorithms, and the data behind them, are reviewed by inspectors, as if they were the nuts and bolts of an aircraft, to ensure the system’s integrity and prevent accidents. Alleged intellectual property rights are no excuse for withholding this information: from a competitor, perhaps; from the inspector, never.
In short, if, as everything suggests, artificial intelligence continues to advance at an astounding rate in the coming years, replacing people and allowing machines to make decisions that affect our daily lives, then plain natural intelligence advises us to copy the successful model of those who make a living safely transporting people at thirty thousand feet and more than five hundred miles per hour.
Likewise, just as passengers can claim compensation in the event of a plane crash, there must be clear and specific rules on liability in the field of AI for when accidents happen, and happen they will, especially at the beginning. In this sense, I applaud the fact that a proposal on AI liability is being considered at the European level, although it would be desirable for the AI regulation itself to also include a right to compensation for damages caused by AI systems. Only then will effective compliance with and enforcement of AI law be ensured.
It is therefore within our power, now, while this sector is still in its infancy, to lay the foundations of a safe artificial intelligence, with harmonized international standards, that conveys the necessary confidence to citizens and contributes positively to the progress of humanity.
So, ladies and gentlemen, fasten your seat belts: artificial intelligence is coming.
Leonardo Cervera Navas is the Director of the European Data Protection Supervisor.