The standards framework for the evolution of artificial intelligence (AI), in all its many applications, is necessarily established at the European level. Despite recent legislation (the AI Act) and efforts underway to expand and update it, it must be acknowledged that a single continental digital market has yet to be created. This, on its own, makes it impossible to accomplish the strategic goal of “digital sovereignty”. Moreover, competitiveness in this sector also calls for a strong presence in the hardware field (especially in microelectronics) to keep up with the rapid progress underway in AI. Another priority is to incentivize the adoption of AI systems by businesses, including small and medium-sized ones.
The fundamental criteria adopted by the EU for the regulation and orientation of AI development basically boil down to innovation, transparency and safety. A balanced approach is needed, yet it is hard to put into practice, given the rapid and exponential evolution of some technologies and their need for constant monitoring, adjustment and updating.
The EU is the largest digital market in the world, despite its less than prominent representation among the sector’s leaders. This is due at least in part to the fact that the major players thus far are the same ones (for the most part US-owned) that developed the digital ecosystem of the internet, where they have maintained a dominant role. Successfully closing the gap calls for a combination of public and private investments, in addition to transatlantic agreements – possibly in the context of the G7 – to create an international environment favorable to the entry and growth of new companies.
The speed of the changes imposed by these technologies is, in any case, an intellectual, cultural and organizational challenge; moreover, it necessitates intervention at the level of education and training in order to enable users to move safely in more advanced digital environments. Guiding the disruptive introduction of AI calls for training, which must naturally begin at the compulsory school level in order to provide the general knowledge base and simple tools necessary to at least contain the risks associated with the technology’s use and to facilitate widespread digital literacy.
Another emerging aspect of this complex governance challenge has to do with the massive amounts of energy required by the most advanced digital networks. The issue calls for addressing the relationship between digital technology and the environment, with a view to making both the technologies and the ways they are used more sustainable from every standpoint.
At the same time, the business world is warning that regulatory uncertainty is an obstacle to the adoption of AI, especially by small and medium-sized enterprises; even for larger ones, an excess of legal requirements can lead to considerable additional costs. Some observers point to proportionality as a means of avoiding this problem, i.e., ensuring that compliance requirements are proportionate to the process or product being regulated.
The basic underlying problem here is the relationship between governments and “big tech”; in other words, the interweaving of strategic political leadership decisions with the legitimate interests of the major corporations that end up defining even the context of competition between global powers, with inevitable repercussions on security and on conflicts (ongoing or potential).
From a strictly military standpoint, the true qualitative leap concerns the diffusion of autonomous systems and of software capable of gathering enormous amounts of data (even encrypted) and of disseminating them in order to damage an adversary. These two factors are indeed capable of triggering a paradigm shift in strategic competition, and they pose a specific threat to liberal democratic governments and open societies, which are reluctant to discount the human factor in the use of weapons. It is therefore necessary to strike a delicate balance between the efficacy of weapons systems and their safety in terms of human control.
After all, both domestic and international security needs necessitate a holistic approach to the many implications of AI: law and ethics – inasmuch as they are cultural creations inseparable from impulse as well as from rationality – will therefore be an integral part of the management of new technologies.