The Energy of AI

  • Venice
  • 30 October 2025

        The adoption of artificial intelligence is reshaping global competition along three intertwined axes: energy, technology and governance. The combined impact of the explosion in computational capacity, the unprecedented increase in electricity demand, and the need for coordinated policies calls for a systemic approach that links energy security, industrial competitiveness, fundamental rights, and social sustainability.

        The first variable concerns energy. The AI revolution requires continuous and programmable power. Some forecasts estimate an increase in data center electricity demand of up to 165% by the end of the decade relative to 2023 levels, with a potentially disruptive impact on grids, planning and investment. This growth can be absorbed more rapidly in flexible systems. In more centralized systems, however, robust policies to coordinate and accelerate permitting processes are needed; without them, bottlenecks will delay both infrastructure and industrial innovation projects.

        More robust, integrated and digitized electricity grids are therefore a prerequisite for managing the digital transition. They must deliver additional capacity, balance variable sources, and accommodate new technologies, starting with nuclear. Small modular reactors (SMRs) are often held up as a potential means of supporting ever-growing AI loads. However, the first commercial projects in the West will not arrive until 2029 at the earliest, in Canada, with more realistic estimates suggesting 2035.

        A crucial question in this scenario is whether AI can run on less energy. Successive generations of models have improved the energy efficiency of the underlying algorithms roughly a thousandfold, but the number of operations required has grown at an even faster pace. A decisive strand of research concerns integrating compute and memory to reduce the amount of data in transit, one of the largest sources of energy consumption. Alternative models such as DeepSeek show that lighter architectures can achieve competitive results even with limited computational power. Indeed, China is making a virtue of necessity in the face of the Trump Administration’s attempts to restrict the sale of advanced chips abroad. Another factor to consider is that the chip sector is the focus of sharp geopolitical tensions, with much production concentrated in a sensitive area such as Taiwan.

        The second critical axis is the adaptation of economic and social systems. Rigid labor markets slow the adoption of AI, while more flexible systems enable businesses to reorganize quickly. The transition requires continuing education, rapid retraining pathways and a review of educational models. AI, however, will not simply replace manual or repetitive tasks; it marks a shift towards an era in which work is no longer justified solely by its utility, and in which its intrinsic value becomes part of the social and cultural design of work.

        The third axis concerns governance. Three models are currently under discussion: the Chinese model, based on state control and the intensive use of data for social supervision; the US model, based on the market and on industrial concentration; and the European model, based on risk management and the protection of rights. The AI Act was a first step in setting limits and establishing values, but by itself it is not enough. There is no integrated policy framework to underpin the development of advanced technologies for Europe and in Europe. Dependence on data centers controlled by non-European players, even when they are located on the continent, risks supporting industries that do nothing to strengthen Europe’s technological sovereignty.

        From a technology governance perspective, deep learning poses a structural problem: the more powerful the model, the less interpretable it is. Neural networks create a kind of black box in which the logical pathway leading to a given algorithmic response cannot be reconstructed. This opacity poses immediate risks in areas where AI is already in use: from mortgage approvals to hiring and dismissal decisions. In this context, to paraphrase von Clausewitz, artificial intelligence is the continuation of epistemology by other means.

        In the cultural and information sectors, meanwhile, AI is amplifying both creativity and systemic risks. Automated content production, platform saturation, hallucinations and deepfakes threaten to undermine the collective ability to distinguish fact from fiction. This is the “liar’s dividend”: the weakening of trust becomes an advantage for authoritarian actors and a risk for democracy. Artificial intelligence can strengthen democratic control by enabling investigative journalism and data analysis, but it must be embedded in processes with a significant human presence that ensures oversight and control.

        In the medical field, technology offers extraordinary opportunities, thanks in part to the wide diffusion of wearable devices that provide significant amounts of information about patients’ health and condition. The value of AI lies not in replacing doctors, but in enhancing their judgment and treatment capabilities. However, applications in vulnerable contexts require stringent limits: ethical principles and codes of practice must be incorporated into the models used, and process transparency must be ensured.

        The infrastructure built in the coming years, in all fields involved in this revolution, will be crucial in ensuring that the potential offered by AI evolves in a positive direction. Without a credible system of global cooperation – or at least a global understanding of the limits, responsibilities, and use of tools such as the physical AI that will characterize robots and drones – the risk of conflict, misuse, and the loss of systemic control will increase. The direction of travel is not predetermined: it will depend on the collective ability to combine innovation, investment, regulation, and strategic vision.