Key findings of the report
This report proposes five foundational steps that can be taken by governments to begin progress towards environmentally sustainable artificial intelligence (AI).
AI systems and services place high levels of demand on energy, water and critical materials – and the exact extent of this demand is often unclear. The recommendations within this report aim to reduce this unsustainable resource consumption and related environmental impacts.
If implemented, these foundational steps will help guide the UK, and other countries, towards a more sustainable future with AI. They can improve understanding of environmental impacts, enable more sustainable infrastructures, and support clear leadership from government to accelerate change.
This report is the first on AI sustainability from our Engineering Responsible AI programme. Further work will consider long-term interventions, and the capacity for AI systems and services to deliver societal benefit.
![](/media/cywhzxpc/code-on-a-screen-500x450.jpg)
Why do we need to talk about AI sustainability?
Recent expansions in the number of data centres, and the prevalence of AI tools and services, have heightened environmental risks. Data centres, and the AI systems they host, consume significant amounts of energy, water and critical materials. These rapidly growing resource demands could have far-reaching effects, such as creating competition for renewable energy or drinking water.
Recent government reforms outlined in the AI Opportunities Action Plan involve creating AI Growth Zones to build new infrastructures like data centres. To ensure these new infrastructures deliver a net benefit to the UK, policies to manage and monitor the environmental risks they pose are urgently needed – particularly as reliable data on how much resource these infrastructures consume is not currently readily available. Data centres and other AI infrastructure can be designed to use less energy, drinking water and critical materials, but doing so effectively and at scale will require access to resource use data.
In recent years, advances in AI systems and services have largely been driven by a race for size and scale, demanding increasing amounts of computational power. As a result, AI systems and services are growing in size at a rate unparalleled by other high-energy systems – and generally without much regard for resource efficiency. This is a dangerous trend, and we face a real risk that our development, deployment and use of AI could do irreparable damage to the environment.
Professor Tom Rodden CBE FREng FRS FBCS, Pro-Vice-Chancellor of Research & Knowledge Exchange and Professor of Computing, University of Nottingham and Chair of the working group
Foundational steps to sustainable AI
This report sets out five foundational steps that jurisdictions can take towards sustainable AI in the short term. These steps are also critical to better understanding and managing the environmental impacts of AI systems and services in the longer term.
What are the environmental impacts of AI?
Energy
Energy is required for all data centre operations, as well as for the production of data centres and their supporting equipment and infrastructures. While estimates of AI’s future energy demand currently vary, even the more conservative estimates project that future demand will strain local grids. More dramatic projections suggest that AI’s energy demand may be so significant that it disrupts efforts to reach net zero and causes new fossil fuel sources to be brought online.
According to the Clean Power 2030 Action Plan, current projections indicate that for the UK to reach its goal of at least 95% of energy generation coming from clean power, current wind and solar capacity must increase by a factor of approximately 2.7. Any additional demand increases the scale of this challenge.
Water
Water is a key resource used for cooling data centres during operation, and in the manufacture of computing hardware. Much of this water is withdrawn from potable (drinkable) sources. For instance, Google reported that approximately 78% of its global water withdrawals came from potable sources in 2023.
Because water must be drawn from local supplies, water consumption is primarily a local issue. Data centres generally withdraw and consume water from nearby sources, meaning that businesses and communities located in regions with a high concentration of data centres could be exposed to the risk of water scarcity.
Critical materials
The consumption of critical materials across the AI lifecycle creates both environmental and strategic risks. The hardware components that sit within data centres, networks, and user terminals, including both general-purpose and specialised AI components, are made up of a variety of critical materials, including antimony, gallium, indium, silicon, and tellurium.
Where recycled critical materials are not available, these materials must be sourced through mining. Mining operations cause environmental harms such as the loss of land (including carbon sinks), loss of habitat and biodiversity through direct displacement and chemical pollution, and drought and freshwater pressures that can impact local ecosystems. Critical materials are also frequently part of complex international supply chains that are vulnerable to disruption.
Conclusions and the future of AI
These foundational steps are intended to be implemented by jurisdictions in the short term to monitor and reduce the environmental impacts of AI systems and services. Together, they can enable a better understanding of environmental impacts, more sustainable infrastructure, and clear leadership from governments to accelerate change.
The foundational steps can also be leveraged to support the delivery of the AI Opportunities Action Plan. As noted in this report, there are a number of opportunities to ensure that delivery of the plan promotes environmental sustainability – such as using AI Growth Zones to implement new environmental sustainability requirements for data centres, or reviewing data from new environmental reporting mandates to inform future updates to the UK’s long-term compute strategy.
No jurisdiction will be able to solve AI sustainability on its own, but there is now an opportunity for leaders to establish themselves and to facilitate collaboration on this global challenge.
We welcome opportunities to collaborate with like-minded organisations that are passionate about engineering responsible AI and minimising its environmental risks, to collectively drive change.
![](/media/svkepyl4/eneni-in-a-meeting-with-other-engineers-this-is-engineering.jpg)
Contact us
For further information about this project, or to work with our digital infrastructure team on the issue of AI and sustainability, please contact the Royal Academy of Engineering team: [email protected]
Acknowledgements
The report has been developed by the Royal Academy of Engineering in partnership with the Institution of Engineering and Technology and BCS, the Chartered Institute of IT, under the National Engineering Policy Centre (NEPC).
Working group
This report was delivered by a National Engineering Policy Centre Working Group made up of the following experts.
Chair: Professor Tom Rodden CBE FREng FRS FBCS, Pro-Vice-Chancellor of Research & Knowledge Exchange and Professor of Computing, University of Nottingham
Professor Adisa Azapagic MBE FREng FRSC FIChemE, Professor of Sustainable Chemical Engineering, University of Manchester
Alex Bardell FBCS, Founder, SDAdvocate
Professor Ana Basiri, Professor of Geospatial Data Science, University of Glasgow
Professor Mandy Chessell CBE FREng FBCS HonFIED, Founder of Pragmatic Data Research Ltd and President of the Institute of Engineering Designers
Dame Dawn Childs DBE FREng FICE FIMechE FRAeS, CEO of Pure Data Centres Group
Professor Steve Furber CBE FREng FRS, Professor Emeritus in the Department of Computer Science, University of Manchester
Professor Sarvapali (Gopal) Ramchurn FIET, Professor of Artificial Intelligence, University of Southampton and CEO of Responsible AI UK
Professor Stephen Roberts FREng FIET, Professor of Machine Learning, University of Oxford
Dr Eve Schooler, Royal Academy of Engineering Visiting Professor of Sustainable Computing, University of Oxford
Professor Shannon Vallor, Baillie Gifford Chair in the Ethics of Data and Artificial Intelligence, University of Edinburgh
Dr Kommy Weldemariam, Director of Sustainability Science and Innovation, Amazon
Academy Staff
Eliot Gillings, Policy Advisor, Digital and Physical Infrastructure
Dr Alexandra Smyth, Head of Policy – Infrastructure and Resilience
![](/media/kwcb0qzu/bcs-2021.png)
![](/media/zmphaxns/iet.png)