UK's AISI forms international alliance to safeguard against risks from cutting-edge artificial intelligence
The UK's Alignment Project Aims to Ensure Advanced AI Systems Remain Under Human Control
The UK's AI Security Institute (AISI) has launched The Alignment Project, a £15 million international initiative to fund and accelerate research in AI alignment, ensuring advanced AI systems behave predictably, safely, and in accordance with human values as their capabilities grow [1][4][5].
The Alignment Project takes a three-pronged approach: grant funding, compute access, and venture capital for startups. Governments, philanthropists, and venture firms are invited to join the project by funding research, offering compute resources, or investing in alignment startups [2]. AWS will provide critical infrastructure support by offering free cloud credits for alignment experiments [3].
The project builds on a network of AI safety institutes, including those of Canada, Japan, and the EU, agreed at the Seoul AI Summit in May 2024 to share safety research and standards [4]. Other partners for The Alignment Project include Anthropic, Amazon Web Services (AWS), Canada's AI Safety Institute and CIFAR, Schmidt Sciences, the UK's Advanced Research and Invention Agency (ARIA), and civil society groups [2].
The objectives of the Alignment Project include developing tools and techniques to keep future AI systems beneficial, reliable, and aligned with human intent. The project aims to address the urgent challenge of AI systems acting in harmful or unexpected ways due to misalignment [1]. Grants of up to £1 million per project will be awarded to support groundbreaking AI alignment research [1].
The Alignment Project reflects a global consensus on the critical importance and under-resourcing of AI alignment research in the face of rapidly advancing AI capabilities [1]. Geoffrey Irving, Chief Scientist at the AI Security Institute, has stated that AI alignment is one of the most urgent and under-resourced challenges of our time [1].
The launch of The Alignment Project comes as generative AI reshapes industries such as enterprise software, healthcare, and education. Experts are concerned that current controls may not be sufficient to manage future risks associated with advanced AI [5]. The project seeks to prevent powerful AI systems from behaving unpredictably or beyond human control, safeguarding national and global security [1][3].
Peter Kyle, Secretary of State for Science, Innovation and Technology, also backs the project, which is focused on studying and mitigating the risks of advanced AI [1]. Misaligned, highly capable AI systems could act in ways that are beyond human control, with significant global implications [5].
[1] The Alignment Project
[2] The Alignment Project Partners
[3] AWS Support for The Alignment Project
[4] Seoul AI Summit Agreement
[5] The Importance of AI Alignment
- The Alignment Project aims to ensure that advanced artificial-intelligence systems remain under human control and aligned with human values as their capabilities grow.
- To that end, The Alignment Project partners with organizations such as Anthropic, Amazon Web Services (AWS), and the UK's Advanced Research and Invention Agency (ARIA) to study and mitigate the risks of AI systems behaving unpredictably or beyond human control.