UK (Parliament Politic Magazine) – The recently concluded Global AI Safety Summit at Bletchley Park marked a historic moment as the world's first AI Safety Institute was launched, garnering support from leading AI companies and nations.
Over the course of four months, the UK government meticulously assembled a team tasked with evaluating the safety risks associated with cutting-edge AI models, often referred to as ‘frontier’ AI. The Frontier AI Taskforce is set to transform into the AI Safety Institute, under the continued leadership of Ian Hogarth.
Launch of the World’s First AI Safety Institute in the UK
The Institute will benefit from guidance provided by an external advisory board composed of prominent figures in fields ranging from national security to computer science, ensuring comprehensive expertise for this groundbreaking global initiative.
The AI Safety Institute will conduct comprehensive pre- and post-release assessments of emerging frontier AI models, aiming to address the potential risks associated with their capabilities. These assessments encompass a wide spectrum of risks, from societal issues like bias and misinformation to the most unlikely but catastrophic scenarios, such as humanity losing control of AI.
In its research endeavors, the AI Safety Institute will collaborate closely with the Alan Turing Institute, the national institute for data science and AI.
Global leaders and major AI corporations have expressed wholehearted support for the Institute's mission. Distinguished researchers from the Alan Turing Institute and Imperial College London have welcomed its establishment, as have industry bodies including techUK and the Startup Coalition.
The AI Safety Institute’s Mission: Ensuring Safe Testing of Emerging AI Technologies
Furthermore, the UK has already established partnerships with the US AI Safety Institute and the government of Singapore, fostering cooperation on AI safety testing.
Prime Minister Rishi Sunak stated, "Our AI Safety Institute will serve as a pivotal global hub for AI safety, taking the lead in critical research concerning the capabilities and risks associated with this rapidly advancing technology."
With powerful new AI models, whose capabilities may not yet be fully understood, expected to be released in the coming year, the AI Safety Institute's primary mission is to swiftly establish robust processes and systems for testing them comprehensively before they reach the market. This includes open-source models.
Global Collaboration for AI Safety: Governments and Tech Companies Unite
Beyond its immediate testing role, the Institute's research will inform policymaking in the UK and internationally. Additionally, it will provide technical tools for governance and regulation, such as the ability to analyze the data used to train these systems in order to identify and mitigate bias.
This proactive approach by the government aims to ensure that AI developers do not act as their own evaluators in matters of safety. A joint statement by governments and AI companies emphasizes the pivotal role both parties must play in the rigorous testing of the next generation of AI models.
The nations represented at Bletchley Park have also committed to endorsing Professor Yoshua Bengio, a Turing Award-winning AI scholar and member of the UN’s Scientific Advisory Board, to spearhead the inaugural “Frontier AI State of the Science” report.
Future AI Safety Summits: A Global Commitment to Advancing AI Safety
The insights gathered in this report will lend crucial support to forthcoming AI Safety Summits, the groundwork for which is already underway. The Republic of Korea has agreed to co-host a mini virtual summit on AI within the next six months, while France is slated to host the next in-person summit a year from now.
Hogarth, the chair of the AI Safety Institute, remarked, "The backing of international governments and companies underscores the importance of our commitment to advancing AI safety and ensuring its responsible development.
"Through the AI Safety Institute, we will play a pivotal role in mobilizing the global community to address the challenges posed by this rapidly evolving technology."
Bengio further stated, “The secure and responsible advancement of AI is a matter of global significance. While substantial investments have been made to enhance AI capabilities, the corresponding focus on safeguarding the public’s interests through AI safety research and governance has been insufficient.
"I am delighted to contribute to the critical endeavor of international coordination on AI safety, working with colleagues worldwide to present the most up-to-date evidence on this paramount issue."