AI companies promise not to deploy models whose severe risks cannot be mitigated.

Leading artificial intelligence companies have signed up to a new round of voluntary commitments on AI safety, the UK and South Korean governments have announced.

The companies, which include tech giants Amazon, Google, Meta, and Microsoft as well as Sam Altman-led OpenAI, Elon Musk’s xAI, and Chinese developer Zhipu AI, will publish frameworks outlining how they will measure the risks of their “frontier” AI models.

The groups committed “not to develop or deploy a model at all” if severe risks could not be mitigated, the two governments said ahead of the opening of a global AI summit in Seoul on Tuesday.

The announcement builds on the so-called Bletchley Declaration made at the inaugural AI Safety Summit hosted by UK Prime Minister Rishi Sunak in November.

“These commitments ensure the world’s leading AI companies will provide transparency and accountability on their plans to develop safe AI,” Sunak said in a statement. “It sets a precedent for global standards on AI safety that will unlock the benefits of this transformative technology.”

According to a communique outlining the agreement, the AI companies will “assess the risks posed by their frontier models or systems… including before deploying that model or system, and, as appropriate, before and during training.”

The companies will also set out the “thresholds at which severe risks posed by a model or system, unless adequately mitigated, would be deemed intolerable” and how such mitigations will be implemented.

“The field of AI safety is quickly evolving, and we are particularly glad to endorse the commitments’ emphasis on refining approaches alongside the science,” said Anna Makanju, vice-president of global affairs at OpenAI.

“We remain committed to collaborating with other research labs, companies, and governments to ensure AI is safe and benefits all of humanity.”

Tuesday’s announcement echoes “voluntary commitments” made at the White House in July last year by Amazon, Anthropic, Google, Inflection AI, Meta, Microsoft, and OpenAI “to help move towards safe, secure, and transparent development of AI technology.”

It remains unclear, however, how companies might be held to account if they fail to meet their commitments.

Tuesday’s communique said the 16 companies would “provide public transparency” on implementation of their pledges, “except insofar as doing so would increase risk or divulge sensitive commercial information to a degree disproportionate to the societal benefit.”

Speaking in Seoul before a virtual leaders’ meeting on Tuesday evening, UK Science Secretary Michelle Donelan told the Financial Times that voluntary agreements made at Bletchley had been working.

“Therefore, we believe that these agreements again will continue to pay dividends,” Donelan said.

“But this isn’t just about what more can the companies do, it’s also what more can the countries do,” she added. Donelan confirmed the Chinese government would be represented at meetings being held on the second day of the summit on Wednesday.

Dan Hendrycks, safety adviser to xAI, said the voluntary commitments would help “lay the foundation for concrete domestic regulation.”

But Donelan reiterated the UK’s stance that it was too early to consider legislation to enforce AI safety.

“We need to get a better handle on the risks,” she said, noting the UK government would offer up to £8.5 million in grants to study AI-related risks such as deepfakes and cyber attacks.

She added that if the UK government had initiated legislation on the issue last year, “by the time it came out, it would probably be out of date.”
