AI Safety

Frontier Model Forum founders Anthropic, Google, Microsoft, and OpenAI have announced the AI safety forum’s first executive director, Chris Meserole, along with a $10 million AI Safety Fund to promote responsible artificial intelligence research.

It’s the first major announcement to come from the quad since establishing the Frontier Model Forum industry body back in July.

Meserole formerly served as Director of the Artificial Intelligence and Emerging Technology Initiative at the Brookings Institution, a Washington, DC-based non-profit public policy think tank.

“Chris Meserole brings a wealth of experience focusing on the governance and safety of emerging technologies and their future applications,” the Forum said in Wednesday’s joint announcement.

The Forum’s goal is to ensure that any advancements in AI technology and research remain safe, secure, and under human control while identifying best practices and standards. It also aims to facilitate information-sharing among policymakers and the industry.

OpenAI defines “frontier AI” models as “highly capable foundation models that could possess dangerous capabilities sufficient to pose severe risks to public safety.”

“The most powerful AI models hold enormous promise for society, but to realize their potential, we need to better understand how to safely develop and evaluate them. I’m excited to take on that challenge with the Frontier Model Forum,” Meserole said about his new role.

The new $10 million AI Safety Fund will support independent researchers from academia, start-ups, and research institutions around the world – a commitment the quad individually pledged to Washington earlier this year.

Another part of the Forum’s role in the industry will be to establish a standard baseline of definitions for AI terms, concepts, and processes.

As part of Wednesday’s announcement, the Forum released its first working update on the process of “AI red teaming,” based on several case studies from Microsoft and Google.

Red teaming in cybersecurity is a practice in which companies hire ethical hackers to try to infiltrate their own network systems, discovering and fixing security weaknesses before bad actors can exploit them.

The Forum has defined red teaming for AI models as “a structured process for probing AI systems and products for the identification of harmful capabilities, outputs, or infrastructural threats.”

The AI red teaming concept will also include developing “new responsible disclosure processes” where AI labs can share information about dangers and vulnerabilities and the possible ways to mitigate them.

The Forum said it plans to establish an Advisory Board over the next few months to help with strategy.

The AI safety body also announced its plan to deepen partnerships with governments, NGOs, and multinational AI-centric organizations, such as the non-profit Partnership on AI (PAI) and MLCommons, an AI and machine learning engineering consortium.
