AI Safety: UK and US sign landmark agreement

BBC News

The UK and US have signed a landmark deal to work together on testing advanced artificial intelligence (AI).

The agreement signed on Monday says both countries will work together on developing “robust” methods for evaluating the safety of AI tools and the systems that underpin them.

It is the first bilateral agreement of its kind.

UK tech minister Michelle Donelan said it is “the defining technology challenge of our generation”.

“We have always been clear that ensuring the safe development of AI is a shared global issue,” she said.

“Only by working together can we address the technology’s risks head on and harness its enormous potential to help us all live easier and healthier lives.”

The secretary of state for science, innovation and technology added that the agreement builds upon commitments made at the AI Safety Summit held in Bletchley Park in November 2023.

The event, attended by AI bosses including OpenAI’s Sam Altman, Google DeepMind’s Demis Hassabis and tech billionaire Elon Musk, saw both the UK and US create AI Safety Institutes which aim to evaluate open and closed-source AI systems.

While things have felt quiet on the AI safety front since the summit, the AI sector itself has been extremely busy.

Competition between the biggest AI chatbots – such as ChatGPT, Gemini and Claude – remains ferocious.

So far the almost exclusively US-based firms behind all of this activity are still cooperating with the concept of regulation, but regulators have yet to curtail anything these companies are trying to achieve.

Similarly, regulators have not demanded access to information the AI firms are unwilling to share, such as the data used to train their tools or the environmental cost of running them.

The EU’s AI Act is on its way to becoming law, and once it takes effect it will require developers of certain AI systems to be upfront about their risks and share information about the data used.

This is important: OpenAI recently said it would not release a voice cloning tool it developed due to the “serious risks” the tech presents, particularly in an election year.

In January, a fake, AI-generated robocall claiming to be from US President Joe Biden urged voters to skip a primary election in New Hampshire.

Currently in the US and UK, AI firms are mostly regulating themselves.
