Microsoft, Amazon, OpenAI, and other tech giants reached a landmark international agreement on artificial intelligence safety at the Seoul AI Safety Summit on Tuesday, May 21. According to CNBC, the pact consists of voluntary commitments to ensure the safe development of the companies' most advanced AI models.
Under the agreement, companies from countries including the U.S., China, Canada, the U.K., France, South Korea, and the United Arab Emirates will publish safety frameworks outlining how they measure and mitigate the risks associated with their frontier AI models.
These frameworks will include "red lines" defining intolerable risks, such as automated cyberattacks and bioweapons threats.
Kill Switch for AI
To address extreme risks, the companies plan to implement a "kill switch" that would halt development of an AI model if its risks cannot be mitigated, a global first intended to ensure transparency and accountability in AI development.
This new agreement expands on commitments made by AI developers last November and focuses on generative AI technologies like OpenAI's GPT models, which power ChatGPT.
The companies will seek input from trusted actors, including home governments, and aim to release their safety thresholds before the AI Action Summit in France in early 2025.
Since the debut of ChatGPT in November 2022, regulators and tech leaders have grown increasingly concerned about the risks of advanced AI systems. Recently, VCPost reported that OpenAI had disbanded its AI safety team, deepening those concerns.
In response, the European Union has approved its AI Act to regulate AI development, while the U.K. has taken a more flexible approach, applying existing laws to the technology and weighing future legislation for frontier models.