
Government and company leaders pledge AI safety commitments in Seoul


On Tuesday, government representatives and CEOs from the artificial intelligence sector agreed to set up a worldwide safety research network and to adopt basic safety precautions in the fast-moving field.

Seoul AI summit

Britain and South Korea are co-hosting the AI safety summit in Seoul this week, almost six months after the first worldwide conference on artificial intelligence safety at Bletchley Park in England. The summit highlights the new opportunities and challenges the world faces as artificial intelligence develops.

To accelerate AI safety research, the British government announced on Tuesday a new agreement between ten nations and the European Union to create a worldwide network of bodies akin to the first publicly backed organisation of its kind, the U.K.’s AI Safety Institute. The network will promote a shared understanding of artificial intelligence safety and align its work on standards, research, and testing. The signatories are Australia, Canada, the EU, France, Germany, Italy, Japan, Singapore, South Korea, the United Kingdom, and the United States.


Global leaders and top AI companies gathered virtually on the first day of the AI Summit in Seoul, convened by U.K. Prime Minister Rishi Sunak and South Korean President Yoon Suk Yeol, to address artificial intelligence safety, innovation, and inclusivity.

During the talks, the leaders adopted the broader Seoul Declaration, which calls for greater international collaboration in building AI to address major global issues, uphold human rights, and bridge digital divides worldwide, stressing that AI development should be “human-centric, trustworthy, and responsible.”

“AI is a highly fascinating technology, and the United Kingdom has led worldwide efforts to cope with its potential, convening the world’s first AI Safety Summit last year,” Sunak said in a U.K. government statement. “But if we want the benefits, we have to make sure it’s safe.” He added that he was pleased a consensus had now been reached on a network of AI safety institutes.

Only last month, the United Kingdom and the United States finalised a memorandum of understanding to cooperate on research, safety assessments, and AI safety guidelines.

The agreement published today follows the initial AI safety commitments from 16 businesses engaged in AI, including Amazon, Anthropic, Cohere, Google, IBM, Inflection AI, Meta, Microsoft, Mistral AI, OpenAI, Samsung Electronics, Technology Innovation Institute, xAI, and Zhipu.ai, a Chinese business funded by Alibaba, Ant, and Tencent.

The AI companies, including firms from the U.S., China, and the United Arab Emirates (UAE), have agreed to safety commitments to “not develop or deploy a model or system at all if mitigations cannot keep risks below the thresholds,” according to the U.K. government statement.

“It’s a world first to have so many leading AI companies from so many different parts of the globe all agreeing to the same commitments on AI safety,” Sunak added. “These pledges guarantee the top AI firms in the world will offer openness and responsibility for their strategies to create safe artificial intelligence.” 


