OpenAI's new safety committee is made up of company insiders.

OpenAI has created a committee to review “critical” safety and security decisions for its programmes and operations. But in a move certain to outrage ethicists, OpenAI has staffed the committee with corporate insiders, including CEO Sam Altman, rather than independent observers.

The Safety and Security Committee, which includes Altman alongside OpenAI board members Bret Taylor, Adam D’Angelo and Nicole Seligman, chief scientist Jakub Pachocki, Aleksander Madry (who leads OpenAI’s “preparedness” team), Lilian Weng (head of safety systems), Matt Knight (head of security) and John Schulman (head of “alignment science”), will evaluate OpenAI’s safety processes and safeguards over the next 90 days. The committee will then present its findings and recommendations to OpenAI’s full board of directors, which will review them and publish an update on any adopted recommendations “in a manner that is consistent with safety and security.”

OpenAI states: “OpenAI has recently begun training its next frontier model, and we anticipate the resulting systems to bring us to the next level of capabilities on our path to [artificial general intelligence]. While we are proud to build and release models that are industry-leading on both capabilities and safety, we welcome a robust debate at this important moment.”

Many high-profile safety staff members have left OpenAI in recent months, and some have expressed concerns about what they regard as a purposeful de-prioritisation of AI safety.

Daniel Kokotajlo, a member of OpenAI’s governance team, departed in April after, as he wrote on his personal blog, losing faith that OpenAI would “behave responsibly” around the deployment of more sophisticated AI. And OpenAI co-founder and former chief scientist Ilya Sutskever left in May after a protracted struggle with Altman and Altman’s allies, reportedly in part over Altman’s drive to launch AI-powered products at the expense of safety work.

Recently, Jan Leike, a former DeepMind researcher who worked on ChatGPT and InstructGPT while at OpenAI, resigned from his safety research role, saying in a series of posts on X that OpenAI “wasn’t on the trajectory” to get AI safety and security “right.” Leike and AI policy researcher Gretchen Krueger, who also left OpenAI this week, have urged the company to improve its accountability, its transparency and “the care with which [it uses its] own technology.”

According to Quartz, at least five of OpenAI’s most safety-conscious employees, including former board members Helen Toner and Tasha McCauley, have either left or been pushed out since late last year. On Sunday, Toner and McCauley argued in The Economist that OpenAI under Altman cannot be trusted to hold itself accountable.

“Based on our experience, we believe self-governance cannot reliably withstand profit incentives,” Toner and McCauley stated.

As Toner and McCauley noted, Eltrys revealed earlier this month that OpenAI had promised its Superalignment team, which developed techniques to steer and control “superintelligent” AI systems, 20% of the company’s computational resources, but the team rarely received even a fraction of that. Superalignment has since been disbanded, with much of its work transferred to Schulman and to a safety advisory group OpenAI formed in December.

OpenAI says it supports AI regulation. At the same time, it has worked to shape that regulation, hiring an in-house lobbyist, retaining lobbyists at outside law firms and spending hundreds of thousands of dollars on U.S. lobbying in Q4 2023 alone. More recently, the U.S. Department of Homeland Security named Altman to its newly formed Artificial Intelligence Safety and Security Board, which will provide recommendations for the “safe and secure development and deployment of AI” in critical infrastructure.

Perhaps to head off accusations of ethical fig-leafing, OpenAI says it will retain third-party “safety, security, and technical” experts, including cybersecurity veteran Rob Joyce and former U.S. Department of Justice official John Carlin, to support the executive-dominated Safety and Security Committee. Beyond Joyce and Carlin, however, the firm hasn’t disclosed the size or composition of this outside expert group, or the extent of the group’s power and influence over the committee.

On X, Bloomberg journalist Parmy Olson writes that corporate oversight bodies such as the Safety and Security Committee and Google’s Advanced Technology External Advisory Council “do virtually nothing in the way of actual oversight.” OpenAI claims it wants the committee to address “valid criticisms” of its work, but what counts as a “valid criticism” is, of course, in the eye of the beholder.

Altman has previously said that outsiders would play a role in OpenAI’s governance. In 2016, he told The New Yorker that OpenAI would allow wide swaths of the world to elect representatives to a governance board. That never happened, and it seems unlikely to now.

Author: Juliet P.
