An OpenAI insider’s open letter warns of ‘serious risks’ and calls for whistleblower protection

An OpenAI insider group is demanding artificial intelligence companies be more transparent about the “serious risks” of AI – and that they protect employees who voice concerns about the technology they build.

“AI companies have strong financial incentives to avoid effective oversight,” reads an open letter published Tuesday signed by current and former employees at AI companies including OpenAI, the creator behind the viral ChatGPT tool.

They also called on AI companies to foster a “culture of open criticism” that welcomes, rather than punishes, people who speak up about their concerns, especially as the law struggles to catch up with the rapidly developing technology.

Companies have acknowledged the “serious risks” posed by AI – from manipulation to loss of control, known as the “singularity,” which could potentially lead to human extinction – but they should do more to educate the public about the risks and protective measures, the group wrote.

Given the current laws, AI workers say, they don’t believe AI companies will voluntarily share critical information about the technology.

Therefore, it's important for current and former employees to speak up, and for companies not to enforce non-disparagement agreements or otherwise retaliate against those who voice risk-related concerns. "Typical whistleblower protections are insufficient because they focus on illegal activities, while many of the risks we are concerned about remain unchecked," the group wrote.

Their letter comes as companies move quickly to integrate generative AI tools into their products, while government regulators, companies and consumers grapple with responsible use. Meanwhile, many technologists, researchers and leaders have called for a temporary halt in the AI race, or for the government to step in and impose a moratorium.

OpenAI response
In response to the letter, an OpenAI spokesperson told CNN that the company is "proud of our track record of providing the most capable and safest AI systems" and believes in its "scientific approach to addressing risk," adding that the company agrees "rigorous debate is essential given the importance of this technology."

OpenAI says it has an anonymous integrity hotline and a Safety and Security Committee chaired by members of its board of directors and security leaders from the company. The company does not sell personal information, build user profiles or use that data to target anyone or sell anything.

But Daniel Ziegler, one of the organizers behind the letter and a machine-learning engineer who worked at OpenAI between 2018 and 2021, told CNN that it's important to remain skeptical of the company's commitment to transparency.

"It's very difficult to know from the outside how seriously they take their commitment to safety assessments and thinking about the dangers to society, especially because there is such strong commercial pressure to move quickly," he said. "It's important to have the right culture and processes in place so employees can speak up in a targeted way when they have concerns."

He hopes more professionals in the AI industry will go public with their concerns as a result of the letter.

Meanwhile, Apple is widely expected to announce a partnership with OpenAI at its annual Worldwide Developers Conference to bring generative AI to the iPhone.

“We see generative AI as a key opportunity across our products and believe we have a differentiating advantage there,” Apple CEO Tim Cook said on the company’s latest earnings call in early May.
