When OpenAI's leaders return to work on Monday they'll have one thing at the top of their to-do list: figure out what to do about the nonprofit board that nearly killed them.
They've already begun setting up a governance structure that will guide them in a more commercial direction, and though that's great news for OpenAI's investors, it flies in the face of the company's founding principle of prioritizing humanity while building super-intelligent machines. OpenAI's leadership can do something about that. They must think carefully about the remaining board members they add, and not just to look progressive. They need women, people of color and other diverse voices — the people biased language models are most likely to harm, and the ones most likely to speak up about those risks.
Inscrutable machine-learning systems have already denied women job opportunities, and they are poised to reinforce stereotypes through the flood of AI-generated content hitting the web. It doesn't help that women make up only about a third of the people building AI systems today, and just 12% at OpenAI, according to a 2023 study of LinkedIn profiles by Glass.ai. Little wonder women are among AI's most vocal critics. But they are also more likely to be silenced. One of the most influential research papers on the dangers of large language models — the 2021 Stochastic Parrots paper — was written by female academics and AI scientists, and Google fired two of its authors from its ranks, Timnit Gebru and Margaret Mitchell.
And of the four OpenAI board members who voted to oust Sam Altman as chief executive officer last week, the two who ended up being booted by the company were academic Helen Toner and robotics entrepreneur Tasha McCauley. The resulting social media blowback has largely focused on