This is not investment advice. The author has no position in any of the stocks mentioned. Wccftech.com has a disclosure and ethics policy.
After a remarkable weekend in Silicon Valley that placed its latest innovation, A.I., dead center in typical corporate boardroom politics, OpenAI's former C.E.O. Sam Altman returned to his company triumphantly. Altman's friction with OpenAI's board generated considerable discussion in social and traditional media, with the disagreements believed to stem from tensions between OpenAI's for-profit operations and the non-profit mission of its parent entity. However, the for-profit question might not have been the only source of the divide, as a fresh report from the New York Times says that a key point of friction was a research paper written by board member Helen Toner.
Ms. Toner is a director at Georgetown University's Center for Security and Emerging Technology, and in October, she co-wrote a case study examining how governments and companies can structure their communications to avoid misinterpretation by others. The paper, co-authored with colleagues associated with Georgetown, described communication tools called 'signals' that actors in the national security and A.I. spaces could rely on to clarify their intentions.
The four signals in the paper are tying hands, sunk costs, installment costs, and reducible costs. These approaches range from tying hands, which constrains a firm through policies or other announcements that would be difficult to walk back, to installment costs, which carry higher initial costs (such as costly compliance commitments) that diminish over time as benefits accrue.
On this front, Ms. Toner's paper specifically focused on OpenAI's actions surrounding the launch of the GPT-4 model. OpenAI announced