The five-day drama that saw OpenAI CEO Sam Altman fired and then reinstated as the company’s Chief Executive is just another example of how haywire Silicon Valley can be. However, Altman’s firing was reportedly not due to a rebellious attitude toward the board but to a breakthrough discovery by OpenAI researchers that could potentially be dangerous to humanity.
If left unchecked or unregulated, AI could lead to deleterious results, and that is what Reuters reported: sources familiar with the matter told the publication that the board was growing increasingly concerned with how quickly AI was advancing and worried that Sam Altman may not have fully grasped the consequences. An internal message referred to the project as ‘Q*’, or Q-star, noting that it could be a breakthrough in the AI startup’s quest to create artificial general intelligence (AGI).
OpenAI believes that AGI could surpass humans in most economically valuable tasks, which also makes it potentially dangerous: it could limit the options the global population has to earn a livelihood, and the consequences may reach a whole new scale. Given vast computing resources, the new model was able to solve certain math problems, and though those problems were only at a grade-school level, acing them made OpenAI’s researchers highly optimistic about Q*’s future.
Currently, AI cannot solve math problems reliably, which is where the advantage of AGI comes in. The report further states that researchers view math as a frontier because each problem has only one correct answer, so an AI that can clear this hurdle would mark a massive milestone. Once AI can consistently solve math problems, it can make decisions that resemble human reasoning.