The saga of Sam Altman's firing from and re-hiring by OpenAI played out at a distance from the video game industry. Some game developers have been experimenting with the GPT-4 API to create chatbot-like NPCs, but major platform owners like Valve have signaled they won't allow games built on the model to be sold without proof that the developer owns the data it was trained on.
That wrinkle in the video game industry's AI adoption speaks to one adjective bandied about by developers when discussing generative AI tools: "ethical." We started hearing the word as early as 2017, and Unity outlined its plans for ethical AI in 2018. Throughout 2023 we've heard AI developers big and small roll it out, seemingly aware that there is general unease about how AI tools are made and how they are used.
Last week's events, tumultuous as they were, should make one thing clear for developers: when push comes to shove, it's profits, not ethics, driving the AI boom, and those loudly championing the ethics of their own AI tools deserve the most scrutiny.
2023 has given us a bounty of case studies for unpacking why developers are worried about the "ethics" of generative AI. Unity's Marc Whitten explained to us in a recent chat that the company's AI tools were designed ethically: developers can ensure they own their data, and that the data used to make their game content has been properly licensed.
That explanation addressed concerns about data ownership, as generative AI tools have repeatedly been shown to harvest words and images that their creators had no rights to use.
The flip side of the ethical AI coin is how these tools are deployed. Voice actors have become the first victims of that deployment.