Neural networks have grown substantially in recent years, iterating rapidly since the first Generative Pre-trained Transformers. As NVIDIA has refined its latest graphics cards and Tensor Cores, the capabilities of those networks have greatly improved. Once limited to text-based inputs, these models now handle speech as well, with NVIDIA leveraging TensorRT for deep learning and audio interpretation. In two separate hands-on demonstrations of Inworld AI technology at GDC 2024, we had the opportunity to try out both Ubisoft's NEO NPCs and NVIDIA's Covert Protocol.
Ask any DM who has run a session of Dungeons & Dragons I've taken part in over the past decade: pushing the boundaries of the campaign (and morality) is where the fun is. Both demos gave me free rein to control the experience almost entirely by voice while my handler sat off to the side, presumably taking notes or shaking their head and debating whether to pull the plug. By the time I finally got to see Ubisoft's NEO NPCs program up close, I was working on my second cup of coffee before 10:00 AM and abuzz with a font of PG-13-rated creativity.
For Ubisoft Paris' Inworld demonstration, I went hands-on with Inworld's AI Character Engine combined with NVIDIA's Audio2Face automatic facial animation. The buzzword 'authenticity' popped up a few times throughout my session, and for the most part it held true, as guardrails were in place to maintain the illusion that I was being recruited into "the resistance" by a beanie-wearing urbanite named Bloom.
To gamify the Inworld AI experience, Ubisoft kept a popup list of recommended activities to engage in, from learning about the megacorps to getting to know Bloom and the resistance at a holistic level. Any time I tried to veer the conversation off course, Bloom would bring up those invisible barriers, provide
Read more on wccftech.com