Nvidia Demo Speaking to AI Game Characters: At Computex 2023 in Taipei, Nvidia CEO Jensen Huang gave the world a peek at what gaming and AI could look like together: a graphically stunning rendering of a cyberpunk ramen shop where you can actually talk to the owner.
Rather than selecting from a menu of conversation options, you hold down a button, speak with your own voice, and get a spoken response from the video game character. It is what Nvidia refers to as a “peek at the future of games.”
Unfortunately, the conversation itself is somewhat flat; perhaps Nvidia will plug in GPT-4 or Sudowrite next time.
Specifically, the demo was built by Nvidia and partner Convai to promote the tools used to create it: a suite of middleware called Nvidia ACE (Avatar Cloud Engine) for Games that can run both locally and in the cloud. The full ACE suite includes the company’s NeMo tools for deploying large language models (LLMs) and Riva for speech-to-text and text-to-speech, among other components.
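Conceptually, the components described above form a simple loop: player speech goes in, a voiced and animated character response comes out. The sketch below is a toy illustration of that flow; every function is a stand-in (the real Riva, NeMo, and Audio2Face APIs look nothing like this), and the character name and replies are invented for the example.

```python
# Hypothetical, simplified sketch of an ACE-style pipeline:
# player speech -> speech-to-text -> LLM -> text-to-speech -> facial animation.
# All functions below are illustrative stand-ins, not real Nvidia APIs.

def speech_to_text(audio: bytes) -> str:
    # Stand-in for Riva automatic speech recognition.
    return audio.decode("utf-8")

def generate_reply(prompt: str) -> str:
    # Stand-in for a NeMo-deployed LLM primed with character lore.
    return f"Welcome to the ramen shop. You asked: {prompt}"

def text_to_speech(text: str) -> bytes:
    # Stand-in for Riva text-to-speech.
    return text.encode("utf-8")

def lip_sync(audio: bytes) -> list[str]:
    # Stand-in for Omniverse Audio2Face: audio in, animation frames out.
    return [f"frame_{i}" for i in range(len(audio) // 16 + 1)]

def npc_turn(player_audio: bytes) -> tuple[bytes, list[str]]:
    """One conversational turn: transcribe, respond, voice, animate."""
    transcript = speech_to_text(player_audio)
    reply = generate_reply(transcript)
    reply_audio = text_to_speech(reply)
    frames = lip_sync(reply_audio)
    return reply_audio, frames

audio, frames = npc_turn(b"What's good here?")
```

The point of the sketch is the ordering: animation is driven by the synthesized audio, which is why lip-sync tools like Audio2Face sit at the end of the chain rather than alongside the text generation.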
Of course, the demo employs more than just those components; it’s built in Unreal Engine 5 with a ton of ray tracing, and it’s visually impressive to the point that the chatbot portion feels underwhelming by comparison. At this point, we’ve simply seen far more compelling dialogue from chatbots, even ones that can occasionally be banal and derivative.
Nvidia VP of GeForce Platform Jason Paul said during a pre-briefing for Computex that the technology can expand to support more than one character at once and could theoretically even allow NPCs to communicate with one another, but he admitted he hadn’t really seen it tried.
S.T.A.L.K.E.R. 2: Heart of Chornobyl and Fort Solis will use a feature Nvidia calls “Omniverse Audio2Face,” which aims to synchronise a 3D character’s facial animation to their voice actor’s speech, although it’s unclear whether any developers will adopt the full ACE toolkit the way the demo does.
Nvidia Brings ChatGPT-Like AI to Video Games
Nvidia is bringing generative AI to video games. Nvidia ACE, unveiled during the company’s Computex 2023 keynote, is a new platform that lets developers use generative AI to power conversations with game characters.
Imagine ChatGPT, but as a chatbot with a distinct personality and lore rather than a general-purpose assistant. Nvidia emphasises that this adaptability is one of ACE’s most crucial features: characters can have a rich backstory that shapes their responses and keeps them from straying too far from the subject at hand. That is aided by the company’s recently unveiled NeMo Guardrails, which steers dialogue away from subjects the developer doesn’t want the character to address.
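To make the “steering” idea concrete, here is a deliberately toy version of topic guarding. The real NeMo Guardrails works very differently (it uses an LLM and a dialogue-modeling language called Colang, not keyword matching); everything below, including the blocked topics and the deflection line, is invented purely for illustration.

```python
# Toy stand-in for guardrail-style topic steering. NOT how NeMo Guardrails
# actually works; this only illustrates the concept of deflecting a
# character away from off-limits subjects before the model answers.

BLOCKED_TOPICS = {"politics", "religion"}  # illustrative developer choices
DEFLECTION = "I just make ramen. Ask me about the shop."

def guarded_reply(player_line: str, generate) -> str:
    """Deflect blocked topics; otherwise let the character model answer.

    `generate` stands in for the character's underlying language model.
    """
    lowered = player_line.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return DEFLECTION
    return generate(player_line)

on_topic = guarded_reply("What's in the shoyu ramen?", lambda p: "Pork broth.")
off_topic = guarded_reply("Tell me about politics.", lambda p: "unused")
```

The design point is that the guard sits between the player’s input and the model, so a developer can constrain a character without retraining or re-prompting the model itself.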
Similar to what we saw with Square Enix’s The Portopia Serial Murder Case, ACE isn’t designed just to produce words; it’s an entire AI system. According to Nvidia, in addition to generating responses for characters, ACE uses AI to animate the character models to match those responses.
Who is responsible for a game’s dialogue if it suddenly becomes poor?
It’s not difficult to picture how awesome this might be – the recently released Shadows of Doubt has me intrigued about the potential for AI-generated emergent gameplay. There are many things that could go wrong, though.
For starters, Nvidia says the system can be set up so that characters talk to one another without your intervention. That sounds fantastic, but by Nvidia’s own admission, it hasn’t really been tried. I’d be amazed if we didn’t see two AI-driven characters spiral down some irrational rabbit hole, much like the early days of Bing Chat.
Another possibility is that these AI personalities simply end up uninteresting. Experimenting with generative AI tools can be a lot of fun, but game writers craft dialogue and character interactions deliberately, with each line chosen for a purpose. Who is responsible for a game’s dialogue if it suddenly becomes poor? The writers, or the AI? Each character may have a deep, intricate backstory, but if the AI never surfaces it, the result is lifeless interactions.
That was unquestionably the case with Square Enix’s The Portopia Serial Murder Case, which promised distinctive dialogue with every interaction. Instead, it largely funneled players down a set path, without any of the flair that comes from hand-written dialogue. Additionally, we should expect some horrifying facial expressions, which Square Enix’s demo didn’t have to worry about.
This does appear to be where video games are headed, though. Ubisoft is using a generative AI tool for some of its dialogue, and leaders in game engines like Unity are already seeing developers start to adopt AI frameworks. We likely have a long road of strange, disturbing, and amusing hiccups ahead before we see genuinely useful applications of generative AI in games.
With ACE, Nvidia is starting down that path. The company was careful not to promise when we’ll see ACE in action, since it is still under development. At Computex, it showed a demo built with Unreal Engine, suggesting we might soon see a plug-in for that engine. The AI-generated speech in the demo sounded a little artificial, but there may be a real use for this down the road.
Although ACE’s actual release date and final feature set are unknown, Nvidia says the majority of the system is cloud-based. That should mean you won’t need a particular graphics card to use ACE, since most of the AI processing doesn’t happen on your computer.
Even with those reservations, it’s ultimately up to game designers. ACE only enables more complex AI in video game characters; it’s up to developers to figure out how best to use Nvidia’s new technology, if at all.