A really good place would be background banter, greatly reducing the amount of extra dialogue the devs have to write.
Give the AI a proper scenario, with some game-lore context applicable to each background character.
Make them talk to each other for around 5-10 rounds of conversation.
Read the results over, just to make sure nothing seems out of place.
Bundle them with TTS for each character’s voice type.
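As a sketch of the steps above (everything here is hypothetical: the character profiles are made up, and `llm_reply` is a stand-in for a real text-generation call):

```python
import random

# Hypothetical character profiles; in practice these would come from game lore.
CHARACTERS = {
    "dock_worker": "A tired dock worker who complains about cargo quotas.",
    "trader": "A cheerful trader who gossips about market prices.",
}

def llm_reply(speaker, profile, history):
    """Stand-in for a real LLM call. A production version would send the
    profile and the conversation history to a text-generation model."""
    return f"{speaker} says something in character (turn {len(history) + 1})"

def generate_banter(characters, rounds=None):
    """Generate 5-10 rounds of background conversation between NPCs."""
    rounds = rounds if rounds is not None else random.randint(5, 10)
    history = []
    speakers = list(characters)
    for i in range(rounds):
        speaker = speakers[i % len(speakers)]
        line = llm_reply(speaker, characters[speaker], history)
        history.append((speaker, line))
    return history

banter = generate_banter(CHARACTERS)
# Review step: a human reads the lines before bundling them with TTS.
for speaker, line in banter:
    print(f"[{speaker}] {line}")
```

The point is that generation and review happen offline, during development; only the approved lines ever reach the build.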
You guys joke, but AI NPCs have the potential to be awesome.
Sure, you’ll have to make a TTS package for each voice, but that package can be licensed directly by the VA to the game studio on a per-title basis, and they too can then get more $$$ for less work.
I’m pretty sure it’ll be less money for less work, at least after the first few titles. Companies really don’t like paying more than they have to.
True, but if the VA (or their agent) has any business sense, they’ll be making more per hour.
And probably royalties for using their “likeness.” But longer-term, I’m guessing total revenue for VAs will drop, and there will probably be less variety.
One can dream.
They won’t, because of hallucinations. They could work in mature games, though, where it’s expected that whatever the AI says is not going to break your brain.
But yeah, if a kid walks up to Toad in the next Mario game and Toad tells Mario to go slap Peach’s ass, that game would get pulled really quick.
I just re-read my comment and realised I was not clear enough.
You bundle the generated text and the AI TTS, not the AI text generator itself.
So the content stays the same but you don’t need a voice actor now?
The content is… AI-assisted (maybe that’s a better way to put it).
And yes, you no longer need to bring in the VA every time you add a line, as long as the license for the TTS data holds.
You still want proper VAs for lead roles, though, or you might end up with empty-feeling dialogue. Even though AI tends to add inflections and such, from what I have seen it’s not good enough to reproduce proper acting.
Of course, that would mean that those who cannot do the higher-quality acting [1] will be stuck making only the TTS files instead of getting lead roles.
But it also means that games which previously could not afford voice acting now can add it. That’s especially useful when someone is doing a one-dev project.
Even better if there were an open standard format for AI-training-compatible TTS data. That way, a VA could pay a one-time fee to a technician to create that file, then own said file and license it however they like.
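No such open standard exists today; purely as an illustration of the idea, a portable voice package might carry a manifest along these lines (every field name and value below is made up):

```python
import json

# Hypothetical manifest for a portable "voice package". No such open
# standard exists today; these fields are illustrative only.
voice_package = {
    "format_version": "0.1",
    "voice_name": "example_voice",
    "owner": "Voice Actor Name",
    "license": {
        "type": "per-title",
        "licensee": "Example Studio",
        "titles": ["Example Game"],
    },
    "training_data": {
        "sample_rate_hz": 48000,
        "clips": ["clips/line_001.wav", "clips/line_002.wav"],
        "transcripts": "transcripts.csv",
    },
}

# The VA would own this file and license it per title.
with open("voice_package.json", "w") as f:
    json.dump(voice_package, f, indent=2)
```

The key design point is that the license terms travel with the training data, so whoever holds the file can prove what it may be used for.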
e.g. most English anime dubs. I have seen a few exceptions, but they are rare enough to call exceptions. ↩︎
You know, the way these programmers talk about AI, I think they just don’t want to have to work with anyone else.
How is this not taking from voice actors and giving to yourself in that regard? The system you described would mean only the biggest names get paid, all so a developer can avoid learning social skills.
You are right. I don’t want to have to socialise just to add a bit of voice to my game characters.
If I have to, I’d rather ship without voicing any of them.
I agree, but with AI instead of socializing.
Rather, it’s more that we as users get a greater variety of background NPC banter for the same game price.
Take X4, for instance. The only banter we get is different types of “hello”.
Only in quests is there any dialogue variety, and when there is banter outside of quests, it’s mostly incoherent (or was that another game? I need to check again).
It doesn’t really make sense that two or more people meet in a docking area, say “Hi”, “Hello”, “Good day to you”, and then just keep standing there, staring at each other’s faces as if they were conversing by telepathy.
It would be fun to have conversations that, while clearly never yielding any quest, still have enough variety to be entertaining when the player stops by to eavesdrop.
This sort of thing exists in a lot of games from high-budget studios, but those games also end up with pretty large file sizes.
This way, we can reduce both production and distribution costs.
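As a back-of-envelope illustration of the distribution-cost point (every number below is an assumption, not a measurement from any real game):

```python
# Back-of-envelope comparison: all figures are assumptions for illustration.
lines = 20_000                 # assumed count of background banter lines
seconds_per_line = 4           # assumed average line length
audio_kbps = 96                # assumed compressed voice audio bitrate

# Shipping pre-recorded audio for every line:
recorded_mb = lines * seconds_per_line * audio_kbps / 8 / 1024  # = 937.5 MB

# Shipping text plus one on-device TTS voice instead:
text_mb = lines * 80 / 1024 / 1024   # ~80 bytes of text per line
tts_model_mb = 150                   # assumed size of a small TTS voice model
shipped_mb = text_mb + tts_model_mb

print(f"recorded audio: ~{recorded_mb:.0f} MB")
print(f"text + TTS model: ~{shipped_mb:.0f} MB")
```

Under these assumptions the text-plus-model route is several times smaller, and adding more lines barely grows the download at all.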
And the VAs don’t need to record every line each time the story writers come up with new banter, yet the studio still gets their voice for those lines. That essentially increases the value of the licensed TTS package, meaning the VA delivers more output than the hours they put in and gets paid more (well, that last part depends on market conditions).
As a consumer, I’d rather a real person voice-acted it, or not have it at all. It’s petty to put your entertainment above someone’s livelihood.
Oh come on, LLMs don’t hallucinate 24/7. For that, you have to ask a chatbot about something it wasn’t properly trained on. But generating simple text for background chatter? That’s safe and easy. The real issue is the amount of resources modern LLMs require, but the technology tends to get better with time.
I still really don’t understand what amount of local resources it would take to run a trained LLM.
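A common rule of thumb, not a benchmark: the memory needed is roughly parameter count times bytes per weight, plus some overhead for activations and the KV cache. The 20% overhead factor below is a ballpark guess:

```python
def model_memory_gb(params_billion, bits_per_weight, overhead=1.2):
    """Rough estimate of RAM/VRAM needed to hold an LLM's weights.
    The overhead factor for activations/KV cache is a ballpark guess."""
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1024**3

# A 7B-parameter model quantized to 4 bits per weight:
print(f"{model_memory_gb(7, 4):.1f} GB")   # → 3.9 GB
```

So a small quantized model fits on a midrange gaming GPU with room left over, which is why running one locally for NPC chatter is at least plausible.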
ai being used for good may unfortunately have to wait till the destruction of capitalism
AI being used for anything will have to wait until it actually works reliably.
It does badly at most of the things it’s shoved into as a buzzword to impress shareholders, but that doesn’t mean it’s completely useless.
I currently use AI, through Nvidia Broadcast, to remove the sound of the literal server rack feet away from my xlr mic in my definitely not sound treated room so people I’m gaming with don’t wind up muting me. It also removes the clickety clack of my blue switches and my mouse clicks, all that shit.
It’s insanely reliable, and honestly a complete godsend. I could muck around with compressors and noise gates to try to cut out the high pitch server fan whine, but then my voice gets wonky and I’ve wasted weeks bc I’m not an audio engineer but I’m obsessive, and the mic is still picking up my farts bc why not use an xlr condenser mic to shit talk in cs?
Edit - Oh, I also use the virtual speaker that Broadcast creates as the in-game (or Discord or whatever) voice output, and AI removes the same shit from other people’s audio. I’ve heard people complaining about background music from another teammates open mic while all I hear is their perfectly clear voice. It’s like straight up magic.
TBH this is a great space for modding and local LLM/LLM “hordes”