United Kingdom-based startup Sonantic has developed an artificial intelligence system that can turn written text into spoken words complete with emotions, reported VentureBeat. This creation came after the company received funding earlier this year.
The Sonantic AI voice-over tech is being used by triple-A game developers for audio storytelling and engineering. Around 200 video game development firms are using this technology to enhance their games.
The development came after the startup received a €2.3 million funding infusion through a round led by EQT Ventures, reported TechCrunch.
This system's offering is reminiscent of text-to-speech engines, which let users feed text into the system and get audio output. However, Sonantic offers a significant edge over these engines by adding emotional depth to the text.
According to VentureBeat, “The AI can provide true emotional depth to the words, conveying complex human emotions from fear and sadness to joy and surprise.” In a way, the startup’s system enables revolutionary audio engineering for gaming and film creators.
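As a rough illustration of the difference described above: a plain text-to-speech call takes only text, while an emotion-aware engine also accepts performance direction such as an emotion label and its intensity. Sonantic has not published a public API, so every name below is hypothetical; this is only a conceptual sketch, not the company's actual interface.

```python
# Hypothetical sketch of an emotion-aware text-to-speech request.
# VoiceLine and synthesize() are invented for illustration only and do
# not correspond to Sonantic's real (unpublished) API.

from dataclasses import dataclass


@dataclass
class VoiceLine:
    text: str          # the script line to be spoken
    emotion: str       # e.g. "fear", "sadness", "joy", "surprise"
    intensity: float   # 0.0 (subtle) to 1.0 (extreme)


def synthesize(line: VoiceLine) -> dict:
    """Stand-in renderer: returns the metadata a real engine would
    attach to generated audio, rather than actual sound data."""
    if not 0.0 <= line.intensity <= 1.0:
        raise ValueError("intensity must be between 0.0 and 1.0")
    return {
        "text": line.text,
        "emotion": line.emotion,
        "intensity": line.intensity,
        "format": "wav",
    }


clip = synthesize(VoiceLine("We have to leave. Now.", emotion="fear", intensity=0.8))
print(clip["emotion"])  # fear
```

The point of the sketch is the extra direction parameters: a conventional engine would accept only `text`, whereas an emotion-aware system lets the writer or director specify how a line should be performed, not just what it says.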
It is expected to allow hyper-realistic products through expressive and controllable voices. Sonantic co-founder Zeena Qureshi said, “Our first pilots were for triple-A companies, and then we started building this,” referring to the AI voice-over tool.
Qureshi added, “We went a lot more vertical and deeper into just working very closely with these types of partners. And what we found is the highest quality bar is for these studios. And so it’s really helped us bring our technology into a very great place.”
The system’s similarity to text-to-speech is not incidental. Sonantic’s developers built upon that technology while drawing a clear distinction between robotic reading and human-sounding voices, lending believability to its voice-overs.
As companies now use it for audio engineering, Qureshi clarified that the development is not intended to replace human voice actors. Instead, it creates a readable, reviewable script that actors can analyze to study their performances and characters.
Qureshi said, “This technology isn’t made to replace actors. What it actually helps with is at the very beginning of game development. Triple-A games can take up to 10 years to make. But they typically get in actors at the very early stages, because they’re constantly iterating.”
It is expected to help the creative process by serving as a sounding board for voice actors, allowing them to evaluate their delivery and make changes along the way.