Tuesday, December 3, 2024

Are you feeling sad, excited or bored? This startup claims its AI can tell (more often than not).

Alan Cowen feigns a dejected expression. “My dog died this morning,” he says, chatting with an AI model from startup Hume that claims to recognize more than 24 different emotional expressions in a person’s voice — from nostalgia to awkwardness to fear — and respond accordingly.

“I’m so sorry to hear of your loss. Losing a pet is never easy,” the AI responded in the voice of Matt Forte, Hume’s creative producer, with a touch of compassion and disappointment.

Cowen, a former Google researcher, founded Hume in 2021 with the goal of developing an “emotionally intelligent” conversational AI that can interpret emotions based on the way people speak and generate an appropriate response. Since then, over 1,000 developers and 1,000 companies, including SoftBank and Lawyer.com, have used Hume’s API to build AI-based applications that can capture and measure a wide range of emotional signals in human speech, based on elements such as the rhythm, tone and timbre of the voice, as well as sighs, “umms” and “ahhs.”

“The future of AI interfaces will be voice-based because voice is four times faster than typing and carries twice as much information,” Cowen told Forbes. “But to take advantage of this, you really need a conversational interface that captures more than just voice.”

The New York-based startup announced Wednesday that it has raised $50 million in a Series B funding round led by Swedish investment firm EQT Ventures, with participation from Union Square Ventures and angel investors Nat Friedman and Daniel Gross. The new funding values the startup at $219 million.

The company also announced the launch of “Hume EVI,” a conversational speech API that developers can integrate into or build on top of existing products to create apps that recognize nuances of expression in audio and text and adjust their words and tone to generate “emotionally tailored” output. For example, when the AI detects sadness and fear in the user’s voice, it responds with hints of compassion and “empathetic pain” in its own verbal reply.
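
The article does not spell out the API’s actual endpoints or SDK calls, but the integration pattern it describes is easy to sketch. Below is a minimal, self-contained Python illustration (all names, scores and thresholds are hypothetical, not Hume’s real interface) of how an app built on such an API might map per-utterance expression scores to a response tone:

```python
# Hypothetical sketch, not Hume's actual SDK: an app receives per-utterance
# expression scores from a conversational speech API and picks a reply tone.
from dataclasses import dataclass


@dataclass
class UtteranceAnalysis:
    transcript: str
    # Expression scores in [0, 1], keyed by expression name (illustrative only).
    expressions: dict[str, float]


def response_style(analysis: UtteranceAnalysis, threshold: float = 0.5) -> str:
    """Choose a tone for the assistant's reply from the strongest detected expressions."""
    strong = {name: s for name, s in analysis.expressions.items() if s >= threshold}
    if "sadness" in strong or "fear" in strong:
        return "compassionate"
    if "boredom" in strong:
        return "upbeat"
    return "neutral"


if __name__ == "__main__":
    # Scores here are made up; a real API would derive them from the user's audio
    # (rhythm, tone, timbre, sighs, "umms" and "ahhs").
    utterance = UtteranceAnalysis(
        transcript="My dog died this morning",
        expressions={"sadness": 0.82, "fear": 0.34, "nostalgia": 0.21},
    )
    print(response_style(utterance))  # -> "compassionate"
```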

These empathetic reactions are not entirely new. When Forbes tested OpenAI’s ChatGPT Plus with the same prompt – “My dog died this morning” – it gave an almost identical verbal response to Hume’s. However, the startup aims to stand out through its ability to identify underlying expressions.

To this end, Hume’s in-house large language model and text-to-speech model are trained on data collected from more than a million participants in 30 countries. This includes hundreds of thousands of human interactions and self-reported data from participants who responded to videos and interacted with other participants, Cowen said. The database’s demographic diversity helps the model learn cultural differences and be “explicitly unbiased,” he said. “Our data is less than 30% Caucasian.”

“The future of AI interfaces will be voice-based because voice is four times faster than typing and carries twice as much information.”

Alan Cowen, CEO and founder of Hume

Hume uses its internal model to interpret emotional tone, but for more complex content it relies on external LLMs, including OpenAI’s GPT-3.5, Anthropic’s Claude 3 Haiku, and Microsoft’s Bing Web Search API, which generate responses within about 700 milliseconds. The 33-year-old CEO said Hume’s technology is designed to mimic the style and rhythm of human conversation: it can detect when a person interrupts the AI to end its turn, and it knows when it is its own turn to speak. It also occasionally pauses while speaking and even laughs, which is a bit disconcerting coming from a computer.
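
A rough sketch of that routing, with stand-in functions in place of the real expression model and external LLM calls (everything here is illustrative and assumed, not Hume’s implementation), might look like this:

```python
# Illustrative routing sketch: a fast local expression model supplies the tone,
# an external LLM supplies the wording, and both must fit a latency budget.
import time

LATENCY_BUDGET_MS = 700  # target response time cited in the article


def detect_tone(audio_features: dict[str, float]) -> str:
    """Stand-in for the in-house expression model (feature names are hypothetical)."""
    return "compassionate" if audio_features.get("sadness", 0.0) > 0.5 else "neutral"


def external_llm_reply(transcript: str) -> str:
    """Stand-in for a call to an external LLM such as GPT-3.5 or Claude 3 Haiku."""
    return "I'm so sorry to hear of your loss. Losing a pet is never easy."


def respond(transcript: str, audio_features: dict[str, float]) -> dict[str, object]:
    start = time.monotonic()
    tone = detect_tone(audio_features)      # fast, runs locally
    text = external_llm_reply(transcript)   # slower, a remote call in a real system
    elapsed_ms = (time.monotonic() - start) * 1000
    return {"text": text, "tone": tone, "within_budget": elapsed_ms <= LATENCY_BUDGET_MS}


if __name__ == "__main__":
    print(respond("My dog died this morning", {"sadness": 0.82}))
```

Keeping tone detection local while delegating the wording to an external model is one plausible way to stay inside a tight latency budget, since the slower remote call is then the only network round trip.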

Although Hume’s technology appears more sophisticated than earlier forms of emotion-recognition AI that relied more heavily on facial expressions, using any kind of AI to recognize complex, multidimensional emotional expressions through voice and text remains an imperfect science, and one that Hume’s own AI admits is among its biggest challenges. Emotional expressions are highly subjective and influenced by a number of factors, including gender and social and cultural norms. Even when AI is trained on diverse data, using it to interpret human expressions can yield biased results, studies have shown.

When asked about the obstacles the AI must overcome to have human-like conversations, the AI said it is difficult to respond to “the nuances of emotions, context and language.” “It is a complex task to accurately interpret tone, intent and emotional signals in real time.”

Hume’s AI is not always accurate either. When Forbes asked Hume’s AI questions such as “What should I have for lunch?”, it detected “boredom” along with five other expressions, such as “interest” and “determination.”

Cowen, who has published more than 30 research papers on AI and emotion science, said he first recognized the need for tools that could recognize and measure human expressions in 2015, when he was advising Facebook on how to change its recommendation algorithms to put people’s well-being at the forefront.

Hume’s AI has been integrated into applications in industries such as health and wellness, customer service and robotics, Cowen said. For example, online legal directory Lawyer.com uses Hume’s AI to measure the quality of its customer service calls and train its agents.

In the health and wellness space, use cases are still emerging. Stephen Heisig, a research scientist at the Icahn School of Medicine, the medical school of the New York-based Mount Sinai Health System, said he is using Hume’s expression AI models in an experimental study of deep brain stimulation, a treatment in which patients have electrodes implanted in their brains. (The study only enrolls patients for whom no other treatments or therapies have worked, he said.) Hume’s AI models will be used to detect how patients are feeling and whether the treatment is working in everyday life. Heisig said Hume’s AI could be used by psychiatrists to give them more context about emotions that may not be easy to recognize.

“The patients we have in the DBS study keep two video diaries every day. They have sessions with the psychologist and psychiatrist, and we record those and use Hume’s models to characterize facial expression and vocal prosody,” Heisig told Forbes.

Hume’s models have also been integrated into Dot, a productivity chatbot that helps people plan and think about their day. Samantha Whitmore, co-founder of New Computer, an OpenAI-backed early-stage startup developing the chatbot, said Hume’s AI provides “augmented context” about how a person is feeling.

“If it detects levels of stress or frustration, it might say, ‘It sounds like you have a lot at stake, we should try to figure out how we can make this more manageable,'” she said. “It helps to meet them where they are.”

