This new AI-enabled wearable from MIT can detect the tone of a conversation. Here's how

“I got a new iPhone.” When a friend tells you that, you could hear it as a brag or as excitement. To help detect the tone behind speech, a team at the Massachusetts Institute of Technology (MIT) built a wearable app that parses a conversation and identifies the emotion behind each part of the story.

The researchers say the system's performance would improve further if multiple people in a conversation used it on their smartwatches, creating more data for their algorithms to analyze. The team is quick to note that the system was developed with privacy in mind: the algorithm runs locally on the user's device, protecting personal information.

How does it work?

Many emotion-detection studies show participants "happy" and "sad" recordings, or ask them to act out specific emotional states. To evoke more organic emotions, the team instead asked subjects to recount a happy or sad story of their own choosing.

As the blog notes, “After capturing 31 different conversations of several minutes each, the team trained two algorithms on the data: One classified the overall nature of a conversation as either happy or sad, while the second classified each five-second block of every conversation as positive, negative, or neutral.”
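The two-level labeling scheme described above can be sketched in a few lines. This is a hypothetical illustration, not the team's actual models: the per-block classifier here is a toy stand-in that averages pre-computed sentiment scores, whereas the real system was trained on audio and text features from the recorded conversations.

```python
def label_block(block):
    """Toy stand-in classifier: labels one 5-second block of
    sentiment scores as positive, negative, or neutral."""
    score = sum(block) / len(block)
    if score > 0.2:
        return "positive"
    if score < -0.2:
        return "negative"
    return "neutral"

def label_conversation(blocks):
    """Overall label for the conversation: 'happy' if positive
    blocks outnumber negative ones, else 'sad'."""
    labels = [label_block(b) for b in blocks]
    overall = "happy" if labels.count("positive") >= labels.count("negative") else "sad"
    return overall, labels

# A conversation as a list of 5-second blocks of toy sentiment scores.
conversation = [[0.5, 0.4], [-0.1, 0.0], [-0.6, -0.4], [0.3, 0.6]]
overall, per_block = label_conversation(conversation)
print(overall)    # → happy
print(per_block)  # → ['positive', 'neutral', 'negative', 'positive']
```

The point of the sketch is the structure: one coarse label for the whole conversation, plus a fine-grained label for every five-second window, mirroring the two algorithms the researchers trained.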

When can we use it?

The algorithm is not yet reliable enough to be deployed for social coaching, but Alhanai, a co-author of the paper, says the team is actively working toward that goal. In future work, the team plans to collect data on a much larger scale, possibly using commercial devices such as the Apple Watch, which would let them deploy the system more easily in the real world.