How often have you spoken to someone over the phone only to discover, when you meet them in real life, that they look nothing like you imagined? It turns out that AI might actually be better at predicting what people look like based on how they sound, thanks to the work of researchers.
In a paper published last month, titled Speech2Face: Learning the Face Behind a Voice, a group of MIT researchers described an algorithm that seemed to be eerily accurate at predicting and generating a portrait of someone based on how they sound. To train the AI to make these predictions, they fed it about 100,000 video clips from YouTube featuring a variety of speakers, allowing the AI to associate certain sounds with certain looks.
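The core idea behind this kind of training is simple: video clips supply matched pairs of a voice and the face it belongs to, and a model learns to map one to the other. The toy sketch below illustrates that idea with a linear model and entirely synthetic data; it is a conceptual analogy, not the paper's actual deep-network architecture, and every vector, dimension, and the hypothetical `TRUE_W` relation here are made up for illustration.

```python
# Toy sketch of the Speech2Face training idea: learn a mapping from voice
# features to face embeddings using (voice, face) pairs that co-occur in
# video clips. All data here is synthetic; the real system uses deep
# networks trained on YouTube video, not a hand-rolled linear model.
import random

random.seed(0)

DIM_VOICE, DIM_FACE = 4, 3

# Hypothetical "ground truth" relation between voice and face features,
# standing in for the correlations the real model discovers in video data.
TRUE_W = [[random.uniform(-1, 1) for _ in range(DIM_VOICE)]
          for _ in range(DIM_FACE)]

def face_from_voice(w, voice):
    """Predict a face embedding as a linear map of voice features."""
    return [sum(w[i][j] * voice[j] for j in range(DIM_VOICE))
            for i in range(DIM_FACE)]

# Synthetic training pairs: each "clip" supplies a voice vector and the
# matching face embedding (generated from TRUE_W plus a little noise).
clips = []
for _ in range(200):
    v = [random.uniform(-1, 1) for _ in range(DIM_VOICE)]
    f = [x + random.gauss(0, 0.01) for x in face_from_voice(TRUE_W, v)]
    clips.append((v, f))

# Fit the linear model with plain stochastic gradient descent on the
# squared error between predicted and target face embeddings.
w = [[0.0] * DIM_VOICE for _ in range(DIM_FACE)]
lr = 0.05
for _ in range(300):
    for voice, face in clips:
        pred = face_from_voice(w, voice)
        for i in range(DIM_FACE):
            err = pred[i] - face[i]
            for j in range(DIM_VOICE):
                w[i][j] -= lr * err * voice[j]

# After training, predictions land close to the target embeddings.
test_voice, test_face = clips[0]
pred = face_from_voice(w, test_voice)
mse = sum((p - t) ** 2 for p, t in zip(pred, test_face)) / DIM_FACE
print(round(mse, 4))
```

The takeaway mirrors the article: given enough paired examples, a model can recover statistical correlations between voice and appearance, which is why the resulting portraits are plausible rather than exact.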
While it has been noted that the generated portraits are not a perfect replica of what the person looks like, they come pretty close, and the fact that a somewhat accurate portrait can be generated from sound alone is quite disturbing. However, we imagine that it could have a number of uses, such as letting people know who's on the other end of the line and whether they are who they say they are.