
Just saw a presentation by the researcher shown at the beginning of this clip. The project was slightly different from this one. Here they're manipulating cues to gender independently in the audio and video signal. Cool.
( , Sat 5 Mar 2011, 22:19, Reply)

What did they discuss in the presentation?
( , Sat 5 Mar 2011, 22:26, Reply)

When you talk, some face movements are cues to speech contrasts (like lip closing in "pick" vs "tick"), and other movements are cues to things like your emotional state (smiling, squinting, etc.). It's effortless for people to separate these, but not so easy for a computer. They had been working on some algorithms to separate the two, but it's pretty hard to do except in a very basic way: filtering out the fast movements as speech cues and treating the slower movements as the rest.
It was cool to see, though. The input was a video of someone talking, and after analysis the output was a resynthesized face that just had the smile/frown/surprise expressions changing over time.
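The "basic way" above amounts to temporal frequency filtering of a facial motion signal. A minimal sketch of that idea (not their actual algorithm, which isn't described in detail): take a per-frame motion measurement, low-pass it with a moving average to get the slow "expression" component, and treat the residual as the fast "speech" component. The function name and the cutoff of 10 frames are my own assumptions for illustration.

```python
import numpy as np

def split_motion(signal, cutoff_frames=10):
    """Split a 1-D facial motion signal into slow and fast components.

    signal: per-frame motion measurement (e.g. lip aperture over time).
    cutoff_frames: moving-average window (assumed value, not from the source).
    Returns (slow, fast) where slow + fast == signal.
    """
    # Low-pass via moving average: slow component ~ expression changes
    kernel = np.ones(cutoff_frames) / cutoff_frames
    slow = np.convolve(signal, kernel, mode="same")
    # Residual fast component ~ speech-related movements
    fast = signal - slow
    return slow, fast
```

On a synthetic signal mixing a slow and a fast oscillation, the slow output tracks the low-frequency part; real systems would use a proper filter (or a learned decomposition) rather than a boxcar average, since fast speech gestures and fast expression changes (a sudden smile) overlap in frequency, which is exactly why this only works "in a very basic way".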
( , Sat 5 Mar 2011, 22:35, Reply)