
When you talk, some face movements are cues to speech contrasts (like the lip closing in "pick" vs "tick"), and other movements are cues to things like your emotional state (smiling, squinting, etc.). It's effortless for people to separate these, but not so easy for a computer. They had been working on algorithms to separate the two, but it's pretty hard to do except in a very basic way: filtering out the fast movements as speech cues and treating the slower movements as everything else.
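If you're curious what that basic version might look like, here's a minimal sketch of the frequency-split idea, assuming you already have tracked facial landmark trajectories as a frames-by-coordinates array. The Butterworth filter, the 2 Hz cutoff, and the function names are my own illustrative choices, not details from the talk:

```python
# Hypothetical sketch: split face motion by temporal frequency.
# Assumes landmark trajectories are already extracted from the video.
import numpy as np
from scipy.signal import butter, filtfilt

def split_face_motion(landmarks, fps, cutoff_hz=2.0, order=4):
    """Split landmark trajectories into slow (expression-like) and
    fast (speech-like) components with a zero-phase low-pass filter.

    landmarks: array of shape (n_frames, n_coords), e.g. tracked lip/brow points
    fps: video frame rate in Hz
    cutoff_hz: assumed boundary between "slow" and "fast" motion
    """
    nyquist = fps / 2.0
    b, a = butter(order, cutoff_hz / nyquist, btype="low")
    slow = filtfilt(b, a, landmarks, axis=0)   # smiles, frowns, squints
    fast = landmarks - slow                    # rapid articulatory motion
    return slow, fast

if __name__ == "__main__":
    # Synthetic demo: a slow 0.3 Hz "expression" drift plus a fast
    # 6 Hz "speech" wiggle, sampled at 30 fps for 10 seconds.
    fps = 30
    t = np.arange(300) / fps
    demo = (np.sin(2 * np.pi * 0.3 * t)
            + 0.2 * np.sin(2 * np.pi * 6 * t))[:, None]
    slow, fast = split_face_motion(demo, fps)
```

The obvious weakness, and presumably why it only works "in a very basic way", is that a single frequency cutoff can't handle motion that's both: a slow lip movement during careful speech lands in the expression channel, and a quick flash of a smile lands in the speech channel.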
It was cool to see, though. The input was a video of someone talking, and the output after analysis was a resynthesized face that just had the smile/frown/surprise expressions changing over time, with the speech movements stripped out.
( , Sat 5 Mar 2011, 22:35)