Google Brain is making a hybrid child of deep learning and computer vision/graphics.
That means all you need is a photo or a recording to recreate "The Actor". SkyNet is alive and talking.
What aspects of a person can you infer just by looking at their photos and videos? In other words: given the tone of a new play script, what is Google Brain able to infer (guess), model, and replicate about "The Actor"?
Look how far the intern-level devs have come ...
What Makes Tom Hanks Look Like Tom Hanks
International Conference on Computer Vision (ICCV), 2015
Synthesizing Obama: Learning Lip Sync from Audio
by Supasorn Suwajanakorn https://homes.cs.washington.edu/~supasorn/
"Given audio of President Barack Obama, we synthesize a high quality video of him speaking with accurate lip sync, composited into a target video clip. Trained on many hours of his weekly address footage, a recurrent neural network learns the mapping from raw audio features to mouth shapes. Given the mouth shape at each time instant, we synthesize high quality mouth texture, and composite it with proper 3D pose matching to change what he appears to be saying in a target video to match the input audio track."
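The core idea in that abstract — a recurrent network mapping per-frame audio features to mouth shapes — can be sketched in a few lines. This is a minimal toy illustration, not the paper's actual architecture: the feature type (MFCC-like vectors), all dimensions, and the plain Elman RNN are my assumptions, and the weights are random stand-ins for trained ones.

```python
import numpy as np

rng = np.random.default_rng(0)

N_AUDIO = 13    # audio features per frame, e.g. 13 MFCCs (assumption)
N_HIDDEN = 32   # recurrent hidden-state size (assumption)
N_SHAPE = 18    # mouth-shape parameters per frame (assumption)

# Randomly initialised weights stand in for trained ones.
W_xh = rng.normal(scale=0.1, size=(N_HIDDEN, N_AUDIO))
W_hh = rng.normal(scale=0.1, size=(N_HIDDEN, N_HIDDEN))
W_hy = rng.normal(scale=0.1, size=(N_SHAPE, N_HIDDEN))
b_h = np.zeros(N_HIDDEN)
b_y = np.zeros(N_SHAPE)

def audio_to_mouth_shapes(audio_frames):
    """Run a plain Elman RNN over a (T, N_AUDIO) sequence of audio
    features and return a (T, N_SHAPE) sequence of mouth-shape params."""
    h = np.zeros(N_HIDDEN)
    outputs = []
    for x in audio_frames:
        h = np.tanh(W_xh @ x + W_hh @ h + b_h)   # recurrent state update
        outputs.append(W_hy @ h + b_y)           # per-frame mouth shape
    return np.stack(outputs)

# 100 frames of fake audio features -> 100 mouth-shape vectors
shapes = audio_to_mouth_shapes(rng.normal(size=(100, N_AUDIO)))
print(shapes.shape)  # (100, 18)
```

In the paper's pipeline these predicted mouth shapes would then drive texture synthesis and 3D-pose-matched compositing into a target video; this sketch covers only the audio-to-shape regression step.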
"My goal is to bring computer vision out of the lab into the real world and make it really work in-the-wild. My focus revolves around the question [What aspects of a person can you infer by just looking at their photos and videos? Can you model and replicate them?] In particular, I worked on how to build a "moving" 3D face model out of just photos, how to create facial textures that can smile with creases and wrinkles just like the real thing, how to generate videos of a person from their voice, how to model their persona, and more. I tried very hard to make my solutions to complex problems as simple as possible.
My research interest lies in the intersection of computer vision, graphics, and machine learning, but also includes computational photography and optimization. I went to Cornell for undergrad, and had the great pleasure of working with Prof. John Hopcroft on social graph algorithms, and later got inspired by Prof. Noah Snavely and his computer vision class. I love hacking, coding, and tackling hard problems, and I think computer vision is fun because I get to see why on earth eigenvectors are useful.
I'm graduating soon and will be working at Google Brain as a resident making a hybrid child of deep learning and vision/graphics!"