3D/2.5D Hybrid Approach for Realtime Photoreal Human Rendering

Inspiration

I was inspired to take on this challenge after a series of recent personal projects convinced me that technology has reached a point where I could lean on my past experience to tie together a realtime system capable of driving photoreal human speech.

The Past

I've been working in computer graphics since I was in high school (1998). To illustrate my undying love of all things CG, allow me to introduce you to my first sci-fi feature film, produced for our high school film festival: The Fight for Terran Earth. It's an embarrassingly bad, special-effects-laden passion project that nearly killed me, but it gave me a better understanding of what it takes to pull off mixed-reality visual effects, and just how hard it is to do.

Disclaimer: This really isn't good. Like, seriously, not good. Keep in mind that I did all of the effects in my parents' basement, after school, on a home-built computer, before the GPU existed. Oh, and the Internet was just starting to take off, too.

The Approach

Along the way, I've developed a bunch of theories about how to achieve the goal of realtime photoreal human rendering (with speech!). Some were validated, some were not. Some haven't been tested yet.
