
See PinScreen’s real-time facial performance at RTC 2020


Creating digital humans for face replacement, de-aging, digital makeup and character creation is one of the hottest areas of VFX development, and its time may have come as we look at ways to “science the s***” out of getting actors to interact on screen while maintaining a safe distance.

With its PaGAN and PaGAN II systems, LA-based Pinscreen is one of the companies at the forefront of creating synthetic digital humans. See this post from March, after the company attracted attention at the World Economic Forum in Davos in February, prompting fxguide’s Mike Seymour to interview founder Professor Hao Li. More recently, Mike has written this follow-up piece.

“The entire motivation for Pinscreen is a comprehensive system where you can enable interaction with virtual humans.”

Professor Hao Li, Founder, Pinscreen

Once lengthy machine learning training has been performed, Pinscreen’s generative rendering process is fast, as it infers the lighting, performance, and responses for the final result, but artist control is sacrificed.

With its newer neural rendering approach, deep neural networks do the heavy lifting, while artists retain control over elements such as illumination, camera parameters, posing, appearance, motion, and lip-sync dialogue.

To learn more about Pinscreen’s work, sign up for the RealTime Conference – RTC 2020. Mike is co-chairing a session on June 9 with Facebook’s Christophe Hery, which will feature a presentation by Professor Li.
