The Workflow of 21st-Century Puppeteers
Digital humans are real and they walk among us. From ‘Meet Mike’ to ‘Siren’, we’re seeing a whole new level of photorealism and intelligence in avatars, allowing users to become more immersed in the virtual space than ever before. Digital humans have far-reaching applications in film, the classroom, customer service, and beyond – all of which can be presented in virtual reality (VR).
Not only that, virtual production combined with VR allows creators to visualize digital human assets much earlier in the creative process, radically transforming the world of cinematography.
But where do digital humans come from? What does it take to manipulate the actions of pseudo-humans so seamlessly that they can blend in with society? Here is how Cubic Motion is pushing incredibly lifelike human avatars to previously unseen heights.
Explaining computer vision
Cubic Motion’s machine learning algorithms are based on years of experience and extensive work in the field of computer vision, allowing the PhD-led team to precisely track facial features even without physical markers. Model-based algorithms allow specialists to build trackers quickly from small training data sets to capture the identity of a performer and track subtle shifts in expression in real time.
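To make the idea of building a tracker from a small training set concrete, here is a toy sketch: a linear regressor, fitted by least squares, that maps image feature vectors to 2-D landmark coordinates. The feature dimensions, landmark counts and noise level are all invented for illustration – this is not Cubic Motion’s actual algorithm, just a minimal stand-in for the model-fitting step.

```python
import numpy as np

# Toy "model-based tracker": learn a linear mapping from image feature
# vectors to 2-D facial landmark coordinates using a small training set.
rng = np.random.default_rng(0)
n_train, n_features, n_landmarks = 40, 16, 5

# Ground-truth linear relationship (stands in for a real feature extractor
# applied to frames of a captured performance).
true_map = rng.normal(size=(n_features, 2 * n_landmarks))

X_train = rng.normal(size=(n_train, n_features))  # per-frame image features
Y_train = X_train @ true_map + 0.01 * rng.normal(size=(n_train, 2 * n_landmarks))

# Fit the tracker by least squares -- fast even on tiny training sets.
learned_map, *_ = np.linalg.lstsq(X_train, Y_train, rcond=None)

# "Track" landmarks in a new, unseen frame.
x_new = rng.normal(size=(1, n_features))
predicted = (x_new @ learned_map).reshape(n_landmarks, 2)
truth = (x_new @ true_map).reshape(n_landmarks, 2)
error = np.abs(predicted - truth).max()
```

Because the fit is a single least-squares solve, retraining for a new performer’s identity takes a fraction of a second, which is the property that makes small-data tracker construction practical.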
For instance, computer vision can be put to use tracking the human eye to record how we blink and how the pupil moves. With eyes conveying so much emotion, they are perhaps the most important factor when creating a digital human. Siren – a high-fidelity, real-time digital human – is particularly complex in this respect.
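As a simplified illustration of eye tracking, the sketch below locates a dark pupil in a synthetic grayscale eye image by thresholding and taking the centroid of the dark pixels. Real systems are far more sophisticated; the image, intensities and pupil radius here are invented for illustration.

```python
import numpy as np

# Toy pupil tracker: find the pupil centre in one synthetic grayscale frame.
h, w = 64, 64
true_cy, true_cx = 30, 25  # where we draw the pupil in this frame

ys, xs = np.mgrid[0:h, 0:w]
image = np.full((h, w), 200, dtype=np.uint8)               # bright sclera
pupil = (ys - true_cy) ** 2 + (xs - true_cx) ** 2 <= 6 ** 2
image[pupil] = 20                                          # dark pupil disc

# Threshold: pupil pixels are much darker than the surrounding eye.
mask = image < 100

# The centroid of the dark region gives the pupil position; tracking this
# frame-to-frame records gaze direction, and a frame with no dark region
# would register as a blink.
cy, cx = ys[mask].mean(), xs[mask].mean()
```

Running the same two steps on every incoming video frame yields a stream of pupil positions, which is the raw signal a real-time eye animation rig consumes.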
Developed by an international team of artists and engineers – including Cubic Motion, Epic Games, 3Lateral, Tencent and Vicon – Siren can be “driven” live by an actress on stage to recreate human expressions and movements in real time. There are plenty of subtle details that make a significant impact on the overall look of Siren, from scanning dental casts to adding peach fuzz on her face. All of this can be rendered in real time in Unreal Engine.
Facial muscles tense and relax, eyes flicker and dart – and if you look closely enough, even skin pores and tiny wrinkles can be seen. With each passing year, the uncanny valley becomes ever shallower. Siren is just the latest breakthrough in an ongoing series of advancements to create the perfect digital human.
Computer vision technology is particularly suited to real-time applications in social virtual reality, gaming and film production. One incredible use case debuted at SIGGRAPH last year: a VR experience called Meet Mike.
A wholly digital version of VFX reporter Mike Seymour was “driven” and rendered in real time, interviewing industry legends from companies like Pixar and Weta. The facial movements of the interviewees were also tracked and used to drive their own avatars, allowing them to interact with Mike in VR with additional participants watching the conversation unfold.
Meet Mike presents a whole new format for interviews and informative media. It’s one that could never have been possible before but might just succeed in making dry, news-based content more digestible for the modern audience. Now, we’re seeing a wave of digital presenters and influencers make their debuts. Just look at Instagram’s infamous Lil Miquela or the AI anchors created by China’s state-run press agency, Xinhua.
So what’s next?
Digital humans are the key to unlocking a virtual world — one that is truly interactive and all the more immersive as a result. Truly believable avatars will make VR content significantly more appealing and engaging, pulling users back time and again. And the Cubic Motion team is enabling content producers and game developers to more easily streamline VR character creation, populating the landscape with believable avatars.
Human beings have always shaped the environment around us, but now we’re creating a whole new environment, and we’re creating whole new humans to shape it.