Outsourcing wave to hit Hollywood actors

Posted by Wolfgang Gruener

Opinion – Every time a new processor is announced, we wonder what we could use the new horsepower for besides a few more frames in games and faster Photoshop filters. Ten years ago, a 450 MHz (single-core) CPU sounded like overkill, and in ten years we may be wondering what that heterogeneous many-core chip might be good for. You know what I mean. We are typically looking for those new types of applications the hardware vendors tell us have not been invented yet. However, for the next generation of hardware, I will make a prediction: Multi-core processors and capable graphics hardware will enable movie production studios to outsource Hollywood actors, and to keep Angelina Jolie as young and attractive as she is now for as long as they want. And there may even be more uses for this technology.

To witness technology progress over an extended period of time is fascinating. That is, in fact, why I chose this profession - to stay in close touch with those who are so profoundly impacting our everyday lives (and to prepare myself for what I will have to learn to keep up as my children find new, exciting and entertaining uses for technology). Most developments are just evolutionary or short-lived, but once in a while, when I see something that has the potential to revolutionize a part of our lives (for better or worse), I step back and take a closer look. This was the case after a demonstration in AMD’s CES suite – a demo that was stunning and yet scary at the same time.

So, what was it? To show off the horsepower of its Dragon platform (Phenom II processor, 4000-series graphics), AMD showcased an application developed by Image Metrics. It creates an animated human face by using "a proprietary video analysis system to produce facial animation": the system films an actor, analyzes all the details and applies them to a computer-generated character. The Image Metrics system enables animators to extract all the data the camera can see at the pixel level - including the eyes and mouth.
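
Image Metrics has not disclosed how its analysis actually works, so the following is only a rough sketch of what a generic marker-less facial capture loop could look like, not the company's method. The functions detect_landmarks() and solve_blendshapes() are hypothetical placeholders for the proprietary analysis; only the video-reading calls use a real library (OpenCV).

```python
# Illustrative sketch of a marker-less facial capture loop - NOT Image Metrics'
# proprietary pipeline. detect_landmarks() and solve_blendshapes() are hypothetical
# stand-ins for the real analysis; only OpenCV's video reading is a real API.
import cv2


def detect_landmarks(frame):
    """Hypothetical: locate facial features (eyes, brows, mouth) in one frame."""
    raise NotImplementedError("placeholder for the proprietary video analysis")


def solve_blendshapes(landmarks):
    """Hypothetical: convert feature positions into animation weights for a CG face rig."""
    raise NotImplementedError("placeholder for the animation solver")


def capture_performance(video_path):
    capture = cv2.VideoCapture(video_path)   # ordinary footage - no markers, masks or suits
    animation = []
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        landmarks = detect_landmarks(frame)             # per-frame, pixel-level analysis
        animation.append(solve_blendshapes(landmarks))  # one set of rig weights per frame
    capture.release()
    return animation  # this data drives the virtual character, not the raw video itself
```

The point of the sketch is the data flow: ordinary video goes in, a compact stream of animation parameters comes out, and that stream - not the footage - is what animates the virtual face.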

We have seen this in the past with humans being digitized by wearing masks and suits equipped with sensors. In fact, Hollywood has been experimenting with this for some time now. This particular demonstration, however, ran on a mainstream computer: it did not require the actor to wear a mask, and it showed a fantastically lifelike animation of the virtual character.

The truly amazing part of this demonstration was the resolution, which was extremely high - far beyond the detail a human eye can even see once you are three or more feet away. The detail went down to tiny imperfections like individual eyebrow hairs and even skin pores. Once captured, this virtual person could be instructed by the computer to do anything you like, say anything you like - at least in theory. Stand a few feet back and, provided you did not know that this is a virtualization, I’d be pretty sure you would have no idea that it wasn’t a real person you were looking at.

The obvious applications for this technology are dry topics such as basic human-to-human interactions - customer support, for example. Or even face recognition. But conceivably, anyone could be virtualized. Think about this: you could keep yourself (virtually) alive for future generations to see. Or think about the entertainment industry. The George Clooneys and Angelina Jolies could have endless roles, multiple simultaneous "virtual shoots," and so much more. They could even be reduced to mere licensing contracts - scanning them into the computer, correcting any imperfections, and then making them available to the highest bidder. The actual acting would be handled by voice generation software, and the input to make the virtual person move for video could be created somewhere else – perhaps in India or another place where talent can be hired for less money than here in the U.S. I have no problem imagining the creation of entirely digital famous characters on-screen, eliminating the need for at least some actors. How about Lara Croft? Or the winner of some American contest: Create the next big star. You ain’t seen nothing yet.

If you’d like to go a bit more negative, consider how such technology could be misused by politicians around the world to deliver speeches they never actually gave, or to make their opponents appear to deliver speeches they never gave. Exaggerated? Perhaps. But will it be possible? Absolutely. And it’s scary stuff, if you ask me.

Of course, there are less scary applications as well. Think about direct interactions with characters that may not exist. And if you really want, you may even be able to have life-like conversations with your favorite celebrity. Or imagine a much more bandwidth-friendly video chat with your friends on Facebook, in which only animation data is transmitted and no video data anymore. Face data, animation data, voice - all done. If we think about GPGPU-accelerated iPhones, you could even do this on a cellphone, and pretty soon.
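
To see why such a chat would be so bandwidth-friendly, here is a back-of-the-envelope comparison. The numbers are assumptions for illustration (50 animation parameters per frame, 30 frames per second, a 500 kbit/s webcam stream), not measurements of any real product:

```python
# Rough bandwidth comparison: sending facial animation parameters vs. sending video.
# All numbers are illustrative assumptions, not measurements.
import struct

FPS = 30                 # assumed frame rate of the chat
PARAMS_PER_FRAME = 50    # assumed number of facial animation parameters per frame


def encode_frame(weights):
    """Pack one frame of animation parameters as 32-bit floats."""
    return struct.pack(f"<{len(weights)}f", *weights)


payload = encode_frame([0.0] * PARAMS_PER_FRAME)        # 50 floats = 200 bytes per frame
animation_kbps = len(payload) * FPS * 8 / 1000          # ~48 kbit/s, before any compression
video_kbps = 500                                        # assumed modest webcam video stream

print(f"animation-only stream: ~{animation_kbps:.0f} kbit/s")
print(f"video stream:          ~{video_kbps} kbit/s")
```

Even with a compressed voice channel on top, the animation-only stream stays an order of magnitude below a modest video stream - which is why the idea scales down to phones so easily.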

The current stage of the development reminds me somewhat of 2001, when I interviewed the CEO of Electronic Arts for a German weekly and was told that the company was working on injecting artificial intelligence into game characters. At the time, CPUs and graphics cards were able to run one character with artificial intelligence per screen. Today, they run dozens or hundreds.

AMD marketing manager Raymond Dumbeck told me that one Phenom II processor is already capable of simulating three to four life-like faces per screen. And AMD isn’t even using the graphics card’s floating-point capabilities to do this yet. Add graphics card acceleration to the equation, plus much more powerful many-core processors within the next five to ten years, and you may be able to simulate potentially hundreds of people on one screen.

It could be thought of as a precursor to that sequence at the end of the movie The Running Man with Arnold Schwarzenegger, in which the producer has the animation crew map his image into digital memory. He is then mapped onto a dummy, and they roll out the final fight scene. It is full CGI in the show, and they make it appear as though Arnold’s character dies, thus ending the saga from the public’s point of view - even though he did not really die.
 
Amazing technology, if you ask me. But I don’t know if we should be looking forward to it.