I’m at SIGGRAPH this week, where NVIDIA surprised the industry with its new RTX video cards. They use a technology first theorized last century to massively speed up rendering performance and, potentially, to generate billions of dollars for companies with aging movie, TV, and video game libraries. That last part wasn’t really highlighted during the launch but, near term, may represent the greatest opportunity to generate unanticipated revenue for movie studios.
Let’s talk about that this week.
A Little History
Apparently back in 1979, a researcher named Turner Whitted created a concept called multi-bounce recursive ray tracing. On paper, it would dramatically change how realistically images could be rendered, but it was massively resource intensive. A relatively low-resolution 512x512 image took 1.2 hours to render on a $1.5M midrange computer. He estimated that, to scale this to acceptable resolutions and anything approaching real time, you’d need the equivalent of a Cray supercomputer for every pixel you wanted to render. So, you could create amazing things with the technology, but the cost would exceed any budget short of US Defense-level spending, and thus the technology largely went unused.
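To make the idea concrete, here is a minimal, purely illustrative Python sketch of the recursion Whitted described: every ray that hits a surface spawns a reflection ray, so the work multiplies with each bounce. The scene (a single mirrored sphere against a flat sky color) and all of its values are my own stand-ins for illustration, not anything from Whitted’s paper or NVIDIA.

```python
import math

# One reflective sphere and a flat "sky" background (all values illustrative).
SPHERE_CENTER, SPHERE_RADIUS = (0.0, 0.0, -3.0), 1.0
SKY = (0.2, 0.4, 0.8)          # background color returned on a miss
REFLECTIVITY = 0.5             # fraction of color mirrored on each bounce

def dot(a, b): return sum(x * y for x, y in zip(a, b))
def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def add(a, b): return tuple(x + y for x, y in zip(a, b))
def scale(a, s): return tuple(x * s for x in a)

def hit_sphere(origin, direction):
    """Return the distance to the sphere along the ray, or None on a miss."""
    oc = sub(origin, SPHERE_CENTER)
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - SPHERE_RADIUS ** 2
    disc = b * b - 4 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-4 else None

def trace(origin, direction, depth=0):
    """Whitted's recursion: each hit spawns a reflection ray, up to a depth cap."""
    if depth > 3:
        return SKY
    t = hit_sphere(origin, direction)
    if t is None:
        return SKY
    point = add(origin, scale(direction, t))
    normal = scale(sub(point, SPHERE_CENTER), 1.0 / SPHERE_RADIUS)
    # Mirror the incoming ray about the surface normal and recurse
    # (the recursion is the expensive part Whitted warned about).
    reflected = sub(direction, scale(normal, 2.0 * dot(direction, normal)))
    bounce = trace(point, reflected, depth + 1)
    base = (0.8, 0.1, 0.1)     # the sphere's own diffuse color (made up)
    return tuple((1 - REFLECTIVITY) * bc + REFLECTIVITY * rc
                 for bc, rc in zip(base, bounce))

# One ray straight down the camera axis hits the sphere and blends in the sky.
color = trace((0.0, 0.0, 0.0), (0.0, 0.0, -1.0))
```

Even this toy version shows why the technique was so expensive in 1979: at 512x512, every one of the 262,144 pixels can trigger this recursion several bounces deep.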
Well, NVIDIA did release its DGX Station, a kind of desktop supercomputer, some months back, and used it to demonstrate that it now had the power to make this work. To showcase this, they rendered a Star Wars scene that looks like it would fit right into the movie: high resolution, photorealistic, and with no actors (for the visuals, anyway; they did use voice actors). The DGX Station is a workstation approaching $100K in cost.
At SIGGRAPH they introduced three cards, called the Quadro RTX line, that do the same thing faster, and these cards all cost $10K or less. (The entry-level card is around $2,300 retail.) Every major workstation vendor (HP, Dell, and Lenovo) has announced workstations that use these cards.
NVIDIA is estimating about a 6X performance increase in rendering with these new cards, but how they get there is interesting: the card first renders at a relatively low resolution, then uses an AI component to up-convert the image to a higher (4K or 8K) resolution.
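To see the shape of that pipeline, here is a toy sketch: the expensive renderer produces a small frame, and a cheap up-converter blows it up to the target resolution. NVIDIA’s actual up-converter is a trained neural network; in this sketch, plain nearest-neighbor interpolation stands in for it, and the “render” is just a gradient, so every detail here is an illustrative assumption rather than NVIDIA’s implementation.

```python
def render_low_res(width, height):
    """Stand-in for the expensive ray-traced render (just a brightness gradient)."""
    return [[(x + y) / (width + height) for x in range(width)]
            for y in range(height)]

def up_convert(image, factor):
    """Stand-in for the trained AI up-converter: nearest-neighbor upscaling."""
    return [[image[y // factor][x // factor]
             for x in range(len(image[0]) * factor)]
            for y in range(len(image) * factor)]

# Render at quarter resolution, then up-convert toward 1920x1080.
small = render_low_res(480, 270)
big = up_convert(small, 4)
```

The speedup comes from the first step touching 16x fewer pixels; the bet NVIDIA is making is that a trained network can fill in the missing detail far more convincingly than the simple interpolation shown here.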
What makes this particularly interesting is that the card could potentially batch-remaster anything that was low resolution, which means older movies, TV shows, and even video games. You could effectively take an old title like Doom and remaster it with current-generation graphics by training the AI to remove and replace all of the core elements. Gameplay would be the same, but the title would look far more realistic, and you should be able to get a critical mass of folks who loved that game to buy the up-converted version for a new, more immersive experience.
Amazon, Netflix, HBO, Showtime, and other video providers are constantly looking for inexpensive content. If they could update and then re-release some of the older movies that were well acted but just look old, they could likely better meet their need for new content without breaking the bank.
I was even thinking you might be able to take some failed movies and remaster them with new scenes to make them more interesting. For instance, The Lone Ranger was panned largely because a Caucasian actor was cast as Tonto, a Native American character. You could digitally remaster that movie with a real Native American actor and likely make it more compelling. Or take a movie like John Carter of Mars, which was massively expensive but failed to pull an audience (largely because it missed on the steampunk vibe the movie needed, and it likely needed to be more R than PG). You could fix those parts by re-rendering the flying machines and addressing some of the costumes, not to mention making some of the aliens and animals more interesting in appearance. Granted, that wouldn’t address the fact that most of the folks who loved the books are either dead or ancient (like me), but, for a relatively small expense, you could release a very different movie that might find a new audience and possibly turn a failure into a success. And given the title is just languishing, anything that drew interest back to it would be like finding money, and likely incredibly interesting to the title’s owners.
Into The Future
This ability to use an AI to change one thing in a movie into something else will likely evolve into something even more compelling. Imagine creating a storyboard, scanning it into a workstation, rendering one image of each of the elements, and using those images to train the AI in the workstation. Then, once trained, the workstation could render the entire movie without actors, basically up-converting the storyboard.
We aren’t there yet, but that could be the future of this technology as it advances. This could also eventually be an ideal way to create VR content from existing 2D film stock: you not only up-convert the existing movie, you render the entire world around the main characters, so you can walk through the scenes even though the title wasn’t initially filmed for VR.
As big as this announcement is, it is really just the first step in what is likely to be a massive industry revolution.
The idea of a 6X improvement in rendering is huge in and of itself, but add to that the concept of an AI that could learn to up-convert and remaster titles, and the opportunity becomes astronomically big. And this is just the tip of what is likely to be a huge iceberg of change, one that could alter most aspects of video creation. Used properly, this could massively reduce the number of failed movies (allowing far deeper changes after test audiences view them), reduce the cost of making any picture, and increase the productivity of graphics artists.
But I also think this is a huge step toward making the VR world portrayed in Ready Player One real, and the fact that we didn’t really expect this level of performance for another decade suggests we are getting to that world far faster than any of us realized. Pretty damned amazing, all in all.