Do we really need Cinema 2.0?
Opinion – AMD pitches an enticing idea that takes advantage of the massive horsepower hidden in the company’s graphics cards. A second-generation cinema could be the result, merging convincing cinematic digital effects with real-time interactivity. But we wonder how successful such a Cinema 2.0, as AMD calls it, could be.
From a technology perspective, the potential of graphics chip technology is amazing. If only half of the general-purpose GPU demos we have seen to date become commercially available one day, I am absolutely convinced that this technology is the key to revolutionizing the way we use computers today, offering accurate speech recognition, the ability to control elements on your screen with simple hand movements, and real-time 4D applications, for example within Google Earth.
One of these demos was part of AMD’s Cinema 2.0 pitch yesterday, a technology that AMD says “punches a sizeable hole in the sensory barrier that separates today’s visionary content creators and the interactive experiences they desire to create for audiences around the world.” If AMD’s vision is right, then we may be able to experience life-like environments and jump into the role of actors on the screen.
“With Cinema 2.0 you won’t just play movies, you’ll play in them. Imagine the ability to look around the environments in a sci-fi movie, put yourself in the driver’s seat in a race scene, duck behind things and pop up to see what’s going on in an intense firefight – all of these things are possible with Cinema 2.0,” said Charlie Boswell, director, Digital Media & Entertainment, at AMD.
According to the company, the secret of Cinema 2.0 is the “blending of highly complex and realistic graphics that traditionally are the exclusive domain of blockbuster films, with the dynamic 3D interactivity of popular video games.” Each frame of a movie today can consume up to 30 hours of rendering time, so we are far away from a real-time experience. To achieve real-time rendering at 25-30 frames per second, far more computational speed is required, and it is generally believed that the needed technology is about ten years out in the future.
With GPUs that are already capable of delivering 1 TFlops per chip and a trend that doubles the performance every 12 months, we could see AMD’s Cinema 2.0 come to life much earlier. An enticing idea, but the question is: Do we really want that?
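The numbers above allow a rough back-of-envelope check. The figures below are taken from this article (up to 30 hours of offline rendering per frame, a 25 fps real-time target, performance doubling every 12 months) and are assumptions for illustration, not AMD’s own projections; they also bound the hardest case of reproducing full offline film quality in real time, whereas game-style engines cut many corners:

```python
# Back-of-envelope: how many annual performance doublings separate
# offline film rendering (30 hours per frame) from real-time playback
# at 25 fps, using the figures quoted in the article.
import math

hours_per_frame = 30            # offline rendering time per frame (article's figure)
target_fps = 25                 # low end of the 25-30 fps real-time target

seconds_per_frame_now = hours_per_frame * 3600
seconds_per_frame_target = 1 / target_fps

speedup_needed = seconds_per_frame_now / seconds_per_frame_target
doublings = math.log2(speedup_needed)

print(f"Required speedup: {speedup_needed:,.0f}x")
print(f"Doublings needed (years, at one doubling per year): {doublings:.1f}")
```

Taken at face value, brute-forcing full film quality would need roughly a 2,700,000x speedup, which is why real-time cinematic rendering is expected to rely on clever approximations rather than raw performance growth alone.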
I don’t want to be all negative here, but I remember a trial of a sort-of interactive movie in Germany when I was a teenager. The movie was filmed from two different viewpoints – those of the two main actors. It was broadcast on two channels at the same time, and you could switch between the channels to change your vantage point. Needless to say, the concept flopped and was never repeated. You might say that this movie’s interactivity was very limited, and I would have to agree, but I do believe that AMD’s Cinema 2.0 will face similar challenges.
Playing a video game and watching a movie are two very different activities – one active and the other passive. When I am watching a movie at the end of a day or on a Saturday night, the last thing I want to do is become hyperactive, morph into the role of the main actor and check out what is going on on the sidewalk while I am in a car race. Video games are the complete opposite: that is the environment where you expect to be alert and active and will engage in activities. In front of a good movie, I prefer a bag of popcorn over hugging the remote and annoying my wife by changing viewing angles.
Cinema 2.0 sounds like a good demo of graphics horsepower to me. But let’s leave it at that – a demo.