MIT speeds up graphics rendering
Aiming to make video games look as photorealistic as possible, designers often try to build in the kind of motion blur that appears in movies. Without it, fast motion simply doesn't look realistic.
But, surprisingly, this is actually much more computationally difficult than producing perfectly sharp images.
This August, however, MIT computer scientists will present a paper at the Siggraph conference describing new techniques that produce motion blur far more efficiently than before, cutting rendering time from hours to minutes.
They say several major special-effects companies have already contacted them about the work.
Motion blur arises because, over the course of a single exposure, each pixel of an image averages the different colors of whatever moves across it.
To reproduce this, a computer has to determine how the objects in the scene are moving, and then calculate how rays of light from an imagined light source would reflect off the objects. Finally, it determines which rays of light would actually reach an imagined lens.
And if objects are moving rapidly, it may have to go through this process hundreds of times for a single frame.
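The brute-force approach described above can be sketched in a few lines. This is an illustrative toy, not the paper's renderer: the `shade` function and the moving-object scene are made-up stand-ins for a real shading calculation.

```python
# Brute-force motion blur: shade the scene at many instants within the
# frame and average the results for each pixel.

def shade(pixel_x, t):
    """Toy shading: a bright 2-pixel-wide 'object', black elsewhere."""
    obj_left = 10.0 * t  # the object moves 10 pixels per frame
    return 1.0 if obj_left <= pixel_x < obj_left + 2.0 else 0.0

def motion_blur_pixel(pixel_x, samples=100):
    # Average the pixel's color over `samples` instants across the frame.
    total = 0.0
    for i in range(samples):
        t = i / samples  # time within the frame, in [0, 1)
        total += shade(pixel_x, t)
    return total / samples

# A pixel the object sweeps over partway through the frame comes out grey,
# because it was bright for only a fraction of the time samples.
print(motion_blur_pixel(5.0))  # → 0.2
```

Each pixel here pays for one shading call per time sample, which is why rapid motion makes rendering so expensive.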
The MIT researchers made the simplifying assumption that the way in which light reflects off a moving object doesn’t change over the course of a single frame. For each pixel in the final image, their algorithm still averages the colors of multiple points on objects’ surfaces - but calculates those colors only once.
They represented the relationship between the color calculations and the shapes of the associated objects as entries in a table - which can simply be looked up.
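The caching idea can be sketched as follows. This is a minimal illustration of the lookup-table concept, assuming shading is constant within a frame; `expensive_shade` and the surface points are hypothetical stand-ins, not the researchers' actual data structures.

```python
# Assume a surface point's shaded color doesn't change within one frame:
# compute it once, store it in a table, and look it up on every
# subsequent time sample instead of recomputing it.

calls = 0  # count how many real shading calculations we perform

def expensive_shade(point):
    """Stand-in for a costly lighting calculation."""
    global calls
    calls += 1
    return (point * 0.1) % 1.0  # arbitrary deterministic 'color'

color_table = {}

def shade_cached(point):
    # First request computes and stores the color; later ones are lookups.
    if point not in color_table:
        color_table[point] = expensive_shade(point)
    return color_table[point]

# Average 100 time samples over 5 surface points: 500 color requests,
# but only 5 real shading calculations.
points = [1, 2, 3, 4, 5]
avg = sum(shade_cached(p) for _ in range(100) for p in points) / 500
print(calls)  # → 5
```

The savings grow with the number of time samples, since the expensive calculation is amortized across all of them.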
Another technique simplifies the task of determining which rays of light would reach an imagined lens. While digital animators might normally consider more than 100 discrete points on the surfaces of moving objects in determining the color value of a single pixel, the MIT algorithm instead looks at just 16 or so points, making an educated guess about the color values of the points in between.
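The sparse-sampling idea amounts to shading only a handful of points and interpolating between them. A minimal sketch, assuming the color varies smoothly along the surface (the `true_color` function below is a made-up stand-in, and simple linear interpolation substitutes for whatever reconstruction the paper actually uses):

```python
import math

def true_color(s):
    """Hypothetical smoothly varying color along a surface, s in [0, 1]."""
    return 0.5 + 0.5 * math.sin(2.0 * math.pi * s)

def interpolated_color(s, samples=16):
    # Shade only `samples` evenly spaced points, then linearly
    # interpolate between the two nearest to estimate the rest.
    step = 1.0 / (samples - 1)
    i = min(int(s / step), samples - 2)
    s0, s1 = i * step, (i + 1) * step
    c0, c1 = true_color(s0), true_color(s1)  # the only 'real' shading calls
    w = (s - s0) / step
    return (1.0 - w) * c0 + w * c1

# For smoothly varying shading, the educated guess stays close to the
# exact value even though only 16 points were actually shaded.
s = 0.37
print(abs(interpolated_color(s) - true_color(s)) < 0.02)  # → True
```

The trade-off is that interpolation only works well when the shading really does vary smoothly between the sampled points.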
Unfortunately, putting all of this into practice would require modifying the architecture of graphics chips, which is not likely to happen any time soon. But the team says its techniques demonstrate new strategies for solving such problems more efficiently.