Inside Intel's ray tracing research

Posted by Theo Valich

Interview - Intel will soon be extending its product portfolio into visual computing. And while Nvidia isn't publicly admitting that Intel is an increasing threat to the company, we know that Intel isn't aiming for second place. While Larrabee is one of the firm's key hardware products, Intel is busy developing lots of related technologies in the background as well - including ray-tracing engines. We had a chance to talk to Intel's Daniel Pohl, a young German engineer who first got our attention when he rewrote the rasterization engines of Quake III Arena and Quake IV into full-blown ray tracing engines. Back then, Daniel used the OpenRT API. Today, he is developing an API currently known as IntelRT, and he gave TG Daily editor Theo Valich a fascinating look into his work.

In our recent interview with Epic founder and Unreal creator Tim Sweeney, we learned about the coming integration of CPU and GPU instructions, indicating that we will see code that executes on graphics chips or the CPU, whichever proves to be the faster technology. It is a trend we have been watching for a while, especially on the GPU side with floating-point accelerators. On the other hand, the x86 CPU seems to be moving toward GPU capability. Nvidia's CUDA has the lead in general-purpose GPU (GPGPU) computing today and provides monstrous processing horsepower through the graphics chip. Intel recently announced AVX, 256-bit vector extensions that will arrive with the 32 nm Sandy Bridge processor generation in 2010 and attack the problem from the CPU side.

AMD is a somewhat silent player in this arena, but the company is actually deeply involved in GPGPU applications. Companies like British Petroleum (BP) have been using FireStream cards based on the X1900 XTX chip for GPGPU work ever since ATI launched its GPGPU platform back in September of 2006.

But the development of GPUs and CPUs does not stop here: Upcoming cGPU products such as Larrabee are geared toward computational problems like ray tracing. In fact, Intel is developing a ray tracer called IntelRT. We sat down with Daniel Pohl, a research scientist with Intel, to learn more about this project and how Intel plans to change the game of graphics.

TG Daily: What prompted you to use ray tracing in games and replace the common rasterization process?

Daniel Pohl: It all started in 2004, when I was studying at Erlangen University in Germany. I listened to a talk by Professor Slusallek, who was researching real-time ray tracing. I got really excited, because I had initially thought that ray tracing could not be used for anything beyond movie production. Slusallek showed some real-time demonstrations, and I wondered why no one was using this technology in games. So I contacted Professor Slusallek and rewrote the Quake III engine to use ray-traced graphics (based on OpenRT).

TG Daily: After Quake III and some media reports, you rewrote Quake IV using ray tracing as well and somehow ended up at Intel in California. What are you working on these days?

Daniel Pohl: I am a research scientist in the ray tracing group. We explore real-time ray tracing - for example, how to improve performance, how to improve scalability and, most importantly, how to improve image quality. This includes, for example, adaptive supersampling (anti-aliasing).

TG Daily: When we look at the differences between ray tracing and conventional rasterization, it appears that ray tracing can be used for much more than rendering images.

Daniel Pohl: You are correct. Once you accelerate ray tracing to a decent speed, you can use it for collision detection, or use it for AI [artificial intelligence -ed] visibility queries. For example, you could test an AI's camouflage: You can determine how visible something is when it is hiding behind different objects, such as trees or one or more other solid objects. Ray tracing is not all about rendering. It is about how much more you can do with a single technique.

TG Daily: That puts in-game physics into a whole different perspective.

Daniel Pohl: Yes, this is a very good thing, because we can keep everything game-related in a single structure. We don't need separate structures for collisions, a separate one for graphics, a separate one for physics, a separate one for AI and so on. If you want, you can use one highly optimized ray tracing structure to test everything. If you don't like [the results], you can use an additional [structure], but you are not forced into using it.
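
To make the "single structure" idea concrete, here is a minimal sketch in C++. Everything named below (Scene, occluded(), canSee()) is a hypothetical stand-in for illustration, not Intel's actual API; the point is that the same occlusion query a renderer issues for shadows can answer an AI line-of-sight question.

    #include <cmath>

    struct Vec3 { float x, y, z; };
    struct Ray  { Vec3 origin, dir; float tmax; };

    // Hypothetical scene with one acceleration structure for every query.
    struct Scene {
        // Returns true if any geometry blocks the ray before tmax.
        // Stand-in body; a real version would walk a kd-tree or BVH.
        bool occluded(const Ray&) const { return false; }
    };

    // AI line-of-sight: can the guard see the player, or is the player
    // hiding behind trees or other solid objects?
    bool canSee(const Scene& scene, Vec3 guardEyes, Vec3 player) {
        Vec3 d{player.x - guardEyes.x, player.y - guardEyes.y,
               player.z - guardEyes.z};
        float dist = std::sqrt(d.x*d.x + d.y*d.y + d.z*d.z);
        Ray ray{guardEyes, {d.x/dist, d.y/dist, d.z/dist}, dist};
        return !scene.occluded(ray);   // the same call a renderer makes
    }

A collision test would be the same query again, with the ray cast along an object's direction of motion instead of toward a target.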

TG Daily: One of the things we have heard about ray tracing is that the technology is extremely scalable. So, let me ask: Just how scalable is ray tracing?

Daniel Pohl: The nice thing about ray tracing is that each ray can be calculated individually. That means you can calculate pixels independently of each other, and that is very good for parallelism. We created a demo with four PCs using Core 2 Quad processors and connected these machines over a Gigabit Ethernet connection. Then we measured a single machine with one core enabled against the virtual 16-core machine: The speedup factor was 15.2, meaning this virtual machine was more than 15 times faster than the single-core machine.
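
Because no two pixels share any state, the parallelization amounts to little more than splitting the image into bands. A minimal sketch, where renderPixel() is a hypothetical stand-in for tracing one primary ray:

    #include <cstdint>
    #include <thread>
    #include <vector>

    // Stand-in for tracing one primary ray through the scene.
    uint32_t renderPixel(int x, int y) { return uint32_t(x ^ y); }

    // Render rows [y0, y1) of a width-pixel-wide frame.
    void renderRows(uint32_t* frame, int width, int y0, int y1) {
        for (int y = y0; y < y1; ++y)
            for (int x = 0; x < width; ++x)
                frame[y * width + x] = renderPixel(x, y);
    }

    // Split the image into horizontal bands, one per hardware thread.
    // No locks are needed: no two threads ever touch the same pixel.
    void renderFrame(uint32_t* frame, int width, int height) {
        unsigned n = std::thread::hardware_concurrency();
        if (n == 0) n = 4;   // fallback if the count is unknown
        std::vector<std::thread> workers;
        for (unsigned i = 0; i < n; ++i) {
            int y0 = int(height * i / n);
            int y1 = int(height * (i + 1) / n);
            workers.emplace_back(renderRows, frame, width, y0, y1);
        }
        for (auto& w : workers) w.join();
    }

For what it's worth, a 15.2x speedup on 16 cores works out to roughly 95 percent parallel efficiency, which is what this kind of lock-free structure makes possible.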

TG Daily: This result included the delay over Gigabit Ethernet?

Daniel Pohl: Yes. We knew that Gigabit Ethernet adds a lot of latency, but the ray tracer was still able to achieve a 15.2x speedup.

TG Daily: Ten years ago, real-time ray tracing was mostly science fiction, even on hundreds of processors running at clock speeds close to the ones we are seeing today. What progress have you seen in this segment?

Daniel Pohl: Because of those massive computing requirements, ray tracing was used only in movie productions. Terminator 2 is a good example: You had those shiny chrome effects reflecting and bouncing off different surfaces, and you saw the Terminator emerging from the floor. Around 2001, the first real-time ray tracers were shown, using huge cluster systems with hundreds of CPUs networked over Ethernet. Nowadays, we can do some pretty decent ray tracing on a single machine with eight cores.

TG Daily: What technologies brought ray tracing down to the desktop computer?

Daniel Pohl: First, the hardware is getting faster and faster over time, but as you can imagine, that was not the key. We have made great advances in using SSE instructions, and another factor is algorithmic improvement, which also comes down to SSE. We have improved the algorithms to make use of new hardware features: We shoot packets of rays, so we can have an unlimited number of rays in flight at the same time. We also use acceleration structures such as kd-trees and BSP trees to improve speed, which makes casting a single ray very cheap.
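
To illustrate what a "packet of rays" buys: with SSE, four rays occupy the four lanes of one register, so every arithmetic instruction is shared among them. A minimal sketch (illustrative, not Intel's code) that intersects a four-ray packet with the plane dot(n, p) = d:

    #include <xmmintrin.h>   // SSE intrinsics

    // Four rays stored structure-of-arrays style, one SSE lane per ray.
    struct RayPacket4 {
        __m128 ox, oy, oz;   // origins
        __m128 dx, dy, dz;   // directions
    };

    // Hit distances t for all four rays against the plane dot(n, p) = d,
    // computed in roughly the instruction count of one scalar ray.
    __m128 intersectPlane4(const RayPacket4& r,
                           float nx, float ny, float nz, float d) {
        __m128 Nx = _mm_set1_ps(nx), Ny = _mm_set1_ps(ny), Nz = _mm_set1_ps(nz);
        // dot(n, dir) per lane
        __m128 denom = _mm_add_ps(_mm_mul_ps(Nx, r.dx),
                       _mm_add_ps(_mm_mul_ps(Ny, r.dy),
                                  _mm_mul_ps(Nz, r.dz)));
        // d - dot(n, origin) per lane
        __m128 num = _mm_sub_ps(_mm_set1_ps(d),
                     _mm_add_ps(_mm_mul_ps(Nx, r.ox),
                     _mm_add_ps(_mm_mul_ps(Ny, r.oy),
                                _mm_mul_ps(Nz, r.oz))));
        return _mm_div_ps(num, denom);   // t = (d - n.o) / (n.dir)
    }

Packet traversal of a kd-tree amortizes node fetches and traversal decisions across all four rays in the same way, which is a large part of why a single ray becomes so cheap.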

TG Daily: During your speech at the GDC 08 conference, you talked about artifacts and other image quality problems in 3D engines based on rasterization. What is wrong with today’s games?

Daniel Pohl: With rasterization, you can do almost every special effect imaginable. The problem is that this will eat up a lot of developer time. You need to use many tricks, you need multi-pass approaches, and you end up with approximations that don't work out - in shadows, for example. Shadows are limited by texture resolution, and while you could increase the texture resolution, you are still stuck with the same underlying problem of having a single texture stand in for a shadow. The result is that a shadow just does not look the way it should.
Also, reflections in rasterization are usually pre-rendered. It's just a texture that is limited by its own resolution. Reflections are often rendered at a lower screen resolution, as we saw in a number of Need for Speed titles, and those reflections do not really reflect what is going on in the background.
With ray tracing, we have exact per-pixel shadows, and we get physically correct shadows and reflections. On the hood of a car, we have a real reflection of the light and of what is on the opposite side. Ray tracing is already used by German car manufacturers, who use the technology to look at their models and figure out how a particular model would look in the real world, especially with mirrors and a moving background. Accurate mirroring is one of the key elements missing in today's games.

TG Daily: Given this discussion about shadows and their current lack of precision, which is very visible in Call of Duty 4 and Unreal Tournament 3, would you say that ray tracing is the ideal way to calculate shadows? Or is there an even faster and more precise way?

Daniel Pohl: We have exact per-pixel shadows based on ray tracing, so these shadows are artifact-free. We can do soft shadows too, but, of course, soft shadows require more calculations, depending on the number of samples taken around each pixel. But it's doable. If we look into the future, I would say we will soon be able to do global illumination, since most global illumination algorithms use ray tracing to define how light bounces off surfaces. That results in an extremely good image that is very close to photorealism.
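
A minimal sketch of that per-pixel shadow test, with hypothetical names throughout (Scene::occluded stands in for the ray tracer's occlusion query): one ray from the shading point toward the light, with no texture resolution involved anywhere.

    #include <cmath>

    struct Vec3 { float x, y, z; };
    struct Ray  { Vec3 origin, dir; float tmax; };
    struct Scene {
        // Stand-in body; a real version walks the acceleration structure.
        bool occluded(const Ray&) const { return false; }
    };

    // Exact, artifact-free shadow test at one shading point: shoot a
    // single ray toward the light and ask whether anything blocks it.
    bool inShadow(const Scene& s, Vec3 hit, Vec3 normal, Vec3 lightPos) {
        const float eps = 1e-4f;            // nudge off the surface to
        Vec3 o{hit.x + normal.x * eps,      // avoid self-intersection
               hit.y + normal.y * eps,
               hit.z + normal.z * eps};
        Vec3 d{lightPos.x - o.x, lightPos.y - o.y, lightPos.z - o.z};
        float dist = std::sqrt(d.x*d.x + d.y*d.y + d.z*d.z);
        return s.occluded(Ray{o, {d.x/dist, d.y/dist, d.z/dist}, dist});
    }

One common way to get the soft shadows Pohl mentions is to shoot several such rays toward different points on an area light and average the results - which is exactly where the extra computation comes from.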

TG Daily: You indicated that shadows are not limited in their color and that it is possible to color a shadow - and that you could use this technology to create real mirrors, shadows in mirrors ...

Daniel Pohl: Yes. I don't think you have ever seen a colored shadow in a game so far. But with ray tracing, it is no problem to create such a shadow. For example, if you have an orange glass sphere and you want to see the shadow of this orange glass sphere, there is a very simple function to calculate it: You can paint the shadow and add any color you want.
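
That "very simple function" can be sketched as follows (illustrative names; Color is just an RGB triple): instead of treating every hit along the shadow ray as full occlusion, a transparent hit merely filters the light by the material's color.

    struct Color { float r, g, b; };

    // Attenuation of a shadow ray crossing one surface. An opaque hit
    // kills the light; the orange glass sphere instead tints it.
    Color filterShadow(Color light, bool opaque, Color tint) {
        if (opaque) return {0.0f, 0.0f, 0.0f};   // ordinary black shadow
        return {light.r * tint.r,                // colored shadow: e.g.
                light.g * tint.g,                // tint = {1.0, 0.5, 0.1}
                light.b * tint.b};               // leaves an orange patch
    }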

TG Daily: Ray tracing can also be used for precise reflections and refractions, which are currently created using various tricks. How does ray tracing manage full-resolution, full-precision reflections and refractions?

Daniel Pohl: A lot of people think that ray tracing is compute-intensive when it comes to reflections and refractions. But all we have to do is define the material as reflective, and the ray tracer will handle the reflection with reflection rays. So we can also get multiple reflections and reflections-in-reflections - we can nest reflective surfaces as much as you like. Usually, people think this is quite inefficient, because you need to calculate so much: reflections in reflections in reflections, and so on. But as we move from reflection to reflection, there are fewer reflection rays, because the reflected area gets smaller.
With ray tracing, you can have multiple scenarios with reflections and refractions showing different things, because everything is calculated on an exact per-pixel basis, and the computing cost is incurred only if you actually see the reflection.
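
In sketch form, "define the material as reflective" is just a recursive call in the trace function. All names and types below are illustrative stand-ins, not IntelRT's API:

    struct Vec3 { float x, y, z; };
    struct Ray  { Vec3 origin, dir; };
    struct Hit  { bool found, reflective; Vec3 point, normal, baseColor; };

    // Stand-ins: a real tracer walks its acceleration structure here.
    Hit  intersect(const Ray&) { return {false, false, {}, {}, {}}; }
    Vec3 shade(const Hit& h)   { return h.baseColor; }   // local lighting

    static Vec3 reflect(Vec3 d, Vec3 n) {
        float k = 2.0f * (d.x*n.x + d.y*n.y + d.z*n.z);
        return {d.x - k*n.x, d.y - k*n.y, d.z - k*n.z};
    }

    // Reflections-in-reflections fall out of the recursion. The depth
    // cap is only a safety net: each bounce covers fewer pixels on
    // screen, so the number of reflection rays shrinks on its own.
    Vec3 trace(const Ray& r, int depth) {
        Hit h = intersect(r);
        if (!h.found) return {0, 0, 0};   // background color
        Vec3 c = shade(h);
        if (h.reflective && depth < 8)
            c = trace(Ray{h.point, reflect(r.dir, h.normal)}, depth + 1);
        return c;
    }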

TG Daily: Valve made news last year with The Orange Box titles such as Team Fortress 2 and, more interestingly, Portal. How can you use ray tracing to create portals? What are the limits of this technology?

Daniel Pohl: It is very easy to create a portal with ray tracing. The number of portals is unlimited, and you can shrink a portal until it is one pixel big. We shoot the so-called primary ray from the camera, and once it hits the portal surface, we add an offset to the hit point and shoot the ray again in the same direction. And voila, we have our camera portal.
This can be done in three lines of shader code. It is very easy, and it works on an exact per-pixel level. We can also create camera portals within portals, where we can even look at the back of ourselves, using the same three lines of code. We can even extend this to have two camera portals and look back at the first two. You don't need to program individual portals or maintain some ordering - it is all handled by the ray tracer. Basically, there are no limits.
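
Those three lines look roughly like this (illustrative names; trace() stands in for the ray tracer's entry point): take the hit point on the portal surface, displace it by the portal's offset, and continue the ray in the unchanged direction.

    struct Vec3 { float x, y, z; };
    struct Ray  { Vec3 origin, dir; };

    // Stand-in for the ray tracer's entry point.
    Vec3 trace(const Ray&) { return {0, 0, 0}; }

    // Portal "shader": re-emit the incoming ray from the hit point,
    // displaced by the portal's offset, in the same direction. Portals
    // seen through portals need no extra code; recursion handles them.
    Vec3 portalShade(Vec3 hit, Vec3 dir, Vec3 offset) {
        Vec3 o{hit.x + offset.x, hit.y + offset.y, hit.z + offset.z};
        return trace(Ray{o, dir});
    }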

TG Daily: I would be interested in the hardware cost. You talked about networking four systems with quad-core processors. At GDC, you demonstrated Quake 4 RT on a Sony UMPC device. How were you able to scale down the requirements for ray tracing?

Daniel Pohl: I demonstrated 'Ray Trace in Your Pocket' on an ultra-mobile PC from Sony, equipped with a 1.2 GHz Core 2 Solo processor. Ray tracing scales almost perfectly linearly with the number of pixels: as the resolution goes down, you simply render less, and on small devices the resolution is low enough that ray tracing works almost perfectly. This UMPC has a good 4.5" LCD screen with a resolution of 1024x600 pixels.

TG Daily: You did a demonstration of Quake IV RT at IDF using an eight-core system. What kind of performance can you expect from a UMPC whose resolution is not that much lower than what the eight-core V8 system was running?

Daniel Pohl: Of course, we scaled down the resolution for the portable device. If we have a machine that can render 1280x720 at 30 frames per second, linear scaling would allow us to achieve 360 fps at the resolution of a Sony PSP. If we go down to the screen of a Nintendo DS, we would get almost 600 fps. Of course, we don't need 360 or 600 frames per second. The realistic target should always be between 30 and 60 frames per second.
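
The arithmetic behind those projections is simple proportionality, under the assumption that the tracer's work scales linearly with the number of pixels rendered:

    // Frame-rate estimate under linear scaling: work is proportional
    // to the number of pixels rendered per frame.
    double estimateFps(double baseFps, int baseW, int baseH,
                       int targetW, int targetH) {
        return baseFps * (double(baseW) * baseH)
                       / (double(targetW) * targetH);
    }

    // Example: 30 fps at 1280x720 projected onto the 256x192 screen of
    // a Nintendo DS gives 30 * 921600 / 49152, i.e. about 562 fps -
    // the "almost 600" figure quoted above.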

TG Daily: What is your vision of ray tracing for the future?
 
Daniel Pohl: Ray tracing lets you create high-quality images in a very developer-friendly way. It will become faster and faster as hardware evolves and as we improve the algorithms, so there is a very exciting future ahead of us. We believe that ray tracing should become another way to do high-quality rendering, and we hope that it will provide a viable alternative to rasterization.

TG Daily: Thank you for your time.
