Interview: Nvidia's Chief Scientist David Kirk

20th Mar 2008 | 10:39

Chews the fat about ray-tracing and the CUDA programming language

Nvidia was left looking a little lonely after ATI, its main rival in the graphics market, was gobbled up by AMD. But Nvidia's Chief Scientist David Kirk says the future remains bright for the world's leading producer of PC graphics chips.

For a wider discussion of Kirk's views on the future of Nvidia and the threat posed by ray tracing, fusion processors and Intel's foray into graphics, see our main story: Do new CPUs threaten Nvidia's future?

In the meantime, chew on these key highlights from our discussion with Kirk on hot topics including ray-tracing and the CUDA programming language.

The impact of ray-tracing

TechRadar: Other than simple performance issues, why has ray tracing not been widely adopted for real-time rendering on the PC?

David Kirk: It would be easy enough to "just do everything using RT (ray tracing)," but then you would have to do everything using RT! Doing everything using RT in practice means tracing an enormous number of rays: more rays for anti-aliasing, more rays for soft shadows, more rays for global illumination, more rays for glossy reflections. And so on.

There are certainly clever ways to avoid each of these, but every clever thing requires more software and more special cases. After all that, RT is not sounding so much like the "simple, elegant, handles-everything-easily" solution.

I mentioned anti-aliasing [removing or avoiding jagged edges] first, because I've seen from some commentary that people seem to think I'm ignorant of all of these techniques. There are ways to do anti-aliasing and not trace a lot more rays, but they all require more work (clever software) and they all have flaws.

One example is adaptive anti-aliasing. In this technique, you trace fewer rays, and look for edges by comparing adjacent rays to see if they are different. If they are different, you have found an edge and you trace more rays to make it smooth.

This has several problems. First, you may miss small things or small parts of things if they fall between the rays. Second (I wrote a paper about this about 10 years ago!), this method is flawed because it introduces bias. Bias means that the picture could be arbitrarily wrong. This decision-making influences the resulting picture in undesirable ways.
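For illustration, here is a minimal sketch of the adaptive step Kirk describes, written as a CUDA kernel under our own assumptions (it is not his code). One primary ray per pixel is assumed to have been shaded into samples already; pixels whose colour differs from a neighbour's by more than a threshold get flagged for extra rays. Anything that falls entirely between the initial samples is never flagged, which is exactly the first problem he raises.

```cuda
#include <cuda_runtime.h>

// Flag pixels that sit on a probable edge by comparing each initial sample
// with its right and lower neighbours; flagged pixels would later receive
// extra rays. Small features lying entirely between samples are missed.
__global__ void flagEdges(const float3 *samples, int *needsMoreRays,
                          int width, int height, float threshold)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width - 1 || y >= height - 1) return;

    float3 c  = samples[y * width + x];
    float3 cr = samples[y * width + x + 1];      // right neighbour
    float3 cd = samples[(y + 1) * width + x];    // neighbour below

    float dr = fabsf(c.x - cr.x) + fabsf(c.y - cr.y) + fabsf(c.z - cr.z);
    float dd = fabsf(c.x - cd.x) + fabsf(c.y - cd.y) + fabsf(c.z - cd.z);

    // A large colour difference suggests an edge, so request more rays here.
    needsMoreRays[y * width + x] = (dr > threshold || dd > threshold) ? 1 : 0;
}

int main()
{
    const int W = 64, H = 64;
    float3 *dSamples;
    int *dFlags;
    cudaMalloc(&dSamples, W * H * sizeof(float3));
    cudaMalloc(&dFlags, W * H * sizeof(int));
    cudaMemset(dSamples, 0, W * H * sizeof(float3));   // blank image for the sketch
    cudaMemset(dFlags, 0, W * H * sizeof(int));

    dim3 block(16, 16), grid((W + 15) / 16, (H + 15) / 16);
    flagEdges<<<grid, block>>>(dSamples, dFlags, W, H, 0.1f);
    cudaDeviceSynchronize();

    cudaFree(dSamples);
    cudaFree(dFlags);
    return 0;
}
```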

One other issue is that RT is famous for shiny, metallic-looking reflections. What if you don't want that? Maybe you want a glossy, soft reflection, like brushed metal, or something more like fabric? You require a more complex shader that either looks a lot like the shaders people write in a rasterisation pipeline, or ... (here it comes again) ... you need to trace a lot more rays.
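And here is a minimal sketch of the "trace a lot more rays" option, again under our own toy assumptions rather than anything from the interview: each pixel jitters a mirror reflection direction several times and averages the result, with a trivial vertical sky gradient standing in for a real scene. One sample per pixel gives the hard, mirror-like look; many samples give the soft, glossy one, at a cost that grows with the sample count.

```cuda
#include <cuda_runtime.h>

#define NUM_SAMPLES 16   // more samples: softer reflection, more rays to trace

// Tiny hash-based pseudo-random generator; good enough for a sketch.
__device__ float rand01(unsigned int seed)
{
    seed = seed * 747796405u + 2891336453u;
    seed = ((seed >> ((seed >> 28u) + 4u)) ^ seed) * 277803737u;
    return ((seed >> 22u) ^ seed) / 4294967295.0f;
}

// Average many jittered reflection rays per pixel. A vertical sky gradient
// stands in for the scene the rays would normally be traced against.
__global__ void glossyReflection(float *out, int width, int height, float roughness)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;
    int idx = y * width + x;

    // Mirror reflection direction for this pixel (straight up, for simplicity).
    float rx = 0.0f, ry = 1.0f, rz = 0.0f;

    float sum = 0.0f;
    for (int s = 0; s < NUM_SAMPLES; ++s) {
        unsigned int seed = (unsigned int)(idx * NUM_SAMPLES + s);
        // Crude jitter of the reflection direction, scaled by surface roughness.
        float jx = rx + roughness * (rand01(seed * 3u + 0u) - 0.5f);
        float jy = ry + roughness * (rand01(seed * 3u + 1u) - 0.5f);
        float jz = rz + roughness * (rand01(seed * 3u + 2u) - 0.5f);
        float len = sqrtf(jx * jx + jy * jy + jz * jz);
        jy /= len;                       // only the y component matters below
        sum += 0.5f + 0.5f * jy;         // "trace": sample the sky gradient
    }
    out[idx] = sum / NUM_SAMPLES;        // averaged glossy reflection
}

int main()
{
    const int W = 256, H = 256;
    float *dOut;
    cudaMalloc(&dOut, W * H * sizeof(float));

    dim3 block(16, 16), grid((W + 15) / 16, (H + 15) / 16);
    glossyReflection<<<grid, block>>>(dOut, W, H, 0.3f);
    cudaDeviceSynchronize();

    cudaFree(dOut);
    return 0;
}
```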

Parallel graphics processing

TechRadar: You've suggested the idea of a hybrid approach to the introduction of ray-tracing rather than the wholesale replacement of raster hardware. How do you see this happening? Can ray-tracing take place simultaneously with other methods such as rasterisation in future game engines?

David Kirk: Yes, RT and rasterisation can (gasp!) coexist. I don't understand why people find this remarkable. A game engine could rasterise the environment (using a hierarchy to make the complexity logarithmic rather than linear, as is touted with RT), and find out what object is in each pixel. This is much faster than RT.

Then, for each pixel, the shading could either be done using conventional (and hardware-accelerated) pixel shaders, or by tracing some rays to find reflections, shadows, or ambient occlusion / light inter-reflection, or any combination of the two techniques.

This is totally doable on current GPUs, since you can rasterise and shade with OpenGL or DirectX, and trace rays with a program written in CUDA (Nvidia's parallelised version of the 'C' programming language), running on the GPU. Not only is this doable, I believe that this is the preferred way for using RT.
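Here, concretely, is a minimal sketch of that hybrid split, under our own assumptions about the data involved (none of this comes from Nvidia). A rasterisation pass in OpenGL or DirectX is taken to have filled a G-buffer with per-pixel world positions and normals; the CUDA kernel then traces one shadow ray per pixel, against a single sphere occluder in this toy scene, and writes a shadow mask for the shading pass to combine with its rasterised result.

```cuda
#include <cuda_runtime.h>

// Ray-sphere test: does the ray (orig, dir) hit the sphere (centre, radius)?
// dir is assumed to be normalised.
__device__ bool hitSphere(float3 orig, float3 dir, float3 centre, float radius)
{
    float ox = orig.x - centre.x, oy = orig.y - centre.y, oz = orig.z - centre.z;
    float b = ox * dir.x + oy * dir.y + oz * dir.z;
    float c = ox * ox + oy * oy + oz * oz - radius * radius;
    float disc = b * b - c;
    if (disc < 0.0f) return false;
    float t = -b - sqrtf(disc);
    return t > 1e-3f;                    // nearest hit is in front of the origin
}

// One shadow ray per pixel, launched from the positions the rasteriser wrote
// into the G-buffer. Output is 0 (shadowed) or 1 (lit).
__global__ void shadowRays(const float3 *gbufPos, const float3 *gbufNormal,
                           float *shadowMask, int width, int height,
                           float3 lightDir, float3 sphereCentre, float sphereRadius)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;
    int idx = y * width + x;

    float3 p = gbufPos[idx];
    float3 n = gbufNormal[idx];

    // Surfaces facing away from the light are shadowed without tracing anything.
    float ndotl = n.x * lightDir.x + n.y * lightDir.y + n.z * lightDir.z;
    if (ndotl <= 0.0f) { shadowMask[idx] = 0.0f; return; }

    bool blocked = hitSphere(p, lightDir, sphereCentre, sphereRadius);
    shadowMask[idx] = blocked ? 0.0f : 1.0f;
}

int main()
{
    const int W = 64, H = 64, N = W * H;

    // Synthetic G-buffer: a flat, upward-facing ground plane stands in for
    // whatever the rasterisation pass would really have produced.
    float3 *hPos = new float3[N];
    float3 *hNrm = new float3[N];
    for (int i = 0; i < N; ++i) {
        hPos[i] = make_float3((float)(i % W) * 0.1f, 0.0f, (float)(i / W) * 0.1f);
        hNrm[i] = make_float3(0.0f, 1.0f, 0.0f);
    }

    float3 *dPos, *dNrm;
    float *dMask;
    cudaMalloc(&dPos, N * sizeof(float3));
    cudaMalloc(&dNrm, N * sizeof(float3));
    cudaMalloc(&dMask, N * sizeof(float));
    cudaMemcpy(dPos, hPos, N * sizeof(float3), cudaMemcpyHostToDevice);
    cudaMemcpy(dNrm, hNrm, N * sizeof(float3), cudaMemcpyHostToDevice);

    dim3 block(16, 16), grid((W + 15) / 16, (H + 15) / 16);
    shadowRays<<<grid, block>>>(dPos, dNrm, dMask, W, H,
                                make_float3(0.0f, 1.0f, 0.0f),   // light straight overhead
                                make_float3(3.0f, 2.0f, 3.0f),   // sphere occluder
                                1.0f);
    cudaDeviceSynchronize();

    cudaFree(dPos); cudaFree(dNrm); cudaFree(dMask);
    delete[] hPos; delete[] hNrm;
    return 0;
}
```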

Why trace rays for cases when rasterisation simply is better and faster? In short, use RT for the features that it can do best.

New Nvidia GPUs?

TechRadar: With that in mind, is Nvidia doing any specific work to optimise future architectures for ray-tracing? Do you think chips optimised for "hybrid" rendering would look substantially different?

David Kirk: As I said, GPUs can do this now. It is certainly possible that we could provide special hardware that would make RT better or faster, but I think that today's hardware is pretty good.

The combination of current APIs and CUDA allows developers to write any program they want, anyway. Some programs are faster and more efficient than others, though, and I expect we will work to optimise the hardware to run these better. RT is certainly one such program, but there are many others.

I think that chips optimised for hybrid rendering will look substantially the same as GPUs do now. They would have hardware for accelerating special features in the APIs, such as texture, rasterisation, and programmable shaders, and they would have a general purpose interface for running parallel C programs, like CUDA. We'll continue to expand CUDA to make it better for a larger class of programming problems, but I don't see any need for substantial changes yet.

TechRadar: Regarding CUDA, and our discussion about the possibility that it might be adopted by other vendors of graphics hardware and your suggestion that Nvidia positively welcomes this: what's in it for Nvidia to have CUDA supported on competing hardware? How would this actually work? Would licences need to be acquired or paid for?

David Kirk: I don't have any comment about licensing - interested parties should enquire! I'm simply saying that in much the same way as C can be compiled for many architectures, whether x86 or PowerPC, CUDA is just parallel C and can be compiled for other parallel or serial architectures.

Broader adoption has the advantage that CUDA code can run in more places, so the investment of writing your application in CUDA becomes more valuable if it runs on other architectures. Write once, run anywhere, perhaps many times.

CUDA already runs on multi-core CPUs in our emulation mode, for debugging, and this could become a higher-performance solution. Not nearly as high-performance as a GPU that is optimised for CUDA (of course!), but faster than most parallel CPU code runs today.
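As a minimal illustration of the "parallel C" point (our own toy example, not Nvidia's): the kernel below is ordinary C apart from the launch syntax and the built-in thread index, which is what makes recompiling the same source for other parallel or serial targets plausible. The nvcc releases of that era also offered a device-emulation build (the -deviceemu switch) that ran the same kernels on the host CPU, which is the debugging mode Kirk mentions.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// SAXPY: y = a * x + y. One thread per array element.
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main()
{
    const int N = 1 << 16;
    float *h = new float[N];
    for (int i = 0; i < N; ++i) h[i] = 1.0f;   // fill both vectors with 1.0

    float *dx, *dy;
    cudaMalloc(&dx, N * sizeof(float));
    cudaMalloc(&dy, N * sizeof(float));
    cudaMemcpy(dx, h, N * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dy, h, N * sizeof(float), cudaMemcpyHostToDevice);

    saxpy<<<(N + 255) / 256, 256>>>(N, 2.0f, dx, dy);
    cudaMemcpy(h, dy, N * sizeof(float), cudaMemcpyDeviceToHost);
    printf("y[0] = %f (expected 3.0)\n", h[0]);

    cudaFree(dx);
    cudaFree(dy);
    delete[] h;
    return 0;
}
```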
