Once again, I was helping out on Reddit's /r/blender. It's a great place to see works in progress, help newbies, and discuss all things Blender. Sometimes things amaze me there, but more often than not, it gets my brain going.
What's better, this or that? What's the difference between...? How do I make this scene look better?
This weekend someone asked about Cycles and GPU computing, and why he couldn't get AMD GPUs to work. It was mentioned that Nvidia cards are the cards Blender is being developed for, and to try some other builds for AMD cards. We got into a discussion about CUDA vs. OpenCL, and the claim that OpenCL has problems with shadows.
I decided to run a few tests.
I created a really simple scene with five objects: one glass ball, one glossy ball, two emitter planes, and one ground plane, plus a world sky for additional light.
OS: XUbuntu 12.04 (Linux 3.2.0-23-generic) 64bit
CPU: Intel Core i5-2300 @ 2.80GHz
Memory: 15.64 GiB (15.64 GiB swap)
Video: NVIDIA GeForce GTX 570/PCIe/SSE2
CUDA Cores: 480
Memory: 1280 MB
Mem Interface: 320-bit
Video Driver: NVIDIA 295.40
To start, I used mostly default settings: Full Global Illumination, 1000 samples.
Here's the CUDA render (with Stamp):
This is the OpenCL render (with Stamp):
So, what does this mean?
Not much, yet...
I put the two renders into GIMP and took a look at the per-pixel difference: simply layer the two images and use the "Difference" layer mode. This generates a nearly black image, which is expected, because the renders are very similar.
Looking closely, you can see some spotting, meaning they aren't perfectly matching. Taking a look at the histogram reveals the detail:
I highlighted the portion that really means anything; the rest is mostly due to the stamp in the images. Thanks to GIMP, we can see the number of pixels that are actually different. (0 being black, I didn't select it.)
About 2.8% of the pixels differ between the two renders. My first tests showed closer to 2%, and some of the selection I highlighted is also part of the stamps, so I'd say there's about a 2 to 2.5% difference between CUDA and OpenCL.
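If you'd rather skip the GIMP layer dance, the same per-pixel comparison can be sketched in a few lines of Python with NumPy. The filenames in the commented-out Pillow lines are just placeholders for your own renders:

```python
import numpy as np

def diff_fraction(img_a, img_b, threshold=0):
    """Fraction of pixels where any channel differs by more than threshold.

    img_a, img_b: uint8 arrays of identical shape (H, W, C).
    This mirrors GIMP's "Difference" layer mode: abs(a - b) per channel,
    then count every pixel that isn't pure black.
    """
    diff = np.abs(img_a.astype(np.int16) - img_b.astype(np.int16))
    changed = (diff > threshold).any(axis=-1)
    return changed.mean()

# With the actual renders (hypothetical filenames), using Pillow:
# from PIL import Image
# a = np.asarray(Image.open("cuda.png").convert("RGB"))
# b = np.asarray(Image.open("opencl.png").convert("RGB"))
# print(f"{diff_fraction(a, b):.1%} of pixels differ")
```

Cropping the stamp region out of both arrays before calling `diff_fraction` would avoid the stamp skewing the number, the same problem I ran into with the histogram selection.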
The last thing to do was to exaggerate this: a simple Brightness/Contrast adjustment, with both set to 126, blows the colored pixels out but leaves the blacks black.
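The same exaggeration can be approximated in NumPy by stretching the difference image so its brightest value hits full white. This isn't GIMP's exact Brightness/Contrast math, just a rough equivalent:

```python
import numpy as np

def exaggerate(diff_img):
    """Stretch a mostly-black difference image so faint speckles
    become clearly visible, while pure black stays black."""
    d = diff_img.astype(np.float32)
    peak = d.max()
    if peak == 0:  # identical images: nothing to exaggerate
        return diff_img.copy()
    # Scale so the brightest existing value maps to 255, then clamp.
    return np.clip(d * (255.0 / peak), 0, 255).astype(np.uint8)
```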
So, what is really affected by the difference is what you can see in this final image.
If the math were exact between CUDA and OpenCL, there would be nothing here. But this puts one thing to rest: the difference between the two is not shadowing. Shadows are not affected at all.
What is affected is glass caustics, and edges. The light refracted out of the glass is what makes the major difference. You can also see the edges of every object, but they aren't as bad as the light refracted from the glass.
There is very little spotting coming from the glossy ball; this is probably from reflecting the caustics off the glass object.
So in my opinion, unless you have a lot of glass, there won't be much difference between the two. What it really boils down to is time. If you have an AMD card and can't do CUDA, don't panic: OpenCL vs. CPU is a dramatic decrease in render time (depending on your card).
But if you have an Nvidia card, and you can do CUDA, I recommend CUDA:
CUDA: 4:15 or 255 seconds
OpenCL: 6:04 or 364 seconds
The difference is about a 30% savings in time; or, put the other way, OpenCL takes about 43% longer.
For comparison, CPU: 26:26, or 1586 seconds.
CUDA: the CPU took 6.2 times as long, an 84% savings.
OpenCL: the CPU took 4.4 times as long, a 77% savings.
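The percentages above are easy to double-check with a few lines of Python, using the measured times in seconds:

```python
# Render times from the test scene, in seconds.
cuda, opencl, cpu = 255, 364, 1586

# CUDA vs. OpenCL: time saved by CUDA, and extra time OpenCL needs.
savings = (opencl - cuda) / opencl
increase = (opencl - cuda) / cuda
print(f"CUDA saves {savings:.1%}; OpenCL takes {increase:.1%} longer")

# Each GPU mode vs. the CPU render.
for name, t in (("CUDA", cuda), ("OpenCL", opencl)):
    print(f"{name}: CPU took {cpu / t:.2f}x as long, "
          f"a {(cpu - t) / cpu:.1%} savings")
```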
An interesting thing to note: the difference map between CPU and CUDA looked similar to the CUDA/OpenCL one. But the difference between OpenCL and CPU was very low:
This leads me to believe that OpenCL and CPU rendering are similar, and the odd one out is actually CUDA! So if you want similarity between renders, OpenCL and CPU are your choices. But if you just want a faster render, go with CUDA, if you have an Nvidia card.
I'll upload the scene so you can test it out for yourself, compare your results, and brag that you have a faster card than mine.