
nVIDIA more strict than AMD?

Started by ryan20fun, January 28, 2014 06:18 AM
10 comments, last by _the_phantom_ 10 years, 9 months ago

Hello everyone.

In the thread Best Laptop for Game Development, Uberwulu mentioned the following (irrelevant info removed):

In any case, for consumers, I recommend AMD video cards. For developers, I recommend nVidia but ONLY because nVidia is harder to develop for (you can get away with a lot more from AMD cards where nVidia would otherwise crash).

And I read somewhere that it was the other way around, so what are the differences between them on that note?

Many thanks.

Never say Never, Because Never comes too soon. - ryan20fun

Disclaimer: Each post of mine is intended as an attempt of helping and/or bringing some meaningful insight to the topic at hand. Due to my nature, my good intentions will not always be plainly visible. I apologise in advance and assure you I mean no harm and do not intend to insult anyone.

Usually there shouldn't be too much difference.

In my experience, AMD's GLSL compiler is strictly standards-compliant, whereas nVidia's GLSL compiler will accept invalid code (including HLSL/Cg syntax...).

i.e. the opposite of what Uberwulu said. YMMV.

If you're developing for OpenGL, I'd definitely recommend testing on nVidia, AMD, and Intel GPUs, because they all potentially have their own quirks.
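For what it's worth, here's a minimal sketch of the kind of thing I mean (illustrative code only, assuming a current GL context and GLEW for function loading). The shader deliberately contains two things a strict GLSL compiler rejects but nVidia's Cg-derived compiler has historically let through:

    // Compile a deliberately non-conforming fragment shader and dump the log.
    #include <GL/glew.h>
    #include <cstdio>

    static const char* kFragSrc =
        "#version 110\n"
        "void main() {\n"
        "    float x = 1;\n"                       // implicit int->float: illegal in GLSL 1.10
        "    gl_FragColor = vec4(saturate(x));\n"  // saturate() is HLSL/Cg, not GLSL
        "}\n";

    GLuint CompileFrag()
    {
        GLuint sh = glCreateShader(GL_FRAGMENT_SHADER);
        glShaderSource(sh, 1, &kFragSrc, NULL);
        glCompileShader(sh);

        GLint ok = GL_FALSE;
        glGetShaderiv(sh, GL_COMPILE_STATUS, &ok);

        // Always read the info log, even on success: one vendor's silent
        // acceptance is another vendor's warning and a third's hard error.
        char log[4096] = {0};
        glGetShaderInfoLog(sh, sizeof(log), NULL, log);
        std::printf("compile %s\n%s", ok ? "OK" : "FAILED", log);
        return ok ? sh : 0;
    }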


So basically each one is more strict than the other in different areas?

So the main thing to look out for is how my shaders are written and used?

Side question: I've seen it mentioned that OpenGL is a little quirky in that you need to test against different drivers, unlike D3D. Is this true?

And then of course: test, test, test and, well, test :)

Thanks for your answer!

Never say Never, Because Never comes too soon. - ryan20fun

Disclaimer: Each post of mine is intended as an attempt of helping and/or bringing some meaningful insight to the topic at hand. Due to my nature, my good intentions will not always be plainly visible. I apologise in advance and assure you I mean no harm and do not intend to insult anyone.

As long as you write solid, bug-free code and follow the standards properly, both should work wonderfully.

Each one has its own tolerance for various bugs and errors, just as compilers do for source code generally.

In programming generally there are many bugs that exhibit different behaviors on different machines. A bug that appears as a seemingly random crash on some machines may have a 100% reproduction rate on a machine with different memory timings, a different CPU configuration, or different drivers. That doesn't necessarily condemn that machine; instead, it is simply how the defect in the code makes itself manifest.

The same is true of code on graphics cards. Just as a buffer overrun may be innocuous in one situation but fatal in another, so too might a bug in your graphics code run without visible fault in one place but prove problematic in another. One set of compilers or tools may detect a potential buffer overrun, some might warn or even give errors for it, but you as the programmer are ultimately responsible for ensuring the correctness of the source material.

In the case of buffer overruns that might work correctly or might crash, you couldn't say that either behavior is correct, because you have entered the territory of unspecified or undefined behavior. Any behavior, including appearing to work correctly, crashing, or being blocked from compilation, is unspecified. So even if one set of tools is currently a little more strict at detecting errors, that is unlikely to be the only factor in determining the quality of the tools. My personal preference is to use any tools available to help identify and correct errors.
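To make that concrete, here's a deliberately broken (hypothetical) snippet. The out-of-bounds write below may appear harmless on one machine and crash outright on another, and both outcomes are "correct" in the sense that the standard permits anything:

    #include <cstdio>

    int main()
    {
        int buf[4] = {0, 1, 2, 3};
        buf[4] = 42;                 // one element past the end: undefined behavior
        std::printf("%d\n", buf[3]); // may print 3, print garbage, or never run
        return 0;
    }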


Just imagine the statement slightly modified: You should go with system X because it is harder to write for. System Y lets you get away with more bugs but System X is more likely to just crash on that same code.

I know I ran into one issue using OpenGL where my nVidia machine would run the program I wrote just fine, but my AMD machine did not render it properly.

It is also worth noting that NVidia has essentially halted all OpenCL development for their hardware, and as such their cards only support OpenCL 1.1 (whereas everyone else has moved on to 1.2, soon to be 2.0). In exchange you get CUDA, which is more mature and usually a bit faster, but locks you into NVidia hardware. This may or may not be relevant to you.

DirectCompute should work the same on all hardware, however. In theory, anyway.
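If that trade-off matters to your project, you can check which OpenCL version each installed driver actually reports using a couple of standard API calls. A minimal sketch:

    #include <CL/cl.h>
    #include <cstdio>

    int main()
    {
        // Ask how many OpenCL platforms (vendor drivers) are installed.
        cl_uint count = 0;
        clGetPlatformIDs(0, NULL, &count);
        if (count > 8) count = 8;

        cl_platform_id platforms[8];
        clGetPlatformIDs(count, platforms, NULL);

        for (cl_uint i = 0; i < count; ++i) {
            // e.g. "OpenCL 1.1 CUDA ..." on nVidia, "OpenCL 1.2 AMD-APP ..." on AMD.
            char version[256] = {0};
            clGetPlatformInfo(platforms[i], CL_PLATFORM_VERSION,
                              sizeof(version), version, NULL);
            std::printf("platform %u: %s\n", i, version);
        }
        return 0;
    }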

“If I understand the standard right it is legal and safe to do this but the resulting value could be anything.”



Just imagine the statement slightly modified: You should go with system X because it is harder to write for. System Y lets you get away with more bugs but System X is more likely to just crash on that same code.

QFT. As a general rule, code for the system that is stricter, and then the more tolerant system will work anyway.

if you think programming is like sex, you probably haven't done much of either.-------------- - capn_midnight

Thanks everyone!

Never say Never, Because Never comes too soon. - ryan20fun

Disclaimer: Each post of mine is intended as an attempt of helping and/or bringing some meaningful insight to the topic at hand. Due to my nature, my good intentions will not always be plainly visible. I apologise in advance and assure you I mean no harm and do not intend to insult anyone.


Just imagine the statement slightly modified: You should go with system X because it is harder to write for. System Y lets you get away with more bugs but System X is more likely to just crash on that same code.

QFT. As a general rule, code for the system that is stricter, and then the more tolerant system will work anyway.

QFT to this too.

My rule of thumb (so far as OpenGL is concerned; D3D is different/more consistent) goes something like this:

  • If it works on Intel it will work on anything. Happy days.
  • If it works on AMD it will work on NVIDIA, but it may not work on Intel.
  • If it works on NVIDIA it may not work on either AMD or Intel.

This of course is ignoring vendor-specific extensions and not accounting for stuff that may work but requires different code paths.

Another general rule (OpenGL again) is:

  • On NVIDIA, stuff that shouldn't work sometimes does.
  • On AMD, stuff that should work sometimes doesn't.
  • On Intel just be grateful for what does work.

At this stage it's probably obligatory to say that both AMD and Intel have been continuously improving, but also to wryly note that we've all been saying that for the past 15-odd years and we'll probably still be saying it in 15-odd years' time.

Of course, as befits the nature of general rules, you're going to find cases where they don't apply or even where the opposite may be true.

So from that you can infer that for development AMD and Intel seem like good options: if you develop and test clean on these, you're pretty much guaranteed to work across the board. You can even omit Intel if you're not aiming for low-end. However, the nature of development is that you're going to be writing experimental new code, so NVIDIA still have their uses: you'll get much faster turnaround times from their ability to soak up more abuse.
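As an aside, when you're chasing these per-vendor quirks it's handy to log exactly which driver stack a given machine is running; a minimal sketch, assuming a GL context is already current:

    #include <GL/gl.h>
    #include <cstdio>

    // Log the vendor/renderer/version strings so bug reports say which
    // IHV's driver they actually came from.
    void LogGLDriver()
    {
        std::printf("GL_VENDOR:   %s\n", (const char*)glGetString(GL_VENDOR));
        std::printf("GL_RENDERER: %s\n", (const char*)glGetString(GL_RENDERER));
        std::printf("GL_VERSION:  %s\n", (const char*)glGetString(GL_VERSION));
    }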

Direct3D has need of instancing, but we do not. We have plenty of glVertexAttrib calls.


Just imagine the statement slightly modified: You should go with system X because it is harder to write for. System Y lets you get away with more bugs but System X is more likely to just crash on that same code.

QFT. As a general rule, code for the system that is stricter, and then the more tolerant system will work anyway.

This rule is likely to get you in trouble. The correct rule is that you're headed for pain if you're not regularly testing on both systems.

As a practical matter, I find NVIDIA's driver and tool quality to be significantly superior to AMD's in all respects, particularly so in OpenGL. Unfortunately the NV compiler accepts a lot of illegal GLSL code, but it still tends to behave far more sanely with everything else. And I have seen plenty of code that works fine on AMD and not NV, or vice versa, or both and not Intel, or whatever.
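On the tools point: one cheap way to surface per-vendor driver complaints while you test on both is the debug output callback (core in GL 4.3, available earlier via KHR_debug/ARB_debug_output). A minimal sketch, assuming GLEW has loaded the entry points:

    #include <GL/glew.h>
    #include <cstdio>

    // Drivers differ wildly in what they report here, which makes it a
    // good early-warning system for vendor-specific problems.
    static void GLAPIENTRY OnDebug(GLenum source, GLenum type, GLuint id,
                                   GLenum severity, GLsizei length,
                                   const GLchar* message, const void* user)
    {
        std::fprintf(stderr, "GL debug: %s\n", message);
    }

    void EnableGLDebug()
    {
        glEnable(GL_DEBUG_OUTPUT);
        glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS); // report at the offending call site
        glDebugMessageCallback(OnDebug, NULL);
    }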

SlimDX | Ventspace Blog | Twitter | Diverse teams make better games. I am currently hiring capable C++ engine developers in Baltimore, MD.

This topic is closed to new replies.
