Just imagine the statement slightly modified: you should go with System X because it is harder to write for. System Y lets you get away with more bugs, but System X is more likely to just crash on that same code.
QFT. As a general rule, code for the system that is stricter, and then the more tolerant system will work anyway.
QFT to this too.
My rule of thumb (so far as OpenGL is concerned; D3D is different/more consistent) goes something like this:
- If it works on Intel, it will work on anything. Happy days.
- If it works on AMD, it will work on NVIDIA, but it may not work on Intel.
- If it works on NVIDIA, it may not work on either AMD or Intel.
This, of course, ignores vendor-specific extensions and doesn't account for stuff that may work but requires different code paths.
Another general rule (OpenGL again) is:
- On NVIDIA, stuff that shouldn't work sometimes does.
- On AMD, stuff that should work sometimes doesn't.
- On Intel, just be grateful for what does work.
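To illustrate the NVIDIA rule, here's a hedged sketch of the kind of shader code that has historically slipped through their lenient GLSL compiler (which grew out of their Cg compiler) while a strict compiler rejects it outright. This is an illustrative fragment, not something taken from any particular codebase:

```glsl
#version 110

void main()
{
    // Implicit int-to-float conversion: not legal in GLSL 1.10,
    // but NVIDIA's compiler has historically let it through.
    // The strict form is: float brightness = 1.0;
    float brightness = 1;

    // saturate() is a Cg/HLSL function, not part of GLSL at all;
    // the portable equivalent is clamp(brightness, 0.0, 1.0).
    // Again, a lenient Cg-derived compiler may accept it anyway.
    gl_FragColor = vec4(saturate(brightness));
}
```

Code like this compiles cleanly all through development on one vendor, then dies with compile errors the first time it meets another vendor's driver, which is exactly why the stricter system makes the better development baseline.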
At this stage it's probably obligatory to say that both AMD and Intel have been continuously improving, but also to wryly note that we've all been saying that for the past 15-odd years and we'll probably still be saying it in 15-odd years' time.
Of course, as befits the nature of general rules, you're going to find cases where they don't apply or even where the opposite may be true.
So from that you can infer that AMD and Intel look like good options for development: if you develop and test cleanly on those, you're pretty much guaranteed to work across the board. You can even omit Intel if you're not aiming for low-end. However, the nature of development is that you're going to be writing experimental new code, so NVIDIA still have their uses: you'll get much faster turnaround times from their ability to soak up more abuse.