Thanks, guys, for your input!
After considering it, I'm thinking of going for a laptop under $1000 (and building a rig later on). While doing some research, I found that laptops with Radeon graphics cards are priced much lower than laptops with Nvidia GPUs. For instance, a laptop with an R9 M290X costs around the same as a laptop with an Nvidia GTX 860M, even though the 860M performs considerably worse. So I would like to know: does having Radeon graphics become a bottleneck for game development in any way? For instance, do OpenGL, Unity, DirectX, etc. support Radeon graphics cards?
I would like to get your opinion on this!
Thank you!
The only problem you might face is that some applications, especially graphics programs, use GPGPU to accelerate certain parts of image creation / modelling / whatever.
Most of these applications, especially the consumer ones not written solely for (very expensive) pro-grade GPUs, primarily use CUDA to offload work to the GPU. CUDA is an Nvidia-only technology; you will not be able to use it on Radeons.
So programs that only support CUDA simply won't give you the option of GPU acceleration, which means some operations or filters will run slower on a laptop with a Radeon than on a laptop with an Nvidia graphics card.
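To make that concrete, here is a minimal sketch of the detect-and-fall-back pattern such apps typically use. cudaGetDeviceCount() is a real CUDA runtime call; the filter functions are hypothetical placeholders:

```cpp
// Minimal sketch of the CUDA-or-CPU-fallback pattern many apps use.
// cudaGetDeviceCount() is part of the CUDA runtime API; the filter
// functions below are hypothetical placeholders.
#include <cuda_runtime.h>
#include <cstdio>

bool cudaAvailable() {
    int deviceCount = 0;
    // Returns an error (and a count of 0) on machines without an
    // Nvidia GPU or driver -- e.g. a Radeon-only laptop.
    cudaError_t err = cudaGetDeviceCount(&deviceCount);
    return err == cudaSuccess && deviceCount > 0;
}

void runFilter() {
    if (cudaAvailable()) {
        printf("Nvidia GPU found, using the CUDA path.\n");
        // filterOnGpu();  // hypothetical CUDA-accelerated path
    } else {
        printf("No CUDA device, falling back to the CPU path.\n");
        // filterOnCpu();  // hypothetical (slower) CPU fallback
    }
}

int main() {
    runFilter();
    return 0;
}
```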
None of this should stop you from getting a laptop with a Radeon card, though. CUDA acceleration can be quite a speed-up, but because people are not happy with OpenCL yet and CUDA is Nvidia-only, GPGPU is hardly used. And even where it is used, you usually only get the speed-up for part of the workflow. Say rendering is a fifth of your total working time and the GPU halves it: the whole modelling process still runs completely on the CPU, so the time saved by GPU rendering comes to more or less 10% of the total, or less.
I wouldn't worry about it.
Radeons support all the same graphics APIs as Nvidia cards. Some features differ, but AFAIK both companies are quick to catch up and fix bugs in their drivers (I know some people seem to claim the opposite).
Unity runs just fine on Radeons. It even runs without a hitch on Intel iGPUs, as long as you don't stress them too much. Basic DX and OpenGL support is more or less the same whether the card is Nvidia, AMD or Intel.
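If you ever want to verify what a particular machine actually exposes, the standard OpenGL identification queries are identical on every vendor; only the returned strings differ. A minimal sketch, using GLFW for context creation (my choice here, any windowing library works):

```cpp
// Minimal sketch: query the driver's vendor / renderer / version
// strings through standard OpenGL. The calls are the same on
// Nvidia, AMD and Intel hardware. Assumes GLFW is installed.
#include <GLFW/glfw3.h>
#include <cstdio>

int main() {
    if (!glfwInit()) return 1;

    // An invisible window is enough; we only need a live GL context.
    glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);
    GLFWwindow* window = glfwCreateWindow(64, 64, "probe", nullptr, nullptr);
    if (!window) { glfwTerminate(); return 1; }
    glfwMakeContextCurrent(window);

    printf("Vendor:   %s\n", (const char*)glGetString(GL_VENDOR));   // e.g. "ATI Technologies Inc."
    printf("Renderer: %s\n", (const char*)glGetString(GL_RENDERER)); // e.g. "AMD Radeon R9 M290X"
    printf("Version:  %s\n", (const char*)glGetString(GL_VERSION));

    glfwDestroyWindow(window);
    glfwTerminate();
    return 0;
}
```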
One thing to note: to my knowledge, SLI and CFX are ONLY usable if Nvidia / AMD has added a profile for the game to their drivers. Otherwise the game (for example, whatever you build with Unity) will never see more than a single card.
In some other scenarios, a second card MIGHT be of some limited use.
Some applications that use CUDA acceleration as described above might include schedulers of their own design to distribute the workload among the cards available in the system. To my knowledge there is no general solution from Nvidia for that yet, so good luck finding an application whose devs went through the hassle of writing that kind of two-tier scheduler for the tiny niche of people with multiple GPUs in their systems.
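For what it's worth, the primitive such a hand-rolled scheduler is built on is just cudaSetDevice(). A minimal sketch, assuming the CUDA runtime is available; processChunk() is a hypothetical placeholder:

```cpp
// Minimal sketch of hand-rolled multi-GPU distribution: enumerate
// the CUDA devices and assign each one a slice of the work.
// cudaGetDeviceCount / cudaSetDevice / cudaGetDeviceProperties are
// real CUDA runtime calls; processChunk() is a hypothetical placeholder.
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    int deviceCount = 0;
    if (cudaGetDeviceCount(&deviceCount) != cudaSuccess || deviceCount == 0) {
        printf("No CUDA devices, nothing to schedule.\n");
        return 0;
    }

    const int totalChunks = 16;  // pretend the workload splits into 16 chunks
    for (int chunk = 0; chunk < totalChunks; ++chunk) {
        int device = chunk % deviceCount;  // naive round-robin assignment
        cudaSetDevice(device);             // subsequent CUDA calls target this card

        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, device);
        printf("Chunk %2d -> GPU %d (%s)\n", chunk, device, prop.name);
        // processChunk(chunk);  // hypothetical: launch kernels on the selected device
    }
    return 0;
}
```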
Then there is the possibility of using a second Nvidia card as a dedicated PhysX card, to offload physics calculations in games that support it. Note that this ONLY works with two Nvidia cards, as Nvidia disables PhysX support on their cards if an AMD card is in the system.
And the last thing: you might want to make sure your GPU has enough VRAM. Not only for playing / testing games; again, some applications actually use VRAM for other things.
Some modelling apps can be quite VRAM-hungry once you start cranking up the polygon count on a high-poly model. For that reason I finally exchanged my GTX 580 for a GTX 970 this year. GPU-wise the GTX 580 was still going strong for my purposes, but its VRAM was just too small for some of the higher-resolution sculpts in my modelling app of choice.
Luckily, most newer cards come with ample VRAM... just make sure yours gets 2-3 GB at minimum, so you don't hit a VRAM wall down the line and have to compromise because of it.
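If you want to check how much dedicated VRAM a machine actually reports, here is a minimal Windows sketch using DXGI (my choice of API; it lists Nvidia, AMD and Intel adapters alike):

```cpp
// Minimal sketch: report each adapter's dedicated VRAM via DXGI.
// Works the same for Nvidia, AMD and Intel GPUs on Windows.
// Build with: cl /EHsc vram.cpp dxgi.lib
#include <dxgi.h>
#include <cstdio>

int main() {
    IDXGIFactory* factory = nullptr;
    if (FAILED(CreateDXGIFactory(__uuidof(IDXGIFactory), (void**)&factory)))
        return 1;

    IDXGIAdapter* adapter = nullptr;
    for (UINT i = 0; factory->EnumAdapters(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        DXGI_ADAPTER_DESC desc;
        adapter->GetDesc(&desc);
        // DedicatedVideoMemory is in bytes; convert to MB for readability.
        wprintf(L"Adapter %u: %ls, %llu MB dedicated VRAM\n",
                i, desc.Description,
                (unsigned long long)(desc.DedicatedVideoMemory / (1024 * 1024)));
        adapter->Release();
    }
    factory->Release();
    return 0;
}
```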