
1 GPU or 2 GPUs?

Started by October 11, 2014 01:21 AM
19 comments, last by Promit 9 years, 11 months ago

Sorry for the late post. I'm getting into 3D modeling, and I'd like to do some offline stuff as well as real-time modeling for games. I've found that my current graphics card (GT 610) is trudging along with even basic games right now. One thing I'd like to do is work on animated shorts in Blender. Being able to make quicker renders, without them taking hours, would be nice, since I'd have one of NVIDIA's most powerful consumer-level GPUs running Blender's CUDA-enabled renderer.

I also thought that dual GPUs would be a headache. I just wanted to get others' opinions before I went for it. I might start off with just a single 8GB stick of memory since that'll be upgradable to 32GB in the next few years.
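As a quick sanity check for the Blender/CUDA plan: below is a minimal sketch, assuming only the standard CUDA runtime API, that lists the CUDA-capable GPUs the system exposes along with their compute capability, VRAM and SM count. Anything beyond the CUDA runtime calls themselves is just illustrative.

// Sketch: enumerate CUDA-capable GPUs with the CUDA runtime API.
// Compile with nvcc, or with a host compiler linked against cudart.
#include <cuda_runtime.h>
#include <cstdio>

int main()
{
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0)
    {
        std::printf("No CUDA-capable GPU found.\n");
        return 1;
    }

    for (int i = 0; i < count; ++i)
    {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        std::printf("GPU %d: %s, compute capability %d.%d, %zu MB VRAM, %d SMs\n",
                    i, prop.name, prop.major, prop.minor,
                    prop.totalGlobalMem / (1024 * 1024), prop.multiProcessorCount);
    }
    return 0;
}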

Memory is the wrong place to skimp.

SlimDX | Ventspace Blog | Twitter | Diverse teams make better games. I am currently hiring capable C++ engine developers in Baltimore, MD.

Hi,

If you are going to be doing a significant amount of rendering on top of everything else in game development, then you really need two computers. I don't know how you are going to fit that into your budget unless you get two high-performance used PCs.

I have two PCs and trust me: you can really improve your workflow with two. One stays focused on rendering and other long-running tasks such as uploading and downloading through Git, while you use the other PC (which can be a cheap laptop) for coding and 3D modeling.

If you really insist on having only one PC, then go for the best value for the money.

Remember that you also need a PC that is closer to typical end-user hardware, because most gamers have far less than top-of-the-line performance. Being able to somehow test for end-user issues is very important.

Personal life and your private thoughts always affect your career. Research is the intellectual backbone of game development and the first order. Version Control is crucial for full management of applications and software. The better the workflow pipeline, then the greater the potential output for a quality game. Completing projects is the last but finest order.

by Clinton, 3Ddreamer

Memory is the wrong place to skimp.

This. What budget are you looking at? It's contentious whether the GPU or CPU is more important (depends on what you're doing), but memory is almost always the most important.

If you're getting performance problems on your current computer just from running your dev tools, it's time to upgrade.

If you're getting performance problems while trying to run your game:
1) You may have a poorly designed/implemented algorithm that needs to be fixed (most likely)

2) You may just have a crappy computer which can't handle what the game is trying to do

3) You may be a bit too ambitious or over-the-top with what you're trying to do in-game (i.e., do you really need to simulate every cloud particle in real time in your RTS?)

For myself, I have a decently fast computer for dev. It can run my tools and my game without major lag; however, my game does drop down to 20 fps at times. That's actually a good thing: because I don't have a ridiculously overpowered system, I can see the performance problems my target market may experience, and I can identify and fix them on the spot (without rustling up a second low-spec machine for testing).
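Since the whole point is catching these problems on your own machine, here is a minimal sketch of the kind of per-frame timing that helps separate "my algorithm is slow" from "my hardware is slow". update() and render() are hypothetical stand-ins for your own loop, and the 16.6 ms threshold just marks a missed 60 fps frame.

// Sketch: crude per-frame timing with std::chrono.
#include <chrono>
#include <cstdio>

int main()
{
    using clock = std::chrono::steady_clock;

    for (int frame = 0; frame < 600; ++frame)   // roughly 10 seconds at 60 fps
    {
        auto start = clock::now();

        // update();   // hypothetical: game logic / simulation step
        // render();   // hypothetical: draw the frame

        double ms = std::chrono::duration<double, std::milli>(clock::now() - start).count();
        if (ms > 16.6)  // anything consistently above this is below 60 fps
            std::printf("frame %d took %.2f ms\n", frame, ms);
    }
    return 0;
}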

2 GPUs are only useful for development if you want to integrate support for them into your own game. No tool I know of in the prosumer space uses more than one GPU for compute, and even then only Nvidia cards, because of CUDA.

For gaming, unless you want to use extreme AA mods, a UHD panel / 3+ FHD panels, or 120+ Hz screens, you don't need it.

Also, saving some bucks by SLI'ing two smaller GPUs is most probably not worth the hassle with micro-stuttering, SLI profiles and generally mixed support for it.

Be aware that while the GTX 980 and 970 are very powerful and attractive cards today, they are overpriced and will lose value quickly:

- The GM204 chip used is Nvidia's midrange chip. The true high-end chip, GM200, will most probably appear in Tesla and Quadro cards first, as happened before with the GK110 chip of the GTX 780 Ti, which was already available for the professional cards around the launch of the GTX 680 (again, a card that used the midrange GK104 chip, yet was sold as a high-end card at the time). That means the high-end card you pay $600 for now might be had in a better, more optimized form as a "1070" next year for a little more than half the price.

Also there is talk of the big chip being unveiled in the next 4 months. Most probably this will just be a new Quadro or Tesla card, maybe a new Titan card if we are lucky. But AMD could force Nvidia's hand with an extremely fast Fiji chip, and they might release a GTX 980 Ti with a slightly gimped big chip in the first half of 2015.

- The GM200 chip, or GM210 if they use an updated chip for the true Maxwell high-end card next year, will be much more powerful than the GTX 980. Most probably it will again go up to a 250W TDP, and prices will be higher, as it is a larger, more complex chip than GM204 to produce. Still, chances are good that next year you'll get a card for around $700-750 that is up to 50% faster than this year's $600 GTX 980. That will be the REAL big Maxwell!

- The GM204 chip in the GTX 980 and 970 is still produced at 28nm, even though Maxwell was planned for 20nm. The fact that they were able to make the Maxwell midrange chip as fast as, or even faster than, the Kepler high-end chip is quite an impressive feat, seeing how it has a much smaller shader count and memory interface. Still, there are rumours Nvidia might plan a Maxwell refresh at 20nm in 2015. This could mean the second Maxwell wave due in a year (GTX 1080 and so on) might be even faster than if they just moved the big chip down into the consumer cards, if they also refresh the chips at 20nm.

It could also mean we see a reworked GTX 900 series with reworked Maxwell chips at 20nm in the first half of 2015, again, if AMD forces Nvidia's hand.

Of course there is also talk that Nvidia and AMD could just skip the problem-ridden 20nm node and go directly to 16nm when it is ready (most probably not before Pascal, 2016 maybe).

TL;DR: If you need a card now, you might want to stick to the GTX 970. It's nearly as fast as the 980 (10-15% slower), and the 980 costs over 50% more. The resale value of both cards next year will most probably be pretty bad, depending on what happens. Still, even if 20nm never shows up, by the time the big chip hits the normal consumer cards (as opposed to the $1000 Titan), your GTX 980 is worth $300... at best. Your high-end card just got bumped down to a midrange card, sorry.

Sorry for the late post. I'm getting into 3D modeling, and I'd like to do some offline stuff as well as real-time modeling for games. I've found that my current graphics card (GT 610) is trudging along with even basic games right now. One thing I'd like to do is work on animated shorts in Blender. Being able to make quicker renders, without them taking hours, would be nice, since I'd have one of NVIDIA's most powerful consumer-level GPUs running Blender's CUDA-enabled renderer.

I also thought that dual GPUs would be a headache. I just wanted to get others' opinions before I went for it. I might start off with just a single 8GB stick of memory since that'll be upgradable to 32GB in the next few years.

You need to see that the GT 610 is an EXTREMELY small and slow card. It is 3 years old, and was most probably tied with or overtaken by some integrated solutions from Intel or AMD even at launch.

You could get a low-midrange card like the GTX 750 or 760, or wait for the GTX 960, and you would most probably have more than enough grunt for CUDA acceleration. You would certainly feel quite a speed bump compared to your old card, even with those cheaper cards.

If you have the money, a GTX 970 will of course provide a healthy boost over the GTX 750 / 760, and is most probably also quite a bit faster than the not-yet-revealed GTX 960.

Why not go with 16GB of memory from the start? 4GB sticks are dirt cheap, and while you don't get a speed bump from more unused memory, the moment your PC runs out of it, everything slows to a crawl, and some applications might even outright crash.

Get as much memory as you can afford. 8GB is rather on the low side for a 3D workstation (and still rather weedy for a gaming PC... some games start to use 6 to 8GB today, and with the OS and other apps taking their share, your PC will be swapping in no time). Rather than paying a 60% premium to get a graphics card that is 15% faster at most, invest half of that sum to fill all your memory slots, or get two 8GB sticks if you might want to upgrade later on.

If you do the latter, make sure you spread the two sticks over both available memory channels. Your mainboard manual will tell you more; basically, your 4 slots share 2 channels, 2 slots per channel. If you use the wrong 2 slots, you've just plugged in 2 sticks that share a single channel, halving your memory bandwidth.



GTX 1080
I like that name for a GPU. I hope they don't change the naming scheme for next-gen GPUs.

"I AM ZE EMPRAH OPENGL 3.3 THE CORE, I DEMAND FROM THEE ZE SHADERZ AND MATRIXEZ"

My journals: dustArtemis ECS framework and Making a Terrain Generator

Next-generation DirectX 12 and OpenGL are coming, too, with much better performance even on existing hardware (DX 11-generation hardware), but it'll take a couple of years for them to get up to snuff.

Personal life and your private thoughts always affect your career. Research is the intellectual backbone of game development and the first order. Version Control is crucial for full management of applications and software. The better the workflow pipeline, then the greater the potential output for a quality game. Completing projects is the last but finest order.

by Clinton, 3Ddreamer


GTX 1080
I like that name for a GPU. I hope they don't change the naming scheme for next-gen GPUs.

:) .... Especially when it might be the first card that is really 4K-capable (as in, being able to output 4K at 60Hz at max settings)... they could call it the GTX 1080p :P

About Nvidia's naming scheme: the GTX 750 was the first Maxwell-based card, followed by the 800M series, and now the 900 series for the midrange Maxwell desktop parts. So the next high-end Nvidia range could be anything: another 900-series card (GTX 980 Ti? GTX 985?), the logical GTX 1080, another bump up because, you know, it's just so much better with a higher number (GTX 1180), or a complete naming overhaul (because they enter four-digit territory again)....

Next-generation DirectX 12 and OpenGL are coming, too, with much better performance even on existing hardware (DX 11-generation hardware), but it'll take a couple of years for them to get up to snuff.

And nobody knows to what feature level the existing cards will support DX 12... I really hope it will be enough for the draw-call overhead improvements, because that is the most important new feature... but it's totally possible that it isn't even supported by the GTX 980, only by new cards coming out next year.

Nobody will know until DX 12 is out there.
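For what it's worth, a hedged sketch of how such a check will presumably look from code once the D3D12 headers ship: try to create a device and then ask it which optional feature tiers it reports. The exact structure and enum names here are assumptions pending the final SDK; link against d3d12.lib.

// Sketch: probe whether the default adapter exposes a D3D12 device,
// then query the optional hardware features it reports.
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main()
{
    // Try to create a D3D12 device on the default adapter at feature level 11_0.
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device))))
    {
        std::printf("No D3D12-capable device/driver found.\n");
        return 1;
    }

    // Query the optional feature tiers the hardware actually supports.
    D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                              &options, sizeof(options))))
    {
        std::printf("Resource binding tier: %d\n",
                    static_cast<int>(options.ResourceBindingTier));
    }
    return 0;
}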


My friend was saying that I should just get two GTX 670s and crossfire them (I'm new to this, so I'm not sure what crossfire means).

Wrong answer. I am a strong believer that you ONLY use multi-GPU in two situations:

1) No single GPU can get the performance you need at a reasonable price

2) You already have one of the GPUs and are looking for a cheap upgrade

In a world where the 980 exists, the idea of combining two GTX 670s is a sick joke. (PS: Crossfire is AMD's branding for dual GPU; NVIDIA calls theirs SLI.)

I presume it's a typo and the friend meant 970s, but otherwise, this. Scalability of SLI/Crossfire is pretty good these days -- about 75-95 percent of linear scaling -- but it also doesn't work with all titles. It also requires more power delivery, draws more power at the wall, and can complicate cooling. A single GPU has none of these problems.

When I build a PC, which I do about every 4-5 years, I buy the best components that are available just where price/performance begins to fall off radically. I buy the machine I want now, and plan to upgrade the GPU 2-2.5 years in -- I might add storage, but otherwise I don't upgrade. Everyone is different, but that's what works for me.

throw table_exception("(? ???)? ? ???");

This topic is closed to new replies.
