
Client systems: CPU, GPU, Memory, HD, or Internet

Started by January 05, 2015 07:27 AM
16 comments, last by TrianglesPCT 10 years ago

I'd take the super GPU. I find it funny how you made up all these numbers and went with crazy ones for some (GPU) but pretty average ones for others (net at 100 MB/s is slower than what I have at home, and in plenty of places you can get 1 GB/s).

Not to mention that a 10 ms stable round-trip latency at all times would seem to allow faster-than-light communication, breaking causality and all of known physics!
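Back of the envelope, assuming signals at the vacuum speed of light (the absolute best case, with zero processing delay):

```python
C_KM_S = 299_792        # speed of light in vacuum, km/s
RTT_S = 0.010           # the promised 10 ms stable round trip

# Farthest host reachable within a 10 ms round trip, even in a vacuum:
max_one_way_km = C_KM_S * RTT_S / 2
print(round(max_one_way_km))                # 1499 km

# For comparison, an antipodal round trip along the surface (~20,000 km each way):
print(round(2 * 20_000 / C_KM_S * 1000))    # 133 ms, an order of magnitude over budget
```

So "10 ms to anywhere" only works if "anywhere" is within about 1,500 km.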

Eh, so is a CPU/GPU running on zero energy. Physics is for chumps when making wish-lists.

I'd take the super GPU. I find it funny how you made up all these numbers and went with crazy ones for some (GPU) but pretty average ones for others (net at 100 MB/s is slower than what I have at home, and in plenty of places you can get 1 GB/s).

Wish I lived where you live then. 100 MB/s at home would be nuts, and 100 GB/s server-side would be double-nuts. I THOUGHT I was being crazy with the internet spec too, especially the latency. Hell, most "1 Gb/s" ethernet devices don't even negotiate a channel bandwidth that actually meets that spec, and 10 Gb/s is pretty much out of the question for single-link ethernet. And you get 100 MB/s [800 Mb/s, or 0.8 Gb/s] at home? Cripes.

Where is it then that has this epic internet connection? I'm packin' my bags tomorrow.

No, I don't get 100 MB/s at home, I just said it wasn't nuts; I actually get 500 MB/s down and 200 MB/s up at home :) (upgrading to 1 GB/s soon).

Yeah, that was mean; couldn't resist :)


I'd take the super GPU. I find it funny how you made up all these numbers and went with crazy ones for some (GPU) but pretty average ones for others (net at 100 MB/s is slower than what I have at home, and in plenty of places you can get 1 GB/s).

Not to mention that a 10 ms stable round-trip latency at all times would seem to allow faster-than-light communication, breaking causality and all of known physics!

But maybe the rest of the world was obliterated during research and civilisation is down to an area of 200 square miles, in which case 10 ms is perfectly doable!


For the short-term benefit: definitely the GPU! The single most annoying thing with current game graphics, in my opinion, is the aliasing: all that flickering of the SSAO on the vegetation in Dragon Age Inquisition, or the specular aliasing in StarCitizen. Simply down-sampling everything from a massive resolution would be an easy fix.
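A toy sketch of that idea in pure Python: supersample at 2x per axis, then box-filter down. The hypothetical 4x4 "framebuffer" of grey values stands in for a high-resolution render of a hard diagonal edge:

```python
# Hypothetical 4x4 high-resolution render: grey values along a hard diagonal edge.
hi = [
    [0,   0,   0,   255],
    [0,   0,   255, 255],
    [0,   255, 255, 255],
    [255, 255, 255, 255],
]

# 2x2 box filter: each low-res pixel averages one 2x2 block of the high-res image.
lo = [
    [sum(hi[2 * y + dy][2 * x + dx] for dy in (0, 1) for dx in (0, 1)) // 4
     for x in range(len(hi[0]) // 2)]
    for y in range(len(hi) // 2)
]
print(lo)  # [[0, 191], [191, 255]] - the jagged edge becomes a smooth ramp
```

The catch, of course, is that rendering at 2x per axis costs 4x the fill rate, which is exactly where the super GPU comes in.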

current project: Roa

For the short-term benefit: definitely the GPU! The single most annoying thing with current game graphics, in my opinion, is the aliasing: all that flickering of the SSAO on the vegetation in Dragon Age Inquisition, or the specular aliasing in StarCitizen. Simply down-sampling everything from a massive resolution would be an easy fix.

Take your glasses off and you get anti-aliasing for free.

No, I don't get 100 MB/s at home, I just said it wasn't nuts; I actually get 500 MB/s down and 200 MB/s up at home :) (upgrading to 1 GB/s soon).

You're killing me. I know around here you can get internet that is advertised as 1 Gb/s down, but it never actually achieves anything close to that. They'll run fiber to your door, but then still plug it into a toaster oven on their side.

To be honest, the main motivator for me to say "100 MB/s" in my original post was to make sure I stayed well under the 2.2 GB/s or 1.2 GB/s of HDMI. Otherwise everyone gets the easy out of building gigantic servers and just streaming the entire video feed over the internet. Dropping under those levels ensures that people still need to consider the client-side machine as something other than just a terminal.
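For anyone mixing up the units in this thread (decimal prefixes assumed throughout, a simplification):

```python
MB_S = 100                        # the 100 MB/s figure from my original post

# Bytes vs bits: 100 MB/s is 800 Mb/s, nearly saturating "gigabit" ethernet.
print(MB_S * 8)                   # 800

# Compare against the lower HDMI figure quoted above (1.2 GB/s):
HDMI_GB_S = 1.2
print(round(MB_S / (HDMI_GB_S * 1000), 3))   # 0.083 - well under raw video-out bandwidth
```

So 100 MB/s is huge by home-internet standards but still only about 8% of a video cable, which is the point.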

Assuming the machine is otherwise correctly provisioned and only this one component is different, there is a clear winner:

3- Super Memory: 64 Terabytes of main memory, with access timing specs similar to that of a high-rated DDR4.


Since the machine is otherwise competent, my first immediate use would be as a RAM disk. Nearly all processing and serious computing ends up touching main memory frequently, so everything I do would benefit from the higher speed with no additional effort. It provides immediately accessible gains.
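A minimal sketch of the RAM-disk idea on today's hardware, assuming a Linux box where /dev/shm is a RAM-backed tmpfs (falls back to the normal temp dir elsewhere):

```python
import os
import tempfile

# /dev/shm is typically a RAM-backed tmpfs on Linux (assumption on my part);
# fall back to the ordinary temp directory on other platforms.
ram_dir = "/dev/shm" if os.path.isdir("/dev/shm") else tempfile.gettempdir()

path = os.path.join(ram_dir, "scratch.bin")
data = os.urandom(1024 * 1024)        # 1 MiB of scratch data
with open(path, "wb") as f:
    f.write(data)                     # lands in RAM, not on a spinning disk
with open(path, "rb") as f:
    assert f.read() == data
os.remove(path)
```

With 64 TB of main memory, entire working sets (databases, asset pipelines, build trees) could live in a scratch area like this with no code changes at all.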


The other items have some problems:

Super CPU doesn't help much within these constraints. You don't allow other hardware to change, and it would be quite difficult to keep the CPU fed with today's level of hardware. Unless you are doing raw number crunching (perhaps BitCoin mining in the background), it is difficult to provide a steady stream of instructions and data to a modern processor. That is why the HyperThreading model works so well for so many applications: bolting on a second instruction decoder (rather than additional CPU horsepower) is very often able to provide a substantial boost in user-facing performance. The actual number crunching is rarely the bottleneck; even enormous supercomputers are often constrained more by memory speeds and data interconnects than by CPU horsepower.

Super GPU provides additional eye candy, of course, and GPGPU programming can help your BitCoin mining efforts. But you still need to develop for it, and that takes work. Having recently bought a new machine with a nice new graphics card (GeForce 970), the graphics in Dragon Age are beautiful. But since you hypothetically cranked it up to "about 100 petaflops", you've really got exactly the same problem as the Super CPU: how do you intend to provide enough data and instructions to keep the beast busy? Sure, you can have a rock-solid rendering framerate, but with the other components stuck at today's level it would be difficult to leverage. Maybe I'd take it if I were a digital currency miner, but not for much else.
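Rough numbers on the feeding-the-beast problem. My assumptions: a generous 0.1 bytes of fresh data per flop, and roughly 25 GB/s out of a DDR4 channel:

```python
FLOPS = 100e15              # the hypothetical "about 100 petaflops"
BYTES_PER_FLOP = 0.1        # generous: most real kernels need far more data than this
DDR4_BW = 25e9              # ~25 GB/s from one DDR4 channel (rough assumption)

needed = FLOPS * BYTES_PER_FLOP     # 1e16 B/s, i.e. 10 PB/s of memory traffic
print(needed / DDR4_BW)             # 400000.0 - memory is short by about five orders of magnitude
```

Even with wildly optimistic arithmetic intensity, today's memory system falls hundreds of thousands of times short of feeding that chip.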

Super Disk would be fun but not particularly practical. For me personally, with a photography hobby, I keep multiple copies of everything; with a decade of raw files I'm approaching a terabyte of that data. One contract I'm working with is a moderately sized data warehouse, with around 11 TB of data records. But your fictional 25 petabytes, or 25 thousand terabytes, is far more space than most people would have any practical use for filling. For things that do have cool storage applications, like solving the game of chess, a 25 petabyte collection is inadequate by a double-digit number of orders of magnitude. Some commercial uses could benefit: the Internet Archive just passed 15 PB of data, and all of Facebook combined is estimated at 300 PB. Government agencies would love your 25 PB drives, since that is about the total of everything Google transmits globally in a day. So a few data-centric businesses would love it, but it doesn't give me much individually.
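To put the chess claim in numbers, using a commonly cited rough upper bound of about 10^44 positions and a wildly optimistic one byte per position (both figures are my assumptions):

```python
import math

DRIVE_BYTES = 25 * 10**15        # the hypothetical 25 PB drive
CHESS_POSITIONS = 10**44         # rough upper-bound estimate of chess positions

shortfall = math.log10(CHESS_POSITIONS / DRIVE_BYTES)
print(round(shortfall))          # 28 orders of magnitude short - "double digit" indeed
```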

Super Internet would be interesting. Not so much for anything I work with, but because of the "10 ms stable round trip latency at all times". The financial stock-trading world would kill for this one. Currently it takes slightly over 14 ms for the New York to Chicago round trip, and a surprisingly huge number of financial businesses use that link. The speed-of-light limit for that distance is about 13.3 ms. If you could guarantee a round trip of 10 ms to all the financial hubs of the world, not only would physicists want to have a chat, but you could become a very rich individual. For most software it would be convenient but not really transformative. Games get a better connection, but some players already have great connections, especially for LAN play. For common consumers it might mean videos load a little faster, but since the rest of the global infrastructure would remain the same, you'd quickly max out connections and hit memory and CPU bottlenecks at every processing hub.
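The 13.3 ms figure checks out if you assume light in glass fiber at roughly two-thirds of c over a route of about 1,330 km (both assumptions on my part):

```python
C_FIBER_KM_S = 200_000       # light in glass fiber, roughly 2/3 of c (assumption)
ROUTE_KM = 1_330             # approximate NY-Chicago fiber route length (assumption)

rtt_ms = 2 * ROUTE_KM / C_FIBER_KM_S * 1000
print(round(rtt_ms, 1))      # 13.3 - no amount of better switching gets below this
```

Which is why a guaranteed 10 ms to that route, never mind "all times and all places", breaks physics rather than just engineering.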


So if I had to pick only one of them to improve my computing experience and to help humanity generally, I'd have to pick the Super Memory.

Where is it then that has this epic internet connection? I'm packin' my bags tomorrow.


Maybe you just live in the land of sucky internets? I just got upgraded to 50 Mb/s this weekend in the UK...


You're killing me. I know around here you can get internet that is advertised as 1 Gb/s down, but it never actually achieves anything close to that. They'll run fiber to your door, but then still plug it into a toaster oven on their side.

Aye, internet is nice here; my ISP actually matches the advertised speed just fine (I'm typically getting 480 to 500 down and 160 to 200 up, depending on the time of day). That's when testing outside of my ISP, of course (no interest in knowing how fast it goes without leaving their own network).

Super CPU: many applications are memory bound, but mine happens to be compute heavy.

CPU compute is preferable to me since there is less latency between the serial and vector code, and you don't have to write against some terrible GPU API.

And presumably this super CPU has more and better cache. Wider vector lanes, more cores, and more instructions per cycle, please.

This topic is closed to new replies.
