
Today relevant hardware comparison

This is all I could find about how many coins are actually mined:

http://apps.ycombinator.com/item?id=2265338
Not to turn this into a bitcoin discussion, but on a few points, since they were asked:

[quote name='Ravyne' timestamp='1313513971' post='4849924']
because all the powerful AMD cards have been snapped up by bitcoin miners


Wow, seriously? Bitcoin has an actual effect on the hardware market?

So that's what all this computational power is used for, rather than for science, or, of course, games? :)
[/quote]

Yep, indeed. It turns out that AMD's hardware is roughly 5x faster than nVidia's at this, thanks to an instruction it implements (a single-instruction 32-bit rotate) that the SHA-256 algorithm leans on heavily. No CPU or nVidia card has any realistic chance of ever breaking even once hardware costs and electricity are factored in.
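To make that concrete, here's a rough Python sketch of the operation in question. SHA-256's round functions are built almost entirely out of 32-bit rotates like this; AMD's GPUs can reportedly issue the rotate as a single instruction (the bitalign op), while current nVidia parts have to emulate it with a shift/shift/or sequence. Details here are from memory, so treat them as approximate.

[code]
# Illustrative only: a 32-bit rotate-right in pure Python, plus one of the
# SHA-256 "sigma" functions that is built from it.
def rotr32(x, n):
    """Rotate a 32-bit word right by n bits (0 < n < 32)."""
    x &= 0xFFFFFFFF
    return ((x >> n) | (x << (32 - n))) & 0xFFFFFFFF

def big_sigma0(x):
    # Straight from the SHA-256 spec: three rotates and two XORs per call,
    # executed for every word of every round, hence the sensitivity to how
    # cheap a rotate is on the hardware.
    return rotr32(x, 2) ^ rotr32(x, 13) ^ rotr32(x, 22)

print(hex(big_sigma0(0x6a09e667)))
[/code]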

The computations are used to commit bitcoin transactions into the block-chain; nothing else useful goes on. Each new block contains the hash of the previous block, and the network brute-forces a "nonce" value until the new block's own hash comes out below a difficulty target. It amounts to a brute-force search for a hash with a particular property, rather than the normal use of hashing, where you hash a known value once and call it good. The first block, called the genesis block, was pre-made by bitcoin's creator and contains a cryptic message.
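Here's a toy sketch of that process in Python. It's heavily simplified (real Bitcoin hashes an 80-byte binary header containing a Merkle root of the transactions, a timestamp, and a compact difficulty encoding; the names and difficulty below are made up for illustration), but the shape of the loop is the same:

[code]
import hashlib

def mine_block(prev_hash, transactions, difficulty_bits=16):
    """Brute-force a nonce until the block's double SHA-256 hash is below a target."""
    target = 2 ** (256 - difficulty_bits)    # more difficulty bits -> smaller target -> harder
    nonce = 0
    while True:
        header = f"{prev_hash}{transactions}{nonce}".encode()
        digest = hashlib.sha256(hashlib.sha256(header).digest()).hexdigest()
        if int(digest, 16) < target:
            return nonce, digest             # found a block the network would accept
        nonce += 1

# Each block commits to the hash of the one before it, which is what chains them.
genesis_hash = hashlib.sha256(b"pre-made genesis block").hexdigest()
nonce, block_hash = mine_block(genesis_hash, "alice->bob:1.0")
print(nonce, block_hash)
[/code]

A GPU doing 800 megahashes per second is essentially running the equivalent of that inner loop a few hundred million times per second.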

This is how bitcoin implements its security -- an attacker would have to control more than half of the network's total hashing power in order to take over, by creating a different blockchain that is longer than the legitimate one and which also traces back to the same genesis block. This is also the reason that mining is a free-for-all -- a lot of people who got to the party late lament that GPU miners control so much hashing power that even most lower-end GPUs are unprofitable, and call for a limit on how much hashing power an individual is allowed to apply -- but that would undermine the security of the blockchain, even if it might be more "fair" in many people's eyes. Of course, the politics of bitcoin lean more libertarian (though most libertarians are too busy being gold-bugs to realize it) than socialist, so there aren't many sympathetic ears anyhow.
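As a toy illustration of the "longest chain wins" rule (more precisely, the chain embodying the most accumulated work; the explicit 'work' numbers below are stand-ins for what real nodes derive from each block's difficulty target):

[code]
def total_work(chain):
    # Real nodes derive per-block work from the difficulty target encoded in the
    # block header; here each block just carries an explicit number.
    return sum(block["work"] for block in chain)

def choose_chain(candidate_chains):
    """Nodes adopt whichever valid chain embodies the most accumulated work."""
    return max(candidate_chains, key=total_work)

honest_chain   = [{"work": 1}] * 100   # the combined output of the honest majority
attacker_chain = [{"work": 1}] * 60    # a minority attacker falls further behind over time
print(choose_chain([honest_chain, attacker_chain]) is honest_chain)   # True
[/code]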



[quote name='freddyscoming4you' timestamp='1313592334' post='4850344']
Oh heck yeah. Graphics cards are cheaper than multi-CPU boards that support multi-core CPUs. With the projects that turn GPUs into massively parallel processors, just set up a three-way SLI/Crossfire and you've got a bitcoin-churning machine.


How much does it earn per hour?

[/quote]

I've got one Radeon 6990 turning out about 800 megahashes per second -- in a month of mining I've mined about 15 bitcoins through a pool. The average value over the last month is around 12 USD, so I'd have made about 180 USD if I had cashed them all out as I went. My electrical costs, at 10 cents per kWh, are around $30 per month. Mining brings me about $5 per day profit with one card -- about 21 cents/hour, 24 hours per day. My hardware investment is about $1000 for the whole setup. There are more cost-effective ways in, using different GPUs, but they weren't available at the time (and I only just *happened* to find the 6990 on my local craigslist).
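The arithmetic, if you want to sanity-check it (the 400 W figure is my assumption for the card plus the rest of the system at the wall; a 6990 alone is rated at roughly 375 W):

[code]
# Back-of-the-envelope mining economics for one Radeon 6990.
btc_per_month   = 15          # mined through a pool
usd_per_btc     = 12          # rough average price over the month
system_watts    = 400         # assumed wall draw: card plus the rest of the box
usd_per_kwh     = 0.10

electricity_usd = system_watts / 1000 * 24 * 30 * usd_per_kwh   # ~28.80 USD/month
revenue_usd     = btc_per_month * usd_per_btc                   # 180 USD/month
profit_usd      = revenue_usd - electricity_usd                 # ~150 USD/month

print(f"profit: ~{profit_usd:.0f} USD/month, "
      f"~{profit_usd / 30:.2f} USD/day, ~{profit_usd / (30 * 24):.2f} USD/hour")
[/code]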

Some people control as much as 56 gigahashes per second -- around 64 Radeon 6990s' worth -- and are making a decent wage at it (about the equivalent of $56.50/hour normalized against the average 40-hour work week). Of course, that requires about a $75K hardware investment, and probably another $25-50K in electrical infrastructure. There are probably enterprising criminal factions or major "investors" who control even more hashing power but don't mine through public pools, so we don't know about them.



It's also worth noting that AMD is on the cusp of releasing their new Bulldozer CPUs, probably before the end of next month. And they've got new GPUs that will probably be out before the end of the year.


Basically: this.

While technology is always moving, CPU and GPU architectures come in batches, so at the very least I'd wait to see what reviews appear for Bulldozer before picking a CPU; even if it doesn't stack up, it might cause price adjustments.

The GPU is harder; we are on the edge of a totally new arch from AMD which will cause price ripples as well. If you can hold off a few months then it might save you some cash, even if you don't go for the newest high end. NV have a new GPU coming as well, but it won't be on the shelves until sometime in 2012...
To clarify, the latest rumor, supported by leaked AMD slides, puts Bulldozer out in just over 4 weeks. The model which will sell for ~300 USD is a 4-"module" CPU running at 3.6GHz, able to turbo up to 4.2GHz when only 1-2 cores are active. Each "module" comprises 2 independent integer cores, 2 independent x87 FPUs, and 2 128-bit SSE vector units which can be combined to execute 256-bit AVX vector instructions. Basically, you get 8 "classic" integer/FPU/SSE CPU cores and 4 shared AVX execution units. On well-threaded integer work (e.g. compiling) it's going to destroy Intel on price/performance, and it ought to be competitive with Intel's newest on vector workloads.
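If you want to see which of those vector paths your own machine exposes, a quick Linux-only sketch like the one below will do. The flag names are as they appear in /proc/cpuinfo, with "xop" and "fma4" being the Bulldozer-specific extensions and "avx" the shared 256-bit path; this is illustrative, not a robust feature-detection method:

[code]
def cpu_flags(path="/proc/cpuinfo"):
    """Return the CPU feature flags the kernel reports (Linux only)."""
    try:
        with open(path) as f:
            for line in f:
                if line.startswith("flags"):
                    return set(line.split(":", 1)[1].split())
    except OSError:
        pass
    return set()

flags = cpu_flags()
for feature in ("sse2", "sse4_2", "avx", "xop", "fma4"):
    print(f"{feature:7s} {'yes' if feature in flags else 'no'}")
[/code]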

I'm somewhat concerned that they are sticking to dual-channel memory for now (DDR3-1866), but it should mean that the overall platform will be cheaper, due to less complicated motherboard manufacturing.


I'm guessing that the GPUs will be out before December, Christmas shopping season and all. The dish on the 7xx0 series is that the 79x0s will be the new architecture, and the 78x0s and below will be die-shrinks of the current VLIW-4 architecture found in the 69x0s (they did the same thing with the 6xx0 series, where the 69x0s were the new architecture, and the 68x0s were a die-shrink of the previous VLIW-5 architecture found across the board in the 5xx0 series).

In any event, you can expect to see 6990 performance in a much-cheaper 7870 package for 300-400 USD, roughly.


Has there ever been any proof of more than dual-channel RAM making a difference in anything beyond synthetic tests? When I got my i7, which is triple-channel, the general feel I got from various sites was that in the 'real world' you'll see very little difference between dual- and triple-channel RAM.

Even Intel went back to dual channel after the initial i7 releases and I'm pretty sure, unless I've missed something, they have pretty much stayed there.

I guess quad-channel might be useful for future Fusion devices, but then that would break AMD's general plan of socket compatibility between generations, as they tend to overlap them a bit.

GPU-wise, AMD's new arch looks sweet indeed; I watched a video of a presentation on it at their Fusion event last night. I need to rewatch it to confirm a few points, but it looks like a significant step forward over their current arch. If you are an OpenCL coder then I think this will be worth a look, as some parts look really nice indeed (such as the Asynchronous Compute Engines, which let you run multiple compute workloads at once and allow for some self-feeding of compute work). It looks to be pretty good for graphics stuff too ;)
It certainly depends -- greater bandwidth doesn't solve anything with respect to latency or scattered reads -- but it may be a bigger help in games and other applications as Data-Oriented Design moves off the console and onto the PC. It's true that Intel's triple-channel product was niche at best, but that's not necessarily a sign that it was a dead end -- late next year Intel's next chip will have quad-channel memory.

I suspect AMD will follow suit with their next-gen Fusion APUs around early 2013.

Dual-channel, paired with fast RAM, probably isn't a terrible bottleneck really, but it's worth pointing out that PC memory bandwidth is only just lately catching up to the consoles, and I think it still has a way to go to catch up on latency.
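A crude way to see the bandwidth-versus-latency distinction for yourself (Python with numpy; the array size is arbitrary and the absolute numbers will vary wildly by machine, so only the ratio is interesting): a streaming pass over a big array is mostly bandwidth-bound, while summing the same data through a random permutation is dominated by scattered reads.

[code]
import time
import numpy as np

N = 1 << 26                              # ~64M float32 values, ~256 MB
data = np.ones(N, dtype=np.float32)
perm = np.random.permutation(N)          # scattered access pattern

t0 = time.perf_counter(); sequential = data.sum();       t1 = time.perf_counter()
t2 = time.perf_counter(); scattered  = data[perm].sum(); t3 = time.perf_counter()

print(f"sequential sum: {t1 - t0:.3f} s, scattered sum: {t3 - t2:.3f} s")
[/code]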



[quote name='Ravyne' timestamp='1313718075' post='4851024']
it's going to destroy Intel on price/performance, and it ought to be competitive with Intel's newest on vector workloads.
[/quote]

The i7-2600K is only 315 USD. There's no reason Intel won't slash prices when they get competition, either. That one overclocks to 4 GHz, no problem.

[quote name='Ravyne' timestamp='1313718075' post='4851024']
it's going to destroy Intel on price/performance, and it ought to be competitive with Intel's newest on vector workloads.

The i7-2600K is only 315 USD. There's no reason Intel won't slash prices when they get competition, either. That one overclocks to 4 GHz, no problem.
[/quote]

Which is great and all, but the AMD chip will have twice the integer, x87 FPU, and SSE resources, since it has 4 "modules" rather than 4 cores. Intel's Hyper-Threading will help them gain back a bit, but I'd still expect AMD to come out well ahead on those workloads -- 50% minimum is my guess. Single-threaded performance may be closer and might even give the edge to Intel -- there's not really been anything reported or rumored on single-threaded perf. It's not really fair to call out overclocking numbers, since we don't know how well (or not) Bulldozer will overclock.

The 2600K is a fine CPU, and I'm not at all saying not to get one -- what I'm saying is that AMD is on the cusp of delivering some really interesting stuff. If it should fail to compete (which I doubt) then so be it, but in the likely event that it matches (or exceeds, as I'm betting) the value and performance of Intel's latest, then the very least that will happen is that it will drive Intel's pricing down.

If the OP needs a computer *right now* then his decision is made for him, but if he can manage to wait 4 weeks there will be a whole new architecture entering the fray -- that doesn't happen all that often, so it's well worth waiting out -- die shrinks and minor revisions, no, but new architectures, yes.


Right now I think the 3 most interesting pieces of tech out there for the PC are:

- Bulldozer, as it's a new take on CPU core design
- AMD's Graphics Core Next, as it has some very interesting design choices, is a significant step above their current hardware, and does some pretty cool things (watch session 2620 here for the details, about 45 minutes long; I recommend grabbing the PDF as not all the slides are clear); it'll probably be part of Fusion APUs in the next 18 months, IIRC
- Fusion: while Intel have a GPU on-die now, it's not that good, to put it mildly.

I'm sure NV and Intel are doing interesting things, but right now AMD are making a fair chunk of noise and seem to be hitting timetables... well, at least with the GPU stuff anyway; BD and Fusion have had more slippage over the years than a bunch of old people on an ice rink :D
Intel obviously has an army of great engineers, and they've done a lot of really interesting things over the last, oh, 6-7 years especially (correcting the P4 NetBurst fiasco) after AMD started putting the screws to them with the Athlon 64/Opteron. The work they've done to get the computing performance they do out of such low TDPs is an awesome turnaround -- but Intel, historically, doesn't make especially bold moves -- at least successful ones -- until after the competition has shown the way. They manage great strides as the master of tweaking and optimizing -- but they don't make any radical leaps. Contrast that with AMD, who seems less able to extract gains from tweaks and optimization and makes its greatest strides when doing something that's far from obvious to the competition.

They're rather complementary in that way, and I'm glad that both are around to keep each other moving forward -- AMD to drag Intel onto the next big thing, and Intel to remind AMD that a tech company cannot survive on bold moves alone.

When AMD bought ATI and everyone wondered "Why not nVidia?" -- nV's much larger market cap aside -- I always thought that the ATI acquisition made more sense from a cultural and technology standpoint -- both were (and still are) scrappy underdogs who had proven they had the ability to deliver strong products and even steal the performance crown every now and again. Both groups favored bold changes and elegant solutions rather than just doing more of the same, harder.

Although they might be a cultural match in many ways, Intel and nVidia will never merge -- neither is humble enough to cede to the other, and each would rather move into the other's home territory. Intel is doing this with Knights Ferry (competing against Tesla/Fermi) and had intended to compete on GPUs with Larrabee, and nVidia is doing this with Fermi/Tesla on the server and Tegra (particularly Tegra 3/Kal-El -- quad-core, 2.5GHz ARM -- due before the end of the year) on mobile. Further taking over the compute arena, one of the upcoming nVidia GPUs (either the next, or the one after) will have an on-board ARM processor to allow entire compute tasks to take place completely on the card -- in 5 years, a supercomputer might be composed entirely of these GPUs with embedded ARM cores, with few, if any, parts that look similar to a PC. ATI will probably follow suit with something similar, using their Bobcat cores.

Anyhow, now that we're thoroughly off-topic...


This topic is closed to new replies.
