
Question about the cost of CPUs


I recently watched a bunch of videos on silicon costs and how much the cost per wafer will increase as CPUs move down the nm ladder. These were fascinating insights to me, especially since I'm completely ignorant of how this stuff is created.

But the one thing that wasn't explained was the effect of adding more cores to a chip, so that's where I'm a bit lost.

Does it cost significantly more for Intel or AMD to make a chip with 8 cores compared to 4 cores? I don't really get how the amount of cores plays into the equation of cost. If I understand correctly, there would be more transistors, but that alone doesn't seem like it warrants the drastic increase to the price we pay for a chip.

Is it really just markup and larger margins for CPUs with more cores? Or is there a part of the fabrication process that I've missed?

I figure you guys would know more about this stuff and could maybe explain to me or point me in a direction so I could read about it and learn.


I don't know much about this, but as far as I know the shared parts are tiny compared to the cores themselves, so while going from 1 to 2 cores doesn't quite double the transistor count / die area, it definitely increases it a lot. That in turn probably increases the odds of getting a failed part that goes in the trash (and if you end up throwing a die in the bin twice as often, costs go up).
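
To put rough numbers on that intuition: if each core independently comes out of the fab flawless with some fixed probability, then a die with more cores is flawless less often, so more of them end up in the bin (or in a lower bin). A minimal sketch in Python; the 0.97 per-core figure is invented purely for illustration, not real foundry data:

# Rough sketch of "more cores = more chances for a defect".
# The per-core survival probability below is a made-up illustrative number.
per_core_ok = 0.97

for cores in (1, 2, 4, 8):
    all_cores_ok = per_core_ok ** cores  # every core has to come out clean
    print(f"{cores} cores: chance the whole die is flawless = {all_cores_ok:.1%}")

With those made-up numbers the flawless rate drops from 97% for one core to about 78% for eight, which is exactly the "thrown in the bin more often" effect.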

Also, prices aren't that crazy: you don't really pay "per core" at all but per performance segment. If you check out the high-tier Xeons, sometimes at the same price point you get the choice between +0.6 GHz or twice the cores.

The silicon itself is very cheap. You pay mostly for R&D and the Intel premium.

The cores themselves can be different. Sometimes they have less cache (on a per-core basis). Lack of hyperthreading or of more advanced features (transactional memory, AVX level...) is also part of the equation, not to mention iGPU performance.

It seems to me that up to 200 bucks, doubling the core count effectively doubles the price (more or less). At the high end it's completely different.

Previously "Krohm"

The design does get more complicated if you add more cores. Just plain duplicating as many core templates as you want is not going to work.

Each CPU generates a lot of heat that you have to get rid of (most of the power that goes into a CPU is converted to heat). Putting more distance between cores would help, but then you run into timing problems. High clock speeds mean very short signal times: ideally, you want a signal to be known everywhere within the same clock tick. Electricity travels fast, but we're talking nanoseconds, which puts limits on the lengths of wires. Finally, there is a load problem. For fast switching you want few electrons to represent a signal (fewer electrons are easier to drain, leading to faster switch times). However, each destination needs a few electrons to trigger, so more outputs to connect to means more electrons, which in turn means slower switching.
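
To give a feel for the wire-length limit: at a few GHz a clock tick is a fraction of a nanosecond, and on-chip signals propagate at only a fraction of the speed of light, so the distance a signal can cover in one tick is a few centimetres at most, before any gate delays. A back-of-the-envelope sketch in Python; the 0.5c effective propagation speed is an assumption for illustration:

# Back-of-the-envelope: how far can a signal travel in one clock tick?
C = 3.0e8                 # speed of light, m/s
clock_hz = 4.0e9          # 4 GHz core clock
signal_speed = 0.5 * C    # assumed effective propagation speed in on-chip wires

tick_seconds = 1.0 / clock_hz
reach_mm = signal_speed * tick_seconds * 1000

print(f"clock period: {tick_seconds * 1e12:.0f} ps")
print(f"reach per tick: ~{reach_mm:.0f} mm (before any gate delays)")

At 4 GHz that works out to a period of 250 ps and a reach of roughly 38 mm, and real paths have to be much shorter once transistor switching delays eat into the budget.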

Don't think the core count is what makes a chip more expensive. After all, it's just something printed on the silicon. It doesn't matter if it's more complicated cores, more cores, or a bigger iGPU.

BUT: the more you want to put into a CPU at the same physical size, the more cramped the design gets (leading to what Alberth is talking about). And the more you put on the silicon, the more heat is produced and the more problems you get with leakage and the like.

The most important thing in today's mainstream CPUs, though, is the growing space the iGPU consumes. It seems CPU manufacturers really want to cut into the low-end GPU market, and think that a potent iGPU creates added value for their customers and in turn makes their CPUs more attractive (which has certainly worked to a point; the lowest-end discrete GPUs seem to have died off).

This makes a low-end PC a little cheaper for consumers and helps bring down the power consumption of the combined CPU+GPU package, but it has to some degree held back CPU development. All the space and power savings that could have gone into more CPU cores went into a bigger iGPU instead. Over the years that has taken iGPUs from being useful only for 2D desktop work to being usable for light 3D gaming without problems, but on the other hand Intel has stuck with its 4-core design, which is basically also what AMD did (8 cores, yes, but really more like 4 modules with 8-ish cores).

The CPUs produced for markets where CPU power DID count mostly ditched the iGPU (and are mostly physically bigger chips). If you look at current mainstream CPUs, you will see that sometimes more than 50% of the die area is consumed by the iGPU and the eDRAM used to accelerate it. So, with the iGPU cut out, 8-core CPUs might be possible today even at mainstream die sizes. Hell, the enthusiast and server markets went way beyond 8 cores long ago (with bigger dies, though).

So why are there no mainstream Core i7 chips with more than 4 cores and without an iGPU for 300-400 bucks? Well, I guess it comes down to business reasons. Could Intel still ask $1000 for their enthusiast-platform 8-core i7 when there were (slightly slower, on a slightly less capable platform) 8-core chips for $400? Who would invest in the high-priced LGA 2011 platform if you could get almost the same thing in LGA 1151 form?

Don't get me wrong, I don't think Intel makes a lot of money with their enthusiast platform, which mostly re-spins some of their lower-end server and workstation class hardware, minus some pro-class features, for people with too much money... on the other hand, because R&D is mostly paid for by the workstation and server products (which are even more expensive), the money they ask for the enthusiast-class hardware is mostly direct profit.

The mainstream, on the other hand, bitches and moans about small incremental gains with every generation, but doesn't really NEED more CPU power. What normal end-user workload out there can a 4-core i7 at >3 GHz not satisfy? Workloads needing more power are mostly professional ones... and pros are known for a) being ready to pay the price for the most powerful hardware, and b) being ready to pay more if, apart from better hardware, they get better support bundled with it.

Then there is Intel's near-monopoly... why shoot your powder now when there is little pressure from AMD, which has struggled with weak CPU designs since Bulldozer? Intel doesn't need to do anything, and their old generation is still ahead of newer AMD generations. Does Intel really care that consumers have little interest in upgrading to a newer generation when most of them already have Intel in their machines? When most new machines still get outfitted with the current Intel generation? Maybe they should, given the struggling PC market. On the other hand, there is ONE thing AMD has a lead on, and that is the iGPU...

Thanks to buying ATI, AMD has access to some powerful GPU tech, and their APUs consistently beat Intel's iGPUs for graphical prowess. So if there is one area where Intel needs to gain ground to really beat AMD, it's iGPU power.

Lastly, with mobile and ever-thinner ultrabooks being all the rage, a decrease in power consumption has lately become more important than an increase in CPU power. Here, the smaller the die and the fewer the cores and execution units, the less power-hungry the chip. In this light it makes sense to include only as many cores as are needed for whatever the device is built for... which for everyday workloads means 2 cores (or 4 slow cores / threads), and for more taxing workloads like gaming, 4 cores (or 8 slow cores / threads).

This all leads to inflated prices for Intel hardware, small incremental gains between generations, a hunt for more powerful iGPUs and power savings, and a "wait and see" strategy when it comes to real CPU power increases.

You can bet that the day AMD releases its new "Zen" design upon the world and it proves to REALLY be as good as the hype says (as opposed to Bulldozer), and/or the day normal workloads start needing more than 4/8 cores, Intel will be ready to push out 6- or 8-core mainstream CPUs. The technology is there, and it is NOT overly expensive (though it is a new design, for sure).

These chips, if produced today, would most probably have to drop the iGPU, meaning they would be desktop-only chips. But the amount of silicon needed, and the complexity, would most probably stay about the same, because that is how large and complex the iGPU is.

My personal bet would be slightly above the "normal" mainstream i7 price point, in the $400-500 range. If AMD's recent pricing is anything to go by, they are no longer trying to attract people with bargain-bin prices (see the Fury products and their... quite competitive pricing; performance is good, but still below Nvidia cards at similar price points AFAIK... but hey, if it helps them survive, who am I to judge). The Zen products will most probably be priced similarly to current Intel mainstream products, so Intel could bring out new, more powerful mainstream CPUs in the price range between the mainstream i7 and enthusiast i7 products (discounting the enthusiast 4-core CPUs).

As I understand it, there's really not a whole lot of difference between the 8-, 4- and 2-core CPUs in a given product line. From what I have read, it's usually all the same process; the lower-core CPUs are just the ones that had some defects. So you try to produce a batch of 8-core CPUs, and you get some with 8 good cores, some with 7, some with 6, and so on. The perfect ones go into the 8-core, top-of-the-line bucket; the ones with varying numbers of defects get working cores disabled to bring them down to the next tier.

With smaller nm manufacturing processes, you get more defects.
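
A toy model of that salvage process: assume each of the 8 cores on a die has some independent chance of containing a defect, then count how a batch of dies spreads across the possible good-core counts. The 3% per-core defect rate and the dies-per-wafer figure are purely illustrative:

from math import comb

per_core_defect = 0.03   # illustrative per-core defect probability, not real data
dies_per_wafer = 600     # also illustrative

# How many dies end up with 8, 7, 6 or 5 usable cores (binomial distribution)?
for good in range(8, 4, -1):
    p = comb(8, good) * (1 - per_core_defect) ** good * per_core_defect ** (8 - good)
    print(f"{good} good cores: {p:.1%} of dies (~{p * dies_per_wafer:.0f} per wafer)")

With these numbers roughly 78% of dies are perfect 8-core parts, about 19% can still be sold as a lower-core SKU, and only a small remainder is truly unusable, which is why selling cut-down parts is so attractive.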

Eric Richards

SlimDX tutorials - http://www.richardssoftware.net/

Twitter - @EricRichards22

Basically the cost of manufacturing comes down to yield in one way or another -- yield is the number of usable chips you get per wafer. Producing chips, especially at the bleeding edge of process technology, is an imperfect practice. Material or process flaws can cause all or part of a chip to fail to function to spec, so it has to be thrown away. Multi-core chips are not single-core dies wired together inside your CPU; they're typically a single piece of silicon containing many cores, large caches, control logic, memory and PCIe interfaces, etc. The larger the cache, or the greater the number of cores, the larger that single piece of silicon must be -- thus it has more exposure to flaws, and it results in a greater loss when a flaw can't be worked around. Sometimes a company can salvage the part by disabling the flawed portion -- sometimes you see CPUs with less cache, or quad-core dies sold as dual-core CPUs, where two cores were disabled because at least one of them was flawed.
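
A sketch of how that exposure to flaws turns into cost: with the classic exponential (Poisson) yield model, yield falls off as die area grows, so the cost of each good die grows faster than the area itself. All of the numbers below (wafer cost, defect density, die sizes) are invented for illustration:

from math import exp, pi

wafer_cost = 5000.0          # illustrative wafer cost in dollars
wafer_diameter_mm = 300.0
defects_per_mm2 = 0.001      # illustrative defect density

wafer_area = pi * (wafer_diameter_mm / 2) ** 2

for die_area in (100, 200, 400):             # mm^2: more cores / bigger caches
    dies_per_wafer = wafer_area / die_area   # ignores edge loss for simplicity
    die_yield = exp(-defects_per_mm2 * die_area)   # Poisson yield model
    good_dies = dies_per_wafer * die_yield
    print(f"{die_area} mm^2: yield {die_yield:.0%}, "
          f"cost per good die ${wafer_cost / good_dies:.2f}")

Doubling the die area halves the number of candidate dies per wafer and lowers the yield of each one, so in this toy run the cost per good die climbs from about $8 at 100 mm^2 to about $42 at 400 mm^2 -- more than a 4x increase for 4x the area.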

Of the dies they can sell as CPUs, no two dies are created equal -- some are better suited to operating at very high speeds, some are better suited to operating at very low power/heat; usually for the fastest processors with the most cores, you need both of these properties to achieve high core count and high speeds without exceeding the design limits of the platform by drawing too much power or producing too much heat. These are the most perfect dies of a process that already demands perfection, and are tangibly rarer, hence the higher price.

Now, the raw materials and man-hours of production cost basically nothing in the grand scheme of things. It's all about the R&D, divided between the chip design itself and the process technology; piggybacking on that is the cost of the fab equipment itself (which can be viewed as a sort of fixed-cost component of process R&D). These costs are responsible for the sticker price of your CPU, and even though two different processors have essentially the same (very small) manufacturing costs, Intel or whomever can sell the more desirable of the two (whether that be greater frequency or lower power) at a higher premium -- the most profitable way for them to do business is to get as much as the market will bear for the most desirable chips, and also to sell as many of the least desirable chips as they can at any price -- so long as it's profitable at all and not priced so low that it cannibalizes sales of SKUs in the next bracket up.

throw table_exception("(? ???)? ? ???");

I am not truly qualified to give a fact-based answer because I've never worked in that field. But my belief is that the price you pay for a CPU is 100% illusory. Silicon is the second most abundant element our planet is made of, and extracting/refining it is a very easy process. That suggests that a silicon wafer costs "nothing", and ten times the amount of silicon still costs "the same" compared to other production costs. Also, moving down the nanometer ladder, as you call it, means making the chip smaller, not larger, which means you can fit more chips onto the same surface. So, if it affects production cost in any way, the cost ought to go down.


Also, moving down the nanometer ladder, as you call it, means making the chip smaller, not larger, which means you can fit more chips onto the same surface. So, if it affects production cost in any way, the cost ought to go down.

Except it doesn't, because they don't use these advances to make the same chips smaller; they use them to pack more transistors into newer, more capable processors. The die size of a typical CPU has stayed relatively stable, and it's die size and process maturity that mainly determine the production cost. The modularity of simply adding a few more identical CPU cores, or GPU cores, or a larger cache makes it easy to scale die size up as much as they wish; it's really only bounded by the likelihood of defects creeping in, and by whether the market will bear the resulting price with margins that make the effort worthwhile.

Even still, production costs are only a small, small part of the cost of developing and manufacturing a CPU -- the price you pay isn't illusory, it's just tied up in things less tangible than silica sand.

throw table_exception("(? ???)? ? ???");

Just because you go "down the nm ladder" doesn't always mean you can cram more onto the same chip or make the same chip in less space. You still have thermal issues to deal with. And while they do often cram more onto the same-sized chip, that doesn't mean it is always an easy task either.

Price-wise, much of the decision making comes from binning chips. A half dozen different models can all come out of the same process, even off the same wafer, and the specific model any given chip becomes gets determined by testing. The best chips that pass all the QA benchmarks get binned as the highest-tier units, while those with flaws get modified and used for lower-tier models. Minor flaws leading to thermal issues? Lock it to a lower clock speed and sell it at a discounted price. One of the four cores failed? Disable the pair involved and sell it as a dual-core model. Memory tests fail? Mark out the bad sections and use it as a lower-spec unit.
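
A toy version of such a binning decision, just to show the flow; the test fields, thresholds and SKU names below are made up for illustration and aren't any vendor's actual process:

# Toy binning logic: map per-die test results to a salable SKU.
# Field names, thresholds and SKU names are invented for illustration only.
def bin_die(good_cores, max_stable_ghz, cache_ok):
    if good_cores >= 4 and max_stable_ghz >= 3.5 and cache_ok:
        return "quad-core, 3.5 GHz (top bin)"
    if good_cores >= 4 and max_stable_ghz >= 3.0 and cache_ok:
        return "quad-core, 3.0 GHz (clock/thermal limited)"
    if good_cores >= 2 and max_stable_ghz >= 3.0:
        return "dual-core (two cores fused off)"
    return "scrap"

# A few dies off the same wafer ending up as different products:
for die in [(4, 3.7, True), (4, 3.1, True), (3, 3.4, True), (2, 2.4, False)]:
    print(die, "->", bin_die(*die))

The real tests and cut lines are obviously far more involved, but the principle is the same: one design, one wafer, several price points.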

The specifics depend on the run and the chips being made. (There are, after all, far more chips being made that aren't CPUs than chips that are.)

So you design a chip, do an initial run for yield tests, conduct binning tests, and then look at the market and decide what the best prices and binning points are. From my understanding it isn't unusual for many of the lowest-tier chips to actually pass spec for a far higher bin than the one they're sold in.

Old Username: Talroth
If your signature on a web forum takes up more space than your average post, then you are doing things wrong.

