
Moore's Law

Started by February 02, 2013 07:59 AM
16 comments, last by Sirisian 11 years, 9 months ago

How do you suspect the slowdown of Moore's law will be mitigated, in the event that it does begin to slow? Some information I've read online suggests that it may already be slowing and that quantum tunneling presents a major hurdle.

Yes, we have generally reached a limit, which means we won't be able to clock our processors any faster until they are made of something other than silicon. In the absolute worst case, where we are no longer able to develop better technology, Moore's Law degenerates into a "parallel processing" argument which states that we can double computational power by doubling the number of cores, which we won't be able to keep doing for very long because of spatial constraints (since we can't make the cores any smaller either). And many applications do not scale linearly with the number of execution units.
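As a rough illustration of why doubling cores doesn't double real-world performance, here is a minimal sketch of Amdahl's law; the parallel fraction and core counts are made-up example values:

```cpp
#include <cstdio>

// Amdahl's law: if a fraction p of the work can be parallelized,
// the best possible speedup on n cores is 1 / ((1 - p) + p / n).
double amdahl_speedup(double p, int n)
{
    return 1.0 / ((1.0 - p) + p / n);
}

int main()
{
    const double p = 0.90; // assume 90% of the program parallelizes
    for (int n : {2, 4, 8, 16, 64, 1024})
        std::printf("%4d cores -> %5.2fx speedup\n", n, amdahl_speedup(p, n));
    // Even with 1024 cores the speedup is capped near 10x,
    // because the 10% serial portion dominates.
}
```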

Fortunately, "moar power" isn't the only way to make stuff work faster. In fact, there are much more important factors, including memory access patterns, I/O dependencies, SIMD/MIMD and of course the odd algorithmic improvement. These make a huge difference in the running time of a computationally heavy workload, perhaps more so than just throwing more, faster cores at it. There are also a bunch of specialized tricks hardware manufacturers keep introducing to make common operations work that little bit faster (think horizontal add, fused multiply-add, etc..). I'm sure they have a few tricks up their sleeves for the next few years to compensate.

Though I think they are banking mostly on adding more cores to everything while keeping power usage steady. At least GPUs can afford to, because most graphics rendering scales almost perfectly; CPUs don't have that luxury and can only do so much to speed up badly designed single-threaded programs. There will be a gradual paradigm shift to parallel computing; we may even integrate GPU technology into our current CPUs to create a unified general-purpose parallel processor, and our development patterns will adapt accordingly. Perhaps "cloud computing" will really take off, perhaps it won't. This will take a long time, though -- don't expect to see any of this for at least eight years, and perhaps much longer.

But, in the limit, yes, from our current understanding of physics, Moore's Law will simply cease to apply until our technology is built from something other than matter. You can only double anything so many times before it becomes too large to handle (ever heard of the rice and chessboard anecdote?)
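For a back-of-the-envelope feel for how quickly repeated doubling blows up, here is a tiny sketch; the starting figure of roughly 2,300 transistors is the oft-quoted count for a 1971 microprocessor, and the two-year doubling period is the usual statement of the law:

```cpp
#include <cstdio>
#include <cmath>

int main()
{
    const double start = 2300.0;  // rough transistor count of a 1971 microprocessor
    for (int year = 1971; year <= 2031; year += 10)
    {
        double doublings = (year - 1971) / 2.0;  // one doubling every two years
        std::printf("%d: ~%.2e transistors\n", year, start * std::pow(2.0, doublings));
    }
    // Sixty years of doubling every two years is a factor of 2^30 --
    // roughly a billion times the starting count. Doublings run out fast.
}
```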

In any case, consumer CPUs do not need more power right at the moment. They are more than adequate. What really needs to be addressed is not the hardware but the software.



"Moores' Law" annoys me, mostly because when it's used by anyone (and, tbh, gamers are the worst for this) they seem to give it the same weight as the law of gravity in a 'this must happen!' sense... Should have been called 'Moore's observation of the past and projection for the future'.... *grumbles*

"Moores' Law" annoys me, mostly because when it's used by anyone (and, tbh, gamers are the worst for this) they seem to give it the same weight as the law of gravity in a 'this must happen!' sense... Should have been called 'Moore's observation of the past and projection for the future'.... *grumbles*

This. Years ago, when people first started talking about it, my first thought was "bullshit." To the best of my recollection, I've never used the term in any conversation, simply because I never believed it. I'm not any kind of genius prognosticator, but it doesn't take one to intuit that there are hard limits on scaling and speed that Moore's so-called Law can't circumvent.

It's not like Moore himself said that it would continue indefinitely. In the 1960s he wrote a paper saying transistor densities were doubling every two years and predicted the trend would continue for at least ten years. We hit the expiration date on his prediction around thirty years ago.

I found this article interesting - the author argues that Moore's Law has already started slowing down, but that we can (and are, and inevitably will) take advantage of similar trends to keep pace in other ways. I don't have the requisite knowledge to critically examine the article - hardware ain't my field - so I'd be interested to hear what others think of it.

It seems that everything will continue normally for at least 8 years, since 5 nm is on Intel's roadmap. It's hard to imagine anything much smaller than 5 nm being possible with technologies whose core component is photolithography...

IBM has been doing research on stacking multiple dies and interconnecting them vertically with thousands of interconnects per square millimeter, with special cooling micro-channels taking care of the generated heat. With such an approach it might be possible to keep increasing computational capability and probably also drive costs down by not replacing the manufacturing equipment as often.

Meanwhile, there is research in nanoelectronics, namely into methods for putting trillions of transistors on a single chip. This paper (http://www.ee.washington.edu/faculty/hauck/publications/NanoSurvey.pdf) suggests that these trillions of devices will probably have to be laid out in a regular pattern on the chip, and that a large number of defective devices per chip will be the norm, so some method of avoiding the defective devices will be necessary.
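To illustrate what "avoiding the defective devices" might look like, here is a minimal sketch of a spare-unit remapping table, similar in spirit to the spare rows already used in DRAM; the class and its names are invented for the example:

```cpp
#include <vector>
#include <cstddef>
#include <stdexcept>

// Map logical unit indices onto physical units, skipping those that
// failed a post-manufacture test. Defects are tolerated as long as
// enough physical units remain.
class DefectMap
{
public:
    explicit DefectMap(const std::vector<bool>& defective)
    {
        for (std::size_t phys = 0; phys < defective.size(); ++phys)
            if (!defective[phys])
                remap_.push_back(phys);   // only healthy units become addressable
    }

    std::size_t usable_units() const { return remap_.size(); }

    std::size_t to_physical(std::size_t logical) const
    {
        if (logical >= remap_.size())
            throw std::out_of_range("not enough working units");
        return remap_[logical];
    }

private:
    std::vector<std::size_t> remap_;      // logical index -> physical index
};
```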

Bacterius mentioned architectural and algorithmic improvements. True, a good choice of algorithm may make a program execute thousands of times faster, and choosing the right architecture might speed a program up several times over, but currently we have the luxury of BOTH architectural/algorithmic improvements and a nice 2x speedup every 20-24 months. If manufacturing technology had stopped evolving in 2007, we still wouldn't be able to play Crysis 1 on a cheap gaming PC!
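To put a number on the "thousands of times faster" point, here is a small sketch comparing a linear scan with a binary search over the same sorted data; the container size is an arbitrary example value:

```cpp
#include <algorithm>
#include <vector>
#include <cstddef>
#include <cstdio>

int main()
{
    const std::size_t n = 10000000;
    std::vector<int> data(n);
    for (std::size_t i = 0; i < n; ++i)
        data[i] = static_cast<int>(i * 2);            // sorted even numbers

    const int target = static_cast<int>((n - 1) * 2); // worst case for the scan

    // O(n): examines ~10 million elements for this query.
    bool found_linear = std::find(data.begin(), data.end(), target) != data.end();

    // O(log n): ~24 comparisons for the same query.
    bool found_binary = std::binary_search(data.begin(), data.end(), target);

    std::printf("linear: %d, binary: %d\n", found_linear, found_binary);
    // Same answer, wildly different amount of work: n versus log2(n) steps,
    // with no new hardware required.
}
```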

"Moores' Law" annoys me, mostly because when it's used by anyone (and, tbh, gamers are the worst for this) they seem to give it the same weight as the law of gravity in a 'this must happen!' sense... Should have been called 'Moore's observation of the past and projection for the future'.... *grumbles*

A "law" is an observation though - a simple relation of observed behaviour. Not all scientific laws have the same weight as that of gravitation either, e.g., gas laws which are just an approximation. Perhaps it's unfortunate that people misunderstand what "law" is (same with "theory"), but that applies to all usages of the term.


Since everything must be portable and wireless and green nowadays (bleh), it's likely that Moore's Law (which is purely a marketing gag) will be replaced by some other bullshit, such as "CPUs will consume 30% less power every year". CPUs with integrated GPUs are another such marketing gag. Some nonsense will be thought of to give you an incentive to buy a new processor, rest assured. Maybe the new SSE6.3 instructions, which are only supported on 0.5% of all CPUs and useful for 0.01% of all programs, and have different opcodes between Intel and AMD. And of course there will be support for SHA-3 instructions.

GPU manufacturers are going the "low power" route already, though in this case I somewhat approve (having to buy a bigger power supply unit because the GPU alone draws 300-400W is simply not acceptable). With Kepler, nVidia -- which probably has the longest history of ultra power-hungry hardware on the planet -- for the first time brought out a new chip with a lot more silicon and only marginally better performance, but with half the power consumption.

When CPU speeds are maxed out and more speed is desired, there's still memory bandwidth to be dealt with. I'm sure it is possible to make memory faster; there just hasn't been much interest in the past (because L2 cache works reasonably well). However, faster RAM obviously makes a faster computer, too.
If Moore's law slowed to a halt right now, but we still wanted faster, smaller PCs, then we'd just have to throw out all our software and start again -- this time teaching everyone multi-core practices from the get-go, so that we could put a hundred '90s-era CPU cores onto a chip and actually have them be utilized (which was the idea behind Larrabee).
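As a tiny sketch of that "multi-core from the get-go" style, here is an embarrassingly parallel sum split across however many hardware threads are available; the workload itself is just a placeholder:

```cpp
#include <vector>
#include <thread>
#include <numeric>
#include <cstddef>
#include <cstdio>

int main()
{
    const std::size_t n = 1 << 24;
    std::vector<double> data(n, 1.0);          // placeholder workload

    unsigned workers = std::thread::hardware_concurrency();
    if (workers == 0) workers = 4;             // fall back if the count is unknown

    std::vector<double> partial(workers, 0.0);
    std::vector<std::thread> pool;

    for (unsigned w = 0; w < workers; ++w)
        pool.emplace_back([&, w] {
            // Each thread sums its own contiguous slice -- no sharing,
            // no locks, so adding cores adds throughput.
            std::size_t begin = n * w / workers;
            std::size_t end   = n * (w + 1) / workers;
            partial[w] = std::accumulate(data.begin() + begin,
                                         data.begin() + end, 0.0);
        });

    for (auto& t : pool) t.join();

    double total = std::accumulate(partial.begin(), partial.end(), 0.0);
    std::printf("sum = %.1f using %u threads\n", total, workers);
}
```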

Hardware manufacturing is pretty foreign to me, but AFAIK, chip design is still mostly a 2D affair -- Moore's law is about surface area, not volume. So, once we hit an area limit, it's time to start stacking vertically, and then racing to reduce the thickness of those layers too... ;|

Power usage, as mentioned above, is another factor that needs to be addressed. Moore's law has been driving this both up and down -- down when things are shrunk, then back up when more shrunken things are crammed into each chip. Biological computers (us) absolutely put computers to shame when it comes to power usage, so there's a lot of work that needs to continue here after Moore's law fails.
