In your opinion, do you think that any new start-up company could become serious competition for Intel and AMD in the consumer PC CPU market, all while creating a new architecture and instruction set (i.e. not ARM or x86, but something new)? I understand that ARM and mobile processors are the place to be for any CPU company right now, but do you think it is economically possible for a startup to garner enough resources and staff to create a cutting-edge 10nm chip (probably 5nm in the near future) for the average PC/server market that seriously competes with Intel and certain AMD products in 6-7 years or less?
Do You Think Any REAL Start-Up Competition Could Arise For The Intel/AMD Empires?
C dominates the world of linear procedural computing, which won't advance. The future lies in MASSIVE parallelism.
A new architecture would be refreshing...
5nm is pretty small; I believe the width of an atom is about 1nm, so things can't really get much smaller than maybe 3nm, IMO.
What I hate most is that since we can't get transistors much smaller, we keep adding cores, but in the future who will need 32-64 cores?
Except for some highly parallelizable applications like 3D rendering and video encoding, and maybe compression, it's a lot of wasted power.
Even 4-8 cores today sleep most of the time under normal average-user usage.
And processor speeds have been pretty much stagnant at 3-4GHz for a while now.
Is it just me, or do hard drives seem to suffer from the same syndrome too? I bought a 2TB drive 1.5 years ago and it's still the same price now...
1nm is still relatively large compared to an atom (http://hypertextbook.com/facts/MichaelPhillip.shtml), and the nucleus itself is far smaller than the electron cloud surrounding it.
Also, you might want to read up on electron spin gates (I forget the exact name, so someone correct me), followed by quantum computing. We still have plenty of room for development when it comes to making things smaller.
As for the OP, there might be a sliver of a chance given Intel's stance of soldering their future chips onto motherboards (http://semiaccurate.com/2012/11/26/intel-kills-off-the-desktop-pcs-go-with-it/), but you'll likely need serious startup capital, and to be competitive you'll likely be selling at a loss until you've got your own manufacturing lineup. Let's also not forget that creating a new architecture means you need to get Microsoft on board to make Windows available to your potential customers. You might be sitting there willing to say "screw Windows", but then you might as well just give away your chips if you don't get the OS with the biggest market share on your side.
Alternatively, you might be able to swing a deal with Google and their Chromebook laptops; if you could gain exclusivity for Chrome OS being built on top of your processor, you might have a foot in the door that way.
ARM is already outselling them and Intel is terrified. ARM has ~90% of the mobile market (everything that isn't an iPhone). Even Windows 8 runs on ARM.
Intel built an expensive fab in the US and is so over capacity that they were looking to resell fab time.
Actually, thank you, I was just trying to figure out what company besides Dell to short.
In the future, everyone is going to need 64 cores, because future software is going to be written to run well on 64 simple, low clock-speed cores.
The only reason we need complicated, high-clock-speed individual cores at the moment is because we all suck at writing software. It's a myth that only certain types of problems can be parallelized...
Look at the brain: billions of simple processors, running at rates measured in Hz instead of GHz, using only about 20W. That's pretty energy efficient.
GPUs are the same -- their evolution into using many cores has been driven largely by energy efficiency. Running lots of simpler cores is more efficient than a single super-fast and complicated one. The problem right now is a software one -- we need to unlearn a lot of engineering knowledge and re-learn it for a better type of computer.
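To make the "lots of simple cores" point concrete, here's a minimal sketch (my own toy example, not taken from any real product) of splitting an embarrassingly parallel loop across 64 POSIX threads in C; the thread count, array size, and the scale_slice name are arbitrary placeholders:

#include <pthread.h>
#include <stdio.h>

#define NUM_THREADS 64       /* pretend we have 64 simple cores */
#define N (1 << 20)          /* arbitrary workload size */

static float data[N];

struct slice { size_t begin, end; };

/* Each worker touches only its own slice -- no locks, no sharing. */
static void *scale_slice(void *arg)
{
    struct slice *s = (struct slice *)arg;
    for (size_t i = s->begin; i < s->end; i++)
        data[i] *= 2.0f;     /* trivially parallel per-element work */
    return NULL;
}

int main(void)
{
    pthread_t threads[NUM_THREADS];
    struct slice slices[NUM_THREADS];
    size_t chunk = N / NUM_THREADS;

    for (size_t i = 0; i < N; i++)
        data[i] = (float)i;

    /* Hand each thread a disjoint chunk of the array. */
    for (int t = 0; t < NUM_THREADS; t++) {
        slices[t].begin = t * chunk;
        slices[t].end   = (t == NUM_THREADS - 1) ? N : (t + 1) * chunk;
        pthread_create(&threads[t], NULL, scale_slice, &slices[t]);
    }
    for (int t = 0; t < NUM_THREADS; t++)
        pthread_join(threads[t], NULL);

    printf("data[12345] = %f\n", data[12345]);
    return 0;
}

Compile with something like gcc -O2 -pthread; the point is just that per-element work like this divides cleanly across however many simple cores you happen to have.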
Third parties also have more of an opening now, since if things like MMX and SSE are omitted, most of the rest of the x86 ISA could be implemented without raising any patent issues.
The rest would basically be pulling a Zilog: formally renaming the registers and many of the instruction mnemonics to sidestep possible copyright issues (though we may not even need this much).
say:
AL -> R0B, CL -> R1B, ...
AX -> R0W, CX -> R1W, ...
EAX -> R0D, ECX -> R1D, ...
LD R0D, [R5D+0x2C]
ST [R5D-0x18], R0D
Then assemblers would just "quietly" accept the original names.
(This would be binary compatible with pretty much all 32-bit code compiled with default compiler settings, basically representing a 486DX- or Pentium 1-like subset.)
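As a rough illustration of the "quietly accept the original names" idea, here's a tiny hypothetical alias table in C of the kind an assembler's tokenizer might consult (the mapping just follows the renaming above; none of this is from a real assembler):

#include <stdio.h>
#include <string.h>

/* Hypothetical alias table: legacy x86 register names mapped onto the
 * renamed R0..Rn forms, so the assembler accepts either spelling. */
struct reg_alias { const char *legacy; const char *renamed; };

static const struct reg_alias aliases[] = {
    { "AL",  "R0B" }, { "CL",  "R1B" },
    { "AX",  "R0W" }, { "CX",  "R1W" },
    { "EAX", "R0D" }, { "ECX", "R1D" },
    /* ... remaining registers would follow the same pattern ... */
};

/* Translate a legacy name to the renamed form; unknown names pass through. */
static const char *canonical_reg(const char *name)
{
    for (size_t i = 0; i < sizeof(aliases) / sizeof(aliases[0]); i++)
        if (strcmp(name, aliases[i].legacy) == 0)
            return aliases[i].renamed;
    return name;
}

int main(void)
{
    printf("EAX -> %s\n", canonical_reg("EAX"));  /* prints R0D */
    printf("R5D -> %s\n", canonical_reg("R5D"));  /* already canonical */
    return 0;
}

Anything not in the table falls through unchanged, so code written against the renamed registers assembles exactly the same way.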
(64-bit is a little more of an issue at this point, since a binary-compatible x86-64 would require licensing...)
But, FWIW, x86 is kind of hard to displace at this point.
It has yet to be seen whether ARM will actually make any inroads into traditionally x86-dominated spaces (laptops, desktops, servers, ...), nor, for that matter, whether laptops or desktops will actually die off.
I suspect most people are doing the whole thing of having a desktop, a laptop, and a tablet. Tablets, being the new thing, will have higher sales, since pretty much everyone already has a desktop, and in recent years there hasn't been that much reason to buy a new desktop every few years (say, when 3 or 4 years later the new chips are only marginally faster than they were a few years ago...).
Like, people are saying "well, sales are dropping, people must be moving away from desktops", rather than, say, "sales are dropping, maybe the market is saturated...".
"Terrified" is a bit of an over statement.ARM is already outselling them [in the mobile space] and Intel is terrified.
Intel admit they made a mistake with a lack of focus on mobile but since they started focusing on the power issue (an engineer at GDC mentioned to a guy I work with that they simply hadn't considered power until recently) they have made great strides. If anything I would say that ARM/Apple should probably looking over their shoulder as they won't have it their own way for too much longer...
Honestly, if I recall correctly, one of the biggest issues is that current multicore CPUs still have a single MMU shared among all cores. This prevents running multiple processes at the same time. If each core had its own MMU (or at least something that allowed separate processes to run simultaneously), that alone would improve performance a lot, because all modern OSes are multitasking and run quite a large number of processes. Heck, maybe that could encourage sticking to single threading for simple stuff, because that would leave more room for the other processes that need the time.
But yes, we do have a software problem. Even with current hardware it could be a lot better... I don't get it: calculations became much faster, but interfaces became less responsive over time. At least that's the impression I get.