Hi,
Once in a while I read somewhere that we now have the technology for a 128-bit OS. Why the wait?
Personal life and your private thoughts always affect your career. Research is the intellectual backbone of game development and the first order. Version Control is crucial for full management of applications and software. The better the workflow pipeline, the greater the potential output for a quality game. Completing projects is the last but finest order.
by Clinton, 3Ddreamer
Going from 32-bit to 64-bit operating systems had some growing pains at first, for a year or so.
I'm not ready for 128-bit :p.
We used 32-bit pointers because 2^32 = 4,294,967,296 bytes (4 GiB) was more than enough address space for each application/the kernel.
We use 64-bit pointers because 2^64 = 18,446,744,073,709,551,616 bytes (16 EiB) is more than enough address space for each application/the kernel.
Once you need to address more bytes than that at one time, then we'll go ahead and upgrade to a 128-bit address space!
BTW, modern CPUs operate on 8-, 16-, 32-, 64-, 128- and 256-bit values... it's not black and white like it used to be.
Maybe you're working on a 32-bit value in a 64-bit GPR, the address space is 64-bit, but some devices are using 48-bit buses... meanwhile your CPU cache works on 512-bit (64-byte) chunks, so the low 6 address bits just select a byte within a cache line, effectively reducing line-granular RAM addressing to 58 bits...
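As a quick illustration of that mixed-width reality (my own sketch, not from the post): on a typical x86-64 compiler, a 32-bit integer, a 64-bit pointer, and a 128-bit SSE2 register all coexist in the same program.

#include <emmintrin.h>  /* SSE2 intrinsics: 128-bit values */
#include <stdint.h>
#include <stdio.h>

int main(void) {
    int32_t  a = 42;                 /* 32-bit arithmetic value */
    int32_t *p = &a;                 /* reached through a 64-bit pointer */
    __m128i  v = _mm_set1_epi32(a);  /* four 32-bit lanes in one 128-bit register */
    v = _mm_add_epi32(v, v);         /* a single 128-bit-wide operation */

    /* Prints 4, 8, 16 on x86-64. */
    printf("int32_t: %zu, pointer: %zu, __m128i: %zu\n",
           sizeof a, sizeof p, sizeof v);
    return 0;
}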
. 22 Racing Series .
We have the technology to send cows to the moon. Why the wait?
When will the cows come home?
I sense that in some ways hardware and software technology is under-achieving. I expected the next big leaps by now.
Personal life and your private thoughts always affect your career. Research is the intellectual backbone of game development and the first order. Version Control is crucial for full management of applications and software. The better the workflow pipeline, the greater the potential output for a quality game. Completing projects is the last but finest order.
by Clinton, 3Ddreamer
I sense that in some ways hardware and software technology is under-achieving. I expected the next big leaps by now.
Welcome to monoculture.
Before the 1980s, every computer was effectively unique. Buying a new computer meant employing a team of programmers to rebuild your software, tune it to the hardware, and uniquely fill your needs.
The bad news was that computers were expensive, software was expensive, and systems did not play well together.
People wanted that to change, so that software could be moved between systems.
One of the great ironies here involves both Unix and C.
In the late 1960s and early 1970s, Ken Thompson, Dennis Ritchie, and several others were tired of porting every piece of software. The UNIX environment was an attempt to provide a common operating system; C was an attempt to provide a common programming language. When a new system was released, you could modify the compiler slightly, or modify the binary interface for the key Unix functions, and suddenly all your software would work.
Brilliant idea!
When it was released and discussed with others, the people who had ALWAYS been in the mindset of modifying software to fit their needs, and who were still required to modify it to fit their custom hardware, continued down that path. They figured, "We modify languages all the time, let's modify C," and "We modify operating systems all the time, let's modify UNIX."
And so because of history, when Thompson, Ritchie, and various others launched these unification systems on the world, the world responded by fragmenting them. Dozens of UNIX variants, hundreds of C variants. AT&T and Berkeley managed to rein the UNIX world back in to a small number using the shared POSIX standard, and the K&R C book brought the fragmented language back to a more manageable number.
Today we've got a very small number of operating systems rather than a unique OS for every chip that gets released. Very nearly everything from real-time microprocessors to large mainframes has a UNIX variant. The Windows family is probably the only big exception. Just about everything else out there (QNX, Haiku/BeOS, HP-UX, AIX, OS X, Linux, BSD, the real UNIX, and the rest) is based on the POSIX standardization of UNIX.
And for languages, there were hundreds of C variants, since every lab customized it. When the K&R C book tried to standardize it, many labs balked, but ultimately it was recognized as the official version of the language. When C was standardized the variations collapsed, with the two major offshoots becoming known as Objective-C and C++. As new languages have been introduced, most of them have been standardized and controlled to prevent wide fragmentation.
When every device had its own custom operating system there was a lot of unique functionality. Hardware companies like IBM, Intel, and DEC used to build creative hardware variants and operating system features to support them. Programmers were then expected to customize their programs to take advantage of those features. New languages were common because they were actually needed, and because software was constantly being rewritten out of necessity. The culture was diverse, but the diversity came with an incredible maintenance and support cost.
Today it is all about compatibility with the core features of the monoculture. There is still some tentative branching out. Companies will add a few bonus features here and there and hope they stick, but they still need to remain close to the existing core. Add an extension for SIMD. Add an extension for a faster data bus. Keep the externally-visible interface identical but add an Out-Of-Order core. We don't get the rapid succession of radically different processors, and the modifications are slow and take time to get incorporated and cross-licensed. There is a monoculture that slowly evolves, and it comes with relatively inexpensive support costs.
The Wikipedia footer template demonstrates this beautifully for the x86 family: MMX (1996), 3DNow! (1998), Streaming SIMD Extensions (SSE, 1999), SSE2 (2001), SSE3 (2004), Supplemental SSE3 (SSSE3, 2006), SSE4 (2006), SSE5 (2007), Advanced Vector Extensions (AVX, 2008), F16C (2009 AMD, 2011 Intel), XOP (2009), FMA instructions (FMA4: 2011; FMA3: 2012 AMD, 2013 Intel), bit manipulation instructions (ABM: 2007, BMI1: 2012, BMI2: 2013, TBM: 2012), and AVX-512 (planned, 2015).
Newer x86 processors support all these fancy additions. You probably have code libraries that support SSE, SSE3, maybe even SSE4. But few libraries will include SSE5; it is still too new. AVX may be nice where supported, but support is fairly rare. FMA and the new bulk bit manipulators might be nice, but you won't see mainstream libraries rely on them for several years: we don't know which versions will become standard, which will die by the wayside, and which will get incorporated back into the core. It is generally less expensive to go slow, let a small number of innovators work out which extensions are most versatile, and then incrementally fold the survivors back into the core.
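One common coping strategy (a minimal sketch of runtime dispatch, not anything from the post; the function names are mine and it relies on GCC/Clang's __builtin_cpu_supports) is to ship a baseline path and take an extension path only when the CPU actually reports support:

#include <stddef.h>
#include <xmmintrin.h>  /* SSE intrinsics */

/* Baseline path: runs on any x86 CPU. */
static float sum_scalar(const float *v, size_t n) {
    float s = 0.0f;
    for (size_t i = 0; i < n; ++i) s += v[i];
    return s;
}

/* Extension path: four floats per iteration in a 128-bit SSE register. */
static float sum_sse(const float *v, size_t n) {
    __m128 acc = _mm_setzero_ps();
    size_t i = 0;
    for (; i + 4 <= n; i += 4)
        acc = _mm_add_ps(acc, _mm_loadu_ps(v + i));
    float lanes[4];
    _mm_storeu_ps(lanes, acc);
    float s = lanes[0] + lanes[1] + lanes[2] + lanes[3];
    for (; i < n; ++i) s += v[i];  /* leftover elements */
    return s;
}

/* Detect the feature at runtime and pick the best supported path. */
float sum(const float *v, size_t n) {
    return __builtin_cpu_supports("sse") ? sum_sse(v, n)
                                         : sum_scalar(v, n);
}

The same pattern scales up: add an AVX path, test for it the same way, and fall back gracefully on older hardware.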
So while on the one hand it would be nice to have more variation, it is also nice to have stability and not rewrite all your code every time new hardware is released. Diversity comes with cost. We need some diversity, but too much and the cost becomes too grievous for the risks and benefits.
Game consoles walk the middle line on that. Large portions of the code carry over thanks to a common language, but other large portions are completely replaced every generation. It is both a blessing and a curse: the industry pays a higher cost because of it, but we only pay it every few years, at each new console generation.
While in many ways I would love additional growth caused by wild diversity and competition, I am reluctant to pay the costs of constant change and massive continuous rewrites. I don't know of any good ways to get the growth and change without the costs.
I sense that in some ways hardware and software technology is under-achieving. I expected the next big leaps by now.
What exactly do you see as the benefits of a 128-bit OS over a 64-bit OS NOW?
Leaps should only be expected where they will provide some tangible benefit.
We do not even have a proper 64-bit operating system yet. Do that first.
A proper 64-bit operating system should have no such thing as 32-bit file pointers (with separate syscalls for the 64-bit versions) or 32-bit timers. A proper 64-bit operating system should not have any such thing as WOW64, let alone run half of its components on top of it.
A proper 64-bit operating system should do anything that could reasonably be expected to exceed the range of 32 bits (files?) in 64 bits, no exceptions. It should allow for a "small address space" ABI like Linux x32 (since most applications really don't need 64-bit pointers, but they can most definitely use the extra registers!), but it should otherwise only allow proper 64-bit applications. No hacks, no extra layers, no botching.
Such a "small address space" feature could be implemented simply by setting a per-process flag so that any syscall that returns a pointer returns one in the 32-bit range (with the top bits zero), and all shared libraries are loaded into a 32-bit-addressable region, too. That's really "just about good enough" as far as 32-bit support goes.
DCAS with 128 bit pointers would be fun... :(
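For context on that lament (a hedged sketch of my own, not from the thread): lock-free code often packs a pointer and a version counter into one double-width unit so a single compare-and-swap updates both, the classic ABA-problem workaround. With 64-bit pointers that already requires a 128-bit CAS (x86-64's cmpxchg16b, enabled with -mcx16; linking with -latomic may also be needed). With 128-bit pointers the same trick would demand a 256-bit CAS that no mainstream CPU provides.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

/* Pointer plus generation counter, updated as one atomic unit. */
typedef struct {
    void     *ptr;  /* 8 bytes on a 64-bit system            */
    uintptr_t gen;  /* bumped on every successful swap (ABA) */
} tagged_ptr;       /* 16 bytes total: needs a 128-bit CAS   */

static _Atomic tagged_ptr head;

/* Install new_ptr only if nobody changed head since 'expected' was read. */
static bool try_replace(tagged_ptr expected, void *new_ptr) {
    tagged_ptr desired = { new_ptr, expected.gen + 1 };
    return atomic_compare_exchange_strong(&head, &expected, desired);
}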
Once in a while I read somewhere that we now have the technology for a 128-bit OS. Why the wait?
Build a business case and they will come.
Your business case will not include web/DB servers. They tend to run 32-bit OSes these days for various reasons, and a wider bus or register size is not generally the limiting factor in that multi-billion-dollar industry, although 64-bit systems are starting to creep in because of commodity pricing. Power consumption and heat dissipation are primary considerations in this industry, and many smaller devices help with the latter.
Your business case will not include personal (desktop) computers: 32 bits are fine for that; 64 bits are common but overkill for browsing pr0n and laying the red ten on the black jack. It's unlikely you will be able to upsell enough hipsters to recoup your sunk investment in hardware fabs and software development for a good business case.
Your business case will not include personal (mobile) computers: 32 bits are fine for that, and the limiting factors tend to be power consumption and network bandwidth, not register size or bus width.
Your business case may include games consoles, although the limiting factor there tends not to be CPU register size or bus width but memory bus width and response time, and the memory bus is already 128 or 256 bits wide on the GPU side. I'm not sure a good business case can be made for a disruptive hardware change with so little ROI, though.
It's likely your business case would involve high-bandwidth number crunching (AKA high-performance computing). This is already a niche market.
Your business plan will need to take into account the geometrically increased cost of fabrication, since wider bus and register sizes will require bigger silicon with a higher rejection rate (rising with the square of the bus width). Multiply that by the increased interconnect costs. Software-wise, you are going to need to handle the increased bus stalls and delays due to bottlenecks like serial communications and narrow hardware registers.
I suspect that these limitations will delay the introduction of 128-bit or 256-bit systems for a couple of generations, at least until a radically compelling use case is found (embedded autonomous AI in an android form factor anyone? Robot armies?) or radically new low-power low-error tech is perfected (positronics? biocomputing?). Until then, the business case is the constraint.
Stephen M. Webb
Professional Free Software Developer