
Are there any modern desktops (Windows, Linux, OS X) that use big endian?

Started by March 02, 2016 04:42 PM
9 comments, last by oliii 8 years, 8 months ago

Trying to see if there is a point in trying to write endian-independent code


This is one of those "If you have to ask, the answer is no" questions.

OS X now runs exclusively on the ia32e chipset, also called "x64" or "x86-64". Ages ago they were on 68K processors that were big endian, then about two decades ago they moved to PowerPC which were bi-endian (could be placed in big-endian or little-endian mode), and then about a decade ago moved to x86. The OS stopped supporting the older PowerPC hardware in 2009. In 2013 they moved to 64-bit only.

Windows has historically had versions that ran on other chipsets (MIPS, Alpha, PowerPC, ia-64/Itanium which is different from "x64", and ARM as recently as Windows RT) but I believe it currently only runs on the x86 chipset. Most of those were bi-endian chips but some were big-endian only. The ARM platform has mostly moved over to Windows Phone, or the tablet version Windows RT that is now discontinued, but both of these were also bi-endian chips running in little-endian mode. Windows supports both 32-bit ia32 (aka "x86") and 64-bit ia32e (aka "x64" and "x86-64") modes.

Linux runs on many different chipsets, both big endian and little endian (and even a few middle-endian systems). It also supports word sizes other than 32 and 64 bits. Despite its versatility, today you are unlikely to encounter environments that aren't little-endian and 64-bit.

There are some processors that still use big endian, but most of those have added bi-ordering, the ability to switch between big and little endian. Processors supporting big endian exclusively are nearly dead.

When is the answer "Yes"?

Network byte ordering is big-endian. If you choose to use network byte ordering, there are standard utilities like htonl()/ntohl() for converting values before they go over the wire. This will be clearly communicated: your network protocol will explicitly state that it uses "network byte order", i.e. big-endian format.
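For example, a minimal sketch wrapping the standard htonl()/ntohl() conversions (the wrapper names to_network/from_network are just illustrative):

```cpp
#include <arpa/inet.h>  // POSIX; Winsock's <winsock2.h> provides the same names
#include <cstdint>

// Convert a 32-bit value to network byte order before sending it over the
// wire. On a big-endian host htonl() is a no-op; on a little-endian host it
// byte-swaps. Either way the bytes on the wire end up big-endian.
inline std::uint32_t to_network(std::uint32_t host_value) {
    return htonl(host_value);
}

// Convert a value received off the wire back into host byte order.
inline std::uint32_t from_network(std::uint32_t wire_value) {
    return ntohl(wire_value);
}
```

The round trip is always the identity regardless of the host's endianness, which is why code using these helpers is portable without any #ifdefs.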

Many networking libraries and network-friendly languages like Java will convert to network byte ordering for you across the wire. This can usually be adjusted.

You can choose the byte ordering for your file formats, but there is little reason to use big-endian there. If you want big-endian, use it and document it; otherwise, these days it is generally assumed that values are written in little-endian format.

Finally, use big-endian if you know you are building something for a big-endian chip. You won't be supporting such a chip accidentally; it will be very intentional. You won't be writing code thinking "I think this is cross platform", you will be told specifically "This must work on Sparc64".

So unless you have a specific reason to support big-endian format, don't.

I'd additionally add that even if you are going to support big-endian systems for some reason, optimize for the common case. If 99% of your users are running little-endian hardware, then don't use the old "network byte order" for all your data, since that'll just result in needless byte swapping for said 99% of your users. Instead, explicitly document your files/protocol as being little-endian and then do the byte swapping on the big-endian systems.

Sean Middleditch – Game Systems Engineer – Join my team!


Windows has historically had versions that ran on other chipsets (MIPS, Alpha, PowerPC, ia-64/Itanium which is different from "x64", and ARM as recently as Windows RT) but I believe it currently only runs on the x86 chipset. Most of those were bi-endian chips but some were big-endian only. The ARM platform has mostly moved over to Windows Phone, or the tablet version Windows RT that is now discontinued, but both of these were also bi-endian chips running in little-endian mode. Windows supports both 32-bit ia32 (aka "x86") and 64-bit ia32e (aka "x64" and "x86-64") modes.

Windows does still run on ARM, as the core OS for PC, Phone, IoT/Embedded, etc is all converging. However, modern ARM processors are bi-endian for the most part and I'm 99% certain Windows' ARM ABI specifies little-endian, so for all intents and purposes you can consider Windows to be a little-endian platform.

OP -- if you're concerned, you might at least mark code that has endianness concerns even if you only implement the little-endian codepath for now; or you might encapsulate those operations in macros or functions (again, even if you leave the big-endian code out for now). Better to leave yourself some breadcrumbs if you're concerned you might need to add support in the future.
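One way to leave those breadcrumbs (the function name is hypothetical): wrap every endian-sensitive read in a small function. Assembling the value from individual bytes makes it endian-independent by construction, and the function boundary marks exactly where a big-endian fast path would go if ever needed.

```cpp
#include <cstdint>

// Reads a little-endian u32 from a byte buffer. Built from individual bytes
// with shifts, so it produces the correct result on any host, but keeping it
// behind a named function flags the endianness concern for future porters.
inline std::uint32_t read_u32_le(const unsigned char* p) {
    return  static_cast<std::uint32_t>(p[0])
         | (static_cast<std::uint32_t>(p[1]) << 8)
         | (static_cast<std::uint32_t>(p[2]) << 16)
         | (static_cast<std::uint32_t>(p[3]) << 24);
}
```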

throw table_exception("(? ???)? ? ???");

It depends on what CPU you use: most Intel CPUs are little-endian, ARM is big-endian, and some mobile phones are big-endian.

You can use a little code to test which endianness your platform uses.
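A minimal sketch of such a runtime test (the function name is made up): store a known multi-byte value and inspect which byte lands first in memory.

```cpp
#include <cstdint>
#include <cstring>

// Returns true when the low-order byte of a multi-byte integer is stored
// first in memory (little-endian), false when the high-order byte comes
// first (big-endian).
inline bool is_little_endian() {
    std::uint32_t probe = 1;
    unsigned char first_byte = 0;
    std::memcpy(&first_byte, &probe, 1);  // read the byte at the lowest address
    return first_byte == 1;
}
```

On today's desktop x86 and (almost all) ARM systems this returns true.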

NFrame - agile game server developing framework. Github:https://github.com/ketoo/NoahGameFrame

Unless you're looking to support end users working with decade-old computers (which will probably cause bigger headaches than just endianness issues), it's safe to assume little endian.



ARM is big-endian

ARM is bi-endian.

Almost all vendors run ARM chips in little-endian mode, although you may occasionally find a more exotic embedded device set to big-endian mode.

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]

You're more likely to run into trouble with different compilers using different padding rules, and perhaps different CPUs using slightly varying floating point rules (80 bit extended, anyone?) than you are to run into a big-endian system on the Internet today.
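To illustrate the padding point (struct name is arbitrary): the same field list can occupy different sizes under different compilers' alignment rules, which bites cross-platform serialization long before endianness does.

```cpp
#include <cstdint>

struct Header {
    std::uint8_t  tag;    // 1 byte
    std::uint32_t value;  // 4 bytes, typically 4-byte aligned
};
// On most ABIs sizeof(Header) is 8, not 5: the compiler inserts three padding
// bytes after 'tag' so that 'value' is aligned. Another compiler, target, or
// packing pragma can produce a different layout, which is why memcpy'ing
// structs straight onto the wire is unsafe even between little-endian hosts.
```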

The other good news is that, if in the future you suddenly had to port to a big-endian system, it's not the end of the world. Check where you call send() and recv(). Trace your code backwards from there, to where the data is put in. Call the appropriate swap macro at that point. A bit of tedious work, but not hard. And it's very unlikely you'll actually have to do it.
enum Bool { True, False, FileNotFound };

Yes, when Apple switched from PowerPC to Intel (and hence big to little endian), the endian change was far less painful than the change from x/0 == 0 to x/0 causes an exception...

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]


Trying to see if there is a point in trying to write endian independent code

At this point, and for games, there is probably little point in writing endian-independent stuff (though you never know what those-guys-with-consoles will come up with). However, you still need marshalling (passing raw C structures over the net is a Big No-No, not because of endianness but due to different alignments etc. etc.), and once you have a reasonably good marshalling library, making it endian-independent (a) is quite a small piece of work, and more importantly (b) endianness handling can be added later without *any* changes outside of your marshalling library.
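A sketch of that idea (all names hypothetical): marshalling field by field keeps compiler padding off the wire entirely, and concentrates every byte-order decision in two tiny helpers that can be changed later without touching any call site.

```cpp
#include <cstdint>
#include <vector>

// The single place where wire byte order is decided (little-endian here).
// Switching the protocol to big-endian later means editing only these two
// functions, not the rest of the codebase.
inline void put_u32(std::vector<unsigned char>& out, std::uint32_t v) {
    out.push_back(static_cast<unsigned char>(v));
    out.push_back(static_cast<unsigned char>(v >> 8));
    out.push_back(static_cast<unsigned char>(v >> 16));
    out.push_back(static_cast<unsigned char>(v >> 24));
}

inline std::uint32_t get_u32(const unsigned char* p) {
    return  static_cast<std::uint32_t>(p[0])
         | (static_cast<std::uint32_t>(p[1]) << 8)
         | (static_cast<std::uint32_t>(p[2]) << 16)
         | (static_cast<std::uint32_t>(p[3]) << 24);
}

// A hypothetical message, marshalled field by field, never memcpy'd whole:
// padding and alignment stay on the host side, off the wire.
struct PlayerUpdate { std::uint32_t id; std::uint32_t score; };

inline std::vector<unsigned char> marshal(const PlayerUpdate& m) {
    std::vector<unsigned char> out;
    put_u32(out, m.id);
    put_u32(out, m.score);
    return out;
}
```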

This topic is closed to new replies.
