
Why is X networked?

Started by June 07, 2005 11:37 AM
13 comments, last by frob 19 years, 4 months ago
Quote: Original post by 255
Quote: Original post by markr
What can be done with X cannot be done with VNC etc. That is, running multiple windows on the same display from different hosts.


http://metavnc.sourceforge.net/

But yes, I think I'm beginning to understand now. Even though the mechanism to send drawing commands doesn't have to be as generic as sockets, there ultimately has to be some way for programs to send drawing data in a well-defined format to a rendering system. X chose a general-purpose IPC system where others have chosen direct library calls to the kernel level (DirectFB, Windows(?)). If sockets are indeed fast enough to never become a bottleneck then X11 seems very well designed.
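To make that concrete, here is a minimal sketch of an X11 client in C (assuming Xlib is installed; compile with something like cc demo.c -lX11). Every call below gets translated by Xlib into protocol requests written to the socket that XOpenDisplay opened, and events come back over the same socket:

/* Minimal X11 client sketch. Nothing here knows or cares whether
 * the socket to the server is local (Unix-domain) or remote (TCP). */
#include <X11/Xlib.h>
#include <stdio.h>

int main(void)
{
    /* NULL means "use the DISPLAY environment variable"; Xlib opens
     * a socket to whatever server that names. */
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) {
        fprintf(stderr, "cannot open display\n");
        return 1;
    }

    int scr = DefaultScreen(dpy);
    Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, scr),
                                     10, 10, 200, 100, 1,
                                     BlackPixel(dpy, scr),
                                     WhitePixel(dpy, scr));
    XSelectInput(dpy, win, ExposureMask | KeyPressMask);
    XMapWindow(dpy, win);

    for (;;) {
        XEvent ev;
        XNextEvent(dpy, &ev);   /* reads events back off the same socket */
        if (ev.type == Expose)  /* drawing is just another request */
            XFillRectangle(dpy, win, DefaultGC(dpy, scr), 20, 20, 60, 40);
        else if (ev.type == KeyPress)
            break;
    }

    XCloseDisplay(dpy);
    return 0;
}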

Thanks for the replies everyone!


There is also an X extension, XShm (you might have it already), that doesn't push the image data through the socket at all but puts it in a SHared Memory (hence the name) segment instead. Whether this is noticeably faster, I don't really know.
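For the curious, a rough sketch of what that path looks like with libXext (assuming the MIT-SHM headers are installed; compile with -lX11 -lXext; error handling omitted). The pixel data lives in the shared segment, so only the small put-image request travels over the socket:

/* MIT-SHM ("XShm") sketch: the server maps the same memory we draw
 * into, so image bytes never cross the socket. Local clients only. */
#include <X11/Xlib.h>
#include <X11/extensions/XShm.h>
#include <sys/ipc.h>
#include <sys/shm.h>

XImage *create_shm_image(Display *dpy, XShmSegmentInfo *info,
                         int width, int height)
{
    if (!XShmQueryExtension(dpy))   /* not available for remote clients */
        return NULL;

    XImage *img = XShmCreateImage(dpy,
                                  DefaultVisual(dpy, DefaultScreen(dpy)),
                                  DefaultDepth(dpy, DefaultScreen(dpy)),
                                  ZPixmap, NULL, info, width, height);

    /* Allocate the segment and attach it in this process and the server. */
    info->shmid = shmget(IPC_PRIVATE, img->bytes_per_line * img->height,
                         IPC_CREAT | 0600);
    info->shmaddr = img->data = shmat(info->shmid, NULL, 0);
    info->readOnly = False;
    XShmAttach(dpy, info);
    XSync(dpy, False);  /* make sure the server has attached before use */

    return img;  /* later: draw into img->data, then XShmPutImage(...) */
}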

One interesting thing about all this is that other OSes are moving/have moved their windowing systems away from simple frame buffers to systems more like X. X was ahead of its time!
So yeah... the XFree86 X server that everyone used to use does suck. But not because it is networked. Now that X.Org has forked it and is improving it and fixing some of the crufty stuff, things will get better.

So yeah now I like owe you guys $0.02 for making you dumber... sorry... :p Now stop wasting your life away reading my post.

[Edited by - Null and Void on June 12, 2005 12:55:15 PM]
It is foolish for a wise man to be silent, but wise for a fool.
*sigh* This started out as one of the best threads on this topic I've ever seen, and now we're getting to the name-calling. There should be an option to prevent APs from posting in a thread, especially in sensitive ones like C++ vs. Java, Linux vs. Windows or X vs. the rest of the world.

The fact that X.org is/was able to fix "some of the crufty stuff there" (double buffering, proper damage handling, the Render extension), as it was eloquently put, is -- IMAO -- one of the best indications that the X protocol is indeed a Very Good Thing™. There have, however, been some proposals that should not be dismissed: shifting the responsibilities around, making the network layer an add-on, or moving the whole graphics stack closer to the kernel.
All better now.
All these interesting (and pretty much wrong) answers have been fun to read.

I'll start with some of the history, then go into the funner arguments. I'm going to ignore the trolls in the comments.

Back in 1983, MIT had a problem: too many acquired, incompatible computers. They started something called "Project Athena", working with the MIT geeks, DEC (at the time the best computer company), and IBM (second best).

Their stated goal was to build a network-based GUI windowing system. It had to be compatible with their existing network windowing code, called 'W', that was already used in classes. It had to be both hardware and vendor independent, since it needed to work with all the different types of donated computers. And it had to be able to run local and remote programs, as well as plain terminal displays, since many machines were just graphical dumb terminals.

They had several other goals. Go read some books by Jim Gettys, Robert Scheifler, and Ron Newman. These guys KNOW X, since they pretty much wrote it. Although Mr. Newman seems to have dropped out of public view, Jim and Robert are still big icons in the community.

Robert Scheifler was the project lead, and has done tons of stuff for X, Unix, and Open Source since then.

Jim Gettys has worked on a few things like the HTTP/1.1 protocol and networking on the HP iPaq, and has very recently come back to X. When asked about the benefits, he said "An X application can run anywhere in a network and use any display in a network. Designed from its inception to be a network-transparent environment, X is very popular in the Unix and Linux worlds. ... With X, I can log in and use my applications from anywhere. That’s been the cool thing from Day One."


Quote: I've seen various arguments for and against X. Some say the greatest feature of X is that it's networked while others argue that this makes it slow. For a long time I've wondered why should networking be part of a core graphics system. Operating windows or desktops remotely can be done (better) with separate software like VNC. Why is networking considered an advantage instead of bloat?

I think the above description of the history answers the initial 'for and against' arguments. Both the for and the against are true. It is one of the best features of X, and it is something that slows it down.

Why a networked core is considered an advantage is also answered. Because the protocol was designed from the start to be networked, it is very efficient for network display, unlike VNC and similar tools.

Very early on it was designed to use shared memory as a communications channel, although that wasn't true of the 386 ports until a bit later. It has been true of the VAX, R10000 (SGI), and Sun implementations, which explains why they have always run so much faster than the x86 versions.

Quote: The only advantage that I can see X having compared to e.g. Windows is running multiple instances at once and selecting the instance on which the client is to appear, but this could be done without networking as well.
X is networked by design.


How about this quote from Jim Gettys: "With PCs, you fundamentally have what we call "sneakernet." If you need to use an application somewhere, you need to physically walk to that machine, or more recently, install additional Microsoft software (like Windows Terminal Server) at additional expense. Inherent in X is that applications can run anywhere. I have used applications that were across the ocean from me."
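You can see this in the display string itself. A tiny sketch of the idea (the host name here is made up; any reachable X server that accepts the connection would do -- the same program, unmodified, draws on a local or a remote screen depending only on that string):

/* "somehost:0" means: connect over TCP to somehost, display number 0.
 * Passing NULL instead uses $DISPLAY, which is how "ssh -X" style
 * forwarding plugs in transparently. The host name is hypothetical. */
#include <X11/Xlib.h>
#include <stdio.h>

int main(void)
{
    Display *dpy = XOpenDisplay("somehost:0");
    if (!dpy) {
        fprintf(stderr, "no server there, or access denied\n");
        return 1;
    }
    printf("connected to %s, a %s server\n",
           DisplayString(dpy), ServerVendor(dpy));
    XCloseDisplay(dpy);
    return 0;
}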

Quote: That X is client/server-based comes from the time that companies would have big mainframes where all the programs would run, and employees would be working on 'dumb' terminals, without hard drives or anything.

Not really true. During development, X1 through X9 were built for MIT's VAXstations. X1-X6 ran on dumb terminals. It was a rapidly changing protocol, with six versions in two years (each version number was an incompatible version). Versions up through X9 were sold to one or two universities, but there was no free license for them.

X10 was almost the X we know today. The MIT License was created and used for distribution. It was the first widely distributed version. X10 was ported to several systems, including the 386, Sun workstations, and others. (That's when I started using it.)

Although I was told that X10 could be run on dumb terminals, I never saw it. It was always on a Sparc or x86 box. Each machine booted independently into its own flavor of Unix (with X10) and connected to the network. It would run processes from our VAX and other systems.

X11, the protocol you know today, was released in 1987. Most new computers coming out were Intel 386/33 or faster, with either EGA or VGA graphics and 80+ MB of disk space. That was one of the target machines for X11.

The X Consortium was formed with Scheifler appointed as its head (he was the lead of the X project, if you remember). It was really the first big Open Source project. Although emacs was older, it had almost no support compared to X, which had a dozen major vendors (IBM, DEC, Sun ...) and major universities (MIT, Berkeley, etc.) behind it.


Quote: But yes, I think I'm beginning to understand now. Even though the mechanism to send drawing commands doesn't have to be as generic as sockets, there ultimately has to be some way for programs to send drawing data in a well-defined format to a rendering system. X chose a general-purpose IPC system where others have chosen direct library calls to the kernel level (DirectFB, Windows(?)). If sockets are indeed fast enough to never become a bottleneck then X11 seems very well designed.

That's the key point.

X is incredible for remote display. It was designed for it. It is incredible for networked systems. It mirrors how many new pieces of hardware and OSes work right now (except for Windows GDI, which has always been slow). 3D video cards work on a similar principle. Apple's older GUIs and hardware (even before moving to X11) worked on the same client/server principle. SGI's shared memory architecture worked perfectly with X. Solaris was not quite as good as the SGI or Apple, but still very speedy compared to Windows.

X is very good for local display. It was also designed for lightweight IPC using shared memory and a simple protocol. DirectX on high-end cards will beat it, because DirectX is so tightly coupled with the operating system and hardware. On older or no-name cards the two are roughly the same speed, since the drivers are the generic versions provided by Microsoft rather than something tuned to be tightly integrated.

Well, enough rambling.

frob

