
have you heard of HURD?

Started by October 14, 2004 08:11 PM
58 comments, last by flangazor 20 years ago
Quote: Original post by Mayrel
Then you'll have to explain what it is about the Hurd that makes it a joke. I would have thought that a kernel was a joke if it was unable to serve an operating system. Clearly, the Hurd can serve an operating system, both Debian and GNU itself.
It's a joke because its design is floating off in academic lalaland and isn't PRACTICAL (not a question of "does it work?" it's "does it work well?"). Some people claim this is a "...yet" thing, but that's irrelevant. Hurd isn't practical NOW.
Quote:
Quote:
And nobody has told the HURD people about linux in the last 13 years that linux has been better than HURD?

That's a matter of opinion. Why haven't the various BSDs given up on their kernel and used Linux?
because they're BSD licensed, and some of them think that's better.
Quote: It's because Linux isn't actually a very good example of software engineering.
You're full of shit.
Quote: The design of the Hurd is better than the design of Linux,
So you claim. I happen to know a great many people who would disagree, myself included. I'm sure you've read Linus vs Tanenbaum.
Quote: although Linux supports a greater range of features and hardware,
Gee, minor detail. Wonder why nobody uses Hurd?
Quote: (almost) none of those are features or hardware that could not be supported by the Hurd at a later time,
And who is going to do this? Even when you come up with a list of a few dozen people, I can just point out the few thousand more linux developers that would just continue to do what linux has done for the last 13 years: just get shit done. Something Hurd hasn't...done. heh.
Quote: whilst the special features of the Hurd would be highly non-trivial to introduce to Linux. The special features of the Hurd allow it to be more scalable, more flexible, more extensible, and more secure than Linux, given an administrator who knows what he's doing.
Hurd has almost none of those things over linux, and any that it might, it's insignificantly better. Performance still suffers greatly, just from the "superior architecture."

(snipped pedantic stuff)
Quote: Original post by DigitalDelusion
Quote: Original post by Arild Fines
Which was only a viable option because the Amiga lacked memory protection.


Well, it's not like the idea has to be abandoned completely; even with memory protection, something similar could be used to ease the burden of sending really big packets on the stack.

Messages could simply be allocated in shared memory and then the pointer passed around.
This pretty much defeats the purpose of a microkernel architecture. The ones that do something like this are called hybrids, and are pretty much a useless microkernel with a monolithic kernel running under it. No benefits, and a convoluted design.
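For reference, the scheme being proposed would look something like this in POSIX terms. It's only a sketch of the idea; the message type and names are invented for illustration:

// Sketch of the "allocate the message in shared memory, pass the
// name/pointer around" idea. Names here are invented for illustration.
#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

struct big_message {
    int type;
    char payload[64 * 1024];    // far too big to want on the stack
};

big_message* create_message(const char* name) {
    int fd = shm_open(name, O_CREAT | O_RDWR, 0600); // named shared region
    ftruncate(fd, sizeof(big_message));              // size it
    void* p = mmap(0, sizeof(big_message),
                   PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);                                       // mapping stays valid
    return static_cast<big_message*>(p);
}

// The sender fills the region and sends only the small, fixed-size name
// through the IPC channel; the receiver shm_open()s and mmap()s the same
// name. The payload itself is never copied through the kernel.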
Anyone remember BeOS? It was fast, lean, and regarded by many as one of the best OSs evar!!!!

It was also a microkernel.

Just because the Hurd should be renamed the Turd doesn't mean the fundamental architecture is wrong. The Torvalds vs Tanenbaum debate is actually quite silly, but if anything it convinced me that the prof actually knows what he's talking about.

But yeah, monolithic does have its merits, but superior performance really isn't one of them.
HardDrop - hard link shell extension. "Tread softly because you tread on my dreams" - Yeats
Quote: Original post by DigitalDelusion
Quote: Original post by Arild Fines
Which was only a viable option because the Amiga lacked memory protection.

Well, it's not like the idea has to be abandoned completely; even with memory protection, something similar could be used to ease the burden of sending really big packets on the stack.

Define "really big packets".

If you're talking about the messages that implement reading from and writing to files, then the blocks to be read into and written from are obviously not passed on the stack; they're passed as pointers.

Ideally, a message-passing kernel won't pass notably more on the stack than a call to a library would.

The issue with protected memory is that it's relatively non-trivial to deal with pointers to data. Suppose we have this interface to an open file in a fileserver:
bool fileserver::file::write (void* data, int size);

Because the fileserver isn't allowed to access the memory of the client (and also because C lacks a standardised way to select which address space a pointer is relative to), the data must be copied into the fileserver's address space.

Often, the system will optimise this process by specifically allocating blocks of memory for use in message passing. Rather than actually copying it, the kernel instead maps such a block out of the sender's address space and into the receiver's address space.
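In kernel-flavoured pseudocode, that optimisation looks something like this. Every name here is invented for illustration; it's a sketch, not any real kernel's API:

// Sketch only: how a message-passing kernel might choose between
// copying and remapping. Every name here is invented for illustration.
struct address_space;  // opaque in this sketch

void copy_between_spaces(address_space& from, address_space& to,
                         void* data, unsigned size);
void remap_pages(address_space& from, address_space& to,
                 void* data, unsigned size);

const unsigned REMAP_THRESHOLD = 4096;  // say, one page

void deliver(address_space& sender, address_space& receiver,
             void* data, unsigned size) {
    if (size < REMAP_THRESHOLD) {
        // Small payloads: a straight copy beats fiddling with page tables.
        copy_between_spaces(sender, receiver, data, size);
    } else {
        // Large payloads: unmap the pages from the sender and map them
        // into the receiver; the data never moves in physical memory.
        remap_pages(sender, receiver, data, size);
    }
}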
Quote:
Messages could simply be allocated in shared memory and then the pointer passed around.

The reasons that Mach prefers copying data over sharing data are twofold. Firstly, it eradicates the possibility that one process will alter data that another process is working on, possibly causing a crash or a security hole. Secondly, it makes it trivial for a Mach-based OS to run on a cluster, thus making it future-proof.
CoV
Quote: Original post by C-Junkie
It's a joke because its design is floating off in academic lalaland and isn't PRACTICAL (not a question of "does it work?" it's "does it work well?"). Some people claim this is a "...yet" thing, but that's irrelevant. Hurd isn't practical NOW.

And "academic lalaland" is defined as what? And what isn't it practical for, exactly?
Quote:
Quote:
That's a matter of opinion. Why haven't the various BSDs given up on their kernel and used Linux?
because they're BSD licensed, and some of them think that's better.

So you're saying that the only reason the BSDs aren't using Linux is the license?
Quote:
Quote: It's because Linux isn't actually a very good example of software engineering.
You're full of shit.

A well considered argument, sir. I congratulate you.
Quote:
Quote: The design of the Hurd is better than the design of Linux,
So you claim. I happen to know a great many people who would disagree, myself included. I'm sure you've read Linus vs Tanenbaum.

Ah, argument from popularity, is it?

The advantages of a microkernel design are scalability, flexibility and extensibility.

The advantage of a monolithic design is performance.

The importance of performance is often overstated. Most programs spend most of their time idling. When they aren't idling, they're usually in userspace, not in the kernel. Although performance suffers, it is not as though microkernel systems cannot be interactive, and really processor intensive code doesn't usually have a tight loop with system calls in it.
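For a sense of scale, here's a minimal sketch of a Linux microbenchmark for raw syscall overhead (using SYS_getpid directly, to defeat any library-side caching). A trivial syscall typically costs a fraction of a microsecond, which only dominates if the inner loop is nothing but syscalls:

// Minimal sketch: measure the raw cost of a trivial system call on Linux.
// Build with: g++ -O2 syscall_bench.cpp
#include <sys/syscall.h>
#include <unistd.h>
#include <cstdio>
#include <ctime>

int main() {
    const long N = 1000000;
    timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long i = 0; i < N; ++i)
        syscall(SYS_getpid);   // raw syscall, bypasses any libc caching
    clock_gettime(CLOCK_MONOTONIC, &t1);
    double ns = (t1.tv_sec - t0.tv_sec) * 1e9
              + (t1.tv_nsec - t0.tv_nsec);
    std::printf("%.0f ns per syscall\n", ns / N);
    return 0;
}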
Quote:
Quote: (almost) none of those are features or hardware that could not be supported by the Hurd at a later time,
And who is going to do this? Even when you come up with a list of a few dozen people, I can just point out the few thousand more linux developers that would just continue to do what linux has done for the last 13 years: just get shit done. Something Hurd hasn't...done. heh.

And is that relevant? My point was that Hurd is architecturally superior to Linux: it can already do things Linux could not do without non-trivial changes, and, with admittedly only potential but trivial changes, it could pick up the things Linux already does.

I wasn't making a point about whether or not Hurd was going to be the Next Big Thing. In all probability it isn't. But I am confident that, when the time comes that kernels with Hurd-like characteristics are a necessity, if Linux does not develop them it will be replaced by something that does.
Quote:
Quote: whilst the special features of the Hurd would be highly non-trivial to introduce to Linux. The special features of the Hurd allow it to be more scalable, more flexible, more extensible, and more secure than Linux, given an administrator who knows what he's doing.

Hurd has almost none of those things over linux, and any that it might, it's insignificantly better.

Sounds to me like you're bluffing. Or would you care to give an example of something that Hurd does better than Linux and an explanation of why it's only insignificantly better?
Quote:
Performance still suffers greatly, just from the "superior architecture."

No. You're wrong. Performance does not suffer greatly. Performance suffers mildly.
CoV
Quote: Original post by DigitalDelusion
But yeah, monolithic does have its merits, but superior performance really isn't one of them.

I assume you meant to say microkernel, not monolithic? Superior performance is recognised by both sides as being a merit of monolithic architectures.
CoV
I tried to trim some fat...
Quote: Original post by Mayrel
So you're saying that the only reason the BSDs aren't using Linux is the license?
The only reason besides the obvious: it's their codebase, they're loyal to it, and it might actually do something better.
Quote:
Quote:
Quote: It's because Linux isn't actually a very good example of software engineering.
You're full of shit.

A well considered argument, sir. I congratulate you.
Your statement and my statement are on equal footing.
Quote: The advantages of a microkernel design are scalability, flexibility and extensibility.

The advantage of a monolithic design is performance.
Linux runs on everything from a palm pilot to some of the world's fastest supercomputers. Clearly scalability isn't an advantage microkernels have over it.

Flexibility and extensibility are questionable. Exactly what do those mean?
Quote: The importance of performance is often overstated. Most programs spend most of their time idling. When they aren't idling, they're usually in userspace, not in the kernel. Although performance suffers, it is not as though microkernel systems cannot be interactive, and really processor intensive code doesn't usually have a tight loop with system calls in it.
Network code is usually like this.
Quote: Sounds to me like you're bluffing. Or would you care to give an example of something that Hurd does better than Linux and an explanation of why it's only insignificantly better?
My position is that it does nothing better, I was merely allowing for the possibility that I'm wrong.
Quote: Original post by C-Junkie
Quote: The advantages of a microkernel design are scalability, flexibility and extensibility.

The advantage of a monolithic design is performance.
Linux runs on everything from a palm pilot to some of the world's fastest supercomputers. Clearly scalability isn't an advantage microkernels have over it.

The fact that Linux is scalable does not disprove that microkernels are more scalable. It is easier to remove system components from a microkernel system to make it fit on a low-end device. It is easier to add system components to a microkernel system and easier to transparently distribute these components over a cluster.
Quote:
Flexibility and extensibility are questionable. Exactly what do those mean?

Flexibility is the ability of the behaviour of a system to be refined. It is perhaps a measure of the granularity of the system's configurability. Because Linux is monolithic, most configurable aspects of it affect all running processes. For example, there is no generally accessible way to pick and choose the kind of scheduling mechanism that a process group uses -- you use whatever Linux uses for all processes. In a suitably designed microkernel system, the scheduling policy can be defined by a server, and different process groups could use different scheduling servers.
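To make that concrete, here is a hypothetical sketch of what a replaceable scheduling server's interface might look like. None of this is actual Hurd or Mach API; the names are invented:

// Hypothetical interface for a user-replaceable scheduling server.
// Invented for illustration -- not an actual Hurd or Mach interface.
typedef int thread_id;  // opaque handles, simplified for the sketch
typedef int cpu_id;

class scheduler_server {
public:
    virtual ~scheduler_server() {}
    // The kernel (or a proxy) asks: which thread runs next on this CPU?
    virtual thread_id pick_next(cpu_id cpu) = 0;
    // Hooks the server uses to maintain its own run queues.
    virtual void thread_ready(thread_id t) = 0;
    virtual void thread_blocked(thread_id t) = 0;
};

// One process group might be bound to a real-time implementation of this
// interface, another to a fair-share one, without either group affecting
// the rest of the system.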

Extensibility is the ability of the behaviour of a system to be extended beyond what it currently does. Because Linux is monolithic, it is difficult to extend its behaviour unless you have root privileges -- you have to depend upon (rare) userspace interfaces to kernel features. There is, for example, no obvious way for an unprivileged user to create a new filesystem driver using the standard Linux kernel.

The answer to these kinds of obstacles tends to be "but you don't need to do that." The proc filesystem already shows process and hardware statistics, but suppose a user wanted to access an archive file as though it were a normal device? Or to overlay several existing directory hierarchies, as games often want to do -- the data might be in /mnt/cdrom, /usr/share/games/my_game, or ~/.my_game. There's no support for that in Linux, and no way to add support for it without root access. Instead, you must use imperfect solutions like physfs. I say imperfect because the fake filesystem that physfs presents only exists inside the game -- there's no way to browse that same fake filesystem in a normal file manager.
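Internally, physfs-style overlaying is just search-path resolution, something like the sketch below (illustrative only, not the real physfs API; the paths are the hypothetical ones above). The point is that it happens inside one process rather than in the filesystem everyone sees:

// Sketch of physfs-style overlay resolution: try each root in order.
// Illustrative only -- not the real physfs API.
#include <fstream>
#include <string>

static const char* search_roots[] = {
    "/home/user/.my_game",          // user overrides win
    "/usr/share/games/my_game",
    "/mnt/cdrom",
};
const unsigned NUM_ROOTS = sizeof(search_roots) / sizeof(*search_roots);

// Returns the first full path under which `path` exists, or "".
std::string resolve(const std::string& path) {
    for (unsigned i = 0; i < NUM_ROOTS; ++i) {
        std::string full = std::string(search_roots[i]) + "/" + path;
        if (std::ifstream(full.c_str()))
            return full;
    }
    return std::string();
}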
Quote:
Quote:
The importance of performance is often overstated. Most programs spend most of their time idling. When they aren't idling, they're usually in userspace, not in the kernel. Although performance suffers, it is not as though microkernel systems cannot be interactive, and really processor intensive code doesn't usually have a tight loop with system calls in it.
Network code is usually like this.

Code that sends data to a network port has no business being "really processor intensive." Even with an Ethernet link to the other computer, the time it takes to send data between machines is so long your kernel could pass messages by carrier pigeon and still be fast enough.
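(For rough numbers: a single round trip on a typical LAN is on the order of hundreds of microseconds, while even a pessimistic figure for a microkernel IPC hop is a few tens of microseconds, so the IPC cost disappears into the network latency.)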
Quote:
Quote: Sounds to me like you're bluffing. Or would you care to give an example of something that Hurd does better than Linux and an explanation of why it's only insignificantly better?
My position is that it does nothing better, I was merely allowing for the possibility that I'm wrong.

My position is that the Hurd already does userspace filesystems better than Linux, amongst other things.
CoV
Quote: Original post by Mayrel
The fact that Linux is scalable does not disprove that microkernels are more scalable. It is easier to remove system components from a microkernel system to make it fit on a low-end device. It is easier to add system components to a microkernel system and easier to transparently distribute these components over a cluster.
And yet linux is the most scalable operating system on the planet... a microkernel has yet to beat it. I'll believe it when I see it.
Quote: ...extensibility/flexibility
I see. Different philosophies. With monolithic kernels, you design the program for the scheduler, not the scheduler for the program; you get low-level access to the file system, but the kernel doesn't deal in abstractions.

Abstractions come in libraries. I can open up a file on an ftp server or tar archive or whatever easily enough using gnome-vfs.
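For instance, reading a remote file through gnome-vfs looks roughly like this (API details quoted from memory of gnome-vfs 2.x, so treat them as approximate):

/* Sketch: reading a URI through gnome-vfs. API details from memory of
   gnome-vfs 2.x -- approximate, check the headers before relying on it. */
#include <libgnomevfs/gnome-vfs.h>
#include <stdio.h>

static void dump(const char* uri) {
    GnomeVFSHandle* handle;
    GnomeVFSFileSize bytes_read;
    char buf[4096];

    gnome_vfs_init();
    if (gnome_vfs_open(&handle, uri, GNOME_VFS_OPEN_READ) != GNOME_VFS_OK)
        return;
    while (gnome_vfs_read(handle, buf, sizeof buf, &bytes_read) == GNOME_VFS_OK
           && bytes_read > 0)
        fwrite(buf, 1, bytes_read, stdout);
    gnome_vfs_close(handle);
}

/* dump("ftp://ftp.gnu.org/README"); -- the library speaks FTP, not the
   kernel, which is exactly the "abstractions come in libraries" view. */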
Quote: Code that sends data to a network port has no business being "really processor intensive."
It's syscall intensive. Lots of processor time gets eaten by inefficiencies inherent in the design of microkernels w.r.t. the network stack.
Quote: My position is that the Hurd already does userspace filesystems better than Linux, amongst other things.
Betamax had better video quality. VHS was cheaper.
The only real advantage of a real-world microkernel over a real-world monolithic kernel (that I see) is that you can take advantage of additional rings of protection with the microkernel. This is possible because drivers and system services (e.g. the TCP/IP stack) can exist in their own memory spaces and have their own access privileges. You can set it up so only the network service (and the kernel) can touch the network I/O ports of the NICs, so someone can't load a rogue driver and take control of everything in the system. This is not how NT, VxWorks, Integrity, etc. work: they have a microkernel architecture, yet the kernel and system services share a common memory space and I/O privileges. One of the few kernels that actually does this is QNX. It's often said that QNX has a nano-kernel architecture because it is so different from typical microkernels.
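As a sketch of what that separation buys you: on QNX Neutrino, a driver process must explicitly ask the kernel for I/O privileges before it can touch a port, and it can be refused. Roughly like this (API from memory, so verify against the QNX docs; the port number is made up):

/* Sketch: a QNX Neutrino driver requesting I/O port access. From memory
   of the API -- verify against the QNX docs. The port is made up. */
#include <sys/neutrino.h>
#include <hw/inout.h>

#define NIC_RESET_PORT 0x300    /* hypothetical port, for illustration */

int init_nic(void) {
    /* Request I/O privileges; the kernel can refuse, so an
       unauthorised process never gets near the hardware. */
    if (ThreadCtl(_NTO_TCTL_IO, 0) == -1)
        return -1;
    out8(NIC_RESET_PORT, 1);    /* only now can we touch the port */
    return 0;
}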
- The trade-off between price and quality does not exist in Japan. Rather, the idea that high quality brings on cost reduction is widely accepted.-- Tajima & Matsubara
