
Article: OSS S.O.S - How HCI Killed Open Source

Started by August 01, 2004 04:42 PM
130 comments, last by C-Junkie 20 years, 6 months ago
Quote:
Original post by Flarelocke
Quote:
The problem with X is that it has no concept of objects. Everything must be described - and thus sent over the wire - as a description in terms of lines, pixels and colors. Why can't the window manager inform the windowing system of how to draw various widgets, and then describe the interface at this higher level?
My understanding is that this is exactly the way things could work. Gtk+, unfortunately, does not support this sort of behavior. I'm not sure why. It might be a limitation in the protocol of which I'm not aware. I think it's part of the Inter-Client Communication Conventions Manual (ICCCM), which establishes a standard upon which window managers in general can work.

Well, technically, yes. That's the way it could work.


However, that's not the way it does work. Currently, all toolkits are client-side. There are, fundamentally, two ways to draw to a window in X -- raster operations and OpenGL commands. There's no way to move a toolkit to the server. Doing so would be a security risk. But that's fine, anyway: the client shouldn't be telling the server how the interface looks with pixel-level accuracy, whether with bitmaps or code.
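To make the raster path concrete, here's a minimal Xlib sketch (my own toy example, not from any toolkit): the client decides what the "button" looks like and ships it to the server as nothing more than rectangle, line and string requests.

/* Minimal sketch of the client-side raster path in X11: the "widget" the
 * server sees is nothing but rectangles, lines and strings.
 * Compile with: cc xdraw.c -lX11   (assumes Xlib headers are installed) */
#include <X11/Xlib.h>
#include <stdio.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);          /* connect to the X server */
    if (!dpy) { fprintf(stderr, "no display\n"); return 1; }

    int scr = DefaultScreen(dpy);
    Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, scr), 0, 0, 200, 100, 1,
                                     BlackPixel(dpy, scr), WhitePixel(dpy, scr));
    XSelectInput(dpy, win, ExposureMask | KeyPressMask);
    XMapWindow(dpy, win);

    GC gc = XCreateGC(dpy, win, 0, NULL);       /* graphics context */
    for (;;) {
        XEvent ev;
        XNextEvent(dpy, &ev);
        if (ev.type == Expose) {
            /* The "button" is decided entirely by the client and sent as primitives. */
            XDrawRectangle(dpy, win, gc, 10, 10, 120, 40);
            XDrawLine(dpy, win, gc, 10, 70, 190, 70);
            XDrawString(dpy, win, gc, 25, 35, "OK", 2);
        }
        if (ev.type == KeyPress)
            break;                              /* any key quits */
    }
    XCloseDisplay(dpy);
    return 0;
}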


What's needed is a more GUI-oriented mechanism. It would make sense, considering X is a GUI system. Something derived from XSLT/FO/SVG/SMIL/VRML would be enough to capture most current interfaces, I imagine.

Quote:

Quote:
And then release it on the web, announce it on Freshmeat.Net
Are we still talking about a non-unix system? Freshmeat's only supposed to be about unix programs.

Freshveg.net?
Quote:

Quote:
Note that I only mean for major functionality to be at "Obvious Level." Create a Text Document, Browse the Internet, E-Mail, Send an Instant Message, Create an Office Document. Other applications/functionality can be in an Applications/"Do..." menu - and that menu should be top level.
My preferred solution is similar to the Windows XP start menu, but embedded directly on the desktop (I'd choose the right side). XP's default start menu has the common tasks Internet and Email always at the top, together with some of the more recently used applications. My solution would be similar with a "menu" (not the same as a regular menu because it's on the desktop) of Browse the Internet, Read Email, Create a text document, and some other tasks, and at the bottom, a "..." entry. If there are too few tasks to fill the right hand side, no "..." is needed, but there probably are, and clicking this entry yields something like the start menu (the entries on the desktop are included again here). If the start menu is too large, tasks are grouped together (the Freedesktop.org menu entry standard could feature this behavior!)

I have a problem with the 'recently used programs' menu item. Whilst I can understand it, I can see users getting confused if something disappears from the menu. How do they know where it's gone?

CoV
Quote:
Original post by Oluseyi
Quote:
Original post by Mayrel
Easy to say, hard to do. What you think is the most natural, efficient and intuitive way for a person to tell a computer to do a particular thing is most unlikely to be same for me.
So collect data of a wide variety of users trying various interfaces along with demographic statistics. Evaluate the data, publish it openly, invite open peer review...

Come on, you're stalling!

Oh, and "Well, how do we collect this data?" Write a data collection service that various existing projects can snap into their products easily. That way, you can get a lot of useful feedback from a wide variety of projects - and they can get that feedback too.

My point isn't that you don't know what people think is intuitive. My point is that you can't make an interface which the majority will think is intuitive.
Quote:

Quote:
Most users never fill their hard disks, true. Most users never store a lifetime's worth of undo information with their files, either.
Blah blah blah. Non sequitur.

How so? Below you say that the user might be warned "when the disk starts to become full", so clearly you accept the possibility that the disk might be filled. Where's the non sequitur?
Quote:

Say you had a separate file for each month's financial report by year. Simple data analysis can determine that a particular file is logically "July 2002 Financial Report" through the frequency of terms, cross referencing and relative prominence of items in the file (headings vs paragraph data). Essentially, Google your own data.

That's fine, for some kinds of documents. Most non-technical people I know don't use headers or styles. They aren't aware they exist. Given that the report wouldn't contain "July 2002 Financial Report" multiple times, but most probably just once at the top of the document in a modification of the normal style, automagically detecting the name of the document wouldn't necessarily be trivial.
Quote:

This can then allow you to indicate that you want "Financial Reports", or just "Reports". Am I suggesting entering this text into a search box of some sort? Not at all. Based on the frequency of occurrence of terms in a variety of files, "collections" can be displayed with logical labels, with "related collections" placed as adjacent but lesser icons, allowing both visual navigation of data and what I'll call a three-point-pivot (the philosophy that any piece of data is three degrees of separation from any other at its most reasonably generic).

This would itself be rather tricky. Do you want several major collections called "And" and "The"? Indexing file content is likely to be useful for searching, but using the index as a catalog is not. For that you need domain-specific knowledge about the structure of a particular document to be able to figure out how it would be categorised.
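A crude sketch of the problem (hypothetical stopword list, invented example text): count raw terms and the biggest "collections" are exactly the labels nobody wants, unless something filters them out.

/* Naive term-frequency sketch: why raw word counts make bad collection labels.
 * The stopword list here is a tiny, hypothetical one; real systems need far more. */
#include <ctype.h>
#include <stdio.h>
#include <string.h>

#define MAX_TERMS 64
#define MAX_WORD  32

static const char *stopwords[] = { "and", "the", "of", "to", "a", "in", NULL };

static int is_stopword(const char *w)
{
    for (int i = 0; stopwords[i]; i++)
        if (strcmp(w, stopwords[i]) == 0) return 1;
    return 0;
}

int main(void)
{
    const char *doc = "The financial report and the summary of the July figures";
    char terms[MAX_TERMS][MAX_WORD];
    int counts[MAX_TERMS] = {0}, nterms = 0;

    char word[MAX_WORD];
    int len = 0;
    for (const char *p = doc; ; p++) {
        if (isalpha((unsigned char)*p) && len < MAX_WORD - 1) {
            word[len++] = (char)tolower((unsigned char)*p);
        } else if (len > 0) {
            word[len] = '\0';
            len = 0;
            if (!is_stopword(word)) {            /* drop "and", "the", ... */
                int i;
                for (i = 0; i < nterms; i++)
                    if (strcmp(terms[i], word) == 0) { counts[i]++; break; }
                if (i == nterms && nterms < MAX_TERMS) {
                    strcpy(terms[nterms], word);
                    counts[nterms++] = 1;
                }
            }
        }
        if (*p == '\0') break;
    }

    for (int i = 0; i < nterms; i++)
        printf("%-12s %d\n", terms[i], counts[i]);
    return 0;
}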
Quote:

Quote:
So wouldn't you call that file "Q2 2004 Walt Disney Radio Presentation"? If not with a name, how else would you identify it? How would you identify that file to other programs?
See above. And programs are beside the point; why do I need multiple programs to access the same data? Why don't I organize the operations available around the data that I'm working with?

That's silly. It doesn't make sense to extoll the benefits of codec-based storage, designed so that any program can access any data it's capable of understanding, but then say that you'd never use data with more than one program. There needs to be some standardised, unambiguous, human-readable way to address a file. If nothing else, unless you intend to reinvent the web, public files need URLs.
Quote:

Quote:
Also, what does one do about files that are opened by more than one process at the same time? Does only one of them get to open it in update mode? Do the others get it in readonly mode, an editable 'snapshot' mode that doesn't affect the on-disk file, a copy of the file in their personal region of storage, or a 'fork' of the on-disk file that can be merged later by someone who has permission to read both prongs?
Well, that's a possibility, but, again, applications are an anachronistic approach to productivity.

Think of it as a "document container." Every bit of functionality can be invoked from the container if it is defined for the current data. It's sort of inverting the traditional placement of data and processing.

That's avoiding the question. At no point did I say "application". I referred to a "process". The user need not be aware that 'different processes' are accessing the file, just that the file is being accessed twice. The processes might not even be running on the same machine.
Quote:

Quote:
Directories are an obviously correct idea. You appear to be assuming that a directory is necessarily a purely hierarchical structure. A directory is merely an index of files.
Not true. Unix INODE structures tie files to directories.

Not true. Unix inode structures tie pointers to files to directories: a directory is a list of names and the inode numbers associated with those names.
Quote:

Quote:
On most operating systems, files can be in multiple directories.
Not true. Symbolic links - hard or soft - are not the same as the file (or data, as I prefer to think of things) being part of multiple collections.

Hard links are multiple directory entries containing the same inode pointer.
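You can see this on any Unix box with a few standard POSIX calls (the filenames are invented for the example): two directory entries, one inode, link count of two.

/* Two directory entries, one inode: a hard-link demo with POSIX calls.
 * Run it in a scratch directory; the filenames are just examples. */
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    FILE *f = fopen("original.txt", "w");
    if (!f) { perror("fopen"); return 1; }
    fputs("same data, two names\n", f);
    fclose(f);

    unlink("second-name.txt");                  /* ignore error if it doesn't exist */
    if (link("original.txt", "second-name.txt") != 0) {   /* a new directory entry */
        perror("link");
        return 1;
    }

    struct stat a, b;
    stat("original.txt", &a);
    stat("second-name.txt", &b);
    printf("original.txt    inode=%lu links=%lu\n",
           (unsigned long)a.st_ino, (unsigned long)a.st_nlink);
    printf("second-name.txt inode=%lu links=%lu\n",
           (unsigned long)b.st_ino, (unsigned long)b.st_nlink);
    /* Both lines print the same inode number and a link count of 2. */
    return 0;
}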
Quote:

Quote:
In some operating systems, directories can be generated on the fly. There's no technical reason why a UNIX-like operating system couldn't have an 'unfinished reports' directory from which files disappear when their finished status is switched on.
That's an inherently inferior approach to a properties-based system.

Curious. Why do you imagine that it's inherently not properties-based? The BeOS filesystem, which is an inode-based filesystem, associates sets of indexable searchable attributes with files.
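BeOS has its own attribute API, but the same idea shows up on Linux as extended attributes. A rough sketch (Linux-specific; "user.status" and the filename are names I made up) of tagging a file with a property instead of filing it in a directory:

/* Tagging a file with a property instead of moving it between directories.
 * Linux extended attributes (not BeOS's actual API, just the analogous idea).
 * Needs a filesystem mounted with user_xattr support. */
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/xattr.h>

int main(void)
{
    const char *path = "report.txt";            /* example filename */
    const char *value = "unfinished";

    if (setxattr(path, "user.status", value, strlen(value), 0) != 0) {
        perror("setxattr");
        return 1;
    }

    char buf[64];
    ssize_t n = getxattr(path, "user.status", buf, sizeof buf - 1);
    if (n < 0) { perror("getxattr"); return 1; }
    buf[n] = '\0';
    printf("user.status = %s\n", buf);
    /* A query like "all files where user.status == unfinished" then behaves
       like the generated 'unfinished reports' directory described above. */
    return 0;
}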
Quote:

Quote:
I read it. The fault was not caused by the file selection box looking a little different, the fault was caused by the file selection box behaving in an obviously wrong way.
I need a little more detail than "it was obviously wrong." What was wrong about it, and what would have made it right?

I really don't need to tell you that. The answer is clear. It was wrong because (1) it didn't remember where the user was working and (2) it didn't act like the file manager.
Quote:

Quote:
I presume you're referring to an HTML-style layout-based system, where there'd be a few standard components for actually producing output (text, vector graphics, raster graphics, video, non-visual media) which would know how to be embedded within each other.
Not necessarily. I'm not conceited enough to think that I've solved all the problems or designed the system to a tee. Some of these ideas are still fairly amorphous, floating disembodied in ether.

Quote:
Then, any "interface" to a document would just translate the logical structure of the document into one of the generic display components.
Something like that. The specifics aren't quite set in stone yet. Feel free to make alternate suggestions.

I don't see any obvious alternative. The only other option that occurs to me is for each type of document to have its own renderer. But then you have a whole load of components which need to be able to interact with each other, not all of which would be made by competent programmers.


I just had a random thought. Take a SMIL SVG animation, embed it in the footer of a document, link (possibly graphically) the frame counter with the page counter, and you have a flickbook.

Quote:

Quote:
There's also the issue that it will very probably be easier to convert an existing program into a virtually opaque component than to make it into a fully fledged member of the component community.
Halfway houses are acceptable transition mechanisms. I'm not naive enough to believe in "clean slate" initiatives - "Throw it all out! Start from scratch!" I believe in working gradually through the problem stack until a simple rebuild completes a total redesign. That way you always (hope to?) have a working, operational build that you can test.

Hmm. But it doesn't seem trivial to componentise a monolithic application. Emacs is undeniably useful. It would be desirable, IMO, to have an "ELisp Scripting" component that would allow existing Emacs scripts to be used with the "Text Document" and "Text Editor" components, and then possibly to be extended to allow ELisp to be used with other component types. The best way to make such a component would be to rip it out of Emacs.
Quote:

Quote:
At least KDE and GNOME are already componentised. One could make an adapter that allows their component system to interact with the OS's.
True, but sometimes design choices are pervasive in extent. I guess somebody'll have to give it a shot and see what sticks.

Indeed. The monolithic nature of most current applications is pretty pervasive. I'd warrant that componentising most Linux programs will be very nearly equivalent to writing the program from scratch. You have the benefit, at least, that you wouldn't need to write so many programs, due to the reduction in duplicated code.

CoV
Quote:
Original post by Doc
Quote:
There are tens of word processors. Out of the WYSIWYG ones, I only use LyX,


<pedantic>LyX isn't WYSIWYG, it's WYSIWYM.</pedantic>

Fair enough. I think that makes it better. I don't need to be in the habit of deciding exactly what my document looks like when other people read it. Content-based rather than presentation-based editors are superior because it's easier to (1) display that content in a way that everyone can understand (for example, it's relatively easy to provide a useful voice-control/speech-synthesis interface to a LaTeX document, compared to trying to do the same with a plain text document) and (2) use the content in a way you hadn't planned when you made it.
CoV
Quote:
Original post by Oluseyi
The problem with X is that it has no concept of objects. Everything must be described - and thus sent over the wire - as a description in terms of lines, pixels and colors. Why can't the window manager inform the windowing system of how to draw various widgets, and then describe the interface at this higher level?

X must die.
Nope. Putting something like this on the display-side has been dismissed as a possibility, period. Want to know why? Because it is fundamentally inefficient.

Think about what this would mean for something REALLY SIMPLE, like a text editor. You'd have to upload the whole text document to the server. And once it's up there your ideas about not having to save would be shot to hell, because now the graphics system (god knows WHY) has the data! You'd have to transfer the whole text document BACK to the client in order to save it to disk.

That's just the major show stopper. Others include being able to describe widgets to the server (impossible without major security problems), and dynamic data.

X is vastly better than anyone gives it credit for. (especially since the X developers tend to dismiss people who think X sucks as nuts, so nobody ever sets them straight[wink])
Quote:
Now, let's assume that we have intrinsic versioning as a property of the filesystem. The default discard interval will be set such that the average user maintains a reasonable balance between the ability to roll back and memory consumption. Voila! Memory problem solved.
I don't think eliminating saving is a fundamentally bad idea, as long as we can still choose to "tag" "releases" (using cvs-like terminology).
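Very roughly, something like this (pure illustration with invented file names; a real versioning filesystem would do it below the filesystem API rather than with extra files and symlinks):

/* Crude sketch of save-as-new-version plus cvs-style "tags". */
#include <stdio.h>
#include <unistd.h>

/* Write `data` as the next numbered version of `base` (base.1, base.2, ...). */
static int save_version(const char *base, const char *data)
{
    char name[256];
    for (int v = 1; v < 10000; v++) {
        snprintf(name, sizeof name, "%s.%d", base, v);
        if (access(name, F_OK) != 0) {          /* first unused version number */
            FILE *f = fopen(name, "w");
            if (!f) return -1;
            fputs(data, f);
            fclose(f);
            return v;
        }
    }
    return -1;
}

/* "Tagging a release" is just pointing a stable name at one version. */
static int tag_version(const char *base, int version, const char *tag)
{
    char target[256], link_name[256];
    snprintf(target, sizeof target, "%s.%d", base, version);
    snprintf(link_name, sizeof link_name, "%s.%s", base, tag);
    unlink(link_name);
    return symlink(target, link_name);
}

int main(void)
{
    save_version("notes.txt", "first draft\n");
    int v = save_version("notes.txt", "second draft\n");
    tag_version("notes.txt", v, "RELEASE");     /* notes.txt.RELEASE -> notes.txt.2 */
    printf("tagged version %d\n", v);
    return 0;
}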
Quote:
The third issue, naming files. Why do we refer to files by name anyway?
come on. Why do we refer to PEOPLE by name?
Quote:
That a user should work on a machine and then have all of that effort disappear into nothingness because of a non-user fault is a fundamental insult.
Agreed.
Quote:
Moving on, why do we have directories?
Because they're an extremely simple organizational method.
Quote:
There's no reason why I should be forced/required to store my files somewhere in a hierarchy - a hierarchy with a dangerous tendency to grow wide and one in which it is ludicrously easy to lose files.
Hierarchies work great for hierarchical data, but data has to be MADE to fit a hierarchy. The solution isn't to toss out the hierarchy, it's to augment it with a database solution. And it's being done; take a look at GNOME Storage.
Quote:
Quote:
3. Users shouldn't be able to select files from a dialog. This is just stupid. Because the file manager can be open all the time, he argues, file selection dialogs are crufty. Which misses the fact that, on many modern UIs, file selection dialogs are the file manager.
Uh, his point was that the user shouldn't have to select files from a dialog, particularly given that the file explorer (or Finder, or whatever) provides the same functionality but with a different interface. It's confusing.
I can agree partly. This is worth thinking about ... *train of thought wanders off*

Oh yeah, jwz is hardly a "principal actor".

snipped
Right, for what it's worth, I'll nail my colours to the mast and make it clear that I'm not a great fan of the "Open Source" concept.

"Free Software" I can understand, and I agree with many aspects of the FSF philosophy, barring its continued fixation on 'Unix Uber Alles'.

But "Open Source" is a fundamentally flawed and irrelevant concept for end users -- what Oluseyi has referred to as a "prosumer" -- since the very concept of Open Source implies that the end user actually knows how to write their own code. This is acceptable if you're targeting major organisations with their own development teams, or even the major educational / R&D establishments, with their Ph.Ds and professors in Applied C++, but to Mr. Average in the street?

Open Source is irrelevant to most people.

I see two major obstacles in the immediate future of Free Software:

(1) The "Unix-Is-Perfect" brigade, and
(2) The Open Source Software Militants.

This is the reason for my earlier post. (I apologise here and now for the ranting tone, but there's a part of me that believes many so-called experts have really missed some important opportunities here.)


Quote:
Original post by Mayrel
Quote:
Original post by stimarco
NO! Dear God, no! Didn't you read what I wrote?

It's not about plastering over the cracks in Unix. Forget Unix! Unix is a dead end, dammit!

Why?



Because Unix is based on the old serial processing model. It is a poor fit even for _today's_ ideas, let alone tomorrow's. Use it where it's applicable, but for crying out loud, stop acting like it's a perfect fit for every damned project under the Sun.

If you want to see a modern, truly object-oriented operating system that _already_ includes many of Oluseyi's suggestions, look at Symbian. It makes Unix -- and Unix clones -- look like a bloody dinosaur.

By all means use Linux or FreeBSD for R&D purposes, but please _stop trying to push Linux down the public's throat!_ It is an inferior architecture.

Its claims of 'security' are also invalid: Windows is constantly being attacked because *it's what everyone uses*, not because it's insecure. (I'll grant there are holes in it, but Linux also has plenty of flaws reported too. They just don't get the same publicity as Windows ones do.)


Quote:

Bollocks. Every operating system is founded on that view. Do you know why? It's what a damned computer does!



Speak for yourself.

Again, I give you Symbian as evidence. It's now mostly used in mid/high-end mobile phones (SonyEricsson P900, Nokia 6600 and N-Gage, etc.) but it was _originally_ designed by a bunch of developers at what is now Psion-Teklogix for their early, clamshell PDAs like the Revo and Series 5mx, back in the mid-1990s.

In Symbian, *everything* is a component. *Everything* is an object. Granted, it still retains the concept of 'applications', but these aren't the monolithic structures of old.

Oddly enough, the UIQ user interface layer for Symbian (used on the SonyEricsson P800/900 series and some others) actually advocates the concept of applications that _don't_ explicitly allow you to quit them. It's also very task/document-centric (although Symbian does still retain an 'application' concept). If I open up the calendar, I can't then 'quit' it. Instead, I can open up a document, view photos, play a movie or MP3 file -- again, all without quitting previous apps -- safe in the knowledge that the phone:

(a) automatically saves _everything_ for you;
(b) automatically closes applications when memory is tight.

Indeed, the mobile phone user experience is one of the reasons why I tend to rail against traditional GUIs so much. I've seen people who have a bloody hard time just grasping the _basics_ of Windows and MacOS X cheerfully texting their friends and relatives on a phone with a tiny, menu-based UI and a numeric keypad.

It boggles the mind, but it's painfully obvious that a lot of UI expertise goes into mobile phones. It'd be criminal to ignore their successes. It's also a real eye-opener to talk to the phone companies involved and discuss their approaches to hardware and software design.

Another example: very few phone apps will ask you for a name for your document, simply because they don't need to; the filename concept is retained because these things have to talk to PCs and Macs at some point. But I can tell which picture I want because I can _see_ it among the thumbnails on the screen. This same concept could equally apply to other visually-identifiable documents and I agree with Oluseyi that there are alternatives to asking for filenames if you think things through.

When I go look through my papers for a bank statement, they're not particularly organised, but I can tell by _looking_ at the document which one is which. An annual report will invariably have a title page describing its contents; if your display has a high enough resolution, you could simply display the document's first page as a thumbnail for its icon. Filename no longer required.

Think about it: do you stick a Post-It note on each and every piece of paper you own, with a name on it like "PhoneBillJUL03"? No. You can tell what a document is by looking at it. This wasn't possible in the old CLI days, but when GUIs were implemented, it seemed only 'natural' to perpetuate many CLI concepts, even when not required. OS X's "Dock" doesn't usually display app and document names, because the icons are either very obvious, or miniature renders of the application's main window.

Consider this: mobile phones are now beginning to appear with GPS technologies built-in. Most already have the capability to determine their approximate location by interrogating the nearest 'cell' transceiver (although few operators take advantage of this). Now, marry this to the cameraphone technology and you need never have to remember where you took your snapshots again; the phone would simply determine its location for you and label the file accordingly, even embedding the info into the EXIF data for JPEG files.

Music is obviously more reliant on a labelling system of some sort, but with systems like CDDB to label ripped CDs for you, the need to actually type a filename in yourself is often very small.

There is, I think, no reason for software not to be able to work out a suitable label for you for the majority of data. As we progress, we'll find it far easier to determine suitable names programmatically, instead of asking the user.

*

The recent revival in web-based email accounts also illustrates a solution to the problem of storage and data management: when your data is stored on a computer elsewhere on the internet -- possibly even the other side of the planet -- then there's no need to worry about disk space, backups or any of that crud; hardware and data management are included as part of the service.

HDTV will make emailing and browsing -- even writing quick letters -- using the TV set far more practical too. In fact, HDTV is a key "enabling" technology that should finally kill off the "desktop PC" as we know it.

The PC itself won't disappear entirely, I think, but the market will become much more fragmented, with Jack-Of-All-Trades designs replaced by more heavily tailored models geared towards specific markets, such as video editing, music composition and sequencing, etc.


Finally: Symbian no longer design user interfaces for their OS. It's entirely GUI-agnostic. So there's no reason why it couldn't be licensed by a benevolent Foundation or Trust for the purposes of open research projects.

I don't understand where people get this fixation that Linux is a required ingredient in all future research and development work by OSS and FSF teams. It can be used as a jumping-off point, certainly, but it's not _required_ and I contend that it's not necessarily the best place to start. There's no reason why the R&D platform _has_ to be an Open Source one, other than politics.

This is, I believe, where Oluseyi and I disagree quite clearly. Oluseyi takes the pragmatic approach: "We have Linux, we must use it."

I disagree, not least because Linux isn't the only choice out there. It's perfectly legitimate and feasible to develop Open Source and Free Software on other platforms, such as Windows, Symbian, BeOS or even PalmOS. Symbian emulators and SDKs are available for free download. Ditto for Windows and other platforms. Why does it _have_ to be Linux?

One reason for my stance is that, when it comes to usability testing, you need to get your project out there in front of as many faces as possible. It would make far more sense to use Microsoft's success to help you in your research by leading on their OS rather than Linux.

Quote:

Nonsense. Pipes work. Unix is more than pipes. Pipes are not the most appropriate metaphor for all kinds of interprocess communication. But they are ideal for representing a stream processing task which can be expressed as several discrete stream processing subtasks. Such tasks are still very common.


They do have their uses, but they're very confusing for non-experts. A filter-graph system strikes me as more flexible and better suited to a GUI environment and might even be a good fit for a Linux-based project given the underlying architecture. You could even use it as a high-level, component-based programming environment if you designed it right.


Quote:

Quote:

Step One: Change of Leadership.

No, doofus. It's produced an operating system which is partly vaguely like an archaic operating system. If it's a clone, it's a shockingly bad one.


From GNU.org's index page: "The GNU Project was launched in 1984 to develop a complete UNIX style operating system..."

I've used many flavours of "Unix" (and I appreciate that there is no "standard" Unix any more), but I cannot, in all honesty, spot any objective difference between, say, BSD Unix, AIX and Linux in terms of interface and behaviour.

GNU/Linux is far more similar to the various *nix flavours out there than Apple's OS X is to, say, Microsoft Windows XP.

That Linux has been the victim of feature-creep like every other OS does not prevent it from being, fundamentally, a very close relative of all the other "Unix-like" operating systems out there.

If you disagree with my view, I have no quarrel with that; I've never cared for CLI-focused operating systems and have stayed away from them as much as possible for the better part of 20 years. Even those few systems I've used that _did_ have text-based GUIs were machines like the ZX Spectrum, for which the CLI was, in fact, a full-on BASIC interpreter.

(Speaking of which: Digital Research's "GEM" GUI, which was implemented on the Atari ST range, and was essentially a "MacOS Lite", used drop-down rather than pull-down menus. I dug my old STFM out of the attic recently and found that it actually made me notice how 'click-happy' both MacOS and Windows are by comparison. I wonder if there's a reason why "drop-down" lost out to "pull-down" menus.)


Quote:

Quote:

multiple mediocre clones of a commercial GUI based on ageing paradigms

Whereas you, of course, have examples of GUIs that are not based upon aging paradigms, and are great. We all know, after all, that aging paradigms are necessarily bad. The wheel, for example, is being discontinued from next year.


I've said this before and I'm getting tired of repeating myself: LOOK AROUND YOU. User interfaces are *EVERYWHERE*, not just on computers!

You want an example of a good GUI that doesn't follow the traditional "desktop" metaphor? Look at your mobile phone.


Quote:

Quote:

and -- oh yes! -- let's not forget Firefox and Mozilla, which are barely changed from the original Mosaic, let alone IE.

That's plain stupidity. Mozilla is obviously far removed from Mosaic, and obviously superior in parts to IE.


Eh? How? Both display websites on a screen. That Mosaic didn't support Shockwave Flash (or, I think, Frames) isn't the fault of Mosaic, but entirely due to the fact that nobody had dreamed up those standards at the time.

Look at Opera, which _invented_ the much-vaunted tabbed browsing *AND* the mouse gestures used in the Mozilla/Firefox browsers. (And Opera, I might add, also runs on my netBook, my mobile phone and, yes, even Linux.)

You cannot claim a product is "great" merely because it apes features that already existed in other browsers. Again, I contend that the only unique feature of Mozilla/Firefox/Gecko/whatever is that it's free. As in beer. Whoopee. Some "innovation" there.


Quote:

Quote:

The old guard is clearly part of the problem, not the solution. They must be replaced. By force, if necessary.

That must be a joke.


Well done.


Quote:
Quote:

Step Two: New Organisations.

The Free Software Foundation needs to ditch its bias towards Unix and stop blindly following the GNU Project's every whim. The GNU Project is over. They've achieved their aim. Academics have their free OS to toy around with. Let it go and move on.


What an ass. 1) The GNU project is not 'over'.


Er, yes it is. By their own website's assertion, their primary aim is, and I quote (again): "The GNU Project was launched in 1984 to develop a complete UNIX style operating system..."

That operating system exists, and has done so for some time now. It is called "GNU/Linux". I believe some people here may have heard of it.


Quote:

2) The GNU project has not achieved their aim.


The Free Software Foundation and GNU Project were instigated by Stallman in an attempt to recreate the halcyon days of, er, punched cards, paper tape and, presumably, the dubious "fun" of developing for an operating system that was designed in an era when floppy disks were not only 8" in diameter, but also really were floppy.

It's a copy of an operating system designed _by_ programmers, _for_ programmers. FSF and OSS are so blatantly programmer-centric that it's amazing anyone believes otherwise, yet you'd imagine that, over the intervening 20-odd years, we might have seen a few actual advances in programming techniques from these people. But no.

Which brings me back to my original point: GNU is over.

This doesn't mean "wind it up, kick everyone out and switch off the lights". Linux is a perfectly good "Unix-like" OS and I don't expect it to be kicked in the bin overnight. But it isn't the future.

Pick another 'key' project to nail your flag to.

(See how twitchy you guys are about Linux though? This is my point exactly. I'll repeat this once more: I am NOT advocating just terminating all further development of Linux. I AM advocating that it should cease to be The Big Wahoonie of Software Libre.)


Quote:
Quote:

There must also be an end to the constant bickering over "open source" philosophies. The religious wars of the debate _must_ be wiped out ruthlessly and stamped upon; it only gives you a bad name and makes you look like a cult. Quit it. Kick out anyone who refuses to grow the fuck up.

Whilst wiping out, stamping upon and excommunicating anyone who refuses to abide by your rules wouldn't make your organisation a cult?


Hey, I'm just offering some constructive criticism *AND* some suggestions as to a possible future path for you guys. I'm not pointing a gun at your head and forcing you to accept it at face value, but I do feel the personal insults are a little uncalled for.

I couldn't personally give a flying fuck whether OSS and the FSF actually live or die. In fact, I consider them lost causes already and of marginal relevance to the future of computing in general at best.

My personal opinion is that most of the people who promote Open Source are basically just doing it because they love programming. They don't give a shit about anything other than the code. "Open Source" as a philosophy actually reinforces this and appears to condone it. I consider it a flawed philosophy as it tends to result in people who care only for the code; the elegance of the code; the quality of the code; the 'purity' of their algorithms... but who couldn't give a damn about whether anyone other than another programmer actually *uses* their code.

Still, I'm sure others have differing viewpoints and I'm a teacher and writer these days, so my opinions are just that. And freely given too.

--
Sean Timarco Baggaley

[Edited by - stimarco on August 7, 2004 5:57:58 PM]
Sean Timarco Baggaley (Est. 1971.) Warning: May contain bollocks.
Quote:
Original post by C-Junkie
Nope. Putting something like this on the display-side has been dismissed as a possibility, period. Want to know why? Because it is fundamentally inefficient.

Think about what this would mean for something REALLY SIMPLE, like a text editor. You'd have to upload the whole text document to the server. And once it's up there your ideas about not having to save would be shot to hell, because now the graphics system (god knows WHY) has the data! You'd have to transfer the whole text document BACK to the client in order to save it to disk.
That wasn't what I meant.

What I meant is that the conversation between client and server is too voluble. Use aliases to make it less so. An alias would be a single identifier that refers to several atomic server- or client-side operations.
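A rough sketch of what I mean (purely hypothetical, nothing like the real X protocol; every type and name here is made up): the client registers a named batch of primitive requests once, then replays it with a single small message.

/* Hypothetical "alias" scheme: register a batch of drawing requests once,
 * replay them with one small message. Only meant to show how it would cut
 * round trips and wire traffic. */
#include <stdio.h>

typedef enum { OP_RECT, OP_LINE, OP_TEXT } OpKind;

typedef struct {
    OpKind kind;
    int    x, y, w, h;
    const char *text;
} Op;

typedef struct {
    int       id;
    const Op *ops;
    int       nops;
} Alias;

/* Server side: executing an alias is just replaying its stored ops. */
static void server_invoke(const Alias *a)
{
    for (int i = 0; i < a->nops; i++) {
        const Op *op = &a->ops[i];
        switch (op->kind) {
        case OP_RECT: printf("rect %d,%d %dx%d\n", op->x, op->y, op->w, op->h); break;
        case OP_LINE: printf("line %d,%d -> %d,%d\n", op->x, op->y, op->w, op->h); break;
        case OP_TEXT: printf("text \"%s\" at %d,%d\n", op->text, op->x, op->y); break;
        }
    }
}

int main(void)
{
    /* Client registers the "OK button" alias once... */
    static const Op ok_button[] = {
        { OP_RECT, 10, 10, 120, 40, NULL },
        { OP_TEXT, 25, 35, 0, 0, "OK" },
    };
    Alias a = { 1, ok_button, 2 };

    /* ...then every redraw is just "invoke alias 1" instead of resending it all. */
    server_invoke(&a);
    return 0;
}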

Quote:
I don't think eliminating saving is a fundamentally bad idea, as long as we can still choose to "tag" "releases" (using cvs-like terminology).
Absolutely. Incidentally, this addresses Mayrel's question about publishing files, for instance, to the web.

Quote:
come on. Why do we refer to PEOPLE by name?
Bad analogy. People are difficult to describe, so a name is a valuable attribute. Data is inherently regular and about something - that something being inherently more valuable than whatever filename you may give it.

At work our files (for my department) follow a naming pattern of YYYYMMDD(ticker)[-team_suffix][.ext]. I've never heard anybody refer to a file as anything other than the ticker, or the company name in full ("Disney" instead of "DIS", "Gentiva" instead of "GTIV").

YMMV, of course. Nerds [smile]

Quote:
Because [directories are] an extremely simple organizational method.
True, but are they ideal? Can we do better?

That's all I'm asking.

There are initiatives, as you mention, grafting a database onto hierarchy. I just think there's an interesting opportunity in the opposite - grafting the hierarchy onto the database, even if dynamically.

Quote:
Oh yeah, jwz is hardly a "principal actor".
I agree. He made a few contributions here and there. I guess I just got carried away there.
Quote:
Original post by stimarco
You want an example of a good GUI that doesn't follow the traditional "desktop" metaphor? Look at your mobile phone.
That's great. For your mobile phone. It's a nice specialized interface. You cannot do that for a general-purpose PC. You may think things are going to get more specialised, but I call BS. Nobody is going to spring a few hundred dollars for a "Web Browser 2 - The Device" when there's a nice PC sitting right there that they have already.

The current general purpose UI designs we use have room for improvement, but NOTHING will be gained by going off in a radical direction, on dubious principles. UIs are refined, not redefined. (I was going to make that a joke, but it has too much truth to it)
Quote:
Which brings me back to my original point: GNU is over.

This doesn't mean "wind it up, kick everyone out and switch off the lights". Linux is a perfectly good "Unix-like" OS and I don't expect it to be kicked in the bin overnight. But it isn't the future.

Pick another 'key' project to nail your flag to.

(See how twitchy you guys are about Linux though? This is my point exactly. I'll repeat this once more: I am NOT advocating just terminating all further development of Linux. I AM advocating that it should cease to be The Big Wahoonie of Software Libre.)
Linux is a kernel. You have raised nothing against the kernel. You seem to think the underlying GNU system is shit, but it does nothing but provide a really basic means to use the system, and you have again raised no good points, er, excuse me, ANY points, as to why it's "bad."

Oh wait, except "old." "Old" isn't a good enough reason to shun ANYTHING.

As far as I can tell, that was a VERY long-winded and VERY disorganized rant on almost nothing, with no evidence to back up what you claim should be done.
Quote:
Original post by stimarco
This is, I believe, where Oluseyi and I disagree quite clearly. Oluseyi takes the pragmatic approach: "We have Linux, we must use it."
Actually, it's something more like "We have Linux and people know it; let's exploit that."

Windows is an excellent choice, technically, but the problem is the expense and difficulty of truly getting to know the system. I recently learned that the NT kernel doesn't mandate the Win32 API as an exclusive environment - Citrix and SFU are alternative code paths, as it were. That would be an excellent way to develop this.

With your explanation and endorsement, I'll look into Symbian in greater depth.
Quote:
Original post by C-Junkie
You may think things are going to get more specialised, but I call BS. Nobody is going to spring a few hundred dollars for a "Web Browser 2 - The Device" when there's a nice PC sitting right there that they have already.
You're right. And wrong.

Even stimarco's cell phone example is multi-modal. People IM, browse the web, do email and make phone calls on their mobiles. What I envision is a redistribution of functionality. Anything with a screen will be able to browse the web - for example the HD television - while anything with a display and a text input device will be able to do IM and e-mail as well.

That means that the PC will bear less of the burden, and that some people won't even buy PCs in the traditional sense.
Quote:
Original post by Oluseyi
Use aliases to make it less so. An alias would be a single identifier that refers to several atomic server- or client-side operations.
*clicks* Nice. I'm going to think about this. Maybe this could result in a GL-style "display list" extension to X... (and when we have X on GL, it could even be literally a display list... maybe... depending on the kind of data we'd let get "aliased")
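For reference, the GL mechanism I'm thinking of is the standard OpenGL 1.x display list: record a batch of commands once, replay it by id. A minimal GLUT sketch (my own toy; the "widget" is just a rectangle):

/* OpenGL 1.x display lists: record a batch of commands once, replay by id.
 * Compile with: cc lists.c -lGL -lglut */
#include <GL/gl.h>
#include <GL/glut.h>

static GLuint button_list;

static void build_button_list(void)
{
    button_list = glGenLists(1);               /* reserve one list id */
    glNewList(button_list, GL_COMPILE);        /* start recording */
    glBegin(GL_LINE_LOOP);                     /* a rectangle standing in for a widget */
    glVertex2f(-0.5f, -0.2f);
    glVertex2f( 0.5f, -0.2f);
    glVertex2f( 0.5f,  0.2f);
    glVertex2f(-0.5f,  0.2f);
    glEnd();
    glEndList();                               /* stop recording */
}

static void draw_frame(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glCallList(button_list);                   /* replay the whole batch by id */
    glFlush();
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
    glutInitWindowSize(200, 100);
    glutCreateWindow("display list sketch");
    build_button_list();                       /* needs a current GL context */
    glutDisplayFunc(draw_frame);
    glutMainLoop();
    return 0;
}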
Quote:
Quote:
come on. Why do we refer to PEOPLE by name?
Bad analogy. People are difficult to describe
So is data. Seriously. How would you go about NOT naming (for instance) a sound file?

As for the naming scheme used at your workplace, my first thought was "W(hy)TF aren't you using a database for that?" until I realized that was your point.

I can actually imagine a kind of database filesystem. A real simple one, too. Maybe if I get some time someday, I'll look at some people's attempts at this to get some more ideas.

(warning! train of thought: mounts would be considered queries. Kernel would need database support. Decent performance requires caching, and caching would be a nightmare... I think. Have to look into database query caching stuff... and realtime updating of the cache... I'd need directories to preserve the traditional hierarchy for compatibility. How would directories be presented in a database? a link to a different query? that'd get bloated up the wazoo... or would it. Wonder if there are any hierarchical database formats... speaking of formats, how could you possibly select one that'll work for general purpose use... hmmm. ug, how would you support having "columns" for specific data ('genre' on audio, but not source code)... you'd need a different partition for different types of files. that'd suck, unless there's a format with "holes" in it... ...)
Quote:
There are initiatives, as you mention, grafting a database onto hierarchy. I just think there's an interesting opportunity in the opposite - grafting the hierarchy onto the database, even if dynamically.
Actually, as I just thought, that's kind of hard. How would you represent a hierarchy inside a database? It's rather hard to do efficiently... depends on the format again. sheesh.

no wonder this hasn't been done everywhere before, it's extremely difficult.
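For what it's worth, one simple representation is an adjacency list: each entry stores its parent's id, and listing a "directory" is just a query on parent. Deep traversals are where it gets expensive. A sketch with SQLite's C API (the schema and names are invented for the example):

/* Hierarchy inside a database via an adjacency list: each entry stores the
 * id of its parent, so "list a directory" is a query on the parent column.
 * Compile with: cc fsdb.c -lsqlite3 */
#include <sqlite3.h>
#include <stdio.h>

static int print_row(void *unused, int ncols, char **vals, char **names)
{
    (void)unused; (void)ncols; (void)names;
    printf("  %s\n", vals[0] ? vals[0] : "(null)");
    return 0;
}

int main(void)
{
    sqlite3 *db;
    if (sqlite3_open(":memory:", &db) != SQLITE_OK) return 1;

    sqlite3_exec(db,
        "CREATE TABLE entry (id INTEGER PRIMARY KEY, parent INTEGER, name TEXT);"
        "INSERT INTO entry VALUES (1, NULL, 'reports');"        /* a 'directory' */
        "INSERT INTO entry VALUES (2, 1, 'july-2002.txt');"
        "INSERT INTO entry VALUES (3, 1, 'august-2002.txt');",
        NULL, NULL, NULL);

    /* Listing the 'reports' directory is just a query on parent = 1. */
    puts("reports/");
    sqlite3_exec(db, "SELECT name FROM entry WHERE parent = 1;",
                 print_row, NULL, NULL);

    sqlite3_close(db);
    return 0;
}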

This topic is closed to new replies.
