Easy Installation and Uninstallation of Software
I love Linux. I really do. It's very nice to use and develop with. However, there is one aspect which is simply inexcusable: it's impossibly difficult to easily install and uninstall software.
If you've used Linux for any period of time, you'll run into Dependency Hell. Software conflicts abound and things break. This is due, mainly, to the fact that Linux, being a UNIX derivative, has a different philosophy about software than suits most home desktop users. In essence, it's designed around tools rather than applications.
There are good reasons for this. It makes a much better system overall, one that is much cleaner to use and operate. The problem is that tool-based directory structures are simply inadequate for desktop use. On a server, once you set up the environment, you are unlikely to ever change it much. On a desktop system, things are far more dynamic, with the user constantly putting new things on and taking old software off. This, of course, results in the mess that is Dependency Hell.
.RPMs and .DEBs are supposed to be solutions to this issue, but honestly... they rarely work correctly and are overly complex and incompatible with one another. Worse yet, the moment you don't use the system, your entire computer will fall into chaos -- either you use package managers all the time or you can't use them at all. You essentially have a huge database of all the software on your system; in essence, a Windows Registry. And it brings with it all the problems that entails.
There must be a better way.
After thinking on this problem and coming up with a few ideas (most were still far too complex and irritated me), I think I have come up with a solution. It's simple and backwards compatible with the old way of doing things.
Basically, all you need to do is make a distinction between tools and apps. Yes, it's just that simple. Let me illustrate.
You have a game called The_Game and you wish to install it on your system. The install script is really simple; all it does is make subdirectories in the appropriate $PATH directories based on the app name. So...
../bin/The_Game/ would hold all the binaries needed for The_Game
../lib/The_Game/ would hold all the libraries needed
../etc/The_Game/ would hold all the config files needed
... and so on.
Now, when you want to run The_Game, you type in The_Game and the shell will search ../bin for a 'The_Game' directory _first_ (and an executable file of the same name), then, if nothing is found, default to the plain ../bin.
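Something like this minimal sketch, assuming the lookup is done by a small wrapper rather than the shell itself (the directory list and the run_app name are just illustrative):

run_app() {
    name=$1; shift
    for dir in /usr/local/bin /usr/bin /bin; do
        # application layout: bin/<app>/<app> takes precedence
        if [ -x "$dir/$name/$name" ]; then
            exec "$dir/$name/$name" "$@"
        # classic layout: plain bin/<tool>
        elif [ -x "$dir/$name" ]; then
            exec "$dir/$name" "$@"
        fi
    done
    echo "run_app: $name: not found" >&2
    return 127
}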
This has many advantages. First, it's simple. No hacked pseudo-registry is needed. All binaries are _still_ in ../bin, all libraries are _still_ in ../lib and so on, so you'll still maintain the Unix-like separation of files by purpose, rather than have some kind of Windows-like directory structure where every application has its own directory in which it keeps executables, libraries, config info and so on.
Secondly, it's easy to install applications, either in binary form or from compiled source. No need to even bother with dependency conflicts, really; if you need a special version of, say, libc++, each application can have its own personalized copy in ../lib/. No conflicts!
Thirdly, it's easy to _uninstall_ things. Basically, it's just a recursive rm on each of the main directories and BOOM! The whole program is gone without breaking anything.
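For the example above, an uninstall could boil down to something like this (paths purely illustrative, assuming the app was installed under /usr/local):

# Hypothetical uninstall under the proposed layout
rm -rf /usr/local/bin/The_Game /usr/local/lib/The_Game /usr/local/etc/The_Game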
It's also quite easy to see a list of the applications on your system. Just do a directory listing of ../bin and there you go. Much better than relying on a DB, and it can be used by any installation/uninstallation manager.
Patches are suddenly easy as well; just overwrite the appropriate files in the application directories.
And, on the off chance there is a dependency conflict, it's easy to resolve. Just recompile the offending library or whatever and stick it in the appropriate application directory. No need to bother sorting out your entire system!
There are, however, a few problems with this methodology.
First off, it's slightly inefficient. Since it will check for a directory of a particular program first, then search the generic bin, that can lead to a lot of wasted lookups if you use a lot of plain tools and not applications. There are probably a lot of ways around this small limitation, though (maybe some use of symlinks on common tools like ls and less?), and the performance hit shouldn't be too much for the monster computers of today.
Secondly, it's not POSIX standard. This doesn't seem to be much of a problem, however, since it's still backward compatible with the Old Way and Linux in particular has never been afraid to go against POSIX if it leads to a better system.
Third, what precisely is an 'application' and what is a 'tool'? There's some murkiness there. I don't think anyone would deny that OpenOffice is an application or ls is a tool. But what about GCC? Is it a tool or an app? XFree86? KDE? GNOME? MESA? Are these tools or apps? You could make an argument either way. In the end, this would probably need to be defined by something like the LSB.
So the challenges to implement this system are there, but not insurmountable, and I think that the added streamlined ability to have applications Just Work would suddenly make Linux far more palatable as a desktop system (as it is now, it's an absolute pain to get anything installed and then uninstalled again; RPMs and DEBs just don't work and have other flaws).
What do you think?
"Diplomacy is the ability to tell someone to go to tell in such a way as they look forward to the trip."
September 28, 2002 11:01 AM
You've just described /opt with some (notable) differences.
quote: Original post by Sivle
I love Linux. I really do. It's very nice to use and develop with. However, there is one aspect which is simply inexcusable: it's impossibly difficult to easily install and uninstall software.
Quite frankly, and at the risk of sounding like an hypocritical fanatic, I only experienced such problems with old RPM based distros.
quote:
If you've used Linux for any period of time, you'll run into Dependency Hell. Software conflicts abound and things break. This is due, mainly, to the fact that Linux, being a UNIX derivative, has a different philosophy about software than suits most home desktop users. In essence, it's designed around tools rather than applications.
I use Debian 'unstable' on one of my workstations, and the only trouble I have with it is the occasional missing dependency. That's the price for running what is basically a beta system. It never happened with my Debian 'stable' server, nor with any of my Open/Net/FreeBSD servers/workstations. Even with Debian unstable, it's very rare.
quote:
There are good reasons for this. It makes a much better system overall, one that is much cleaner to use and operate. The problem is that tool-based directory structures are simply inadequate for desktop use. On a server, once you set up the environment, you are unlikely to ever change it much. On a desktop system, things are far more dynamic, with the user constantly putting new things on and taking old software off. This, of course, results in the mess that is Dependency Hell.
Servers also run applications (web servers, userland NFS, or even X and office suites in the case of application servers). If installing applications on a desktop computer leads to problems, I don't understand how it wouldn't on servers. Could you elaborate?
quote:
.RPMs and .DEBs are supposed to be solutions to this issue, but honestly... they rarely work correctly and are overly complex and incompatible with one another.
They weren't meant to be interoperable, just like MSI files aren't interoperable with ZIP. You can, however, convert from one format to another (with some work), but of course it isn't what newcomers would call friendly.
The standard is now RPM, and distributions are supposed to shift to RPM in order to be standard. I really wish another package management system had been picked (pkg, apt, etc).
Again, Debian and the *BSDs never failed me, and neither did Slackware back when I was using it.
quote:
Worse yet, the moment you don't use the system, your entire computer will fall into chaos -- either you use package managers all the time or you can't use them at all.
You can build packages from source and binary. It requires quite some work though (Slackware is a notable exception here).
I installed a few programs from source on my Debian workstation and I don't have any problem. 'make uninstall' will clean the files it has installed, but I always installed in /opt/package_name anyway, so rm -rf /opt/package_name and I'm done.
In fact, I could even overwrite what apt has installed. When uninstalling, it wouldn't delete the files but rather give me a list of what has not been removed (different timestamp). The package, however, would be gone from the list and could be reinstalled again.
quote:
You essentially have a huge database of all the software on your system; in essence, a Windows Registry. And it brings with it all the problems that entails.
Agreed, though there is a major exception: it only holds information about the packages. Nothing about the system and such. In that respect, it is safer than the Windows registry (no risk of damaging the whole system by messing with the DB).
quote:
There must be a better way.
Sun's package management is nice. It is used in one form or another by the free *BSDs.
quote:
(snipped)
You have a game called The_Game and you wish to install it on your system. The install script is really simple; all it does is make subdirectories in the appropriate $PATH directories based on the app name. So...
../bin/The_Game/ would hold all the binaries needed for The_Game
../lib/The_Game/ would hold all the libraries needed
../etc/The_Game/ would hold all the config files needed
… and so on.
What if you have a package with more than one binary? Say openoffice.org? Would it create bin/openoffice.org/swriter, bin/openoffice.org/scalc, etc? In that case, the search will fail ("no /usr/local/bin/swriter/swriter").
What if I rename the binary to prevent collision? Or if I use ./configure --program-transform-name? Would it become bin/package_name/modified_name? Or bin/modified_name/modified_name?
Don't get me wrong, I have nothing against your idea, I'm merely pointing out potential problems.
quote:
Now, when you want to run The_Game, you type in The_Game and the shell will search ../bin for a 'The_Game' directory _first_ (and an executable file of the same name), then, if nothing is found, default to the plain ../bin.
Games differ from other kinds of applications here. Binaries belong in /usr/local/games/game_name/, which is again easy to uninstall with 'rm -rf'.
Also, that scheme would bypass $PATH, which is somewhat rude: if I put /usr/local/mounted_share/bin at the beginning of $PATH, it's because I want binaries in that folder to have precedence over others.
It also raises a small security issue: I could create a /usr/bin/ls/ls trojan, for instance. OK, if I can do that, I can also overwrite /bin/ls, but that would be visible with a simple checksum. The former wouldn't. This is why $PATH is important. Even if I create a /usr/bin/ls now, it's /bin/ls that will be invoked, because of its precedence in the path.
quote:
(snipped)
Secondly, it's easy to install applications, either in binary form or from compiled source. No need to even bother with dependency conflicts, really; if you need a special version of, say, libc++, each application can have its own personalized copy in ../lib/<programname>. No conflicts!
It would also be a maintenance nightmare. What happens if a big, must-have patch for libc (or any other lib) comes out? I'll have to patch, reinstall it all over the place and pray that I didn't forget any.
If you mean the applications will have their own and thus not need the patch, you're right, but this can lead to severe security/stability/etc issues: forget to patch your lib/secure_server/libcrypt.so and your next transactions will use a vulnerable library, possibly even infected with a trojan.
quote:
It's also quite easy to see a list of the applications on your system. Just do a directory listing of ../bin and there you go. Much better than relying on a DB, and it can be used by any installation/uninstallation manager.
Here I'd rather use the package manager. That way I don't have to look in /bin, /sbin, /usr/bin, /usr/local/sbin, etc.
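For comparison, the standard one-shot queries on the two major package systems (shown only as an aside; nothing here is specific to the proposal):

dpkg -l     # list every installed package on a Debian-based system
rpm -qa     # list every installed package on an RPM-based system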
quote:
Patches are suddenly easy as well; just overwrite the appropriate files in the application directories.
I don't understand. I can overwrite it even if it is in /usr/bin. Could you clarify? Also see above. Patching would actually be a problem.
quote:
And, on the off chance there is a dependency conflict, it's easy to resolve. Just recompile the offending library or whatever and stick it in the appropriate application directory. No need to bother sorting out your entire system!
Or you could set LD_LIBRARY_PATH. But really, I'd rather not. Some developers would start shipping their modified version of library X to work with their application Y, and it would totally defeat the purpose of _shared_ objects (granted, they can still modify them and ship them under another name).
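The usual way that is done is a small wrapper script around the real binary; a minimal sketch, with the application name and install paths purely illustrative:

#!/bin/sh
# Hypothetical wrapper: point the loader at the app's private libraries first.
LD_LIBRARY_PATH=/usr/local/lib/The_Game${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
export LD_LIBRARY_PATH
exec /usr/local/bin/The_Game/The_Game "$@"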
Also, I don't see how this would make it easier for the desktop user: most wouldn't want to touch a compiler with a 10' pole.
quote:
(snipped)
Secondly, it's not POSIX standard. This doesn't seem to be much of a problem, however, since it's still backward compatible with the Old Way and Linux in particular has never been afraid to go against POSIX if it leads to a better system.
That's a big problem for me. I care about standards. Without them I wouldn't even be able to write portable code at all. It's already tough with the standards.
Companies also care about standards (interoperability). Windows NT has a POSIX subsystem only because companies required it.
quote:
Third, what precisely is an 'application' and what is a 'tool'? There's some murkiness there. I don't think anyone would deny that OpenOffice is an application or ls is a tool. But what about GCC? Is it a tool or an app? XFree86? KDE? GNOME? MESA? Are these tools or apps? You could make an argument either way. In the end, this would probably need to be defined by something like the LSB.
Another debate in the line of vi vs emacs, kde vs gnome, linux vs bsd, BSD vs SysV, ext3fs vs Reiserfs, etc.
quote:
So the challenges to implement this system are there, but not insurmountable, and I think that the added streamlined ability to have applications Just Work would suddenly make Linux far more palatable as a desktop system (as it is now, it's an absolute pain to get anything installed and then uninstalled again; RPMs and DEBs just don't work and have other flaws).
What do you think?
Well, I pointed out some of the obvious problems, and I'm sure there are many more (subtle) ones. Such a project would require a _lot_ of planning. I'm not opposed to such an idea though, as long as it doesn't mess with 'standard' directories. I think it would be nice if applied to, say, /opt.
Also, the industry is conservative by nature, as all changes introduce new problems, sometimes nasty ones. It would take a while before your idea is widely adopted. Until then, it would have to be optional and not the default. That would somewhat defeat the ease of use, as end users would have to turn it on somehow.
Finally, keep in mind that packages are more than glorified archives. They make it easier to verify a system's integrity (checksums/hashes), repair, automatically update, etc. Your system could actually be implemented with package managers and benefit from their features (simply build the packages so that they will be installed in /wherever/app_name rather than /usr).
Sorry about the (very) long and confused post, I'm very tired.
Hope this helps.
"You''ve just described /opt with some (notable) differences."
Actually, I did look at /opt first, but didn''t really think it solved the underlining problem. Perhaps I''m mistaken, but isn''t everything installed in /opt in it''s own directory so it, in essence, acts like the Windows naming convention? This would seem to break the normal UNIX logic of having specialized directories for different aspects of the programs on a system.
"Quite frankly, and at the risk of sounding like an hypocritical fanatic, I only experienced such problems with old RPM based distros."
Hmm. I've still had problems with both Redhat and Mandrake. RPMs never seem to work correctly. Though I do admit that Mandrake's URPMI does seem to do a much better job recently.
"I use Debian 'unstable' on one of my workstations, and the only trouble I have with it is the occasional missing dependency. That's the price for running what is basically a beta system. It never happened with my Debian 'stable' server, nor with any of my Open/Net/FreeBSD servers/workstations. Even with Debian unstable, it's very rare."
Yes, this is true, I admit. .DEBs seem to be much better than .RPMs. This is likely because .DEBs are mainly used just for Debian while .RPMs are utilized by a lot of different distros. Nonetheless, it still strikes me as an overly complex method for something which should be very simple.
"Servers also run applications (web servers, userland NFS, or even X and office suites in the case of applications servers). If installing applications on a desktop computer leads to problems, I don''t understand how it wouldn''t on servers. Could you elaborate?"
Forgive me, I wasn''t clear. I meant to say that once software is installed on a server, it generally doesn''t change much. If you have a webserver, it''s going to serve web content and will have Apache and any mods you want then afterwards go about it''s merry business without much in the way of maintence or change. Desktop systems on the other hand are always changing; users typically want to upgrade applications constantly, whether they need to or not. Servers have a more ''it''s not broken, don''t fix it'' mentality.
"You can build packages from source and binary. It requires quite some work though (Slackware is a notable exception here)."
I know it's possible, but I've never bothered to puzzle it out as yet. Out of curiosity, how is Slackware different in this respect?
"I installed a few programs from source on my Debian workstation and I don't have any problem. 'make uninstall' will clean the files it has installed, but I always installed in /opt/package_name anyway, so rm -rf /opt/package_name and I'm done."
Not all programs have an uninstall makefile, though. In the methodology I describe, it wouldn't really matter; a package system could get a listing of everything installed just by listing the bin directory (or maybe usr would actually be better... hmm) and the resulting removal logic would be apparent and much simpler.
"In fact, I could even overwrite what apt has installed. When uninstalling, it wouldn''t delete the files but rather give me a list of what has not been removed (different timestamp). The package, however, would be gone from the list and could be reinstalled again."
This behavior is fine, as far as I can see. The main advantage of doing things the way I describe is that one can use any package manager they want and they will all work the same. There isn''t a need to have everything installed with the same packager and you can even pick and choose which package manager you want to use per application. Or you can even just do a make install and forgo packages. Or even just untar a tarball. How an application is installed in the system is immaterial, as is how it is uninstalled. If all else fails, they can go in by hand and do things.
"Agreed, though there is a major exception: it only holds informations about the packages. Nothing about the system and the such. In that respect, it is safer than the Windows registry (no risk to damage the whole system if messing with the DB)."
Yes, this is very good. But I still have this gut feeling it''s an overly-engineered solution. It''s not _bad_, it just could be so much better.
"Sun''s packages management is nice. It is used in one form or another by the free *BSD."
Could you describe it? I''m rather curious (is it Portage?).
"What if you have a package with more than one binary? Say openoffice.org? Would it create a bin/openoffice.org/swriter, bin/openoffice.org/scalc, etc? In that case, the search will fail ("no /usr/local/bin/swriter/swriter"."
Good question! My inital thought is that each individual application within the Open Office suite would get it''s own directory. Since they are seperate for the most part and the only real connection they have is that they''re all packaged together, it makes sense to split each one off into their own directory.
"What if I rename the binary to prevent collision? Or if I use ./configure --program-transform-name? Would it become bin/package_name/modified_name? Or bin/modified_name/modified_name?"
Another excellent question! My thought is that it would be the latter.
"Don't get me wrong, I have nothing against your idea, I'm merely pointing out potential problems."
Not at all! That's the only way ideas can improve. I wouldn't have put it up here for viewing if I didn't want people to rip it to pieces. It's quite possible I'm just being a ninny and the idea is total junk, after all.
"Games differ from other kinds of applications here. Binaries belong in /usr/local/games/game_name/, which is again easy to uninstall with 'rm -rf'.
Also, that scheme would bypass $PATH, which is somewhat rude: if I put /usr/local/mounted_share/bin at the beginning of $PATH, it's because I want binaries in that folder to have precedence over others."
I'm not sure I understand. How does $PATH get overridden in this instance?
"It also raises a small security issue: I could create a /usr/bin/ls/ls trojan, for instance. OK, if I can do that, I can also overwrite /bin/ls, but that would be visible with a simple checksum. The former wouldn't. This is why $PATH is important. Even if I create a /usr/bin/ls now, it's /bin/ls that will be invoked, because of its precedence in the path."
This is something I didn't think of and you're right, it's a glaring problem. Hmm... I'll have to think more on this to see if I can develop an easy solution. If you have any suggestions, feel free to pipe up!
"It would also be a maintenance nightmare. What happens if a big, must-have patch for libc (or any other lib) comes out? I'll have to patch, reinstall it all over the place and pray that I didn't forget any."
I used libc only as an example. This facility really shouldn't be used unless it's absolutely required, for the reasons you state. I see it mostly as a fallback mechanism. A last-ditch response to get the damn thing working if you need, say, a libc compiled with GCC 2.95 just for one particular program, but don't want to have to go back and change your entire system to the older compiler. Such is the case with Mozilla; in order to get the Java plugin to work, Mozilla has to have access to the GCC 2.95-compiled libc. It's for use when anomalies arise, not for general use.
"If you mean the applications will have their own and thus not need the patch, you''re right, but this can lead to severe security/stability/etc issues: forget to patch your lib/secure_server/libcrypt.so and your next transactions will use a vulnerable library, possibly even infected with a trojan."
Yes, it''s a bit risky, I admit. But we''re talking about desktop machines here; security and stability are secondary to ease of use. And, like I said, it shouldn''t be required all that often.
"Here I''d rather use the package manager. That way I don''t have to look in /bin, /sbin, /usr/bin, /usr/local/sbin, etc."
That''s just it, you can if you want. But if you don''t, you can remove things by hand and not have to worry.
"I don''t understand. I can overwrite it even if it is in /usr/bin. Could you clarify? Also see above. Patching would actually be a problem."
I meant to say you''d be free from the normal headaches accompanied with dependencies; rather than having to resolve them all, you can fairly confidently just copy a new executible or whatever into the directories.
"Or you could set LD_LIBRARY_PATH. But really, I''d rather not. Some developers would start shipping their modified version of library X to work with their application Y, and it would totally defeat the purpose of _shared_ objects (granted, they can still modify them and ship them under another name)."
Again, this would be only used when nessisary. You wouldn''t have to hold up your entire system because one critical application doesn''t have new bindings that come with a library or whatever. Likewise, you wouldn''t have to wait for any other packages to update in order to use new wiz-bang libraries in those programs that are ready for it.
"Also, I don''t see how this would make it easier to the desktop user: most wouldn''t want to touch a compiler with a 10'' pole."
True. But it''s still easier than trying to deal with dependencies. A tech support guy in the trenches could easily tell a desktop user the commands to type in to compile a specific library if it doesn''t seem to work. Resolving dependencies is orders of magnatude more complex.
"That''s a big problem for me. I care about standards. Without them I wouldn''t even be able to write portable code at all. It''s already tough with the standards."
True, but as I said the methodology is still compatable with the old way and Linux doesn''t really follow POSIX too well. Pthreads have been non-POSIX compliant since it''s creation, after all (though this should change in the 2.6 kernal).
"Another debate in the line of vi vs emacs, kde vs gnome, linux vs bsd, BSD vs SysV, ext3fs vs Reiserfs, etc ."
Hmm... I appearently wasn''t clear here. It wasn''t an attempt to provide a This vs That, but rather give examples of things which might be considered tools or might be considered apps. You could make arguments either way for these things. My question is basically where does one draw the line between an application and a tool?
"Well I pointed some of the obvious problems, and I''m sure there are many more (subtle) ones. Such a project would require a _lot_ of planning. I''m not opposed to such an idea though, as long as it doesn''t mess with ''standard'' directories. I think it would be nice if applied to, say, /opt."
Could you expound on this a bit? How do you mean? Are you saying making a mini directory structure in /opt? That actually might not be a bad idea... hm...
"Also, the industry is conservative by nature, as all changes introduce new problems, sometimes nasty ones. It would take a while before your idea is widely adopted. Until then, it would have to be optional and not the default. That would somewhat defeat the ease of use, as end users would have to turn it on somehow."
Well, it would probably be the job of the distros to decide whether or not to use the system.
"Finally, keep in mind that packages are more than glorified archives. They make it easier to verify a system's integrity (checksums/hashes), repair, automatically update, etc. Your system could actually be implemented with package managers and benefit from their features (simply build the packages so that they will be installed in /wherever/app_name rather than /usr)."
Exactly! That is what I was trying to get at and apparently failed to convey. The idea here is that one can use package management systems if they want to, to make things easier, but they wouldn't be _dependent_ on them for system integrity. It would also allow different package managers to interact without really knowing or caring how a program got where it is.
"Sorry about the (very) long and confused post, I'm very tired."
No problem! I was tired when I wrote it.
"Hope this helps."
It does, very much! Thank you for your informative and thoughtful reply.
"Diplomacy is the ability to tell someone to go to tell in such a way as they look forward to the trip."
The problem isn't the rpm or deb formats, per se. It's the applications that use them. The reason the problems don't crop up with Debian is that they tell everyone to use apt-get, while Red Hat tells everyone to use up2date or rpm, neither of which are very smart programs. (There is also apt-rpm.)
Alternatively, source-based distributions (Gentoo, Source Mage) tend to use very intelligent package managers that don't have dependency hell. Often, they have facilities to resolve the dependencies transparently. The downside is very cryptic errors (compilation or linking errors) when it doesn't work perfectly.
--- New info keeps brain running; must gas up!
September 28, 2002 07:39 PM
A few notes on this well-written reply...
quote: Original post by Anonymous Poster
You can build packages from source and binary. It requires quite some work though (Slackware is a notable exception here).
There's a nice package called "checkinstall" which mostly automates the process of building debs/rpms/etc from source packages. It's not foolproof, but it's rather nice.
quote:
Or you could set LD_LIBRARY_PATH.
Indeed, some packages do. Try "less `which mozilla`" sometime.
quote:
That's a big problem for me. I care about standards. Without them I wouldn't even be able to write portable code at all. It's already tough with the standards.
Heh, tell me about it... Right now, I'm writing a package which has to target two different embeddable Scheme interpreters (MzScheme and Guile). The MzScheme side is done, the Guile side was just begun today, and I'm busily sprinkling my code with #+ and #- tests (that read-macro itself written using a non-standard extension in Guile) to conditionally compile in various parts for the different interpreters....
Of course, Scheme's a special case as its standard is so minimal as to be nearly useless by itself. Nice language though, especially if you include the various SRFIs.
quote:
Companies also care about standards (interoperability). Windows NT has a POSIX subsystem only because companies required it.
Not quite true. The POSIX subsystem is there because the US government required it for some certification or other. No one ever uses the NT POSIX subsystem per se. (It's completely different from the POSIX support in the MSVC C runtime library, which is actually useful if you're unfortunate enough to be a Windows developer.)
Last I checked, in fact, it was recommended that administrators uninstall the POSIX subsystem because it's nothing but a potential security hole.
quote:
Another debate in the line of vi vs emacs, kde vs gnome, linux vs bsd, BSD vs SysV, ext3fs vs Reiserfs, etc.
Scheme vs. Lisp, procedural vs. object-oriented, Perl vs. Just About Anything Else... these things are what keep life interesting.
September 29, 2002 03:12 AM
Original A.P.
Yes, forgive me, I wasn't clear. Also, /opt is actually part of the FHS, which is now being used not only on GNU/Linux but also on proprietary UNIX systems. I guess another directory could do it though. /clean, maybe.
I haven't used any RPM-based distro in a long while, hence why I said "old RPM based distros": I haven't used any in years and have no idea how they behave nowadays. IIRC, the last one was RedHat 4.x. I'm sure they've vastly improved their system since then.
My mistake, I understand, and you're right. Still, keep in mind that an application server will run most/all of the desktop computer's applications and will require frequent updates. Of course, here the end user isn't in charge of it, and I think it explains a lot: the administrator is supposed to know his system (which is one of the reasons why so many people have problems with Linux: they went from lambda user to *NIX administrator in 30 minutes of installation).
I don't recall the name of the commands to be issued, as it has been a while, but the idea is to create a directory hierarchy somewhere (like "/tmp/usr/local/bin/", "/tmp/usr/local/lib", etc), copy the files of your package into that "virtual" hierarchy and create symlinks to the libraries/binaries it depends on. Then run the packager on it: it will follow the symlinks and count them as dependencies, then pack and compress your files. When unpacking, the installer will simply unpack to / and all the files will go in the appropriate /usr/local/ directory (after a checksum etc).
I'm not sure this is clear at all.
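Roughly, the staging idea looks something like this (every name here -- the staging directory, the files, the final tar step -- is illustrative; the real packaging command differs per system):

# Hypothetical staging build; none of this is a real packager's interface.
STAGE=/tmp/stage
mkdir -p "$STAGE/usr/local/bin" "$STAGE/usr/local/lib"
cp my_app "$STAGE/usr/local/bin/"                                # files belonging to the package
cp libmy_app.so "$STAGE/usr/local/lib/"
ln -s /usr/local/lib/libpng.so "$STAGE/usr/local/lib/libpng.so"  # symlink marks a dependency
tar -C "$STAGE" -czf my_app-1.0.tgz .                            # stand-in for the real packaging step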
True, but a lot of packages already install their libraries in /usr(/local)/lib/package_name, while their binaries are in the standard places (a bin/). How would your system handle these? Or are you suggesting that these packages have to be tweaked to install their binaries in their own directory? That would make "cross-distro" development annoying unless they all provide a 'package-config' a la GTK/Gnome/SDL/etc.
Understood.
No, I don't know where ports come from, actually. Maybe it's another Sun thing, maybe it's unique to *BSD. 'pkg' is what I'm referring to. In order to install package XYZ, you'd simply issue this:
pkg_add ftp://ftp.someserver.com/pub/package/package-1.0.0.pkg
or if you already have the file:
pkg_add package-1.0.0.pkg
Ports are even nicer: they'll attempt to fetch it from a list of known mirrors and fall back to the source code if needed (and do all the fetch-unpack-make-make install-etc for you), while also installing the dependencies. No matter how it was obtained, the package is now installed. Here you don't even need to care about the backend: it could use pkg, apt, urpm, etc... and still work.
The downside to pkg is that you have to know the exact name of the file when fetching. Ports have another problem: a _huge_ ports/ subdirectory sleeping on the HD (not convenient for low-end systems, handheld devices and the such).
Of course, they both still suffer from the flaw you described: if I simply grab a source archive and build it myself, pkg/ports/etc won't help at all.
Be careful not to run out of inodes. I don't see this being a problem on modern computers, though. It could be an issue on (very) old computers as well as handheld devices (again).
Imagine I have this in my path: /usr/bin:/usr/local/bin:/usr/local/sbin. Now if I want to run 'fubar', which is located in /usr/bin, I'll simply type 'fubar' at the prompt. If, however, there is a /usr/local/bin/fubar/fubar, that one will be selected in place of the one I expected to run. This is what led to the "security issue" below. Unless I misunderstood again.
Sadly, I don't think there is an easy one. '.' was removed from the path on most *NIX systems for more or less the same reason.
I guess that you could simply apply your idea to lambda users, while UID/GID 0 users would be ignored and only use their $PATH. That wouldn't resolve the problem at all, but at least it wouldn't jeopardize commands executed by the administrators. Of course, problems would arise with SUID/SGID programs.
OK.
I'd usually hammer you with "your desktop system should be secured as well, you #@µ%$!" rhetoric, but this is not the debate, so I'll try to set it aside. As long as the feature is optional and the distro's installer clearly states that it is a potential security hole (like so many other things anyway), I'm OK with it. This isn't your responsibility anyway, so my point is irrelevant.
I misunderstood again; I totally agree. In fact, it's been the cause of my headaches a while ago: Qt3-dev required libpng2 but 99% of my other applications/libraries needed libpng3. I don't use Qt much myself (mostly FLTK and GTK+), but this simple dependency problem could easily be fixed with your idea, while the package manager gets in the way. Of course, I could have fixed it manually quite easily, but I like to keep my systems clean of 'hacks' when possible.
As long as it doesn't get in the way, I see no problem with it. Keep in mind that we're mostly discussing userland here. The kernel developers aren't concerned with the file system hierarchy (besides some specific things like permissions, ACLs, etc). You can see, however, that they're trying to make Linux (the kernel) as POSIX as possible.
That's what I was trying to point out. People will disagree on this. Some will call GCC a tool, others an application. I guess it would require yet another team to draw a line and lay down rules, or more likely the FHS people would have to work on it.
Truth is, there is no difference. People usually call small, non-interactive programs "tools", but some include everything that performs only specific tasks, or everything that they seldom use. For instance, 'nslookup' can be used both interactively and in batch mode. I'd tend to call it a tool myself, but you might think differently.
Some will call a game's level editor a tool, because it's not required in order to use the game, but it adds possibilities.
Yes, that's what I meant. If /opt is not present, you don't even need to check for /opt/bin/binary_name/binary_name. If it is present, it will have a minimal impact on performance because you only need to check under that directory, not under /bin, /usr/bin and /usr/local/bin.
/opt is now standardized, but you could come up with a new directory.
Yes, but it would be problematic. Say I install a system without that option turned on, then later decide to install the module. Now what does the package manager do? Install subsequent packages following your new layout? What if I find out that I don't like it/it doesn't work as expected? Will the packager reinstall the packages? Are they going to break? This could lead to problems worse than missing dependencies.
Of course it could be done, with planning, but it would also require a major effort from distributions: they'll have to adapt their package system (because your module could be installed from source).
No you didn''t, I was tired .
As long as they all know about your system, yes. If they don''t, they''ll simply install where they see fit. Hence the need for a (semi-) standard layout. We must also remember that there is no magic secret to keep a system clean: know what you need, install it and nothing else. I found out that "desktop computer user" and "system integrity" _tend_ to be mutually exclusive .
Finally, it''s worth noting than a few programs don''t care about standards at all. They would hardly be manageable by your system.
For instance, most of D.J. Bernstein packages go in /package, /command and /doc. I personally hate it because:
1) my / partition is usually very small,
2) if I want to give them a partition, I have to create 3 new ones, whereas /usr/bin, /usr/lib, etc typically lie on the same physical partition/network share,
3) or I have to symlink them to other directories, effectively demonstrating that his layout is useless.
But the packages are very easy to uninstall. If such a layout was standarized/widespread, it would become common to have 3 new partitions, and after a while nobody would complain.
Of course, this also have weaknesses. If your system was to use such a layout (or a /opt like), it will hardly be able to deal with chrooted programs (like apache installed in /var/www etc).
Anyway, I''m sure you''ll manage to work something out, but make sure to test it for a while before releasing even a draft, or it will likely be ripped apart by the "community" (cf. the Linux kernel development mailing list for fierce battles about "your idea sucks".
quote: Original post by Sivle
Perhaps I'm mistaken, but isn't everything installed in /opt placed in its own directory so that it, in essence, acts like the Windows naming convention? This would seem to break the normal UNIX logic of having specialized directories for the different aspects of the programs on a system.
Yes, forgive me, I wasn't clear. Also, /opt is actually part of the FHS, which is now used not only on GNU/Linux but also on proprietary UNIX systems. I guess another directory could do it, though. /clean, maybe.
quote:
Hmm. I've still had problems with both Red Hat and Mandrake. RPMs never seem to work correctly, though I do admit that Mandrake's urpmi does seem to do a much better job recently.
I haven't used any RPM-based distro in a long while, which is why I said "old RPM-based distros": I haven't used one in years and have no idea how they behave nowadays. IIRC, the last one was Red Hat 4.x. I'm sure they've vastly improved their system since then.
quote:
Yes, this is true, I admit. .DEBs seem to be much better than .RPMs. This is likely because .DEBs are mainly used just for Debian while .RPMs are utilized by a lot of different distros. Nonetheless, it still strikes me as an overly complex method for something which should be very simple.
Well, assuming that someone would write a GUI front end to dpkg (there is probably one already), one could easily download a package from the Debian archives and/or elsewhere (unofficial packages), double-click it and let it install itself. Of course, if dependencies are missing, it won't work. The beauty of apt is that it will list the dependencies and offer to fetch and install them.
The problem here is uninstalling the thing. While removing package XYZ is easy on Debian (apt-get remove [--purge] XYZ), it won't remove its dependencies, even if nothing depends on them anymore. A few programs exist that help with this (like deborphan), but most people don't know about them or expect the removal to be automatic.
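To make that concrete, here is roughly what the cycle looks like on a Debian box (the package name "fubar" is made up for illustration):

dpkg -i fubar_1.0-1.deb                        # install a downloaded package; fails if dependencies are missing
apt-get -f install                             # let apt fetch and install whatever dependencies were missing
apt-get remove --purge fubar                   # remove it again, configuration files included
deborphan                                      # list libraries that nothing depends on anymore
deborphan | xargs apt-get -y remove --purge    # ...and sweep those orphans out

The last two steps are exactly the part most people never do by hand.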
Forgive me, I wasn't clear. I meant to say that once software is installed on a server, it generally doesn't change much.
(snipped)
My mistake, I understand, and you're right. Still, keep in mind that an application server will run most or all of the applications a desktop computer would, and will require frequent updates. Of course, here the end user isn't in charge of it, and I think that explains a lot: the administrator is supposed to know his system (which is one of the reasons so many people have problems with Linux: they went from ordinary user to *NIX administrator in 30 minutes of installation).
quote:
I know it's possible, but I've never bothered to puzzle it out yet. Out of curiosity, how is Slackware different in this respect?
I don''t recall the name of the commands to be issued as it has been a while, but the idea is to create a directory hierarchy somewhere (like "/tmp/usr/local/bin/", "/tmp/usr/local/lib", etc), copy the files of your package in that "virtual" hierarchy and create symlinks to the libraries/binaries it depends on. Then run the packager on it: it will follow the symlinks and count them as dependencies, then pack and compress your files. When unpacking, the installer will simply unpack to / and all the files will go in the appropriate /usr/local/ directory (after a checksum etc).
I'm not sure this is clear at all; a rough sketch follows.
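Assuming I remember the tool names right (makepkg to build, installpkg to install), the staging approach looks more or less like this, with the package and file names made up:

mkdir -p /tmp/staging/usr/local/bin /tmp/staging/usr/local/lib
cp fubar /tmp/staging/usr/local/bin/             # copy the built files into the "virtual" hierarchy
cp libfubar.so.1 /tmp/staging/usr/local/lib/
cd /tmp/staging
makepkg /tmp/fubar-1.0.tgz                       # pack and compress the staged tree
installpkg /tmp/fubar-1.0.tgz                    # later: unpacks everything relative to /, so files land under /usr/local/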
quote:
Not all programs have an uninstall makefile, though. In the methodology I describe, it wouldn't really matter; a package system could get a listing of everything installed just by listing the bin directory (or maybe usr would actually be better… hmm) and the resulting removal logic would be apparent and much simpler.
True, but a lot of packages already install their libraries in /usr(/local)/lib/package_name, while their binaries are in the standard places (a plain bin/). How would your system handle these? Or are you suggesting that these packages would have to be tweaked to install their binaries in their own directory? That would make "cross-distro" development annoying unless they all provide a 'package-config' à la GTK/Gnome/SDL/etc.
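For what it's worth, a '*-config' script is just a tiny helper a package ships so builds can find its private directories; something along these lines (all names hypothetical):

#!/bin/sh
# fubar-config: hypothetical helper installed by the "fubar" package
case "$1" in
  --cflags) echo "-I/usr/lib/fubar/include" ;;
  --libs)   echo "-L/usr/lib/fubar -lfubar" ;;
  *)        echo "usage: fubar-config --cflags|--libs" >&2; exit 1 ;;
esac

A build would then just do: gcc `fubar-config --cflags` -o myapp myapp.c `fubar-config --libs`.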
quote:
(snipped)
How an application is installed in the system is immaterial, as is how it is uninstalled. If all else fails, they can go in by hand and do things.
Understood.
quote:
Could you describe it? I'm rather curious (is it Portage?).
No, I don't know where ports come from, actually. Maybe it's another Sun thing, maybe it's unique to *BSD. 'pkg' is what I'm referring to. In order to install package XYZ, you'd simply issue this:
pkg_add ftp://ftp.someserver.com/pub/package/package-1.0.0.pkg
or if you already have the file:
pkg_add package-1.0.0.pkg
Ports are even nicer: they'll attempt to fetch the package from a list of known mirrors and fall back to the source code if needed (doing all the fetch/unpack/make/make install steps for you), while also installing the dependencies. No matter how it was obtained, the package is now installed. Here you don't even need to care about the backend: it could use pkg, apt, urpm, etc… and still work.
The downside to pkg is that you have to know the exact name of the file when fetching. Ports have another problem: a _huge_ ports/ subdirectory sleeping on the HD (not convenient for low-end systems, handheld devices and the like).
Of course they both still suffer from the flaw you described: if I simply grab a source archive and build it myself, pkg/ports/etc won't help at all.
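Roughly, the ports side of it looks like this on a *BSD box (the port name is made up):

cd /usr/ports/games/fubar     # hypothetical port directory
make install clean            # fetch the source, build it, install it and its dependencies
pkg_delete fubar-1.0.0        # later: remove the installed package cleanly

…whereas a hand-built "./configure && make && make install" leaves no package record for pkg/ports to act on.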
quote:
Good question! My initial thought is that each individual application within the OpenOffice suite would get its own directory. Since they are separate for the most part and the only real connection they have is that they're all packaged together, it makes sense to split each one off into its own directory.
Be careful not to run out of inodes. I don't see this being a problem on modern computers, though. It could be an issue on (very) old computers as well as handheld devices (again).
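For illustration, that split might land on disk something like this under the proposed layout (component names are just examples):

/usr/bin/oowriter/oowriter
/usr/bin/oocalc/oocalc
/usr/lib/oowriter/      # the word processor's private libraries
/usr/lib/oocalc/
/etc/oowriter/          # its configuration

Each component then installs and uninstalls independently, at the cost of a few extra directories (and inodes).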
quote:
I'm not sure I understand. How does $PATH get overridden in this instance?
Imagine I have this in my path: /usr/bin:/usr/local/bin:/usr/local/sbin. Now if I want to run 'fubar', which is located in /usr/bin, I'll simply type 'fubar' at the prompt. If, however, there is a /usr/local/bin/fubar/fubar, that one will be selected in place of the one I expected to run. This is what led to the "security issue" below. Unless I misunderstood again.
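In other words, under the reading where every per-application subdirectory is tried before any plain binary, a toy resolver would behave like this (sketch only, plain POSIX sh):

# toy resolver: a per-app subdirectory anywhere in $PATH wins over a plain binary
lookup() {
  name=$1
  oldifs=$IFS; IFS=:
  set -- $PATH                  # split $PATH on colons into positional parameters
  IFS=$oldifs
  for dir in "$@"; do           # first pass: dir/name/name
    [ -x "$dir/$name/$name" ] && { echo "$dir/$name/$name"; return 0; }
  done
  for dir in "$@"; do           # second pass: plain dir/name
    [ -x "$dir/$name" ] && { echo "$dir/$name"; return 0; }
  done
  return 1
}
lookup fubar   # with /usr/local/bin/fubar/fubar present, it wins over /usr/bin/fubar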
quote:
This is something I didn't think of and you're right, it's a glaring problem. Hmm… I'll have to think more on this to see if I can develop an easy solution. If you have any suggestions, feel free to pipe up!
Sadly, I don't think there is an easy one. '.' was removed from the path on most *NIX systems for more or less the same reason.
I guess you could simply apply your idea to ordinary users, while for UID/GID 0 users it would be ignored and only their plain $PATH used. That wouldn't resolve the problem at all, but at least it wouldn't jeopardize commands executed by administrators. Of course, problems would arise with SUID/SGID programs.
quote:
(snipped)
It's for use when anomalies arise, not for general use.
OK.
quote:
Yes, it's a bit risky, I admit. But we're talking about desktop machines here; security and stability are secondary to ease of use. And, like I said, it shouldn't be required all that often.
I'd usually hammer you with "your desktop system should be secured as well, you #@µ%$!" rhetoric, but that's not the debate, so I'll try to set it aside. As long as the feature is optional and the distro's installer clearly states that it is a potential security hole (like so many other things anyway), I'm OK with it. This isn't your responsibility anyway, so my point is irrelevant.
quote:
Again, this would only be used when necessary. You wouldn't have to hold up your entire system because one critical application doesn't have new bindings that come with a library or whatever. Likewise, you wouldn't have to wait for any other packages to update in order to use new whiz-bang libraries in those programs that are ready for them.
I misunderstood again; I totally agree. In fact, this caused me headaches a while ago: Qt3-dev required libpng2, but 99% of my other applications/libraries needed libpng3. I don't use Qt much myself (mostly FLTK and GTK+), but this simple dependency problem could easily be fixed with your idea, whereas the package manager just gets in the way. Of course I could have fixed it manually quite easily, but I like to keep my systems clean of 'hacks' when possible.
quote:
True, but as I said, the methodology is still compatible with the old way, and Linux doesn't really follow POSIX too well. Pthreads have been non-POSIX-compliant since their creation, after all (though this should change in the 2.6 kernel).
As long as it doesn't get in the way, I see no problem with it. Keep in mind that we're mostly discussing userland here. The kernel developers aren't concerned with the filesystem hierarchy (apart from some specific things like permissions, ACLs, etc). You can see, however, that they're trying to make Linux (the kernel) as POSIX-compliant as possible.
quote:
My question is basically where does one draw the line between an application and a tool?
That's what I was trying to point out. People will disagree on this. Some will call GCC a tool, others an application. I guess it would require yet another team to draw a line and lay down rules, or more likely the FHS people would have to work on it.
Truth is, there is no difference. People usually call small, non-interactive programs "tools", but some include everything that performs only specific tasks, or everything that they seldom use. For instance, 'nslookup' can be used both interactively and in batch mode. I'd tend to call it a tool myself, but you might think differently.
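For example:

nslookup www.example.com   # batch mode: one query, print the answer, exit
nslookup                   # interactive mode: drops you at a '>' prompt for multiple queries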
Some will call a game's level editor a tool, because it's not required in order to play the game, but it adds possibilities.
quote:
Could you expound on this a bit? How do you mean? Are you suggesting making a mini directory structure in /opt? That actually might not be a bad idea… hm…
Yes, that's what I meant. If /opt is not present, you don't even need to check for /opt/bin/binary_name/binary_name. If it is present, it will have a minimal impact on performance because you only need to check under that directory, not under /bin, /usr/bin and /usr/local/bin.
/opt is now standardized, but you could come up with a new directory.
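So the shell (or a wrapper around it) would only pay the extra lookup when that directory exists; something like this sketch, with the exact directory name obviously up for debate:

name=fubar                            # whatever was typed at the prompt (hypothetical application)
if [ -x "/opt/bin/$name/$name" ]; then
  exec "/opt/bin/$name/$name"         # self-contained application found under /opt
else
  exec "$name"                        # fall back to the normal $PATH search
fi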
quote:
Well, it would probably be the job of the distros to decide whether or not to use the system.
Yes, but it would be problematic. Say I install a system without that option turned on, then later decide to install the module. Now what does the package manager do? Install subsequent packages following your new layout? What if I find out that I don't like it, or that it doesn't work as expected? Will the packager reinstall the packages? Are they going to break? This could lead to problems worse than missing dependencies.
Of course it could be done, with planning, but it would also require a major effort from distributions: they'd have to adapt their package systems (because your module could be installed from source).
quote:
Exactly! That is what I was trying to get at, and I apparently failed.
No, you didn't; I was tired.
quote:
The idea here is that one can use package management systems if they want to, to make things easier, but they wouldn't be _dependent_ on them for system integrity. It'd also allow different package managers to interact without really knowing or even caring how a program got where it is.
As long as they all know about your system, yes. If they don't, they'll simply install where they see fit. Hence the need for a (semi-)standard layout. We must also remember that there is no magic secret to keeping a system clean: know what you need, install it and nothing else. I've found that "desktop computer user" and "system integrity" _tend_ to be mutually exclusive.
Finally, it's worth noting that a few programs don't care about standards at all. They would be hard to manage with your system.
For instance, most of D. J. Bernstein's packages go in /package, /command and /doc. I personally hate it because:
1) my / partition is usually very small,
2) if I want to give them a partition, I have to create 3 new ones, whereas /usr/bin, /usr/lib, etc typically lie on the same physical partition/network share,
3) or I have to symlink them to other directories, effectively demonstrating that his layout is useless.
But the packages are very easy to uninstall. If such a layout were standardized/widespread, it would become common to have 3 new partitions, and after a while nobody would complain.
Of course, this also has weaknesses. If your system were to use such a layout (or an /opt-like one), it would hardly be able to deal with chrooted programs (like Apache installed in /var/www, etc).
Anyway, I'm sure you'll manage to work something out, but make sure to test it for a while before releasing even a draft, or it will likely be ripped apart by the "community" (cf. the Linux kernel development mailing list for fierce battles about "your idea sucks").
Try fixing the problem from the other end: define standards for common libraries and directory structure. Encourage application developers (a tool is an application, IMO, by the way) to reference their dependencies to a reasonable level of detail - "libpng2" instead of "libpng", both actually symlinks but the unnumbered one pointing to the latest version.
Oh, wait. That sounds like LSB.
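The symlink half of that is already common practice for shared libraries; for example (version numbers purely illustrative):

ls -l /usr/lib/libpng*
# libpng.so   -> libpng.so.3          (unversioned link: what "-lpng" resolves to at build time)
# libpng.so.2 -> libpng.so.2.1.0.12   (older major version kept around for programs that need it)
# libpng.so.3 -> libpng.so.3.1.2.5

So a package that declares it needs "libpng2" can coexist with one built against "libpng3", as long as both versioned links stay installed.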
An unfortunate side-effect of Open Source software is a potential abundance of choice. This abundance can often lead to interoperability and compatibility problems, as well as aggravating personal differences into a lack of cooperation between projects (KDE and GNOME, particularly following RMS' [mis-quoted?] statements re KDE?). As Linux continues to mature, and particularly as it starts to become a viable option for a new kind of user - the desktop user - these types of problems will become apparent and there will be a push for a unification of critical methods for doing things.
Package management is a better option than source distribution for a number of reasons; I don't believe those reasons need to be rehearsed here. While I haven't studied any of the major package formats, I would assume that they feature reasonably efficient, feature-rich and robust designs. So perhaps the problem lies, as the first AP alluded to, in extremely dumb client software? Perhaps the adoption or development of better tools is an appropriate fix?
What I'm really trying to communicate is that before a problem can be solved, it must be identified and the characteristics of its solution(s) determined. What exactly do we find wrong with current package management techniques? What exactly would we like to see instead? How do we get from A to B?
The journey begins!