
shared libs and different distributions

Quote: Original post by Metron
So if I understand correctly, anyone wanting to distribute binaries to Linux distributions has to double-check that it actually works on the different distributions. How does one cope with so much work?!?

Welcome to the fun world of Linux...

Well, in practice it's not that bad. If you don't link to a ton of weird third party libraries and use a recent version of GCC, binaries usually work on most modern distributions without recompiling.

Here, in a pretty large application we ported to Linux, the only system libraries we directly link to are (let me check) - pthread, dl, X11, gomp and usb (for a hardware dongle). And obviously the runtime libs. So far, it seems to work on most distribs we tried (Suse, RH, Ubuntu, Gentoo, Debian, and some other obscure ones). But there is never a guarantee.

In order to keep ourselves from being cornered legally, we officially only support Novell Suse (the standard 'industrial' distribution).

Oh yeah, and you can distribute your library or application using the 'xcopy deploy' method everybody loves from the Windows world. That means you can just drop your .so in the same directory as your application, and forget about the weird fixed-path places where Linux usually puts libraries. Most Linux people hate doing this, so it's not very well documented, but just add $ORIGIN to the rpath when linking your app or lib.
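
For example, a link line along these lines does the trick (just a sketch - 'myapp' and 'libmystuff' are placeholder names, and this assumes the GNU toolchain):

# Tell the runtime linker to also search the executable's own directory.
# The single quotes keep the shell from expanding $ORIGIN (in a Makefile, write $$ORIGIN).
g++ -o myapp main.o -L. -lmystuff -Wl,-rpath,'$ORIGIN'

# Check that the rpath actually ended up in the binary:
readelf -d myapp | grep -i rpath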

Quote: Original post by Metron
Not to sound harsh here, but this makes me really wonder: how do the Linux defenders expect Linux distributions to replace Windows on the common user's machine?

Well, in their cozy little world, everything comes with source. So you (usually) don't have problems with binary interface incompatibilities.

Quote:
I'd really like to distribute my lib to the Linux users. I'm currently looking into how to do this in the smoothest way. I don't want to compile it on the distribution upon installation because that would make my code public... and I don't want that...

Of course not. As I said above, try to link to as few system libraries as you can, and use an up-to-date version of gcc. That should put you (more or less) on the safe side.
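
If you want to double-check what you actually require, one quick sanity check (just a sketch - 'libmylib.so' is a placeholder for your library) is to look at the versioned symbols the binary imports; the higher the GLIBC/GLIBCXX versions listed, the newer the target distro has to be:

# List the glibc / libstdc++ symbol versions the binary depends on:
objdump -T libmylib.so | grep -oE 'GLIBC[A-Z_]*_[0-9.]+' | sort -u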

But be prepared for ferocious attacks by open source zealots if you publicly release your binary lib ;)
Thanks for the hints, Yann... I wasn't aware of the $ORIGIN thing. Everyone told me to make a .deb to distribute... Your tip eases my pain ;)

Currently I don't have many dependencies:

linux-gate.so.1 => (0xb7ee6000)
libstdc++.so.6 => /usr/lib/libstdc++.so.6 (0xb7d46000)
libm.so.6 => /lib/tls/i686/cmov/libm.so.6 (0xb7d21000)
libgcc_s.so.1 => /lib/libgcc_s.so.1 (0xb7d15000)
libc.so.6 => /lib/tls/i686/cmov/libc.so.6 (0xb7bc6000)
/lib/ld-linux.so.2 (0xb7ee7000)

So, hopefully I'm safe with this. The compiler is version 4.2.3.

Thanks again,
Metron
----------------------------------------http://www.sidema.be----------------------------------------
There are people, "maintainers", whose primary responsibility is to package every new version of their project for multiple distributions - so I guess it's real work :D
Actually, GNU/Linux's incompatibilities and deployment nightmare are grossly exaggerated by everyone. In virtually every modern GNU/Linux distribution that sees at least some use, you can expect the same set of software. To make most people happy you basically need to make only three different packages: DEB, RPM and TGZ. DEBs are for Debian-like distros, RPMs are for Red Hat-like ones, and TGZs are for the others. The last group won't mind you not providing a package tailored to their distribution since, well, no one does. (: And they should be proficient enough to convert it themselves. (The most popular (read: newbie-friendly) distributions are either RPM- or DEB-based anyway.)

So, unless you need some funky installation scripts this is basically the process one needs to take:

1) You make your installation tree in some directory, eg.
dummy/usr/bin/my_program
dummy/usr/lib/my_library.so

2) You pack that into TGZ (short for tar.gz), eg.
# pack the tree so paths inside the archive are relative to / (no dummy/ prefix)
tar -zcf package.tgz -C dummy usr

3) You convert that to DEB and RPM, eg.
alien -d package.tgz
alien -r package.tgz

Of course, that won't provide the dependency information in the DEB or RPM, but it is the easiest route to take if you only depend on widely used libraries.
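
If you do want real dependency information in the DEB, a slightly longer route (a rough sketch - the package name, version and Depends line below are placeholders you would fill in yourself) is to skip alien and build the package directly with dpkg-deb:

1) Add a control file at dummy/DEBIAN/control, eg.

Package: my-library
Version: 1.0-1
Architecture: i386
Maintainer: Your Name <you@example.com>
Depends: libc6 (>= 2.3.6), libstdc++6
Description: Example closed-source library.

2) Build the package from the tree, eg.

dpkg-deb --build dummy my-library_1.0-1_i386.deb

dpkg/apt will then check those dependencies at install time. The RPM equivalent needs a spec file and rpmbuild, which is a bit more work.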

[edit]
Based on the dependencies you've posted, it is almost 100% safe to assume that every GNU/Linux distribution will have these libraries.
[/edit]
Quote: Original post by Metron
So if I understand correctly, anyone wanting to distribute binaries to Linux distributions has to double-check that it actually works on the different distributions. How does one cope with so much work?!?

First off, Linux is a portable OS kernel, not an OS distribution like Mac OS or Windows. Expecting a package to work on multiple Linux-based distributions is no different from expecting it to also work on Windows and Mac OS X as well. Few people expect a single package to work on both Windows and Mac OS X. Why do they expect a single package to work on Slackware and Gentoo?

Consider this. The Debian Linux distribution works on hardware as diverse as an IBM z-series and an embedded Thumb-based controller. Do you expect the same binary package to work on the entire Debian spectrum as well?
Quote: Not to sound harsh here, but this makes me really wonder: how do the Linux defenders expect Linux distributions to replace Windows on the common user's machine?

It's not harsh. The answer is, they don't. They expect some Linux distros to compete with Windows and Mac OS X on the common user machine (I know, I work for one such distro, and we happen to be doing rather well in our niche). To that end, the Linux Standard Base was created. Any distro that wants to compete with Windows or OS X has to conform to the published standard. If your library or application adheres to that standard, it will more than likely work with any distro that conforms, and all the common distros conform (you do need to match LSB versions).
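
As a quick sanity check, a conforming distro ships the lsb_release tool, which reports the LSB version it declares (just an illustration; run it on the target system):

# Print the declared LSB version(s) plus the distro name and release:
lsb_release -a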

You need to package your software properly, because that's how Linux users will install software. When in Rome, do as the Romans do. Windows users will also prefer an installer, and OS X users will expect at least a bundle they can drag and drop.
Quote: I was told, and I quote: "I did not switch to Linux to pay for software or a library... can't you give me just the code? I'll compile it myself."

You may safely ignore such zealots. People who are free software bigots will react this way. If you are not producing free software, you are not targeting such people anyway.



Stephen M. Webb
Professional Free Software Developer

Quote: Original post by desudesu
Actually, GNU/Linux's incompatibilities and deployment nightmare are grossly exaggerated by everyone. In virtually every modern GNU/Linux distribution that sees at least some use, you can expect the same set of software. To make most people happy you basically need to make only three different packages: DEB, RPM and TGZ.

You know, this is pretty much the deployment nightmare people are talking about :) Why do you need three different installation packages? Under Windows, a single one is enough - and it should be enough under Linux.

Quote: Original post by desudesu
So, unless you need some funky installation scripts this is basically the process one needs to take:
[etc]

No need to do all that. Just create a directory wherever the user wants (by default you can create one in his home dir), and uncompress all your binaries into it (either through a tgz, or better, through a little graphical installer). Then the user can simply run the executable, which will find all its own .so files in its own install directory. No need to copy stuff to /usr/something (which I find an extreme design flaw in current Linux distros). Just run it, and you're done. If you want to uninstall it, delete the directory.

Works like a charm.
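
If you'd rather not bake an rpath into the binary at link time, a tiny launcher script dropped next to the executable does the same job (a sketch - 'myapp' stands in for your real binary):

#!/bin/sh
# run.sh - start the application with its own directory on the library search path.
HERE=$(cd "$(dirname "$0")" && pwd)
LD_LIBRARY_PATH="$HERE${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
export LD_LIBRARY_PATH
exec "$HERE/myapp" "$@"

The user just runs the script, and every .so sitting in the install directory is picked up before the system-wide ones.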
Quote: Original post by Yann L
Quote: Original post by desudesu
So, unless you need some funky installation scripts this is basically the process one needs to take:
[etc]

No need to do all that. Just create a directory wherever the user wants (by default you can create one in his home dir), and uncompress all your binaries into it (either through a tgz, or better, through a little graphical installer). Then the user can simply run the executable, which will find all its own .so files in its own install directory. No need to copy stuff to /usr/something (which I find an extreme design flaw in current Linux distros). Just run it, and you're done. If you want to uninstall it, delete the directory.

Works like a charm.


It depends. If the user is going to use the program only by himself (or the installed library will be used only by one program), then yes. But if you want to install an application/library system-wide, you must install it in a system-wide directory.

You see a design flaw in the /usr/something scheme? Why? Unices were always like that. And with a package manager, uninstallation is not a problem at all. A "sudo apt-get remove program-name" is easy enough. There is more to it though - this scheme is extremely useful in some cases. It's very easy to isolate program code (bin, lib), global settings (etc), documentation (share/doc) and other shared assets (share). I've made an ultra-light GNU/Linux distribution for my personal use. It was dead easy to throw away components that I didn't need.
Quote: Original post by desudesu
It depends. If the user is going to use the program only by himself (or the installed library will be used only by one program), then yes. But if you want to install an application/library system-wide, you must install it in a system-wide directory.

If your product is a library, then you should give the user / developer the choice on where to install the library. Chances are that he wants to ship it with his product, and simply put it into the installation directory of his application. Again, no need to install it system wide, unless the end user explicitly wishes to do so. If the user wants to put it into /usr/lib or /usr/local/lib, then that's fine. But don't force it there, and don't default it to there either.

In my opinion, putting a library into /usr/lib or similar is the same bad practice as copying your DLLs into Windows\system32. You just don't do that, unless there is a very good reason to (drivers, system libs, etc).

Quote: Original post by desudesu
You see a design flaw in the /usr/something scheme? Why? Unices were always like that.

Yep, they were always like that, but this doesn't mean that it is a good thing. It creates dependency hell, a worse form of Windows' own DLL hell. If I look into my /usr/lib directory, I see the personification of chaos. And this is an almost clean installation of OpenSuse.

Unless you have a very good dependency management system for dynamic resources, which Linux does not have (creating a symlink called libblah.so.1.0.6.31 linking to libblah.so.1.0.5, again linking to libblah_final.so.1 is not good dependency management), libraries should go into the application directory. Unless there's a really good reason for global access - system and runtime libraries come to mind. But nothing else.

Microsoft has tried to remedy the situation using manifests and side-by-side assemblies, with pretty good success so far. So unless Linux comes up with something similar, one should be very careful when installing things globally.

There's just too much risk of applications downgrading existing libraries or getting confused about which versions to load. And since there is no unified installation management system under Linux, chaos is unavoidable. Especially RPMs/DEBs versus individual "make install"s. The latter are the spawn of hell, since they subvert the whole idea of centralized package management.

Quote:
It's very easy to isolate program code (bin,lib), global settings (etc), documentation (share/doc) and other shared assets (share).

But why? Isn't it more logical to have all resources belonging to a certain application (i.e. its executable, its libraries, manuals, etc.) at a central location, instead of scattering them all over your system? I certainly think so.
Quote: Original post by Yann L
Isn't it more logical to have all resources belonging to a certain application (i.e. its executable, its libraries, manuals, etc.) at a central location, instead of scattering them all over your system? I certainly think so.

If it's specific to an application, it makes the most sense to build it into the application.

The point of a shared library is that it is not specific to the application.

If you remember the old DOS days, all applications shipped everything they needed and kept it together. It was a nightmare. Every vendor shipped a different subset of video and sound card drivers.

Following your logic, every game should ship its own OpenGL drivers for every video card made.

The point of package managers is to provide a very good dependency management system for dynamic resources. Both apt (and offspring) and yum do an excellent job -- far far better than anything natively available on Windows or OS X. The fact that a lot of libraries get installed in /usr/lib just doesn't seem to cause a problem: shared library versioning works well. In fact, the only time I've seen it break down is when someone tries to subvert the system without knowing what they're doing. Doing that, under any system, is bound to cause problems and is certainly not unique to Linux distros.
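
To make that concrete, the versioning in question is the soname mechanism. On a typical system it looks roughly like this (libfoo is a made-up name, not a real library):

# The real file carries the full version; ldconfig maintains the soname
# symlink, which is what applications load at run time.
ls -l /usr/lib/libfoo*
#   libfoo.so -> libfoo.so.1.2.3     (dev symlink, used at link time)
#   libfoo.so.1 -> libfoo.so.1.2.3   (soname symlink, used at run time)
#   libfoo.so.1.2.3                  (the actual library)

# The soname recorded inside the library itself:
readelf -d /usr/lib/libfoo.so.1.2.3 | grep SONAME

An ABI-incompatible release bumps the soname (libfoo.so.2), so old and new versions can live side by side without stepping on each other.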

Applications cannot downgrade existing libraries or get confused about which libraries to load in a managed package system. You generally have to go outside the system to achieve that. If you've gone outside the system and borked the system, don't blame the system.

Stephen M. Webb
Professional Free Software Developer

Quote: Original post by Bregma
If it's specific to an application, it makes the most sense to build it into the application.

The point of a shared library is that it is not specific to the application.

There are many situations where dynamic libraries make perfect sense as an integral part of an application. Look at every major commercial application out there today, and you will notice that all of them are essentially implemented as modular dynamic libraries:

* Shorter compile and linking times.
* Better modularization and maintenance (for example proprietary modules you reuse internally on different applications, but don't want to share with others, e.g. a 3D engine).
* Better license management (customers can buy additional features without changing the application).
* A plugin architecture, an absolutely essential part of any larger application.
* Better resource management (only load the functionalities a user needs)
* Installation doesn't require root access.

...and so on.

Quote: Original post by Bregma
If you remember the old DOS days, all applications shipped everything they needed and kept it together. It was a nightmare. Every vendor shipped a different subset of video and sound card drivers.

Drivers are part of the system, and need to be shared. So do runtime libraries, as I mentioned in my previous post. This has absolutely nothing to do with local dynamic libraries, used only by a specific application.

Notice my use of the terms "dynamic library" and "shared library". While technically the same, they're conceptually different.

An example: in our application, we use a cryptographic hardware dongle as a DRM system (node-locking to a MAC address or similar doesn't work on Linux, since it can be easily modified by the user). The dongle comes with binary drivers in the form of an .so. Of course we ship them in our application's directory, because it wouldn't make sense to deploy them to a shared location.

Quote: Original post by Bregma
The point of package managers is to provide a very good dependency management system for dynamic resources. Both apt (and offspring) and yum do an excellent job -- far far better than anything natively available on Windows or OS X. The fact that a lot of libraries get installed in /usr/lib just doesn't seem to cause a problem: shared library versioning works well.

It doesn't always work - I can't even count the number of instances where customers of ours called support because 'our application doesn't work'. The cause was that something else had replaced shared libraries and botched up their system. That's why we now only support one single distro.

Quote: Original post by Bregma
In fact, the only time I've seen it break down is when someone tries to subvert the system without knowing what they're doing.

And this happens all the time. As I mentioned above, a simple 'make install' on some third party source can kill everything. It's pretty easy to make the system break down on Linux, especially for a beginner. It's much harder on Windows (although not impossible, but it usually involves messing around in the registry or manually deleting random system files). And unfortunately, if you sell commercial or closed source applications for Linux, chances are that they're not going to be installed and used by Linux geeks - but by some corporate manager who wants to point and click.

Quote: Original post by Bregma
Applications cannot downgrade existing libraries or get confused about which libraries to load in a managed package system. You generally have to go outside the system to achieve that. If you've gone outside the system and borked the system, don't blame the system.

If the system is not consistently used, then it doesn't make sense.

