More on Rolling Release

Let’s get one thing clear: I utterly despise the brand of morality and ethics displayed by Microsoft. Gates knew a thing or two about computers, just enough to steal everything he built; he never comprehended the difference between good and evil. To this day, his utterly broken moral compass still shapes the way Microsoft does business, though to a large degree he simply continued what was already in place in the business world. He remains a prime example of what’s wrong with that world. For example, when you buy a system running Windows, you are not the customer. You are the product delivered to MS business partners, generally in the form of a system so shot through with intentional security vulnerabilities that you are forced to surrender your privacy, and your eyes, to a tsunami of advertising. The only “malware” MS attempts to squash comes from companies who won’t play nice with MS.

There are, however, a few things MS does right. Have you ever noticed that you can run multiple versions of a particular library on Windows, and each application knows which one it should use? Try that on Linux. Here we sit some eight years later, and just about every software product I can buy for Windows will still run on Win2K. And notice that Win2K is still supported by MS. Buy a Windows server once, and for at least five or six years the only complete upgrades required are for a few third-party packages. If any particular package needs later versions of some library, just add them. You cannot do that in Linux. On a private mailing list, Jonathan Brickman said:

This is all the basic rub, with Linux packaging+library standards. And non-package app delivery standards run into Linux P+L as a brick wall. Just for one instance: I have an old eight-core server with two gigs of RAM, built for RH Enterprise 3 and still running it nicely. I would have loved to add a number of things into it, convert it into an application server, but the glib is so old that many things just won’t go without a complete recompile, which would mean getting a whole lot more dev libraries, et cetera.

Well, you could in theory, I suppose, but nobody does it that way. Not often, at least. I can recall a single instance: when I was running SuSE 8.2, someone was building GNOME packages. One particular application required a later library, so the packager installed it in its own directory, labeled in part by the release number, then had his package refer to that library directly. But that’s not the Open Source way. In fact, such a concept seems to be held in contempt among Open Source developers.

That is, except for a few projects. For example, I can go to the Mozilla folks and get a copy of Seamonkey that still works fine on older machines, and even a version with the old Gtk1 interface. That would probably still run quite well on clones of RHEL 3, and certainly on 4. Yes, there are machines out there still running some of those old clones (CentOS and Scientific Linux, to name two), and those are still supported from the upstream RHEL distribution until 2010. And did you know? RHEL 2 is still supported until sometime in 2009. That release is roughly contemporary with the old RedHat 7.2, released in October 2001, almost as old as Win2K.

I didn’t post this to sing the praises of RedHat and friends. As I understand it, they did some of their customers dirty when they sold support licenses for 9.0, then simply dropped it. However, that hardly compares with the consistent and persistent mean-spirited behavior of MS. RedHat does compete well in terms of business expectations. The next nearest support level comes from a handful of Linux distros supporting their releases for as much as three years, among them openSUSE and certain Ubuntu releases (labeled “LTS”). These are the versions you can get for free. Both RedHat and Novell now support their commercial versions for about seven years each. Buy a server, install the OS, and both last about as long, which keeps things simple. That’s the sort of life cycle businesses like. With this sort of support, we can expect to see Linux making some inroads in the future.

Except for one problem: third-party developer support. That is, those who are developing stuff to run on these systems tend to ignore the commercially stable distributions of Linux. It’s one thing for me to upgrade ALSA in place on my CentOS 5 box. That’s pretty low-key, the instructions are easy to find, and it affects only one element of how things work. However, if I want a better webcam application, such as Cheese, I’m out of luck. It won’t compile on my CentOS 5 box unless I upgrade the entire installed GNOME stack. I find it odd that the folks writing GnomeSword can make it so I can compile their latest and greatest, but not Cheese. I don’t know the difference at the code-writing level, but I wonder what it is that makes some developers spare no concern for supporting anything but the latest and greatest distros of Linux.

The people running these projects seem to have no comprehension of how this cuts them off from wider adoption. I can get OpenOffice, Seamonkey, and even commercial products like Opera to run on much older systems, but far too many popular desktop applications exclude those of us for whom Linux is not merely a hobby. I also realize that projects tied tightly into the major desktops (KDE and GNOME) are worse about this than projects which use other toolkits, or which aren’t so tightly woven into the desktop itself. For example, you can use both Qt and GTK libraries in such a way as to make applications compile on a very wide range of release versions. Projects linked to Motif/Lesstif, FLTK, Tcl/Tk, or other (or multiple) interface toolkits probably represent the best of breed when it comes to opening the door to inclusion. Perhaps the problem is more a matter of the culture within those large desktop projects. This is all the more serious because these desktops are vying for wider adoption as “the Linux standard” in industry. If so, they guarantee it will never happen.

This entry was posted in computers.