When "progress" is backwards


Lately I see many developments in the Linux FOSS world that sell themselves as progress, but are actually hugely annoying and counter-productive. Counter-productive to the point where they cause major regressions and costs, and, as in the case of GTK+3, ruin the user experience and any chance that we'll ever enjoy "the year of the Linux desktop".

 Original story by DevsOnACID. Originally published 2020-10-20.

Going backwards. Photo by Pezibear/Pixabay.

Showcase 1: GTK+3

GTK+2 used to be the GUI toolkit for Linux desktop applications. It is highly customizable, reasonably lightweight and programmable from C, which means almost any scripting language can interface to it too.

Rather than improving the existing toolkit code in a backwards-compatible manner, its developers decided to introduce many breaking API changes, which require a major porting effort to make an existing codebase compatible with the successor, GTK+3. Keeping support for GTK+2 while supporting GTK+3 at the same time typically involves a lot of #ifdef clutter in the source base, which few developers are willing to maintain.
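To give an idea of what that clutter looks like, here is a minimal sketch (GTK_CHECK_VERSION is GTK's real compile-time version macro; the helper function itself is hypothetical):

    #include <gtk/gtk.h>

    /* A trivial helper that must branch on the compile-time toolkit
     * version: GTK+3 deprecated gtk_hbox_new() in favour of an oriented
     * GtkBox, so portable code carries both calls. */
    static GtkWidget *make_hbox(void)
    {
    #if GTK_CHECK_VERSION(3, 0, 0)
        return gtk_box_new(GTK_ORIENTATION_HORIZONTAL, 4);
    #else
        return gtk_hbox_new(FALSE, 4);
    #endif
    }

Multiply this by every changed widget, signal and property, and the maintenance burden becomes obvious.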

Additionally, GTK+3 did away with a lot of user-customizable theming options, effectively rendering useless most of the existing themes that took considerable developer effort to create. Here's a list of issues users are complaining about.

Due to the effort required to port a GTK+2 application to GTK+3, many finished GUI application projects will never be ported, whether due to lack of manpower, the main developer losing interest, or his untimely demise. An example of such a program is the excellent audio editor sweep, whose last release was in 2008. With Linux distros removing support for GTK+2, these apps are basically lost to the void of time.

The other option for distros is to keep both the (unmaintained) GTK+2 and GTK+3 in their repositories so that GTK+2-only apps can still be used. However, that roughly doubles the disk and RAM footprint for users of those apps, as both toolkits need to live next to each other. It also only works as long as there are no breaking changes in GLib, the library both toolkits are built upon.

Even worse, due to the irritation the GTK+3 move caused among developers, many switched to Qt 4 or Qt 5, which requires the use of C++, so a typical Linux distro now ships a mix of GTK+2, GTK+3, GTK+4, Qt 4 and Qt 5 applications, with each toolkit consuming considerable resources.

Microsoft (TM) knows better and sees backwards compatibility as the holy grail and the underlying root cause of its success and market position. Any 25-year-old Win32 GUI application from the Win95 era still works without issues on the latest Windows (TM) release. They even still support 16-bit MS-DOS apps using a built-in emulator.

From MS' perspective, the freedesktop.org decision makers played into their hands when they decided to make GTK+3 a completely different beast. Of course, we are taught never to attribute to malice what can be explained by stupidity, so it is unthinkable that there was actually a real conspiracy and monetary compensation behind this move. Otherwise we would be conspiracy-theorist nuts, right?

Showcase 2: python3

Python is a hugely successful programming/scripting language used by probably millions of programmers.

Whereas python2 development has been very stable for many years, python3 changes in the blink of an eye. It's not uncommon to find that after an update of python3 to the next release, existing code no longer works as expected.
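To name one concrete example of such a change: time.clock() worked through Python 3.7 but was removed outright in 3.8, so a mere interpreter update broke existing code:

    import time

    try:
        start = time.clock()         # removed in Python 3.8: AttributeError
    except AttributeError:
        start = time.perf_counter()  # the documented replacement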

Many developers such as myself prefer to use a stable development environment over one that is as volatile as python3.

With the decision to EOL python2, thousands of py2-based applications will experience the same fate as GTK+2 applications without a maintainer: they will be rendered obsolete and disappear from the distro repositories. This may happen quicker than one would expect, as python by default provides bindings to the system's OpenSSL library, which has a history of making backwards-incompatible changes. At the very least, once the web agrees on a new TLS standard, python2 will be rendered completely useless.
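That coupling is easy to see from the interpreter itself (the version string below is only an example; yours will differ):

    import ssl

    # the stdlib ssl module is a thin binding over whatever OpenSSL
    # the interpreter was linked against at build time
    print(ssl.OPENSSL_VERSION)   # e.g. "OpenSSL 1.1.1f  31 Mar 2020"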

Porting python2 code to python3 isn't usually as involved as porting GTK+2 to GTK+3, but due to the dynamic nature of python, the syntax checker can't catch all code issues automatically, so many issues only surface at runtime in corner cases, causing the ported application to throw a backtrace and stop execution, which can have grave consequences.
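A minimal sketch of such a corner case (hypothetical code, for illustration): the function is valid syntax under both versions and passes any static check, but blows up only when the branch is actually hit with bytes:

    def is_greeting(data):
        # under python2 socket data arrived as str; under python3 it is
        # bytes, and bytes.startswith() refuses a str argument
        return data.startswith("HELO")

    try:
        is_greeting(b"HELO world")   # TypeError, discovered only at runtime
    except TypeError as exc:
        print("backtrace at runtime:", exc)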

Many companies still have millions of lines of code in python2 and will have to expend quite some sweat and expense to make it compatible with python3.

Showcase 3: ip vs ifconfig

Once one had learned his handful of ifconfig and route commands to configure a Linux box's network connections, one could comfortably manage this aspect across all distros. Not any longer: someone had the glorious idea to declare ifconfig and friends obsolete and to provide a new, more "powerful" tool to do their job: ip.

The command for bringing up a network device is now ip link set dev eth1 up versus the older ifconfig eth1 up. Does this really look like progress? Worse, the documentation of the tool is non-intuitive, so one basically has to google for examples that show the translation from one command to the other.
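For reference, here are the translations one typically ends up googling (standard net-tools and iproute2 commands, nothing distro-specific):

    # bring an interface up
    ifconfig eth1 up
    ip link set dev eth1 up

    # assign an IPv4 address
    ifconfig eth1 192.168.1.2 netmask 255.255.255.0
    ip addr add 192.168.1.2/24 dev eth1

    # add a default route
    route add default gw 192.168.1.1
    ip route add default via 192.168.1.1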

The same criticism applies to iw vs iwconfig.

Showcase 4: ethernet adapter renaming by systemd/udev

The latest systemd-based distros come up with network interface names such as enx78e7d1ea46da (the enx prefix followed by the adapter's MAC address) or vethb817d6a, instead of the traditional eth0. The interface names assigned by default on Ubuntu 20 are so long that a regular human can't even remember them; any configuration attempt requires one to copy/paste the name from the output of ip a. Yet almost every distro goes along with this Poettering/freedesktop.org-dictated nonsense.

Showcase 5: CMake, meson, and $BUILDSYSTEMOFTHEDAY

While the traditional build system used on UNIX, autoconf, has its warts, it was designed in such a way that only the application developer requires the full set of tools, whereas the consumer requires only a POSIX-compatible shell environment and a make program.
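Assuming a typical autoconf-based release tarball, the consumer-side workflow has looked the same for decades and needs nothing beyond sh and make:

    # everything the end user needs: a POSIX sh and make
    ./configure --prefix=/usr/local
    make
    make install   # usually as root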

More "modern" build systems like cmake and meson don't give a damn about the dependencies a user has to install, in fact according to this, meson authors claimed it to be one of their goals to force users to have a bleeding edge version of python3 installed so it can be universally assumed as a given.

CMake is written in C++, consists of 70+ MB of extracted sources and requires an impressive amount of time to build from source. Built with debug information, it takes up 434 MB of my hard disk space as of version 3.9.3. Its primary raison d'être is its support for Microsoft (TM) Visual Studio (R) (TM) solution files, so Windows (TM) people can compile stuff from source with a few clicks.

What the two of them have in common is that they threw overboard the well-known configure-and-make user interface and invented their own NIH solutions, which require the user to learn yet another way to build his applications.
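Compare the incantations a user now has to know for the exact same job (standard invocations of each tool, no project specifics assumed):

    # autoconf
    ./configure && make && make install

    # CMake (out-of-source build)
    cmake -B build -S . && cmake --build build && cmake --install build

    # Meson
    meson setup build && ninja -C build && ninja -C build install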

Both of these build systems seem to have either acquired a cult following, just like systemd, or someone is paying trolls to show up on GitHub with pull requests to replace GNU autoconf with either of them, for example 1 2. Interestingly, GNOME, which is tightly connected to freedesktop.org, has made it one of its goals to switch all components to meson. The porting effort involves almost every key component in the Linux desktop stack, including cairo, pango, fontconfig, freetype, and dozens of others. What might be the agenda behind this effort?

Conclusion

We live in an era where, in the FOSS world, one constantly has to relearn things and switch to new, supposedly "better", but more bloated solutions, and one is generally left with the impression that someone is pulling the rug out from under one's feet. Many of the key changes in this area have been rammed through by a small set of decision makers, often closely related to Red Hat/GNOME/freedesktop.org. We're buying this "progress" at a high cost, and one can't avoid asking oneself whether there's more to the story than meets the eye. Never forget: Red Hat and Microsoft (TM) are partners and might even have the same shareholders.



Intgr
14 months ago
Score 1

So by this argument, it's bad both to:

  • release new incompatible versions of software, and
  • write new, distinct software to replace an older one. Supposedly this is even bad if the switch is made in a fully backwards-compatible manner: "ifconfig" is still being maintained and shipped by all distros.

By this logic, Wayland is bad and we should still be using the old X11 protocol as designed in the '80s, without any dependence on new X11 extensions (which all current toolkits rely on, BTW). Vulkan is bad, OpenGL should be good enough for everyone. All new init systems are bad. In fact, all new programming languages are bad too, because they create something new that users are not familiar with, and they often come with their own build systems that aren't autoconf/automake. I'm sure this list could be essentially endless.

While in broad terms I agree that breaking backwards compatibility is a bad thing if it can be avoided, and that reinventing the wheel is a bad thing if it can be avoided, reality is far more nuanced than this article makes it out to be. Strictly insisting on backwards compatibility at all times would have prevented a lot of innovation.

This article could have been a good one if it had done some research: looked into the reasons why these decisions were originally made, asked whether those reasons still make sense now, whether they were worth the breakage they caused, and what can be learned from them. For instance, Python developers have very clearly stated that they regret the way Python 3.0 broke compatibility and will never repeat this mistake again.

But as it stands, this is just a misguided rant against any kind of change with no attempt to understand nuance.

Also, the article begins by saying that these changes have happened "lately", but most of these examples are a decade old or older.

GTK+ 3 was released in 2011. Python 3 was released in 2008. The "ip" utility is harder to pin down, but archive.org shows its man page from 2007 (https://web....net/man/8/ip); it's probably older than that.

CMake was released in 2000, although it's not clear when it gained significant mindshare in open source projects.
Gnu4ever
14 months ago
Score 2
Lol, there's so much idiocy in your comment. Good bait
Gnu4ever
14 months ago
Score 1
They said binaries. Re-compiling from source is a different story.
01101001b
14 months ago
Score 1
Sorry. "Sources compile easily without fuss": no, they don't (exceptions aside). When the code is "old" and the compiler version is "much newer", compilation fails badly if you don't patch the code.
Anonymous (307dbc9b24)
4 months ago
Score 0

@Intgr, if you cannot make changes to $SOFTWARE without breaking the world every time, you should stay as far away from critical infrastructure as possible; it's that easy.

"release new incompatible versions of software."

Yes, that is very bad indeed.

"to write new distinct software to replace an older one."

Yes, that is bad too, if you don't have a damn good reason to do so and cannot provide some sort of compatibility layer.

If you introduce some game-breaking change into something as fundamental as GTK, this happens: (1) You show everyone that in 2021 you still haven't learned how to implement pushbuttons and editfields the right way. (2) You create completely pointless work for everyone using your library, and in the end, each time that happens, 1000 projects die out there because the maintainer doesn't have the time or the dedication to run just to stay in the same place.

For the software I write, this pretty much means that I avoid external libraries like the plague, and if I really need a GUI, I use Motif, which was en vogue when I was a toddler. Not because I like it, but because it doesn't break randomly, has accurate documentation and doesn't need some soydev build system with 900 MB of dependencies. In other words, at this point I don't choose software based on its technical merits, but on the long-term pain it could cause, or on how likely it is that someone breaks it because they absolutely have to support wobbly windows or other shit of earth-shaking importance.

It's the same game under the hood. I replaced Linux with BSD on all servers I'm responsible for, first due to that systemd abomination, and then because apparently Club Penguin just can't get their shit together on how to configure network interfaces. In the "dark ages" (and on BSD), that's easy: you run ifconfig and route, and you're set. Meanwhile on Linux, I have to use "ip" to set up routes, which comes back with fantastic error messages like "SIOFOOXY: File exists." (for the uninitiated: that means "you already have a route for that destination"). But wait, there's more. You also have systemd-networkd, systemd-resolved, NetworkManager, ModemManager and a few others I don't remember (plus even more if it's wifi), which all interact in complex ways. If one of those is missing, or you rub it the wrong way, or you unplug your USB wifi stick and plug it in again, the device is gone (ip link won't see it anymore) and you cannot get it back like it's 1995.

*exhale*