When "progress" is backwards

From LinuxReviews

Lately I see many developments in the Linux FOSS world that sell themselves as progress but are actually hugely annoying and counter-productive. Counter-productive to the point where they cause major regressions and costs and, as in the case of GTK+3, ruin the user experience and any chance that we'll ever enjoy "The year of the Linux desktop".

 Original story by DevsOnACID. Originally published 2020-10-20.

Going backwards. Photo by Pezibear/Pixabay.

Showcase 1: GTK+3

GTK+2 used to be the GUI toolkit for Linux desktop applications. It is highly customizable, reasonably lightweight and programmable from C, which means almost any scripting language can interface to it too.

Rather than improving the existing toolkit code in a backwards-compatible manner, its developers decided to introduce many breaking API changes. Making an existing codebase compatible with the successor, GTK+3, requires a major porting effort, and keeping support for GTK+2 while supporting GTK+3 at the same time typically involves a lot of #ifdef clutter in the source base, which not many developers are willing to maintain.
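To illustrate the clutter, here is a minimal sketch of what dual-toolkit support looks like in practice. GTK_CHECK_VERSION is the real version-check macro and the two box constructors are a real renamed pair; the helper function itself is hypothetical. Note this fragment needs the GTK development headers to compile:

```c
#include <gtk/gtk.h>

/* One widget, two spellings: GTK+3 dropped GtkHBox in favour of an
 * orientation argument, so every such call site grows a version guard. */
static GtkWidget *make_button_row(void)
{
#if GTK_CHECK_VERSION(3, 0, 0)
    GtkWidget *box = gtk_box_new(GTK_ORIENTATION_HORIZONTAL, 6);
#else
    GtkWidget *box = gtk_hbox_new(FALSE, 6);
#endif
    gtk_container_add(GTK_CONTAINER(box), gtk_button_new_with_label("OK"));
    return box;
}
```

Multiply this by every deprecated constructor, signal, and theming call in a large codebase and the maintenance burden becomes clear.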

Additionally, GTK+3 did away with a lot of user-customizable theming options, effectively rendering useless most of the existing themes that took considerable developer effort to create. Here's a list of issues users are complaining about.

Due to the effort required to port a GTK+2 application to GTK+3, many finished GUI application projects will never be ported, whether from lack of manpower, the main developer losing interest, or his untimely demise. An example of such a program is the excellent audio editor sweep, which saw its last release in 2008. With Linux distros removing support for GTK+2, these apps are basically lost in the void of time.

The other option for distros is to keep both the (unmaintained) GTK+2 and GTK+3 in their repositories so GTK+2-only apps can still be used. However, that roughly doubles the disk and RAM footprint for users of those apps, as both toolkits have to live next to each other. It also only works as long as there are no breaking changes in the GLib library both toolkits are built upon.

Even worse, due to the irritation the GTK+3 move caused among developers, many switched to Qt 4 or Qt 5, which require the use of C++, so a typical Linux distro now ships a mix of GTK+2, GTK+3, GTK+4, Qt 4 and Qt 5 applications, where each toolkit consumes considerable resources.

Microsoft (TM) knows better and sees backwards compatibility as the holy grail and the underlying root cause of its success and market position. Any 25-year-old Win32 GUI application from the Win95 era still works without issues on the latest Windows (TM) release. They even still support 16-bit MS-DOS apps via a built-in emulator.

From MS' perspective, the freedesktop.org decision makers played into their hands when they decided to make GTK+3 a completely different beast. Of course, we are taught never to attribute to malice what can be explained by stupidity, so it is unthinkable that there was actually a real conspiracy and monetary compensation behind this move. Otherwise we would be conspiracy theorist nuts, right?

Showcase 2: python3

Python is a hugely successful programming/scripting language used by probably millions of programmers.

Whereas python2 development has been very stable for many years, python3 changes in the blink of an eye. It's not uncommon to find that, after an update of python3 to the next release, existing code no longer works as expected.

Many developers such as myself prefer to use a stable development environment over one that is as volatile as python3.

With the decision to EOL python2, thousands of py2-based applications will meet the same fate as GTK+2 applications without a maintainer: they will be rendered obsolete and disappear from the distro repositories. This may happen quicker than one would expect, as python by default provides bindings to the system's OpenSSL library, which has a history of making backwards-incompatible changes. At the very least, once the web agrees on a new TLS standard, python2 will be rendered completely useless.

Porting python2 code to python3 isn't usually as involved as porting GTK+2 to GTK+3, but due to the dynamic nature of python the syntax checker can't catch all code issues automatically. Many issues therefore only surface at runtime in corner cases, causing the ported application to throw a backtrace and stop execution, which can have grave consequences.
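As a sketch of the problem (the function names here are made up for illustration), both definitions below parse cleanly under either interpreter, so no syntax checker flags them. Yet under python3 one changes behaviour silently and the other only blows up when the line actually executes:

```python
def average(values):
    # python2: int / int floors, so average([1, 2]) == 1
    # python3: true division, so average([1, 2]) == 1.5 -- a silent change
    return sum(values) / len(values)

def nonzero_tags(record):
    # dict.iteritems() was removed in python3; this parses fine but
    # raises AttributeError the first time this line is reached.
    return [k for k, v in record.iteritems() if v]

print(average([1, 2]))            # 1.5 under python3, 1 under python2
try:
    nonzero_tags({"a": 1})
except AttributeError as exc:
    print("runtime failure:", exc)
```

The second failure mode is the dangerous one: it hides in a rarely taken branch until production traffic finds it.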

Many companies have millions of lines of code still in python2 and will have to invest quite some sweat and expense to make it compatible with python3.

Showcase 3: ip vs ifconfig

Once one had learned one's handful of ifconfig and route commands to configure a Linux box's network connections, one could comfortably manage this aspect across all distros. Not any longer: someone had the glorious idea to declare ifconfig and friends obsolete and to provide a new, more "powerful" tool to do their job: ip.

The command for bringing up a network device is now ip link set dev eth1 up vs the older ifconfig eth1 up. Does this really look like progress? Worse, the tool's documentation is non-intuitive, so one basically has to google for examples that show the translation from one command to the other.
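To save a search, a short translation table for the common cases (net-tools spelling on the left, its usual iproute2 equivalent on the right):

```shell
# ifconfig eth1 up                                  ->  ip link set dev eth1 up
# ifconfig eth1 192.168.1.2 netmask 255.255.255.0   ->  ip addr add 192.168.1.2/24 dev eth1
# ifconfig -a                                       ->  ip addr show   (short: ip a)
# route add default gw 192.168.1.1                  ->  ip route add default via 192.168.1.1
# route -n                                          ->  ip route show
# arp -n                                            ->  ip neigh show
```

Whether the right-hand column counts as an improvement is left to the reader.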

The same criticism applies to iw vs iwconfig.

Showcase 4: ethernet adapter renaming by systemd/udev

The latest systemd-based distros come up with network interface names such as enx78e7d1ea46da or vethb817d6a instead of the traditional eth0. The interface names assigned by default on Ubuntu 20 are so long that a regular human can't even remember them; any configuration attempt requires copy/pasting the name from ip a output. Yet almost every distro goes along with this Poettering/freedesktop.org-dictated nonsense.
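For those who want the old names back, two commonly documented escape hatches exist on systemd/udev distros. This is a sketch: the MAC address below is simply the one encoded in the example name enx78e7d1ea46da, so substitute your own.

```shell
# 1) Disable the scheme globally via kernel parameters
#    (e.g. appended to GRUB_CMDLINE_LINUX in /etc/default/grub):
#        net.ifnames=0 biosdevname=0

# 2) Or pin a single interface back to a classic name with a
#    systemd.link file matched on its MAC address:
cat > /etc/systemd/network/10-eth0.link <<'EOF'
[Match]
MACAddress=78:e7:d1:ea:46:da

[Link]
Name=eth0
EOF
```

Either way, the fact that opting out requires editing boot parameters or udev configuration rather underlines the article's point.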

Showcase 5: CMake, meson, and $BUILDSYSTEMOFTHEDAY

While the traditional build system used on UNIX, autoconf, has its warts, it was designed in such a way that only the application developer needs the full set of tools, whereas the consumer needs only a POSIX-compatible shell environment and a make program.

More "modern" build systems like cmake and meson don't give a damn about the dependencies a user has to install. In fact, according to this, the meson authors claimed it to be one of their goals to force users to have a bleeding-edge version of python3 installed so it can be universally assumed as a given.

CMake is written in C++, consists of 70+ MB of extracted sources, and requires an impressive amount of time to build from source. Built with debug information, it takes up 434 MB of my hard disk space as of version 3.9.3. Its primary raison d'être is its support for Microsoft (TM) Visual Studio (R) (TM) solution files, so Windows (TM) people can compile stuff from source with a few clicks.

The two of them have in common that they threw the well-known configure-and-make user interface overboard and invented their own NIH solutions, which require the user to learn yet another way to build his applications.
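The divergence is easy to see side by side. The flags below are the standard ones for each tool; a real project may of course need more:

```shell
# Classic autoconf interface: the end user needs only sh and make.
./configure --prefix=/usr
make
make install

# The same build, spelled for each newer tool:
cmake -S . -B build -DCMAKE_INSTALL_PREFIX=/usr   # then: cmake --build build
meson setup build --prefix=/usr                   # then: ninja -C build
```

Each tool also has its own conventions for out-of-tree builds, option names, and cross-compilation, so the relearning cost is not limited to these three lines.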

Both of these build systems seem to have either acquired a cult following just like systemd, or someone is paying trolls to show up on github with pull requests to replace GNU autoconf with either of them, for example 1 2 . Interestingly, GNOME, which is tightly connected to freedesktop.org, has made it one of its goals to switch all components to meson. The porting effort involves almost every key component in the Linux desktop stack, including cairo, pango, fontconfig, freetype, and dozens of others. What might be the agenda behind this effort?

Conclusion

We live in an era where, in the FOSS world, one constantly has to relearn things, switch to new, supposedly "better", but more bloated solutions, and is generally left with the impression that someone is pulling the rug from under one's feet. Many of the key changes in this area have been rammed through by a small set of decision makers, often closely related to Red Hat/Gnome/freedesktop.org. We're buying this "progress" at a high cost, and one can't avoid asking oneself whether there's more to the story than meets the eye. Never forget, Red Hat and Microsoft (TM) are partners and might even have the same shareholders.


Anonymous user #1

one month ago

Excellent article!

I think the idea is to make the Linux desktop so complicated and time consuming that it will remain a hobby desktop forever.

I think both Microsoft and Red Hat can agree that they don't want competition from free desktops.

You could also have mentioned Snap (Microsoft update mentality) and Flatpak (Red Hat complicated dependency bloat) as examples that might scare Windows users who are used to less dependency bloat.

These package formats are so big mainly because there is no stable toolkit for Linux, neither GTK nor Qt. As you mentioned it isn't like Windows where an old binary works. GTK and Qt break all the time. Sad but true and as long as that happens there won't be a growing Linux ecosystem.

GTK3, and to an even greater extent GTK4, is an attempt to kill every GTK desktop that isn't GNOME. GTK3 is really bad for normal desktop use, like scrolling. We can blame bad porting from GTK2, but I think it's GTK3 itself. I never experienced scrolling problems with GTK2.

The lesson for the Linux user is that there is no free lunch. Only non-corporate Linux users can create a viable toolkit. The Linux desktop is on a long journey and as long as we don't have our own stable toolkit it will be difficult. Forget mainstream distros, they just follow Red Hat so eventually you will end up with a Wayland smartphone Linux "desktop".

As bad as Windows and macOS are these things above (lack of proper stable toolkit mainly) scream "hobby platform".

The conclusion: If you don't like tinkering, use an evil proprietary platform instead - just like Microsoft and Red Hat want you to do.

That being said, there are many smaller distros and Linux projects that work hard to provide real user value so I don't want to come across as ungrateful. It's just that the corporations carefully maimed the future of the Linux desktop.

Anonymous user #2

one month ago

apt removed bash prefixes.

I am really pissed off about that.

Anonymous user #3

one month ago

quote: "As you mentioned it isn't like Windows where an old binary works."

  1. That's just plain wrong. In reality, some work, some break.
  2. There are very old (20+ years) X11 app sources which compile easily without fuss and still work on any Linux, such as an X11 file explorer.

Gnu4ever

one month ago
They said binaries. Re-compiling from source is a different story.

01101001b

20 days ago
Sorry. "Sources compile easily without fuss": no, they don't (exceptions aside). When the code is old and the compiler version is much newer, compilation fails badly unless you patch the code.

Anonymous user #4

one month ago
Regarding the business about eth0 versus enp0s0: this offers advantages too. Mr Poettering made it so that any ethernet card plugged into, say, slot 1 will always get the same device name regardless of card type, et cetera.

Anonymous user #4

one month ago
for most users, red hat is kinda uninteresting unless they manage a large datacentre. red hat focusses on the server industry rather than the desktop market. most stuff red hat offers is not useful at home, unless you have 10000 computers at home, lawl.

Intgr

one month ago

So by this argument, it's bad to both:

  • release new incompatible versions of software.
  • write new, distinct software to replace an older one. Supposedly this is even bad if the switch is made in a fully backwards-compatible manner: "ifconfig" is still being maintained and shipped by all distros.

By this logic, Wayland is bad and we should still be using the old X11 protocol as designed in the 80s, without any dependence on new X11 extensions (which all current toolkits rely on, BTW). Vulkan is bad, OpenGL should be good enough for everyone. All new init systems are bad. In fact, all new programming languages are bad too, because they create something new users are not familiar with, and often come with their own build systems that aren't autoconf/automake. I'm sure this list could be essentially endless.

While I agree in broad terms that breaking backwards compatibility is a bad thing if it can be avoided, and that reinventing the wheel is a bad thing if it can be avoided, reality is far more nuanced than this article makes it out to be. Strictly preserving backwards compatibility at all times would have prevented a lot of innovation.

This article could have been a good one, if it did some research, looking into the reasons why these decisions were originally made, whether those reasons still make sense now, if they were worth the breakage they caused, and what can be learned from it. For instance, Python developers have very clearly stated that they regret the way Python 3.0 broke compatibility and will never repeat this mistake again.

But as it stands, this is just a misguided rant against any kind of change with no attempt to understand nuance.

Also, the article begins by saying that these changes have happened "lately" but most of these examples are a decade old or older.

GTK+ 3 was released in 2011. Python 3 was released in 2008. The "ip" utility is harder to pin down, but archive.org shows its man page from 2007 (https://web....net/man/8/ip); it's probably older than that.

CMake was released in 2000, although it's not clear when it gained significant mindshare in open source projects.

Gnu4ever

one month ago
Lol, there's so much idiocy in your comment. Good bait