
Disrupting the Status (Distro)Quo

Dustin Kirkland, VP of Engineering

This is the first in a three-part series exploring open source software delivery — the recent past, the inflection point of the present, and the future. We’ll dive into the foundations that created and defined UNIX and Linux operating system distributions, the challenges posed by those traditional distributions, the moment of change upon us, and Chainguard’s approach to defining our open source future.


Part one focuses on how today’s status (distro)quo came to be and some of its implications.


History of the Linux World


In their 30+ years of history, Linux distributions (distros) have achieved remarkable ubiquity, powering everything from personal laptops and servers to complex cloud environments and embedded systems. The different flavors and varieties of Linux distros are a testament to the overwhelming popularity of open source and the broad use cases it’s been built to solve, with prominent players each releasing distros with their own unique characteristics and philosophies.


Following in the footsteps of UNIX systems, Linux distros adopted the established practice of periodically snapshotting the kernel and user space, rigorously testing, hardening, documenting, releasing, supporting, and eventually retiring each version. For decades, stalwarts like SUSE, Debian, Red Hat, and Ubuntu have followed a pattern of producing named and numbered distro releases.


For over three decades, UNIX and Linux releases were delivered “when they were ready.” Recognizing the asynchronous and unpredictable release cycles of its parent distribution, Debian, Canonical introduced a timed release model for Ubuntu. Since 2004, Ubuntu has reliably released every April and October, with each release built from a snapshot of Debian's "devel" branch. This contrasts with the "when it's ready" approach of Red Hat and Debian, and gives the users and organizations who rely on Ubuntu a degree of predictability. This consistent release cycle helped users plan their technology adoption and infrastructure deployments more effectively. However, the lifecycle and maintenance of traditional distros led to longer-term concerns.


Software, frozen in time


Each release encapsulates the state of the open-source software ecosystem at that particular moment, with thousands of software packages bundled together in a given general purpose distribution. Think of each of these releases as an ice core, preserving a historical record of atmospheric conditions, only in this case it’s software that’s largely “frozen” in time.


Producing a robust Linux distribution like RHEL, Ubuntu, or Debian requires monumental effort, involving hundreds, if not thousands, of engineers (many employed by the distro, and many more volunteering their efforts in their own open source communities). While users appreciate the initial release, the ongoing maintenance and updates are equally crucial. The maintainers, whether community volunteers or commercial entities, provide ongoing updates and fixes for these packages for a defined period, ultimately leading to the release's End of Life (EOL). These updates address bugs, enhance performance, improve stability, and perhaps most importantly, patch security vulnerabilities. This critical maintenance and support eventually reaches its end regardless of the user’s or organization’s readiness, leaving them to run unsupported software and deal with vulnerabilities or incompatibilities on their own.


The reality of eventual obsolescence leads users to turn their attention beyond ongoing patches and minor performance improvements to “big bang” upgrades. These major version upgrades are necessary for accessing new features, hardware compatibility, cloud advancements, performance improvements, and more comprehensive security updates, but they come at considerable cost. Major upgrades are always time-consuming and challenging, threatening stability and performance while slowing overall business velocity. So while organizations and hobbyists alike may recognize the need for upgrades, you’d be hard pressed to find anyone excited at the prospect, especially across a massive fleet of infrastructure.


When it comes to traditional Linux distros, it’s between this rock and a hard place that many users of open source find themselves:


  • They can count on incremental patches for as long as a given distro release is maintained. That means patching their aging infrastructure to limit potential security and performance risks while being unable to cherry-pick and backport every fix to their stable environment;

  • Or they can attempt a large-scale “big bang” upgrade that may deliver new functionality in emerging areas like AI and solve some otherwise unpatched security issues, but at the cost of tremendous effort and the risk of introducing instability. Moreover, this is merely a temporary solution, as they will need to do it all again every couple of years.


Distro trade-offs and golden image efforts


This perpetual trade-off leads to a variety of organizational problems. Security teams constantly point out high CVE counts and vulnerabilities within the software underpinning a variety of applications and processes, without any clear path to remediation. Engineering teams spend countless hours and developer resources attempting to patch vulnerabilities without inhibiting performance and uptime, all while new features and innovations are sidelined or delayed. Juggling conflicting engineering and security priorities ultimately forces some kind of board-level trade-off, whether that’s accepting more risk or slowing innovation, and potentially revenue growth.


Some organizations have tried to solve these problems in-house, with teams dedicated to DIY curation of software packages and bespoke distros through golden image programs. These well-intended efforts to bridge security and standardization expectations with developer velocity often fall victim to the same challenges posed by today’s distro delivery models, acting less as a standardizing body and more as a bottleneck circumvented by shadow IT. Even with these programs in place, businesses struggle to meet delivery timelines or enter new compliance-driven markets because core open source software is aging, vulnerable, or both.


The time for a new approach


All of this makes it abundantly clear that, despite its history as a foundational component of open source software delivery, the traditional distro model is increasingly challenged by the demands of security and the ever-increasing velocity of modern computing. To truly leverage the potential of cloud-native technologies, we must look beyond the limitations of the conventional distro and explore new paradigms.


In the next entry of this series, we’ll look at the forces driving an inflection point and what it means for modern software development.


If you found this useful, be sure to check out our whitepaper and the broader approach we’ve taken to solving this problem for organizations.

