Our industry is full of buzzwords, jargon and abbreviations. DevOps itself was only coined as a term in 2008, so some of these concepts are relatively new, but some are actually quite old, and their definitions or uses have changed over time.
Continuous integration and continuous delivery (CI/CD) is one example. It’s a core aspect of DevOps, but it predates DevOps by nearly two decades, and it has drastically changed the way we build and release software.
Before CI/CD became popular, releasing software was a massive undertaking. Updates shipped infrequently, sometimes only once a year or less, so each one was very large; combined with the slow internet connections of the time, updating was always time-consuming.
Providing the new version of a piece of software on a floppy disk, CD or thumb drive wasn’t uncommon. It was generally accepted that an update would introduce problems, and the way we did things back then meant a quick patch release was unlikely to follow.
This problem affected not just businesses and developers, but also consumers. Regular, nontechnical users had to care about the difficulty of updating software.
You probably knew what version a particular application was running, and updates were always a big production that had to be planned for and handled manually. Today, most of us don’t care about it at all, and we barely notice software updates when they happen.
How Did We Get Here?
Continuous integration came first. It’s the practice of regularly merging all developers’ working codebases with the main branch, potentially multiple times per day, and it was first proposed by Grady Booch in 1991, in his book “Object-Oriented Analysis and Design with Applications.” His vision for continuous integration didn’t suggest releasing multiple times a day, but it was the first step.
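To make the practice concrete: with today’s tooling (which of course postdates Booch’s proposal), “integrating continuously” amounts to merging short-lived working branches back into the main line and verifying the combined result, several times a day. Here’s a minimal simulation of one integration cycle in a throwaway git repository; the branch name, file name and check are all hypothetical stand-ins:

```shell
set -e
# Create a throwaway repository to simulate one integration cycle.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
main=$(git symbolic-ref --short HEAD)   # default branch name varies by git version
git -c user.email=dev@example.com -c user.name=Dev commit -q --allow-empty -m "init"

# A developer does a small piece of work on a short-lived branch...
git checkout -q -b feature
echo "echo ok" > check.sh               # stand-in for real code plus its check
git add check.sh
git -c user.email=dev@example.com -c user.name=Dev commit -q -m "add check"

# ...and integrates it back into the main line right away,
# running the check against the merged result (the "build").
git checkout -q "$main"
git merge -q feature
sh check.sh
```

The point of doing this several times a day is that each merge is small, so conflicts and breakage surface within hours instead of accumulating for months.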
In 1997, Extreme Programming built on Booch’s method by advocating for releasing multiple times a day. The name sounds absurd in retrospect, conjuring images of edgy ’90s product marketing, but it meant taking concepts and paradigms that were already accepted parts of writing and releasing software and pushing them to the extreme. For example, the concept of code review was exaggerated into pair programming. Methodologies like Scrum and Kanban followed, each building on what came before to release more software more often.
In the early days, while we recognized that we needed to be releasing more frequently, we didn’t really have tools to make that easier. Software was still largely being tested and delivered by hand. We didn’t get the first open source tool to make continuous integration easier to achieve until 2001, with the release of CruiseControl. For the first time, we had a system we could install and stand up ourselves to automate the management of builds, which allowed us to release more often. It even integrated with your integrated development environment (IDE) if you were using Eclipse. CruiseControl was Java-specific; if you were writing Ruby, for instance, you had to use CruiseControl.rb, its Ruby variant.
Jenkins arrived a decade later, in 2011, as a fork of the older Hudson project. It quickly overtook CruiseControl and remains in wide use today; it supports many languages, can be made to do almost anything you want and has an enormous community, so when you do run into difficulties, someone else has likely hit the same issue and documented the solution. Its success helped popularize a whole class of similar tools, like TeamCity and Bamboo.
However, Jenkins and its class of tools are starting to show their age. You have to stand up infrastructure and install them yourself, and someone has to be responsible for maintaining them.
Slowly but surely, these self-hosted, self-managed CI tools like Jenkins or the now-defunct CruiseControl are being replaced by lower-maintenance cloud native or hosted services like CircleCI, Travis CI or GitHub’s own GitHub Actions.
Most major cloud providers offer native CI/CD tools of their own. They support dozens of languages and build environments, know how to deal with cloud native technologies like Docker and Kubernetes, and integrate directly with a wealth of other services to handle deployment, analytics or observability. Nearly anything can be automated inside your pipeline now.
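As a concrete illustration of how little maintenance these hosted services demand, an entire pipeline today can be a few lines of configuration checked into the repository. This sketch uses GitHub Actions workflow syntax; the workflow name and the `make test` step are placeholders for whatever your project actually builds and runs:

```yaml
# Hypothetical minimal CI workflow: every push checks out the code
# and runs the project's test suite on a hosted runner.
name: CI
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4   # fetch the repository
      - run: make test              # placeholder build/test command
```

The service provisions the runner, executes the steps and reports the result back to the commit or pull request; none of the Jenkins-style server setup and upkeep is required.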
Thirty years ago, updating a piece of software involved a massive amount of repetitive, manual work over the course of months, culminating in a new version that was slow to download (if it wasn’t so large it had to be provided on physical media) and potentially introduced more problems than features.
Today, we barely notice them. Software updates happen automatically as they’re available, and consumers no longer have a reason to know or care what version a particular application is running.
The evolution of CI/CD has given us more than just faster software updates. That ability to release more frequently, plus the robust environment CI/CD tools provide inside of the pipeline, has given rise to new classes of tools that we consider central to DevOps and would have a hard time imagining software development without — better metrics, observability tools, tracing, Infrastructure as Code, a whole host of security tools and more.
It’s been nothing short of revolutionary, and I can’t even begin to imagine where we’ll be in another 30 years.