LJ Archive

21st-Century DevOps—an End to the 20th-Century Practice of Writing Static Build and Deploy Scripts

Tracy Ragan

Issue #230, June 2013

Automating, standardizing and simplifying DevOps requires a model-driven process unchained from one-off back-end scripts.

Having served as a software developer since leg warmers and shoulder pads were in style, I've watched the distributed platform, from UNIX to Windows, struggle with the process of moving software changes from development to production over and over again. Recent news of IBM and CA Technologies acquiring “Release Automation” solutions for the distributed platform shows the problem is in fashion once again. It seems our industry either enjoys the feeling of déjà vu or simply chooses to forget what it already has done to address the problem of DevOps.

DevOps, at its core, is a process that simplifies the hand-off of source code between development and production, allowing test and production release teams to build and deploy binaries as required for the correct technology stack along the way. And, tools released in the not-so-distant past have tried to solve this problem but never quite met the challenge. Tivoli, CA Unicenter and a host of other solutions from competing companies always have had the ability to perform software releases. But there is always something missing, and most of today's “in fashion” solutions do little differently from their predecessors.

So what is missing? Is no one else noticing the 800-pound gorilla sitting in the room? The solutions paraded around as the latest-and-greatest method of solving the DevOps problem simply do not disrupt the status quo enough to get to the core of the DevOps issue. They offer some ability to centralize logs and to manage server environments and configurations, which is certainly helpful. But they ignore the back-end, one-off build and deploy scripts that contain the logic that actually does the work.

Yes, I understand. Ant and Maven are so cool that you may be compelled to tinker with them. But in the corporate enterprise, the time for tinkering is over, and the time has come for a 21st-century solution that is dynamic and model-driven for both build and deploy. Central to most DevOps tools is the claim of a “virtualized” process. However, these “virtualization” solutions choose to ignore the 20th-century static scripts that serve as their foundation. A 21st-century “virtualized” solution cannot be built upon brittle, one-off scripts for either the build or the deploy. What is required is for developers to accept the paradigm shift that 21st-century DevOps demands: a model-driven, scriptless solution. And yes, change can be scary.

History can be our best teacher, and in the DevOps space, this is particularly true. Looking at how the UNIX administrators and mainframe administrators of the 1980s and 1990s addressed the problem can provide insight into what works and what does not. On the mainframe, private compile and ship JCL was thrown out when “processors” were introduced. Processors are the mainframe's way of creating dynamic build and deploy “scripts” based on a model-driven framework. Everyone repeats and reuses the logic for compiling/binding/linking source code and shipping load objects. On the UNIX side, a central administrator, often using ClearCase, managed a central build and deploy script for the different levels and versions of the application moving across the life cycle. Even though they did not succeed in completely eliminating build and deploy scripts, they minimized them to a manageable level, often one build and one deploy script for each environment—smart!

For reasons far beyond my understanding, individuals who sell themselves as visionaries in this particular space avoid the discussion of scripts. Scripts are a huge bottleneck and a hidden cost in the activities of developing and releasing software. There is some hope, however: we may be seeing some movement away from scripting for releases, but few visionaries understand the build, and they tend to leave it out of the conversation.

So, what is in a build, and why does it matter? Actually, a substantial amount of information critical to managing the DevOps effort lives in the build. Get the build right, and a substantial amount of time and money is saved across the life cycle. Builds most commonly are managed by static build scripts that are somewhat unintelligent: you pass a script a set of commands, and it executes them from top to bottom. What build scripts cannot do is everything else that is needed for managing your DevOps process through release.

A software build needs the ability to be flexible and transparent in what and how it is building the software—for example, incremental builds, dependency management, compile/link/archive options (debug vs. no debug), transitive dependencies and the use of third-party libraries that make up the release target technology stack. Scripting languages lack the ability to manage these moving parts dynamically. As a result, you get redundancy: copied scripts for different needs and environments, and you often hear “it worked on my machine.” Scripts also cannot produce the reports that allow validation of the binaries before a release. Scripts are black boxes that produce black-box binaries. At best, it is a guessing game as to what the script did, such as which libraries and options it used to create the deployable objects. And if you do not know what your build script did, you cannot guarantee consistent deployments, regardless of the level of virtualization you have achieved. A bad build absolutely will result in a bad deploy, no matter how much money you spend on your release automation solution. The two are simply different sides of the same coin.
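To make the contrast concrete, here is a minimal sketch (all names hypothetical) of the model-driven idea: the model records what each target depends on, and the engine derives the build order and rebuilds only what changed—an incremental build, which a static top-to-bottom script cannot give you.

```python
# Hypothetical model-driven build sketch. The "model" is data: each target
# maps to its inputs (source files or other targets). The engine walks the
# dependency graph and rebuilds only targets whose inputs changed.

model = {
    "util.o": ["util.c"],
    "app.o": ["app.c", "util.h"],
    "app": ["app.o", "util.o"],
}

def build(model, changed, built_last_time):
    """Return the targets that must be rebuilt, in dependency order,
    given the set of inputs that changed since the last build."""
    dirty, order = set(changed), []

    def visit(target):
        if target not in model:          # a plain source file, nothing to do
            return target in dirty
        stale = any(visit(dep) for dep in model[target]) \
                or target not in built_last_time
        if stale and target not in dirty:
            dirty.add(target)
            order.append(target)
        return stale

    for t in model:
        visit(t)
    return order

# Only util.c changed, so util.o and the final link are redone; app.o is not.
print(build(model, {"util.c"}, {"util.o", "app.o", "app"}))
# → ['util.o', 'app']
```

A static script would have rerun every step regardless; here the same model drives every developer's build identically, so “it worked on my machine” has nowhere to hide.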

Deploy scripts are equally critical. In a deploy script, the steps for performing the release should be super-standard, with no wobbles between one Java application using a WebSphere server and another. Scripts are wobbly; it is their nature to be wobbly, as there are so many different ways to write one. Any DevOps solution that claims automation and virtualization as its core features should not require one-off scripts delivered by the customer's development teams to drive the automation tools. When you are spending top dollar on a new DevOps solution, your developers should not need to deliver the foundation of that new solution. If you purchase a solution that requires build or deploy scripting in Ant, Maven, Make, Python or the next groovy new scripting language, you can be guaranteed to be back in the market in the not-too-distant future, because the problems you are trying to solve today will still be with you in those black-box, one-off build and deploy scripts. The only solution is for your team to embrace the paradigm shift and move to a model-driven process, from build through deploy, for achieving DevOps.
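As a sketch of the alternative (all names here are hypothetical), a model-driven deploy defines the standard release steps exactly once and pushes every per-environment difference into data, so no team hand-writes or tweaks a deploy script:

```python
# Hypothetical model-driven deploy: one standard step sequence for every
# application; per-environment differences live in the model, not in
# copied-and-tweaked scripts.

STANDARD_STEPS = ["stop_server", "copy_binaries", "update_config",
                  "start_server", "smoke_test"]

# The environment model: pure data, maintained centrally.
environments = {
    "test": {"host": "test01", "app_server": "WebSphere"},
    "prod": {"host": "prod01", "app_server": "WebSphere"},
}

def plan_release(env_name, environments):
    """Expand the single standard step list against an environment model."""
    env = environments[env_name]
    return ["{} on {} ({})".format(step, env["host"], env["app_server"])
            for step in STANDARD_STEPS]

for line in plan_release("prod", environments):
    print(line)
```

Because every release expands from the same model, the test deploy and the production deploy differ only in data, never in logic, which is exactly the "no wobbles" property described above.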

Paradigm shifts often are difficult when there is a deep-rooted culture to overcome. I suspect this is the reason that one-off build and deploy scripts still are used on the distributed platform, even though the mainframe and UNIX administrators eliminated or minimized them close to 30 years ago. I recently purchased a Tesla Model S, a 100%-electric automobile. It is an amazing automobile. Very little can go wrong; two small electric motors push the back wheels—no transmission, no grease, no parts, no hassle. I was showing the car to a neighbor recently, and he argued, “I would never buy an electric car. I would miss the sound of the engine racing, and I would not be able to work on it.” His statement reminded me of a developer who once told me he liked his build and release scripts and enjoyed “tweaking” them. He explained that it gave him a sense of accomplishment when they worked well, and that he considered himself a craftsman of his trade. I'm not sure what his director would have thought of that statement, considering the time and money spent on tweaking and managing those scripts.

Like the electric car, the time has come for a better way to do DevOps. A model-driven process will allow us to stop spending time and money tinkering with a DevOps engine that is based on hard-coded scripts and instead move to a process that does not require 20th-century techniques to solve the very real 21st-century DevOps challenge.

Tracy Ragan has had extensive experience in the development and implementation of business applications. She began her consulting career in 1989, consulting to Fortune 500 organizations in the areas of testing, configuration management and build management. During her consulting experiences, Tracy recognized the lack of build management procedures for the distributed platform that had long been considered standard on the mainframe. In the four years leading to the creation of OpenMake Software, she worked with development teams in implementing a team-centric, standardized build and deploy process. She served on the Eclipse Foundation Board of Directors as an Add-in Provider Representative for five years. She received her BS in Business Administration from California Polytechnic University and is a first-degree black belt in Shotokan Karate.
