
CA Endevor – A History Lesson for Today’s DevOps Pipelines

March 25, 2021 | Blog, Community

Contributed by Vaughn Marshall, Broadcom Inc. (not a CDF Member)

Processes governing the delivery of quality software have continually evolved and improved since the early days of software development in the 1950s and 60s. These days, best practices such as Continuous Integration (CI) and Continuous Delivery (CD) are considered foundational for DevOps, but have you ever wondered where these got started? It may surprise you to learn that these key concepts, along with Application Lifecycle Management in general, have their roots in the Mainframe world. In fact, one product—CA Endevor—is still widely used today to manage mainframe software development.

In the 1980s, when CA Endevor (originally called Control One) was first introduced, software development was mostly done on mainframe computers. And just like software development in the early days of open systems, it could be considered the Wild West. Developers wrote their code, ran their builds individually, and maintained their own mainframe jobs to do so. To deploy to production, developers phoned the operators and told them where to grab the binaries. There was no governance over the process and no audit trail, builds could not easily be reproduced at a later point in time, and there were no controls limiting who could change application sub-systems.

These were the problems the creators of Endevor set out to fix. They did so by bringing all the processes for managing code under one governing system and applying the first “guard rails” to the development process. The lifecycle originally had just two stages: developers made their changes while the system tracked every version and who changed it, automated the builds, and managed the executable outputs. An inventory classification scheme allowed security to be applied down to individual elements. Promotion to production was done by moving the changes to the second stage of the lifecycle under the governance of the tool.

By combining development and operations, Endevor has been a trailblazer from the outset. In fact, the name Endevor is an acronym describing what was then a novel way of working – an ENvironment for DEVelopment and OpeRations – which sounds an awful lot like the term we use for these practices today: DevOps. Endevor has continued to evolve, pioneering new practices around the creation and delivery of software to production environments. Here are some examples of Endevor innovations that are commonplace today:

1. Configurable Lifecycles for Different Applications: Applications can take different routes to production, and properly governing all the steps in creating quality software requires the ability to define different lifecycles or pipelines. Code from a vendor may not be developed in-house, but it may still need to be tested before rolling it out. Contrast that with in-house software, which needs a more comprehensive Software Development Lifecycle (SDLC) to manage properly.

2. Parallel Development with the Ability to Easily Integrate Changes Continuously: In the early days of development, this was mostly done with static parallel lifecycle paths and a merge point later in the lifecycle. Features like Parallel Development Manager (PDM) allowed developers to pull in other developers’ changes as needed (e.g., bug fixes in production could be integrated into in-flight new feature work). Of course, these days, static parallel environments are not enough – developers need to create environments on demand – which is why CA Endevor introduced Sandboxes and, more recently, Dynamic Environments.

3. Automation of Hand-offs Between Stages: To work with maximum efficiency while still maintaining quality, a system should automate promotions, including deployment to live environments and, ideally, running any test automation. This capability is handled by CA Endevor Processors. Just as Generate Processors automate everything to do with compiling an application, Move Processors automate the activities performed as changes move through the lifecycle. These days, you can even layer in automation that runs off-host, for example static analysis of application source during a build or triggering an off-host test suite after a promotion.

4. Planning, Packaging, and Validation of Deliveries to Live Environments: While it is entirely possible to do things on demand, there are often interdependencies between changes that need to be taken into account. Finding problems halfway through an on-demand promotion to production wastes time and resources. You may also want to promote off-hours to avoid impacting running systems. This capability was added to CA Endevor in the form of Packages. With Packages you can specify all the pieces being moved and even validate the move prior to execution to determine whether there would be any problems.

5. Tracking Dependencies Between Application Components: As mentioned above, to properly plan a move, the system needs to understand the dependencies between application components. This is especially important with mainframe applications, where application monoliths are common. CA Endevor tackled this with the Automated Configuration Manager, or ACM. ACM lets developers better understand the impact of their changes, with the system warning of missing dependencies during promotions. Endevor can also automatically rebuild dependent components when the components they depend on change.
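To make the last point concrete, here is a minimal sketch in Python of the general idea behind dependency-driven impact analysis. It is not Endevor's or ACM's implementation or data model; the component names are hypothetical, and it simply illustrates how recording which programs use which shared components lets a system warn about missing dependencies in a promotion and work out what must be rebuilt when a shared component changes.

```python
from collections import defaultdict, deque

# Hypothetical dependency records: each program lists the components it uses
# (e.g., copybooks or called sub-programs). Names are illustrative only.
DEPENDS_ON = {
    "PAYROLL":  ["DATEUTIL", "EMPREC"],
    "BILLING":  ["DATEUTIL", "CUSTREC"],
    "DATEUTIL": [],
    "EMPREC":   [],
    "CUSTREC":  [],
}

# Invert the relationship so we can ask "who uses this component?"
used_by = defaultdict(set)
for program, deps in DEPENDS_ON.items():
    for dep in deps:
        used_by[dep].add(program)

def impacted_by(changed):
    """Return every component that must be rebuilt when `changed` changes."""
    to_rebuild, queue = set(), deque([changed])
    while queue:
        current = queue.popleft()
        for user in used_by[current]:
            if user not in to_rebuild:
                to_rebuild.add(user)
                queue.append(user)
    return to_rebuild

def missing_dependencies(promoted):
    """Flag dependencies that are not included in a proposed promotion."""
    promoted = set(promoted)
    return {
        (prog, dep)
        for prog in promoted
        for dep in DEPENDS_ON.get(prog, [])
        if dep not in promoted  # assumes the dependency isn't already in the target stage
    }

print(impacted_by("DATEUTIL"))            # {'PAYROLL', 'BILLING'}
print(missing_dependencies(["PAYROLL"]))  # {('PAYROLL', 'DATEUTIL'), ('PAYROLL', 'EMPREC')}
```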

Things are continuing to evolve as well. On the one hand, the next generation of developers is not familiar with the interfaces and technology traditionally used on the mainframe, so it might make sense to migrate to new tools. On the other hand, it would be nice not to have to reinvent the years of investment in automation made by their predecessors for things like complex build processes. In the Endevor world, we are addressing that by building integrations with modern systems like Enterprise Git platforms and open source CI/CD tools. Projects like Zowe CLI, a command-line interface built specifically for the mainframe, allow developers to layer new processes and automation on top of the old, and to use newer interfaces to visualize and interact with the underlying systems, all without disrupting existing processes and automation.
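As a hedged illustration of that layering, the sketch below shows how an off-host CI step (a Jenkins stage or GitHub Actions job, say) might call Zowe CLI from Python to submit an existing mainframe build job and gate the pipeline on its return code. The data set name is a placeholder, the Zowe CLI profile is assumed to be configured already, and the JSON field names may vary between Zowe CLI versions; treat this as a sketch of the pattern, not a drop-in script.

```python
# Illustrative only: drive an existing mainframe build from an off-host CI step
# via Zowe CLI, without changing the build itself. Assumes a Zowe CLI profile is
# already configured; the data set name is a hypothetical placeholder, and the
# JSON field names shown here may differ slightly between Zowe CLI versions.
import json
import subprocess
import time

BUILD_JCL = "MY.TEAM.JCL(BUILD)"  # existing build JCL, left unchanged

def zowe(*args):
    """Run a Zowe CLI command and return the parsed `data` of its JSON response."""
    result = subprocess.run(
        ["zowe", *args, "--response-format-json"],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)["data"]

# Submit the build job exactly as a developer or scheduler always has...
job = zowe("zos-jobs", "submit", "data-set", BUILD_JCL)
job_id = job["jobid"]

# ...then poll until it finishes so the pipeline can pass or fail on the result.
while job.get("status") != "OUTPUT":
    time.sleep(10)
    job = zowe("zos-jobs", "view", "job-status-by-jobid", job_id)

retcode = job.get("retcode") or "UNKNOWN"
print(f"Build job {job_id} ended with {retcode}")
if retcode != "CC 0000":
    raise SystemExit(1)  # surface the failure to the CI runner
```

The point of the pattern is that the mainframe process stays exactly as it was; the modern tooling only observes and orchestrates it.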

While mainframe systems have certainly become more specialized, now mostly hosting large systems-of-record applications that securely process large transaction volumes for entities like banks, insurers, transportation companies, and governments, they are still a critical part of the world’s computing infrastructure. And given that pedigree, it should come as no surprise that many of the practices pioneered in mainframe application lifecycle management systems are now considered key to being competitive in the marketplace, regardless of the development platform.