Contributed by Melissa J McKay, JFrog
The State of Continuous Delivery Report, commissioned by the Continuous Delivery Foundation (CDF) and released in May, is a source worthy of attention. It’s based on data collected from more than 125,000 developers over the last two and a half years through SlashData’s Developer Nation surveys. The analysis is compelling, and the report itself isn’t heavy reading: it’s presented in neatly organized, digestible sections along with helpful graphs that visualize some of the points.
By itself, data isn’t as valuable as what you choose to do with it! Ideally, the insights provided in this report will lead to real-world decisions that improve team processes. After my initial once-over, I reread the analysis with three scenarios in mind, each important to developers and teams looking to gauge where they stand in their DevOps and software security journey:
- Dev teams at the beginning of their DevOps journey
- Dev teams in the middle of their DevOps journey
- Dev teams improving their software security
For each of these scenarios, what is the next step a team should take in implementing DevOps practices and tools? Using the analysis in the report, I want to advise on which action would give these teams the most bang for their buck. Given the extensive amount of data collected, a team should feel confident that their chosen path will lead to an improvement worth the resources spent. And if not, I’d expect a healthy discussion on what is missing or incorrect in the analysis. Either way, this approach is certainly better than choosing a direction based on the day of the week or the mood of the team.
In using the analysis directly from The State of Continuous Delivery Report, I’m taking leaps that I don’t assume all readers will be comfortable with from the start. Much of the analysis in the report is based on respondents’ answers to questions regarding DevOps Research and Assessment (DORA) metrics, with the premise that these metrics predict software delivery performance. The report relies heavily on three of the four DORA metrics: lead time for changes, deployment frequency, and time to restore service (the fourth is change failure rate). For an in-depth background on DORA metrics and their significance, refer to the book Accelerate: The Science of Lean Software and DevOps: Building and Scaling High Performing Technology Organizations, by Nicole Forsgren, Ph.D., Jez Humble, and Gene Kim.
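For readers new to these metrics, here is a minimal sketch of what the three metrics the report leans on actually measure. The deployment records and field names are hypothetical; real teams would pull this data from their CI/CD and incident-tracking systems:

```python
from datetime import datetime, timedelta

# Hypothetical deployment records: when the change was committed, when it
# was deployed, and how long it took to restore service if the deployment
# caused an incident (None if it caused no incident).
deployments = [
    {"committed": datetime(2023, 5, 1, 9, 0), "deployed": datetime(2023, 5, 1, 15, 0), "restored_after": None},
    {"committed": datetime(2023, 5, 2, 10, 0), "deployed": datetime(2023, 5, 3, 11, 0), "restored_after": timedelta(hours=2)},
    {"committed": datetime(2023, 5, 4, 8, 0), "deployed": datetime(2023, 5, 4, 12, 0), "restored_after": None},
]

# Lead time for changes: median time from commit to deploy.
lead_times = sorted(d["deployed"] - d["committed"] for d in deployments)
median_lead_time = lead_times[len(lead_times) // 2]

# Deployment frequency: deployments per day over the observed window.
window = max(d["deployed"] for d in deployments) - min(d["deployed"] for d in deployments)
deploys_per_day = len(deployments) / max(window.days, 1)

# Time to restore service: mean restore time across incidents.
restores = [d["restored_after"] for d in deployments if d["restored_after"] is not None]
mean_restore = sum(restores, timedelta()) / len(restores)

print(median_lead_time, deploys_per_day, mean_restore)
```

The exact aggregation (median versus mean, window length) varies between teams and surveys; what matters is measuring the same way consistently over time.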
Dev Teams at the Beginning of Their DevOps Journey
One of the data points in The State of Continuous Delivery Report revealed that DevOps activities on software teams are not yet ubiquitous! My surprise probably just reflects the extensive exposure that comes with the positions I hold, but regardless, the data indicates that 16% of developers are not engaged in DevOps activities at all.
This is a strikingly high number given how broadly DevOps methodologies have been accepted and adopted in the industry. My first question was whether there was something in common with the types of software developed where teams do not benefit from DevOps. That might be the case given the data in the report showing that particular sectors slow to adopt in the past have seen large jumps this year compared to Q1 2022 in the use of DevOps activities—as much as an 8-point increase for those involved with game development, for example. A similar increase was seen with desktop app development. Without this trend, this alarming 16% would be a much higher number.
Given that there are clearly still software developers who are not involved in DevOps activities, and that a portion of those are likely considering coming on board, what does the data tell us that would support a team making this decision? The report section on DevOps technology usage analyzes the correlation between the number of DevOps technologies developers use and improved software delivery performance as measured by DORA metrics. The analysis makes the claim that there is a strong correlation between using 10 or more DevOps technologies and the likelihood of being in the top-performing groups for lead time and restoration time. Conversely, for those using a single DevOps technology, the likelihood of being in a low-performing group is alarmingly high for lead time, restoration time, and deployment frequency alike.
Given this insight, you could conclude that the more immersed you are in DevOps tools, the more likely your software delivery performance will improve. So if you haven’t already embraced DevOps, it’s time to start!
Dev Teams in the Middle of Their DevOps Journey
If your team has decided to take the DevOps plunge, you certainly want to consider where to start! Two recommendations here: (1) get your baseline metrics before you make any changes to evaluate your improvement, and (2) don’t make more than one change to your processes at a time before collecting new metrics to compare with your baseline.
You won’t be able to prove that your work was worthwhile if you forget to measure! When it comes time to demonstrate the value of your efforts to upper management, or to help make the case for an alternative solution, you need the data to back you up. Also, it’s tempting to try to fix “all the things” at the same time. It’s exhilarating to solve problems as fast as you can and frustrating to hold back! But process and tool changes like this are bigger than an individual development team. There are times when it will be crucial to know which changes are likely to provide the biggest benefit given limited resources and time. To do this, you need some separation between your efforts and the ability to measure each change. For example, if you can document metrics for each change you make to your system, your organization will be better positioned to prioritize specific process changes and allocate the necessary resources for teams across the organization.
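The one-change-at-a-time advice above can be sketched as a simple experiment log. Everything here is hypothetical, including the metric names and numbers; the point is the discipline of snapshotting before and after each single change:

```python
# Hypothetical experiment log: take a metrics snapshot before any change
# (the baseline), then make exactly one process change at a time and
# snapshot again, so each improvement can be attributed to its change.
baseline = {"lead_time_hours": 48.0, "deploys_per_week": 2.0}

changes = [
    ("introduce CI pipeline", {"lead_time_hours": 30.0, "deploys_per_week": 4.0}),
    ("automate deployments", {"lead_time_hours": 18.0, "deploys_per_week": 7.0}),
]

results = []
previous = baseline
for name, snapshot in changes:
    # Each change is measured against the state immediately before it,
    # not against the original baseline, isolating its individual effect.
    deltas = {metric: snapshot[metric] - previous[metric] for metric in snapshot}
    results.append((name, deltas))
    previous = snapshot

for name, deltas in results:
    print(f"{name}: {deltas}")
```

A log like this is also exactly the artifact you want in hand when making the case to upper management for the next round of investment.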
Regarding the best choice of DevOps tools to start integrating into your environment, The State of Continuous Delivery Report shares an interesting insight on this specifically. The analysis reveals that among developers not using CI/CD tools, low-performing groups make up the largest proportion across all metrics. From this, you can conclude that if your team is considered low-performing according to DORA metrics and has yet to implement CI/CD tools in its development environments, this may be a good place to focus your efforts.
Despite the evidence that an increased number of DevOps tools improves performance across DORA metrics, it comes as no surprise that there are things you could do that would thwart your efforts. If you’re a little further along in your DevOps tool implementation but are still looking for improvement, it seems that more is not always better. The Interoperability of CI/CD tools section of the report addresses this scenario and suggests that there is a diminishing return from increasing the number of CI/CD tools in use for lead time and deployment frequency. For restoring service, there was a significant increase in the number of low-performing teams when going beyond two or three CI/CD tools. The analysis leaves room for a couple of different interpretations. One is that an increased number of tools is a response to increased project complexity. The other is that CI/CD tools that overlap in functionality or have other interoperability issues come at a cost to software delivery performance, both in integrating them and in isolating service-impacting issues.
The interoperability data is a little more difficult to assess given the many open questions and the multiple interpretations they allow, but it stands to reason that simply increasing the number of tools is not always the appropriate approach when seeking improved software delivery performance. Consider the skill sets of your developers. Focus on breadth of functionality to avoid overlap, and before implementing a new tool, spend some time researching how well it behaves with what already exists in your environment.
Dev Teams Improving Their Software Security
Security is certainly top of mind today, where it should be! There have been quite a few developments in the industry over the last few years to address software security concerns, especially when it comes to shifting left to developers. We are far from perfection, but the tools available now have dramatically improved how we prevent and mitigate security threats and code vulnerabilities beginning at the development stage, rather than leaving them to the end of the software lifecycle. Knowing more about how implementing these necessary security checks affects software delivery performance will be valuable information going forward.
I was pleased that The State of Continuous Delivery Report included data from questions in its Q3 2022 survey regarding the security activities DevOps practitioners were involved with, as well as some intriguing information about testing methodology (automated and continuous versus manual) and the type of security testing (API, test-time, and build-time). Listed as a key insight in the report is that testing apps for security is now the second most popular DevOps-related activity. The data also seems to support automated and continuous security testing as the way to go, given that this is how teams more likely to be in the top-performing groups have chosen to implement their security checks.
The analysis is careful about making a lot of claims in this area. The authors give a fair warning that there are no hard and fast rules here and teams should be thoughtful about their goals when adding these tools to their development and delivery processes. As always, I would assume interoperability issues would be a concern as well as how well these tools can be integrated into the existing processes so that there is as little disruption as possible.
I’m really looking forward to future reports on this subject in particular, as there is still a lot to be learned here. But for now, based on this data, I would be confident advising a team to add automated security checks (especially at build-time) and to ensure that any other tools are carefully evaluated for interoperability and smooth integration into current processes, particularly for developers. As a developer myself, I appreciate tools that don’t require me to leave the development environment I’m accustomed to. That extra context switch is costly!
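To make the idea of an automated, build-time security check concrete, here is a minimal, hypothetical sketch of the gate itself: a build step that fails when a pinned dependency matches a known-vulnerable version. The library names and vulnerable versions are made up, and real scanners consult live vulnerability databases rather than a hardcoded set:

```python
# Hypothetical build-time gate: flag any pinned dependency that matches a
# known-vulnerable version. The entries below are invented for illustration;
# a real scanner would query an up-to-date vulnerability database.
KNOWN_VULNERABLE = {("examplelib", "1.2.0"), ("otherlib", "0.9.1")}

def parse_pins(lines):
    """Parse 'name==version' pins, ignoring comments and blank lines."""
    pins = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#") or "==" not in line:
            continue
        name, version = line.split("==", 1)
        pins.append((name.lower(), version))
    return pins

def check_build(requirement_lines):
    """Return the list of vulnerable pins; an empty list means the gate passes."""
    return [pin for pin in parse_pins(requirement_lines) if pin in KNOWN_VULNERABLE]

findings = check_build(["examplelib==1.2.0", "safe-lib==2.0.0", "# a comment"])
print("vulnerable:", findings)
```

Running a check like this on every build, rather than on demand, is what moves a team into the automated-and-continuous category the report associates with top performers.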
Final Thoughts
I can’t overstate how important it is to work with empirical data. Anecdotal evidence can enhance and help explain some of the results as well as guide some of the analysis. But ultimately, to get the needed support for something as heavy as a change in software delivery processes, you need empirical evidence to work with.
The State of Continuous Delivery Report provides a wealth of information and helpful analysis. Many reports like this one are released regularly, making various claims about different topics. For any that you want to rely upon, always take a moment to check the number of respondents, the positions those respondents hold, and the methodology used to address sampling bias and representation across the industry.
For the three scenarios that I identified, The State of Continuous Delivery Report provided valuable and relevant information. If you find yourself in similar circumstances, not only is this data and analysis worth having on your side, but I’m optimistic that in these cases you will be able to make some data-based decisions about your toolsets and processes that will improve your software delivery performance, and maybe even developer happiness! I’m looking forward to diving deeper into this report to glean further insights and discussion points, and I encourage you to do the same. Happy reading!
Melissa McKay is a member of the Technical Oversight Committee of the Continuous Delivery Foundation, a global consortium of 36 member organizations that support the growth and evolution of continuous delivery models and best practices. Her software engineering and development career spans more than 20 years and she currently holds a Developer Advocate position at JFrog, a recognized leader in DevOps and software security.