Intro to Jenkins Training Course Enrolls Over 5,000 in First Month

By Staff

Linux Foundation Training and the Continuous Delivery Foundation launched a free training course on the edX platform, LFS167x – Introduction to Jenkins, on June 4. Since then, the course has already enrolled more than 5,000 students, making it one of the fastest-growing courses we have ever released. This is great news for helping to grow the Jenkins and overall DevOps communities.

The course covers the fundamentals of continuous integration/continuous delivery (CI/CD) and how they help transform the overall software delivery process. It is most useful for roles such as DevOps engineers, software developers and architects, and professionals focused on site reliability and quality assurance, though anyone involved in the software delivery process will benefit. It includes a detailed introduction to the Jenkins automation server and provides instructions on how to set up and use Jenkins for CI/CD workflows.

Upon completion, enrollees will have a solid understanding of the role Jenkins plays in the software development lifecycle, how to install a Jenkins server, how to build software with it, how to manage third-party integrations/plugins, and how to scale and secure Jenkins. They will also get a glimpse of what they can do to further enhance their CI/CD skills.

Join the more than 5,000 individuals who have started improving their software delivery processes – enroll for free today!

Introducing Our Newest CDF Ambassador – Romnick Acabado

By Staff

Hi CI/CD Fans,

I’m Romnick Acabado, a DevOps Leader and IT Manager from Lingaro Philippines. I use my strengths, passion, and purpose to explore, learn, and share DevOps practices that improve the lives of the people involved in the flow from business ideas to a high-quality user experience.

In short, I would like to help to improve the lives of people all over the world through modern IT.

This year marks my tenth year as an IT professional. Learning DevOps practices became my priority once we started calling our team a “DevOps Team.”

Everything starts with awareness, and you only really know where your team stands, and can validate your experience and challenges, once you are exposed to the external community.

When I was studying, I was very active in joining student societies to give additional value to our community. I never imagined I could continue that in the corporate world because of my doubts.

In 2019, I worked a lot on improving my confidence, and I believe the actions I took helped me believe in myself and leverage my strengths (analytical, responsibility, relater, communication, and learning) and competencies while working up to my potential. So in the latter part of 2019, I found the Ambassador Program of the DevOps Institute and signed up. Forest Jing and Dheeraj Nayal were very accommodating in assisting me through the process. I have been a fan of the 70-20-10 learning model, where 20% of learning a skill usually comes from your relationships and connections to experts, so I thought it was about time to stop limiting my connections to within my company. Luckily, I was selected and successfully joined the program.

I have focused on improving myself and on continuously being an asset wherever I am engaged, and I am happy that the DOI’s Chief Ambassador has recognized it.

As I continue to build my branding and represent our organization in DevOps, I have embraced my role as one of the leaders in the Philippines to support and promote the DevOps movement.

I created a website where I share my explorations and DevOps journey. I also maintain Facebook and Twitter accounts.

You can also join my Meetup group DevOps SKIL Up PH, where I want to build a community of DevOps practitioners and leaders in our country to help each other through upskilling.

I’m joining the CDF as an ambassador because I believe what the CDF and DOI advocate intersects. I even see my co-ambassadors in both communities. I am confident that through the CDF, I will be able to learn more and share my knowledge about CI/CD tools, which are essential in DevOps design.

I also see these communities aligning with my vision, mission, and values around creativity, being solution-oriented, and collaboration.

Being a life-long learner, I know that I will continue to learn with the experts in this community, and I will be able to give value as well through my time and expertise. It’s always fun to be surrounded by like-minded DevOps professionals.

Over the past months, I was able to join summits and panel discussions in our DevOps communities, including the DevOps Summit in April 2020 and the DevOps India Summit. I’ll also be speaking at the upcoming Virtual DevOps Summit on November 2-3.

You can connect with me through my LinkedIn account for future collaboration on DevOps, CI/CD, or analytics solutions.

Good luck on your DevOps journey, and see you at future CDF events. It’s my pleasure to represent and be part of this great community! 😉

Introducing Our Newest CDF Ambassador – Steven Terrana

By Staff

Heyyo!

My name is Steven Terrana. It’s great to be here! I’m currently a DevSecOps & Platforms Engineer at Booz Allen Hamilton.

My day-to-day largely consists of working with teams to implement large-scale CI/CD pipelines using Jenkins, implementing DevSecOps principles, and adopting all the buzzwords :).

Through experiencing all of the pains associated with large-scale pipeline development, I developed the Jenkins Templating Engine: a Jenkins plugin that lets users stop copying and pasting Jenkinsfiles by creating tool-agnostic pipeline templates that can be shared across teams, enabling organizational governance while optimizing for developer autonomy. If that sounds cool, you can check out the Jenkins Online Meetup.

You can probably find me somewhere in the Jenkins community. I help drive the Pipeline Authoring SIG and contribute to community plugins and pipeline documentation where I can.

I’m excited to be part of an organization like the CDF that’s helping to establish best practices, propel the adoption of continuous delivery tooling, and facilitate interoperability across emerging technologies to streamline software delivery.

Oh, yeah, and I have two cats and a turtle. Meet James Bond, GG, and Sheldon:

Follow me on Twitter @steven_terrana

From Harness – Automating Enterprise Governance within Delivery Pipelines

By Member

Originally posted on the Harness.io blog by Tiffany Jachja (@tiffanyjachja)

In an organization where developers are continuously pushing code to production, managing risks can be difficult. In Measuring and Managing Information Risk: A FAIR Approach, Jack Freund and Jack Jones describe governance as a cost-effective approach to “govern the organization’s risk landscape.” You want to ensure your organization actively understands and manages risk, especially in heavily regulated industries expected to comply with governing authorities and standards (see compliance, or a blog post on measuring compliance).

Governance, risk management, and compliance (GRC) is an umbrella term covering an organization’s approach across these three practices: governance, risk management, and compliance. Freund and Jones describe risk and compliance as follows:

“This [the risk] objective is all about making better-informed risk decisions, which boils down to three things: (1) identifying ‘risks,’ (2) effectively rating and prioritizing ‘risks,’ and (3) making decisions about how to mitigate ‘risks’ that are significant enough to warrant mitigation.” 

“Of the three objectives, compliance management is the simplest—at least on the surface. On the surface, compliance is simply a matter of identifying the relevant expectations (e.g., requirements defined by Basel, Payment Card Industry (PCI), SOX, etc.), documenting and reporting on how the organization is (or is not) complying with those expectations, and tracking and reporting on activities to close any gaps.”

So if GRC is about aligning an organization to managing risks, what role do developers play?

From code commit to production

In previous blog posts, we discussed the importance of taking a systematic approach to developing software delivery processes. We shared practices like Value Stream Mapping to give organizations the tools to better understand their value streams and to accelerate their DevOps journey. These DevOps practices indicate that every software delivery stakeholder is responsible for the value they deliver. But on the flip side, they also indicate that stakeholders are responsible for any risks they create or introduce.

The DevOps Automated Governance Reference Architecture, found here, shares how to further adopt a systems approach to delivery.

By looking at each stage in your delivery pipeline, you can define the inputs, outputs, actors, actions, risks, and control points related to that stage. 

The essential part of governance is that developers are aware of the risks at each stage. The reference architecture paper shares some of the common risks associated with code commits, such as unapproved changes and PII or credentials in source code. Likewise, for deploying to production, you can have risks such as low-quality code in production, lack of quality gates, and unexpected system behaviors in production. 

These risks define the control points that help manage them. If you face the risk of unapproved changes, introduce a change approval process. Likewise, you can control risks through secrets management, application quality analysis, quality gate evaluation, and enforced deployment strategies.
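
To make this concrete, here is a minimal sketch, in Python, of what two of those control points might look like as automated checks in a pipeline. The function names, data shapes, and thresholds are hypothetical illustrations, not taken from the reference architecture.

```python
# Hypothetical sketch of two control points as automated pipeline checks.
# Names, data shapes, and thresholds are illustrative only.

def change_is_approved(change: dict) -> bool:
    """Control point for the 'unapproved changes' risk: require a recorded approval."""
    return bool(change.get("approved_by"))

def quality_gate_passes(metrics: dict,
                        min_coverage: float = 80.0,
                        max_critical_vulns: int = 0) -> bool:
    """Control point for the 'low-quality code in production' risk."""
    return (metrics.get("coverage", 0.0) >= min_coverage
            and metrics.get("critical_vulnerabilities", 0) <= max_critical_vulns)

if __name__ == "__main__":
    change = {"id": "CHG-123", "approved_by": "release-manager"}
    metrics = {"coverage": 84.2, "critical_vulnerabilities": 0}
    if change_is_approved(change) and quality_gate_passes(metrics):
        print("Control points satisfied; promotion may proceed.")
    else:
        print("Control point failed; promotion blocked.")
```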

Everyone involved in the process from code commits to production is responsible for mitigating risks. 

The pieces of Enterprise Governance

Now let’s discuss the components of a governance process for a cloud environment. The DevOps Automated Governance Reference Architecture, found here, shares an approach to navigating your automated governance journey. Many of the concepts discussed here are explained in detail in that reference paper.

Notes are metadata definitions. Occurrences are generated for each artifact or resource that needs a note. As an example, a note could provide details of a specific vulnerability, such as its impact, name, and status; I would then generate an occurrence for every container image with that security vulnerability. Similarly, I could have a note that defines a specific application deployment, and as I promote the deployment across different environments, I would generate an occurrence for each. There is a one-to-many relationship between notes and occurrences.
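
As a rough illustration of that one-to-many relationship (a hedged sketch in Python, not the actual metadata API; the names and fields are made up):

```python
# Illustrative sketch: modeling the one-to-many relationship between a
# note and its occurrences with plain dataclasses.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Occurrence:
    """A sighting of a note on a concrete artifact or resource."""
    note_name: str
    resource: str  # e.g. a container image URI

@dataclass
class Note:
    """Metadata definition, e.g. a specific vulnerability."""
    name: str
    description: str
    occurrences: List[Occurrence] = field(default_factory=list)

# One note describing a vulnerability...
cve = Note(name="CVE-2020-XXXX", description="Example vulnerability note")

# ...and one occurrence per affected container image.
for image in ["registry.example.com/app:1.0", "registry.example.com/app:1.1"]:
    cve.occurrences.append(Occurrence(note_name=cve.name, resource=image))

print(f"{cve.name} affects {len(cve.occurrences)} images")  # one note, many occurrences
```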

An attestation is a particular type of note that represents a verification you’ve satisfied within your governance process. Attestations are tied to attestors, which hold the authority to verify a control point within your governance process. For example, determining that you’ve passed a code review or a unit test is an attestation. Each attestation represents a control point within your governance process.

A binary authorization policy uses a list of attestors to represent your governance as code. A binary auth policy acts as a series of gates so that you cannot get to the next stage of your software delivery before getting an attestation from each attestor. Therefore, it’s common practice to turn on binary authorization (BinAuthz) in your Kubernetes environment to ensure you are governing changes and deployments. An Admission Controller in your Google Kubernetes Engine (GKE) cluster performs the attestation checks when you interact with your environment. Here’s more information on how BinAuthz works for GKE.
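
As a hedged sketch of the decision such an admission controller makes (illustrative Python, not the real BinAuthz API, and the attestor names are invented), the gate boils down to “every required attestor has attested to this image”:

```python
# Illustrative sketch of a BinAuthz-style admission decision: an image is
# admitted only if every attestor required by the policy has attested to it.
# Attestor names and data shapes are hypothetical, not the real GKE API.
from typing import Dict, Set

REQUIRED_ATTESTORS = {"code-review", "unit-tests", "vulnerability-scan"}

def is_admitted(image: str, attestations: Dict[str, Set[str]]) -> bool:
    """Admit the image only if all required attestors have attested to it."""
    return REQUIRED_ATTESTORS <= attestations.get(image, set())

attestations = {
    "registry.example.com/app:1.1": {"code-review", "unit-tests", "vulnerability-scan"},
    "registry.example.com/app:1.2": {"code-review"},  # missing attestations
}

print(is_admitted("registry.example.com/app:1.1", attestations))  # True  -> deploy proceeds
print(is_admitted("registry.example.com/app:1.2", attestations))  # False -> deployment blocked
```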

If you’d like to learn more about designing control points for your governance process, Capital One also shared their pipeline design through a concept called “16 Gates” in a blog post called “Focusing on the DevOps Pipeline.”

Harnessing your governance process

Governance processes require automation to accelerate software delivery; otherwise, they can harm your velocity and time to market. A popular topic to emerge in the past year is automated pipeline governance, which gives enterprises the ability to attest to the integrity of assets in a delivery pipeline. Pipeline governance goes beyond traditional CI/CD, where developers simply automate delivery without truly mitigating risk. Continuous Integration and Continuous Delivery platforms can help heavily regulated industries manage their governance processes when developers and operations understand the organization’s risks.

From Harness – Value-Stream Mapping (VSM): Your Software Delivery

By Member

Originally posted on the Harness.io blog by Tiffany Jachja (@tiffanyjachja)

Value-stream mapping (VSM) is a lean manufacturing technique popularized in the 90s after its successful application with companies like Toyota. Since then, DevOps practitioners have shared these processes as they apply to software development. DevOps literature, such as The DevOps Handbook, suggests that value stream maps inform the most critical areas of application for DevOps practices and technology. A VSM is also known as a material and information flow map. Using this map, you can identify areas of improvement and map your current state to your future state. If you’d like an in-depth look at VSM, I recommend “Value Stream Mapping: How to Visualize Work and Align Leadership for Organizational Transformation” by Karen Martin and Mike Osterling. This blog post summarizes the key concepts of VSM and shares how you can use VSM within your IT organization.

Value Stream Map Symbols and Components

In a VSM, three types of flow deliver a product or service to a consumer: the flow of information, the flow of materials, and the flow of time. The flow of information goes from right to left in a VSM; in contrast, the flow of materials and time goes from left to right. The illustration below shows the major components of a VSM.

Within the information section, the factory symbol denotes the customer, supplier, and any other entities. All VSMs have a customer and supplier specified. A truck indicates the frequency of delivery. Within the materials section, the process boxes provide additional space to include information about the resources needed to complete a particular process. It can be advantageous to also track inventory between processes; this is often denoted by a triangle. We also have arrows, which can denote nonlinear/sequential relationships between processes. Lastly, a VSM has a flow of time to showcase lead times in the flow of materials and information.

These are the major components of a VSM. There may be additional VSM components and symbols that are helpful for more complex value stream exercises. This resource explains those VSM components in more detail.

Creating a Value Stream Map

Example Value Stream Map (edited from a Lucid Chart Template found here).

Here are the steps to creating a value stream map:

  1. Select the product or service you’d like to value stream map. Start with the most important / highest value proposition. Here I have Simple Service at the top center of my VSM.
  2. Start with your customers. The customer should drive the entire stream of value. In the example, a customer sends all their requests to a simple service.
  3. Define your start and end processes. Scope your VSM with a start and end trigger. Here I have prioritizing reported bugs as my starting event. Likewise, deploying the feature is my final event.
  4. Include information flow. Recall that information flows from right to left in a VSM. In my example, I have “Feature Requests” as the trigger for my start process. Once code has made it to production, I have “Ship the Service” as my final flow of information.
  5. Fill in the remaining processes. In my example, I have the components of a standard software lifecycle: develop, build, test, and deploy. Here I could also add in counts of inventory or average quantity between processes.
  6. Gather process data. I am including who is involved in each process along with the tools they use. Be objective in this step and the following one. The goal is to capture the current state. 
  7. Create the timeline. Here I am mapping current metrics on each process. There are three standard metrics used in a VSM: lead time, process time, and percent complete & accurate. Lead time is the elapsed time from initiating the phase to completing it. Process time is the amount of time it takes to handle the request itself. Percent Complete & Accurate (%C&A or PCA) is the percentage of occurrences where the finished output was correct according to the requirements of the customer of the process. A small sketch of the timeline math follows this list.
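
As a quick illustration of that timeline math (a hedged sketch: the process names come from the example above, the numbers are invented, and the two derived figures, activity ratio and rolled %C&A, are common companions to these metrics rather than something from the example itself):

```python
# Hypothetical numbers for the example value stream; only the arithmetic is the point.
processes = [
    # (name, lead_time_hours, process_time_hours, percent_complete_accurate)
    ("Develop", 40, 16, 0.80),
    ("Build",    2,  1, 0.95),
    ("Test",    24,  8, 0.90),
    ("Deploy",   4,  1, 0.99),
]

total_lead = sum(lead for _, lead, _, _ in processes)
total_process = sum(proc for _, _, proc, _ in processes)
activity_ratio = total_process / total_lead  # share of elapsed time that is actual work

rolled_pca = 1.0
for _, _, _, pca in processes:
    rolled_pca *= pca  # chance that work passes through every step correctly the first time

print(f"Total lead time: {total_lead} h, total process time: {total_process} h")
print(f"Activity ratio: {activity_ratio:.0%}, rolled %C&A: {rolled_pca:.0%}")
```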

Evaluating a Value Stream Map

Now that we have a VSM, let’s discuss how to analyze it. If you are not adding value to your end customer, you are adding to the cost of production. Value stream management practices encourage organizations to focus on the flow of value.

Organizations will look at a Value Stream Map to consider the performance of entire systems. Starting with a single stream of value can help ease other parts of your organization into adopting new ideas and practices. 

Flow refers to the flow of work in your value stream. Work can come from feature requests, requirements, defects, or support tickets. The goal is to ensure that the value stream is always moving forward. Some things that can affect flow include:

  • Changing priorities and task switching
  • Lack of visibility around problems and processes
  • Long development cycles
  • Not getting code to higher environments

The book Team Topologies, by Matthew Skelton and Manuel Pais, shares a few more obstacles to flow: https://teamtopologies.com/

Here is a guide to improving the flow of work within a value stream. It involves creating a dedicated transformation team that’s held “accountable for achieving a clearly defined, measurable, system-level result (e.g., reduce the deployment lead time from “code committed into version control to successfully running in production” by 50%).” This dedicated team should have the resources and freedom to utilize the different DevOps practices to achieve the results.

There are also eight types of waste defined through the lean manufacturing movement. The diagram below describes these types of waste.

The 8 Types of Waste. (Image source: https://theleanway.net/The-8-Wastes-of-Lean)

If we’d like to minimize waste in our software delivery processes, we need to consider how to manage these findings.

Here are some more ways to look at a Value Stream Map.

  1. Optimize Value Streams through Process: Look to see whether your process has the following characteristics: valuable (does the customer need this?), capable (quality results), available (minimal downtime), adequate (meets demand), and flexible (can be switched or configured).
  2. Focus on flow: Is flow continuous or stagnant? Consider your quantity levels. In the earlier VSM example, the flow could stagnate if there are no feature requests. Likewise, if you have too many features developed, your QA process may need more scaling. Identify areas where your value stream is suffering if stagnation exists.
  3. Analyze the flow of information: Is it a push or pull model? In push models, the supplier supplies customers with features and deliverables. In a pull model, the customer is requesting those features. It’s important to have the right balance to ensure you’re not overtaxing specific job functions and processes in your value stream.

A well-executed VSM workshop gives those in the room the opportunity to champion change. Maybe you’re using an outdated tool for a certain process. A VSM workshop invites critical stakeholders to come and challenge areas of your value stream.

Harness Your Value Stream

Every organization has a process for delivering its product or service to its customers. Value-stream mapping allows you to optimize your flow of materials and information by lowering costs and improving value adds. This blog post shares some tips for navigating your DevOps journey through value stream management techniques such as Value Stream Mapping.