

Linux Foundation Training and the Continuous Delivery Foundation launched a free training course, LFS167x – Introduction to Jenkins, on the edX platform on June 4. Since then, the course has already enrolled 5,000 students, making it one of the fastest-growing courses we have ever released. This is great news for growing the Jenkins and broader DevOps communities.
The course covers the fundamentals of continuous integration/continuous delivery (CI/CD), and how they help transform the overall software delivery process. It is most useful for roles such as DevOps engineers, software developers and architects, and professionals focused on site reliability and quality assurance, though anyone involved in the software delivery process will benefit. It includes a detailed introduction to the Jenkins automation server, and also provides instructions on how to set up/use Jenkins for CI/CD workflows.
Upon completion, enrollees will have a solid understanding of the role that Jenkins plays in the software development lifecycle, how to install a Jenkins server, how to build software with it, how to manage third party integrations/plugins and how to scale and secure Jenkins. They will also get a glimpse of what they can do to further enhance their CI/CD skills.
Join the more than 5,000 individuals who have started improving their software delivery processes – enroll for free today!
Hi CI/CD Fans,
I’m Romnick Acabado, a DevOps Leader and IT Manager from Lingaro Philippines. I use my strengths, passion, and purpose to explore, learn, and share DevOps practices that improve the lives of everyone involved in the flow from business idea to a high-quality user experience.
In short, I would like to help to improve the lives of people all over the world through modern IT.
This year marks the tenth year of my professional IT career. Learning DevOps practices has become my priority, since we call our team a “DevOps Team.”
Everything starts with awareness: you can only know where your team stands, and validate your experience and challenges, when you are exposed to the external community.
When I was a student, I was very active in student societies, contributing additional value to our community. Because of my own doubts, I never imagined I could continue that in the corporate world.
In 2019, I worked hard on improving my confidence, and I believe the actions I took helped me believe in myself and leverage my strengths (analytical, responsibility, relator, communication, and learning) and competencies while working toward my potential. So in the latter part of 2019, when I found the Ambassador Program of the DevOps Institute, I signed up. Forest Jing and Dheeraj Nayal were very accommodating in assisting me through the process. I have long been a fan of the 70-20-10 learning model, in which about 20% of learning a skill comes from your relationships and connections to experts, so I decided it was time to stop limiting my connections to those within my company. Luckily, I was selected and successfully joined the program.
I have focused on improving myself and on continuously being an asset wherever I am engaged, and I am happy that DOI’s Chief Ambassador has recognized it:
As I continue to build my personal brand and represent our organization in DevOps, I have embraced my role as one of the leaders in the Philippines supporting and promoting the DevOps movement.
I created a website where I share my DevOps exploration and journey. I also maintain Facebook and Twitter accounts.
You can also join my Meetup group DevOps SKIL Up PH, where I want to build a community of DevOps practitioners and leaders in our country to help each other through upskilling.
I’m joining the CDF as an ambassador because I believe that what the CDF and DOI advocate intersect; I even see my co-ambassadors in both communities. I am confident that through the CDF I will be able to learn more and share my knowledge about CI/CD tools, which are essential to DevOps design.
I also see how my vision, mission, and values (creativity, being solution-oriented, and collaboration) align with these communities.
Being a lifelong learner, I know I will continue to learn from the experts in this community, and I will be able to give value in return through my time and expertise. It’s always fun to be surrounded by like-minded DevOps professionals.
Over the past months, I have joined summits and panel discussions in our DevOps communities, including the DevOps Summit in April 2020 and the DevOps India Summit. I’ll also be speaking at the upcoming Virtual DevOps Summit on November 2-3.
You can connect with me through my LinkedIn account for future collaboration on DevOps, CI/CD, or analytics solutions.
Good luck on your DevOps journey, and see you at future CDF events. It’s my pleasure to represent and be part of this great community! 😉
Heyyo!
My name is Steven Terrana. It’s great to be here! I’m currently a DevSecOps & Platforms Engineer at Booz Allen Hamilton.
My day to day largely consists of working with teams to implement large-scale CI/CD pipelines using Jenkins, implementing DevSecOps principles, and adopting all the buzzwords :).
Through experiencing all of the pains associated with large-scale pipeline development, I developed the Jenkins Templating Engine: a Jenkins plugin that lets users stop copying and pasting Jenkinsfiles by creating tool-agnostic pipeline templates that can be shared across teams, enabling organizational governance while optimizing for developer autonomy. If that sounds cool, you can check out the Jenkins Online Meetup.
You can probably find me somewhere in the Jenkins community. I help drive the Pipeline Authoring SIG and contribute to community plugins and pipeline documentation where I can.
I’m excited to be a part of an organization in CDF that’s helping to establish best practices, propel the adoption of continuous delivery tooling, and facilitate interoperability across emerging technologies to streamline software delivery.
Oh, yeah, and I have two cats and a turtle. Meet James Bond, GG, and Sheldon:
Follow me on Twitter @steven_terrana
Originally posted on the Harness.io blog by Tiffany Jachja (@tiffanyjachja)
In an organization where developers are continuously pushing code to production, managing risks can be difficult. In Measuring and Managing Information Risk: A FAIR Approach, Jack Freund and Jack Jones describe governance as a cost-effective approach to “govern the organization’s risk landscape.” You want to ensure your organization actively understands and manages risk, especially in heavily regulated industries that are expected to comply with governing authorities and standards (see also the blog post on measuring compliance).
Governance, risk management, and compliance (GRC) is an umbrella term covering an organization’s approach to these three practices. Freund and Jones describe the risk management and compliance objectives as follows:
“This [the risk] objective is all about making better-informed risk decisions, which boils down to three things: (1) identifying ‘risks,’ (2) effectively rating and prioritizing ‘risks,’ and (3) making decisions about how to mitigate ‘risks’ that are significant enough to warrant mitigation.”
“Of the three objectives, compliance management is the simplest—at least on the surface. On the surface, compliance is simply a matter of identifying the relevant expectations (e.g., requirements defined by Basel, Payment Card Industry (PCI), SOX, etc.), documenting and reporting on how the organization is (or is not) complying with those expectations, and tracking and reporting on activities to close any gaps.”
So if GRC is about aligning an organization to managing risks, what role do developers play?
We discussed in the previous blog posts the importance of taking a systematic approach to developing software delivery processes. We shared practices like Value Stream Mapping, to give organizations the tools to better understand their value streams and to accelerate their DevOps journey. These DevOps practices indicate that every software delivery stakeholder is responsible for the value they deliver. But on the flip side, they also indicate that stakeholders are responsible for any risks that they create or introduce.
The DevOps Automated Governance Reference Architecture shares how to further adopt a systems approach to delivery.
By looking at each stage in your delivery pipeline, you can define the inputs, outputs, actors, actions, risks, and control points related to that stage.
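As an illustrative sketch (not part of the reference architecture itself), each pipeline stage could be modeled as a simple record so that risks and control points are captured explicitly alongside the stage's mechanics. All names below are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class PipelineStage:
    """One stage of a delivery pipeline, with its governance metadata."""
    name: str
    inputs: list = field(default_factory=list)
    outputs: list = field(default_factory=list)
    actors: list = field(default_factory=list)
    actions: list = field(default_factory=list)
    risks: list = field(default_factory=list)
    control_points: list = field(default_factory=list)

# Example: a "code commit" stage and the risks/controls tied to it.
commit_stage = PipelineStage(
    name="code-commit",
    inputs=["feature branch"],
    outputs=["merged commit"],
    actors=["developer", "reviewer"],
    actions=["push", "review", "merge"],
    risks=["unapproved changes", "credentials in source code"],
    control_points=["peer review", "secret scanning"],
)

print(commit_stage.risks)
```

Writing the stages down this way makes the risk-to-control mapping reviewable artifacts rather than tribal knowledge.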
The essential part of governance is that developers are aware of the risks at each stage. The reference architecture paper shares some of the common risks associated with code commits, such as unapproved changes and PII or credentials in source code. Likewise, for deploying to production, you can have risks such as low-quality code in production, lack of quality gates, and unexpected system behaviors in production.
These risks help define areas of control points that help manage that risk. If you face the risk of unapproved changes, introduce a change approval process. Likewise, you can control risks through secrets management, application quality analysis, quality gate evaluation, and enforced deployment strategies.
Everyone involved in the process from code commits to production is responsible for mitigating risks.
Now let’s discuss the components of a governance process for a cloud environment. The DevOps Automated Governance Reference Architecture shares an approach to navigating your automated governance journey. Many of the concepts discussed here are explained in detail in that reference paper.
Notes are metadata definitions. Occurrences are generated for each artifact or resource covered by a note. For example, a note could provide the details of a specific vulnerability, such as its impact, name, and status; I would then generate an occurrence for every container image with that security vulnerability. Similarly, I could have a note that defines a specific application deployment; as I promote the deployment across different environments, I would generate an occurrence for each. There is a one-to-many relationship between notes and occurrences.
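A minimal sketch of the one-to-many relationship between notes and occurrences (names are illustrative and loosely inspired by this data model, not an actual API):

```python
from dataclasses import dataclass, field

@dataclass
class Note:
    """A metadata definition, e.g. a known vulnerability."""
    note_id: str
    description: str
    occurrences: list = field(default_factory=list)

    def attach(self, resource_uri: str) -> dict:
        """Record an occurrence: this resource exhibits the note."""
        occ = {"note_id": self.note_id, "resource": resource_uri}
        self.occurrences.append(occ)
        return occ

# One vulnerability note, many occurrences (one per affected image).
cve = Note("CVE-2020-0001", "sample vulnerability")
cve.attach("registry.example.com/app@sha256:aaa")
cve.attach("registry.example.com/worker@sha256:bbb")

print(len(cve.occurrences))  # → 2
```

The note is defined once; each affected artifact gets its own occurrence pointing back at it.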
An attestation is a particular type of note that represents a verification you’ve satisfied in your governance process. Attestations are tied to attestors, which hold the authority to verify a control point. For example, determining that you’ve passed a code review or a unit test is an attestation. Each attestation represents a control point within your governance process.
A binary authorization policy uses a list of attestors to represent your governance as code. The policy acts as a series of gates: you cannot get to the next stage of your software delivery before getting an attestation from each attestor. Therefore, it’s common practice to turn on binary authorization (BinAuthz) in your Kubernetes environment to ensure you are governing changes and deployments. An Admission Controller in Google Kubernetes Engine (GKE) performs the attestation checks when you interact with your environment. Here’s more information on how BinAuthz works for GKE.
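As a hedged sketch of the gate idea (a simplification, not the real BinAuthz admission logic, and with hypothetical attestor names), a deployment is admitted only when every required attestor has verified the image:

```python
# Governance as code: the required gates, in order.
REQUIRED_ATTESTORS = ["code-review", "unit-tests", "vulnerability-scan"]

def admit(image: str, attestations: dict) -> bool:
    """Admit the image only if every required attestor verified it."""
    verified = attestations.get(image, {})
    missing = [a for a in REQUIRED_ATTESTORS if not verified.get(a, False)]
    if missing:
        print(f"deploy of {image} blocked; missing attestations: {missing}")
        return False
    return True

attestations = {
    "app:1.2.3": {"code-review": True, "unit-tests": True,
                  "vulnerability-scan": True},
    "app:1.2.4": {"code-review": True, "unit-tests": False},
}

print(admit("app:1.2.3", attestations))  # every gate passed
print(admit("app:1.2.4", attestations))  # blocked at unit-tests
```

In the real system the admission controller checks cryptographically signed attestations rather than booleans, but the gating structure is the same.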
If you’d like to learn more about designing control points for your governance process, Capital One also shared their pipeline design through a concept called “16 Gates” in a blog post called “Focusing on the DevOps Pipeline.”
Governance processes require automation to accelerate software delivery; otherwise, they can harm your velocity and time to market. A popular topic to emerge in the past year is automated pipeline governance, which gives enterprises the ability to attest to the integrity of assets in a delivery pipeline. Pipeline governance goes beyond traditional CI/CD, where developers simply automate delivery without truly mitigating risk. Continuous integration and continuous delivery platforms can help heavily regulated industries manage their governance processes when developers and operations understand the organization’s risks.