
Tekton Pipelines for CD Interoperability


CDF Newsletter – May 2020 Article

Contributed By Eric Sorenson

Tekton is a project that evolved from an internal Google tool that used Knative to build and deploy software. In 2018, it was spun out as an independent project and donated to the Continuous Delivery Foundation.

The core component, Tekton Pipelines, runs as a controller in a Kubernetes cluster. It registers several custom resource definitions (CRDs) with the Kubernetes API server, representing the basic Tekton objects, so the cluster knows to delegate requests containing those objects to Tekton. These primitives are fundamental to the way Tekton works. Tekton’s building-block approach starts with the smallest atom of work, the Step, aggregates Steps together in Tasks, and aggregates Tasks together in Pipelines.

If the nomenclature here feels confusing, don’t feel bad — it is complicated! Each tool in the space uses slightly different terms; this is something we’re working on standardizing in the CDF Interoperability SIG. We’d love your input – here’s how to participate! Tekton’s usage of these terms is clarified in the sig-interop Vocabulary definitions doc:

* *Step*: a specific function to perform.

* *Task*: a collection of sequential steps you would want to run as part of your continuous integration flow. A Task runs inside a pod on your cluster.

* *ClusterTask*: Similar to Task, but with a cluster scope.

* *Pipeline*: stateless, reusable, parameterized collection of tasks. Tasks are linked together in a Pipeline, which describes the end-to-end deployment for an application.

* *PipelineRun*: an instantiation of a Pipeline definition, filling in the Pipeline’s parameters with concrete values.

* *Pipeline Resource*: objects that will be input to or output from tasks.

* *Trigger*: Tekton Triggers is a Kubernetes Custom Resource Definition (CRD) controller that allows you to extract information from event payloads (a “trigger”) to create Kubernetes resources.

A notable omission from the CRD list is the Step, which doesn’t have its own CRD because it’s the smallest unit of execution and is always contained inside a Task. The Conditions and Dashboard Extension CRDs are still optional and experimental — but very exciting!
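
To make the nesting concrete, here is a minimal sketch of these primitives as Kubernetes resources: a Task containing one Step, and a Pipeline referencing that Task. The names are hypothetical and the YAML is a sketch rather than production configuration:

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: build-and-test        # hypothetical name
spec:
  steps:
    - name: run-tests         # a Step: the smallest atom of work
      image: golang:1.14
      script: |
        go test ./...
---
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: ci-pipeline           # hypothetical name
spec:
  tasks:                      # a Pipeline aggregates Tasks
    - name: test
      taskRef:
        name: build-and-test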

Tekton’s approach is particularly interesting from a tool interoperability standpoint. By focusing on these building blocks and the concrete representation of them as declarative configuration, Tekton creates a standard platform for CD in the same way that Kubernetes provides a platform for application runtimes. This allows user-facing tools to build on the platform rather than reinventing these primitives. Several projects have already taken up this approach:

* Jenkins X uses Tekton as its execution engine. It’s been an option for a while now, but recently the project announced it was moving to using Tekton exclusively. Jenkins X provides pipeline definitions and gitops workflows that are tailored for cloud-native CD.

* Kabanero is a project that enables teams to develop and deploy applications on Kubernetes, so architects can provide pre-approved application stacks for developers to work from. It uses Tekton Pipelines and several associated projects like Tekton Dashboard and Triggers; indeed the developers building the Dashboard are largely working on Kabanero and the IBM Cloud Devops Pipeline product.

* Relay by Puppet is a hosted service that uses Tekton as the execution engine for event-triggered devops and deployment workflows. (Full disclosure, this is the product I am working on!) It provides a YAML dialect for building workflows that can be triggered by external events, via API, or manually, to automate tasks that need to stitch together different tools and services.

* TriggerMesh has integrated Tekton Pipelines into its TriggerMesh Cloud project and is working on a tool called Aktion to translate GitHub Actions into Tekton Pipelines.

* There are more, too! Check out the Tekton Friends repo for a longer list of projects and end users building on Tekton.

As exciting as this activity is, I think it’s important to note there’s still a lot of work to be done. There’s a distinct difference between two projects both using Tekton as a common upstream platform and achieving interoperability between them! It’s a big problem and it’s easy to get overwhelmed with the magnitude of the whole thing. One of my earliest lessons when I moved from SRE into product management was: focus first on solving the pain points which end users feel most acutely. That can be some combination of pervasiveness (what percent of the overall user base feels it?) and severity (how bad is each individual incident?) – ideally, fix the thing which is worst on both axes! From an end user’s standpoint, CD tools have a pretty steep learning curve with a bunch of pitfalls. A sampling of these severe-and-pervasive pitfalls I’ve heard from our users as we’ve been building Relay:

* How do I wrap my head around the terminology and technology so I can get started?

* How do I integrate the parts of the build/test/deploy toolchain my organization needs to continue using?

* How do I operate (upgrade, monitor, troubleshoot) the tool once it’s up and running?

Interoperability isn’t a cure-all, but there are definitely areas where it could work like a soothing balm on all of this pain. Industry-standard terminology, or at a minimum an authoritative Rosetta Stone for CD, could help. At the moment, there are still pockets of debate on whether the “D” stands for Deployment or Delivery! (It’s “Delivery”, folks – when you mean “Deployment” you have to spell it out.)

Going deeper, it’d be hugely helpful to help users integrate the tools they’re already using into a new framework. A wide ecosystem of steps that could be used by any of the containerized CD tools – not just those based on Tekton but, for example, Spinnaker and Keptn as well – would have a number of benefits. For end users, it would increase the amount of content available “out of the box”, meaning they would have less work to do to integrate the tools and services they need. Ideally, no end user should have to create a step from scratch, because there’s a vast, easily discoverable library of things that accomplish the job at hand. There’s also a benefit to maintainers of services and tools that end users want, like Kaniko, Gradle, and the cloud services, who otherwise have to build an integration with each execution framework themselves or rely on the community to do it. Building and maintaining one reusable implementation would reduce the maintenance burden and allow them to provide higher quality.

To put on my Tekton advocate hat for a moment, its well-defined container contract makes it easy to use general-purpose containers in your pipeline. If you want to take advantage of more specialized features the framework provides, the Tekton Catalog has a number of high-quality examples to build from. There are improvements on the way to aid the discoverability and reuse aspects of the problem, such as the exciting new Tekton Hub donated by Red Hat.
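
For instance, under that contract any general-purpose image can serve as a Step, with no Tekton-specific packaging. A minimal sketch, with hypothetical names and a stock Gradle image:

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: gradle-build          # hypothetical name
spec:
  workspaces:
    - name: source            # where the pipeline provides the checked-out code
  steps:
    - name: build
      image: gradle:6-jdk11   # stock image from Docker Hub, nothing Tekton-specific
      workingDir: $(workspaces.source.path)
      script: |
        gradle build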

The operability concerns are a real problem for CD pipeline tools, too. Although CD is usually associated with development, in many organizations the tool itself is considered a production service, because if there are problems committing, building, testing, and shipping code, the engineering organization isn’t delivering value. Troubleshooting byzantine failures in complex CI/CD pipelines is a specialized discipline requiring skills that span Quality Engineering, SRE, and Development. The more resiliently the CD tools are architected, and the more standard their interfaces for reporting availability and performance metrics, the easier that troubleshooting becomes.

Again, to address these from Tekton’s perspective, a huge benefit of running on Kubernetes is that the Tekton services that run in the cluster can take advantage of all the powerful k8s operability features. So fundamental capabilities that are highly valuable to operators and troubleshooters, like log aggregation, in-place upgrades, error reporting, and scale-out, all ride on top of the Kubernetes infrastructure. It’s not “for free” of course; nothing in distributed systems is ever truly “for free”, and if anyone tries to tell you otherwise, the thing they’re selling you is probably *very* expensive. But it does mean that general-purpose Kubernetes skills and tooling go a long way towards operating Tekton at scale, rather than having to relearn or reimplement them at the application layer.

In conclusion, I’m excited that the interoperability conversation is well underway at the CDF. There’s a long way to go, but the amount of activity and progress in the space is very encouraging. If you’re interested in pitching in to discuss and solve these kinds of problems, please feel free to join the #sig-interoperability channel on the CDF Slack or check out the contribution information.

Introducing our newest CDF Ambassador – Yun “Forest” Jing (景韵)


Hi CI/CD fans,

I’m Yun “Forest” Jing (景韵), a DevOps practitioner from China. I believe IT changes the world, and that DevOps changes IT.

You can read the blog post First Online CI/CD Meetup in China Gets Over 5,000 Attendees. Sharing knowledge and experiences in the community is very inspiring, so I co-founded a local community called the DevOps Times Community, which has almost 30,000 subscribers on its WeChat official account.

I’m also a Jenkins Ambassador and a DevOps Institute Ambassador. I organized the Jenkins Area Meetup and Jenkins User Conference China for three years. It was an honor to win the Jenkins community’s Most Valuable Advocate award and to become a Jenkins Ambassador in 2018.

Xuefeng “BC” Shi, Tao Lei, and me at DevOps World 2018 San Francisco

Jenkins Ambassador Teams

My story with DevOps started with an email from my boss in 2014. He said, “make a study of DevOps.” And so it began.

I found an internal community in our company where architects, developers, testers, and ops could meet together to understand and learn from each other.

Logo and Slogan of Internal DevOps Community

And I didn’t forget the work my boss assigned. I led, from start to release, an internal DevOps Guide to help all teams practice DevOps.

The architecture of DevOps Guide in Chinese

2017 was a memorable year for me. DevOpsDays Beijing 2017 lit up DevOps in China. Lots of companies, such as Alibaba, Tencent, Baidu, and Huawei, shared their experiences with DevOps.

Patrick Debois and Me (Master and Newbie)

Starting in 2017, I also became a full-time member of the community. I’ve co-organized a local DevOps event, the DevOps International Summit (DOIS), and the Jenkins User Conference in Beijing, Shanghai, and Shenzhen to share Agile, CI/CD, AIOps, and DevOps practices and experiences in China.

I’ve also joined the experts group contributing to the DevOps Capability Maturity Model organized by CAICT. Lots of companies have learned how to practice DevOps according to this model.

I’ve not only focused on China but also built a communication bridge with the global DevOps community and companies. For this, Kohsuke Kawaguchi and Alyssa Tong have really helped a lot.

JUCC Shanghai 2017

Alan Shimel and Jayne Groll have also inspired me to introduce more experiences from China to the world, and from the world to China. So I’m a DevOps Institute Ambassador right now. It is a great team helping to share DevOps with the world.

From Jenkins – GitHub App authentication support released


Originally posted on the Jenkins blog by Tim Jacomb

I’m excited to announce support for authenticating as a GitHub App in Jenkins. This has been a long-awaited feature for many users.

It has been released in GitHub Branch Source 2.7.0-beta1 which is available in the Jenkins experimental update center.

Authenticating as a GitHub app brings many benefits:

  • Larger rate limits – The rate limit for a GitHub App scales with your organization size, whereas a user-based token has a limit of 5,000 regardless of how many repositories you have.
  • User-independent authentication – Each GitHub App has its own user-independent authentication. No more need for ‘bot’ users or figuring out who should be the owner of 2FA or OAuth tokens.
  • Improved security and tighter permissions – GitHub Apps offer much finer-grained permissions compared to a service user and its personal access tokens. This lets the Jenkins GitHub App require a much smaller set of privileges to run properly.
  • Access to the GitHub Checks API – GitHub Apps can access the GitHub Checks API to create check runs and check suites from Jenkins jobs and provide detailed feedback on commits as well as code annotations.

Getting started

Install the GitHub Branch Source plugin and make sure the version is at least 2.7.0-beta1. Installation guidelines for beta releases are available here.

Configuring the GitHub Organization Folder

Follow the GitHub App Authentication setup guide. These instructions are also linked from the plugin’s README on GitHub.

Once you’ve finished setting it up, Jenkins will validate your credential and you should see your new rate limit. Here’s an example on a large org:

GitHub app rate limit

How do I get an API token in my pipeline?

In addition to using GitHub App authentication for Multibranch Pipelines, you can also use it directly in your Pipelines. You can access the Bearer token for the GitHub API simply by loading a ‘Username/Password’ credential as usual; the plugin handles authenticating with GitHub in the background.

This could be used to call additional GitHub API endpoints from your pipeline, such as the Deployments API, or you may wish to implement your own Checks API integration until Jenkins supports this out of the box.

Note: the API token you get is only valid for one hour. Don’t fetch it at the start of the pipeline and assume it will be valid all the way through; load the credential again (for example, in another withCredentials block) at the point where you need it.

Example: Let’s submit a check run to GitHub from our Pipeline:

pipeline {
  agent any

  stages {
    stage('Check run') {
      steps {
        // Load the GitHub App credential; the plugin exchanges it for a
        // short-lived installation token, exposed as the "password" part.
        withCredentials([usernamePassword(credentialsId: 'githubapp-jenkins',
                                          usernameVariable: 'GITHUB_APP',
                                          passwordVariable: 'GITHUB_JWT_TOKEN')]) {
            // Single-quoted Groovy string, so the shell (not Groovy)
            // expands the variables and the token stays out of the log.
            sh '''
            curl -H "Content-Type: application/json" \
                 -H "Accept: application/vnd.github.antiope-preview+json" \
                 -H "authorization: Bearer ${GITHUB_JWT_TOKEN}" \
                 -d '{ "name": "check_run",
                       "head_sha": "'${GIT_COMMIT}'",
                       "status": "in_progress",
                       "external_id": "42",
                       "started_at": "2020-03-05T11:14:52Z",
                       "output": { "title": "Check run from Jenkins!",
                                   "summary": "This is a check run which has been generated from Jenkins as GitHub App",
                                   "text": "...and that is awesome"}}' \
                 https://api.github.com/repos/<org>/<repo>/check-runs
            '''
        }
      }
    }
  }
}

What’s next

GitHub Apps authentication in Jenkins is a huge improvement. Many teams have already started using it and have helped improve it by giving pre-release feedback. There are more improvements on the way.

There’s a proposed Google Summer of Code project: GitHub Checks API for Jenkins Plugins. It will look at integrating with the Checks API, with a focus on reporting issues found by the warnings-ng plugin directly onto GitHub pull requests, along with a test results summary on GitHub. Hopefully it will make the Pipeline example above much simpler for Jenkins users 🙂 If you want to get involved with this, join the GSoC Gitter channel and ask how you can help.

Insights by Harness – Common Challenges with Continuous Delivery


At Harness, we are here to champion Continuous Delivery for all. As a member of the Continuous Delivery Foundation (CDF), we’re happy to see the industry take a collective step forward to better everyone’s capabilities. We are fortunate to have the opportunity to talk to many people along their Continuous Delivery journey. Part of what we do at Harness when engaging with a customer or prospect is run a Continuous Delivery Capability Assessment (CDCA) to catalog and measure maturity.

Over the past year, we have analyzed and aggregated the capability assessments we performed, uncovering the common challenges organizations faced in the past 12 months. In Continuous Delivery Insights 2020, we identified the time, effort, cost, and velocity associated with their current Continuous Delivery processes. The trend in our key metrics is that velocity is up, but complexity and cost are also on the rise.

Key Findings

We observed the following Continuous Delivery performance metrics across a sample of over 100 firms. For organizations looking to strengthen or further their Continuous Delivery goals, we noted the following median [middle] and average values. As sophisticated as organizations are, it still takes a lot of effort to get features and fixes into production.

We define deployment frequency as the number of times a build is deployed to production. In a microservices architecture, deployment frequency usually increases, since services typically have a one-to-one relationship with builds. For the sample set we interviewed, the median deployment frequency was once every ten days, which shows bi-monthly deployments are becoming the norm.

These bi-monthly deployments might be on demand, but the lead times can start to add up. Lead time is the amount of time needed to validate a deployment once the process has started. Across the sample, organizations typically required an average of eight hours, i.e., eight hours of advance notice to allow validation and sign-off of a deployment.

If a decision is made to roll back during those eight hours of validation, organizations in the sample averaged 60 minutes to restore a service, i.e., roll back or roll forward. An hour might not seem long to some, but for engineers every second can feel stressful as you race to restore your SLAs.

Adding up all the effort from different team members during a deployment, getting an artifact into production represented an average of 25 human hours of work. Certainly, different team members have varying levels of involvement throughout the build, deploy, and validation cycles, but 25 hours is more than half of a 40-hour week for a full-time employee in total burden.

Software development is full of unknowns; core to innovation, we are trying and developing approaches and features for the first time, so iteration and learning from failures are expected. We have certainly gotten better at deployment and testing methodologies, and one way to measure that is the change failure rate, the percentage of deployments that fail. Across the sample set, an average of 11% of deployments failed. It’s not all doom and gloom, though: the Continuous Delivery Foundation is here to move the needle forward.

Looking Forward 

The goal of the Continuous Delivery Foundation and Harness is to collectively raise the bar around software delivery. For organizations that are members of the Continuous Delivery Foundation, deploying via a canary deployment seems like second nature for safer releases. If you are unfamiliar with canary deployments, it is basically a safe approach to releasing where you send in a canary [release candidate] and incrementally replace the stable version until the canary has taken over; a sketch of the idea follows below. As simple as the concept is to grasp, it can be difficult in practice. In Continuous Delivery Insights 2020, only about four percent of organizations were taking a canary-based approach somewhere in their organization.
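
As a purely illustrative sketch (hypothetical syntax, not Harness configuration or any particular tool’s), a canary rollout is often expressed as a series of traffic-shifting phases with a verification gate between them:

canary:
  phases:
    - weight: 5        # send 5% of traffic to the canary (release candidate)
      verify: smoke-tests
    - weight: 25       # widen the rollout only if the previous gate passed
      verify: error-rate-below-baseline
    - weight: 50
      verify: latency-within-slo
    - weight: 100      # the canary has fully replaced the stable version
  onFailure: rollback  # any failed gate restores the stable version
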
We are excited to start tracking metrics on the overall challenges with Continuous Delivery year over year and to work towards improving the metrics and the adoption of Continuous Delivery approaches for all. For greater insights, approaches, and breakdowns, feel free to grab your digital copy of Continuous Delivery Insights 2020 today!

Tekton: Wow! We’re beta now!


You may have heard that Tekton Pipelines is now beta! That’s not beta like the video format but beta like Kubernetes! Okay, I’ll stop trying to make jokes, because compatibility is no laughing matter for folks who want to build on top of and use Tekton, and that’s why we’ve declared beta: so that you can feel more confident in using it.

What exactly does beta mean for Tekton?

So what does beta mean exactly? It means for Tekton what it means for Kubernetes, and it boils down to two things:

  1. Features that are beta will not be removed; they might change but you can count on the features themselves sticking around
  2. Backwards-incompatible changes to the API will be avoided; if they do have to happen, you will be given at least 9 months’ worth of releases to migrate to the new way of doing things

You might be wondering what “the API” means in this context – good question! It’s the specifications of the CRDs themselves and runtime details like the special directories that Tekton makes.

Not all of Tekton is beta, however! Right now it’s just Tekton Pipelines, and only the following CRDs:

  • Tasks, ClusterTasks and TaskRuns
  • Pipelines and PipelineRuns


This means that other types that you might like, such as Conditions and PipelineResources (see the next section!) are still alpha and don’t (yet!) have the same beta level guarantees.

You can always refer to our API compatibility docs in our repo if you forget!
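
To make that concrete, the beta guarantee shows up in the apiVersion of the resources you write. A minimal sketch (the Task name is illustrative):

apiVersion: tekton.dev/v1beta1   # the beta API surface the guarantees apply to
kind: Task
metadata:
  name: example-task             # illustrative name
spec:
  steps:
    - name: hello
      image: ubuntu
      script: echo "hello from the beta API"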

What about PipelineResources?

What about them indeed! If you are part of the Tekton community, you’ll know that we keep going back and forth on our love/hate-able PipelineResources – the feature you love until it doesn’t work.

A few months ago, our “difficult to understand, hard to debug” friend was challenged by the community: what would the Tekton world look like without PipelineResources? And when we went on that journey, we discovered that PipelineResources gave us several features which were super useful on their own.

So we focused on adding those features and brought them to beta. In the meantime, we keep asking the question: do we still need PipelineResources? And what would they look like if redesigned with workspaces and results? We’re still asking those questions and that’s why PipelineResources aren’t beta (yet)!

We know some users really love them: “There are dozens of us,” – @dlorenc. So we haven’t given up on them yet, and there are some things that you just still can’t do well without them: for example, how do you consistently represent artifacts such as images moving through Pipelines? You can’t! So the investigation continues.

In the meantime, we’ve made Task equivalents of some of our PipelineResources in the Tekton catalog, such as PullRequests, GCS, and git.

Tekton Website is Live Now!

Hooray! Our shiny new site is live! Right this way -> https://tekton.dev/

Tekton documentation is now hosted on the website at https://tekton.dev/docs/, and interactive tutorials are hosted at https://tekton.dev/try/. There is just one interactive tutorial right now, but more are in the works, so watch this space!

What’s coming up next?

We’re hard at work on more nifty Tekton stuff to make your CI/CD Pipelines more powerful and more portable by achieving Tekton’s mission:

Be the industry-standard, cloud-native CI/CD platform components and ecosystem.

Check out more on our mission and our 2020 roadmap in our community repo.

THANK YOU!!! ❤️

Thanks to all of the many amazing contributors who have gotten us to this point! The list below is people credited in Tekton Pipelines release notes, but for the complete list of everyone contributing to Tekton check out our devstats!

From Jenkins – Validating JCasC configuration files using Visual Studio Code


Originally posted on the Jenkins Blog by Sladyn Nunes

Configuration-as-code plugin

Problem Statement: Convert the existing schema validation workflow in the Jenkins Configuration as Code plugin from the current scripting language to a Java-based rewrite, enhancing its readability and testability and supporting it with a testing framework. Also enhance the developer experience by developing a VS Code plugin that facilitates autocompletion and validation, helping developers write correct YAML files before applying them to a Jenkins instance.

The Configuration as Code plugin has been designed as an opinionated way to configure Jenkins based on human-readable declarative configuration files. Writing such a file should be feasible without being a Jenkins expert, just translating into code a configuration process one is used to executing in the web UI. The plugin uses a schema to verify the files being applied to the Jenkins instance.

With the new JSON schema enabled, developers can now test their YAML files against it. The schema checks the descriptors (i.e., configuration that can be applied to a plugin or Jenkins core), verifies that the correct types are used, and provides help text in some cases. VS Code allows us to test the schema right out of the box with some modifications. This project was built as part of the Community Bridge initiative, a platform created by the Linux Foundation to empower developers — and the individuals and companies who support them — to advance sustainability, security, and diversity in open source technology. You can take a look at the Jenkins Community Bridge Project Page.

Steps to Enable the Schema Validation

a) First, install the JCasC plugin for Visual Studio Code and open it via the extension list. The shortcut for opening the extension list in the VS Code editor is Ctrl + Shift + X.

b) In order to enable validation, we need to include it in the workspace settings. Navigate to File, then Preferences, then Settings. In Settings, search for json, and inside settings.json include the following configuration:

{
  "yaml.schemas": {
    "schema.json": "y[a]?ml"
  }
}

You can specify a glob pattern as the value; schema.json is the file name of the schema. The pattern above applies the schema to all YAML files.

c) The following tasks can be done using VS Code:

  • Auto-completion (Ctrl + Space): auto-completes all commands.
  • Document outlining (Ctrl + Shift + O): provides the document outline of all completed nodes in the file.

d) Create a new file under the work directory called jenkins.yml. For example, consider the following contents:

jenkins:
  systemMessage: "Hello World"
  numExecutors: 2

The above YAML file is valid according to the schema, so VS Code should provide you with validation and autocompletion for it.
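
Conversely, the schema flags type errors. As a hypothetical counter-example, changing numExecutors to a string would be marked invalid, because the schema expects a number there:

jenkins:
  systemMessage: "Hello World"
  numExecutors: "two"   # wrong type: the schema expects a number here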


We are holding an online meetup on 26 February about this plugin and how you can use it to validate your YAML configuration files. For any suggestions or discussions regarding the schema, feel free to join our Gitter channel. Issues can be created on GitHub.

New Chair of CD Foundation Governing Board Elected


Tracy Miranda, Director of Open Source Community at CloudBees, has been elected as chair of the CD Foundation Governing Board. CloudBees is a Premier Member of the CD Foundation, and Tracy has been deeply involved in CD Foundation activities over the past year, serving on the Governing Board, helping craft the 9 Strategic Goals of the foundation, and participating in CD Foundation events around the globe. 

“I’m excited and honoured to be elected chair of the CD Foundation governing board. I join all CDF members in expressing sincere thanks to Kim Lewandowski for great progress in our first 12 months. Recent global events highlight how continuous software delivery is critical to every industry. The CDF will increasingly drive many key initiatives in this space, and I am excited to work with CDF members and the broader CI/CD community to pursue the CDF’s 9 Strategic Goals and move the CDF forward.”

The CD Foundation Governing Board raises, budgets and spends funds in support of CI/CD open source and standards projects. According to the CD Foundation, the Chair is responsible for the overall management of the foundation’s budget, and “will preside over meetings of the Governing Board, manage any day-to-day operational decisions, and will submit minutes for Governing Board approval.” The full charter is available here.

Tracy succeeds Kim Lewandowski, Product Manager at Google, who served as the CD Foundation’s first Governing Board chair.

Tracy Miranda Bio

Tracy is director of open source community at CloudBees, where she works closely with the Jenkins and Jenkins X communities. A developer and open source veteran, besides her work with the CD Foundation, Tracy is on the board of directors for the Eclipse Foundation. Tracy has a background in electronics system design and holds patents for her work on processor architectures. She writes for JAXenter.com and Opensource.com on tech, open source, and diversity.

More CD Foundation Resources

To keep up-to-date, sign up for our newsletter and join us in 2020 as we continue to grow and advance CI/CD in the industry!

New Chair of CD Foundation Technical Oversight Committee Elected


We are excited to announce that Dan Lorenc has been elected the CDF Technical Oversight Committee (TOC) Chairperson.

The TOC is responsible for the overall technical management of CDF projects, ultimately managing and guiding project technical infrastructure. Dan’s election to the TOC chairperson role is a recognition of his substantial contributions to the OSS community. These contributions include ongoing TOC leadership, numerous insightful posts on the CDF blog, active participation in CDF SIGs, and general leadership in the community.

Dan Lorenc said, “I’m humbled and excited to be elected as TOC Chair. We have an amazing opportunity to make it easier for the industry to adopt secure, safe and productive Continuous Delivery tools and practices. I look forward to working with the rest of the CDF community this year! I’d like to thank Kohsuke again for his hard work on Jenkins and in the CDF. I wish him the best of luck in his new adventure!”

As the TOC chairperson, Dan will continue to be a strong voice representing the perspectives of the broader CDF technical community, especially to the governing board. The CDF is excited to see him continue to help make CDF the definitive destination for the continuous delivery ecosystem.

For more information on CDF’s TOC leadership, please see here, or reach out to us.

Dan Lorenc Bio

Dan Lorenc is a Staff Software Engineer at Google, where he’s been working in the PaaS space for six years. He currently manages a team focused on building open source tools to improve the container/Kubernetes developer experience. Previously he founded projects such as Minikube, Skaffold, and Kaniko.

More CD Foundation Resources

To keep up-to-date, sign up for our newsletter and join us in 2020 as we continue to grow and advance CI/CD in the industry!

From IBM – Build and Deliver Using Tekton-Enabled Pipelines


Originally posted to the IBM blog by Jerh O’Connor

We are delighted to announce the addition of the industry-leading Tekton capability to IBM Cloud Continuous Delivery pipelines.

With this new type of Continuous Delivery Pipeline, you define your pipeline as code using Tekton resource YAML stored in a Git repository. This lets you version and share your pipeline definitions while allowing you to configure, run, and view pipeline output in the familiar IBM Cloud Continuous Delivery DevOps experience.

Tekton Pipelines is an open source project, born out of Knative Build, that you can use to configure and run continuous integration/continuous delivery (CI/CD) pipelines within a Kubernetes cluster.

Tekton Pipelines are cloud native and run on Kubernetes using custom resource definitions specialized for executing CI/CD workflows. The Tekton Pipelines project is new and evolving and has support and active commitment from leading technology companies, including IBM, Red Hat, Google, and CloudBees. 

For a closer look at Tekton, see our video “What is Tekton?”

In addition to the benefits of Tekton, this new capability in IBM Continuous Delivery provides the following unique features:

  • A dashboard to easily view in-flight and completed Pipeline Run information
  • Manual and Git triggering support
  • Integration with the rich Open Toolchain ecosystem
Delivery Pipeline Tekton Dashboard

How to get started

Getting started with CD Tekton-enabled pipelines requires a little setup, and you will need a few pieces in place (see our documentation for the full list of prerequisites).

Once you have these pieces in place, you need to configure the Tekton Pipeline to enable the running of your pipeline.

Click on the Tekton Pipeline tile on your toolchain dashboard to be taken to the configuration screen, where you will see notifications on what remains to be configured to use this new pipeline technology:


Select the required values in the Definitions tab, select your private worker in the Worker tab, and click Save.

You now need to set some trigger mappings via the Triggers tab so you can run the pipeline.

The simplest trigger is a manual trigger where you kick off a run yourself from the dashboard. You can also create Git triggers that fire based on git push and pull events as needed.

CD Tekton-enabled pipelines also provide a means of externalising reusable properties, along with a simple, secure method for storing sensitive information like API keys via the Environment Properties tab.

Once the setup has been completed, the new Tekton Dashboard automatically updates to show information on in-flight and completed Pipeline Runs, showing the status of these runs and allowing the cancellation and deletion of runs.

In all cases, you can dig deeper into the logs and see the output of each Tekton Task and Step in the selected Pipeline run.


Caution: The Tekton Pipelines project is still in alpha and being actively developed. It is very likely that you will need to make changes to your pipeline definitions as the project continues to evolve.

Watch the demo

Watch this short companion video for a demo of using our sample Tekton pipeline in IBM Cloud.

Contact

Now you can try it out yourself. For more information, see our documentation. If you have questions, engage our team via Slack by registering here and join the discussion in the #ask-your-question channel on our public Cloud DevOps @ IBM Slack.

We’d love to hear your feedback.

Learn more about Tekton by reading “Tekton: A Modern Approach to Continuous Delivery.”

From Jenkins X – Asking and Finding Help: Outreachy


Originally posted on the Jenkinx X blog by Neha Gupta

Neha Gupta is adding support for Kustomize in Jenkins X, to enable Kubernetes native configuration management, while participating in Outreachy from December 2019 to March 2020.

Outreachy open-source contribution for applicants — Asking/Finding help

This blog might be helpful for beginners who are hesitant to ask questions: afraid of getting lost in a new world while trying to understand an open source project, or fearing that their questions may sound stupid later or seem very obvious. First of all: relax!

  • Everyone starts from somewhere and has a learning curve!
  • There are some prerequisites that may help you get into open source development:
  • Learn the basics of git operations (https://learngitbranching.js.org; I find this easy and helpful).
  • Try to find an open source project. (Remember: you’re going to contribute to a part of it, so it’s okay if some or many things don’t make sense in the beginning; it’s easier to write code than to understand someone else’s code.)
  • For selecting a project, you may also look at Google Summer of Code, Outreachy, Google Code-In, RSoC, and other open source programs and their organizations, which help people, students, and aspiring developers find the communities and projects that best match their interests.

NOTE: Beware! Looking at too many organizations and projects will only confuse you, so start with just one project (or at most two) and try to deep-dive and focus on it.

After selecting the project:

  • Connect with the community through their communication channels for both developers and users (for example: Slack, IRCCloud, Zulip, Riot, etc.)
  • Try to read the documentation and understand the overall structure and purpose of the project you’re starting to work on.
  • If you don’t understand some functionality, just ask! Ask on the communication channel.
  • If you are facing an error, Google it or look through the existing issues; if you’re not able to move forward and have been stuck on the same error for more than 45 minutes, just ask! Trust me, there’s no harm. In fact, people in open source communities appreciate it and feel motivated when users ask them about something they’re passionately building. It also sometimes helps the community redefine and realign the product and some features.

Happy learning! 🙂