The Continuous Delivery Foundation (CDF), a vendor-neutral home for many of the fastest-growing continuous delivery projects, is announcing Screwdriver as its newest incubation project. Screwdriver is a self-contained, pluggable service that helps developers build, test, and continuously deliver software using the latest containerization technologies. Screwdriver was originally developed by Yahoo, now Verizon Media, as a simplified interface to Jenkins. It was open sourced in 2016 and completely rebuilt to handle deployments at scale and broader CI/CD goals.
Screwdriver ties directly into DevOps teams’ daily habits. It tests pull requests, builds merged commits, and deploys to any environment. It also makes it easy to define load tests, canary deployments, and multi-environment deployment pipelines.
Begin contributing to Screwdriver today. Pull requests are always welcome. Start by browsing the Screwdriver contributing guide.
“The CD Foundation welcomes Screwdriver. We believe Screwdriver is off to an excellent start, and we’re excited to be working together. By joining the CD Foundation, Screwdriver will be able to scale more quickly, taking greater strides forward in development and deployment,” said Dan Lopez, CDF program manager. “With so many supported integrations, Screwdriver provides the openness and flexibility that DevOps teams require.”
“The Screwdriver team and platform are heroes at Yahoo and Verizon Media for helping us run our massive software engineering operations at scale. Together we can make your CI/CD team heroes at your company too. We invite you to work with us in this neutral home for open source excellence,” said Gil Yehuda, Sr. Director of Open Source, Verizon Media/Yahoo.
“It’s great to see Screwdriver joining the CDF. I know the people behind the project are passionate about the same thing we are, and together we can make a bigger impact faster. Open source has a proven ability to achieve that across project boundaries,” said Kohsuke Kawaguchi, Co-CEO at Launchable, Inc.
The CD Foundation provides a wide range of services to projects, and the first step is starting as an Incubation Project. Full details on bringing an open source CI/CD project to the CDF are available here.
“Our team is thrilled to join the CDF. Together with our partners from Yahoo! Japan and all our external contributors we’ll continue to rapidly deliver solutions which support developer workflows and interoperability with various Continuous Delivery solutions,” said Jithin Emmanuel, Sr. Engineering Manager, Verizon Media/Yahoo and Product Owner for Screwdriver.
For more information on getting involved with Screwdriver, please visit:
Do you have your ticket for the CD summit yet? Grab it soon, because we have limited space and it looks like we’re gonna sell out!
We (Rosalind Benoit, Armory, and Christie Wilson, Google) are thrilled to be collaborating with the CDF as co-chairs of the upcoming Continuous Delivery Summit, happening March 30th, 2020, and co-located with KubeCon EU in Amsterdam.
What’s our vision for this event? And more importantly, what’s the story of Continuous Delivery (CD) that the fledgling CDF has committed to sharing with the world? Why does it matter, and what do we hope to gain from telling it?
If you create software, you know how much power your software delivery lifecycle (SDLC) has: it can feel slow and oppressive, or it can accelerate you and give you the freedom you need to try all your cool ideas!
Software is increasingly important to our culture and our economy: enterprises need great software to offer competitive products and services, and humans need great software to automate tasks, do more with less, and improve the lives of their families and communities.
And we need CD to make the software that makes this all possible!
At the CDF, we believe in CD, we believe in CI, and we believe that we make better solutions when we have more perspectives. The CDF aims to bring together the people building and using CI/CD projects so that we can take CI/CD forward into the future together as a community. At the CD summit we want to unpack these goals, dig into delivery platforms and strategies, and give voice to the frustrations and successes you’ve run into with your own SDLC.
This blog post was written by the owners of the different projects; in particular, huge thanks to Christie Wilson, Andrea Frittoli, Adam Roberts and Vincent Demeester!
At the end of last year Dan wrote the blog post A Year of Tekton, a great retrospective on what has happened since the project was bootstrapped; a highly recommended read! Now that we’re getting into the swing of 2020, let’s look back on 2019 once more and ahead to what we can expect for Tekton this year!
Tekton in 2019
We can safely say 2019 (more or less the project’s first year!) was a great year for Tekton. Just like a toddler we tried things, sometimes failed and learned a lot; we are growing fast!
The year 2019 saw 9 releases of Tekton Pipelines, from the first one (0.1.0) to the latest (0.9.2). We shared the work of creating the releases across as many people as possible, and many more contributors are behind the work in each!
0.1.x (Jason Hall) First Tekton Pipelines release!
0.2.x (Christie Wilson) Tekton Pipelines with graphs; without init containers!
0.3.x Chartreux C-3PO (Vincent Demeester) Released using Tekton itself!
0.4.x Aegean Brackenridge (Dan Lorenc) Exposes digests of built images
If you are curious about the naming of the releases starting from 0.3.x: we decided to spice things up a bit and name each release with a combination of a cat breed and a robot name (in reference to our amazing logo, a robot cat).
Aside from the initial project (tektoncd/pipeline), we bootstrapped a bunch of new projects:
tektoncd/cli: This project aims to provide an easy-to-use command line interface for interacting with the Tekton components. As Tekton objects are Kubernetes custom resources you can always interact with them via the Kubernetes CLI, kubectl, but the kubectl experience can be very ‘raw’ and not very focused. The `tkn` CLI aims to provide a friendly user experience without requiring any knowledge of kubectl (or Kubernetes, for that matter); see the short sketch after this list.
tektoncd/dashboard: Alongside the CLI project, the Tekton Dashboard provides a user interface for the Tekton components, in a browser. It allows users to manage and view Tekton PipelineRuns and TaskRuns and the resources involved in their creation, execution, and completion.
tektoncd/catalog: Tekton Pipelines is designed to provide highly shareable objects (Task, Pipeline, Condition, …), so creating a repo to store a catalog of shared Tasks and Pipelines came naturally!
tektoncd/experimental: With growing interest in Tekton came a growing number of “feature requests”. In order to be careful about how we expand the scope of Tekton Pipelines while still allowing contributors to experiment, we created this repository to allow experiments to happen more easily. Experiments can graduate with enough traction. The biggest project so far is the webhooks extension, which combines the Dashboard project and Triggers to let users create webhooks for Git that trigger PipelineRuns.
tektoncd/operator: This project aims to provide an operator to manage installation, upgrades, and removal of tektoncd projects (pipeline, …). It has yet to be published in the community OperatorHub.
tektoncd/triggers: And speaking of the experimental repo, we have Triggers, which started its life there! This project provides lightweight event triggering for Pipelines.
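To give a flavor of the CLI, here is a minimal sketch of working with runs through `tkn` instead of raw `kubectl` (the pipeline name is hypothetical, and flags reflect the tkn help of the time):

```sh
# Raw kubectl works, since Tekton objects are just custom resources:
kubectl get pipelineruns

# tkn offers a focused, CI/CD-oriented view of the same objects:
tkn pipeline list
tkn pipeline start build-and-deploy --showlog   # hypothetical pipeline name
tkn pipelinerun logs --last -f                  # follow the most recent run
```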
Looking forward into 2020 🔮
We’ve come a long way but we’ve got more to do! Though we can’t predict what will happen for sure, here is a preview of what we’d like to make happen in 2020!
Beta API, GA
As you’ve seen, we’ve made a lot of changes! Going forward we want to make sure folks who are using and building on top of Tekton have more stability guarantees. With that in mind, we are pushing for Tekton Pipelines to have a beta release early in 2020. If you are interested in following along with our progress, please join the beta working group! Or keep an eye on our Twitter for the big announcement.
Once we’ve announced beta, users can expect increased stability as we’ll be taking our lead from Kubernetes and mirroring its deprecation policy; for example, any breaking change will need to be rolled out over 9 months or 3 releases (whichever is longer).
And once we get to beta, why stop there? We’d love to offer users GA stability as soon as we possibly can. After we get to beta, we’ll be looking to promote the types that didn’t make beta (e.g. Conditions), add any important features we don’t yet have (we’re looking at you, failure handling and “pause and resume”, aka “the feature that enables manual approval”!), and then we should be ready to announce GA!
Task Interfaces and PipelineResources
Speaking of types that won’t be going beta right away: PipelineResources! PipelineResources are a type in Tekton that is meant to encapsulate and type data as it moves through your Pipelines, e.g. an image you are building and deploying, or a git commit you’re checking out and building from.
This concept was introduced early on in Tekton and bears a close resemblance to Concourse resources. However, as we started trying to add more features to them, we discovered some interesting edges in the way we had implemented them that caused us to step back and give them a re-think. Plus, some folks in our community asked the classic question “why PipelineResources?” and we found our answer wasn’t as clear as we’d like!
As we started down the path of re-designing, and re-re-designing again, we started to get some clarity on what exactly it was we were trying to create: the interface between Tasks in a Pipeline! And thanks to a revolutionary request to improve our support for volumes, we finally feel we are on the right path! The next steps along this path are to add a few key features, namely the concept of workspaces (i.e. files a Task operates on) and allow Tasks to output values (aka “results”).
Once we have these in place we’ll revisit our designs and our re-designs.
tekton.dev
Hand in hand with our beta plans, we’re revamping our website! Soon at tekton.dev you’ll be able to find introductory material, tutorials, and versioned docs.
The Tekton Catalog
Besides making it easy for folks to implement cloud native CI/CD, one of the most important goals of Tekton is for folks to be able to share and reuse the components that make up their Pipelines. For example, say you want to update Slack with the results of a Task – wouldn’t it be great if there were one battle-tested way to do that, with a clean interface?
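As a sketch of what such a shareable Task with a clean interface could look like, here is a hypothetical `slack-notify` Task using the v1alpha1 API of the time (the Task name, parameters, and image choice are all illustrative, not an actual catalog entry):

```sh
kubectl apply -f - <<'EOF'
apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: slack-notify            # hypothetical catalog Task
spec:
  inputs:
    params:
      - name: webhook-url
        description: Slack incoming-webhook URL to post to
      - name: message
        description: Text of the notification
  steps:
    - name: post
      image: curlimages/curl    # any image with curl would do
      command: ["curl"]
      args:
        - "-X"
        - "POST"
        - "--data"
        - '{"text": "$(inputs.params.message)"}'
        - "$(inputs.params.webhook-url)"
EOF
```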
But there’s so much more we want to do! We want to offer versioning and test guarantees that can make it painless for folks to depend on Tasks in the Catalog – and for companies to create Catalogs of their own.
Plus, the Catalog is a great place for us to build better interoperability even between the Tekton projects, for example with the Task that runs tkn (the Tekton CLI).
Shout outs 😻
A community is nothing without its users, contributors, adopters and friends, so we want to give an explicit shout-out to our community for its tremendous effort and support in 2019, and hopefully even more in 2020.
We welcome friend requests! Please submit a PR to https://github.com/tektoncd/friends; this repository is a place where members of the ecosystem (known as “Tekton Friends”) can self-report in a way that is beneficial to everyone. We’d love to have you as a friend if your company is using Tekton and/or contributing to it 😀
Projects
Adoption of Tekton has grown, and it has become part of both free and commercial offerings from various companies, demonstrating that Tekton is valuable and ready for anything.
In mid-2019, Puppet launched a new cloud-native CD service called Project Nebula, built on Tekton Pipelines. It provides a friendly YAML workflow syntax and niceties like secrets management and a spiffy GUI on top of a Tekton instance running in GCP. To coincide with the public beta of Nebula, Scott Seaward keynoted at the Puppetize PDX user conference to talk about how Tekton works under the hood. Since then, the Nebula team has contributed several PRs to the Pipelines repo and is looking forward to working on step interoperability, triggers, and other awesome upstream features in 2020.
It has been such a privilege to see more and more people get excited about Tekton and share it with the world! Here are some (but not all!!) of the great talks and tweets we saw about Tekton in 2019, not to mention our Tekton contributor summit!
If you are interested in contributing to Tekton, we’d love to have you join us! Every tektoncd project has a CONTRIBUTING.md that can point you in the right direction, and our community contains helpful links and guidelines. Feel free to open issues, join slack, or pop into one of our working groups! Hope to see you soon 😀
It’s 2020, so what’s our New Year’s Resolution? #1, Make Spinnaker the Perfect Continuous Delivery Platform.
🤦 Voltaire said “perfect is the enemy of good,” and we’ve seen some resolution-minded ads lately reviving that adage (I’m thinking of Michael Phelps reminding me that Progress IS perfection :)). Striving for perfection in software development can lead to obsolete products. So, we hack. We listen to our users and iterate. When we do that as a community, we can progress towards something truly brilliant. Spinnaker’s progress was perfection in 2019, and by all accounts it will exceed that trajectory in 2020.
Enterprise Adoption Crescendo (of Production workloads)
Spinnaker saw promising early adoption from large companies like Target and Adobe, and this year has been no exception. While literally everyone books stays on its site, oblivious to the digital transformation underneath, Airbnb is using Spinnaker to migrate from a monolith to a service-oriented architecture, and from brittle deployments to continuous delivery. SAP joyfully leverages Spinnaker on its mission to run the world better, and Pinterest uses it to boost productivity as it pioneers visual discovery. TransUnion stays ahead of the fintech curve, providing total credit protection through applications it now deploys with Spinnaker, a more full-featured fit for ephemeral infrastructure than its previous Ansible solution. Companies like Comcast, going all-in on Kubernetes as a software-defined datacenter, have added Spinnaker to manage deployment pipelines. Meanwhile, Salesforce has adopted Spinnaker to bake images based on both Helm charts for Kubernetes and Packer templates for VMs, to support its complex software delivery requirements.
In 2019, we proudly welcomed engineers from new enterprises, including JPMorgan Chase and Home Depot, into the Spinnaker community. Now more than 175 companies have contributed to the project, with over 200 new individual contributors just last year, and many more companies have become key stakeholders, using and extending Spinnaker. All of this demonstrates that Spinnaker is a mature CD solution, proven to handle production workloads flexibly and at scale.
Organic Growth Through Governance
As adoption continues to rise and our community grows, it becomes crucial to create a project space adapted to that growth. A transparent structure for building and maintaining the project invites new companies and users to take an active role in shaping Spinnaker’s future. To that end, 2019 saw the governance process and entities created in 2018 codified in GitHub, along with a process for modifying them via PRs and votes from members of the TOC and Steering Committee. Spinnaker governance also blossomed into an active space encompassing 8 community-initiated SIGs, which organize contributors around feature growth and maintenance in areas of interest. SIGs welcome anyone to join, and we saw growing attendance from end-user companies in H2 2019. As the TOC experiments with public Open Office Hours, Spinnaker Slack is always open, welcoming nearly 8500 participants to troubleshoot, chat with a SIG team, or reach out to a community member any time. Coupled with the donation of the project to the CDF, these growth factors signal the founding of Spinnaker as a neutral, democratized project space. Our goal? Fuel rapid innovation as we work to empower humanity to deliver their innovations, faster.
What came out of this investment in 2019? Where to begin…! An OSS ecosystem thrives with modular components that allow operators to optimize for business goals and maintain compliance. As our user base grows, the problem set expands, use cases vary, and we innovate across a richer toolchain. This allows us to create a smarter, more automated experience. Case(s) in point:
New data sources were added for SignalFx and New Relic, to inform Automated Canary Analysis decisions that let app owners sleep instead of being paged.
A new Gremlin integration allowing chaos experimentation in Spinnaker pipelines will expand in 2020 to provide results useful for automated decision support.
Integrations with artifact repositories Nexus and JFrog’s Artifactory have added new native triggers for Spinnaker pipelines.
New end-to-end secrets management dynamically decrypts Spinnaker secrets as needed for validation and deployment from a backing store of your choosing, such as Vault or S3.
Since interoperability is crucial to Spinnaker, implementing a reliable plugin system was a key 2019 milestone. As our community leverages Spinnaker to solve problems, we must remove friction from the dev’s experience in contributing those extensions to the project. A plugin framework provides libraries and application context to devs, and defines clear extension points to start from when integrating something new. In 2019, we adopted PF4J as our backend plugin-loader framework. In 2020, we’ll implement plugin loading in the Spinnaker UI, and foster community around building and sharing plugins, to enrich our ecosystem.
Cloud Providers – Raining Champions :P
Spinnaker depends on cloud provider investment in extending the project for deployment to the ever-growing variety of ephemeral infra solutions. In 2019, engineers at Google developed a blueprint for a production-ready Spinnaker instance on GKE, integrated with Google Cloud services such as Cloud Build. Amazon Engineers have extended cloud providers for AWS services, ensuring that we can deploy with Spinnaker to any attribute available in Fargate or ECS (Elastic Container Service). As of this year, that includes any task definition attribute. AWS also added full support for deployments to serverless applications using AWS Lambda, including the ability to use Lambda functions as ALB targets.
Migrating to the cloud alleviates headaches, while bringing new operational challenges. Spinnaker evolves to capture and solve for these new challenges as we encounter them. The extendable Swabbie service, created in 2019, tackles the tedium (and potential nightmare at scale!) of reaping unused resources programmatically, to help optimize cloud spend. With Swabbie, an operator can set rules for cleanup candidates via YAML, and clean resources according to a configurable schedule. Deployments to highly automated cloud environments prompted enablement of dynamic account creation, discovery, and configuration for Cloud Foundry and Kubernetes accounts in the cloud.
Upleveling Functionality = Perfect Progress
The Kubernetes V2 Provider for Spinnaker also came into its own last year, offering the ability to deploy, delete, scale, and roll back K8S manifests as artifacts managed as code. The Kubernetes SIG iterated fast to improve the V2 user experience by surfacing more kubectl commands in the Spinnaker UI, and improving management of rollout strategies. They also enhanced traffic management to enable more deployment patterns with the provider, such as blue/green (AKA red/black) and dark canary. In 2020, simplifying the Kubernetes developer experience is an important roadmap element, and the community will tackle it by visualizing more K8S resources in the Spinnaker UI, and improving terminology, error, and workflow management.
Under the hood, 2019 saw lots of effort to provide operators the option to back Spinnaker with a MySQL database instead of Redis. Stateful data in Spinnaker enables event routing and orchestration for pipelines, integrated CI and SCM events, and Swabbie cleanup notifications. The choice of whether to use a relational DB or in-memory store to manage that data gives operators the freedom to optimize performance for their workloads and infrastructure. This makes all that effort, which required updating several microservices, including Echo, the eventing service, and Orca, the orchestration engine, well worth it. Likewise, updates to the Authorization model have allowed even more granular permissions to be durably API-driven across the platform.
A Bright Future Won’t Blind Us (to Your Story)
One high-level 2020 goal aims to better incorporate user stories and enterprise use cases into Spinnaker’s trajectory. The steering committee has committed to building a roadmap that tells high-level stories about using Spinnaker to solve problems. Toolchain interoperability, notably with Kubernetes, cloud providers, and monitoring systems, figures large in the H1 2020 Roadmap. Managed Delivery, an exciting Spinnaker CD initiative incubating at Netflix, uniquely responds to a common narrative around software delivery. It uses declarative automation to alleviate the operational knowledge and maintenance burden that comes with owning modern, continuously delivered applications.
Users can help us tell the best Spinnaker stories by submitting comments and issues describing usage and business context. Please visit Spinnaker.io (which the Docs SIG will overhaul in 2020) and check out our Success Stories page. Join us on Spinnaker Slack or in the Spinnaker org and tell your tale!
Recognizing the importance of interoperability and identifying it as a strategic goal is a very important step for the CDF to take on behalf of users. Users and organizations employ various CI/CD tools and technologies depending on their needs and where they are in their CI/CD transformation. Organizations often employ more than one tool in various stages of their CI/CD pipelines due to the different capabilities the tools provide, and this is perhaps one of the biggest benefits users get from open technologies for their CI/CD needs. For example, CDF member Salesforce has over 20 different CI/CD tools internally, owing to acquisitions and differing requirements across teams.
However, one of the challenges users face is the lack of interoperability across CI/CD tools and technologies, resulting in various issues while constructing and running pipelines, such as passing metadata and artifacts between tools or achieving traceability from commit to deployment. Often users end up building their own “glue code” to address what is a common problem, which further complicates moving from one tool to another and adopting new technologies and methodologies.
These “glue code solutions” are generally specific to users’ needs and tools rather than being loosely coupled and agnostic to tooling and technology. Additionally, these solutions are not visible to other users and communities, leaving their CI/CD pipelines vulnerable to outages caused by changes to the underlying tools in their respective projects (e.g. non-backward-compatible API changes or changes in data models).
Therefore, focusing on tool interoperability is critical.
There has been significant collaboration going on in this area. Linux Foundation Networking (LFN), OpenStack Foundation (OSF), and Cloud Native Computing Foundation (CNCF) projects have done a lot to raise awareness of CI/CD interoperability challenges. In addition to these communities, Spinnaker, Jenkins, Tekton, and Jenkins X, CDF founding projects, have been collaborating and sharing ideas. However, there are many more users, projects and communities, either looking for answers to similar interoperability challenges, on their way to developing solutions, or simply trying to find like minded people to work with together.
We believe the work should happen in a neutral forum where users come together with maintainers of open source CI/CD projects and have a dialog about the challenges we need to address.
That is why the CDF Interoperability SIG was launched, led by Fatih Degirmenci of Ericsson, with support from representatives of Netflix, Google, China Mobile, CloudBees and others.
We, the CDF Interoperability SIG, aim to provide such a forum and enable a dialog around interoperability in order to:
clarify what interoperability means for the CI/CD ecosystem
promote the need to collaborate on interoperability challenges in a neutral forum
highlight and promote the needs of the users who face challenges constructing complex end-to-end CI/CD flows and pipelines by employing different tools and technologies
explore synergies between, and enable collaboration across, the CI/CD projects with regards to interoperability
pursue solutions which are loosely coupled, scalable, flexible, and tool and technology agnostic
reduce the need for users to implement in-house solutions by promoting native interoperability between tools
attract and assist projects that work on interoperability
Membership in the Interoperability SIG is open to the public. We invite users of, and contributors to, open source CI/CD projects to join us and share ideas, use cases, challenges, and solutions with each other.
Here are some of the ways you can take part in the Interoperability SIG and start collaborating:
The SIG meets bi-weekly (every even week) on Thursdays at 15:00 UTC on Zoom; the meeting agenda and minutes are available here. Our first meeting will be on January 23, 2020.
Finally, we would like to thank everyone who has listened to our ideas, shared their thoughts, taken part in crafting the proposal, and most importantly, encouraged us with their +1s!
The Jenkins X project began 2019 by celebrating its first birthday on January 14th, a big event for any open source project, and we have just celebrated our 2nd – hooray!
Jenkins X has evolved from a vision of how CI/CD could be reimagined in a cloud native world, to a fast-moving, innovative, rapidly maturing open source project.
Jenkins X was created to help developers ship code fast on Kubernetes. From the start, Jenkins X has focused on improving the developer experience. Using one command line tool, developers can create a Kubernetes cluster, deploy a pipeline, create an app, create a GitHub repository, push the app to the GitHub repository, open a pull request, build a container, run that container in Kubernetes, and merge to production. To do this, Jenkins X automates the installation and configuration of a whole bunch of best-in-breed open source tools, and automates the generation of all the pipelines. Jenkins X also automates the promotion of an application through testing, staging, and production environments, enabling lots of feedback on proposed changes. For example, Jenkins X preview environments allow for fast and early feedback as developers can review actual functionality in an automatically provisioned environment. We’ve found that preview environments, created automatically inside the pipelines created by Jenkins X, are very popular with developers, as they can view changes before they are merged to master.
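As a rough sketch of that flow (commands follow the jx help of the time; cluster details are prompted for interactively):

```sh
# Create a Kubernetes cluster on GKE with Jenkins X installed into it:
jx create cluster gke

# Generate an app from a quickstart, with a GitHub repo and pipeline,
# then watch the first release roll through the environments:
jx create quickstart
jx get activities -w
```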
Jenkins X is opinionated, yet easily extensible. Built to enable DevOps best practices, Jenkins X is designed to support the deployment of large numbers of distributed microservices in a repeatable and manageable fashion, across multiple teams. Jenkins X facilitates proven best practices like trunk-based development and GitOps. To get you up and running quickly, Jenkins X comes with lots of example quickstarts.
Highlights of 2019
February 2019: The rise of Tekton!
In the second half of 2018, Jenkins X embarked on a journey to provide a Serverless Jenkins and run a pipeline engine only when required. That pipeline engine was based on the Knative build-pipeline project, which evolved into Tekton with much help and love from both the Jenkins and Jenkins X communities. The Jenkins X project completed its initial integration with Tekton in February 2019. Tekton is a powerful and flexible Kubernetes-native open source framework for creating CI/CD pipelines, managing artifacts and progressive deployments.
March 2019: Jenkins X joined The Continuous Delivery Foundation!
Jenkins X joined the Continuous Delivery Foundation (CDF) as a founding project alongside Jenkins, Spinnaker, and Tekton. Joining a vendor-neutral foundation, focused on Continuous Delivery, made a lot of sense to the Jenkins X community. There had already been a high level of collaboration with the Jenkins and Tekton communities, and there have been some very interesting and fruitful (in terms of ideas) discussions about how to work better with the Spinnaker communities also.
June 2019: Project Lighthouse
When Jenkins X embarked on its Serverless Jenkins journey, it chose to use Prow, an event handler for GitHub events and ChatOps. Prow is used by the Kubernetes project for building all of its repos and includes a powerful webhook event handler. Prow is well proven, but heavily tied to GitHub and not easily extensible to other SCM providers. At the end of June 2019, work commenced on a lightweight, extensible alternative to Prow, called Lighthouse. Lighthouse supports the same plugins as Prow (so you can still ask via ChatOps for cats and dogs) and the same config – making migration between Prow and Lighthouse extremely easy.
June 2019: Jenkins X Boot!
We were very busy in June – a frantic burst of activity before summer vacations! One common problem Jenkins X users were facing was the installation of Jenkins X on different Kubernetes clusters. Installing services, and ensuring DNS and secrets are correct and set up in the right order, is completely different from vendor to vendor, and sometimes cluster to cluster. We realised that to simplify the install we really needed a pipeline, and whilst this may sound a little like the plot to a film, running a Jenkins X pipeline to install Jenkins X really is the best option. The jx boot command interprets the boot pipeline using your local jx binary, and can also be used to update your cluster.
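A minimal sketch of the boot flow (the configuration repository shown was the default at the time; in practice you work from your own fork):

```sh
# Fetch the boot configuration, then let the boot pipeline
# install (or later, upgrade) Jenkins X on the current cluster:
git clone https://github.com/jenkins-x/jenkins-x-boot-config.git
cd jenkins-x-boot-config
jx boot
```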
July 2019: A New Logo!
As part of the move to the CDF the Jenkins X project took the opportunity to redesign its logo. An automaton represents the ability of Jenkins X to provide automated CI/CD on Kubernetes and more!
Second half 2019: Big focus on Stability and Reliability
The Jenkins X project has been fast-paced, with lots of different components and moving parts. This fast pace unfortunately led to some instability and a growth in serious issues that risked undermining all the great work on Jenkins X. There has been a concerted effort by the community to increase stability and reduce outstanding issues – the graph below shows the trend over the last year, with a notable downward trend in the number of issues created over the last 6 months.
CloudBees also aided this effort by introducing the CloudBees Jenkins X Distribution with increased testing around stabilized configurations and deployments and regular releases every month.
October 2019: Jenkins X Steering Committee inaugural meeting
The Jenkins X Bootstrap Steering Committee is tasked with organising the transition to an elected steering committee, as well as determining what responsibilities the steering committee will have in guiding the Jenkins X project.
December 2019: First Jenkins X Outreachy mentee!
Neha Gupta is adding support for Kustomize in Jenkins X, to enable Kubernetes native configuration management, while participating in Outreachy from December 2019 to March 2020. We welcome Neha’s work on Jenkins X and look forward to building on our culture of continuous mentoring!
The Jenkins X project is going to encourage the community to get involved with more innovation. There are a lot of great ideas to extend the continuous delivery story with integrated progressive delivery (A/B testing, canary, and blue/green deployments) and continuous verification, alongside support for more platforms. We are expecting lots of awesome new features in the CloudBees UI for Jenkins X too.
Expect lots more exciting new announcements from Jenkins X in 2020!
2019 was a crazy time to be writing software. It’s hard to believe how careless we were as an industry. Everyone was just having fun slinging code. Companies were using whatever code they found lying around on NPM, Pip, or Maven Central. No one even looked at the code these package managers were downloading for them. We had no idea where these binaries came from or even who wrote most of this stuff.
And don’t even get me started on containers! There was no way to know what was inside most of them or what they did. Yet there we were, pulling them from Docker Hub, slapping some YAML on them, and running them as root in our Kubernetes clusters. Whoops, I just dated myself. Kubernetes was a primitive system, written mostly in YAML and Bash, that people used to interact with before Serverless came and saved us all.
Looking back, it’s shocking that the industry is still around! How we didn’t have to cough up every Bitcoin in the world to stop our databases from getting leaked or our servers from being blown up is beyond me. Thankfully, we realized how silly this all was, and we stopped using whatever code had the most Github stars and started using protection.
We’re Under Attack
No, really. Every time you pip install, go get, or mvn fetch something, you’re doing the equivalent of plugging a thumb drive you found on the sidewalk into your production server.
You’re taking code from someone you’ve never met and then running it with access to your most sensitive data. Hopefully, you at least know their email address or Github account from the commit, but there’s no way to know if this is accurate unless you’re checking PGP signatures. And let’s be honest, you’re probably not doing that.
This might sound like I’m just fear-mongering, but I promise I’m not. This is a real problem that everyone needs to be aware of. Attacks like this are called supply-chain attacks, and they are nothing new. Just last month, an active RCE vulnerability was found in an open source package on PyPI that was being used to steal SSH and GPG credentials.
There are lots of variations on this same play that make use of different social-engineering techniques in interesting ways. One attacker used a targeted version of this to steal cryptocurrency from a few specific websites. Another group performed a “long-con” where they actually produced and maintained a whole set of useful open source images on Dockerhub for years before slowly adding, you guessed it, crypto-mining.
The possibilities are endless, terrifying, and morbidly fascinating. And they’re happening more and more often. If reading about attacks like these is your kind of thing, the CNCF has started cataloging known instances of them. Snyk also just published a post detailing how easy it is to inject code like this in most major languages — Github even hides these diffs in code review by default! Russ Cox has also been writing about this problem for a while.
Vision
OK, there’s a bit of hyperbole up there (Kubernetes doesn’t have that much Bash in it), but open source is under attack, and it’s not OK. Some progress is being made in this area – GitHub and others are scanning repositories, binaries, and containers – but all of these tools only work on known vulnerabilities. They have no mechanism to handle intentional, malicious ones before they are discovered, which are at least as dangerous.
The brutal fact is that there is no way to be confident about the code you find on most artifact repositories today. The service might be compromised and serve you a different package from the one the author uploaded. The maintainer’s credentials might have been compromised, allowing an attacker to upload something malicious. The compiler itself might have been hacked, or even the compiler that built that compiler (PDF warning)! Or, the maintainer could have just snuck something in on purpose.
For any given open source package, we need to be able to confidently assert what code it is composed of, what toolchains and steps were used to produce the package, and who was responsible for each piece. This information needs to be made available publicly. A reliable, secure view of the supply chain of every open source package will make these attacks easier to prevent, and easier to detect when they do happen. And the ability to tie each line of code and action back to a real individual will allow us to hold attackers accountable.
How Do We Get There?
We need to work as an industry to start securing open source software, piece by piece.
Artifact repositories need to support basic authentication best practices like 2FA, artifact signing, and strong password requirements. Docker Hub, PyPI, and npm support 2FA, but there’s no way to see whether the maintainer of a package is using it. Most container registries don’t support signatures yet, though work is ongoing.
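Where signing is supported, turning verification on is cheap. For example, Docker’s Content Trust can be enabled client-side today (a minimal sketch; it only protects you if publishers actually sign their tags):

```sh
# Refuse to pull or run images whose tags have no trust data:
export DOCKER_CONTENT_TRUST=1
docker pull alpine:3.10   # fails if this tag is unsigned
```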
Software build systems need to make reproducible, hermetic builds possible and easy. Debian has started doing some great work here, but they’re basically alone. Every docker build gives you a new container digest. Tar and gzip throw timestamps everywhere. It’s possible to get reproducible builds in Go, Java, and most other major languages, but it’s not necessarily easy. See the recently published whitepaper on how Google handles much of this internally for more information.
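As a small illustration of the timestamp problem, GNU tar and gzip can be pinned down so two builds of identical sources yield identical digests (a sketch; flags assume GNU tar 1.28+):

```sh
# Fix file order, ownership, and mtimes so the archive is deterministic:
tar --sort=name --owner=0 --group=0 --numeric-owner \
    --mtime='2020-01-01 00:00Z' -cf release.tar src/
gzip -n release.tar         # -n: omit the timestamp gzip would embed
sha256sum release.tar.gz    # now stable across rebuilds
```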
SCM providers need strong identity mechanisms so we can associate code back to authors confidently. Git commit logs can be easily forged, and signed commits are not in common use. Even with them, you still have no idea who is on the other end of a PR, only that the signature matches. This isn’t just an issue for security. It can also be a licensing nightmare if you don’t know the real author or license of code you’re accepting.
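Commit signing is still worth adopting today, as long as you understand what it does and doesn’t prove (a sketch; assumes a GPG key is already configured for git):

```sh
# Sign your own work:
git commit -S -m "Fix the build"

# Check what you're merging. A valid signature proves possession of
# a key, not who is actually behind the keyboard:
git log --show-signature -1
git verify-commit HEAD
```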
There is value in allowing developers to work anonymously, but there is also a cost. We need to balance this with systems that apply a higher level of scrutiny to anonymous code. We also need to allow other individuals to “vouch for” patches that they’ve examined, maybe similar to how Wikipedia handles anonymous edits.
And finally, all of this needs to be tied together in secure CI/CD systems and platforms that implement binary transparency for public packages. Putting the packaging steps in the hands and laptops of developers leaves way too large an attack surface. The ability to push a package that will run in prod is the same as having root in prod. By moving the build and upload steps into secure CI/CD systems, we can reduce the need to trust individuals.
OK, but What Can I Do Now?
First, start by securing your code as much as possible. Make sure you have copies of every dependency you’re using stored somewhere. Make sure you review all code you’re using, including OSS. Set up and mandate the use of 2FA across your organization. Publish, and actually check, the signatures and digests of the software you’re using.
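For example, pip can refuse to install anything that doesn’t match a digest you’ve pinned (a minimal sketch; the hash below is a placeholder, not a real digest):

```sh
# requirements.txt pins exact versions and their expected digests:
#   requests==2.22.0 --hash=sha256:<digest-you-verified>
pip install --require-hashes -r requirements.txt
```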
Log enough information in your build system so that you can trace every artifact back to its sources, and every deployment back to its artifacts. Once you’ve done all of this, you’ll be pretty far ahead of everyone else. You’re not completely safe, though.
That’s where we need to work together. If you’re interested in helping out, there are many ways to get involved, and I’m sure there are a lot of efforts going on. We’re just getting started on several initiatives inside the Continuous Delivery Foundation, like our new Security SIG. We’re also hoping to make it easier to build and use secure delivery pipelines inside the TektonCD open source project.
We would love your help, no matter your expertise! For example, I’m far from a security expert, but I’ve spent a lot of time working on developer tools and CI/CD systems. Feel free to reach out to me directly if you have any questions or want to get involved. I’m on Twitter and Github.
If you are interested in learning more about Jenkins features introduced in 2019, stay tuned for a separate blog post about them (coming soon!).
Project updates
The highlights above do not cover all the advancements we made in the project. Below you can find slides from the Jenkins contributor summit in Lisbon, where we had project updates from officers, SIG leaders, and sub-project leaders. See the slide deck to learn about Jenkins Core, Pipeline, Configuration as Code, Security, UX Overhaul, Jenkins Infrastructure, platform support, and documentation.
Some stats and numbers
If this section seems too long for you, here is an infographic prepared by Tracy Miranda. As you can see, Jenkins is pretty big 🙂
Community. Over the past year we had 5433 contributors across GitHub repositories (committers, reviewers, issue submitters, etc.). We had 1892 unique committers who created 7122 pull requests and 45484 commits, bots excluded. Contributors represent 273 companies and 111 countries; 8% of contributors are recognized as independent. The most active repositories were Jenkins Core and jenkins.io. The most active month was October 2019, when we reached a record high number of contributions: 915 unique contributors, 124 of them first-timers, thanks to Hacktoberfest!
Jenkins core. In 2019 Jenkins core had 54 weekly and 13 LTS releases with several hundred notable fixes and enhancements. There was a login screen extensibility rework, and many update manager and administrative monitor improvements. We also introduced support for user timezones, not to mention emoji support 🥳. There was also a lot of housekeeping work: better APIs, codebase refresh, cleaning up static analysis warnings, and removing deprecated features like the Remoting CLI. The core’s components also got major updates. Jenkins Remoting alone got 11 releases, with stability improvements and new features like support for inbound connections to headless Jenkins masters. There are also major features on the way, like JEP-222: WebSocket Services support, UI look-and-feel updates, JENKINS-12548: Read-only system configuration support, and Docker images for new platforms like Arm. To facilitate further changes we created a new Core pull request reviewers team and added 9 contributors to it.
Plugins. There were 2654 plugin releases, and 157 NEW plugins were hosted in the Update Center. The Jenkins ecosystem got a lot of new integrations with development and DevOps tools. Also, a warm welcome back to the Scriptler Plugin, which had been withdrawn from distribution in 2017 due to security issues. If such plugin counts and their dependency management worry you, there is a new Plugin Installation Manager CLI Tool which should help Jenkins users manage plugins more efficiently.
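For instance, installing from a pinned list looks roughly like this (the jar name, flags, and plugin versions here are illustrative, following the tool’s README at the time; check the project for current usage):

```sh
# plugins.txt pins plugin versions, one "name:version" per line:
#   configuration-as-code:1.35
#   git:4.0.0
java -jar jenkins-plugin-manager.jar \
  --war /usr/share/jenkins/jenkins.war \
  --plugin-file plugins.txt
```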
Security. It was a hot year for the Jenkins Security Team. There were 5 security advisories for the core and 20 for plugins. In total we disclosed 288 vulnerabilities across the project, including some backlog cleaning for unmaintained plugins. Script Security Plugin was the hottest plugin, with 10 critical fixes addressing various sandbox bypass vulnerabilities. Plain text storage and unprotected credentials were the most common vulnerability type, with 120 disclosures in 2019. This was made possible by hundreds of reports submitted by contributors after code surveys; special thanks to Viktor Gazdag, who reported the most issues and became the Jenkins 2019 Security MVP (check out his story here).
Infrastructure. Got Jenkins? If so, you rely on the Jenkins update centers, website, and issue tracker. All these and many other services are maintained by the Jenkins Infrastructure Team. This year the team handled more than 400 requests in the bug tracker, and many other informal requests. In total, more than 30 people contributed to Jenkins infrastructure this year (website content excluded). We also deployed 4 new services, migrated 7 services from Azure Container Service to Azure Kubernetes Service, and updated many other services. More changes will happen in the coming months, and we are looking for new INFRA team members!
Documentation. In the last quarter alone we had 178 contributors to Jenkins documentation. This includes jenkins.io and other documentation hosted on GitHub (the Wiki is not included). There is also an ongoing migration of plugin documentation from the Jenkins Wiki to GitHub (announcement). Since the beginning of that project in September 2019, more than 150 plugins have been migrated, and they got a significant documentation revamp in the process. You can see the current status at https://jenkins-wiki-exporter.jenkins.io/progress. We are also working on introducing changelog automation in the project: 123 plugins have already adopted the new changelog tools, powered by Release Drafter. Also, we had more than 60 technical blog posts published on jenkins.io.
Configuration as Code was one of the most popular areas this year. The Jenkins Configuration as Code (JCasC) Plugin had more than 30 releases with new features and bug fixes. More than 50 plugins have also been updated to offer better configuration-as-code support. As a result, the JCasC Plugin saw massive adoption this year (from 2000 to almost 8000 installations), and it is now becoming the de facto standard for managing Jenkins as code. This year we also ran our very first CommunityBridge project, devoted to JCasC schema validation and developer tools.
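A minimal sketch of what managing Jenkins as code looks like with JCasC (the keys available beyond these basics depend on which plugins you have installed):

```sh
# jenkins.yaml describes the controller declaratively:
cat > jenkins.yaml <<'EOF'
jenkins:
  systemMessage: "Configured as code; do not edit through the UI"
  numExecutors: 2
EOF

# Point Jenkins at the file on startup:
export CASC_JENKINS_CONFIG=$PWD/jenkins.yaml
```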
Events and outreach programs. In 2019 we participated in multiple conferences, including FOSDEM, DevOps World | Jenkins World, and SCALE. More than 40 Jenkins Area Meetups were organized across the world, and there were many other meetups devoted to Jenkins. We also kept expanding our outreach programs. In total we had 12 students who participated in Google Summer of Code, Outreachy, and the newly introduced Community Bridge. We also had our biggest ever Hacktoberfest, with 664 pull requests and 102 participants. These outreach programs help us deliver new features in Jenkins. For example, this year we added multi-branch Pipeline support for GitLab and a new Plugin Installation Manager Tool during GSoC, and Outreachy resulted in a new Audit Log Plugin.
The year 2020 will be pretty busy for the Jenkins project. There are many long-overdue changes in the project which need to happen if we want the project to succeed. As was written in the board elections blog post, there are many areas to consider: UX revamp, cloud native Jenkins, pluggable storage, etc. In the coming months there will be a lot of discussion in mailing lists and special interest groups, and we invite all teams to work on their roadmaps and to communicate them to the community.
Next month we will participate in FOSDEM, and there will be a Jenkins stand there. On January 31st we will also host a traditional contributor summit in Brussels, where we will talk about next steps for the project in terms of technical roadmaps and project governance. If you are interested in Jenkins, stop by our community booths and join us at the summit! See this thread for more information.
We also plan to continue all our outreach programs. At the moment we are looking for Google Summer of Code 2020 mentors and project ideas (announcement), and we will also be interested in considering non-coding projects as part of other programs like CommunityBridge. We are also working on improving contribution guidelines for newcomers and expert contributors. If you are interested, please contact the Advocacy and Outreach SIG.
And even more
This blog post does not provide a full overview of what changed in the project. The Jenkins project consists of more than 2000 plugins and components which are developed by thousands of contributors. Thanks to them, a lot changes in the project every day. We are sincerely grateful to everybody who participates in the project, regardless of contribution size. Everything matters: new features, bug fixes, documentation, blog posts, well-reported issues, Stack Overflow responses, etc. THANKS A LOT FOR ALL YOUR CONTRIBUTIONS!
So, keep updating Jenkins and exploring new features. And stay tuned, there is much more to come next year!
Jenkins core maintainer and board member. Oleg started using Hudson for Hardware/Embedded projects in 2008 and became an active Jenkins contributor in 2012. Nowadays he leads several Jenkins SIGs, outreach programs (Google Summer of Code, Hacktoberfest) and Jenkins meetups in Switzerland and Russia. Oleg works for CloudBees and focuses on key projects in the community. GitHub Twitter Blog
By Jacqueline Salinas, Director of Ecosystem & Community
Happy New Year everyone! We are so excited to kick off 2020. We are quickly approaching our first birthday this March, which is a huge milestone for the foundation.
The CD Foundation appreciates all of the hard work and contributions from the community in 2019; without your collaboration and contributions it would have been very challenging to reach the end of Year 1.
As we mature and grow, we want to ensure that collaborating with us is easy, so we have streamlined the process for collecting and publishing content. Here’s how to submit your blogs, case studies, PR/news, announcements, and Meetups to the marketing folks at the CD Foundation. Let’s make some magic happen in 2020!
Blogs
We get it… you don’t always have the time to run a meetup or travel to a conference or summit. We encourage the community to participate in other ways such as submitting written content. We would love to hear from you. Here’s how to submit your blog in 3 easy steps:
1. Select your topic and write your blog! Pro tip: Good topics are ones the community has asked for, like project milestones, project roadmap announcements, recaps of community events & conference attendance, etc.
3. Once we receive your submission we will send over a confirmation email and provide you a publishing date, edits & feedback, and we will also collaborate on creating some social media posts to promote your blog through the CD Foundation’s social channels.
Case Studies
Working on something cool? Do you have content that was already published and can get recycled through the CDF as a case study? Get in touch with us and let’s collaborate on a case study. Kick off a conversation with me and schedule some time to chat:
1. Drop us a line and set up a 30-minute call through my Calendly link here: https://calendly.com/jsalinas-cdf, or email me at jsalinas@cd.foundation.
PR/News/Announcements
Are you speaking at an event on behalf of the CD Foundation, speaking about CI/CD technologies, recently interviewed, or aware of an article that will be published soon? Let us know and we will publish it on our website and social channels. Please get Jesse Casman involved, as he is our PR guru.
1. Send your PR/News/Announcement content here: pr@cd.foundation
Meetups
Are you interested in becoming a Meetup organizer or CDF CI/CD Community Ambassador? No CI/CD Meetup in your city?
Here is how to request to join the CD Foundation Meetup Network. We sponsor the quarterly fees, and we will support you wherever possible. Please complete this form and you will be on your way to joining our network of CI/CD Meetups.
There are other ways to get involved in your local Meetup community! You can also join as a Meetup co-organizer and become part of the CD Foundation’s CI/CD Community Ambassador program. Learn more about the program requirements, benefits, and how to get involved.
We are excited to share the latest news with the community about the progress of each project, training & resources, new blog posts, and important updates. We encourage the community to submit content they wish to share with the CI/CD community.
The CD Foundation team will compile all content submitted by members and the community and publish the bulletin in the last week of each month. If you are interested in submitting content, here’s how: