
9 CD Foundation Projects Are Participating in this Year’s Google Summer of Code

By Blog, Staff

The CD Foundation has joined the list of organizations participating in Google Summer of Code (GSoC) this year. GSoC is an annual program aimed at bringing more student developers into open source software development. The CD Foundation projects Spinnaker and Screwdriver joined long-time participant Jenkins in providing mentors for a number of projects for students interested in continuous delivery and software pipeline infrastructure.

In total, 7 Jenkins projects, 2 Spinnaker projects, and 1 Screwdriver project were accepted into this summer’s program. Mentors from many different organizations around the world are pitching in, including CD Foundation ambassadors.

“The CD Foundation is dedicated to supporting open source continuous delivery projects worldwide. Part of that mission includes supporting and encouraging the next generation of talented developers worldwide,” said Tara Hernandez, Senior Engineering Manager, Google Cloud Platform and CD Foundation Technical Oversight Committee member. “Thank you to the students and mentors who work tirelessly to create and innovate for the GSoC. We hope everyone has a fantastic time coding and learning this summer. Congratulations!”

The following is a list of the projects accepted and links to each project description and associated mentors.

Jenkins Projects

Loghi Perinpanayagam – Jenkins Machine Learning Plugin for Data Science

This project provides a plugin for data scientists to integrate Machine Learning Workflow with Jenkins.

Kezhi Xiong – GitHub Checks API for Jenkins Plugins

The GitHub Checks API allows developers to report detailed information from CI integrations, rather than a binary pass/fail build status, on GitHub pages.

stellargo – External Fingerprint Storage for Jenkins

File fingerprinting is a way to track which version of a file is being used by a job/build, making dependency tracking easy.

Rishabh Budhouliya – Git Plugin Performance Improvement

This project used micro-benchmarking principles to create and execute a test suite that compares the GitClient APIs implemented by CliGitAPIImpl and JGitAPIImpl, using average execution time per operation as the performance metric.

Buddhika Chathuranga – Jenkins Windows Services: YAML Configuration Support

Enhance Jenkins master and agent service management on Windows by offering new configuration file formats and improving settings validation.

Zixuan Liu – Jenkins X: Consolidate the use of Apps / Addons

The main aim of the project is to consolidate Apps and Addons inside Jenkins X to avoid confusion.

Sladyn Nunes – Custom Jenkins Distribution Build Service

The main idea behind the project is to build a customizable Jenkins distribution service that could be used to build tailor-made Jenkins distributions.

Spinnaker Projects

Victor Odusanya – Drone CI type for Spinnaker pipeline stage

Add Drone build type as a Spinnaker pipeline stage type.

Moki Daniel – “Continuous Delivery, Continuous Deployments with Spinnaker” 

This project aims to ensure continuous delivery and continuous deployment with Spinnaker: bringing up automated releases, undertaking deployments across multiple cloud providers, and mastering Spinnaker’s built-in deployment best practices.

Screwdriver Project

Supratik Das – Improve SCM Integration

Screwdriver will be improved in two key areas: the introduction of deployment keys for seamless handling of private repositories, and the triggering of builds from external SCM repositories.

Thank you to all participants! We look forward to getting updates and information on progress over the summer. For more details, please continue to visit the CD Foundation blog.

From Red Hat – Part 1: Building Multiarch imageStream with the NFD Operator and OpenShift 4

By Blog, Member

Originally posted on the OpenShift 4.3 blog by Eduardo Arango

Introduction

Using generally available packages (in the form of container images) from an official source or a certified provider comes with a big caveat in relation to performance-sensitive workloads.

These packages may provide ABI compatibility, but they are not optimized for our specialized hardware (like GPUs or high-performance NICs), nor our CPU chip architecture. The best way to address this is to compile your packages (build your images) on your own deployment.

OpenShift provides a way to seamlessly build images based on defined events, called builds. A build is the process of transforming input parameters into a resulting object. Most often, the process is used to transform input parameters or source code into a runnable image. A BuildConfig object is the definition of the entire build process.

The missing part to building hardware-specific images is to orchestrate the build process over the different available resources. In this post you will learn about the Node Feature Discovery (NFD) operator and how to tie it to OpenShift builds to have a hardware-specific image build.

The first part describes the NFD operator and how you can use it to manage the detection of hardware features in the cluster. The second part describes how to create an imageStream from a GitHub webhook and how to use the information from the NFD operator to schedule node-specific builds. The third part presents a sample app to test what you have learned.

The Node Feature Discovery Operator

The Node Feature Discovery Operator (NFD) manages the detection of hardware features and configuration in an OpenShift cluster by labeling the nodes with hardware-specific information. NFD will label the host with node-specific attributes, like PCI cards, kernel or OS version, and many more. See the upstream NFD project for more info.

The NFD operator can be found on the Operator Hub by searching for “Node Feature Discovery”:

After following the install steps, you can go to “Installed Operators” in the OpenShift cluster and see:

Inside, a card instructs you to create an instance:

Click on “Create Instance” to get help from the OpenShift web console, which will auto-generate the needed YAML file and allow you to create the object from the console.

Once the NFD operator is deployed, you can go to a node dashboard and check all the node labels generated by the operator. Here is a sample excerpt of NFD labels applied to the node:
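The values below are an illustrative sketch of what `oc describe node` shows on an amd64 worker, not output from a real cluster:

```bash
$ oc describe node <node-name>
Name:   worker-0
Labels: beta.kubernetes.io/arch=amd64
        feature.node.kubernetes.io/cpu-hardware_multithreading=true
        feature.node.kubernetes.io/cpu-cpuid.AVX2=true
        feature.node.kubernetes.io/kernel-version.full=4.18.0-147.el8.x86_64
...
```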

By reading the generated labels, you can understand the hardware information of the OpenShift node; for example, this node runs an amd64 architecture with multithreading enabled (“beta.kubernetes.io/arch=amd64”, “feature.node.kubernetes.io/cpu-hardware_multithreading=true”).

Defining a BuildConfig

BuildConfig is a powerful tool in OpenShift. OpenShift Container Platform leverages Kubernetes by creating containers from build images and pushing them to a container image registry.

The first step is to create a specific namespace to allocate the builds:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: multiarch-build
  labels:
    openshift.io/cluster-monitoring: "true"
```

For this example, you are pointing the builds to a repository on GitHub. First, you need to create a secret for the GitHub webhook:

```yaml
kind: Secret
apiVersion: v1
metadata:
  name: arch-dummy-github-webhook-secret
  namespace: multiarch-build
data:
  WebHookSecretKey: bXVsdGlhcmNoLWJ1aWxk
---
kind: Secret
apiVersion: v1
metadata:
  name: arch-dummy-generic-webhook-secret
  namespace: multiarch-build
data:
  WebHookSecretKey: bXVsdGlhcmNoLWJ1aWxk
```
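The `WebHookSecretKey` values above are just base64-encoded strings; `bXVsdGlhcmNoLWJ1aWxk` decodes to `multiarch-build`. You can generate your own value like this:

```bash
$ echo -n 'multiarch-build' | base64
bXVsdGlhcmNoLWJ1aWxk
```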

With the namespace and secret in place, you can now create the imageStream and BuildConfig to continuously watch for user-defined triggers to keep the image up to date. Image streams are part of the OpenShift extension APIs: they are named references to container images, and OpenShift extension resources reference container images indirectly through them.

The following YAML files can be generated via the OpenShift Developer web console. Once you have a generated imageStream and BuildConfig YAML, you need to make sure they look like the following:

```yaml
apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
  labels:
    app: arch-dummy
  name: arch-dummy
spec: {}
---
kind: BuildConfig
apiVersion: build.openshift.io/v1
metadata:
  name: arch-dummy
  namespace: multiarch-build
  selfLink: >-
    /apis/build.openshift.io/v1/namespaces/multiarch-build/buildconfigs/arch-dummy
  labels:
    app: arch-dummy
    app.kubernetes.io/component: arch-dummy
    app.kubernetes.io/instance: arch-dummy
    app.kubernetes.io/part-of: arch-dummy-app
  annotations:
    app.openshift.io/vcs-ref: master
    app.openshift.io/vcs-uri: 'https://github.com/ArangoGutierrez/Arch-Dummy'
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ""
    beta.kubernetes.io/arch: amd64
  resources:
    requests:
      cpu: "100m"
      memory: "256Mi"
  output:
    to:
      kind: ImageStreamTag
      name: 'arch-dummy:latest'
  successfulBuildsHistoryLimit: 3
  failedBuildsHistoryLimit: 3
  strategy:
    type: Docker
    dockerStrategy:
      dockerfilePath: build/Dockerfile
  postCommit: {}
  source:
    type: Git
    git:
      uri: 'https://github.com/ArangoGutierrez/Arch-Dummy'
      ref: master
    contextDir: /
  triggers:
    - type: ImageChange
      imageChange: {}
    - type: GitHub
      github:
        secretReference:
          name: arch-dummy-github-webhook-secret
    - type: ConfigChange
  runPolicy: Parallel
```

There are three lines worth noting in the above YAML (not auto-generated via the Developer web console), where you leverage the NFD operator labels to orchestrate the image builds on top of nodes with specific features by using the nodeSelector key. For example, to schedule container builds only on worker nodes with amd64 architecture:

```yaml
  nodeSelector:
    node-role.kubernetes.io/worker: ""
    beta.kubernetes.io/arch: amd64
```
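To sanity-check which nodes satisfy this selector (and hence where builds can be scheduled), you can filter nodes by the same labels:

```bash
$ oc get nodes -l 'node-role.kubernetes.io/worker,beta.kubernetes.io/arch=amd64'
```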

Now, with the BuildConfig created, you can look up the generated webhook URLs:

```bash
[eduardo@fedora-ws image_stream]$ oc describe bc/arch-dummy
Name:           arch-dummy
Namespace:      multiarch-build
Created:        5 days ago
Labels:         app=arch-dummy
                app.kubernetes.io/component=arch-dummy
                app.kubernetes.io/instance=arch-dummy
                app.kubernetes.io/part-of=arch-dummy-app
Annotations:    app.openshift.io/vcs-ref=master
                app.openshift.io/vcs-uri=https://github.com/ArangoGutierrez/Arch-Dummy
Latest Version: 2

Strategy:        Docker
URL:             https://github.com/ArangoGutierrez/Arch-Dummy
Ref:             master
ContextDir:      /
Dockerfile Path: build/Dockerfile
Output to:       ImageStreamTag arch-dummy:latest

Build Run Policy: Serial
Triggered by:     Config
Webhook Generic:
    URL:      https://api.4.z.y-ed-dev.blog-openshift.devcluster.openshift.com:6443/apis/build.openshift.io/v1/namespaces/multiarch-build/buildconfigs/arch-dummy/webhooks/<secret>/generic
    AllowEnv: false
Webhook GitHub:
    URL:      https://api.4.z.y-ed-dev.blog-openshift.devcluster.openshift.com:6443/apis/build.openshift.io/v1/namespaces/multiarch-build/buildconfigs/arch-dummy/webhooks/<secret>/github
Builds History Limit:
    Successful: 5
    Failed:     5

Build          Status      Duration    Creation Time
arch-dummy-1   complete    1m37s       2020-03-31 17:35:32 -0400 EDT

Events: <none>
```

With this URL, you can then follow the GitHub webhook instructions for a ready-to-work imageStream.
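You can also smoke-test the generic webhook by POSTing to it directly; a minimal sketch, with `<secret>` replaced by the value from the webhook secret created earlier:

```bash
$ curl -X POST -k \
  "https://api.4.z.y-ed-dev.blog-openshift.devcluster.openshift.com:6443/apis/build.openshift.io/v1/namespaces/multiarch-build/buildconfigs/arch-dummy/webhooks/<secret>/generic"
```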

To learn more about OpenShift Builds and more advanced use cases, you can go here.

Deploy an Example

To test what you just learned today, you can create a buildConfig of Arch-Dummy as a didactic confirmation that the feature-specific build is working. To do so, deploy the image as detailed on https://learn.openshift.com/introduction/deploying-images/ by selecting the built image “arch-dummy:latest”.

This image was built from the repo https://github.com/ArangoGutierrez/Arch-Dummy as seen in the imageStream.yaml.
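If you prefer the CLI to the guided tutorial, a minimal sketch of deploying and exposing the built image (the route hostname will differ in your cluster):

```bash
$ oc -n multiarch-build new-app --image-stream=arch-dummy:latest
$ oc -n multiarch-build expose svc/arch-dummy
$ oc -n multiarch-build get route arch-dummy
```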

This application generates a small API service with three endpoints:

/ -> Will retrieve information about the app

/version -> Will retrieve information about the app binary and where it was built

Example:
```json
{Git Commit:"6825a2f2a5b6a60278868260d8cdb51d192d9e63",CPU_arch:"Intel(R) Xeon(R) CPU E5-2686 v4 @",Built:"Tue Mar 31 21:43:15 UTC 2020",Go_version:"go1.12.8 linux/amd64"}
```

/cpu -> Will retrieve information about the node on which the app is currently running

```json
{name:"Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz",model:"79",family:"6"}
```
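Assuming a route like the one created in the deployment sketch above (the hostname is a placeholder), you can exercise the endpoints with curl:

```bash
$ curl http://arch-dummy-multiarch-build.apps.example.com/version
$ curl http://arch-dummy-multiarch-build.apps.example.com/cpu
```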

This dummy arch app allows you to test whether the image was built and orchestrated correctly.

Conclusion

Building hardware-specific images is easy when you leverage internal OpenShift tooling like imageStreams, coupled with the Node Feature Discovery Operator to manage the detection of hardware features and configuration in the OpenShift cluster. OpenShift’s simplicity lets developers define the nodeSelector key to orchestrate image builds on target hardware. This could prove very useful for image-build processes that involve AI/ML training requiring GPUs and other special resources.

Future Work

In this blog post, you saw a quick example of how to tie together the Node Feature Discovery Operator and OpenShift imageStreams for simple hardware-specific image builds. The following post goes deeper into OpenShift, replacing the imageStream build with OpenShift Pipelines and another operator, the Special Resource Operator, to build more complex images and deploy them in the cluster.

Introducing our Newest CDF Ambassador – Zhao Xiaojie (Rick)

By Blog, Staff

My name is Zhao Xiaojie (Rick). I’m a software engineer at Alauda, where I’m responsible for developing a CI/CD platform. I’m also the leader of the Chinese Localization SIG and a press contact for Jenkins in China, where a large developer community exists, largely invisible to the West!

I am passionate about promoting the Jenkins community and have done so in several ways, such as running Jenkins official social media channels, encouraging people to contribute tech articles, giving speeches about Jenkins at related conferences, and maintaining the Chinese Jenkins website. 

I am also the author of several open source projects, such as the Simplified Chinese Plugin and Jenkins CLI, and I have participated in Google Summer of Code (GSoC) twice as a mentor.

I’m a very active author and contributor in open source. I believe that CI/CD can speed the shipping of business value for all teams. Advocating for CI/CD open source projects is an excellent way to help other teams and individuals adopt DevOps best practices. I enjoy giving public speeches and organizing meetups related to CI/CD. In my opinion, working with the CDF offers me a lot of opportunities to spread information about open source projects. The CDF ambassador program can help us gather many more CI/CD fans.

You can find me on GitHub.

From Armory – Safety Is No Accident

By Blog, Member

Originally posted on the Armory blog by Chad Tripod

Continuous Delivery and Deployment is changing the way organizations deliver software. Over the years, software delivery has morphed into a time-consuming process, with countless validations and approvals to ensure the code is safe to present to users. And with good reason: releasing bad software can severely impact a business’s brand, popularity, and even revenue. In this day and age, with customer sentiment immediately feeding back into public visibility, companies are taking even stricter measures to ensure the best software delivery and user experience.

When deploying software to production, we use words like “resilience” to talk about how the code runs in the wild. For the optimists, we use words such as “Availability Zones,” and for those more pessimistic about deployments, we say “Failure Domains.” When I was architecting and deploying applications for Apple, eBay, and others, I always built for failure. I was always more interested in how things behave when we break them, and less so in the steady state. I’d relish unleashing tools like Simian Army to wreak havoc on what we had built, to ensure code and experience weren’t impaired.

Nowadays, there is a much better approach to ensuring safety. Continuous Delivery (CD) has enabled organizations to shift left, empowering developers with access to deploy directly to production, but with the guardrails needed to make sure safety isn’t compromised for speed. Luckily, the world-class engineers at Netflix and Google have built a platform: Spinnaker. Spinnaker addresses deployment resiliency concerns and empowers developers with toolsets to validate and verify as a built-in part of delivering code.

Now, let’s break down the modern model and review the tools available in the CI/CD workflow. 

Spinnaker – Spinnaker is a high-scale, multicloud continuous delivery (CD) tool. While leveraging the years of software delivery best practices that Netflix and Google built into Spinnaker, users get to serialize and automate all the decisions baked into their current software delivery process. Approvals, environments, testing, failures, feature flagging, ticketing, etc. are all completely automated and shared across the whole organization. The end result? Built-in safety that allows DevOps teams to deploy software with great velocity.

Continuous Verification – Leveraging real-time KPIs and log messages to dictate the health of code and environment. Spinnaker’s canary deployments ingest real-time metrics from data platforms including Datadog, New Relic, Prometheus, Splunk, and Istio into a service called Kayenta. Kayenta runs these time-series metrics through the Mann-Whitney algorithm developed by Netflix and Google, and compares release metrics to current production metrics. Spinnaker will then adjust or roll back deployments automatically based on success criteria. This allows math and data, rather than manual best guesses, to dictate in real time whether the user is getting the best experience from the service.

Chaos Engineering – Why wait for things to break in production to fix them? There are better ways. Chaos engineering is the practice of breaking things in pre-prod environments to understand how the code behaves when it’s exercised. What happens when a dependent service goes offline? How do the other services in the application behave? How does Kubernetes deal with it? What about shutting off a process in a service? These are the measures Gremlin and Chaos Monkey give your developers. Now testing is much more than what your CI server does; it takes into consideration the environments in which code is deployed.

Service Mesh – Service meshes are a Kubernetes traffic management solution. Kubernetes applications can traverse many clusters, regions, and even clouds. Service meshes are a way to manage traffic flows, traceability, and, most importantly with ephemeral workloads, observability. There are many flavors of service mesh to choose from. Istio/Envoy has the most visibility, but you can also implement service meshes from NGINX, Consul, or solo.io, or get enterprise support from companies like F5/NGINX+ or Citrix, which offer elevated ingress features. Service meshes in the context of software delivery provide a very granular canary release. Instead of blindly sending traffic to a canary version for testing, you can instead programmatically use layer 7 traffic characteristics such as URI, host, query, path, and cookie to steer traffic. This allows you to switch only certain users, business partners, or regions to new versions of software.

DevSecOps – In my years seeing changes in technology and how we deliver software to end users, one thing is for sure: security wants to understand the risks in what you’re doing. And with good reason. Security exploits can leak sensitive information or even expose an organization to malicious hackers. Luckily, this new deployment world allows security to run their scans and validations in an automated fashion. Solutions range from Twistlock, Artifactory Xray, and Aqua to Signal Sciences, and more. There are many DevSecOps solutions, so it is a good thing to know that Spinnaker supports them all!

Spinnaker stages automate developer tools:

End Result – As you put together your new cloud native toolchain, there are many ways you can improve the way you release software. I urge you to deploy the tools you need for the service you are providing, not only based on what a vendor is saying. Over time, implementing guardrails will increase your innovation and accelerate your time to market. For many, this will be a competitive advantage over those who move slowly, and investing in these areas will, over time, improve the hygiene of your software code, which will provide stability in your future releases. By de-risking the release process and improving safety, end users are given the best possible experience with your software.

Jenkins & Spinnaker: Tale As Old As Screen Time

By Blog, Project

CDF Newsletter – May 2020 Article
Subscribe to the Newsletter

By Rosalind Benoit

Don’t worry. As long as you hit that wire with the connecting hook at precisely eighty-eight miles per hour the instant the lightning strikes the tower…everything will be fine.

– Dr. Emmett Brown, “Back To The Future”

If you’re reading this, you’ve probably experienced the feeling of your heart racing — hopefully with excitement, but more likely, with anxiety — as a result of your involvement in the software development lifecycle (SDLC). At most organizations, artifacts must traverse a complex network of teams, tools, and constraints to come into being and arrive in production. As software becomes more and more vital to social connection and economic achievement, we feel the pressure to deliver transformational user experiences.

No company has influenced human expectations for reliably delightful software experiences more than Netflix. After 10 years of supporting large-scale logistics workloads with its mail-order business, Netflix launched an addictive streaming service in 2007. It soon experienced SDLC transformation at an uncommonly rapid pace, and at massive scale. After pioneering a new entertainment standard, Netflix survived and innovated through all the learnings that come with growth.

We’ll soon have one more reason to be glad it did; Back to the Future arrives on Netflix May 1!

https://www.youtube.com/watch?v=KqYvQchlriY

Jenkins at Netflix

You may know Netflix as the birthplace of open source Spinnaker, but it is also a perennial Jenkins user. As early cloud adopters, Netflix teams quickly learned to automate build and test processes, and heavily leveraged Jenkins, evolving from “a single massive Jenkins master in our datacenter, to running 25 Jenkins masters in AWS” as of 2016. 

Jenkins changed the software development and delivery game by freeing teams from rigid, inflexible build processes and moving them into continuous integration. With test and build automation, “it works on my laptop” became a moot point. A critical leap for software-centric businesses like Netflix, this ignited a spark of the possible. 

As Jenkins became an open source standard, engineers leveraged it to prove the power of software innovation, and the difference that velocity makes to improving user experiences and business outcomes. This approachable automation still works, and most of us still use it, over 15 years after its first release. 

Over time, Netflix teams found it increasingly difficult to meet velocity, performance, and reliability demands when deploying their code to AWS with Jenkins alone. Too much technical debt had accumulated in their Jenkins and its scripts, and developers, feeling the anxiety, craved more deployment automation features. So, Netflix began to build the tooling that evolved into today’s Spinnaker. 

Spinnaker & Delegation

Much like what Jenkins did for testing and integration, Spinnaker has done for release automation. It allows us to stitch together the steps required to safely deliver updates and features to production; it delegates pipeline stages to systems across the toolchain, from build and test, to monitoring, compliance, and more. Spinnaker increasingly uses its plugin framework to integrate tools. However, its foundational Jenkins integration exists natively, using triggers to pick up artifacts from it, and stages to delegate tasks to it. With property files to pass data for use in variables further down the pipeline, and concepts like Jenkins’ “unstable build” built in, Spinnaker can leverage the power of existing Jenkins assets. 

Then, out of the box, Spinnaker adds the “secret sauce” pioneered by companies like Netflix to deliver the software experiences users now expect. With Spinnaker, you can skip change approval meetings by adding manual judgments to pipelines where human decisions are required. You can perform hotfixes with confidence and limit the blast radius of experiments by using automated canary analysis or your choice of deployment strategy. Enjoy these features when deploying code or functions to any cloud and/or Kubernetes, without maintaining custom scripts to architect pipelines. 

As a developer, I found that I had the best experience using Jenkins for less complicated jobs and pipelines; even with much of the process defined as code, I didn’t always have enough context to fully understand the progression of the artifact or debug. Since joining the Spinnaker community, I’ve learned to rely on Jenkins stages for discrete steps like applying a Chef cookbook or signalling a Puppet run. I can manage these steps from Spinnaker, where, along with deployment strategies and native infrastructure dashboards, I can also experiment with data visualization using tools like SumoLogic, and even run terraform code. 

It’s simple to get started with the integration. I use Spinnaker’s Halyard tool to add my Jenkins master, and boom:
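For reference, a minimal sketch of that Halyard setup (the master name, URL, and user are placeholders; Halyard prompts for the password or API token):

```bash
hal config ci jenkins enable
hal config ci jenkins master add my-jenkins-master \
  --address https://jenkins.example.com \
  --username my-user \
  --password
hal deploy apply
```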

If Jenkins is a Swiss Army knife, Spinnaker is a magnetic knife strip. Their interoperability story is the story of continuous delivery’s evolution, and allows us to use the right tool for the right job:

  • Jenkins: not only do I have all the logic and capability needed to perform your testing, integration, and deployment steps, I’m also an incredibly flexible tool with a plugin for every special need of every development team under the sun. I’m game for any job!
  • Spinnaker: not only can I give your Jenkins jobs a context-rich home, I also delegate to all your other SDLC tools, and visualize the status and output of each. My fancy automation around deployment verifications, windows, and strategies makes developers happy and productive!

My first real experience with DevOps was a Jenkins talk delivered by Tracy Ragan at a conference in Albuquerque, where I worked as an (anxious) sysadmin for learning management systems at UNM. It’s amazing to have come full circle and joined the CDF landscape as a peer from a fellow member company. I look forward to aiding the interoperability story as it unfolds in our open source ecosystem. We’re confident the tale will transform software delivery, yet again. 

Join Spinnaker Slack to connect with other DevOps professionals using Jenkins and Spinnaker to deliver software with safety and velocity!

From Armory – The World of Jenkins: Better #withSpinnaker

By Blog, Member
Jenkins and Spinnaker, Better Together

Originally posted on the Armory blog by Rosalind Benoit

Folks in the DevOps community often ask me, “I’m already using Jenkins, so why should I use Spinnaker?” We’re hosting a virtual talk to address the question! Register here to join us 3/26 and learn how Jenkins and Spinnaker cooperate for safe, scalable, maintainable software delivery.

A delivery engineer I spoke with last week said it best:

“I came from a world of using Jenkins to deploy. It’s great but, you’re just modifying Jenkins jobs. It can do a lot, but it’s like that line in Jurassic Park – ‘Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.’”

Many of us came from that world: we built delivery automation with scripts and tools like Jenkins, CircleCI, Bamboo, and TeamCity. We found configuration management, and used Puppet or Ansible to provision infrastructure in our pipelines as code. We became addicted to D.R.Y. (don’t repeat yourself), and there is no looking back.

Jenkins provides approachable automation of continuous integration steps. Spinnaker works with Jenkins to pick up and deliver build artifacts, and to delegate pipeline stages. As a true continuous delivery platform, Spinnaker codifies your unique software delivery culture and processes to your comfort level. It also adds production-ready value to your pipelines:

  • Turnkey automation of advanced delivery strategies such as canary deployments
  • One-click rollbacks
  • Single pane of glass to view deployments, applications, server groups, clusters, load balancers, security groups, and firewalls
  • Centralized API to automate and integrate across your toolchain

Jenkins taught us many lessons. It popularized the use of imperative pipelines to execute ordered steps in an SDLC. It taught us that centralizing delivery workflows into one platform makes strategic sense in scaling operations. At the same time, especially when used for deployments, it suffers from instability and maintenance overhead brought on by unchecked plugin sprawl. It struggles to offer a scalable model for managing multiple jobs and distributed apps. But the way it consolidated SDLC tasks within a full-featured GUI empowered developer teams to start doing delivery.

In the new world of fast innovation through immutable infrastructure, Spinnaker has adapted that visibility to the realms of cloud and cloud native. It provides a centralized vantage point on all of your ephemerally packaged applications, in their many variations. Within your pipelines, its guardrails identify invalid or non-compliant infrastructure before deployment even happens. Spinnaker’s smart delivery workflows insulate customers and end users from impact to their software experience.

This sense of safety is Jenkins’ missing ingredient. Jenkins introduced a world where developers could independently chain together a path to production. It enabled us to improve our efficiency and code quality through testing and build automation, with self-service. This giant technological shift sparked a move away from waterfall development and ITIL-style delivery.

But, culture cannot change overnight. Developers who exercised this newfound power struck terror in the hearts of those accountable for availability and software-driven business goals. Culture lagged behind tooling, sparking fear and risk aversion. That fear still permeates many organizations, allowing baggage-free startups and the most nimble companies to digitally disrupt the status quo. These innovators prove that delivering highly valued interactions through software means increased profit and influence. Enter Armory Spinnaker.


Watch Armory CTO Isaac Mosquera’s Supercharge Your Deployments With Spinnaker and Jenkins presentation at CD Summit, or check out the longer version with Q&A at DeliveryConf.

Stop spending time and talent knitting your toolchain together with pipeline steps that rely on brittle, expensive-to-maintain scripts and repetitive GUI fiddles! Attend “I have Jenkins; why do I need Spinnaker?” to learn more about how Spinnaker can free your developers and evolve your continuous delivery game.

From Armory – Scaling Spinnaker at Salesforce: The Life of a Cloud Ops Architect

By Blog, Member
Salesforce & Spinnaker

Originally posted on the Armory blog by Rosalind Benoit

I met Edgar Magana at Spinnaker Summit last year, when he spoke during Armory’s keynote as one of six Spinnaker champions. The energy and enthusiasm he brings to advocating for Spinnaker contrasts with his intensity in approaching his role as operator of mega-scale cloud infrastructure. But the more I get to know him, the more I understand that it’s one and the same. To enable Salesforce’s application owners to safely evolve software, he must ensure homogeneous, predictable models for continuous delivery. Spinnaker has helped him make that a reality.

“Across multiple environments, we have to enable different models for Spinnaker based on security requirements,” he explains as he shares his standardization strategy. 

Salesforce logo

“We templatize all the pipelines for consistency across services, with two types: EC2 instance deployments, and Kubernetes cluster pipelines. For Kubernetes, we require a lot of security hardening, and we need to use the same logging and monitoring mechanisms for all of our clusters.”

Every time Edgar’s team discovers a new configuration for Kubernetes, they upgrade that pipeline, which should trigger a service owner to relaunch the pipeline with new parameters, or sometimes, destroy their Kubernetes cluster and create a new one. “We make all these changes in development and staging first, of course,” he says, noting the dev, pre-prod, and prod Spinnaker instances his team maintains. 

A security requirement that artifacts not be created in the same place that deploys services imposes an added complexity. It required Edgar’s team to innovate further, and split the baking process across two different Spinnaker instances. Luckily, they could configure this out of the box with Spinnaker by overriding parameters; Edgar appreciates that Spinnaker doesn’t hard-code a lot of configuration, as rigidity wouldn’t support Salesforce’s unique requirements. That has also allowed his team to create a “heavy layer of automation on top of Spinnaker,” providing guardrails for application owners.

Our conversation turns to the Spinnaker Ops SIG (Special Interest Group), which Edgar recently founded. A solid kickoff meeting produced several action items to be completed before the next scheduled meeting on February 27th at 10 AM (always a good sign).

Most importantly at this stage, Edgar says:

“We want to reach out to more operators, people who are either struggling or evaluating Spinnaker. We need operators in different stages — super experts who control everything, like those at Netflix and Airbnb, operators that are getting there, like us at Salesforce, and those in the initial evaluation stage. The goal of the SIG is to have a place where operators can exchange use cases, and have a unified voice, just like other Spinnaker SIGs, and a path for specific features we want incorporated into Spinnaker. The community needed a place to discuss how to operate Spinnaker better. As an operator of large-scale infrastructure, I don’t want to share this system with only a few companies. We want to welcome new users and operators, and facilitate their transition from the POC (proof-of-concept) environment to the real thing. This will help us understand what kinds of features are more important.”


Does the Ops SIG also provide a place to vent and empathize? I sure hope so! “That’s the life of a cloud operations architect,” Edgar says when he has to reschedule our meeting, “we get called all the time, from account issues to Spinnaker and Kubernetes configuration,” and lots more; indeed, once when I ping Edgar about this blog, he’s in a “war room” (boy, I sure don’t miss my Ops days right now!). But just like Armory, Edgar values empowering developers, and safely pushing control of applications and their infrastructure to the edge to fuel innovation. Better software is worth the hard work!

Another of Edgar’s goals for the Ops SIG: create reference architecture documentation for HA (high availability) and disaster recovery. “I want new users of Spinnaker to say, ‘I don’t need to reinvent the wheel; I’ll just follow these HA guidelines.’” Architecture collateral will help Platforms and DevOps teams convince leadership that Spinnaker is a good investment for the company’s continuous delivery of software. This is where Edgar’s warm enthusiasm and operator’s intensity meet: empowering developers, empowering the community, empowering the planet.

I look forward to working more with Edgar and his team at Salesforce as part of the Ops SIG, our April Spinnaker Gardening Days online hackathon, and more. This is the kind of open-source heroism that will usher in the new industrial revolution!

Check out Edgar’s talk on Salesforce & Spinnaker from last winter’s Spinnaker Summit below, or hop over to the registration page for Armory’s upcoming, “I have Jenkins; why do I need Spinnaker?” webinar to reserve your spot!


Watch Edgar’s 8-minute talk, part of Isaac Mosquera’s keynote:

From Armory – Kubernetes Native: Introducing Spinnaker Operator

By Blog, Member

Originally posted on the Armory blog by German Muzquiz

Spinnaker Operator is now Beta!

With Spinnaker Operator, you can define all of Spinnaker’s configuration in native Kubernetes manifest files, as part of the Kubernetes kind “SpinnakerService,” defined in its own Custom Resource Definition (CRD). With this approach, you can customize, save, deploy, and generally manage Spinnaker configurations in a standard Kubernetes workflow for managing manifests. No need to learn a new CLI like Halyard, or worry about how to run that service.

The Spinnaker Operator has two flavors to choose from, depending on which Spinnaker you want to use: Open Source or Armory Spinnaker.

With the Spinnaker Operator, you can:

  • Use “kubectl” to install and configure a brand new Spinnaker (OSS or Armory Spinnaker).
  • Take over an existing Spinnaker installation deployed by Halyard and continue managing it with the operator going forward.
  • Use Kustomize or Helm Charts to manage different Spinnaker installations with slight variations.
  • Use Spinnaker profile files for providing service-specific configuration overrides (the equivalent of clouddriver-local.yml, gate-local.yml, etc.); see the sketch after this list.
  • Use Spinnaker service settings files to tweak the way Deployment manifests for Spinnaker microservices are generated.
  • Use any raw files needed by configs in the SpinnakerService manifest (i.e. packer templates, support config files, etc.)
  • Safely store secret-free manifests under source control, since a SpinnakerService manifest can contain references to secrets stored in S3, GCS, or Vault (Vault is Armory Spinnaker only).
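As a sketch of the profile-file support mentioned above, service-specific overrides live under `spec.spinnakerConfig.profiles` in the SpinnakerService manifest; the `clouddriver` settings here are illustrative, not from the original post:

```yaml
apiVersion: spinnaker.armory.io/v1alpha2
kind: SpinnakerService
metadata:
  name: spinnaker
spec:
  spinnakerConfig:
    profiles:
      clouddriver:            # merged as clouddriver-local.yml
        artifacts:
          github:
            enabled: true
```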

Additionally, Spinnaker Operator has some exclusive new features not available with other deployment methods like Halyard:

  • Spinnaker secrets can be read from Kubernetes secrets.
  • Spinnaker is automatically exposed with Kubernetes service load balancers (optional).
  • Experimental: Accounts can be provisioned and validated individually by using a different SpinnakerAccount manifest, so that adding new accounts involves creating a new manifest instead of having everything in a single manifest.

Let’s look at an example workflow.

Assuming you have stored SpinnakerService manifests under source control, you have a pipeline in Spinnaker to apply these manifests automatically on source control pushes (Spinnaker deploying Spinnaker) and you want to add a new Kubernetes account:

  1. Save the kubeconfig file of the new account in a Kubernetes secret, in the same namespace where Spinnaker is installed (see the sketch after this list).
  2. Checkout Spinnaker config repository from source control.
  3. Add a new Kubernetes account to the SpinnakerService manifest file, referencing the kubeconfig in the secret:
```yaml
apiVersion: spinnaker.armory.io/v1alpha2
kind: SpinnakerService
metadata:
  name: spinnaker
spec:
  spinnakerConfig:
    config:
      version: 2.17.1   # the version of Spinnaker to be deployed
      persistentStorage:
        persistentStoreType: s3
        s3:
          bucket: acme-spinnaker
          rootFolder: front50
+     providers:
+       kubernetes:
+         enabled: true
+         accounts:
+           - name: kube-staging
+             requiredGroupMembership: []
+             providerVersion: V2
+             permissions: {}
+             dockerRegistries: []
+             configureImagePullSecrets: true
+             cacheThreads: 1
+             namespaces: []
+             omitNamespaces: []
+             kinds: []
+             omitKinds: []
+             customResources: []
+             cachingPolicies: []
+             oAuthScopes: []
+             onlySpinnakerManaged: false
+             kubeconfigFile: encryptedFile:k8s!n:spinnaker-secrets!k:kube-staging-kubeconfig # secret name: spinnaker-secrets, secret key: kube-staging-kubeconfig
+         primaryAccount: kube-staging
```
  4. Commit the changes and open a Pull Request for review.
  5. The Pull Request is approved and merged.
  6. A Spinnaker pipeline automatically runs and applies the updated manifest.
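For step 1 above, a minimal sketch of creating that secret (the kubeconfig path and namespace are placeholders; the secret name and key match the `encryptedFile` reference in the manifest):

```bash
kubectl -n spinnaker create secret generic spinnaker-secrets \
  --from-file=kube-staging-kubeconfig=./kubeconfig-kube-staging
```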

We hope that the Spinnaker Operator will make installing, configuring and managing Spinnaker easier and more powerful. We’re enhancing Spinnaker iteratively, and welcome your feedback.

Get OSS Spinnaker Operator (documentation)

Get Armory Spinnaker Operator (documentation)

Interested in learning more about the Spinnaker Operator? Reach out to us here or on Spinnaker Slack – we’d love to chat!

From Armory – Why Armory Has a Remote Work Culture

By Blog, Member

Originally posted on the Armory blog by Andrew Backes

Armory Has a Remote Work Culture

At Armory, we are intensely focused on building our culture, not just building our product. Our culture is the operating system of our company, underpinning and supporting everything that we do.

From the get-go, we decided that Armory’s culture was going to be designed with intentionality to be remote work friendly. Armory is built on Spinnaker, an open source project created by Netflix and Google. One of the incredible features of open source software is that it is built through the collaboration of talented individuals and teams distributed all over the world. To align with the vibrant Spinnaker open source community, we doubled down on building a strong remote work culture. Today, more than half of our company works remotely, with the remainder working at our HQ in San Mateo, CA.

What Does this Mean in Practice?

For many companies, the experience of remote workers is a secondary concern, if it is even thought of at all. But remote workers face their own unique experiences, benefits, and challenges. At Armory, we acknowledge and welcome this unique experience, approach it with empathy and understanding, and create feedback mechanisms to ensure that our on-site and remote tribals are all having the best possible experience.

Some of the things we do at Armory include:

  • Set up Zoom stations all over the office to make it seamless to hop on a video chat and have synchronous, “face-to-face” communication
    • This includes the always-on camera and screen in the main section of our office so that we can all eat lunch “together”
  • Rotate meeting leadership between on-site and remote tribals, to continually ensure that the needs of remote tribals are being met
  • Provide a flexible expense policy for remote workers to ensure that they have the best, most productive remote setup for their individual needs
  • Organize frequent in-person events (team and company-wide offsites, holiday parties, the Spinnaker Summit, and others) to provide everyone with the opportunity for in-person relationship building
  • Default to having conversations in a public Slack channel to ensure that everyone can participate, voice their perspectives, and share in the knowledge
  • Create unstructured time for “high-bandwidth” communication (over Zoom and in-person) for people to get to know each other outside of a business context
    • Specifically within the engineering team, many of the scrum teams dedicate a chunk of their daily standup time to the type of general team syncing and catch-up that happens more naturally in-person in the office

A high-risk moment for Armory! Most of the engineering team in one single elevator, on the way to dinner during a 3-day offsite. Everything turned out fine, and dinner was delicious.

Why Do We Invest in the Remote Experience?

All of these initiatives take time and effort. Why do we bother, instead of leaving it up to remote tribals to conform to the working cadence of the tribals who are at HQ?

One simple answer is that it comes down to people. We want to work with the best people all over the world, not just the best people in the Bay Area. And that means embracing remote work and maximizing the remote work experience.

For me personally, my main role as the Head of Engineering is to empower the engineering team. That means creating the visibility and shared context so that engineers have the information that they need to make great decisions to positively impact the company and the Spinnaker community. Furthermore, continual growth and improvement is a core value at Armory. It is imperative that each of us continue to learn and grow, not just in our technical depth but also in understanding how each other work, how we can function as a better, more cohesive team, and how we can strive together for larger goals.

If I am not investing in the right tools, processes, and culture to foster that shared context and continual growth across the entire engineering team, then I am not doing my job.

Armory is hiring polyglot engineers, as well as tribals across the entire organization. Check out open roles and learn more about what life is like at Armory at armory.io/careers. We’d love to hear from you!

From Jenkins – Google Summer of Code 2019 Report

By Blog, Project

Originally posted on the Jenkins blog by Martin d’Anjou, Jeff Pearce, Oleg Nenashev, and Marky Jackson

Google Summer of Code is much more than a summer internship program, it is a year-round effort for the organization and some community members. Now, after the DevOps World | Jenkins World conference in Lisbon and final retrospective meetings, we can say that GSoC 2019 is officially over. We would like to start by thanking all participants: students, mentors, subject matter experts and all other contributors who proposed project ideas, participated in student selection, in community bonding and in further discussions and reviews. Google Summer of Code is a major effort which would not be possible without the active participation of the Jenkins community.

In this blogpost we would like to share the results and our experience from the previous year.

Results

Five GSoC projects were successfully completed this year: Role Strategy Plugin Performance Improvements, Plugins Installation Manager CLI Tool/Library, Working Hours Plugin – UI Improvements, Remoting over Apache Kafka with Kubernetes features, and Multi-branch Pipeline support for GitLab SCM. We will talk about the projects a little later in the document.

Highlights

Project details

We held the final presentations as Jenkins Online Meetups in late August, and Google published the results on Sept 3rd. The final presentations can be found here: Part 1, Part 2, Part 3. We also presented the 2019 Jenkins GSoC report at the DevOps World | Jenkins World San Francisco and at the DevOps World | Jenkins World 2019 Lisbon conferences.

In the following sections, we present a brief summary of each project, links to the coding phase 3 presentations, and to the final products.

Role Strategy Plugin Performance Improvements

Role Strategy Plugin is one of the most widely used authorization plugins for Jenkins, but it has never been famous for performance due to architecture issues and regular expression checks for project roles. Abhyudaya Sharma was working on this project together with his mentors: Oleg Nenashev, Runze Xia, and Supun Wanniarachchi. He started the project by creating a new Micro-benchmarking Framework for Jenkins Plugins based on JMH, created benchmarks, and achieved a 3501% improvement in some real-world scenarios. Then he went further and created a new Folder-based Authorization Strategy Plugin which offers even better performance for Jenkins instances where permissions are scoped to folders. During his project Abhyudaya also fixed the Jenkins Configuration-as-Code support in Role Strategy and contributed several improvements and fixes to the JCasC Plugin itself.

Role strategy performance improvements

Plugins Installation Manager CLI Tool/Library

Natasha Stopa was working on a new CLI tool for plugin management, which should unify features available in other tools like install-plugins.sh in Docker images. It also introduced many new features, like YAML configuration format support and listing of available updates and security fixes. The newly created tool should eventually replace the previous ones. Natasha’s mentors were Kristin Whetstone, Jon Brohauge, and Arnab Banerjee. Also, many contributors from the Platform SIG and the JCasC plugin team joined the project as key stakeholders and subject-matter experts.

Plugin Manager Tool YAML file
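A plugin list in the tool’s YAML format looks roughly like the sketch below; the plugin IDs and versions are illustrative, not taken from the original screenshot:

```yaml
plugins:
  - artifactId: git
    source:
      version: "4.2.2"
  - artifactId: configuration-as-code
    source:
      version: latest
```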

Working Hours Plugin – UI Improvements

The Jenkins UI and frontend framework are a common topic in the Jenkins project, especially in recent months after the new UX SIG was established. Jack Shen was working on exploring new ways to build the Jenkins web UI together with his mentor Jeff Pearce. Jack updated the Working Hours Plugin to use UI controls provided by standard React libraries. Then he documented his experience and created a template for plugins with a React-based UI.

Web UI controls in React

Remoting over Apache Kafka with Kubernetes features

Long Le Vu Nguyen was working on extended Kubernetes support in the Remoting over Apache Kafka Plugin. His mentors were Andrey Falco and Pham Vu Tuan, who was our GSoC 2018 student and the plugin creator. During this project Long added a new agent launcher which provisions Jenkins agents in Kubernetes and connects them to the master. He also created a Cloud API implementation for it and a new Helm chart which can provision Jenkins as an entire system in Kubernetes, with Apache Kafka enabled by default. All these features were released in Remoting over Apache Kafka Plugin 2.0.

Jenkins in Kubernetes with Apache Kafka

Multi-branch Pipeline support for Gitlab SCM

Parichay Barpanda was working on the new GitLab Branch Source Plugin with Multi-branch Pipeline Jobs and Folder Organisation support. His mentors were Marky Jackson-Taulia, Justin Harringa, Zhao Xiaojie, and Joseph Petersen. The plugin scans the projects, importing the pipeline jobs it identifies based on the criteria provided. After a project is imported, Jenkins immediately runs the jobs based on the Jenkinsfile pipeline script and notifies the status to GitLab Pipeline Status. This plugin also provides GitLab server configuration which can be configured in Configure System or via Jenkins Configuration as Code (JCasC). Read more about this project in the GitLab Branch Source 1.0 announcement.

Gitlab Multi-branch Pipeline support

Projects which were not completed

Not all projects were completed this year. We were also working on the Artifact Promotion plugin for Jenkins Pipeline and on Cloud Features for the External Workspace Manager Plugin, but unfortunately both projects were stopped after coding phase 1. Even so, we got a lot of experience and takeaways in these areas (see the linked Jira tickets!). We hope that these stories will be implemented by Jenkins contributors at some point. Google Summer of Code 2020, maybe?

Running the GSoC program at our organization level

Here are some of the things our organization did before and during GSoC behind the scenes. To prepare for the influx of students, we updated all our GSoC pages and wrote down all the knowledge we accumulated over the years of running the program. We started preparing in October 2018, long before the official start of the program. The main objective was to address the feedback we got during GSoC 2018 retrospectives.

Project ideas. We started gathering project ideas in the last months of 2018. We prepared a list of project ideas in a Google doc, and we tracked ownership of each project in a table of that document. Each project idea was further elaborated in its own Google doc. We find that when projects get complicated during the definition phase, perhaps they are really too complicated and should not be done.

Since we wanted all the project ideas to be documented the same way, we created a template to guide the contributors. Most of the project idea documents were written by org admins or mentors, but occasionally a student proposed a genuine idea. We also captured contact information in that document such as GitHub and Gitter handles, and a preliminary list of potential mentors for the project. We embedded all the project documents on our website.

Mentor and student guidelines. We updated the mentor information page with details on what we expect mentors to do during the program, including the number of hours that are expected from mentors, and we even have a section on preventing conflict of interest. When we recruit mentors, we point them to the mentor information page.

We also updated the student information page. We find this is a huge time saver as every student contacting us has the same questions about joining and participating in the program. Instead of re-explaining the program each time, we send them a link to those pages.

Application phase. Students started to reach out very early on as well, many weeks before GSoC officially started. This was very motivating. Some students even started to work on project ideas before the official start of the program.

Project selection. This year the org admin team had some very difficult decisions to make. With lots of students, lots of projects and lots of mentors, we had to request the right number of slots and try to match the projects with the most chances of success. We were trying to form mentor teams at the same time as we were requesting the number of slots, and it was hard to get responses from all mentors in time for the deadline. Finally we requested fewer slots than we could have filled. When we request slots, we submit two numbers: a minimum and a maximum. The GSoC guide states that:

  • The minimum is based on the projects that are so amazing they really want to see these projects occur over the summer,
  • and the maximum number should be the number of solid and amazing projects they wish to mentor over the summer.

We were awarded the minimum. So we had to make very hard decisions: we had to decide between “amazing” and “solid” proposals. For some proposals, the very outstanding ones, it’s easy. But for the others, it’s hard. We know we cannot make the perfect decision, and from experience, we know that some students or some mentors will not be able to complete the program due to uncontrollable life events, even for the outstanding proposals. So we have to make the best decision, knowing that some of our choices won’t complete the program.

Community Bonding. We have found that the community bonding phase was crucial to the success of each project. Usually projects that don’t do well during community bonding have difficulties later on. In order to get students involved in the community better, almost all projects were handled under the umbrella of Special Interest Groups so that there were more stakeholders and communications.

Communications. Every year we have students who contact mentors via personal messages. Students, if you are reading this, please do NOT send us personal messages about the projects, you will not receive any preferential treatment. Obviously, in open source we want all discussions to be public, so students have to be reminded of that regularly. In 2019 we are using Gitter chat for most communications, but from an admin point of view this is more fragmented than mailing lists. It is also harder to search. Chat rooms are very convenient because they are focused, but from an admin point of view, the lack of threads in Gitter makes it hard to get an overview. Gitter threads were added recently (Nov 2019) but do not yet work well on Android and iOS. We adopted Zoom Meetings towards the end of the program and we are finding it easier to work with than Google Hangouts.

Status tracking. Another thing that was hard was to get an overview of how all the projects were doing once they were running. We made extensive use of Google sheets to track lists of projects and participants during the program to rank projects and to track statuses of project phases (community bonding, coding, etc.). It is a challenge to keep these sheets up to date, as each project involves several people and several links. We have found it time consuming and a bit hard to keep these sheets up to date, accurate and complete, especially up until the start of the coding phase.

Perhaps some kind of objective tracking tool would help. We used Jenkins Jira for tracking projects, with each phase representing a separate sprint. It helped a lot for successful projects. In our organization, we try to get everyone to beat the deadlines by a couple of days, because we know that there might be events such as power outages, bad weather (happens even in Seattle!), or other uncontrolled interruptions, that might interfere with submitting project data. We also know that when deadlines coincide with weekends, there is a risk that people may forget.

Retrospective. At the end of our project, we also held a retrospective and captured some ideas for the future. You can find the notes here. We already addressed the most important comments in our documentation and project ideas for the next year.

Recognition

Last year, we wanted to thank everyone who participated in the program by sending swag. This year, we collected all the mailing addresses we could, and sent everyone we could the 15-year Jenkins special edition T-shirt and some stickers. This was a great feel-good moment. I want to personally thank Alyssa Tong for her help setting aside the T-shirts and stickers.

swag before shipping

Mentor summit

Each year Google invites two or more mentors from each organization to the Google Summer of Code Mentor Summit. At this event, hundreds of open-source project maintainers and mentors meet together and have unconference sessions targeting GSoC, community management and various tools. This year the summit was held in Munich, and we sent Marky Jackson and Oleg Nenashev as representatives there.

Apart from discussing projects and sharing chocolate, we also presented Jenkins there, conducted a lightning talk and hosted the unconference session about automation bots for GitHub. We did not make a team photo there, so try to find Oleg and Marky on this photo:

GSoC2019 Mentor summit

GSoC Team at DevOps World | Jenkins World

We traditionally use GSoC organization payments and travel grants to sponsor student trips to major Jenkins-related events. This year, four students traveled to the DevOps World | Jenkins World conferences in San Francisco and Lisbon. Students presented their projects at the community booth and at the contributor summits, and their presentations got a lot of traction in the community!

Thanks a lot to Google and CloudBees who made these trips possible. You can find a travel report from Natasha Stopa here, more travel reports are coming soon.

gsoc2019 team jw us
gsoc2019 team jw lisbon

Conclusion

This year, five projects were successfully completed. We find this to be normal and in line with what we hear from other participating organizations.

Taking the time early to update our GSoC pages saved us a lot of time later because we did not have to repeat all the information every time someone contacted us. We find that keeping track of all the mentors, the students, the projects, and the meta information is a necessary but time consuming task. We wish we had a tool to help us do that. Coordinating meetings and reminding participants of what needs to be accomplished for deadlines is part of the cheerleading aspect of GSoC, we need to keep doing this.

Lastly, I want to thank again all participants, we could not do this without you. Each year we are impressed by the students who do great work and bring great contributions to the Jenkins community.

GSoC 2020?

Yes, there will be Google Summer of Code 2020! We plan to participate, and we are looking for project ideas, mentors and students. Jenkins GSoC pages have been already updated towards the next year, and we invite everybody interested to join us next year!