From Jenkins – WebSocket

Originally posted on the Jenkins blog by Jesse Glick

I am happy to report that JEP-222 has landed in Jenkins weeklies, starting in 2.217. This improvement brings experimental WebSocket support to Jenkins, available when connecting inbound agents or when running the CLI. The WebSocket protocol allows bidirectional, streaming communication over an HTTP(S) port.

While many users of Jenkins could benefit, implementing this system was particularly important for CloudBees because of how CloudBees Core on modern cloud platforms (i.e., running on Kubernetes) configures networking. When an administrator wishes to connect an inbound (formerly known as “JNLP”) external agent to a Jenkins master, such as a Windows virtual machine running outside the cluster and using the agent service wrapper, until now the only option was to use a special TCP port. This port needed to be opened to external traffic using low-level network configuration. For example, users of the nginx ingress controller would need to proxy a separate external port for each Jenkins service in the cluster. The instructions to do this are complex and hard to troubleshoot.

Using WebSocket, inbound agents can now be connected much more simply when a reverse proxy is present: if the HTTP(S) port is already serving traffic, most proxies will allow WebSocket connections with no additional configuration. The WebSocket mode can be enabled in agent configuration, and support for pod-based agents in the Kubernetes plugin is coming soon. You will need an agent version 4.0 or later, which is bundled with Jenkins in the usual way (Docker images with this version are coming soon).

Another part of Jenkins that was troublesome for reverse proxy users was the CLI. Besides the SSH protocol on port 22, which again was a hassle to open from the outside, the CLI already had the ability to use HTTP(S) transport. Unfortunately the trick used to implement that confused some proxies and was not very portable. Jenkins 2.217 offers a new -webSocket CLI mode which should avoid these issues; again you will need to download a new version of jenkins-cli.jar to use this mode.
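
For illustration, once you have downloaded the 2.217+ jenkins-cli.jar, a CLI call over WebSocket looks roughly like this (the server URL is a placeholder, and who-am-i is just an example command):

    java -jar jenkins-cli.jar -s https://jenkins.example.com/ -webSocket who-am-i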

The WebSocket code has been tested against a sample of Kubernetes implementations (including OpenShift), but it is likely that some bugs and limitations remain, and scalability of agents under heavy build loads has not yet been tested. Treat this feature as beta quality for now and let us know how it works!

From Jenkins – Atlassian’s new Bitbucket Server integration for Jenkins

Originally posted on the Jenkins blog by Daniel Kjellin

We know that for many of our customers Jenkins is incredibly important and its integration with Bitbucket Server is a key part of their development workflow. Unfortunately, we also know that integrating Bitbucket Server with Jenkins wasn’t always easy – it may have required multiple plugins and considerable time. That’s why earlier this year we set out to change this. We began building our own integration, and we’re proud to announce that v1.0 is out.

The new Bitbucket Server integration for Jenkins plugin, which is built and supported by Atlassian, is the easiest way to link Jenkins with Bitbucket Server. It streamlines the entire set-up process, from creating a webhook to trigger builds in Jenkins, to posting build statuses back to Bitbucket Server. It also supports smart mirroring and lets Jenkins clone from mirrors to free up valuable resources on your primary server.

Our plugin is available to install through Jenkins now. Watch this video to find out how, or read the Bitbucket Server solution page to learn more about it.

Once you’ve tried it out we’d love to hear any feedback you have. To share it with us, visit https://issues.jenkins-ci.org and create an issue using the component atlassian-bitbucket-server-integration-plugin.

Introducing our newest CDF Ambassador – Marky Jackson

Hello friends! I am Marky Jackson and I am so thrilled to be one of the newest CDF ambassadors.

I have been involved in open source for many years but my start in the world was rather rocky. I had a difficult childhood. I was shuffled from one boys’ home to another and had little control over my life. But I was tough and smart, and I emancipated at an early age, which allowed me to start living the way I wanted while I was still in my teens.

I studied computer science at UCLA and MIT and then spent 14 months as an intern at Jasmine Multimedia Publishing before joining companies such as Yahoo, AT&T, HP, Symantec and more.

I am extremely excited to be a part of this program because I get to make people smile by mentoring and being positive. I love public speaking, running meetups in person or virtually, and helping people online. It brings me joy when someone gets involved in the open-source community.

The open source community is all about inclusion. We welcome people to contribute, and we try to express our gratitude for their hard work. The sense of unity and belonging is second to none with developers, coders, and engineers from around the world collaborating to advance our industry.

It takes a lot of time and effort to keep the open-source community going. Most of us are working after-hours to get things done, and we need help – lots of help. People think that you have to be an expert coder to join us, but that’s not true. There are plenty of ways to take part: you can contribute error reports, write technical documentation, or even sponsor an application. Just ask what you can do.

I look forward to meeting everyone and collaborating! You can find me at @markyjackson5 on Twitter.

Screwdriver: Introducing Queue Service

Pritam Paul, Software Engineer, Verizon Media

We have recently made changes to the underlying Screwdriver architecture for build processing. Previously, the executor-queue was tightly coupled to the SD API and worked by constantly polling for messages at specific intervals. Due to this design, the queue would block API requests. Furthermore, if the API crashed, scheduled jobs might not be added to the queue, causing cascading failures.

Hence, keeping the principles of separation of concerns and abstraction in mind, we designed a more resilient, REST-API-based queueing system: the Queue Service. This new service reads, writes, and deletes messages from the queue after processing them. It also encompasses the former capabilities of the queue-worker and acts as a scheduler.

Authentication

The SD API and Queue Service communicate bidirectionally using signed JWTs sent in the auth headers of each request.

Build Sequence
[Build sequence diagram]
Design Document

For more details, check out our design spec.

Using Queue Service

As a cluster admin, to use the queue as an executor, deploy the queue-service as a REST API using a screwdriver.yaml, then update the SD API configuration to point to the new service endpoint:

# config/default.yaml
ecosystem:
    # Externally routable URL for the User Interface
    ui: https://cd.screwdriver.cd

    # Externally routable URL for the Artifact Store
    store: https://store.screwdriver.cd

    # Badge service (needs to add a status and color)
    badges: https://img.shields.io/badge/build–.svg

    # Internally routable FQDN of the queue service
    queue: http://sdqueuesvc.screwdriver.svc.cluster.local

executor:
    plugin: queue
    queue: ''

For more configuration options, see the queue-service documentation.

Compatibility List

In order to use the new workflow features, you will need these minimum versions:

  • UI – v1.0.502
  • API – v0.5.887
  • Launcher – v6.0.56
  • Queue-Service – v1.0.11

Contributors

Thanks to the following contributors for making this feature possible:

Questions and Suggestions

We’d love to hear from you. If you have any questions, please feel free to reach out here. You can also visit us on GitHub and Slack.

Screwdriver: Recent Enhancements and Bug Fixes

Screwdriver Team from Verizon Media

UI

Previously, users could not start builds during a freeze window unless they made changes to the freeze window setting in the screwdriver.yaml configuration. Now, you can start a build by entering a reason in the confirmation modal. This can be useful for users needing to push out an urgent patch or hotfix during a freeze window.

Store

  • Feature: Build cache now supports local disk-based cache in addition to S3 cache.

Queue Worker

  • Bugfix: Periodic build timeout check
  • Enhancement: Prevent re-enqueue of builds from the same event.

Compatibility List

In order to have these improvements, you will need these minimum versions:

  • UI – v1.0.479
  • API – v0.5.835
  • Store – v3.10.3
  • Launcher – v6.0.42
  • Queue-Worker – v2.9.0

Contributors

Thanks to the following contributors for making this feature possible:

Questions and Suggestions

We’d love to hear from you. If you have any questions, please feel free to reach out here. You can also visit us on GitHub and Slack.

Screwdriver: Improvements and Fixes

Part 2 from the Screwdriver Team at Verizon Media

UI
  • Enhancement: Upgrade to node.js v12.
  • Enhancement: Users can now link to custom test & coverage URL via metadata.
  • Enhancement: Reduce number of API calls to fetch active build logs.
  • Enhancement: Display proper title for Commands and Templates pages.
  • Bug fix: Hide “My Pipelines” from the Add to collection dialog.
  • Enhancement: Display usage stats for a template.
API
Store
Compatibility List

In order to have these improvements, you will need these minimum versions:

  • UI – v1.0.491
  • API – v0.5.851
  • Store – v3.10.5

Contributors

Thanks to the following contributors for making this feature possible:

Questions and Suggestions

We’d love to hear from you. If you have any questions, please feel free to reach out here. You can also visit us on GitHub and Slack.

Screwdriver: Build cache – Disk Strategy

Screwdriver now has the ability to cache and restore files and directories from your builds to either S3 or disk-based storage. All other aspects of the cache feature remain the same; only a new storage option has been added. Please DO NOT USE this cache feature to store any SENSITIVE data or information.

The graphs below compare the build cache on our internal Screwdriver instance using the disk-based strategy vs. AWS S3.

[Graphs: get-cache and set-cache times, disk strategy vs. S3]

Why disk-based strategy?

Based on our cache analysis: (1) the majority of the time was spent pushing data from the build to S3, and (2) the cache push sometimes failed when the cache was large (e.g., >1 GB). So we simplified the storage layer by adding a disk cache strategy, using a filer/storage mount as the disk option. Each cluster will have its own filer/storage disk mount.

NOTE: If a cluster becomes unavailable and the requested cache is not available in the new cluster, the cache will be rebuilt once as part of the build.

Cache Size: 

Max size limit per cache is configurable by Cluster admins.

Retention policy:

Cluster admins are responsible for enforcing the retention policy.

Cluster Admins:

Screwdriver cluster admins can specify the cache storage strategy along with other options such as compression, MD5 check, and the maximum cache size in MB.
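
As a rough sketch of what such cluster-level settings could look like (the key names and values below are illustrative assumptions, not the exact schema; the references below are the authoritative source):

# Illustrative sketch only: key names are assumptions, not the exact schema.
build:
    cache:
        strategy: disk            # "disk" or "s3"
        path: /persistent_cache   # filer/storage mount backing the disk cache
        compress: true            # compress the cache before storing it
        md5check: true            # skip the push if the cache contents are unchanged
        max_size_mb: 2048         # per-cache size limit enforced by the cluster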

Reference: 

  1. https://github.com/screwdriver-cd/screwdriver/blob/master/config/default.yaml#L280
  2. https://github.com/screwdriver-cd/executor-k8s-vm/blob/master/index.js#L336
  3. Issue: https://github.com/screwdriver-cd/screwdriver/issues/1830

Compatibility List:

In order to use this feature, you will need these minimum versions:

Contributors:

Thanks to the following people for making this feature possible:

Screwdriver is an open-source build automation platform designed for Continuous Delivery. It is built (and used) by Yahoo. Don’t hesitate to reach out if you have questions or would like to contribute: http://docs.screwdriver.cd/about/support.

Spinnaker: 1.18 Release Introduces Spinnaker Community Stats

Author: Spinnaker Steering Committee (Travis Tomsu, Software Engineer, Google)

The Spinnaker community has grown significantly since the project launched as open source in 2015. The project maintainers increasingly look for ways to help the community better understand how Spinnaker is used, and to help contributors prioritize future improvements.

Today, feature development is guided by industry experts, community discussions, Special Interest Groups (SIGs), and events like the recently held Spinnaker Summit. In August 2019, the community published an RFC, which proposed the tooling that will enable everyone to make data-driven decisions based on product usage across all platforms. We encourage Spinnaker users to continue providing feedback, and to review and comment on the RFC.

Following on from this RFC, the Spinnaker 1.18 release includes an initial implementation of statistics collection, used to gather generic deployment and usage information from Spinnaker installations around the world. Before going into the details, here are some important facts to know:

  • No personally identifying information (PII) is collected or logged.
  • The implementation was reviewed and approved under the Linux Foundation’s Telemetry Data Collection and Usage Policy.
  • All stats collection code is open source and can be found in the Spinnaker stats, Echo, and Kork repos on GitHub.
  • Users can disable statistics collection at any time through a single Halyard command.
  • Community members that want to work with the underlying dataset and/or dashboard reports can request and receive full access.

This feature exists in the Spinnaker 1.18 release, but is disabled by default while we finalize testing of the backend and fine-tune the report dashboards. The feature will be enabled by default in the Spinnaker 1.19 release (scheduled for March 2020).

All data will be stored in a Google BigQuery database, and report dashboards will be publicly available from the Community Stats page. Community members can request access to the collection data.

Data collected as part of this effort allows the entire community to better monitor the growth of Spinnaker, understand how Spinnaker is used “in the wild”, and prioritize feature development across a large community of Spinnaker contributors. Thank you for supporting Spinnaker and for your help in continuing to make Spinnaker better!

Tekton Beta Available Now! Looking for Tekton Task Catalog contributors, beta testers, and more!

Tekton Pipelines, the core component of the Tekton project, is moving to beta status with the release of v0.11.0 this week. Tekton is an open source project creating a cloud-native framework you can use to configure and run continuous integration and continuous delivery (CI/CD) pipelines within a Kubernetes cluster.

Try Tekton Now!

Tekton development began as Knative Build before becoming a founding project of the CD Foundation under the Linux Foundation last year.

The Tekton project follows the Kubernetes deprecation policies. With Tekton Pipelines upgrading to beta, most Tekton Pipelines CRDs (Custom Resource Definitions) are now at beta level, which means overall beta-level stability can be relied on. Please note that Tekton Triggers, Tekton Dashboard, Tekton Pipelines CLI, and other components are still alpha and may continue to evolve from release to release.

Tekton encourages all projects and users to migrate their integrations to the new apiVersion; see the migration guide for how to move from v1alpha1 to v1beta1.
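
As a minimal sketch (the Task itself is a made-up example; the apiVersion value is the point here), a resource that previously declared tekton.dev/v1alpha1 would now look like:

apiVersion: tekton.dev/v1beta1   # was: tekton.dev/v1alpha1
kind: Task
metadata:
  name: echo-hello               # hypothetical Task name
spec:
  steps:
    - name: echo
      image: alpine              # any container image will do
      script: |
        echo "Hello from a v1beta1 Task"

Some fields were also restructured between alpha and beta, so the migration guide remains the authoritative reference.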

Full list of Features, Deprecation Notices, Docs, Thanks and lots more

Who’s using Tekton?

Tekton is in its second year of development and is currently used in both free and commercial offerings from multiple companies.

Join Now!

Now is a great time to contribute. There are many areas where you can jump in. For example, the Tekton Task Catalog allows you to share and reuse the components that make up your Pipeline. You can set a Cluster scope to make tasks available to all users in the cluster, or a Namespace scope to make them usable only within a specific namespace.
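
As an illustrative sketch of the two scopes (the names and step are hypothetical; at the time of the beta, the cluster-wide variant is a separate ClusterTask kind):

# Namespace scope: installed into one namespace, usable only there.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: lint          # hypothetical task name
  namespace: team-a   # hypothetical namespace
spec:
  steps:
    - name: run-lint
      image: alpine
      script: echo "linting..."   # placeholder step
---
# Cluster scope: the same task as a ClusterTask, available in every namespace.
apiVersion: tekton.dev/v1beta1
kind: ClusterTask
metadata:
  name: lint
spec:
  steps:
    - name: run-lint
      image: alpine
      script: echo "linting..."   # placeholder step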

Get started Now!

Scaling Continuous Delivery and Runbook Automation via Tool Interoperability Interfaces

Originally posted on Medium by community member, Andreas Grimmer

Continuous Delivery (CD) and Runbook Automation are standard means to deploy, operate and manage software artifacts across the software life cycle. Based on our analysis of many delivery pipeline implementations, we have seen that on average seven or more tools are included in these processes, e.g., version control, build management, issue tracking, testing, monitoring, deployment automation, artifact management, incident management, or team communication. Most often, these tools are “glued together” using custom, ad-hoc integrations in order to form a full end-to-end workflow. Unfortunately, these custom ad-hoc tool integrations also exist in Runbook Automation processes.

Processes usually integrate multiple tools and exist in multiple permutations

Problem: Point-to-Point Integrations are Hard to Scale and Maintain

Not only is this approach error-prone, but maintaining and troubleshooting these integrations in all their permutations is time-intensive too. Several factors prevent organizations from scaling this across multiple teams:

  • Number of tools: Although the wide availability of different tools makes it possible to always have the appropriate tool in place, the number of required integrations explodes.
  • Tight coupling: The tool integrations are usually implemented within the pipeline, which results in a tight coupling between the pipeline and the tool.
  • Copy-paste pipeline programming: A common approach we frequently see is that a pipeline with a working tool integration is used as a starting point for new pipelines. If the API of one of these tools then changes, all pipelines have to catch up to stay compatible and to prevent vulnerabilities.

Let’s imagine an organization with hundreds of copy-paste pipelines, which all contain a hard-coded piece of code for triggering Hey load tests. Now this organization would like to switch from Hey to JMeter. Therefore, they would have to change all their pipelines. This is clearly not efficient!

Solution: Providing Standardized Interoperability Interfaces

In order to solve these challenges, we propose introducing interoperability interfaces, which allow the tooling in CD and Runbook Automation processes to be abstracted. These interfaces should trigger operations in a tool-agnostic way.

For example, a test interface could abstract different testing tools. This interface can then be used within a pipeline to trigger a test without knowing which tool is executing the actual test in the background.

Interface abstracts the actual tooling

The importance of these interoperability interfaces is confirmed by the Continuous Delivery Foundation, which has a dedicated working group on Interoperability, and by the open-source project Eiffel, which provides an event-based protocol enabling technology-agnostic communication, especially for Continuous Integration tasks.

Use Events as Interoperability Interfaces

We implement these interoperability interfaces by defining a standardized set of events. These events are based on CloudEvents, which allows us to describe event data in a common way.

The first goal of our standardization efforts is to define a common set of CD and runbook automation operations. We identified the following common operations (please let us know if we are missing important operations!):

  • Operations in CD processes: deployment, test, evaluation, release, rollback
  • Operations in Runbook Automation processes: problem analysis, execution of the remediation action, evaluation, and escalation/resolution notification

For each of these operations, an interface is required, which abstracts the tooling executing the operation. When using events, each interface can be modeled as a dedicated event type.

The second goal is to standardize the data within the event that the tools need in order to trigger the respective operation. For example, a deployment tool would need information about the artifact to be deployed. Therefore, the event can either contain the required resources (e.g. a Helm chart for k8s) or a URI to these resources.
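
As a rough sketch of what such an event could look like (shown here as YAML; the event type, source, and data fields are invented for illustration and are not part of any agreed specification), a deployment operation might reference its artifact by URI:

# Illustrative only: a hypothetical "deployment" operation as a CloudEvent.
specversion: "1.0"
type: com.example.cd.deployment.triggered   # hypothetical event type
source: /pipelines/checkout-service         # hypothetical producer
id: 6c28f2d0-6ed7-4f9a-9d3c-0d2f5f3f0a11
time: "2020-03-05T10:15:00Z"
datacontenttype: application/json
data:
  service: checkout-service                 # what to deploy
  artifact: https://charts.example.com/checkout-0.4.2.tgz   # e.g. a Helm chart URI
  environment: staging                      # target stage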

We have already defined a first set of events (https://github.com/keptn/spec), which is specifically designed for Keptn, an open-source project implementing a control plane for continuous delivery and automated operations. We know that these events are currently too tailored to Keptn and to single tools. So, please

Let us Work Together on Standardizing Interoperability Interfaces

In order to work on a standardized set of events, we would like to ask you to join us in Keptn Slack.

We can use the #keptn-spec channel to work on standardizing interoperability interfaces, which can eventually be interpreted directly by tools and will make custom tool integrations obsolete!