
Jenkins Interoperability with CloudEvents

September 2, 2021 (updated July 24th, 2023) | Blog, Project

Contributed by Shruti Chaturvedi | originally published on medium.com

  • What is interoperability?
  • Do we really need interoperability? Why?
  • So, How can I design interoperable systems?

If any of these questions have ever crossed your mind, then this article is for you. Here, we will be talking about the What, Why and How of technical interoperability. We will then be looking at how Jenkins is implementing interoperability in the cloud to make working across Jenkins and cloud-native CI/CD tools easier. As part of that, we will take a look at what the CloudEvents specification is, and what it does for interoperability. 🎉 A little bonus at the end: I’ll be introducing the new CloudEvents Plugin for Jenkins, developed as a GSoC’21 project, which is aimed at enhancing interoperability between Jenkins and CI/CD tools!!! 🎊

Let’s start by answering the first question: what is interoperability? Before I give an answer to that, I want you to play a quick word game with me. I’m calling it, “break-and-guess”. If we were to break the word, interoperability, into two separate and coherent words, we’d get-

sand timer
Photo by NeONBRAND on Unsplash

inter and operate! The simplest definition of interoperability is a way for different entities to operate or function together. In its broader sense, an entity here can be anything in an ecosystem where two or more things coexist. One example of interoperability is our communication with the people around us. We use various forms of communication — voice, text, pictures — every day to “interoperate” within our ecosystem. Different entities can communicate differently, as long as the goal of transferring the meaning of the message is met.

How does this apply to technical interoperability, primarily interoperability in the cloud? It’s the same idea of facilitating communication so systems can co-exist and co-function. Interoperability essentially means that systems can communicate efficiently enough to operate together.

Now that we understand what interoperability means, we are ready to tackle the next question: why do we need interoperability? And can we perhaps do without it?

Language exerts hidden power, like the moon on the tides.

– Rita Mae Brown

From the first part, we understand that entities need to communicate to co-exist and co-function. That seems true for living entities for the most part, but interestingly, it is also very true for technical systems. The adoption of cloud-native tools and technology by organizations, big and small, has increased tremendously. Organizations are leveraging cloud-native tools for a whole variety of tasks: building, testing, delivering and managing applications.

With workloads becoming more diverse and complex, organizations are finding the need to build their applications around an ecosystem with multiple tools. For example, an organization might be using Terraform to deploy and iterate over their infrastructure, Docker to containerize applications, Jenkins to test stability and Tekton to deliver. As we can see, this is a pretty diverse ecosystem of tools. And what we have understood about co-operation in an ecosystem is that the services must be able to communicate with each other.

Most applications built these days use multiple cloud-native tools to meet their delivery and deployment requirements. These tools are, more often than not, interdependent. A very simple example of this idea is an automated CI/CD pipeline. As soon as a CI system finishes integrating and testing new changes, the CD system should deliver this new version of the application. Let’s pause for a second here, and think: would this automated CI/CD pipeline work if the two systems were not able to talk to each other? 🙊

speaking bubble neon sign
Photo by Jason Leung on Unsplash

We want interoperability in the cloud so we can truly leverage the power offered by cloud-native tools across different use-cases. As a developer, I want my systems to work smartly with each other. I don’t want to manually deploy my application after a build has succeeded, or cancel publishing a build artifact if a test has failed. I’d rather have these systems interoperate so I can accomplish all these tasks with minimal effort, without losing quality. And obviously, communication is the key to achieving that! And that’s why we need interoperability.

With the what and the why answered, let’s take another trip down Interoperability Land and walk through the ‘how’ of interoperability. There are two forms of interoperability we can implement in our systems:

  • Direct interoperability
  • Indirect interoperability

I never get tired of real-life examples, so here’s one 😃

outdoor food market place
Photo by Renate Vanaga on Unsplash

This is an example of a bunch of traders in a market wanting to do business with each other. Perhaps a market that sells designer clay pots 😍. One trader might be selling raw clay, another might be into colors. The idea is simple: these traders want to buy and sell raw material off of each other. However, it’s no piece of 🍰. Each trader involved in this business speaks a language none of the others understand. To do business, it is important for the traders to be able to communicate with each other.

The trader who sells raw clay came up with an amazing idea for talking with the scoring-tools trader. The raw-clay trader employed a translator who understands the languages of both the raw-clay and the scoring-tools traders. This way, the communication between these two traders flows through this translator. Business between the two is running pretty smoothly. Amazing idea, indeed!? Well…

While the raw-clay trader has been able to communicate and do business with the scoring-tools trader via the translator she hired, there is no guarantee this translator will help the raw-clay trader do business with other traders in the market. She might need to buy colors for the pots from a different trader. There is a big possibility that the translator she has hired will not help her communicate with other traders, because each trader uses a unique language. So to keep her business running, the raw-clay trader will have to invest further in hiring more translators who understand a variety of languages and will help her do business with more than just the scoring-tools trader.

This method of facilitating communication is called direct interoperability. It is called so because, as we saw, the clay trader had to employ a translator, a service that communicates directly with the receiver of the message. In terms of technical interoperability in the cloud, we see this kind of interoperability when services have to design specific solutions to talk with another service, and this solution may — or may not — be useful across other services. Agents and client plugins are some examples of direct interoperability, where a specific solution is designed to achieve interoperability. However, such a solution might not be feasible as a generic one, thereby increasing the need to implement more than one solution to interoperate with multiple tools. Not necessarily the most exciting, and the added cost of managing multiple solutions can be painful.

Fortunately, there is a simpler, shorter and nicer way of doing this too. And that’s indirect interoperability! Imagine how easy it’d be if the clay-pot traders in the market derived a common language they all could use to talk with each other. Regardless of the language they speak natively, all the traders now understand this common language, which is essentially the new business language for doing business with others. New traders joining the clay-pot market must know this language to communicate with other folks efficiently. It’s a one-time, works-for-all solution. And did I mention easier?

This is indirect interoperability, because the need to maintain one-to-one, direct solutions for communication is eliminated. Indirect interoperability in the cloud is implemented by defining a common standard specification which all the tools involved can understand. This way, developers don’t have to design and maintain a specific solution to talk with each service, but rather have their tool support and understand this common language. Once implemented, it works well with any other system which uses this common language for communicating. See how much easier this can make designing interoperable systems?

That is exactly what CloudEvents helps accomplish in a multi-tool, cloud-native infrastructure. CloudEvents is an industry-adopted standard specification for describing events in a common format. The specification is open source under the CNCF.

The idea behind CloudEvents is to standardize how events are emitted and consumed between varied services. There is no limitation on which service can emit or consume CloudEvents-compliant events.

  • All services are written in different languages? No problem! Since CloudEvents describes a common format for how an event should look, interoperability between services written in different languages is super easy.
  • Is there a limitation on what kind of events can follow the CloudEvents spec? Not at all! CloudEvents is just a specification defining a common format for events. An event can be anything: a branch created/merged/deleted event; a test succeeded event; a test failed event (😢); a build completed event; a DB-transaction event; a request/response event. Really anything!
  • Can it be used if some services are on-prem and some on the cloud? Does CloudEvents mean a service has to be on the cloud? You can use CloudEvents for any infrastructure; that’s the fun part!
cloudevents logo


“CloudEvents is a specification for describing event data in a common way. CloudEvents seeks to dramatically simplify event declaration and delivery across services, platforms, and beyond!”

– CloudEvents

CloudEvents has revolutionized interoperability between services. Using CloudEvents, we can chain multiple services together without building additional tools for them to talk with each other. Each of the services will know exactly how to read and/or emit CloudEvents-compliant events. The actual event data and metadata will usually differ; however, the format of each event will be the same.

We’ll go back to our clay-pot market real quick. Since the last time we were there, everyone in the business has been using a common language to talk with each other. So, while all the traders might be speaking the same language, they will be saying different things. The raw-clay trader they are all talking with understands what they are saying, but she’ll have to process each conversation differently and do business with each trader accordingly.

Similarly, a group of services using CloudEvents-compliant events will all use the same format of events. However, depending on the type of the event or the source of the event, each event will need to be processed differently. It is the receiver’s job to decide how it does that.

What can I build with CloudEvents?

Any system that can be built around events can be built with CloudEvents. And as we saw earlier, any occurrence of an action can be an event! Some open-source cloud-native integrations which support CloudEvents include Tekton, Argo Events, Keptn, OpenFaaS and Falco.

A pretty wide range of tools serving purposes like eventing, CI/CD, runtime threat detection, data streaming, and more. Building an interoperable, event-driven system between these tools is very easy using the CloudEvents spec. If an event is being sent from Falco to Tekton, Argo Events, Keptn and OpenFaaS, we don’t have to alter the event from Falco to make it receiver-specific. Similarly, we don’t have to alter any of the receivers to receive a sender-specific event. All that’s left for the receiver to do is look through the event and decide how it wants to process it.

🔴 Great News Ahead

🎉 🎊 🎈 Jenkins supports CloudEvents too now!!! 🎉 🎊 🎈

How cool!

Jenkins is a stunningly powerful, open-source CI tool with a great community powering this beauty. As a cloud-native CI tool, Jenkins has always had support for integrations to interoperate with other tools and services. However, most of these integrations relied on the direct-interoperability mechanism we talked about earlier. Imagine how complex that would get if we were trying to design an event-driven system with Jenkins and multiple cloud-native tools!

A couple of months back, members of the CDF community came up with an amazing idea to enhance Jenkins interoperability. The idea was to integrate Jenkins with CloudEvents. The goal of the project was to make Jenkins compatible with the new standards of interoperability, which also meant easier integration with CloudEvents-compliant tools.

The project grew into the CloudEvents plugin for Jenkins, which allows users to configure Jenkins as a source and a sink, emitting and consuming CloudEvents among a group of cloud-native, CloudEvents-compliant tools. The project was accepted as a GSoC’21 project under the Continuous Delivery Foundation. I have been working alongside really amazing mentors and members of the CDF community since June to extend Jenkins to support CloudEvents through the CloudEvents Plugin. Check out the GitHub repo of the CloudEvents Plugin for Jenkins.

As of right now, the plugin supports configuring Jenkins as a source, emitting CloudEvents. We are also working on extending support for Jenkins as a sink, consuming CloudEvents from various sources. When we talk about Jenkins as a source, what that means is that for events occurring inside Jenkins, the plugin will emit a CloudEvents-compliant event and send it over to a receiver which the user configures inside the plugin.

While CloudEvents supports a variety of protocol bindings (industry-standard and open-source as well as vendor-specific) and serialization formats for the event data (read more here), for the initial phase of the plugin we chose the HTTP protocol binding with JSON serialization. (PS: we also designed a Proof of Concept (PoC) to send CloudEvents from Jenkins over Kafka, and it works beautifully ☺️)

The greatest thing about indirect interoperability, CloudEvents and the CloudEvents plugin for Jenkins is that systems are designed agnostically, or independently of each other. Jenkins as a source is receiver/sink-agnostic. Any sink, so long as it supports the same protocol binding, can receive events sent by Jenkins.

Let’s take a deeper look, and start by seeing how to configure the CloudEvents Plugin inside Jenkins:

Welcome to Jenkins Dashboard. "Manage Jenkins" button in the left menu is highlighted
Step 1: Manage Jenkins
Jenkins Dashboard continued 
"Configure System" Button highlighted
Step 2: Configure System
CloudEvents Plugin Menu. All boxes are selected.
Step 3: Where the Magic Happens 🎆
  1. Select the Manage Jenkins option on the left of the Dashboard.
  2. Then on the Manage Jenkins page, select Configure System. This is where we will add all the required configurations for this plugin.
  3. When you are inside the Configure System page, scroll down to the CloudEvents Plugin section. Here, you’ll configure a receiver/sink to which all the events you select will be sent in CloudEvents format. There are a couple of things to configure here:
  • Sink Type — dropdown to select the protocol used to send events. Currently, the plugin supports an HTTP sink and is being tested with Kafka sinks.
  • Sink URL — URL of the sink where the events will be sent.
  • Events you want your receiver/sink to receive.

Note: This plugin also supports Config-as-Code, allowing the required configuration for Jenkins as a source to be done more easily.
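
For example, here’s a minimal sketch of what such a Config-as-Code entry could look like. The field names below (cloudEventsGlobalConfig, sinkType, sinkURL) are illustrative assumptions, not the plugin’s confirmed schema, so check the plugin’s README for the exact keys; the sink URL shown uses Knative’s standard broker-ingress address format:

# Hypothetical JCasC snippet: field names are illustrative assumptions;
# consult the plugin README for the exact schema.
unclassified:
  cloudEventsGlobalConfig:
    sinkType: "HTTP"   # protocol binding used to send events
    sinkURL: "http://broker-ingress.knative-eventing.svc.cluster.local/default/default"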

Two red British postboxes
Photo by Kristina Tripkovic on Unsplash

There are different types of events within Jenkins that are currently supported by the plugin. Using the current HTTP binding, every time one of the selected events occurs inside Jenkins, a POST request is made to the configured sink with CloudEvents-compliant event data and metadata. The plugin uses the binary content mode of CloudEvents for emitting events, where the event data is stored in the message body and the event attributes are stored as part of the message metadata (the HTTP headers).

Note: As of now, Jenkins as a source does not handle transient failures by itself; rather, it works with other tools like Knative Eventing to implement fault-tolerant routing of requests. We will see how we are using that ahead.

Here’s what HTTP request headers can look like for a job entered queue event:

ce-specversion: 1.0
ce-id: c42d1f19-9908-43da-9a7f-404405c52b60
ce-type: org.jenkinsci.queue.entered_waiting
ce-source: job/test2

And here’s an example of the event data for this entered-waiting event type:

{
  "ciUrl": "http://3.101.116.80/",
  "displayName": "test2",
  "entryTime": 1626611053609,
  "exitTime": null,
  "startedBy": "shruti chaturvedi",
  "jenkinsQueueId": 25,
  "status": "ENTERED_WAITING",
  "duration": 0,
  "queueCauses": [
    {
      "reasonForWaiting": "In the quiet period. Expires in 0 ms",
      "type": "entered_waiting"
    }
  ]
}

This plugin supports node-specific, job-specific and build-specific events, and we will be adding support for other types of events moving forward. Each of the supported events has a specific format for its event payload/data and event metadata, so as to make the events CloudEvents-compliant.

Oh my god! That was quite a lot to go through. But stay with me, I insist, because now I will be showing interoperability IRL 😄 💃.

While we were working on the CloudEvents plugin, a special interest group in the CDF community, the Events-SIG team, was already working on designing a PoC for interoperability between different cloud-native, CloudEvents-compliant tools. They had been testing an interoperable infrastructure between Tekton and Keptn, with a Knative Eventing broker handling the logic for network and other transient failures. Check out the PoC developed by the Events-SIG team at CDF here. The Events-SIG PoC inspired us to also test the CloudEvents plugin with other tools which support CloudEvents. It’d be our litmus test to ensure that the plugin achieves our goal of making interoperability easier. The PoC scripts can be found on the GitHub repo of the plugin.

For the Jenkins CloudEvents plugin PoC, we installed Tekton, Knative Eventing and Jenkins in a Kubernetes cluster. Let’s take a closer look at Knative Eventing brokers.

“Brokers are Kubernetes custom resources that define an event mesh for collecting a pool of CloudEvents. Brokers provide a discoverable endpoint, status.address, for event ingress, and triggers for event delivery. Event producers can send events to a broker by POSTing the event to the status.address.url of the broker.”

– Knative
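
To make this concrete, here’s a minimal sketch of a Knative Broker manifest (the broker and namespace names are assumptions for illustration; default is a common choice for both):

apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  name: default        # broker name, referenced by Triggers
  namespace: default

Once the broker is ready, its ingress is typically reachable inside the cluster at http://broker-ingress.knative-eventing.svc.cluster.local/<namespace>/<broker-name>, which is the URL you’d configure as the Sink URL in the plugin.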

Apart from routing events between producers and consumers, Knative Eventing also supports setting Triggers on the broker. Each Trigger can define one or more filters on which events a consumer will receive. For example, a consumer can be configured to only receive events whose ce-type (the CloudEvents type attribute) is entered_waiting, as shown in the sketch below.
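
As a sketch, a Trigger that forwards only Jenkins queue events to a Tekton EventListener service could look like this. The trigger name and subscriber service name are illustrative assumptions; Tekton exposes an EventListener named jenkins-listener as a service called el-jenkins-listener:

apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: jenkins-queue-trigger
  namespace: default
spec:
  broker: default
  filter:
    attributes:
      # the CloudEvents "type" attribute, sent as the ce-type HTTP header
      type: org.jenkinsci.queue.entered_waiting
  subscriber:
    ref:
      apiVersion: v1
      kind: Service
      name: el-jenkins-listener   # service exposed by the Tekton EventListener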

Jenkins → CloudEvents → Knative CloudEvents Broker (Knative Trigger) → CloudEvents → Tekton Triggers → Tekton TaskRun/PipelineRun

For the PoC, the CloudEvents plugin for Jenkins sends CloudEvents-compliant events over HTTP POST to the Knative broker, which works with CloudEvents by default. On top of the Knative CloudEvents broker, we have also added a Knative Trigger.

The Knative Trigger allows setting filters on the received events based on CloudEvents attributes. With the Trigger added, an event will be sent to the consumer if and only if it has the attribute(s) set in the Knative Trigger. In the PoC, the Knative Trigger sets the filter ce-type: org.jenkinsci.queue.entered_waiting.

Any event matching this filter will be sent over to a Tekton EventListener, which exposes a sink where the event is routed from the Knative broker. We have also defined Tekton Triggers on the Tekton EventListener. Tekton Triggers allow users to specify what kind of actions they would like to take when a certain event is received. The PoC also uses a TriggerBinding in Tekton, which allows extracting values from the event payload or event attributes and passing them down to the specified action in Tekton. In the PoC, as soon as an event is received at the Tekton EventListener, a TriggerBinding extracts a value from the event payload and triggers a Tekton TaskRun with this value.
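
Here’s a minimal sketch of what that Tekton side could look like. The resource names, the extracted field (body.displayName, taken from the sample payload above) and the referenced TriggerTemplate are illustrative assumptions, not the PoC’s exact manifests:

apiVersion: triggers.tekton.dev/v1beta1
kind: TriggerBinding
metadata:
  name: jenkins-event-binding
spec:
  params:
    # pull the Jenkins job name out of the CloudEvents payload
    - name: job-name
      value: $(body.displayName)
---
apiVersion: triggers.tekton.dev/v1beta1
kind: EventListener
metadata:
  name: jenkins-listener   # exposed as the el-jenkins-listener service
spec:
  serviceAccountName: tekton-triggers-sa
  triggers:
    - name: jenkins-queue-event
      bindings:
        - ref: jenkins-event-binding
      template:
        ref: jenkins-taskrun-template   # TriggerTemplate that instantiates the TaskRun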

The fun thing here is that we didn’t have to modify how events are sent from Jenkins to build an interoperable system with Knative and/or Tekton. In just a few steps, you can very easily add more tools to your ecosystem, with the tools driving operations among each other. That is the power of CloudEvents and the CloudEvents plugin for Jenkins.

After a really long and, hopefully, fun journey, we are at the end of this article. 🎈 The first version of the CloudEvents Plugin is now released and available for you to try! We’d love to hear your feedback and thoughts! If you have any ideas for the plugin or anything you’d like to share, please open an issue at the GitHub repo of the CloudEvents plugin ❤️! This plugin is still under development, and we’ll keep working hard at making interoperability more efficient and easier! Take care 😄