
Continuous Delivery @ FOSDEM (Feb 6-7, 2021)

February 2, 2021 (updated November 1st, 2023) | Blog, Staff

The CDF Community will be at FOSDEM this weekend!

When: Sunday, February 7, 10 am to 6 pm CET (Brussels Time)
How to Attend: Virtual in the CI/CD Devroom
Registration: No registration necessary

CDF Community Talks

The Road to Interoperability in CI/CD

When: Feb. 7 at 10:10 CET
Speakers: Fatih Degirmenci and Kara de la Marck

The emergence of virtualization, containers, and cloud native has resulted in tremendous advances in enabling organizations to develop new services and make them available to end users. In addition, new paradigms such as Continuous Integration (CI) and Continuous Delivery (CD) allow organizations to do this much faster than before, empowering them to go to market ahead of the competition.

Despite its many advantages, the CI/CD ecosystem has its challenges. This session will discuss issues arising from the lack of interoperability across proliferating CI/CD technologies. We will look at end user case studies, including existing integration initiatives such as that between Tekton and Jenkins X. However, these initiatives are localised to the projects involved and do not address the challenges holistically. We will highlight the necessity, and greater sustainability, of a holistic approach to interoperability in the CI/CD ecosystem and invite attendees to join community efforts.

Combining Progressive Delivery With GitOps And Continuous Delivery

When: Feb. 7 at 10:30 CET
Speakers: Viktor Farcic and Alexander Matyushentsev

Three phrases keep popping up when talking about modern workflows and development and deployment techniques: CD, GitOps, and progressive delivery.

We have continuous delivery to automate the complete lifecycle of applications, from a commit to a Git repository all the way until a release is deployable to production. Then we have GitOps to define the desired states of our environments and let the machines converge the actual state into the desired state. Finally, there is a lot of focus on different deployment strategies grouped under progressive delivery. These strategies are all focused on the iterative release of features to make the process safe, prevent downtime, and reduce the blast radius of potential issues.
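To make the GitOps idea concrete, here is a minimal reconciliation-loop sketch in Go: the desired state lives in Git, an agent periodically compares it with the actual state of the cluster, and converges the latter toward the former. The function names and the simplified State type are illustrative assumptions, not Argo CD's actual API.

```go
// Minimal GitOps-style reconciliation loop (illustrative sketch only).
package main

import (
	"fmt"
	"reflect"
	"time"
)

// State is a simplified stand-in for a set of deployed manifests.
type State map[string]string // resource name -> desired image tag

// fetchDesiredState would normally clone or pull a Git repository
// and parse the manifests found there. Hard-coded here for brevity.
func fetchDesiredState() State {
	return State{"web": "v1.4.2", "api": "v2.0.1"}
}

// fetchActualState would normally query the cluster (e.g. via the
// Kubernetes API). Hard-coded here for brevity.
func fetchActualState() State {
	return State{"web": "v1.4.1", "api": "v2.0.1"}
}

// converge applies whatever changes are needed so that the actual
// state matches the desired state.
func converge(desired, actual State) {
	for name, want := range desired {
		if got := actual[name]; got != want {
			fmt.Printf("updating %s: %s -> %s\n", name, got, want)
			// In a real agent this would patch the cluster object.
		}
	}
}

func main() {
	for {
		desired, actual := fetchDesiredState(), fetchActualState()
		if !reflect.DeepEqual(desired, actual) {
			converge(desired, actual)
		}
		time.Sleep(30 * time.Second) // poll interval; real tools also watch for events
	}
}
```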

While those three practices and the tooling behind them each focus on specific areas, the "real" benefits are obtained when they are combined. Nevertheless, many have not yet reached that stage. Each of those practices alone can be daunting and, frankly, scary. Yet we should go a step further, explore how to combine them, and see the benefits such a solution might provide.

Through a hands-on demo, we will combine Argo CD as a tool of choice for applying GitOps, Argo Rollouts for progressive delivery, and Argo Workflows for continuous delivery pipelines that will tie those two together with the rest of the steps needed in the lifecycle of our applications. If we are successful, we might remove humans from all the actions coming after pushing changes to Git repositories.

Events in CI/CD

When: Feb. 7 at 11:15 CET
Speakers: Andrea Frittoli

Continuous integration and deployment (CI/CD) systems are hardly ever ceaseless as the name would suggest, though they do aim to follow changes in code, configurations and versions. They often achieve that by both handling and generating events. For instance, a CD system receives an event that describes a new version of an application, and it runs a workflow in response. When the workflow starts or when it reaches completion, the CD system generates events for the benefit of other processes that may want to trigger tests against the newly deployed application version. In this short presentation, we introduce the "Events in CI/CD" interest group, part of the Continuous Delivery Foundation, and its mission of standardization and interoperability between CI/CD systems via events.
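As a rough illustration of the pattern described above, the sketch below models a CD system that consumes an "artifact published" event and emits "deployment started" and "deployment finished" events for downstream consumers such as test triggers. The event envelope and field names are hypothetical, not the interest group's schema, which was still being defined at the time.

```go
// Illustrative event flow in a CD system (hypothetical event schema).
package main

import (
	"fmt"
	"time"
)

// Event is a hypothetical CI/CD event envelope.
type Event struct {
	Type     string // e.g. "artifact.published", "deployment.finished"
	Subject  string // the artifact or environment the event is about
	Time     time.Time
	Metadata map[string]string // free-form details such as version or commit SHA
}

// bus is a trivial in-process stand-in for an event broker.
var bus = make(chan Event, 16)

// deploy reacts to an incoming artifact event, runs a (fake) deployment
// workflow, and publishes events that other systems can react to,
// e.g. to trigger tests against the newly deployed version.
func deploy(in Event) {
	version := in.Metadata["version"]
	bus <- Event{Type: "deployment.started", Subject: in.Subject, Time: time.Now(),
		Metadata: map[string]string{"version": version}}
	// ... run the actual deployment workflow here ...
	bus <- Event{Type: "deployment.finished", Subject: in.Subject, Time: time.Now(),
		Metadata: map[string]string{"version": version, "status": "success"}}
}

func main() {
	go deploy(Event{
		Type:     "artifact.published",
		Subject:  "my-app",
		Time:     time.Now(),
		Metadata: map[string]string{"version": "1.2.3"},
	})
	// A downstream consumer (e.g. a test-runner trigger) listening on the bus.
	for i := 0; i < 2; i++ {
		e := <-bus
		fmt.Printf("received %s for %s (version %s)\n", e.Type, e.Subject, e.Metadata["version"])
	}
}
```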

Who watches the watchers – a Jenkins Observability tale
Look Ma, No Hands! Jenkins testability and monitoring

When: Feb. 7 at 11:55 CET
Speakers: Victor Martinez and Ivan Fernandez Calvo

As Everything as Code converges to automate and test your processes, in this talk we would like to discuss our journey and our vision for handling our automation programmatically.

When we started to use Jenkins at Elastic, there was already a team providing the service, similar to the SaaS offerings you see nowadays, so teams were encouraged to handle their automation using the Configuration as Code paradigm with the Jenkins Job Builder tool. At a certain point, that particular approach didn't scale well, and that's when we started to think about a more robust, testable and automated CI/CD ecosystem.

Our journey then started by applying the principles and practices below:

  • Everything as Code
  • Everything is Tested
  • Documentation as Code
  • Functional testing
  • Continuous Improvement
  • Continuous Deployment

All the above concepts are the ones that helped us on this journey, and that's the reason for this talk: to share our experience and vision, and to gather your opinions and feedback.
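As one small, hypothetical example of the "Everything is Tested" principle applied to Configuration as Code, a test like the following could run in CI to catch obviously broken job definitions before they reach a Jenkins instance. The jobs/ directory layout and the required "name:" field are assumptions for illustration, not Elastic's actual setup.

```go
// jobs_test.go: a toy sanity check for Configuration-as-Code job definitions.
// Assumes job definitions live under jobs/ as YAML files containing a "name:" field.
package jobs

import (
	"os"
	"path/filepath"
	"strings"
	"testing"
)

func TestJobDefinitionsAreWellFormed(t *testing.T) {
	files, err := filepath.Glob("jobs/*.yml")
	if err != nil {
		t.Fatalf("globbing job definitions: %v", err)
	}
	if len(files) == 0 {
		t.Fatal("no job definitions found under jobs/")
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil {
			t.Errorf("%s: cannot read: %v", f, err)
			continue
		}
		if !strings.Contains(string(data), "name:") {
			t.Errorf("%s: missing required 'name:' field", f)
		}
	}
}
```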

Collecting and visualizing Continuous Delivery Indicators
In a Kubernetes-based CI/CD platform, using Jenkins X, Lighthouse, Tekton, PostgreSQL and Grafana

When: Feb. 7 at 13:45 CET
Speakers: Vincent Behar

CD platforms are a critical part of the development process, and without them, nothing would go to production. How can we really know what is happening inside them, and measure indicators that we can track and improve to ensure a smooth continuous delivery experience?

Based on our experience at Dailymotion, we'll see why Continuous Delivery Indicators are important, how to define them, and then how to collect and visualize them. We'll share which stability and throughput indicators we are tracking, and why. We'll see how we implemented the collection and visualization of these indicators using open-source tools such as Golang, PostgreSQL, and Grafana. We'll highlight the importance of Kubernetes-based CD components, such as Jenkins X, Lighthouse, and Tekton, which heavily rely on Kubernetes CRDs and thus provide events for everything happening. We'll also highlight the importance of GitOps-based workflows, where every operation goes through a VCS such as Git and thus produces more events. This in turn makes it easy to get notifications for all actions, making our indicator collection process a simple one.
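A minimal sketch of what such a collector might look like, assuming a hypothetical pipeline_runs table and a simple "pipeline finished" event; this is not Dailymotion's actual schema or code. Grafana would then chart indicators such as run duration or success rate directly from the table.

```go
// Illustrative collector: stores pipeline-run events in PostgreSQL
// so that Grafana can chart stability and throughput indicators.
// The table layout and event shape are hypothetical.
package main

import (
	"database/sql"
	"log"
	"time"

	_ "github.com/lib/pq" // PostgreSQL driver
)

// PipelineRun is a simplified "pipeline finished" event.
type PipelineRun struct {
	Repository string
	Pipeline   string
	Status     string // "succeeded" or "failed"
	StartedAt  time.Time
	FinishedAt time.Time
}

func record(db *sql.DB, r PipelineRun) error {
	_, err := db.Exec(
		`INSERT INTO pipeline_runs (repository, pipeline, status, started_at, finished_at)
		 VALUES ($1, $2, $3, $4, $5)`,
		r.Repository, r.Pipeline, r.Status, r.StartedAt, r.FinishedAt)
	return err
}

func main() {
	db, err := sql.Open("postgres", "postgres://localhost/cd_indicators?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// In a real collector this event would come from Lighthouse/Tekton
	// notifications rather than being hard-coded.
	run := PipelineRun{
		Repository: "my-org/my-app",
		Pipeline:   "release",
		Status:     "succeeded",
		StartedAt:  time.Now().Add(-4 * time.Minute),
		FinishedAt: time.Now(),
	}
	if err := record(db, run); err != nil {
		log.Fatal(err)
	}
	log.Println("pipeline run recorded")
}
```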

CI on GitLab: Bringing GitLab, Tekton and Prow together (with some magic)

When: Feb. 7 at 15:55 CET
Speakers: Rafał Manhart

Many organizations use GitLab as a code repository and wonder too late how to establish CI pipelines. ChatOps, automatic merging, cloud native, webhook event triggers, serverless, job reusability, scalability, bot users, and simplicity are often on the wishlist.

In this showcase, we will fulfill the above wishlist with open-source tools, speak about the issues that we overcame, and demonstrate how to use GitLab as a pure code repository.
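For a flavour of what gluing these tools together can involve, here is a minimal, hypothetical webhook receiver in Go that accepts GitLab push events and hands them off to a pipeline trigger. It is not the speakers' actual implementation; triggerPipeline is a placeholder for whatever creates a Tekton PipelineRun or notifies Prow.

```go
// Minimal GitLab webhook receiver (illustrative only).
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// pushEvent captures just the fields we need from a GitLab push webhook.
type pushEvent struct {
	Ref     string `json:"ref"`
	After   string `json:"after"` // commit SHA
	Project struct {
		PathWithNamespace string `json:"path_with_namespace"`
	} `json:"project"`
}

// triggerPipeline is a placeholder: in practice this is where a Tekton
// PipelineRun would be created or Prow notified.
func triggerPipeline(repo, ref, sha string) {
	log.Printf("triggering pipeline for %s at %s (%s)", repo, ref, sha)
}

func handleWebhook(w http.ResponseWriter, r *http.Request) {
	// GitLab sends the configured secret token in this header; check it.
	if r.Header.Get("X-Gitlab-Token") != "my-shared-secret" {
		http.Error(w, "forbidden", http.StatusForbidden)
		return
	}
	var ev pushEvent
	if err := json.NewDecoder(r.Body).Decode(&ev); err != nil {
		http.Error(w, "bad payload", http.StatusBadRequest)
		return
	}
	triggerPipeline(ev.Project.PathWithNamespace, ev.Ref, ev.After)
	w.WriteHeader(http.StatusAccepted)
}

func main() {
	http.HandleFunc("/webhook", handleWebhook)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```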

Identifying Performance Changes Using Peass

When: Feb. 7 at 17:25 CET
Speakers: David Georg Reichelt

Performance is a crucial property of both closed and open source software. Assuring that performance requirements are met in the CI process using benchmarks or load tests requires heavy manual effort for benchmark and load test specification. Unit tests often cover a big share of the use cases of a piece of software and are maintained anyway. While they have some downsides for measuring performance, e.g. because they test corner cases or use functional utilities like mocks, they are still a way of measuring realistic use cases with nearly no manual effort.

Therefore, we developed the tool Peass (https://github.com/DaGeRe/peass), which transforms unit tests into performance unit tests and measures their performance. The stand-alone Peass tool can be integrated into the CI process using Peass-CI, which makes it possible to run performance tests with every build in Jenkins.

The talk starts by introducing the basic idea of Peass.

Then, the steps of the current Peass prototype are presented:

  • a regression test selection, which prunes all unit tests that cannot have a performance change, by comparing the execution traces of the same unit tests in two software versions and the source code of the called methods,
  • a measurement method, which repeats VM starts and measurement iterations inside the VM until the performance change can be detected in a statistically reliable way, and
  • a root cause analysis, which identifies the node of the call tree that causes a performance change by measuring individual nodes.
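To illustrate the measurement idea in the second step, here is a heavily simplified sketch in Go (rather than Peass's actual Java implementation): it repeats measurements of two versions until a plain Welch's t-statistic exceeds a fixed threshold or a repetition budget runs out. Peass's real procedure, which also repeats whole VM starts, is considerably more involved.

```go
// Simplified sketch of "measure until the difference is statistically
// detectable" (not Peass's actual procedure).
package main

import (
	"fmt"
	"math"
	"math/rand"
	"time"
)

// stats returns mean and sample variance of a slice of measurements.
func stats(xs []float64) (mean, variance float64) {
	for _, x := range xs {
		mean += x
	}
	mean /= float64(len(xs))
	for _, x := range xs {
		variance += (x - mean) * (x - mean)
	}
	variance /= float64(len(xs) - 1)
	return
}

// welchT computes Welch's t-statistic for two samples.
func welchT(a, b []float64) float64 {
	ma, va := stats(a)
	mb, vb := stats(b)
	return (ma - mb) / math.Sqrt(va/float64(len(a))+vb/float64(len(b)))
}

// measure runs a workload once and returns its duration in milliseconds.
func measure(workload func()) float64 {
	start := time.Now()
	workload()
	return float64(time.Since(start).Microseconds()) / 1000.0
}

func main() {
	// Two stand-ins for "the same unit test in two software versions".
	oldVersion := func() { time.Sleep(time.Duration(1000+rand.Intn(200)) * time.Microsecond) }
	newVersion := func() { time.Sleep(time.Duration(1200+rand.Intn(200)) * time.Microsecond) }

	const threshold = 3.0 // crude stand-in for a proper critical value
	var a, b []float64
	for i := 0; i < 1000; i++ {
		a = append(a, measure(oldVersion))
		b = append(b, measure(newVersion))
		if len(a) >= 10 { // require a minimum sample size first
			if t := welchT(a, b); math.Abs(t) > threshold {
				fmt.Printf("change detected after %d iterations (t=%.2f)\n", len(a), t)
				return
			}
		}
	}
	fmt.Println("no statistically detectable change within the repetition budget")
}
```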

Finally, the talk demonstrates the usage of Peass in a running Jenkins instance.