
How Many Types of Quality Gates Can You Think of?

June 14, 2022 (updated July 24th, 2023) · Blog, Community

Contributed by Justin Abrahms (eBay) and the Interoperability SIG

Some of the folks involved in #sig-interoperability (which could be you!) had a discussion about the different types of quality gates teams can use when validating code destined for production. A quality gate, for those unfamiliar, is a step in the validate/deploy process intended to assert that code which passes through it is safer than code which doesn’t. One common gate is “do the tests pass?” Before we run the tests, the change is risky. After we run them… it’s way less risky. The general idea is that with a bunch of quality gates you can “automate confidence” (to steal a phrase I first heard from Michael Stahnke at cdCon).

One great point from Ann Marie Fred was that as the number of quality gates increases, we begin to need support for speculative execution: running later pipeline steps even though their results may not matter because an earlier step failed. This is particularly important so developers can get the full list of issues they need to fix without submitting build after build.
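
To make that concrete, here is a minimal sketch in Python. The gate names are placeholders standing in for real pipeline steps; the point is simply that every gate runs and every failure is reported in one pass:

```python
# Minimal sketch of speculative execution for quality gates: run every gate even
# after one fails, so a developer sees the full list of problems in one pass.
# The gates here are empty placeholders standing in for real pipeline steps.
from typing import Callable

def lint() -> None:
    pass  # placeholder: run the linter, raise on findings

def unit_tests() -> None:
    pass  # placeholder: run the test suite, raise on failures

def license_check() -> None:
    pass  # placeholder: validate dependency licenses, raise on violations

GATES: list[tuple[str, Callable[[], None]]] = [
    ("lint", lint),
    ("unit tests", unit_tests),
    ("license check", license_check),
]

def run_all_gates() -> list[str]:
    """Run every gate, collecting failures instead of stopping at the first one."""
    failures = []
    for name, gate in GATES:
        try:
            gate()
        except Exception as exc:  # a real pipeline would catch a narrower error type
            failures.append(f"{name}: {exc}")
    return failures

if __name__ == "__main__":
    problems = run_all_gates()
    for problem in problems:
        print("FAILED:", problem)
    raise SystemExit(1 if problems else 0)
```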

So here they are:

Types of Quality Gates

Commit

  1. Pre-commit hooks to validate commit messages (see the sketch below)
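
As one possible shape for this gate, here is a sketch of a Git commit-msg hook in Python that enforces a Conventional Commits-style subject line. Git passes the path of the message file as the first argument; the accepted prefixes below are an illustrative choice, not something the SIG prescribes:

```python
#!/usr/bin/env python3
# Sketch of a Git commit-msg hook that checks the commit subject line against a
# Conventional Commits-style pattern. The allowed types are illustrative only.
import re
import sys

PATTERN = re.compile(r"^(feat|fix|docs|chore|refactor|test|ci)(\([\w-]+\))?: .+")

def main() -> int:
    # Git invokes commit-msg hooks with the path to the message file as argv[1].
    with open(sys.argv[1], encoding="utf-8") as f:
        subject = f.readline().strip()
    if not PATTERN.match(subject):
        print(f"Invalid commit subject: {subject!r}")
        print("Expected something like: fix(parser): handle empty input files")
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

Dropped into .git/hooks/commit-msg (and marked executable), this runs locally before the commit is even created, which is the cheapest place to catch the problem.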

Merge

  1. Some folks have robots that do the merge for them
  2. Contributor License Agreement checks (see the sketch below)
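
A hypothetical sketch of the CLA gate might compare the commit authors on a change against a list of known signers. In practice the signer list would come from a CLA service rather than being hard-coded; the emails below are placeholders:

```python
# Hypothetical sketch of a CLA gate: fail the merge if any commit author on the
# change has not signed the Contributor License Agreement.
import subprocess

CLA_SIGNERS = {"alice@example.com", "bob@example.com"}  # placeholder; normally fetched from a CLA service

def commit_authors(base: str = "origin/main", head: str = "HEAD") -> set[str]:
    """Collect the author emails on the commits being merged."""
    out = subprocess.run(
        ["git", "log", "--format=%ae", f"{base}..{head}"],
        check=True, capture_output=True, text=True,
    )
    return {line.strip() for line in out.stdout.splitlines() if line.strip()}

def check_cla() -> None:
    unsigned = commit_authors() - CLA_SIGNERS
    if unsigned:
        raise SystemExit(f"CLA not signed by: {', '.join(sorted(unsigned))}")

if __name__ == "__main__":
    check_cla()
    print("All authors have signed the CLA.")
```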

Build

  1. SBOM generation
  2. Ensure there are no CVEs in the transitive dependencies
  3. Validate unit tests pass
  4. Run contract tests
  5. Gather other quality metrics (Sonar, coverage, etc.)
  6. License validation
  7. Validate application configuration is within range (e.g. not requesting too many replicas)
  8. Infrastructure policy validation (not exposing endpoints publicly)
  9. Validate credentials aren’t in the code (see the sketch after this list)
  10. Has had code review
  11. Validate correct cryptographic signatures are in place (e.g. on commits and resulting binaries)
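
As an example of one gate from this list, here is a rough sketch of the “no credentials in the code” check: a regex scan over source files for strings that look like secrets. The patterns are illustrative and nowhere near exhaustive; real pipelines usually lean on a dedicated secret scanner:

```python
# Rough sketch of a "no credentials in the code" gate: scan source files for
# strings that look like secrets. The patterns below are illustrative only.
import pathlib
import re
import sys

SUSPECT_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                        # AWS access key id shape
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),  # PEM private keys
    re.compile(r"(?i)(password|secret|token)\s*=\s*['\"][^'\"]{8,}['\"]"),  # hard-coded credentials
]

def scan(root: str = ".") -> list[str]:
    """Return a list of suspicious matches under the given directory."""
    findings = []
    for path in pathlib.Path(root).rglob("*.py"):  # limited to one file type for the sketch
        text = path.read_text(encoding="utf-8", errors="ignore")
        for pattern in SUSPECT_PATTERNS:
            if pattern.search(text):
                findings.append(f"{path}: matches {pattern.pattern}")
    return findings

if __name__ == "__main__":
    hits = scan()
    for hit in hits:
        print("POSSIBLE SECRET:", hit)
    sys.exit(1 if hits else 0)
```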

First deploy

  1. Budget analysis (ensure the change won’t trigger cloud cost overruns)
  2. Deployment windows (don’t push during certain times of day; see the sketch after this list)
  3. Run end-to-end tests
  4. Performance benchmarking
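
For example, the deployment-window gate can be as simple as a time check that runs before the deploy step. The weekday 09:00–16:00 UTC window below is a made-up policy for illustration:

```python
# Sketch of a deployment-window gate: block deploys outside an agreed time window.
# The window itself (weekdays, 09:00-16:00 UTC) is an example policy, not a rule.
from datetime import datetime, timezone

DEPLOY_START_HOUR = 9   # inclusive, UTC
DEPLOY_END_HOUR = 16    # exclusive, UTC

def deploy_allowed(now: datetime | None = None) -> bool:
    now = now or datetime.now(timezone.utc)
    is_weekday = now.weekday() < 5  # Monday=0 .. Friday=4
    in_hours = DEPLOY_START_HOUR <= now.hour < DEPLOY_END_HOUR
    return is_weekday and in_hours

if __name__ == "__main__":
    if not deploy_allowed():
        raise SystemExit("Outside the deployment window; try again during working hours (UTC).")
    print("Inside the deployment window; proceeding with the deploy.")
```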

Prod deploy

  1. Roll out via canary (e.g. a small percentage of traffic at first, ramping up as we gain confidence; see the sketch below)
  2. Monitor key health metrics
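
A canary rollout gate might look roughly like the loop below. set_traffic_percent() and error_rate() are hypothetical hooks into your router and monitoring stack, not real APIs, and the step sizes and threshold are arbitrary examples:

```python
# Sketch of a canary rollout: ramp traffic in steps and roll back if a key
# health metric regresses. The router/monitoring hooks are placeholders.
import time

CANARY_STEPS = [1, 5, 25, 50, 100]  # percent of traffic sent to the new version
ERROR_RATE_THRESHOLD = 0.01         # abort if more than 1% of requests fail
SOAK_SECONDS = 300                  # watch each step for five minutes

def set_traffic_percent(percent: int) -> None:
    """Placeholder: tell the router/load balancer to shift this much traffic."""
    print(f"Routing {percent}% of traffic to the canary")

def error_rate() -> float:
    """Placeholder: query monitoring for the canary's current error rate."""
    return 0.0

def rollout() -> None:
    for percent in CANARY_STEPS:
        set_traffic_percent(percent)
        time.sleep(SOAK_SECONDS)
        if error_rate() > ERROR_RATE_THRESHOLD:
            set_traffic_percent(0)  # roll back: send all traffic to the stable version
            raise SystemExit(f"Canary aborted at {percent}%: error rate too high")
    print("Canary promoted to 100% of traffic")

if __name__ == "__main__":
    rollout()
```

The important design choice is that a rollback path exists at every step, so a failed gate leaves production on the stable version while the key health metrics from the next item decide whether to keep ramping.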

Release

  1. Similar to First Deploy, but used for libraries that don’t deploy services.

Join the Conversation

If this sort of thing is as exciting to you as it is to us, we’d love to see you at the next sig-interoperability meeting. Meetings are held on the first and third Thursdays at 15:00 UTC. For more information, check us out on GitHub.

Thank you to the following folks who contributed to this discussion: