Who Let the AI Dogs Out?

April 29, 2026 | Blog, Community

Contributed by Kate Scarcella, Cybersecurity Architect

Let me start somewhere familiar. There’s a song: “Who Let the Dogs Out?” It’s funny because it captures a feeling we all recognize: something got loose… and now everyone is reacting.

But this scenario has two parts:

  1. The dogs didn’t get out on their own
  2. We opened the gate

Most of us have had a puppy, or at least know what that’s like. You don’t bring a puppy home and say:

  •  Let’s see if it bites
  •  Let’s see if it chews the furniture
  •  Let’s see if it runs into the street

You know it will, so you train it. You set boundaries. You shape behavior early, because you know something simple: Behavior compounds. And if that puppy runs into the street chasing a squirrel and gets hurt, we don’t stand there and say: “That was a bad dog.”

We say:

  • Where was the owner?
  • Why wasn’t there a leash?

Because we understand something instinctively: it's not about blame; it's about responsibility.

AI Puppies Running Loose

Now, let’s talk about AI.

Right now, we are in the “puppy phase” of AI.

  • It’s learning
  • It’s adapting
  • It’s responding to signals we don’t fully understand

And what are we doing?

We’re releasing capability first…and thinking about constraints later. We’re letting it run…and hoping it behaves.

I hope this sounds familiar, because it has happened before with cybersecurity.

We built systems with:

  1. Hardcoded credentials
  2. Static identities 
  3. Flat networks
  4. Implicit trust

And we said: We’ll secure it later.

Well, later came…at scale. And those early decisions didn't stay small; they propagated like gremlins (or IoT devices) and became systemic risk.
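The "secure it later" pattern above is easy to recognize in code. A minimal sketch, in Python with an illustrative variable name (`DB_PASSWORD` is an assumption, not from any specific system): the fix is not removing the hardcoded secret after it has shipped, it is designing the system so the secret never lives in the code at all.

```python
import os

# Anti-pattern: the "secure it later" credential, baked into the source.
# Once shipped, it propagates to every clone, log, and backup.
#   DB_PASSWORD = "hunter2"   # the puppy is already in the street

def get_db_password() -> str:
    """Read the credential from the runtime environment instead.

    DB_PASSWORD is an illustrative name; real deployments typically
    inject short-lived secrets from a secrets manager at deploy time.
    """
    password = os.environ.get("DB_PASSWORD")
    if password is None:
        # Fail fast rather than silently falling back to a default.
        raise RuntimeError("DB_PASSWORD is not set; refusing to start")
    return password
```

The point is the shape of the design, not the specific mechanism: the boundary (where the secret comes from, and what happens when it is missing) is decided up front.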

Secure AI Now

When we talk about “securing AI,” I want to be very clear: This is not about putting a leash on AI after the fact.

Because that’s not how complex systems work. You don’t chase behavior once it’s already running loose. You design the conditions that shape behavior before it ever emerges.

So yes!

Put the dog on the leash.

But understand what that really means in our world:

  • Identity boundaries
  • Data provenance
  • Model transparency
  • Governance that is built in…not bolted on

You design them right from the start…secure by design.
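To make "data provenance, built in…not bolted on" concrete, here is a minimal sketch of a provenance gate a pipeline could run before loading any model. Everything here is hypothetical for illustration: the model filename, the allow-list, and the `verify_model` helper are assumptions, not part of any existing tool.

```python
import hashlib
from pathlib import Path

# Hypothetical allow-list mapping approved model artifacts to the
# SHA-256 digests published by whoever trained and signed off on them.
APPROVED_MODELS = {
    "sentiment-v2.onnx":
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def verify_model(path: Path) -> bool:
    """Refuse to load any artifact whose digest is not on the allow-list.

    The pipeline calls this *before* the model ever runs -- the
    constraint is designed in, not chased after deployment.
    """
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return APPROVED_MODELS.get(path.name) == digest
```

A real pipeline would use signed attestations rather than a dictionary in source, but the design principle is the same: the gate exists before the behavior does.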

This isn’t about blaming the builders, the researchers, or the community. It’s about ownership. We built this. We are shaping it…right now. And the conditions we set today will define the behavior we live with tomorrow.

Maybe the better question isn’t: 

Who let the dogs out?

Maybe the question is: 

Are we willing to take responsibility for the environment we’ve created…before it’s too late to shape it?

That’s the moment we’re in. Not fear, not panic, but ownership. What a brilliant idea.

Join the effort to shape secure AI before the gate is fully open.

The next chapter of software security is being written right now, and it’s not happening inside a single company or tool. It’s happening in the open. The CI/CD Cybersecurity SIG under the Continuous Delivery Foundation is bringing together practitioners, researchers, and platform engineers to define what secure-by-design AI in delivery pipelines actually looks like in practice.

We’re launching a new AI Security chapter in the CI/CD Cybersecurity Guide, and we’re looking for contributors who want to help answer the hard questions:

  • How do we establish identity boundaries for agents?
  • What does provenance look like for models and prompts?
  • Where should policy live in agent-enabled pipelines?
  • How do we prevent “secure later” from becoming our next legacy problem?

If you care about how AI behaves inside real software delivery systems—not just theory, but implementation—this is the place to help shape the guidance the community will rely on. 🐕‍🦺🔐

Join the SIG. Bring your perspective. Help write the guardrails before the puppies grow up.