Still Shipping With Docker? That Might Not Mean What It Used To

Christie Pronto
June 25, 2025

There’s a moment in every technical team’s journey when a tool stops being a choice and starts becoming a habit.

And habit, in software, is where most complexity hides. Docker was built to solve a real problem—one that used to derail entire teams.

But in 2025, when tooling options have exploded and dev stacks have evolved, sticking with Docker without reevaluating its purpose can cost more than most teams realize.

Why Docker Rose—and Why It’s Starting to Slip

Docker became essential because it made setup simple, predictable, and portable. In the PHP 5.6 and Node 10 era, when runtime environments were brittle and inconsistent, Docker was a breakthrough.

It flattened configuration problems, reduced dev/prod drift, and made sharing applications more realistic across machines. It promised speed and predictability.

And for a time, it delivered.

But standard tools age, and defaults rot if left unexamined. What was once elegant has, for many teams, become cumbersome.

Simpler frontend tooling, streamlined backend ecosystems, and robust version managers have eliminated much of the need for heavyweight containers—yet Docker remains embedded in dev environments not because it’s still the best fit, but because no one has questioned it in years.

The shift from essential to habitual didn’t happen all at once. Teams inherited Dockerfiles. Developers built CI pipelines around containers because someone before them did.

Docs were copied, adapted, and wrapped in more Docker. Local builds grew slower. Debugging got weirder.

Tooling got more complex—not because it needed to be, but because no one wanted to be the one to break it.

Then came 2021. Docker changed its licensing. Docker Desktop became a paid product for businesses with more than 250 employees or more than $10 million in annual revenue.

The cost wasn’t catastrophic. But it was enough to trigger introspection. If this tool is so central, why don’t we understand it better?

Why are we paying for something we haven’t consciously chosen?

That price tag exposed what had been brewing underneath: Docker wasn’t helping every team—it was just familiar.

That realization wasn’t isolated to small dev shops. In late 2020, Kubernetes announced it would deprecate dockershim, the compatibility layer that let it use Docker as a container runtime.

Their reasoning? Docker Engine was never designed to be driven by an orchestrator: it doesn’t implement Kubernetes’ Container Runtime Interface (CRI), so the project had to maintain a special shim just to keep it working at production scale.

That shift forced a major reevaluation in enterprise infrastructure. Many teams transitioned to containerd or CRI-O, leaner alternatives that solved the same problem with fewer layers.

Red Hat, too, moved toward Podman in its official documentation—an alternative that runs containers without a daemon and plays better with systemd and rootless environments.
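
What “daemonless” means in practice is easy to show. A minimal sketch, using standard Podman CLI commands (nothing here is drawn from Red Hat’s docs):

    # Podman's CLI is deliberately Docker-compatible, but there is no
    # background daemon: the container runs as an ordinary child process
    # of your user, and rootless operation is the default for non-root users.
    podman run --rm -p 8080:80 nginx:alpine

    # Many teams start a migration with nothing more than an alias:
    alias docker=podman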

The Hidden Cost of Familiarity

We’ve seen what happens when familiarity becomes technical debt. One team’s local builds stretched past 20 minutes, entirely because its Docker images were overbuilt and full of legacy scripts.

Another team hired a DevOps engineer just to manage Dockerfiles for microservices that no longer needed them. In yet another case, developers stalled releases, not due to bugs, but because they were afraid to touch or rebuild containers they didn’t fully understand.

This friction doesn’t show up in feature tickets.

It shows up in morale.

It appears in the slow drip of velocity lost to container rebuilds, permissions errors, mysterious config overrides, and general hesitation. Teams don’t say “Docker is the problem.”

They say “I don’t know why this takes so long,” or “Can someone else handle this build?” That’s not a tooling issue—it’s a trust issue.

We’ve helped teams trace their slowdowns, and Docker is often a common thread—not because it’s broken, but because it’s outdated for the problem they’re solving.

It's used out of fear, not strategy. In place of clarity, it adds complexity. And when complexity is baked into delivery pipelines, teams suffer.

Where Docker Still Works—and Where It Doesn’t

Docker still has its place.

We’ve seen it work well in:

  • Legacy applications with outdated dependencies that need isolation (see the sketch after this list)
  • Large-scale CI pipelines where reproducibility and sandboxing are key
  • Microservice orchestration where container boundaries are meaningful
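
For that first case, a minimal sketch of what a still-justified Docker setup looks like (the image tag and paths here are illustrative assumptions, not from any real project):

    # Isolate a legacy PHP 5.6 app whose runtime no longer installs
    # cleanly on a modern machine; this is where a container earns its keep.
    docker run --rm -p 8080:80 -v "$PWD/src":/var/www/html php:5.6-apache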

But those cases are specific. They’re not every project. They’re not every team. And they’re rarely greenfield.

In modern development workflows—especially small-to-mid-sized web apps, internal tools, and backend APIs—Docker is often a distraction.

Local environments can be cleanly managed with tools like Volta, NVM, or pyenv. Native installs and package managers have gotten smarter.

Platforms have embraced portability natively. Docker adds a layer of complexity that used to solve real problems—but often doesn’t anymore.
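
As a rough sketch of what that per-project pinning looks like (version numbers are illustrative assumptions):

    # Volta writes the pinned Node version into package.json:
    volta pin node@20

    # nvm reads an .nvmrc file checked into the repo
    # (run "nvm install" first if that version isn't installed):
    echo "20" > .nvmrc
    nvm use

    # pyenv records the version in a .python-version file:
    pyenv install 3.12.3
    pyenv local 3.12.3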

The Trust Layer Is What Breaks

When teams cling to Docker out of habit, they don’t just inherit tool debt—they inherit fear. We’ve worked with developers who can’t confidently explain their own build scripts.

We've seen onboarding documents that require 20 steps—15 of which involve “getting Docker working.” We’ve seen teams where nobody knows what’s inside the base image anymore.

This isn’t theoretical. It’s operational. It’s the emotional drag that shows up when developers spend Friday afternoon rebuilding a container because the base image changed upstream and now everything is broken.

It’s the resignation in the Slack message: “Docker’s being weird again, anyone else seeing this?”

The cost is real: hesitation, delays, workarounds, and long-term erosion of team trust in their own systems.

Better Questions, Better Systems

When we help clients clean up legacy systems or optimize dev flows, we don’t start by ripping out Docker.

We start by asking:

  • What problem is Docker solving here?
  • Would we choose this tool again today?
  • If we removed it, what actually breaks?

And most importantly: Do our developers feel confident touching this setup—or do they tiptoe around it?

The answers reveal whether Docker is still the right tool—or just the most familiar one. And when the answer is inertia, the next step is clear.

What This Says About the Way You Build

The tools we choose signal what we value. Fast onboarding? Clean deployment? Low-friction development? Then our tools should reflect that. Docker’s original value was clarity. Its new role, in many cases, is fog.

We believe that business is built on transparency and trust. We believe that good software is built the same way.

That means every piece of a dev environment should serve a purpose. It should make the work lighter—not heavier. It should give confidence—not caution.

If you’re still using Docker, ask the better question: why still? And if the answer is anything other than clarity, maybe it’s time to change.

Defaults aren’t sacred. Tools aren’t forever. And the best stacks are the ones you can explain, rebuild, and trust.

Build for that.
