Maybe You Don't Need Kubernetes

Kubernetes was built on decade-old assumptions, and its networking model still reflects that history. Modern identity-first networking approaches expose how much operational complexity we've normalized — and offer a path to simpler systems with lower human cost.

Julian Mulla

Kubernetes was already built on decade-old ideas when it launched, and its networking model still reflects that history. As a result, a significant portion of modern DevOps work exists to manage inherited complexity rather than to express business intent directly. The expression cost — the effort required to translate simple operational goals into working infrastructure — has become wildly disproportionate to the intent.

Modern, identity-first networking approaches don’t replace Kubernetes — but they do expose how much operational machinery we’ve normalized to compensate for outdated assumptions. The real payoff isn’t cheaper cloud bills. It’s lower human cost, simpler systems, and organizations that scale without burning people out.

For large, heterogeneous organizations with strong platform teams, Kubernetes remains a powerful abstraction. But for a growing number of teams, the complexity it carries is no longer justified by the problems they actually have.

This isn’t a theoretical argument. It’s a conclusion drawn from operating real systems.


Kubernetes Was “Old” the Day It Arrived

On paper, Kubernetes looks relatively young. It was announced in June 2014 and reached version 1.0 in July 2015, which makes the open-source project roughly a decade old.

But Kubernetes didn’t start from a blank page.

Its core ideas came directly from Google’s internal systems — Borg, dating back to the early 2000s, and Omega, developed in the early 2010s. By the time Kubernetes was released, Google had already spent ten to fifteen years refining declarative desired state, reconciliation loops, centralized scheduling, label-based placement, and large-scale cluster networking assumptions.

The code was new. The architecture was not. It was mature, opinionated, and shaped by a very specific environment. Kubernetes wasn’t experimental. It was exported.


Kubernetes Networking Reveals Its Era Most Clearly

Nowhere is Kubernetes’ age more visible than in its networking model.

At its core, Kubernetes assumes flat, trusted networks; IP addresses as primary identity; universal pod-to-pod reachability; and security layered on afterward via policy and tooling. These assumptions were reasonable in Google’s private datacenters and in the infrastructure landscape of 2013.

They are increasingly mismatched with a world defined by zero trust, ephemeral workloads, hybrid and multi-cloud deployments, constant lateral movement risk, and identity-aware security expectations.

This mismatch is why Kubernetes networking feels complex — and why so much engineering effort exists solely to compensate for it.


Why So Much DevOps Work Exists Today

A large portion of modern DevOps work exists to translate simple intent into arcane machinery.

Teams invest enormous effort in CIDR planning, kube-proxy behavior, iptables and IPVS tuning, NetworkPolicy debugging, service mesh operation, multi-cluster networking workarounds, and a steady accumulation of VPNs, bastions, gateways, and sidecars.

The business intent underneath this work is real: secure communication, service routing, access control. But the expression cost is wildly disproportionate to the intent.

💡 The Expression Cost Problem

Imagine telling a frontend engineering team that making one service call another requires understanding three layers of proxy configuration, a custom resource definition, and a CIDR allocation spreadsheet. In most engineering disciplines, that would be considered pathological. In Kubernetes networking, it’s Tuesday.

This isn’t because Kubernetes is “bad.” It’s because it inherited assumptions that no longer align with how systems are expected to operate.


What Identity-First Networking Changes

I use Tailscale in my own infrastructure, and the impact was immediate. Before Tailscale, I maintained bastion hosts, managed NAT gateways across environments, and spent real time debugging connectivity issues that had nothing to do with the services themselves — just the plumbing between them. After adopting Tailscale, those components disappeared. Not simplified. Removed. Entire categories of configuration I had treated as unavoidable turned out to be artifacts of the networking model, not requirements of the problem.

The shift looks like this:

| Traditional Kubernetes | Identity-First Networking |
| --- | --- |
| IP = identity | Identity = identity |
| Flat cluster network | Zero-trust overlay |
| CIDRs and ports | Service-level policy |
| Perimeter security | Peer authentication |

Instead of asking “What IP can talk to what IP?”, you ask “What service is allowed to talk to what service?”

That distinction matters because it removes complexity at the foundation rather than piling more infrastructure on top.
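In Tailscale's case, that question is answered directly in the ACL policy file. A minimal sketch in HuJSON, with tag and group names that are purely illustrative:

```json
// Tailscale ACL policy (HuJSON). Tag names here are illustrative,
// not from any real deployment.
{
  "tagOwners": {
    "tag:payments": ["autogroup:admin"],
    "tag:orders":   ["autogroup:admin"]
  },
  "acls": [
    // payments may reach orders on port 8080; everything else
    // is denied by default.
    {"action": "accept", "src": ["tag:payments"], "dst": ["tag:orders:8080"]}
  ]
}
```

The policy names services, not addresses. When a payments node reschedules and gets a new IP, the rule does not change, because the rule never mentioned an IP.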

Note

Service meshes like Istio don’t exist because engineers enjoy complexity. They exist because Kubernetes’ base networking model makes it extraordinarily hard to express identity, security, and intent without building an entire parallel control plane. Service meshes are not an accident — they are the logical cost of retrofitting identity onto an IP-first system.


A Thought Experiment: Kubernetes Designed Today

If you were designing Kubernetes networking from scratch today, would you make IP addresses the foundation of service identity? Almost certainly not.

Once you ask that question, several consequences follow naturally.

Pod IPs would stop being first-class. IPs would still exist, but they wouldn’t define identity or policy. Workloads would authenticate as themselves, not as addresses. A NetworkPolicy that today looks like this —

ingress:
  - from:
      - ipBlock:
          cidr: 10.244.3.0/24
    ports:
      - protocol: TCP
        port: 8080

— would instead read like this:

allow:
  - from: service/payments
    to: service/orders

The difference is not cosmetic. The first version forces you to know the CIDR of the source workload, keep it current as pods reschedule, and hope that no other workload shares that range. The second version expresses intent. One is plumbing. The other is policy.
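To be fair, NetworkPolicy can already select sources by label rather than CIDR. But in most CNI implementations those selectors are still resolved into pod-IP rules before enforcement, so identity remains a convenience compiled down onto IP, not the enforcement primitive itself:

```yaml
# Label-based ingress: closer to intent than an ipBlock, but the CNI
# still translates these selectors into pod-IP rules under the hood.
ingress:
  - from:
      - podSelector:
          matchLabels:
            app: payments
    ports:
      - protocol: TCP
        port: 8080
```

The label form is better plumbing, but it is still plumbing: the policy's meaning depends on a translation layer the operator must trust and debug.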

Services would become simpler. Stable identity removes the need for virtual IPs, kube-proxy, and much of today’s service abstraction machinery.

Service meshes would become optional rather than inevitable. Many mesh features — mutual TLS, encrypted transport, identity-based routing — exist primarily because base networking lacks native identity.
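For concreteness, this is roughly what enabling mesh-wide mutual TLS looks like in Istio today: a custom resource, plus the sidecars and control plane behind it, deployed largely to give workloads the identity the base network lacks.

```yaml
# Istio PeerAuthentication: require mTLS for every workload in the mesh.
# An entire retrofitted identity layer sits behind this one resource.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # mesh-wide scope when placed in the root namespace
spec:
  mtls:
    mode: STRICT
```

In a network where workloads authenticate as themselves from the start, this entire layer has nothing left to add.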

Multi-cluster networking would stop being “advanced.” Clusters would become scheduling boundaries, not isolated networking islands.


What This Means for DevOps

DevOps doesn’t disappear — it changes shape.

Kubernetes-era DevOps evolved to manage complexity: plumbing, glue, workarounds, and failure modes. Identity-first systems remove entire classes of problems. That means fewer DevOps engineers are needed — but the ones you need must be more senior.

The work shifts away from YAML sprawl and constant firefighting toward system design, policy definition, and guardrails. Less time is spent keeping the pipes from leaking, and more time designing systems that don’t require so many pipes in the first place.


What This Means for Cost

Some savings show up on the cloud bill: fewer load balancers, NAT gateways, VPNs, and less service mesh overhead. Those savings are real, but they’re not the main event.

The dominant cost is human.

The Real Cost

A senior SRE or infrastructure engineer in a major metro costs $200,000 or more per year. A managed Kubernetes cluster runs a few thousand dollars a month. Most organizations are not bottlenecked on compute — they’re bottlenecked on the people who keep the compute comprehensible.

One major outage caused by networking misconfiguration can cost more in engineer-hours, lost revenue, and recovery effort than months of infrastructure spend.

Systems that are simpler to reason about reduce cognitive load, shorten debugging sessions, lower on-call fatigue, reduce incident frequency, and speed recovery when things go wrong. Those gains compound quietly, and they matter far more than line items. If your system needs Kubernetes primarily to manage its own complexity, that is a signal worth listening to.


The Core Insight

Kubernetes made scaling infrastructure cheaper — but scaling the organizations that run it more expensive.

The next generation of tooling isn’t trying to replace Kubernetes outright. It’s trying to shed historical complexity that was reasonable in its era but that we no longer need to accept as inevitable.

That conclusion isn’t ideological. It’s operational. It’s what I’ve seen in my own work — and it’s what modern infrastructure should be optimizing for.