Podman is having a moment. Every "Docker alternatives" article puts it at the top. Red Hat ships it instead of Docker. Developers love the daemonless architecture. And they're right to — removing the Docker daemon was a genuine improvement.
But there's a conversation nobody's having: Podman uses the exact same container runtime as Docker, hits the exact same kernel boundaries, and has the exact same fundamental security limitation. The daemon was never the hard problem.
What Podman Actually Runs
Let's trace the execution path when you run a container:
Docker
docker run nginx
→ dockerd (root daemon, listening on /var/run/docker.sock)
→ containerd (container lifecycle manager)
→ runc (OCI runtime — creates the actual container)
→ Linux kernel: namespaces + cgroups + seccomp
Podman
podman run nginx
→ conmon (per-container monitor, small C binary)
→ crun or runc (OCI runtime — creates the actual container)
→ Linux kernel: namespaces + cgroups + seccomp
See the bottom of both stacks? Identical. Both call an OCI-compliant runtime (runc or crun), which calls the same kernel APIs to create the same namespaces, attach the same cgroups, and apply the same seccomp filters.
Podman's innovation was architectural: replace the monolithic daemon with a fork-exec model. One conmon process per container instead of one dockerd for everything. If conmon crashes, only that container is affected. If dockerd crashes, every container is orphaned. That matters.
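The fork-exec model is easy to demonstrate with ordinary processes: each workload gets its own directly-supervised child, so one crash leaves the siblings untouched. A minimal Python sketch, with plain `sleep` processes standing in for containers (an illustration of the process model, not Podman's actual code path):

```python
import subprocess
import time

# Spawn three independent "workloads", each a directly supervised child
# process (fork-exec), rather than children of one shared daemon.
workloads = [subprocess.Popen(["sleep", "30"]) for _ in range(3)]

workloads[0].kill()   # simulate one workload (or its monitor) crashing
time.sleep(0.2)       # give the kernel a moment to deliver the signal

# Only the killed workload is gone; its siblings keep running.
status = [p.poll() is None for p in workloads]
print(status)  # [False, True, True]

for p in workloads:   # clean up the survivors
    p.kill()
```

A shared-daemon design inverts this: kill the one parent and every child is orphaned at once, which is exactly the dockerd failure mode described above.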
But it doesn't change what a container is.
The Shared Kernel Problem
Here's the part that matters for security: every container — Docker or Podman — shares the host's kernel. The kernel is the security boundary, and it's the same kernel.
This isn't theoretical. Let's count the kernel attack surface shared between a container and its host:
- ~400 syscalls exposed to containers (even with seccomp, a typical allowlist permits 200+)
- /proc and /sys filesystems (partially masked, but information leakage remains)
- Kernel networking stack (shared between host and container namespaces)
- UID/GID mapping (user namespaces help, but root-in-container → root-on-host is still the default for most workloads)
- Kernel vulnerabilities (CVE-2022-0185, CVE-2022-0847 "Dirty Pipe", CVE-2024-1086 — all container escapes via kernel bugs)
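The seccomp numbers above are straightforward to check yourself, because Docker and Podman consume the same JSON profile format: a default action plus rules naming the syscalls to allow. A sketch using a toy profile with the same shape (the real default profile in the moby repository lists 300-plus names):

```python
import json

# Toy stand-in for a seccomp profile in the Docker/Podman JSON format.
profile = json.loads("""
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "syscalls": [
    {"names": ["read", "write", "openat", "close"], "action": "SCMP_ACT_ALLOW"},
    {"names": ["clone"], "action": "SCMP_ACT_ALLOW"}
  ]
}
""")

# Collect every syscall name that an ALLOW rule exposes to the container.
allowed = {
    name
    for rule in profile["syscalls"]
    if rule["action"] == "SCMP_ACT_ALLOW"
    for name in rule["names"]
}
print(len(allowed))  # 5 for this toy profile
```

Run the same counting logic against the real default profile and you land in the 300s: every one of those names is a kernel entry point reachable from inside the container, on either runtime.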
When CVE-2024-1086 dropped (a netfilter use-after-free), it didn't matter whether you ran Docker or Podman. The exploit path went through the kernel — below the container runtime entirely. Both were equally vulnerable.
Podman being daemonless doesn't help here. The daemon wasn't the kernel. Removing it didn't add a security boundary — it removed a management layer that happened to also be a liability.
What Podman Doesn't Fix
Beyond the shared kernel, Podman inherits Docker's entire storage and distribution model:
1. Layer-based storage (overlay2)
Both Docker and Podman use the same union filesystem approach. Layers are additive: if you delete a 500MB file in a later layer, the image is still 500MB bigger than it needs to be. And deduplication stops at the layer boundary — if nginx:alpine and node:alpine share a base layer, the storage driver stores that layer once, but there is no file- or block-level deduplication across images.
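The additive-layer behavior can be modeled in a few lines. Here each layer is a dict of path to size in MB, with `None` standing in for a whiteout entry; this is a toy model of the semantics, not overlay2's actual on-disk format:

```python
# Toy model of additive image layers: later layers can hide a file from
# the merged view, but the bytes in earlier layers are still stored.
layers = [
    {"/usr/bin/app": 20, "/data/model.bin": 500},  # base layer
    {"/data/model.bin": None},                     # "deletes" the file
]

def merged_view(layers):
    """Compute what the container actually sees after stacking layers."""
    view = {}
    for layer in layers:
        for path, size in layer.items():
            if size is None:
                view.pop(path, None)  # whiteout: hide, don't reclaim
            else:
                view[path] = size
    return view

visible = sum(merged_view(layers).values())
stored = sum(s for layer in layers for s in layer.values() if s is not None)
print(visible, stored)  # 20 520
```

The container sees 20MB of files, but the image still ships and stores 520MB — the "deleted" 500MB never left.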
2. Docker Hub dependency
When you type podman pull nginx, where does it go? Docker Hub. The default registry, the image naming convention, the tag format — all Docker's. Podman is operationally independent but culturally dependent on Docker's ecosystem.
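The naming dependency shows up in how clients expand a short name before pulling. A simplified sketch of the Docker-style resolution rule (registry hosts with ports are not handled here, and real Podman consults registries.conf rather than hardcoding a default):

```python
def resolve_image_ref(ref, default_registry="docker.io", default_ns="library"):
    """Expand a Docker-style short name: 'nginx' becomes
    'docker.io/library/nginx:latest'. Simplified: no port support."""
    name, _, tag = ref.partition(":")
    tag = tag or "latest"
    parts = name.split("/")
    if len(parts) == 1:
        # Bare name: assume the default registry and 'library' namespace.
        name = f"{default_registry}/{default_ns}/{name}"
    elif "." not in parts[0] and parts[0] != "localhost":
        # First component is not a registry host: prepend the default.
        name = f"{default_registry}/{name}"
    return f"{name}:{tag}"

print(resolve_image_ref("nginx"))                 # docker.io/library/nginx:latest
print(resolve_image_ref("quay.io/podman/hello"))  # quay.io/podman/hello:latest
```

Even the `library/` namespace baked into that expansion is a Docker Hub convention — the short-name grammar itself assumes Docker's ecosystem.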
3. Containers-only thinking
Podman can run containers. That's it. There's no VM mode for when you need hardware-level isolation. No way to take a workload that's currently a container and say "this needs to be a VM now" without rebuilding everything. No content-addressed storage, no scale-to-zero, no workload lifecycle management.
The Daemonless Advantage Is Real But Narrow
Credit where it's due. Podman's daemonless architecture provides:
- No single point of failure. A process crash affects one container, not all of them.
- No root socket. Docker's /var/run/docker.sock is a well-documented privilege escalation vector. Podman eliminates it.
- Rootless by default. User namespaces allow running containers without root (Docker supports this too, but it's not the default).
- Fork-exec model. Better for systemd integration and process management.
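Rootless operation comes down to UID translation. Each line of /proc/&lt;pid&gt;/uid_map is a triple of (container start, host start, length), and the kernel does simple range arithmetic. A sketch of that arithmetic, with UID 1000 as an assumed unprivileged user:

```python
def container_to_host_uid(uid, uid_map):
    """Translate a container UID to a host UID using uid_map entries
    of the form (container_start, host_start, length)."""
    for container_start, host_start, length in uid_map:
        if container_start <= uid < container_start + length:
            return host_start + (uid - container_start)
    raise ValueError(f"UID {uid} is unmapped")

# Rootful default: an identity map, so container root IS host root.
print(container_to_host_uid(0, [(0, 0, 4294967295)]))  # 0

# Rootless: container root maps to the unprivileged user (1000 here),
# and the remaining UIDs come from the user's subordinate range.
print(container_to_host_uid(0, [(0, 1000, 1), (1, 100000, 65536)]))  # 1000
```

In the rootless case an escape lands you on the host as UID 1000, not root — a real mitigation, but still a mitigation inside the same shared kernel.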
These are meaningful improvements. If your threat model is "Docker daemon compromise" or "accidental socket exposure in CI," Podman solves your problem.
But if your threat model is "container escape via kernel vulnerability" — which is the threat model for multi-tenant hosting, regulated industries, and anyone running untrusted code — Podman offers the same protection as Docker: none.
What Actually Solves the Hard Problem
The shared kernel problem has exactly one solution: don't share the kernel. Run each workload with its own kernel in a virtual machine.
The traditional objection was "VMs are slow." That was true when VMs meant VMware or full QEMU — 30-second boot times, gigabytes of overhead, separate management plane. But microVM technology has changed the math:
- Firecracker (AWS): ~125ms boot, ~30MB overhead
- Cloud Hypervisor (Intel): Similar profile
- VoltVisor (ArmoredGate): Sub-millisecond boot, sub-32MB footprint
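For a sense of scale, a minimal Firecracker machine definition fits in a dozen lines of JSON — roughly the shape shown below, with placeholder paths (consult the Firecracker documentation for the authoritative schema):

```json
{
  "boot-source": {
    "kernel_image_path": "vmlinux",
    "boot_args": "console=ttyS0 reboot=k panic=1"
  },
  "drives": [
    {
      "drive_id": "rootfs",
      "path_on_host": "rootfs.ext4",
      "is_root_device": true,
      "is_read_only": false
    }
  ],
  "machine-config": {
    "vcpu_count": 1,
    "mem_size_mib": 128
  }
}
```

That's the entire management surface for a hardware-isolated workload: a kernel, a root disk, and a resource budget.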
The real question isn't "containers or VMs" anymore. It's "why can't I have both, and toggle between them based on the workload's actual security requirements?"
A container for your nginx reverse proxy (kernel exploit doesn't matter if the container only serves static files). A VM for your multi-tenant code execution engine (kernel isolation is non-negotiable). Same management tools, same storage, same networking. Just a different isolation boundary.
That's the direction the industry needs to go. Podman took one step — removing the daemon. The next step is removing the assumption that everything has to be a container.