After updating Docker inside an LXC container on Proxmox VE 8, all Docker Compose stacks suddenly failed to start. The error was consistent and occurred across multiple containers:
OCI runtime create failed: runc create failed:
open sysctl net.ipv4.ip_unprivileged_port_start: permission denied
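Before digging into the cause, it is easy to confirm the failing step by hand. A minimal Python check (a hypothetical diagnostic of my own, not part of Docker or runc) simply tries to open the same procfs file that runc touches:

```python
# Try to read the sysctl file that runc opens during container setup.
# On an affected Proxmox VE 8 LXC container, the open() itself fails
# with EACCES, which is exactly the "permission denied" in the error.
def check_sysctl_access(path="/proc/sys/net/ipv4/ip_unprivileged_port_start"):
    try:
        with open(path) as f:
            return "readable: " + f.read().strip()
    except PermissionError:
        return "permission denied"
    except FileNotFoundError:
        return "sysctl not present"

print(check_sysctl_access())
```

On an unaffected host this prints the current value (typically 1024); inside a broken LXC container it reports the same permission failure runc runs into.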
This issue turned out to be a combination of changes introduced in newer Docker/runc versions and the way LXC isolates kernel features on Proxmox VE 8. In this article, I will explain the exact technical cause and why the problem disappears completely after upgrading to Proxmox VE 9 (Debian 13 "Trixie").
What the Error Actually Means
Modern versions of Docker (especially Docker 25+ with runc 1.2.x and containerd 2.0) started accessing a kernel sysctl:
net.ipv4.ip_unprivileged_port_start
This sysctl defines the lowest port number that unprivileged (non-root) processes are allowed to bind to; ports below it (1024 by default) require root privileges or the CAP_NET_BIND_SERVICE capability. Accessing it works fine on bare metal, in VMs, and on modern kernels, but not inside LXC containers on Proxmox VE 8.
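What the sysctl actually gates can be demonstrated in a few lines of Python. This is a sketch only: the outcome depends on who runs it and on the current value of the sysctl.

```python
import errno
import socket

# Attempt to bind a "privileged" port, i.e. one below
# net.ipv4.ip_unprivileged_port_start (default 1024).
# As a non-root user without CAP_NET_BIND_SERVICE this raises EACCES;
# as root (or with the sysctl lowered to 0) it succeeds.
def try_bind(port=80):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind(("127.0.0.1", port))
        return "bound"
    except OSError as e:
        return errno.errorcode.get(e.errno, str(e.errno))
    finally:
        s.close()

print(try_bind())
```

This is why rootless and user-namespaced container runtimes care about the sysctl at all: it decides whether their unprivileged processes may take low ports.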
Why? Because LXC containers do not run their own kernel. They share the Proxmox host kernel. That means:
- The container sees the host's sysctl tree
- It is not allowed to read or write host-level (non-namespaced) sysctls
- LXC AppArmor and cgroup policies intentionally block such access
As a result, the very first initialization step of runc fails and Docker aborts container startup.
Why This Behavior Is New
Older versions of Docker did not query this particular sysctl at all. Starting with Docker 25+, Docker adopts a stricter approach to port allocation, and runc now checks this sysctl regardless of whether the container actually needs privileged ports.
This means the issue appears suddenly after updating Docker — even if the underlying LXC configuration never changed.
Why It Only Happens in LXC (and Never in VMs)
In a VM, Docker interacts with a fully isolated kernel. Sysctl access is always legal inside a VM.
In an LXC container, Docker interacts with the host kernel, not its own. That makes sysctl access subject to:
- LXC AppArmor policies
- cgroup v2 restrictions
- sysctl namespacing rules
Therefore, Docker inside an LXC environment behaves fundamentally differently from Docker inside a VM.
Why the Error Disappears Completely Under Proxmox VE 9
After upgrading the host to Proxmox VE 9 (Debian 13), the problem vanished instantly — without modifying anything inside the LXC container. The reason: multiple upstream components were updated in a way that resolves the incompatibility.
1. Newer Kernel (6.12 / 6.14 Series)
The Debian 13 kernel modernizes sysctl namespacing and network isolation. Missing or host-level sysctls no longer cause fatal errors.
2. Updated LXC 6.0.x Profiles
Proxmox VE 9 ships new LXC profiles with:
- Updated AppArmor rules
- Better cgroup v2 compatibility
- More accurate sysctl filtering
These changes prevent runc from attempting restricted sysctls in the first place.
3. Updated runc + containerd in Debian 13
These versions include an important change:
- runc checks whether a sysctl is namespaced before accessing it
Missing sysctls are now ignored gracefully rather than generating fatal errors.
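The behavioral difference can be illustrated with a simplified sketch. This is not runc's actual code, only the shape of the check:

```python
import os

def apply_sysctl(name: str, value: str) -> bool:
    """Write a sysctl the way newer runtimes do: skip entries the
    kernel/namespace does not expose instead of aborting.
    Simplified illustration, not runc's real implementation."""
    path = "/proc/sys/" + name.replace(".", "/")
    if not os.path.exists(path):
        # Old behavior: open() the missing path, fail hard, and abort
        # container creation. New behavior: skip it gracefully.
        return False
    with open(path, "w") as f:
        f.write(value)
    return True

# A sysctl that does not exist is now ignored instead of fatal:
print(apply_sysctl("net.ipv4.no_such_sysctl_example", "1"))  # prints False
```

The real runc logic also has to handle the LXC case where the file exists but is blocked by AppArmor, which is why the namespacing check matters and not just existence.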
4. Updated Proxmox LXC Templates
Proxmox VE 9 provides newer Debian/Ubuntu templates optimized for Docker usage, further reducing compatibility issues.
Temporary Workarounds for Proxmox VE 8 (Not Recommended)
There were several hacky ways to work around the problem on Proxmox VE 8, but none of them were safe or future-proof:
- Running the container with lxc.apparmor.profile = unconfined (dangerous)
- Allowing all device access via cgroup2 rules (breaks isolation)
- Pinning Docker to an older version (not sustainable)
- Overriding sysctl access inside Docker (inconsistent behavior)
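For reference, the unconfined workaround amounted to a single line in the container's config file on the Proxmox host (the container ID 101 below is just an example). It disables AppArmor confinement for the container entirely, which is exactly why it is dangerous:

```
# /etc/pve/lxc/101.conf  (container ID is hypothetical)
lxc.apparmor.profile = unconfined
```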
The only clean, long-term fix: Upgrade the host to Proxmox VE 9.
Conclusion
This issue demonstrates how tightly Docker, runc, cgroups, AppArmor, and the kernel interact — and why LXC is fundamentally different from VM virtualization.
After upgrading to Proxmox VE 9 (Debian 13), Docker inside LXC works reliably again because all underlying components were modernized and aligned with current OCI runtime behavior.
For production setups, my recommendation is:
- Use LXC + Docker only when you fully understand the isolation model
- For maximum compatibility, prefer VMs for heavy Docker workloads
- Upgrade Proxmox hosts proactively to avoid kernel/runtime mismatches
In short: Docker in LXC works again — and Proxmox VE 9 is the reason.
