Enable IPv6 for K3s

Mar 30, 2026

Running a Kubernetes cluster often means diving deep into networking configurations. Recently, I needed to enable IPv6 dual-stack support on my K3s cluster. What started as a standard configuration update quickly turned into an intense troubleshooting session involving a Flannel panic and a complete node reset. Here is how I diagnosed the missing configuration, set up dual-stack routing, and successfully recovered from the crash.

Diagnosing Missing IPv6 Support

The first sign of trouble was noticing that my pods lacked routable IPv6 addresses. You can easily verify this by inspecting the network interfaces inside a running pod.

Run this command to check your pod’s addresses: kubectl exec -it <your-pod-name> -- ip -6 addr show.

If you only see link-local addresses starting with fe80 and no other inet6 addresses, your K3s cluster does not have IPv6 dual-stack support enabled.
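The distinction is visible in the `scope` field: a routable address shows `scope global`, while link-local addresses show `scope link`. That makes the check easy to script with grep. Here is a sketch using an illustrative sample (the interface output below is made up to mimic an IPv4-only pod; against a real pod you would pipe `kubectl exec <your-pod-name> -- ip -6 addr show` into the same grep):

```shell
# Hypothetical `ip -6 addr` output from a pod with only a link-local address:
sample='2: eth0@if7: <BROADCAST,MULTICAST,UP> mtu 1450 state UP
    inet6 fe80::a8bb:ccff:fedd:eeff/64 scope link
       valid_lft forever preferred_lft forever'

# A pod with dual-stack networking would have at least one "scope global" line.
if printf '%s\n' "$sample" | grep -q 'scope global'; then
    echo "dual-stack: pod has a routable IPv6 address"
else
    echo "IPv4-only: nothing but link-local (fe80::/10) addresses"
fi
```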

Configuring Dual-Stack Routing

To fix this, you need to update the K3s service configuration to allocate IPv6 ranges for both your pods and services.

Edit your K3s service file located at /etc/systemd/system/k3s.service. Update the ExecStart line to include the necessary dual-stack CIDRs and Flannel settings.

Your configuration should look something like this:

ExecStart=/usr/local/bin/k3s \
    server \
    --cluster-cidr=10.42.0.0/16,2001:db8:42::/56 \
    --service-cidr=10.43.0.0/16,2001:db8:43::/112 \
    --flannel-ipv6-masq
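The prefix lengths are not arbitrary. The /112 service range holds 2^(128-112) = 65,536 addresses, exactly matching the IPv4 /16 beside it, and to my understanding Kubernetes rejects IPv6 service CIDRs much wider than this (roughly /108 or larger host space), so you cannot simply mirror the /56 used for pods. The pod /56, assuming each node is handed a /64 (the common default node mask), yields 256 per-node subnets. The arithmetic:

```shell
# Host bits = address width - prefix length; 1 << n computes 2^n.
echo $((1 << (32 - 16)))    # IPv4 service range 10.43.0.0/16 -> 65536 addresses
echo $((1 << (128 - 112)))  # IPv6 service range .../112      -> 65536 addresses, same size
echo $((1 << (64 - 56)))    # pod .../56 carved into per-node /64s -> 256 node subnets
```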

Next, the host operating system needs to allow IPv6 forwarding. Check your current forwarding status by running sysctl net.ipv6.conf.all.forwarding.

Enable it dynamically by running sudo sysctl -w net.ipv6.conf.all.forwarding=1. To make the change survive reboots, write net.ipv6.conf.all.forwarding=1 into /etc/sysctl.d/99-kubernetes-cri.conf or /etc/sysctl.conf, then apply it with sudo sysctl --system. Note that plain sysctl -p only rereads /etc/sysctl.conf, not the files under /etc/sysctl.d/, which is why --system is the safer choice here.
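For reference, the whole drop-in file is a single line (the 99-kubernetes-cri.conf filename is just a convention; any name ending in .conf under /etc/sysctl.d/ works):

```
# /etc/sysctl.d/99-kubernetes-cri.conf
net.ipv6.conf.all.forwarding=1
```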

Cleaning Up Old Network State

Before restarting K3s with your new settings, it is crucial to clear out the old IPv4-only networking state. Failing to do this can cause severe interface conflicts when Flannel tries to reconfigure the network.

First, stop the K3s service using sudo systemctl stop k3s. If you edited the unit file in the previous step, also run sudo systemctl daemon-reload so systemd picks up the new ExecStart line.

Next, remove the old CNI and Flannel state directories and delete the virtual interfaces. You can safely ignore any “does not exist” warnings during this process.

  • Run sudo rm -rf /var/lib/cni/ to remove cached CNI data.
  • Run sudo rm -rf /run/flannel/ to clear Flannel’s runtime state.
  • Delete the Flannel IPv4 interface with sudo ip link delete flannel.1.
  • Delete the Flannel IPv6 interface with sudo ip link delete flannel-v6.1.
  • Remove the CNI bridge using sudo ip link delete cni0.
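The five steps above can be collected into one small script. This is a sketch, not a k3s tool: the run helper and the DRY_RUN variable are conventions of mine. DRY_RUN defaults to 1, which only prints the commands; set DRY_RUN=0 and run as root to actually execute them, with "does not exist" failures swallowed by || true:

```shell
# Clear old IPv4-only network state before restarting K3s with dual-stack flags.
# DRY_RUN=1 (the default) prints each command; DRY_RUN=0 executes it.
run() {
    if [ "${DRY_RUN:-1}" = 1 ]; then
        echo "+ $*"
    else
        "$@" 2>/dev/null || true   # ignore "does not exist" errors
    fi
}

run rm -rf /var/lib/cni/          # cached CNI data
run rm -rf /run/flannel/          # Flannel runtime state
run ip link delete flannel.1      # Flannel IPv4 VXLAN interface
run ip link delete flannel-v6.1   # Flannel IPv6 VXLAN interface
run ip link delete cni0           # CNI bridge
```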

Recovering From the Flannel Panic

With a clean slate, I started K3s using sudo systemctl start k3s and checked the logs with sudo journalctl -u k3s -e -n 100 --no-pager. Instead of a healthy cluster, I was greeted by a fatal segmentation fault.

The K3s process crashed with a panic: runtime error: invalid memory address or nil pointer dereference inside the Flannel plugin. This crash happens because the Kubernetes node object still holds stale networking annotations that conflict with the new dual-stack reality, causing Flannel to choke when trying to configure IPv6 on the existing VXLAN device.

To recover from this crash, you need to force Kubernetes to recreate the node’s networking state from scratch.

  • Delete the node resource entirely by running sudo kubectl delete node k8s (replace k8s with your actual node name).
  • Restart the K3s service so it registers a fresh node object.
  • Delete all pods in the kube-system namespace to force the control plane components to restart and pick up the new configuration.
  • Finally, delete all other application pods in your cluster so they are recreated and can successfully acquire their new IPv6 addresses.
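The recovery sequence can likewise be sketched as one function. The recover_node name is a placeholder of mine, the commands mirror the list above, and the script assumes it runs as root (on K3s the kubectl calls otherwise need sudo). The last step re-deletes the kube-system pods along with everything else, which is harmless since they have already restarted:

```shell
# Force Kubernetes to recreate the node's networking state from scratch.
recover_node() {
    node="${1:?usage: recover_node <node-name>}"

    kubectl delete node "$node"                   # drop the stale node object and its annotations
    systemctl restart k3s                         # re-register a fresh node
    kubectl delete pods -n kube-system --all      # restart control-plane components
    kubectl delete pods --all --all-namespaces    # recreate app pods so they acquire IPv6 addresses
}

# Example invocation (commented out because it is destructive):
# recover_node k8s
```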

Once the pods spun back up, a quick check inside the containers confirmed that dual-stack routing was fully operational.
