k3s + tailscale + dual-stack ipv6 doesn't start with tailscale advertise-routes error #11572
Comments
@manuelbuil any ideas?
Also, it seems like when trying to run in IPv6-only mode, k3s always picks up only the IPv4 address to advertise, which creates a …
What happens if you provide both IPv4 and IPv6 addresses to --node-ip?
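For illustration, passing both families would look roughly like this (the addresses below are placeholders, not taken from this issue):

```sh
# Hypothetical dual-stack example: pass the IPv4 and IPv6 node addresses
# to --node-ip as a comma-separated pair.
k3s server \
  --node-ip=100.64.0.5,fd7a:115c:a1e0::5
```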
Update for the original issue: Looks like it just randomly went away without changing anything.
@m00nwtchr could you share the configuration for your cluster that seemed to make this work? I am currently trying to set this up and face the same issue.
I don't have that exact configuration right now, because I switched to the cilium CNI since then (but it did briefly work on flannel). Worst case, try setting up flannel over tailscale manually instead of relying on the integration. (On cilium I had to set node-ip and node-external-ip to the node's tailscale and actual dual-stack IPs respectively, and advertise-address to one of the tailscale IPs; then check which podCIDRs Kubernetes assigned to the node and advertise those as subnet routes, and finally make sure all your nodes accept routes. See the sketch below.)
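A rough sketch of those last steps, assuming the tailscale CLI and kubectl are available on the node (the node name and CIDR values are placeholders, not taken from this issue):

```sh
# Check which pod CIDRs Kubernetes assigned to the node (node name is a placeholder)
kubectl get node my-node -o jsonpath='{.spec.podCIDRs}'

# Advertise those per-node pod CIDRs as Tailscale subnet routes
# (example values; use the CIDRs returned above)
sudo tailscale set --advertise-routes=10.42.0.0/24,2001:cafe:42::/64

# On every node, accept the routes advertised by the other nodes
sudo tailscale set --accept-routes
```

Depending on your tailnet settings, the advertised routes may also need to be approved in the Tailscale admin console before they take effect.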
This is exactly what I am trying to figure out how to do now. I would be interested in seeing exactly how you achieved that; I am not really familiar with the underlying networking stuff. Since you got that sorted, could I ask you to share those steps? We could also communicate via another medium if you'd prefer, so as not to spam here.
```yaml
# /etc/rancher/k3s/config.yaml
cluster-cidr: 2001:cafe:42::/56,10.42.0.0/16
service-cidr: 2001:cafe:43::/112,10.43.0.0/16
node-ip: <tailscale IPv6>,<tailscale IPv4>
advertise-address: <tailscale IPv6>
node-external-ip: <external IPv6>,<ethernet interface IPv4>
# Options to switch flannel for cilium, probably not required for you
flannel-backend: none
disable-network-policy: true
disable-kube-proxy: true
```

In my case the node has its own podCIDRs assigned from those ranges. Flannel also has its own flannel-iface option, IIRC, but I haven't tried it, since I had already switched to cilium at that point.
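As a hedged sketch of the "flannel over tailscale manually" route mentioned above, one option is to point flannel at the tailscale interface directly instead of using the VPN integration (the interface name is an assumption):

```sh
# Assumption: the Tailscale interface on this host is named tailscale0.
k3s server --flannel-iface=tailscale0
```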
How come your node has different CIDRs from what you defined in cluster-cidr?
Each Node is automatically assigned a CIDR range of its own from the cluster-wide range specified in cluster-cidr. This is my cilium config (applied with …):

```yaml
cluster:
  name: default
ipam:
  # mode: kubernetes
  operator:
    clusterPoolIPv6MaskSize: 64
    clusterPoolIPv6PodCIDRList:
      - 2001:cafe:42::/56
    clusterPoolIPv4PodCIDRList:
      - 10.42.0.0/16
ipv4:
  enabled: true
ipv4NativeRoutingCIDR: 10.42.0.0/16
ipv6:
  enabled: true
ipv6NativeRoutingCIDR: 2001:cafe:42::/56
enableIPv6Masquerade: true
routingMode: native
bpf:
  masquerade: true
k8s:
  requireIPv6PodCIDR: true
  requireIPv4PodCIDR: false
k8sServiceHost: ::1
k8sServicePort: 6444
kubeProxyReplacement: true
```

Note: Cilium has its own CiliumNode CustomResources, which also separately specify podCIDRs; those are used instead of the Kubernetes Node podCIDRs if the IPAM mode is not set to kubernetes.
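To see which pod CIDRs Cilium actually allocated to a node, something like this should work (the node name is a placeholder):

```sh
# Per-node allocations live on the CiliumNode resource under spec.ipam.podCIDRs
kubectl get ciliumnode my-node -o jsonpath='{.spec.ipam.podCIDRs}'
```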
@m00nwtchr thanks very much for your help, I'll give this a try!
Environmental Info:
K3s Version: v1.31.4+k3s1 (a562d09)
Node(s) CPU architecture, OS, and Version: Linux m-server 6.6.69-hardened1 NixOS SMP PREEMPT_DYNAMIC Thu Jan 2 09:32:11 UTC 2025 x86_64 GNU/Linux
Cluster Configuration:
1 server, 1 agent
Describe the bug:
After enabling IPv6 dual-stack and re-creating the cluster, the following error message appears:
flannel exited: failed to register flannel network: failed to run command: tailscale set --accept-routes --advertise-routes=$SUBNET,$IPV6SUBNET Err: exit status 1 Output: ::/0 advertised without its IPv4 counterpart, please also advertise 0.0.0.0/0
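The failure comes from a Tailscale rule rather than from flannel itself: a default IPv6 route can only be advertised together with its IPv4 counterpart, which suggests the $IPV6SUBNET value expanded to ::/0 here. A minimal reproduction of just that constraint, outside of k3s:

```sh
# Advertising only the IPv6 default route is rejected by tailscale
tailscale set --advertise-routes=::/0
# -> "::/0 advertised without its IPv4 counterpart, please also advertise 0.0.0.0/0"

# Advertising both default routes together is accepted
tailscale set --advertise-routes=0.0.0.0/0,::/0
```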
Steps To Reproduce:
NixOS k3s module:
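The module contents were not captured above. Purely as a hypothetical illustration (not the reporter's actual configuration), the equivalent k3s flags for dual-stack plus the tailscale integration would look roughly like this:

```sh
# Hypothetical flags for illustration only; the CIDRs and tailscale join key are placeholders.
k3s server \
  --vpn-auth="name=tailscale,joinKey=tskey-..." \
  --cluster-cidr=10.42.0.0/16,2001:cafe:42::/56 \
  --service-cidr=10.43.0.0/16,2001:cafe:43::/112
```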