This repository has been archived by the owner on Jun 20, 2024. It is now read-only.

K8s kube-proxy and weave-net pods keep experiencing CrashLoopBackOff from time to time. #3994

Open
ghostdevops19 opened this issue Jul 2, 2023 · 1 comment

Comments

@ghostdevops19

What you expected to happen?

Kube-proxy and weave-net pods should be running normally on the worker node.

What happened?

I was creating a k8s cluster, configured the network CNI, and applied weave-net as the network add-on. Everything was running fine in the kube-system namespace. When I joined a worker node into the cluster, the kube-proxy and weave-net pods on the worker node went into CrashLoopBackOff.

How to reproduce it?

Redeploy a cluster, apply weave-net, and join a worker node.
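
For reference, a rough sketch of the steps I follow (assuming a kubeadm-based cluster, which is what the join step implies; the token and hash come from the kubeadm init output):

# on the master
$ sudo kubeadm init --apiserver-advertise-address=192.168.3.50
$ mkdir -p $HOME/.kube && sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config && sudo chown $(id -u):$(id -g) $HOME/.kube/config
$ kubectl apply -f https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s.yaml

# on the worker
$ sudo kubeadm join 192.168.3.50:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>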

Anything else we need to know?

K8s cluster deployed on:

  • Machine: VMware Workstation Pro 17 (network bridged to host)
  • OS: Ubuntu 22.04.2 LTS
  • Kernel: 5.19.0-46-generic
  • 2 CPU, 4 GB RAM

Master Node IP = 192.168.3.50/24
Worker Node IP = 192.168.3.51/24

K8s Cluster IP

master@master:~/Downloads$ kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   13h

Versions:

Weave is deployed using:

kubectl apply -f https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s.yaml
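
To confirm the DaemonSet rolled out, I check the pods using the label from that manifest (name=weave-net is the label the upstream DaemonSet uses, as far as I can tell):

$ kubectl get pods -n kube-system -l name=weave-net -o wide
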
$ uname -a
Linux master 5.19.0-46-generic #47~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Wed Jun 21 15:35:31 UTC 2 x86_64 x86_64 x86_64 GNU/Linux

$ kubectl version --short
Client Version: v1.27.3
Kustomize Version: v5.0.1
Server Version: v1.27.3

Logs:

Master Node

$ kubectl logs -n kube-system <weave-net-pod> weave

INFO: 2023/07/02 05:11:52.654645 ->[192.168.3.51:37237] connection accepted
INFO: 2023/07/02 05:11:52.675247 ->[192.168.3.51:37237|36:11:85:21:8c:f6(worker1)]: connection ready; using protocol version 2
INFO: 2023/07/02 05:11:52.676489 overlay_switch ->[36:11:85:21:8c:f6(worker1)] using fastdp
INFO: 2023/07/02 05:11:52.676512 ->[192.168.3.51:37237|36:11:85:21:8c:f6(worker1)]: connection added (new peer)
INFO: 2023/07/02 05:11:52.777744 ->[192.168.3.51:37237|36:11:85:21:8c:f6(worker1)]: connection fully established
INFO: 2023/07/02 05:11:53.189726 sleeve ->[192.168.3.51:6783|36:11:85:21:8c:f6(worker1)]: Effective MTU verified at 1438
INFO: 2023/07/02 05:11:54.730987 Discovered remote MAC 36:11:85:21:8c:f6 at 36:11:85:21:8c:f6(worker1)
INFO: 2023/07/02 05:12:38.168892 ->[192.168.3.51:37237|36:11:85:21:8c:f6(worker1)]: connection shutting down due to error: read tcp 192.168.3.50:6783->192.168.3.51:37237: read: connection reset by peer
INFO: 2023/07/02 05:12:38.169118 ->[192.168.3.51:37237|36:11:85:21:8c:f6(worker1)]: connection deleted
INFO: 2023/07/02 05:12:38.169178 Removed unreachable peer 36:11:85:21:8c:f6(worker1)
INFO: 2023/07/02 05:17:47.592952 ->[192.168.3.51:36981] connection accepted
INFO: 2023/07/02 05:17:47.605243 ->[192.168.3.51:36981|36:11:85:21:8c:f6(worker1)]: connection ready; using protocol version 2
INFO: 2023/07/02 05:17:47.605356 overlay_switch ->[36:11:85:21:8c:f6(worker1)] using fastdp
INFO: 2023/07/02 05:17:47.605395 ->[192.168.3.51:36981|36:11:85:21:8c:f6(worker1)]: connection added (new peer)
INFO: 2023/07/02 05:17:47.607818 ->[192.168.3.51:36981|36:11:85:21:8c:f6(worker1)]: connection fully established
INFO: 2023/07/02 05:17:47.716362 sleeve ->[192.168.3.51:6783|36:11:85:21:8c:f6(worker1)]: Effective MTU verified at 1438
INFO: 2023/07/02 05:17:49.703725 Discovered remote MAC 36:11:85:21:8c:f6 at 36:11:85:21:8c:f6(worker1)
INFO: 2023/07/02 05:18:18.088286 ->[192.168.3.51:36981|36:11:85:21:8c:f6(worker1)]: connection shutting down due to error: read tcp 192.168.3.50:6783->192.168.3.51:36981: read: connection reset by peer
INFO: 2023/07/02 05:18:18.088334 ->[192.168.3.51:36981|36:11:85:21:8c:f6(worker1)]: connection deleted
INFO: 2023/07/02 05:18:18.088452 Removed unreachable peer 36:11:85:21:8c:f6(worker1)
INFO: 2023/07/02 05:23:24.717410 ->[192.168.3.51:60669] connection accepted
INFO: 2023/07/02 05:23:24.735898 ->[192.168.3.51:60669|36:11:85:21:8c:f6(worker1)]: connection ready; using protocol version 2
INFO: 2023/07/02 05:23:24.735970 overlay_switch ->[36:11:85:21:8c:f6(worker1)] using fastdp
INFO: 2023/07/02 05:23:24.735989 ->[192.168.3.51:60669|36:11:85:21:8c:f6(worker1)]: connection added (new peer)
INFO: 2023/07/02 05:23:24.838900 overlay_switch ->[36:11:85:21:8c:f6(worker1)] using sleeve
INFO: 2023/07/02 05:23:24.839205 ->[192.168.3.51:60669|36:11:85:21:8c:f6(worker1)]: connection fully established
INFO: 2023/07/02 05:23:24.852468 sleeve ->[192.168.3.51:6783|36:11:85:21:8c:f6(worker1)]: Effective MTU verified at 1438
INFO: 2023/07/02 05:23:25.241373 overlay_switch ->[36:11:85:21:8c:f6(worker1)] using fastdp
INFO: 2023/07/02 05:23:26.792677 Discovered remote MAC 36:11:85:21:8c:f6 at 36:11:85:21:8c:f6(worker1)
INFO: 2023/07/02 05:24:09.151846 ->[192.168.3.51:60669|36:11:85:21:8c:f6(worker1)]: connection shutting down due to error: read tcp 192.168.3.50:6783->192.168.3.51:60669: read: connection reset by peer
INFO: 2023/07/02 05:24:09.152388 ->[192.168.3.51:60669|36:11:85:21:8c:f6(worker1)]: connection deleted
INFO: 2023/07/02 05:24:09.152532 Removed unreachable peer 36:11:85:21:8c:f6(worker1)
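
The connection to worker1 is re-established and then reset roughly every five to six minutes. To look at the peer connection state from the master side I can query the router inside the weave container (the /home/weave/weave path is the one used in the official image, so treat the exact invocation as an assumption):

$ kubectl exec -n kube-system <weave-net-pod> -c weave -- /home/weave/weave --local status
$ kubectl exec -n kube-system <weave-net-pod> -c weave -- /home/weave/weave --local status connections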

Worker Node

$ kubectl logs -n kube-system <weave-net-pod> weave

DEBU: 2023/07/02 05:11:52.162403 [kube-peers] Checking peer "36:11:85:21:8c:f6" against list &{[{86:e3:d2:15:be:e3 master} {36:11:85:21:8c:f6 worker1}]}
INFO: 2023/07/02 05:11:52.239363 Command line options: map[conn-limit:200 datapath:datapath db-prefix:/weavedb/weave-net docker-api: expect-npc:true http-addr:127.0.0.1:6784 ipalloc-init:consensus=1 ipalloc-range:10.32.0.0/12 metrics-addr:0.0.0.0:6782 name:36:11:85:21:8c:f6 nickname:worker1 no-dns:true no-masq-local:true port:6783]
INFO: 2023/07/02 05:11:52.239389 weave  git-34de0b10a69c
INFO: 2023/07/02 05:11:52.505446 Re-exposing 10.44.0.0/12 on bridge "weave"
INFO: 2023/07/02 05:11:52.529779 Bridge type is bridged_fastdp
INFO: 2023/07/02 05:11:52.529920 Communication between peers is unencrypted.
INFO: 2023/07/02 05:11:52.598601 Our name is 36:11:85:21:8c:f6(worker1)
INFO: 2023/07/02 05:11:52.598694 Launch detected - using supplied peer list: [192.168.3.50]
INFO: 2023/07/02 05:11:52.598784 Using "no-masq-local" LocalRangeTracker
INFO: 2023/07/02 05:11:52.598836 Checking for pre-existing addresses on weave bridge
INFO: 2023/07/02 05:11:52.599204 weave bridge has address 10.44.0.0/12
INFO: 2023/07/02 05:11:52.629819 adding entry 10.44.0.0/14 to weaver-no-masq-local of 0
INFO: 2023/07/02 05:11:52.629888 added entry 10.44.0.0/14 to weaver-no-masq-local of 0
INFO: 2023/07/02 05:11:52.635320 [allocator 36:11:85:21:8c:f6] Initialising with persisted data
INFO: 2023/07/02 05:11:52.635721 Sniffing traffic on datapath (via ODP)
INFO: 2023/07/02 05:11:52.642364 Listening for HTTP control messages on 127.0.0.1:6784
INFO: 2023/07/02 05:11:52.642402 Listening for metrics requests on 0.0.0.0:6782
INFO: 2023/07/02 05:11:52.647533 ->[192.168.3.50:6783] attempting connection
INFO: 2023/07/02 05:11:52.668760 ->[192.168.3.50:6783|86:e3:d2:15:be:e3(master)]: connection ready; using protocol version 2
INFO: 2023/07/02 05:11:52.668963 overlay_switch ->[86:e3:d2:15:be:e3(master)] using fastdp
INFO: 2023/07/02 05:11:52.669039 ->[192.168.3.50:6783|86:e3:d2:15:be:e3(master)]: connection added (new peer)
INFO: 2023/07/02 05:11:53.173725 ->[192.168.3.50:6783|86:e3:d2:15:be:e3(master)]: connection fully established
INFO: 2023/07/02 05:11:53.179601 sleeve ->[192.168.3.50:6783|86:e3:d2:15:be:e3(master)]: Effective MTU verified at 1438
INFO: 2023/07/02 05:11:53.298041 [kube-peers] Added myself to peer list &{[{86:e3:d2:15:be:e3 master} {36:11:85:21:8c:f6 worker1}]}
DEBU: 2023/07/02 05:11:53.305634 [kube-peers] Nodes that have disappeared: map[]
10.44.0.0
DEBU: 2023/07/02 05:11:53.408881 registering for updates for node delete events
INFO: 2023/07/02 05:11:54.728040 Discovered remote MAC ea:c9:b2:cf:e0:b8 at 86:e3:d2:15:be:e3(master)
INFO: 2023/07/02 05:11:54.729219 Discovered remote MAC a2:4f:e7:ed:bc:6c at 86:e3:d2:15:be:e3(master)
INFO: 2023/07/02 05:11:54.775756 Error checking version: Get "https://checkpoint-api.weave.works/v1/check/weave-net?arch=amd64&flag_docker-version=none&flag_kernel-version=5.19.0-46-generic&flag_network=fastdp&os=linux&signature=ZRP0TOIf16EUEgWIxR20G93TN4rquU56r9Ti0IrqTkw%3D&version=git-34de0b10a69c": dial tcp: lookup checkpoint-api.weave.works on 10.96.0.10:53: no such host
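
The only error in the worker's weave log is the version check failing to resolve through cluster DNS (10.96.0.10). As a sanity check of Service/DNS reachability from the worker node itself (assuming dnsutils is installed on the node):

$ nslookup kubernetes.default.svc.cluster.local 10.96.0.10
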
$ journalctl -u kubelet --no-pager

Jul 02 13:54:59 worker1 kubelet[11286]: E0702 13:54:59.339184   11286 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"ee112e4c46f7d5617a058e70fe5a38368eea3f260d8db332b40ba0d8866253d6\": not found" podSandboxID="ee112e4c46f7d5617a058e70fe5a38368eea3f260d8db332b40ba0d8866253d6"
Jul 02 13:54:59 worker1 kubelet[11286]: I0702 13:54:59.788451   11286 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="feaf2aac8074ac2601a02d98d8b2f003d156e7988dbd9044589da70a63540ac8"
Jul 02 13:54:59 worker1 kubelet[11286]: I0702 13:54:59.788734   11286 scope.go:115] "RemoveContainer" containerID="fcc618aecf021adb3e475df4bbb0f55e6dc1625630edba248262b1b76e365d57"
Jul 02 13:54:59 worker1 kubelet[11286]: I0702 13:54:59.794259   11286 scope.go:115] "RemoveContainer" containerID="659f6c16a788378ca4ad65d75f012fe3fca292d597c0d7acb08a55de1f4ebcc7"
Jul 02 13:55:03 worker1 kubelet[11286]: I0702 13:55:03.834278   11286 scope.go:115] "RemoveContainer" containerID="7c951cb5e08e63d6edf31ec05a44951237b9fdbd2226b7e79312a477ef48a8e4"
Jul 02 13:55:03 worker1 kubelet[11286]: I0702 13:55:03.836103   11286 scope.go:115] "RemoveContainer" containerID="35e22ed54e3a19b3b295487772f975fbe76ff791f154743d7b0b565bf8fdd790"
Jul 02 13:55:03 worker1 kubelet[11286]: E0702 13:55:03.837293   11286 pod_workers.go:1294] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"weave\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=weave pod=weave-net-rdhvv_kube-system(437deab1-a96b-4fed-9b6a-93fecea970a6)\", failed to \"StartContainer\" for \"weave-npc\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=weave-npc pod=weave-net-rdhvv_kube-system(437deab1-a96b-4fed-9b6a-93fecea970a6)\"]" pod="kube-system/weave-net-rdhvv" podUID=437deab1-a96b-4fed-9b6a-93fecea970a6
Jul 02 13:55:08 worker1 kubelet[11286]: I0702 13:55:08.847471   11286 scope.go:115] "RemoveContainer" containerID="7c951cb5e08e63d6edf31ec05a44951237b9fdbd2226b7e79312a477ef48a8e4"
Jul 02 13:55:08 worker1 kubelet[11286]: I0702 13:55:08.847667   11286 scope.go:115] "RemoveContainer" containerID="35e22ed54e3a19b3b295487772f975fbe76ff791f154743d7b0b565bf8fdd790"
Jul 02 13:55:08 worker1 kubelet[11286]: E0702 13:55:08.849006   11286 pod_workers.go:1294] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"weave\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=weave pod=weave-net-rdhvv_kube-system(437deab1-a96b-4fed-9b6a-93fecea970a6)\", failed to \"StartContainer\" for \"weave-npc\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=weave-npc pod=weave-net-rdhvv_kube-system(437deab1-a96b-4fed-9b6a-93fecea970a6)\"]" pod="kube-system/weave-net-rdhvv" podUID=437deab1-a96b-4fed-9b6a-93fecea970a6
Jul 02 13:55:10 worker1 kubelet[11286]: I0702 13:55:10.176289   11286 scope.go:115] "RemoveContainer" containerID="db80b83a6e6de2cf70075c440f996cfba8fa918676c894222d2e493798abab65"
Jul 02 13:55:10 worker1 kubelet[11286]: E0702 13:55:10.178036   11286 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-mq5pc_kube-system(6380b599-11cf-40c3-93d5-86bbb8b66815)\"" pod="kube-system/kube-proxy-mq5pc" podUID=6380b599-11cf-40c3-93d5-86bbb8b66815
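
The kubelet log only shows the back-off itself; to see why the containers exited in the first place, I can pull the last termination state and the previous container logs (pod names taken from the log above):

$ kubectl -n kube-system describe pod weave-net-rdhvv
$ kubectl -n kube-system logs weave-net-rdhvv -c weave --previous
$ kubectl -n kube-system logs kube-proxy-mq5pc --previous
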
$ kubectl get events

LAST SEEN   TYPE     REASON     OBJECT         MESSAGE
55m         Normal   Starting   node/worker1   
49m         Normal   Starting   node/worker1   
42m         Normal   Starting   node/worker1   
36m         Normal   Starting   node/worker1   
30m         Normal   Starting   node/worker1   
23m         Normal   Starting   node/worker1   
17m         Normal   Starting   node/worker1   
11m         Normal   Starting   node/worker1   
4m38s       Normal   Starting   node/worker1   
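
The node "Starting" events recur roughly every six minutes, matching the restart cycle. To tell whether it is only the pods restarting or the kubelet itself, two quick checks (NRestarts is a systemd service property, so this assumes a systemd-managed kubelet):

$ kubectl get pods -n kube-system -o wide | grep worker1
$ systemctl show kubelet --property=NRestarts        # run on worker1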

Network:

Master Node

$ ip route

default via 192.168.3.1 dev ens33 proto dhcp metric 100 
10.32.0.0/12 dev weave proto kernel scope link src 10.32.0.1 
169.254.0.0/16 dev ens33 scope link metric 1000 
192.168.3.0/24 dev ens33 proto kernel scope link src 192.168.3.50 metric 100 

$ ip -4 -o addr

1: lo    inet 127.0.0.1/8 scope host lo\       valid_lft forever preferred_lft forever
2: ens33    inet 192.168.3.50/24 brd 192.168.3.255 scope global dynamic noprefixroute ens33\       valid_lft 74481sec preferred_lft 74481sec
5: weave    inet 10.32.0.1/12 brd 10.47.255.255 scope global weave\       valid_lft forever preferred_lft forever


$ sudo iptables-save

# Generated by iptables-save v1.8.7 on Sun Jul  2 14:11:40 2023
*mangle
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:KUBE-IPTABLES-HINT - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-PROXY-CANARY - [0:0]
:WEAVE-CANARY - [0:0]
COMMIT
# Completed on Sun Jul  2 14:11:40 2023
# Generated by iptables-save v1.8.7 on Sun Jul  2 14:11:40 2023
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:KUBE-EXTERNAL-SERVICES - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-FORWARD - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-PROXY-CANARY - [0:0]
:KUBE-PROXY-FIREWALL - [0:0]
:KUBE-SERVICES - [0:0]
:WEAVE-CANARY - [0:0]
:WEAVE-NPC - [0:0]
:WEAVE-NPC-DEFAULT - [0:0]
:WEAVE-NPC-EGRESS - [0:0]
:WEAVE-NPC-EGRESS-ACCEPT - [0:0]
:WEAVE-NPC-EGRESS-CUSTOM - [0:0]
:WEAVE-NPC-EGRESS-DEFAULT - [0:0]
:WEAVE-NPC-INGRESS - [0:0]
-A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes load balancer firewall" -j KUBE-PROXY-FIREWALL
-A INPUT -m comment --comment "kubernetes health check service ports" -j KUBE-NODEPORTS
-A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
-A INPUT -j KUBE-FIREWALL
-A INPUT -d 127.0.0.1/32 -p tcp -m tcp --dport 6784 -m addrtype ! --src-type LOCAL -m conntrack ! --ctstate RELATED,ESTABLISHED -m comment --comment "Block non-local access to Weave Net control port" -j DROP
-A INPUT -i weave -j WEAVE-NPC-EGRESS
-A FORWARD -i weave -m comment --comment "NOTE: this must go before \'-j KUBE-FORWARD\'" -j WEAVE-NPC-EGRESS
-A FORWARD -o weave -m comment --comment "NOTE: this must go before \'-j KUBE-FORWARD\'" -j WEAVE-NPC
-A FORWARD -o weave -m state --state NEW -j NFLOG --nflog-group 86
-A FORWARD -o weave -j DROP
-A FORWARD -i weave ! -o weave -j ACCEPT
-A FORWARD -o weave -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes load balancer firewall" -j KUBE-PROXY-FIREWALL
-A FORWARD -m comment --comment "kubernetes forwarding rules" -j KUBE-FORWARD
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
-A OUTPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes load balancer firewall" -j KUBE-PROXY-FIREWALL
-A OUTPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A KUBE-FIREWALL ! -s 127.0.0.0/8 -d 127.0.0.0/8 -m comment --comment "block incoming localnet connections" -m conntrack ! --ctstate RELATED,ESTABLISHED,DNAT -j DROP
-A KUBE-FORWARD -m conntrack --ctstate INVALID -j DROP
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding rules" -m mark --mark 0x4000/0x4000 -j ACCEPT
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding conntrack rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A WEAVE-NPC -m state --state RELATED,ESTABLISHED -j ACCEPT
-A WEAVE-NPC -d 224.0.0.0/4 -j ACCEPT
-A WEAVE-NPC -m physdev --physdev-out vethwe-bridge --physdev-is-bridged -j ACCEPT
-A WEAVE-NPC -m state --state NEW -j WEAVE-NPC-DEFAULT
-A WEAVE-NPC -m state --state NEW -j WEAVE-NPC-INGRESS
-A WEAVE-NPC-DEFAULT -m set --match-set weave-;rGqyMIl1HN^cfDki~Z$3]6!N dst -m comment --comment "DefaultAllow ingress isolation for namespace: default" -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-]B*(W?)t*z5O17G044[gUo#$l dst -m comment --comment "DefaultAllow ingress isolation for namespace: kube-node-lease" -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-Rzff}h:=]JaaJl/G;(XJpGjZ[ dst -m comment --comment "DefaultAllow ingress isolation for namespace: kube-public" -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-P.B|!ZhkAr5q=XZ?3}tMBA+0 dst -m comment --comment "DefaultAllow ingress isolation for namespace: kube-system" -j ACCEPT
-A WEAVE-NPC-EGRESS -m state --state RELATED,ESTABLISHED -j ACCEPT
-A WEAVE-NPC-EGRESS -m physdev --physdev-in vethwe-bridge --physdev-is-bridged -j RETURN
-A WEAVE-NPC-EGRESS -m addrtype --dst-type LOCAL -j RETURN
-A WEAVE-NPC-EGRESS -d 224.0.0.0/4 -j RETURN
-A WEAVE-NPC-EGRESS -m state --state NEW -j WEAVE-NPC-EGRESS-DEFAULT
-A WEAVE-NPC-EGRESS -m state --state NEW -m mark ! --mark 0x40000/0x40000 -j WEAVE-NPC-EGRESS-CUSTOM
-A WEAVE-NPC-EGRESS-ACCEPT -j MARK --set-xmark 0x40000/0x40000
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-s_+ChJId4Uy_$}G;WdH|~TK)I src -m comment --comment "DefaultAllow egress isolation for namespace: default" -j WEAVE-NPC-EGRESS-ACCEPT
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-s_+ChJId4Uy_$}G;WdH|~TK)I src -m comment --comment "DefaultAllow egress isolation for namespace: default" -j RETURN
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-sui%__gZ}{kX~oZgI_Ttqp=Dp src -m comment --comment "DefaultAllow egress isolation for namespace: kube-node-lease" -j WEAVE-NPC-EGRESS-ACCEPT
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-sui%__gZ}{kX~oZgI_Ttqp=Dp src -m comment --comment "DefaultAllow egress isolation for namespace: kube-node-lease" -j RETURN
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-41s)5vQ^o/xWGz6a20N:~?#|E src -m comment --comment "DefaultAllow egress isolation for namespace: kube-public" -j WEAVE-NPC-EGRESS-ACCEPT
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-41s)5vQ^o/xWGz6a20N:~?#|E src -m comment --comment "DefaultAllow egress isolation for namespace: kube-public" -j RETURN
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-E1ney4o[ojNrLk.6rOHi;7MPE src -m comment --comment "DefaultAllow egress isolation for namespace: kube-system" -j WEAVE-NPC-EGRESS-ACCEPT
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-E1ney4o[ojNrLk.6rOHi;7MPE src -m comment --comment "DefaultAllow egress isolation for namespace: kube-system" -j RETURN
COMMIT
# Completed on Sun Jul  2 14:11:40 2023
# Generated by iptables-save v1.8.7 on Sun Jul  2 14:11:40 2023
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-PROXY-CANARY - [0:0]
:KUBE-SEP-3DU66DE6VORVEQVD - [0:0]
:KUBE-SEP-HNRDEUI2DD2IQNOC - [0:0]
:KUBE-SEP-S4MK5EVI7CLHCCS6 - [0:0]
:KUBE-SEP-SWLOBIBPXYBP7G2Z - [0:0]
:KUBE-SEP-SZZ7MOWKTWUFXIJT - [0:0]
:KUBE-SEP-UJJNLSZU6HL4F5UO - [0:0]
:KUBE-SEP-ZCHNBYOGFZRFKYMA - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-SVC-ERIFXISQEP7F7OF4 - [0:0]
:KUBE-SVC-JD5MR3NA4I4DYORP - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SVC-TCOU7JCQXEZGVUNU - [0:0]
:WEAVE - [0:0]
:WEAVE-CANARY - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -j WEAVE
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-POSTROUTING -m mark ! --mark 0x4000/0x4000 -j RETURN
-A KUBE-POSTROUTING -j MARK --set-xmark 0x4000/0x0
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -j MASQUERADE --random-fully
-A KUBE-SEP-3DU66DE6VORVEQVD -s 10.32.0.3/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-3DU66DE6VORVEQVD -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.32.0.3:53
-A KUBE-SEP-HNRDEUI2DD2IQNOC -s 192.168.3.50/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-HNRDEUI2DD2IQNOC -p tcp -m comment --comment "default/kubernetes:https" -m tcp -j DNAT --to-destination 192.168.3.50:6443
-A KUBE-SEP-S4MK5EVI7CLHCCS6 -s 10.32.0.3/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-S4MK5EVI7CLHCCS6 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 10.32.0.3:53
-A KUBE-SEP-SWLOBIBPXYBP7G2Z -s 10.32.0.2/32 -m comment --comment "kube-system/kube-dns:metrics" -j KUBE-MARK-MASQ
-A KUBE-SEP-SWLOBIBPXYBP7G2Z -p tcp -m comment --comment "kube-system/kube-dns:metrics" -m tcp -j DNAT --to-destination 10.32.0.2:9153
-A KUBE-SEP-SZZ7MOWKTWUFXIJT -s 10.32.0.2/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-SZZ7MOWKTWUFXIJT -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.32.0.2:53
-A KUBE-SEP-UJJNLSZU6HL4F5UO -s 10.32.0.2/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-UJJNLSZU6HL4F5UO -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 10.32.0.2:53
-A KUBE-SEP-ZCHNBYOGFZRFKYMA -s 10.32.0.3/32 -m comment --comment "kube-system/kube-dns:metrics" -j KUBE-MARK-MASQ
-A KUBE-SEP-ZCHNBYOGFZRFKYMA -p tcp -m comment --comment "kube-system/kube-dns:metrics" -m tcp -j DNAT --to-destination 10.32.0.3:9153
-A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-SVC-JD5MR3NA4I4DYORP
-A KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp -> 10.32.0.2:53" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-UJJNLSZU6HL4F5UO
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp -> 10.32.0.3:53" -j KUBE-SEP-S4MK5EVI7CLHCCS6
-A KUBE-SVC-JD5MR3NA4I4DYORP -m comment --comment "kube-system/kube-dns:metrics -> 10.32.0.2:9153" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-SWLOBIBPXYBP7G2Z
-A KUBE-SVC-JD5MR3NA4I4DYORP -m comment --comment "kube-system/kube-dns:metrics -> 10.32.0.3:9153" -j KUBE-SEP-ZCHNBYOGFZRFKYMA
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https -> 192.168.3.50:6443" -j KUBE-SEP-HNRDEUI2DD2IQNOC
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns -> 10.32.0.2:53" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-SZZ7MOWKTWUFXIJT
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns -> 10.32.0.3:53" -j KUBE-SEP-3DU66DE6VORVEQVD
-A WEAVE -m set --match-set weaver-no-masq-local dst -m comment --comment "Prevent SNAT to locally running containers" -j RETURN
-A WEAVE -s 10.32.0.0/12 -d 224.0.0.0/4 -j RETURN
-A WEAVE ! -s 10.32.0.0/12 -d 10.32.0.0/12 -j MASQUERADE
-A WEAVE -s 10.32.0.0/12 ! -d 10.32.0.0/12 -j MASQUERADE
COMMIT
# Completed on Sun Jul  2 14:11:40 2023
# Warning: iptables-legacy tables present, use iptables-legacy-save to see them

Worker Node

$ ip route

default via 192.168.3.1 dev ens33 proto dhcp metric 100 
10.32.0.0/12 dev weave proto kernel scope link src 10.44.0.0 
169.254.0.0/16 dev ens33 scope link metric 1000 
192.168.3.0/24 dev ens33 proto kernel scope link src 192.168.3.51 metric 100 

$ ip -4 -o addr

1: lo    inet 127.0.0.1/8 scope host lo\       valid_lft forever preferred_lft forever
2: ens33    inet 192.168.3.51/24 brd 192.168.3.255 scope global dynamic noprefixroute ens33\       valid_lft 74338sec preferred_lft 74338sec
5: weave    inet 10.44.0.0/12 brd 10.47.255.255 scope global weave\       valid_lft forever preferred_lft forever

$ sudo iptables-save

# Generated by iptables-save v1.8.7 on Sun Jul  2 14:04:22 2023
*mangle
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:KUBE-IPTABLES-HINT - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-PROXY-CANARY - [0:0]
:WEAVE-CANARY - [0:0]
COMMIT
# Completed on Sun Jul  2 14:04:22 2023
# Generated by iptables-save v1.8.7 on Sun Jul  2 14:04:22 2023
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:KUBE-EXTERNAL-SERVICES - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-FORWARD - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-PROXY-CANARY - [0:0]
:KUBE-PROXY-FIREWALL - [0:0]
:KUBE-SERVICES - [0:0]
:WEAVE-CANARY - [0:0]
:WEAVE-NPC - [0:0]
:WEAVE-NPC-DEFAULT - [0:0]
:WEAVE-NPC-EGRESS - [0:0]
:WEAVE-NPC-EGRESS-ACCEPT - [0:0]
:WEAVE-NPC-EGRESS-CUSTOM - [0:0]
:WEAVE-NPC-EGRESS-DEFAULT - [0:0]
:WEAVE-NPC-INGRESS - [0:0]
-A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes load balancer firewall" -j KUBE-PROXY-FIREWALL
-A INPUT -m comment --comment "kubernetes health check service ports" -j KUBE-NODEPORTS
-A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
-A INPUT -j KUBE-FIREWALL
-A INPUT -d 127.0.0.1/32 -p tcp -m tcp --dport 6784 -m addrtype ! --src-type LOCAL -m conntrack ! --ctstate RELATED,ESTABLISHED -m comment --comment "Block non-local access to Weave Net control port" -j DROP
-A INPUT -i weave -j WEAVE-NPC-EGRESS
-A FORWARD -i weave -m comment --comment "NOTE: this must go before \'-j KUBE-FORWARD\'" -j WEAVE-NPC-EGRESS
-A FORWARD -o weave -m comment --comment "NOTE: this must go before \'-j KUBE-FORWARD\'" -j WEAVE-NPC
-A FORWARD -o weave -m state --state NEW -j NFLOG --nflog-group 86
-A FORWARD -o weave -j DROP
-A FORWARD -i weave ! -o weave -j ACCEPT
-A FORWARD -o weave -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes load balancer firewall" -j KUBE-PROXY-FIREWALL
-A FORWARD -m comment --comment "kubernetes forwarding rules" -j KUBE-FORWARD
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
-A OUTPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes load balancer firewall" -j KUBE-PROXY-FIREWALL
-A OUTPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A KUBE-FIREWALL ! -s 127.0.0.0/8 -d 127.0.0.0/8 -m comment --comment "block incoming localnet connections" -m conntrack ! --ctstate RELATED,ESTABLISHED,DNAT -j DROP
-A KUBE-FORWARD -m conntrack --ctstate INVALID -j DROP
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding rules" -m mark --mark 0x4000/0x4000 -j ACCEPT
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding conntrack rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A WEAVE-NPC -m state --state RELATED,ESTABLISHED -j ACCEPT
-A WEAVE-NPC -d 224.0.0.0/4 -j ACCEPT
-A WEAVE-NPC -m physdev --physdev-out vethwe-bridge --physdev-is-bridged -j ACCEPT
-A WEAVE-NPC -m state --state NEW -j WEAVE-NPC-DEFAULT
-A WEAVE-NPC -m state --state NEW -j WEAVE-NPC-INGRESS
-A WEAVE-NPC-DEFAULT -m set --match-set weave-;rGqyMIl1HN^cfDki~Z$3]6!N dst -m comment --comment "DefaultAllow ingress isolation for namespace: default" -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-]B*(W?)t*z5O17G044[gUo#$l dst -m comment --comment "DefaultAllow ingress isolation for namespace: kube-node-lease" -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-Rzff}h:=]JaaJl/G;(XJpGjZ[ dst -m comment --comment "DefaultAllow ingress isolation for namespace: kube-public" -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-P.B|!ZhkAr5q=XZ?3}tMBA+0 dst -m comment --comment "DefaultAllow ingress isolation for namespace: kube-system" -j ACCEPT
-A WEAVE-NPC-EGRESS -m state --state RELATED,ESTABLISHED -j ACCEPT
-A WEAVE-NPC-EGRESS -m physdev --physdev-in vethwe-bridge --physdev-is-bridged -j RETURN
-A WEAVE-NPC-EGRESS -m addrtype --dst-type LOCAL -j RETURN
-A WEAVE-NPC-EGRESS -d 224.0.0.0/4 -j RETURN
-A WEAVE-NPC-EGRESS -m state --state NEW -j WEAVE-NPC-EGRESS-DEFAULT
-A WEAVE-NPC-EGRESS -m state --state NEW -m mark ! --mark 0x40000/0x40000 -j WEAVE-NPC-EGRESS-CUSTOM
-A WEAVE-NPC-EGRESS-ACCEPT -j MARK --set-xmark 0x40000/0x40000
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-s_+ChJId4Uy_$}G;WdH|~TK)I src -m comment --comment "DefaultAllow egress isolation for namespace: default" -j WEAVE-NPC-EGRESS-ACCEPT
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-s_+ChJId4Uy_$}G;WdH|~TK)I src -m comment --comment "DefaultAllow egress isolation for namespace: default" -j RETURN
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-sui%__gZ}{kX~oZgI_Ttqp=Dp src -m comment --comment "DefaultAllow egress isolation for namespace: kube-node-lease" -j WEAVE-NPC-EGRESS-ACCEPT
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-sui%__gZ}{kX~oZgI_Ttqp=Dp src -m comment --comment "DefaultAllow egress isolation for namespace: kube-node-lease" -j RETURN
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-41s)5vQ^o/xWGz6a20N:~?#|E src -m comment --comment "DefaultAllow egress isolation for namespace: kube-public" -j WEAVE-NPC-EGRESS-ACCEPT
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-41s)5vQ^o/xWGz6a20N:~?#|E src -m comment --comment "DefaultAllow egress isolation for namespace: kube-public" -j RETURN
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-E1ney4o[ojNrLk.6rOHi;7MPE src -m comment --comment "DefaultAllow egress isolation for namespace: kube-system" -j WEAVE-NPC-EGRESS-ACCEPT
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-E1ney4o[ojNrLk.6rOHi;7MPE src -m comment --comment "DefaultAllow egress isolation for namespace: kube-system" -j RETURN
COMMIT
# Completed on Sun Jul  2 14:04:22 2023
# Generated by iptables-save v1.8.7 on Sun Jul  2 14:04:22 2023
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-PROXY-CANARY - [0:0]
:KUBE-SEP-3DU66DE6VORVEQVD - [0:0]
:KUBE-SEP-HNRDEUI2DD2IQNOC - [0:0]
:KUBE-SEP-S4MK5EVI7CLHCCS6 - [0:0]
:KUBE-SEP-SWLOBIBPXYBP7G2Z - [0:0]
:KUBE-SEP-SZZ7MOWKTWUFXIJT - [0:0]
:KUBE-SEP-UJJNLSZU6HL4F5UO - [0:0]
:KUBE-SEP-ZCHNBYOGFZRFKYMA - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-SVC-ERIFXISQEP7F7OF4 - [0:0]
:KUBE-SVC-JD5MR3NA4I4DYORP - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SVC-TCOU7JCQXEZGVUNU - [0:0]
:WEAVE - [0:0]
:WEAVE-CANARY - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -j WEAVE
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-POSTROUTING -m mark ! --mark 0x4000/0x4000 -j RETURN
-A KUBE-POSTROUTING -j MARK --set-xmark 0x4000/0x0
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -j MASQUERADE --random-fully
-A KUBE-SEP-3DU66DE6VORVEQVD -s 10.32.0.3/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-3DU66DE6VORVEQVD -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.32.0.3:53
-A KUBE-SEP-HNRDEUI2DD2IQNOC -s 192.168.3.50/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-HNRDEUI2DD2IQNOC -p tcp -m comment --comment "default/kubernetes:https" -m tcp -j DNAT --to-destination 192.168.3.50:6443
-A KUBE-SEP-S4MK5EVI7CLHCCS6 -s 10.32.0.3/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-S4MK5EVI7CLHCCS6 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 10.32.0.3:53
-A KUBE-SEP-SWLOBIBPXYBP7G2Z -s 10.32.0.2/32 -m comment --comment "kube-system/kube-dns:metrics" -j KUBE-MARK-MASQ
-A KUBE-SEP-SWLOBIBPXYBP7G2Z -p tcp -m comment --comment "kube-system/kube-dns:metrics" -m tcp -j DNAT --to-destination 10.32.0.2:9153
-A KUBE-SEP-SZZ7MOWKTWUFXIJT -s 10.32.0.2/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-SZZ7MOWKTWUFXIJT -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.32.0.2:53
-A KUBE-SEP-UJJNLSZU6HL4F5UO -s 10.32.0.2/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-UJJNLSZU6HL4F5UO -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 10.32.0.2:53
-A KUBE-SEP-ZCHNBYOGFZRFKYMA -s 10.32.0.3/32 -m comment --comment "kube-system/kube-dns:metrics" -j KUBE-MARK-MASQ
-A KUBE-SEP-ZCHNBYOGFZRFKYMA -p tcp -m comment --comment "kube-system/kube-dns:metrics" -m tcp -j DNAT --to-destination 10.32.0.3:9153
-A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-SVC-JD5MR3NA4I4DYORP
-A KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp -> 10.32.0.2:53" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-UJJNLSZU6HL4F5UO
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp -> 10.32.0.3:53" -j KUBE-SEP-S4MK5EVI7CLHCCS6
-A KUBE-SVC-JD5MR3NA4I4DYORP -m comment --comment "kube-system/kube-dns:metrics -> 10.32.0.2:9153" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-SWLOBIBPXYBP7G2Z
-A KUBE-SVC-JD5MR3NA4I4DYORP -m comment --comment "kube-system/kube-dns:metrics -> 10.32.0.3:9153" -j KUBE-SEP-ZCHNBYOGFZRFKYMA
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https -> 192.168.3.50:6443" -j KUBE-SEP-HNRDEUI2DD2IQNOC
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns -> 10.32.0.2:53" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-SZZ7MOWKTWUFXIJT
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns -> 10.32.0.3:53" -j KUBE-SEP-3DU66DE6VORVEQVD
-A WEAVE -m set --match-set weaver-no-masq-local dst -m comment --comment "Prevent SNAT to locally running containers" -j RETURN
-A WEAVE -s 10.32.0.0/12 -d 224.0.0.0/4 -j RETURN
-A WEAVE ! -s 10.32.0.0/12 -d 10.32.0.0/12 -j MASQUERADE
-A WEAVE -s 10.32.0.0/12 ! -d 10.32.0.0/12 -j MASQUERADE
COMMIT
# Completed on Sun Jul  2 14:04:22 2023
# Warning: iptables-legacy tables present, use iptables-legacy-save to see them
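
Both nodes report "iptables-legacy tables present". Ubuntu 22.04 selects the iptables backend via alternatives, so as an extra check (not part of the capture above) I can confirm which backend is active and whether anything is still writing legacy rules:

$ sudo update-alternatives --display iptables
$ sudo iptables-legacy-save | head -n 20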

@tranvanthuc365

I got the same error.
After joining the worker node, kube-proxy and weave-net run successfully, but after about 10 minutes they crash and are never stable (running -> crash -> running). They show the same errors as above.
