# Take 2, K3S
Found too many issues running via kubeadm, so it's time to try k3s.
For this take, we're going to stick with MetalLB and ingress-nginx.
Could do:
```sh
k3sup install \
  --host=k8s1 \
  --user=fdamstra \
  --local-path=config.demo.yaml \
  --context demo \
  --cluster \
  --tls-san 10.68.0.240 \
  --k3s-extra-args="--disable servicelb"
```
I don't think I want that.
The generic form:

```sh
curl -sfL https://get.k3s.io | sh -s - server \
  --datastore-endpoint="mysql://username:password@tcp(hostname:3306)/database-name"
```
```sh
curl -sfL https://get.k3s.io \
  | sh -s - server \
  --datastore-endpoint="mysql://k8s:Bluem00n@tcp(10.42.42.10:3306)/k8s" \
  --cluster-cidr 10.41.0.0/16 \
  --write-kubeconfig-mode 644 \
  --tls-san k8s.home.monkeybox.org \
  --no-deploy traefik \
  --no-deploy servicelb
```

The cluster CIDR override matters: the default is 10.42.0.0/16, which collides with my 10.42.42.0/24 LAN. As one line:

```sh
curl -sfL https://get.k3s.io | sh -s - server --datastore-endpoint="mysql://k8s:Bluem00n@tcp(10.42.42.10:3306)/k8s" --cluster-cidr 10.41.0.0/16 --write-kubeconfig-mode 644 --tls-san k8s.home.monkeybox.org --no-deploy traefik --no-deploy servicelb
```
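As an aside, the same settings could probably live in a config file instead of on the install line: k3s reads `/etc/rancher/k3s/config.yaml` if it exists, with flag names minus the leading dashes. A sketch with the same values (untested here; note that `--no-deploy` is the older spelling and newer k3s uses `disable`):

```yaml
# /etc/rancher/k3s/config.yaml -- sketch, equivalent to the flags above
datastore-endpoint: "mysql://k8s:Bluem00n@tcp(10.42.42.10:3306)/k8s"
cluster-cidr: "10.41.0.0/16"      # default is 10.42.0.0/16, which collides with the LAN
write-kubeconfig-mode: "644"
tls-san:
  - "k8s.home.monkeybox.org"
disable:
  - traefik
  - servicelb
```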
But it failed with a MySQL error.
etcd mode:
k8s1:
```sh
curl -sfL https://get.k3s.io | K3S_TOKEN=Bluem00n sh -s - server --cluster-init --cluster-cidr 10.41.0.0/16 --write-kubeconfig-mode 644 --tls-san k8s.home.monkeybox.org --no-deploy traefik --no-deploy servicelb
# Maybe without "server"?
```
On 2 and 3:

```sh
curl -sfL https://get.k3s.io | K3S_TOKEN=Bluem00n sh -s - server --server https://10.42.42.201:6443 --cluster-cidr 10.41.0.0/16 --write-kubeconfig-mode 644 --tls-san k8s.home.monkeybox.org --no-deploy traefik --no-deploy servicelb
# Maybe without "server"?
```
k8s1 (no server):

```sh
curl -sfL https://get.k3s.io | K3S_TOKEN=Bluem00n sh -s - --cluster-init --write-kubeconfig-mode=644 --cluster-cidr=10.41.0.0/16 --tls-san=k8s.home.monkeybox.org --no-deploy=traefik,servicelb --node-external-ip=10.42.42.201 --flannel-backend ipsec
```
On 2 and 3 (no server):

```sh
curl -sfL https://get.k3s.io | K3S_TOKEN=Bluem00n sh -s - --server https://10.42.42.201:6443 --write-kubeconfig-mode 644 --cluster-cidr=10.41.0.0/16 --tls-san k8s.home.monkeybox.org --no-deploy=traefik,servicelb --flannel-backend ipsec
```
Another variant, passing the flags via `INSTALL_K3S_EXEC`:

```sh
curl -sfL https://get.k3s.io \
  | INSTALL_K3S_EXEC="--write-kubeconfig-mode 644 --token secret-01010101-xabcdefx --tls-san k8s.home.monkeybox.org --disable traefik" \
    sh -s - server --datastore-endpoint="mysql://k8s:Bluem00n@tcp(10.42.42.10:3306)/k8s"
```
Started down the MetalLB road, but ran into an issue: the kube-proxy ConfigMap isn't there.
```sh
cd monkeybox_kubernetes/Workloads/metallb/
kubectl apply -f 001*
kubectl apply -f 002*
kubectl apply -f 003*
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
kubectl get pods -n metallb-system
```
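MetalLB also needs a Layer 2 address pool before it will hand out LoadBalancer IPs; presumably one of the numbered manifests above carries it. For reference, a minimal sketch using the ConfigMap-style config of that MetalLB era (the address range here is a placeholder, not my real pool):

```yaml
# MetalLB Layer 2 config (sketch -- address range is hypothetical)
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 10.42.42.230-10.42.42.239
```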
I updated all 3 servers' /etc/hosts files to contain:

```
10.42.42.201 k8s1.home.monkeybox.org
10.42.42.202 k8s2.home.monkeybox.org
10.42.42.203 k8s3.home.monkeybox.org
```
VIP: 10.42.42.8
```sh
sudo cp ~/monkeybox_kubernetes/Workloads/kube-vip/kube-vip-rbac.yaml /var/lib/rancher/k3s/server/manifests/
sudo bash # yuck
export VIP=10.42.42.8
export INTERFACE=eth0
crictl pull docker.io/plndr/kube-vip:0.3.2
alias kube-vip="ctr run --rm --net-host docker.io/plndr/kube-vip:0.3.2 vip /kube-vip"
# the manifest is written to stdout, so tee it into the manifests dir
kube-vip manifest daemonset --arp --interface $INTERFACE --address $VIP --controlplane --leaderElection --taint --inCluster | tee /var/lib/rancher/k3s/server/manifests/kube-vip.yaml
kubectl apply -f /var/lib/rancher/k3s/server/manifests/kube-vip-rbac.yaml
# error messages, but it worked?
kubectl apply -f /var/lib/rancher/k3s/server/manifests/kube-vip.yaml
# error messages, but it worked?
kubectl -n kube-system get ds
# should show 3 3 3 3 3 3
ping 10.42.42.8
# pings!
exit
```
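With the VIP answering, the kubeconfig can point at it instead of a single server, so the control plane stays reachable if k8s1 dies. A sketch of the relevant kubeconfig stanza (cluster name assumed; note the VIP may also need to be added via `--tls-san` for the serving cert to validate):

```yaml
# ~/.kube/config excerpt (sketch) -- point the API at the VIP
clusters:
- cluster:
    server: https://10.42.42.8:6443
    certificate-authority-data: <unchanged>
  name: default
```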
Updated the configs everywhere I could think of.
Then I installed, following instructions in the other README, and got:

```
Error from server (InternalError): error when creating "wildcard_staging_issuer.yaml": Internal error occurred: failed calling webhook "webhook.cert-manager.io": Post "https://cert-manager-webhook.cert-manager.svc:443/mutate?timeout=10s": context deadline exceeded
```

A "context deadline exceeded" on a webhook call usually means the API server can't reach the webhook pod at all, i.e. pod networking is broken. Time to test that:
```sh
kubectl create deployment pingtest --image=busybox --replicas=3 -- sleep infinity
kubectl get pods --selector=app=pingtest --output=wide
```

```
NAME                        READY   STATUS    RESTARTS   AGE   IP          NODE   NOMINATED NODE   READINESS GATES
pingtest-64f9cb6b84-2f9gs   1/1     Running   0          23s   10.41.2.5   k8s3   <none>           <none>
pingtest-64f9cb6b84-cpsb9   1/1     Running   0          23s   10.41.0.5   k8s1   <none>           <none>
pingtest-64f9cb6b84-m5q5w   1/1     Running   0          23s   10.41.1.7   k8s2   <none>           <none>
```

```sh
kubectl exec -ti pingtest-64f9cb6b84-m5q5w -- sh
# ping <the other ips>
```
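To avoid eyeballing it, a small loop can ping every pingtest pod IP from a node instead (a sketch; assumes kubectl is on the path and pods have their IPs in `.status.podIP` as shown above):

```shell
# Ping each pingtest pod IP once and report reachability (sketch)
ping_pods() {
  while read -r ip; do
    [ -n "$ip" ] || continue
    if ping -c1 -W1 "$ip" >/dev/null 2>&1; then
      echo "$ip reachable"
    else
      echo "$ip unreachable"
    fi
  done
}

# jsonpath emits one pod IP per line; empty output if kubectl is missing
kubectl get pods --selector=app=pingtest \
  -o jsonpath='{range .items[*]}{.status.podIP}{"\n"}{end}' 2>/dev/null | ping_pods
```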
No ping. The problem may be iptables. On each node:
```sh
sudo iptables -F
sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
sudo reboot
```
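The legacy switch matters because flannel's rules land in whichever iptables backend writes them; if the host is on nf_tables while the CNI uses legacy (or vice versa), traffic silently disappears. After the reboot you can confirm which backend is live (a sketch; the `update-alternatives` paths above are Debian/Raspbian-style):

```shell
# Show the active iptables backend: the version string includes (legacy) or (nf_tables)
if command -v iptables >/dev/null 2>&1; then
  iptables --version
else
  echo "iptables not found"
fi
```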
```sh
ip route add 10.41.0.0/16 dev cni0 proto kernel scope link src 10.41.0.1
```

Had intermittent pings here? But then all failed.
Next, on each node:
```sh
sudo rm /var/lib/cni/networks/cbr0/lock
sudo reboot
```
--- I think I messed up at the beginning. I need both server and agent.
.... Starting over
```sh
sudo systemctl stop k3s
sudo /usr/local/bin/k3s-uninstall.sh
```