The layout is basically part and parcel of the Ansible sample setup from https://docs.ansible.com/ansible/latest/user_guide/sample_setup.html:

production                # inventory file for production servers
staging                   # inventory file for staging environment

group_vars/
   group1.yml             # here we assign variables to particular groups
   group2.yml
host_vars/
   hostname1.yml          # here we assign variables to particular systems
   hostname2.yml

library/                  # if any custom modules, put them here (optional)
module_utils/             # if any custom module_utils to support modules, put them here (optional)
filter_plugins/           # if any custom filter plugins, put them here (optional)

site.yml                  # master playbook
webservers.yml            # playbook for webserver tier
dbservers.yml             # playbook for dbserver tier
tasks/                    # task files included from playbooks
    webservers-extra.yml  # <-- avoids confusing playbook with task files

roles/
    common/               # this hierarchy represents a "role"
        tasks/            #
            main.yml      #  <-- tasks file can include smaller files if warranted
        handlers/         #
            main.yml      #  <-- handlers file
        templates/        #  <-- files for use with the template resource
            ntp.conf.j2   #  <------- templates end in .j2
        files/            #
            bar.txt       #  <-- files for use with the copy resource
            foo.sh        #  <-- script files for use with the script resource
        vars/             #
            main.yml      #  <-- variables associated with this role
        defaults/         #
            main.yml      #  <-- default lower priority variables for this role
        meta/             #
            main.yml      #  <-- role dependencies
        library/          # roles can also include custom modules
        module_utils/     # roles can also include custom module_utils
        lookup_plugins/   # or other types of plugins, like lookup in this case

    webtier/              # same kind of structure as "common" was above, done for the webtier role
    monitoring/           # ""
    fooapp/               # ""

Bootstrapping

  1. Check out this repository.
  2. Create a file ~/.ansible_vault containing the vault password.
  3. Use the Raspberry Pi Imager to install Ubuntu LTS (20.04 at the time of this writing), 64-bit.
  4. Find the IP (via DHCP lease, reservation, or console?).
  5. Log in with username ubuntu, password ubuntu. Set a new password.
  6. Add the hostname to the inventory.
  7. Run ssh-copy-id ubuntu@hostname and enter your password.
  8. Install the Galaxy requirements with ansible-galaxy install -r requirements.yml.
  9. Run the initial_users.yml task (a sketch of its likely shape follows the command):

    ansible-playbook -u ubuntu tasks/initial_users.yml --limit=k8s3
    

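For context, here's a guess at the shape of tasks/initial_users.yml, assuming it creates a personal admin account and installs an SSH key; the username and key path are assumptions, and the real play lives in the repo:

    # Hypothetical sketch of tasks/initial_users.yml; see the repo for the real play.
    - hosts: all
      become: true
      tasks:
        - name: Create an admin user   # username guessed from the htpasswd step below
          user:
            name: fdamstra
            groups: sudo
            append: true
            shell: /bin/bash

        - name: Authorize an SSH key for the new user
          authorized_key:
            user: fdamstra
            key: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"
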
Bootstrapping the Cluster

Start with the first control node:

ansible-playbook microk8s.yml --limit=k8s1 --extra-vars="skip_git=True" # Leave off skip_git if github is up

It will pause at the bootstrap step, but you can go ahead and hit enter.

TODO: Automate this
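For context, the pause presumably comes from something like Ansible's pause module prompting before the manual kubeadm steps; a minimal sketch (the real task lives somewhere in the roles):

    # Hypothetical sketch of the bootstrap pause task.
    - name: Wait for the manual kubeadm bootstrap
      pause:
        prompt: "Run the kubeadm init steps on the first control node, then press enter"
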

After the playbook completes and you've received the notification to bootstrap, do the following:

k8s1:   sudo kubeadm init --pod-network-cidr 10.244.0.0/16 --control-plane-endpoint k8s.home.monkeybox.org --upload-certs --apiserver-advertise-address 10.42.42.201
local:  ansible-vault edit ~/monkeybox_kubernetes/Ansible/bootstrap_output.txt
        # record output
local:  ansible-vault edit ~/monkeybox_kubernetes/Ansible/group_vars/k8s/vault # record values from output
k8s1:   cat /etc/kubernetes/admin.conf
local:  ansible-vault edit ~/monkeybox_kubernetes/Ansible/roles/k8s/files/config # record the config
k8s1:   mkdir -p $HOME/.kube
k8s1:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
k8s1:   sudo chown $(id -u):$(id -g) $HOME/.kube/config
k8s1:   kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
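The values recorded in group_vars/k8s/vault are presumably the join credentials printed by kubeadm init; a sketch with hypothetical variable names and placeholder values (the real names live in the encrypted vault):

    # Hypothetical variable names; edit with ansible-vault as shown above.
    vault_kubeadm_token: "<token from the kubeadm init output>"
    vault_kubeadm_ca_cert_hash: "sha256:<discovery-token-ca-cert-hash>"
    vault_kubeadm_certificate_key: "<certificate key printed by --upload-certs>"
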

Rerun the playbook:

ansible-playbook microk8s.yml --limit=k8s1 --extra-vars="skip_git=True" # Leave off skip_git if github is up

From here, you may proceed to "Other Nodes" or to deploying workloads.

MetalLB

local:

scp -r ~/monkeybox_kubernetes k8s1:

On k8s1:

kubectl edit configmap -n kube-system kube-proxy
# search for `ipvs`, and set `strictARP` to true. See note [1]
cd monkeybox_kubernetes/Workloads/metallb/
kubectl apply -f 001*
kubectl apply -f 002*
kubectl apply -f 003*
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
kubectl get pods -n metallb-system

Note [1]: The MetalLB docs helpfully provide the following shell snippet to automate this:

kubectl get configmap kube-proxy -n kube-system -o yaml | \
sed -e "s/strictARP: false/strictARP: true/" | \
kubectl apply -f - -n kube-system
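For reference, MetalLB releases of this vintage (the ones using the memberlist secret above) take their address pool from a ConfigMap; a sketch of what the pool config (presumably one of the numbered manifests) might look like, with an assumed Layer 2 address range:

    # Sketch of a MetalLB (pre-CRD era) address pool; the range is an assumption.
    apiVersion: v1
    kind: ConfigMap
    metadata:
      namespace: metallb-system
      name: config
    data:
      config: |
        address-pools:
        - name: default
          protocol: layer2
          addresses:
          - 10.42.42.240-10.42.42.250   # pick addresses outside the DHCP pool
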

nfs-provisioning

On k8s1:

cd ~/monkeybox_kubernetes/Workloads/nfs-provisioning
kubectl apply -f 001*
kubectl apply -f 002*
kubectl apply -f 003*
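Once the provisioner is running, workloads can claim NFS-backed storage through the new storage class; a sketch of a test PVC, assuming the common nfs-client class name (check the numbered manifests for the real one):

    # Hypothetical test claim; storageClassName is an assumption.
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: test-claim
    spec:
      storageClassName: nfs-client
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 1Gi
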

ingress-nginx

cd ~/monkeybox_kubernetes/Workloads/ingress-nginx
htpasswd -c auth fdamstra
kubectl create secret generic basic-auth --from-file=auth
kubectl apply -f ingress-nginx-controller-v0.45.0.yaml
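The basic-auth secret gets attached to individual ingresses via annotations; a sketch in the networking.k8s.io/v1beta1 form that matches controller v0.45.0, with a hypothetical host and backend service:

    # Hypothetical Ingress protected by the basic-auth secret created above.
    apiVersion: networking.k8s.io/v1beta1
    kind: Ingress
    metadata:
      name: example
      annotations:
        kubernetes.io/ingress.class: nginx
        nginx.ingress.kubernetes.io/auth-type: basic
        nginx.ingress.kubernetes.io/auth-secret: basic-auth
        nginx.ingress.kubernetes.io/auth-realm: "Authentication Required"
    spec:
      rules:
      - host: example.home.monkeybox.org   # hypothetical host
        http:
          paths:
          - path: /
            backend:
              serviceName: example         # hypothetical service
              servicePort: 80
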

cert-manager

  1. Log into the AWS console.
  2. Go to IAM -> Users -> letsencrypt-wildcard -> Security credentials.
  3. Click 'Create access key'.
  4. Copy the secret into a file called password.txt in ~/monkeybox_kubernetes/Workloads/cert-manager.
  5. Copy the access key ID into ~/monkeybox_kubernetes/Workloads/cert-manager/wildcard*.

    cd ~/monkeybox_kubernetes/Workloads/cert-manager
    kubectl create secret generic aws-route53-creds --from-file=password.txt -n default
    kubectl apply -f cert-manager.yaml
    # big pause here: wait for the cert-manager pods and webhook to come up
    kubectl apply -f staging_issuer.yaml
    kubectl apply -f prod_issuer.yaml
    kubectl apply -f wildcard_staging_issuer.yaml
    kubectl apply -f wildcard_prod_issuer.yaml
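For reference, the wildcard issuers presumably pair a Let's Encrypt ACME endpoint with a Route53 DNS01 solver that reads the aws-route53-creds secret; a sketch with hypothetical names, contact email, and region:

    # Hypothetical sketch of a wildcard issuer; every name here is an assumption.
    apiVersion: cert-manager.io/v1
    kind: Issuer                       # namespaced Issuer, matching the secret in default
    metadata:
      name: letsencrypt-wildcard-prod
      namespace: default
    spec:
      acme:
        server: https://acme-v02.api.letsencrypt.org/directory
        email: you@example.com         # hypothetical contact address
        privateKeySecretRef:
          name: letsencrypt-wildcard-prod-key
        solvers:
        - dns01:
            route53:
              region: us-east-1        # assumed region
              accessKeyID: <access key ID from step 5>
              secretAccessKeySecretRef:
                name: aws-route53-creds
                key: password.txt
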
    

Other Nodes

Adding other nodes should be straightforward:

  1. Log in with username ubuntu, password ubuntu. Set a new password.
  2. Add the hostname to the inventory.
  3. Run ssh-copy-id ubuntu@hostname and enter your password.
  4. Run the initial_users.yml task:

    ansible-playbook -u ubuntu tasks/initial_users.yml --limit=k8s3
    
  5. Run the full playbook (may as well run it on everything):

    ansible-playbook site.yml
    

Other Workloads

You just kinda work through them.

For reference, the saved control-plane join command from the original bootstrap (a worker join omits the --control-plane and --certificate-key flags):

    sudo kubeadm join 10.42.42.201:6443 --token x01udi.u7clxsm3004sne9p \
        --discovery-token-ca-cert-hash sha256:093b597fbd39eca8fdc0298339f1403792e4b06d50fabc9fdc226606e40197b9 \
        --control-plane --certificate-key 0f98113e1146805b2783e696f2df3f6760a12a223a393517036cc94b452406b1 \
        --apiserver-advertise-address IPHERE

etcd manifest original values (the live copy is the static pod manifest under /etc/kubernetes/manifests/ on the control-plane nodes):

    livenessProbe:
      initialDelaySeconds: 30
      periodSeconds: 30
      timeoutSeconds: 60
    startupProbe:
      initialDelaySeconds: 30
      periodSeconds: 30
      timeoutSeconds: 60

apiserver manifest original values (the leading block is presumably the livenessProbe, matching the field order of the generated manifest):

    livenessProbe:
      initialDelaySeconds: 30
      periodSeconds: 30
      timeoutSeconds: 60
    name: kube-apiserver
    readinessProbe:
      periodSeconds: 3
      timeoutSeconds: 45