# Kubernetes cluster under Talos Linux OS

This doc covers the installation of a simple Kubernetes cluster with a single node running Talos Linux OS on a Proxmox instance.

Main docs:

- https://www.talos.dev
- https://www.talos.dev/v1.9/introduction/getting-started/

Requirements:

- Proxmox server instance
- Homebrew
## Talos Linux OS installation on a Proxmox VM

- Download the latest Talos Linux ISO image from the GitHub releases.
- Upload the image into your Proxmox instance using the web interface (`Datacenter > pve > local (pve) > ISO images > Upload`).
- Create a VM booting from the image (`Create VM > OS > ISO image`).
- Boot the machine.
## Static IP address setup for your VM

- From your Proxmox instance web interface, retrieve the MAC address of your new VM running Talos OS (`Datacenter > pve > YOUR_VM_INSTANCE > Hardware > Network device (net0)`).
- From your router, assign a static IP to your VM using its MAC address (e.g. from Freebox OS: `Freebox settings > DHCP > Static leases > Add a DHCP static lease`).
## Cluster setup

During this step, we will set up our Talos cluster to run a fully functional Kubernetes cluster.
### Talosctl installation

If you don't have `talosctl` on your local workstation, please install it using:
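The install command itself is missing above; since Homebrew is listed in the requirements, a sketch using the Sidero Labs Homebrew tap from the Talos docs:

```shell
# Install talosctl via the Sidero Labs Homebrew tap
brew install siderolabs/tap/talosctl

# Verify the client is available
talosctl version --client
```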
### Talos configuration

Set environment variables:

* `CLUSTER_NAME` is an arbitrary name, used as a label in your local client configuration. It should be unique among the clusters configured on your local workstation.
* `CONTROL_PLANE_IP` is the static IP address you assigned to your VM above, which will serve as the control plane of your Talos cluster.
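For example (the name and IP below are placeholders; adjust them to your own setup):

```shell
# Example values only: replace with your own cluster name and the
# static IP you assigned to the VM on your router
export CLUSTER_NAME=home-lab
export CONTROL_PLANE_IP=192.168.1.50
```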
Generate the Talos cluster machine configurations using:
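The command itself is not shown above; per the Talos getting-started guide, configuration generation is done with `talosctl gen config` (6443 is the default Kubernetes API server port):

```shell
# Generates controlplane.yaml, worker.yaml and talosconfig
# in the current directory
talosctl gen config $CLUSTER_NAME https://$CONTROL_PLANE_IP:6443
```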
You should then have the following output, as three files are created in your current directory:

```
generating PKI and tokens
created /Users/user/controlplane.yaml
created /Users/user/worker.yaml
created /Users/user/talosconfig
```

* `controlplane.yaml`: configuration file to apply to your Talos cluster control plane node
* `worker.yaml`: configuration file to apply to each of your Talos cluster worker nodes (we won't use it in this documentation)
* `talosconfig`: configuration of your Talos cluster
Optional: you can update your local workstation Talos configuration (`~/.talos/config`) with the content of `./talosconfig`, so it handles the newly created Talos cluster:

```yaml
context: <your-cluster-name>
contexts:
  home-lab:
    endpoints: [<your-control-plane-node-ip>]
    ca: <your-ca>
    crt: <your-crt>
    key: <your-key>
```
Edit `./controlplane.yaml` to set the following directive, in order to make sure that your Kubernetes master node will be able to run workloads:
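The directive itself is missing above; based on the Talos machine configuration reference, the relevant setting is `allowSchedulingOnControlPlanes` under the `cluster` section (verify against your Talos version):

```yaml
cluster:
  # Allow regular workloads to be scheduled on the control plane node
  allowSchedulingOnControlPlanes: true
```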
Apply the configuration to your Talos control plane node using:

```shell
talosctl apply-config --insecure -n $CONTROL_PLANE_IP --file ./controlplane.yaml \
  --talosconfig=./talosconfig
```
### Kubernetes bootstrap

Bootstrap Kubernetes on your Talos cluster using:

```shell
talosctl bootstrap --nodes $CONTROL_PLANE_IP --endpoints $CONTROL_PLANE_IP \
  --talosconfig=./talosconfig
```
After a few moments, you should be able to retrieve your Kubernetes cluster configuration, which will be automatically merged into your local Kubernetes configuration (`~/.kube/config`):

```shell
talosctl kubeconfig --nodes $CONTROL_PLANE_IP --endpoints $CONTROL_PLANE_IP \
  --talosconfig=./talosconfig
```
Your Kubernetes cluster should now be running. To verify the installation:

- If you do not have kubectl installed, please run:
- Verify that your current kube context is the one of the newly created cluster (`admin@${CLUSTER_NAME}`) by running:
- Run the following command:
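The commands for the three steps above are missing; a sketch, assuming Homebrew for the kubectl installation:

```shell
# 1. Install kubectl (Homebrew is listed in the requirements)
brew install kubectl

# 2. The current context should be admin@<your-cluster-name>
kubectl config current-context

# 3. List all pods across all namespaces
kubectl get pods --all-namespaces
```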
If a result like the following appears, then congratulations, your Kubernetes cluster under Talos OS is properly running:

```
NAMESPACE     NAME                                    READY   STATUS    RESTARTS   AGE
kube-system   coredns-xxxxxxxxxx-xxxxx                1/1     Running   0          1s
kube-system   coredns-xxxxxxxxxx-xxxxx                1/1     Running   0          1s
kube-system   kube-apiserver-talos-xxx-xxx            1/1     Running   0          1s
kube-system   kube-controller-manager-talos-xxx-xxx   1/1     Running   0          1s
kube-system   kube-flannel-xxxxx                      1/1     Running   0          1s
kube-system   kube-proxy-xxxxx                        1/1     Running   0          1s
kube-system   kube-scheduler-talos-xxx-xxx            1/1     Running   0          1s
```
## Optional: Kubernetes cluster initial setup

In this section we'll set up our Kubernetes cluster so it is fully prepared to run applications (metrics, ingress, etc.).
### Metrics server

In order to monitor metrics from our cluster, we need to deploy a metrics server.

Doc:

* https://kubernetes-sigs.github.io/metrics-server/
- Add the metrics-server Helm repository using the following command:
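The command is missing above; a sketch, using the chart repository URL from the metrics-server doc link:

```shell
helm repo add metrics-server https://kubernetes-sigs.github.io/metrics-server/
helm repo update
```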
- Run the following command to deploy metrics-server with the following values (we only add `--kubelet-insecure-tls` in order to skip unnecessary TLS verification when performing internal metrics API calls within the cluster):
```shell
helm install metrics-server metrics-server/metrics-server --namespace kube-system -f - <<EOF
defaultArgs:
  - --cert-dir=/tmp
  - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
  - --kubelet-use-node-status-port
  - --metric-resolution=15s
  - --kubelet-insecure-tls
EOF
```
- After a few moments, run the following command to verify the installation:
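The verification command is not shown; one way to check it, assuming the release name and namespace used above:

```shell
# Pods of the metrics-server release in kube-system
kubectl -n kube-system get pods -l app.kubernetes.io/name=metrics-server

# Once the pod is ready, node metrics should be served
kubectl top nodes
```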
If a result like the following appears, then congratulations, your metrics-server is properly running:
### MetalLB as cluster external load balancer

To handle traffic coming from outside our cluster, we will deploy MetalLB as the default/main external load balancer.

Doc:

- https://metallb.io/
- https://metallb.io/installation/
- https://metallb.io/configuration/
- Add the MetalLB Helm repository to your Helm configuration:
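The command is missing above; a sketch, using the chart repository from the MetalLB installation docs:

```shell
helm repo add metallb https://metallb.github.io/metallb
helm repo update
```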
- Deploy the MetalLB Helm chart:
```shell
helm install metallb metallb/metallb --namespace kube-system -f - <<EOF
labels:
  pod-security.kubernetes.io/audit: privileged
  pod-security.kubernetes.io/enforce: privileged
  pod-security.kubernetes.io/warn: privileged
EOF
```
- Deploy complementary resources (IPAddressPool & L2Advertisement):

```shell
kubectl apply -f - <<EOF
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: public-pool
  namespace: kube-system
spec:
  addresses:
    - <your-control-plane-node-ip>-<your-control-plane-node-ip>
EOF

kubectl apply -f - <<EOF
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: public-pool-l2-advertisement
  namespace: kube-system
spec:
  ipAddressPools:
    - public-pool
EOF
```
- After a few moments, run the following command to verify the installation:
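The verification command is not shown; one way to check it, assuming the release name and namespace used above:

```shell
# Pods of the metallb release (controller and speakers) in kube-system
kubectl -n kube-system get pods -l app.kubernetes.io/name=metallb
```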
If a result like the following appears, then congratulations, your MetalLB external load balancer is properly running:
### Traefik ingress controller

To handle the routing within your cluster, we will deploy Traefik as the default/main ingress controller.

Doc:

- https://v2.doc.traefik.io/traefik/
- https://v2.doc.traefik.io/traefik/getting-started/install-traefik/
- Add the Traefik Helm repository to your Helm configuration:
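The command is missing above; a sketch, using the official Traefik Helm chart repository:

```shell
helm repo add traefik https://traefik.github.io/charts
helm repo update
```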
- Deploy the Traefik Helm chart:
```shell
helm install traefik traefik/traefik --namespace kube-system -f - <<EOF
providers:
  kubernetesIngress:
    publishedService:
      enabled: false
  kubernetesGateway:
    enabled: true
gateway:
  listeners:
    web:
      namespacePolicy: All
EOF
```
- After a few moments, run the following command to verify the installation:
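The verification command is not shown; one way to check it, assuming the release name and namespace used above (the Service name `traefik` is the chart default):

```shell
# Pods of the traefik release in kube-system
kubectl -n kube-system get pods -l app.kubernetes.io/name=traefik

# The LoadBalancer Service should get an external IP from MetalLB
kubectl -n kube-system get svc traefik
```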
If a result like the following appears, then congratulations, your Traefik ingress controller is properly running: