Self-hosting platform on Raspberry Pi - K3s
This doc helps you deploy a Kubernetes self-hosting platform on Raspberry Pi devices with K3s.
It uses Terraform modules I have created to deploy the necessary software in the cluster.
Roadmap
- Configure Kubernetes cluster
- Self-host password manager: Bitwarden
- Self-host IoT dev platform: Node-RED
- Self-host home cloud: NextCloud
- Self-host home Media Center
- Transmission
- Flaresolverr
- Jackett
- Sonarr
- Radarr
- Plex
- Self-host ads/trackers protection: Pi-Hole
Prerequisites
- Accessible K8s/K3s cluster on your Pi.
- With `cert-manager` CustomResourceDefinitions installed: `kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v0.16.0/cert-manager.crds.yaml`
- For the Transmission BitTorrent client, an OpenVPN config file stored in `openvpn.ignore.ovpn`, with `auth-user-pass` set to `/config/openvpn-credentials.txt` (auto auth), including cert and key (see the illustrative excerpt below).
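As an illustration of that OpenVPN requirement only, the relevant directives would look roughly like the excerpt below; the remote host is a placeholder and the real file comes from your VPN provider, not from this repository:

```
# openvpn.ignore.ovpn (illustrative excerpt, not shipped with this repo)
client
dev tun
proto udp
remote <VPN_SERVER> 1194
# credentials are read from this file inside the container (auto auth)
auth-user-pass /config/openvpn-credentials.txt
<ca>
...provider CA certificate...
</ca>
<cert>
...client certificate...
</cert>
<key>
...client private key...
</key>
```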
Usage
Clone the repository:
$ git clone https://github.com/NoeSamaille/terraform-self-hosting-platform-rpi
$ cd terraform-self-hosting-platform-rpi
Configure your environment:
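The repository's input variables aren't listed in this doc; as a purely hypothetical illustration (the variable names below are NOT taken from the repo, check its `variables.tf` for the real ones), configuring the environment boils down to filling in a `terraform.tfvars` along these lines:

```hcl
# terraform.tfvars -- hypothetical example, variable names are illustrative only
domain     = "example.com"     # base domain the services will be exposed under
nfs_server = "192.168.1.10"    # static IP of pi-master serving the NFS share
nfs_path   = "/mnt/hdd"        # NFS share path backing persistent volumes
timezone   = "Europe/Paris"    # timezone passed to the containers
```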
Once it's done you can start deploying resources:
$ source scripts/init.sh # Generates service passwords
$ terraform init
$ terraform plan
$ terraform apply --auto-approve
... output omitted ...
Apply complete! Resources: 32 added, 0 changed, 0 destroyed.
To destroy all the resources:
$ terraform destroy --auto-approve
... output omitted ...
Destroy complete! Resources: 32 destroyed.
How to set up nodes
Base pi setup
Note: here we'll set up `pi-master`, i.e. our master pi. If you have additional workers (optional), you'll then have to repeat the following steps for each of the workers, replacing references to `pi-master` with `pi-worker-1`, `pi-worker-2`, etc.
- Connect via SSH to the pi.
- Change the password.
- Change the host name.
- Enable container features.
- Make sure the system is up to date.
- Configure a static IP. Note that this could also be done at the network level via your router's DHCP settings.
- Reboot.
- Wait a few seconds, then connect via SSH to the pi using the new static IP you've just configured (the commands for these steps are sketched after this list).
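A minimal sketch of these commands, assuming Raspberry Pi OS, the default `pi` user and a wired `eth0` interface; hostnames and addresses below are examples, adjust them to your network:

```sh
# 1. connect to the pi (replace with its current IP)
ssh pi@<PI_IP>

# 2. change the default password
passwd

# 3. change the host name (pi-master here), then update /etc/hosts accordingly
sudo hostnamectl set-hostname pi-master

# 4. enable container features: append the cgroup flags required by K3s
#    to the single line in /boot/cmdline.txt:
#    cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1

# 5. make sure the system is up to date
sudo apt update && sudo apt full-upgrade -y

# 6. configure a static IP by appending to /etc/dhcpcd.conf, e.g.:
#    interface eth0
#    static ip_address=192.168.1.10/24
#    static routers=192.168.1.1
#    static domain_name_servers=192.168.1.1

# 7. reboot
sudo reboot

# 8. wait a few seconds, then reconnect using the new static IP
ssh pi@192.168.1.10
```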
OPTIONAL: Set up NFS disk share
Create NFS Share on Master Pi
- On the master pi, run `fdisk -l` to list all the disks connected to the system and identify your disk.
- If your disk is new and fresh out of the package, you will need to create a partition.
- You can manually mount the disk to the directory `/mnt/hdd`.
- To automatically mount the disk on startup, you first need to find the unique ID (UUID) of the disk using the command `blkid`.
- Edit the file `/etc/fstab` and add a line to configure auto-mount of the disk on startup.
- Reboot the system.
- Verify the disk is correctly mounted on startup.
- Install the required dependencies.
- Edit the file `/etc/exports` to export the mount point over NFS.
- Start the NFS server (the commands for these steps are sketched after this list).
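A minimal sketch of the NFS server setup, assuming the disk shows up as `/dev/sda` with an ext4 partition `/dev/sda1` and that the share is opened to a `192.168.1.0/24` LAN; adjust device, UUID and subnet to your setup:

```sh
# identify the disk (e.g. /dev/sda) and, if needed, create a partition with fdisk
sudo fdisk -l

# manual mount to /mnt/hdd
sudo mkdir -p /mnt/hdd
sudo mount /dev/sda1 /mnt/hdd

# find the UUID of the partition for auto-mount
sudo blkid

# /etc/fstab line to auto-mount on startup (replace the UUID and filesystem type):
#   UUID=<YOUR_UUID> /mnt/hdd ext4 defaults 0 0

# reboot, then verify the disk is mounted
sudo reboot
df -h | grep /mnt/hdd

# install the NFS server
sudo apt install -y nfs-kernel-server

# /etc/exports line to share /mnt/hdd with the local network:
#   /mnt/hdd 192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)

# start the NFS server and reload the exports
sudo systemctl enable --now nfs-kernel-server
sudo exportfs -ra
```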
Mount NFS share on Worker(s)
Note: repeat the following steps for each of the workers `pi-worker-1`, `pi-worker-2`, etc.
- Install the necessary dependencies.
- Create the directory to mount the NFS share.
- Configure auto-mount of the NFS share by adding a line to `/etc/fstab`, where `<MASTER_IP>:/mnt/hdd` is the IP of `pi-master` followed by the NFS share path.
- Reboot the system.
- Optional: to mount manually, run a single `mount` command with the same `<MASTER_IP>:/mnt/hdd` source (the commands for these steps are sketched after this list).
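A minimal sketch for the workers, assuming the same `/mnt/hdd` mount point; replace `<MASTER_IP>` with the static IP of `pi-master`:

```sh
# install the NFS client dependencies
sudo apt install -y nfs-common

# create the directory to mount the NFS share
sudo mkdir -p /mnt/hdd

# /etc/fstab line to auto-mount the share on startup:
#   <MASTER_IP>:/mnt/hdd /mnt/hdd nfs defaults 0 0

# reboot so the fstab entry takes effect
sudo reboot

# optional: mount manually instead
sudo mount <MASTER_IP>:/mnt/hdd /mnt/hdd
```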
Set up K3s
Start K3s on Master pi
pi@pi-master:~ $ curl -sfL https://get.k3s.io | K3S_KUBECONFIG_MODE="644" INSTALL_K3S_EXEC=" --no-deploy servicelb --no-deploy traefik" sh -
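Once the installer finishes, the master should report itself as Ready; since `K3S_KUBECONFIG_MODE="644"` makes the kubeconfig readable by the `pi` user, you can check directly on the pi:

```sh
pi@pi-master:~ $ kubectl get nodes
```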
Register workers
- Get the K3s token on the master pi and copy the result.
- Run the K3s installer on each worker (repeat for every worker). Both commands are sketched below.
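A minimal sketch of both steps, using the standard K3s token path and installer environment variables; replace `<MASTER_IP>` and `<TOKEN>` with your values:

```sh
# on pi-master: print the join token and copy it
sudo cat /var/lib/rancher/k3s/server/node-token

# on each worker: install K3s in agent mode, pointing at the master
curl -sfL https://get.k3s.io | K3S_URL="https://<MASTER_IP>:6443" K3S_TOKEN="<TOKEN>" sh -
```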
Access K3s cluster from workstation
- Copy the kube config file from the master pi.
- Edit the kube config file to replace `127.0.0.1` with `<MASTER_IP>`.
- Test everything by running a `kubectl` command (these steps are sketched below).
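A minimal sketch from a Linux workstation, assuming you keep the file at `~/.kube/config-rpi` (any path works as long as `KUBECONFIG` points at it; you can also edit the file by hand instead of using `sed`):

```sh
# copy the kube config file from the master pi
scp pi@<MASTER_IP>:/etc/rancher/k3s/k3s.yaml ~/.kube/config-rpi

# replace the loopback address with the master's IP
sed -i 's/127.0.0.1/<MASTER_IP>/g' ~/.kube/config-rpi

# test everything with a kubectl command
export KUBECONFIG=~/.kube/config-rpi
kubectl get nodes -o wide
```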
Tear down K3s
- Worker(s)
- Master
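K3s installs uninstall scripts alongside itself, so tearing the cluster down is one script per node; a minimal sketch:

```sh
# on each worker: remove the K3s agent
/usr/local/bin/k3s-agent-uninstall.sh

# on the master: remove the K3s server
/usr/local/bin/k3s-uninstall.sh
```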
Known issues
Node-RED authentication
Node-RED authentication isn't set up by default at the moment. You can set it up by scaling the deployment down, editing the `settings.js` file to enable authentication, and scaling the deployment back up:
pi@pi-master:~ $ kubectl scale deployment/node-red --replicas=0 -n node-red
pi@pi-master:~ $ vim /path/to/node-red/settings.js
pi@pi-master:~ $ kubectl scale deployment/node-red --replicas=1 -n node-red
You can either set up authentication through GitHub (Documentation):
# settings.js
... Omitted ...
adminAuth: require('node-red-auth-github')({
clientID: "<GITHUB_CLIENT_ID>",
clientSecret: "<GITHUB_CLIENT_SECRET>",
baseURL: "https://node-red.<DOMAIN>/",
users: [
{ username: "<GITHUB_USERNAME>", permissions: ["*"]}
]
}),
... Omitted ...
Or classic username/password authentication (generate a password hash using `node -e "console.log(require('bcryptjs').hashSync(process.argv[1], 8));" <your-password-here>`):
# settings.js
... Omitted ...
adminAuth: {
type: "credentials",
users: [
{
username: "admin",
password: "$2a$08$zZWtXTja0fB1pzD4sHCMyOCMYz2Z6dNbM6tl8sJogENOMcxWV9DN.",
permissions: "*"
},
{
username: "guest",
password: "$2b$08$wuAqPiKJlVN27eF5qJp.RuQYuy6ZYONW7a/UWYxDTtwKFCdB8F19y",
permissions: "read"
}
]
},
... Omitted ...
More information in the Docs: Securing Node-RED.