I've been thinking a lot about the network and server configuration, as well as "what are our goals."
I'm thinking of the machines that make up the home lab as corresponding to different functions. I've sketched it out below.
Each device serves a purpose:
Synology is our NAS and will also run NextCloud so that all of our file management lives in one place. I have notes on wiring the Synology into the docker-compose file so the connection is baked into that setup. I'll go over that in another post (there's a rough sketch after this list).
The Pi Cluster will host the majority of our services. I'm thinking Mastodon, GoToSocial, Calibre-Web, Pixelfed, RSSHub, Supabase, Directus, etc.
The B-link will house the Coder instance, GitHub Runners, and maybe Drone.io.
We'll have a single Raspberry Pi that manages Umbrel (Bitcoin and Lightning nodes).
The NUC will host our media server, Plex.
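Since the compose wiring keeps coming up, here's a minimal sketch of what the NextCloud piece might look like on the Synology. The port mapping and the /volume1/docker/nextcloud path are assumptions for illustration; the full write-up will come in that later post.

```yaml
# Minimal NextCloud sketch for the Synology (assumed port and volume path)
services:
  nextcloud:
    image: nextcloud:latest
    ports:
      - "8080:80"                                 # reach NextCloud on the NAS at :8080 (assumed)
    volumes:
      - /volume1/docker/nextcloud:/var/www/html   # typical Synology volume path (assumed)
    restart: unless-stopped
```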
I didn't want to get rid of any of the machines we've collected over the years. Assigning each function to a specific machine or cluster will make it easy to know where things should go as we add more capabilities.
Outside the home network, I think we'll continue to have a DigitalOcean droplet hosting the public reverse proxy, connected to our Tailscale tailnet. I'm toying with two options: a Virtual IP (a self-assigned IP that the Keepalived service uses as a single entry point to the Pi Cluster), or giving each Docker container a permanent Tailscale IP that travels with it wherever the compose file is run. If we can sidecar Tailscale in, we almost lose the need for a single Virtual IP managed on the cluster. It also means we could move that container to a completely different piece of hardware outside the cluster, its Tailscale IP would move with it, and the reverse proxy would just pick it up.
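To make the sidecar idea concrete, here's a rough sketch of the pattern using the official tailscale/tailscale image, with GoToSocial standing in as the example service. The hostname and auth key are placeholders, not our actual config:

```yaml
# Sketch: give one container its own tailnet identity via a Tailscale sidecar (placeholder values)
services:
  tailscale:
    image: tailscale/tailscale:latest
    hostname: gotosocial               # becomes the node name on the tailnet (assumed)
    environment:
      - TS_AUTHKEY=tskey-auth-XXXX     # placeholder auth key
      - TS_STATE_DIR=/var/lib/tailscale
    volumes:
      - ./ts-state:/var/lib/tailscale  # persist the node identity so its IP survives moves
    devices:
      - /dev/net/tun:/dev/net/tun
    cap_add:
      - NET_ADMIN
    restart: unless-stopped

  gotosocial:
    image: superseriousbusiness/gotosocial:latest
    network_mode: service:tailscale    # share the sidecar's network namespace
    depends_on:
      - tailscale
```

Because the Tailscale state lives in a volume next to the compose file, the node identity (and its tailnet IP) travels with the container, which is exactly why the single Virtual IP starts to feel unnecessary.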
And a hello to you too!
The good news is that my desk is fully set up for recording now!
Addressing the next steps:
Networking honestly scares me, so I’m probably not going to be able to help a whole lot here aside from the grunt work of running commands on the boxes.
That being said, I think that setting up a second LAN specifically for the homelab, with its own router, should solve our IP issues. It feels like a very brute-force, kludgy way of handling it, but it's straightforward and, as I said, networking scares me.
Decommissioning the “outside” server is in our best interest, in my opinion. I fully agree that the traffic we're expecting isn't going to be heavy enough to need the extra bandwidth that external hosting provides. Additionally, if the Matrix server goes down, we lose the data, and we have to start from zero, I think we'll be good. Nobody has mission-critical data on there, and the setup process won't be headache-inducing.
Side note about GoToSocial: I want to be more involved in that setup this time. I need to know how to control the server configuration if I'm going to continue using the platform. [^1]
I'm mostly in the dark about how to set things up, so I'll need to take marching orders, but I'm ready to work on this!
—Zane
[^1]: Side note: GoToSocial is really good and you should use it too :P
Hello Zane!
It's time to get our home lab back up and running. Thank you for all the research you did and the initial setup work this summer. I wanted to write out what I'm envisioning for our next steps and what we want to build. Once our schedules calm down I want to get back to recording our podcast with you.
Current Status:
The Raspberry Pi cluster needs to have the one node that stopped responding swapped out. I have the parts, and now it's just a matter of removing the old board.
The Synology is currently running our Docker containers: Plex, Portainer, and Directus (a CMS that connects to Supabase for my personal data tracking). Once the Pi Cluster is back up we'll migrate those over.
Tailscale exit node is up and running.
We currently have Mastodon, GoToSocial, and a few other services running on DigitalOcean.
Next Steps:
Getting the Pi Cluster up and running is the first priority. I want to use the shared storage and implement High Availability using Keepalived. I'm having trouble figuring out how to define a Virtual IP for the cluster to load balance through because we have Eero as our gateway. I'm thinking about deploying a Ubiquiti EdgeRouter X to create an internal network that all the home lab services go through, so they're isolated on their own subnet. This, plus Tailscale, should let us access each machine individually but also address the cluster as a single IP that fails over in case of a node failure (there's a rough Keepalived sketch below).
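For reference, here's roughly what the Keepalived side of that would look like on one Pi node. The interface name, router ID, and the 192.168.10.x subnet are assumptions based on the EdgeRouter X idea, not settled values:

```
# /etc/keepalived/keepalived.conf on the primary node (sketch; values assumed)
vrrp_instance homelab_vip {
    state MASTER            # the other nodes run BACKUP with lower priorities
    interface eth0          # whichever NIC sits on the EdgeRouter subnet
    virtual_router_id 51
    priority 150            # highest priority holds the Virtual IP
    advert_int 1
    virtual_ipaddress {
        192.168.10.100/24   # the single entry point the reverse proxy targets
    }
}
```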
Once this is up and running I want to decommission the "outside" server. I want to host Mastodon and GoToSocial on our own hardware in our network, since if there is an interruption in service they will come back up and fetch the outstanding data. The only service that currently has an issue with this is our Matrix chat server, and, honestly, I don't think we'd have a problem with a service outage as long as we can stand it back up via a docker-compose file (a minimal sketch is below). Would love to know your thoughts on that.
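For what it's worth, standing Matrix back up really should be one command away as long as something like this lives in the repo. This is a minimal sketch using the official Synapse image with its defaults, not our live config:

```yaml
# Sketch: minimal Synapse homeserver (image defaults, not our live config)
services:
  synapse:
    image: matrixdotorg/synapse:latest
    volumes:
      - ./synapse-data:/data   # homeserver.yaml, signing keys, and the SQLite db
    ports:
      - "8008:8008"            # Synapse's default client/federation listener
    restart: unless-stopped
```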
Those are the two big items that will set us up so we can both spin up services and get our media server into high-availability mode.
Can't wait to hear from you!
Dad
PS. This is going to be our first post on the revamped Ok, What Went Wrong?
PPS. Publishing from Obsidian to the site is set up through Git Publishing.