Dylan's blog

What my system architecture looks like (early 2024)

At the end of my last blog post, I mentioned sharing my system architecture and how I manage my services. In this blog post, I’d like to go over all the details of what makes my services run, my configurations, what works well, and what doesn’t.

To hook you in, check out this awesome diagram I made in draw.io:

System Architecture

I’ll start with the hardest worker in my stack.

The Raspberry Pi 🥧

The busiest bee in my system is easily my Pi. I have a total of 9 Docker containers running and I don’t have any performance issues. My Pi runs a variety of different services, some being internal (typically limited to the LAN) and some being external (typically exposed to the WAN).

Bare-Metal

The only thing I run on bare metal - that is, installed directly on the Pi - is my Caddy server. When I ran Caddy in a Docker container, I had trouble getting it to cooperate with my reverse proxies, but I haven’t had any issues since installing it directly on the OS.

Caddy on the Pi

Routes describe how users of an endpoint arrive at a specified location on a server. My Caddy setup uses many reverse proxies to route traffic to services that do not need their ports exposed to the WAN. Rather than making my network less secure by exposing ports to the internet, I open only the port required for HTTPS (443) so requests can be redirected to each service through Caddy.

Since the service I’m using to run this blog doesn’t provide any default way of hosting content, I had to come up with an approach that exposes a directory of content to users reading my blog. I achieved this with a file_server directive, which spins up a file server at the directory path you specify. Since typical users won’t be browsing through the directories of available content, there was no need to enable the browse feature - however, if hosting a repository, this would be beneficial.

Here’s how my blog routes today:

www.blog.fivepixels.me, blog.fivepixels.me {
        # Use security configuration
        import caddy_security.conf
        # Any time a URI has /content following the base domain (i.e. blog.fivepixels.me/content/...), load the content from the file_server hosted at that path
        handle_path /content* {
                root * /super/secret/path/to/blog/content/
                file_server
        }
        @except not path /content*
        # Blog content should not be text/html, but all other pages should be
        header @except {
                Content-Type text/html
        }
        # All other requests (typical blog pages) should route through their respective pages coming from the service
        reverse_proxy localhost:0000
}

One benefit to using Caddy is the automatic and free SSL certificates through Let’s Encrypt. Rather than managing any SSL certificate files, Caddy does all of this automagically and requires almost no configuration to get started.
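
To show just how little configuration that takes, a complete Caddyfile for a single proxied site can be as small as this (the domain and port here are placeholders, not one of my real services):

www.example.com {
        reverse_proxy localhost:8080
}

With only that, Caddy obtains a certificate for the domain, serves the site over HTTPS, and renews the certificate before it expires.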

You can learn more about Caddy here: https://caddyserver.com

Docker

Man oh man - two years ago, I wouldn’t have been able to explain Docker, but now I feel more acquainted with it than ever. Is this the Dunning-Kruger effect?

As of very recently, I’ve done some refactoring of my Docker setup. You might be wondering if there’s a tool I use to manage all of these services; there is - it’s the one built into Docker: Docker Compose! I’ve been using Compose for a while, but as I added services, my file kept growing. I decided it was time to separate each service into its own compose.yaml file and organize the services by type.

As I described earlier, I have some services that (usually) live on the LAN, or at least interact mostly with the LAN. Currently these are my AdGuard Home instance, my python-matter-server instance (for having Matter in my smart home), and my personal cu-energy-service for automatically fetching energy reports from my energy provider (another smart home thing I talked about in my last blog post).

My external services, on the other hand, are services that are intended to be viewed or used heavily on the WAN. The exhaustive list includes FreshRSS, Home Assistant, this blog, my cgit instance, my personal website, and Vaultwarden.

While working on this post, I created a ‘root’ compose.yaml to make a distinction between internal and external services. Alongside it, I created a compose.yaml in each of the external and internal directories, which house directories for each service and their respective compose.yaml files. Confused yet? Here’s the tree:

.
├── compose.yaml
├── external
│   ├── compose.yaml
│   ├── freshrss
│   │   └── compose.yaml
│   ├── homeassistant
│   │   └── compose.yaml
│   ├── personal-blog
│   │   └── compose.yaml
│   ├── personal-cgit
│   │   └── compose.yaml
│   ├── personal-website
│   │   └── compose.yaml
│   └── vaultwarden
│       └── compose.yaml
└── internal
    ├── adguardhome
    │   └── compose.yaml
    ├── compose.yaml
    ├── cu-energy-service
    │   └── compose.yaml
    └── matter-server
        └── compose.yaml

12 directories, 12 files

The ‘root’ compose.yaml:

name: 'services'
include:
  - ./internal/compose.yaml
  - ./external/compose.yaml

./external/compose.yaml:

name: 'external'
services:
  homeassistant:
    extends:
      file: ./homeassistant/compose.yaml
      service: homeassistant
  freshrss:
    extends:
      file: ./freshrss/compose.yaml
      service: freshrss
  personal-blog:
    extends:
      file: ./personal-blog/compose.yaml
      service: personal-blog
  personal-cgit:
    extends:
      file: ./personal-cgit/compose.yaml
      service: personal-cgit
  personal-website:
    extends:
      file: ./personal-website/compose.yaml
      service: personal-website
  vaultwarden:
    extends:
      file: ./vaultwarden/compose.yaml
      service: vaultwarden
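
For reference, here’s roughly what one of the per-service files looks like. This is a hypothetical sketch of ./freshrss/compose.yaml - the image tag, port mapping, and volume path are assumptions, not my exact configuration:

services:
  freshrss:
    image: freshrss/freshrss:latest
    container_name: freshrss
    restart: unless-stopped
    ports:
      - "8080:80"
    volumes:
      - ./data:/var/www/FreshRSS/data

The extends entry in ./external/compose.yaml refers to the freshrss service defined here by name.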

Easy peasy. Some things change when only the internal or external compose.yaml file is used. For starters, each of those creates its own Docker network, but only when running that ‘side’ individually. When you run the root compose.yaml, only one network is created.
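
If I ever wanted both ‘sides’ to share one network even when brought up individually, Compose can pin the default network to a pre-created one. Here’s a sketch of what could be added to both internal/compose.yaml and external/compose.yaml (the network name is made up):

networks:
  default:
    name: services_shared
    external: true

With external: true, Compose joins the existing network rather than creating a new one, so the network would first need to be created once with docker network create services_shared.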

At some point I should check this into Git…

Conclusion

There’s a small chance the Docker layout is overkill. I don’t think I’ll revert it, but if there are ways to make it less complex, we might be in business. I don’t plan on using any ‘wrapper’ software around Docker, like Portainer, in the future. I’d highly suggest learning Docker from scratch, as it’s a fun and interesting journey learning how to deploy things quickly and easily while gaining some knowledge of the underlying software.

The Desktop 🖥️

This section is going to be short, as I only have one service I recently started hosting here: my Immich instance. I decided to host this on my desktop since it has the storage capacity I need to actually back up my images. Immich gives you the ability to directly back up anything put into your iPhone’s photo library, meaning it can act as a replacement for iCloud. Eventually I’d like to stop paying for iCloud once I have synchronization working autonomously. Any time my desktop is off, synchronization won’t happen since the Immich instance isn’t live.

Immich Usage

One of the best features in Immich is the timeline scrollbar, which shows the volume of photos from each year in your library:

An image of Immich

Just like the services on the Raspberry Pi, Immich runs in Docker. This service, though, requires more than one container, as some are allocated to machine learning. Facial recognition is built in, which means that, just like iCloud, faces are labeled appropriately and can be searched through.
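
To give an idea of the shape of that stack, the upstream Immich compose file defines several services along these lines (a heavily simplified sketch - environment variables, volumes, and exact image tags omitted):

services:
  immich-server:                # web UI and API
    image: ghcr.io/immich-app/immich-server:release
  immich-machine-learning:      # facial recognition and smart search
    image: ghcr.io/immich-app/immich-machine-learning:release
  redis:                        # job queue
    image: redis
  database:                     # metadata store
    image: postgres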

What works well

You’ve already heard me rave about Docker containers once in this blog post, but I’ll repeat myself and describe how well these run. Despite the Pi being a small micro-computer, I see no slowdowns on any of my services. AdGuard Home, for example, is my DNS server: it resolves queries for every device on my network and tries to prevent ads and trackers from exposing personal information. Despite handling tens of thousands of requests a day, I never have to wait for my Pi to act as the middleman between all of the devices on the network.

AdGuard Stats

What could be better

One issue I had not too long ago was that when AdGuard Home went down, no websites would load. The fix was trivial: my router was configured to use AdGuard Home as its only DNS server, so if that failed, there was no backup. Adding a backup DNS server solved the problem immediately.

While I did mention how good day-to-day performance is on the Pi, the biggest slowdown comes from bringing services down and back up, as well as performing updates to services. Taking all services down takes about 11 seconds (if you also include the network) at the time of writing this post:

docker compose down
[+] Running 10/10
 ✔ Container adguardhome        Removed                                                                                7.2s
 ✔ Container personal-cgit      Removed                                                                                4.5s
 ✔ Container matter-server      Removed                                                                                4.3s
 ✔ Container personal-website   Removed                                                                                5.3s
 ✔ Container freshrss           Removed                                                                                5.7s
 ✔ Container personal-blog      Removed                                                                                2.3s
 ✔ Container vaultwarden        Removed                                                                                3.6s
 ✔ Container cu-energy-service  Removed                                                                                7.3s
 ✔ Container homeassistant      Removed                                                                               10.4s
 ✔ Network services_default     Removed                                                                                0.4s

This seems to vary, as sometimes it can take about 30 seconds. The same goes for bringing services up.

I’ve been considering purchasing a Raspberry Pi 5, as I’ve heard it offers about 4 times the performance of the model I have (Model 4B, 8GB).

Conclusion

I cranked out this post in an evening, which is super unusual for me, as I usually take more than a week to write these. I don’t know why I was so motivated to write it, but here we are.

If you got this far and you don’t know what Docker is but you want to understand it, you should check out the Docker overview to get a basic understanding of what’s going on.

If you have any questions, shoot me an email or a message and I’ll answer anything I can!

Thanks for reading and I hope you were able to learn something new!

EDIT: I’ve created a repository with the general layout of my Docker Compose files used to deploy services on my Raspberry Pi, which can be found here