Hosting like Emma
Over the last few years, I've been looking for and refining my way of deploying the software I write and rely on. These are mostly simple services, like a password manager, a Discord bot, or this blog.
The goals for me are (in descending order of importance):
- Simplicity: I want to be able to deploy my code with as little friction as possible. If this point isn't met, the service likely won't be deployed in the first place.
- Cost: The hosting solution must not be more expensive than necessary. I don't need a lot of resources, so I don't want to pay for them.
- Security: The services should be kept up to date and backed up. In combination with number one, those updates need to be easy to perform.
- Portability: Related to Security, in case a machine fails, I want to be able to restore my services quickly and easily. This is also a standalone point, as moving services might be desirable for other reasons, like switching to a different provider.
- Reliability: I want to be able to rely on my services being up and running.
These goals might be wildly different from your own, but I hope that if they are somewhat similar, you can take something away from this post.
You may also be surprised by reliability being so low on the list, but I don't run any mission-critical services. If something goes down, I can live with it for a while, as long as I can restore it quickly. All of the services I run are either entirely for my personal use or for a small group of friends who are understanding of the occasional downtime. My fedi instance, for example, was down twice this past week for quite a few hours, resulting in an uptime of less than 95% according to Fediverse Observer (5% of a week is about 8.4 hours). But because I'm the only one on there, the only effect was that I couldn't post anything until I fixed it. More importantly to me, the fix was quick and easy, which I care about much more than uptime.
There are two general approaches I've used:
- Static hosting: either on GitHub Pages or on Cloudflare Pages. This is great for static sites, but only works for static sites. Both use global CDNs, so they are fast and reliable. I've since moved away from static hosting and simplified even more by also using Docker Compose for static sites.
- Docker Compose: For anything that's not just a bunch of files, I use Docker Compose, either on a VPS or whatever I am currently running on my home network (see New new new home infrastructure).
Static Hosting
For static sites, I use Eleventy to generate the HTML files from Markdown and other sources. Previously, the generated files were pushed to a GitHub repository and served by GitHub Pages or Cloudflare Pages. Nowadays, the files are generated inside a Dockerfile, the resulting image is uploaded to the integrated Docker registry, and then served by an Nginx container.
Docker Compose
For everything else, the workflow is basically the same:
- Write the code.
- Write a Dockerfile to build the image.
- Set up a Forgejo Action to build the image and push it to the Docker registry.
- Push the code to a Forgejo git repository.
- Have the Action build the image and push it to the Docker registry.
- Set up a container on the Portainer host and activate the webhook.
- Then copy the webhook URL into the Action's secrets so it can trigger a redeploy after each build.
- The service is now running on Portainer and automatically updates whenever new code is pushed.
Dockerfile
The Dockerfile is responsible for building the image that will run the service. Most of the time, there are more dependencies for building the code than for running it, so I use a multi-stage build to keep the final image small.
For example, the Dockerfile for this blog looks like this:
# Build stage: install dependencies and generate the static site
FROM node:23-alpine AS build
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install
COPY . .
RUN yarn exec eleventy

# Final stage: serve the generated files with a tiny static file server
FROM lipanski/docker-static-website:latest
COPY --from=build /app/_site /home/static
There is no EXPOSE or ENTRYPOINT necessary, as the image will be used by Docker Compose, which sets those up and can change them without rebuilding the image.
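To make that concrete, here is a hedged sketch of how those settings could be supplied in the Compose file instead; the port numbers are made up for illustration:

services:
  app:
    image: 'git.emmabyte.de/emma/emmabyte'
    restart: unless-stopped
    # hypothetical mapping: changing this needs no image rebuild
    ports:
      - '8080:3000'
    # an entrypoint or command override could go here the same way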
Workflow
The workflow is responsible for running the commands in the runner that build the image and push it to the Docker registry. It is somewhat long and complicated, but it is almost identical for all of my services, so I can copy and paste it easily.
name: Build and Push Docker Image
on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]
  # Allow manual triggering
  workflow_dispatch:
jobs:
  build-and-push:
    runs-on: [main]
    permissions:
      contents: read
      packages: write
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Log in to Container Registry
        uses: docker/login-action@v3
        with:
          registry: git.emmabyte.de
          username: emma
          password: ${{ secrets.REGISTRY_TOKEN }} # secret name is a placeholder
      - name: Extract metadata for Docker
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: git.emmabyte.de/${{ github.repository }}
          tags: |
            type=sha,format=long
            type=ref,event=branch
            type=ref,event=pr
            type=semver,pattern={{version}}
            type=semver,pattern={{major}}.{{minor}}
            type=raw,value=latest
      - name: Build and push Docker image
        uses: docker/build-push-action@v5
        with:
          context: .
          push: ${{ github.event_name != 'pull_request' }}
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
      - name: Trigger Portainer webhook
        if: ${{ github.event_name != 'pull_request' }}
        run: |
          echo "Triggering Portainer webhook to update service..."
          # secret name is a placeholder for the Portainer webhook URL
          curl -X POST ${{ secrets.PORTAINER_WEBHOOK_URL }} -H "Content-Type: application/json"
Docker Compose
The Docker Compose file is responsible for running the service and setting up the environment. This is where most of the configuration is done, like environment variables, volumes, and ports. For example, the Docker Compose file for this blog looks like this:
services:
  app:
    image: 'git.emmabyte.de/emma/emmabyte'
    restart: unless-stopped
    networks:
      - nginx

networks:
  nginx:
    external: true
As you can see, it is very simple. It just pulls the image from the Docker registry and runs it in a network called nginx, which is how the Nginx reverse proxy can access it.
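For services that need more than that, the extra configuration slots into the same file. Here is a hedged sketch with made-up environment variables and paths, not one of my real services:

services:
  app:
    image: 'git.emmabyte.de/emma/example-service' # hypothetical image
    restart: unless-stopped
    environment:
      # illustrative settings only
      - LOG_LEVEL=info
    volumes:
      # persist application data outside the container
      - ./data:/var/lib/app
    networks:
      - nginx

networks:
  nginx:
    external: true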
Portainer
Portainer is a web-based management interface for Docker. Specifically, I use it to have an easy way to get a webhook URL that pulls the new image and redeploys the service. I may want to replace it with a simpler solution in the future, but for now, it works well enough.
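For reference, the webhook is just a URL that gets POSTed to; a stack webhook in Portainer looks roughly like this (the ID below is a made-up placeholder):

# the webhook ID is a placeholder, not a real token
curl -X POST https://portainer.main.emmabyte.de/api/stacks/webhooks/00000000-0000-0000-0000-000000000000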
Nginx Reverse Proxy
I use Nginx Proxy Manager, which is very simple to set up, to reverse proxy the services to the outside world. It shares a network with all of the services, so it can access them by their service name.
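Nginx Proxy Manager itself runs as just another Compose service. This sketch is adapted from the project's standard quick-start Compose file, with the shared nginx network added; treat it as a starting point rather than my exact setup:

services:
  npm:
    image: 'jc21/nginx-proxy-manager:latest'
    restart: unless-stopped
    ports:
      - '80:80'   # HTTP traffic
      - '443:443' # HTTPS traffic
      - '81:81'   # admin web UI
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
    networks:
      - nginx

networks:
  nginx:
    external: true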
DNS
For even simpler setup of new services, I've set up a wildcard DNS entry for *.main.emmabyte.de that points to the IP of the Nginx Proxy Manager. This way, I can just add a new service with a name like portainer.main.emmabyte.de and it will automatically be accessible through the reverse proxy without any additional configuration, taking one more step out of the way.
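In zone-file notation, the whole DNS side is a single record; the address here is a documentation placeholder, not my real IP:

; one wildcard A record covers every service subdomain
*.main.emmabyte.de.  3600  IN  A  203.0.113.10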
Conclusion
This setup has served me well for a while now, and I hope it can help you set up your own services with ease. Its simplicity is what makes it so effective for me. If you have any questions or suggestions, feel free to reach out to me on Fedi or Matrix. I am always happy to help and discuss new ideas.