This is the second part of my "Ghostjs on Docker" series. You can click here to read the first part, which was about running Ghostjs in development with Docker.

This one is about how I initially deployed my website at https://rameau.me.

Prerequisites

Remember, this is about deployment, so first things first, get yourself a VPS with the following specs:

  • Ubuntu 16.04, Ubuntu 18.04 or Ubuntu 20.04
  • At least 25 GB of storage
  • At least 1 GB of memory
  • These ports open to the public: 80 (HTTP), 443 (HTTPS) and 22 (SSH). If your provider doesn't manage the firewall for you, see the sketch below this list.
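
If your VPS provider doesn't manage the firewall for you, here's a minimal sketch using ufw, which ships with Ubuntu (adjust accordingly if you use a different firewall):

sudo ufw allow 22/tcp
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw enable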

You will eventually need to log in to this VPS, so make sure you have a sudo user ready along with its password. Do NOT use root.
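
If your provider only gave you root access, here's a minimal sketch for creating that sudo user (pick whatever username you like):

# run these once as root
adduser your-sudo-user
usermod -aG sudo your-sudo-user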

You will also want to serve your blog under a specific domain name, so either purchase one or be ready to update the DNS configurations for an existing one.

This is a high-level tutorial, so I'm sorry if you're a beginner: I will fly over some things. I recommend you first get familiar with the concept of containers, Docker and Docker Compose.

I also recommend you get to know Caddy, which is an awesome alternative to Nginx. And I hope you understand the fundamentals of SSL certificates.

Oh, and make sure you have a basic understanding of GitHub Actions. We'll use one for Continuous Delivery of the blog.

I like to add links to official (or otherwise interesting) articles about the things I discuss, and I highly recommend you click on them so you may benefit from them as well ;)

Steps

We will complete this tutorial in 6 steps:

  1. Point the domain name to the IP of the VPS
  2. Create a Docker Compose file for production
  3. Install Docker and Docker Compose on the VPS
  4. Create a Caddyfile (to serve Ghostjs with SSL/TLS)
  5. Create a GitHub repository, a GitHub Action and use GitHub Secrets (to automate deployment/redeployment)
  6. Serve the blog

Without further ado, let's get to it.

1/6 - Point the domain name to the IP of the VPS

I will not go into details for this step because it largely depends upon your domain name provider. If you don't know how to do it with your provider, I recommend you look up the following:

"How to create an A record on YOUR-PROVIDER-NAME".

Using the tutorial you find, make sure to point your domain name to your server IP address.
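
Once the record is in place, you can check that it has propagated with a quick DNS query (the IP below is just a placeholder):

dig +short your-domain-name
# should print your VPS IP, e.g. 203.0.113.10

DNS changes can take a while to propagate, so don't panic if it doesn't resolve right away.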

2/6 - Create a Docker Compose file for production

In my last article, I shared my development docker-compose file, and I made sure to explain it bit by bit. We're going to create a new one, heavily based on it, that is ready for deployment.

I created a repository on GitHub for this series, so first things first, clone it locally so you can follow along:

git clone git@github.com:R4meau/ghost-on-docker.git
cd ghost-on-docker

If you're here for the end result, click here for the branch with the final changes.

As you'd expect, the only real file in there is the docker-compose.yml file I used to run Ghost locally. I also included an example .env file.

I named it .env-example because I've set up a .gitignore that excludes .env files. Just in case.
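
It simply lists placeholder values for the variables the docker-compose file expects, roughly like this:

MYSQL_ROOT_PASSWORD=change-me
MYSQL_DATABASE=ghost
MYSQL_USER=ghost
MYSQL_PASSWORD=change-me
.env-example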

Here's what the docker-compose file currently looks like:

version: '3.8'

services:
  ghost:
    image: ghost:4.2.1-alpine
    container_name: ghost-dev
    restart: unless-stopped
    depends_on:
      - db
    ports:
      - 2368:2368
    environment:
      # see https://ghost.org/docs/config/#configuration-options
      database__client: mysql
      database__connection__host: db
      database__connection__user: ${MYSQL_USER}
      database__connection__password: ${MYSQL_PASSWORD}
      database__connection__database: ${MYSQL_DATABASE}
      url: http://localhost:2368
      NODE_ENV: development
    volumes:
      - ghost_content:/var/lib/ghost/content

  db:
    image: mysql:8
    container_name: ghost-dev-db
    command: mysqld --default-authentication-plugin=mysql_native_password
    restart: unless-stopped
    ports:
      - 3307:3306
    environment:
      # see https://hub.docker.com/_/mysql
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      MYSQL_DATABASE: ${MYSQL_DATABASE}
      MYSQL_USER: ${MYSQL_USER}
      MYSQL_PASSWORD: ${MYSQL_PASSWORD}
    volumes:
      - mysql_ghost_data:/var/lib/mysql

volumes:
  mysql_ghost_data:
  ghost_content:
docker-compose.yml

Let's copy it into a new file called docker-compose.prod.yml:

cp docker-compose.yml docker-compose.prod.yml

Now open this new file using your favorite editor. First, update the docker-compose version to make sure you're using the latest version that supports what we're doing here. As of right now, version 3.8 is the latest stable version.

Click here to check currently available versions.

Then take care of the ghost service by making the following changes:

  1. Update the image version to a more recent one. At the time of writing, it's v4.3.3
  2. Update the container name. It's not a dev environment anymore
  3. Update the url environment variable. Set your own domain name, using https, not http
  4. Get rid of the NODE_ENV environment variable. It's set to production by default, according to the image documentation on DockerHub.

The ghost service now looks like:

ghost:
    image: ghost:4.3.3-alpine
    container_name: ghost-prod
    restart: unless-stopped
    depends_on:
      - db
    ports:
      - 2368:2368
    environment:
      # see https://ghost.org/docs/config/#configuration-options
      database__client: mysql
      database__connection__host: db
      database__connection__user: ${MYSQL_USER}
      database__connection__password: ${MYSQL_PASSWORD}
      database__connection__database: ${MYSQL_DATABASE}
      url: https://your-domain-name
    volumes:
      - ghost_content:/var/lib/ghost/content

As for the db service, the only change I made was the container name:

container_name: ghost-prod-mysql

Finally, let's introduce a new service. The amazing Caddy.

I went ahead and used the service they shared in their docker-compose example. It looks like:

caddy:
    image: caddy:<version>
    restart: unless-stopped
    ports:
      - 80:80
      - 443:443
    volumes:
      - $PWD/Caddyfile:/etc/caddy/Caddyfile
      - $PWD/site:/srv
      - caddy_data:/data
      - caddy_config:/config
Their example on the DockerHub image documentation

But let's readjust it using the following changes:

  1. At the time of writing, the latest stable version for the official Caddy image on DockerHub is v2.3.0. Use the one appropriate for you
  2. I always make sure to explicitly name my containers, so I name this one as well
  3. I make sure it waits for the ghost service to start, using depends_on
  4. Since I'm not serving Ghostjs through static files, I get rid of the custom bind mount that would map a local directory to the container's /srv directory, where Caddy serves static files by default
  5. I also renamed the volumes to something more specific

It now should look like the following:

caddy:
    image: caddy:2.3.0-alpine
    container_name: ghost-prod-caddy
    depends_on:
      - ghost
    restart: unless-stopped
    ports:
      - 80:80
      - 443:443
    volumes:
      - $PWD/Caddyfile:/etc/caddy/Caddyfile
      - ghost_caddy_data:/data
      - ghost_caddy_config:/config

As you can see, the first volume is really just a bind mount so the container can use our custom Caddyfile as its default configuration file. See step 4 for more details.

The other two are named volumes: one so Caddy can persist certificate files, the other for Caddy's configuration files.

We need to declare those volumes, so we add them to our map of volumes:

volumes:
    mysql_ghost_data:
    ghost_content:
    ghost_caddy_data:
    ghost_caddy_config:

Done. Our final docker-compose.prod.yml file looks like this:

version: '3.8'

services:
  caddy:
    image: caddy:2.3.0-alpine
    container_name: ghost-prod-caddy
    depends_on:
      - ghost
    restart: unless-stopped
    ports:
      - 80:80
      - 443:443
    volumes:
      - $PWD/Caddyfile:/etc/caddy/Caddyfile
      - ghost_caddy_data:/data
      - ghost_caddy_config:/config

  ghost:
    image: ghost:4.3.3-alpine
    container_name: ghost-prod
    restart: unless-stopped
    depends_on:
      - db
    ports:
      - 2368:2368
    environment:
      # see https://ghost.org/docs/config/#configuration-options
      database__client: mysql
      database__connection__host: db
      database__connection__user: ${MYSQL_USER}
      database__connection__password: ${MYSQL_PASSWORD}
      database__connection__database: ${MYSQL_DATABASE}
      url: https://your-domain-name
    volumes:
      - ghost_content:/var/lib/ghost/content

  db:
    image: mysql:8
    container_name: ghost-prod-mysql
    command: mysqld --default-authentication-plugin=mysql_native_password
    restart: unless-stopped
    ports:
      - 3307:3306
    environment:
      # see https://hub.docker.com/_/mysql
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      MYSQL_DATABASE: ${MYSQL_DATABASE}
      MYSQL_USER: ${MYSQL_USER}
      MYSQL_PASSWORD: ${MYSQL_PASSWORD}
    volumes:
      - mysql_ghost_data:/var/lib/mysql

volumes:
  mysql_ghost_data:
  ghost_content:
  ghost_caddy_data:
  ghost_caddy_config:
docker-compose.prod.yml
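
Before moving on, you can sanity-check the new file locally. Assuming you have Docker Compose installed and a local .env file, this prints the resolved configuration (or an error if something is off):

docker-compose -f docker-compose.prod.yml config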

3/6 - Install Docker and Docker Compose on the VPS

This is potentially the last time you log in to this particular VPS for a while, as we will automate the rest of the process with GitHub Actions. To install Docker and Docker Compose, first log in to your server via SSH:

ssh your-sudo-user@your-vps-ip

If you have a private key (.pem file), make sure to use the -i flag to log in with it.

Once you're in there, update, then upgrade your existing packages:

sudo apt update && sudo apt upgrade

From there, to install Docker, follow the official documentation.

To install Docker Compose, follow the official documentation.
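
For reference, at the time of writing the installation looks roughly like this; double-check the official docs, since versions and URLs change (the Compose version below is an example):

# Docker, via the official convenience script
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh

# Docker Compose (standalone binary)
sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose

# sanity check
docker --version && docker-compose --version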

Then feel free to log out of your server.

4/6 - Create a Caddyfile

Caddy is an open source web server with automatic HTTPS. It's written in the Go programming language.

The key words here are "automatic HTTPS". Basically, Caddy does what Nginx and Certbot combined would do for you, in an elegant way.

Still within the directory you created, or the one you got from cloning this series' repo, create a new file called "Caddyfile", with no extension. Then fill it up with the following content:

your-domain-name {
  reverse_proxy ghost:2368
}
Caddyfile

You have to replace "your-domain-name" with the domain name you wish to serve your blog under. In my case, this file would look like:

rameau.me {
  reverse_proxy ghost:2368
}
Personalized Caddyfile

This will essentially generate and set up an SSL certificate for your domain, then let you visit your Ghostjs blog at https://your-domain-name. Caddy reverse-proxies the traffic to the ghost Docker service, which is reachable under the hostname ghost on port 2368.

This is all Caddy needs to serve your website with TLS encryption. You can check out the Caddy documentation for more examples and use cases.
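
If you want to make sure the file parses before deploying, you can run Caddy's validator from the same Docker image (a quick local sketch):

docker run --rm -v $PWD/Caddyfile:/etc/caddy/Caddyfile caddy:2.3.0-alpine caddy validate --config /etc/caddy/Caddyfile --adapter caddyfile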

5/6 - Create a GitHub repository, a GitHub Action and use GitHub Secrets

We're pretty much done with the setup. If you don't mind handling deployments manually, you can go ahead and finish the rest yourself.

If not, let's keep going.

The Repository

First things first, go to GitHub to create a private repository for your blog.

Then on your computer, get back into the local directory you created for the above files, turn it into a Git repository and link it to your new GitHub repo:

git init
git remote add origin git@github.com:your-username/your-repo-name.git

I use the SSH version of my repository because I've set up an SSH key for my GitHub account. Feel free to use the HTTPS version if you haven't.

If you've been following along using the repository I created for this series, you can use a different remote than origin. You can call it my-blog for example and only push to it when you need to update your production website:

cd ~/ghost-on-docker
git remote add my-blog git@github.com:your-username/your-repo-name.git

Or you can point the origin remote to your repo instead of mine:

cd ~/ghost-on-docker
git remote set-url origin git@github.com:your-username/your-repo-name.git

Next, let's stage, commit, and push our previously created files to the repo:

git add .
git commit -m "Initial commit"
git push origin master

You may need to update the branch name if it's main instead of master.

The GitHub Workflow/Action

Now let's create the GitHub Action for automated deployment. Still within your local Git repo, create a new directory at .github/workflows:

mkdir -p .github/workflows

Now move into it and create a new file called upload-and-deploy.yml, starting with the following:

name: Ghost Production SSH Upload and Deploy

on: [workflow_dispatch, pull_request, push]

jobs:
  upload_and_deploy:
      if: github.ref == 'refs/heads/master'
      runs-on: ubuntu-latest
      name: Ghost Production SSH Upload and Deploy

So far, we've declared only one job in this GH Action. This is the only one we'll need for our deployment.

jobs:
  upload_and_deploy:

As you can see, it allows us to run this Action manually (on workflow_dispatch), upon a pull request (on pull_request), and upon a push (on push).

on: [workflow_dispatch, pull_request, push]

However, the job only runs when the trigger is on the master branch.

upload_and_deploy:
  if: github.ref == 'refs/heads/master'

Make sure to compare github.ref to a different branch name if you don't use master.

This job needs to do three things for us:

  1. Checkout your GitHub repo so it can reference the files in subsequent steps - with the Checkout Action
  2. Upload your repo files to the VPS via SSH - with the SFTP SSH Action
  3. Still via SSH, generate the .env file from the GitHub Secrets and use Docker Compose to start the services - with the ssh-scp-ssh-pipelines Action

Note that I made sure to link to each GitHub action I used to accomplish those steps. Check their documentation for how they work.

I pretty much copied the examples they have on their documentation pages.

Here's what the final GitHub Action file looks like:

name: Ghost Production SSH Upload and Deploy

on: [workflow_dispatch, pull_request, push]

jobs:
  upload_and_deploy:
      if: github.ref == 'refs/heads/master'
      runs-on: ubuntu-latest
      name: Ghost Production SSH Upload and Deploy
      steps:
      - name: Checkout Repository
        # see https://github.com/marketplace/actions/checkout
        uses: actions/checkout@v2.3.4
      - name: Upload Files
        # See https://github.com/marketplace/actions/sftp-ssh-action
        uses: Creepios/sftp-action@v1.0.1
        with:
          host: ${{ secrets.PRODUCTION_SERVER_HOST }}
          port: ${{ secrets.PRODUCTION_SERVER_PORT }}
          username: ${{ secrets.PRODUCTION_SERVER_USERNAME }}
          password: ${{ secrets.PRODUCTION_SERVER_PASSWORD }}
          localPath: './'
          remotePath: './ghost-blog'
      - name: Deploy Ghost
        # See https://github.com/marketplace/actions/ssh-scp-ssh-pipelines
        uses: cross-the-world/ssh-scp-ssh-pipelines@latest
        env:
          MYSQL_ROOT_PASSWORD: ${{ secrets.MYSQL_ROOT_PASSWORD }}
          MYSQL_DATABASE: ${{ secrets.MYSQL_DATABASE }}
          MYSQL_USER: ${{ secrets.MYSQL_USER }}
          MYSQL_PASSWORD: ${{ secrets.MYSQL_PASSWORD }}
        with:
          host: ${{ secrets.PRODUCTION_SERVER_HOST }}
          user: ${{ secrets.PRODUCTION_SERVER_USERNAME }}
          pass: ${{ secrets.PRODUCTION_SERVER_PASSWORD }}
          port: ${{ secrets.PRODUCTION_SERVER_PORT }}
          connect_timeout: 10s
          last_ssh: |
            cd ~/ghost-blog &&
            echo "MYSQL_ROOT_PASSWORD=$MYSQL_ROOT_PASSWORD" >> .env &&
            echo "MYSQL_DATABASE=$MYSQL_DATABASE" >> .env &&
            echo "MYSQL_USER=$MYSQL_USER" >> .env &&
            echo "MYSQL_PASSWORD=$MYSQL_PASSWORD" >> .env &&
            docker-compose -f docker-compose.prod.yml up -d --build
upload-and-deploy.yml

The way I've set it up here, the Action uploads all the files to a directory called ghost-blog in the home directory of your VPS user. Note that the first echo uses > so the .env file is recreated from scratch on every deploy, while the following ones append to it.

According to the documentation of the GH Action I used for uploading, it will create this directory if it doesn't exist yet.

Great. Let's now create those secret variables from the GitHub interface.

The GitHub Secrets

Now let's use GitHub Secrets to register two sets of environment variables: one for Docker Compose to pass to its services (we used them to create the .env file above), the other for a GitHub Action to log in to our VPS via SSH.

Go to your repository settings on GitHub. More precisely, in the Secrets section. It's currently https://github.com/your-github-user/your-repo-name/settings/secrets/actions.

Then create the 4 environment variables Docker Compose needs, one by one, on the GitHub interface:

MYSQL_ROOT_PASSWORD=your-mysql-root-pass
MYSQL_DATABASE=ghost
MYSQL_USER=your-mysql-user
MYSQL_PASSWORD=your-mysql-user-pass

Do the same for the environment variables we need for the GitHub Actions that will log in to your VPS:

PRODUCTION_SERVER_HOST=your-VPS-IP
PRODUCTION_SERVER_PORT=22
PRODUCTION_SERVER_USERNAME=your-ssh-user
PRODUCTION_SERVER_PASSWORD=your-ssh-user-pass
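
If you prefer the command line and have the GitHub CLI (gh) installed, you can register them from your terminal instead; a quick sketch with placeholder values, run from within the repo:

gh secret set MYSQL_DATABASE --body "ghost"
gh secret set PRODUCTION_SERVER_HOST --body "your-VPS-IP"
# ...and so on for the remaining secrets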

Alright. We're done with the GitHub interface part. Let's deploy the website, shall we?

6/6 - Serve the Blog

We are now ready to deploy the website/blog. Let's just stage our latest changes, commit, push, and then wait for the Docker Compose services to successfully start:

git add .
git commit -m "Introduce GitHub Action for CD"
git push origin master

Use the GitHub interface to figure out the status of your Action. It's currently located at https://github.com/your-github-user/your-repo-name/actions.

Alternatively, you can SSH back into your VPS, move into the ghost-blog directory and use the docker-compose logs command to check the latest log output from all the services in the referenced docker-compose file. You can also pass the name of a single service you wish to read the logs for:

docker-compose -f docker-compose.prod.yml logs -f ghost

Note that the first -f selects the Compose file, while the second one (-f or --follow) keeps the log stream open on the terminal. Exit it without stopping the container using Ctrl/CMD + C.
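
From the same directory, you can also get a quick overview of the state of all three services:

docker-compose -f docker-compose.prod.yml ps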

Conclusion

I hope this was useful and detailed enough. This is the very least you can do for a production-ready Ghost deployment.

At the time of writing, my own instance is only a couple of weeks old. It hasn't evolved much beyond what we just did, but I definitely recommend you make it harder for someone to guess how you deploy a piece of software.

For example, you can serve the admin interface from a sub-path other than the default '/ghost'.

Here are some other ideas of things you can do from there:

  • Use a tool like Mailgun to set up transactional emails and a newsletter system
  • Temporarily map a port on the VPS to the default MySQL port (3306) of your db service so that you can access your Ghostjs database using your DB credentials on a local RDBMS (see the sketch after this list)
  • Use Google Analytics or an open-source alternative to see stuff like who visits your site, what they're up to and for how long they stay
  • If you expect a lot of visitors, you should definitely scale your deployment by using either Docker Swarm or Kubernetes
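
For the database idea above, since our db service already publishes the container's 3306 on host port 3307, one hypothetical approach is an SSH tunnel, so you never have to expose MySQL publicly:

# forward local port 3307 to port 3307 on the VPS
ssh -L 3307:127.0.0.1:3307 your-sudo-user@your-vps-ip
# then, from another local terminal:
# mysql -h 127.0.0.1 -P 3307 -u your-mysql-user -p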

I might write more articles around this subject. I'll make sure to update this one if I do.