GitHub Pages @ Home w/ Jekyll, Gitea, Ansible & Linode
Brown house blue roof -- Rowan Heuvel @ unsplash
Hosting your own website can be fun. Building a whole setup that emulates what GitHub Pages provides, while letting you manage your own content as you please, is even more fun. In this article we build our own GitHub Pages using Gitea, Jekyll and Ansible. This document is not exhaustive, but it is comprehensive enough to get you hosting your own GitHub Pages equivalent.
Linode Server Setup
To set up a Linode to publish my production site to, I've written a quick role that lets me compose Linode configurations. In the role we set up the Linode instance with:
- name: "Create Linode Instance: {{ linode_instance_name }}"
linode.cloud.instance:
api_token: "{{ linode_api_token }}"
label: "{{ linode_instance_label }}"
type: "{{ linode_instance_type }}"
region: "{{ linode_instance_region }}"
image: "{{ linode_instance_image }}"
root_pass: "{{ linode_instance_root_pass }}"
private_ip: true
booted: true
firewall_id: "{{ linode_fw_id }}"
authorized_keys: "{{ linode_instance_authorized_keys | trim }}"
group: "{{ linode_instance_group }}"
tags: "{{ linode_instance_tags }}"
interfaces: '{{ linode_interfaces }}'
state: present
register: _linode_new_instance
Make sure the region you deploy to has full deployment availability: https://www.linode.com/global-infrastructure/availability/ Depending on where you deploy, adding volumes to your Linode might be restricted, and you'll get an error like this:
fatal: [localhost]: FAILED! => {"changed": false, "msg": "failed to create volume:
POST /v4/volumes: [400] region: Resource creation in this region is currently restricted due to limited deployment.
Please refer to the Region Availability documentation for more details."}
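If you have the Linode CLI installed, you can sanity-check a region's capabilities before deploying (not part of the role, just a quick check by hand):
# List regions along with their capabilities and status
linode-cli regions list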
DNS Management
DNS is handled by creating a new A record pointing at the Linode we just created:
---
- name: "Add A Record for Linode"
  linode.cloud.domain_record:
    api_token: '{{ linode_api_token }}'
    domain: "{{ linode_domain }}"
    name: "{{ linode_instance_name }}"
    type: "A"
    target: "{{ _linode_new_instance.instance.ipv4[0] }}"
    state: present
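Once the record exists you can verify it resolves; a quick check, assuming linode_domain is example.org and the instance is named www:
dig +short www.example.org A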
Webserver Configuration
The instance is going to run AlmaLinux, configured with nginx and dehydrated to manage SSL certificates through Let's Encrypt.
Since AlmaLinux is a modern and secure distribution, SELinux is enabled by default, so installing everything necessary and configuring it comes with its own set of requirements.
---
- name: Install all required modules and packages
  ansible.builtin.package:
    state: present
    name:
      - nginx
      - nginx-all-modules
      - dehydrated
      - setools-console
      - policycoreutils-python-utils
setools-console and policycoreutils-python-utils ensure we can configure SELinux so that nginx is allowed to reverse proxy our containerised static hosting with a caching service in front.
We also need to give nginx access to the disk so the dehydrated ACME challenge verification works.
SELinux
Telling SELinux that our web server can access and read from these directories requires a handful of commands:
- name: SELinux - Enable proxy pass
  ansible.builtin.shell: |
    # Allow httpd processes to make outbound network connections (reverse proxy)
    setsebool -P httpd_can_network_connect 1
    setsebool -P httpd_setrlimit 1
    # Label the web root so nginx may read it, and make the label persistent
    semanage fcontext -a -t httpd_sys_content_t '/var/www(/.*)?'
    restorecon -Rv /var/www
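A quick spot check that the booleans and file contexts took effect (both tools come with the SELinux user-space utilities):
getsebool httpd_can_network_connect httpd_setrlimit
matchpathcon /var/www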
Shoutout to F5's documentation for making this much easier to set up!
Nginx config
The main nginx config will look like this:
# For more information on configuration, see:
#   * Official English Documentation: http://nginx.org/en/docs/
#   * Official Russian Documentation: http://nginx.org/ru/docs/

user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/doc/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 4096;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;
}
Before the first certs have been created, your server directive configuration will look like this:
#;; -*- nginx-mode -*-
server {
    listen 80;
    server_name example.org;

    access_log /var/log/nginx/example.access.log main;
    error_log /var/log/nginx/example.error.log;

    gzip on;
    gzip_types text/plain
               text/html
               text/css
               text/csv
               text/javascript
               application/xml
               application/json
               image/png
               image/gif
               image/svg+xml;
    gzip_min_length 1000;
    gzip_proxied no-cache no-store private expired auth;

    location / {
        root /var/www;
        index index.html;
        sendfile on;
        sendfile_max_chunk 3m;
        tcp_nopush on;
        tcp_nodelay on;
        keepalive_timeout 65;
    }

    location /.well-known/acme-challenge {
        alias /var/www/dehydrated/example.org;
    }
}
Later on, when dehydrated has run for the first time, your config can be extended to include SSL:
server {
    listen 443 ssl http2;
    #listen [::]:443 ssl ipv6only=off http2;
    server_name example.org;

    ssl_certificate /etc/nginx/certs/example.org/fullchain.pem;
    ssl_certificate_key /etc/nginx/certs/example.org/privkey.pem;

    # optional but recommended
    include /etc/nginx/ssl-options.conf;
    # optional but recommended
    ssl_dhparam /etc/nginx/dhparam.pem;
}
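It's worth keeping the plain port 80 server around as well; a common pattern (an assumption here, not part of the original config) is to keep only the ACME challenge on HTTP and redirect everything else:
server {
    listen 80;
    server_name example.org;

    # Keep the challenge path reachable over HTTP for renewals
    location /.well-known/acme-challenge {
        alias /var/www/dehydrated/example.org;
    }

    # Send everything else to HTTPS
    location / {
        return 301 https://$host$request_uri;
    }
}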
As a shared configuration, let's keep the SSL options separate on disk so we can always manage all sites the same when it comes to ciphers:
#; -*- nginx-mode -*-
ssl_session_cache shared:le_nginx_SSL:10m;
ssl_session_timeout 1440m;
ssl_session_tickets off;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers off;
ssl_ciphers "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384";
Back to Ansible: always make sure your nginx configuration is valid before continuing:
- name: Verify Nginx config is working
  ansible.builtin.shell: nginx -t
  changed_when: false
Firewall
AlmaLinux uses firewalld to manage network ports. Thankfully Ansible's ansible.posix.firewalld module makes this simple.
- name: Permanently enable https service, also enable it immediately if possible
  ansible.posix.firewalld:
    service: https
    state: enabled
    permanent: true
    immediate: true
    offline: true

- name: Permanently enable http service, also enable it immediately if possible
  ansible.posix.firewalld:
    service: http
    state: enabled
    permanent: true
    immediate: true
    offline: true
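A quick check on the host confirms both services are open in the runtime and the permanent configuration:
firewall-cmd --list-services
firewall-cmd --permanent --list-services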
Dehydrated
Installing the configuration for dehydrated is as easy as copying the files in place.
- name: Update dehydrated/config
  ansible.builtin.copy:
    src: "{{ item.src }}"
    dest: "{{ item.dest }}"
    owner: root
    group: root
    mode: "{{ item.mode }}"
  loop:
    - src: files/dehydrated-conf.sh
      dest: /etc/dehydrated/config
      mode: "u=rw,g=,o="
    - src: files/hook.sh
      dest: /etc/dehydrated/hook.sh
      mode: "u=rwx,g=rx,o="
    - src: files/domains.txt
      dest: /etc/dehydrated/domains.txt
      mode: "u=rw,g=r,o="
For the next step to work make sure hooks are enabled in your configuration:
HOOK="${BASEDIR}/hook.sh"
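The config file itself isn't shown in full here; a minimal sketch of what files/dehydrated-conf.sh could contain (every value besides HOOK is an assumption, compare dehydrated's example config):
# /etc/dehydrated/config -- minimal sketch, values are assumptions
CA="letsencrypt"
CHALLENGETYPE="http-01"
CONTACT_EMAIL="you@example.org"
HOOK="${BASEDIR}/hook.sh"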
The hook.sh here is important since it allows us to copy our certs into place so nginx can find them.
...
deploy_challenge() {
    local DOMAIN="${1}" TOKEN_FILENAME="${2}" TOKEN_VALUE="${3}"

    # Write the challenge token where the acme-challenge alias points
    mkdir -p "/var/www/dehydrated/${DOMAIN}/"
    echo -n "${TOKEN_VALUE}" > "/var/www/dehydrated/${DOMAIN}/${TOKEN_FILENAME}"
    chown nginx:nginx -R /var/www/
    echo "/var/www/dehydrated/${DOMAIN}/${TOKEN_FILENAME} created: $(ls -l "/var/www/dehydrated/${DOMAIN}/${TOKEN_FILENAME}")"
}

clean_challenge() {
    local DOMAIN="${1}" TOKEN_FILENAME="${2}" TOKEN_VALUE="${3}"

    rm -f "/var/www/dehydrated/${DOMAIN}/${TOKEN_FILENAME}"
}
...
deploy_cert() {
    local DOMAIN="${1}" KEYFILE="${2}" CERTFILE="${3}" FULLCHAINFILE="${4}" CHAINFILE="${5}" TIMESTAMP="${6}"

    # Copy the freshly issued files to where the nginx config expects them
    mkdir -p "/etc/nginx/certs/${DOMAIN}/"
    cp "${KEYFILE}" "/etc/nginx/certs/${DOMAIN}/"
    cp "${CERTFILE}" "/etc/nginx/certs/${DOMAIN}/"
    cp "${FULLCHAINFILE}" "/etc/nginx/certs/${DOMAIN}/"
    cp "${CHAINFILE}" "/etc/nginx/certs/${DOMAIN}/"
    chown nginx: -R /etc/nginx/certs
    ls -l "/etc/nginx/certs/${DOMAIN}"
    systemctl reload nginx.service
}

deploy_ocsp() {
    local DOMAIN="${1}" OCSPFILE="${2}" TIMESTAMP="${3}"

    # Simple example: copy the OCSP file next to the certs nginx already loads
    cp "${OCSPFILE}" "/etc/nginx/certs/${DOMAIN}/"
    chown -R nginx: /etc/nginx/certs/
    echo "${TIMESTAMP}: OCSP File for ${DOMAIN} created at ${OCSPFILE} and copied to: /etc/nginx/certs/${DOMAIN}"
    systemctl reload nginx
}
...
This is truncated to the important bits; extend it as you need from the hook.sh script that ships with your dehydrated installation.
In your Ansible tasks, make sure you only register with Let's Encrypt once:
- name: Check if accounts folder is empty before proceeding
  ansible.builtin.find:
    paths: '/etc/dehydrated/accounts'
    file_type: directory
  register: accounts_registration_found

- name: dehydrated - Register and accept terms
  when: accounts_registration_found.matched == 0
  ansible.builtin.shell: dehydrated --register --accept-terms
Once that’s done, run dehydrated --cron to have dehydrated create the challenge and verify your host is actually your host:
- name: dehydrated - Create
  ansible.builtin.shell: dehydrated --cron
Restart nginx and enable it so it starts every time you reboot the machine:
- name: Restart nginx service
  ansible.builtin.service:
    name: nginx
    state: restarted
    enabled: true
To have dehydrated automatically renew your certs, enable the timer:
- name: Enable dehydrated timer
  ansible.builtin.systemd_service:
    name: dehydrated.timer
    enabled: true
    state: started
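You can confirm the timer is actually scheduled on the host:
systemctl list-timers dehydrated.timer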
Containers
As you can probably tell, the overall configuration is pretty generic and will allow you to host whatever you feel like behind the nginx reverse proxy.
In the case of this website it's a pair of containers running in a composition, with the installed nginx acting as the SSL offloader. That makes life easier, since you don't need to deal with the complexities of SSL cert renewal inside the containers, or with how Docker implements DNS, especially within compositions.
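That means the host nginx's location / from earlier switches from serving a root to proxying; a minimal sketch (the 10082 port matches the composition below, the loopback address is an assumption):
location / {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    # Forward to the frontend cache container published on 10082
    proxy_pass http://127.0.0.1:10082;
}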
Our container deployment works as follows in Ansible:
- name: Create Webserver Directory
  ansible.builtin.file:
    state: directory
    path: "{{ item.path }}"
    owner: "{{ container_nginx_comp_owner }}"
    group: "{{ container_nginx_comp_group }}"
    mode: "{{ item.mode }}"
  loop:
    - path: "{{ container_nginx_comp_base_path }}/web-{{ container_nginx_env }}"
      mode: "u=rwx,g=rw,o=r"
    - path: "{{ container_nginx_comp_base_path }}/web-{{ container_nginx_env }}/static"
      mode: "u=rwx,g=rwx,o=rwx"
    - path: "{{ container_nginx_comp_base_path }}/web-{{ container_nginx_env }}/log"
      mode: "u=rwx,g=rwx,o=rwx"
    - path: "{{ container_nginx_comp_base_path }}/web-{{ container_nginx_env }}/site_template"
      mode: "u=rwx,g=rwx,o=rwx"
    - path: "{{ container_nginx_comp_base_path }}/web-{{ container_nginx_env }}/frontend_cache_template"
      mode: "u=rwx,g=rwx,o=rwx"
    - path: "{{ container_nginx_comp_base_path }}/web-{{ container_nginx_env }}/data-cache"
      mode: "u=rwx,g=rwx,o=rwx"

- name: Deploy composition template
  ansible.builtin.copy:
    src: files/nginx/docker-compose.yml
    dest: "{{ container_nginx_comp_base_path }}/web-{{ container_nginx_env }}/docker-compose.yml"
    owner: "{{ container_nginx_comp_owner }}"
    group: "{{ container_nginx_comp_group }}"
    mode: "u=rw,g=rw,o=r"

- name: Deploy templated script files
  ansible.builtin.template:
    src: "templates/nginx/{{ item }}.j2"
    dest: "{{ container_nginx_comp_base_path }}/web-{{ container_nginx_env }}/{{ item }}"
    owner: "{{ container_nginx_comp_owner }}"
    group: "{{ container_nginx_comp_group }}"
    mode: "u=rwx,g=rwx,o=rx"
  loop:
    - cleanup.sh
    - nginx-chown.sh
    - nginx-restart.sh

- name: Copy static template
  ansible.builtin.copy:
    src: files/nginx/static.default.conf.template
    dest: "{{ container_nginx_comp_base_path }}/web-{{ container_nginx_env }}/site_template/default.conf.template"
    owner: "{{ container_nginx_comp_owner }}"
    group: "{{ container_nginx_comp_group }}"
    mode: "u=rw,g=rw,o=r"

- name: Copy frontend cache template
  ansible.builtin.copy:
    src: files/nginx/cache.default.conf.template
    dest: "{{ container_nginx_comp_base_path }}/web-{{ container_nginx_env }}/frontend_cache_template/default.conf.template"
    owner: "{{ container_nginx_comp_owner }}"
    group: "{{ container_nginx_comp_group }}"
    mode: "u=rw,g=rw,o=r"
Optimizing Nginx delivery
To make sure nginx is fast and responds quickly, we apply some baseline optimizations to the frontend cache:
proxy_cache_path /var/cache/nginx
                 keys_zone=frontend_cache:10m
                 loader_threshold=300
                 loader_files=200
                 max_size=200m;

server {
    listen 80;
    server_name archive;

    access_log /var/log/nginx/cache.access.log main;
    error_log /var/log/nginx/cache.error.log;

    # Enable GZip Compression for content delivery
    gzip on;
    gzip_types text/plain
               text/html
               text/css
               text/csv
               text/javascript
               application/xml
               application/json
               image/png
               image/gif
               image/svg+xml;
    gzip_min_length 1000;
    gzip_proxied no-cache no-store private expired auth;

    location / {
        # Speeeed
        sendfile on;
        sendfile_max_chunk 3m;
        tcp_nopush on;
        tcp_nodelay on;
        keepalive_timeout 65;

        # Proxy Caching so we can keep the server up while the static content is rebooted
        proxy_cache frontend_cache;
        proxy_cache_key $uri$is_args$slice_range;
        proxy_cache_valid 200 206 5m;
        proxy_cache_bypass $cookie_nocache $arg_nocache$arg_comment;

        # Proxy Data to pass through
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_buffers 16 4k;
        proxy_buffer_size 2k;
        proxy_pass http://<your internal IP here>;
    }
}
This is the “template” the nginx container image picks up from /etc/nginx/templates (see the bind mounts below) to configure nginx inside the container.
The actual static server host is pretty simple:
# -*- nginx -*-
server {
    listen 80 backlog=4096;
    server_name archive;

    access_log /var/log/nginx/static.access.log main;
    error_log /var/log/nginx/static.error.log;

    location / {
        sendfile on;
        sendfile_max_chunk 3m;
        tcp_nopush on;
        tcp_nodelay on;
        keepalive_timeout 65;
        root /var/share/nginx/html;
        index index.html;
    }

    error_page 404 /404.html;
    error_page 500 502 503 504 /50x.html;
}
Both of which end up in their respective bind volumes in the composition:
---
networks:
  nginx:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.30.10.0/24
          ip_range: 172.30.10.0/24
          gateway: 172.30.10.1

services:
  frontend_cache:
    image: "nginx:latest"
    restart: unless-stopped
    ports:
      - "10082:80"
    networks:
      nginx:
        ipv4_address: 172.30.10.20
    volumes:
      - type: bind
        source: "./log"
        target: "/var/log/nginx"
      - type: bind
        source: "./frontend_cache_template"
        target: "/etc/nginx/templates"
      - type: bind
        source: "./data-cache"
        target: /var/cache/nginx

  site:
    image: "nginx:latest"
    restart: unless-stopped
    networks:
      nginx:
        ipv4_address: 172.30.10.10
    volumes:
      - type: bind
        source: "./static"
        target: "/var/share/nginx/html"
      - type: bind
        source: "./log"
        target: "/var/log/nginx"
      - type: bind
        source: "./site_template"
        target: "/etc/nginx/templates"
You could replace the above with whatever other application you wish to run behind nginx.
Scripts used
The cleanup.sh, nginx-chown.sh and nginx-restart.sh scripts are all just there to handle work around the containers.
cleanup.sh deletes the old content of the static pages generated by Jekyll, but you could do this just as well with Hugo.
nginx-chown.sh sets the permissions inside the container mount to give the containerised webserver access to the static content.
nginx-restart.sh restarts the containers in the composition.
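Their exact contents aren't shown here, so treat the following as hypothetical sketches, assuming the /opt/website/web-<env> layout used by the playbook below:
# cleanup.sh -- drop the previously generated static site (hypothetical sketch)
rm -rf ./static

# nginx-chown.sh -- hand the content to the webserver user (hypothetical sketch)
chown -R nginx:nginx ./static

# nginx-restart.sh -- restart the services in the composition (hypothetical sketch)
docker compose restart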
To deploy to the server we can now run a playbook like this:
---
- name: Deploy Staging
  hosts: www
  become: false
  tasks:
    - name: Run Cleanup
      ansible.builtin.shell:
        cmd: sh ./cleanup.sh
        chdir: /opt/website/web-{{ nginx_env }}/

    - name: Re-Create directory
      ansible.builtin.file:
        state: directory
        path: "/opt/website/web-{{ nginx_env }}/static/"
        owner: nginx
        group: nginx
        mode: "u=rwx,g=rwx,o=rwx"

    - name: Run restart
      ansible.builtin.shell:
        cmd: sh ./nginx-restart.sh
        chdir: /opt/website/web-{{ nginx_env }}/

    - name: Upload new Static Content
      ansible.builtin.copy:
        src: ../site/
        dest: "/opt/website/web-{{ nginx_env }}/static"
        owner: nginx
        group: nginx
        mode: "u=rwx,g=rwx,o=rw"

    - name: Run chown
      ansible.builtin.shell:
        cmd: sh ./nginx-chown.sh
        chdir: /opt/website/web-{{ nginx_env }}/
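Invoked by hand it looks like this; it's the same invocation the Gitea workflow uses later, and the paths assume the automation repository's layout:
ansible-playbook \
  -e nginx_env=staging \
  --vault-password-file .vault-pass \
  -i inventory/homelab.yml -l archive \
  playbooks/deploy-staging-static.yml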
Gitea as a VCS
Gitea as a VCS is super easy to set up and use. It can be ready to receive your code with as little as a simple docker composition:
---
version: "3"

networks:
  gitea:
    external: false

services:
  server:
    image: gitea/gitea:1.22.3
    container_name: gitea
    environment:
      - USER_UID=1029
      - USER_GID=65538
    restart: always
    networks:
      - gitea
    volumes:
      - ./data:/data
      - ./timezone:/etc/timezone:ro
      - /var/services/homes/git/.ssh:/data/git/.ssh
    ports:
      - "7000:3000"
      - "2221:22"
Excuse the weird ports; my Gitea is set up alongside a whole bunch of other containers on my Synology NAS:
$ sudo docker ps --format '{{ .Names }},{{ .Image }},{{ .Status }},{{.Ports}}' | wc -l
24
Gitea Workflows
With Gitea's ability to run GitHub-style Actions workflows, we can now run Jekyll and deploy to our staging environment at home:
---
name: Build Website Staging
run-name: ${{ gitea.actor }} Build Website in staging
on:
  workflow_run:
    workflows: ['Jekyll Container']
    types:
      - completed  # Run after container was built, ensuring, if container changed, we wait for it first
  push:

jobs:
  jekyll-staging:
    name: Render Jekyll Contents and Archive
    runs-on: ubuntu-latest
    container:
      image: archive:7000/documentation/jekyll:latest
      credentials:
        username: andreas
        password: ${{ secrets.REGISTRY_GITEA }}
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Build Frontend Code
        run: |
          npm ci
          node_modules/.bin/bower install
          node_modules/.bin/grunt
      - name: Run Jekyll
        run: |
          jekyll build --future --lsi --drafts --unpublished --verbose --destination /srv/jekyll/
      - uses: actions/upload-artifact@v3
        with:
          name: site-staging
          path: /srv/jekyll

  deploy_to_staging:
    name: Deploy artifacts to staging server
    needs: "jekyll-staging"
    runs-on: ubuntu-latest
    container:
      image: archive:7000/automation/home-management:latest
      credentials:
        username: andreas
        password: ${{ secrets.REGISTRY_GITEA }}
    steps:
      - name: Checkout Pull Automation Repository
        uses: actions/checkout@v4
        with:
          repository: "automation/home-management"
          ref: "master"
          path: "automation"
          token: ${{ secrets.GITEA_TOKEN }}
      - name: Pull staging artifact
        uses: actions/download-artifact@v3
        with:
          name: site-staging
          path: automation/site
      - name: Deploy assets to staging
        working-directory: automation
        run: |
          ansible-playbook \
            -e nginx_env=staging \
            --vault-password-file .vault-pass \
            -i inventory/homelab.yml -l archive \
            playbooks/deploy-staging-static.yml
As you can see, it uses our Ansible playbooks to deploy to the staging environment.
To publish to production upon a merge to the main or master branch we can create a similar workflow:
---
name: Build Website Production
run-name: ${{ gitea.actor }} Build Website in production
on:
  workflow_run:
    workflows: ['Jekyll Container']
    types:
      - completed  # Run after container was built, ensuring, if container changed, we wait for it first
  push:
    branches:
      - master

jobs:
  jekyll-prod:
    name: Render Jekyll Contents and Archive
    runs-on: ubuntu-latest
    container:
      image: archive:7000/documentation/jekyll:latest
      credentials:
        username: andreas
        password: ${{ secrets.REGISTRY_GITEA }}
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Build Frontend Code
        run: |
          npm ci
          node_modules/.bin/bower install
          node_modules/.bin/grunt
      - name: Run Jekyll
        run: |
          jekyll build --lsi --verbose --destination /srv/jekyll/
      - uses: actions/upload-artifact@v3
        with:
          name: site-prod
          path: /srv/jekyll

  deploy_to_prod:
    name: Deploy artifacts to prod server
    needs: "jekyll-prod"
    runs-on: ubuntu-latest
    container:
      image: archive:7000/automation/home-management:latest
      credentials:
        username: andreas
        password: ${{ secrets.REGISTRY_GITEA }}
    steps:
      - name: Checkout Pull Automation Repository
        uses: actions/checkout@v4
        with:
          repository: "automation/home-management"
          ref: "master"
          path: "automation"
          token: ${{ secrets.GITEA_TOKEN }}
      - name: Pull prod artifact
        uses: actions/download-artifact@v3
        with:
          name: site-prod
          path: automation/site
      - name: Deploy assets to prod
        working-directory: automation
        run: |
          ansible-playbook \
            --vault-password-file .vault-pass \
            -i inventory/linode.yml -l www \
            -e@vaults/linode.yml \
            -e nginx_env=prod \
            playbooks/deploy-prod-static.yml
To make scheduled releases of Jekyll-generated content work, we can use the same production publishing method, just with the schedule trigger:
on:
  schedule:
    - cron: '0 9,16 * * *'
# ... same as above
Closing remarks
This configuration was cobbled together over a couple of days and was fun to set up. Ansible and Gitea are powerful tools for automating the workflows that roll your product out to your audience. I hope this article was a good window into a setup that is non-trivial but also SO satisfying once it's working.