I tend to forget how and why I did things. Therefore, to circumvent this flaw of mine, I’ve decided to enforce a habit of writing up stuff that I do. So, for my own future reference, this is what I did to get this site up and running.
I decided to host my website on a Virtual Private Server since that gives me the flexibility I want to play around with different web servers and protocols (HTTP/2, QUIC). After some googling I went for Miss Hosting, for no particular reason other than the competitive pricing and small-scale VPS offering. Currently I’m running on 512 MB RAM, 1 core and 1 TB traffic. I don’t expect much traffic, so it should be fine. The total cost for one year (domain name + VPS) ended up below 450 SEK.
Configuring DNS
This was a straightforward operation using the management interfaces provided by Miss Hosting. I logged into the VPS manager and created a DNS zone for my domain. Then I added two A records (mydomain.com, *.mydomain.com) and pointed them to the IP address of the VPS. I also added a CNAME record pointing www.mydomain.com to the naked domain.
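In zone-file terms the three records look something like this (a sketch; the IP is a placeholder from the documentation range and the TTLs are arbitrary):

```
; mydomain.com zone (illustrative)
mydomain.com.      3600  IN  A      203.0.113.10
*.mydomain.com.    3600  IN  A      203.0.113.10
www.mydomain.com.  3600  IN  CNAME  mydomain.com.
```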
Installing a web server
I wanted to try out either nginx or Caddy and decided to go for Caddy (for the time being) since it’s easy to configure and provides simple means to add HTTPS. I’m running Debian 8 on the VPS and basically followed the instructions in this guide.
Log into the VPS as root, update the system, install Curl and Caddy.
apt-get update && apt-get -y upgrade
apt-get install curl
curl https://getcaddy.com | bash
Allow Caddy to bind to a port less than 1024 (port 443 in this case).
setcap cap_net_bind_service=+ep /usr/local/bin/caddy
Caddy also needs a few directories to store configuration and SSL certificates. The web page will live in /var/www/mydomain.com.
mkdir /etc/caddy
chown -R root:www-data /etc/caddy
mkdir /etc/ssl/caddy
chown -R www-data:root /etc/ssl/caddy
chmod 0770 /etc/ssl/caddy
touch /etc/caddy/Caddyfile
mkdir /var/www
chown www-data: /var/www
mkdir -p /var/www/mydomain.com
chown www-data: /var/www/mydomain.com
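Since a typo in any of these paths only shows up much later, it can help to dry-run the layout under a scratch prefix first. The $PREFIX variable below is my own illustrative stand-in; on the real VPS it would be empty and the commands run as root:

```shell
#!/bin/sh
# Dry-run of the directory layout under a scratch prefix.
# PREFIX is an illustrative stand-in; drop it when running for real.
PREFIX=/tmp/caddy-layout-demo
rm -rf "$PREFIX"

mkdir -p "$PREFIX/etc/caddy"            # Caddyfile lives here
mkdir -p "$PREFIX/etc/ssl/caddy"        # Let's Encrypt certificates
chmod 0770 "$PREFIX/etc/ssl/caddy"      # only caddy + root may read the keys
touch "$PREFIX/etc/caddy/Caddyfile"
mkdir -p "$PREFIX/var/www/mydomain.com" # web root

# Sanity-check permissions before doing it for real.
ls -ld "$PREFIX/etc/ssl/caddy" "$PREFIX/var/www/mydomain.com"
```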
A systemd unit file is needed to run the Caddy service automatically.
vim /lib/systemd/system/caddy.service
[Unit]
Description=Caddy HTTP/2 web server
Documentation=https://caddyserver.com/docs
After=network-online.target
Wants=network-online.target
[Service]
Restart=on-failure
StartLimitInterval=86400
StartLimitBurst=5
User=www-data
Group=www-data
; Let's Encrypt-issued certificates will be written to this directory.
Environment=CADDYPATH=/etc/ssl/caddy
ExecStart=/usr/local/bin/caddy -log stdout -agree=true -conf=/etc/caddy/Caddyfile -root=/var/tmp
ExecReload=/bin/kill -USR1 $MAINPID
LimitNOFILE=1048576
LimitNPROC=64
PrivateTmp=true
PrivateDevices=true
ProtectHome=true
ProtectSystem=full
ReadWriteDirectories=/etc/ssl/caddy
[Install]
WantedBy=multi-user.target
Enable Caddy to start on boot.
systemctl enable caddy.service
Configuring Caddy
The server configuration lives in the Caddyfile. I want to serve the site, enable HTTPS and HTTP/2, use gzip by default and configure cache control for the static assets. It’s really the simplest thing in the world.
vim /etc/caddy/Caddyfile
mydomain.com {
root /var/www/mydomain.com
gzip
tls my@email.com
header /css Cache-Control "max-age=604800"
header /js Cache-Control "max-age=604800"
header /img Cache-Control "max-age=604800"
}
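The header paths are prefix matches, so /css covers everything under /css/ and so on. If I later want HSTS as well, it would be one more header block in the same style (a sketch, not part of my current config):

```
header / {
    Strict-Transport-Security "max-age=31536000"
}
```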
Finally, start the Caddy service.
systemctl start caddy.service
Git setup
I want to deploy blog updates by running git push and nothing more. For this we need to set up a git user, install git, add the local computer’s public SSH key and initialize a bare repository.
adduser git
chown www-data:git /var/www/mydomain.com
apt-get install git
su git
cd ~
mkdir .ssh && touch .ssh/authorized_keys
chmod 700 .ssh && chmod 600 .ssh/authorized_keys
git init --bare my-blog.git
Now, let’s hook up the local git repository with the server.
# On the local computer
cat .ssh/id_rsa.pub | ssh git@mydomain.com "cat >> ~/.ssh/authorized_keys"
#cd into blog git repo
git remote add origin git@mydomain.com:my-blog.git
The final piece of the puzzle is to create a post-receive hook on the server which automatically copies the site to /var/www/mydomain.com/ when a new git push is received. To access the files we need a clone with a working directory, since a bare repository has none.
# As root
mkdir /var/git && chown git: /var/git
# As the git user
cd /var/git
git clone ~/my-blog.git
touch ~/my-blog.git/hooks/post-receive
chmod 750 ~/my-blog.git/hooks/post-receive
#!/bin/bash
# Content of post-receive file
echo "Running post-receive"
targetdir=/var/www/mydomain.com
echo "cd $targetdir"
cd "$targetdir"
# Backup previous version
time_suffix=$(date "+%Y-%m-%d-%H-%M-%S")
echo "Backup timestamp $time_suffix"
mkdir -p ../backups
tar czf ../backups/example.$time_suffix.tgz *
echo "Remove $targetdir ..."
rm -rf "$targetdir"/*
echo "Check out local copy"
export GIT_WORK_TREE=/var/git/my-blog
export GIT_DIR=/var/git/my-blog/.git
cd "$GIT_WORK_TREE"
git checkout -f
echo "Copying to $targetdir"
cp -r /var/git/my-blog/_site/* "$targetdir"
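The whole push-to-deploy loop can be rehearsed locally under /tmp before wiring it up on the server. This sketch uses a simplified hook and stand-in paths (all names below are illustrative, not the server’s):

```shell
#!/bin/sh
# Local rehearsal of push-to-deploy, entirely under /tmp.
set -e
DEMO=/tmp/deploy-demo
rm -rf "$DEMO"
mkdir -p "$DEMO/www" "$DEMO/checkout"

# Bare repository standing in for ~git/my-blog.git.
git init --bare "$DEMO/my-blog.git" >/dev/null

# Minimal post-receive hook: check out master and copy _site to the web root.
cat > "$DEMO/my-blog.git/hooks/post-receive" <<EOF
#!/bin/sh
GIT_WORK_TREE=$DEMO/checkout GIT_DIR=$DEMO/my-blog.git git checkout -f master
cp -r "$DEMO/checkout/_site/." "$DEMO/www/"
EOF
chmod 755 "$DEMO/my-blog.git/hooks/post-receive"

# Local working copy standing in for the blog repository.
git init "$DEMO/blog" >/dev/null
cd "$DEMO/blog"
git config user.email demo@example.com
git config user.name demo
mkdir _site
echo '<h1>hello</h1>' > _site/index.html
git add . && git commit -m "first post" >/dev/null

# Pushing triggers the hook, which "deploys" the page.
git push -q "$DEMO/my-blog.git" HEAD:master
cat "$DEMO/www/index.html"
```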
Web page optimizations
I want my page to be lean and optimized for speed. A great introduction to this topic is Ilya Grigorik’s book High Performance Browser Networking. I came across the book back in 2014 when I worked with web optimizations for mobile networks (I work for Ericsson). I actually suggested some edits for the book regarding details on the LTE state machine, but it only made it to a blog post.
A great source for practical optimization recommendations is Google’s PageSpeed Insights. The goal is of course to score 100/100 in the test (I’m on 99/100 due to cache-control rules for the Google Analytics script, which I cannot influence). Following the recommendations, here is what I did.
- Compress images
- Minify HTML/JS/CSS
- Inline critical CSS (see e.g. critical)
- Leverage browser caching
I also made some changes to the boilerplate used for this blog. I removed jQuery and rewrote the bits I needed in plain JavaScript. I also inlined the few icons I use from Font Awesome using a tool called Fontello. In the future I’ll probably revisit the need for bootstrap.css, which is included in the boilerplate.
I made sure all CSS and JS load asynchronously (not blocking the rendering of the page) and inlined the critical CSS. The final landing page is around 5 kB when compressed. My VPS uses 10 segments for the initial TCP window, which translates to approximately 14 kB, enough to accommodate my page within the first transmission. The bottleneck right now is the initial round trips needed to establish the TCP and TLS connections. Hopefully I can mitigate this in the future by experimenting with QUIC and/or TLS 1.3.
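The 14 kB figure is just the initial congestion window times the segment size, assuming a typical 1460-byte MSS (the actual value depends on the path MTU):

```shell
# Initial TCP congestion window: 10 segments x 1460-byte MSS
echo $((10 * 1460))   # 14600 bytes, roughly 14 kB
```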
Oh, I also installed some Jekyll plugins to create Brotli- and gzip-compressed versions of all static files (jekyll-brotli and jekyll-gzip). Caddy will happily serve the pre-compressed Brotli files if the browser supports them.
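The effect of pre-compression is easy to inspect locally. This sketch gzips a sample file the way the plugin does for each static asset (Brotli is left out since the brotli CLI may not be installed; the paths are illustrative):

```shell
#!/bin/sh
# Create a sample "static asset" and a pre-compressed sibling next to it,
# mirroring what jekyll-gzip produces for every file in _site.
DEMO=/tmp/precompress-demo
rm -rf "$DEMO" && mkdir -p "$DEMO"

# Some repetitive HTML so compression has something to chew on.
for i in $(seq 1 100); do echo '<p>hello world hello world</p>'; done > "$DEMO/index.html"

# Write index.html.gz alongside the original.
gzip -9 -c "$DEMO/index.html" > "$DEMO/index.html.gz"

ls -l "$DEMO"   # the .gz sibling should be much smaller
```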
—— Eric