publish new article
All checks were successful
continuous-integration/drone/push Build is passing

This commit is contained in:
Robert Kaussow 2020-07-30 01:09:00 +02:00
parent 8df82e8aae
commit 34b7d18d16
No known key found for this signature in database
GPG Key ID: 65362AE74AF98B61
4 changed files with 129 additions and 2 deletions


@ -6,3 +6,12 @@ Minio
 S3
 thegeeklab
 backend
+vHost
+Nginx
+Netlify
+DevOps
+SCP
+SSH
+HTTPS
+URI
+webspace

Binary file not shown (new image, 1.9 MiB).


@ -0,0 +1,120 @@
---
title: "Create a static site hosting platform"
date: 2020-07-30T01:05:00+02:00
authors:
- robert-kaussow
tags:
- DevOps
- Hosting
resources:
- name: feature
src: 'images/feature.jpg'
params:
anchor: Center
credits: >
[Fabian Grohs](https://unsplash.com/@grohsfabian) on
[Unsplash](https://unsplash.com/s/photos/coding)
---
There are a lot of static site generators out there, and users have plenty of options to automate and continuously deploy static sites these days. Solutions like GitHub Pages or Netlify are free to use and easy to set up; even a cheap webspace could work. If one of these services is sufficient for your use case, you can stop reading at this point.
<!--more-->
As I wanted to have more control over such a setup, and because it might be fun, I decided to create my own service. Before we look into the setup details, let's talk about some requirements:
- deploy multiple project documentation
- use git repository name as subdomain
- easy CI workflow
The required software stack is quite simple:
- Nginx as web server to deliver static files
- Minio S3 as files backend
Of course, Minio could be removed from the stack, but after a few tests my personal impression was that Minio handles file sync and uploads much better than SSH/SCP, at least in a CI-driven workflow. The whole workflow is nearly the same as for GitHub Pages. Right beside your project's source code in the Git repository you will have a `docs` folder which contains the files required to build the documentation site. It's up to you which static site generator to use. Instead of publishing the site from e.g. a `gh-pages` branch, a CI system will build the documentation from the `docs` folder and publish it to a prepared Minio S3 bucket. The Nginx server will basically use the defined bucket as source directory and convert every sub-directory into something like `https://<reponame>.mydocs.com`.
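As a rough idea of what such a CI workflow could look like, here is a minimal Drone pipeline sketch. The image names, the `plugins/s3` settings and the `demoproject` target are assumptions for illustration; adjust them to your repository and Minio endpoint:

```yaml
kind: pipeline
name: docs

steps:
  - name: build
    # hypothetical Hugo image; use whatever builder fits your generator
    image: klakegg/hugo
    commands:
      - hugo -s docs/ -d public/

  - name: publish
    image: plugins/s3
    settings:
      # assumed Minio endpoint; path_style is usually required for Minio
      endpoint: https://minio.mydocs.com
      path_style: true
      bucket: mydocs
      source: docs/public/**/*
      strip_prefix: docs/public/
      # upload into the sub-directory that becomes the subdomain
      target: /demoproject
      access_key:
        from_secret: minio_access_key
      secret_key:
        from_secret: minio_secret_key
```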
As a first step, install Minio and Nginx on a server; I will not cover the basic setup in this guide. To simplify things I will use a single server for both Minio and Nginx, but it's also possible to split this into a two-tier architecture.
After the basic setup we need to create a Minio bucket, e.g. `mydocs`, using the Minio client command `mc mb local/mydocs`. To allow Nginx to access this bucket and deliver the pages without authentication, we need to set a bucket policy with `mc policy set download local/mydocs`. This policy allows public read access. In theory, it should also be possible to add authentication headers to Nginx to serve sites from private buckets, but I have not tried that myself.
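Condensed, the bucket preparation looks like this. The `local` alias and the `demoproject` upload are examples only; the alias must already point at your Minio server, and the `mc` syntax can vary between client versions:

```shell
# create the base bucket
mc mb local/mydocs

# allow anonymous read access so Nginx can fetch the files
mc policy set download local/mydocs

# example: upload a built site into a sub-directory
mc cp --recursive public/ local/mydocs/demoproject/
```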
Preparing the Minio bucket was the easy part; now we need to teach Nginx to map the subdomains to sub-directories and deliver the sites properly. Let us assume we are still using `mydocs` as the base Minio bucket and `mydocs.com` as the root domain. Here is what my current vHost configuration looks like:
<!-- prettier-ignore-start -->
<!-- markdownlint-disable -->
<!-- spellchecker-disable -->
{{< highlight nginx "linenos=table" >}}
upstream backend_mydocs {
    server localhost:61000;
}

server {
    listen 80;

    server_name ~^(www\.)?(?<name>(.+\.)?mydocs\.com)$;
    return 301 https://$name$request_uri;
}

server {
    listen 443 ssl;
    server_name ~^((?<repo>.*)\.)mydocs\.com$;

    ssl_certificate [..];
    ssl_certificate_key [..];

    client_max_body_size 100M;
    recursive_error_pages on;

    location / {
        proxy_pass http://backend_mydocs/mydocs/${repo}${request_path};
        proxy_http_version 1.1;
        proxy_buffering off;
        proxy_connect_timeout 300;
        proxy_intercept_errors on;

        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        rewrite ^([^.]*[^/])$ $1/ permanent;
        error_page 404 = /404_backend.html;
    }

    location /404_backend.html {
        proxy_pass http://backend_mydocs/mydocs/${repo}/404.html;
        proxy_intercept_errors on;
    }
}
{{< /highlight >}}
<!-- spellchecker-enable -->
<!-- markdownlint-restore -->
<!-- prettier-ignore-end -->
We will go through this configuration to understand how it works.
__*Lines 1-3*__ define a backend; in this case it's the Minio server running on `localhost:61000`.
__*Lines 5-10*__ should also be straightforward: this block redirects HTTP to HTTPS.
__*Line 14*__ is where the magic starts. We use a named regular expression to capture the first part of the subdomain and translate it into the bucket sub-directory. For a given URL like `demoproject.mydocs.com`, Nginx will try to serve `mydocs/demoproject` from the Minio server. That's what __*Line 23*__ does. Some of you may have noticed that the variable `${request_path}` used there is not defined anywhere in the vHost configuration.
Right, we need to add another configuration snippet to the `nginx.conf`. But why do we need this variable at all? For me, that was the hardest part to solve. As the setup is using `proxy_pass`, Nginx will *not* try to look up `index.html` automatically. That's a problem because every folder will contain at least an `index.html`. In general, it's required to tell Nginx to rewrite the request URI to `/index.html` if the origin is a folder and ends with `/`. One way would be an `if` condition in the vHost configuration, but such conditions are evil[^if-is-evil] in most cases and should be avoided if possible. Luckily there is a better option:
<!-- prettier-ignore-start -->
<!-- markdownlint-disable -->
<!-- spellchecker-disable -->
{{< highlight nginx "linenos=table" >}}
map $request_uri $request_path {
    default    $request_uri;
    ~/$        ${request_uri}index.html;
}
{{< /highlight >}}
<!-- spellchecker-enable -->
<!-- markdownlint-restore -->
<!-- prettier-ignore-end -->
[Nginx maps](https://nginx.org/en/docs/http/ngx_http_map_module.html) are a solid way to create conditionals. In this example, `$request_uri` is used as the input variable and `$request_path` as the output variable. Each line between the braces is a condition. The first line simply applies `$request_uri` to the output variable if no other condition matches. The second condition applies `${request_uri}index.html` to the output variable if the input variable ends with a slash (and is therefore a directory).
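To make the behavior tangible, the map logic can be emulated with a tiny shell function. This is a sketch for illustration only, not part of the Nginx setup:

```shell
# Mimics the Nginx map: requests ending in "/" are rewritten to
# their index.html, everything else passes through unchanged.
resolve_request_path() {
  case "$1" in
    */) printf '%s' "${1}index.html" ;;
    *)  printf '%s' "$1" ;;
  esac
}

resolve_request_path /demoproject/            # → /demoproject/index.html
resolve_request_path /demoproject/css/app.css # → /demoproject/css/app.css
```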
__*Lines 38-41*__ of the vHost configuration try to deliver the custom error page of your site and fall back to the default Nginx error page otherwise.
We are done! Nginx should now be able to serve your static sites from a sub-directory of the Minio source bucket. I've been using this setup for a few weeks now and I'm really happy with it.
[^if-is-evil]: [This article](https://www.nginx.com/resources/wiki/start/topics/depth/ifisevil/) from the Nginx team explains it very well.


@ -5,8 +5,6 @@ authors:
 - robert-kaussow
 tags:
 - General
-images:
-- "asasas"
 resources:
 - name: feature
 src: 'images/feature.jpg'