post: Store Terraform State on Backblaze S3

Robert Kaussow 2022-09-04 22:27:08 +02:00
parent d235b0dd1c
commit 24022c4ee6
Signed by: xoxys
GPG Key ID: 4E692A2EAECC03C0
5 changed files with 64 additions and 8 deletions


@ -66,3 +66,7 @@ UI
Zigbee2MQTT
Zigbee
MQTT
Terraform
Backblaze
AES-256
base64


@ -22,7 +22,7 @@ There are a lot of static site generators out there and users have a lot of poss
<!--more-->
As I wanted to have more control over such a setup, and because it might be fun, I decided to create my own service. Before looking into the setup details, let's talk about some requirements:
- deploy multiple project documentation
- use git repository name as subdomain
@ -37,9 +37,9 @@ Of course, Minio could be removed from the stack but after a few tests, my perso
As a first step, install Minio and Nginx on a server; I will not cover the basic setup in this guide. To simplify the setup, I will use a single server for Minio and Nginx, but it's also possible to split this into a two-tier architecture.
After the basic setup, it's required to create a Minio bucket, e.g. `mydocs`, using the Minio client command `mc mb local/mydocs`. To allow Nginx to access this bucket and deliver the pages without authentication, the bucket policy `mc policy set download local/mydocs` needs to be set. This policy will allow public read access. In theory, it should also be possible to add authentication headers to Nginx to serve sites from private buckets, but I have not tried that myself.
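For reference, the two `mc` commands in sequence:

```Shell
# Create the bucket and allow anonymous read access so Nginx can
# deliver the pages without credentials
mc mb local/mydocs
mc policy set download local/mydocs
```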
Preparing the Minio bucket was the easy part; now Nginx needs to know how to rewrite the subdomains to sub-directories and properly deliver the sites. Let's assume `mydocs` is still used as the base Minio bucket and `mydocs.com` as the root domain. Here is what my current vHost configuration looks like:
<!-- prettier-ignore-start -->
<!-- markdownlint-disable -->
@ -98,9 +98,9 @@ We will go through this configuration to understand how it works.
**_Lines 5-10_** should also be straightforward; this block redirects HTTP to HTTPS.
**_Line 14_** is where the magic starts. A named regular expression is used to capture the first part of the subdomain and translate it into the bucket sub-directory. For a given URL like `demoproject.mydocs.com`, Nginx will try to serve `mydocs/demoproject` from the Minio server. That's what **_Line 23_** does. Some of you may notice that the variable `${request_path}` used there is not defined in the vHost configuration.
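Condensed to the two directives in question, the idea looks roughly like this. This is a sketch, not the exact lines from my config: the capture name `$project` and the Minio upstream address are assumptions.

<!-- prettier-ignore-start -->
<!-- markdownlint-disable -->
```Nginx
server {
    listen 443 ssl;
    # Named capture: the first subdomain label ends up in $project
    server_name ~^(?<project>[^.]+)\.mydocs\.com$;

    location / {
        # demoproject.mydocs.com -> bucket path mydocs/demoproject
        proxy_pass http://127.0.0.1:9000/mydocs/${project}${request_path};
    }
}
```
<!-- markdownlint-restore -->
<!-- prettier-ignore-end -->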
Right, another configuration snippet needs to be added to the `nginx.conf`. But why is this variable required at all? For me, that was the hardest part to solve. As the setup is using `proxy_pass`, Nginx will _not_ try to look up `index.html` automatically. That's a problem because every folder will contain at least an `index.html`. In general, it's required to tell Nginx to rewrite the request URI to `/index.html` if the origin is a folder and ends with `/`. One way would be an `if` condition in the vHost configuration, but such conditions are evil[^if-is-evil] in most cases and should be avoided if possible. Luckily there is a better option:
<!-- prettier-ignore-start -->
<!-- markdownlint-disable -->
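A minimal sketch of such a `map` block; the exact matching expression is an assumption and may differ from my actual config:

```Nginx
# Derive ${request_path} from the request URI: when the URI ends with a
# slash (a folder), append index.html; otherwise pass it through unchanged.
map $request_uri $request_path {
    default $request_uri;
    ~/$     ${request_uri}index.html;
}
```
<!-- markdownlint-restore -->
<!-- prettier-ignore-end -->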

Binary file not shown (image, 363 KiB).


@ -0,0 +1,52 @@
---
title: "Store Terraform State on Backblaze S3"
date: 2022-09-04T20:47:53+02:00
authors:
- robert-kaussow
tags:
- sysadmin
- automation
resources:
  - name: feature
    src: "images/feature.jpg"
    params:
      anchor: Center
      credits: >
        [Wesley Tingey](https://unsplash.com/@wesleyphotography) on
        [Unsplash](https://unsplash.com/s/photos/file)
---
Terraform is an open source infrastructure-as-code tool for creating, modifying, and extending infrastructure in a secure and predictable way. Terraform needs to store state about the managed infrastructure and configuration. This state is used by Terraform to map real-world resources to your configuration and track metadata. By default, this state is stored in a local file, but it can also be stored remotely.
Terraform supports multiple remote backend providers, including S3. I already use Backblaze for backups and have had good experiences with it. Since Backblaze also provides an S3-compatible API, I wanted to use it for Terraform. How to use S3 as a state backend is well [documented](https://www.terraform.io/language/settings/backends/s3), but as the documentation is focused on Amazon S3, there are a few things to take care of. A basic working configuration will look like this:
```Terraform
terraform {
  backend "s3" {
    bucket                      = "my-bucket"
    key                         = "state.json"
    skip_credentials_validation = true
    skip_region_validation      = true
    endpoint                    = "https://s3.us-west-004.backblazeb2.com"
    region                      = "us-west-004"
    access_key                  = "0041234567899990000000004"
    secret_key                  = "K001abcdefgklmnopqrstuvw"
  }
}
```
It is required to enable `skip_credentials_validation` and `skip_region_validation` because Backblaze uses a different format for these values, and the validation only covers Amazon S3. For security reasons, I would recommend setting at least the `access_key` and `secret_key` parameters as environment variables instead of writing them to the Terraform file. For the credentials, it is required to create an App Key on Backblaze. After creating the key, its `keyID` needs to be used as `access_key` and the `applicationKey` as `secret_key`.
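The S3 backend reads the standard AWS credential environment variables, so a sketch of that approach could look like this (the key values are the placeholders from above):

```Shell
# Backblaze keyID and applicationKey, exported instead of being written
# to the Terraform file (placeholder values)
export AWS_ACCESS_KEY_ID="0041234567899990000000004"
export AWS_SECRET_ACCESS_KEY="K001abcdefgklmnopqrstuvw"

terraform init
```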
That's basically all. If you want to go a step further, it is possible to store the state encrypted with [Server-Side Encryption](https://www.backblaze.com/b2/docs/server_side_encryption.html) (SSE). There are two options available. The first is `SSE-B2`, where the key is managed and stored by Backblaze, which is quite simple to configure. The second option is to use customer-managed keys, where a base64-encoded AES-256 key is used. To generate a proper key, the command `openssl rand -base64 32` can be used.
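For example, to generate such a key:

```Shell
# 32 random bytes, base64 encoded: a valid AES-256 customer key
openssl rand -base64 32
```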
To enable encryption in the Terraform backend, the configuration must be extended:
```Terraform
terraform {
  backend "s3" {
    ...
    encrypt          = true
    sse_customer_key = "fsRb1SXBjiUqBM0rw/YqvDixScWnDCZsK7BhnPTc93Y="
  }
}
```
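If the key should also stay out of the configuration file, the S3 backend can, as far as I know, source it from the `AWS_SSE_CUSTOMER_KEY` environment variable as well; treat this as an assumption and verify it against the backend documentation:

```Shell
# Assumed alternative to hard-coding sse_customer_key in the backend block
export AWS_SSE_CUSTOMER_KEY="fsRb1SXBjiUqBM0rw/YqvDixScWnDCZsK7BhnPTc93Y="
```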


@ -21,7 +21,7 @@ Sometimes it can be helpful if single actions or entire automation in your Home
Pairing the Buttons with the [Zigbee2MQTT](https://www.zigbee2mqtt.io) coordinator was as simple as usual, but I had some trouble figuring out how to use them in [Home Assistant](https://www.home-assistant.io/docs). Zigbee2MQTT recommends using the [MQTT device triggers](https://www.zigbee2mqtt.io/guide/usage/integrations/home_assistant.html#via-mqtt-device-trigger-recommended), but the documentation wasn't that clear on this part.
The first thing to do is to figure out the available triggers of the device. Triggers are published to the MQTT topic `<discovery_prefix>/device_automation/`, which you can subscribe to in order to discover the messages. To subscribe to an MQTT topic, the `mosquitto_sub` CLI command can be used, e.g. `mosquitto_sub -h 192.168.0.1 -p 8883 -u user -P secure-password -t homeassistant/device_automation/#`, or the Home Assistant UI at **_Settings_** \/ **_Devices & Services_** \/ <!-- spellchecker-disable -->**_Integrations_**<!-- spellchecker-enable --> \/ **_MQTT_** \/ **_Configure_** \/ **_Listen to a topic_**.
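Broken out for readability, the CLI variant from above:

```Shell
# Subscribe to all device automation topics announced via MQTT discovery
mosquitto_sub -h 192.168.0.1 -p 8883 \
  -u user -P secure-password \
  -t "homeassistant/device_automation/#"
```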
After subscribing to the topic, remember that device triggers are not published until the event has been triggered at least once on the device. Afterwards, the following triggers should be listed for the Tradfri Button:
@ -92,9 +92,9 @@ After subscribing to the topic, remember that device triggers are not published
}
```
Great, with this data determined, we are almost done. The only missing information is the `device_id`. I have not found a nice way to get this ID yet, but it is part of the URL after navigating to **_Settings_** \/ **_Devices & Services_** \/ **_Devices_** \/ **_\<device\>_**.
Finally, an action can be assigned to the button. The example below starts the vacuum cleaner on a `single_click` (`on`) trigger:
```YAML
- alias: Start Vacuum
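  # The diff truncates the original example here; the lines below are a
  # hedged reconstruction, not the author's exact code.
  trigger:
    - platform: device
      domain: mqtt
      device_id: abc123def456  # placeholder, taken from the device page URL
      type: action
      subtype: "on"
  action:
    - service: vacuum.start
      target:
        entity_id: vacuum.cleaner  # hypothetical entity name
```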