chore: fix spellcheck
continuous-integration/drone/push: Build is passing

Robert Kaussow 2022-07-25 09:01:38 +02:00
parent 437578f03b
commit 5b9da548a0
Signed by: xoxys
GPG Key ID: 4E692A2EAECC03C0
1 changed file with 1 addition and 1 deletion


@@ -23,7 +23,7 @@ Fortunately, [Graylog](https://docs.graylog.org/) also knows this and recommends
. As you usually don't want to operate software that no longer receives security updates, I have started to look into a migration and prepared the Container and Ansible setup. My first mistake on this journey was to believe I could just use the latest Opensearch (OS) release. Had I read the documentation <!-- spellchecker-disable -->[^graylog]<!-- spellchecker-enable -->
more carefully, I would have saved myself a lot of trouble...
-Anyway, the actual migration of the cluster from ES v7.10 to OS v2.1 succeeded surprisingly smoothly. Well, almost: after all, I had to rewrite the complete Ansible role because OS 2.x has changed almost all configuration parameters and API calls :tada: But as you can imagine, everything explodes while trying to start Graylog again. Dang. Just downgrading Opensearch was also not possible, as the cluster and all indices had already been migrated successfully. To get it back into a working state, I decided to reset the entire cluster and restore the snapshots from the S3 backup repository before starting the next try, this time with a supported OS 1.x version :fingers_crossed: At least I have already completed the ES disaster recovery test for this year.
+Anyway, the actual migration of the cluster from <!-- spellchecker-disable -->ES v7.10 to OS v2.1<!-- spellchecker-enable --> succeeded surprisingly smoothly. Well, almost: after all, I had to rewrite the complete Ansible role because OS 2.x has changed almost all configuration parameters and API calls :tada: But as you can imagine, everything explodes while trying to start Graylog again. Dang. Just downgrading Opensearch was also not possible, as the cluster and all indices had already been migrated successfully. To get it back into a working state, I decided to reset the entire cluster and restore the snapshots from the S3 backup repository before starting the next try, this time with a supported OS 1.x version :fingers_crossed: At least I have already completed the ES disaster recovery test for this year.
Lessons Learned:
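As a side note to the restore step mentioned in the diff above: pulling snapshots back out of an S3-backed repository comes down to a single call against the snapshot REST API, which works the same way on Elasticsearch and Opensearch. A minimal sketch in Python, assuming a repository named `s3-backup`, a snapshot named `snapshot-2022-07-24`, basic-auth credentials and a local cluster endpoint; all of these names are placeholders, not taken from the post:

```python
import requests

# Minimal sketch: trigger a snapshot restore from an S3-backed repository
# via the OpenSearch/Elasticsearch snapshot REST API.
# Host, repository name, snapshot name and credentials are placeholders.
OS_URL = "https://localhost:9200"
REPO = "s3-backup"                 # assumed repository name
SNAPSHOT = "snapshot-2022-07-24"   # assumed snapshot name

resp = requests.post(
    f"{OS_URL}/_snapshot/{REPO}/{SNAPSHOT}/_restore",
    json={
        "indices": "*",               # restore all indices from the snapshot
        "include_global_state": True, # also restore cluster-wide settings/templates
    },
    auth=("admin", "admin"),          # placeholder basic-auth credentials
    verify=False,                     # assumes self-signed certs on the cluster
    timeout=60,
)
resp.raise_for_status()
print(resp.json())                    # {"accepted": true} once the restore has started
```

This assumes the cluster was freshly reset, so the restored indices cannot clash with existing ones; against a live cluster you would close the affected indices first or restore them under a rename pattern.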