tl;dr - If you clicked because of the title/description, you got jebaited (consider not falling for bait? I don’t know, I fall for it too), but it’s not far off – this is a rant against using automation on F/OSS repositories to automatically close issues. I’ll keep this short (originally I considered doing a tongue-in-cheek woke culture roleplay thing) because I just made a breakthrough in some other yak shaving (the Kubernetes storage provider tests round 2 series of posts).
tl;dr - In order to test storage performance I set up a completely automated test bed for all the storage plugins; this article chronicles the installations of some of the plugins. It’s particularly long because I made lots of mistakes. Mostly-useless sections are prefaced with a notice on why you can skip them, so skim the ToC and click on anything you like. UPDATE (04/09/2020) The GitLab repository is up!
tl;dr - I explain the YAML and Makefile scripts that power the fio and pgbench (oltpbench) tests I’m going to run. UPDATE (04/10/2021) Turns out I was mistaken -- OpenEBS Mayastor doesn’t support single-node disk-level failure domains. It’s very well described on their website in the FAQ, but I somehow missed and/or forgot that, so the tests for Mayastor will only represent a JBOD setup (no replication). On a different but related note, cStor supports cross-disk replication (mirroring o...
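For a rough sense of what those scripts drive, the sketch below is a minimal Kubernetes Job that runs a single fio workload against a PVC; the image and claimName are placeholders, not the exact values from the GitLab repo.

```yaml
# Minimal sketch: one fio run against a PVC provisioned by the storage plugin under test.
# The container image and claimName are placeholders -- the real manifests are
# generated per plugin/workload by the Makefile.
apiVersion: batch/v1
kind: Job
metadata:
  name: fio-randrw-4k
spec:
  backoffLimit: 0
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: fio
          image: fio-runner:latest        # placeholder image containing fio
          command:
            - fio
            - --name=randrw-4k
            - --directory=/data           # mount point of the PVC under test
            - --rw=randrw
            - --bs=4k
            - --size=1G
            - --ioengine=libaio
            - --direct=1
            - --time_based
            - --runtime=60
            - --output-format=json        # easier to post-process than plain text
          volumeMounts:
            - name: data
              mountPath: /data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: test-pvc           # placeholder claim name
```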
tl;dr - Finally, the results of the benchmarking. You can find the code on GitLab. There are some issues with the benchmarks, but there was enough decent data to make a decision, for me at least. As for which storage plugin I’m going to run: I’m actually going to run both OpenEBS Mayastor and Ceph via Rook on LVM. I look forward to emails from users/corporations/devrel letting me know how I misused their products if I did – please file an issue on GitLab!