As promised last time, here is my crazy idea …

Updates

  • added Baloo fix for Btrfs, which is now scheduled for KF 5.111

Operating system / distribution

This time round, I will go for EndeavourOS.

Cue “I use Arch, BTW” jokes ;)

Why would I want to do this to myself?

Good question!

Honestly, because I feel it is time I had some fun with my system again (and GNU HURD is not ready yet1) and got my hands a bit dirtier. If it turns out to take too much time and effort, I will find something else.

I have used Manjaro for the past few years, and used Gentoo for a decade, so I feel like an(other) Arch(-based) distro is well within my reach. I chose EndeavourOS over vanilla Arch, because I do not want to do it entirely from scratch2 and EndeavourOS has a great forum and community.

With a rolling-release distro like Arch, though, one can inadvertently update oneself into trouble. But this is where my biggest complication comes in …

File system

… that is right, the file system – the most common way to mess up your computer, second only to a typo in rm -rf3!

I am not a file system expert

If you have not noticed yet, I am not an expert in the field and I only half-grasp some of the concepts … on a good day! Please read up on it yourself if you want to venture down this rabbit hole.

Things may be wrong in this blog – if you spot something that you know is false, please let me know and I will correct it.

That said, I spent way too much time reading dozens of articles and forum threads on different file systems and set-ups while waiting for my new laptop to arrive, so I will try to explain my decision and enhance it with the most relevant links.

Btrfs

The obvious solution to the problem is, of course, making snapshots of the system. And Btrfs is one of the file systems that does this really well. Snapshots take up minimal extra space and you can switch between them on the fly.
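To give a flavour of how light-weight this is, here is roughly what a manual snapshot looks like on the command line (the paths are just an example, assuming snapshots are kept under /.snapshots):

  # create a read-only snapshot of the root subvolume
  btrfs subvolume snapshot -r / /.snapshots/root-before-experiment

  # list the subvolumes/snapshots that exist
  btrfs subvolume list /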

I dipped my toes into Btrfs and tried to use snapshots as backup before, but reverted to Ext4, because I did not really understand it all and as such also did not fully implement it – which is commonly known as “not a smart way of doing things”.

This time I am not going to use snapshots for (quasi-)backup purposes, because I am quite happy with my current backup system, but intend to leverage their superpowers for rolling back to a snapshot on the fly.

The idea therefore is that the system would automatically make a snapshot of the / subvolume on every update.4 So if anything goes wrong, I could simply boot into the snapshot before the mistake happened and wait for an update that works.
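On Arch-based distros this is usually wired up with snapper and a pacman hook (the snap-pac package does exactly that). Purely to illustrate the mechanism, a hand-rolled hook could look roughly like this – the wrapper script is hypothetical and only there because Exec does not do shell expansion:

  # /etc/pacman.d/hooks/50-btrfs-snapshot.hook (illustrative sketch)
  [Trigger]
  Operation = Install
  Operation = Upgrade
  Operation = Remove
  Type = Package
  Target = *

  [Action]
  Description = Creating a read-only Btrfs snapshot of / before the transaction ...
  When = PreTransaction
  # the (hypothetical) script would run something like:
  #   btrfs subvolume snapshot -r / /.snapshots/pre-pacman-$(date +%F-%H%M)
  Exec = /usr/local/bin/btrfs-pre-update-snapshot

Booting into a chosen snapshot from the boot menu is then usually handled by something like grub-btrfs.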

I also wanted to have a CoW file system because data on it is said to be safer … and cows are cool (got to remind you how very little I know about these things! ;))

Another major reason is that Btrfs is capable of self-healing from bit rot and other disk degradation, but more on that down below.

There are some caveats with Btrfs though:

  • support for RAID-5/6 is still not stable – but I did not want to use those anyway;
  • there is an ongoing issue where Baloo on Btrfs reindexes everything after every reboot – the patch which fixes it was merged and will be part of the KDE Frameworks 5.111 release; some distros are already applying it;
  • making snapshots (too often) can degrade an SSD faster – so a smart subvolume plan is in order to decide what to snapshot and what not (a sketch of one possible plan follows below).
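Just as an illustration of what such a plan could look like – the subvolume names and the split are only an example, not a recommendation:

  @            mounted at /            snapshotted on every update
  @home        mounted at /home        not snapshotted, the backups take care of it
  @log         mounted at /var/log     not snapshotted, so logs survive a rollback
  @cache       mounted at /var/cache   not snapshotted, package caches are disposable
  @snapshots   mounted at /.snapshots  where the snapshots themselves live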

Why not …?

I did consider other file systems too, of course.

The following links I found pretty useful:

Ext4

Tried, tested, stable, apparently still(?) the fastest file system on SSD – that is the venerable Ext4 alright.

Why not then? Two reasons really:

  • it is not as exciting, and while I messed up a Btrfs set-up before, I messed up an LVM + Ext4 set-up before as well, so ¯\_(ツ)_/¯;
  • I really want to make use of snapshots to be able to roll back messed up updates and bad decisions.

ZFS

I never built up the courage to set up ZFS and from what I understand you need a lot more than two drives to make proper use of its features. And the Slimbook has “only” two M.2 slots.

It also sounds like it would be much more work to set it up on Linux (licensing questions5 aside).

XFS

I have close to no knowledge of XFS, apart from the fact that it quietly became a default in Fedora instead of Btrfs. So my decision against it is based solely on the fact that I have some prior experience with Btrfs and that Btrfs is more commonly used.

It also seems that XFS is more susceptible to bit rot and that it is much harder to restore lost data from it.

Bcachefs

Wow! Bcachefs just sounds like the future! Like the best parts of ZFS and Btrfs and XFS, but made cleaner and better and more modern and … and … YEAH!!

(I apologise, this is very much way over my head.)

As for why not Bcachefs, it is not yet included in the Linux kernel – and if the kernel devs do not consider it to be ready, I would rather not risk running it as a primary (encrypted, to boot!) file system on my primary machine.

Definitely a file system I will keep an eye on to potentially use it a few years from now!

Reiser5

Now that is a name you probably have not heard in over a decade!

Separating the artist from their art, ReiserFS was/is a great file system, especially for a lot of (very) small files that change often.

I remember using ReiserFS 3 for Portage files on Gentoo (on HDD) and it was a huge boost in performance!

From what I can tell, neither Reiser4 nor Reiser5 has been merged into the Linux kernel yet. And honestly, I am a bit concerned that with so few people talking about it, it would be difficult for me to find any help when I inevitably mess something up.

LUKS

Since I cannot recall when I last used a laptop without full-disk encryption, LUKS is happening, period.

I realise that this does not protect against every attack, but it does protect against certain attacks, which I am OK with.

I am playing with the idea of having the /boot/ partition6 (or the decryption key) on a small USB stick, but that may be a complication I will postpone for another time.

There are some caveats though …

  • depending on the attack vector you are concerned about, LUKS on an SSD may not be as secure as LUKS on an HDD, simply because an SSD erases data less frequently to avoid degradation of the drive – check §5.19 of the LUKS FAQ – it should be fine for the most common risk of a randomly stolen laptop though;
  • furthermore, using TRIM on an encrypted SSD can make it near impossible to restore;
  • currently LUKS2 – which is much improved over LUKS1 – is not supported by GRUB, at least not out of the box. Systemd-boot does support it, but it looks even uglier than LILO and I have not figured out yet how hard it is to set up booting from snapshots with it. I will need to think about this a bit more, but am leaning towards either the Argon2- and LUKS2-patched GRUB or just using LUKS1 until GRUB catches up and then upgrading my disks to LUKS2 (a rough sketch of the LUKS1 route follows below).
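As for that sketch: creating a container that stock GRUB can unlock simply means asking for the older header format (the device name is just an example):

  # format the partition with a LUKS1 header so that stock GRUB can unlock it
  cryptsetup luksFormat --type luks1 /dev/nvme0n1p2

  # open it; the file system then goes on top of the mapper device
  cryptsetup open /dev/nvme0n1p2 cryptroot

Converting to LUKS2 later (once GRUB catches up) should be possible with cryptsetup convert – though, as always, back up the header first.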

RAID

To leverage the magic of Btrfs (or ZFS) to self-heal the file system when sectors on the drive become corrupted, you can set two or more physical drives into RAID-1 (or RAID-10).

This is exactly what I intend to do – put two similar SSDs, but of different brands/models, into a Btrfs RAID-1. Also, if one of the drives fails completely, I can7 simply remove the faulty drive and add a replacement to the RAID array.

It is important to note that block-based RAID does not help here; it needs to be a file-system-based RAID for the file system to be able to self-heal.
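In practice, for Btrfs that boils down to something like this (the device names are just an example; in my case they would be the two unlocked LUKS mapper devices):

  # mirror both data and metadata across the two drives
  mkfs.btrfs -m raid1 -d raid1 /dev/mapper/cryptroot1 /dev/mapper/cryptroot2

  # later, run a scrub now and then so Btrfs can verify checksums
  # and repair any bad copy from the healthy mirror
  btrfs scrub start /
  btrfs scrub status /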

Again, caveats …

  • apparently if you put Btrfs into RAID you cannot use swapfiles for your swap, but need to create separate swap partition(s) – I will probably just create a swap partition on each SSD and add both to the swap pool (see the sketch below).
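The sketch for that swap pool – two partitions with the same priority, so the kernel stripes across both (device names are just an example, and in an encrypted set-up these would of course be dm-crypt mappings instead):

  mkswap /dev/nvme0n1p3
  mkswap /dev/nvme1n1p3

  # /etc/fstab
  /dev/nvme0n1p3  none  swap  defaults,pri=10  0 0
  /dev/nvme1n1p3  none  swap  defaults,pri=10  0 0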

Defaults are fine

I spent way too much time reading up on mount options to optimise SSDs, TRIM, etc. There are many pros and many cons, and above all there are massive caveats.

Watch out for outdated info!

In the past decade things have improved immensely when it comes to SSD technology and its support in Linux and Btrfs.

With that, the defaults have also adapted to reflect these changes.

If you are looking into overriding the defaults, make sure you are not reading outdated articles!

As an example of the above issue, I was reading up on TRIM optimisations and the caveats of different combinations of file systems, RAID, encryption, etc. … until, several articles in, I found a message on the Linux RAID mailing list stating that modern (i.e. from 2012!) SSDs not only do not necessarily require TRIM any more, but forcing TRIM on them could actually have the opposite effect and degrade the SSD faster.

If you want to dive into SSD optimisations, I found the following resources the most useful (mind the warning above!):

Reading through that list and a few other things, what I learnt – in very broad strokes – is that async TRIM is enabled by default on Btrfs if the kernel recognises the drive as an SSD, but if LUKS / dm-crypt is used, it overrides that default and does not pass TRIM through. Then there is also a list of specific SSD models coded into the Linux kernel, for which the kernel itself will disable features those drives cannot safely handle. And I am sure this is just the tip of the iceberg.
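If you want to see what your own set-up is doing, two useful starting points (the crypttab line is only an illustration, not a recommendation):

  # show whether the kernel thinks each drive supports discard at all
  lsblk --discard

  # passing discards through dm-crypt is opt-in, e.g. in /etc/crypttab:
  # cryptroot  UUID=...  none  luks,discard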

Ultimately … the defaults are sane and safe, whatever they end up being :) So change them only if you really know what you are doing.

Tmpfs

Tmpfs is this magic thing that mounts a chunk of your RAM as a file system, primarily with the intention of storing your ephemeral temporary files (typically /tmp/, but often /run/, /var/run/, and /var/lock/ too) in RAM instead of on the SSD or HDD.

This is great for performance, since RAM is much faster than even an SSD, so storing caches there makes sense. It is especially great for use with an SSD, because using Tmpfs can greatly reduce the writing and deleting of data on the drive.
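A minimal example of what that looks like in /etc/fstab – though note that many systemd-based distros already mount /tmp/ as Tmpfs out of the box (the size is just an example):

  tmpfs  /tmp  tmpfs  defaults,noatime,nosuid,nodev,size=4G  0 0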

A further trick I recently found out about is to put the browser cache, or even the whole browser profile, into Tmpfs. Similarly, this both preserves the SSD a bit and also apparently greatly improves browser performance. With 32 GB of RAM, I think I can afford to test this out :)
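The crudest variant of that trick is to simply mount the whole user cache directory as Tmpfs – the path and size below are placeholders, and tools like profile-sync-daemon automate the fancier whole-profile variant:

  # /etc/fstab
  tmpfs  /home/youruser/.cache  tmpfs  defaults,noatime,nosuid,nodev,size=2G  0 0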

There are several approaches to this, and I have not made my mind up yet, which one I will adopt:

Oh, and you can totally also compile directly on Tmpfs, but you really need enough RAM for that.

Backups

Of course, RAID alone (especially encrypted) only guards against hardware failure and is not enough of a guarantee that no data will be lost, so I fully intend to continue using Borg backups.

What will likely change though is that I will migrate from (my fork of) Demure’s script to Borgmatic8.
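Part of the appeal of Borgmatic is that the whole backup is described in one declarative YAML file, roughly along these lines – the paths, repository and retention values are placeholders, and the exact schema depends on the Borgmatic version, so check its docs:

  # /etc/borgmatic/config.yaml (rough sketch, not a drop-in config)
  source_directories:
      - /home
      - /etc
  repositories:
      - path: ssh://backup-host/./laptop.borg
  keep_daily: 7
  keep_weekly: 4
  keep_monthly: 6

A systemd timer (or cron job) then just calls borgmatic once a day.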

Wayland

At least initially, I will try out Wayland and see how it works.

I am cautiously optimistic, but also half-expect to move back to X11 for a few more months, if some things are still broken.

I hope I am wrong though and can see Wayland + Plasma improve in time.9

Next time

Yesterday I received my Slimbook Pro X 14.

Unexpectedly, it did have a working OS already installed, so I started playing around with it a bit. The next blog post will therefore be about first impressions of both the laptop and a distro and DE that I rarely interact with.

hook out → laptop arrived! excitement levels through the roof!!!


  1. Ha! I am full of crap jokes today! 

  2. I have done Gentoo from Stage 1 back on Gentoo 1.2 or 1.4, and warmly recommend it to anyone who wants to learn about how Linux works and has time for it. I do not have the time for such adventures nowadays. 

  3. I just made that up. But it is true that I am still sitting on my second-to-last crazy file system experiment – a LVM RAID-1 set-up with a single surviving HDD – and I still need to muster up the courage to try and rescue the photos from it. 

  4. There are also Linux distributions that do this at a more granular level as an integral part of their package manager – NixOS and Guix come to mind here. 

  5. I know SFLC said it is OK and I agree it is a sensible interpretation of GPL-2.0’s text. Whether that is a consequence the drafters of GPL-2.0 intended, is a separate question. 

  6. As a side note, when I was still on Gentoo and regularly compiled my own kernel, I used to keep /boot/ on a separate partition that was Ext2 and was not in /etc/fstab. So I had to remember to mount it every time I upgraded or modified the kernel. IIRC the point back then was that 1) booting from Ext4 was a problem in early GRUB, 2) you need /boot/ only when you actually boot and when you update your kernel, and as such 3) you do not need a journal for /boot/. I suspect there is no need for /boot/ to be treated that way anymore. Happy to be told otherwise! 

  7. I hope so, I am still a bit scared after the LVM RAID fiasco. 

  8. Even Demure himself said that might be a good idea. 

  9. I have a pang of nostalgia for those days when, with every update on Linux, you saw a major improvement. One update, you could hear music and sound effects at the same time; the next a modem would start working; monitors would get auto-detected; then USB got much faster … It was truly a time of wonder (but also of broken installs, expensive hardware and frustration). 

