From 5361b9f692434a3ab77ea21779654028c38e5c23 Mon Sep 17 00:00:00 2001
From: Ezri Brimhall
Date: Tue, 1 Oct 2024 18:51:16 -0600
Subject: [PATCH] Added size watchdog docs

---
 size-watchdog/README.md | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)
 create mode 100644 size-watchdog/README.md

diff --git a/size-watchdog/README.md b/size-watchdog/README.md
new file mode 100644
index 0000000..79d69a1
--- /dev/null
+++ b/size-watchdog/README.md
@@ -0,0 +1,21 @@
+## Watchdog Script
+
+Since this should be a one-liner, I'm going to do it in a markdown file to explain my thought process.
+
+The command that produces the output we want is `du -hd0 /var /home`, which prints a single human-readable total for each directory. It will likely need to be run as root, since root is generally the only user that can read the entirety of `/var`.
+
+To run it repeatedly without using `watch`, we can put it in a `while true` loop, like so:
+
+`while true; do du -hd0 /var /home; sleep 60; done`
+
+I figure 60 seconds is a decent middle ground between freshness and overhead, but this is a fairly expensive command to have running all the time, since `du` has to recursively read the metadata of every file under each directory to calculate its size. If this kind of monitoring needs to run frequently, it would be better to set up the system so that `/var` and `/home` are on their own filesystems (a BTRFS subvolume or ZFS dataset would work as well), and then use either `df -h /var /home` (if they are mounted as their own volumes; a sketch of this variant appears at the end of this file) or the appropriate filesystem-specific command to report the space used by these directories.
+
+If this is instead a daily cron job or some such, with the report emailed to the system administrator, then performance shouldn't be an issue, and we can simply run the `du` command (a sample crontab entry is sketched below).
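+
+Going back to the separate-filesystem idea above: assuming `/var` and `/home` were mounted as their own filesystems (an assumption, not the current layout), the watchdog loop becomes much cheaper with `df`, since it only reads per-filesystem counters instead of walking every file:
+
+`while true; do df -h /var /home; sleep 60; done`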
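+
+As a rough sketch of the cron approach, assuming the host has a working `mail` command configured and using a placeholder recipient address, a daily 06:00 report from root's crontab could look like:
+
+`0 6 * * * du -hd0 /var /home | mail -s "Disk usage report" root@example.com`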