Basically title. I’m in the process of setting up a proper backup for my configured containers on Unraid and I’m wondering how often I should run my backup script. Right now, I have a cron job set to run on Monday and Friday nights; is this too frequent? What’s your schedule, and do you strictly back up your appdata (container configs), or is there other data you include in your backups?
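For reference, the twice-weekly schedule described above would look something like this in a crontab (script path and log path are placeholders):

```shell
# Run the backup script at 02:00 on Monday (1) and Friday (5).
0 2 * * 1,5 /boot/scripts/backup-appdata.sh >> /var/log/backup-appdata.log 2>&1
```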

  • Avid Amoeba
    3 months ago

    Every hour. Could do it more frequently if needed.

    It depends on how resource-intensive the backup process is.

    Consider an 800GB Immich instance.

    Using Duplicity or rsync takes 1 hour per backup. 99% of the time is spent traversing the directory structure and checking which files have changed; 1% is spent transferring the difference to the backup. Any backup system that operates on top of the file system would take this long. In addition, unless you’re using something that can take snapshots of the filesystem, you have to stop Immich during the backup process in order to prevent backing up an invalid app state.
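    A minimal sketch of that stop-copy-restart cycle, assuming Immich runs under Docker Compose and using made-up paths:

    ```shell
    #!/bin/sh
    set -eu

    # Stop the stack so the database and uploads are in a consistent state.
    docker compose -f /opt/immich/docker-compose.yml stop

    # rsync only transfers changed files, but it still walks the whole tree,
    # which is where most of the hour goes on a large instance.
    rsync -a --delete /opt/immich/data/ /mnt/backup/immich/

    # Bring the service back up once the copy is done.
    docker compose -f /opt/immich/docker-compose.yml start
    ```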

    Using ZFS send on the other hand (with syncoid) takes less than 5 seconds to discover the differences and the rest of the time is spent on the data transfer, at 100MB/s in my case. Since ZFS send is based on snapshots, I don’t have to stop the service either.
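    The snapshot-based flow might look like this with syncoid (pool, dataset and host names are made up, and the manual incremental send assumes the previous snapshot already exists on the receiver):

    ```shell
    # syncoid snapshots the dataset and sends only the block-level delta
    # since the last common snapshot -- no directory traversal needed.
    syncoid tank/immich backupuser@backuphost:backup/immich

    # Roughly the same thing by hand with plain zfs send/receive:
    zfs snapshot tank/immich@hourly-new
    zfs send -i tank/immich@hourly-prev tank/immich@hourly-new \
      | ssh backupuser@backuphost zfs receive backup/immich
    ```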

    When I used Duplicity to back up, I would do it once a week because the backup process was long and heavy on the disk array. Since I switched to ZFS send, I do it once an hour because there’s almost no visible impact.

    I’m now in the process of migrating my laptop to ZFS on root in order to be able to utilize ZFS send for regular full system backups. If successful, eventually I’ll move all my machines to ZFS on root.

  • @JASN_DE@lemmy.world
    3 months ago

    Nextcloud data daily, same for the docker configs. Less important/rarely changing data once per week. Automatic sync to NAS and online storage. Irregular and manual sync to an external disk.

    7 daily backups, 4 weekly backups, “infinite” monthly backups retained (until I clean them up by hand).

  • Nicht BurningTurtle
    3 months ago

    Timeshift creates a btrfs snapshot on each boot for me. And my server gets nightly borg backups.
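    A nightly borg job of that shape can be quite small; this is a sketch with example paths, assuming the repo is already initialized and any BORG_PASSPHRASE is handled elsewhere:

    ```shell
    #!/bin/sh
    set -eu
    export BORG_REPO=/mnt/backup/borg-repo

    # Create a deduplicated, compressed archive named after host and date.
    borg create --compression zstd ::'{hostname}-{now:%Y-%m-%d}' /etc /srv /home

    # Thin out old archives so the repo doesn't grow without bound.
    borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6
    ```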

      • @tal@lemmy.today
        3 months ago

        You’re correct; the person you’re responding to is probably treating one as an alternative to the other.

        However, theoretically filesystem snapshotting can be used to enable backups, because snapshots provide an instantaneous, consistent view of a filesystem. I don’t know if there are backup systems that do this with btrfs today, but it would involve taking a snapshot and then having the backup system back up the snapshot rather than the live view of the filesystem.

        Otherwise, stuff like drive images and database files that are being written to while being backed up can end up as corrupted, inconsistent copies in the backup.
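        With btrfs, that pattern is just a read-only snapshot plus a backup taken from the snapshot path (names here are illustrative):

        ```shell
        # Take an atomic, read-only snapshot of the live subvolume.
        btrfs subvolume snapshot -r /srv/data /srv/.snapshots/data-backup

        # Back up from the frozen snapshot instead of the live tree, so files
        # being written mid-backup can't end up half-copied.
        rsync -a /srv/.snapshots/data-backup/ /mnt/backup/data/

        # Drop the snapshot once the backup completes.
        btrfs subvolume delete /srv/.snapshots/data-backup
        ```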

        • QuizzaciousOtter
          3 months ago

          Absolutely, my backup solution is actually based on BTRFS snapshots. I use btrbk (already mentioned in another reply) to take the snapshots and copy them to another drive. Then a nightly restic job backs up the latest snapshot to B2.

        • @vividspecter@lemm.ee
          3 months ago

          btrbk works that way essentially. Takes read-only snapshots on a schedule, and uses btrfs send/receive to create backups.

          There’s also snapraid-btrfs which uses snapshots to help minimise write hole issues with snapraid, by creating parity data from snapshots, rather than the raw filesystem.

          • @tal@lemmy.today
            3 months ago

            and uses btrfs send/receive to create backups.

            I’m not familiar with that, but if it identifies modified data faster than scanning the filesystem for changed files (which a filesystem could potentially do), that could also be a useful backup enabler, since your scan-for-changes time no longer needs to be linear in the number of files in the filesystem. If you don’t have that, your next best bet on Linux, and one that would be filesystem-agnostic, is going to require something like a daemon that uses inotify to build some kind of on-disk index of modifications since the last backup, plus a backup system that can understand that index.

            looks at btrfs-send(1) man page

            Ah, yeah, it does do that. Well, the man page doesn’t say what its running time is, but I assume it’s better than linear in the file count of the filesystem.
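            Concretely, an incremental send computes and transfers only the delta between two read-only snapshots, without stat-ing every file (snapshot names are illustrative):

            ```shell
            # Initial full send of a read-only snapshot:
            btrfs send /srv/.snapshots/data-1 | btrfs receive /mnt/backup

            # Incremental: only the difference between snapshots 1 and 2 is
            # computed and transferred, no per-file mtime scan required.
            btrfs send -p /srv/.snapshots/data-1 /srv/.snapshots/data-2 \
              | btrfs receive /mnt/backup
            ```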

  • Scrubbles
    3 months ago

    Boils down to how much you’re willing to lose. Personally, I do weekly.

  • SavvyWolf
    3 months ago

    Daily backups here. Storage is cheap. Losing data is not.

    • @IsoKiero@sopuli.xyz
      3 months ago

      Yep. Even if the data I’m backing up doesn’t really change that often. Perhaps I should start to back up files from my laptop and workstation too. Nothing too important is stored only on those devices, but reinstalling and reconfiguring everything is a bit of a chore.

  • @AMillionMonkeys@lemmy.world
    3 months ago

    I tried Kopia but it was unstable and janky, so now it’s whenever I remember to manually run a bunch of rsync jobs. I back up my desktop to cold storage on the first of the month, so I should get in the habit of backing up my server to the NAS then as well.
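    If the manual runs are a handful of rsync invocations, a tiny wrapper script makes “whenever I remember” a single command (paths are placeholders):

    ```shell
    #!/bin/sh
    set -eu

    # -a preserves permissions and times; --delete mirrors removals.
    rsync -a --delete /home/me/ /mnt/nas/backups/desktop/home/
    rsync -a --delete /etc/     /mnt/nas/backups/desktop/etc/

    echo "backup finished: $(date)"
    ```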

  • @truxnell@infosec.pub
    3 months ago

    Daily backups. Currently using restic on my NixOS servers. To avoid data corruption, I take a ZFS snapshot at 2am, and after that restic backs up my mutable data dirs both to my local NAS and to Cloudflare R2. The NAS backup folder is synced to Backblaze nightly as well, as a colder store.
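    That snapshot-then-restic flow might look roughly like this (dataset, repo paths and bucket names are assumptions; RESTIC_PASSWORD and S3 credentials are expected from the environment):

    ```shell
    #!/bin/sh
    set -eu

    # 1. Freeze a consistent view of the data.
    zfs snapshot tank/appdata@restic

    # 2. Back up from the snapshot's hidden .zfs directory, once per target.
    restic -r /mnt/nas/restic-repo backup /tank/appdata/.zfs/snapshot/restic
    restic -r s3:https://ACCOUNT_ID.r2.cloudflarestorage.com/backups backup \
        /tank/appdata/.zfs/snapshot/restic

    # 3. Release the snapshot.
    zfs destroy tank/appdata@restic
    ```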

  • @AnExerciseInFalling@programming.dev
    3 months ago

    I use Duplicati for my backups, and have backup retention set up like this:

    Save one backup each day for the past week, then save one each week for the past month, then save one each month for the past year.

    That way I have granular backups for anything recent, and the further back in the past you go, the less frequent the backups are, to save space.
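    In Duplicati that policy corresponds to a custom retention string; a sketch of the CLI form (storage URL and source path are placeholders; check the exact option syntax against your Duplicati version):

    ```shell
    # Keep dailies for 1 week, weeklies for 4 weeks, monthlies for 12 months.
    duplicati-cli backup file:///mnt/backup /srv/data \
        --retention-policy="1W:1D,4W:1W,12M:1M"
    ```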

  • @ikidd@lemmy.world
    3 months ago

    Proxmox servers are mirrored zpools, not that RAID is a backup. Replication between Proxmox servers every 15 minutes for HA guests, hourly for less critical guests. Full backups with PBS at 5AM and 7PM, 2 sets apiece, with one set that goes off-site and is rotated weekly. Differential replication every day to zfs.rent. I keep 30 dailies, 12 weeklies, 24 monthlies and infinite annuals.

    Periodic test restores of all backups at various granularities at least monthly or whenever I’m bored or fuck something up.

    Yes, former sysadmin.

    • @scarecrow365@reddthat.com
      3 months ago

      This is very similar to how I run mine, except that I use Ceph instead of ZFS. Nightly backups of the CephFS data with Duplicati, followed by staggered nightly backups of all VMs and containers to a PBS VM on the NAS. File backups from Unraid get sent up to CrashPlan.

      Slightly fewer retention points to cut down on overall storage, and a similar test pattern.

      Yes, current sysadmin.

  • @infinitevalence@discuss.online
    3 months ago

    Depends on the application. I run a nightly backup of a few VMs because realistically they don’t change much. On the other hand, I have containers that run (to me) critical systems like my photo backup, and they are backed up twice a day.

  • Shimitar
    3 months ago

    Daily to all three of my locations:

    • local on the server
    • in-house but on a different device
    • offsite

    But not all three destinations receive the same amount of data, due to storage limitations.

  • @madame_gaymes@programming.dev
    3 months ago

    I’m always backing up with SyncThing in realtime, but every week I do an off-site type of tarball backup that isn’t within the SyncThing setup.
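    A weekly date-stamped tarball like that can be a two-liner; this runnable sketch stages demo data in temp directories for illustration:

    ```shell
    #!/bin/sh
    set -eu

    # Demo data (stand-in for the real SyncThing-managed directory).
    src=$(mktemp -d); dest=$(mktemp -d)
    echo "hello" > "$src/notes.txt"

    # Date-stamped, compressed tarball -- the weekly off-site artifact.
    archive="$dest/offsite-$(date +%Y-%m-%d).tar.gz"
    tar -czf "$archive" -C "$src" .

    tar -tzf "$archive"   # lists ./notes.txt among the contents
    ```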