I recently purchased a used PowerEdge R420 rack server with a Compellent SC220 Storage Shelf. I currently have four 3.5" HDDs in the R420 and ten 2.5" HDDs in the SC220. The R420 server previously had TrueNAS installed, so all of the hard drives on both the R420 and the SC220 are formatted with ZFS. I’m now running Ubuntu on the R420 using ZFS.

The server I’m replacing is an old gaming PC running Manjaro and BTRFS. It has one SSD with the operating system and two 4 TB HDDs set up as RAID0. I’ve been using the RAID to store media downloaded via the Servarr stack.

So, my goal is to create one large pool out of all of the HDDs (except the one running the OS) on the R420 and SC220, then migrate the media data from the two 4 TB RAID0 drives on my old gaming PC over to the R420/SC220 pool. I would then move my Servarr stack over to the R420 as well. Ideally, I’d also like to physically move the two 4 TB HDDs over to the R420. Presumably, I would have to reformat those drives to use ZFS rather than BTRFS and then integrate them somehow into the ZFS pool?
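In case it helps to make the question concrete, this is roughly the sequence I have in mind (a sketch only; every device path, pool name, and hostname below is a placeholder, not a real value):

```shell
# Sketch only -- device paths, the pool name "tank", and the old PC's
# hostname "oldpc" are placeholders for illustration.

# 1. On the R420: build one pool from the R420/SC220 data drives
#    (this destroys whatever is on them, including the old TrueNAS pools)
sudo zpool create -f tank raidz1 \
    /dev/disk/by-id/scsi-DRIVE1 /dev/disk/by-id/scsi-DRIVE2 \
    /dev/disk/by-id/scsi-DRIVE3 /dev/disk/by-id/scsi-DRIVE4

# 2. Copy the media over the network from the old gaming PC
rsync -aHv --progress oldpc:/mnt/raid/media/ /tank/media/

# 3. Once the copy is verified, wipe the two 4 TB BTRFS drives and add
#    them to the pool as their own mirror vdev -- a pool can hold several
#    vdevs, so the old drives don't have to join the existing raidz vdev
sudo wipefs -a /dev/disk/by-id/ata-4TB-A /dev/disk/by-id/ata-4TB-B
sudo zpool add tank mirror /dev/disk/by-id/ata-4TB-A /dev/disk/by-id/ata-4TB-B
```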

Anyway, I’m not sure of the best procedure to accomplish all of this, so I would be grateful to hear from anyone who has any experience or insight. Thanks in advance.

    • sailingbythelee@lemmy.world (OP) · 8 months ago

      Is it necessary (or especially advantageous) to use a hardware RAID controller to create the RAID? I’m completely ignorant of those hardware aspects of servers, so I was hoping to create a software RAID using ZFS.

      • user134450@feddit.de · 8 months ago

        Initiator-target (IT) mode lets the controller present the drives as a JBOD, so you can build ZFS vdevs on them. You can arrange the vdevs in a raidz configuration, which gives you the same drive redundancy as a hardware RAID, with raidz1 performing similarly to RAID 5.

        ZFS is commonly used with a JBOD configuration on a RAID controller, but you can also use any other kind of controller as long as the individual drives can be written to directly. Examples would be NVMe drives attached straight to the PCIe bus, or ordinary SATA controllers. This is more of a performance optimization than a compatibility issue.
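        If you want to see how raidz1 behaves without touching real drives, you can build a throwaway pool on sparse files (assuming the ZFS tooling is installed; everything here is disposable):

```shell
# Build a disposable raidz1 pool on sparse files to see the layout
truncate -s 1G /tmp/zd1 /tmp/zd2 /tmp/zd3 /tmp/zd4
sudo zpool create testpool raidz1 /tmp/zd1 /tmp/zd2 /tmp/zd3 /tmp/zd4
zpool status testpool        # shows one raidz1-0 vdev with 4 "drives"

# Take one "drive" offline -- the pool keeps running, now DEGRADED,
# which is the single-drive redundancy RAID5 would give you
sudo zpool offline testpool /tmp/zd3
zpool status testpool

# Clean up
sudo zpool destroy testpool
rm /tmp/zd1 /tmp/zd2 /tmp/zd3 /tmp/zd4
```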

        • sailingbythelee@lemmy.world (OP) · 8 months ago

          Okay, I think that’s basically what I’m trying to do, though I don’t know if I already have a JBOD. My drives certainly do show up on my desktop as just a bunch of individual drives, haha. How do I access the hardware controller to see how it is currently set up?

          • user134450@feddit.de · 8 months ago

            “just a bunch of individual drives”

            That is literally what JBOD means, so congratulations, you already have one. A classical hardware RAID would show up as a single logical drive.
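            A quick way to check what the controller is presenting (generic commands; the exact controller model varies between machines):

```shell
# Identify the storage controller (PERC H310/H710 are common in R420s)
lspci | grep -iE 'sas|raid'

# List the disks as the OS sees them. In IT/HBA mode each physical
# drive appears individually with its real vendor/model string; if you
# instead see "PERC" virtual disks, the controller is still presenting
# hardware RAID volumes.
lsblk -d -o NAME,SIZE,MODEL,SERIAL
```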

      • ScottE@lemm.ee · 8 months ago

        Nope, and that’s the whole point of ZFS: you don’t need any special hardware, nor do you want a RAID controller hiding the drive details, since ZFS manages the drives itself. Plus, you probably want to use RAIDZ with spare drives to absorb failures.
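        For what it’s worth, adding a spare is a one-liner once the pool exists (the device path is a placeholder, and the pool name “tank” is assumed):

```shell
# Attach a hot spare to an existing pool named "tank"; ZFS brings it in
# automatically when a member drive faults (the zed daemon handles this)
sudo zpool add tank spare /dev/disk/by-id/ata-SPARE1

zpool status tank   # the spare is listed under a "spares" section
```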

        • sailingbythelee@lemmy.world (OP) · 8 months ago

          Yes, thanks. After some experimenting, I did find the raidz1 setting and plan to use it for sure!