I’m sketching out a plan to build a NAS at home using a USB RAID enclosure (which may eventually be replaced by a proper NAS enclosure).

I haven’t got the enclosure yet, but that’s not a big deal. Right now I’m deciding whether to buy HDDs for the storage (I currently have none) so I can set up RAID, but I cannot find good deals on HDDs.

I found on Reddit that people were buying high-capacity drives for as little as $15/TB, e.g. paying around $100 for 10–12 TB drives, but nowadays it’s just impossible to find drives at a bargain price, thanks to AI datacenters, I guess.

In Europe I’ve heard of datablocks.dev, where you can buy white-label or recertified Seagate disks, and sometimes you can find refurbished drives on eBay, but I can’t find the bargain deals everyone seemed to be getting up until last year.

For example, is 134 EUR for a 6 TB refurbished Toshiba HDD (about 22 EUR/TB) a good price, considering the price hikes? What price per TB should I be aiming for to consider a drive cheap? Where else can I search for these cheap drives?
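
To sanity-check offers, here’s the kind of back-of-the-envelope comparison I’ve been doing (Python; apart from the Toshiba listing above, the entries are made-up placeholders):

    # Rough price-per-TB comparison. Only the 134 EUR / 6 TB Toshiba is a real
    # offer I'm considering; the other listings are made-up placeholders.
    listings = [
        ("6 TB refurbished Toshiba", 134.0, 6),
        ("hypothetical 12 TB recertified drive", 180.0, 12),
        ("hypothetical 10 TB used eBay drive", 120.0, 10),
    ]

    for name, price_eur, capacity_tb in listings:
        print(f"{name}: {price_eur / capacity_tb:.2f} EUR/TB")

    # For reference, the old bargains people reported were roughly $10-15/TB.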

  • SpikesOtherDog@ani.social · 6 days ago

    I have been toying with the idea of using USB storage, but my concern is that the controllers are not meant to be used that heavily. Supposedly SATA controllers are also not built for the abuse I have been throwing at them in my machines, and I don’t want to push it.

    • WhyJiffie@sh.itjust.works · 5 days ago

      Supposedly SATA controllers are also not built for the abuse I have been throwing at them in my machines, and I don’t want to push it.

      what makes you say that?

      • SpikesOtherDog@ani.social · 5 days ago

        I just read that recently. Let me see if I can run that source back down.

        Edit: CompTIA Server+ Certification All-in-One Exam Guide, Second Edition (Exam SK0-005), McGraw-Hill, Daniel LaChance, 2021, page 138. In the table there it says that SATA is not designed for constant use.

        Edit 2:

        https://www.hp.com/us-en/shop/tech-takes/sas-vs-sata

        Reliability:

        SAS: Designed for 24/7 operation with higher mean time between failures (MTBF), often 1.6 million hours or more
        SATA: Suitable for regular use but not as robust as SAS for constant, heavy workloads, with MTBF typically around 1.2 million hours
        

        They are saying that SAS is a better option with a longer MTBF, but I don’t expect my drives to last 5 years, much less the ~136 years that a 1.2-million-hour MTBF works out to.

        My own two cents here is that you probably don’t want to use SATA ZFS JBOD in an enterprise environment, but that’s more based on enterprise lifecycle management than utility.

        • WhyJiffie@sh.itjust.works · 3 days ago

          thanks! as you say, because of the 5 vs 136 years it does not really matter in our environment, but it probably starts mattering when you have lots of disks.

          I don’t actually know if this is the right way to calculate it, but if for each disk you count the time separately, and add it together for a combined MTBF, then that is 20 out of the 136 MTBF years.
          But with 30 drives that will be 150, which indicates that you will likely have at least one error of some kind because of using SATA.

          • SpikesOtherDog@ani.social · 3 days ago

            Hey, I’m not sure where you got your figure of 5 years, but it was a number I pulled out of my ass. At a repair depot I typically didn’t see drives that lived much longer than 17k hours (just under 2 years). That didn’t mean that they always fail at that age, only that systems that came through had at most about that much time on them.

            Regarding the 136 vs 150 year numbers, those numbers are pure bullshit. MTBF is a raw calculation of how long it will take these devices to fail, based on operational runtime divided by how many failures were experienced in the field. They most likely took a small number of warranty failures across a massive number of manufacturing runs and projected that it would take that long for about half their drives to fail.
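
            To illustrate what I mean (completely invented counts, just to show how short observation windows over a huge fleet produce a figure far beyond any observed lifetime):

                # Hypothetical example of how a million-hour MTBF can come out of
                # short observation windows: many drives, each watched briefly.
                drives_in_field = 100_000
                hours_observed_each = 1_000        # ~6 weeks of 24/7 runtime per drive
                failures_observed = 80

                total_hours = drives_in_field * hours_observed_each   # 100 million hours
                mtbf_hours = total_hours / failures_observed
                print(f"MTBF ~ {mtbf_hours / 1e6:.2f} million hours")  # ~1.25 million
                # No individual drive was observed for anywhere near that long.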

            In reality, you will see failure spikes over the lifetime of a product. Initial failures spike and then drop off. I recall reading either the data behind this article or something similar when they realized that the bathtub curve may not be the full picture. They have since updated it again with numbers from up to last year, and you can see that it would be difficult to project an average lifetime of 20 years, much less 150.

            My last thought on this is that when Backblaze mentions consumer vs enterprise drives they are possibly discussing SATA vs SAS. This comes from the realization that enterprise workstation drives are still just consumer drives with a part number label on them (seen in Dell and HP Enterprise equipment). Now, they could be referring to more expensive SATA drives, but I can’t imagine that they are using anything but SAS at this point in their lifecycle.

            • WhyJiffie@sh.itjust.works · 2 days ago

              At a repair depot I typically didn’t see drives that lived much longer than 17k hours (just under 2 years).

              I have a bunch of working drives with 2+ years, and in my area almost everyone still has their system installed on old hard drives

              that it would be difficult to project an average lifetime of 20 years

              I did not mean an average timeline of 20 years

              that when Backblaze mentions consumer vs enterprise drives they are possibly discussing SATA vs SAS.

              there are plenty of enterprise SATA drives

              This comes from the realization that enterprise workstation drives are still just consumer drives with a part number label on them (seen in Dell and HP Enterprise equipment).

              that’s workstation drives. Obviously if your work buys 2 TB WD Blue drives they won’t become enterprise drives. Enterprise drives include the likes of WD Red Pro, Ultrastar, etc., which do use the SATA interface.

              • SpikesOtherDog@ani.social · 1 day ago

                I have a bunch of working drives with 2+ years, and in my area almost everyone still has their system installed on old hard drives

                Yeah. I was tempering that statement with the fact that I was getting computers for repair, often with bad drives, that had 2 years of use. Now that I really think about it, we were seeing them up to about 5 years. I recall that we were discussing whether to proactively replace the drives with that much time on there. At the time I wanted to ship them back out, and others were saying that 5 years was end of life. Our job was just to get them running again vs. performing full repairs.

                I did not mean an average timeline of 20 years

                Then I was not sure what you meant by this:

                I don’t actually know if this is the right way to calculate it, but if for each disk you count the time separately, and add it together for a combined MTBF, then that is 20 out of the 136 MTBF years.

                there are plenty of enterprise SATA drives

                that’s workstation drives. Obviously if your work buys 2 TB WD Blue drives they won’t become enterprise drives. Enterprise drives include the likes of WD Red Pro, Ultrastar, etc., which do use the SATA interface.

                Those weren’t really on my radar, TBH. I took a look at the Ultrastar spec sheet and have to concede that the interface itself doesn’t seem to affect the lifecycle of the drive. I do have to say that the spec sheet does say at the bottom: “MTBF and AFR specifications are based on a sample population and are estimated by statistical measurements and acceleration algorithms under typical operating conditions for this drive model,” which is what I was guessing before for those million-hour numbers.

                All in all, I am at this point only trying to track down and relay what I’m seeing about SAS vs SATA. From what I can tell, they are mostly the same, but SAS has more features (higher transfer rates, hot-swap capability, etc.). HP says that SAS is more reliable, but I don’t see anything behind that other than the features I just mentioned. Lenovo seems to agree with that take, saying that the reliability of SAS and SATA is comparable.

                • WhyJiffie@sh.itjust.works · 10 hours ago

                  Then I was not sure what you meant by this:

                  I don’t actually know if this is the right way to calculate it, but if for each disk you count the time separately, and add it together for a combined MTBF, then that is 20 out of the 136 MTBF years.

                  5 years of drive runtime for one drive. 20 “years” for 4 drives, 40 “years” for 8 drives. I say “years” because the way I mean it is like this: running 4 drives for 10 minutes is 40 minutes of combined drive runtime. running 4 drives for 5 years is 20 years of drive runtime. I think calculating it like this can be compared to MTBF. but again, I’m not totally confident that it really works this way.
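
                  in code, what I have in mind is roughly this (just a sketch; it assumes a constant failure rate, and I’m not sure that assumption is justified):

                      # Compare accumulated drive runtime against the quoted MTBF figure.
                      HOURS_PER_YEAR = 8766          # 365.25 days
                      mtbf_hours = 1.2e6             # the SATA figure quoted earlier (~136 years)
                      years_of_use = 5

                      for n_drives in (1, 4, 30):
                          combined_hours = n_drives * years_of_use * HOURS_PER_YEAR
                          combined_years = combined_hours / HOURS_PER_YEAR
                          expected_failures = combined_hours / mtbf_hours
                          print(f"{n_drives} drives: {combined_years:.0f} combined years, "
                                f"~{expected_failures:.2f} expected failures")
                      # 1 drive   ->   5 combined years, ~0.04 expected failures
                      # 4 drives  ->  20 combined years, ~0.15 expected failures
                      # 30 drives -> 150 combined years, ~1.10 expected failures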

                  All in all, I am at this point only trying to track down and relay what I’m seeing about SAS vs SATA.

                  I think it might be because the SATA drives you normally run across, especially in laptops, are not the enterprise kind but consumer drives built from cheaper components and simpler designs, and those are lower quality, while SAS drives are always enterprise grade.

                  but still, in my experience SATA drives can have a long life too, though it may be more unpredictable than with enterprise SATA/SAS drives.

                  HP says that SAS is more reliable

                  could be controller chips and cable quality. but also, from what I’ve heard, SFF-8644 type SAS connections can be used to attach a drive to multiple HBA cards, maybe even multiple machines, for redundancy.

                  • SpikesOtherDog@ani.social · 6 hours ago

                    Ok my 20 and your 20 are not the same.

                    I was saying the large numbers didn’t make sense if you don’t have a large fleet of drives. Say you have ten servers, each with ten drives, and the MTBF is 10 million hours (yay, easy math!). That means that half your drives will have failed after 100k hours, or 11 years of use.

                    Some of the sites I have been looking at say that this number increases significantly if the drives aren’t running 24/7, because 8 hours of daily use would give you about 33 years of use.

                    I think I like the annualized failure rate better, but I don’t think either really paints a great picture.

                    https://www.seagate.com/support/kb/hard-disk-drive-reliability-and-mtbf-afr-174791en/

                    https://ssdcentral.net/hddfail/

                    I would rather the annualized rate were recalculated annually.
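
                    For what it’s worth, the usual MTBF-to-AFR conversion (which I believe is what the Seagate page above describes) works out roughly like this, assuming a constant failure rate and 24/7 power-on hours:

                        import math

                        # Convert a quoted MTBF into an annualized failure rate (AFR),
                        # assuming a constant failure rate and 24/7 operation.
                        POWER_ON_HOURS_PER_YEAR = 8760

                        def afr_from_mtbf(mtbf_hours: float) -> float:
                            return 1 - math.exp(-POWER_ON_HOURS_PER_YEAR / mtbf_hours)

                        for mtbf_hours in (1.2e6, 1.6e6):   # the SATA and SAS figures quoted earlier
                            print(f"MTBF {mtbf_hours / 1e6:.1f}M hours -> "
                                  f"AFR ~ {afr_from_mtbf(mtbf_hours) * 100:.2f}% per year")
                        # 1.2M hours -> ~0.73% per year; 1.6M hours -> ~0.55% per year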

                    Regarding the controllers, that has been nagging at me this whole conversation. Most SATA peripheral cards do not have heat sinks, but most SAS cards do. The SAS cards at least have a more rugged appearance.