I’m sketching out the idea of building a NAS at home, using a USB RAID enclosure (which may eventually turn into a proper NAS enclosure).

I haven’t got the enclosure yet, but that’s not a big deal. Right now I’m deciding whether to buy HDDs for the storage (I currently have none) to set up RAID, but I can’t find good deals on HDDs.

I found on reddit that people used to buy high-capacity drives for as little as $10–15/TB, e.g. paying $100 for a 10 or 12 TB drive, but nowadays it’s just impossible to find drives at a bargain price, thanks to AI datacenters, I guess.

In Europe I’ve heard of datablocks.dev, where you can buy white-label or recertified Seagate disks, and sometimes you can find refurbished drives on eBay. But I can’t find the bargain deals everyone seemed to be getting up until last year.

For example, is 134 EUR for a 6TB refurbished Toshiba HDD (about 22 EUR/TB) a good price, considering the price hikes? What price per TB should I be aiming for to consider a drive cheap? And where else can I look for these cheap drives?
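To compare offers on equal footing I’ve been reducing everything to price per TB. A quick Python sketch, using only the numbers mentioned above (the “reddit-era” line is the old $100-for-10TB deal, not a price you can actually get today):

```python
# Price-per-TB comparison for drive offers.
# All figures come from the post above; nothing here is a live price.

def price_per_tb(price: float, capacity_tb: float) -> float:
    """Cost per terabyte for a given offer."""
    return price / capacity_tb

offers = [
    ("6TB refurbished Toshiba (EUR)", 134.0, 6),   # the offer in question
    ("reddit-era 10TB deal (USD)", 100.0, 10),     # the old bargain baseline
]

for name, price, tb in offers:
    print(f"{name}: {price_per_tb(price, tb):.2f}/TB")

# 6TB @ 134 EUR  -> ~22.33 EUR/TB
# 10TB @ 100 USD -> 10.00 USD/TB
```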

  • SpikesOtherDog@ani.social · 1 day ago

    I have a bunch of working drives with 2+ years, and in my area almost everyone still has their system installed on old hard drives

    Yeah. I was tempering that statement with the fact that I was getting computers in for repair, often with bad drives, that had 2 years of use. Now that I really think about it, we were seeing them up to about 5 years. I recall we discussed whether to proactively replace drives with that much time on them. At the time I wanted to ship them back out, while others said 5 years was end of life. Our job was just to get them running again rather than to perform full repairs.

    I did not mean an average timeline of 20 years

    Then I was not sure what you meant by this:

    I don’t actually know if this is the right way to calculate it, but if for each disk you count the time separately, and add it together for a combined MTBF, then that is 20 out of the 136 MTBF years.

    there are plenty of enterprise SATA drives

    those are workstation drives. obviously if your work buys 2 TB WD Blue drives they won’t become enterprise drives. enterprise drives include the likes of WD Red Pro, Ultrastars, etc., which do use the SATA interface.

    Those weren’t really on my radar, TBH. I took a look at the Ultrastar spec sheet and have to concede that the drive interface itself doesn’t seem to affect the drive’s lifecycle. I do have to say that the spec sheet notes at the bottom: “MTBF and AFR specifications are based on a sample population and are estimated by statistical measurements and acceleration algorithms under typical operating conditions for this drive model,” which is what I was guessing earlier about those million-hour numbers.

    All in all, at this point I am only trying to track down and relay what I’m seeing about SAS vs SATA. From what I can tell, they are mostly the same, but SAS has more features (higher transfer rates, hot-swap capability, and so on). HP says that SAS is more reliable, but I don’t see anything backing that up beyond the features I just mentioned. Lenovo seems to agree with my skepticism, saying the reliability of SAS and SATA is comparable.

    • WhyJiffie@sh.itjust.works · 10 hours ago

      Then I was not sure what you meant by this:

      I don’t actually know if this is the right way to calculate it, but if for each disk you count the time separately, and add it together for a combined MTBF, then that is 20 out of the 136 MTBF years.

      5 years of drive runtime for one drive. 20 “years” for 4 drives, 40 “years” for 8 drives. I say “years” because the way I mean it is like this: running 4 drives for 10 minutes is 40 minutes of combined drive runtime. running 4 drives for 5 years is 20 years of drive runtime. I think calculating it like this can be compared to MTBF. but again, I’m not totally confident that it really works this way.
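      as a sketch of what I mean (assuming the usual reading where expected failures ≈ accumulated fleet hours / MTBF; the 1.2M-hour figure is just my back-conversion of the 136 years, since 1,200,000 / 8,766 ≈ 137):

      ```python
      # Combined drive runtime vs. MTBF, under the standard assumption
      # that expected failures = total accumulated drive-hours / MTBF.
      HOURS_PER_YEAR = 8766  # 24/7 operation, averaging in leap years

      def expected_failures(drives: int, years_each: float, mtbf_hours: float) -> float:
          """Expected failure count for a fleet: fleet-hours divided by MTBF."""
          fleet_hours = drives * years_each * HOURS_PER_YEAR
          return fleet_hours / mtbf_hours

      # 4 drives x 5 years = 20 drive-years against a ~1.2M-hour (~136-year) MTBF:
      print(expected_failures(4, 5, 1_200_000))  # ~0.15 expected failures
      ```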

      All in all, I am at this point only trying to track down and relay what I’m seeing about SAS vs SATA.

      I think it might be because the SATA drives you normally run across, especially in laptops, are not the enterprise kind but consumer drives built from cheaper components and simpler designs, and those are lower quality. SAS drives, meanwhile, are always enterprise grade.

      but still, in my experience SATA drives can have a long life too. their lifespan may just be more unpredictable than that of enterprise SATA/SAS drives

      HP says that SAS is more reliable

      could be controller chips and cable quality. but also, from what I’ve heard, SAS connectors of the SFF-8644 type can be used to attach a drive to multiple HBA cards, maybe even multiple machines, for redundancy

      • SpikesOtherDog@ani.social · 6 hours ago

        OK, my 20 and your 20 are not the same.

        I was saying the large numbers don’t make sense unless you have a large fleet of drives. Say you have ten servers, each with ten drives, and the MTBF is 10 million hours (yay, easy math!). Spread across those 100 drives, that naively means you’d expect your first failure after about 100k hours, or 11 years of continuous use.

        Some of the sites I have been looking at say this number stretches even further in calendar terms, because at 8 hours of daily use, 100k power-on hours works out to about 34 years.
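        Spelled out as a quick sketch, using the toy numbers above (the MTBF and drive count are just the round figures from this example, not real specs):

        ```python
        # Naive fleet math: one expected failure per MTBF-worth of
        # accumulated fleet hours. Toy numbers, not real drive specs.
        MTBF_HOURS = 10_000_000   # the round number from the example
        DRIVES = 100              # ten servers x ten drives

        wall_hours_to_first_failure = MTBF_HOURS / DRIVES        # 100,000 hours
        years_24_7 = wall_hours_to_first_failure / (24 * 365)    # ~11.4 years
        years_8h_day = wall_hours_to_first_failure / (8 * 365)   # ~34 years

        print(years_24_7, years_8h_day)
        ```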

        I think I like the annualized failure rate better, but I don’t think either paints a great picture.

        https://www.seagate.com/support/kb/hard-disk-drive-reliability-and-mtbf-afr-174791en/

        https://ssdcentral.net/hddfail/

        I would rather the annualized rate were recalculated annually.
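        For what it’s worth, the conversion between the two (which I believe is roughly what the Seagate page describes) is simple enough to sketch; the 2.5M-hour input below is the kind of number Ultrastar-class spec sheets quote, used here only as an example:

        ```python
        # MTBF -> implied AFR, assuming exponentially distributed lifetimes:
        #   AFR = 1 - exp(-(power-on hours per year) / MTBF)
        from math import exp

        def afr(mtbf_hours: float, poh_per_year: float = 8766) -> float:
            """Annualized failure rate implied by an MTBF spec."""
            return 1 - exp(-poh_per_year / mtbf_hours)

        print(f"{afr(2_500_000):.2%}")  # ~0.35% for a 2.5M-hour enterprise spec
        print(f"{afr(1_000_000):.2%}")  # ~0.87% for a 1M-hour spec
        ```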

        Regarding the controllers, that has been nagging at me this whole conversation. Most SATA peripheral cards do not have heat sinks, but most SAS cards do. The SAS cards at least have a more rugged appearance.