Here’s one for you. I bought (and still really like) the Synology NAS earlier this year. It’s proven to be faster and more flexible than the server it replaced. Granted, the Hyper-V server could run any workload I wanted, but I ended up using it only for file and iTunes/music sharing, so most of that capability was going to waste.
I had four Western Digital Red 3TB drives in it. The WD Reds are quite likely my current favorite drives for this purpose. They are 5400 RPM, quiet, cool, and fast enough. I do recommend them. My only real gripe is that they carry a three-year warranty instead of a five-year one, but it is what it is. More expensive WD drives have longer MTBF ratings and warranty periods, so you have options.
That being said, three of my drives are nearing the end of their lifespan. In fact, one was giving me bad sectors, so I decided it was time to replace it, followed by the rest over the next few weeks. With that in mind, I ordered two new drives, used one to back up the data I cared about, and set about replacing the failing drive, drive two.
Within five minutes of replacing that drive and starting the rebuild, I received a string of failure emails and, ultimately, a crashed array, because drive FOUR had also started generating bad sectors.
Ugh.
Next time I’ll have the NAS run its RAID maintenance first to catch this sort of thing. I’m fairly confident that had I done that, it would have flagged the issue and I could have moved on; instead I have a permanently degraded array and a fairly big problem. I’m now in the process of copying EVERYTHING off the NAS and onto a collection of spare drives (three 2TB Reds and a couple of other 1TB SSDs/HDDs).
At least the array stayed live, albeit read-only, so I could get this done. It’s more inconvenient than fatal at this point, but I must admit I’m a bit annoyed that the ongoing self-checks didn’t catch the issue. I also suspect that somewhere there was a note to perform basic maintenance on the array before pulling a drive ….
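For what it’s worth, here’s a minimal sketch of the kind of pre-flight check I wish I’d run before pulling the drive. It assumes smartctl (from smartmontools) is reachable on the NAS and that the disks show up as /dev/sda through /dev/sdd; those device names and the attribute thresholds are my own assumptions, not anything Synology-specific.

#!/usr/bin/env python3
"""Quick SMART sanity check before swapping a drive in the array.

A minimal sketch: assumes smartmontools is installed and the script
runs with enough privileges to query the disks directly.
"""
import subprocess

# Hypothetical device list -- adjust to match how your NAS exposes its disks.
DISKS = ["/dev/sda", "/dev/sdb", "/dev/sdc", "/dev/sdd"]

# SMART attributes that tend to precede the kind of failure described above.
WATCH = {"Reallocated_Sector_Ct", "Current_Pending_Sector", "Offline_Uncorrectable"}

def check(disk):
    """Return warnings for any watched attribute with a non-zero raw value."""
    out = subprocess.run(
        ["smartctl", "-A", disk], capture_output=True, text=True, check=False
    ).stdout
    warnings = []
    for line in out.splitlines():
        parts = line.split()
        # Attribute rows look like:
        # ID NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
        if len(parts) >= 10 and parts[1] in WATCH and parts[9] != "0":
            warnings.append(f"{disk}: {parts[1]} = {' '.join(parts[9:])}")
    return warnings

if __name__ == "__main__":
    problems = [w for d in DISKS for w in check(d)]
    if problems:
        print("Do NOT pull a drive yet -- back up and investigate first:")
        print("\n".join(problems))
    else:
        print("All watched SMART attributes look clean.")

Nothing fancy, but a thirty-second check like this before the swap would have told me drive four was already on its way out.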
C’est la vie …