Monday, July 10, 2017

Synology NAS Update

I now have the Synology back up and running, at least up to a point.  It appears I was able to fully back up my data, replace the disks and rebuild the volume.  I’m now in the process of trying to restore the data and reconnect it to OneDrive, Dropbox and other online services.

One nice side effect of this is I’m able to perform the basic reorganization I should have done a while back.  At least now I can move the folders around and really, truly take just one copy of everything with me instead of multiples.

That being said, I do have one more rebuild to go.  I had four disks in there and one was new.  I thought I’d left that as disk #1, but apparently I was wrong.  I think it was actually disk #4 which ended up failing and causing the whole meltdown to begin with.  So, I now need to perform the replacement one last time.  I’m just waiting for the full disk scan and parity check to complete, then I’ll shut down, replace the drive and rebuild.
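Side note: DSM is Linux underneath, so the scan and parity-check progress is visible in /proc/mdstat if you SSH in.  Here’s a rough little Python sketch (assuming Python 3 is installed on the box, which it may not be by default) that polls it so I don’t have to keep the web UI open:

    import re
    import time

    def rebuild_progress():
        """Return (array, action, percent) tuples parsed from /proc/mdstat."""
        with open("/proc/mdstat") as f:
            lines = f.read().splitlines()
        results = []
        current = None
        for line in lines:
            header = re.match(r"^(md\d+)\s*:", line)
            if header:
                current = header.group(1)
            # Progress lines look like:
            #   [====>........]  resync = 27.5% (826512/3000000) finish=95.2min
            progress = re.search(r"(resync|recovery|check|repair)\s*=\s*([\d.]+)%", line)
            if progress and current:
                results.append((current, progress.group(1), float(progress.group(2))))
        return results

    if __name__ == "__main__":
        while True:
            active = rebuild_progress()
            if not active:
                print("No rebuild or parity check in progress.")
                break
            for array, action, pct in active:
                print(f"{array}: {action} {pct:.1f}% complete")
            time.sleep(60)

It reports the same numbers Storage Manager shows, just in a form I can leave running in a terminal.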

Saturday, July 08, 2017

Hard Drive Crash

Here’s one for you.  I bought (and still really like) the Synology NAS earlier this year.  It’s proven to be faster and more flexible than the server it replaced.  Granted, the Hyper-V server could run any workload I wanted; however, I ended up using it only for file and iTunes/music sharing, so that flexibility was going to waste.

I had four Western Digital Red 3TB drives in it.  The WD Reds are quite likely my current favorite drives for this purpose.  They are 5400 RPM, quiet, cool and fast enough.  I do recommend them.  My only real gripe is they carry a three-year warranty instead of five, but it is what it is.  More expensive WD drives have longer MTBF/warranty times, so you have options.

That being said, three of my drives are at the end of their lifespan.  In fact, one was giving me bad sectors, so I decided it was time to replace it, followed by the rest over the next few weeks.  With that in mind, I ordered two new drives, used one to back up the data I cared about and set about replacing failing drive #2.

Within 5 minutes of replacing that drive and starting the rebuild process, I received a number of emails about failures and ultimately a crashed array because drive FOUR had started generating bad sectors.

Ugh.

Next time I’ll have the NAS perform RAID maintenance first to catch this sort of thing.  I’m fairly confident that had I done that, it would have fixed the issues and moved on, but now I have a permanently degraded array and a fairly big issue.  I’m now in the process of copying EVERYTHING off the NAS and onto a collection of spare drives (three 2TB Reds and a couple of other 1TB SSDs/HDDs).
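For reference, the maintenance I mean is what Storage Manager calls Data Scrubbing, and the GUI scheduled task is the sane way to run it.  Under the hood the Synology volumes are standard Linux md arrays, so a scrub amounts to an md consistency check.  Here’s a rough Python sketch of what kicking one off by hand over SSH as root looks like; the sysfs paths are the stock Linux md ones, nothing Synology-specific:

    import glob
    import os

    def start_check(md_sysfs):
        """Write 'check' to the md sync_action file if the array is idle."""
        action_file = os.path.join(md_sysfs, "md", "sync_action")
        with open(action_file) as f:
            state = f.read().strip()
        if state == "idle":
            with open(action_file, "w") as f:
                f.write("check")
            print(f"Started consistency check on {os.path.basename(md_sysfs)}")
        else:
            print(f"{os.path.basename(md_sysfs)} is busy: {state}")

    if __name__ == "__main__":
        # Every md array shows up under /sys/block; skip anything without md attributes.
        for md in sorted(glob.glob("/sys/block/md[0-9]*")):
            if os.path.exists(os.path.join(md, "md", "sync_action")):
                start_check(md)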

At least the array stayed live, albeit read-only, so I could get this done.  It’s more annoying than fatal at this point, but I must admit I’m a bit annoyed the ongoing self-checks didn’t catch the issue.  I also suspect that somewhere there was a note to perform basic maintenance on the array before pulling a drive ….
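For anyone who wants to do that basic check themselves before a swap, the drives’ SMART counters are the thing to look at.  Something like the sketch below (assuming smartctl from smartmontools is available on the box and you have root over SSH) flags any drive with reallocated or pending sectors:

    import glob
    import subprocess

    WATCH = ("Reallocated_Sector_Ct", "Current_Pending_Sector", "Offline_Uncorrectable")

    def smart_counts(dev):
        """Pull the raw values of a few tell-tale SMART attributes for one drive."""
        out = subprocess.run(["smartctl", "-A", dev],
                             capture_output=True, text=True).stdout
        counts = {}
        for line in out.splitlines():
            parts = line.split()
            # Attribute rows: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH ... RAW_VALUE
            if len(parts) >= 10 and parts[1] in WATCH:
                try:
                    counts[parts[1]] = int(parts[9])
                except ValueError:
                    pass
        return counts

    if __name__ == "__main__":
        for dev in sorted(glob.glob("/dev/sd[a-z]")):
            bad = {k: v for k, v in smart_counts(dev).items() if v > 0}
            print(f"{dev}: {'REPLACE SOON ' + str(bad) if bad else 'ok'}")

If any of those counters are climbing, that drive goes on the replacement list before anything else gets pulled.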

C’est la vie …