03-02-2012 05:03 PM - edited 03-02-2012 05:08 PM
OK, I know this subject has been discussed before, including a nice HOW-TO by dudemanbubba: (http://community.wdc.com/t5/WD-ShareSpace/HOWTO-Sh
Anyway, my situation: my wife plugged in a kettle and powered it on. For some reason it tripped the whole house, switching everything off, including the NAS drive. When we discovered the kettle was the problem, we switched everything back on and I worriedly ran to look at the NAS drive, which showed 4 failed drives. I did the normal things, switched it off and turned it back on... the same... I accessed the GUI interface, which displayed the drives as failed, giving me the option to remove or format. At first I thought the drives were buggered, but after reading around I realised it was the RAID controller, not the drives, that had failed.
I accessed the NAS drive using a Gentoo LIVECD, utilising SSH.
ssh root@"your NAS IP address" --- (without the quotes)
password: "welc0me" --- (without the quotes)
YOU WILL THEN BE AT THE SSH SHELL
I successfully logged into the system and typed 'fdisk -l', and to my delight could see all the drives pop up with their relevant partitions. That supported the idea the drives were fine. For some reason though, WD format each drive with 4 partitions, the last of which holds all the data and is the one the chosen RAID level (5 in my case) is applied to. After asking around, it seems WD use the hard drives to host a very basic UNIX/Linux base system (hence the other 3 partitions), meaning the RAID configuration is software based. This makes things a little complicated.
Next, I examined the drives using "cat /proc/mdstat" and "mdadm --examine --scan", which suggested the drives were good too (I don't have the outputs of these, sorry). I don't remember everything about the details, but the outputs showed the drives were clean, not degraded. It surprises me that I can access the system this way while the system still maintains the drives have failed!? They obviously have not.
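For anyone wanting to check their own unit, the inspection I did from the SSH shell boils down to these commands (device names will differ depending on how your system enumerates the drives):

fdisk -l                      # list every disk and its four partitions
cat /proc/mdstat              # show which md arrays the kernel has assembled
mdadm --examine --scan        # list the arrays recorded in the members' superblocks
mdadm --examine /dev/sda4     # per-member detail: RAID level, chunk size, state, event count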
Anyway, I decided to be safe (and I thought I didn't have the relevant hardware), so I took the unit to an IT store. They didn't do data recovery per se but had had some success with RAID recovery, and they were cheap. I also made sure they cloned the drives before their recovery attempt, which they said they do normally anyway. After a week of messing around, they stated they could not access the GUI interface nor see the partitions. This got me worried they'd mucked it all up. But, alas, they had not. When I got it back home, it was still in the same state as I sent it: I could still access and see the drives and their partitions, and access the GUI interface. Why could they not? I had even reset the GUI interface to defaults for easy access for them. I got further in a few hours than they did in a week. Not taking things there again. Useless.
Now I have it home, I'm reattempting recovery. I bought an additional 2TB drive to go with the 2TB I already had, and have them both in individual USB caddies. I did think about using dd to clone the relevant partitions to the other drives, but have been persuaded to use ddrescue instead: dd stops at the first read error it hits, whereas ddrescue skips and logs bad sectors and carries on.
This is how I have set things up currently:
I have one of the 1TB drives from the NAS housed in one of the USB caddies and a 2TB drive (with 2 partitions on it) in the other. I've initiated ddrescue to clone the 4th partition of the NAS drive to the 1st partition of the 2TB drive.
ddrescue -v -f /dev/sdb4 /dev/sdc1 bs=64k
(-v = verbose (displays text of what is going on during the process)
-f = forces the process to overwrite the destination
/dev/sdb4 = the 4th partition of the NAS drive (the one holding all the data you need). BE AWARE: THIS CAN CHANGE DEPENDING ON THE ORDER YOUR SYSTEM REGISTERS THE HARD DRIVES. IN MY CASE, I HAVE AN INTERNAL HARD DRIVE (/dev/sda) AND TWO USB DRIVES, ONE HOLDING THE NAS DISK (/dev/sdb), THE OTHER BEING THE DESTINATION DRIVE (/dev/sdc).
/dev/sdc1 = 1st partition of the destination hard drive.
bs=64k = block size is 64k (this was determined by examining the NAS using 'mdadm --examine /dev/sdb4' - however, 64k is pretty standard from what I understand)
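(A side note for anyone copying the line above: as far as I understand, 'bs=64k' is really dd syntax; ddrescue instead takes an optional map/log file as its third argument, which is what lets it resume and skip bad sectors. So the two forms would look roughly like this, with the device names from my setup, so double-check yours:)

ddrescue -v -f /dev/sdb4 /dev/sdc1 rescue.log   # ddrescue: logs unreadable sectors and carries on, resumable via rescue.log
dd if=/dev/sdb4 of=/dev/sdc1 bs=64k             # plain dd equivalent: stops at the first read error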
This is where I'm up to. I'm currently cloning the 4th partition of each of the NAS drives onto 4 partitions spread across the two 2TB drives (taking care to maintain the order the drives have in the NAS housing). This way the original drives can't be damaged by any mistakes I may make.
I will then attempt to reassemble the RAID using mdadm.
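The reassembly I have in mind would be something along these lines (a sketch only; /dev/md2 comes from the --examine output on my unit, and the member names below are what I expect the cloned partitions to be called, so adjust them to whatever your system shows):

mdadm --assemble --force /dev/md2 /dev/sdb1 /dev/sdb2 /dev/sdc1 /dev/sdc2   # assemble the data array from the four cloned members
cat /proc/mdstat                                                            # confirm md2 came up as active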
My only concern regarding this method is that I'm not sure it will work, i.e. whether I can reassemble the RAID from two 2TB drives with 2 member partitions on each. I've searched the internet and asked around but haven't found an answer. Is this possible? I'm assuming it is at this point...
Does anyone have any suggestions???
03-03-2012 10:58 AM
Don't know if you will be able to recover your files from the two 2TB hard drives. If I were you I would have gone with dudemanbubba's guide; it seems like a safer bet.
03-03-2012 05:44 PM
You are pioneering new territory here. I have no idea if what you are doing will work. I am somewhat savvy with computers in terms of being able to tinker and keep things running, but I don't really understand the underlying technology of how these things work at the software level.
My two cents... the money spent on the additional 2TB drive could have bought you a machine at a garage sale with enough in it to do the job. If you have data that you treasure, I would tread lightly in the direction you are going.
The fact you can get into ssh may be a good sign as you can see the unit is functioning. I know it has a stripped down linux on it. Perhaps there is enough on there to "send" your data to a remote location? Is ftp available from the command line on the SS when logged into ssh? I don't have my unit any more or I would check myself...
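I can't test this myself any more, but from the ssh shell something like the following would at least show what transfer tools the firmware has; if scp is there and the data volume is still mounted you could try pushing files to another machine (the /DataVolume path and the target address are only guesses on my part):

which ftp scp rsync                              # see which transfer tools exist on the unit
scp -r /DataVolume user@192.168.1.50:/backup/    # copy the data share to another machine, if scp and the mount are present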
Good luck to you! I hope you get things resolved. Post back if you do for others to share!
03-03-2012 08:24 PM
Thanks for your replies..
The problem with following dudemanbubba's method is I wasn't keen on spending money on a desktop computer and new drives as well. The drives I have are more of an investment towards assembling a larger RAID later on, and I will have back-up drives on top of that too.
I agree my method is a bit untested, considering I can't find any mention of it being done anywhere on the net. I'm just hoping mdadm will treat each partition as a separate entity, as it does in dudemanbubba's method, even with two parts of the RAID on each of the two drives... We'll see.
I'm still in the middle of cloning the drives so I can mess with the cloned RAID parts instead of ruining the originals if I make a mistake. But the cloning is painfully slow, running at 44MB/s, and that is using a 2-bay SATA HDD dock over an eSATA cable. I changed to this from the two USB caddies, which were running at half that speed. I'm not sure how to speed this up. Oh, and I decided to use dd now too.
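For what it's worth, this is the kind of thing I'll try next to speed it up and keep an eye on progress (standard GNU dd behaviour as far as I know, nothing ShareSpace-specific):

dd if=/dev/sdb4 of=/dev/sdc1 bs=1M    # a larger block size than 64k usually helps a straight disk-to-disk clone
kill -USR1 $(pidof dd)                # run from a second terminal: makes dd print how far it has got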
03-03-2012 08:43 PM
I have just found evidence that you can in fact partition a drive and use each partition as a member of the array. This is generally not done, because if that drive fails the RAID is buggered and unrecoverable (example: you have 4 drives, three are 1TB and the fourth is 2TB split into 2, giving you 5 members in your array; if the 2TB drive fails, two members go with it and the array is buggered). You can't do this on the WD ShareSpace anyway, because all drives have to be the same size and there is no option to partition them.
But you can if you build your array in a Linux environment. So there is light at the end of the tunnel.
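Just to illustrate what I mean by building it in Linux: creating a brand-new array where two of the members are partitions on the same larger disk would look roughly like this (hypothetical device names, and note that --create is only for a new, empty array - never run it against disks whose data you are trying to recover):

mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sdd2   # four RAID 5 members, two of them partitions of the same disk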
03-05-2012 10:33 PM
This is far more complicated than I thought... I cannot for the life of me get into these disks. How is it possible for this to go so badly wrong? We rely on these products to do the job they're designed to do, too much. I'm exhausted and really depressed that I may have lost all my precious files, including the birth of my children. WD are completely useless to deal with and have no meaningful solutions to this recurring problem, which hits many, many people. There should be some safety mechanism in the device to stop this. I think my particular problem stems from a power cut, not once but twice. I noticed that the WD ShareSpace automatically switches on when power is restored, and I think this is the problem. Why does this machine power on by itself? Even when I plug the power cord in the back it turns on without me pressing the power button... ridiculous. And because of this, it's buggered. Thanks WD for nothing!!!
03-06-2012 09:32 AM
Have you modified the original disks at all, or have you only made duplicates? If you still have the original disks intact then your data should be there. Remember, you only need three of the four disks to make this work since it is a RAID 5. If you modified the partitions on the original disks at all, you are probably out of luck.
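For reference, once the three good members show up in a Linux box, the degraded assemble is normally something along these lines (device names are only an example):

mdadm --assemble --run /dev/md2 /dev/sda4 /dev/sdb4 /dev/sdc4   # --run starts the array even though the fourth member is missing

RAID 5 can survive exactly one missing member, which is why three of the four original disks are enough.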
Perhaps you know someone with an old PC that you could use for the purpose. With my tutorial, you will not affect that machine at all: you will be unplugging its drives and plugging yours in temporarily. Once you are done you can plug the original drives back in and all should be good to go.
Sorry you are having so much trouble...
03-07-2012 03:19 AM - edited 03-07-2012 03:21 AM
Well, this is what I'm able to ascertain from my device:
(NOTE: I have removed one of the disks from the array for the moment)
I could not use fsck.ext2 either; it would not do anything, just stated I had missing superblocks (see the note after the output below).
~ $ fdisk -l

Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1          26      208844+  fd  Linux raid autodetect
/dev/sda2              27         156     1044225   fd  Linux raid autodetect
/dev/sda3             157         182      208845   fd  Linux raid autodetect
/dev/sda4             183      121601   975298117+  fd  Linux raid autodetect

Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1          26      208844+  fd  Linux raid autodetect
/dev/sdb2              27         156     1044225   fd  Linux raid autodetect
/dev/sdb3             157         182      208845   fd  Linux raid autodetect
/dev/sdb4             183      121601   975298117+  fd  Linux raid autodetect

Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1          26      208844+  fd  Linux raid autodetect
/dev/sdc2              27         156     1044225   fd  Linux raid autodetect
/dev/sdc3             157         182      208845   fd  Linux raid autodetect
/dev/sdc4             183      121601   975298117+  fd  Linux raid autodetect

~ $ mdadm --detail /dev/sd[abc]4
mdadm: /dev/sda4 does not appear to be an md device
mdadm: /dev/sdb4 does not appear to be an md device
mdadm: /dev/sdc4 does not appear to be an md device

~ $ cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid5]
md124 : active raid1 sda2
      1044160 blocks [4/1] [___U]
md1 : active raid1 sdc2 sdb2
      1044160 blocks [4/2] [U_U_]
md0 : active raid1 sdc1 sdb1 sda1
      208768 blocks [4/3] [_UUU]
unused devices: <none>

~ $ mdadm --examine --scan
ARRAY /dev/md0 level=raid1 num-devices=4 UUID=ff74d9bf:5ab0ca74:3301d4f1:44541c6d
ARRAY /dev/md1 level=raid1 num-devices=4 UUID=44c458bb:e1ebf9e8:1cd440a8:86a64cde
ARRAY /dev/md2 level=raid5 num-devices=4 UUID=3049bb1e:078f8250:46c2159e:49a0be43

~ $ mdadm --examine /dev/sd[abc]4
/dev/sda4:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 3049bb1e:078f8250:46c2159e:49a0be43
  Creation Time : Thu Oct 28 07:37:28 2010
     Raid Level : raid5
  Used Dev Size : 975097920 (929.93 GiB 998.50 GB)
     Array Size : 2925293760 (2789.78 GiB 2995.50 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 2
    Update Time : Sun Feb 19 18:07:33 2012
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 47bf1668 - correct
         Events : 1527202
         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     0       8        4        0      active sync   /dev/sda4
   0     0       8        4        0      active sync   /dev/sda4
   1     1       8       20        1      active sync   /dev/sdb4
   2     2       8       36        2      active sync   /dev/sdc4
   3     3       8       52        3      active sync   /dev/sdd4
/dev/sdb4:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 3049bb1e:078f8250:46c2159e:49a0be43
  Creation Time : Thu Oct 28 07:37:28 2010
     Raid Level : raid5
  Used Dev Size : 975097920 (929.93 GiB 998.50 GB)
     Array Size : 2925293760 (2789.78 GiB 2995.50 GB)
   Raid Devices : 4
  Total Devices : 3
Preferred Minor : 2
    Update Time : Wed Mar 7 12:51:38 2012
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 47d5360c - correct
         Events : 1527223
         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     1       8       20        1      active sync   /dev/sdb4
   0     0       0        0        0      removed
   1     1       8       20        1      active sync   /dev/sdb4
   2     2       8       36        2      active sync   /dev/sdc4
   3     3       8       52        3      active sync   /dev/sdd4
/dev/sdc4:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 3049bb1e:078f8250:46c2159e:49a0be43
  Creation Time : Thu Oct 28 07:37:28 2010
     Raid Level : raid5
  Used Dev Size : 975097920 (929.93 GiB 998.50 GB)
     Array Size : 2925293760 (2789.78 GiB 2995.50 GB)
   Raid Devices : 4
  Total Devices : 3
Preferred Minor : 2
    Update Time : Wed Mar 7 12:51:38 2012
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 47d5361e - correct
         Events : 1527223
         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     2       8       36        2      active sync   /dev/sdc4
   0     0       0        0        0      removed
   1     1       8       20        1      active sync   /dev/sdb4
   2     2       8       36        2      active sync   /dev/sdc4
   3     3       8       52        3      active sync   /dev/sdd4

~ $ pvs
  /dev/sdd: open failed: No such device or address
  /dev/sdd3: open failed: No such device or address
~ $ pvscan
  No matching physical volumes found
~ $ lvscan
  No volume groups found
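One thing the output above suggests: the sdX4 partitions are only md members (and the pvs/pvscan attempts hint there may be LVM sitting on top of the RAID as well), so running fsck directly against /dev/sdX4 will never find a filesystem superblock there. If I'm reading it right, the order would have to be roughly this (the volume group and logical volume names below are guesses):

mdadm --assemble --run /dev/md2 /dev/sda4 /dev/sdb4 /dev/sdc4   # bring up the RAID 5 data array first
vgscan                                                          # look for an LVM volume group on top of md2
vgchange -ay                                                    # activate whatever volume group is found
lvscan                                                          # list the logical volumes, e.g. /dev/vg0/lv0
fsck -n /dev/vg0/lv0                                            # read-only filesystem check on the logical volume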
03-11-2012 02:06 PM - edited 03-11-2012 02:32 PM