
Software Raid 1 - How to achieve read striping

  • 15-12-2007 9:37pm
    #1
    Registered Users Posts: 37,485 ✭✭✭✭ Khannie


    Following on from my raid 5 thread, I decided to go with raid 1 for my main OS drive. (I've gone linux softraid mad)

    I had tried 2 x samsung 200G drives and it had failed miserably in the performance stakes (50MB/s). I was expecting the kernel raid 1 to stripe reads across the two disks, which it didn't. Needless to say, I was disappointed. In fairness to it, accessing two files at once on the same filesystem did devote one disk to each task, giving twice the effective throughput, but that wasn't really what I wanted. It's rare that you're doing multiple reads in parallel in a home environment.

    Just this afternoon, in a hungover haze, a thought occurred to me:

    Raid 1+0 on two disks.

    Sounds like lunacy, but the numbers don't lie: 95MB/s for reads, around 60MB/s for writes.

    I get the same effective space as a raid 1 array (1 x 200GB from 2 x 200GB disks), the same redundancy, but the read striping I was after.

    Here's what I did:

    Create a raid 1 mirror /dev/md0 from /dev/sda1 and /dev/sdb1 (both 100GB)
    Create a raid 1 mirror /dev/md1 from /dev/sda2 and /dev/sdb2 (both 100GB)
    Create a raid 0 stripe /dev/md2 from /dev/md0 and /dev/md1
    mount /dev/md2
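
    In mdadm terms, those steps work out something like this (from memory, so treat it as a sketch; the filesystem and mount point here are just examples, not necessarily what I used):

        # create the two raid 1 mirrors from matching partitions
        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
        mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
        # stripe the two mirrors together
        mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/md0 /dev/md1
        # example filesystem and mount point
        mkfs.ext3 /dev/md2
        mount /dev/md2 /mnt/raid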

    I'm writing this on a livecd as my OS is rsync'd back to the array. Really looking forward to firing it up.
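
    For the copy itself, something along these lines does the job (an example only; the mount points are assumptions):

        # copy the old root onto the new array, preserving perms and hardlinks,
        # staying on one filesystem so /proc and friends don't come along
        rsync -avxH /mnt/oldroot/ /mnt/raid/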


Comments

  • Registered Users Posts: 37,485 ✭✭✭✭ Khannie


    Well.....all the patting myself on the back stopped pretty quickly when I couldn't get the system to boot.

    The problem is that mdadm will start up each of the raid 1 mirrors automatically, but won't assemble the raid 0 stripe made out of them for the root device, because the details on how to start that stripe live on the root filesystem itself. Bit of a chicken and egg scenario.
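
    For anyone wondering where those details live: /etc/mdadm.conf, roughly like this (a sketch from memory; a real config would use the UUIDs that mdadm --detail --scan spits out rather than device lists):

        DEVICE /dev/sda* /dev/sdb* /dev/md*
        ARRAY /dev/md0 devices=/dev/sda1,/dev/sdb1
        ARRAY /dev/md1 devices=/dev/sda2,/dev/sdb2
        ARRAY /dev/md2 devices=/dev/md0,/dev/md1
        # and there's the catch: this file sits on /dev/md2 itself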

    I just got my computer back up and running this morning for working from home. Bit of a disaster tbh.

    The workaround, for anyone who's interested, is this:

    Option A)
    Newer kernels support raid 10 directly. I couldn't find much on this. I'll probably try it out later, but from what I can see you can't specify which partitions form the raid 1 mirrors. This is why I avoided using this method.
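
    For reference, the direct creation is a one-liner (a sketch; the --layout flag is the part the docs are thin on, with n2 "near" being the default and f2 "far" being the one that's supposed to give striped reads):

        # raid 10 across two partitions in a single array
        mdadm --create /dev/md0 --level=10 --layout=f2 --raid-devices=2 /dev/sda1 /dev/sdb1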

    Option B)
    Use an initrd. TBH, I didn't really know how these worked until last night. Basically, you create a very minimal filesystem in an image file. The kernel loads it at boot time as a ramdisk, and from there you assemble the raid 10 (the two raid 1's plus the raid 0 on top). The initrd then points the kernel at the assembled array as the real root filesystem. This gets around the chicken and egg situation.
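
    The guts of it is a tiny init script inside the image, along these lines (a sketch, assuming an initramfs-style image with busybox and mdadm copied in; the paths are made up):

        #!/bin/sh
        # assemble the nested arrays, then hand over to the real root
        mount -t proc none /proc
        mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1
        mdadm --assemble /dev/md1 /dev/sda2 /dev/sdb2
        mdadm --assemble /dev/md2 /dev/md0 /dev/md1
        # /newroot has to exist in the image
        mount -o ro /dev/md2 /newroot
        exec switch_root /newroot /sbin/init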

    I gave option B a lash at 2am last night but wasn't coherent enough to pull it off (these things never work first time in my experience anyway). The initrd wouldn't mount properly, though I think I have the right tools in it for when I do get it working.


  • Registered Users Posts: 37,485 ✭✭✭✭ Khannie


    Khannie wrote: »
    Option A)
    Newer kernels support raid 10 directly. I couldn't find much on this. I'll probably try it out later, but from what I can see you can't specify which partitions form the raid 1 mirrors. This is why I avoided using this method.

    Well....this method does work. By "work" I mean that it boots without any trouble. Performance is absolutely horrible with only 2 disks though: 12-19MB/s. I'm using a 2.6.23 kernel.

    This is obviously a configuration problem, but I couldn't find any documentation at all on how to set this up properly, let alone for two disks. (I did find a blog entry from some punter who had done it with the intended 4 disks and got great performance; in theory at least it should be fine for 2 disks too.) The man page was lacking / too cryptic for me.
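
    If anyone wants to poke at the same thing, the obvious places to check are the standard md tools (substitute whatever your array is called):

        cat /proc/mdstat           # shows the raid level, the near/far copy layout and resync state
        mdadm --detail /dev/md0    # full array details, including the Layout line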

    Sooo.....tbh.....I'll probably just go back to raid 1 for now. I'm not arsed with the initrd method because I'm sure it's a road of pain and anguish, and I already have an SAS drive on the way that should give me the performance I'm after.


  • Registered Users Posts: 37,485 ✭✭✭✭ Khannie


    I've decided that I'm gonna make this work. It'd be a pretty poxy "howto" if I don't, really. I'm on holliers from Friday and there's no sign of my SAS drive / controller yet, so expect some kind of solution towards the end of next week. If nothing else, it'll be interesting to learn about initrds.

