
RAID?

  • 11-02-2004 11:26pm
    #1
    Registered Users, Registered Users 2 Posts: 4,008 ✭✭✭


    hello,
    I'm very confused about hard drives and the current connection standards, especially RAID.
    Can anyone explain them to me in simple terms, and what the benefits of each are over the other? Currently I have a 120GB WD 8MB SE and an 80GB Maxtor, both IDE (ATA133), but they're nearly full and I want to drop a bit of cash on more storage - something fast and fairly future-proof.

    Setup:

    Abit IC-7
    P4 2.4 @ 3.2
    2 x matched 512MB Corsair
    I have 2 x optical drives and 2 x HDDs - is it possible for my PC to support more? Is all I need a cable with 3 IDE connectors?
    Sean


Comments

  • Registered Users, Registered Users 2 Posts: 1,714 ✭✭✭Ryaner


    First thing you have to ask is whether your board can support more disks. What board is it? Then ask whether it supports RAID, and if so which levels. More than likely it's RAID0 or RAID1.
    Basically the difference is that in one setup (RAID0) both disks act together as a single disk, giving near twice the speed of a single disk and near twice the size. The other (RAID1) is a mirrored setup, i.e. the two disks write the same thing. Writing is the same speed as a single disk and reading is near twice as fast. Size is that of one disk.
    In both situations you need identical disks. In the first setup, if one disk fails, all the data on both disks is gone. In the second setup, if one fails the other still works.
    If you're going to be getting a lot of space and need a safety net on disk etc., get a RAID5 card and three or four disks. There's a rough sketch of the striping/mirroring/parity ideas below.
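    To make the difference concrete, here's a minimal Python sketch of the striping, mirroring and parity ideas. The 64KB stripe size and disk counts are illustrative assumptions, not any particular controller's behaviour:

        # Toy model of RAID0 (striping), RAID1 (mirroring) and RAID5 parity.
        STRIPE = 64 * 1024  # assumed stripe size; real controllers vary

        def raid0_write(data: bytes, num_disks: int = 2) -> list:
            # Deal stripes round-robin across disks: ~2x speed and ~2x capacity,
            # but lose one disk and every file loses half its stripes.
            disks = [bytearray() for _ in range(num_disks)]
            for i in range(0, len(data), STRIPE):
                disks[(i // STRIPE) % num_disks] += data[i:i + STRIPE]
            return disks

        def raid1_write(data: bytes, num_disks: int = 2) -> list:
            # Identical copies on every disk: capacity of one disk,
            # same write speed, survives any single disk failure.
            return [bytearray(data) for _ in range(num_disks)]

        def raid5_parity(blocks: list) -> bytes:
            # RAID5's safety net: XOR parity across a stripe's equal-sized
            # data blocks. Any one lost block can be rebuilt by XORing
            # the survivors, so capacity is (n-1) disks' worth.
            parity = bytearray(len(blocks[0]))
            for block in blocks:
                for i, byte in enumerate(block):
                    parity[i] ^= byte
            return bytes(parity)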


  • Registered Users, Registered Users 2 Posts: 15,817 ✭✭✭✭po0k


    Buy a 4-channel PCI Serial ATA controller card that supports RAID levels 0, 1, 1+0 and maybe JBOD.
    Buy 2 x 160GB SATA 7200rpm 8MB-cache hard drives.
    WDs seem to be the best performers according to www.storagereview.com (basing this on their IDE drives, btw).
    Seagates have a reputation for being very quiet yet still fast performers.
    Put them in a RAID0 array; the stripe and cluster size depend on the size of the majority of files you'll be storing on it and how much you want to thrash the drives :)
    Both have an effect on performance.
    If you want to (roughly) double your read speed again, you could build a RAID0+1 array: 2 drives striping as in RAID0, with another 2 mirroring the first two (RAID1).
    Write speed will not be improved, as you're still writing the same data to 2 drives at the same time, so it takes the same amount of time.
    Of course, PCI is limited to 132MB/sec transfer, so if the RAID0 array can already throw out around 70-80MB/sec (google for reviews on cards and drive configs), there's little point in getting another 2 drives for RAID0+1 unless you want to ensure data security and reliability, along with maxing out your possible read speed (stripe and cluster size permitting) to the PCI bus limit. The quick sums below show why.
    The only real downside to RAID0 is that if one drive dies, the array is toast: you only have half of every file on the remaining drive, which is useless.
    If read speeds are all that matter, and you're happy paying double the cost per gig of storage for the data security, you could just go pure RAID1 with 2 drives.
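    A back-of-envelope check on that PCI point, in Python (the 40MB/sec per-drive figure is an assumed ballpark for a 2004-era 7200rpm drive, not a measured number):

        # Will a RAID0 array saturate a plain 32-bit/33MHz PCI bus?
        PCI_BUS_MB_S = 132       # theoretical ceiling, shared by the whole bus
        PER_DRIVE_MB_S = 40      # assumed sustained rate of one 7200rpm drive

        def raid0_read_mb_s(drives: int) -> float:
            # RAID0 reads scale roughly linearly with drive count,
            # until the shared PCI bus becomes the bottleneck.
            return min(drives * PER_DRIVE_MB_S, PCI_BUS_MB_S)

        for n in (1, 2, 4):
            print(n, "drive(s): ~%.0f MB/sec" % raid0_read_mb_s(n))
        # 1 drive(s): ~40 MB/sec
        # 2 drive(s): ~80 MB/sec
        # 4 drive(s): ~132 MB/sec  <- bus-limited, not drive-limited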

    As regards cards, Promise seem to make fairly decent value/performance cards under Windows, but they have terrible problems with Linux.
    Again, for more in-depth info, google is your friend.
    Someone else on this forum might be able to give you more info, as for myself, I'm pooped :)


  • Registered Users, Registered Users 2 Posts: 15,817 ✭✭✭✭po0k


    Originally posted by rabbitinlights
    Is all I need a cable with 3 IDE connectors?
    Sean

    IDE doesn't support more than 2 devices per channel (master and slave - feck off, PC Americans) and only one can be accessed at any one time anyway.


    Also, the reason I suggested an add-in card is that should you upgrade the motherboard, you can simply pull out the card and the drives and slap them into your new system without having to re-initialise them (which means losing all your data).
    Of course, this only applies if you'd be using the (RAID0) array for storage, or separate storage and OS partitions on the array, and not just for the OS - which would necessitate a format of the array and loss of any stored data anyway, to permit a clean upgrade of the board w.r.t. the OS and drivers.


  • Closed Accounts Posts: 8,264 ✭✭✭RicardoSmith


    Any advantages of a SATA RAID1 over an IDE RAID1?


  • Registered Users, Registered Users 2 Posts: 1,714 ✭✭✭Ryaner


    SATA drives can be hot-swappable, i.e. you can plug them in and out while the system is running (depends on the drives and the SATA card).
    The drives tend to be a lot faster on their own and not too much slower than a RAID1 array. A single drive is not going to be as fast as two of them unless you have two old IDE drives and one of the newer SATA ones.
    The CPU overhead is smaller, as the SATA cards in a lot of cases handle most of the work.


  • Registered Users, Registered Users 2 Posts: 15,817 ✭✭✭✭po0k


    Originally posted by Ryaner
    SATA drives can be hot-swappable, i.e. you can plug them in and out while the system is running (depends on the drives and the SATA card).
    The drives tend to be a lot faster on their own and not too much slower than a RAID1 array. A single drive is not going to be as fast as two of them unless you have two old IDE drives and one of the newer SATA ones.
    The CPU overhead is smaller, as the SATA cards in a lot of cases handle most of the work.

    You're right about the hot-swap.
    The rest of your post is BS tbh.
    With the exception of WD's Raptor drives (essentially 10k rpm SCSI drives with SATA electronics), SATA drives today are just ordinary IDE drives with a serialiser onboard or native SATA electronics.
    The drive mechanism is the exact same.
    SATA is purely single-device-per-channel, has more bandwidth (which 7200rpm drives can't really fill anyway) and sometimes supports tagged command queueing.
    Unless you're trying to put the two drives in the RAID1 array on the one channel, you won't really notice a difference.
    But I'd recommend getting a SATA controller for a bit of future-proofing anyway.
    As for CPU utilisation, that's totally down to the actual controller, the driver and the OS.
    There are good and bad IDE and SATA controllers.
    Also, modern onboard controllers are generally integrated into the southbridge and so are not limited by the PCI bandwidth, instead going over HyperTransport or whatever south/northbridge bus is used.
    A PCI plug-in card will be limited to the max of the PCI bus (132MB/sec) and will also be in contention with any other devices on the bus.
    One way out would be to get a 64-bit PCI-X card, if you have PCI-X slots on your motherboard.
    Of course, that's only of use if you intend running 4 or more drives in a config that could saturate the PCI bus.
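    To put a rough number on the contention point, a quick Python sketch (the neighbouring devices and their rates are made-up examples of typical PCI cards of the time):

        # Everything on the plain PCI bus shares one 132MB/sec ceiling.
        PCI_BUS_MB_S = 132

        other_pci_devices = {        # illustrative neighbours on the bus
            "100Mbit NIC": 12.5,     # ~12.5MB/sec flat out
            "sound card": 1.0,       # trivial load, but still arbitrating
        }

        headroom = PCI_BUS_MB_S - sum(other_pci_devices.values())
        print("Left for a PCI RAID card: ~%.0f MB/sec" % headroom)
        # ~118MB/sec at best; a southbridge-integrated controller avoids
        # this entirely, since its traffic never touches the PCI bus.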


  • Registered Users, Registered Users 2 Posts: 1,714 ✭✭✭Ryaner


    I've a Maxtor SATA drive. It gets read speeds of over 50MB/s. The WD 10,000rpm drives on IDE get that, but I've seen no other ones that do. I'll admit the first lot of SATA drives were crap performance-wise, but the later ones are showing the benefits.
    Most SATA cards support RAID anyway. And most will cut the CPU usage down even if they are onboard and don't have their own chip. CPU usage ain't gonna make much difference anyway unless you're running a server with 10+ drives.

    As for PCI-X - is that released yet? I thought the first boards were coming out at the end of March?


  • Registered Users, Registered Users 2 Posts: 17,165 ✭✭✭✭astrofool


    www.storagereview.com will reveal the answer :)

    Anyway, get SATA: the cables are much nicer, and it will be much more future-proof. DVD/CD-ROM drives/writers should also be going SATA this year.


  • Registered Users, Registered Users 2 Posts: 15,817 ✭✭✭✭po0k


    Originally posted by Ryaner
    I've a Maxtor SATA drive. It gets read speeds of over 50MB/s. The WD 10,000rpm drives on IDE get that, but I've seen no other ones that do. I'll admit the first lot of SATA drives were crap performance-wise, but the later ones are showing the benefits.
    Most SATA cards support RAID anyway. And most will cut the CPU usage down even if they are onboard and don't have their own chip. CPU usage ain't gonna make much difference anyway unless you're running a server with 10+ drives.

    As for PCI-X - is that released yet? I thought the first boards were coming out at the end of March?

    Just about every standard 7200rpm IDE/SATA drive I've seen quotes around the 50MB/sec internal transfer rate.
    The WD 10K Raptor drives aren't available for IDE (I take it you mean PATA), unless you use a SATA-to-PATA converter.
    The first batch of SATA drives weren't crap, they just didn't show any improvement over the PATA interface, as they were essentially still PATA drives, as are the majority of SATA drives today (native or not). They still have the exact same drive mechanism as their PATA cousins. The only exception is WD's Raptor drives, which have what are essentially low-end SCSI drive mechanisms.
    Why do you mention onboard SATA/PATA controllers as though they're a bad thing?
    The vast, vast majority of drive controllers are integrated into the southbridge, and the remaining chips (onboard, but external to the southbridge) are the exact same chip models as can be found packaged on a PCI card.
    If anything, being onboard gives them a possible advantage, as the chipset they're tied to may allow them to be connected in a different manner than simply hanging off the PCI bus (e.g. Intel's CSA bus/port for the onboard gigabit chips in the 865/875 chipsets).

    CPU utilisation is again down to the specific chip used, the OS it's used under, and the manufacturer's/OEM's driver support/maturity for that OS.

    PCI-X has been out for years.
    It's basically a wider, overclocked PCI bus:
    64 bits instead of 32, running at 66MHz, 100MHz or 133MHz, with multiplied versions coming out as well (much like AGP). It runs at 3.3V but, depending on the board/chipset implementation, can include backwards compatibility for 5V cards running at 64-bit/33MHz or 32-bit/66MHz etc.
    Mainly seen in server or SMP motherboards, a common example being the AMD760MP/MPX chipset for the dual-capable AthlonMPs, which had 2 PCI 64-bit/66MHz slots tied into the north/southbridge interface.
    Most PCI-X cards these days run at the full 133MHz and 3.3V, but can clock back to 100 or 66MHz as required.
    In most cases, putting in a card which doesn't support the full speed of the slot (i.e. a 64-bit/33MHz card in a 64-bit/66MHz slot) will slow the whole bus down to 33MHz and (as in the 760MPX case) can adversely affect the northbridge/southbridge interface and the maximum bandwidth of the 32-bit/33MHz PCI bus.
    You're confusing PCI-X with PCI Express, which is a totally different interface that bears similarities to the PATA-to-SATA changeover we're seeing.
    I.e. it is a collection of up to ~32 serial lanes (each a single-bit full-duplex link, not a parallel bus) which can be divvied up into slots of lanes (16x, 8x, 4x, 2x, 1x), with the collective bandwidth of those lanes dedicated to the device plugged into the slot.
    Graphics cards are being aimed at the 16x slot, with network interfaces, soundcards and general-purpose I/O cards being aimed at the lower slots.
    I think the overall bandwidth will start off at around 4-8GB/sec, which would give ~2-4GB/sec for the graphics card.
    Afaik it's scalable as well. Some rough bandwidth sums are below.
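    For reference, the bus arithmetic behind those figures, in Python (the PCI Express numbers assume the first-generation rate of ~250MB/sec per lane per direction):

        # Parallel buses: width (bytes) x clock (MHz) = theoretical MB/sec.
        def parallel_bus_mb_s(width_bits: int, clock_mhz: float) -> float:
            return width_bits / 8 * clock_mhz

        print(parallel_bus_mb_s(32, 33.3))   # plain PCI:  ~133 MB/sec
        print(parallel_bus_mb_s(64, 66))     # PCI-X/66:   ~528 MB/sec
        print(parallel_bus_mb_s(64, 133))    # PCI-X/133:  ~1064 MB/sec

        # PCI Express is serial: per-lane rate x lane count, per direction.
        PCIE_GEN1_LANE_MB_S = 250
        print(16 * PCIE_GEN1_LANE_MB_S)      # 16x slot:   ~4000 MB/sec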


  • Registered Users, Registered Users 2 Posts: 1,714 ✭✭✭Ryaner


    Most IDE drives I've seen have hit 45 max on burst rate, with the sustained transfer being around 40MB/s. As for the WD drives being available on IDE or ATA (a difference I ain't too familiar with, which is probably where I'm making some mistakes), I'm quite sure I have seen them in reviews.
    As for the first batch of SATA drives, many of them didn't even hit 40 on sustained transfer rates. The one I had was getting 28 across the full disk, compared to the one I have now, which gets 52.
    I do agree that most onboard controllers are better, but in regard to CPU, some don't have their own chip for processing and use the CPU. Some of the better ones do, which was the point I was trying to make.
    As for confusing PCI-X with PCI Express - yes, that is exactly what happened.

