
Home Server/NAS build


Comments

  • Registered Users Posts: 7,179 ✭✭✭Serephucus


    Oh ok, didn't realise it had a partner application to go with the mobile app. Must have a look at that soon.


  • Registered Users Posts: 7,179 ✭✭✭Serephucus


    For reference of anyone else looking at this later on, some numbers from my server after moving to unRAID:

    To an SSD cache pool:
    [screenshot: transfer speed to the SSD cache pool]

    To a 10-drive HDD array:
    [screenshot: transfer speed to the 10-drive HDD array]

    As you can see, I'm completely limited by a single 1Gb/s cable. :P
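For anyone wondering where that ceiling sits, the back-of-envelope is simple (the ~10% protocol overhead used here is a rough rule of thumb, not a measured figure):

```shell
# Why ~112 MB/s is the practical ceiling on gigabit Ethernet:
# 1 Gb/s = 1000 Mb/s; divide by 8 bits per byte for the raw byte
# rate, then knock roughly 10% off for TCP/IP and SMB overhead.
raw=$((1000 / 8))           # 125 MB/s on the wire
smb=$((raw * 90 / 100))     # ~112 MB/s in practice
echo "raw line rate: ${raw} MB/s"
echo "realistic SMB: ~${smb} MB/s"
```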

    There's no internal benchmark for unRAID (that I know of), but something that is not dissimilar is the parity check process, as it reads from all drives simultaneously, and is limited by the read/write to the parity drive. During this, I've seen sustained reads of 1.2GB/s.
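If you want a rough number without waiting for a parity check, plain dd from coreutils will do. This is a generic sketch, not an unRAID tool, and the /tmp path is a placeholder; on a real box you'd point TESTFILE at the disk or share you want to measure:

```shell
# Crude sequential write-then-read throughput check with dd.
# conv=fdatasync forces the write to hit the disk before dd reports,
# so the write figure isn't just the page cache. The summary line
# (with the MB/s figure) goes to stderr, hence the 2>&1.
TESTFILE=/tmp/dd_bench.bin
dd if=/dev/zero of="$TESTFILE" bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1
dd if="$TESTFILE" of=/dev/null bs=1M 2>&1 | tail -n 1
rm -f "$TESTFILE"
```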


  • Registered Users Posts: 36,167 ✭✭✭✭ED E


    Easy way to bench it is to spin up a VM; the virtual network adapter should remove any bottleneck.


  • Registered Users Posts: 7,179 ✭✭✭Serephucus


    Alright, I'm blaming that one on college projects, and lack of brain power at the end of the day...


  • Closed Accounts Posts: 5,017 ✭✭✭tsue921i8wljb3


    Has unRAID improved the write speed direct to the parity-protected array? I thought it was limited to ~30MB/s because each write actually incurs four disk operations, hence the need for a cache drive.
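That ~30MB/s figure falls straight out of the four-operations arithmetic: a read-modify-write on a single-parity array needs read-data, read-parity, write-data, write-parity. A sketch, where the 120 MB/s single-drive rate is an assumed round number rather than anything measured in this thread:

```shell
# Worst-case write penalty on a single-parity array: every logical
# write costs 4 disk operations (read data, read parity, write data,
# write parity), so throughput drops to roughly a quarter of what a
# single drive can do on its own.
disk_seq=120        # MB/s, assumed single-drive sequential rate
ops_per_write=4
echo "worst-case parity write: $((disk_seq / ops_per_write)) MB/s"
```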


  • Registered Users Posts: 7,179 ✭✭✭Serephucus


    It's been far higher than that for me. When I was migrating data over from my last array I was pretty consistently getting around 100MB/s write (as reported by the disk stats plugin). This is with TB+ sized transfers.


  • Registered Users Posts: 7,003 ✭✭✭Wossack


    Little bit out of the loop, but I believe all the network wizardry Linus helped them tweak and min-max is expected to be rolled into the 6.2.0 update (currently in advanced beta, AFAIK).

    Personally, I only get 100MB/s for the first 500-600 megs or so, then it drops to 50MB/s (that's writing to cache; halve those speeds for parity-protected writes). I do have fairly questionable network settings, however (a cursory glance through the driver attributes and disabling any buffering/acceleration stuff pretty much doubled my transfer speeds; it could do with another look now, I think), and my cabling is somewhat suspect. Not so easily remedied, as the server resides in the attic, hah.


  • Closed Accounts Posts: 5,017 ✭✭✭tsue921i8wljb3


    Serephucus wrote: »
    It's been far higher than that for me. When I was migrating data over from my last array I was pretty consistently getting around 100MB/s write (as reported by the disk stats plugin). This is with TB+ sized transfers.

    On my HP Microserver running version 5 with no cache drive I max out at about 35MB/s for writes. Perhaps version 6 has improved this.


  • Registered Users Posts: 7,179 ✭✭✭Serephucus


    AFAIK it can depend a lot on the filesystem you're using as well. If you're still on ReiserFS, that could be it. (My main array is on XFS, with BTRFS for the cache pool.)
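If you're not sure what a given mount is formatted as, df -T prints the filesystem type column (ReiserFS shows up as reiserfs, XFS as xfs). The / below is just an example mount point; substitute your array's path:

```shell
# Show the filesystem type of a mount point; the "Type" column is
# what you want (reiserfs, xfs, btrfs, ...).
df -T /
```

On unRAID the array disks are mounted under /mnt/disk1, /mnt/disk2, and so on, so you'd check those paths instead.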


  • Closed Accounts Posts: 5,017 ✭✭✭tsue921i8wljb3


    Serephucus wrote: »
    AFAIK it can depend a lot on the filesystem you're using as well. If you're still on ReiserFS, that could be it. (My main array is on XFS, with BTRFS for the cache pool.)

    I am on Reiser. I might consider upgrading. Thanks.


  • Registered Users Posts: 36,167 ✭✭✭✭ED E


    Doing some fiddling to slow the fans (goofed: bought 2x reducers instead of 3, and not significant enough ones) when I found a bug in the system.

    [photo of the bug in question]

    As The Prodigy say: invaders must die!


    On another note: FlexRAID is junk. I've checked everything on the system and it still throws wild errors sporadically, making my parity data useless. If it wasn't for CrashPlan I'd have dumped it all already, but a rebuild to unRAID will be a laborious task.


  • Registered Users Posts: 7,179 ✭✭✭Serephucus


    You need... RAID

    [image: Raid ant & roach spray]

    How many drives have you got? Because I did a gradual move to unRAID. Started with three drives (bought two, reused the parity drive from FlexRAID; not best practice, but hey, I needed a drive), copied a couple of drives at a time, then added them in, and kept going. Minus all the testing, the array was only out of commission for about three days.


  • Registered Users Posts: 36,167 ✭✭✭✭ED E


    3x 3TB for data, 1x 3TB for parity, plus an auxiliary drive for backups and stuff that's Flex-incompatible. I'm about 80% full, so I'd probably buy another couple of 3TB drives to move with. I've been waiting for prices to drop, but there's been very little movement in the spinning-disk market despite how hard SSDs are hammering them.


  • Registered Users Posts: 7,179 ✭✭✭Serephucus


    If it's any use to you, I saw a 6TB WD Purple in CeX Tallaght the other day for €150. Sorely tempted to pick it up, but I don't physically have space for the damn thing. Move your data onto that and you're most of the way there.


  • Registered Users Posts: 36,167 ✭✭✭✭ED E


    Just to add a little note in case anyone else has the same issue.

    The Marvell controllers have been known to be crap: they can't keep up with SSDs, crashed under high load (now patched with a firmware update), and come with a crappy support site and dodgy management software. The issue I had was drives appearing as SCSI disks even though they were clearly SATA-only. This causes problems, as certain tools will ignore them or treat them as USB.

    Solution: drop the driver.
    [screenshot: dropping the driver]

    Now things work as they should. Another reason to yearn for more Intel ports.


    On another note: it's worth adding that the C2750D4I is likely to die. Don't copy this build.


  • Registered Users Posts: 7,805 ✭✭✭Calibos


    I am the thread Resurrection and the light!! :D

    This might be the year I finally build my media server after talking about it for years. Going to use 6 of the 9 hot-swap-bay caddies I bought years ago for an 18-bay media server in the Corsair Air 540 case, using a spare mITX H87 mobo, a Haswell i5-4590S, and 8GB of DDR3. I also have a Windows 10 Home licence.

    Just need to buy a PSU, the Noctua NH-D9L cooler, and the PCIe SAS/SATA card. I see prices of the 16-port versions have come down a lot since I last looked. I'd be using Windows Storage Spaces.

    Can anyone recommend a 16-port SAS/SATA card?

    http://www.ebay.com/bhp/16-port-sata?rmvSB=true



  • Registered Users Posts: 7,003 ✭✭✭Wossack


    I use an IBM M1015, bought in 2014 for £104. You'd need two though, as they're only two-port SAS cards (each port broken out to 4x SATA lanes).


  • Registered Users Posts: 7,805 ✭✭✭Calibos


    Wossack wrote: »
    I use an IBM M1015, bought in 2014 for £104. You'd need two though, as they're only two-port SAS cards (each port broken out to 4x SATA lanes).

    mITX mobo, so only 1 PCIe slot. That's why I'm looking at 16-port ones.


  • Registered Users Posts: 7,882 ✭✭✭frozenfrozen


    Not that it would help you, Calibos, as you'd still need two PCIe slots, but Wossack, just in case you ever need more: in that situation it would be cheaper to get an HP SAS expander (less than €100, sometimes way less), and you'd then have 32 ports to play with.


  • Registered Users Posts: 7,805 ✭✭✭Calibos


    Should I pick up something like this new from the likes of Newegg

    https://www.newegg.com/global/ie/Product/Product.aspx?Item=N82E16816118142

    or can anyone see anything as suitable and cheaper on the eBay link?


  • Registered Users Posts: 7,882 ✭✭✭frozenfrozen


    Calibos wrote: »
    Should I pick up something like this new from the likes of Newegg

    https://www.newegg.com/global/ie/Product/Product.aspx?Item=N82E16816118142

    or can anyone see anything as suitable and cheaper on the eBay link?

    What about an M1015 + an Intel RES2SV240 (powered by Molex, not PCIe)?


  • Registered Users Posts: 7,003 ✭✭✭Wossack


    Neat solution, that!

    Can't imagine me needing more than the 8 personally; something to consider though.


  • Registered Users Posts: 462 ✭✭tazzzZ


    I really like that case setup. Currently in need of a new case for a home server build; I keep looking at rackmount cases with like 20 hot-swap bays, as I'm at max HDD capacity in my current case. Or I'd love a standard case with 9 or 12 5.25" drive bays so I could put in three 5-bay hot-swap cages. Can't find anything perfect for my needs though.


  • Registered Users Posts: 7,179 ✭✭✭Serephucus


    RE 16-port card: There's a bunch of LSI-based ones. Any 9201 design one should be grand.

    RE case: I've been in the same situation for a while. I finally ended up ordering a couple of extra HDD cages for an old Nanoxia case I have (DS1). I'll lose hot-swap, but I gain a tonne of capacity, so... eh. Compromises.

    There was a lovely case full of 5.25 bays, sound damping, the lot, but it's not made any more, and I can't find it: Nexus Prominent 9.

    After that you're into rackmount stuff, which I'll be moving to at some point, but has to wait until I'm out of my parents' place. :P


  • Registered Users Posts: 462 ✭✭tazzzZ


    Yeah, I'm thinking of just going rackmount, TBH. The one thing against it for me is the convenience of the hot-swap drive cages I was getting: literally just slide the HDD in and it works, no tray or anything. So nice to use.

    I'm also tempted to sell my whole server and start again with a Ryzen 7 build. Starting to get the odd thing in 4K H.265 that my processor (a 4790S) struggles with.


  • Registered Users Posts: 36,167 ✭✭✭✭ED E


    Got two tasks for mine at the minute: firstly replace CrashPlan, then get rid of FlexRAID.

    Been looking at BTRFS / ZFS / Storage Spaces / RAID cards with expansion support (UPS..?) / other. Lots of sorta-close options, but nothing jumps out at me.


  • Registered Users Posts: 7,882 ✭✭✭frozenfrozen


    Ed: FreeBSD ZFS and an LSI 9211-8i (or a Dell PERC H200) flashed to IT mode, then the HP SAS expander from above, or a fancy case with a SAS-expander backplane, etc. Cheapest way to cost-effective, resilient storage, from my bits of researching this stuff for myself.

    I'm always looking for used enterprise gear, but nothing ever turns up here. In the USA they can get unbelievable gear, lightly used, for peanuts, and then run it with their nearly-free electricity.


  • Registered Users Posts: 36,167 ✭✭✭✭ED E


    If I was putting ZFS on top of the drives then I could probably leave them on the Marvell controllers, no? (Through the DS380 backplane and maybe an eSATA extension cage.)

    Edit: Intel + good Marvell controller.


  • Registered Users Posts: 7,179 ✭✭✭Serephucus


    I'd imagine so, though you'd probably want something better than eSATA going to the external box. You could grab one of the LSI cards with external SAS ports, I suppose, and hook that up to a JBOD box with an expander backplane.



    Out of curiosity: What are your guys' thoughts on ZFS vs. something like unRAID?

    The nerd in me wants to try ESXi with failover clustering, but then I've no idea what I'd use it for. It's kinda like that for ZFS: it looks lovely for the amount of stuff it does, but the lack of expandability is annoying. I know you can expand a pool, but the mechanics require quite a bit of overhead and initial investment.


  • Registered Users Posts: 36,167 ✭✭✭✭ED E


    Wouldn't really bother going super garage-enterprise unless you're angling for a sysadmin job soon. For me, I'm looking for something hands-off, as Flex has involved far too much interaction.


    Looks like CrashPlan is the only backup service that will run on a server OS, so maybe the next setup is going to have to take that into account. Or an S3 bucket... ($0.0125 per GB, so $10.50/mo instead of $5-6...).
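Sanity-checking that S3 figure: at $0.0125 per GB-month, $10.50/mo corresponds to roughly 840 GB stored, which lines up with the data sizes discussed earlier in the thread:

```shell
# Back out the implied data size from the quoted monthly S3 cost:
# $10.50 per month at $0.0125 per GB-month.
awk 'BEGIN { rate = 0.0125; monthly = 10.50; printf "%.0f GB\n", monthly / rate }'
```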

