Home Server/NAS build

135 Comments

  • Registered Users, Registered Users 2 Posts: 36,169 ✭✭✭✭ED E


    Regarding the above, updates are quite quick. Verifies are SLOOOOOOW. I wasn't allocating enough "off" time to the services running on top of the array for the verify to complete. New schedule is: update daily, verify Sundays.


    And now TTL has built a new baby.




    Finally, this article should be left here in case anyone else is building one.
    https://www.backblaze.com/blog/what-hard-drive-should-i-buy/


  • Registered Users, Registered Users 2 Posts: 7,180 ✭✭✭Serephucus


    Yeah, full verify on my array takes around eight hours, from what I remember. Haven't run one in a while actually (usually just stick with quick validate), should probably do that...

    Mental server as usual. Don't think I've heard of anyone strapping an AIO WC unit to a RAID card before. :P


  • Registered Users, Registered Users 2 Posts: 36,169 ✭✭✭✭ED E


    He doesn't do half measures. Ever.

    Wonder what the upper limit for Flex is. Like, if you attached two eSATA enclosures and a rake more drives, how big could you go?

    EDIT: Theoretical limit is 16 million TB.
    http://forum.flexraid.com/index.php?topic=514.0


  • Registered Users, Registered Users 2 Posts: 7,180 ✭✭✭Serephucus


    I'd imagine it's pretty massive. Given that it's software and file-based, there's probably an artificial limit on the number of drives, but it'll be something huge.

    Just had a quick Google, and according to Brahim (FS dev), the pool can grow to 16 million TB. I think you'd be fine with an eSATA enclosure so. :P


  • Registered Users, Registered Users 2 Posts: 36,169 ✭✭✭✭ED E


    If 5K content becomes a thing I'll be putting it to the test.


  • Registered Users, Registered Users 2 Posts: 7,180 ✭✭✭Serephucus


    Hurray for bad eyesight! I can tell the difference between 720p and 1080p content on a TV, though it's only a slight improvement, so I'll be sticking with 1080p movies at the most for the foreseeable future.


  • Registered Users, Registered Users 2 Posts: 36,169 ✭✭✭✭ED E


    Only have 1080p monitors for now, except my phone (it's kinda wrong that my phone is so much higher res than a 22" screen). A 34" 1440p of some sort will be required very soon.


  • Registered Users, Registered Users 2 Posts: 36,169 ✭✭✭✭ED E


    Just realised I'd never benched it as a NAS, only locally. Almost maxes out GigE. Wasn't actually using it at full tilt for a while as Windows decided to use the NIC connected to a 10/100 switch, not the one connected directly to the router (fewer hops AND faster).

    [NAS benchmark screenshots]


  • Registered Users, Registered Users 2 Posts: 36,169 ✭✭✭✭ED E


    Little update in case anyone is using this thread for reference. Expanded the array this weekend to go from 2x3TB data + 3TB parity to 3x3TB data + 3TB parity. Usable space went from 5.5TB to 8.18TB. Fittingly, right after the drive shipped, Amazon shaved €25 off the price of it (I sourced mine on eBay). Still good that prices look to be falling.
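    (For anyone checking the maths: the drives are sold in decimal TB but usable space gets reported in binary TiB, which is where the 5.5 and 8.18 come from. A quick sketch, nothing FlexRAID-specific:)

    # Drives are sold in decimal terabytes (10^12 bytes) but reported in
    # binary tebibytes (2^40 bytes); the parity drive is excluded from usable space.
    DRIVE_TB = 3

    def usable_tib(data_drives: int, drive_tb: float = DRIVE_TB) -> float:
        return data_drives * drive_tb * 10**12 / 2**40

    print(f"2 data drives: {usable_tib(2):.2f} TiB")  # ~5.46, shown as "5.5TB"
    print(f"3 data drives: {usable_tib(3):.2f} TiB")  # ~8.19, shown as "8.18TB"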

    Performance has never been an issue but I was curious to see how the figures are holding up. Local figures didn't change much by adding the drive.

    [local benchmark screenshots]
    (I think that 512K read result is an anomaly.)

    Unfortunately CIFS has never been the speediest of protocols and only sometimes maxes out the link. GigE gives about 125MB/s peak; transfers tend to start at 110 but then scale back down to 60-85MB/s. Still plenty fast for home use. Maybe I'll go 10GbE in a year or two for ****s and giggles.

    [network transfer screenshots]

    Long term, Flex has been pretty solid. Updates take 3-12 minutes and run at 6AM, so it's very easy to keep parity current. Verifies, though, sometimes roll on for ages, 6hrs plus, by which point one of the services running on the pool may have restarted (I kill activity when the verify starts) and changed something, breaking parity. I think that's the nature of reading all 12TB of raw data in one go, so I may just do them less often or expand the time allocated.
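    (Rough back-of-envelope on the verify time; the per-drive read speed below is an assumption, not a measurement. A verify streams every drive end to end in parallel, so the wall time is roughly one drive's capacity divided by its average sequential read speed:)

    # Illustrative verify-duration estimate: drives are read in parallel, so
    # the limit is how long one 3TB drive takes to read end to end.
    drive_tb = 3
    avg_read_mb_s = 120  # assumed average across the platter (outer tracks faster, inner slower)

    seconds = (drive_tb * 10**12) / (avg_read_mb_s * 10**6)
    print(f"~{seconds / 3600:.1f} hours")  # ~6.9h, in line with the "6hrs plus" above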


  • Registered Users, Registered Users 2 Posts: 7,180 ✭✭✭Serephucus


    About FlexRAID: Can confirm. Mine has been nice and stable too. The verifies fail half the time, but that's just because some obscure file somewhere got a timestamp change or something similar.

    This is my array (6x4TB, 5+1). I think it might actually be the first time I've benchmarked it...

    [array benchmark screenshot]


  • Registered Users, Registered Users 2 Posts: 7,810 ✭✭✭Calibos


    Gonna be asking both of you to hold my hand in a few weeks once FlexRAID supports Windows 10.

    The oul lad is in desperate need of a new PC, so I've gone back to the idea of picking up an i3, swapping it for the i5 in my own uSFF PC, and giving that PC to him. He's currently using an Atom-powered mini PC with Win7 that was originally purchased to be an XBMC/Kodi OpenELEC machine. It's woefully slow. His monitor was the 1080p LCD TV. His eyesight is getting worse and I found him kneeling in front of the TV to use the PC. He said he was grand, but I told him I couldn't be having that and would give him a better PC. Found out a brother had mothballed a Dell 2405, 2005 vintage, that he thought had yellowed and dimmed with age. Picked it up and all that was wrong was a film of nicotine; a different menu setting brought the vibrancy back to something comparable to my own LG LED monitor. So Dad will now have a decent little PC and a decent large monitor when I'm finished.

    The Haswell i5 will go in an ASRock Z97 mobo with 10 SATA ports, along with another cast-off from my brother, a GTX 580. The case will be the Corsair Air 540 I've had mothballed for over a year waiting for the server build. Remember the renders I posted? 27 3.5" bays. A couple of SAS/SATA cards will go in as needed in the future. This will be a dual-use server/personal PC for the next 9 months or so, till I build my custom SFF 5820K/Pascal-based Oculus VR gaming PC.

    As such, I want Windows 10 on it, but that'll postpone the FlexRAID install a couple of months till it's supported. My current media on my two ancient D-Link NASes has been unprotected for 7 or 8 years anyway, so another few months is no biggy :D

    That said, is there anything I need to change about the install on this machine when I build it in the next week or three to account for the FlexRAID install a couple of months later? Drive naming, formatting, folder structure etc., or is it a normal Windows install in all respects, and just a case of adding the media drives to the storage pool once I've set up FlexRAID?


  • Registered Users, Registered Users 2 Posts: 36,169 ✭✭✭✭ED E


    Calibos wrote: »
    That said, is there anything I need to change about the install on this machine when I build it in the next week or three to account for the FlexRAID install a couple of months later? Drive naming, formatting, folder structure etc., or is it a normal Windows install in all respects, and just a case of adding the media drives to the storage pool once I've set up FlexRAID?

    Snapshot RAID is RAID over the file system, but I've never migrated in full drives. The simplest methodology is probably to keep one of your largest single drives and one other empty. Then you can build a 1 PPU + 1 DRU array, copy in one full drive's worth of DRU data, sync up, then delete that data off the source drive, add it as DRU2 and update. And so on and so forth.
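    (FlexRAID has no scripting API, so the above is all done through its UI / by hand; this is just that order of operations written out as a sketch, with hypothetical drive names as placeholders:)

    # Drive-by-drive migration into a snapshot-RAID pool, as a printable plan.
    def migration_plan(empty_drive, parity_drive, full_drives):
        steps = [f"Create array: PPU = {parity_drive}, DRU1 = {empty_drive} (starts empty)"]
        target = empty_drive
        for n, source in enumerate(full_drives, start=2):
            steps += [
                f"Copy the contents of {source} onto {target}",
                "Run a parity update so the copied data is protected",
                f"Wipe {source} and add it to the array as DRU{n}",
                "Run another parity update",
            ]
            target = source  # the freed drive becomes the next copy target
        return steps

    for step in migration_plan("new 3TB", "parity 3TB", ["old drive A", "old drive B"]):
        print("-", step)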

    FlexRAID locks Windows out of mounting the drives once they're added to a pool, so drive lettering etc. doesn't matter much.

    One thing I do that might help is to number every drive I buy: record its model & serial number and what bay I'm installing it in. Then if you get a drive failure notification for "WD12345678910" you know exactly what bay to go to, pull it, and swap in a spare. It also helps track which disks are oldest and should be swapped out first when upgrading (looking to 3 years from now when 8TB may be the norm).
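    (A minimal sketch of that sort of inventory, assuming smartmontools is installed and Linux-style device names; the bay numbers are whatever you recorded when you physically installed each drive, smartctl can't know that for you:)

    # Print model/serial for each drive alongside the bay you recorded for it.
    import subprocess

    BAYS = {"/dev/sda": 1, "/dev/sdb": 2, "/dev/sdc": 3}  # your own records

    def drive_info(dev):
        out = subprocess.run(["smartctl", "-i", dev], capture_output=True, text=True).stdout
        info = {}
        for line in out.splitlines():
            if ":" in line:
                key, _, val = line.partition(":")
                info[key.strip()] = val.strip()
        return info

    for dev, bay in BAYS.items():
        i = drive_info(dev)
        print(f"bay {bay}: {i.get('Device Model', '?')}  SN {i.get('Serial Number', '?')}")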


  • Registered Users, Registered Users 2 Posts: 36,169 ✭✭✭✭ED E


    BTW the 3TB and 4TB perform similarly in benches, so I'm curious as to how your figures are that high, Serephucus. Not that it matters with GigE.

    Do you have the "Turn off Windows write cache buffering" option checked, Seph?

    EDIT: Never mind, it's DiskMark.

    [DiskMark screenshot]


  • Registered Users, Registered Users 2 Posts: 7,810 ✭✭✭Calibos


    Can ye lads recommend an LGA1150 Haswell board, either the original or refresh, in a micro-ATX form factor that will support 3x 8-port SAS/SATA controller cards (i.e. 3x x8-capable PCIe slots)? My Corsair Air 540 with the hot-swap caddies I bought will support a max of 29 HDDs and a Blu-ray drive. Ultimately I'll need 3x 8-port SAS/SATA cards anyway, which is 24 drives, so whatever motherboard I get can have as few as 6 SATA ports. Will be putting in my Haswell i5-4570S. It needs to be micro-ATX in order to fit the motherboard in the orientation and position where I need it. You may recall I'm modding this particular case and using it very unconventionally, with the HDD caddies filling the side of the case where the mobo et al. usually go, and the mobo fitted on the thin side of the case where the PSU and cables usually go. I can still fit a Noctua C14 CPU cooler, so combined with the low-powered i5 it should make for a very quiet server.

    [Edit] How about the ASRock H97M? http://www.asrock.com/mb/Intel/H97M/

    It's only €77 from Mindfactory.de.

    In particular, can you look over the PCIe specs? I'm assuming a supposedly quad-CrossFire-capable board must have PCIe slots capable of running 3x Supermicro AOC-SAS2LP-MV8 PCI Express 2.0 x8 cards, i.e. any clue what the lane division is? Will I get x8/x8/x8 on 3 of the PCIe slots for the Supermicro SAS cards?


  • Registered Users, Registered Users 2 Posts: 36,169 ✭✭✭✭ED E


    Necro, but I wanted to update the thread.

    Warning: if upgrading FlexRAID to 2.1 you may lose all your schedules. This means your array will no longer be in any way redundant!
    [StoragePool][UPDATE] SUCCESS 13:18 (30 NOV)
    [StoragePool][UPDATE] SUCCESS 14 Nov

    Not pleased that this happened with no warning in the changelogs/post about the update. Next time I'm going ZFS/BTRFS I think.


  • Registered Users, Registered Users 2 Posts: 36,169 ✭✭✭✭ED E


    Just found out Supermicro have an IPMI app; rather handy if you wanna check something while on your back working in a closet etc.

    [IPMI app screenshots]


  • Registered Users, Registered Users 2 Posts: 7,180 ✭✭✭Serephucus


    Ooh, nice! I assume it works remotely as well?

    RE other thread: I knew I meant to do something before starting unRAID. I even looked the other day and noticed the updates. I'll do that once this parity check finishes (only 5 hours!)


    On unRAID:

    I haven't done a huge amount yet in terms of testing. I got two 8TB drives in, and am using a spare 1TB drive, as well as the 4TB parity from my current array to test things out. So I've got a 2+1 and cache arrangement going on.

    The initial parity sync took just over 20 hours (ugh), and the parity check looks like it'll take about 16.

    For those wondering: A parity sync in unRAID is like a parity update with FlexRAID. The check is like a verify. With unRAID being real-time though, this shouldn't be needed (the parity is updated on write to the array), and only really protects against some sort of silent corruption.
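    (For the curious: single parity is just a bytewise XOR across the data drives, so a "sync" computes it and a "check" recomputes and compares. A toy illustration of the idea, not how unRAID is actually implemented:)

    from functools import reduce

    data_drives = [bytes([1, 2, 3, 4]), bytes([9, 8, 7, 6]), bytes([5, 5, 5, 5])]

    def compute_parity(drives):                  # what a "parity sync" produces
        return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*drives))

    parity = compute_parity(data_drives)

    def check(drives, parity):                   # what a "parity check" verifies
        return compute_parity(drives) == parity

    print(check(data_drives, parity))            # True: array is consistent

    # A failed drive is rebuilt by XORing parity with the surviving drives:
    rebuilt = compute_parity([parity] + data_drives[1:])
    print(rebuilt == data_drives[0])             # True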

    As I want to start out with a known good array before I force failures, I haven't got to testing yet, and I'm a bit limited in what I can test, mainly because I don't have another drive to add to make a cache pool (in this case, a 1:1 mirror) like I'll be running with the final install, but at least I can test the main array parts.

    I really, really like all the folder options with this. No more bollocking around with Windows' horrible networking permissions. It breaks down like this:

    Public - full r/w/x for guests and users
    Secure - guests read; r or r/w set per user
    Private - guests no access; r, r/w or none set per user

    You can also choose to export the share or not (whether or not it's browsable over the network), or to export it, but have it hidden (so you need to know it exists and the path beforehand).

    And it all just works!

    VMs work wonderfully too. You get a little indicator on your dashboard of any VMs or Dockers you have running, and two clicks and you're either looking at the WebUI (Docker) or opening a VNC session (VM).

    Then there's Docker, and plugins... Coming from FlexRAID, the whole experience just feels much more polished and cohesive.


  • Registered Users, Registered Users 2 Posts: 36,169 ✭✭✭✭ED E


    Good to hear it's going well; if I was going from scratch I think it would probably win out. Linus putting it through silly stresses and it still holding up really highlighted its potential.

    Now that it's behaving again, and assuming it stays that way, I think I'd have been happier with it if the limitations were highlighted. Brahim doesn't make it obvious that while it's NTFS, it's not real NTFS, not the same as NTFS on a RAID or a Storage Space. That tripped me up, but knowing to work around it from day 1 would have halved the teething problems.

    Also, you should be advised to re-create the array when you upgrade versions. Lots of errors appeared post-2.1 and are gone since the fresh start (data in place).

    Important RE the C2750D4I:
    If you update the BMC, things will probably break. It has happened both times for me: networking and login credentials. The Supermicro tool fixes this, but if you don't know to look for it, it'll leave you cursing.


    Next step is going from 4 to 7 drives, but I'm waiting for the 3TBs to yo-yo lower than £74.45.


  • Registered Users, Registered Users 2 Posts: 7,180 ✭✭✭Serephucus


    ED E wrote: »
    Important RE the C2750D4I:
    If you update the BMC, things will probably break. It has happened both times for me: networking and login credentials. The Supermicro tool fixes this, but if you don't know to look for it, it'll leave you cursing.

    AFAIK, the IP gets reset to 0.0.0.0, but how would the Supermicro app fix this?


  • Registered Users, Registered Users 2 Posts: 36,169 ✭✭✭✭ED E


    Serephucus wrote: »
    AFAIK, the IP gets reset to 0.0.0.0, but how would the Supermicro app fix this?

    It's an executable; you run it on the host system and reconfigure the BMC.


  • Registered Users, Registered Users 2 Posts: 7,180 ✭✭✭Serephucus


    Oh ok, didn't realise it had a partner application to go with the mobile app. Must have a look at that soon.


  • Registered Users, Registered Users 2 Posts: 7,180 ✭✭✭Serephucus


    For the reference of anyone else looking at this later on, some numbers from my server after moving to unRAID:

    To an SSD cache pool:
    [SSD cache pool benchmark screenshot]

    To a 10-drive HDD array:
    [HDD array benchmark screenshot]

    As you can see, I'm completely limited by a single 1Gb/s cable. :P

    There's no internal benchmark for unRAID (that I know of), but something that is not dissimilar is the parity check process, as it reads from all drives simultaneously, and is limited by the read/write to the parity drive. During this, I've seen sustained reads of 1.2GB/s.
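    (That 1.2GB/s is just the drives adding up: a parity check streams every disk at once, so aggregate read is roughly drive count times per-drive speed. Illustrative per-drive figure, not a measurement:)

    drives = 11            # 10 data + 1 parity spinning together
    per_drive_mb_s = 110   # assumed average sequential read per drive

    print(f"~{drives * per_drive_mb_s / 1000:.1f} GB/s aggregate")  # ~1.2 GB/s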


  • Registered Users, Registered Users 2 Posts: 36,169 ✭✭✭✭ED E


    Easy way to bench it is to spin up a VM; the virtual network adapter should remove any bottleneck.


  • Registered Users, Registered Users 2 Posts: 7,180 ✭✭✭Serephucus


    Alright, I'm blaming that one on college projects, and lack of brain power at the end of the day...


  • Closed Accounts Posts: 5,017 ✭✭✭tsue921i8wljb3


    Has unRAID improved the write speed direct to the parity-protected array? I thought it was limited to ~30MB/s due to each write actually incurring four disk operations (read old data, read old parity, write new data, write new parity)? Hence the need for a cache drive.
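    (For anyone wondering where the ~30MB/s figure comes from: with that read/modify/write cycle, both the data drive and the parity drive have to read the old blocks and then write the new ones, waiting out platter rotation in between. A crude estimate with assumed figures, not a measurement:)

    # Naive estimate of parity-protected write speed under read/modify/write.
    raw_write_mb_s = 130   # assumed sequential write speed of one drive
    rmw_factor = 4         # read old data, read old parity, write new data, write new parity

    print(f"~{raw_write_mb_s / rmw_factor:.0f} MB/s")  # ~33 MB/s, in the ballpark quoted above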


  • Registered Users, Registered Users 2 Posts: 7,180 ✭✭✭Serephucus


    It's been far higher than that for me. When I was migrating data over from my last array I was pretty consistently getting around 100MB/s (as reported by the disk stats plugin) write. This is with TB+ size transfers.


  • Registered Users, Registered Users 2 Posts: 7,026 ✭✭✭Wossack


    Little bit out of the loop, but I believe all the network wizardry Linus helped them tweak and min/max is expected to be rolled into the 6.2.0 update (currently in an advanced beta stage AFAIK).

    Personally, I only get 100MB/s for the first 500-600 megs or so, then it drops to 50MB/s (those are writes to cache; halve those speeds for parity-protected writes). I do have fairly questionable network settings, however (a cursory glance through the driver attributes and disabling any buffering/acceleration stuff pretty much doubled my transfer speeds; could do with another look now, I think), and my cabling is somewhat suspect. Not so easily remedied, as the server resides in the attic, hah.


  • Closed Accounts Posts: 5,017 ✭✭✭tsue921i8wljb3


    Serephucus wrote: »
    It's been far higher than that for me. When I was migrating data over from my last array I was pretty consistently getting around 100MB/s (as reported by the disk stats plugin) write. This is with TB+ size transfers.

    On my HP Microserver running version 5 with no cache drive I max out at about 35MB/s for writes. Perhaps version 6 has improved this.


  • Registered Users, Registered Users 2 Posts: 7,180 ✭✭✭Serephucus


    AFAIK it can depend a lot on the filesystem you're using as well. If you're still on ReiserFS that could be it. (main array is on XFS, BTRFS for cache pool)


  • Closed Accounts Posts: 5,017 ✭✭✭tsue921i8wljb3


    Serephucus wrote: »
    AFAIK it can depend a lot on the filesystem you're using as well. If you're still on ReiserFS that could be it. (main array is on XFS, BTRFS for cache pool)

    I am on Reiser. I might consider upgrading. Thanks.


  • Registered Users, Registered Users 2 Posts: 36,169 ✭✭✭✭ED E


    Doing some fiddling to slow the fans (goofed: bought 2x reducers instead of 3, and not significant enough ones) when I found a bug in the system.

    [photo of an insect inside the case]

    As The Prodigy say, invaders must die!


    On another note: FlexRAID is junk. I've checked everything on the system and it still sporadically throws wild errors, making my parity data useless. If it weren't for CrashPlan I'd have dumped it all already, but a rebuild to unRAID will be a laborious task.


  • Registered Users, Registered Users 2 Posts: 7,180 ✭✭✭Serephucus


    You need... RAID

    [image of a can of Raid ant & roach spray]

    How many drives have you got? Because I did a gradual move to unRAID. Started with three drives (bought 2, used the parity drive from FlexRAID; not best practice, but hey, I needed a drive), copied a couple of drives at a time, then added them in, and kept going. Minus all the testing, the array was only out of commission for about three days.


  • Registered Users, Registered Users 2 Posts: 36,169 ✭✭✭✭ED E


    3x3TB for data, 1x3TB for parity, plus an auxiliary drive for backups and stuff that's Flex-incompatible. I'm about 80% full, so I'd probably buy another couple of 3TBs to move with. I've been waiting for prices to drop, but there's been very little movement in the spinning-disk market despite how SSDs are hammering them.


  • Registered Users, Registered Users 2 Posts: 7,180 ✭✭✭Serephucus


    If it's any use to you, I saw a 6TB WD Purple in CEX Tallaght the other day for €150. Sorely tempted to pick it up, but I don't physically have space for the damn thing. Move your data onto that and you're most of the way there.


  • Registered Users, Registered Users 2 Posts: 36,169 ✭✭✭✭ED E


    Just to add a little note in case anyone else has the same issue.

    The Marvell controllers have been known to be crap: can't keep up with SSDs, crashed on high load (now patched with a FW update), crappy support site, dodgy management software. The issue I had was drives appearing as SCSI disks even though they were clearly SATA-only. This causes problems as certain tools will ignore them or treat them as USB.

    Solution: drop the driver.
    [screenshot of the driver change]

    Now things work as they should. Another reason to yearn for more Intel ports.


    On another note: it's worth adding that the C2750D4I is likely to die. Don't copy this build.


  • Registered Users, Registered Users 2 Posts: 7,810 ✭✭✭Calibos


    I am the thread Resurrection and the light!! :D

    This might be the year I finally build my media server, after talking about it for years. Going to use 6 of the 9 hot-swap-bay caddies I bought years ago for an 18-bay media server in the Corsair Air 540 case, using a spare mITX H87 mobo, a Haswell i5-4590S and 8GB of DDR3. I also have a Windows 10 Home licence.

    Just need to buy a PSU, the Noctua NH-D9L cooler and the PCIe SAS/SATA card. I see prices of the 16-port versions have come down a lot since I last looked. I'd be using Windows Storage Spaces.

    Can anyone recommend a 16-port SAS/SATA card to me?

    http://www.ebay.com/bhp/16-port-sata?rmvSB=true



  • Registered Users, Registered Users 2 Posts: 7,026 ✭✭✭Wossack


    I use an IBM M1015, bought in 2014 for £104. You'd need two though, as they're only 2-port SAS cards (each port broken out to 4x SATA lanes).


  • Registered Users, Registered Users 2 Posts: 7,810 ✭✭✭Calibos


    Wossack wrote: »
    I use an IBM M1015, bought in 2014 for £104. You'd need two though, as they're only 2-port SAS cards (each port broken out to 4x SATA lanes).

    mITX mobo, so only 1 PCIe slot. That's why I'm looking at 16-port ones.


  • Registered Users, Registered Users 2 Posts: 7,882 ✭✭✭frozenfrozen


    Not that it would help you, Calibos, as you'd still need 2 PCIe slots, but Wossack, just in case you ever need more: in that situation it would be cheaper to get an HP SAS expander (less than €100, sometimes way less). You'd then have 32 ports to play with.


  • Registered Users, Registered Users 2 Posts: 7,810 ✭✭✭Calibos


    Should I pick up something like this new from the likes of Newegg:

    https://www.newegg.com/global/ie/Product/Product.aspx?Item=N82E16816118142

    Or can anyone see anything as suitable and cheaper on the eBay link?


  • Advertisement
  • Registered Users, Registered Users 2 Posts: 7,882 ✭✭✭frozenfrozen


    Calibos wrote: »
    Should I pick up something like this new from the likes of Newegg:

    https://www.newegg.com/global/ie/Product/Product.aspx?Item=N82E16816118142

    Or can anyone see anything as suitable and cheaper on the eBay link?

    What about an M1015 + Intel RES2SV240 (powered by Molex, not PCIe)?


  • Registered Users, Registered Users 2 Posts: 7,026 ✭✭✭Wossack


    Neat solution, that!

    Can't imagine me needing more than the 8 personally; something to consider though.


  • Registered Users, Registered Users 2 Posts: 462 ✭✭tazzzZ


    I really like that case setup... currently in need of a new case for a home server build. I keep looking at rackmount cases with like 20 hot-swap bays; I'm at the max HDD capacity of my current case. Or I'd love a standard case with 9 or 12 5.25" drive bays so I could put in 3x 5-bay hot-swap cages. Can't find anything perfect for my needs though.


  • Registered Users, Registered Users 2 Posts: 7,180 ✭✭✭Serephucus


    RE 16-port card: There's a bunch of LSI-based ones. Any 9201 design one should be grand.

    RE case: I've been in the same situation for a while. I finally ended up ordering a couple of extra HDD cages for an old Nanoxia case I have (DS1). I'll lose hot-swap, but I gain a tonne of capacity, so... eh. Compromises.

    There was a lovely case full of 5.25" bays, sound damping, the lot, but it's not made any more and I can't find one anywhere: the Nexus Prominent 9.

    After that you're into rackmount stuff, which I'll be moving to at some point, but has to wait until I'm out of my parents' place. :P


  • Registered Users, Registered Users 2 Posts: 462 ✭✭tazzzZ


    Yeah, I'm thinking of just going rackmount TBH. The one thing against it for me is losing the convenience of the hot-swap drive cages I was getting: literally just slide the HDD in and it works, no tray or anything. So nice to use.

    I'm also tempted to sell my whole server and start again with a Ryzen 7 build. Starting to get the odd thing in 4K H.265 that my processor (a 4790S) struggles with.


  • Registered Users, Registered Users 2 Posts: 36,169 ✭✭✭✭ED E


    Got two tasks for mine at the minute: first replace CrashPlan, and then get rid of FlexRAID.

    Been looking at BTRFS / ZFS / Storage Spaces / RAID cards with expansion support (UPS..?) / other. Lots of sorta-close options but nothing jumps out at me.


  • Registered Users, Registered Users 2 Posts: 7,882 ✭✭✭frozenfrozen


    Ed, FreeBSD ZFS and an LSI 9211-8i (or Dell PERC H200) flashed to IT mode, then the HP SAS expander from above, or a fancy case with a SAS expander backplane etc. Cheapest way to cost-effective, resilient storage, from my bits of researching this stuff for myself.

    I'm always looking for used enterprise gear, but nothing ever turns up here; in the USA they can get unbelievable gear, lightly used, for peanuts, and then run it on their nearly free electricity.


  • Registered Users, Registered Users 2 Posts: 36,169 ✭✭✭✭ED E


    If I was putting ZFS on top of the drives, then I could probably leave them on the Marvell controllers, no? (Through the DS380 backplane and maybe an eSATA extension cage.)

    Edit: Intel + good Marvell controller.


  • Registered Users, Registered Users 2 Posts: 7,180 ✭✭✭Serephucus


    I'd imagine so, though you'd probably want something better than eSATA going to the external box. Could grab one of the LSI cards with external SAS ports or something I suppose, and hook that up to a JBOD box with expander backplane.



    Out of curiosity: What are your guys' thoughts on ZFS vs. something like unRAID?

    The nerd in me wants to try ESXi with failover clustering, but then I've no idea what I'd use it for. It's kinda like that for ZFS: it looks lovely for the amount of stuff it does, but the lack of expandability is annoying. I know you can expand, but the mechanics require quite a bit of overhead and initial investment.


  • Registered Users, Registered Users 2 Posts: 36,169 ✭✭✭✭ED E


    Wouldn't really bother going super garage-enterprise unless you're angling for a sysadmin job soon. For me, I'm looking for something hands-off, as Flex has involved far too much interaction.


    Looks like CrashPlan was the only backup that would run on a server OS, so maybe the next setup is going to have to take that into account. Or an S3 bucket... ($0.0125 per GB, so $10.50/mo instead of 5-6...).
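    (Back-of-envelope on those numbers; the flat-rate figure below is an assumption based on the "5-6" above:)

    # At the quoted $0.0125 per GB-month, $10.50/mo corresponds to ~840GB stored.
    rate_per_gb_month = 0.0125
    flat_plan_per_month = 5.50   # assumed flat-rate alternative ("5-6" above)

    data_gb = 840
    print(f"bucket cost: ${data_gb * rate_per_gb_month:.2f}/mo")                          # $10.50
    print(f"break-even vs flat rate: {flat_plan_per_month / rate_per_gb_month:.0f} GB")   # ~440 GB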

