
Home Server/NAS build


Comments

  • Registered Users Posts: 7,882 ✭✭✭frozenfrozen


    There are just a million different options. Whatever I would go with would have to be something sysadmin-ish so I could write off the time and money as learning something. I spent a lot of time datahoarding and next to no time actually using the content..

    You could look at MergerFS + Snapraid as per linuxserver.io's home server 2017 guide
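For reference, that guide's MergerFS + SnapRAID layout boils down to a parity config plus a pooled mount. A minimal sketch, with all paths as placeholders you'd adapt to your own disks:

```
# /etc/snapraid.conf (sketch; paths are placeholders)
parity /mnt/parity1/snapraid.parity
content /var/snapraid/snapraid.content
content /mnt/disk1/snapraid.content
data d1 /mnt/disk1/
data d2 /mnt/disk2/
exclude *.tmp

# /etc/fstab line pooling the data disks with mergerfs
/mnt/disk* /mnt/storage fuse.mergerfs defaults,allow_other,use_ino,category.create=epmfs 0 0
```

You'd then run `snapraid sync` on a schedule; the parity is snapshot-style, not real-time like ZFS.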


  • Registered Users Posts: 7,179 ✭✭✭Serephucus


    Anyone heard much about EPYC or TR WS boards?

    Was kinda hoping to see something about an X399-WS from Asus, but nothing yet.


  • Registered Users Posts: 7,882 ✭✭✭frozenfrozen


    I was asking my sister's boyfriend (who is a sysadmin) what they do with their old gear, does it get recycled etc., and he said they're just after decommissioning some stuff and I can have whatever he's allowed to take.

    If it wasn't for this thread I wouldn't have asked him, so time will tell what free stuff I get. He said it's all stuff just out of warranty, so here's hoping it's at least Socket 2011-era stuff and not space heaters.


  • Registered Users Posts: 7,179 ✭✭✭Serephucus


    It'll most likely be R710s, if I had to guess. So Socket 2011 v1/2 stuff.


  • Registered Users Posts: 36,166 ✭✭✭✭ED E


    The homelab favourite. IIRC they can be spun quietly.


  • Registered Users Posts: 7,882 ✭✭✭frozenfrozen


    Just so as not to leave anyone hanging... turns out it's an IBM x3400 M3 from around 2012.

    Two 4GB DIMMs :(, an E5607 quad-core without Hyper-Threading :(

    Good points: there's an IBM ServeRAID M5014, which can be flashed with LSI firmware to act as an HBA, an 8-port SFF backplane, and a 4-port Intel gigabit network card. No drives, no caddies.


    Need to buy a €5 dongle if I want iKVM.

    Stuck two laptop drives in without caddies and a 3.5" drive into a 5.25" bay converter, and installed ESXi on a thumb drive.


    was messing around with VyOS earlier and now replicating my previous attempt at a tvheadend server...


    I need a power usage meter to see if I can leave this on 24/7, because TBH, although it's not at all powerful, it could easily run Plex / Tvheadend / Sonarr / Pi-hole, most of which is running on my PC 24/7 anyway...

    But it's essentially unupgradable at the moment. RAM is expensive, and I can't add a second CPU without finding a VRM module (€350+!), so I'll just hang onto it as-is for now.

    he did give me a 24 port dumb gig switch which was nice

    they probably had a deal with recyclers or something who took the good stuff... so I'll have to go back on the lookout for used enterprise stuff.


  • Registered Users Posts: 7,179 ✭✭✭Serephucus


    As this is sort of the unofficial home server thread...

    Recently got some 8TB drives from the states, and was all ready to hook them up with a PERC in a new case... Only to realise that I'd completely forgotten about the backplane in my DS380, and the fact that it powers 8 drives via 2 Molex connectors.

    So I need a new PSU. How many drives do you reckon I can chain from a connector? I have a total of 15 disks. Thinking of picking up a new PSU anyway (keeping the SFX build as a backup server). Current contender is an HX750i: 12 SATA and a bunch of Molex as well.

    Going for higher wattage as I'm hoping to move to Threadripper/EPYC down the line, possibly with GPUs as well.


  • Registered Users Posts: 36,166 ✭✭✭✭ED E


    The connector shouldn't really be the concern; it's the PSU's 12V rail that matters. And again, I suspect you'll only see problems at spin-up.

    Say a WD80EFZX is:
    - 12VDC ±10% (A, peak): 1.79
    - 5VDC ±10% (A, peak): -
    - Average power requirements (W): 6.4 read/write, 5.2 idle, 0.7 standby/sleep

    So the ST30SF (that's what you have, right?) is rated at 25A on the 12V rail, so it supports 13 drives (25 / 1.79 ≈ 13.97, so 13 would be very safe).

    Get the spec sheets for your actual disks, as some will draw more or less (my REs are probably 4A apiece :rolleyes: ).
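That headroom arithmetic is easy to sketch; the 25 A and 1.79 A figures are from the post above, and anything else you plug in should come from your own spec sheets:

```python
# How many drives can spin up at once on a PSU's 12 V rail?
# rail_amps: the rail's rating (e.g. 25 A for the ST30SF),
# peak_amps: per-drive peak spin-up draw from the spec sheet
#            (e.g. 1.79 A for a WD80EFZX).

def max_drives(rail_amps: float, peak_amps: float) -> int:
    """Whole number of drives whose combined peak draw fits on the rail."""
    return int(rail_amps // peak_amps)

print(max_drives(25.0, 1.79))  # -> 13  (25 / 1.79 ~= 13.97)
```

In practice you'd leave a margin below that number, since the CPU, fans, and the rest of the system share the same rail.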


  • Registered Users Posts: 7,179 ✭✭✭Serephucus


    Yup, 30SF is what I've got.

    Hmmm... Better having a new PSU so.

    HX750i with a bunch of SATA, or move my 600W Platinum unit over and get something new. I could almost run the SFX thing in my desktop as a stop-gap, except that it only has a 6-pin PCIe, and my card needs an 8-pin.

    The PSU in my desktop is a very good one, but at this point it's 6+ years old. Probably fine for a couple more years.

    All of my disks are either 4TB Reds (which use less), or the 8TBs.
    (The Easystores actually have a slightly different model number - WD80EMAZ - which I can't find specs on, but I'd assume they aren't hugely different).


  • Registered Users Posts: 36,166 ✭✭✭✭ED E


    WD2000FYYZ: They don't list the peak amperage....
    WD30EFRX: 1.79
    ST1000DM000: 2.0

    Why would the datacentre drives not list it when the SME/SOHO ones do? Strange. Would like to know what headroom I have.


  • Registered Users Posts: 7,179 ✭✭✭Serephucus


    Ironwolf (Pro): 1.8A (2.0A) startup, 5-7.8W. Not too bad for the 10TB drives.

    HX750i ordered. Test-fitted a couple of drives in my Nanoxia case, and man is it going to be a pain. Really tempted to just get a 4U case and be done with it.


  • Registered Users Posts: 7,882 ✭✭✭frozenfrozen


    I'm looking for many 3.5" drive hot-swap bays too, but direct-attach: I don't need another system running 24/7.

    I have found 4TB SFF drives, STDR4000100. Now, I know they're 5400 RPM, but at €33 per TB you get into the huge selection of SFF backplanes, which are dirt cheap. I could add another 8 SFF hot-swap bays to the IBM tower for €20 or so.

    I'll have to do a little spreadsheet or something with all the costs: per TB, per slot, per caddy, etc.

    But with 10TB drives, all the 12-bay LFF chassis like the Dell R510 could be good. That is, if those backplanes even support disks that big? I've no idea.

    The 10TB disks are expensive, but not so bad when cost per TB comes into it at many, many disks, I suppose.
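That per-TB comparison spreadsheet can be sketched quickly; every price below is a made-up placeholder, not a quote:

```python
# Euro-per-TB once the caddy and the chassis slot are counted in.
# All inputs are placeholders; substitute real prices.

def cost_per_tb(drive_eur: float, tb: float,
                caddy_eur: float = 0.0, slot_eur: float = 0.0) -> float:
    """(drive + caddy + share of a hot-swap slot) / capacity in TB."""
    return (drive_eur + caddy_eur + slot_eur) / tb

# Hypothetical: 4TB SFF drive at EUR 132 vs 10TB LFF at EUR 320 plus a EUR 10 caddy
print(cost_per_tb(132, 4))                 # -> 33.0
print(cost_per_tb(320, 10, caddy_eur=10))  # -> 33.0
```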


    Can you stagger disks spinning up on startup? There must be a way...


  • Registered Users Posts: 7,179 ✭✭✭Serephucus


    For disk spin-up, yeah. There's usually a BIOS option to stagger it.

    The Ironwolf drives are really good value. 10TB not so much, but the 8TBs in comparison to the Reds are really good. You can get 5-year warranty for not a lot more than the 3-year Red.


  • Registered Users Posts: 7,882 ✭✭✭frozenfrozen


    Serephucus wrote: »
    For disk spin-up, yeah. There's usually a BIOS option to stagger it.

    The Ironwolf drives are really good value. 10TB not so much, but the 8TBs in comparison to the Reds are really good. You can get 5-year warranty for not a lot more than the 3-year Red.

    Would staggering stop you needing a new PSU? Have the load spread out over 30 seconds rather than all at once.


  • Registered Users Posts: 7,179 ✭✭✭Serephucus


    Possibly, but it would be really close.

    Also, if I do any kind of hardware upgrade, I'd probably need a new one anyway. Currently using a 20W SoC, so basically anything would push me over the limit.


  • Registered Users Posts: 36,166 ✭✭✭✭ED E


    Let's be honest here Sere, you just want to buy new toys. No need to make excuses :pac:

    The spin-up would be controlled by the PERC in your case, so it'd be up to its FW to support a staggered start. Only ever played with LSI RAID/HBAs so not sure if that's offered or not.


  • Registered Users Posts: 7,179 ✭✭✭Serephucus


    ED E wrote: »
    Let's be honest here Sere, you just want to buy new toys. No need to make excuses :pac:

    The spin-up would be controlled by the PERC in your case, so it'd be up to its FW to support a staggered start. Only ever played with LSI RAID/HBAs so not sure if that's offered or not.

    Ssssshhhhh!

    I'll be flashing the PERC to LSI FW anyway. Not actually sure if the PERC supports staggered start. I should know that...

    I have the 8TB drives preclearing over USB at the moment, so at least they'll be ready to go when the new PSU gets here.

    Anyone heard any more on EPYC? It was announced (it feels like) ages ago, but I haven't seen anything of retail availability, never mind the motherboards. Ideally I'd like ASUS to step up and make a WS/IPMI X399 board, but I don't know if something like that is in the works or not.


  • Registered Users Posts: 36,166 ✭✭✭✭ED E


    Still haven't made a move on this one but really should.

    - H800 seems a little limiting for €150+, won't spend more on a card I don't think
    - BTRFS > ZFS (probs) but it's still a bit unstable, or is it? Nobody can agree. Also my *nix isn't very strong, shame on me, so that doesn't inspire huge confidence.


  • Registered Users Posts: 7,882 ✭✭✭frozenfrozen


    ED E wrote: »
    Still haven't made a move on this one but really should.

    - H800 seems a little limiting for €150+, won't spend more on a card I don't think
    - BTRFS > ZFS (probs) but it's still a bit unstable, or is it? Nobody can agree. Also my *nix isn't very strong, shame on me, so that doesn't inspire huge confidence.

    Absolutely no way with btrfs; run ZFS. It's not even up for debate anywhere I've looked: btrfs is a really interesting way to lose data. ZFS is just slow without an L2ARC and uses lots of RAM if you want data deduplication, and you need to add new drives in sets rather than being able to expand drive by drive.

    I'm all talk when it comes to that stuff, I have jbod of loads of drives and any important files are just manually copied to several places.. :cool:
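For a rough sense of the dedup RAM cost mentioned above: a common rule of thumb (not an official figure) is that ZFS's dedup table needs on the order of 320 bytes of RAM per unique block, so the bill depends heavily on block size:

```python
# Rough ZFS dedup-table RAM estimate (rule of thumb: ~320 B per unique block).
# With the default 128 KiB recordsize this works out to ~2.5 GiB of RAM per
# TiB of deduped data; smaller blocks make it dramatically worse.

def dedup_ram_gib(data_tib: float, recordsize_kib: int = 128,
                  bytes_per_entry: int = 320) -> float:
    """Estimated dedup-table size in GiB for data_tib TiB of unique data."""
    blocks = data_tib * 2**40 / (recordsize_kib * 1024)
    return blocks * bytes_per_entry / 2**30

print(dedup_ram_gib(10))                    # -> 25.0  (GiB, 128 KiB blocks)
print(dedup_ram_gib(10, recordsize_kib=8))  # -> 400.0 (GiB, 8 KiB blocks)
```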


  • Registered Users Posts: 36,166 ✭✭✭✭ED E


    At least I use Synctoy (I know some have had issues with it, fine for me).

    [screenshot]

    Leo on today's SN told the story of them getting a fake HDD; never would have even imagined that. A 3rd-party Amazon seller taking ****e drives and putting new stickers on them to sell at a premium price. "Product of Thailand" was the giveaway when it gave trouble.

    *looks suspiciously at own RE disks*


  • Registered Users Posts: 36,166 ✭✭✭✭ED E


    Sorry to keep bumping this, but as an FYI:

    BMC crashed on the 2750; had to pull out a monitor etc. to get into the BIOS to see it, then do a cold boot to get it back. Frustrating.


  • Registered Users Posts: 7,179 ✭✭✭Serephucus


    Weird. Haven't had that happen (yet)

    Did have a recent Java version update break the console viewer though. Rolled back to 8u121 and it works again.


    Still waiting - patiently - for a TR board with IPMI, or EPYC availability...


  • Registered Users Posts: 36,166 ✭✭✭✭ED E


    What the...thought she was dead
    FlexRAID RAID-F Refresher Release (2017-11-21)


  • Registered Users Posts: 7,179 ✭✭✭Serephucus


    I was very surprised too! Interested to see what the "major release" contains. I mean, I won't be using it, but I'm still curious. :P


  • Registered Users Posts: 36,166 ✭✭✭✭ED E


    DO NOT UPDATE YOUR IPMI VIA CHROME

    [screenshot]

    Works fine in FF.


  • Registered Users Posts: 7,179 ✭✭✭Serephucus


    Nice one! I meant to post this the last time I had to update things. I won't say how many times I tried the update until I tried it in FF...

    Also, that new FlexRAID release didn't seem to be anything groundbreaking. The only thing that wasn't explained was FlexRAID Standards, whatever the hell that is.


  • Registered Users Posts: 36,166 ✭✭✭✭ED E


    My install was doing all sorts of strange things (disabling any disk inserted in Bay 5, but no others). Waiting to see if it's less unstable now before I try the latest build.

    The SMB fix is a biggie (so fúckin annoying) but the rest is still pants. Possibly thinking StableBit DP next, but really not sure. New RAM was returned DOA, so I can't go ZFS etc. just yet.

    Decisions decisions.


  • Registered Users Posts: 7,179 ✭✭✭Serephucus


    No unRAID love?


  • Registered Users Posts: 36,166 ✭✭✭✭ED E


    It's in the spreadsheet


  • Registered Users Posts: 7,179 ✭✭✭Serephucus


    Depends what you're going to be doing I guess.

    I've been thinking ESXi for a while, mainly because of some of the really cool VM stuff (pretty similar to what you can do with Server 2016), but that would mean a separate VM for Docker, and unRAID's Docker is just so feckin' simple to use, it'd be a shame to give that up.

    So my next task will be trying to set up an ESXi VM in unRAID. Backwards to how most people would do it, but hey, I'm a glutton for punishment.

