NAS for Dummies

  • 28-05-2019 4:10pm
    #1
    Registered Users, Registered Users 2 Posts: 7,759 ✭✭✭


    Hi folks, so I'm getting my brain in gear to move from local hdd storage, to a NAS. What I'm looking to do is to buy a NAS, a couple of drives, and load it up with all my media - and for that media to be fed to a couple of consumer devices in the house like x2 Nvidia Shields, an Xbox, etc. I don't envisage any more than x2 people consuming media at the same time. I've a few questions first, if anyone would be kind enough to chime in and enlighten me :o

    1) I'm looking at the Synology DS918+ as the NAS itself. To populate it, I'm mulling between x2 12TB Ironwolves to start with, or perhaps x4 8TB Ironwolves. Not fully decided yet, but my question relates to drive volumes. Let's say I go with x4 8TB drives...When I install the four drives, format them, and create a volume...does the NAS itself then handle that volume automatically, i.e., will it see all four drives as one volume and automatically spread any files/content across that volume accordingly? Or would I have to place files/folders manually where they're supposed to go, in terms of drive a/b/c or d?

    2) I'm going to need redundancy, so am considering RAID 1. This will cut my storage capacity in half I gather, as content is essentially mirrored between drives (what's on drive a is mirrored onto drive b). How does RAID 1 work though when there are more than x2 drives? Let's say again I choose x4 drives, with a RAID 1 setup - drive a gets mirrored onto drive b, and drive c is mirrored onto drive d? Will the NAS handle all this automatically?

    3) Transcoding - I've zero use for accessing the NAS from outside of the LAN, and zero need/desire to access the NAS from devices such as phones/small screens. I'm therefore wondering about transcoding. I don't want any quality loss when using the NAS versus local drives in a PC...I want full quality, every time I use it. Considering that, do I absolutely need to use transcoding? At the moment 4K content is not something I'll bother with, so consider the bandwidth needed for sending 1080p content through the LAN for max x2 people at the same time. Is this achievable with cat5e/gigabit LAN ports, with no transcoding??

    Feel free to educate me, because while I'm very comfortable with PC hardware/building/upgrades etc., I'm not that clued in when it comes to networking/RAID etc. Cheers folks
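
    On the bandwidth side of question 3, a back-of-envelope check (the per-stream bitrate is an assumption - measure your own files with MediaInfo or ffprobe):

        # Can gigabit Ethernet carry two full-quality 1080p streams?
        GIGABIT_MBPS = 1000        # line rate of a gigabit port
        USABLE_MBPS = 940          # rough allowance for TCP/IP overhead

        stream_mbps = 40           # assumed worst-case 1080p remux bitrate
        streams = 2

        needed = stream_mbps * streams
        print(f"Need ~{needed} of ~{USABLE_MBPS} Mbit/s usable "
              f"({needed / USABLE_MBPS:.0%} of the link)")
        # -> Need ~80 of ~940 Mbit/s usable (9% of the link)

    Even two worst-case streams use under a tenth of a gigabit link, so on these assumptions no transcoding is needed for bandwidth reasons.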



Comments

  • Registered Users, Registered Users 2 Posts: 30,123 ✭✭✭✭Star Lord


    I can't help with the NAS information, but regarding RAID, as you're mirroring, if you want to up read/write speed you could use RAID 10, which combines the mirroring of RAID 1 with the striping of RAID 0. Essentially, with the 4-disk setup it'll see it as one drive the size of two of the disks. The data is written across disks 0 and 2, and those are mirrored onto disks 1 and 3.

    [Image: diagram of a RAID 10 layout - striped pairs of mirrored disks]
    Image source: https://www.thegeekstuff.com/2010/08/raid-levels-tutorial

    That gives you both speed and resilience, so you could theoretically lose one disk from each of the mirrored pairs and continue to operate.

    Haven't looked up the NAS or the disks you're looking at, but make sure they're hot-swappable so you can replace and let them rebuild without needing any downtime. Just make sure you don't remove more than one disk at a time, and let the RAID rebuild complete prior to removing/replacing any other disks, to ensure that everything stays running and you don't lose data.
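
    To put numbers on the capacity trade-offs, a rough sketch using the usual rules of thumb for equal-sized disks (real volumes lose a little more to filesystem overhead):

        def usable_tb(level: str, n: int, s: float) -> float:
            """Usable TB from n disks of s TB each."""
            rules = {
                "raid0":  n * s,        # striping only, no redundancy
                "raid1":  s,            # n-way mirror: one disk's worth
                "raid5":  (n - 1) * s,  # one disk's worth of parity
                "raid6":  (n - 2) * s,  # two disks' worth of parity
                "raid10": n * s / 2,    # striped mirrors (even n >= 4)
            }
            return rules[level]

        for level in ("raid0", "raid1", "raid5", "raid6", "raid10"):
            print(f"{level:7s}: {usable_tb(level, 4, 8):4.1f} TB from 4 x 8 TB")
        # raid10 -> 16.0 TB, matching "the size of two of the disks" above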


  • Registered Users, Registered Users 2 Posts: 1,561 ✭✭✭Mizu_Ger


    Inviere wrote: »
    1) I'm looking at the Synology DS918+ as the NAS itself. To populate it, I'm mulling between x2 12TB Ironwolves to start with, or perhaps x4 8TB Ironwolves. Not fully decided yet, but my question relates to drive volumes. Let's say I go with x4 8TB drives...When I install the four drives, format them, and create a volume...does the NAS itself then handle that volume automatically, i.e., will it see all four drives as one volume and automatically spread any files/content across that volume accordingly? Or would I have to place files/folders manually where they're supposed to go, in terms of drive a/b/c or d?

    I have mine (also a DS918+) set as JBOD (Just a Bunch Of Disks), so no RAID protection. I have 4 HDDs in mine and the NAS just makes the drives look like one big volume.


  • Registered Users, Registered Users 2 Posts: 13,518 ✭✭✭✭kowloon


    Once you set it up as a single volume the enclosure will handle the distribution. You won't need a fancy array to stream video. A warning: don't use RAID as backup - if the enclosure goes, you could lose everything.


  • Registered Users, Registered Users 2 Posts: 4,197 ✭✭✭smuggler.ie


    NAS devices, most of the time in a RAID 1 config, are often used for backup.

    The RAID configuration is stored on the disk set itself.

    If the current NAS configuration is backed up (Synology, like many others, has this feature) and available, it can be loaded onto a new same/similar-model NAS box from the same manufacturer. In theory this should recognise the RAID straightaway.

    In the event of a disaster, provided the disks are OK, data can still be recovered; however, it's a lengthy and convoluted process, especially on big-capacity volumes. Probably that's why it's called "disaster recovery" :D

    Recovering Data from a Failed Synology NAS

    How can I recover data on my Synology NAS using a PC


  • Registered Users, Registered Users 2 Posts: 740 ✭✭✭z0oT


    An alternative to RAID 1 is to use sync software to sync the contents of one drive onto the other, and to have it run in the background as a scheduled task.

    The end result is the same as RAID 1, and that way you can swap drives in and out of the system totally independently of any motherboard.

    I've always preferred the sync method for that reason, but I haven't run RAID in a very long time.


  • Registered Users, Registered Users 2 Posts: 4,197 ✭✭✭smuggler.ie


    RAID 1 has high read performance, as data can be read off any of the drives in the array. RAID (except 0) provides high availability. It should not be treated as backup.

    Sync is more like a backup solution, without the performance or resilience gain.


  • Registered Users, Registered Users 2 Posts: 36,169 ✭✭✭✭ED E


    smuggler.ie wrote: »
    NAS devices, most of the time in a RAID 1 config, are often used for backup.

    NO. RAID 1 is not backup.


  • Registered Users, Registered Users 2 Posts: 36,169 ✭✭✭✭ED E


    Just had a whole post typed out and then hit CTRL-R not CTRL-T....eugh. Why do I post late at night.....


    In brief, there have been some horror stories with Syn where mobos die and the array is lost, where that shouldn't be the case. If you lose an LSI card you just buy it again and import the array. Syn like-for-like replacement hasn't always worked like it should.



    For you:
    RAID 0: Hell no
    RAID 1: Not applicable
    RAID 5: No no no no no no
    RAID 6: No no no no
    RAID 10: Meh, ok.

    AFAIK the Syn won't handle a R1 to R10 expansion so you'll have to buy all your disks at the start. Put in 32TB RAW get 16TB to use. Fairly tolerant.




    Transcoding won't be for bandwidth reasons for you, it'll be for codec support. If you've HEVC content (becoming mainstream, iPhones now default to HEVC) and, say, an older tablet, it won't play the video period without it being transcoded. This is CPU intensive and requires a fair bit of heft. Your Shields play HEVC natively so won't be a problem. Check out the rest of the devices to see if that'll be an issue.
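
    A quick way to see which codecs your library actually contains, so you can check them against what each client decodes natively - a sketch assuming ffprobe (part of FFmpeg) is installed, with a hypothetical /mnt/media path standing in for your share:

        import subprocess
        from pathlib import Path

        MEDIA_DIR = Path("/mnt/media")   # placeholder path - use your share

        def video_codec(path: Path) -> str:
            """Return the codec name of the first video stream."""
            out = subprocess.run(
                ["ffprobe", "-v", "error", "-select_streams", "v:0",
                 "-show_entries", "stream=codec_name",
                 "-of", "default=noprint_wrappers=1:nokey=1", str(path)],
                capture_output=True, text=True)
            return out.stdout.strip() or "unknown"

        for f in sorted(MEDIA_DIR.rglob("*.mkv")):
            print(f"{video_codec(f):8s} {f.name}")   # e.g. "hevc  Film.mkv"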


  • Moderators, Society & Culture Moderators Posts: 15,778 Mod ✭✭✭✭smacl


    ED E wrote: »
    In brief, there have been some horror stories with Syn where mobos die and the array is lost, where that shouldn't be the case. If you lose an LSI card you just buy it again and import the array. Syn like-for-like replacement hasn't always worked like it should.

    Useful to know. For a small-office data storage and backup solution I've had a few lower-end NAS systems that have never been any great shakes, and am thinking of replacing them all with a storage server. I'm also running an older PowerEdge, mainly for keeping an ancient but much-loved CRM system ticking over, that has been running without any issues for years.

    Outside of space and possibly cost, does NAS offer any advantages over a small dedicated server?


  • Registered Users, Registered Users 2 Posts: 4,197 ✭✭✭smuggler.ie


    ED E wrote: »
    NO. RAID 1 is not backup.

    RAID is not backup, but a NAS in RAID 1 is often used as backup storage.


  • Registered Users, Registered Users 2 Posts: 7,759 ✭✭✭Inviere


    Wow, plenty of info there folks, many thanks. Glad I asked now, as this seems a bit more involved than I'd thought!
    Mizu_Ger wrote: »
    I have mine (also a DS918+) set as JBOD (Just a Bunch Of Disks), so no RAID protection. I have 4 HDDs in mine and the NAS just makes the drives look like one big volume.

    Interesting, cheers. Can you tell me a bit about JBOD, i.e., is it basically one big volume spread across all the disks? Does it provide redundancy at all?
    kowloon wrote: »
    A warning: Don't use RAID as backup, if the enclosure goes you could lose everything.
    smuggler.ie wrote: »
    The RAID configuration is stored on the disk set itself.

    If the current NAS configuration is backed up (Synology, like many others, has this feature) and available, it can be loaded onto a new same/similar-model NAS box from the same manufacturer. In theory this should recognise the RAID straightaway.

    Very interesting, I never knew that. So if the NAS fails, and the replacement doesn't like the RAID config stored on the existing disks, the volume is potentially lost? God that'd kill me, I've about 6/7TB of media that has taken a LONG time to curate. I absolutely need redundancy, it's the top priority.
    z0oT wrote: »
    An alternative to RAID 1 is to use sync software to sync the contents of one drive onto the other, and to have it run in the background as a scheduled task.

    The end result is the same as RAID 1, and that way you can swap drives in and out of the system totally independently of any motherboard.

    I've always preferred the sync method for that reason, but I haven't run RAID in a very long time.

    This sounds like it might be the way to go. I presume then within the NAS, you'd need x2 separate volumes? I.e., one for the media say on two disks, and the other volume would be x2 backup disks?

    Sorry for any silly questions guys, this is a schoolday for me in terms of the context of the thread, so I really do appreciate the help :)


  • Registered Users, Registered Users 2 Posts: 36,169 ✭✭✭✭ED E


    smacl wrote: »
    Outside of space and possibly cost, does NAS offer any advantages over a small dedicated server?

    A dedicated box can be useful if there's no admin on site. Even a monkey could swap a dead disk and let it rebuild.


  • Registered Users, Registered Users 2 Posts: 13,518 ✭✭✭✭kowloon


    I have my data on the NAS for 24-hour access, backed up with sync software to a desktop with a bunch of drives in it. I would never have both copies of my data attached to the same machine, as I don't want anything taking out both copies. You take that risk if you have all your drives in the same NAS enclosure, regardless of which RAID type you use. If it's movies and music and the likes you don't need an array; the enclosure will have no trouble keeping up with streaming over wi-fi.


  • Registered Users, Registered Users 2 Posts: 7,759 ✭✭✭Inviere


    So guys, does the following hypothetical scenario make sense (keeping in mind I've none of the hardware here/bought yet, and this is from what I've learned thus far)...

    Synology DS918+
    x4 8TB Ironwolf drives

    I would create x2 separate volumes: one volume would be media storage on drives 1 and 2 (JBOD volume 1), the other volume would be a mirror/backup on drives 3 and 4 (JBOD volume 2). I would not use a RAID solution, but a scheduled sync task to automate the process of backing up the contents of drives 1 & 2 onto drives 3 & 4. This means that because there's no RAID implemented, in the event of a drive/NAS failure I'd have no issues using either a new/replacement NAS or a new drive. I'd have 16TB of storage, and 16TB to act as a backup.

    ^^ make sense?


  • Registered Users, Registered Users 2 Posts: 7,180 ✭✭✭Serephucus


    smacl wrote: »
    Outside of space and possibly cost, does NAS offer any advantages over a small dedicated server?

    It depends exactly what you're asking here. Traditionally, a NAS is just a box of hard drives with the minimum supporting hardware to get the files on the network. So if you're talking about one of the prebuilt solutions - Synology, WD, etc. - then sure:

    They're smaller, quieter, and there's often a lot less configuring with them. You put the drives in, create your storage and you're done.
    You pay for this convenience through the nose though; a 4-bay Synology is over €550!
    Inviere wrote: »
    Interesting, cheers. Can you tell me a bit about JBOD, ie, is it basically one big volume spread across all the disks? Does it provide redundancy at all?

    JBOD = Just a bunch of disks, and it's exactly what it sounds like. It means there's no RAID config on top. The disks are just given to you as-is, seen by Windows as one big volume.

    Inviere wrote: »
    So guys, does the following hypothetical scenario make sense (keeping in mind I've none of the hardware here/bought yet, and this is from what I've learned thus far)...

    Synology DS918+
    x4 8TB Ironwolf drives

    I would create x2 separate volumes: one volume would be media storage on drives 1 and 2 (JBOD volume 1), the other volume would be a mirror/backup on drives 3 and 4 (JBOD volume 2). I would not use a RAID solution, but a scheduled sync task to automate the process of backing up the contents of drives 1 & 2 onto drives 3 & 4. This means that because there's no RAID implemented, in the event of a drive/NAS failure I'd have no issues using either a new/replacement NAS or a new drive. I'd have 16TB of storage, and 16TB to act as a backup.

    ^^ make sense?

    I'm not familiar with exactly what configs Synology allows, but assuming the software will let you do this, then yes, that's a perfectly fine way to do things* (see end)

    I'm going to muddy the waters on you a little now though, and suggest you have a look at the software I'm using on my own home server. It's a little more advanced than what you're looking at now, but as you're having a school day, you might as well make the most of it.

    unRAID

    It's a Linux-based NAS OS. Don't let the L-word scare you! It's designed so that people who don't know anything about Linux can use it. You should never have to look at a command line while using this.

    In brief: it uses software RAID to manage the disks, so there's no hardware controller to have incompatibility worries with. If something has a problem you just replace it and you're good to go. I've personally moved my server's disks across three totally different machines with no issues (yay compulsive upgrading). It just boots up and goes.

    Here's a good video on RAID in general, and how unRAID works:

    [Embedded video]


    The main reason I'm thinking unRAID might be good for you:
    In a traditional RAID, if you lose more disks than your setup allows for, the whole array is gone. With unRAID (and some other software RAID solutions), you only lose the data on that disk, everything else is still there.
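
    To make the parity idea concrete, a toy sketch of single-disk parity, the mechanism behind unRAID's dedicated parity drive (real implementations work block-by-block across whole disks):

        def xor_bytes(blocks):
            """XOR equal-length byte blocks together."""
            out = bytearray(len(blocks[0]))
            for block in blocks:
                for i, b in enumerate(block):
                    out[i] ^= b
            return bytes(out)

        disk1 = b"media files A"
        disk2 = b"media files B"
        parity = xor_bytes([disk1, disk2])   # the parity disk holds the XOR

        # disk2 dies: rebuild it from the surviving disk and parity
        rebuilt = xor_bytes([disk1, parity])
        assert rebuilt == disk2

    Any one lost disk can be rebuilt this way; lose two at once and only the files on those two disks are gone, since each data disk still holds plain files.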

    Also, with how much that bloody Synology costs, you could build two different servers and back one up to the other.

    * The config you mentioned protects against disk failures, but not everything. Ideally you'd want that backup in a different box, or better yet a different location. Your setup doesn't help if your house gets broken into, or if there's a power surge or something that wrecks the NAS. They're unlikely, sure, so it depends how important the data is to you, etc.


    Giant wall of text for thought. Enjoy!


  • Registered Users, Registered Users 2 Posts: 4,197 ✭✭✭smuggler.ie


    Like everything else, you gain something at the cost of losing something else.
    It's your call; do more research on JBOD, and sorry if I add more confusion to the matter.

    The general rule for backup is to store it on a separate, independent device, evaluating fire/flood/theft/other risks - different room/building/geolocation (which might not apply to you).

    One of the benefits of JBOD is the ability to expand "on the fly" and still present it as a single volume to the OS/app, usable down to the last KB. There's no gain in performance or resilience - it behaves like an individual drive. Actually, I think performance would drop to that of the slowest drive if you introduce a "slower" one.

    In this setup you occupy all the drive slots on the NAS and "kill two rabbits with one shot" - data and backup "in one basket", and no room for expansion.

    Another thing to consider: JBOD, as the name suggests, doesn't show you (or the OS/app) which folder/file is stored on which disk. Depending on folder structure, that might hurt you somewhat in the event of one or more drives failing.
    Let's say your two-disk JBOD array (containing Pictures, Documents, Video, TV Shows, etc. folders) is full. You add another disk and have more room for storage without needing to move/reallocate anything - great! However, data is now stored/spanned onto a different drive. If one drive fails you lose the part of the data stored on that drive, and recovery can be convoluted, as over time data blocks may have been spanned over multiple disks. You could rebuild the whole 16TB (or more) "JBOD set", but that would be lengthy and you'd have no access to any data during the rebuild.

    RAID (except 0) has the ability to rebuild itself; due to lack of knowledge/experience I can't comment on this for JBOD. Once again: RAID is availability (at reduced performance during a rebuild), not a backup solution!

    If it doesn't matter how many volumes you present to the OS/app, you could present the disks as separate drives containing different folders. In the event of a drive failure you just install a new drive and copy/restore the data from backup, drive to drive. Same performance. The downside, as an example: when the drive containing the "Video" folder fills up, you'd need to create "Video2" on another/new drive.

    Other opinions welcome; some interesting ones above.
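
    A toy model of what JBOD/linear spanning does under the hood, assuming simple end-to-end concatenation (illustrative only - real implementations allocate in blocks):

        DISKS = [8, 8, 8]   # three 8 TB disks glued into one 24 TB volume

        def locate(offset_tb: float):
            """Map a volume offset to (disk index, offset within that disk)."""
            for i, size in enumerate(DISKS):
                if offset_tb < size:
                    return i, offset_tb
                offset_tb -= size
            raise ValueError("offset beyond end of volume")

        print(locate(5))     # (0, 5)   -> still on the first disk
        print(locate(17.5))  # (2, 1.5) -> data here vanishes if disk 3 dies

    The volume hides this mapping from you, which is exactly why losing one member disk loses an unpredictable slice of the files.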


  • Registered Users, Registered Users 2 Posts: 2,032 ✭✭✭colm_c


    Don't get Ironwolf drives; I got three last year and all failed within 30 days.

    The WD Reds I replaced them with are much more reliable.

    Also +1 on unraid.


  • Registered Users, Registered Users 2 Posts: 740 ✭✭✭z0oT


    Inviere wrote: »
    This sounds like it might be the way to go. I presume then within the NAS, you'd need x2 separate volumes? I.e., one for the media say on two disks, and the other volume would be x2 backup disks?
    The way I do it on my Plex/File Server is this:

    I've a total of 9 drives connected to the motherboard: 6 to the onboard SATA ports and a further 3 to a PCIe SATA card. I plan to expand to a full 10 x 10TB NAS drives in time.

    4 of those drives are the same size. On each pair I use sync software to sync the contents of one drive onto the other (Drive 1 is sync'd to Drive 2, Drive 3 is sync'd to Drive 4). You could add additional backup drives if you like, but you would reduce the amount of storage available to you in each case.

    I have Drives 1 & 3 shared on the network so I can read/write stuff to them from other machines connected to the local network at home. The sync software is set to run in the background as a scheduled task on the server, so anything I copy to the shared drives is automatically backed up to the backup drives in the background.

    FreeFileSync is what I use. It works well.

    I'm not saying this is better or worse than RAID, but I just prefer the simplicity of this setup.

    I can swap drives in and out of the machine independently of the motherboard, and in the event of one drive in a pair dying, the sync software will re-sync the contents of the surviving drive onto its replacement.

    I've cloud storage added to the mix as well for additional backups of essential stuff. External drives are good to have as well.
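
    For a sense of what such a scheduled sync job does, a minimal one-way mirror in Python (standing in for FreeFileSync; illustrative only - it doesn't delete orphaned files or handle errors, and the mount points are placeholders):

        import shutil
        from pathlib import Path

        SOURCE = Path("/mnt/drive1")   # hypothetical mount points
        BACKUP = Path("/mnt/drive2")

        def mirror(src: Path, dst: Path) -> None:
            """Copy new or newer files from src into dst, keeping layout."""
            for item in src.rglob("*"):
                target = dst / item.relative_to(src)
                if item.is_dir():
                    target.mkdir(parents=True, exist_ok=True)
                elif (not target.exists()
                      or item.stat().st_mtime > target.stat().st_mtime):
                    target.parent.mkdir(parents=True, exist_ok=True)
                    shutil.copy2(item, target)   # copy2 keeps timestamps

        mirror(SOURCE, BACKUP)   # run from a scheduled task, as described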


  • Registered Users Posts: 24 bumders


    There are so many ways to do this.

    1. Depending on how good your broadband is, you could do a backup to the cloud weekly.
    2. Get a tape drive and software to back up the NAS device on a weekly basis (I know, I know, tapes are becoming obsolete).


    Don't rely on a single device for anything!!! I prefer having my backups completely off the network and separate.


  • Registered Users, Registered Users 2 Posts: 7,759 ✭✭✭Inviere


    Serephucus wrote: »
    I'm going to muddy the waters on you a little now though...Giant wall of text for thought. Enjoy!

    Fantastic, many thanks for the info. I'd prefer muddy waters during this phase, rather than dropping a large amount of money to go in the wrong direction afterwards :)

    I'm going to watch that video now & read up on unRaid - sounds like a definite alternative, even better if I can make some savings in doing so!

    Will switch to WD Red's too, thanks colm...I asked this in another thread (which vendor to go with) & didn't get a ton of replies. I've always used WD's anyway, and have been/am wary of Seagate. I thought the Ironwolves were solid, but obviously not in your case!

    I'll be back with more head wrecking questions over the weekend folks, but I'm chuffed with the info so far, thanks guys :cool:


  • Registered Users, Registered Users 2 Posts: 13,518 ✭✭✭✭kowloon


    colm_c wrote: »
    Don't get ironwolf drives, I got three last year, all failed within 30 days.

    The WD Red I replaced them with are much more reliable.

    Also +1 on unraid.

    I currently have four of them running with no issues; two of them are relatively new and the other two have been running well over a year. I've never had a WD drive fail either. What I have had happen to me twice was a machine blowing - a desktop and a NAS enclosure, on separate occasions - and taking the data with it. A backup is there to save the data in case of some sort of unforeseen event; the only accident a RAID array saves you from is individual drives failing. Having both sets of your data in one machine means any other type of failure puts the original and the backup at risk. If you are determined to stick four disks into one enclosure, you may as well put them all in one array, as the chance of two drives failing at once - the only thing you're protected from - is pretty slim.


  • Registered Users, Registered Users 2 Posts: 7,180 ✭✭✭Serephucus


    Inviere wrote: »
    Will switch to WD Red's too, thanks colm...I asked this in another thread (which vendor to go with) & didn't get a ton of replies. I've always used WD's anyway, and have been/am wary of Seagate. I thought the Ironwolves were solid, but obviously not in your case!

    I'll be back with more head wrecking questions over the weekend folks, but I'm chuffed with the info so far, thanks guys :cool:

    Feel free to fire away with the questions. The dumb one is the one you don't ask.

    unRAID can do a lot of stuff other than just data storage, but that's always been their core, and it's not going away. Lots of people use it and only do what you're doing, and it works excellently for that.

    As for Ironwolfs (Ironwolves?)... I dunno. I haven't used them personally, and I know Seagate got a really bad rep from their 3TB Barracudas years ago that had a bad run, but the failure rates these days don't seem to be any different from WD's, going by Backblaze and Newegg/Amazon reviews.

    I started out much like you - I just wanted a box I could leave on all the time to store stuff in, so I got the cheapest parts I could, bunged a load of HDDs in, and I was happy. Bit by bit, little by little, I upgraded here and there, and 5 or so years later I'm now running a Threadripper 1950X, 64GB of ECC RAM, 19 HDDs with about 85TB of storage, all in a 24-bay rackmount case. Ye be warned. ;)


  • Registered Users, Registered Users 2 Posts: 13,518 ✭✭✭✭kowloon


    Serephucus wrote: »
    and 5 or so years later I'm now running a Threadripper 1950X, 64GB of ECC RAM, 19 HDDs with about 85TB of storage, all in a 24-bay rackmount case. Ye be warned. ;)

    What are you doing with that beast?


  • Registered Users, Registered Users 2 Posts: 1,656 ✭✭✭deepimpact


    Serephucus wrote: »
    Also, with how much that bloody Synology costs, you could build two different servers and back one up to the other.

    Hey S, what sort of spec server could be built for the price of the OP's Synology (minus drives)?


  • Registered Users, Registered Users 2 Posts: 7,759 ✭✭✭Inviere


    Serephucus wrote: »
    Ye be warned. ;)

    It's already spiralling :rolleyes: On the distant horizon is an incoming thread from me about UPS's and MySQL docker images and how to tie UnRaid, MySQL, & Kodi all together :o (24 hours ago I wouldn't have understood this post!)

    I think I'm convinced. UnRaid seems to tick all the boxes, while offering the flexibility/expansion options that 1st-party NAS boxes don't have (well, the ones in my price range anyway). This is turning into a real project now, and I'm loving learning about this stuff, so thanks again all.

    Shifting the discussion to hardware for the moment, I'm considering potential build options for such a box. Ryzen is clearly a no-brainer, but I am reading about difficulties between UnRaid and Ryzen, specifically in terms of BIOS settings/options. A lot of the info I'm reading isn't exactly new though, so I'm wondering if things are a bit smoother these days between Ryzen and UnRaid? I'll continue researching it.

    Failing that, Intel. Meh, I'm so burned out with Intel at the mo, and the latest mitigations for vulnerabilities are seemingly crucifying hyperthreading performance. I'd really love to give team blue a miss for this build, but if the above about Ryzen issues is still relevant, Intel might have to be the only choice here?

    Which brings me to the whole point, efficiency. All I'll need this box to do is serve video files across the network (ideally not transcoded given the client side players can handle the source codecs/formats no problem), and run a MySQL database docker image in order to benefit Kodi and keep everything synced between rooms. That's really it. So I don't foresee needing anything beefy at all. This box will run 24/7 (mostly at idle tbh), so my primary concerns are running costs and thermals (while fulfilling the duties it's being built for).

    Initially, I'm looking at either the Ryzen 3 1200 (I know there's Zen 2 very soon, but given the requirements here, maybe Zen 1 is up to the task? £53 on Amazon at the mo), or the i3 9100, which is £125 on OCUK (preorder). Both are 4/4s, and I doubt the additional speed of the i3 would have any real bearing on my use case, or would it?


  • Registered Users, Registered Users 2 Posts: 7,759 ✭✭✭Inviere


    Oh, and just to thank those who emphasised the point about not storing everything in one box - I hear ya, and thanks! I plan to have some type of basic backup in place, hopefully building up to something better down the line. I'm thinking of using my existing drives, with some new ones, and just buying an external caddy to manually back everything up, and then archiving those drives. Not ideal, and not foolproof, but it's something for now anyway.


  • Registered Users, Registered Users 2 Posts: 7,180 ✭✭✭Serephucus


    kowloon wrote: »
    What are you doing with that beast?

    VMs. Lots of VMs. Either game servers, or learning stuff for work (HA clusters on Hyper-V and vCenter).
    deepimpact wrote: »
    Hey S, what sort of spec server could be built for the price of the OP's Synology (minus drives)?

    Well, I may have been exaggerating a little with the two servers bit. You could probably do it, but it would be bottom-of-the-barrel stuff. You could pretty handily do an A320/B450 build with a 1600, 8GB RAM, and a 450W PSU for around that money, I'd guess. That wouldn't factor in the €60-120 cost of unRAID (if you were going that way) though.
    Inviere wrote: »
    Shifting the discussion to hardware for the moment [ ... ] I'm looking at either the Ryzen 3 1200 (I know there's Zen 2 very soon, but given the requirements here, maybe Zen 1 is up to the task? £53 on Amazon at the mo), or the i3 9100, which is £125 on OCUK (preorder). Both are 4/4s, and I doubt the additional speed of the i3 would have any real bearing on my use case, or would it?

    As someone who was running Threadripper from almost as soon as it was supported in unRAID, it's completely fine. There were a couple of issues early on with idle power doing something weird. It's been rock solid for a long time at this point though.

    I'd say 1st gen will be fine, and you can always upgrade later if you want with very little hassle. One thing I would have a think about is the motherboard and case however. Depending on what you end up doing with it, these can end up lasting you longer than a CPU or drives.

    Before my Threadripper system, I was using a C2750D4I. A lovely little bit of kit, and still being used as my backup server. But I would not buy it.

    The main reason that I liked it (other than it being weirdly perfect for Silverstone's DS380) was the IPMI BMC built in. This lets you connect to a web-based management page for the motherboard itself. You can change BIOS settings, attach ISO images to boot from, and even open a virtual console window as if you're physically sitting in front of it with a monitor. In theory not used much, but a godsend for troubleshooting or tweaking.

    ASRock do have something similar for AM4 (link), but it wouldn't be cheap, so it's up to you if it's worth it or not. Just giving you all the options. You could absolutely go with desktop parts if you don't care about creature comforts.


  • Registered Users, Registered Users 2 Posts: 1,656 ✭✭✭deepimpact


    Serephucus wrote: »
    Well, I may have been exaggerating a little with the two servers bit. You could probably do it, but it would be bottom-of-the-barrel stuff. You could pretty handily do an A320/B450 build with a 1600, 8GB RAM, and a 450W PSU for around that money, I'd guess. That wouldn't factor in the €60-120 cost of unRAID (if you were going that way) though.

    Oh, I know. The bargain bin isn’t really what I’m aiming for, should have made that clearer.

    I actually have a 1600 and x370 mb in my current computer that I could just repurpose and upgrade. It has 6 SATA ports and an m.2 slot that doesn’t use those SATA ports (I think, I’ll have to check the manual).


  • Registered Users, Registered Users 2 Posts: 3,739 ✭✭✭scamalert


    I played around with FreeNAS and found it quite easy to set up on basically any spare PC. That said, I'd be more interested to know the power-draw cost of running a NAS based on PC parts, as opposed to the likes of a Synology where the power draw is 30W - i.e., the electricity cost over a year.


  • Registered Users, Registered Users 2 Posts: 7,180 ✭✭✭Serephucus


    For what it's worth, the C2750D4I I mentioned earlier usually idled at around 25-30W with some drives spinning. If I pushed the CPU while all the drives were spinning during a parity check, it might get up to 60W.

    My TR system idles at about 260W, and I've had it pull close to 400W.

    * Idled = standard day-to-day running, not literally "idling doing nothing".
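
    Putting rough euro figures on those draws (a sketch; the 0.20 EUR/kWh unit rate is an assumption - plug in your own tariff):

        RATE_EUR_PER_KWH = 0.20    # assumed unit rate

        def yearly_cost(watts: float, hours_per_day: float = 24) -> float:
            """Annual electricity cost in euro for a given steady draw."""
            kwh = watts / 1000 * hours_per_day * 365
            return kwh * RATE_EUR_PER_KWH

        for name, w in [("Synology-class NAS", 30),
                        ("low-power PC build", 60),
                        ("big Threadripper box", 260)]:
            print(f"{name:22s} ~EUR {yearly_cost(w):4.0f}/year at 24/7")
        # ~EUR 53, 105 and 456 a year - idle draw dominates for a 24/7 box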


  • Registered Users, Registered Users 2 Posts: 36,169 ✭✭✭✭ED E


    Must actually put mine on a meter now it's fully populated.


  • Registered Users, Registered Users 2 Posts: 7,759 ✭✭✭Inviere


    OK, trying to begin a preliminary build list here & am running into trouble already. I'll need (at bare minimum) x3 SATA ports for this build (x2 data drives and a parity drive).

    I'd like to also use a cache drive for Unraid, preferably an M.2 drive for speed. This is where I'm running into trouble: the boards I've looked at seem to suggest that using an M.2 drive will disable 1 or 2 of the SATA ports on the board, presumably because the M.2 slot is using SATA resources? If using NVMe it's sometimes the same, disabling x2 SATA ports.

    So I'm head-scratching at the mo - has anyone any ideas of an AM4 board that'd allow me to use x3 SATA drives, an M.2 drive as a cache drive, and still leave me say x2 SATA ports for the future?? I'd look at server hardware, but I'm trying to keep costs down, and I know zilch about server hardware anyway :(

    Edit - how does Unraid handle PCIe expansion cards for SATA drives??


  • Registered Users, Registered Users 2 Posts: 7,759 ✭✭✭Inviere


    Example - ASRock B450M Pro4

    https://www.amazon.co.uk/gp/product/B07FVYKFXF/ref=ox_sc_act_title_1?smid=A3P5ROKL5A1OLE&psc=1

    The specs say:
    4 x SATA3 6.0 Gb/s Connectors, support RAID (RAID 0, RAID 1 and RAID 10), NCQ, AHCI and Hot Plug - M2_2 and SATA3_3 share lanes. If either one of them is in use, the other one will be disabled.

    1 x Ultra M.2 Socket (M2_1), supports M Key type 2242/2260/2280 M.2 PCI Express module up to Gen3 x4 (32 Gb/s) (with Summit Ridge, Raven Ridge and Pinnacle Ridge)

    1 x M.2 Socket (M2_2), supports M Key type 2230/2242/2260/2280 M.2 SATA3 6.0 Gb/s module

    ^^ So if I understand the above correctly, the motherboard has x4 SATA ports and x2 M.2 ports. M.2 port 1 uses PCIe lanes, and M.2 port 2 uses SATA lanes. If you use M.2 port 1, you'll have all four SATA ports available, and if you use M.2 port 2, then SATA port #3 will be disabled....correct?


  • Registered Users, Registered Users 2 Posts: 13,518 ✭✭✭✭kowloon


    Inviere wrote: »
    ^^ So if I understand the above correctly, the motherboard has x4 Sata ports, and x2 M.2 ports. M.2 port 1 uses pci-e lanes, and M.2 port 2 uses sata lanes. If you use M.2 port 1, you'll have all four sata ports available, and if you use M.2 port 2, then sata port #3 will be disabled....correct?

    Yeah, looks like one is SATA and the other is for NVMe. Some NAS enclosures let you use an SSD as a cache drive; that might be an option for you if you were to fill that extra M.2 slot.


  • Registered Users, Registered Users 2 Posts: 7,759 ✭✭✭Inviere


    K found a more suitable board - https://www.amazon.co.uk/gp/product/B06X9F3FKP/ref=ox_sc_act_title_1?smid=A2S99BUYURIQ27&psc=1

    It's an X370 though, so kind of mismatched with the Ryzen 3 1200 CPU, but it's the only iteration of AM4 boards that seems to have enough SATA for my needs.


  • Registered Users, Registered Users 2 Posts: 7,180 ✭✭✭Serephucus


    Something worth mentioning:
    You can always add extra SATA ports very easily. Something like an LSI HBA will give you 8x SATA ports (and that one even comes with the cables). You'll often see the 9211 being recommended on forums, but I find it a pain in the ass: you have to go flashing firmware on it, the utilities usually don't play nice with UEFI, and it's generally annoying. The 9207 is a faster card, and is just plug-and-go.

    (I'm using two of these in my server, for what it's worth)

    There are SATA cards that are much cheaper, but typically they're very low performance if you're hitting all the drives at once (as you would do during a parity check/rebuild).

    Also, if you're just transferring over the network, a SATA SSD will be completely fine. You're only going to be transferring at ~120MB/s, and a SATA drive will do 4x that with no problems. The only reason you'd really use NVMe would be for VM vdisks or something. Though you mentioned an SQL DB, so maybe it would be useful there, I don't know.
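
    The arithmetic behind that, as a quick sketch (ballpark speeds; real figures vary by drive and network):

        GIGABIT_MB_S = 118     # realistic gigabit Ethernet throughput, MB/s
        SATA_SSD_MB_S = 500    # typical SATA SSD sequential write
        NVME_MB_S = 2000       # typical NVMe drive, for comparison

        file_gb = 50           # e.g. one large remux
        for name, speed in [("network", GIGABIT_MB_S),
                            ("SATA SSD", SATA_SSD_MB_S),
                            ("NVMe", NVME_MB_S)]:
            minutes = file_gb * 1000 / speed / 60
            print(f"{name:8s}: {minutes:5.1f} min for {file_gb} GB")
        # network ~7 min; either SSD absorbs data far faster than it arrives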


  • Registered Users, Registered Users 2 Posts: 7,810 ✭✭✭Calibos


    That LSI HBA - have you bought them from that eBay vendor in your link, Serephucus? Should I click 'Buy Now'? It's exactly what I need; flashing firmware always scared me. Any experience with SATA couplers though? I'd be using them with semi-hot-swap bays with integrated SATA data and power cables, so I'd need to connect the SAS/SATA cables to those.

    I was actually looking at the Syba 8 Port but then read that the Marvell chipset on them can be flakey or interfere with mobo Sata chipsets.

    Yeah, I still haven't built the media server I started planning 5 years ago and started buying parts for back in 2014 (case and hot-swap bays). Life, health and finances got in the way. Thankfully my little D-Link 323 NASes, which I must have bought in 2007/2008 and were already ancient then, went on working 24/7 for another 5 years after 2014. LOL. I am absolutely playing with fire now though!! I have to get this done soon and shunt all the content archived off the NASes onto family PCs, back into one place accessible 24/7, along with the stuff still on the NASes.

    While I did get hold of a cast-off 4670K and mITX mobo from a family member after they upgraded, I'm now at the point where I'm thinking about upgrading my main VR/gaming rig's 6700K and ATX mobo to a 9700K, using the former in the media server build, and then, when I eventually upgrade my GTX1080 GPU, moving that over to the server too and turning the media server into a Steam In-Home Streaming game server as well. That's why I'll be sticking with Windows on the media server, and will probably go with Stablebit DrivePool and SnapRAID. The cast-off 4670K mITX I'll now turn into a CCTV NVR.


  • Registered Users, Registered Users 2 Posts: 7,759 ✭✭✭Inviere


    Serephucus wrote: »
    You can always add extra SATA ports very easily. Something like an LSI HBA will give you 8x SATA ports (and that one even comes with the cables).

    Also, if you're just transferring over the network, a SATA SSD will be completely fine. You're only going to be transferring at ~120MB/s, and a SATA drive will do 4x that with no problems. The only reason you'd use NVME really would be for VM vdisks or something. Though you mentioned an SQL DB, so maybe it would be useful there, I don't know.

    Serephucus, you're an absolute gent. I owe you a pint sometime for all the help with this; again, thank you for the effort & guidance!

    This build has been doing laps around my brain for the whole weekend :o Here's where I'm at currently, and below, why I've chosen these parts...

    Motherboard - Asus PRIME X370-PRO
    CPU - AMD Ryzen 7 1700
    CPU Cooler - Cooler Master Hyper 212 EVO
    RAM - Corsair Vengeance LPX 16 GB (2 x 8 GB) DDR4 2400MHz
    PSU - EVGA SuperNOVA 550 G3 80 Plus Gold
    Case - Phanteks Enthoo Pro
    Cache Drives - WD Blue SSD's x2
    Data Drives - WD 8TB Red's x3 (x2 Data, x1 Parity)

    Ok, so it's a far cry from the Synology DS918+ I started out with here in this thread. I've gone with the motherboard because it offers x8 on-board SATA ports. This will offer me medium-term expandability without having to buy any additional cards. It's a desktop component, and ECC RAM support seems to be very contradictory. I don't think I can afford to take a chance buying ECC RAM for a board/BIOS version that may/may not support it, so I feel that with the above build, I have to use non-ECC RAM.

    The CPU is a far cry from the R3 1200 I had in mind, but for the extra price, I feel the jump to the 1700 is worth it. I'll obviously need it to run media server duties, but I'm also looking at running a MySQL docker container, AND possibly a Pi-hole one separately. Who knows what else I might need it to do down the line, so going from a 4/4 to an 8/16 now at the build/planning stage seems to save the headache down the road.

    The PSU is probably overkill, but I want something gold-rated, and something proven that won't give trouble. I've used G2's in the past and found them very nice PSUs, so I feel the G3 here should more than comfortably cover the power draw for the build, while not restricting me for future additions.

    The cache drives - I think Unraid basically RAID-1's these, so x2 500GB drives yield 500GB of storage. I'd happily go with 250GB variants of these, but these were the same price on Amazon so I listed them here. I'll obviously do proper price checking when I've settled on the components & am ready to buy, so these are subject to change to 250GB models later on.

    The RAM is 2400, but I'm not at all fussed about fast RAM for a NAS; I don't think there's any need? It's desktop RAM, so it's not sitting well with me to not use ECC RAM, but I'm limited by the mobo. If I go with a board that supports ECC RAM, I seem to be sacrificing SATA ports, which means an additional SAS card, which is more cost. Do I 100% need ECC RAM? I know Unraid lives in RAM when booted, and ECC RAM can save some serious headaches, but are those cases rare? Would appreciate thoughts on this point (my biggest concern is running this 'server' 24/7 - are desktop components up to that??)

    CPU cooler - the Hyper 212 EVO seems to review really highly. Seems perfect for the use case?

    Case - the Phanteks Enthoo Pro seems to fit the bill very well: excellent cooling options, plenty of room for drives, excellent for cable management & thus airflow...and again, it reviews really well.

    So guys & gals, thoughts on the above build candidate? Is ECC RAM a must? Should I consider server-grade hardware (is anyone willing to hold my hand through that, & is it considerably more expensive)?


  • Registered Users, Registered Users 2 Posts: 7,759 ✭✭✭Inviere


    Just realised I'll need a cheap-as-chips GPU too for BIOS configuration etc, but I'll pick something very very basic up for that.


  • Registered Users, Registered Users 2 Posts: 36,169 ✭✭✭✭ED E


    What, no IPMI? Serephs is in love with it and he's right.

    1. Install power and LAN
    2. Never need to touch it again.

    No VGA, no power buttons, nada.


  • Registered Users, Registered Users 2 Posts: 7,180 ✭✭✭Serephucus


    Build looks good to me. There's 3200MHz RAM you can get for £2 more though, which I'd go for. The cores on the CPU talk to each other through Infinity Fabric, and the speed of this is coupled to the speed of the RAM.

    No IPMI I noticed. :P That one's up to you though, and it would drive the cost up quite a bit to jump to a server mobo. To be honest, they're not really any different than desktop boards for your use-case. They're just a lot less flashy, and go through a lot more testing and validation.

    For GPU, yeah, you'll need something for initial config. I use some piece of **** I got on Adverts 10 years ago for that. Quick Google shows me this: https://ie.webuy.com/product-detail?id=sgragef2101gb&categoryName=graphics-cards-pci-e&superCatName=computing&title=nvidia-geforce-210-1gb-dx10.1, though if I'm being picky, I'd probably look for a passive one. No need to have some little fan in there making extra noise if you don't need it.


  • Registered Users, Registered Users 2 Posts: 7,759 ✭✭✭Inviere


    ED E wrote: »
    What, no IPMI? Serephs is in love with it and he's right.

    1. Install power and LAN
    2. Never need to touch it again.

    No VGA, no power buttons, nada.

    My plan is, rather than have the server running 24/7 (hugely wasteful given how little I'd actually use it), I'll have the BIOS wake the machine up every day at a given time, I'll set Unraid to start the array automatically, and then I'll set Unraid to stop the array and shut down at a given time. So once everything is set up and configured, I shouldn't need remote management tools like IPMI (as nice as it is!).
    Serephucus wrote: »
    Build looks good to me. There's 3200MHz RAM you can get for £2 more though, which I'd go for. The cores on the CPU talk to each other through Infinity Fabric, and the speed of this is coupled to the speed of the RAM.

    No IPMI I noticed. :P That one's up to you though, and it would drive the cost up quite a bit to jump to a server mobo. To be honest, they're not really any different than desktop boards for your use-case. They're just a lot less flashy, and go through a lot more testing and validation.

    For GPU, yeah, you'll need something for initial config. I use some piece of **** I got on Adverts 10 years ago for that. Quick Google shows me this: https://ie.webuy.com/product-detail?id=sgragef2101gb&categoryName=graphics-cards-pci-e&superCatName=computing&title=nvidia-geforce-210-1gb-dx10.1, though if I'm being picky, I'd probably look for a passive one. No need to have some little fan in there making extra noise if you don't need it.

    Good spot on the RAM, I've changed it there in my Amazon basket. No IPMI, mainly for cost reasons - I'll prob only have this server running ~8 hours a day, fully automated, and accessible through the Unraid web interface. Any BIOS changes and I'll just connect a screen to it temporarily. I have a passive GPU in the basket (Asus GT 710, passively cooled). Ideally the mobo will POST without a GPU in it, in which case I'll leave it removed once I'm finished with it.

    I've also dropped from a 1700 to a 2600; passmark scores are very close anyway. The reason being, I'm still reading about people with gen 1 Ryzens having freezing/locking-up issues in Unraid, with the countermeasures seemingly not working for everyone. Knowing me, that'd include me, so I've gone with the 2600. The issue is apparently resolved for the 2000 series, and I'd still be running 6/12, so that should hopefully be plenty.

    DYING to get started on this, and in the meantime I'm looking more & more into Unraid, Docker, and MySQL for Kodi. Thank you all so much again!
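
    If the BIOS RTC wake-up ever proves fiddly, wake-on-LAN is a common alternative: send a "magic packet" (6 x 0xFF followed by the NIC's MAC address repeated 16 times) to the sleeping machine. A minimal sketch, with a placeholder MAC:

        import socket

        def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9):
            """Broadcast a WOL magic packet for the given MAC address."""
            payload = bytes.fromhex("FF" * 6 + mac.replace(":", "") * 16)
            with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
                s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
                s.sendto(payload, (broadcast, port))

        wake("aa:bb:cc:dd:ee:ff")   # hypothetical MAC of the server's NIC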


  • Registered Users, Registered Users 2 Posts: 7,180 ✭✭✭Serephucus


    I could be mistaken, but I'm pretty sure the stuff with Ryzen was BIOS/chipset related, and was resolved with Linux kernel patches (unRAID releases) and AGESA updates. Anyway, I'd probably go with the 2600 regardless. Didn't realise they were quite that close in overall perf., and the clockspeed bump would be nice.

    To mitigate not having IPMI somewhat, you can also set unRAID to be its own syslog server, so in the event of an unexpected crash at least you don't lose the log files.

    To give you even more to look at, here's what's currently on my server Docker-wise:

    [Screenshot: list of Docker containers running on the server]


  • Registered Users, Registered Users 2 Posts: 7,759 ✭✭✭Inviere


    Serephucus wrote: »
    I could be mistaken, but I'm pretty sure the stuff with Ryzen was BIOS/chipset related, and was resolved with Linux kernel patches (unRAID releases) and AGESA updates.

    Aye, there were several BIOS updates. One fix involves disabling C-states, another involves adding a line to an Unraid config file, another involves changing a power setting in the BIOS from "auto" to "normal" or somesuch. It works for some; others report no change and their servers still lock up. I think I'll just save the headache at this stage; if I ever need more cores I'll just pony up for a 2700 (it might even drop in price when Zen 3 lands).
    To mitigate not having IPMI somewhat, you can also set unRAID to be its own syslog server, so in the event of an unexpected crash at least you don't lose the log files.

    Excellent to know! :cool:
    To give you even more to look at, here's what's currently on my server Docker-wise:

    I'm not even going to pretend I know what half of that stuff is! :o I'll get there tho!

    Can I ask you one final question, just going back to transcoding. Without ever having sat in front of an Unraid interface, how is transcoding managed/configured? Ideally, I don't want to transcode anything, I just want to serve the raw media files to x3 clients, all of whom can natively handle the files/codecs anyway...


  • Registered Users, Registered Users 2 Posts: 7,180 ✭✭✭Serephucus


    Inviere wrote: »
    I'm not even going to pretend I know what half of that stuff is! :o I'll get there tho!

    Didn't expect you to, but it gives you some stuff to start Googling when you ask the inevitable: "What else can I do with this?". ;)
    Inviere wrote: »
    Can I ask you one final question, just going back to transcoding. Without ever having sat in front of an Unraid interface, how is transcoding managed/configured? Ideally, I don't want to transcode anything, I just want to serve the raw media files to x3 clients, all of whom can natively handle the files/codecs anyway...

    So unRAID on its own has nothing to do with the transcoding. This will depend entirely on what you use to host and play your media. What are your clients?

    If you know all of your clients can play the media fine, then you can just export the media as a normal share over SMB and browse the folders from your client's file explorer.

    Otherwise, you've got a bunch of options. The two main ones are Plex and Emby. Some people prefer Emby because it's open-source, though I think that's been changing in recent times because a bunch of ex-Emby people forked it to create Jellyfin.

    I've always just used Plex myself. They've revamped their client and server apps a couple of times since I've used it, and personally I feel like they've gotten worse, so I might look into changing in the future, but for the moment it's joined the longer-every-day list of "I'll get around to it".


  • Registered Users, Registered Users 2 Posts: 7,759 ✭✭✭Inviere


    Serephucus wrote: »
    So unRAID on its own has nothing to do with the transcoding. This will depend entirely on what you use to host and play your media. What are your clients?

    If you know all of your clients can play the media fine, then you can just export the media as a normal share over SMB and browse the folders from your client's file explorer.

    Ahah, I see. The clients are x2 Nvidia Shields running Kodi, and potentially an Xbox One running Kodi too (the Xbox one isn't high on my priorities, so basically two Shields). As I understand it, I'll share the media as NFS shares, and Kodi will play nicely with NFS (maybe SMB will work the same, I'm not 100% sure just yet). I'll tie in a MySQL setup on Unraid, and configure both Kodi installs to look things up via the MySQL database (done via advancedsettings.xml in Kodi).

    The Shields play anything I have with zero trouble, so I don’t want to transcode really and lose quality. I know I’d have to if using tablets/phones to stream media, but I 100% won’t be. Two shields, maybe an XBO, and that’s it.


  • Registered Users, Registered Users 2 Posts: 7,180 ✭✭✭Serephucus


    Yeah, Shields should play anything fine, in my experience. Export via NFS/SMB should be all you need.


  • Registered Users, Registered Users 2 Posts: 7,759 ✭✭✭Inviere


    Serephucus wrote: »
    Yeah, Shields should play anything fine, in my experience. Export via NFS/SMB should be all you need.

    No doubt there’ll be a problem in every step of the way....but sure that’s half the fun of the hobby :o

    Oh, have gone from x3 8TB Red’s to x4 6TB’s instead, more space for very little extra. Ok there’s an extra point of failure with more drives, but it’s all a balancing act. Anyway, you’ve done more than enough man, many many many thanks for the advice, info, and patience :) I will be sure to come back to annoy you when this build gets started!


  • Registered Users, Registered Users 2 Posts: 17,463 ✭✭✭✭Blazer


    Just out of curiosity...you're going to a lot of work for a server/NAS which won't be running 24/7.
    Why won't it be running 24/7?
    Also, constant startup/shutdown is not recommended for servers, as it lessens the lifetime of the components.
    Once a week would be fine; once a month would be better.
    I had looked at building one of these before, but went for a Synology (I'm on my second one in 12 years) because it was quieter, and while pricey at the start it costs far less to run over time, plus it has a great UI.


  • Registered Users, Registered Users 2 Posts: 3,739 ✭✭✭scamalert


    Blazer wrote: »
    Just out of curiosity...you're going to a lot of work for a server/NAS which won't be running 24/7.
    Why won't it be running 24/7?
    Also, constant startup/shutdown is not recommended for servers, as it lessens the lifetime of the components.
    Once a week would be fine; once a month would be better.
    I had looked at building one of these before, but went for a Synology (I'm on my second one in 12 years) because it was quieter, and while pricey at the start it costs far less to run over time, plus it has a great UI.
    I think you answered that yourself - power consumption on PC parts, as I've asked before about running costs and total draw. I didn't follow every step, but the OP is essentially using PC parts; if configured correctly, start-up/shutdown shouldn't really cause any issues, as PC components these days are well able to handle that for years. The hardest step will be the config part - making it happen cleanly and not getting corruption in the process.

