RESET Forums (homeservershow.com)

Question on Either Building NAS or iSCSI SAN


AVTechMan

Hello, 

 

I have been going in circles most of the day trying to figure out whether to build a NAS or a server to use as an iSCSI SAN target. Here's my situation.

 

I am building up a networking lab at home, and so far I have two physical hosts, each running 2012 R2 Datacenter with the Hyper-V role installed and two VMs running on each one. I also have Active Directory set up in one of the VMs, along with DNS and DHCP.

 

I want to build a file server to share files, where my wife and I can each have our own folder to save files to. I have many files scattered across different hard drives and USB sticks that I want to consolidate into one place for centralized storage. I also do some video editing on the side, so I would need to transfer a lot of media files to it as well.

 

The thing is, I would also like my VMs to run from the file server, and to have PC backups from client machines saved to the server too. I know that with iSCSI the target can be set up as a LUN that the server sees as a local drive. I've been reading about failover and other features that would be very useful, but I am still trying to get a grasp on how to do it.

 

Since I have a lot of cases and parts around, including a few unused 4U server chassis, I put together another system using a Skylake 6700K with 32GB of RAM on a Z170 Extreme6 motherboard, and installed an Intel quad-port NIC and an LSI MegaRAID 9260-8i card. The chassis I am using now is the Rosewill L-4500, in which I have a total of eight 2TB drives for around 14TB of formatted capacity. I also have two HBA cards not currently in use.

 

I am thinking the NAS route would be the way to go here, but I don't think Hyper-V works with a NAS if I wanted my VMs to run from there. I could simply create a new volume with the RAID card and carve out space for iSCSI targets, leaving the remaining space for NAS usage. This is where I am stuck.

 

Should I just look into doing a NAS-based system with WS2012R2, or look into setting up iSCSI for my drives? This will be Ethernet-based, since I know a Fibre Channel SAN is very expensive and requires more hardware. I know what I would like to do seems like overkill, but since I am planning to return to school after 20+ years to learn IT, I figure I'll try to get a head start.

 

Many thanks for any thoughts!

ShadowPeo

Reasonably easy answer: do both. I am not sure about Windows Storage Spaces or the like (although it does have some features that let certain configurations work better for VM storage vs. general storage). I have also not tried it on a server, as I have always used SANs or direct server storage.

 

Considering it's a home lab, though, if you are willing to give it a shot: I would build the server with mirrored boot disks (I normally mirror two SSDs for the server boot volume), then split the remaining disks into two RAID arrays (or one, if you build the file server as a virtual machine in Hyper-V, which is how I do it most of the time), and set up the storage server to host files on one array and use the other as an iSCSI target. Synology, for instance, allows different volumes to serve iSCSI on one and file storage on the other as a commercial solution.

 

If you RAID it as all one unit and end up with 14TB you may want to read www.zdnet.com/article/why-raid-6-stops-working-in-2019/ first
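The article's argument can be sketched numerically. This is a rough model, not the article's exact figures: it assumes the commonly spec'd consumer-drive unrecoverable read error (URE) rate of one error per 10^14 bits and treats bit reads as independent:

```python
import math

def p_ure_during_rebuild(tb_read, ure_rate_per_bit=1e-14):
    """Probability of at least one unrecoverable read error (URE)
    while reading tb_read decimal terabytes, assuming independent
    reads at the drive's spec'd URE rate."""
    bits = tb_read * 8e12                        # 1 TB = 1e12 bytes = 8e12 bits
    # (1 - r)^bits, computed via log1p to stay numerically accurate
    return 1.0 - math.exp(bits * math.log1p(-ure_rate_per_bit))

# Rebuilding the 8 x 2TB array after a failure means re-reading the
# ~14TB that remains on the surviving drives:
print(f"{p_ure_during_rebuild(14):.0%}")         # roughly 67%
```

With single parity that's roughly a two-in-three chance of hitting a URE somewhere during the rebuild; dual parity (RAID 6) buys headroom precisely because it can still recover from a URE while rebuilding one failed disk, which is why the article's concern is about future, larger drives rather than today's.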

AVTechMan

I started by creating a RAID 10 array to see how it turned out; I ended up with 7.5TB of usable space. I have read in many articles and forum posts, including the one you linked, that RAID 6 isn't really viable anymore because as drive capacities grow, there's a higher chance of unrecoverable errors during a rebuild.
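That 7.5TB figure is about what the arithmetic predicts once decimal-vs-binary unit reporting is accounted for. A quick check, assuming eight 2TB (decimal) drives in a single RAID 10:

```python
drives, size_tb = 8, 2.0          # eight 2 TB (decimal) drives

raw_tb = drives * size_tb         # 16 TB raw
raid10_tb = raw_tb / 2            # RAID 10 mirrors every drive: 8 TB usable

# Windows reports capacity in binary units (2**40 bytes per "TB"),
# which shrinks the number before any filesystem overhead:
raid10_tib = raid10_tb * 1e12 / 2**40
print(f"{raid10_tb:.0f} TB usable, reported as ~{raid10_tib:.2f} TB")
```

So 8TB raw usable shows up as roughly 7.3TB in Windows, which matches the observed 7.5TB once rounding and controller overhead are allowed for.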

 

I have read about Storage Spaces, but some articles and opinions say it isn't that great, it being software-based RAID. I may still test it on another spare system to see how effective it is.

 

I will consider your suggestion of mirroring the OS drive and splitting the arrays, perhaps both as RAID 10, which would give me about 4TB of usable space each. I have a total of eight drives on the RAID controller and the remaining four on the motherboard ports.

 

If I get this accomplished, I can then study creating a failover cluster with the two Hyper-V hosts, and perhaps run the VMs from the iSCSI targets (one target for each host).

ShadowPeo

 

You are quite correct: RAID 6 has issues, or will have in the future; the article I linked in my first post explains a little of the math behind it. But since the problem scales with the number of sectors read rather than the number of disks, I doubt drives of your size would hit rebuild issues under RAID 6.

 

Again correct on Storage Spaces, and whilst I still do not like it, I have been lumped with it since the PERC cards no longer support SSD caching. Who in their right mind would remove caching now that the drives are finally cheap enough to use for it? So I have had to go the software route, and I have been pleasantly surprised.

