RESET Forums (homeservershow.com)

Home Server and Virtualization Build


DesertServer

So I've listened to a few podcasts and need to build a home server.

 

1. CPU choice. I need this primarily for testing and practice on various server OSes; I just got a job at a datacenter that runs many different ones. I toyed with the idea of a blade-server lab setup, but after playing with one Dell 2U rack unit I realized those are way too noisy for my bedroom. I was looking at AMD six-core CPUs; the 1055T seems to be the sweet spot for cost and performance. I know some software can only use 4 cores, but what about VMware or other options?

 

2. RAID questions. I've already ordered a RAID cage/backplane for 5 drives and was thinking about starting with four 1 TB drives. From what I've read, the WD RE drives are attractive due to TLER support. I considered just putting in cheaper consumer-grade drives (WD Blue was the line I was looking at), but I wonder how many issues I'd have. I also want a hardware-based RAID card. Dell PERC cards seem to get high marks; I'd probably have to buy one used off eBay. Any other suggestions for a true hardware RAID card? I want to run RAID 5 or 10.
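For reference, a quick back-of-the-envelope comparison of the two levels being considered here (RAID 5 vs RAID 10) with the proposed four 1 TB drives. The `raid_usable` helper is purely illustrative, not from any library:

```python
def raid_usable(level, n_drives, drive_tb):
    """Usable capacity in TB for a few common RAID levels."""
    if level == 5:
        # one drive's worth of parity is spread across the array
        return (n_drives - 1) * drive_tb
    if level == 10:
        # mirrored pairs: half the raw capacity
        return (n_drives // 2) * drive_tb
    if level == 0:
        # pure striping: all raw capacity, no redundancy
        return n_drives * drive_tb
    raise ValueError(f"unsupported level: {level}")

# Four 1 TB drives:
print(raid_usable(5, 4, 1))   # RAID 5: survives any one drive failure
print(raid_usable(10, 4, 1))  # RAID 10: survives one failure per mirror pair
```

With four 1 TB drives, RAID 5 yields 3 TB usable and RAID 10 yields 2 TB, which is the basic capacity-versus-write-performance trade-off behind the question.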

 

3. Motherboard. What are the quality motherboard brands these days? The last PC I built with new parts was a few years back. I have a Gigabyte board and like the dual-BIOS feature. Any suggestions for an AM3 socket board if I go AMD, or an Intel i3 board if you think I should go that direction?

 

4. Base OS choices. I'm going to install on a Mushkin 60 GB SSD. I have Server 2008 R2 and 2003, and any Microsoft product I want (thanks to the MSDN Academic Alliance!). Free and legal, what a deal. I'm comfortable with Linux; I run a FreeNAS box now, but it's totally full and aging (512 MB of RAM, boots off a 256 MB USB stick). It serves my website, has a few Windows shares set up, and does a few other duties; I'll probably keep it on the network for a while as I transition. Would Ubuntu have any advantage for a virtual environment over the Microsoft choices? It seems like it might be more stable and less virus-prone.

 

There are a few other considerations to be made, but these seem to be the biggest ones.

 

Thanks!

 

Trent


Lots of different ways to go, but it doesn't sound like you're really building a home server so much as a VM or test box. Either way, I would go with any of the Core i Clarkdale chips if you think you need video; I believe there's more bang for the buck with Intel than AMD. Based on the OSes you listed, I would go with at least 4-8 GB of RAM. As for RAID, I'm using Green drives in two separate RAID configurations right now and have had no issues; the RE series is the safest but most expensive way to go. Again, if you're using RAID, you won't be using WHS v1. I think you really need to refine what you're going to do with the system: what you pick for hardware differs significantly depending on whether you're using WHS or another OS such as Win7. WHS is a completely different animal. As for the SSD: a great choice for a general-purpose box or HTPC, but a bad choice for WHS. Just my two cents.


Guest no-control

We need you to define what this is for; right now it's all over the map. Is this a VM machine, or a primary box that will have VMs for testing? Type 1 or Type 2 hypervisor? Do you want this to do anything else, and if so, what? How much data do you want to store? What is your budget? What type of VMs are you wanting to run? What do you mean by "home server"? A lot of the *nix hypervisors are very narrow in their hardware support, so you can't just throw together any pile of parts and expect it to work.

 

1. Forget the AMD 1055T; unless you're upgrading existing hardware, it's not going to be much cheaper. If you're planning on running more than 3 VMs at once, I would strongly suggest a hyper-threaded quad core like the i7-870, i7-950, or a Xeon X3440. I've only tested in ESXi, so I can't comment on the 4-core limit; Hyper-V has no issues. You also don't state whether you're going with a Type 1 or Type 2 hypervisor; if Type 2, what is the host OS?

 

2. I've never had a TLER issue in any RAID I've run, and I see no reason whatsoever to get the RE drives, especially since this is a test server; consumer-grade drives are fine. For a hardware RAID card, the Dell PERC 5/6 cards look like a deal (I've had several), but they're less of a bargain once you consider that they use an older cable standard (cables can be expensive) and most don't come with the BBU (also expensive), which will cause the card to complain every time you boot. Did I mention the initialization time for just the PERC is around 15 seconds, on top of normal BIOS and OS loading times? I would either go for a cheap 4-port SATA RAID card and let the CPU deal with the overhead (it's cheaper), or, if you must have top-level performance, get something with a minimum of 512 MB of onboard memory and use SAS fan-out cables; but those are $600+.

 

3. Dual BIOS is great for overclocking; there's no reason for it at all on a server board. Again, what you specifically want to do will drive what board you need. Gigabyte and ASUS make great boards; on the server side, SuperMicro has a few decent offerings. Mainly you need to decide what features you need: how many slots, and of what type, to run the cards you want? How many NICs, and do they need to be Intel? iKVM or IPMI support? The hole gets deeper...

 

4. The SSD is a waste for the host OS, especially since the boot-time savings will be eaten by hardware RAID initialization. A better setup would be to use the SSD for VM storage, but 60 GB isn't going to cut it, so I would just drop the SSD altogether. Either get a single spindle drive of a decent size (500 GB-1 TB) or run smaller drives in a RAID 1 for redundancy (2x 320-640 GB) for your host OS. As for the OS, if you have access to Server 08 R2 then the choice is clear, as it gives you the most flexibility and support. I'm not sure where you're getting that Linux is more stable than WS08R2; it's not. No server should be virus-prone, since no one should be surfing the net with it. If you lock down your firewall properly, this shouldn't be an issue, and with decent hardware, running AV in the background isn't going to hurt either.

 

Sorry to be so negative; I'm not trying to be, I'm just lost as to what you want to do. I would love to help, but I need more specifics. I'll share that I recently built a production VM server (see my sig), and with some minor tweaks it could easily handle any RAID setup in addition to VM duties. It will be discussed in an upcoming podcast.


You're right, this needs to be defined better. I think I have a variety of needs that haven't been addressed for a while, and I was hoping to solve them all in one fell swoop with a new build.

 

1. Leaning toward a Xeon chip; I will be doing Type 2 VMs.

 

2. You're right about the enterprise drives; in this application they're a waste of cash (mine). I think I'll just get some WD Green drives. I need about 3 TB of storage, so I think four 1 TB drives in RAID 5 will do the trick. PERC 5/i cards seem reasonable on eBay, and a few include the BBU and breakout cable.

 

3. iKVM would be good. Two NICs; one could be built into the motherboard. A full-size board would be best, I think.

 

4. Boot times weren't really why I was considering using the SSD for the OS. Wouldn't the VMs run a lot faster from the SSD? I guess I don't know the footprint of Server 2008 R2; it may take up too much space on a 60 GB drive.

 

So I'm out of space, and that's one reason for the new system. Currently I have media spread across two systems: the HTPC, hooked to my projector, has a 1 TB WD Black drive that's full, and I have a FreeNAS server with about 400 GB of space that is also full. I was going to migrate both sets of data to the new server and use UPnP to stream to the Xbox and the HTPC. I think this is best, but I'm open to more streamlined ideas. I started this post saying I needed to define what I was trying to build; not sure that's been accomplished. I'm probably doing more brainstorming than question-asking here...

 

 

Thanks for the response!

 

Trent


Guest no-control

1. Type 2 using what OS as the host, and what hypervisor for the VMs? Since TechNet was mentioned, I'm going to assume WS08R2 with the Hyper-V role installed unless you tell me otherwise.

 

2. The PERCs are definitely a bargain for the performance; I would go for the ones that at least include the BBU. 4x 2 TB WD GP drives would make a really nice RAID 5.

 

3. I would recommend the SuperMicro MBD-X8SIL-F-O motherboard. It has dual GigE Intel NICs, IPMI, plenty of slots, and 6 onboard SATA ports for future expansion. The only downside (if you can call it that) is that it's mATX. If you're willing to pay the extra $50 for the full ATX version, you'll get more slots and RAM DIMMs; up to you whether there's value there. Also consider that you'll need to buy ECC RAM.

 

 

4. Yes, the VMs will run faster as the IOPS increase, but consider that WS08R2 with Hyper-V is ~30 GB, and each VM is going to need its minimum install footprint on top of that; a 60 GB SSD isn't going to cut it. I would use a separate drive for VM storage: either dedicate a single 120 GB or larger SSD to just the VMs, use your RAID array (it's going to be pretty fast!), or get a fast spindle drive like a 300+ GB VelociRaptor. An array of smaller 15k SAS drives would be really fast as well; you can run the 4 SATA drives for the array on the first channel of the PERC, then run 3-4 really small <100 GB SAS drives on the second channel.

I opted for the VelociRaptor, as it was under $100 on sale, then placed the backups of the VMs on the OS drive.
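To put rough numbers on the sizing point above (the ~30 GB host figure is the estimate from this post; the 15 GB per-guest minimum is an assumption for illustration, not a measurement):

```python
# Rough storage budget for the proposed 60 GB SSD.
# All sizes are ballpark assumptions, not measured figures.
ssd_gb = 60
host_os_gb = 30   # WS08R2 + Hyper-V role, as estimated above
min_vm_gb = 15    # assumed minimum footprint for one modest guest

remaining = ssd_gb - host_os_gb
max_guests = remaining // min_vm_gb
print(f"Space left for VMs: {remaining} GB -> about {max_guests} small guest(s)")
```

Two small guests on the leftover 30 GB, with no room for snapshots, page files, or growth, is why a separate VM-storage drive is the better plan here.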

 

 

There are some options for you to mull over... keep asking questions so we can keep narrowing it down to a final solution.


To dwell on point #4 for a while...

 

Disk resources and performance are THE most important aspects of virtualization. CPU and RAM resources are no doubt important, but storage will make or break a good VM server.

 

I build and use ESXi servers at work, and without any doubt, the biggest benefit when a process moves from my test server to the live server is the massive difference in disk I/O. The test server is a 4-core (2 x 2-core Xeon) HP ProLiant with 2 x 72 GB 10k RPM drives. The production server is the same, but the drive array is a set of six 15k RPM drives on an HP P400 controller in RAID 5. On the production server, a WinXP VM will boot 20% faster than a physical PC with 4 times the RAM and a dual-core AMD Athlon II. Not the same experience at all on the test server; noticeably slower.

 

So the real question for storage is how much your mix of VMs is going to use, and of that, how much I/O you expect the VMs to generate. If the VMs are more like appliances that do simple things (disk-wise) like BitTorrent or, say, Astaro Security Gateway, then loads of disk I/O is not a big deal once the VM is up and running. Otherwise, you'll want to look at nothing less than WD Black series drives in RAID 5 or RAID 10.
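A crude way to see why the 15k array feels so much faster: the per-drive random-IOPS figures below are common rules of thumb (not benchmarks), and RAID 5 amplifies each random write into roughly four disk operations (read data, read parity, write data, write parity):

```python
# Rule-of-thumb random IOPS per spindle by rotational speed.
# These are illustrative estimates, not measured values.
IOPS_PER_DRIVE = {7200: 80, 10000: 130, 15000: 180}

def array_iops(n_drives, rpm, read_frac, write_penalty=4):
    """Approximate sustainable random IOPS for a RAID 5 array.

    write_penalty=4 models the RAID 5 read-modify-write cycle:
    each logical write costs about four physical disk operations.
    """
    raw = n_drives * IOPS_PER_DRIVE[rpm]
    write_frac = 1 - read_frac
    return raw / (read_frac + write_frac * write_penalty)

# Six 15k drives (like the production box above) vs four 7.2k
# consumer drives, assuming a 70%-read workload:
print(round(array_iops(6, 15000, 0.7)))
print(round(array_iops(4, 7200, 0.7)))
```

Under these assumptions the six-drive 15k array sustains roughly three times the random IOPS of a four-drive 7.2k array, which lines up with the boot-time difference described above.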

 

And briefly on CPU: I have 6 VMs on the quad-core (2 x 2) production server, and not one of them lags at all. They are all running several unattended processes each that are somewhat CPU-intensive (Oracle data processing for specific functions in the operation I support). So, from where I sit, CPU is definitely secondary to disk.

 

I should also say that I'm not at all a fan of Type 2 VMs. I started out that way but found bare-metal Type 1 much better. But this is in a work environment, so use what's best for your needs.


Guest no-control

While I somewhat agree, having enough RAM is more important; once you have enough to drive all the VMs, then disk performance matters. With a RAID 5 as suggested, I haven't seen a huge difference between the Blacks and the Greens (obviously the Blacks will tend to be slightly faster). The difference you're seeing is due to the number of disks you have (6) and their speed (15k). The OP was talking about using a single SSD, so unless he's using 3 of them in a RAID, or one big one just for VMs, it's a moot point. For the cost of six 15k SAS drives he could easily build an array of three 120 GB SSDs.

 

But let's also consider the OP's actual use of the server. As far as I can tell, this is for media streaming and a web server, with some testing: hardly anything that will load the server. Also, without a hard number for a budget, I'm going to assume he's not looking to spend a whole lot. The CPU recommendation was simply due to the need for more than 3 machines; the X3440 would allow 7 VMs plus the host.

 

Lastly, while I tend to agree on Type 2 VMs for production, you've obviously never used Hyper-V. I've used ESXi, and if you deviate from the HCL you're screwed. Not to mention that WS08R2 + Hyper-V will be the only way he can direct-attach HDDs or an array to WHS, should he choose to VM it.


No-control, you are correct that I haven't used Hyper-V, hence why I caveated that what I experienced was at work and that he should use what works best for him.

 

I never bothered to mention RAM, as it's generally moot when so much of it is so damn inexpensive. If I were building a VM environment at home, I would max out whatever the motherboard would hold, short of spending more than say $400, which right now for a desktop mobo would set you up with no less than 16 GB.

 

And yes, VMware ESX/ESXi is problematic if you don't stay within either the official HCL or the community-maintained HCL, so it has a major influence on what motherboard you can choose.

 

But your points swing this back full circle to your first comment: this project of his needs a more solid and specific set of goals, as that will influence the decisions even more than just building one machine for one specific task. Virtualization is awesome, but one poor choice can spoil the whole experience.
