RESET Forums (homeservershow.com)

To SSD or not


cattlerustler

Recommended Posts

So I already own two Gen8s, both running ESXi booted from USB, with an SSD cache and spinning disks via Adaptec RAID controllers for the guests.

This performs well enough; one server is running an i5-3470T and the other an E3-1260L.

I also have a Gen7 N54L which I want to retire.

So, I have just purchased a second-hand Gen8 with 16GB of RAM and have ordered an E3-1265L V2 CPU.

I already have an Adaptec 5405H RAID card to go in, but here is the question.

I have no disks for the machine. I was planning a similar approach to my other servers (boot from USB, SSD cache), but what do I do with the guests?

 

So, should I SSD or not? I was going to get 4 x 1TB spinning disks in RAID5, maybe 7200rpm to keep the access times up, but then I thought: what about SSDs? I don't need that much storage (I have several Synology NAS boxes for that), so maybe 4 x 500GB SSDs in RAID5 instead. But will they be OK? Do I need to buy specific ones? I really don't know.

Help? Anyone done similar and gone SSD?
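
(Aside for anyone weighing the same choice: a quick sketch of what those two options work out to in usable space, assuming standard single-parity RAID5 overhead. Figures are nominal, before filesystem overhead.)

```python
# Rough RAID5 usable-capacity comparison for the two options above.
# RAID5 gives (n - 1) * drive_size of usable space; one drive's worth goes to parity.

def raid5_usable_tb(drive_count: int, drive_size_tb: float) -> float:
    """Nominal usable capacity of a RAID5 set, ignoring filesystem overhead."""
    return (drive_count - 1) * drive_size_tb

print(f"4 x 1TB 7200rpm in RAID5: {raid5_usable_tb(4, 1.0):.1f} TB usable")
print(f"4 x 500GB SSD in RAID5:   {raid5_usable_tb(4, 0.5):.1f} TB usable")
```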


I used to run mine booting from an SSD, with data on spinners and then a separate SSD for my VMs...

I was running Server 2016 though...


Depends on what you need, or which bottleneck you want to deal with. A single Gigabit Ethernet port will give you around 125MB/s transfers. You could team the two onboard NICs and get close to 250MB/s. A RAID5 of 7200RPM drives will give you more than 250MB/s. The Adaptec 5405 appears to be 3Gb/s SATA; a four-drive RAID5 using SSDs should give you 700-750MB/s throughput. 1TB SSDs are the sweet spot for price, starting around $100, but you will need to buy adapters to mount them in the 3.5" bays.

It comes down to your workloads and whether you need IO performance internal to the server or throughput to the outside world.
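
(To put rough numbers on that reasoning, here is a back-of-the-envelope sketch. The per-SSD rate and the RAID5 scaling factor are assumptions for illustration, not benchmarks of the 5405.)

```python
# Back-of-the-envelope throughput comparison: gigabit networking vs a
# four-drive RAID5 of SSDs on a 3Gb/s SATA controller. All figures are
# rough planning numbers, not measurements.

def link_mb_per_s(gigabits_per_second: float) -> float:
    """Raw link rate converted to MB/s (1 Gb/s is roughly 125 MB/s)."""
    return gigabits_per_second * 1000 / 8

single_nic = link_mb_per_s(1)        # one onboard gigabit port: ~125 MB/s
teamed_nics = 2 * single_nic         # both onboard NICs teamed: ~250 MB/s

# Assume each SSD delivers roughly 250 MB/s on a 3Gb/s SATA link, and treat a
# four-drive RAID5 sequential read as roughly (n - 1) drives' worth of data
# streaming in parallel -- a common rough estimate.
per_ssd_mb_s = 250
ssd_raid5_read = (4 - 1) * per_ssd_mb_s   # ~750 MB/s, far beyond what the NICs can move

print(f"1 x GbE port : ~{single_nic:.0f} MB/s")
print(f"2 x GbE team : ~{teamed_nics:.0f} MB/s")
print(f"4-SSD RAID5  : ~{ssd_raid5_read} MB/s (only usable for IO internal to the host)")
```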


It is internal workload, not data transfer. I made a typo on the RAID card: it is a 6405H, so it is 6Gb/s. I will be running four Windows VMs. I'm always constrained by RAM sadly, but the footprint and price of the Gen8 make them so attractive that, with a CPU upgrade, they are hard to ignore. I just wish they could take 32GB of RAM.


Please don't take this the wrong way ...

... but ...

... buying all the components except one, and not saying what your network is used for ...

... means you might not like my reply, which could be complete rubbish.

 

When the MicroServers first came out at £100-125 a pop, they were a very cheap way to ease into home lab construction and home storage (becoming better and better for storage as hard disks became larger). Each generation has become less cost-effective in those roles as prices rose.

As you have found, the base model needs to be upgraded with CPU, RAM and storage to produce any level of production facility for VMs.

 

I would have sold all the old Microservers and bought a more modern system with:

- NVMe storage

- 32, 48, 64 ... GB RAM according to requirements

- fast CPU

- 8 disk bays for 3.5" disks

- 10GbE

 

 


The problem is I need multiple devices for resilience, and power consumption matters too.

The last problem is that I'm rubbish at ever getting round to selling things; buying cheap second-hand works for me currently.

If you really feel it is important: I sadly have little trust in the cloud and feel the need to keep my data, and especially my email, close to me and under my control. I also happen to be a Microsoft partner, so I run systems to keep up to date, and I like to keep my finger in with ESXi. With all that, I run Exchange 2016 with DAGs spread over multiple VMs, and I host websites and email services for non-profit organisations for free, because I can. I want to migrate to Exchange 2019, but I just don't have enough CPU capacity currently, hence buying another Gen8. I was really only trying to gauge whether using SSDs would provide a reasonable performance gain. Server 2016 and 2019 use huge IOPS during Windows updates and when running Exchange services, so I wanted to speed them up.

I have now done a reasonable amount of research and I think I'm going to go for WD Red 500GB SSDs; they are at a nice price point and have a 5-year warranty, with the bonus of being designed for 24/7 operation.

Whilst I might use reasonably cheap kit for the servers themselves, I tend not to skimp on disks, as that is where most failures come from. I tend to cycle my older disks to less important backup tasks before they are retired after 5 years or so.

Enterprise SSDs, however, are just completely out of reach price-wise. It will be interesting to see if I can get a reasonable performance improvement with the SSDs.

Oh, and I have no need for 10GbE; it provides no benefit to my use currently.


One area I forgot to mention is write endurance on SSDs. The majority of SSDs sold are read intensive. Your workloads will dictate your read/write intensity, and RAID5 adds some writes due to parity. For an SSD with a 5-year warranty, most enterprise server vendors define the classes roughly as: Read Intensive is rated at 1 full drive write per day for 5 years, Mixed Use at 5 full drive writes per day for 5 years, and Write Intensive at 10 full drive writes per day for 5 years. From what I see in the enterprise, most use Read Intensive, with Mixed Use for database applications and Write Intensive for log files or data ingestion. Read Intensive will likely work fine for your workloads.

Good article comparing DWPD vs TBW:

https://thessdguy.com/comparing-dwpd-to-tbw/
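
(For anyone comparing spec sheets, the relationship in that article boils down to TBW = DWPD x capacity x 365 x warranty years. A quick sketch follows; the 300 TBW figure for a 500GB consumer drive is a placeholder, not a quoted rating.)

```python
# DWPD <-> TBW conversion, following the relationship in the linked article:
# TBW = DWPD * capacity_in_TB * 365 * warranty_years.

def tbw_from_dwpd(dwpd: float, capacity_tb: float, warranty_years: float) -> float:
    """Total terabytes written implied by a DWPD rating over the warranty period."""
    return dwpd * capacity_tb * 365 * warranty_years

def dwpd_from_tbw(tbw: float, capacity_tb: float, warranty_years: float) -> float:
    """Drive writes per day implied by a TBW rating over the warranty period."""
    return tbw / (capacity_tb * 365 * warranty_years)

# Hypothetical 500GB consumer SSD rated at 300 TBW with a 5-year warranty
# (placeholder rating -- check the actual datasheet).
print(f"300 TBW on a 500GB drive over 5 years: {dwpd_from_tbw(300, 0.5, 5):.2f} DWPD")

# The enterprise endurance classes mentioned above, expressed as TBW for a
# 500GB drive with a 5-year warranty:
for label, dwpd in [("Read Intensive", 1), ("Mixed Use", 5), ("Write Intensive", 10)]:
    print(f"{label}: {tbw_from_dwpd(dwpd, 0.5, 5):.0f} TBW")
```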


  • 2 weeks later...

Well, I've now finished all my upgrades after having to wait for a new CPU. I have to say the new server with the SSDs is a breath of fresh air; the guests are so much faster. I'm now going to save up and change all of them to SSD. I've also found a Chinese seller of P420i RAID cards, so I think I will swap out all the RAID cards so I can properly see drive health. The LSI cards are great, but you can't see the health of the drives with ESXi 6.5 U3 and above.


I thought you were using Adaptec? You will need to wipe and rebuild your RAID sets if you move to HPE. The P420i is a mezzanine card and will not fit in a MS Gen8. You need a P420.


I was due to use an Adaptec card, but it would not configure the array properly. I'd had it in a box for a few years; maybe that was why it was in a box? So I stripped the LSI card from the old N54L and used that in the new server. Sorry, I got carried away and added an "i" on the end; I've just confirmed it is a P420 with 1GB cache and a new battery, and in fact it looks new. I'm happy to rebuild the RAIDs, as I can vMotion the guests off and rebuild the arrays onto the new cards, or maybe even onto SSDs if I can find the cash for another eight!

