Hi everyone. Today I wanted to install ESXi 6.7 to try out its various functions, and my question is this: I have a server where I've installed ESXi on an SSD, plus two other 4 TB disks (8 TB total). When I create a virtual machine, I'd like the VM itself to live on the SSD and the two disks to be used for data. How can I do this? Thanks a lot to anyone who can help.
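From what I've read so far, each disk needs to be formatted as its own VMFS datastore, and then when creating a VM you pick the SSD datastore for the VM files and add extra virtual disks placed on the HDD datastores. Here is a minimal pyVmomi sketch I put together just to confirm the datastores show up on the host (host, user, and password are placeholders for my lab setup, not anything official):

from pyVim.connect import SmartConnect, Disconnect
import ssl

# Self-signed lab certificate, so skip verification (fine for a home lab only)
ctx = ssl._create_unverified_context()
si = SmartConnect(host="esxi.local", user="root", pwd="password", sslContext=ctx)
content = si.RetrieveContent()

# Walk the inventory and print each datastore's name, type, and capacity
for dc in content.rootFolder.childEntity:
    for ds in dc.datastore:
        cap_gb = ds.summary.capacity / 1024**3
        print(f"{ds.summary.name}: {cap_gb:.0f} GiB ({ds.summary.type})")

Disconnect(si)

If the SSD and both 4 TB disks each appear as a separate datastore, then (as I understand it) it's just a matter of choosing the right datastore per virtual disk in the VM creation wizard.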
Hi! I am planning to buy a MicroServer Gen10 Plus E-2224 for a small business and want to add as much disk space as possible. I already have the older Gen10 with 4x4TB HDDs in RAID10 (8 TB effective capacity) and a 2TB SSD in the 5th slot (which is missing on the new Gen10 Plus). The max capacity stated on the HPE site seems to be limited to their own enterprise HDDs, so I'd like to see if I can go for bigger disks. The server would be running an application that collects lots of data from devices; the data tends to grow over time (1-2 TB per year), and I'd prefer not to worry about capacity for the next several years.
My primary concern is the max capacity (max single-drive capacity + max total RAID capacity with the Smart Array S100i) I can reliably install. If possible, I'd love to hear about an actual setup that has been shown to work with 32GB RAM and the limited 180W power supply. I was also planning on upgrading it to 32GB: if I buy the 16GB RAM model, does that mean both slots are occupied with 8GB modules and I need a 2x16GB kit? As for an OS SSD, I won't have the PCIe slot available, because we need to insert a certain GPS PCIe card that the server uses for time synchronization. So if I want an SSD, does that mean I should use something like 1xSSD + 3xHDD in RAID5 instead of RAID10?
I've been planning to install 16TB Seagate Exos X16 drives, with a rated max operating power of 10W (6.3W for random reads/writes). Does this seem feasible?
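Here's my back-of-envelope math on the capacity and power side (the per-drive wattage is the datasheet figure quoted above; the layouts are just the two options I'm weighing):

# Quick sanity check: usable RAID capacity and worst-case drive power draw
size_tb = 16
max_w_per_drive = 10.0   # rated max operating power per Exos X16
psu_w = 180

raid10_4hdd = 4 * size_tb / 2    # 4 HDDs in RAID10: mirrored pairs, half the raw space
raid5_3hdd = (3 - 1) * size_tb   # 1 SSD + 3 HDDs in RAID5: one drive's worth of parity
hdd_draw = 4 * max_w_per_drive   # steady-state worst case; spin-up is briefly higher

print(f"RAID10 on 4x{size_tb}TB: {raid10_4hdd:.0f} TB usable")
print(f"RAID5 on 3x{size_tb}TB:  {raid5_3hdd:.0f} TB usable")
print(f"Worst-case HDD draw: {hdd_draw:.0f} W of the {psu_w} W PSU")

So either layout gives me 32 TB usable, and the drives alone look well within the 180W budget, though I don't know how much headroom the CPU, RAM, and the GPS card need on top of that.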
Thanks a lot for your tips!
Looking for a quick answer here from someone in the know...
I upgraded some of the internals of my Gen8 server, and at the same time I did a BIOS reset because I had forgotten the password I set years ago. When booting back up, the upgraded hardware was detected and fine, but I received Error 1784 - Drive Array - Logical Drive Failure. I believe this is due to a settings change in the BIOS and not anything hardware-related. I have gone into Intelligent Provisioning / SSA and see the array needs to be rebuilt. I had all drives configured as individual single-drive RAID0 volumes. It says all data will be lost if I reconfigure, but I remember reading somewhere on here that the data will be fine if I reconfigure as RAID0.
I do have a backup, of course, but it's not a high-availability mirror and it will take some time to get things back the way I had them, so I just want to know whether it's safe to reconfigure the RAID0 without erasing the drives. I have tried pulling the drives out and connecting them to another PC, and they are fine. Can anyone tell me which options to choose to get these back to normal and clear the flashing red light?
I'm looking into replacing my desktop's old, failing HDD with an SSD. I've got a Dell XPS 8700. I've looked up the configuration on Dell's website by entering my machine's service tag (DK49122), and I've found the component, which is listed as:
KPF74 : Module,Hard Drive,1T,S3,7.2K,512E,#1,G-BP INFO,1ST BOOT,HARD DRIVE HD,1TB,S3,7.2K,512E,SGT-GRDABP
I know it's a 1 TB drive. It looks to me like it's a SATA III drive. Am I correct?
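For reference, here's how I'm reading the comma-separated part descriptor; the field meanings below are my own interpretation of Dell's shorthand, not anything official:

# My decoding of the Dell part descriptor (interpretation, not official)
descriptor = "HARD DRIVE HD,1TB,S3,7.2K,512E,SGT-GRDABP"
meanings = {
    "HARD DRIVE HD": "descriptor prefix",
    "1TB": "1 TB capacity",
    "S3": "SATA III (6 Gb/s)",
    "7.2K": "7200 RPM spindle speed",
    "512E": "512-byte emulated sectors (4K native media)",
    "SGT-GRDABP": "Seagate vendor/model code (my guess)",
}
for field in descriptor.split(","):
    print(f"{field:14} -> {meanings.get(field, 'unknown')}")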
Assuming I've correctly identified the drive, then I've found what looks like a good replacement part on Newegg which is listed like this: SAMSUNG 860 EVO Series 2.5" 1TB SATA III V-NAND 3-bit MLC Internal Solid State Drive (SSD) MZ-76E1T0B/AM
So, have I got a good replacement SSD for my machine's old HDD?
I've just gotten my old ProLiant server out of mothballs recently to use as a backup controller for my photographic archive. Its specs are a standard G840 2.8GHz dual-core with 16GB of RAM installed and a standard 350W PSU. I'm running WSE 2019 with 4x1TB HDDs installed in the 4-bay drive cage. Three drives are configured as RAID 0 in WSE (software RAID, not the embedded HP RAID) and one drive acts as the system/boot drive.
The server is surprisingly responsive given its age, with LAN transfer speeds between 84-94 MB/s, so it has pretty much taken center stage in my backup setup, sitting between my external drives and a 4x1TB RAID 0 NAS.
However, I have been wanting to attach a 256GB SSD boot drive for the OS, so that I can use the full capacity of the 4-bay cage for data. It seems the BIOS will not let you boot an HDD/SSD off the spare SATA connection on the motherboard. In any case, I'm hoping to eventually add a PCIe USB 3 card, but in the meantime I would like to boot from a separate drive other than the four already installed.
I gather that you can install a PCIe SSD card in this machine, but it's not clear to me what kind of card and drive would be suitable for this server. I realize that my motherboard is PCIe Gen 2, which means diminished speed compared to the latest Gen 3 boards. But should I be using an NVMe or a SATA M.2 drive? Or does it even matter? And will the BIOS allow me to boot from this drive?
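For context, here's the bandwidth math as I understand it (the per-lane and link rates are the standard PCIe/SATA/Ethernet figures; the takeaway at the end is just my own reasoning):

# Rough bandwidth ceilings to see where the bottleneck would actually be
pcie_gen2_lane_mbs = 500   # PCIe 2.0: 5 GT/s with 8b/10b encoding ~= 500 MB/s per lane
sata3_mbs = 600            # SATA III: 6 Gb/s ~= 600 MB/s raw, ~550 MB/s usable
gige_mbs = 125             # gigabit LAN: 1 Gb/s = 125 MB/s theoretical maximum

for lanes in (1, 2, 4):
    print(f"PCIe Gen2 x{lanes}: ~{lanes * pcie_gen2_lane_mbs} MB/s")
print(f"SATA III link: ~{sata3_mbs} MB/s")
print(f"Gigabit LAN:   ~{gige_mbs} MB/s (so my 84-94 MB/s is already near line rate)")

# My reasoning: a SATA M.2 drive tops out at the SATA link regardless of slot,
# while an NVMe drive in a Gen2 x4 slot could still reach ~2000 MB/s -- and
# either one far exceeds what the gigabit LAN can deliver to clients.

So for network-facing work the Gen 2 slot probably doesn't matter much; my bigger question is really whether the BIOS can boot from such a card at all.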
Thanks for taking the time.