RESET Forums (homeservershow.com)

ESXi 6.5 - Embedded host client request timeout issues


sofianito

Recommended Posts

Hi,

I am experiencing periodic request timeouts when pinging the host, and my SSH sessions go stale and sometimes get dropped...

I had ESXi installed on a SAN USB3 16 GB stick when the issues started, and I thought they were related to the performance of the USB drive, so I reinstalled ESXi on a Samsung SSD. Unfortunately, I still get the same periodic request timeouts. I updated the Host Client VIB to the most recent version (esxui-signed-5214684.vib) and set the application timeout in the console to 2 hours, but I am still facing the same issues.
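In case it helps anyone reproduce the steps, this is roughly how the VIB update and timeout change can be done over SSH; the file path and the advanced-option name below are from memory, so treat them as assumptions:

   # install the updated Host Client VIB (path is an example)
   esxcli software vib install -v /tmp/esxui-signed-5214684.vib

   # raise the Host Client application/session timeout to 2 hours (7200 seconds)
   # assumption: the relevant advanced option is UserVars.HostClientSessionTimeout
   esxcli system settings advanced set -o /UserVars/HostClientSessionTimeout -i 7200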

Has anyone faced or solved these issues?

Thanks

Edited by sofianito


  • Similar Content

    • Giuseppolino87
      By Giuseppolino87
      Hi everyone, today I wanted to install ESXi 6.7 to try out the various functions. My question is the following: I have a server where I have installed ESXi on an SSD, and I have two other 4 TB disks, so 8 TB in total. I would like to make sure that when I create a virtual machine, the various virtual machines are installed on the SSD and the two disks are used for data. How can I do this? Thank you very much to anyone who can help.
    • E3000
      By E3000
      Hello all,
       
      A few questions for those who use Type-1 Hypervisors on their Gen8 MicroServers...
       
      I am looking to try ESXi or ProxMox and have been reading a lot of the threads on here.
      Hopefully you guys can help with some harder to find answers I have been seeking.
       
      1) Which would be the better way to set up ProxMox:
           a) Hypervisor on Internal MicroSD, VMs installed on SSD in ODD Port, Data on 4x HDDs in bays.
           b) Hypervisor on Internal USB, VMs installed on SSD in ODD Port, Data on 4x HDDs in bays.
           c) Hypervisor and VMs both installed on same SSD (partitioned?) in ODD Port, Data on 4x HDDs in bays.
           d) Hypervisor on SSD using a USB-to-SATA cable on Internal USB, VMs installed on separate SSD in ODD Port, Data on 4x HDDs in bays.
       
      2) Would a 128GB SSD be a 'waste' for installing a hypervisor on? How much space is typically needed?
       
      3) How many VMs have you guys run on a Gen8 comfortably without it being sluggish?
       
      4) Everyone seems to be going RAID crazy these days. Is there any reason to use it if high-availability is not that necessary and a good backup plan is in place? What is wrong with separate disks (or singular Raid0s)?
       
      5) Does using Type-1 hypervisors have any effect on the internal fan speed/noise? Is it possible to have 3-5 VMs running and still have the fan speed at ~8%, as it was when I was using 2 nested (Type-2) VMs?
       
      Sorry in advance if some of these questions are silly, common knowledge, or “depends on what you are doing in the VMs!” 😆
       
      Thanks in advance to all those that help!
    • acidzero
      By acidzero
      Hello,
       
      So, after several days of testing various configurations, creating custom ESXi install ISOs, and numerous reinstalls, I've managed to get ESXi 6.5U1 installed on my MicroServer Gen8 with working HP Smart Array P410 health status. For those struggling to do the same, here's how. I used the original VMware ESXi 6.5U1 ISO, build 5969303, then made the following modifications:
       
      Remove driver "ntg3" - If I left this in, I had a weird network issue where Port 1 or 2 would repeatedly connect/drop every few seconds. This forces ESXi to use the working net-tg3 driver Remove driver "nhpsa" - this Smart Storage Array driver is what causes array health monitoring to not work. Remove to force ESXi to use working "hpsa" driver Add the Nov 2017 HPE vib bundles Remove hpe-smx-provider v650.01.11.00.17 - This version seems to cause the B120i or P410 to crash when querying health status Add hpe-smx-provider v600.03.11.00.9 (downloaded from HPE vibsdepot)
      Add scsi-hpvsa v5.5.0-88 bundle (downloaded from HPE drivers page)
      Add scsi-hpdsa v5.5.0.54 bundle (downloaded from HPE drivers page)
       
      I did the above by getting a basic/working ESXi/VCSA installation and then creating a custom ISO in VCSA AutoDeploy and exporting it. But the same can be achieved by installing VMware's original ISO and modifying it via esxcli, roughly as sketched below.
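      A minimal sketch of the esxcli route, assuming the host is in maintenance mode and the HPE bundles have already been copied to /tmp (the bundle file names here are illustrative, not the exact ones):

         # remove the drivers/provider that break networking and health monitoring
         esxcli software vib remove -n ntg3
         esxcli software vib remove -n nhpsa
         esxcli software vib remove -n hpe-smx-provider

         # add the HPE bundles and the older smx-provider (depot paths are examples)
         esxcli software vib install -d /tmp/hpe-esxi6.5-bundle.zip
         esxcli software vib install -d /tmp/hpe-smx-provider-600.03.11.00.9.zip
         esxcli software vib install -d /tmp/scsi-hpvsa-5.5.0-88.zip
         esxcli software vib install -d /tmp/scsi-hpdsa-5.5.0.54.zip

         # reboot so the driver changes take effect
         reboot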
       
      I have an SSD connected to the SATA port, onto which I am installing ESXi. The 4 front drive bays are connected to the P410.
       
      Configure the MicroServer Gen8 B120i to use AHCI mode - the B120i is a fake RAID card, so it only reports physical disks to ESXi. Leaving it in RAID mode works, but I got a false health alert on Disk Bay 5.
      Install my modified ESXi ISO to the SSD.
      With these modifications I have a working ESXi 6.5U1 on my Gen8 with fully functioning HPE tools and array health monitoring:
       

      I also tested disabling the vmw_ahci driver, which is why the AHCI controller shows it is using ahci in the above image.
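      Disabling it is roughly the following (a sketch; run over SSH and reboot afterwards):

         # disable the native vmw_ahci driver so the legacy ahci driver claims the controller
         esxcli system module set --enabled=false --module=vmw_ahci
         # after the reboot, confirm which driver each adapter is actually using
         esxcli storage core adapter list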

       
      If I pull out a disk to test a RAID failure, when the health status next updates I can see a RAID health alert in the ESXi web GUI:
       

       
      However, I'm now stuck at the next stage: getting this storage health to pass through to VCSA. VCSA can successfully see all the other health monitors (under System Sensors in ESXi), just not the storage, which is the most important one.
       

       
      Does anyone know how I can get the storage health working in VCSA?
       
      Thanks.
    • H00GiE
      By H00GiE
      I'd like to start using vDGA on ESXi with my ML10v2.
      The host is running ESXi 6.0 u3 HP Customized.

      If successful/possible, I'd like to be running a Windows 10 VM with a Quadro K2000
      (which is on VMware's HCL for vDGA).
      The VM will have up to 12 GB RAM and 4 vCPU cores, with the Quadro card in passthrough mode. This VM is meant to broadcast live video to Twitch/YouTube/Facebook Live using XSplit.
      Live audio will be muxed in via Virtual Audio Cable and a local Icecast server's stream.
      It does not matter if the remote desktop is stuttering or choppy as long as the broadcast material is acceptable.
      There will be a lot of video clips and overlays running, and I'll be running real-time 3D visualizations. The CPU and GPU would normally handle this workload easily.

      An HP 332T and an NC112T will be replacing 3 of the 4 ports of the NC364T, as that card won't have enough bandwidth on a 1x PCIe slot for 4x gigabit connections.
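      Rough arithmetic behind that bandwidth claim (assuming the slot is electrically x1 and uses 8b/10b encoding, i.e. PCIe 1.x/2.0):

         PCIe 1.1 x1: 2.5 GT/s raw  ->  ~2 Gbit/s usable per direction
         PCIe 2.0 x1: 5.0 GT/s raw  ->  ~4 Gbit/s usable per direction
         4x gigabit ports, saturated: 4 x 1 Gbit/s = 4 Gbit/s per direction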
       
      Considering the config of my ML10v2 (below in my signature) and the fact that I have an ML310e v2 front 80mm fan installed in the server, I'm just wondering:
      1: Can the ML10v2 handle all its PCIe lanes being saturated (PCIe 3.0: 8x for the Quadro K2000, 8x for the LSI/Cisco RAID 9271CV-8i; PCIe 2.0: 1x for the HP 332T and 1x for the NC112T)?
      2: Will any of this config cause a bottleneck for any other hardware?
      3: Is this a feasible configuration? (The resources are there to be used; it will not take resources away from other VMs.)
    • koth
      By koth
      I'm not sure I would call this a review, but I want to share some initial observations.
       
      Boot time is slow. I never had a Microserver before, so this might well be faster than the previous generation, but this is still pretty slow.
       
      It didn't come with any real instructions; they just direct you to a URL, and the URL is broken or not active yet. Nice one, HPE.
       
      I loaded ESXi 6.5 immediately. During the install process it hung at 27% for a very long time, so long that I was sure it was locked up. I went to download an ISO of Windows Server to install instead, but by the time I finished downloading it, the ESXi install had actually finished.
       
      I'm installing a Windows Server 2016 Standard system as my first VM. For some reason it choked when I tried to start the VM with 5 GB of RAM (out of the 7.5 GB available). Maybe the virtual DVD drive uses a lot of RAM? Set to 4 GB, it starts fine. Still going through the install process at this point.

       

       
