As an avid fan of dedicated boxes, I have always opted not to use virtual machines in my production environment. Not because I am against them, but for the sake of simplicity and backup. I have been playing around with Hyper-V as well as desktop VM solutions such as VirtualBox and VMware Player for quite some time, but found too many inherent limitations. On BYOB #70 we had an opportunity to discuss VMs with Paul Braren and talked for a while about the ESXi hypervisor. At first glance it looks very complicated and somewhat intimidating, but after listening to Paul I wanted to give it a shot, as I was eager to try something that would provide the features I needed to actually put into my production environment and use.
Problems and limitations of other solutions
First off, let me say that all solutions have their strengths and weaknesses. Desktop solutions are great for testing but rely on the desktop OS for stability. Hyper-V is very simple to use and powerful, but very limited in certain areas such as USB devices; for the most part, playback and video applications do not work correctly unless you add a dedicated video card. To better understand these limitations, it is important to define what I was trying to do. Here is a list of tasks I wanted to put on different machines and what I needed them to do.
- Monitoring software for viewing and recording 4 IP cameras. Must be able to playback content directly on the VM for review.
- Install and run home automation software. VM must be able to access a USB controller.
- Podcast/audio recorder. Must be able to access USB microphone, USB Web Cam and support sound playback.
- Transcoding/streaming. VM must be able to transcode Blu-ray movies and stream them to my mobile device using Air Video and other similar utilities.
As you can see from the scope of what I wanted to do (tasks close to the real-world workloads many of us run), I needed my VMs to attach to specific hardware as well as have the ability to play back video. With Hyper-V, I was not able to do any of the tasks above, and the use of desktop systems for production did not appeal to me, as I want to maximize performance and stability.
After several discussions with Paul, I decided to give ESXi a try. I did this project in two phases. The first phase was testing on my Core i3 test system. I wanted to see what was involved as well as assess the performance of ESXi. After successfully installing ESXi, I was able to install the guest OSes and run all the applications above.
For the second phase, before I spent too much time configuring the guest OSes, I decided to build a dedicated VM box that would ultimately take over the duties of at least two of my existing physical machines. Per Paul's information, I wanted a board that would support VT-d. Although I had no issues with the existing Gigabyte board in doing everything I wanted to do, I thought it would be best to plan for future expansion, so I went searching for a VT-d-compliant board. Much to my surprise, I found the same results as Paul had: very few board manufacturers implement VT-d. After looking around at various boards, I ended up with an ASRock Extreme 3 ($124). It was the cheapest compliant board I could find using the Z68 chipset, and it is directly supported by VMware. The processor and RAM were pretty straightforward, and I ended up with a Core i5-2500 and 16 GB of G.Skill DDR3-1600.
ASRock Extreme 3
16 GB of G.Skill Ripjaws X DDR3-1600
Intel Core i5-2500
Lian Li Lancool (existing case)
Corsair USB 3.0 8 GB flash drive
300 GB Western Digital Raptor
3 TB Western Digital Green drive (backup and temp storage)
Power consumption is 60-66 watts in its normal state.
Based on Paul's recommendation and his description of how this works, I installed the hypervisor onto a USB flash drive, setting the boot order in the BIOS to CD first, then the USB drive.
ESXi is the first virtualization package I have tried that is capable of doing all the tasks I need in terms of controlling hardware. The installation process is easy and straightforward; however, the guest installs and configuration can be daunting at first. If you are patient and can get a bit of help, you will find that "most" of the time it is a fairly simple process. The one thing I ran across that I thought was strange (though now it makes sense) was the installation of a separate driver pack, VMware Tools, in each of the guest machines, which optimizes the mouse/keyboard, sound, and video. Prior to installing these drivers, the mouse response was horrible and interaction with the VM windows was frustrating.
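If you are not sure whether that driver pack made it into a guest, ESXi can tell you. As a rough sketch only: this assumes SSH access to the host is enabled, and parses the output of the host's `vim-cmd vmsvc/get.guest <vmid>` command, which reports a `toolsStatus` field. The sample text below is an illustrative, trimmed version of that output, not a capture from my box.

```python
import re

def tools_status(get_guest_output: str) -> str:
    """Extract the toolsStatus field from `vim-cmd vmsvc/get.guest <vmid>` output.

    Returns e.g. "toolsOk" when VMware Tools is installed and running,
    or "toolsNotInstalled" when the driver pack is missing.
    """
    match = re.search(r'toolsStatus\s*=\s*"([^"]+)"', get_guest_output)
    return match.group(1) if match else "unknown"

# Trimmed, illustrative sample of what the command prints on the host:
SAMPLE = '''
(vim.vm.GuestInfo) {
   toolsStatus = "toolsOk",
   guestState = "running",
}
'''

print(tools_status(SAMPLE))  # -> toolsOk
```

On the host itself you would feed it real output, e.g. `ssh root@esxi 'vim-cmd vmsvc/get.guest 1'`, where the hostname and VM ID are placeholders for your own setup.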
Conclusion - Did it all work?
Well, yes and no. All of the tasks stated above were successfully loaded and set up in a VM and operated perfectly. The system ran great and had more than enough power to run all the VMs I had set up (a total of 4). The main issue I ran into, and am still trying to work out, is that despite disabling the power-saving settings in the VMs, the server seems to drop into a sort of reduced-power state after a period of time. Normally this would be a desirable condition; however, because I am monitoring devices and services such as Photo Stream, cameras, USB home automation controllers, and Eye-Fi software, this presents a problem, as these items slowly stop responding. For example, after the server and VMs sit for a while and I try to upload some pictures via my Eye-Fi, there is no response. Likewise, my home automation starts going squirrely and devices such as sensors stop working correctly. Since this happens to all the VMs, I am sure there is a simple solution, but right now it is an issue. I have no doubt that I will find the problem, but until then the box will have to stay in an experimental state and I will have to continue to rely on dedicated boxes. I also ran into other minor issues which, while not show-stoppers, I will mention anyway.
- Could not get the auto start/stop feature to work correctly.
- Backing up the VMs is straightforward, but imaging the entire drive that contains the VMs is a challenge.
- Still trying to figure out how to configure the UPS. It is easy to shut down one VM, but shutting down all the VMs and then the server is a different story.
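For that last item, the usual approach is a script the UPS software fires on battery power that walks every registered VM and asks for a clean, Tools-assisted shutdown before halting the host. The sketch below is untested and full of assumptions: the hostname `esxi.local`, root SSH key access to the host, and the 120-second grace period are all placeholders, and the `SAMPLE` text is a trimmed illustration of `vim-cmd vmsvc/getallvms` output rather than real capture.

```python
import re
import subprocess

def parse_vm_ids(getallvms_output: str) -> list:
    """Pull the numeric VM IDs out of `vim-cmd vmsvc/getallvms` output.

    Each VM line begins with its ID followed by the VM name; the header
    line ("Vmid   Name ...") does not match and is skipped.
    """
    ids = []
    for line in getallvms_output.splitlines():
        m = re.match(r"\s*(\d+)\s+\S", line)
        if m:
            ids.append(int(m.group(1)))
    return ids

def shutdown_all(host: str = "esxi.local") -> None:
    """Ask each guest for a clean shutdown, then power off the host.

    Untested sketch: requires SSH enabled on the ESXi host and key-based
    auth for root. `power.shutdown` needs VMware Tools in the guest.
    """
    out = subprocess.run(["ssh", f"root@{host}", "vim-cmd vmsvc/getallvms"],
                         capture_output=True, text=True, check=True).stdout
    for vmid in parse_vm_ids(out):
        subprocess.run(["ssh", f"root@{host}",
                        f"vim-cmd vmsvc/power.shutdown {vmid}"], check=False)
    # Give guests time to finish shutting down, then halt the host itself.
    subprocess.run(["ssh", f"root@{host}", "sleep 120 && poweroff"],
                   check=False)

# Trimmed, illustrative sample of getallvms output:
SAMPLE = """Vmid   Name       File                        Guest OS
1      homeauto   [datastore1] ha/ha.vmx      windows7_64Guest
2      cameras    [datastore1] cam/cam.vmx    windows7_64Guest
"""
print(parse_vm_ids(SAMPLE))  # -> [1, 2]
```

A UPS daemon such as apcupsd can call a script like this from its low-battery hook; only `parse_vm_ids` runs anywhere, while `shutdown_all` only makes sense pointed at a live host.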
I really believe that this is a viable solution once a few annoying kinks get worked out in my particular setup. My plan is to continue working on this until I understand it better, as the potential is too great. It is still the best VM solution I have used, and depending on your needs and your hardware, it could do everything you need it to do. Once I figure all this out I will post an update. Again, a special thanks to Paul for the help and guidance he provided on this project, and keep an eye out for Part 2…