Building a Home VMWare Server – UPDATED

I am building my own VMWare "whitebox" server for home. I manage ESX systems at work, and I find that using similar equipment and software at home helps me find, and resolve, issues before I have to deal with them at work. I'm also interested in saving on my electric bill and increasing my systems' capabilities. With four computers in my small corner bedroom it gets hot; the air conditioner runs almost continuously during the summer. Along with the build project I'll be reporting on my energy savings.

I have three Linux servers to convert. I'll use ESXi to re-create them as virtual systems. They are a firewall, a web/data-store server, and an email server. The firewall is a small system running Untangle with a single disk, the web server is a Fedora Core system with a four-disk RAID 5, and the email system runs Zimbra with mirrored disks.

I'm also interested in experimenting with new systems.  Some of these are Gentoo, MythTV (MythBuntu), OpenFiler, BackTrack and Zarafa email. I may report on these as well.

In the next few posts I'll go over:

  • Hardware – What I purchased and why.
  • Server Construction – How I put everything together.
  • System Migration – Moving from physical systems to virtual systems.
  • Network configuration – How one Internet connection is connected to all the virtual servers and my desktop.
  • Benchmarks – How fast is everything, and are there better configurations?
  • Power savings – Did I really save on my power bill? What is my return on investment?
  • Tweaks, Hacks and Tuning – Little things that make life better in a virtual world.

If you have questions about this project please email Mark @ or Tweet me at mgrennan


Hardware Details

The website Ultimate VMWare ESX Whitebox was very helpful in making sure the hardware I selected was going to work. Most of the hardware has been purchased. I chose the hardware I did based on price and features. I wanted a system with as few "frills" as possible but with as much power as I could get: lots of CPU, memory, and SATA disk connections. I didn't care about video, IDE disks, or sound.

I chose SuperMicro because I use their hardware at work and I like their designs. The MBD-X8STE board has all the right stuff without anything extra. For a short time I considered the X7DBE but learned its RAID was a SOFTWARE RAID and didn't work with ESX. I've included links to the components I've chosen. The prices quoted are from the time I bought them.

The total bill is about $1,300.00. This is not the ultimate configuration for this system. I could have purchased 4GB sticks of memory for a total of 24GB at a much higher price. Note there is no CD-ROM or floppy drive, by design.

Today's price (7/22/2010) is $949.39.

Building a Home VMWare Server
UPDATE – Building the server
The process of putting the hardware together was very straightforward. The CPU installation instructions are good, and the heat sink mounted to the motherboard without any trouble. I then installed the memory on the main board before mounting the main board into the case. This case has lots of room. The only somewhat awkward part was plugging in the SATA drive cables mounted on the edge of the main board.

I was unhappy with the way the power supply cables ran across the CPU. The CPU fan is open (no housing) and would be easy to stop with a loose wire. I secured them out of the way with some tie wraps.

More photos of the server are coming.


I tested the system with no CD-ROM, hard disks, or floppy. I used a USB thumb drive with Memtest86 (free) loaded on it to run a 48-hour memory test. The test results showed no errors of any kind.

Next I purchased two 1.5TB disks and added them to the system. Again I loaded my thumb drive, this time with SpinRite (not free), to run a complete surface test of the disks. To install SpinRite to a USB thumb drive, you run it in Windows and select "install to disk". Again there were no errors. I do this for all new hard disks I buy. SpinRite has saved my systems many times.

Finally I loaded the thumb drive with VMWare ESXi 4.0 (free). I downloaded and burned the ISO to a CD using ISO Recorder (free). Then I booted the CD on my laptop and followed the instructions to copy it to a USB disk. Here is a video of the install process. After changing the BIOS settings to boot from USB, ESXi booted the first time.

Both my laptop and the new server are 64-bit systems. You will have trouble booting the CD if all you have is a 32-bit computer; ESXi 4.0 is 64-bit only. Here are some other methods people have used to create a bootable USB thumb drive on 32-bit systems.



Q. Can I install with a CD-ROM drive?
A. Yes. If you purchase and install a CD-ROM drive you can boot the ESXi CD and install directly to a USB or local drive. I chose not to do this because of cost, and I didn't want to use space on my hard disks for a boot partition.

Building a Home VMWare Server
The Conversion Process

To rebuild or to convert, that is the question. Should I build new systems and migrate all my data to them, or should I clone the physical systems into virtual ones? I'd like to retire the older systems as soon as I can and sell them to support this project. Cloning would be faster but requires more disk space. When you run physical systems you purchase the largest hard disks you can at the time; with a virtual system you can create disks of "just the right size".

I have three systems to convert.

  1. The email server – Running Zimbra
  2. The web server – Running Fedora Core, Apache httpd and Samba
  3. The firewall – Running Untangle

This has been the slowest part of the project.  Converting the first two systems has taken weeks (of my spare time).

The Email Server – Direct conversion from physical to virtual
I thought my email system would be the biggest problem. As it turns out, it was the first one I converted. Like many systems that started as one thing, it has become more. It was going to be email only; it has become the Windows backup system through a Samba share, making it the biggest disk to be converted. It has two 500GB hard disks in a RAID1 mirror.

Because this system runs Zimbra (not simple to install) and has Logical Volume Management (LVM) partitions, I decided to do a direct physical conversion. I first tried using the dd program to copy the running system to a USB disk. This didn't work at all; the dd program does not create files compatible with VMWare.

I found that using qemu-img, a Linux program, I was able to create a .vmdk file VMWare would use. I believe it created a "growable" disk, because it was only 300GB in size. I then used NFS to share out the USB disk to VMWare and added the NFS share to VMware's data stores. I then ssh-ed to the VMware console (see SSH notes) and used vmkfstools to copy the .vmdk file to a local data store.

qemu-img convert -f raw /dev/md1 -O vmdk /media/usb1/mail.vmdk

vmkfstools -i /vmfs/volumes/NFS_Datastore/mail.vmdk /vmfs/volumes/Datastore_1/mail.vmdk
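The NFS share mentioned above is set up on the Linux side with a one-line export. Here is a hedged sketch, not the exact entry from my system: the subnet is an assumption, and it is shown against a scratch exports file so it can be tried anywhere; on the real host you would edit /etc/exports and re-run exportfs.

```shell
# Sketch: export the USB disk read-only to the LAN (subnet is an assumption).
# Shown against a scratch file; on the real host the file is /etc/exports.
EXPORTS=/tmp/exports.sketch
echo "/media/usb1 192.168.1.0/24(ro,no_root_squash,sync)" > "$EXPORTS"
cat "$EXPORTS"
# On the real host, apply the export with: exportfs -ra
```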

Again I tried to run the conversion on a "live" system, because no one wants their email server down, including my wife. Because the server used volume labels, the kernel was able to mount all needed partitions when it booted. But Zimbra didn't work, because the database became corrupted. This is why you should not do "live" system conversions. I repeated the process in single-user mode and everything was good.

This process creates a system with a single disk, but the mail system was a two-disk mirror. I created a second disk of the same size as the first, on a different data store, and added it into the mirror. Information on the RAID commands is on

My last step was to rsync the /opt directory between the old and new systems with the email system down. I then powered off the old physical system and changed the IP address of the new virtual system. Voilà.

NOTE: I learned the cp command in the VMware console is very slow. It took hours to copy just 10% of a disk, while vmkfstools copied the whole file in the same time.

The Web Server – Upgrading to a new OS
The web server was running an old version of Redhat Fedora Core, soon to go out of support. It runs all my websites, databases, and DNS, and is my file server. The physical system had four hard disks in a software RAID5 with 360GB of space. I first tried to convert it by using dd to copy the Linux RAID (md1) partition and then using qemu-img to convert it to a .vmdk file. This didn't work at all; the md1 partition does not have any boot sector information.
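You can see why the md1 image wouldn't boot: a bootable disk ends its first sector with the 0x55AA MBR signature, and an image of a bare md member has no partition table at all. A small sketch (using a scratch file standing in for the copied partition image):

```shell
# Create a 512-byte all-zero "first sector", like the start of a bare md image.
dd if=/dev/zero of=/tmp/noboot.img bs=512 count=1 2>/dev/null

# A bootable MBR ends with bytes 55 aa at offset 510; here we get zeros.
sig=$(od -A n -t x1 -j 510 -N 2 /tmp/noboot.img | tr -d ' ')
echo "boot signature: $sig"   # "0000" here; "55aa" on a bootable disk
```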

I then considered converting each of the physical disks and reassembling them in the virtual environment back into the RAID5. I only have two physical hard disks, so two of the four images would have to exist on each disk. This seemed like a lot of work, and just odd.

Because this was an old version of Redhat Fedora Core, I converted it to CentOS 5.3 and updated to version 5.4 before I was done. This involved the normal copying of all the data with rsync from the old system to the new one and reconfiguring the applications.

The Firewall – Rebuilding from scratch
The firewall runs one hard disk because it doesn't need any redundancy. It stores no data beyond its own configuration, and I back that up to my workstation. To convert this system I just rebuilt it from scratch and copied the configuration over.

Enabling SSH on ESXi system
ESXi 3.5 does ship with the ability to run SSH, but this is disabled by default (and is not supported). If you just need to access the console of ESXi, then you only need to perform steps 1 – 3.

  1. At the console of the ESXi host, press ALT-F1 to access the console window.
  2. Enter the word unsupported in the console and then press Enter.
  3. You should see the Tech Support Mode warning and a password prompt. Enter the password for the root login.
  4. You should then see the prompt of ~ #. Edit the file inetd.conf (enter the command vi /etc/inetd.conf).
  5. Find the line that begins with #ssh and remove the #. Then save the file. If you're new to using vi, move the cursor down to the #ssh line and press the Insert key. Move the cursor over one space and hit backspace to delete the #. Then press ESC and type :wq to save the file and exit vi. If you make a mistake, you can press the ESC key and then type :q! to quit vi without saving the file.
  6. Once you've closed the vi editor, run the command /sbin/services.sh restart to restart the management services. You'll now be able to connect to the ESXi host with an SSH client.
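The vi edit in steps 4–5 amounts to uncommenting one line, so it can also be done with a single sed substitution. A hedged sketch, shown against a scratch copy of the file (the sample line is illustrative, not copied from an ESXi host); on the host itself you would run it against /etc/inetd.conf:

```shell
# Sample inetd.conf content (illustrative); the real file is /etc/inetd.conf.
cat > /tmp/inetd.conf.sketch <<'EOF'
#ssh stream tcp nowait root /sbin/dropbearmulti dropbear -i
EOF

# Uncomment the ssh service, as the vi steps above do by hand.
sed 's/^#ssh/ssh/' /tmp/inetd.conf.sketch
```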




Building a Home VMWare Server
Network Configuration

The first network interface connection (NIC) on the motherboard is used as the Internet (public) side of the firewall, and the second is used to connect to the inside private LAN.

Internet traffic comes in through the cable modem and into the first NIC (NIC 1). This is connected to the first virtual switch (Switch 1). The firewall has three NICs, and its "public" interface plugs into Switch 1. The firewall's second NIC connects to the second virtual switch (Switch 2), which then connects to the second real NIC (NIC 2). This is the "private" side. The second real NIC connects to a physical switch for the home network. The web and email servers connect to the third virtual switch (Switch 3) and the third NIC on the firewall.

The firewall controls access between the Internet and everything else. Public traffic has access to web and mail but not the private LAN. The private LAN has access to the servers, but the servers don't have access to the private LAN.
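For reference, virtual switches like the ones described above can also be created from the Tech Support Mode console with esxcfg-vswitch. This is a hedged sketch (the switch and port-group names are mine, not from my actual configuration), guarded so the commands only run where the tool exists:

```shell
# Sketch: create the public-side switch, bind the first physical NIC, and add
# a port group for the firewall's public interface. Names are assumptions.
if command -v esxcfg-vswitch >/dev/null 2>&1; then
    esxcfg-vswitch -a vSwitch1           # Switch 1 (public side)
    esxcfg-vswitch -L vmnic0 vSwitch1    # attach NIC 1
    esxcfg-vswitch -A Public vSwitch1    # port group the firewall plugs into
else
    echo "esxcfg-vswitch not found (not an ESXi host)"
fi
```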



Building a Home VMWare Server


Mount eSATA Disks


# Unmount and Mount new Disk for Backup
# To find the device to unmount… see 'ls -al /dev/disk/by-id' and match to 'cat /proc/scsi/scsi'
# If NFS is used
#/etc/init.d/nfs stop
#sleep 2
#umount /mnt/VM.Backups
#echo "scsi remove-single-device 2 0 0 0" > /proc/scsi/scsi
#echo "Remove the disk now"

echo "scsi add-single-device 2 0 0 0" > /proc/scsi/scsi
mount /dev/sdc1 /mnt/VM.Backups
/etc/init.d/nfs start
echo "The disk is ready."

Create a list of systems to back up:

/bin/vim-cmd vmsvc/getallvms | grep -v Vmid | awk '{print $2}'
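Fed sample output, the same pipeline looks like this. A hedged sketch so it can run off the host (the sample VM names and layout are mine, approximating what getallvms prints); on the ESXi console, /bin/vim-cmd vmsvc/getallvms supplies the real input:

```shell
# Sample of what 'vim-cmd vmsvc/getallvms' prints (names are illustrative).
cat > /tmp/getallvms.sketch <<'EOF'
Vmid Name       File                    Guest OS      Version
1    mailserver [Datastore_1] mail.vmx  otherLinux64  vmx-07
2    webserver  [Datastore_1] web.vmx   otherLinux64  vmx-07
EOF

# Drop the header line and keep the second column, the VM name.
grep -v Vmid /tmp/getallvms.sketch | awk '{print $2}'
```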

Using the ghettoVCB backup script.

MySQL Server Testing


As a MySQL administrator, I care about database throughput, so I did some testing.

The test sequence is:

service mysqld start
mysqladmin create sbtest
sysbench --test=oltp --oltp-table-size=200000 --mysql-user=root prepare
sysbench --num-threads=16 --max-requests=2000000 --test=oltp --mysql-user=root run
sysbench --test=oltp --mysql-user=root cleanup

The results are:

Doing OLTP test.
Running mixed OLTP test
Using Special distribution (12 iterations,  1 pct of values are returned in 75 pct cases)
Using "BEGIN" for starting transactions
Using auto_inc on the id column
Maximum number of requests for OLTP test is limited to 2000000
Threads started!

OLTP test statistics:
    queries performed:
        read:                            28026082
        write:                           10004522
        other:                           4001863
        total:                           42032467
    transactions:                        2000000 (346.28 per sec.)
    deadlocks:                           1863   (0.32 per sec.)
    read/write requests:                 38030604 (6584.70 per sec.)
    other operations:                    4001863 (692.89 per sec.)

Test execution summary:
    total time:                          5775.5994s
    total number of events:              2000000
    total time taken by event execution: 92397.7961
    per-request statistics:
         min:                                  3.03ms
         avg:                                 46.20ms
         max:                               3459.56ms
         approx.  95 percentile:             100.91ms

Threads fairness:
    events (avg/stddev):           125000.0000/141.00
    execution time (avg/stddev):   5774.8623/0.01
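As a sanity check, the reported transaction rate agrees with the totals: 2,000,000 transactions over 5775.5994 seconds works out to the 346.28 per second shown above.

```shell
# Recompute transactions/sec from the totals in the summary above.
awk 'BEGIN { printf "%.2f transactions/sec\n", 2000000 / 5775.5994 }'
```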

As compared to:

Kernel: 2.6.11 with Alex Tomas 's patches
Partition size: 68 GB (sdc1)

processor      : 2
vendor_id      : GenuineIntel
cpu family     : 15
model          : 2
model name     : Intel(R) Xeon(TM) CPU 2.80GHz
cpu MHz        : 2791.359
cache size     : 512 KB
bogomips       : 5505.02

# hdparm -t /dev/sdc1

 Timing buffered disk reads:  202 MB in  3.02 seconds =  66.90 MB/sec

sysbench v0.3.3: multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 16

Doing OLTP test.
Using mixed OLTP test
Using Special distribution (12 iterations, 1 pct of values are returned in 75 pct cases)
Using "BEGIN" for starting transactions
Maximum number of requests for OLTP test is limited to 2000000
Threads started!

OLTP test statistics:
    queries performed:
        read:                            28003808
        write:                           8000763
        other:                           4000295
        total:                           40004866
    transactions:                        2000023 (341.16 per sec.)
    deadlocks:                           249 (0.04 per sec.)
    read/write requests:                 36004571 (6141.51 per sec.)
    other operations:                    4000295 (682.35 per sec.)

Test execution summary:
    total time:                          5862.4972s
    total number of events:              2000023
    total time taken by event execution: 93741.9585
    per-request statistics:
         min:                                 0.0052s
         avg:                                 0.0469s
         max:                                 0.5604s
         approx. 95 percentile:               0.1432s

Threads fairness:
    distribution: 99.31/99.71
    execution: 99.31/99.71

Disk Speed Testing

How fast is software RAID running in VMware?  I used bonnie++ to find out.

This test is on a two-disk RAID-1.

The test sequence is:

# bonnie++ -u root -d . -s 10240M -n 10:10240:1024:1024 -q > bonnie_test.csv 2> bonnie_test.out

Here are the results.

Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine         Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
                 10G 46052  64 63086   7 36277   4 61205  84 87163   4 151.3   0

This is the same test on a four-disk RAID-5.

Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine         Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
                 10G 41262  44 69007   6 40623   3 69362  77 131851   6 103.7   0
It is clear the CPU (%CP) doesn't take much of a hit; it is disk I/O that is the bottleneck for speed. Because the RAID-5 has more disks, it got much faster disk reads.
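A quick check on that claim: the sequential block-read figures from the two tables (87163 K/sec for the RAID-1, 131851 K/sec for the RAID-5) work out to roughly a 1.5x gain.

```shell
# Ratio of RAID-5 to RAID-1 sequential block reads, from the tables above.
awk 'BEGIN { printf "RAID-5 / RAID-1 read ratio: %.2f\n", 131851 / 87163 }'
```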



6 thoughts on “Building a Home VMWare Server – UPDATED”


  2. Hey Mark,
    Great post . Looks like it has been over a year + for your VMWare ESXi 4.0 build. Looking to build one myself ,VMWare ESXi 5.0, and really like Supermicro Mobos.

    What Supermicro mobo would u recommend that has worked with VMWare 4-5?
    Your current – MBD-X8STE-O or a newer model and hopefully a little cheaper???
    Not doing anything fancy. Experimenting with networking, media streaming, and Freenas,. (Had any luck on MythTV?) Looks like VMWare does not approve Supermicro Mobo. Not sure why they can’t a least certify/Test one. But any way….
    Any help would be great. Thanks Mike



  4. Hi Mark!
    Wanted to say thanks for the amazing article. I work in the network security field and am thinking of building a similar configuration to your article for my home network just to have better security and this is a big help. I want to make a network that is a little more secure and at the same time saves me space and money on buying equipment.
    Here is my 2013 list of equipment I plan on purchasing for my server for those who are considering to do the same thing.
    Antec TITAN650 Black Steel Pedestal Server Case ($199.99)
    ASUS P8B-X LGA 1155 Intel C202 ATX Intel Xeon E3 Server Motherboard ($189.99)
    Intel Xeon E3-1240 V2 Ivy Bridge 3.4GHz (3.8GHz Turbo) LGA 1155 69W Quad-Core Server Processor ($274.99)
    Kingston 8GB (2 x 4GB) 240-Pin DDR3 SDRAM DDR3 1333 ECC Registered Server Memory DR x8 w/TS Model KVR1333D3D8R9SK2/8G (bought two sets of this totally 16GIG memory) ($81.99/ea, $163.98 total)
    Seagate Barracuda STBD2000101 2TB 7200 RPM SATA 6.0Gb/s 3.5″ ($159.99/ea, $219.98 total (w/ $50 discount/ea))
    CD ROM (just to have):
    ASUS DRW-24B1ST/BLK/B/AS Black SATA 24X DVD Burner ($19.99)
    Subtotal: $1,068.92
    I went a little higher end and bought a server case but you could go for a lower end one if you want. The HDD’s ended up being out of stock when I went to make this system so I ended up with buying my HDD’s from Best Buy but was just an example of what I was looking for on Newegg.
    Hope this helps give an updated idea of what you could look for to build a lower/middle end system for this application.

