A picture is worth a thousand words, as they say. Clichés aside, I have put together a Visio diagram of my home lab.
Here’s a snapshot of it. I’ll provide more details shortly!
All of this runs on one physical machine – an HP DL380 G6 workhorse with 2 x quad-core processors, 48GB of RAM and 1.2TB of SAS storage (with all drive bays populated).
OS: ESXi 5.5 installed on a USB stick
Disk: There are 8 x 146GB 15K dual-channel disks, with one datastore per disk. See the screenshot below:
For the sake of simplicity, I have kept things on one vSwitch with one uplink (definitely not recommended in production!). Note the security settings of the vSwitch: Promiscuous Mode needs to be set to Accept, otherwise the nested ESXi hosts won't see traffic destined for their guests. Sadly my router (just a humble home-grade one) doesn't have Jumbo Frame support, so no 9K love for me. The server's pNICs are 100Mbps ones too. Good enough for a lab though, right?
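If you prefer the command line, the same security change can be made from an ESXi shell. The vSwitch name here (vSwitch0) is an assumption – substitute your own:

```shell
# Check the current security policy on the vSwitch
esxcli network vswitch standard policy security get -v vSwitch0

# Allow Promiscuous Mode so the nested ESXi hosts can see
# traffic destined for their guests
esxcli network vswitch standard policy security set -v vSwitch0 --allow-promiscuous=true
```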
** Important ** – Edit the /etc/vmware/config file and add vhv.allow = "true"
This allows your CPU's virtualization features to be passed through to the guests.
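For reference, a quick way to make that edit over SSH is sketched below. One hedge: vhv.allow was the host-wide switch introduced in ESXi 5.0; on 5.1 and later VMware documents the per-VM vhv.enable = "TRUE" setting in the guest's .vmx file instead, so if your nested hosts don't see the CPU features, try that route too.

```shell
# Back up the host config first, then append the setting
cp /etc/vmware/config /etc/vmware/config.bak
echo 'vhv.allow = "TRUE"' >> /etc/vmware/config

# Verify it took
grep vhv /etc/vmware/config
```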
That was all for the physical side of things.
Now for the virtual!
In SRM-speak, the Protected Site is my production site, running the following VMs:
- DC – Domain Controller
- vESXi and vESXi2 (2 x virtual ESXi hosts, intend to add another in the near future)
- prodSRM – this runs SRM in the production site. It doubles as my first DB server, though I intend to spread the load with the addition of a dedicated SQL cluster. This is also the FVP Management Server. It has 2 vCPUs and 8GB of RAM.
- SAN – this runs Starwind iSCSI SAN in the production site. Nothing flashy – it just divides your disks into .img files, which are presented to the virtual ESXi hosts as iSCSI targets.
and the Recovery Site, running the following VMs:
- vESXi3 and vESXi4 (again 2 hosts, with another intended soon)
- drSRM – this runs SRM in the recovery site. Also doubles as my second DB server.
- SAN1 – this runs Starwind iSCSI SAN in the DR site. Presents iSCSI targets to the virtual ESXi hosts in the recovery site.
There are 2 other VMs, the View ones – but they aren't being used yet. That just about sums up the lab. I will write another article on how to set up the virtual ESXi hosts, covering the iSCSI configuration and the various vSwitches I have on my virtual hosts. In a snapshot, there are:
– 8 NICs per virtual host for multi-NIC vMotion, iSCSI port binding, etc. Note that FT and DPM aren't possible in a nested environment – happy to be corrected though.
– a software iSCSI adapter configured on each virtual host that acts as the iSCSI initiator. The .img files dished out by the two SANs become the datastores on the various virtual ESXi hosts.
– and then there are the VMs running inside those virtual ESXi hosts.
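For what it's worth, the software iSCSI setup inside each virtual ESXi host can be sketched with esxcli. The adapter name (vmhba33), the vmkernel ports (vmk1/vmk2) and the SAN's IP address are all assumptions – adapt them to your own lab:

```shell
# Enable the software iSCSI initiator on the nested host
esxcli iscsi software set --enabled=true

# Find the adapter name (often vmhba33 on a fresh host)
esxcli iscsi adapter list

# Bind the iSCSI vmkernel ports to the adapter (port binding)
esxcli iscsi networkportal add -A vmhba33 -n vmk1
esxcli iscsi networkportal add -A vmhba33 -n vmk2

# Point dynamic discovery at the Starwind SAN VM (IP is an assumption)
esxcli iscsi adapter discovery sendtarget add -A vmhba33 -a 192.168.1.50:3260

# Rescan so the new targets show up as devices
esxcli storage core adapter rescan -A vmhba33
```

After the rescan, the Starwind .img targets should appear as devices you can format as VMFS datastores on each nested host.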
More to follow in another article.