This post will hold my experiences with VMware View 5.3/5.2 and my journey to the DTA. To start off, I had zero experience with the product until a week ago. I had been thinking of giving myself another challenge, and having already passed the DCD and DCA, the DTA was, quite logically, next in the firing line. Why, you may ask – well, because quite honestly, the VCDX is beyond my reach just yet. I did take a good look at the blueprint and it was daunting, to say the least. So while I gather the experience and the confidence to start climbing Everest, I thought I might as well get another flag or two along the way. VMware are doing away with vCD, so there was no point in taking the CIA or the CID. That left me with the DTA and the DTD. The admin exams are easier (at least to me), hence my choice. So without further ado –
Kept most of my DCD and DCA lab, though I got rid of SRM and removed one host – I'm left with 3 hosts now. Should be enough, I figure; I can easily build another if needed. I needed the RAM that host was using for the various View components and the View desktops, which is why I blew it away. Here's a bit more detail about the lab:
- 1 x Root CA, running Windows Server 2008 R2. Specs – 1 vCPU, 512 MB RAM
- 1 x Domain Controller – running Windows Server 2008 R2. Specs – 1 vCPU, 512 MB RAM
- 1 x iSCSI SAN – running Windows Server 2008 R2. Specs – 1 vCPU, 512 MB RAM
- 1 x View Security Server – running Windows Server 2008 R2. Specs – 1 vCPU, 1 GB RAM. (For clarity’s sake, this is drSRM in my inventory)
- 1 x Database Server – running Windows Server 2008 R2. Specs – 2 vCPU, 4 GB RAM. (For clarity’s sake, this is prodSRM in my inventory)
- 1 x vCenter Server – running Windows Server 2008 R2. Specs – 2 vCPU, 8 GB RAM
- 3 x virtual ESXi hosts – running ESXi 5.5. All hosts have 6 GB RAM, 2 sockets with 1 core each, and 8 NICs (2 for the mgmt network, 2 for vMotion, 2 for a VM port group, and the remaining 2 for the vDS on which the vDesktops will live).
- 1 x View Composer Server – running Windows Server 2008 R2. Specs – 2 vCPU, 4 GB RAM
- 2 x View Connection Servers – running Windows Server 2008 R2. Specs – 2 vCPU, 4 GB RAM
Picture of the lab:
Other highlights of the first week
- Read the Architecture and Planning guide
- Read the Installation guide
- Started on the Administration Guide
- Got Active Directory ready:
- A separate OU for my View desktops. This is necessary because you want to apply GPOs to the OU or sub-OU in which the View desktops will reside. I created an OU called VMware View Desktops and a sub-OU under it called VMware View Linked-clone Desktops.
- A GPO with a Restricted Group, in which a VMware Horizon View Users group is made a member of the built-in Remote Desktop Users group. This is what allows my View user accounts to log on to the vDesktops I will spin up.
- Service accounts to run the Composer and Connection Server services. Made the service accounts local admins on the respective View servers.
- Installed Composer using the -service-composer account on the Composer machine. Gave it full rights to the remote database created on the database server.
- Installed the two Connection Servers, the second one for load balancing purposes
- Assigned CA certificates to the Composer and the two Connection Server machines. For some weird reason, the second Connection Server's certificate shows up as invalid in View Manager. I have made sure the friendly names of the certs on the two Connection machines are different, but I still get the error. IDK! We'll see. Note that you need to restart your View servers for a change in certificates to take effect.
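While I chase down that invalid-certificate error, here's a quick way to sanity-check what a server is actually presenting. This is a hedged sketch using only the Python standard library; the host names are made up, and `cert` is shaped like the dict `ssl.SSLSocket.getpeercert()` returns, so the same checks work against a live server or a canned example:

```python
import ssl
import time

def cert_problems(cert, expected_name, now=None):
    """Return the reasons a peer certificate would show up as invalid.

    `cert` is shaped like the dict ssl.SSLSocket.getpeercert() returns.
    """
    now = time.time() if now is None else now
    problems = []
    if now < ssl.cert_time_to_seconds(cert["notBefore"]):
        problems.append("not yet valid")
    if now > ssl.cert_time_to_seconds(cert["notAfter"]):
        problems.append("expired")
    # The name the client connects with must match the CN or a DNS SAN entry
    cns = [v for rdn in cert.get("subject", ()) for k, v in rdn if k == "commonName"]
    sans = [v for k, v in cert.get("subjectAltName", ()) if k == "DNS"]
    if expected_name not in cns and expected_name not in sans:
        problems.append("name mismatch")
    return problems
```

A name mismatch is a common culprit when a second server's cert shows as invalid, so checking the CN/SAN against the exact FQDN View uses is a cheap first step.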
I haven't yet worked out how to ensure load balancing actually works; I'll post about it in the coming weeks once I've done some more study. After reading around and listening to some of the vBrownbags, I have been able to confirm that VMware View does not do any load balancing of its own. Instead, some people use F5 hardware load balancers to distribute load across a Connection Server farm, while others use plain old DNS round robin – the caveat being that if one Connection Server is down, all users directed by DNS to that server will experience an outage until a manual DNS change points them at a working Connection Server.
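To make the DNS caveat concrete, here's a small, hedged sketch (Python standard library only; the FQDN below is hypothetical) of the client-side probing a real load balancer would do for you – plain DNS round robin happily hands out the address of a dead Connection Server:

```python
import socket

def reachable_addresses(fqdn, port, timeout=2.0):
    """Return the IPv4 round-robin records for `fqdn` that accept a TCP
    connection. DNS round robin hands out every record, dead or alive;
    probing each one is what a load balancer does for you."""
    infos = socket.getaddrinfo(fqdn, port, socket.AF_INET, socket.SOCK_STREAM)
    alive, seen = [], set()
    for *_, sockaddr in infos:
        ip = sockaddr[0]
        if ip in seen:
            continue
        seen.add(ip)
        try:
            with socket.create_connection((ip, port), timeout=timeout):
                alive.append(ip)
        except OSError:
            pass  # a dead Connection Server; DNS alone would still hand it out
    return alive
```

Something like `reachable_addresses("view.lab.local", 443)` against a round-robin record containing both Connection Servers would show which ones are actually answering.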
Been exceptionally busy at work, but I have somehow found time to keep the labbing going. Created my first desktop pool. All my templates were 2008 R2 ones, so I went ahead and created a VM off one of the templates to use as a desktop source. A parent VM must be prepared (updates/patches installed) and have the View Agent installed before it can be used as a desktop source, and it must not be altered in vCenter while Composer is creating a linked-clone pool. The parent VM cannot be a template; it's got to be a VM. I discovered that using a server OS as a desktop source wasn't straightforward – check out this link for how to get it going. Additionally, you need to install the Desktop Experience feature on the parent VM, particularly if you intend to use the View Blast feature – thanks to Sean Massey. You've also got to run the installer from an elevated command prompt with the switch in the screenshot below:
Other highlights of the week
- Learned that multiple Connection Servers share the same ADAM instance. The first Connection Server is the “primary” server that gives birth to the ADAM instance; Connection Servers added after it simply replicate the same ADAM database. As for load balancing, to my knowledge VMware don't offer it within the product – there's no built-in way to distribute users across servers, which is why companies use hardware load balancers such as F5s. Security Servers don't share the Connection Servers' ADAM database, nor do they have one of their own.
- It's best to advise your admins to always target one Connection Server for their daily administration tasks. Making changes against different Connection Servers can throw the ADAM database off because changes take a little while to replicate, so have your admins target one server to maintain consistency.
- Learned that VMware apparently don't support replicated Connection Server instances over a WAN link; it's got to be over the LAN.
- As of View 5.2 (IIRC), vCenter Heartbeat can be used to protect the Composer server and its database. Previously there was no way to ensure high availability for the Composer service, apart from VMware HA (of course!).
- Learned a little more about the various bits and pieces that make up a View environment.
Thoughts on View infrastructure :
- Companies with large View installations – think thousands of desktops – generally go with dedicated vCenter Servers, hosts and storage. For smaller installations, dedicated infrastructure does not make sense from a financial perspective. In those cases, it's best to go with soft separation of virtualized servers and virtual desktops using “should” DRS rules. With “should” rules you can keep desktop and server VMs on separate hosts, yet after a HA event the affected VMs can still be restarted on whatever hosts are available; DRS then kicks in and does a best-effort separation based on the “should” rule, provided there are hosts it can use to balance the load across the cluster.
- Permissions set on vDesktop folders shouldn't be changed in vCenter. The View Manager should be used instead, to ensure permissions aren't skewed and stay up to date.
- Either a vSS or a vDS can be used for your vDesktops, though you must make sure there are enough ports for as many vDesktops as your environment will hold. Similarly, the resource pools into which the pools are deployed should have enough CPU/RAM for your machines. Finally, DRS should be enabled (a given, usually), and note that SvMotion is not supported for linked-clone desktops.
- AV scans should be staggered to reduce I/O storms. Bootstorms – the flood of read I/O hitting your array when lots of users log on around the same time – can be mitigated with a host-based read cache (limited to 2 GB, though), preferably one backed by flash. It must be noted that the cache needs to warm up before it holds enough to serve read requests. PernixData's FVP can be of great use here.
- To prevent more users from logging on to their vDesktops, the Connection Server instance(s) can be disabled. This doesn’t boot off existing users, just prevents more from logging on.
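The AV stagger mentioned above is just scheduling arithmetic; here's a minimal sketch (desktop names and the maintenance window are made up):

```python
from datetime import datetime, timedelta

def staggered_scan_times(desktops, window_start, window_minutes):
    """Spread AV scan kick-offs evenly across a maintenance window instead
    of firing them all at once and causing an I/O storm on the array."""
    step = timedelta(minutes=window_minutes / max(len(desktops), 1))
    return {name: window_start + i * step for i, name in enumerate(desktops)}
```

Four desktops across a 2-hour window, for example, come out 30 minutes apart – in practice your AV console does this for you, but the principle is the same.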
It may seem like I've covered a fair bit of ground over 2 weeks, and I haven't even blogged about everything I learned. The 8-week deadline is creeping closer and I've needed to turn the heat up despite being very busy at work. I have learned:
Disk space reclamation: it works on linked clones only, not on full desktops. It doesn't affect persistent disks and only helps with space-efficient OS disks. You need to be on vSphere 5.1 or later, and VM hardware version 9 is the minimum required. In addition, it works only on VMs with SCSI controllers, not IDE ones, and only on Windows 7 and XP VMs, not Windows 8.
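Those eligibility rules stack up, so here's a hedged sketch that rolls them into one check – the `vm` dict and its keys are my own invention for illustration, not anything from the View API:

```python
def reclamation_blockers(vm):
    """List the reasons disk space reclamation would be unavailable
    for a desktop, per the eligibility rules above."""
    blockers = []
    if vm["clone_type"] != "linked":
        blockers.append("linked clones only, not full desktops")
    if vm["hw_version"] < 9:
        blockers.append("needs VM hardware version 9+ (checkbox greyed out)")
    if vm["disk_controller"] != "SCSI":
        blockers.append("works with SCSI virtual disks only, not IDE")
    if vm["guest_os"] not in {"Windows XP", "Windows 7"}:
        blockers.append("guest OS not supported (e.g. Windows 8)")
    return blockers
```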
View Storage Accelerator: the VSA uses the host's CBRC (Content-Based Read Cache) to improve performance during bootstorms. For the VSA to work, it needs to be turned on at both the vCenter level and the pool level. The VSA works for both full VMs and linked clones. In addition, Local Mode is supported: users can check out desktops in pools that have the VSA enabled; it gets disabled when the desktop is checked out and re-enabled when the desktop is checked back in. The VSA also works when View replicas are tiered, with the replicas on different datastores to the linked clones. Choosing the VSA option results in “Virtual Disk Digest” tasks showing up in vCenter. Essentially, when a VM is created, View indexes the contents of each virtual disk file; each vDesktop's index is stored in its virtual disk digest file. At run time, ESXi reads these files and caches common blocks. By default the digest regeneration process is kicked off every 7 days.
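To get my head around the digest idea, here's a toy model – not the real CBRC digest format, and the block size and hash choice are my own assumptions – of indexing each desktop's disk per block and spotting the common blocks a content-based read cache could serve from memory:

```python
import hashlib

BLOCK = 4096  # illustrative block size; the real digest format is opaque to us

def digest_index(disk):
    """One hash per block – a stand-in for a desktop's virtual disk digest file."""
    return [hashlib.sha1(disk[i:i + BLOCK]).hexdigest()
            for i in range(0, len(disk), BLOCK)]

def common_blocks(disks):
    """Blocks present in more than one desktop's digest – the blocks a
    content-based read cache serves from RAM instead of hitting the array."""
    owners = {}
    common = set()
    for name, disk in disks.items():
        for h in set(digest_index(disk)):
            if h in owners:
                common.add(h)
            else:
                owners[h] = name
    return common
```

With linked clones off one replica, the bulk of the OS blocks hash identically across desktops, which is exactly why this kind of cache pays off during a bootstorm.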
Choosing PCoIP or RDP as your protocol of choice: a desktop pool's settings allow an admin (and a user, if you let them) to choose the protocol used to connect to vDesktops. Googled around for scenarios of choosing one over the other and noticed this thread in a VMware Community forum. Concisely: PCoIP should almost always be chosen over RDP, for a number of reasons. First, iPads, other tablets and zero clients can't use the RDP protocol at all, so PCoIP is their only choice. Second, PCoIP is an adaptive protocol: it lowers sound/video quality over, say, a congested WAN link, then picks the quality back up when congestion is alleviated. Third, PCoIP supports up to 4 monitors – more than enough in most cases – though RDP allows up to 16. Fourth, a GPO can be used to limit the bandwidth PCoIP chews up.
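Summarising that thread as a toy decision helper – the client categories and rules below are my simplification of the forum reasoning, not a VMware-published matrix:

```python
def pick_protocol(client, monitors=1):
    """Choose a display protocol for a session, per the reasoning above."""
    if client in {"ipad", "tablet", "zero-client"}:
        return "PCoIP"  # these endpoints can't speak RDP at all
    if monitors > 4:
        return "RDP"    # PCoIP tops out at 4 monitors; RDP allows up to 16
    return "PCoIP"      # adaptive over lossy WANs, bandwidth-cappable via GPO
```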
Weeks 4 and 5
Haven't had the chance to blog about my progress for the past few days – extremely busy at work. The early-morning study hours, however, continued. Can't lose sight of the goal, can ya!? I have labbed furiously over Week 4 and most of Week 5, covering:
- How full machine and linked clone desktops are created.
- Whether to use QuickPrep or Sysprep for customizing vDesktops. If QuickPrep is used, recomposing the pool preserves the parent VM's SID on the linked clone as long as the same parent VM is selected. If a new parent VM is selected for the recompose operation, the new parent's SID is replicated to the clone.
- Dedicated assignment – users receive the same desktop each time. Honestly, though, I don't see the point of dedicated-assignment desktops, especially since View Persona Management ensures user settings follow them no matter which vDesktop they log on to. Instead, floating pools are a good choice: users log on to whatever machine and get the same profile.
- Persistent disks – I learned they're more of a legacy thing now, with Persona Management doing essentially the same job, albeit in a much better way. Persistent disks store user settings, documents, files – profiles, basically. It's also possible to detach a persistent disk and reattach it to another vDesktop. Recompose and refresh operations do not affect persistent disks.
- Disk space reclamation does not affect persistent disks (and doesn't work on full vDesktops either). A minimum hardware version of 9 is required; otherwise the checkbox to enable disk space reclamation is greyed out.
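The QuickPrep SID behaviour above lends itself to a tiny model. This is purely illustrative – the SID strings are fabricated and `fresh_sid()` is a stand-in for what Sysprep actually does:

```python
import uuid

def fresh_sid():
    """Stand-in for Sysprep minting a new machine SID (real SIDs aren't UUIDs)."""
    return "S-1-5-21-" + uuid.uuid4().hex[:12]

def clone_sid_after_recompose(parent_sid, tool):
    """Which machine SID a linked clone carries after a recompose.

    QuickPrep: the clone takes the selected parent VM's SID, so the SID is
    preserved only when the same parent is reused. Sysprep: a fresh SID.
    """
    if tool == "QuickPrep":
        return parent_sid
    return fresh_sid()
```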
Floating pools: one massive advantage of these pools is the potential licensing savings. Thing is, with floating pools, once users are done with their vDesktops the machines are returned to the pool for other users, which reduces the number of concurrently logged-on users. Another thing about floating pools is that users cannot be assigned to them – users can only be assigned to dedicated-assignment pools. As I said above, floating pools are the way to go because Persona Management completely negates the need for users to log on to the same machine every time.
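The concurrency saving is easy to demonstrate with a sweep over login/logout times – the session data here is invented, and "what you license" is shorthand for whatever concurrent-user metric applies in your agreement:

```python
def peak_concurrent_sessions(sessions):
    """Peak number of simultaneous sessions, given (login, logout) pairs.

    With floating pools this peak – not the number of named users – is
    roughly what drives the desktop count.
    """
    events = []
    for login, logout in sessions:
        events.append((login, 1))
        events.append((logout, -1))
    events.sort()  # at equal times -1 sorts first, so back-to-back reuse counts once
    peak = current = 0
    for _, delta in events:
        current += delta
        peak = max(peak, current)
    return peak
```

Three named users whose shifts only partially overlap can peak at two simultaneous sessions, so a floating pool needs two desktops where dedicated assignment would need three.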
A thing or two about the replica disk: replica disks are thin provisioned by default, but to keep calculations simple – and assuming the entire provisioned space will be consumed – View Manager budgets for 2 replicas. Yep, 2 replicas: one for usual ops, and another for when a recompose operation is being performed.
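So when sizing a datastore for linked clones, the replica line item in the worst case is simply double the parent's provisioned size. A one-liner to make that arithmetic explicit (the sizes are hypothetical):

```python
def replica_space_budget_gb(parent_provisioned_gb):
    """Worst-case space View Manager budgets for replicas: the in-service
    replica plus the second one that exists during a recompose."""
    return 2 * parent_provisioned_gb
```

A 40 GB parent VM, for instance, means budgeting 80 GB for replicas before you even count the linked clones' delta disks.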
This week I’ll focus on ThinApp