In the first five parts of this series I covered the project objectives and the system design, then turned my attention to the Hyper-V host image build, automated deployment and the guest virtual machine build. In this post I review some of the questions and issues we’ve encountered after a few months of working this way and some overall reflections on the approach.
Guest user accounts
Guest virtual machines have been configured in a workgroup in order to conserve resources that would be spent on domain services. Additionally, developing on a domain controller is less than ideal for a number of reasons including performance tuning, administrative complexity, start-up times and security.
I created the development virtual machine with 160 local user accounts, each of which has been logged on to the portal and the MySite application in order to create a basic profile. If you need to script the creation of local user accounts, the net user command will be useful. However, it is of limited assistance for complex profile requirements, since there is no way to synchronise with a directory and the local users have no associated profile data, but it may be helpful for testing or demonstration.
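As a rough sketch, the net user approach can be scripted with a batch loop. The account naming convention and password below are hypothetical, so adjust them to your own standard:

```shell
@echo off
rem Sketch: create 160 numbered local accounts for profile testing.
rem Run in an elevated command prompt; names/password are placeholders.
setlocal enabledelayedexpansion
for /l %%i in (1,1,160) do (
    set "num=00%%i"
    net user devuser!num:~-3! P@ssw0rd123 /add /comment:"SharePoint test user" /expires:never
)
endlocal
```

Remember that each account still needs an initial logon to the portal and MySite before it has a basic profile.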
If LDAP user accounts or other directory objects are required for development purposes (user profiles for instance), consider using Active Directory Lightweight Directory Services. This is the successor to Active Directory Application Mode (ADAM) in Windows Server 2003. It is a Windows LDAP directory that supports user and group objects without a full-blown domain infrastructure.
There will be some scenarios when a full domain services infrastructure is required for development. In those cases it may be preferable to run a second virtual machine as a domain controller.
Hibernate and Sleep
Hibernate and Sleep are disabled automatically when Hyper-V is installed. This is by design. Hyper-V disables this functionality because the guest virtual machines could be damaged by a Hibernate or Sleep operation in the host if they were not saved gracefully. If, on the other hand, all virtual machines had to be put into a saved state before a host machine could be put to sleep or hibernated, the wait times for these operations would extend to unacceptable levels, as they are also triggered automatically by low battery warnings. Unfortunately we need to live with this behaviour.
Do not travel with a running laptop
Putting a running laptop in a bag will cause it to overheat quickly and is likely to damage hardware.
Improvements to Start-up and Shutdown times
These builds should start up and shut down in less than two minutes (closer to 90 seconds). Keep in mind that virtual machines can be safely saved and work can be resumed quickly when the machine is restarted. Since all of the development work will be taking place inside the virtual machine, this should reduce the Hibernate/Sleep annoyance.
Virtual PC won’t run on Windows Server 2008 R2
Windows Virtual PC will not work on Windows Server 2008 R2, as it was designed specifically for Windows 7. Earlier versions of Virtual PC may install on Windows Server 2008 R2, but they will not co-exist with Hyper-V, so do not install them.
Hyper-V role won’t work after SysPrep
This shouldn’t be an issue, as we have set up automated deployment, but it’s worth noting that this is a known issue. There are time-consuming work-arounds to fix some of the problems that this will cause, but they are best considered as a last resort.
16-bit colour in Hyper-V guests
Colours are limited to 16-bit in Hyper-V guests. If a fuller spectrum is required, it should be possible to test in full colour in a browser on the host.
Resolving host names from an internal domain
During our pilot we identified that fully-qualified domain names resolved successfully, but host names would not resolve without the full domain name. To address this we have added our internal DNS suffixes to the ICS Connection inside the development virtual machine.
Manually adding DNS suffixes
If a network adapter in a guest virtual machine loses these settings by deletion/re-creation of the adapter, or for some other reason, the setting can be re-entered as follows:
- Go to the IPv4 properties on the ICS Connection and select Advanced.
- On the DNS tab select the Append these DNS suffixes (in order) radio button.
- Add internal.domainname.local and other.domainname.com.
- Un-tick the 'Register this connection's addresses in DNS' box.
- Select OK, OK and Close.
- Make sure that this change is captured in all snapshots as necessary.
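If this setting needs to be re-applied often, it can also be scripted. One way (a sketch, using the example suffixes from the steps above) is to write the SearchList registry value that the 'Append these DNS suffixes (in order)' option sets; note that this value applies machine-wide rather than per-connection:

```shell
rem Sketch: set the DNS suffix search list without the GUI.
rem Run in an elevated command prompt inside the guest.
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" ^
    /v SearchList /t REG_SZ /d "internal.domainname.local,other.domainname.com" /f
```

As with the manual steps, make sure the change is captured in all snapshots as necessary.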
Internet Explorer (64-bit version)
Adobe Flash Player does not currently support 64-bit browsers. You'll have to use the 32-bit version of Internet Explorer or another browser if you want to view Flash content. We recommend using the 32-bit version by default.
Hyper-V Manager UAC prompt work-around
If the UAC prompt on Hyper-V Manager is annoying, try launching Server Manager and navigating to Hyper-V in the Roles node. This has the added benefit of exposing the Hyper-V event log messages and service states in that top Hyper-V node. These are not visible in Hyper-V Manager. Awareness of these messages and the service statuses will help to resolve Hyper-V issues faster.
Test DVD burning
In our pilot we identified that the DVD burner drivers don’t work for burning in Windows Server 2008 R2 on a Dell XPS M1330. This was also true on a Lenovo laptop. Chipset updates, driver updates and a Microsoft KB registry hack all failed to make a difference. The Matshita (Panasonic subsidiary) site does not support the products directly (they point to the laptop manufacturer). Dell and Lenovo had not released new drivers when we launched. As DVD burning has changed in Windows Server 2008 R2 this may have a wider impact.
Bluetooth doesn’t work
The Bluetooth stack is missing from Windows Server 2008 and Windows Server 2008 R2. In Windows Server 2008 there were fairly elaborate means of porting the stack from Vista, but results appear to be spotty at best.
WorkItemTypeDeniedOrNotExistException when trying to open work items
This error occurred in the first release of our guest build because I installed Visual Studio 2008 SP1 before the Team Foundation Client (TFC), so the TFC did not get upgraded. The fix is to uninstall and reinstall Visual Studio 2008 SP1, or to make sure that the TFC is installed before Visual Studio 2008 SP1.
NUMA nodes and RAM allocation
It is important not to exceed NUMA node limits when assigning RAM to virtual machines, although this will not apply to many laptops, as most have an SMP architecture. It is beyond the scope of this post to go into NUMA nodes in great detail (and in truth, my understanding does not extend beyond a few hours of research), but the limits in your environment should be understood so that performance does not suffer. As a starting point it's worth confirming the type of CPU architecture and looking at this in more detail if it is NUMA. The performance and capacity requirements for Hyper-V document on TechNet explains this well:
Configure the correct amount of memory for Hyper-V guests. During the testing, no change had a greater impact on performance than modifying the amount of RAM allocated to an individual Hyper-V image. Because memory configuration is hardware-specific, you need to test and optimize memory configuration for the hardware you use for Hyper-V.
The initial goal of the testing was to make the Hyper-V image as similar as possible to the physical hardware image against which it was being compared. Based on that goal, the Hyper-V images were originally allocated 32 gigabytes (GB) of RAM, which was the same amount of RAM as was on the physical servers being tested. However, the initial test results showed that with that configuration, the Hyper-V images could sustain a load that was only about 70 percent of the load on the physical hardware. After investigating the Event Viewer on the Hyper-V host machine (in the Windows Server 2008 Custom Views, under Server Roles, Hyper-V Events), it was discovered that the RAM for the Hyper-V images was being spread across multiple non-uniform memory access (NUMA) nodes. This information confirmed that performance declined when memory was allocated across nodes. After trying different configurations it was determined that for the hardware being used, 8 GB of RAM was the maximum that could be allocated to a Hyper-V image without crossing NUMA nodes.
To reiterate, this means that in Microsoft's tests, Hyper-V performed significantly worse with 32GB allocated to a virtual machine than it did with an 8GB allocation. The exact size of the NUMA node boundary will vary by vendor, so make sure you understand the number of nodes in your system. Divide the total RAM by the number of nodes to find the memory limit for a single virtual machine. This does not mean that additional virtual machines can't be run beyond a NUMA node boundary if there is sufficient RAM available; the node boundary is the limit of optimal process performance. Beyond this limit, a virtual machine will suffer degraded performance because it needs to use memory from a non-local address space.
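The rule of thumb above (total RAM divided by the number of NUMA nodes) is simple enough to express directly. This sketch assumes Microsoft's test hardware had four nodes, which is consistent with the 32GB total and 8GB per-node figures quoted above:

```python
def max_vm_ram_mb(total_ram_mb: int, numa_nodes: int) -> int:
    """Largest RAM allocation for a single VM that stays within one NUMA node."""
    return total_ram_mb // numa_nodes

# Microsoft's test hardware: 32 GB total, assumed 4 nodes -> 8 GB per VM.
print(max_vm_ram_mb(32 * 1024, 4))  # 8192
```

On the host itself, the Sysinternals Coreinfo utility (coreinfo -n) will report how logical processors map to NUMA nodes, which gives you the node count to plug in.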
However, NUMA isn’t the only thing to worry about when finding an optimal RAM allocation. Based on test results during our pilot we could push our virtual machines up to 2250MB RAM, depending on the amount of activity on the host. In some cases it may be possible to get up to 2500MB RAM for a virtual machine on a 4GB RAM system, but this was not consistently achievable in our tests. If it’s necessary to achieve that, the virtual machines should be started soon after booting, before any major client application activity starts on the host machine, and client application activity should be kept to a minimum while that much RAM is allocated to virtual machines. We also found that host performance was often reduced to an intolerable level whenever there was less than 2GB RAM available to the host for an extended period of time. 1.75GB RAM may be achievable, but this should be tested extensively for your needs.
Additionally, saving a virtual machine’s state becomes risky when there is less than 2GB available to the host, as the machine will not resume from the saved state if there is insufficient resource available to it.
Periodic but routine loss of connectivity on the host machine
As I’ve been tracking here, we’ve documented repeated problems with periodic (but routine) loss of connectivity on the host machine. This is still an open issue. More info here:
Hyper-V performance suffers during graphics-intensive operations
This has been covered by Ben Armstrong in considerable detail and I’m continuing to track it:
Hyper-V graphics performance and SharePoint 2010 development
Hyper-V graphics performance is on the way… if you need a new laptop
The definitive word on Hyper-V high-end graphics performance
Unfortunately, due to the graphics performance issues in Hyper-V mentioned above, there is a significant performance hit when using Aero Glass. This does not slow down overall system performance, but graphics-heavy operations will suffer in most Hyper-V environments. For this reason we do not recommend installing Aero Glass, but if you want to put it to the test, feel free.
How to enable Aero Glass
- Make sure the Desktop Experience is activated
- On the Dell XPS M1330, make sure BIOS A14 or later is installed
- Confirm the latest NVIDIA drivers for Windows 7 x64 are installed
- Turn on the Desktop Window Manager Session Manager service and switch to automatic start
- Turn on the Themes service and switch to automatic start
- Switch to an Aero theme
It is also possible to add the Windows 7 Sidebar to the host desktop. We haven’t tested this extensively enough to provide documentation on the best approach, but we have done it successfully. If there is sufficient interest in this technique I will add a follow-up post in future.
Snapshots and local storage
Be prepared for snapshots to increase local storage requirements considerably. Some of our developers are legitimately struggling to work on three projects concurrently with 300GB of local storage. One option we are considering is eSATA over PCMCIA as a means of increasing total spindle count and storage, but we have yet to begin testing this approach. We are specifically interested in eSATA because Hyper-V does not support system VHDs on USB storage. If we pursue this option I’ll post the results.
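To keep an eye on how much of that storage the snapshots themselves are consuming, a small script can total the differencing disks (.avhd files) under a given folder. This is a sketch; the path shown is the default Hyper-V snapshot location and may differ in your build:

```python
import os

def snapshot_usage_gb(root: str) -> float:
    """Sum the size of Hyper-V snapshot differencing disks (*.avhd) under root."""
    total = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.lower().endswith(".avhd"):
                total += os.path.getsize(os.path.join(dirpath, name))
    return total / (1024 ** 3)

if __name__ == "__main__":
    # Default snapshot location on Windows Server 2008 R2; adjust as needed.
    print(snapshot_usage_gb(r"C:\ProgramData\Microsoft\Windows\Hyper-V"))
```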
Putting things in perspective
It’s worth repeating that we’re asking this system to do many things it was not intended to do. It is a server operating system with an enterprise virtualisation technology. The Microsoft virtualisation team will tell you that Hyper-V was not designed with developers in mind. To quote Ben Armstrong:
As is being discussed at length here – Hyper-V does not play well with high-end video cards (which are far more common on desktops than servers). Hyper-V also disables sleep and hibernate, as well as increasing the power utilization of the computer. All of these things would need to be addressed before we could even consider putting Hyper-V in a desktop product.
In short, we bent this system to development needs because of the strength of the technology, despite these imperfections. There are fundamental compromises that can’t be avoided when using a server operating system as a mobile workstation but we believe that we can deliver SharePoint projects as a team better with this technology than without it.
Whether laptops are the ultimate hardware solution is a different can of worms, which I’ve chosen to avoid in this series of posts. I’ve tailored the approach to laptops since that is what we have and the approach can be ported to workstations or shared virtual infrastructure.
The developer experience and the bottom line
There’s no question that using snapshots, import and export in Hyper-V adds a layer of complexity to the development experience, and there will be a learning curve for those who are less familiar with virtualisation or who don’t use the advanced features often. However, we have achieved an immediate and measurable gain in stability and environment consistency through the use of standard builds, snapshots and exported project-defined environments.
Conversely, it’s worth keeping in mind that as desirable as standardisation is, there are times when it hinders more than it helps, and on those occasions a non-standard build may be more appropriate. Considering alternative builds is a much less cumbersome proposition with the combination of WDS, Shrink Volume and dual-booted systems, or the new native boot from VHD. The key consideration is that most other approaches entail sacrificing what I lump together as the “management benefits” of Hyper-V (snapshot, import and export). For instance, you may consider allowing a team to develop on native operating systems for a project; but then a team member may lose a day if they need to rebuild their system, or the support team may need two days to build an environment in Hyper-V later on, or a team member may need to split her time with a team who use Hyper-V for their project, or the original project may fork. Memories of project difficulties gone by come flooding back. While it’s always worth considering options, if you spend time identifying a standard approach for your business, it’s probably best to stick with it unless there’s a truly compelling reason not to.