Building a SharePoint 2007/2010 development environment – Part II: Design

In the first part of this series, I introduced the pros and cons of various SharePoint development approaches and the objectives of this system redesign. In this part I will focus on design choices and conclusions, starting with the core technology.

Why we’ve chosen Hyper-V

There are broadly five decisive factors: performance, management features (like snapshots), cost, 64-bit OS support and a full host OS (not just a virtualisation administration console):

  • Hyper-V is now in its second iteration and has proved to be one of the best-performing virtualisation technologies on the market. With laptops we need to take advantage of every performance gain that we can find.
  • Perhaps most importantly, and most often overlooked, Hyper-V is free if you’re already running Windows Server 2008 or Windows Server 2008 R2 as a client OS
    • While I work for a Microsoft Gold Partner and Hyper-V is a key part of Microsoft’s future strategy (full disclosure), I believe most SharePoint developers will have at least an MSDN subscription and a licence for a Windows Server operating system, so this is compelling for most SharePoint development environments
  • Hyper-V is one of the few virtualisation technologies that supports 64-bit guest operating systems and, within that narrow range of choices, it is one of the few that allows a user to log on to the host machine and control the virtual machines within it concurrently
    • For instance, VMware ESXi is also a high-performance Type-1 hypervisor that supports 64-bit operating systems, but there is no host machine other than a virtualisation administration console
      • It also has associated costs (support, management tools, etc) that we would prefer to avoid
    • With Windows Server 2008 R2 and Hyper-V, developers can use client tools within a familiar operating system while accessing virtual development environments at the same time
      • This is also true of VMware Workstation, but it’s a Type-2 hypervisor and will have relatively poor performance
    • We need 64-bit OS support for Windows Server 2008 R2 and SharePoint 2010 – neither of which has a 32-bit flavour

Our approach

Workgroup development

By building our virtual machines in a Workgroup, we no longer need to worry about SharePoint installation/(re)configuration difficulties, as we will import the same virtual machine on everyone’s Hyper-V host. This would not be possible if the virtual machine were a member of a centralised domain, because the domain controller would receive chatter from many identical machines and everything would quickly start to unravel.

Alternatively, we could run Active Directory Domain Services within a virtual machine, but this has a performance overhead that is best avoided unless absolutely necessary. Additionally, developing on a domain controller is not ideal.

While developing in a Workgroup presents challenges for profile imports, these are far from insurmountable, as LDAP directories like AD LDS or SQL users can substitute in many scenarios. The only evident gap is a scenario where full Active Directory Domain Services (more than just user accounts) are required, and that will not be true of enough projects to justify including Domain Services in the base build. Instead, as requirements for Domain Services are identified we will provide new builds, as they are likely to be very project-specific.

Networking

Internet Connection Sharing network

In order to isolate identical virtual machines from each other and from network resources, I’ve created a Hyper-V internal network dedicated to receiving Internet Connection Sharing (ICS) from one of the host’s active network connections. This is an internal network like any other Hyper-V internal network – it just so happens that the host’s adapter on this network will be receiving ICS from another of the host’s connected adapters. Any connection can share to the ICS network – even if Hyper-V doesn’t natively support external networks of that type. Depending on the need, we will share to this ICS network from:

  • Hyper-V host external connections (by default)
  • Wireless
  • Mobile broadband
  • VPN

ICS also introduces a layer of NAT between the guests and the physical network, preventing inbound connections to guests over these networks. This is desirable as it is how we achieve physical network isolation, and is the reason why we’ve chosen ICS over Bridging. In the ICS Settings we enable outbound RDP, HTTP and HTTPS connections over ICS by default, although it may be useful to enable other common outbound network protocols like FTP and SMTP. Outbound connectivity from our guest virtual machines is primarily used for connecting to TFS over HTTPS.
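For reference, sharing a host connection to the internal network can be scripted with the `HNetCfg.HNetShare` COM object rather than clicked through in the Network Connections UI. This is only a sketch: the two connection names are illustrative and will differ on each host, so check them in Network Connections first.

```powershell
# Sketch: enable ICS from a host connection onto the Hyper-V internal adapter.
# Connection names below are illustrative - they will differ per host.
$publicName  = "Local Area Connection"   # the host connection to share from
$privateName = "ICS Internal"            # the host's adapter on the Hyper-V internal network

$shareMgr = New-Object -ComObject HNetCfg.HNetShare

foreach ($conn in $shareMgr.EnumEveryConnection) {
    $props  = $shareMgr.NetConnectionProps.Invoke($conn)
    $config = $shareMgr.INetSharingConfigurationForINetConnection.Invoke($conn)

    if ($props.Name -eq $publicName)  { $config.EnableSharing(0) }  # 0 = public (shared-from) side
    if ($props.Name -eq $privateName) { $config.EnableSharing(1) }  # 1 = private (shared-to) side
}
```

Because ICS only permits one shared connection at a time, calling `DisableSharing()` on the currently shared connection first avoids errors when switching the share between wired, wireless, mobile broadband and VPN connections.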

Internal network

We have also created a Hyper-V Internal network to support “always on” communication between the host machine and the guest virtual machines. Guest virtual machines can also communicate with each other over this network.

We cannot rely solely on the ICS connection. Guest virtual machine IP addresses on the ICS network will change because they receive them via DHCP from the host’s ICS connection. This is just how ICS works. The host’s ICS adapter becomes a gateway on 192.168.137.1. Any Hyper-V guests on that network pick up DHCP from the host and are automatically assigned an address on the 192.168.137.xxx (255.255.255.0) IP range.

As we rely on HOSTS file entries for RDP connections from host to guest and for browsing to guest SharePoint sites from the host, we need fixed, reliable IP routing and name resolution, so we use this second network for that purpose. The hosts and guests have both been built with fixed IP addresses on this range.

Because this is a Hyper-V internal network, there is no risk of network collisions on these IP ranges (which are identical for all users). Internal traffic never leaves the machine.
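As an illustration, fixing the host’s address on this internal network and adding the HOSTS entries might look like the following. The adapter name, the 192.168.200.x range and the machine/site names are assumptions for the example; the real values are defined in the build guides.

```powershell
# Illustrative only: adapter name, IP range and host names will differ in the real build.
# Give the host's internal-network adapter a fixed address.
netsh interface ipv4 set address name="Hyper-V Internal" static 192.168.200.1 255.255.255.0

# Add HOSTS entries on the host so guest RDP targets and SharePoint sites resolve by name.
$hosts = "$env:SystemRoot\System32\drivers\etc\hosts"
Add-Content $hosts "192.168.200.10`tsp2010-dev"            # guest machine name (for RDP)
Add-Content $hosts "192.168.200.10`tintranet.contoso.dev"  # guest SharePoint web application
```

The guests carry matching fixed addresses on the same range, so these entries remain valid no matter which host the virtual machine is imported to.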

Project builds

By providing self-contained environments, our technical leads and/or architects can now customise the base builds to create project-specific virtual machines that can be exported to all members of a team, reducing system/configuration inconsistencies. In practice, the build lead/architect will export the final snapshot of the project environment, which all team members will use as a starting point for project development. As the project progresses, updated builds can be released in this same manner.

Clean hosts

Host machines will be cleaned of development tools and data, allowing quick provisioning. Development tools will not need to be reconfigured, as they will reside in virtual machines that can be exported/imported to any host. Guest base build virtual machines will be provided with as much SharePoint configuration as possible in order to make them light on reconfiguration, disposable and re-deployable.

Project resumption

Project-specific exports will reduce the need to store multiple virtual machines as far as possible. We will retain one virtual machine export in public storage per project, which will reduce the time involved in resurrecting environments after project completion.

Optimisation

Host machines are built on Windows Server 2008 R2, optimised as much as possible and reduced to the lightest weight achievable. The build will broadly include:

  • Hyper-V R2
  • Microsoft Office applications
  • All browsers
  • No development tools (these will all live in guest machines)
    • The only exception to this is the Team Foundation Client, which TFS administrators install manually, as we have not been able to pick domain users from within a Workgroup environment

Content provisioning

By deploying a single project-specific virtual machine, we can bake content in to the project build, ensuring consistency and reducing the overhead associated with re-deploying content.

Improved testing and reduced volatility through the use of snapshots

By using snapshots we are able to test code and configuration changes without volatility. The benefits of this technology include:

  • Capturing restore points at milestones in a server build
  • Capturing a stable state before attempting volatile configuration changes
  • Capturing a stable state before testing code changes
  • Creating an initial restore point after importing a virtual machine and re-configuring network adapters (when required) and making other preferential settings
    • This saves re-importing and reconfiguring network adapters if a machine needs to be rolled back to its initial state
  • Exporting a virtual machine to capture a problem when trouble occurs
      • An exact instance of a problem can be captured and shipped out for support, without having to re-create the problem in a distinct environment. This will only apply to self-contained environments, however

Local source code storage on the host machine

Before our pilot started we identified that storing the local copies of source code within changing snapshot states could create problems. At the same time, we found it desirable to put the development tools inside our virtual machines in order to get around remote SharePoint development difficulties and to keep the host build uncluttered. To resolve this conflict, we adopted this approach:

  • Created a share on the host machine and granted ownership of that directory to a new user account that is used solely for this purpose
  • Created a new user account with the same name and password in the guest virtual machine
  • Mapped a drive from the guest to the host’s new share, using the internal network’s IP address and these new credentials
  • Launched Visual Studio and downloaded project source code, pointing to the newly mapped drive as the location for the local source
  • Created a new Code Group for the mapped drive to enable trust for code execution
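The steps above can be sketched as follows. The account name, password, share name, drive letter and internal-network IP address are all illustrative, and the caspol group label may differ depending on the existing machine-level policy on the guest:

```powershell
# --- On the host (names, password and paths are illustrative) ---
net user SrcShareUser "P@ssw0rd1" /add                  # account used solely for the source share
mkdir C:\DevSource
icacls C:\DevSource /grant "SrcShareUser:(OI)(CI)F"     # grant the account full control of the directory
net share DevSource=C:\DevSource /grant:SrcShareUser,FULL

# --- In the guest ---
net user SrcShareUser "P@ssw0rd1" /add                  # same name/password for pass-through authentication
net use S: \\192.168.200.1\DevSource /user:SrcShareUser "P@ssw0rd1" /persistent:yes

# Grant full trust to code on the mapped drive (.NET 2.0/3.5-era CAS, machine level)
& "$env:windir\Microsoft.NET\Framework64\v2.0.50727\caspol.exe" -q -m -ag 1 -url "file://S:/*" FullTrust -n "MappedSource"
```

Matching local accounts are what make the mapping work in a Workgroup: with no domain, pass-through authentication succeeds only when the guest account’s name and password mirror the host’s.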

These steps are covered in more detail in the build guides that will follow in later posts. This is just an outline of the approach to extracting the local code from the snapshot state, which should be adaptable to other development systems with snapshots. For instance, we’ve found it necessary to download code to a unique location within this directory for each user, as TFS tracks the network locations that code is checked out to. This is easy to get around, as each user just specifies a unique directory name. If this were insufficient for project requirements, this TFS behaviour could possibly be “fooled” by using NTFS junction points. We’ve not seen a need for this yet, but we’re confident that with this additional option we should be able to store local source code in this manner, and this has been validated by our project experience with Hyper-V to date.

Summary

These are the high-level design choices that emerged early in the consultancy and research, and they have remained largely unchanged to date. These choices represent one design that has been validated for our needs; like any approach, it has some shortcomings. Some of these issues will be covered in the final post in this series.

The information in this post has not covered implementation in any detail, but do not fret. In the next part I will cover the step-by-step build guide for the Hyper-V host laptop.