VDI Good to Great, Part 2

Written by Helient Webmaster | Sep 5, 2013 9:58:12 AM

Greatness, Guaranteed

In Part 1 of “VDI Good to Great” we discussed how Helient has raised the bar on user experience by leveraging personal vDisk to deliver persistent VDI desktops. Persistent desktops retain user data across restarts, which in turn allows us to reintroduce key Windows 7 functionality and performance optimizations in VDI such as indexing, search, and cached user profiles. In this second installment, we will look at how we validate and guarantee the performance of our VDI solutions to ensure that the infrastructure delivers the best possible performance for every user. Whether using persistent or non-persistent desktops, we use a proven methodology to guarantee that the last user to log in to the system has the same great experience as the first.

Don’t Estimate, Measure

It is no secret that many VDI projects have failed to make the transition from POC or pilot to full production scale-out. What works perfectly well for a small group of test users, or even a larger pool of staff piloting the system for actual production work, begins to break down as more users exercise the system, daily work patterns put more concentrated strain on specific parts of the larger infrastructure, and operational tasks reveal performance limitations that were never uncovered and tested prior to rollout. So how does Helient mitigate these risks and gain a full understanding of how your VDI will perform at scale before putting your unwitting users onto it?

The answer, and the only answer, is not to guess, estimate, or extrapolate, but to actually test the VDI environment from end to end, with real-world activities, using your actual desktop build, under the full workload for which it is being built, and then measure the user experience. But how do we do this without hiring 200, 500, or 1,000 people, equipping them with stopwatches, and asking them to log in to the system and pretend to do the work of the firm for two hours while accurately recording their experience? Fortunately, there are several commercial tools that allow us to evaluate all aspects of a virtual desktop platform. At Helient, our VDI toolkit is anchored by one such application, called Login VSI.

Reference Architectures vs. Reality

Login VSI is the same tool that virtually all of the major hardware manufacturers now use when developing and testing virtual desktop reference architectures on their platforms. In fact, Login VSI has become the de facto standard for measuring virtual desktop scalability and performance within the industry. Helient has become an expert in the effective use of Login VSI, having employed it for years on many projects large and small.

It is important to note that our approach to Login VSI is very different from that of hardware vendors, partly due to our focus on the legal industry and partly due to differing objectives when using the tool. Hardware vendors use Login VSI to demonstrate the maximum user density achievable on their platform, using a stripped-down Windows desktop in near-total isolation. This is a perfectly valid test for their purposes: it is essential to validate a reference architecture regardless of industry application, and it provides a standard point of comparison between vendors and architectures. But we cannot directly use the vendors’ results to gauge VDI density in the real world, especially in a legal desktop environment. Helient, in contrast, uses Login VSI to determine the hardware that will be required to achieve the desired user capacity on your project, given the unique needs of the legal desktop and an understanding that VDI must integrate into a larger, pre-existing infrastructure. While hardware vendors eliminate performance obstacles and the idiosyncrasies of a production environment in order to highlight the ideal performance capabilities of their product, we make sure those complexities and obstacles are incorporated into the test. Consider the parameters used in a typical vendor test versus our custom testing:

As you can see, our testing is influenced by many more desktop and infrastructure components. This generates much more load, both within the desktop and on the backend, during login and application execution. Our approach yields an extremely accurate measure of expected performance, identifies potential areas of weakness outside the VDI system itself, and produces hardware purchasing and system configuration recommendations appropriate to your specific environment. With that in mind, let’s have a look at how Login VSI works and how we use it to both determine and then validate VDI sizing requirements.

Login VSI in a Nutshell

Login VSI is a collection of components that work together to simulate large quantities of users performing active work on your system while measuring system responsiveness throughout the test. The core Login VSI components are:

  • The management console, which orchestrates and monitors testing progress
  • The launcher agent, which initiates many simultaneous VDI sessions, usually about 30 per launcher (see the sizing sketch after this list)
  • The target agent, which runs on the desktop VMs and executes the workload and timing functions
  • The session monitor, which captures performance log data as the test is running
  • The analyzer, which processes the test log data and generates reports
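As a rough sketch of that launcher sizing rule of thumb, in Python (the 30-sessions-per-launcher figure is the guideline noted above; the real limit depends on launcher hardware and display protocol, and the 500-session example is illustrative):

```python
import math

def launchers_needed(target_sessions: int, sessions_per_launcher: int = 30) -> int:
    """Estimate how many launcher machines a test will require.

    Uses the rough 30-sessions-per-launcher guideline noted above; the
    actual per-launcher limit depends on hardware and display protocol.
    """
    return math.ceil(target_sessions / sessions_per_launcher)

# Example: a 500-session full-capacity test needs about 17 launchers.
print(launchers_needed(500))  # -> 17
```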

The Login VSI Management Console is used to orchestrate testing

Put simply, Login VSI determines maximum user count by gradually increasing the number of users on the test system and measuring application response time in relation to the active user count. Using pre-defined or custom workload scripts, Login VSI actually logs each test user into a VDI desktop and performs a scripted sequence of work actions, such as typing in a Word document, reading and composing email, browsing a Flash-enabled website, and printing to a PDF file.

An excerpt from the Login VSI script showing IE and Outlook actions
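Login VSI’s actual workload scripts are built into the product (the excerpt above shows real IE and Outlook actions). Purely to illustrate the pattern of work actions interleaved with timed, logged timer actions, here is a hedged Python sketch in which every function name, file name, and number is hypothetical:

```python
import csv
import random
import time

def do_work_step(name: str) -> None:
    # Placeholder for a real scripted action (typing in Word, reading
    # mail, browsing, printing to PDF); here it just burns a little time.
    time.sleep(random.uniform(0.05, 0.2))

def timed_action(label, action, log, active_users: int) -> None:
    """Execute one small timer action and log (users, label, ms)."""
    start = time.perf_counter()
    action()
    ms = (time.perf_counter() - start) * 1000
    log.writerow([active_users, label, round(ms, 1)])

def run_workload_loop(log_path: str, active_users: int, loops: int = 3) -> None:
    """One user's simulated work, interleaved with timer actions."""
    with open(log_path, "a", newline="") as f:
        log = csv.writer(f)
        for _ in range(loops):
            do_work_step("type_in_word_document")
            timed_action("open_document", lambda: do_work_step("open"), log, active_users)
            do_work_step("read_and_compose_email")
            timed_action("print_to_pdf", lambda: do_work_step("print"), log, active_users)
            do_work_step("browse_website")

run_workload_loop("vsi_timers.csv", active_users=42)
```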

Throughout the test, a specific series of small transactions (timer actions) is periodically executed, timed, and logged. After the test, the analyzer processes the resulting logs and graphs the time required to complete these timer actions as the user count increases. The so-called “VSI Max” is the point at which the average time required to execute the timer actions has increased significantly from the baseline and noticeable performance degradation begins to occur. Thus the VSI Max represents the maximum number of users that can be expected to work on the system simultaneously with good results.

The VSI Max indicates maximum capacity before performance noticeably degrades
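Login VSI computes its VSI Max internally using its own weighting of the timer results. Purely as a simplified illustration of the idea, this sketch derives a baseline from the lightest-load samples and reports the last user count whose average timer response stays within an assumed threshold of that baseline (the log format matches the hypothetical sketch above; the 1000 ms threshold is an assumption, not Login VSI’s formula):

```python
import csv
from collections import defaultdict
from statistics import mean

def find_vsi_max(log_path: str, threshold_ms: float = 1000.0) -> int:
    """Simplified VSI-Max-style analysis over (users, label, ms) rows.

    The threshold is illustrative only; the real product applies its
    own weighting and formula to the timer measurements.
    """
    by_users: dict[int, list[float]] = defaultdict(list)
    with open(log_path, newline="") as f:
        for users, _label, ms in csv.reader(f):
            by_users[int(users)].append(float(ms))

    counts = sorted(by_users)
    # Baseline: average timer response at the five lightest user counts.
    baseline = mean(ms for u in counts[:5] for ms in by_users[u])
    for u in counts:
        if mean(by_users[u]) > baseline + threshold_ms:
            return u - 1  # last count before noticeable degradation
    return counts[-1]     # system never saturated within this test
```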

Unit Testing and Headroom

Typically, our first round of testing targets a single representative unit of computing infrastructure, such as a rack-mounted server, a blade, or a converged computing “pod,” and from this testing we determine a unit capacity.

While Login VSI is stressing the system and measuring the response time of the applications within the test sessions, we are also monitoring the backend hardware performance counters to determine how close critical components such as storage, network, and CPU are to saturation. These measurements show us available headroom on each component and allow us to predict potential bottlenecks as we expand the computing infrastructure and the user count based on the results of the unit test.

Performance metrics from a single hypervisor node during a Login VSI test

Performance counters from the Machine Creation Services volume during a Login VSI test
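The headroom arithmetic itself is simple. Here is a sketch with illustrative peak values and assumed saturation points; in practice the thresholds come from your hardware’s rated limits, not from these numbers:

```python
# Peak values observed on one node during the unit test (illustrative),
# compared against assumed saturation points for each component.
observed_peaks = {
    "cpu_percent":  71.0,    # hypervisor CPU utilization at peak load
    "storage_iops": 8200.0,  # IOPS on the desktop volume
    "network_mbps": 3100.0,  # host NIC throughput
}
saturation_points = {
    "cpu_percent":  90.0,     # leave scheduling headroom below 100%
    "storage_iops": 12000.0,  # array's rated sustained IOPS
    "network_mbps": 9000.0,   # usable share of a 10 GbE link
}

for counter, peak in observed_peaks.items():
    limit = saturation_points[counter]
    headroom = (1 - peak / limit) * 100
    print(f"{counter}: peak {peak:g} of {limit:g} -> {headroom:.0f}% headroom")
```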

Based on the VSI Max obtained through unit testing, we can simply divide the total target user count by the unit capacity, then add units as needed for redundancy (N+1, etc.), to arrive at the total number of computing units required. We also analyze and extrapolate the backend performance counters to determine whether additional storage or networking I/O will be needed.
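For example, the division works out as follows (the unit capacity and target user count here are illustrative):

```python
import math

def units_required(target_users: int, vsi_max_per_unit: int, spare_units: int = 1) -> int:
    """Computing units needed for target_users, plus N+1-style spares.

    vsi_max_per_unit is the unit capacity from the unit test;
    spare_units=1 gives classic N+1 redundancy.
    """
    return math.ceil(target_users / vsi_max_per_unit) + spare_units

# Example: a unit test showing a VSI Max of 85 users per blade means
# 600 users need ceil(600 / 85) = 8 blades, or 9 with N+1 redundancy.
print(units_required(600, 85))  # -> 9
```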

Full Capacity Testing: Let’s See What this Baby Can Do!

Next, after procuring the recommended hardware and remediating any potential bottlenecks identified in the first round of testing, we proceed to a second round of Login VSI testing. During these tests we validate that the fully built-out production system performs as expected when the user count is increased to the maximum design capacity. Should any unexpected results surface during this subsequent testing, additional adjustments or resources can be applied as needed and further tests executed until the desired results are achieved. Using this iterative method, we guarantee that the performance of the fully populated system will meet or exceed the users’ requirements, allowing the firm to proceed confidently into rollout with no concerns about performance issues cropping up.

Busting out of Pilot and Cruising into Production

In summary, Helient has achieved consistent success in VDI deployments by adopting a rigorous, scientific approach to load testing and performance evaluation that leaves nothing to chance. We take the time to fully stress and understand all the performance parameters of each customer’s infrastructure using a real-world desktop environment and sophisticated load-testing tools. We perform unit testing to aid in purchasing decisions and system scale-out. Finally, we load the system to full design capacity prior to rollout and make any required adjustments ahead of time, so there are no nasty surprises waiting around the corner as you bring those final users onboard.

We’re Not Done Yet…

So far we have focused our attention almost entirely on the backend infrastructure, but VDI success doesn’t begin and end there. To deliver a best-in-class experience with today’s graphics-intensive desktops without halving your user-density numbers, you will need a little help from all those little computers that still sit on users’ desks. Check back soon for the final part in our series, where we turn our attention to the last mile of VDI: the endpoint device.