
Author:  Jamie Engelhard, CTO
Contributor:  Michael Rocchino, IT Support Specialist

For many of you this is what you’ve been patiently (in most cases) waiting for! Having set the stage in parts 1 and 2, we will focus this entry on delivering and comparing the actual results of our performance and density testing using Citrix XenDesktop 7.6 and VMware Horizon 6. If you have not read the previous posts in the series, we encourage you to do so, but it is not required to gain some insight from these results. Bottom line, we are comparing these two VDI software solutions on the same hyper-converged hardware platform.

So what, specifically, are we reporting on here? While there are many important factors to consider when choosing a VDI solution, our scope here is primarily focused on a few key aspects that were of interest to the firm that sponsored the test. These focus areas relate to Virtual Desktop Machine (VDM) performance, density, and operational load, including:

  • Duration of Power Operations:
      – Boot Storm
      – Power Down
  • Achievable VDM Density per Unit
  • Session Bandwidth Consumption
  • IOPS and Other Infrastructure KPIs Throughout the Tests

Boot Up / Power Down Testing

For these tests, we simply powered off and then on all the VDMs in the pool using the VDI management consoles or vSphere as required. As noted previously, all throttling of VDM power operations was lifted so machines were allowed to boot as quickly as possible.
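For readers who want to reproduce this kind of bulk power-on timing outside the VDI consoles, below is a minimal sketch using pyVmomi against the vSphere API. The vCenter address, credentials, the "VDI-Pool" folder name, the polling interval, and the use of VMware Tools status as the "ready" signal are all our assumptions for illustration, not details of the original test harness.

```python
# Minimal sketch (assumptions: pyVmomi installed, VDMs live in a folder named
# "VDI-Pool", and "VMware Tools running" is a good-enough proxy for "ready").
import ssl
import time
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()            # lab convenience only
si = SmartConnect(host="vcenter.example.local",   # hypothetical vCenter
                  user="administrator@vsphere.local",
                  pwd="********", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vdms = [vm for vm in view.view if vm.parent and vm.parent.name == "VDI-Pool"]

start = time.time()
# Kick off all power-on tasks at once: an intentional, unthrottled boot storm.
tasks = [vm.PowerOnVM_Task() for vm in vdms
         if vm.runtime.powerState == "poweredOff"]

# Poll until every VDM reports VMware Tools running (our "ready state" proxy).
while True:
    ready = sum(1 for vm in vdms
                if vm.guest.toolsRunningStatus == "guestToolsRunning")
    print(f"{ready}/{len(vdms)} ready after {time.time() - start:.0f}s")
    if ready == len(vdms):
        break
    time.sleep(15)

Disconnect(si)
```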

Test | Horizon | XenDesktop | Outcome
Power Off All VDMs | < 5 minutes to shut down all machines | 5 minutes to shut down all machines | Nominal difference; Horizon is slightly faster
Boot 300 VDMs to Ready State | All machines powered on and ready within 7 minutes | All machines powered on and registered within 18 minutes | XenDesktop machines took 157% longer

Below are the IOPS graphs and peak values achieved during the boot test for each solution. These graphs use the same time scale. As you can clearly see, Horizon achieved much higher peak IOPS and the boot storm is compressed into a much shorter time period.

Horizon

XenDesktop

Initially this was a surprising outcome, and it really highlights some fundamental inefficiency in the Citrix Machine Creation Services (MCS) implementation. After further discussion, we remembered seeing this behavior in production environments during XenDesktop boot storms, in which the machines’ power-on task progress indicator, as seen in vSphere, gets stuck at 100% for 1–2 minutes before finally completing. We dug into some old vmware.log files from VDMs that exhibited this behavior in MCS and found entries like this one:

2013-10-07T02:08:55.403Z| vmx| I120: DISK: Opening disks took 162844 ms.

At the time, our belief was that the operation was IOPS constrained, which is why it took 2 minutes and 42 seconds to complete. Now that we see the exact same behavior on a platform that clearly has plenty of IOPS capacity, we know the cause lies in the underlying method MCS uses to create the disks. We will be performing deeper analysis of this particular discovery in the future for our XenDesktop customers, but for the purposes of this testing we can simply say that Horizon Linked Clones beat XenDesktop MCS very convincingly.
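If you want to check your own environment for this symptom, a small sketch along these lines can scan collected vmware.log files for slow disk-open entries. The log directory layout and the 60-second threshold are our assumptions; adjust both to your environment.

```python
# Sketch: flag VDMs whose vmware.log shows an unusually slow disk-open phase.
# Assumptions: logs collected to ./vmware-logs/<vm-name>/vmware.log and a
# 60,000 ms threshold.
import re
from pathlib import Path

PATTERN = re.compile(r"DISK: Opening disks took (\d+) ms")
THRESHOLD_MS = 60_000

for log in Path("vmware-logs").glob("*/vmware.log"):
    for line in log.read_text(errors="ignore").splitlines():
        match = PATTERN.search(line)
        if match and int(match.group(1)) >= THRESHOLD_MS:
            print(f"{log.parent.name}: opening disks took "
                  f"{int(match.group(1)) / 1000:.1f} s")
```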

Login VSI Density Testing

Testing Parameters

For VDM density testing, we used Login VSI to perform identical tests against both software environments. Below are the parameters used for each test; we kept the pacing (a 6-second launch interval) identical for both platforms, so the launch window is simply the number of sessions multiplied by that interval.

Setting | Description | Horizon | XenDesktop
Sessions | Number of sessions to launch | 332 | 320
Launch Time | The period of time over which the target number of sessions are launched by the launcher PCs | 33 Minutes | 32 Minutes
Launch Interval | Number of seconds between session launches | 6 Seconds | 6 Seconds
Workload | Workload determines the script actions and timing of those actions | Knowledge Worker | Knowledge Worker

Login VSI Terminology

When reviewing the upcoming scores and graphs, it is important to understand certain key terms in Login VSI. These are:

Login VSI Baseline (ms)

The VSI baseline score indicates the responsiveness of a VDM with very few users logged into the environment. A lower baseline score is better. It indicates a more responsive system and faster completion of the very first timed test within the workload script.

Login VSI Threshold (ms)

The VSI threshold is relative to the VSI baseline; in our tests it worked out to the baseline plus 1,000 ms. It represents the maximum allowable VSI score, beyond which users would begin to experience noticeable slowdowns. When the VSI threshold is met, the system is considered to be at the maximum recommended capacity without degrading user experience.

Login VSI VSImax (sessions)

This is the number of active working sessions when the Login VSI Threshold was reached. A higher score is better as it indicates more users can be accommodated on the system before user experience is impacted.

Summary Results

Below are the summary results of our testing. This was quite a surprise, as the difference was much bigger than we anticipated. Our assumption was that the two platforms would land within roughly ±3% of each other, inside the margin of error and statistically insignificant. What we actually found was that Horizon performs better from the start and has longer legs as well when it comes to VDM performance.

Metric | Horizon | XenDesktop | Outcome
Login VSI Baseline | 788 ms | 964 ms | Horizon environment is 22% faster with no load
Login VSI Threshold | 1,788 ms | 1,964 ms | Horizon is 10% faster at full load
Login VSI VSImax | 311 | 284 | Horizon can accommodate 10% more sessions on like hardware
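The outcome percentages in the table can be reproduced directly from the raw values above; here is a quick sanity check.

```python
# Reproduce the "Outcome" percentages from the summary table above.
horizon = {"baseline_ms": 788, "threshold_ms": 1788, "vsimax": 311}
xendesktop = {"baseline_ms": 964, "threshold_ms": 1964, "vsimax": 284}

baseline_gain = (xendesktop["baseline_ms"] - horizon["baseline_ms"]) / horizon["baseline_ms"]
threshold_gain = (xendesktop["threshold_ms"] - horizon["threshold_ms"]) / horizon["threshold_ms"]
density_gain = (horizon["vsimax"] - xendesktop["vsimax"]) / xendesktop["vsimax"]

print(f"Baseline: Horizon ~{baseline_gain:.0%} faster with no load")    # ~22%
print(f"Threshold: Horizon ~{threshold_gain:.0%} faster at full load")  # ~10%
print(f"VSImax: Horizon fits ~{density_gain:.0%} more sessions")        # ~10%
```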

VSImax Detailed Results

Below are the actual Login VSI VSImax graphs for each solution. The VSImax is indicated by the red “X”.

Horizon

Horizon Graphic 3

XenDesktop

XenDesktop Graphic 4

VSI Index Comparison (Horizon vs. XenDesktop)

The graph below depicts the VSI scores of Horizon and XenDesktop over time as sessions are added. As you can see, Horizon starts lower and stays on its performance curve until past 300 sessions, while XenDesktop starts higher and falls off its performance curve well before 300 sessions.

VSImax Graphic 5

IOPS Comparison During Login VSI Tests

Below you can (hopefully) see the IOPS created during each test. There is very little difference between the two solutions.

Horizon (Peak: 6,302)

Horizon Graphic 6

XenDesktop (Peak: 6,420)

XenDesktop Graphic 7

Hosting Infrastructure Key Performance Indicators (KPI) during Login VSI Tests

Below are the key measurements of system utilization during the Login VSI tests for each environment. The first graph is an aggregate across the entire cluster, with individual hosts / nodes following that.

Horizon

Cluster-wide Resource Utilization

Horizon Graphic 8

Individual Host Utilization

CPU Graphic 9

XenDesktop

Cluster-wide Resource Utilization

XenDesktop Graphic 10

Individual Host Utilization

CPU Graphic 11

Bandwidth Testing

In order to gauge the bandwidth that would be required for a satisfactory user experience in remote offices, a small Login VSI test was run from two different satellite offices on each of the VDI platforms. These results are intended to serve two purposes: 1) as a comparative measure between the two remoting protocols and 2) as a base value which can be extrapolated for each intended office based on the number of users. Login VSI is not equipped to track or analyze bandwidth consumption; it merely provides a useful mechanism for launching a number of sessions from a remote office. The bandwidth values were gathered using OpNet, the preexisting network monitoring platform at the firm. Bandwidth was measured at the switch interface used by the launcher PC.

Test Parameters

Setting | Description | XenDesktop | Horizon
Sessions | Number of sessions to launch | 9 | 10
Launch Time | The period of time over which the target number of sessions are launched by the launcher PCs | 9 Minutes | 10 Minutes
Launch Interval | Number of seconds between session launches | 60 Seconds | 60 Seconds
Workload | Workload determines the script actions and timing of those actions | Knowledge Worker | Knowledge Worker

Singapore

Horizon

XenDesktop

San Diego

Horizon

XenDesktop

Analysis and Interpretation

Unfortunately, OpNet was not able to provide raw values for data analysis, so the results are not very precise. By “eyeballing” the graphs above, we converted ten data points from the middle of each test into rough values, averaged them, and present them in the table below. All values are reported in megabits per second (Mbps).

Platform | Singapore Average (Mbps) | San Diego Average (Mbps) | Overall Average (Mbps) | Overall Average per Session (Mbps)
Horizon | 1.410 | 1.910 | 1.660 | 0.166
XenDesktop | 2.155 | 1.895 | 2.025 | 0.225
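To reproduce the averages above and turn the per-session figure into a rough office-level estimate, a quick sketch follows; the 25-user office is a hypothetical example, not part of the test.

```python
# Reproduce the table values and extrapolate to a hypothetical office size.
samples_mbps = {
    "Horizon":    {"Singapore": 1.410, "San Diego": 1.895 + 0.015, "sessions": 10},
    "XenDesktop": {"Singapore": 2.155, "San Diego": 1.895, "sessions": 9},
}
samples_mbps["Horizon"]["San Diego"] = 1.910  # values taken from the table above

for platform, data in samples_mbps.items():
    overall = (data["Singapore"] + data["San Diego"]) / 2
    per_session = overall / data["sessions"]
    print(f"{platform}: overall {overall:.3f} Mbps, "
          f"per session {per_session:.3f} Mbps, "
          f"25-user office ~ {25 * per_session:.1f} Mbps")
```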

The results above are quite surprising and are indicative of significant improvements made by Horizon with the recent releases. The conventional wisdom has long been that PCoIP was generally much more bandwidth intensive than ICA. Of course, this topic has been the subject of exhaustive analysis over the years and there are numerous blogs, YouTube videos and whitepapers which go into a lot more detail than we will here. But in our simple test and with no adjustments made to the default quality and compression settings, Horizon 6 actually used less bandwidth than XenDesktop.

Interpretation of the values above and some additional points for consideration are as follows:

  • Per session, View consumed roughly 26% less bandwidth than XenDesktop (0.166 vs. 0.225 Mbps); viewed the other way, XenDesktop used about 36% more
  • VMware achieved this role reversal through a combination of PCoIP protocol optimization and adjustments to the default quality settings made in the latest release
  • No attempt was made to evaluate the visual display quality or responsiveness of the user experience
  • All bandwidth and graphics policies were left “as is” from the preexisting installations of both XenDesktop and View
  • In a production scenario, XenDesktop (ICA) traffic across the WAN would be reduced significantly by bandwidth optimization and compression. Horizon (PCoIP) is mostly UDP-based and cannot be optimized or compressed, only prioritized.

In Parting

We were surprised and excited by many of the findings in this portion of the shootout. The results ran contrary both to our expectations and to the outcomes of historical tests performed by our team and other groups comparing Horizon (View) and XenDesktop. We believe it is a positive outcome for everyone when there is serious competition in this space, and any time we learn something new, it is time very well spent.

Next time we will move on to the hyper-converged infrastructure comparison between Nutanix and Dell’s EVO:RAIL.


Jamie Engelhard, CTO – Helient Systems LLC – Jamie is an industry-leading systems architect, business analyst, and hands-on systems engineer. As one of the founding partners of Helient Systems, Jamie brings an immense range of IT knowledge and focuses on several key areas such as architecting fully redundant Virtual Desktop Infrastructure solutions, application integration and management, Document Management Systems, Microsoft networking, load testing, and end-user experience optimization. Jamie has provided technical, sales, and operational leadership to several consulting firms and scores of clients over his 20-year career in IT. If you have any questions or need more information, please contact jengelhard@helient.com.