Virtual Desktop Platform Shootout, Part 4 of 4

Other Related Blog Posts: Part 1 of 4 | Part 2 of 4 | Part 3 of 4

Author:  Jamie Engelhard, CTO
Contributor:  Michael Rocchino, IT Support Specialist

Hopefully, you are returning to or continuing this blog series after reading parts 1, 2, and 3. If so, you have already read all the background info and are eager to jump right into the load testing results for the hyper-converged appliances we tested. If not, you may want to consider starting at the beginning so you know how and what we tested. Without further ado, let’s get down to the data!

Just as we did in part 3 with VMware Horizon and Citrix XenDesktop, we will compare the results produced by Login VSI using identical testing workloads and session launch timing across two different products. However, in this case we are using the same VDI software solution (VMware Horizon) and evaluating the results on two different hyper-converged hardware platforms. The two products being compared are the Nutanix NX-3460 and the Dell EVO:RAIL appliance for virtual desktops.

Hardware Specifications

We introduced the Nutanix NX-3460 in part 2, so now let’s compare it to the EVO:RAIL appliance from Dell. Below are the hardware specs for a single node in each appliance:

Category     Component                 Nutanix NX-3060     Dell EVO:RAIL
CPU          Processor                 Intel E5-2680 v2    Intel E5-2620 v2
             Cores per CPU             10                  6
             CPUs Installed            2                   2
             Base Clock Speed          2.8 GHz             2.1 GHz
             Cache                     25 MB               15 MB
             Max Memory Speed          1866 MHz            1600 MHz
             Max Memory Bandwidth      59.7 GB/s           51.2 GB/s
Memory       Module Size & Type        16 GB DDR3          16 GB DDR3
             Modules                   16                  12
             Module Speed              1866 MHz            1600 MHz
Networking   10 Gb Interface Qty       2                   2
             1 Gb Interface Qty        2                   0
             LOM Interface             Yes                 Yes
Disk         SSD Drives                2 x 400 GB          1 x 480 GB
             SAS HDD Drives            0                   4 (3 x 1.2 TB Data, 1 x 300 GB Boot)
             SATA HDD Drives           4 x 1 TB            0
             Total Hot Storage Tier    768 GB              480 GB
             Total Cold Storage Tier   4 TB                3.6 TB

Each appliance contains 4 nodes; below are the aggregated specs for each device:

Category     Component                 Nutanix NX-3460     Dell EVO:RAIL
CPU          Total Core Count          80                  48
Memory       Total RAM                 1024 GB             768 GB
             Max RAM (with Upgrade)    2048 GB             N/A
Disk         Total Hot Storage Tier    3 TB                1.9 TB
             Total Cold Storage Tier   16 TB               14.4 TB
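
For reference, the appliance-level numbers are simply the per-node figures multiplied by the four nodes in each chassis. Here is a minimal sketch of that arithmetic, with the per-node values taken from the table above and RAM computed as modules x module size:

```python
# A quick sanity check: appliance totals are just the per-node specs times four nodes.
NODES_PER_APPLIANCE = 4

# Per-node values from the spec table above (RAM = modules x module size).
nutanix_node = {"cores": 20, "ram_gb": 16 * 16, "hot_gb": 768, "cold_tb": 4.0}
evorail_node = {"cores": 12, "ram_gb": 12 * 16, "hot_gb": 480, "cold_tb": 3.6}

def appliance_totals(node_specs, nodes=NODES_PER_APPLIANCE):
    """Scale per-node resources up to the full 4-node appliance."""
    return {key: value * nodes for key, value in node_specs.items()}

print(appliance_totals(nutanix_node))  # 80 cores, 1024 GB RAM, 3072 GB hot, 16 TB cold
print(appliance_totals(evorail_node))  # 48 cores,  768 GB RAM, 1920 GB hot, 14.4 TB cold
```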

As you can see, from a pure hardware standpoint, the Nutanix unit has a substantial advantage in every category. So, why didn’t we test a more powerful EVO:RAIL device? The answer is that EVO:RAIL is not available in a more powerful configuration, no matter which OEM provides it. Processor, memory, disk and all other core components are governed by the EVO:RAIL specifications. At the time of the evaluation, this was the only configuration available for VDI workloads.

So, using the equipment provided by each vendor, let’s see how we fared on the Login VSI density test.

Login VSI Density Testing

We used Login VSI to determine VSImax for both platforms. In case you’re not familiar with Login VSI, refer back to part 3 of this blog series for an explanation of the test parameters and the metrics below.

Testing Parameters

Below are the parameters for the two tests. Because we expected the EVO:RAIL to be able to handle fewer sessions based on the specs, we started with a number that we thought was easily achievable and planned to run additional rounds of testing with gradually higher session counts. Launch interval and workload were kept identical.

Setting          Nutanix Test        EVO:RAIL Test
Sessions         332                 200
Launch Time      33 Minutes          20 Minutes
Launch Interval  6 Seconds           6 Seconds
Workload         Knowledge Worker    Knowledge Worker
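
Note that the launch time in each test follows directly from the session count and the common 6-second launch interval; a quick sketch of that arithmetic:

```python
# The launch window is simply session count x launch interval.
def launch_window_minutes(sessions, interval_seconds=6):
    """Time needed to launch every session at a fixed interval, in minutes."""
    return sessions * interval_seconds / 60

print(launch_window_minutes(332))  # 33.2 -> roughly 33 minutes (Nutanix test)
print(launch_window_minutes(200))  # 20.0 -> 20 minutes (EVO:RAIL test)
```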

Login VSI Summary Results

Below are the results – as it turned out, we did not need to run additional rounds of testing with EVO:RAIL, because we achieved VSImax on our very first test.

Metric               Nutanix    EVO:RAIL    Outcome
Login VSI Baseline   788 ms     974 ms      Nutanix is 24% faster
Login VSI Threshold  1788 ms    1975 ms     Nutanix is 10% faster
Login VSI VSImax     311        180         Nutanix is 73% higher
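
For clarity, the percentages in the Outcome column are each difference expressed relative to the smaller of the two values; a quick sketch of the arithmetic:

```python
# How the Outcome percentages above are derived (difference relative to the smaller value).
def pct_diff(a, b):
    """Percentage difference between two values, relative to the smaller one."""
    low, high = sorted((a, b))
    return round((high - low) / low * 100)

print(pct_diff(788, 974))    # 24 -> Nutanix baseline is ~24% faster
print(pct_diff(1788, 1975))  # 10 -> Nutanix threshold is ~10% faster
print(pct_diff(180, 311))    # 73 -> Nutanix VSImax is ~73% higher
```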

VSImax Results

Login VSI VSImax graphs for each solution are included below. The VSImax is indicated by the red “X”.

Nutanix

[Figure 1: Nutanix VSImax results]

EVO:RAIL

[Figure 2: EVO:RAIL VSImax results]

VSI Index Comparison (Nutanix vs. EVO:RAIL)

Below is a comparison of the Nutanix and EVO:RAIL VSI Index over time as sessions are added. This graph captures the relative performance of the solutions perfectly. You can see that Nutanix (in green) starts with a better (lower) baseline score and the response time increases in a predictable and nearly linear fashion until we get above 300 sessions. EVO:RAIL has a slower baseline response time, and while the performance curve is nearly linear for the first 150 sessions, it is at a much steeper slope than the Nutanix curve. The two performance slopes run parallel in the very beginning, but they start to skew noticeably in favor of Nutanix after 25 sessions or so. After reaching 150 sessions, the response time on EVO:RAIL takes a dramatic turn upward and hits the VSImax quickly.

[Figure 3: VSI Index comparison, Nutanix vs. EVO:RAIL]
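
One way to quantify the "much steeper slope" described above is to fit a straight line through the early, near-linear portion of each VSI Index curve. The sketch below uses hypothetical placeholder samples purely for illustration (they are not our measured values); in practice you would export the per-session response times from Login VSI and feed those in:

```python
# Sketch: quantify the early-phase slope of a VSI Index curve (ms added per session).
def slope_ms_per_session(sessions, response_ms):
    """Least-squares slope of response time versus active session count."""
    n = len(sessions)
    mean_x = sum(sessions) / n
    mean_y = sum(response_ms) / n
    numerator = sum((x - mean_x) * (y - mean_y) for x, y in zip(sessions, response_ms))
    denominator = sum((x - mean_x) ** 2 for x in sessions)
    return numerator / denominator

# Hypothetical early-phase samples (first ~150 sessions), for illustration only.
sessions   = [25, 50, 75, 100, 125, 150]
nutanix_ms = [800, 830, 860, 890, 920, 950]
evorail_ms = [1000, 1080, 1160, 1240, 1320, 1400]

print(slope_ms_per_session(sessions, nutanix_ms))  # ~1.2 ms per added session
print(slope_ms_per_session(sessions, evorail_ms))  # ~3.2 ms per added session
```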

Hosting Infrastructure Key Performance Indicators (KPIs) during Login VSI Tests

Below are the key measurements of system utilization during the Login VSI tests for each platform. The first graph is an aggregate across the entire cluster, with the individual hosts/nodes following it.

Nutanix

[Figure 4: Nutanix hosting infrastructure KPIs (cluster aggregate)]

[Figure 5: Nutanix hosting infrastructure KPIs (individual hosts)]

EVO:RAIL

[Figure 6: EVO:RAIL hosting infrastructure KPIs (cluster aggregate)]

[Figure 7: EVO:RAIL hosting infrastructure KPIs (individual hosts)]

Analysis

The KPI that is most telling, and that we believe explains the primary performance difference between the two solutions, is disk throughput and response time. Let’s zoom in and compare the two.

Disk Throughput and Latency

Nutanix

[Figure 8: Nutanix disk throughput and latency]

EVO:RAIL

[Figure 9: EVO:RAIL disk throughput and latency]

As you can see, EVO:RAIL was generating 500 to 1,000 times more disk throughput during the test, which resulted in significantly increased latency. Let’s also take a closer look at CPU.

CPU

Below are the CPU utilization graphs for both appliances throughout the test, with Nutanix at left.

[Figure 10: CPU utilization, Nutanix (left) and EVO:RAIL (right)]

What is noteworthy is that the EVO:RAIL appliance peaked at just over 50% of CPU, despite the fact that the processors are significantly less powerful than those in the Nutanix. So while we started out with the belief that EVO:RAIL would be hampered by its limited computing capacity, it appears that the substantially higher disk IO and latency were the true determining factors, creating a bottleneck long before the CPU. This indicates that the VSAN architecture does not scale nearly as well as the Nutanix Distributed File System (NDFS) under heavy IOPS conditions.

We should mention that it is a good thing that the Nutanix comes with those higher-performing processors, because the Nutanix Controller VMs (CVMs) are directly responsible for a significant portion of the CPU activity shown above. The CPU and RAM allocated to and used by the CVM on each node could probably host another 10 desktops if they were available for the desktop workload. The same software that is responsible for the superior scalability of NDFS requires significant resources to run, so this overhead needs to be considered when sizing the solution. Even with that caveat, the Nutanix density numbers and storage performance under significant load are superb.
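
To put that overhead in perspective, here is a rough back-of-the-envelope sketch. The CVM and per-desktop memory figures below are illustrative assumptions rather than the exact reservations from our build, so substitute the values from your own deployment:

```python
# Rough sizing sketch: desktops displaced by the Nutanix Controller VM (CVM) on one node.
# ASSUMPTIONS for illustration only -- actual CVM reservations vary by configuration.
NODE_RAM_GB    = 256  # per-node RAM from the spec table above
CVM_RAM_GB     = 20   # assumed CVM memory reservation
DESKTOP_RAM_GB = 2    # assumed per-desktop memory allocation

ram_left_for_desktops = NODE_RAM_GB - CVM_RAM_GB
desktops_displaced = CVM_RAM_GB // DESKTOP_RAM_GB

print(f"RAM available for desktops per node: {ram_left_for_desktops} GB")
print(f"CVM overhead is roughly {desktops_displaced} desktops' worth of RAM per node")
```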

Final Conclusions

We hope this series has been informative and helpful for anyone considering VDI software solutions or a hyper-converged platform for VDI or any other virtual workload. We learned a tremendous amount in completing the testing and analysis and we are very glad to be able to share it with you here. In the case of VMware Horizon we were pleasantly surprised to see the recent strides made in terms of bandwidth consumption and the unexpected density benefits over Citrix XenDesktop. As for Nutanix vs. EVO:RAIL, the testing reinforced our belief that Nutanix is the more flexible and higher performing SDS solution, and also revealed some specific performance bottlenecks that VMware needs to address within the VSAN architecture. VSAN 6.0 was released shortly after we completed our testing, so perhaps we will have to have a rematch soon!

Other Related Blog Posts: Part 1 of 4 | Part 2 of 4 | Part 3 of 4

Jamie Engelhard, CTO – Helient Systems LLC – Jamie is an industry-leading systems architect, business analyst, and hands-on systems engineer. As one of the founding partners of Helient Systems, Jamie brings an immense range of IT knowledge and focuses on several key areas such as architecting fully redundant Virtual Desktop Infrastructure solutions, application integration and management, Document Management Systems, Microsoft networking, load testing, and end-user experience optimization. Jamie has provided technical, sales, and operational leadership to several consulting firms and scores of clients over his 20-year career in IT. If you have any questions or need more information, please contact jengelhard@helient.com.