Virtual desktop hardware has come a long way in recent years: CPU cores are denser, memory is cheaper, and storage can now respond within milliseconds. The one piece of hardware that has been lagging behind for Horizon View is graphics. Don't get me wrong, great progress has been made since View 5.x now that NVidia has created dedicated GPUs, and this has changed the way we deploy Horizon View. These are the options you have when deploying 3D in your Horizon View environment:
Software 3D Render (Soft 3D)
Soft 3D provides improved graphics performance for users without requiring any heavy lifting or hardware-based adapters. It was introduced in VMware Horizon View 5.0 and is ideal for the task worker.
Virtual Shared Graphics Acceleration (vSGA)
vSGA was introduced with Horizon View 5.2 and provides a significant enhancement over Soft 3D for the everyday office worker. It can consolidate a high number of users onto a single GPU when those users only need occasional 3D, which again makes it ideal for many task workers. I think of this method the same way a CPU is carved up into vCPUs on a host.
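To make the vCPU analogy concrete, the renderer is a per-VM setting on the virtual video card. Below is a minimal sketch using pyVmomi (the open-source Python SDK for the vSphere API) that flips a VM between Soft 3D ("software") and vSGA ("hardware"); the vCenter address, credentials, and VRAM size are placeholders, not values from any particular environment.

```python
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

# Placeholder vCenter and credentials; substitute your own.
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local", pwd="changeme",
                  sslContext=ssl._create_unverified_context())

def set_3d_renderer(vm, renderer="hardware", vram_mb=128):
    """renderer: 'software' = Soft 3D, 'hardware' = vSGA, 'automatic' = either."""
    # Locate the VM's virtual video card among its devices.
    video = next(d for d in vm.config.hardware.device
                 if isinstance(d, vim.vm.device.VirtualMachineVideoCard))
    video.enable3DSupport = True
    video.use3dRenderer = renderer
    video.videoRamSizeInKB = vram_mb * 1024
    edit = vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.edit,
        device=video)
    # The VM should be powered off before changing its video card.
    vm.ReconfigVM_Task(vim.vm.ConfigSpec(deviceChange=[edit]))
```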
Virtual Direct Graphics Acceleration (vDGA)
vDGA was released with Horizon View 5.3, and this method is different in that the hypervisor passes a GPU through to a guest VM directly. Like most devices passed through in ESXi, it is dedicated to that one VM. The virtual machine sees the GPU as a locally attached graphics card and requires the Windows NVidia drivers, but no special driver is needed in the hypervisor. This method gives the best performance but also limits the number of users per card.
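Since a passthrough GPU is just a PCI device handed to one VM, the first step of a vDGA rollout is finding which devices on each host are passthrough-capable. Here is a sketch reusing the `si` connection from the earlier snippet; after enabling passthrough for a device, the host still needs a reboot before the GPU can be attached to a VM.

```python
from pyVmomi import vim

def passthrough_capable_devices(host):
    # Index the host's PCI inventory by device id for readable output.
    pci = {d.id: d for d in host.hardware.pciDevice}
    for info in host.config.pciPassthruInfo:
        if info.passthruCapable and info.id in pci:
            dev = pci[info.id]
            state = "enabled" if info.passthruEnabled else "disabled"
            print(host.name, info.id, dev.vendorName, dev.deviceName, state)

# Walk every host in the inventory using the "si" connection from above.
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    passthrough_capable_devices(host)
```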
Virtual Graphic Processing Unit (vGPU)
vGPU is the best of both worlds for graphics acceleration in Horizon View. This technology gives multiple users direct access to the GPU cores and memory on the NVidia GRID card, and each virtual desktop is assigned a profile based on its use case. vGPU uses the native graphics drivers provided by NVidia, which give the virtual desktop the OpenGL and DirectX graphics APIs.
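From vSphere 6.0 onward the SDK models a vGPU as a PCI passthrough device with a special vGPU backing, so profiles can be assigned programmatically. A hedged sketch: the profile name `grid_m60-2q` is just an example of NVidia's profile naming, and `vm` is assumed to be a VM object looked up with the connection above.

```python
from pyVmomi import vim

def assign_vgpu(vm, profile="grid_m60-2q"):
    # A vGPU is a PCI passthrough device whose backing names a GRID profile.
    dev = vim.vm.device.VirtualPCIPassthrough(
        backing=vim.vm.device.VirtualPCIPassthrough.VgpuBackingInfo(
            vgpu=profile))
    add = vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
        device=dev)
    # The GPU does DMA into guest RAM, so the VM's memory must be
    # fully reserved, just as with vDGA.
    spec = vim.vm.ConfigSpec(deviceChange=[add],
                             memoryReservationLockedToMax=True)
    vm.ReconfigVM_Task(spec)
```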
When NVidia first brought 3D acceleration to VDI, they entered the market with the GRID K1 and GRID K2 adapters; now NVidia has released the new GRID 2.0 Tesla M60/M6 GPUs.
| | GRID K1 | GRID K2 | Tesla M6 | Tesla M60 |
|---|---|---|---|---|
| GPUs (CUDA cores) | 4 (4×192) | 2 (2×1536) | 1 (1536) | 2 (2×2048) |
| VRAM | 4×4GB | 2×4GB | 1×8GB | 2×8GB |
| vDGA users | 4 | 2 | 1 | 2 |
| GRID generation | 1.0 | 1.0 | 2.0 | 2.0 |
| GRID 1GB profile users | 16 | 8 | 8 | 16 |
| GRID 2GB profile users | 8 | 4 | 4 | 8 |
| GRID 4GB profile users | 4 | 2 | 2 | 4 |
| GRID 8GB profile users | - | - | 1 | 2 |
The Tesla comes in two form factors: the M6 is a mezzanine card intended for blade servers and carries a single GPU, while the M60 is a full PCIe card for traditional rack servers and carries a pair of GPUs.
So what's the difference?
As you can see in the table, the 8GB profile can support one or two users depending on the card. That is a huge profile, and while I am sure there is a use case for it, I don't see one in most of the VDI deployments I am doing. This profile also exposes NVidia CUDA and OpenCL, which can be used for compute acceleration; it will be interesting to see how that plays out in the VDI arena.
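The user counts in the table all follow from one rule: a vGPU profile carves up a single physical GPU's frame buffer and cannot span GPUs on the same card. A quick Python sanity check that reproduces the profile rows (a result of 0 corresponds to a dash in the table):

```python
# Users per card = GPUs per card * (VRAM per GPU // profile size).
cards = {  # (GPUs per card, GB of VRAM per GPU)
    "GRID K1": (4, 4),
    "GRID K2": (2, 4),
    "Tesla M6": (1, 8),
    "Tesla M60": (2, 8),
}
for profile_gb in (1, 2, 4, 8):
    row = {name: gpus * (vram // profile_gb)
           for name, (gpus, vram) in cards.items()}
    print(f"{profile_gb}GB profile:", row)
```

This is also why the 8GB profile lands at one user on the M6 and two on the M60: each of those profiles consumes an entire 8GB GPU.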