jmbrinkman

TEC 2011 Europe Frankfurt: Project Virtual Reality Check

In Citrix, The Experts Conference Europe, Virtualization on October 24, 2011 at 20:37

I was lucky enough to be able to attend the Experts Conference Europe 2011 in Frankfurt last week. In due time all the slide decks and transcripts will hit the web, so I'll refrain from delayed live blogging about all of the sessions. However, there was one session (or actually two, since the session was split into two parts, though considering the content it could easily have spanned three) whose topic and presentation really interested me.

The session in question was Project Virtual Reality Check, and its speaker was Jeroen van der Kamp, CTO of Login Consultants. Project Virtual Reality Check is a joint venture between two Dutch companies, PQR and Login Consultants. Its objective is to answer several questions concerning the performance of virtualized Presentation Virtualization (PV) and Desktop Virtualization (DV) environments using different hypervisors, hardware and PV/DV technologies.

In order to find those answers they have developed a standard set of benchmarks, which they use to find out what the limits are in terms of session (in PV) or guest (in DV) density. All major players in both the PV/Terminal Services and the DV/VDI markets are being tested: Hyper-V vs. vSphere vs. Xen, XenDesktop vs. VMware View vs. vWorkspace, and so on.

Now the first reason I attended this session was that I'm currently looking into several technologies that deal with remote offices and remote working. Traditionally, presentation virtualization or VPN have been the two obvious choices for offering users a way to work from home or from a small office. With the advent of VDI, or the rising demands of power users (I'm not getting into the discussion of which came first), and the introduction of platforms such as Citrix XenApp/XenDesktop and vWorkspace, where you can have the best of both worlds, those choices aren't that obvious anymore.

In the world of desktop or client connectivity in general you aren't working with IOPS, CPU ready times or consolidation ratios. You are working with people (or, as "us" IT people tend to call them, "users"). People with subjective preferences, perceptions and presuppositions. The first you don't want to fix, the second you can't fix, and the last will take time, effort and results. So if you are designing such an infrastructure you want to know exactly if, how and why certain design decisions will influence performance, because you will always be juggling client demands (media content, choice and personalization) against limiting factors (bandwidth, latency, cost).

And that is why I think that having independent, falsifiable, full-system benchmarks is so important. And that's exactly what VRC provides: all the specs and "payloads" are known variables, and so are the benchmarking tools. Of course, as their own disclaimer states, "All Project VRC tests are performed in a pre-configured lab environment", so these are not necessarily real-life results. But the results will tell you what each hypervisor will do when pushed to its extreme limit. And it's precisely that limit, even though we all prefer to call it optimal utilization, that was one of the main reasons to start virtualizing workloads in the first place.

Of course, all vendors also supply us with loads of performance information, comparisons and analyses. And some even do a good job. But most of the time the technical sales talk is even worse than the "normal" sales talk, because it tries to claim legitimacy through statistics. As Brian Madden pointed out during the virtualization keynote: nothing is easier than lying with numbers.

A side effect of pushing a system to its limit is that you are able to directly identify, test and adjust best practices for each platform. So instead of compiling best practices based on problems and solutions encountered in the field, you get a great overview of the various best practices and their actual effect on the number of guests or sessions a piece of hardware can host.

Jeroen van der Kamp did a terrific job talking us through the results of each of the project's phases. Two things that interested me were the fact that in some cases Hyper-V had the upper hand when compared with vSphere and Xen, and the preliminary results of the antivirus tests, which showed that in a VDI environment offloading actually hurt performance rather than improving it. Quite the contrary of what was claimed in a Tolly report sponsored by Trend Micro…