jmbrinkman

Archive for the ‘Virtualization’ Category

Quest acquires VKernel

In Quest, Virtualization, Vmware on November 17, 2011 at 21:43

VKernel, the virtualization capacity management company, is now a part of Quest – now I really should rewrite my posts on cloud management… (statement on VKernel's blog). Whether this will really "accelerate growth" for VKernel remains to be seen; however, as a firm believer in larger frameworks/ecosystems I applaud this addition to Quest's already impressive, though scattered, line-up of management tools.

The Virtualization Practice seems to share my opinion – and I agree with them that the structure of the acquisition (VKernel will remain a separate entity) might hinder the integration of VKernel into a Quest management framework. However, Microsoft and Opalis did something similar – and that seems to have turned out alright. I'm not sure yet where I stand in regards to the absolute necessity to, as the Virtualization Practice puts it, "..bubble all of the vSphere metrics up to three simple scores (Health, Efficiency and Risk)..", and I will get back to that some other time.

I did a mini-review of vScope Explorer not too long ago and am going to do one on vOperations as well. Maybe I'll test drive some of the Quest stuff too, in order to form a well-grounded opinion on the acquisition and Quest's position in the cloud management landscape.

Battle for Cloud City: Microsoft strikes back? Part II.

In Opalis, Operations Manager, SCOM 2012, SCVMM 2012, Service Manager, System Center, Virtualization, Vmware on November 16, 2011 at 22:28

Part I.

One of the biggest advantages of posting on a blog, compared with writing an official proposal or something similar, is that I get to ramble on about the things I feel are important. Or peculiar, alienating or just entertaining. Looking at private cloud management solutions in a more trivial way gives me the opportunity to talk about factors that might or might not matter to most, but do say something about how a product is perceived – a degree of brand value if you wish.

You might wonder where this will lead, considering the fact that the more serious part of this series started off with some dubious analogies – but don't worry, I actually intend to make a point here. This is my comparison:

I’ve conjured up five topics:

  • Names – If I have to explain stuff to my boss and I’ve taken the “cloud” and “virtual” hurdles, I want to have a nice set of abbreviations or an awe-inspiring product name to work with
  • Powershell – Very important. Maybe a bit overrated by some – a general sense of logic, a search engine and PowerGUI are all you need to keep you from flipping burgers (see the sketch after this list).
  • “Open” Standards – In what sense can each offering be accessed, extended and customized by both vendors and end-users?
  • Citrix, EMC – Alliances – The cloud and virtualization market seems rather peaceful, with what I perceive as a mutually beneficial status quo between Microsoft and Vmware on the hypervisor front, between Microsoft and Citrix on the SBC/VDI front, and Cisco and EMC working with both Microsoft and Vmware to tie everything together. However, if we are talking about clouds, unification and abstraction, a cloud management solution that provides more integration than the cliché “single-pane-of-glass” everyone seems to be selling might dictate the choice for a hypervisor the next time licenses expire…
  • Monitoring Sprawl? – We consolidated 200 servers onto 4 pieces of hardware but need 15 servers to monitor our environment…and we might feel unsure about hosting a monitoring solution on a platform that’s monitored by that solution…or which damn web interface do I need to do this…and of course – how can my VM status be green, my server object be in maintenance mode and my email stuck in my outbox?
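To back the Powershell point up with something concrete: a minimal PowerCLI sketch, assuming an existing Connect-VIServer session against your vCenter server, that answers a question you’d otherwise be clicking around for:

    # List every snapshot in the environment, oldest first - the kind of
    # report vCenter makes you dig for.
    Get-VM | Get-Snapshot | Sort-Object Created | Select-Object VM, Name, Created, SizeMB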

My conclusion? If you are going “Vendor A unless”, Microsoft scores best on the “trivial” side of things. If you go “best-of-breed”, vSphere and vCenter (with PowerCLI/CapacityIQ) are very strong at what they do.

Nota Bene:

The success of Powershell, pre-alpha stuff from EMC like Project Orion, and SMI-S show that there is a need for a universal API and framework for managing infrastructure. The sad part is that those initiatives are not new, and many technologies have fallen and entered the eternal cloud – or are still there, still used, but deemed unworthy by some (such as SNMP).

The question that remains is – who will bring balance to the Force: the chosen but fallen Anakin, or the Light side’s counteraction, Luke?

Mini-Review: Monitoring vSphere with SCVMM and SCOM 2012

In Powershell, SCOM 2012, SCVMM 2012, System Center, Virtualization, Vmware on November 7, 2011 at 23:03

Some time ago I posted my vSphere monitoring shoot-out. I recently had the time to install the RC of SCVMM 2012 and the beta of SCOM 2012. There are plenty of guides out there that describe how to get started with both products (SCOM 2012 beta in ten minutes, SCOM 2012 Beta step by step, SCVMM 2012 Survival Guide) so I won’t get into that too much. Some general remarks:

SCVMM

  • You need the Windows 7 AIK, which is only downloadable as an ISO or IMG. That annoyed me.
  • I used SQL 2008 R2 Express as a database – in hindsight it would have been better to use a full SQL trial and host both SCVMM’s and SCOM’s databases.
  • Besides that the install was quick and painless.

SCOM

  • Collation, Collation, Collation! Choose SQL_Latin1_General_CP1_CI_AS as your SQL collation, otherwise SCOM won’t find your SQL instance – and it will not tell you that you picked the wrong collation.
  • You need .NET 4.
  • I had some issues installing the SCOM agent on the SCVMM server. I got this error:

Log Name:      Application
Source:        MsiInstaller
Date:          4-11-2011 17:53:33
Event ID:      1013
Task Category: None
Level:         Error
Keywords:      Classic
User:          ****\****
Computer:      FQ.DN
Description:
Product: System Center Operations Manager 2012 Agent — Microsoft ESENT Keys are required to install this application. Please see the release notes for more information.

Apparently this is not a SCOM 2012 specific error but a more general SCOM error on Windows 2008 R2 boxes. Running msiexec from an elevated command prompt solved the problem.
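On a side note, going back to the collation gotcha above: since SCOM setup won’t warn you, it’s worth checking the collation of an existing instance yourself before pointing setup at it. A minimal sketch, assuming the SQL Server Powershell module (sqlps) is loaded, with ‘SQLSERVER01’ standing in for your own instance:

    # Returns the server-level collation; SCOM 2012 expects SQL_Latin1_General_CP1_CI_AS.
    Invoke-Sqlcmd -ServerInstance 'SQLSERVER01' -Query "SELECT SERVERPROPERTY('Collation') AS Collation"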

Adding vSphere to SCVMM

This part is pretty straightforward as well. Open the Virtual Machine Manager console, go to the Fabric pane and choose Add Resource > Vmware vCenter Server. Create a Run As account which has the required privileges (local admin on the vCenter server according to Technet). After you’ve added the vCenter server you need to add each resource cluster (or individual host) as well, in much the same way as you added the vCenter server. But since you’re already connected to vCenter you don’t have to enter cluster or host names – you can just select them in a browsing dialog.

Strangely enough I wasn’t able to retrieve and accept the certificate for any of my hosts using a domain account – which does have root-equivalent privileges on the hosts – so either the AD integration is flawed or I made a mistake configuring it. But when I used a second Run As account with the default vSphere root account, I was able to retrieve and accept the certificates.
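If you run into the same thing, it helps to rule out the credentials themselves before blaming SCVMM – a quick PowerCLI sanity check (server and host names are placeholders):

    # Connect with the same account you gave the Run As account and list the hosts.
    # If this fails, the problem is the account, not SCVMM.
    Connect-VIServer -Server vcenter01.domain.local -Credential (Get-Credential)
    Get-VMHost | Select-Object Name, ConnectionState, Version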

After that I was able to view all my hosts and vm’s in SCVMM. The same goes for templates and host networking. SCVMM even sees my dvSwitches and shows each as one entity – but the same goes for my vSwitches…which is not really what I would like to see. Portgroups aren’t shown in the networking pane – but I was able to find them in the vm guest properties. I did a quick test to see if I could actually manage stuff – and I could, but for now I’m more interested in monitoring vSphere, so I’ll get down to managing vSphere some other time.

Connecting SCVMM to SCOM

I followed this great post on the SCVMM blog to connect SCVMM to SCOM. Most notable improvement over the previous versions: no need to install the VMM console on the SCOM server. However, you still need to install the SCOMsole on the VMM server. Oh, and creating the connection is now a simple wizard in the VMM console :). I had some issues with not being able to search the online SCOM catalog, so I needed to download the prerequisite MPs by hand.

Once I got that sorted out I completed the wizard and the connection was made.

And? Has it gotten any better?

Yes. Because vSphere and vCenter are represented as just vSphere and vCenter in both SCVMM and SCOM, instead of weird vm’s on a mutated Hyper-V server, the visibility and navigation are much better. But my SCOMsole immediately got filled up with alerts telling me my vm’s didn’t have VSG installed – and because everything is discovered through your VMM server (which it does still seem to see as a Hyper-V server), it started complaining about the fact that I had more than 384 vm’s on a host.

Alerts are also a lot quicker. Views are a bit poor – especially when you consider that the way my vSphere datacenter hierarchy is displayed in SCVMM is pretty good. The fact that SCOM and SCVMM will allow me to view a diagram of a service as defined in SCVMM looks really promising, but I haven’t tested that yet. If you put a host into maintenance mode in SCVMM, its status is automatically propagated to SCOM. There is still no link between the vm as an instance running on vSphere and the Windows computer object in SCOM – that’s a real shame.

There isn’t a lot of Vmware specific stuff there either. I guess that remains, as MS likes to call it, a partner opportunity – or something you could develop yourself using vCenter and System Center’s common denominator: Powershell. But I believe even that might be less of a challenge than before because of the improved SNMP support in SCOM 2012 (so you can just use that in addition to the information exposed by vCenter). Still, the biggest improvement seems to be on the managing side rather than on the monitoring side – which makes taking the monitoring shortcomings for granted much more plausible than before.
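For the do-it-yourself route, the vCenter side is simple enough – a minimal PowerCLI sketch (server and host names are placeholders) that pulls exactly the kind of vSphere specific metric the integration doesn’t surface:

    # Realtime CPU ready figures for one host - the metric SCVMM/SCOM won't show you.
    Connect-VIServer -Server vcenter01.domain.local
    Get-Stat -Entity (Get-VMHost esx01.domain.local) -Stat cpu.ready.summation -Realtime -MaxSamples 10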

Battle for Cloud City: Microsoft strikes back? Part I.

In Opalis, Operations Manager, Service Manager, System Center, Virtualization, Vmware on November 7, 2011 at 10:57

A long, long time ago in a galaxy far away business thrived on the planet of Bespin. An almost unlimited source of revenue – clouds – secured the quiet life of Cloud City’s inhabitants 🙂

But those days are gone and The Empire is attempting to take control of the clouds with its hosts of Hyper-V fighters and the SCVDMM (System Central Virtual Destruction and Mayhem Manager) aka “Death Star”.

A day after the announcement of the GA of Vmware’s vCenter Configuration Manager, Vmware’s vOperations Suite and Microsoft’s System Center suite are facing off in their battle for the private cloud. Of course there are other vendors that provide similar management suites – but because both suites are directly linked with each vendor’s own hypervisor layer, I think both will be an obvious choice for customers. Almost a year ago I already voiced my views on why I think Microsoft might have an advantage here – but in this post I want to take a brief look at both suites (and related products from both vendors) to see what areas of private cloud management they cover.

The term suite implies a set of tools built upon a central framework and using a single data set – however, each suite consists of several essentially different products that have been brought together with varying levels of integration. This is because of the different roots of each product, but also because each product is built to be used separately as well as in combination with the rest of the suite. This, and the fact that both suites are able to connect to other system management software as well, means that if a feature is missing from the suite you might be able to integrate another product with either suite just as well. Both suites have links with the EMC Ionix family, for instance.

I’m going to do that by comparing each offering in 3 different categories:

  • Configuration and Monitoring: the five infrastructure layers
  • Trivia 😉
  • Management and additional features

I’ve compiled a small table for each category highlighting 4 or 5 components that I believe make up that category – each category will get its own post.

This is in no way a complete or even refined comparison; it’s also a comparison based on documented features and aspects of both products. However, I do intend to test and blog about the two suites extensively in the near future.

When I mention a product I am talking about its most recent version – unless stated otherwise. Most of the System Center 2012 stuff is still beta or RC; some might say that makes this comparison unfair – on both sides. But I think the fact that Microsoft might lack some features because the product isn’t finished is nullified by the fact that they don’t have to provide the quality and stability needed for a released product. And you could make the same argument the other way around.

C&M: The five infrastructure layers

First Star Wars and now this category that sounds like a Kung Fu movie…

In this part I want to look at which part of your “private cloud” infrastructure each suite can manage, configure and monitor. The layers that I have defined here are:

  • Storage
  • Network
  • Hypervisor
  • Guests
  • Applications

This leads to the following table (click to enlarge):


My conclusion: Microsoft is able to cover every layer with regard to monitoring, and most of them with regard to configuration/provisioning as well. Vmware is not. But if you can’t configure network devices from System Center and you need another application to do that, chances are that application will also be able to monitor those devices.

Nota Bene:

  • Service Manager and Orchestrator really add value because they are the applications that tie all the data from SCOM and SCCM together and make it possible to use that data to build an intelligent management infrastructure.
  • As mentioned in other blogs and sources – dynamic discovery, self-learning performance and capacity analysis are key features in managing a highly abstracted/virtualized infrastructure. Vmware sees this and seems to have given such features priority over more “classical” features.

Sources:

vCenter Operations Docs

vCenter Configuration Manager Docs

Nice blog post comparing Vmware with other systems management applications

SCOM 2007 R2: Monitoring vSphere Shoot Out

In Operations Manager, Virtualization on November 1, 2011 at 20:52

Update: I’ve done a mini-review on SCVMM/SCOM 2012 and vSphere monitoring

We are a Microsoft shop. And a Vmware shop. We use SCOM to monitor everything and vSphere to host all our servers. So you can imagine how crucially important it is for us to properly monitor vSphere. With SCOM. Of course Virtual Center does a great job in giving us basic information about our hypervisor environment and the state of our virtual machines. But with no information about the applications SCOM provides, and no real way to relate the two sets of data, we really needed a way to get that information into one system.

Of course, there are other monitoring solutions, both for vSphere and for Microsoft applications. But we want to take advantage of our investment in SCOM, and we firmly believe that SCOM is the best option to monitor a 99% Microsoft infrastructure.

We were not the first facing this challenge. Because a challenge it was. We did our best to look at as many options as we could and in the end made a choice based on both functionality and price.

In this post I want to give a short overview of the solutions we looked at and give my personal opinion on each of them.

The contenders

In no particular order:

  • Jalasoft’s Smart MP
  • QMX
  • SCVMM
  • Veeam

We also expressed some interest in a management pack created by Bridgeways, but they were very slow to respond to our request for an evaluation, and once we got a response the amount of information we had to provide in order to evaluate the pack was so huge we decided it was not worth the effort.

Small disclaimer: we really did our best to give each solution a fair shot; however, it is possible that additional configuration or tweaking would increase the performance or the quality of the data. On the other hand, we didn’t take into account how hard it was to actually get the solutions working – because the installation process (especially under Windows 2008) wasn’t always easy, though nothing we couldn’t handle.

Round 1: What do they monitor – and how?

All of the solutions work through vCenter, with the exception of QMX, which is able to monitor vSphere hosts directly through SNMP and SSH. I guess you could configure Jalasoft, or even SCOM itself, to treat each host as a generic SNMP device, or build your own sets of monitors and rules, but in general you will still need vCenter as a middle man to monitor your hosts.

None of them consists of just a management pack – they all need a service running on either a SCOM server or a separate server with access to SCOM. Jalasoft and QMX are frameworks – so it’s possible to monitor other devices as well, which makes it easier to digest that you need to add another component to your monitoring infrastructure – and SCVMM could also be used to monitor Hyper-V or to manage vSphere and Hyper-V.

Jalasoft’s Smart MP monitors just vCenter. Hosts are discovered as part of the vCenter server but aren’t represented as separate entities. SCVMM monitors vCenter, hosts and virtual machines, however it will not give you any vSphere specific data such as CPU ready times, memory swapping etc. During our tests a vSphere host failed and we had fixed the problem before SCVMM alerted us. QMX gives you an awful lot of options – it can monitor Vmware logs, syslogs on the ESX servers and esxtop data (my personal favourite), and also gives you the possibility to create custom filters on log files to trigger an alert if an entry matching the filter is logged. It is also aware of vCenter alerts and events, but I didn’t find any monitors or alerts relating to DRS or HA.

Veeam monitors just about everything that makes vSphere vSphere. A lot of work has been put into the knowledge in the alerts as well – and the alerting is really quick and accurate. Therefore Veeam wins this round.

Round 2: Pricing

vSphere is expensive – period. And since vCenter has its own monitoring capabilities, it could be hard to justify another large investment. As always, it’s hard to define an ROI on solutions that mitigate risks, if it is possible at all. QMX for vSphere is free. Extensions for other devices are not, and are generally somewhat more expensive than other solutions (for instance for networking devices) – but I’ll talk more about that in round three.

With Jalasoft you pay per device. If you have one vCenter server, you pay for one device. SCVMM is a part of the System Center Suite. If you have the proper agreement with Microsoft you get it for “free” once you’ve joined the dark side.

Veeam is closely aligned with vSphere – they even have (or at least had with vSphere 4.*) the same pricing model. And the price per socket is quite high. But you could ask yourself: if proper monitoring, performance analysis and trend based alerting can increase my consolidation ratio, I will be able to host more servers per physical host and need fewer sockets, fewer vSphere licenses and fewer Veeam licenses.

QMX is completely free – except for the OS license for the machine you host it on – so QMX wins this round.

Round 3: Vision, Tactics, Strategy..whatever

This round is about how the solution fits in a management or monitoring vision. So the outcome is going to be very subjective. But hey – when vendors talk about a journey to the cloud they are talking about just that – a vision, or even a paradigm if you want, about how to manage infrastructure to properly deliver services to users.

If you are virtualizing your infrastructure you are consolidating. So one thing you don’t want to do is introduce monitoring server sprawl. Despite the name, the current incarnation of the System Center Suite is not at all an organic whole. Still, using SCVMM makes sense, especially if you also use Hyper-V in your environment – but you would still need to check vCenter regularly as well, because otherwise you are going to miss crucial information about the state of your environment.

Jalasoft and QMX are frameworks. QMX also gives you the possibility to extend System Center Configuration Manager and has the broadest support for other non-Microsoft platforms and devices. Jalasoft is very network oriented but has great integration with another add-on to SCOM, Savision LiveMaps.

Veeam – as described in the previous rounds – is very vSphere oriented. It does vSphere, and it does it very well, but you will still need something of a framework next to Veeam and SCOM to monitor the other layers of your infrastructure, such as your SAN storage or your network.

I put my faith in the frameworks. And I think it’s inevitable that a solution like Veeam’s will be built by either Vmware themselves or one of the vendors that offer a monitoring framework at some point in the near future. This round goes to QMX because of the integration with SCCM and the support for just about any non-Windows platform or application out there.

So the winner is… and some final thoughts

I think QMX is the best option available today if you are looking for a solution that is very configurable, affordable and has enough promise for the future to justify investing time and money into making the framework part of your monitoring infrastructure. But….

  • There are other options – vKernel has quite a nice toolset and claims to connect to SCOM – I will be testing that soonish.
  • SCVMM 2012 is said to provide better vSphere integration and SCOM 2012 is said to have improved network device monitoring. I will look at those two in detail as well and report back with my findings.
  • You could build your own MP – you can get all the relevant data from vCenter using Powershell and SNMP gets and traps.
  • SCVMM 2008 has a nasty habit of setting custom properties on your virtual machines – but you can use Powershell (isn’t that ironic) to get rid of those properties – for more info: VCritical article.
  • Since Powershell and vSphere are so compatible, I’m really surprised that I haven’t found a solution based on just Powershell to link SCOM and vSphere together (see the sketch after this list for the general idea).
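To illustrate that last point – a hypothetical sketch of such a link: pull a metric with PowerCLI and hand it to SCOM the lazy way, through an event log entry that a simple SCOM event rule can alert on. The server name, event source, metric and threshold are all made up for the example:

    # One-time prerequisite: register the event source.
    # New-EventLog -LogName Application -Source 'vSphereBridge'
    Connect-VIServer -Server vcenter01.domain.local
    $samples = Get-Stat -Entity (Get-VM) -Stat mem.vmmemctl.average -Realtime -MaxSamples 1
    foreach ($sample in ($samples | Where-Object { $_.Value -gt 0 })) {
        # One warning event per ballooning VM; a SCOM event rule turns these into alerts.
        Write-EventLog -LogName Application -Source 'vSphereBridge' -EntryType Warning `
            -EventId 9001 -Message ("{0} is ballooning: {1} KB" -f $sample.Entity.Name, $sample.Value)
    }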

Mini review: VKernel vScope Explorer

In Virtualization on October 27, 2011 at 20:51

This is my first post in what might become a series 🙂 In these posts I want to give a short review of an interesting piece of software or hardware. Today’s victim is VKernel vScope Explorer.

I found out about this tool through a post on Eric Sloof’s blog and because I got a promotional email from VKernel (apparently I left my email address there for some reason 😉 ).

What does it do?

vScope Explorer is a tool that will visualize and analyze data about your vSphere or Hyper-V environment.

So what I expect it to do is check for configuration best practices and analyze host, vCenter/SCVMM and guest metrics in order to determine possible bottlenecks and inefficiencies. And of course – pretty pictures with lots of green (or red, depending on how hard they want to sell the paid complement – vOperations).

How does it do whatever it does?

vScope Explorer is a virtual appliance with a web interface. You download an OVF and deploy the appliance (a scripted alternative follows the list of requirements below). It has relatively high system requirements:

  • 4 vCPUs
  • 8 GB of memory
  • 64 GB of storage space
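If you’d rather script the deployment than click through the vSphere client, PowerCLI can import the OVF as well – a sketch with placeholder paths and names; pick a host that can spare the 4 vCPUs and 8 GB listed above:

    # Deploy the downloaded appliance OVF to a host and datastore of your choice.
    Connect-VIServer -Server vcenter01.domain.local
    Import-VApp -Source 'C:\Downloads\vScopeExplorer.ovf' -Name 'vScope' `
        -VMHost (Get-VMHost esx01.domain.local) -Datastore (Get-Datastore datastore01)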

The website says they have an instruction video – but I couldn’t find it. So I just fired it up, went through a small text-based setup to configure IP, DNS, NTP and HTTP proxy settings, and was presented with a login screen. (For those interested: the appliance runs SUSE Linux Enterprise Server 11 SP1.) Seeing that this is a web based tool, I decided to stop peeking around in the VM itself and opened the web interface. One note on the IP address – since the tool will connect to vCenter, you should ensure the vm can reach vCenter. And there is a user’s guide included in the download.

On the web interface (which runs on port 80), after accepting the agreement, I added our vCenter server and immediately ran into an error:

This gives us a hint that the product is indeed looking into performance metrics in great detail. Since this is just a test, I changed the logging level, it discovered my vCenter server and I finished the setup. I logged in using the default username and password and was then presented with a nice dialog telling me it would take approximately an hour (5 minutes per 10 vm’s) for the data to be collected. I decided not to add any alarms to vCenter or to install the client plugin at this time, btw.

In all honesty – it didn’t even take an hour until the collection was finished. Once it was done, the tool showed a status screen that defaulted to the VM performance “vScope”. You can then switch to the Host Performance vScope, the Capacity, the VM Efficiency or the Datastore Efficiency vScope.

Each object (a host, a VM or a datastore, grouped together by resource cluster) has a colour indicating its status (red, yellow or green). On mouse-over, or when you click the object, it will give you some details on why it has a certain status. A red or yellow status can be caused by an inefficient storage allocation, high memory or cpu utilization, or on a host level even a projected performance bottleneck or capacity problem, with an estimated amount of time until that bottleneck or problem will occur.

I had a quick look at the status of our environment and all the statuses of the objects seemed plausible. However, sometimes issues aren’t really issues – we know we have a lot of wasted space on our datastores – that’s because we need a certain amount of IOPS. There is no way to “override” these checks from the vScope interface. And as I said before – in order to properly solve the actual issues, something more elaborate such as the vOperations product will be necessary.

And what do I think about it?

I think this is a very nice piece of software – but it’s only a part of what should be a full virtualization management and monitoring solution. And I think VKernel would agree 😉

It was easy to install, easy to use and easy to interpret. And since you can connect to several vCenter servers (and SCVMM servers), you could provide a high-level “single pane of glass” overview that’s understandable for just about everyone. But the lack of customization features (and the abundance of red blocks caused by that limitation – no one wants too many red blocks…) makes me doubt whether vScope can be used as a “Manager Dashboard”.

One big plus – it’s very portable. You download the OVF, deploy it and you have a very nice overview of the general health of your environment or your customer’s.

TEC 2011 Europe Frankfurt: Project Virtual Reality Check

In Citrix, The Experts Conference Europe, Virtualization on October 24, 2011 at 20:37

I was lucky enough to be able to attend The Experts Conference Europe 2011 in Frankfurt last week. In due time all the slide decks and transcripts will hit the web, so I’ll refrain from delayed live blogging about all of the sessions. However, there was one session (or actually two – the session was split into two parts, but considering the content it could have easily spanned three sessions!) of which both the topic and the presentation really interested me.

The session in question was Project Virtual Reality Check, and its speaker was Jeroen van der Kamp, CTO of Login Consultants. Project Virtual Reality Check is a joint venture between two Dutch companies, PQR and Login Consultants. Its objective is to find the answers to several questions concerning the performance of virtualized Presentation Virtualization and Desktop Virtualization environments using different hypervisors, hardware and PV/DV technologies.

In order to find those answers they have developed a standard set of benchmarks which they use to find out what the limits are in terms of session (in PV) or guest (in DV) density. All major players in both the PV/Terminal Services and the DV/VDI markets are being tested – so it’s Hyper-V v. vSphere v. Xen, and XenDesktop v. Vmware View v. vWorkspace, etc.

Now, the first reason why I attended this session was that I’m currently looking into several technologies that deal with remote offices and remoting. Traditionally, presentation virtualization and VPN have been the two obvious choices to offer users a way to work from home or from a small office. With the advent of VDI, or the rising demands of power users – I’m not getting into the discussion of which came first – and the introduction of platforms such as Citrix XenApp/XenDesktop and vWorkspace, where you can have the best of both worlds, those choices aren’t that obvious anymore.

In a world of desktop, or client connectivity in general, you aren’t working with IOPS, CPU ready times or consolidation ratios. You are working with people (or, as “us” IT people tend to call them, “users”). People with subjective preferences, perception and presuppositions. The first you don’t want to fix, the second you can’t fix, and the last will take time, effort and results to change. So if you are designing such an infrastructure, you want to know exactly if, how and why certain design decisions will influence performance – because you will always be juggling client demands (media content, choice and personalization) and limiting factors (bandwidth, latency, cost).

And that is why I think having independent, falsifiable, full-system benchmarks is so important. And that’s exactly what VRC provides – all the specs and “payloads” are known variables, and so are the benchmarking tools. Of course, as their own disclaimer states: “All Project VRC tests are performed in a pre-configured lab environment” – so these are not necessarily real life results. But the results will tell you what each hypervisor will do when pushed to the extreme limit. And it’s just that limit – even though we all prefer to call it optimal utilization – that was one of the main reasons to start virtualizing workloads in the first place.

Of course all vendors also supply us with loads of performance information, comparisons and analyses. And some even do a good job. But most of the time the technical sales talk is even worse than the “normal” sales talk, because they try to claim legitimacy through statistics. As Brian Madden pointed out during the virtualization keynote – nothing is easier than lying with numbers.

A side effect of pushing a system to the limit is that you are able to directly identify, test and adjust best practices for each platform. So instead of compiling best practices based on problems and solutions in the field, you get a great overview of the various best practices and their actual effect on the ability to host more guests or sessions on a piece of hardware.

Jeroen van der Kamp did a terrific job talking us through the results of each of the project phases – one of the things that interested me was the fact that in some cases Hyper-V had the upper hand when compared with vSphere and Xen, and also the preliminary results of the antivirus tests, which showed that in a VDI environment offloading actually hurt performance instead of improving it. Quite the contrary of what was claimed in a Tolly report sponsored by Trend Micro…

Bone Machine

In Virtualization on November 29, 2010 at 20:59

The other day I was reading the Wikipedia entry on the Apple II. When discussing the Apple II Plus, the article mentioned a virtual machine which was used to run certain compilers by utilizing the language extension card. I’m not claiming some sort of discovery here – it was just a trace that made me think about virtualization.

Virtualization in the context of that entry means running something on hardware that was not designed for that job. It’s like cooking in your dishwasher. Related to this sort of virtualization is emulation, like a NES emulator on a Windows box – or the ugly duckling.

The difference between the two is mainly that the dishwasher gives immediate access to its resources and that the swan needs to use its OS to run its duck program and assign its resources accordingly.

If you take the main virtualization providers in the x86/x64 world – ESX, Xen and Hyper-V – and very roughly distribute those over the above mentioned archetypes, you’ll see that Hyper-V is the ugly duckling and that the other two are dishwashers.

Now let me ask you – why do we want to virtualize? There are different ways to put it, but in the end it comes down to this: we want to distribute and utilize our resources dynamically and in the best possible way (any resource – management is also a resource, as are power and space).

And virtualization gives us just that. But why do we need virtualization to reach these goals? Are the ability to migrate processes from one piece of hardware to another, the ability to run different kinds of processes on one piece of hardware, or the possibility to assign resources to and from processes intrinsic qualities of virtualization as we know it?

No.

To quote the engineer that was responsible for developing vMotion:
“A bunch of us at VMware came from academia where process migration was popular but never worked in any mainstream OS because it had too many external dependencies to take care of.” (Yellow Bricks). Virtualization was necessary because the original hosts of these processes weren’t able to empower us to reach the goals I mentioned earlier. And if we were talking about a switch OS not being able to host a document editing process, that would be no big deal – but that’s not representative of your everyday virtual host and guest.

And if we look at Novell using Xen to facilitate the transition from Netware to SUSE, we look directly into the nature of virtualization: a way to stitch up an ugly wound at the core of mainstream (read x86/x64 if you want) OS’s. Of course, from a consolidation perspective this is exactly what you need, but quite possibly not what you want if you consider the effect of keeping the stitches together.

With the rise of Vmware, many have seized the momentum to develop all sorts of functionality that hooks up to their virtual platform – adding to the discrepancy between what is possible on a bare metal server and what is possible on a virtual machine. All of that functionality has made our lives a lot easier – but it could have been developed for bare metal systems as well.

But the danger lies in the fact that we are so happy with the patch and all its bells and whistles that there is little incentive to fix the actual problem. Microsoft Windows is an extreme example of this: because it can’t provide what we need – Microsoft even promotes application role separation over all-in-one servers – it now includes Hyper-V to fulfill those needs. So instead of transforming their OS to adapt to the changing needs and requirements, Microsoft develops (some may say copies) its own workaround. Before Microsoft launched Hyper-V it used to complain about the overhead of ESX and the associated performance hit – but the way I see it, the real overhead is the redundant guest OS in which the applications or processes are encapsulated.

I work with virtualization every day and share the enthusiasm about all the new possibilities, perspectives and challenges – but I’m a computer geek. I enjoy complexity. And when I think about the development of application infrastructure in the years to come, typing and reading on my iPad – the complete opposite of a VM – I can’t help but wonder if we are really on the yellow brick road, and if we are, whether the Wizard will live up to our expectations.