jmbrinkman

Posts Tagged ‘Hyper-V’

Battle for Cloud City: Microsoft strikes back? Part I.

In Opalis, Operations Manager, Service Manager, System Center, Virtualization, Vmware on November 7, 2011 at 10:57

A long, long time ago in a galaxy far away, business thrived on the planet of Bespin. An almost unlimited source of revenue – clouds – secured the quiet life of Cloud City’s inhabitants 🙂

But those days are gone and The Empire is attempting to take control of the clouds with its hosts of Hyper-V fighters and the SCVDMM (System Central Virtual Destruction and Mayhem Manager) aka “Death Star”.

A day after the announcement of the GA of VMware’s vCenter Configuration Manager, VMware’s vOperations Suite and Microsoft’s System Center suite are facing off in their battle for the private cloud. Of course there are other vendors that provide similar management suites – but because both suites are directly linked to each vendor’s own hypervisor layer, I think both will be an obvious choice for customers. Almost a year ago I voiced my views on why I think Microsoft might have an advantage here – but in this post I want to take a brief look at both suites (and related products from both vendors) to see which areas of private cloud management they cover.

The term suite implies a set of tools built upon a central framework and using a single data set – however, each suite consists of several essentially different products that have been brought together with varying levels of integration. This is partly because of the different roots of each product, but also because each product is built to be used separately as well as in combination with the rest of the suite. That, and the fact that both suites can connect to other systems management software, means that if a feature is missing from either suite you might be able to integrate another product just as well. Both suites have links with the EMC Ionix family, for instance.

I’m going to do that by comparing each offering in 3 different categories:

  • Configuration and Monitoring: the five infrastructure layers
  • Trivia 😉
  • Management and additional features

I’ve compiled a small table for each category highlighting 4 or 5 components that I believe make up that category – each category will get its own post.

This is in no way a complete or even refined comparison, and it is a comparison based on the documented features and aspects of both products rather than hands-on testing – however, I do intend to test and blog about the two suites extensively in the near future.

When I mention a product I am talking about its most recent version – unless stated otherwise. Most of the System Center 2012 components are still beta or RC; some might say that makes this comparison unfair – on both sides. But I think the fact that Microsoft might lack some features because the product isn’t finished is nullified by the fact that they don’t yet have to provide the quality and stability needed for a released product. And you could make the same argument the other way around.

C&M: The five infrastructure layers

First Star Wars, and now a category that sounds like a kung fu movie…

In this part I want to look at which part of your “private cloud” infrastructure each suite can manage, configure and monitor. The layers that I have defined here are:

  • Storage
  • Network
  • Hypervisor
  • Guests
  • Applications

This leads to the following table (click to enlarge):


My conclusion: Microsoft is able to cover every layer with regard to monitoring, and most with regard to configuration/provisioning; VMware is not. But if you can’t configure network devices from System Center and you need another application to do that, chances are that application will also be able to monitor those devices.

Nota Bene:

  • Service Manager and Orchestrator really add value because they are the applications that tie all the data from SCOM and SCCM together and make it possible to use that data to build an intelligent management infrastructure (see the sketch after this list).
  • As mentioned in other blogs and sources, dynamic discovery and self-learning performance and capacity analysis are key features in managing a highly abstracted/virtualized infrastructure. VMware sees this and seems to have given such features priority over more “classical” features.
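To make “tying the data together” a bit more concrete, here is a minimal, hypothetical sketch in plain Python. It does not use the actual SCOM, SCCM or Orchestrator APIs – every class, field and action name is invented for illustration – but it shows the kind of decision that only becomes possible once alert data and configuration data live in one place:

```python
# Hypothetical sketch: correlate a monitoring alert with configuration data
# before deciding on automated remediation. Data sources and field names are
# invented; real SCOM/SCCM/Orchestrator integration goes through their own
# connectors and runbook activities.

from dataclasses import dataclass


@dataclass
class Alert:            # roughly what a monitoring tool (SCOM-like) would raise
    host: str
    metric: str
    severity: str


@dataclass
class ConfigRecord:     # roughly what a configuration source (SCCM/CMDB-like) would hold
    host: str
    role: str           # e.g. "production" or "test"
    patch_window: str   # e.g. "sat-02:00"


def decide_action(alert: Alert, config: ConfigRecord) -> str:
    """Combine alert and configuration context into a single decision."""
    if alert.severity == "critical" and config.role == "production":
        return "open-incident-and-start-runbook"   # hand off to automation with a paper trail
    if alert.severity == "critical":
        return "start-runbook"                      # remediate non-production hosts directly
    return "log-only"


if __name__ == "__main__":
    alert = Alert(host="HV03", metric="cpu", severity="critical")
    config = ConfigRecord(host="HV03", role="production", patch_window="sat-02:00")
    print(decide_action(alert, config))             # -> open-incident-and-start-runbook
```

The point is not the code itself but the principle: the decision is only “intelligent” because both data sets are available together – which is exactly the role Service Manager’s CMDB and Orchestrator’s runbooks play in the suite.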

Sources:

vCenter Operations Docs

vCenter Configuration Manager Docs

Nice blog post comparing VMware with other systems management applications

Bone Machine

In Virtualization on November 29, 2010 at 20:59

The other day I was reading the Wikipedia entry on the Apple II. When discussing the Apple II Plus, the article mentioned a virtual machine that was used to run certain compilers by utilizing the language extension card. I’m not claiming some sort of discovery here – it was just a detail that got me thinking about virtualization.

Virtualization in the context of that entry means running something on hardware that was not designed for that job. It’s like cooking in your dishwasher. Related to this sort of virtualization is emulation, like an NES emulator on a Windows box – or the ugly duckling.

The difference between the two is mainly that the dishwasher gives immediate access to its resources and that the swan needs to use its OS to run its duck program and assign its resources accordingly.

If you take the main virtualization providers in the x86/x64 world – ESX, Xen and Hyper-V – and very roughly distribute them over the above-mentioned archetypes, you’ll see that Hyper-V is the ugly duckling and that the other two are dishwashers.

Now let me ask you – why do we virtualize? There are different ways to put it, but in the end it comes down to this: we want to distribute and utilize our resources dynamically and in the best possible way (any resource – management is a resource too, as are power and space).

And virtualization gives us just that. But why do we need virtualization to reach these goals? Are the ability to migrate processes from one piece of hardware to another, the ability to run different kinds of processes on one piece of hardware, or the possibility to assign resources to and from processes intrinsic qualities of virtualization as we know it?

No.

To quote the engineer who was responsible for developing vMotion:
“A bunch of us at VMware came from academia where process migration was popular but never worked in any mainstream OS because it had too many external dependencies to take care of.” (Yellow Bricks). Virtualization was necessary because the original hosts of these processes weren’t able to empower us to reach the goals I mentioned earlier. And if we were talking about a switch OS not being able to host a document editing process, that would be no big deal – but that’s not representative of your everyday virtual host and guest.

And if we look at Novell using Xen to facilitate the transition from NetWare to SUSE, we look directly into the nature of virtualization: a way to stitch up an ugly wound at the core of mainstream (read x86/x64 if you want) OSs. Of course, from a consolidation perspective this is exactly what you need, but quite possibly not what you want if you consider the effect of keeping the stitches in place.

With the rise of VMware, many have seized the momentum to develop all sorts of functionality that hooks into their virtual platform – adding to the discrepancy between what is possible on a bare-metal server and what is possible on a virtual machine. All of that functionality has made our lives a lot easier – but it could have been developed for bare-metal systems as well.

But the danger lies in the fact that we are so pleased with the patch and all its bells and whistles that there is little incentive to fix the actual problem. Microsoft Windows is an extreme example of this: because it can’t provide what we need – Microsoft even promotes application role separation over all-in-one servers – it now includes Hyper-V to fulfill those needs. So instead of transforming their OS to adapt to the changing needs and requirements, Microsoft develops (some may say copies) its own workaround. Before Microsoft launched Hyper-V it used to complain about the overhead of ESX and the associated performance hit – but the way I see it, the real overhead is the redundant guest OS in which the applications or processes are encapsulated.

I work with virtualization every day and share the enthusiasm about all the new possibilities, perspectives and challenges – but I’m a computer geek. I enjoy complexity. And when I think about the development of application infrastructure in the years to come, typing and reading on my iPad – the complete opposite of a VM – I can’t help but wonder if we are really on the yellow brick road and, if we are, whether the Wizard will live up to our expectations.

Lucy in the sky with diamonds

In Tech Ed on November 10, 2010 at 15:31

I’m writing this post, my first post on my first blog, from Tech Ed Europe in Berlin. In between sessions I’m going to try to provide some coverage of this event and present my view on what I’ve heard and seen here.

My visit to Tech Ed is part of the reason why I’ve jumped on the blogging bandwagon – I wanted to let my co-workers know how I’m doing and what I’m doing over here.

This year’s Tech Ed is all about clouds. I’ll skip all the obvious and not-so-obvious allusions to Berlin, walls, windows and clouds, but I do want to mention that Microsoft seems to be a little late to the cloud game (as I am to the blogging game, perhaps). However, because Microsoft’s ecosystem forms such a substantial part of the services that customers consume, it has an edge over competing unified management systems: it can fully manage this ecosystem, and manage it better.

The System Center suite (which to some extent also seems to include FIM) has been transformed from a vaguely integrated group of management, monitoring and reporting tools into a cloud management quilt, where the System Center products are the patches and the newly acquired Opalis automation software is the thread.

As I said before, Microsoft is late – VMware has been promoting the idea of the personal cloud for a while now, and great progress has been made to tightly integrate VMs, network, storage and in some cases even OS and software. But where it was lacking was native support for the Microsoft back-end infrastructure. If Microsoft can take the experience of the likes of Cisco, NetApp and EMC with cloud management on the network and storage level (which it has, judging from some of the demos I’ve seen here) and combine that with its knowledge of managing its own ecosystem, no Windows administrator will ever have to leave her or his office to see their clouds – they can just look through their windows.

This is the first in a series of posts on Microsoft Tech Ed; in the next posts I will focus on the details regarding System Center and Opalis.