jmbrinkman

Posts Tagged ‘vmware’

Battle for Cloud City: Microsoft strikes back? Part I.

In Opalis, Operations Manager, Service Manager, System Center, Virtualization, Vmware on November 7, 2011 at 10:57

If you like my content please do check out my new blog at thirdpartytools.net ! 

A long, long time ago in a galaxy far away business thrived on the planet of Bespin. An almost unlimited source of revenue – clouds – secured the quiet life of Cloud City’s inhabitants 🙂

But those days are gone and The Empire is attempting to take control of the clouds with its hosts of Hyper-V fighters and the SCVDMM (System Central Virtual Destruction and Mayhem Manager) aka “Death Star”.

A day after the announcement of the GA of VMware's vCenter Configuration Manager, VMware's vOperations Suite and Microsoft's System Center Suite are facing off in their battle for the private cloud. Of course there are other vendors that provide similar management suites – but because both suites are directly linked with each vendor's own hypervisor layer I think both will be an obvious choice for customers. Almost a year ago I voiced my views on why I think Microsoft might have an advantage here – but in this post I want to take a brief look at both suites (and related products from both vendors) to see what areas of private cloud management they cover.

The term suite implies a set of tools built upon a central framework and using a single data set – however each suite consists of several essentially different products that have been brought together with varying levels of integration. This is because of the different roots of each product, but also because each product is built to be used separately as well as in combination with the rest of its suite. This, and the fact that both suites can connect to other systems management software, means that if a feature is missing you may well be able to integrate another product with either suite just as well. Both suites have links with the EMC Ionix family, for instance.

I’m going to do that by comparing each offering in 3 different categories:

  • Configuration and Monitoring: the five infrastructure layers
  • Trivia 😉
  • Management and additional features

I’ve compiled a small table for each category highlighting 4 or 5 components that I believe make up that category – each category will get its own post.

This is in no way a complete or even refined comparison – it is based on documented features and aspects of both products rather than hands-on testing – but I do intend to test and blog about the two suites extensively in the near future.

When I mention a product I am talking about its most recent version – unless stated otherwise. Most of the System Center 2012 stuff is still beta or RC; some might say that makes this comparison unfair – on both sides. But I think the fact that Microsoft might lack some features because the product isn't finished is nullified by the fact that they don't yet have to provide the quality and stability needed for a released product. And you could make the same argument the other way around.

C&M: The five infrastructure layers

First Star Wars, and now a category that sounds like a Kung Fu movie…

In this part I want to look at which part of your “private cloud” infrastructure each suite can manage, configure and monitor. The layers that I have defined here are:

  • Storage
  • Network
  • Hypervisor
  • Guests
  • Applications

This leads to the following table (click to enlarge):


My conclusion: Microsoft is able to cover every layer with regard to monitoring, and most with regard to configuration/provisioning; VMware is not. But if you can't configure network devices from System Center and you need another application to do that, chances are that application will also be able to monitor those devices.

Nota Bene:

  • Service Manager and Orchestrator really add value because they are the applications that tie all the data from SCOM and SCCM together and make it possible to use that data to build an intelligent management infrastructure.
  • As mentioned in other blogs and sources, dynamic discovery, self-learning performance and capacity analysis are key features in managing a highly abstracted/virtualized infrastructure. VMware sees this and seems to have given such features priority over more “classical” features.

Sources:

vCenter Operations Docs

vCenter Configuration Manager Docs

Nice blog post comparing Vmware with other systems management applications

SCOM 2007 R2: Monitoring vSphere Shoot Out

In Operations Manager, Virtualization on November 1, 2011 at 20:52

If you like my content please do check out my new blog at thirdpartytools.net ! 

Update: I’ve done a mini-review on SCVMM/SCOM 2012 and vSphere monitoring

We are a Microsoft shop. And a VMware shop. We use SCOM to monitor everything and vSphere to host all our servers. So you can imagine how crucially important it is for us to properly monitor vSphere. With SCOM. Of course Virtual Center does a great job of giving us basic information about our hypervisor environment and the state of our virtual machines. But without the application-level information SCOM provides, and with no real way to relate the two sets of data, we really needed a way to get all that information into one system.

Of course, there are other monitoring solutions, both for vSphere and for Microsoft applications. But we want to take advantage of our investment in SCOM and we firmly believe that SCOM is the best option to monitor a 99% Microsoft infrastructure.

We were not the first facing this challenge. Because a challenge it was. We did our best to look at as many options as we could and in the end made a choice based on both functionality and price.

In this post I want to give a short overview of the solutions we looked at and give my personal opinion on each of them.

The contenders

In no particular order:

  • Jalasoft's Smart Management Pack
  • QMX for vSphere
  • System Center Virtual Machine Manager (SCVMM)
  • Veeam's management pack for VMware

We also expressed some interest in a management pack created by Bridgeways, but they were very slow to respond to our request for an evaluation, and once we got a response the amount of information we had to provide in order to evaluate the pack was so huge that we decided it was not worth the effort.

Small disclaimer: we really did our best to give each solution a fair shot; however, it is possible that additional configuration or tweaking would improve the performance or the quality of the data. On the other hand, we didn't take into account how hard it was to actually get the solutions working – the installation process (especially under Windows 2008) wasn't always easy, though nothing we couldn't handle.

Round 1: What do they monitor – and how?

All of the solutions work through vCenter, with the exception of QMX, which is able to monitor vSphere hosts directly through SNMP and SSH. I guess you could add the hosts to Jalasoft or even SCOM itself as generic SNMP devices, or build your own sets of monitors and rules, but in general you will still need vCenter as a middleman to monitor your hosts.

None of them consists of just a management pack – they all need a service running on either a SCOM server or a separate server with access to SCOM. Jalasoft and QMX are frameworks, so it's possible to monitor other devices as well, which makes it easier to digest that you need to add another component to your monitoring infrastructure – and SCVMM could also be used to monitor Hyper-V or to manage vSphere and Hyper-V.

Jalasoft's Smart MP monitors just vCenter. Hosts are discovered as part of the vCenter server but aren't represented as separate entities. SCVMM monitors vCenter, hosts and virtual machines; however, it will not give you any vSphere-specific data such as CPU ready times, memory swapping etc. During our tests a vSphere host failed and we had fixed the problem before SCVMM alerted us. QMX gives you an awful lot of options – it can monitor VMware logs, syslogs on the ESX servers and esxtop data (my personal favourite), and it also gives you the possibility to create custom filters on log files to trigger an alert if an entry matching the filter is logged. It is also aware of vCenter alerts and events, but I didn't find any monitors or alerts relating to DRS or HA.

Veeam monitors just about everything that makes vSphere vSphere. A lot of work has also been put into the knowledge that ships with the alerts – and the alerting is really quick and accurate. Therefore Veeam wins this round.

Round 2: Pricing

vSphere is expensive – period. And since vCenter has its own monitoring capabilities it could be hard to justify another large investment. As always, it's hard to define an ROI on solutions that mitigate risks, if it is possible at all. QMX for vSphere is free. Extensions for other devices are not, and are generally somewhat more expensive than other solutions (for instance for networking devices) – but I'll talk more about that in round three.

With Jalasoft you pay per device. If you have one vCenter server, you pay for one device. SCVMM is a part of the System Center Suite. If you have the proper agreement with Microsoft you get it for “free” once you've joined the dark side.

Veeam is so closely aligned with vSphere that they even have (or at least had with vSphere 4.*) the same pricing model. And the price per socket is quite high. But you could ask yourself: if proper monitoring, performance analysis and trend-based alerting can increase my consolidation ratio, I will be able to host more servers per physical host and need fewer sockets, fewer vSphere licenses and fewer Veeam licenses.
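
To make that concrete with purely made-up numbers: 200 VMs at 10 VMs per dual-socket host means 20 hosts and 40 licensed sockets; if better monitoring lets you safely run 12 VMs per host, you need only 17 hosts and 34 sockets – six sockets' worth of vSphere and Veeam licenses saved, which may well cover the monitoring investment.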

QMX is completely free – except for the OS license for the machine you host it on – so QMX wins this round.

Round 3: Vision, Tactics, Strategy… whatever

This round is about how each solution fits into a management or monitoring vision. So the outcome is going to be very subjective. But hey – when vendors talk about a journey to the cloud they are talking about just that: a vision, or even a paradigm if you want, about how to manage infrastructure to properly deliver services to users.

If you are virtualizing your infrastructure you are consolidating. So one thing you don't want to do is introduce monitoring server sprawl. Despite the name, the current incarnation of the System Center Suite is not at all an organic whole. Still, using SCVMM makes sense, especially if you also use Hyper-V in your environment – but you would still need to check vCenter regularly as well, because otherwise you are going to miss crucial information about the state of your environment.

Jalasoft and QMX are frameworks. QMX also gives you the possibility to extend System Center Configuration Manager and has the broadest support for other non-Microsoft platforms and devices. Jalasoft is very network oriented but has great integration with another add-on to SCOM, Savision LiveMaps.

Veeam – as described in the previous rounds – is very vSphere oriented. It does vSphere, it does it very well, but you will still need something of a framework next to Veeam and SCOM to monitor the other layers of your infrastructure such as your SAN storage or your network.

I put my faith in the frameworks. And I think it's inevitable that a solution like Veeam's will be built by either VMware themselves or one of the vendors that offer a monitoring framework at some point in the near future. This round goes to QMX because of the integration with SCCM and the support for just about any non-Windows platform or application out there.

So the winner is… and some final thoughts

I think QMX is the best option available today if you are looking for a solution that is very configurable, affordable and has enough promise for the future to justify investing time and money into making the framework part of your monitoring infrastructure. But….

  • There are other options – vKernel has quite a nice toolset and claims to connect to SCOM – I will be testing that soonish
  • SCVMM 2012 is said to provide better vSphere integration and SCOM 2012 is said to have improved network device monitoring. I will look at those two in detail as well and report back with my findings.
  • You could build your own MP – you can get all the relevant data from vCenter using PowerShell and SNMP gets and traps (see the sketch after this list)
  • SCVMM 2008 has a nasty habit of setting custom properties on your virtual machines – but you can use PowerShell (isn't that ironic) to get rid of those properties – for more info: VCritical article
  • Since PowerShell and vSphere are so compatible, I'm really surprised that I haven't found a solution based on just PowerShell to link SCOM and vSphere together.
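
To give an idea of how little plumbing that last point would take, here is a minimal PowerCLI sketch that pulls exactly the kind of vSphere-specific counters (CPU ready, memory swapping) SCVMM won't give you. The vCenter name is a placeholder and the SCOM hand-off is only hinted at in a comment – treat it as a starting point under those assumptions, not a finished management pack.

# Minimal PowerCLI sketch – hypothetical vCenter name, no error handling.
# Requires the VMware PowerCLI snap-in and an account with read access to vCenter.
Add-PSSnapin VMware.VimAutomation.Core -ErrorAction SilentlyContinue
Connect-VIServer -Server "vcenter01.example.local"

foreach ($vm in Get-VM) {
    # Aggregate (instance "") realtime samples for CPU ready and swapped memory
    $ready = Get-Stat -Entity $vm -Stat "cpu.ready.summation" -Realtime -MaxSamples 1 -Instance ""
    $swap  = Get-Stat -Entity $vm -Stat "mem.swapped.average" -Realtime -MaxSamples 1 -Instance ""
    # From here you could hand the values to SCOM, for instance via a script-based rule and a property bag.
    "{0}: CPU ready {1} ms, swapped memory {2} KB" -f $vm.Name, $ready.Value, $swap.Value
}

Disconnect-VIServer -Confirm:$false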

Bone Machine

In Virtualization on November 29, 2010 at 20:59

The other day I was reading the Wikipedia entry on the Apple II. When discussing the Apple II Plus, the article mentioned a virtual machine which was used to run certain compilers by utilizing the language extension card. I'm not claiming some sort of discovery here – it was just a trace that made me think about virtualization.

Virtualization in the context of that entry means running something on hardware that was not designed for that job. It's like cooking in your dishwasher. Related to this sort of virtualization is emulation, like an NES emulator on a Windows box – or the ugly duckling.

The difference between the two is mainly that the dishwasher gives immediate access to its resources and that the swan needs to use its OS to run its duck program and assign its resources accordingly.

If you take the main virtualization providers in the x86/x64 world – ESX, Xen and Hyper-V – and very roughly distribute them over the above-mentioned archetypes, you'll see that Hyper-V is the ugly duckling and that the other two are dishwashers.

Now let me ask you – why do we want to virtualize? There are different ways to put it, but in the end it comes down to this: we want to distribute and utilize our resources dynamically and in the best possible way (any resource – management is also a resource, as are power and space).

And virtualization gives us just that. But why do we need virtualization to reach these goals? Are the ability to migrate processes from one piece of hardware to another, the ability to run different kinds of processes on one piece of hardware, or the possibility to assign resources to and from processes intrinsic qualities of virtualization as we know it?

No.

To quote the engineer who was responsible for developing vMotion:
“A bunch of us at VMware came from academia where process migration was popular but never worked in any mainstream OS because it had too many external dependencies to take care of.” (Yellow Bricks). Virtualization was necessary because the original hosts of these processes weren't able to empower us to reach the goals I mentioned earlier. And if we were talking about a switch OS not being able to host a document editing process, that would be no big deal – but that's not representative of your everyday virtual host and guest.

And if we look at Novell using Xen to facilitate the transition from NetWare to SUSE, we look directly into the nature of virtualization: a way to stitch up an ugly wound at the core of mainstream (read x86/x64 if you want) OS's. Of course, from a consolidation perspective this is exactly what you need, but quite possibly not what you want if you consider the effect of keeping the stitches in.

With the rise of VMware, many have seized the momentum to develop all sorts of functionality that hooks into their virtual platform – adding to the discrepancy between what is possible on a bare-metal server and what is possible on a virtual machine. All of that functionality has made our lives a lot easier – but it could have been developed for the bare-metal systems as well.

But the danger lies in the fact that we are so pleased with the patch and all its bells and whistles that there is little incentive to fix the actual problem. Microsoft Windows is an extreme example of this: because it can't provide what we need – Microsoft even promotes application role separation over all-in-one servers – it now includes Hyper-V to fulfill those needs. So instead of transforming their OS to adapt to the changing needs and requirements, Microsoft develops (some may say copies) its own workaround. Before Microsoft launched Hyper-V it used to complain about the overhead of ESX and the associated performance hit – but the way I see it, the real overhead is the redundant guest OS in which the applications or processes are encapsulated.

I work with virtualization every day and share the enthusiasm about all the new possibilities, perspectives and challenges – but I'm a computer geek. I enjoy complexity. And when I think about the development of application infrastructure in the years to come, typing and reading on my iPad – the complete opposite of a VM – I can't help but wonder whether we are really on the yellow brick road and, if we are, whether the Wizard will live up to our expectations.