jmbrinkman

Archive for November, 2010

Bone Machine

In Virtualization on November 29, 2010 at 20:59

The other day I was reading the Wikipedia entry on the Apple II. When discussing the Apple II Plus, the article mentioned a virtual machine which was used to run certain compilers by utilizing the language extension card. I’m not claiming some sort of discovery here – it was just a trace that made me think about virtualization.

Virtualization in the context of that entry means running something on hardware that was not designed for that job. It’s like cooking in your dishwasher. Related to this sort of virtualization is emulation, like a NES emulator on a Windows box – or the ugly duckling.

The difference between the two is mainly that the dishwasher gives immediate access to its resources, while the swan needs to use its OS to run its duck program and assign its resources accordingly.

If you take the main virtualization providers in the x86/x64 world – ESX, Xen and Hyper-V – and very roughly distribute them over the archetypes mentioned above, you’ll see that Hyper-V is the ugly duckling and that the other two are dishwashers.

Now let me ask you – why do we want to virtualize? There are different ways to put it, but in the end it comes down to this: we want to distribute and utilize our resources dynamically and in the best possible way (any resource – management is also a resource, as are power and space).

And virtualization gives us just that. But why do we need virtualization to reach these goals? Are the ability to migrate processes from one piece of hardware to another, the ability to run different kinds of processes on one piece of hardware, or the possibility to assign resources to and from processes intrinsic qualities of virtualization as we know it?

No.

To quote the engineer who was responsible for developing vMotion:
“A bunch of us at VMware came from academia where process migration was popular but never worked in any mainstream OS because it had too many external dependencies to take care of.” (Yellow Bricks). Virtualization was necessary because the original hosts of these processes weren’t able to empower us to reach the goals I mentioned earlier. And if we were talking about a switch OS not being able to host a document-editing process, that would be no big deal – but that’s not representative of your everyday virtual host and guest.

And if we look at Novell using Xen to facilitate the transition from NetWare to SUSE, we look directly into the nature of virtualization: a way to stitch up an ugly wound at the core of mainstream (read x86/x64 if you want) OSes. Of course, from a consolidation perspective this is exactly what you need, but quite possibly not what you want if you consider the effect of keeping the stitches together.

With the rise of VMware, many have seized the momentum to develop all sorts of functionality that hooks into their virtual platform – adding to the discrepancy between what is possible on a bare-metal server and what is possible on a virtual machine. All of that functionality has made our lives a lot easier – but it could have been developed for the bare-metal systems as well.

But the danger lies in the fact that we are so pleased with the patch and all its bells and whistles that there is little incentive to fix the actual problem. Microsoft Windows is an extreme example of this: because it can’t provide what we need – Microsoft even promotes application role separation over all-in-one servers – it now includes Hyper-V to fulfill those needs. So instead of transforming their OS to adapt to the changing needs and requirements, Microsoft develops (some may say copies) its own workaround. Before Microsoft launched Hyper-V it used to complain about the overhead of ESX and the associated performance hit – but the way I see it, the real overhead is the redundant guest OS in which the applications or processes are encapsulated.

I work with virtualization every day and share the enthusiasm about all the new possibilities, perspectives and challenges – but I’m a computer geek. I enjoy complexity. And when I think about the development of application infrastructure in the years to come, typing and reading on my iPad – the complete opposite of a VM – I can’t help but wonder whether we are really on the yellow brick road and, if we are, whether the Wizard will live up to our expectations.

Microsoft Server ActiveSync, iPhone and client certificate issues

In Exchange, Unified Communications on November 16, 2010 at 16:32

At my company we are currently running a pilot to see if we can offer corporate email to users on an iPhone. We decided to go for a simple setup: one dedicated Exchange 2007 Client Access Server facing the Internet (behind a firewall, of course), using HTTPS and client certificates on the iPhones. There are plenty of guides out there that discuss this topic (with or without client certificates and with a reverse proxy server in front of the CAS server), so I’m not going to elaborate on that. We did take some standard security measures, like running the Microsoft SCW, enabling the Windows Firewall on the external interface and installing a virus scanner on the CAS server itself. We were using iPhone 4s with iOS 4.1 and Exchange 2007 SP1 RU9.

After we had rolled out the profiles the iPhones were syncing fine, but sometimes users weren’t able to connect to the server. One particular symptom that came up every now and then was that messages containing attachments wouldn’t send but would get stuck in the device’s Outbox. Some messages would remain stuck indefinitely, while others would send after a certain time period.

On the CAS server itself I noticed errors in both the Application and the System event log.

There were also some entries in the httperr1.log:

2010-11-15 22:47:19 109.34.215.23 60140 192.168.2.40 443 HTTP/1.1 POST /Microsoft-Server-ActiveSync?User=MY USERNAME&DeviceId=MYDEVICE&DeviceType=iPhone&Cmd=Ping – 1 Connection_Abandoned_By_AppPool MSExchangeSyncAppPool

At times we would also see Connection_Dropped_By_AppPool MSExchangeSyncAppPool and the same error as above but with the actual send and save command string.

Doing some research (aka using Google/Bing) gave me some information about IIS deadlocks and I found the following suggestions:

– Add another CPU if you have a single CPU VM

– Adjust the machine.config file for the .NET version mentioned in the event log

We tested both suggestions, but neither had any impact.

Additional troubleshooting steps we took were:

– Removed the antivirus software and disabled the Windows Firewall -> No effect whatsoever

– We checked the session time-out on the firewall, because Direct Push uses long-lived HTTP(S) sessions -> The firewall had a time-out value of 30 minutes, and since the Direct Push sessions last about 15 minutes that couldn’t be the cause of our problems either

– Upgraded one of the iPhones to the iOS 4.2 GM -> Nada

After that I contacted PSS to investigate the issue together. They looked at the logs and we performed a trace, but nothing really came up.

Then I decided to have another look myself. I fired up Wireshark, exported the private key of the SSL certificate, and traced and decrypted the conversations between the device and the CAS server.
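If you want to do the same, the decryption boils down to pointing Wireshark – or its command-line counterpart tshark – at the exported RSA private key. A rough sketch of the tshark variant is shown below; the capture file name and key path are made up, the IP address is the CAS server’s internal address from the httperr log, and this only works if the key is an unencrypted PEM file and the connection uses an RSA key exchange:

tshark -r activesync.pcap -o "ssl.keys_list:192.168.2.40,443,http,C:\temp\cas-key.pem" -R "http"

In the Wireshark GUI the same ip,port,protocol,keyfile entry goes into the SSL protocol preferences under “RSA keys list”.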


In the decrypted conversations the server was answering with an HTTP 413 (Request Entity Too Large) response, so apparently the web server had a problem with the size of the request. Searching TechNet I found this article:

http://technet.microsoft.com/en-us/library/cc737382%28WS.10%29.aspx:

“If a client sends a long HTTP request, for example, a POST request, to a Web server running IIS 6.0, the IIS worker process might receive enough data to parse request headers, but not receive the entire request entity body. When the IIS worker process detects that client certificates are required to return data to the client, IIS attempts to renegotiate the client connection. However, the client cannot renegotiate the connection because it is waiting to send the remaining request data to IIS.

If client renegotiation is requested, the request entity body must be preloaded using SSL preload. SSL preload will use the value of the UploadReadAheadSize metabase property, which is used for ISAPI extensions. However, if UploadReadAheadSize is smaller than the content length, an HTTP 413 error is returned, and the connection is closed to prevent deadlock. (Deadlock occurs because a client is waiting to complete sending a request entity, while the server is waiting for renegotiation to complete, but renegotiation requires that the client be able to send data, which it cannot do).”
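Raising UploadReadAheadSize is a metabase change. On IIS 6.0 it would look roughly like the line below – a sketch only, assuming adsutil.vbs in its default C:\Inetpub\AdminScripts location and web site ID 1 (use the ID of the site that hosts the Microsoft-Server-ActiveSync virtual directory); the value is in bytes, so 65536 equals 64 KB:

cscript C:\Inetpub\AdminScripts\adsutil.vbs set w3svc/1/UploadReadAheadSize 65536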

I tried enlarging the UploadReadAheadSize to 64 KB, but as could be expected (the attachment was much larger than that) that didn’t help. And just as the article says, increasing this value would create an attack surface on our server. So I followed the link at the bottom of the article to this article:

http://technet.microsoft.com/en-us/library/cc778630%28WS.10%29.aspx:

“The SSLAlwaysNegoClientCert property controls SSL client connection negotiations. If this property is set to true, any time SSL connections are negotiated, the server will immediately negotiate a client certificate, preventing an expensive renegotiation. Setting SSLAlwaysNegoClientCert also helps eliminate client certificate renegotiation deadlocks, which may occur when a client is blocked on sending a large request body when a renegotiation request is received.”

I then used the adsutil script to set that value and voilà! The messages were sent normally and the errors stopped occurring.
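For anyone who wants to make the same change, the command was along these lines – again just a sketch, assuming the default adsutil.vbs location and web site ID 1:

cscript C:\Inetpub\AdminScripts\adsutil.vbs set w3svc/1/SSLAlwaysNegoClientCert true

You can verify the result afterwards with adsutil.vbs get w3svc/1/SSLAlwaysNegoClientCert.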

If you want to apply either of those settings, remember to restart the IIS Admin service and not just reset IIS.
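In our case that came down to something like the following. Note that stopping the IIS Admin service also stops everything that depends on it, so expect a short interruption; on Windows Server 2003 the HTTP SSL service (HTTPFilter) has to be started again as well, or HTTPS will stay down:

net stop iisadmin /y
net start w3svc
net start httpfilter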

I’ve seen several posts on the web dealing with the same issue, or at least the same symptom. They might be related to our issue, and I think that UploadReadAheadSize could also affect sending email messages when no client certificates are being used.

Persistence is Futile

In Opalis, System Center on November 11, 2010 at 17:43


Opalis

In my earlier post I mentioned Opalis. Now what is Opalis? Opalis is an IT process automation tool. It gives you the possibility to visually design workflows that orchestrate, manage and monitor your whole process. By using integration packs, Opalis is able to communicate with a host of different systems, vendors and platforms. You can get data out of systems and into systems, and base your workflow’s logic on the responses you get from those systems.

In the breakout session I attended, Opalis was compared to a mainframe run book: a formalization of all the steps involved in a process from start to end. And because of the great interoperability you can start by taking your “informal” processes and putting them into Opalis – no change in functionality, but now you let Opalis handle the execution (for instance calling PowerShell), the monitoring and logging (by raising an alert in SCOM if something goes wrong, or even creating an incident in Service Manager) and the decision-making logic. So instead of incorporating all of that in every script you find in your environment, you create a template which you can then reuse for every task.

Opalis itself was a so-called third-party tool vendor, but it is now a fully owned subsidiary of Microsoft and has been included in the System Center suite. In later posts I will try to get into the technical details of Opalis and how it relates to Microsoft’s cloud management solution.

Computer says Yes

In Service Manager, System Center, Tech Ed on November 10, 2010 at 21:16

In this post I’ll give an overview of SC Service Manager:

Service Manager is an IT service management tool which can provide problem, incident and change management while fully integrating with the other System Center products. You need a CMDB? Connect Service Manager to SCCM and SCOM and you have your CMDB. You want to create an incident when an alert is generated in SCOM? Connect Service Manager to SCOM and there you go. Want to see the same distributed applications in your IT service management tool as you’ve defined in SCOM? Import your existing MP in Service Manager and you’re good to go.

Besides being able to tap into the information provided by SCOM and SCCM, Service Manager enables you to create workflows to formalize and/or automate your existing processes. Since existing scripts for common tasks can be included in a workflow, you can pick up those pesky scripts and put them into Service Manager so that they are visible, documented and manageable. Combined with Opalis you could take all tasks and scripts (defrags, legacy NT backups, third-party config exports) and use Opalis to orchestrate these processes and use Service Manager/SCOM to manage and monitor them. But more on Opalis later.

Even though the interface might seem a bit quirky for users accustomed to other IT service management or ticket handling systems, this is a very powerful tool: you have all this information about your environment, you can create logical workflows to, for instance, build a template for standard changes, and you can automate the change and monitor it while it is being made in your environment.

Service Manager uses an extension of the SCOM schema and uses Management Packs just like SCOM does. Out of the box Microsoft provides MPs for a knowledge base and for change and incident management, and they are working with partners to provide things such as asset management.

Microsoft positions Service Manager as the focal point for customer/IT interaction and as a presentation layer to expose and act on information from your data center.

Service Manager is available for free if you have a System Center Enterprise or Datacenter edition license.


Lucy in the sky with diamonds

In Tech Ed on November 10, 2010 at 15:31

I’m writing this post, my first post on my first blog, from Tech Ed Europe in Berlin. In between sessions I’m going to try to provide some coverage of this event and present my view on what I’ve heard and seen here.

My visit to Tech Ed is part of the reason why I’ve jumped on the blogging bandwagon – I wanted to let my co-workers know how I’m doing and what I’m doing over here.

This year’s Tech Ed is all about clouds. I’ll skip all the obvious and not so obvious allusions to Berlin, walls, windows and clouds, but I do want to mention that Microsoft seems to be a little late to the cloud game (as I am to the blogging game, perhaps). However, because Microsoft’s ecosystem forms such a substantial part of the services that customers consume, it has an edge on competing unified management systems: it can fully manage that ecosystem, and manage it better.

The System Center suite (which to some extent also seems to include FIM) has been transformed from a vaguely integrated group of management, monitoring and reporting tools into a cloud management quilt, where the System Center products are the patches and the newly acquired Opalis automation software is the thread.

As I said before, Microsoft is late – VMware has been promoting the idea of personal clouds for a while now, and great progress has been made to tightly integrate VMs, network, storage and in some cases even OS and software. But where it was lacking was in native support for a Microsoft back-end infrastructure. If Microsoft can take the experience of the likes of Cisco, NetApp, EMC and others with cloud management on the network and storage level (which, judging from some of the demos I’ve seen here, it has) and combine that with its knowledge of managing its own ecosystem, no Windows administrator will ever have to leave her or his office to see their clouds – they can just look through their windows.

This is the first in a series of posts on Microsoft Tech Ed; in the next posts I will focus on the details regarding System Center and Opalis.