jmbrinkman

Posts Tagged ‘Apple’

Bone Machine

In Virtualization on November 29, 2010 at 20:59

The other day I was reading the Wikipedia entry on the Apple II. When discussing the Apple II Plus, the article mentioned a virtual machine that was used to run certain compilers by utilizing the language extension card. I’m not claiming some sort of discovery here – it was just a trace that made me think about virtualization.

Virtualization in the context of that entry means running something on hardware that was not designed for that job. It’s like cooking in your dishwasher. Related to this sort of virtualization is emulation, like a NES emulator on a Windows box – or the ugly duckling.

The difference between the two is mainly that the dishwasher gives immediate access to its resources, while the swan needs to use its OS to run its duck program and assign its resources accordingly.

If you take the main virtualization providers in the x86/x64 world – ESX, Xen and Hyper-V – and very roughly distribute them over the archetypes mentioned above, you’ll see that Hyper-V is the ugly duckling and that the other two are dishwashers.

Now let me ask you – why do we want to virtualize? There are different ways to put it, but in the end it comes down to this: we want to distribute and utilize our resources dynamically and in the best possible way (any resource – management is a resource, as are power and space).

And virtualization gives us just that. But why do we need virtualization to reach these goals? Are the ability to migrate processes from one piece of hardware to another, the ability to run different kinds of processes on one piece of hardware, or the possibility to assign resources to and from processes intrinsic qualities of virtualization as we know it?

No.

To quote the engineer who was responsible for developing VMotion:
“A bunch of us at VMware came from academia where process migration was popular but never worked in any mainstream OS because it had too many external dependencies to take care of.” (Yellow Bricks). Virtualization was necessary because the original hosts of these processes weren’t able to empower us to reach the goals I mentioned earlier. And if we were talking about a switch OS not being able to host a document editing process, that would be no big deal – but that’s not representative of your everyday virtual host and guest.

And if we look at Novell using Xen to facilitate the transition from NetWare to SUSE, we look directly into the nature of virtualization: a way to stitch up an ugly wound at the core of mainstream (read x86/x64 if you want) OSes. Of course, from a consolidation perspective this is exactly what you need, but quite possibly not what you want if you consider the effect of keeping the stitches together.

With the rise of VMware, many have seized the momentum to develop all sorts of functionality that hooks into their virtual platform – adding to the discrepancy between what is possible on a bare-metal server and what is possible in a virtual machine. All of that functionality has made our lives a lot easier – but it could have been developed for the bare-metal systems as well.

But the danger lies in the fact that we are so pleased with the patch and all its bells and whistles that there is little incentive to fix the actual problem. Microsoft Windows is an extreme example of this: because it can’t provide what we need – Microsoft even promotes application role separation over all-in-one servers – it now includes Hyper-V to fulfill those needs. So instead of transforming their OS to adapt to changing needs and requirements, Microsoft develops (some may say copies) its own workaround. Before Microsoft launched Hyper-V it used to complain about the overhead of ESX and the associated performance hit – but the way I see it, the real overhead is the redundant guest OS in which the applications or processes are encapsulated.

I work with virtualization every day and share the enthusiasm about all the new possibilities, perspectives and challenges – but I’m a computer geek. I enjoy complexity. And when I think about the development of application infrastructure in the years to come, typing and reading on my iPad – the complete opposite of a VM – I can’t help but wonder if we are really on the yellow brick road, and if we are, whether the Wizard will live up to our expectations.


Microsoft Server ActiveSync, iPhone and client certificate issues

In Exchange, Unified Communications on November 16, 2010 at 16:32

At my company we are currently running a pilot to see if we can offer corporate email to users on an iPhone. We decided to go for a simple setup: one dedicated Exchange 2007 Client Access Server facing the Internet (behind a firewall, of course) using HTTPS and client certificates on the iPhones. There are plenty of guides out there that discuss this topic (with or without client certificates, and with or without a reverse proxy server in front of the CAS server), so I’m not going to elaborate on that. We did take some standard security measures, like running the Microsoft SCW, enabling the Windows Firewall on the external interface and installing a virus scanner on the CAS server itself. We were using iPhone 4s with iOS 4.1 and Exchange 2007 SP1 RU9.

After we had rolled out the profiles the iPhones were syncing fine, but sometimes users weren’t able to connect to the server. One particular symptom that came up every now and then was that messages containing attachments wouldn’t send but would get stuck in the device’s Outbox. Some messages would remain stuck indefinitely, while others would send after a certain time period.

On the CAS server itself I noticed the following error in the Application event log:

And in the System Log:

There were also some entries in the httperr1.log:

2010-11-15 22:47:19 109.34.215.23 60140 192.168.2.40 443 HTTP/1.1 POST /Microsoft-Server-ActiveSync?User=MY USERNAME&DeviceId=MYDEVICE&DeviceType=iPhone&Cmd=Ping – 1 Connection_Abandoned_By_AppPool MSExchangeSyncAppPool

At times we would also see Connection_Dropped_By_AppPool MSExchangeSyncAppPool, and the same error as above but with the actual send-and-save command string.

Doing some research (aka using Google/Bing) gave me some information about IIS deadlocks, and I found the following suggestions:

– Add another CPU if you have a single-CPU VM

– Adjust the machine.config file for the .NET version mentioned in the event log

We tested both suggestions, but neither had any impact.
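For reference, the machine.config tuning that the ASP.NET deadlock/contention guidance of that era suggested looked roughly like this – the values below are the commonly cited single-CPU baseline and are illustrative, not something specific to our setup:

```xml
<!-- machine.config fragment (illustrative values from the ASP.NET
     contention/deadlock guidance; scale per CPU) -->
<system.web>
  <processModel autoConfig="false"
                maxWorkerThreads="100"
                maxIoThreads="100" />
  <httpRuntime minFreeThreads="88"
               minLocalRequestFreeThreads="76" />
</system.web>
```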

Additional troubleshooting steps we took were:

– Removed the antivirus software and disabled the Windows Firewall -> No effect whatsoever

– We checked the session time-out on the firewall, because Direct Push uses very long-lived HTTP sessions -> The firewall had a time-out value of 30 minutes, and since the Direct Push sessions last about 15 minutes, that couldn’t be the cause of our problems either

– Upgraded one of the iPhones to the iOS 4.2 GM -> Nada

After that I contacted PSS in order to jointly investigate the issue. They looked at the logs and we performed a trace, but nothing really came up.

Then I decided to have another look myself. I fired up Wireshark, exported the private key of the SSL certificate, and traced and decrypted the conversations between the device and the CAS server. In the conversations I noticed the following HTTP response:
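If you want to reproduce the decryption step from the command line instead of the Wireshark GUI, a rough sketch with the tshark of that era looks like the following. The file names cas.pcap and server.pem are placeholders for your own capture and exported RSA private key, and this approach only works because the connection used RSA key exchange:

```shell
REM Decrypt HTTPS traffic to the CAS server (192.168.2.40:443) with its
REM exported RSA private key and show only the HTTP traffic.
REM cas.pcap and server.pem are placeholder file names.
tshark -r cas.pcap -o "ssl.keys_list:192.168.2.40,443,http,server.pem" -R "http"
```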


So apparently the web server had problems with the size of the request. Searching TechNet I found this article:

http://technet.microsoft.com/en-us/library/cc737382%28WS.10%29.aspx:

“If a client sends a long HTTP request, for example, a POST request, to a Web server running IIS 6.0, the IIS worker process might receive enough data to parse request headers, but not receive the entire request entity body. When the IIS worker process detects that client certificates are required to return data to the client, IIS attempts to renegotiate the client connection. However, the client cannot renegotiate the connection because it is waiting to send the remaining request data to IIS.

If client renegotiation is requested, the request entity body must be preloaded using SSL preload. SSL preload will use the value of the UploadReadAheadSize metabase property, which is used for ISAPI extensions. However, if UploadReadAheadSize is smaller than the content length, an HTTP 413 error is returned, and the connection is closed to prevent deadlock. (Deadlock occurs because a client is waiting to complete sending a request entity, while the server is waiting for renegotiation to complete, but renegotiation requires that the client to be able to send data, which it cannot do).”

I tried enlarging the UploadReadAheadSize to 64 KB, but as could be expected (the attachment was much larger than that) that didn’t help. And just as the article says, increasing this value would enlarge the attack surface of our server. So I followed the link at the bottom of the article to this article:
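In case you want to try this yourself anyway, the change is a one-liner with the adsutil.vbs script. I’m assuming the default web site (site ID 1) and the default AdminScripts location here – adjust both to your environment:

```shell
REM Raise UploadReadAheadSize to 64 KB for site 1 (site ID is an assumption)
cscript %SystemDrive%\Inetpub\AdminScripts\adsutil.vbs set w3svc/1/UploadReadAheadSize 65536
```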

http://technet.microsoft.com/en-us/library/cc778630%28WS.10%29.aspx:

“The SSLAlwaysNegoClientCert property controls SSL client connection negotiations. If this property is set to true, any time SSL connections are negotiated, the server will immediately negotiate a client certificate, preventing an expensive renegotiation. Setting SSLAlwaysNegoClientCert also helps eliminate client certificate renegotiation deadlocks, which may occur when a client is blocked on sending a large request body when a renegotiation request is received.”

I then used the adsutil script to set that value and voilà! The messages were sent normally and the errors stopped occurring.
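For completeness, the adsutil call looks like this – again assuming the default web site (site ID 1) and the default AdminScripts location:

```shell
REM Always negotiate the client certificate up front for site 1,
REM avoiding the renegotiation deadlock described above
cscript %SystemDrive%\Inetpub\AdminScripts\adsutil.vbs set w3svc/1/SSLAlwaysNegoClientCert true
```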

If you want to apply either of those settings, you should remember to restart the IIS Admin service and not just reset IIS.
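On our server that came down to something like the following (starting w3svc brings the IIS Admin service back up as a dependency):

```shell
REM Restart the IIS Admin service (an iisreset alone was not enough for us)
net stop iisadmin /y
net start w3svc
```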

I’ve seen several posts on the web dealing with the same issue, or at least the same symptom. They might be related to our issue, and I think that UploadReadAheadSize could also affect sending email messages when no client certificates are being used.