Archive for the ‘Active Directory’ Category

Running in the name of…

In Active Directory, Chef, DevOps, Powershell, Race Against the Machine on February 14, 2015 at 22:28

If you like my content please do check out my new blog at !

In my last post I briefly discussed the issues related to the execution context of the chef-client when developing and testing cookbooks. To summarize: on a Windows machine the chef-client runs under the context of the Local System account, an account which is basically a local admin but has almost no permissions on other Active Directory joined/secured objects (such as other machines, Active Directory itself, or file shares).

You can always mimic this behaviour by adding a cookbook to a test node’s run list and restarting the chef-client service, of course. Or by creating a scheduled task which runs chef-solo or chef-zero. However, that leaves you without the direct output of an interactively started chef run. Also, I can’t really see how you could add something like that to a Vagrant setup.

Luckily, the need to run stuff under the Local System account is not new. Quite some time ago Windows tool legend Mark Russinovich developed PsExec, a tool which allows you, amongst other things, to start a process as the Local System account. In order to do that it temporarily installs a Windows service called PSEXESVC, which is used to start a process (e.g. cmd.exe) as the Local System account.

The command itself is very simple:

somepath:\psexec /s cmd.exe

You can run this from another command window or from PowerShell – it doesn’t really matter. Once you do, it starts another instance of cmd.exe, and everything executed from that instance will run as the Local System account. For instance, running whoami from that new instance will return:

nt authority\system

From there you can start chef-client just as you normally would – however, you will find that if you access anything outside of the node, that access will be denied – unless the permissions on the resource are set to include the computer account object (which is a member of Everyone and Authenticated Users, btw). Also, the output of the chef-client is a little bit different – more like the output that’s written to the chef log when you run the client as a service.

I’ve tried hacking this into the batch files Chef uses to start things (so instead of calling ruby.exe directly I would first open a command console as Local System when running chef-zero), but that didn’t seem to work – I guess that has something to do with the difference between a real interactive session and a session started by psexec. But I’m sure I’ll find some way to work around that at some point.

But for now – if you really want to know whether your cookbook will run as you’d expect in a Windows domain environment – use psexec manually at least once in order to identify any permission issues.


P.S. – there is one other fun way to run stuff as Local System – replacing the Ease of Access executable on a box 😉 For more details see Guy’s site.


Steamy Windows – 3 things no one told you before you started with Chef on Windows

In Active Directory, Chef, DevOps on February 5, 2015 at 21:42




“Steamy windows
Zero visibility”

1. Escape


Escape characters in Ruby (and therefore in Chef) start with a backslash. Nothing wrong with that. But it will mess up your paths when you are working on Windows – because in the Windows world our paths are made out of backslashes (a nice article on the history of backslashes in Windows can be found here).

Online sources and docs point you towards helpers and functions to fix this. However, in my experience the best way to deal with this is just escaping them properly. And since Windows has no issues with double backslashes instead of single backslashes, you could – as a precaution – just use double backslashes in all your attribute assignments.

This does mean, however, that a UNC path needs double escaping: \\someserver\somepath has to be written as "\\\\someserver\\somepath" in a Ruby string. More about UNC paths and file shares later.
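To make the escaping rules concrete, here is a minimal sketch in plain Ruby (the paths are hypothetical examples):

```ruby
# In a double-quoted Ruby string every literal backslash must be doubled.
install_dir = "C:\\Program Files\\MyApp"   # renders as C:\Program Files\MyApp

# A UNC path starts with two backslashes on screen,
# so the source code needs four of them up front:
share = "\\\\someserver\\somepath"         # renders as \\someserver\somepath

puts install_dir
puts share
```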


2. Thou Powershall not


Chef supports Powershell.

Or well, actually it provides you with a couple of ways to run PowerShell from a recipe: powershell_script creates a temporary script on the file system and runs it, and powershell_out actually starts a PowerShell process and runs the code you provide.

However, it is important to realize that while you can use recipe variables such as node attributes in your PowerShell script, you can’t actually pass back the objects PowerShell returns. powershell_script just returns the exit code, and powershell_out captures all return data – but as a string. That is perfectly suitable for error handling or reporting, but it denies you access to one of the big advantages of PowerShell: the fact that you can gather all sorts of information about your environment and use it in your script’s decision-making logic without having to parse the output and put it into some sort of data structure.
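To illustrate the point, here is a small Ruby sketch. The PowerShell command and its output are simulated (powershell_out itself needs a Chef run on Windows), but the parsing step is exactly the kind of work you end up doing because stdout is just a String:

```ruby
# powershell_out returns a shell-out result whose stdout is a plain String.
# Simulated stdout from a hypothetical 'Get-Service w32time | Format-List Name,Status':
stdout = "Name   : w32time\nStatus : Running\n"

# Back in Ruby you must parse the text yourself to get a usable structure:
service = stdout.scan(/^(\w+)\s*:\s*(\S+)$/).to_h
puts service['Status']   # "Running" – a String, not a live service object
```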

For example – there is no real added value in using the AD cmdlets over plain old dsquery, because if you want to use the data they return you’ll have to parse the returned strings with some sort of fancy regex and whatnot. And in all honesty, PowerShell is much slower than most cmdline-based tools – especially when they are RPC- rather than WMI-based.

This is by no means a design flaw or anything – the whole idea behind Chef is that when you start a run you already have the necessary environment data, either from Ohai or from loading up your defined resources.

So either use equivalent Ruby code or fall back to the old cmdline binaries (I personally prefer using Ruby code).

3. Execution Context


This in itself has actually very little to do with Chef. If you want to run something on a box you’ll need to do that under some sort of security context which determines what you can and what you cannot do. Before you get access to that context you will need to authenticate.

If you want to automate a whole set of systems, there are really two things you can do:

1. Run commands remotely on each system with some sort of network shell (an example could be running a PowerShell remoting session from an admin machine to a set of target machines).

2. Use a service or a scheduled task to run tasks locally on each machine.

Chef and quite a few other config management tools use the second option. And this works really well if you only need to change things on that particular node. You can just use the Local System account – no password needed, and it has all the permissions you might need. So it’s no surprise that this is the account the chef-client uses to do its regular runs.

But what if you need to change a setting outside of the node/machine? Like creating a DNS entry or an SPN, or using the template resource on a CIFS/SMB share? The Local System account (even though it has some permissions when the machine is domain-joined – such as being a member of Authenticated Users) cannot access any of these resources. What further complicates this is that in the “chef world”, testing and development are generally done by running Chef as some sort of “real user” (either the person logged on to the machine, or the vagrant user when you are using Vagrant). So you won’t notice that you have a problem until you actually deploy your cookbook.

There are three ways to solve this :

1. Use shell_out in your recipes. It supports running a command as another user (basically, what it does is spawn a process). Or, when you are dealing with network shares, use the mount provider – it allows you to mount a share as another user as well, and all actions you perform after mounting the share will be executed as that user. Even though it might seem tempting to use shell_out to call PowerShell, remember that at the moment this has very little added value.

Strangely enough, the powershell_out mixin has no option to run as another user (yet), btw.
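As a rough sketch of option 1 – the account, domain, server and command below are purely hypothetical, and the :user/:password/:domain options are only honored on Windows:

```ruby
# Inside a recipe, Chef's shell_out (a thin wrapper around Mixlib::ShellOut)
# can spawn the process under different credentials:
cmd = shell_out('dsquery computer -name windowsbox1',
                user:     'svc_chef',      # hypothetical service account
                password: 'S3cr3t!',
                domain:   'CONTOSO')
Chef::Log.info(cmd.stdout)   # stdout is still a plain String

# For file shares, the mount resource can authenticate as another user too:
mount 'P:' do
  device '\\\\someserver\\somepath'
  username 'CONTOSO\\svc_chef'
  password 'S3cr3t!'
  action :mount
end
```

This is a recipe fragment, not a standalone script – it only runs inside a chef-client run.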

2. To the outside world (inside your domain, anyway), the Local System account will perform actions outside of the host as the AD computer account. So if you try to reach a share from Chef on, let’s say, windowsbox1 while chef-client is running as the Local System account (which is the default and, as far as I’ve seen, the only recommended configuration), you will see a logon request from contoso\windowsbox1. You could of course just grant this machine account permission to your share (either directly or through some group). This may sound crazy – but there are other situations where this is perfectly normal (such as Microsoft Clustering, Kerberos constrained delegation in TMG, SCCM and some parts of MS Exchange).

One caveat: you really have to have a working Kerberos authentication setup – because authenticating as a computer account won’t work over NTLM.

3. Bypass the Windows security context altogether. Easier said than done most of the time, but some infrastructure services actually have a network interface or API (LDAP, SharePoint). This enables you to define a bind account or authenticate through web services, and only for that specific task or service.

And there might be a fourth: running chef-client under a service account. I haven’t really seen anyone do this, though.

In my next post I will try to describe the pros and cons of these three approaches and show a simple and easy way to simulate this real-world limitation when you are developing your cookbooks.

Netscaler/Citrix Access Gateway and Active Directory nested groups

In Active Directory, Citrix, Netscaler on May 11, 2012 at 21:21



We recently adopted RBAC based on MS Active Directory to manage our infrastructure. We already used AD groups to authenticate and authorize administrators on our Netscaler appliances; however, admins were direct members of the groups defined on the Netscalers. (Have a look over here to see how to configure this.)

Our new RBAC system uses nested groups – an admin is a member of a role group, which in turn is a member of a group authorizing access to a resource. Not every non-MS system is able to “understand” nested groups (such as Cisco Ironport anti-spam appliances), so you are normally forced to use some sort of iterative/recursive query to make it work – but luckily the Netscalers have a feature called “nested group extraction”.

You can enable nested group extraction when you define an Authentication server. After you choose LDAP as the authentication server type, the option is somewhat hidden at the bottom of the dialog window – but if you flip it open and enter:

  • The nesting level (default is 2)
  • The group name identifier (simply the attribute defining the unique name for the group object), which in most situations would be the sAMAccountName attribute
  • The group search attribute: memberOf
  • And the sub search attribute – here the documentation suggests using the common name, just as in the general server configuration

you will be able to use nested groups to authorize your administrators to perform management tasks on the Netscalers.

You could use an existing LDAP server/policy pair to achieve this – but I would strongly advise creating a separate server/policy pair. The main reason is that when you enable nested group extraction for an authentication server, all users authenticating through that policy/server pair seem to be checked for nested group memberships – even if you don’t use group membership as a factor to authorize your users to access resources…

We found out about that one the hard way – after enabling nested group extraction on our default LDAP policy/server pair, certain users were unable to log on to our Citrix environment. This was caused by the fact that they were members of a group that had an “illegal character” in its common name (a forward slash, “/”) – and with nested group extraction enabled they got an access denied message…

We solved this by using a different policy/server pair for admin authentication/authorization.

Exchange 2007: Enable Non-Admin to set Mailbox Permissions

In Active Directory, Exchange on April 17, 2012 at 13:06



In Exchange 2010 you are able to design your own RBAC system and define roles. No such luck for those of us still using Exchange 2007. In order to set Full Access mailbox permissions you need to be a Server or Organization Administrator, which in our case was overkill because we wanted to allow non-admin users to set these permissions.

After some experiments I came up with the following combination of permissions:

Grant Full Access to a Mailbox:

  • Assign the Exchange Recipient Administrator role to the user or group
  • On each mailbox store/database:
    • Start Adsiedit:
      • Go to Configuration\Services\Microsoft Exchange\ORGNAME\Administrative Groups\Exchange Administrative Group bla bla bla\Servers\SERVERNAME\InformationStore\SGNAME\STORENAME
      • Open Properties\Security
      • Give the user or group the following permissions:
        • Administer Information Store
        • View Information Store Status
        • Read Permissions
        • Modify Permissions

As for Send As permissions:

  • On each OU containing User objects set the following permissions:
    • Read Permissions ( On Descendant User objects)
    • Modify Permissions (On Descendant User objects)

A Heraclean Task? Active Directory, Kerberos and the krbtgt account.

In Active Directory, Kerberos on November 11, 2011 at 10:54



Hercules presenting Cerberus to Eurystheus

Does any of this sound familiar?

  • All domain controllers have been logically corrupted or physically damaged to a point that business continuity is impossible; for example, all business applications that depend on AD DS are nonfunctional.
  • A rogue administrator has compromised the Active Directory environment.
  • An attacker intentionally—or an administrator accidentally—runs a script that spreads data corruption across the forest.
  • An attacker intentionally—or an administrator accidentally—extends the Active Directory schema with malicious or conflicting changes.
  • None of the domain controllers can replicate with their replication partners.
  • Changes cannot be made to AD DS at any domain controller.
  • New domain controllers cannot be installed in any domain.


When any of the above is true, you know that you have a problem. It’s under these circumstances that a Forest Recovery could be necessary. I’ve never been in such a situation and I sincerely hope I never will be. But let’s assume that you are looking out over the smoking ruins of your Active Directory forest and have found someone else to blame it on – what to do next? Either you will actually have a good restore procedure in place based upon Technet, or you will Bing for it 😉

Now the procedure itself is pretty straightforward – make sure all DCs are down and out, and restore one DC from a valid backup, starting with your forest root domain. There are a lot of steps in between – but hey – if my forest is down and Microsoft tells me to jump through a flaming hoop in my underwear, smoking a cigar and shouting “Developers, Developers, Developers!” – I will do just that.

One part of the procedure that has always interested me is where you try to make sure no rogue DC screws up your non-authoritative restore by replicating the corrupt data right back at you. You might wonder why. Well, first of all, if anything will enable you to really get an understanding of Kerberos and replication, trying to break it will. Secondly, let’s assume you make a mistake during the procedure (or perform any of the steps in the wrong environment when testing 😉 ). The steps are:


Step 3 – Shutting down all other writable domain controllers. That seems logical. If they are shut down they can’t replicate.


Restore the first writable domain controller for the forest root domain, steps 8 to 11

  • Delete server and computer objects for all other domain controllers in the forest root domain.
  • Reset the computer account password for the domain controller you are restoring (twice)
  • Reset the krbtgt password (twice)
  • Reset the trust password (if any – and twice). This step seems fairly logical as well – it’s basically the same thing as deleting the computer accounts for the DCs in your forest.

In a note, the procedure on Technet tells us that these steps are needed in order to prevent replication – but not how those steps will prevent it, let alone what will happen if you fail to perform any of them. Others have touched on this subject; Jane Lewis (in 2006…) devoted a very informative blog post to it: The KRBTGT Account – What is it?. However, this only tells us about the krbtgt password reset – and doesn’t really tell us how this reset will stop replication. In order to grasp the whole concept we will have to take a closer look at the Kerberos authentication process.

Triple headed monster

Kerberos – named after the hell hound of classical mythology which guarded the underworld, making sure only the dead could enter – is an authentication protocol. For a good introduction to Kerberos as used in Windows, see Kerberos Explained by Mark Walla, or check the IETF RFCs on the subject. The authentication process consists of three actors: a client, a server, and a third party – the Key Distribution Center (KDC). Hence the analogy with a dog with three heads.

AD and Kerberos

Active Directory replication uses Kerberos mutual authentication before a replication operation can be performed. Now I want to describe the scenario where you have just restored the first writable DC in your forest root domain, and a rogue DC exists in the same domain which contains newer (but corrupt!) data and wants to replicate that data really, really badly.

The above diagram shows two ways authentication might occur – however, unless the KDC service on DC2 were broken or otherwise unavailable, I think that a DC, as a Kerberos client, will always connect to its own KDC. And since a service ticket is encrypted with the service key by the KDC-TGS, and the service key for DC1 on DC2 will be different from the service key on DC1, DC1 will never be able to decrypt it.
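The key mismatch can be sketched as a toy model in Ruby – emphatically not real Kerberos, “encryption” is reduced to a simple key comparison, and the key values are made up:

```ruby
# A ticket is only readable by a service whose key matches the key
# the issuing KDC-TGS used to "encrypt" it.
Ticket = Struct.new(:payload, :key)

def issue_service_ticket(service_key_known_to_kdc, client)
  Ticket.new("session key for #{client}", service_key_known_to_kdc)
end

def service_decrypt(ticket, own_key)
  ticket.key == own_key ? ticket.payload : nil   # nil ~ ticket rejected
end

dc1_key_after_reset  = 'new-machine-password'    # reset twice during the restore
dc1_key_on_rogue_dc2 = 'stale-machine-password'  # what DC2 still holds for DC1

# DC2's own KDC-TGS issues a ticket for DC1 using the stale key:
ticket = issue_service_ticket(dc1_key_on_rogue_dc2, 'DC2$')
puts service_decrypt(ticket, dc1_key_after_reset).inspect   # nil – DC1 can't read it
```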

So why would you have to reset the krbtgt password as well? If DC1 and DC2 both have different keys for each other (DC1 had its password reset, so it’s not equal to the password known by DC2, and DC1 doesn’t have a password for DC2 because we removed its computer account from the restored AD), there is no way in Hades they will be able to communicate.

The only scenario I could think of where not resetting the krbtgt password would have any serious consequences would be:

  • DC2 uses the TGT from its own KDC-AS to connect to the KDC-TGS on DC1
  • DC1 will issue a service ticket to DC2 – encrypted with the correct service key and the client key contained in the TGT
  • DC2 will then contact DC1 in order to trigger replication
  • When DC2 presents the service ticket, DC1 will decrypt it (and validate the authenticator) and create an access token based upon the SID in the ticket

Now I’m not sure whether: a) a client would ever use its own KDC-AS but another KDC-TGS, or b) an access token could be created and/or authorization could be given to a client that no longer exists in AD. To determine if this scenario is even possible I would have to test it in a lab, and in due time I will – but I’d also love to hear from someone else who has already tried something similar or has more theoretical knowledge about how this process works. Of course, even if resetting the krbtgt account has little or no effect, it won’t hurt to do it.

Even if you were to reset this account on an active and healthy AD you shouldn’t experience any real issues, except that all previously created TGTs are invalid on the DC you changed it on, and any new TGTs issued by that DC will be invalid on all other DCs. This could lead to a service disruption for users, at least until the new krbtgt password has replicated through the forest (and that is something for which I have unfortunately seen “test” results from a production environment). The step is even mentioned in certain KB articles by MS as a way to resolve certain problems with Kerberos authentication – without any warning or information about the risk and impact.

Event ID 14 – Kerberos Key Integrity

Event ID 10 – KDC Password Configuration

But I do get an uneasy feeling performing steps in a procedure which is so complicated and crucial without knowing why I am performing said steps – and that’s where the analogy with Heraclean tasks comes into play. Hercules actually had to perform two extra tasks to atone for his sins (slaying his sons after being driven mad by Hera) because he didn’t read the fine print and two of his labours weren’t counted by Eurystheus… which is what forced him to confront Kerberos in the first place…

This blog has moved to