Active Directory Contact Sync Tool (ADCST)

I ran into a scenario recently where two companies had been sharing the same Office 365 Exchange tenant for over two years. One of the two companies was now big enough to warrant its own Exchange Online instance, however the two companies still needed to be able to seamlessly contact one another [Lync (Skype for Business)/Exchange mail/shared calendars/etc.].

We could easily share calendar information between Office 365 accounts, however the problem of “finding” a user in the sister company became an issue: how does a user in “Company A” find a user in “Company B” if they don’t know them beforehand? By splitting users out of the original Office 365 tenant they lose the ability to search the Global Address List (GAL) to look up job titles, user locations, phone numbers, etc. – and that visibility was something both companies wanted to keep.

One option was to create contact objects in “Company A’s” Active Directory for users in “Company B” (and vice-versa) and have these sync to Office 365 via Directory Sync… Good idea, however doing this by hand is tedious and it is not a function of Office 365 “out of the box”.

…It turned out this problem wasn’t tremendously hard to solve with the improvements Microsoft have recently made to Office 365, specifically the ability to access and [now] query the Azure Active Directory that sits behind Office 365 via the Graph API.

Introducing ADCST – a little tool written in C# to help resolve this problem by facilitating the sync of address books between Office 365 tenants (GAL address book syncing/federation).

How it works:

The concept is fairly trivial really – each company delegates read-only access to their Azure Active Directory to a custom application; this is no different to granting a standard application such as Jira (from Atlassian) access to your Azure Active Directory for authentication… The corresponding company cannot retrieve anything other than standard AD attributes, nor can they attempt to authenticate as you (granting read-only access generates a key that is exchanged with the other company, and it can be revoked at any time).

Once each company has established application access to their AzureAD Instance, the relevant details are exchanged and loaded into the ADCST tool.

ADCST Flow

Now, when the application is invoked, user objects from Company A that were previously synced to Office 365/Azure AD via Directory Sync are retrieved by ADCST. They are then added to Company B’s on-premise Active Directory as contact objects and synced to Company B’s instance of Office 365, where they later appear in the GAL. If a user leaves Company A and their account is deleted, the corresponding contact object is removed from Company B’s GAL.

  • Objects to be synced are determined by a membership of a group (that is, users in Company A must be in a specified group otherwise they will not be synced and created)
  • Objects will only be created in a Target OU as specified in configuration.
  • Only the following attributes are synced (if they exist):
    • givenName
    • mail
    • sn
    • displayName
    • title
    • l
    • co & c
    • st
    • streetAddress
    • department
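
For the curious, the retrieval side of all this is just read-only queries against the Azure AD Graph API using the application credentials you will register in the setup below. Here is a minimal PowerShell sketch of that kind of query (this is not ADCST’s code – the tenant name, client ID and key are placeholders, and the api-version reflects the Graph API of the time):

# Client-credential token against the Azure AD Graph endpoint (read-only app permission)
$tenant   = "example.onmicrosoft.com"       # placeholder tenant
$clientId = "<application client id>"       # placeholder
$secret   = "<application key>"             # placeholder

$token = Invoke-RestMethod -Method Post -Uri "https://login.windows.net/$tenant/oauth2/token" -Body @{
    grant_type    = "client_credentials"
    client_id     = $clientId
    client_secret = $secret
    resource      = "https://graph.windows.net"
}

# Pull users from the remote directory and project the attributes ADCST cares about
$users = Invoke-RestMethod -Method Get `
    -Uri "https://graph.windows.net/$tenant/users?api-version=1.5" `
    -Headers @{ Authorization = "Bearer $($token.access_token)" }

$users.value | Select-Object displayName, givenName, surname, mail, jobTitle, city, country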

Phew, okay… how to set it up:

  1. Download a copy of ADCST or go and grab the source from GitHub 🙂
  2. Access your Azure Active Directory and complete the following:
    1. Access your Office365 Portal, select AzureAD from the left hand bar of the Admin portal
    2. Once the Azure Portal loads, click on Active Directory in the left-hand nav.
    3. Click the directory tenant where you wish to register the sample application.
    4. Click the Applications tab.
    5. In the drawer, click Add.
    6. Click “Add an application my organization is developing”.
    7. Enter a friendly name for the application, for example “Contoso ADCST”, select “Web Application and/or Web API”, and click next.
    8. For the Sign-on URL, enter a value (NOTE: this is not used for the console app, so is only needed for this initial configuration): “http://localhost”
    9. For the App ID URI, enter “http://localhost”. Click the checkmark to complete the initial configuration.
    10. While still in the Azure portal, click the Configure tab of your application.
    11. Find the Client ID value and copy it aside, you will need this later when configuring your application.
    12. Under the Keys section, select either a 1-year or 2-year key – the key value will be displayed after you save the configuration at the end, and you should copy it to a secure location. NOTE: the key value is only displayed once, and you will not be able to retrieve it later.
    13. Configure permissions – under the “Permissions to other applications” section, configure permissions to access the Graph API (Windows Azure Active Directory). For “Windows Azure Active Directory”, under the first permission column (Application Permissions), select “Read directory data”. Note: this configures the app to use OAuth client credentials and gives it read-only access to the directory.
    14. Select the Save button at the bottom of the screen – upon successful configuration, your Key value should now be displayed – please copy and store this value in a secure location.
    15. You will need to update the ADCST.exe.config of ADCST with the updated values.
      1. AzureADTenantName = Update your tenant name for the authString value (e.g. example.onMicrosoft.com)
      2. AzureADTenantId = Update the tenantId value with your own tenantId. Note: your tenantId can be discovered by opening the federation metadata document https://login.windows.net/example.onMicrosoft.com/FederationMetadata/2007-06/FederationMetadata.xml – replace “example.onMicrosoft.com” with your tenant’s domain (any domain owned by the tenant will work). The tenantId is the GUID that forms part of the STS URL in the first XML node (“EntityDescriptor”), e.g. “https://sts.windows.net/<your-tenantId>/”
      3. AzureADClientId = This is the ClientID noted down previously
      4. AzureADClientSecret = This is the key value noted down previously
      5. AzureADUserGroup = This group contains all of the user accounts in the remote Azure AD that you wish to pull into your on-prem Active Directory as contact objects.
      6. FQDomainName = FQDN of your on-prem Active Directory Domain
      7. DestinationOUDN = The distinguished name of the target OU that you wish to create the contact objects in
      8. ContactPrefix = This string will populate the Description field in Active Directory
      9. AllowCreationOfADObjects = Self-explanatory; allow ADCST to create contact objects in AD
      10. AllowDeletionOfADObjects = Self-explanatory; allow ADCST to delete contact objects in AD when they are no longer required
      11. VerboseLogUserCreation = Log contact creation to Debug Log
      12. VerboseLogUserDeletion = Log contact deletion to Debug log
  3. Create a service account to run this as and delegate it create/delete rights on the OU container in your on-prem Active Directory (see this post for some pretty good instructions – we want to be able to create/delete user accounts [even though these will be contact objects])
  4. Create a scheduled task to call the ADCST executable on a regular basis, running as the service account that you just created.
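
For that last step, something like the following works (a sketch only – the install path, start time and service account name are placeholders for whatever you use):

# Run ADCST daily as the delegated service account (path, time and account are placeholders)
$action  = New-ScheduledTaskAction -Execute "C:\Tools\ADCST\ADCST.exe" -WorkingDirectory "C:\Tools\ADCST"
$trigger = New-ScheduledTaskTrigger -Daily -At 6am
Register-ScheduledTask -TaskName "ADCST GAL Sync" -Action $action -Trigger $trigger `
    -User "RESDEVOPS\svc-adcst" -Password "<service account password>"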

I suggest you do some solid testing prior to rolling this into production (with read-only access you won’t be able to do any damage to the Azure AD… the on-prem AD is the one you don’t want to screw up).

The above implementation certainly won’t appeal to everybody, and it could be argued that this is a specific edge case, but it appears to do the job nicely for what was required. Let me know if you have any thoughts or suggestions.

-Patrick

The fine print: The use of this software is at your own risk – No warrantee is expressed or implied.

EDIT #1 (7/06/15): I’ve gone ahead and refactored a few things on this project. The following has changed:

  • ADCST will now sync nominated groups to Active Directory as contact objects (I want to change this to normal group objects with members expanded from the source groups). (The synced group’s destination [distinguished name] is defined using the “GroupsDestinationOUDN” App.Config value; the source group is defined using the “AzureADGroupsGroup” App.Config value.)
  • Users that are synced are now added to a nominated security group – this can be used to lock down groups/resources in Exchange to internal users plus the contact objects contained in this new security group, to prevent spam. (The group [distinguished name] is defined using the “PermittedSendersGroupDN” App.Config value.)

Mitigate MS15-034 using F5 LTM iRules

So last Tuesday Microsoft announced MS15-034, a critical security bug in the HTTP.sys kernel driver impacting pretty much all versions of Windows; this meant anything using this driver instantly became vulnerable, including IIS 🙁

Whilst the article specified that this issue could allow an attacker to execute arbitrary code on a remote server, as of writing no proof of concept exists. That said, we do know how to bring a server to its knees by overflowing an integer: sending a large Range HTTP header in a request to an IIS web server generally results in a complete system lockup or blue screen (bad!).

So for those of you who cannot patch immediately for whatever reason, but happen to have F5 LTM infrastructure, good news – you could potentially use the following to mitigate the onslaught of bots already scanning for this vulnerability / script kiddies attempting to break websites for the lulz.

To demonstrate the issue:

I wanted to see how hard it was to exploit this vulnerability. I installed a fresh copy of Windows Server 2012 Standard in a VM and proceeded to add the IIS role.

The bug is exploitable if output caching (kernel cache) is enabled in IIS (which it is by default).
Output caching makes web pages much more responsive: when a user requests a particular page, the web server returns it to the client browser and stores a copy of that processed page in memory, serving that copy on subsequent requests for the same content and eliminating the need to reprocess the page.

IIS 8 Output Caching

If we fire up wget/curl (it could probably be done from PowerShell too if you wanted – see the sketch after the curl example) and request the image that shows up on IIS 8’s ‘Welcome’ page with an HTTP Range header containing a huge value, we will receive a blue screen (it took me three tries to get the result with curl).

i.e.

wget --header="Range: bytes=18-18446744073709551615" http://192.168.1.129/iis-85.png

or

curl -v 192.168.1.129/iis-85.png -H "Range: bytes=18-18446744073709551615"
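
And the PowerShell take mentioned above: Windows PowerShell’s Invoke-WebRequest can be fussy about setting the restricted Range header, so this sketch drops down to .NET’s HttpClient instead (the IP and path match the example above – obviously only point this at a lab box you are happy to blue-screen):

# Send the same oversized Range request via .NET HttpClient (lab use only)
Add-Type -AssemblyName System.Net.Http

$client  = New-Object System.Net.Http.HttpClient
$request = New-Object System.Net.Http.HttpRequestMessage
$request.Method     = [System.Net.Http.HttpMethod]::Get
$request.RequestUri = [Uri]"http://192.168.1.129/iis-85.png"
$request.Headers.TryAddWithoutValidation("Range", "bytes=18-18446744073709551615") | Out-Null

$response = $client.SendAsync($request).Result
"{0} {1}" -f [int]$response.StatusCode, $response.ReasonPhrase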


To mitigate the Vuln:

To mitigate the vulnerability, go ahead and add the below iRule to your F5 device, near the top of the list of rules on any virtual servers that are balancing Windows web servers (this requires an HTTP profile on the virtual server, as we’re working at Layer 7):

##############################################
# Patrick S - 16/04/15
# This iRule will reject requests whose "Range" header contains a LONG range value to
# mitigate the effects of https://technet.microsoft.com/library/security/MS15-034
##############################################
when HTTP_REQUEST {

    set vip [IP::local_addr]:[TCP::local_port]

    # If the Range header is present and matches the regex, log the attempt,
    # return a 400 (as a patched server would) and then drop the connection.
    if { ([HTTP::header exists "Range"]) and ([HTTP::header "Range"] matches_regex {bytes\s*=.*([0-9]){10,}.*}) } {
        log local0. "Potential MS15-034 Exploitation Attempt to [HTTP::host] in uri [HTTP::uri] from [IP::client_addr] on VIP $vip"
        HTTP::respond 400
        drop
    }
}

Note the regular expression in the matches_regex check above. It is based on an iRule written by the community for the F5 ASM, and it matches any Range header whose byte range contains a run of 10 or more digits. You can test it with one of the better online regex testers on the net: https://regex101.com


The above regex also catches some other (less well-known) forms of range request:

  1. Range requests with white-space (i.e. Range: bytes = 2 – 18446744073709551615)
  2. Range requests with leading-zeros (i.e. Range: bytes=2-018446744073709551615)
  3. Range requests that contain Multiple range requests (i.e. Range: bytes=2-3,4-18446744073709551615)
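
If you want to sanity-check the pattern against these forms before pushing the iRule, here is a quick PowerShell test of the same expression (the last entry is a normal range that should not match):

# The pattern below is copied from the matches_regex expression in the iRule
$pattern = 'bytes\s*=.*([0-9]){10,}.*'

@(
    'bytes=18-18446744073709551615',        # the wget/curl example
    'bytes = 2 - 18446744073709551615',     # whitespace variant
    'bytes=2-018446744073709551615',        # leading zeros
    'bytes=2-3,4-18446744073709551615',     # multiple ranges
    'bytes=0-1023'                          # benign range - should NOT match
) | ForEach-Object { '{0,-40} -> {1}' -f $_, ($_ -match $pattern) }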

Good luck, let me know if you have any questions… and…

Patch now!!!

The above should only be a stop-gap – patch this problem as soon as possible, or you’re gonna have a bad day!

Download patches: https://technet.microsoft.com/en-us/library/security/ms15-034.aspx
Further reading on KB3042553: https://support.microsoft.com/en-us/kb/3042553
Potential work-around (if you don’t care about performance): https://technet.microsoft.com/en-us/library/security/ms15-034.aspx#ID0EHIAC

-Patrick

Windows Server App-Fabric “failed to connect to hosts in cluster”

I’ve just completed the process of building a new AppFabric cache cluster on version 1.1, with a SQL backend, over an existing XML-based 1.0 cluster… The new version appears to fix a lot of issues that existed in v1.0, plus by installing a Cumulative Update you are able to use Windows Server 2012 Standard to host a highly available cache cluster (now that Standard includes cluster functionality that previously existed only in the Enterprise edition of Server 2008/R2).

Fortunately my old automated deployment scripts did not need that much tweaking aside from the obvious changes required to use a SQL server to store the configuration + changing my secondary cache count for scaling.

After the script established the cache cluster and added the host, I ran into an issue when attempting to start the cluster to add the individual caches and assign permissions. I got the following error: “Use-CacheCluster : ErrorCode<ERRCAdmin040>:SubStatus<ES0001>:Failed to connect to hosts in the cluster”.

There appeared to be no real help on MSDN for this one… A bit of research yielded the following fixes (a PowerShell sketch of items 2 and 3 follows the list):

  1. Ensure that the AppFabric Cache Host can resolve itself (and other Cache Lead-Hosts) via DNS, Hosts Files etc.
  2. Ensure that the Remote Registry service has been started and that the “Remote Service Management (NP-In)” Windows Firewall rule is enabled.
  3. Ensure that firewall rules exist to allow AppFabric communication (e.g. port 22233 for the cache port, 22234 for the cluster port, etc.).
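
On Server 2012 the second and third items boil down to a few lines of PowerShell (a sketch – adjust the port list to whatever your cluster configuration actually uses):

# 2. Start the Remote Registry service (and keep it started) + allow remote service management
Set-Service -Name RemoteRegistry -StartupType Automatic
Start-Service -Name RemoteRegistry
Enable-NetFirewallRule -DisplayGroup "Remote Service Management"

# 3. Allow the AppFabric caching ports (22233 cache, 22234 cluster, 22235 arbitration, 22236 replication)
New-NetFirewallRule -DisplayName "AppFabric Caching Service" -Direction Inbound `
    -Protocol TCP -LocalPort 22233,22234,22235,22236 -Action Allow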

My script opened the firewall ports but didn’t start the Remote Registry service… After starting the service and reconfiguring the cache once more, everything came online – I was able to add all of my cache nodes to the brand new cluster.
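
Once the hosts can talk to each other, a quick way to confirm the cluster is happy (a sketch assuming the SQL configuration provider described above – the server and database names are illustrative):

# Point the admin cmdlets at the SQL-backed cluster configuration and check host status
Import-Module DistributedCacheAdministration
Use-CacheCluster -Provider System.Data.SqlClient `
    -ConnectionString "Data Source=SQL01;Initial Catalog=AppFabricConfig;Integrated Security=True"
Get-CacheHost             # every host should report Service Status: UP
Get-CacheClusterHealth    # named-cache health breakdown across hosts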

IIS Logging broken when traffic is proxied via F5 NLB

So you have a new F5 NLB, you have a new site hosted on IIS behind said F5… and now you have broken IIS logging…

You may find that after deploying the F5, IIS logging reflects the internal IP of the F5 unit rather than the address of the actual client. Why? When requests are passed through proxies/load balancers, the client no longer has a direct connection to the web server itself; all traffic is proxied by the F5 unit, so to IIS the traffic looks like it is coming from the last hop in the chain (the F5).

X-Forwarded-For Diagram

So how do we get our logging back? Easy – it just takes two simple steps (and no downtime).

First, insert an “X-Forwarded-For” header into each request to the web server. This is a non-standardised (de-facto) header used to identify the originating IP address of a client connecting to a web server via an HTTP proxy or load balancer.
To insert the X-Forwarded-For header:

  1. From your F5 web console select Local Traffic > Profiles > Services
  2. Choose One of your custom HTTP profiles or select the default HTTP profile to edit all child profiles
  3. Scroll down the page and locate the “Insert X-Forwarded-For” property and enable it (you may need to select the custom check-box first depending on your profile type)
  4. Select update to apply changes

Next, install an ISAPI filter developed by F5 that rewrites IIS’s logged client IP using the X-Forwarded-For HTTP header (syntax: X-Forwarded-For: clientIP, Proxy1IP, Proxy2IP). The filter is supported on both IIS 6 and 7.
Download the ISAPI filter here: https://devcentral.f5.com/downloads/codeshare/F5XForwardedFor.zip

  1. Copy the F5XForwardedFor.dll file from the x86\Release or x64\Release directory (depending on your platform) into a target directory on your system.  Let’s say C:\ISAPIFilters.
  2. Ensure that the containing directory and the F5XForwardedFor.dll file have read permissions by the IIS process.  It’s easiest to just give full read access to everyone.
  3. Open the IIS Admin utility and navigate to the web server you would like to apply it to.
  4. For IIS6, Right click on your web server and select Properties.  Then select the “ISAPI Filters” tab.  From there click the “Add” button and enter “F5XForwardedFor” for the Name and the path to the file “c:\ISAPIFilters\F5XForwardedFor.dll” to the Executable field and click OK enough times to exit the property dialogs.  At this point the filter should be working for you.  You can go back into the property dialog to determine whether the filter is active or an error occurred.
  5. For IIS7, you’ll want to select your website and then double-click on the “ISAPI Filters” icon that shows up in the Features View. In the Actions pane on the right select the “Add” link and enter “F5XForwardedFor” for the Name and “C:\ISAPIFilters\F5XForwardedFor.dll” for the Executable. Click OK and you are set to go.
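
If you would rather script the IIS7 step (handy when there is a farm of web servers behind the F5), appcmd can register the filter for you – a sketch, assuming the same DLL path as above and “Default Web Site” as the site name:

# Register the ISAPI filter against "Default Web Site" (adjust site name/path as required)
& "$env:windir\System32\inetsrv\appcmd.exe" set config "Default Web Site" `
    -section:system.webServer/isapiFilters `
    "/+[name='F5XForwardedFor',path='C:\ISAPIFilters\F5XForwardedFor.dll',enabled='True']" `
    /commit:apphost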

If you’re that way inclined – there is also an IIS Module available if you think ISAPI filters are not for you (See: https://devcentral.f5.com/weblogs/Joe/archive/2009/12/23/x-forwarded-for-http-module-for-iis7-source-included.aspx)

Let me know if you have any questions 🙂

-Patrick

Unable to Open Lync CSCP from Lync Server

When deploying a Lync server the other day I spent a good 15 minutes (stupid me) trying to figure out why I couldn’t open the Lync Control Panel (CSCP) from the Lync server itself – I kept getting:

HTTP Error 401.1 – Unauthorized

You do not have permission to view this directory or page using the credentials that you supplied.

I had defined an admin URL when establishing my topology (and published it), plus I had set the appropriate DNS records within my domain so the CSCP site resolved – still no dice. I finally tried from another server which had Silverlight installed… it worked!?!

So what was the cause?
Back in Windows Server 2003 SP1 (and subsequent versions of Windows) Microsoft introduced a loopback security check. This feature prevents access to a web application using a fully qualified domain name (FQDN) if the attempt to access it takes place from the machine that hosts that application. The end result is a 401.1 Access Denied from the web server and a logon failure event in the Windows event log.

A workaround for the issue, if you really want to access the Lync CSCP from the Lync server itself (using anything other than https://localhost/cscp):

  1. Logon on to the Lync server with an account that is member of the local admins group
  2. Start “regedit”
  3. Navigate and expand the following reg key “HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters”
  4. Right-click Parameters, click New, and then click DWORD (32-bit) Value.
  5. In the Value name box, type DisableStrictNameChecking, press ENTER, and then set its value data to 1.
  6. Now navigate to “HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0”
  7. Right-click MSV1_0, point to New, and then click Multi-String Value.
  8. Type BackConnectionHostNames, and then press ENTER
  9. Right-click BackConnectionHostNames, and then click Modify.
  10. In the Value data box, type the host name (or the host names) for the sites that are on the local computer, and then click OK.
  11. Quit Registry Editor, and then restart your server
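
If you prefer to script it, the same two registry changes in PowerShell (a sketch – replace admin.contoso.com with your own admin URL host name, and run from an elevated prompt):

# Relax strict name checking and register the admin URL as a permitted back-connection host name
New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" `
    -Name DisableStrictNameChecking -PropertyType DWord -Value 1 -Force

New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0" `
    -Name BackConnectionHostNames -PropertyType MultiString -Value @("admin.contoso.com") -Force

Restart-Computer   # as per step 11 above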

You should be good to go 🙂

For more reading + another possible (and less secure) workaround for a lab environment, see KB896861

-Patrick

Manual WSUS Sync using Powershell

A quick blog post tonight:

When setting up WSUS, it’s common practice to trial updates internally prior to deployment across an entire environment…

For a recent WSUS setup I completed, we decided to leave auto-approval on for all Windows Critical and Security updates so they could be tested after release each Patch Tuesday. After Microsoft’s recent spate of out-of-band updates we were finding that machines were being updated halfway through the month, due to the limited synchronisation controls you have with WSUS – you can either sync manually or pick the longest interval between automatic syncs, which is once every 24 hours.

Using some cool tips from the Hey Scripting Guy Blog I’ve slapped together this script that now runs as a scheduled task to download updates once a month on Patch Tuesday.

This is an impractical approach to updating for most – critical updates should be applied as soon as possible – however forcing a manual WSUS sync could come in handy for a select few:


$ErrorActionPreference = "SilentlyContinue"

# WSUS Connection Parameters:
[String]$updateServer = "WSUS.resdevops.com"
[Boolean]$useSecureConnection = $False
[Int32]$portNumber = 80

# Load .NET assembly
[void][reflection.assembly]::LoadWithPartialName("Microsoft.UpdateServices.Administration")

# Connect to WSUS Server
$Wsus = [Microsoft.UpdateServices.Administration.AdminProxy]::getUpdateServer($updateServer,$useSecureConnection,$portNumber)

# Perform Synchronization
$Subscription = $Wsus.GetSubscription()
$Subscription.StartSynchronization()

Write-Host "WSUS Sync Started/Queued; check the WSUS console or event log for any errors."
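
StartSynchronization() only queues the sync, so the script exits straight away. If you would rather have the scheduled task wait and log the outcome itself, something along these lines can be appended (a sketch using the same administration API):

# Optional: wait for the synchronization to complete and report the result
Start-Sleep -Seconds 10   # give the sync a moment to move out of "NotProcessing"
while ($Subscription.GetSynchronizationStatus() -ne [Microsoft.UpdateServices.Administration.SynchronizationStatus]::NotProcessing)
{
    Start-Sleep -Seconds 30
}
$lastSync = $Subscription.GetLastSynchronizationInfo()
Write-Host "WSUS sync finished: $($lastSync.Result) (started $($lastSync.StartTime), finished $($lastSync.EndTime))"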

-Patrick

Remote logoff all Windows users with Powershell

An annoyance for anyone running a large scripted software release is when the deployment fails halfway through because someone has left a logged-on Windows session with an open file or window. You may then need to invest large amounts of time identifying the cause, reverting changes that only half-deployed and eventually re-running the release package…

In one of my earlier blog posts I explained how I would drop servers into maintenance mode when running a software deployment to prevent being spammed by Operations Manager Alerts… I targeted my specific environment in SCOM using a filter that would only add servers with a NetBIOS name starting with PROD or TEST.

I have quickly whipped up something similar to boot all users off of servers (log them off) prior to executing a release package. The snippet below grabs all servers in the Servers OU in Active Directory and throws them into an array. It then enumerates each server and uses the built-in “MSG” command (similar to NET SEND) to warn each logged-on user to save their work and log off, with a countdown timer. Once the timer hits zero, the Win32Shutdown() WMI method is run to force log off all users so the deployment can proceed.

This script could easily be turned into a function that takes a parameter for Prod/Test/Dev environments – a rough sketch of that follows the script below.

Enjoy…

#Load Active Directory Module
Import-Module ActiveDirectory;

[int]$timeleft = 5 #Countdown time until Logoff (Mins)
[System.Reflection.Assembly]::LoadWithPartialName("System.Diagnostics")
$countdowntimer = new-object system.diagnostics.stopwatch

#Establish prodservers array + Enumerate all servers with a name starting with "PROD" and add to array
[array] $prodservers= @()
Get-AdComputer -filter 'Name -like "PROD*"' -SearchBase "OU=Servers,DC=resdevops,DC=com" -Properties "Name" | foreach-object {
$prodservers += $_.Name
}

while($timeleft -gt 0)
 {
	$countdowntimer.start()
	foreach ($server in $prodservers) {msg * /SERVER:$server "A software deployment is scheduled for the PROD environment; Your remote session will be logged off automatically in $timeleft minutes, you should save your work."}
	while ($countdowntimer.elapsed.minutes -lt 1) {write-progress -activity "Elapsed Time" -status $countdowntimer.elapsed}
	$countdowntimer.reset()
	$timeleft--
 }

#Once the countdown is complete - run the Win32Shutdown WMI method with the "4" flag
#to force log off all users
foreach ($server in $prodservers) 
{
	Write-Host "Logging all users off of $server"
	(gwmi win32_operatingsystem -ComputerName $server).Win32Shutdown(4)
}
Write-Host "`n`nSent Logoff command to $($prodservers.count) servers" -foregroundcolor Green;
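
And the rough sketch of the function idea mentioned above – the environment prefix becomes a parameter, while everything else (OU, naming convention) is assumed to match the script:

# Sketch: wrap the logic above so the target environment is a parameter
function Send-EnvironmentLogoff {
    param(
        [ValidateSet('PROD','TEST','DEV')]
        [string]$Environment = 'PROD',
        [int]$MinutesWarning = 5
    )
    Import-Module ActiveDirectory
    $servers = Get-ADComputer -Filter "Name -like '$Environment*'" `
        -SearchBase "OU=Servers,DC=resdevops,DC=com" |
        Select-Object -ExpandProperty Name

    # Warn logged-on users once a minute until the countdown expires
    for ($t = $MinutesWarning; $t -gt 0; $t--) {
        foreach ($server in $servers) {
            msg * /SERVER:$server "A software deployment is scheduled for the $Environment environment; you will be logged off in $t minute(s). Save your work."
        }
        Start-Sleep -Seconds 60
    }

    # Force log off all users on each server
    foreach ($server in $servers) {
        Write-Host "Logging all users off of $server"
        (Get-WmiObject Win32_OperatingSystem -ComputerName $server).Win32Shutdown(4)
    }
}

# e.g. Send-EnvironmentLogoff -Environment TEST -MinutesWarning 10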

-Patrick