Modest Refinement — Evolving Ad Hoc Scripts


Recently, I wrote about how to work with Active Directory Group Policy via Windows PowerShell and presented a Windows PowerShell script that was the result of a quick hack lobotomy (um, translation) of a VBscript example.

Admittedly, as a software engineer (not full time anymore; remember, I teach too now), while I acknowledge, use, and even write ad hoc scripts, something deep inside me yearns to optimize them, modularize them, and treat them nicely. Let’s look at a couple of things we could do with the aforementioned ad hoc script. I’m not going to start here by reposting the original, but by recounting snippets and eventually composing a new whole. If you want the original, please read “This is Your Brain on Active Directory-based Group Policy.”

At the end of the GPOPolicyVer.ps1 script, we have a Group Policy Object (GPO) in a variable $oGPO. The script displays several version number properties of that object.

"The user version number from the Active Directory = " + $oGPO.UserDSVersionNumber "The computer version number from the Active Directory = " + $oGPO.ComputerDSVersionNumber "The user version number from the Sysvol = " + $oGPO.UserSysvolVersionNumber "The computer version number from the Sysvol = " + $oGPO.ComputerSysvolVersionNumber

 

This type of redundant verbosity is common in scripts and application software, and it seems rampant in the industry. So much of the same text and so many of the same variable references are repeated again and again. Actually, this tiny example isn’t anywhere near as bad as a lot of code I’ve seen and worked with. But consider the following “adjustment.”

$tags = @{ DS="Active Directory"; Sysvol="SysVol" } "DS","Sysvol" |%{ $x = $_; "User","Computer" |%{ "The {0} version number from {1} = {2}" -f $_, $tags[$x], $oGPO.$("{0}{1}VersionNumber" -f $_, $x ) } }

 

Yes, there are surely tighter, denser ways to express this, but let’s not get our PowerShell egos in a knot. Some people into PowerShell seem to be obsessed with making everything a “one-liner” – an addiction I got over years ago. The above snippet was originally written as two lines, but I’ve transcribed it here as four for “readability,” and it could easily be expanded further for better clarity. Anyway, before I digress, let’s focus on the task at hand – what on earth does this do? And why might we want to use this sort of technique in other scripts?

Let’s start with philosophy. Instead of writing out some text including some data value, and then doing it again, and again, and again, let’s step back for a moment and think about what we want to do. We want several values, in this case four. And actually, there are two aspects of two kinds of values we want. Therefore, we iterate through the first two options, then inside that loop we iterate through the next two, each time emitting one value. As two times two is four, this nested iteration yields four values.

Enough philosophy. Let’s dissect this lovely code snippet and walk through it.

First we set up an associative array with the nice human-readable names of the directory service and the system volume. This isn’t absolutely essential, and it could have been embedded in the next line instead of being assigned to a variable, but in a later refinement we’ll see that this is moved outside of another loop, so in this version it’s defined before the loops. The variable $tags is assigned the associative array with two entries. The @{} notation delimits the associative array (hash table), with the entries separated by a semicolon. Each entry consists of a tag (DS or Sysvol), an equals sign, and the string used as the value. This will be used later on to display the name instead of the abbreviation. In this example the tag and the value for Sysvol are pretty much the same, but we could have used Sysvol="System Volume" or Sysvol="SYSVOL share" or another variation. The point is to use a generic technique.

The next part of this code has a list of two strings which are passed to a pipeline which runs a ForEach-Object loop on them. "DS","Sysvol" is the specification of the strings, the vertical bar (|) sends those values down the pipeline, and the percent sign (%) is an alias for the ForEach-Object cmdlet. The code block for this loop begins with the curly brace right after the percent sign and ends at the end of the code snippet with the closing curly brace.

We do two things inside that outer loop. First, we save away the value of the iteration variable $_ into a variable $x. The first time through the loop, $x will have the value “DS” and the second time $x will be “Sysvol,” because ForEach-Object assigns the special variable $_ to each of the values piped into it, one at a time.

Once we’ve saved that value away, we begin another pipeline and ForEach-Object structure. This time we have the two strings "User","Computer" |%{ … } to loop through the values “User” and “Computer” using ForEach-Object. This inner loop is where the real work resides. We just use the format operator (-f) to display a string. This has three or five parts, depending on how you count. On the left we have the format string "The {0} version number from {1} = {2}" and then the format operator (-f) itself. Two down, the values to go. The third part is a list of three values; if we wanted to treat this comma-separated list of values as the 3rd, 4th, and 5th parts of the expression, that wouldn’t be outlandish – it’s a matter of opinion.

Let’s look at these three data values to be injected into the format string before the result is output (displayed) for the user. The first is easy: $_ is the iteration variable for the inner loop and will thus have either the value “User” or “Computer” depending on which trip we’re on through the loop. According to the format string, argument number zero (the first one in the list on the right-hand side of the -f operator) will be inserted after the word “The” in place of {0}; therefore we’ll first get “The User version number…” and then “The Computer version number…” the 1st and 2nd times through the inner loop, respectively.

The second value injected in the format string in place of {1} will be $tags[$x] where $x has the value “DS” or “Sysvol” for the 1st and 2nd times through the outer loop. This results in the value “Active Directory” or “SysVol” being put into the resultant output string.

But we’re not done yet, and the third value injected in place of {2} is the most fun. This value is an attribute (a.k.a. property) of the object in the variable $oGPO. But which one? Let’s remember why we’re here. We wanted to avoid literally repeating much of the same text and data specifications. There are humongous motivations for doing so which I’m not going to state here and which are not blatantly obvious from this tiny example of four values. But here’s the gist. Each time through the loop, we want either the UserDSVersionNumber, ComputerDSVersionNumber, UserSysvolVersionNumber, or ComputerSysvolVersionNumber. How do we pick one? Shall we use a switch statement? A cascading series of if/else? Think again. We have all of the pieces we need in front of us to identify which attribute to use, yet those pieces may need to be… well, pieced together. There are numerous ways to do this, and if we gathered 100 PowerShell-fluent people, I’m sure they’d come up with lots of different ways. We could use various string concatenation techniques, take the concatenated value, and presto, we’d be set. But we’ve already used a similar technique in this code snippet which we can use again – the format operator! Behold, magic: "{0}{1}VersionNumber" -f $_,$x and we’re golden.

Almost. We still need to use the result of that format operator, which generates the proper attribute name, and access that attribute on the $oGPO variable. This is done by substituting the name back into the expression with a dollar sign and parentheses around the expression. Thus, the whole third parameter for the first -f operator is:


$oGPO.$("{0}{1}VersionNumber" -f $_, $x )

The first time through the outer and inner loops this would reference $oGPO.UserDSVersionNumber, and the other attributes on the subsequent trips through the loops.
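
If this dynamic property access technique via $object.$( expression ) is new to you, here’s a minimal standalone sketch you can try anywhere (the use of Get-Item and the Name/FullName properties is just a convenient illustration, nothing GPO-specific):

# Minimal sketch of dynamic property access with $object.$( expression ).
# Get-Item on a folder returns an object with Name and FullName properties.
$obj = Get-Item $env:windir
"","Full" |%{ $obj.$("{0}Name" -f $_) }   # emits Name, then FullName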

That’s it for collapsing the easy-to-read “string = value” type of output into a condensed double-loop structure. Now let’s move on to something else we can do with the original example script. Based on a translation from the VBscript version, which pulled the first item of a group policy object search using set oGPO = oGPSearchResults.Item(1), we simply used $oGPO = $oGPSearchResults[0] for similar behavior. Following that was the display of the version numbers, which we’ve already transmogrified above.

What if we got back a number of GPOs from the search? We’ll make that more likely in a moment, but first, let’s replace that crazy assumption that we only want element zero of the array (yes, $oGPSearchResults was forced/cast as an array earlier in the original translation using @(…), but that’s another story). Let’s use the foreach construct of PowerShell, which is notably distinct from the ForEach-Object cmdlet (although that cmdlet is aliased as both % and foreach), as I pointed out in the course materials written for Microsoft’s course 6434. We don’t need to know the distinctions here; we just need to follow an example.

foreach( $oGPO in $oGPSearchResults ){ … }

This foreach loop should be a nice replacement for the $oGPO = $oGPSearchResults[0], and we’d put the display snippet we munged earlier in the body instead of that evil ephemeral ellipsis.

But wait. How does a search with searchOpEquals match more than one GPO? Well, I’ll leave that as an exercise for the reader, and in the meantime let’s switcheroo that operator to searchOpContains, which will match substrings in the search. Because multiple GPOs could come out in the results and we may not be sure what their exact names are, we’ll add another line like "`n--- GPO {0} ---" -f $oGPO.DisplayName so we know which GPO we’re getting version numbers for. Also, as promised earlier, we’ll rotate the $tags assignment out of the foreach loop. So far, our revised script would look like this.

# Script to identify the Group Policy version number
# PowerShell version, retromutated Mach 1
$USE_THIS_DC = 0
$strPolicyName = "TEST"
$strDC = "delta.hq.local"
$strDomainName = "hq.local"

# Create objects for searching the domain
$oGPM = New-Object -ComObject "GPMGMT.GPM"
$oGPConst = $oGPM.GetConstants()
$oGPSearch = $oGPM.CreateSearchCriteria()
$oDom = $oGPM.GetDomain( $strDomainName, $strDC, `
    $USE_THIS_DC )
$oGPSearch.Add( $oGPConst.searchPropertyGPODisplayName,
    $oGPConst.searchOpContains, $strPolicyName )
$oGPSearchResults = @($oDom.SearchGPOs( $oGPSearch ))

# Verify we have found a GPO. If not quit.
if( $oGPSearchResults.Count -le 0 ){
    "The Group Policy object " + $strPolicyName +
        " was not found`non Domain Controller " + $strDC
    return
}
"Got {0} GPOs back..." -f $oGPSearchResults.Count

# If found policy then print out version numbers
$tags = @{ DS="Active Directory"; Sysvol="SysVol" }
foreach( $oGPO in $oGPSearchResults ){
    "`n--- GPO {0} ---" -f $oGPO.DisplayName
    "DS","Sysvol" |%{ $x = $_;
        "User","Computer" |%{
            "The {0} version number from {1} = {2}" -f
                $_, $tags[$x], $oGPO.$("{0}{1}VersionNumber" -f $_, $x )
        }
    }
}

But we’re not done yet. Let’s take this one step further. Having a script which gets GPOs could be really handy, but why should it be coupled with the display of version numbers? A bit of well-placed modularity could be sprinkled on this prototype. Then this version number printing could depend on some generic GPO-fetching code.

Let’s usher in this next stage of the transformation with a variation of the above script with influences from the three forms of a Get-GPO function I included in the course 6434 “Automating Windows Server 2008 Administration with Windows PowerShell” supplementary materials. Let’s use a form similar to my Get-GPO function with the name search functionality, keeping the contains rather than strict equals matching, yet not complicating it here with the abilities of matching backup GPOs or starter GPOs. Let me know if you want a more flexible version.

function Get-GPO( $name = "", $domain = "nanoware.net" ){
    $gpm = new-object -com gpmgmt.gpm
    $gpmConstants = $gpm.getConstants()
    $dom = $gpm.getdomain( $domain, "", $gpmConstants.UseAnyDC )
    # get all GPOs (unless a name is given)
    $sc = $gpm.CreateSearchCriteria()
    if( $name -ne "" ){
        $sc.Add( $gpmConstants.SearchPropertyGPODisplayName,
            $gpmConstants.SearchOpContains, # or Equals
            $name )
    }
    $all = @($dom.SearchGPOs( $sc ))
    return $all
}

Such a function could be defined in a script that’s run before doing Group Policy work, or in a script run as part of a Windows shortcut, a calling script, or a profile so that it’s available when you need it.

How would we use this generic Get-GPO function to simplify the earlier script? Consider the following function.

# Function to identify the Group Policy version number
# PowerShell version, retromutated Mach 2
function Get-GPOVersions( $name = "" ){
    $sr = Get-GPO $name
    if( $sr.Count -le 0 ){
        "The Group Policy object {0} was not found." -f $name
        return
    }
    "Got {0} GPOs..." -f $sr.Count
    # If found policy then print out version numbers
    $tags = @{ DS="Active Directory"; Sysvol="SysVol" }
    foreach( $oGPO in $sr ){
        "`n--- GPO {0} ---" -f $oGPO.DisplayName
        "DS","Sysvol" |%{ $x = $_;
            "User","Computer" |%{
                "The {0} version number from {1} = {2}" -f
                    $_, $tags[$x], $oGPO.$("{0}{1}VersionNumber" -f $_, $x )
            }
        }
    }
}

How could we take this one step further? By taking the Get-GPO invocation out of Get-GPOVersions, changing Get-GPOVersions into a filter or function which accepts pipeline input, and requiring that the caller simply pipe the output of Get-GPO into Get-GPOVersions.
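
Here’s a rough sketch of how such a modified Get-GPOVersions might look as a filter, reusing the $tags and format-operator body from above (a sketch only, not gospel):

# Sketch: Get-GPOVersions reworked as a filter taking GPOs from the pipeline
filter Get-GPOVersions {
    $oGPO = $_   # save the incoming GPO; the inner loops rebind $_
    "`n--- GPO {0} ---" -f $oGPO.DisplayName
    $tags = @{ DS="Active Directory"; Sysvol="SysVol" }
    "DS","Sysvol" |%{ $x = $_;
        "User","Computer" |%{
            "The {0} version number from {1} = {2}" -f
                $_, $tags[$x], $oGPO.$("{0}{1}VersionNumber" -f $_, $x )
        }
    }
}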

get-gpo marketing | get-gpoversions # if modified as described


An alternative would be to have them use Get-GPO piped to Format-Table with the version numbers selected.

get-gpo marketing | FT displayName,*versionNumber


In the end, the key to scripting is often to keep things simple. An innocent-looking asterisk can sometimes save a whole lot of code. Many ad hoc scripts can be virtually optimized out of existence.

This is Your Brain on Active Directory-based Group Policy


Some days things come out of my mouth and I stand (or sit) here thinking to myself “I can’t believe I just said that.” Alas, it perhaps keeps my students awake. I hope so. Just today I think I said something like “this is your brain on Group Policy Loopback Processing. Any questions?” For some reason, during the day, a commercial with sunny-side-fried (or was that scrambled) eggs in a frying pan, Blazing Saddles (“we don’t need no stinking break”), and the soup guy from Seinfeld (“no break for you”) all popped into my head and right into the microphone. Yes, the original lines, movie names, and show names are all copyrighted by somebody else. I think some of my students even took a break. It’s just amazing how much media is in my head despite the fact that I haven’t intentionally watched television for several years.

But the point of this article is to reflect on a few moments during a class I’m teaching today which I thought were worth sharing – it’s about Microsoft Windows and the lovely Group Policy feature. You know it’s bound to devolve into a discussion of Windows PowerShell at some point, and if that’s what you wanted, I hope you won’t be disappointed.

Although tools such as GPOtool and others let you work with properties of Group Policy Objects (GPOs), a powerful feature of using the Group Policy Management Console (GPMC) is that you can automate or script many aspects of Group Policy Management. This is not only demonstrated with a VBscript in a lab exercise in the Group Policy class I’m teaching this week, but also was the focus of a module of the PowerShell class which I taught last week.

Today I converted (not for the first time) the VBscript example into PowerShell. Here it is with a few extra modifications. Please remember that I didn’t write the original, and I made only minimal modifications from VBscript to PowerShell – just enough to get it running for a quick demonstration.

# Script to identify the Group Policy version number
# PowerShell version
$USE_THIS_DC = 0
$strPolicyName = "TEST"
$strDC = "delta.hq.local"
$strDomainName = "hq.local"

# Create objects for searching the domain
$oGPM = New-Object -ComObject "GPMGMT.GPM"
$oGPConst = $oGPM.GetConstants()
$oGPSearch = $oGPM.CreateSearchCriteria()
$oDom = $oGPM.GetDomain( $strDomainName, $strDC, `
    $USE_THIS_DC )
$oGPSearch.Add( $oGPConst.searchPropertyGPODisplayName,
    $oGPConst.searchOpEquals, $strPolicyName )
$oGPSearchResults = @($oDom.SearchGPOs( $oGPSearch ))

# Verify we have found a GPO. If not quit.
if( $oGPSearchResults.Count -le 0 ){
    "The Group Policy object " + $strPolicyName +
        " was not found`non Domain Controller " + $strDC
    return
}
"Got {0} GPOs back..." -f $oGPSearchResults.Count

# If found policy then print out version numbers
$oGPO = $oGPSearchResults[0]
"The user version number from the Active Directory = " + $oGPO.UserDSVersionNumber
"The computer version number from the Active Directory = " + $oGPO.ComputerDSVersionNumber
"The user version number from the Sysvol = " + $oGPO.UserSysvolVersionNumber
"The computer version number from the Sysvol = " + $oGPO.ComputerSysvolVersionNumber

This little script, called GPOPolicyVer.ps1 during the demonstration, simply takes a GPO called TEST and displays the version numbers for it. The GPT.INI file in a GPO has the version number for the Computer Configuration (CC) half of policy co-mingled with the version number for the User Configuration (UC) half of policy in the same 32-bit value: the UC version is the high-order 16 bits while the CC version is the low-order 16 bits. Luckily the interface we’re using here separates those values out and distinctly represents them as UserSysvolVersionNumber and ComputerSysvolVersionNumber. Besides the settings and the GPT.INI file in the SYSVOL share, Active Directory-based GPOs have an LDAP-accessible facet to them. This instance of the groupPolicyContainer class also has version numbers for the CC and UC halves of the policy.
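
If you ever need to split such a packed value yourself, the arithmetic is simple. Here’s a quick sketch (196612 is just a made-up sample value, packing UC version 3 with CC version 4):

# Split a packed GPT.INI Version value into its two 16-bit halves
$version = 196612                                 # hypothetical packed value
$ucVersion = [math]::Floor( $version / 0x10000 )  # high-order 16 bits = UC version
$ccVersion = $version -band 0xFFFF                # low-order 16 bits = CC version
"UC version = {0}, CC version = {1}" -f $ucVersion, $ccVersion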

This script pulls all four version numbers. Like the GPOtool utility, this script can be used to detect conditions such as when the independent SYSVOL share replication and Active Directory database replication are not synchronized with respect to the GPO.
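
For example, a check along these lines could be added at the end of the script to flag such a mismatch (a sketch, using the same four $oGPO properties shown above):

# Flag a GPO whose AD-based and SYSVOL-based version numbers disagree
if( ($oGPO.UserDSVersionNumber -ne $oGPO.UserSysvolVersionNumber) -or
    ($oGPO.ComputerDSVersionNumber -ne $oGPO.ComputerSysvolVersionNumber) ){
    "Warning: AD and SYSVOL versions differ for GPO " + $oGPO.DisplayName
}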

In another article I’ll look at some adjustments which could be made to this script.

Power to the People (Windows PowerShell Logon Scripts)


It’s true that PowerShell is addictive. But it’s primarily designed for use as an *Administrative* scripting and management environment. Many people ask about doing ASP.NET web content via PowerShell and also about getting scripts to affect users, such as the following fusion of a question from a PowerShell class I taught last week and another from a few months ago.

“Wow. PowerShell scripts are pretty handy. We have logon scripts written in VB script now. How can we use PowerShell scripts as logon scripts?”


Excellent question. You can still use a VB script as a logon script, and have that launch PowerShell just to run a specific command or to run a PowerShell script. VBscript calls PowerShell. Consider the following example which is described in a Microsoft article <http://www.microsoft.com/technet/scriptcenter/topics/winpsh/manual/run.mspx>.

' logon.vbs - brief example
set objShell = CreateObject( "Wscript.Shell" )
objShell.run( "powershell.exe -noexit c:\scripts\logon.ps1" )

With that VBscript as your logon script, assigned through either Local Users and Groups, Active Directory Users and Computers, or via AD-based Group Policy, we can effectively run PowerShell scripts as logon scripts.

Here are some requirements.

  1. In order to execute the script locally, each workstation or server on which this will be run will need to have Windows PowerShell installed.
  2. PowerShell’s execution policy on each computer must allow the execution of the scripts in question.
  3. The scripts must be accessible at a path visible to the client.
  4. Any modules and extensions (e.g. Exchange Management Shell) would need to be loaded.

Of course, the normal local, site, domain, organizational unit, … (L, S, D, OU…) scope for Group Policy applies to the users (for Logon/Logoff scripts) or computers (for Startup/Shutdown scripts) in Active Directory affected by the policy.

The example quoted above from the Microsoft article includes the -noexit parameter when launching PowerShell. That typically would not be used when running a logon script, as it leaves the shell open for the user on the target system after executing the script. Of course there are cases where that may be the appropriate desired behavior. Simply removing the -noexit parameter from that example reverses that behavior – as soon as the script completes, the shell will exit.

To find out what additional parameters are available when launching PowerShell, simply type powershell -?. That shows a usage message followed by a description of each parameter. We’ll include just the usage text at the top of that output here.

powershell[.exe] [-PSConsoleFile <file> | -Version <version>]
    [-NoLogo] [-NoExit] [-NoProfile] [-NonInteractive]
    [-OutputFormat {Text | XML}] [-InputFormat {Text | XML}]
    [-Command { - | <script-block> [-args <arg-array>]
                | <string> [<CommandParameters>] } ]

Besides not using the -NoExit parameter, it is likely that a lot of logon scripts should also run with the -NonInteractive parameter. There is much power in many of the other parameters, especially the ability to use XML input and output formats, yet -PSConsoleFile and -Command are the most dramatic.

Briefly, -PSConsoleFile is typically used to extend the capabilities of Windows PowerShell with new providers or cmdlets. If you don’t know what a cmdlet or provider is with respect to PowerShell, just remember the words “extend the capabilities.” Details really are beyond the scope of this blog post, but again, remember to include any modules or extensions that your script(s) expect to be available.

Giving a command string is the most important part of the invocation of PowerShell, unless you’ve specified -NoExit. The -Command parameter name is actually optional. The command parameter value can be a complicated script block if you’re invoking one instance of PowerShell from another, but that doesn’t apply to logon scripts launched the way we’re describing here. Script blocks are delimited by braces { }. The hyphen option tells PowerShell to take its standard input stream and use that as the command string. Here’s a quick example.

"get-date" | powershell -


In this example, the string “get-date” is taken as the command that PowerShell should run. The same thing as this simplistic example could have been accomplished by simply giving a command string as the -Command parameter value to PowerShell. This string form is what is most often used with logon scripts. To see how this can be used, consider the following examples, which are derived from the help information from powershell -?.

powershell -command {get-eventlog -logname security}
powershell -command "{get-eventlog -logname security}"
powershell -command "&{get-eventlog -logname security}"

The first of these takes the code block {get-eventlog -logname security} and acts upon it. The second form takes the string "{get-eventlog -logname security}" – considered a string because it’s delimited with quotation marks – and runs it as a PowerShell command which… I hope this isn’t a surprise for those of you just learning PowerShell, but it echoes back the string. Perhaps that’s not what you wanted? You wanted to run the code block which is within the string? Ah, then you should say so. How? With the powerful ampersand (&) operator – sometimes called the “call” or “run” operator – as shown in the third example here. In this case, the string "&{get-eventlog -logname security}" is passed as a PowerShell command, and this does explicitly say to execute the rest of the string as a code block. This is mentioned in “get-help about_Script_Block.”

At what location do the logon scripts run? Remember that we have Group Policy configured with the VBscript file, which in turn launches the PowerShell script. Therefore, it depends on the location where the VBscript will run. When adding the script to the Group Policy Object (GPO), you would navigate to User Configuration, Windows Settings, Scripts (Logon/Logoff), Logon. The properties of that Logon node of the GPO allow you to use the Show Files… button to check the default path to the script. For the Local Computer Policy, this would normally be C:\Windows\System32\GroupPolicy\User\Scripts\Logon. Similarly, the Add… button on the Logon node brings up the Add a Script dialog with a Browse… button to select the VBscript file to add to the GPO. Script Parameters can also be specified in that dialog. Therefore, in the Local Computer GPO, if the default path is used, the location of the VBscript would be where the script runs. Let’s assume that the GPO is configured with the following VBscript file as a logon script, logon.vbs.

' logon.vbs - brief example
set objShell = CreateObject( "Wscript.Shell" )
objShell.run( "powershell.exe logon.ps1" )

Because this script invokes powershell.exe with an anonymous -Command parameter of simply logon.ps1 – unqualified, therefore in the local folder – that PowerShell script will run at the same location as the VBscript. Consider the following example of a test script logon.ps1.

"hello, world" $n = read-host -prompt "Name" "hello $n" get-childitem get-childitem | out-file -append xyzzy.txt &{ "Log in to computer: {0} at {1}" -f (hostname), (get-date) $os = get-wmiobject win32_operatingsystem "Using {0} (version {1})" -f $os.Caption, $os.Version "Logged on as {0} ({1})" -f (whoami /upn), (whoami /fqdn) } >>plugh.txt

 

Any files which this script references would be in the same folder. By default, this would be in C:\Windows\System32\GroupPolicy\User\Scripts\Logon for the Local Computer Policy. The VBscript, PowerShell script, and any files which those scripts reference without folder specifications would be in that folder.

How would this path be different for an Active Directory-based GPO? The path would normally be in the SYSVOL share. For example, in the domain nanoware.net, we might have:

\\nanoware.net\SysVol\nanoware.net\Policies\{335E7174-A68E-4431-9258-CAFFA948895A}\User\Scripts\Logon

Note that this example script would run visibly in a command window and display the text “hello, world,” then prompt for the user to enter a name. This kind of user interaction is not typical behavior for a logon script, however it’s worth noting that it is possible to do this. In similar fashion, this script continues with another “hello” message with the name which the user had entered, and then a listing of the files in the current location where the script is running (by virtue of Get-ChildItem). If this input/output is not desired, these lines could be removed from the script. An alternative would be to have the VBscript invoke powershell.exe with the -NonInteractive parameter. Doing so would prevent this console output and would also not wait for user input, effectively running the script invisibly.

Note that the rest of this script redirects its output to files. As the paths to these files are not relative to the user’s documents folder or other location to which an ordinary user would normally have write permissions, this could fail with an access denied error for non-administrative users. For example, normal users typically don’t have write access into a GPO.

Although there is so much more to delve into on this subject, I hope that this tiny bit has helped get you started if you’ve ever had that question… Can I use Windows PowerShell for logon scripts?

Exchange Sans Edge With Barracuda


A while back, I wrote a few words about “Edge Transport is optional… depending on what you want to do” with respect to Microsoft Exchange Server 2007 deployments. A reader wrote in the following question:

“I am very interested in the actual setup of an Exchange environment sans Edge server. Specifically use with Barracuda, if you have any experience with the architecture of the environment or the quirks of how Exchange will act I’d love to hear about them.”


The primary relationship between the Barracuda Spam & Virus Firewall and the internal Exchange Server 2007 Hub Transport server(s) is the send/receive connectors. With the Barracuda device doing the AntiSpam + AntiVirus processing, there isn’t an absolute need for Exchange Edge Transport servers in the organization, and the Exchange Hub Transport server(s) don’t need to do AS + AV either. Here are some basics of the setup.

Inbound

For inbound traffic from the public Internet through the Barracuda Spam (& Virus) Firewall (BSF) device and into Hub Transport (HT) server(s), a few items need to be configured.

The public DNS services for your organization need to include Mail Exchange (MX) record(s) which specify the name of each of your BSF devices. Corresponding Address (A) records would also need to be present in the public DNS to map from those fully qualified domain names (FQDNs) to the publicly visible IP address(es) of your BSF devices. Your outer firewall Network Address Translation (NAT) would need to allow mapping from the public addresses to any demilitarized zone (DMZ) addresses of the BSF device(s).

In the BSF, configure the Destination Mail Server TCP/IP Configuration with a resolvable name of your internal HT server(s). Let’s assume that you have your domains configured in the BSF and that it’s configured to use DNS. By “resolvable name” we mean that from the BSF’s perspective of DNS, the name of the HT server(s) must be resolvable. It is often recommended that an FQDN for the HT server(s) be used instead of their IP addresses so that adjustments can be made by changing DNS to point to another HT server; however, that’s not an absolute rule. So if there is separate DNS in the DMZ where the BSF lives, it must be populated with the DMZ-visible address(es) of the HT server(s). Another possibility would be for the BSF to have access to internal DNS services, but that can be another security concern.

Any firewalls between the BSF (in a DMZ, edge, or perimeter network) and the internal HT server(s) must allow SMTP communications from the BSF to the HT in question. If you’re using regular port 25 for the inbound SMTP communication from the BSF to the HT, the firewalls must allow this to be initiated by the BSF inbound. Alternate ports could be used instead, but the BSF needs to specify the alternate port in its Destination Mail Server TCP/IP Configuration and the intervening firewalls need to allow it initiated from there inbound to such a destination.

On the Exchange side, the HT server configuration is stored in Active Directory Domain Services (AD DS) along with the rest of the Exchange organization configuration. One aspect of the configuration of the Hub Transport Role is a collection of receive connectors per server. In other words, there are typically two or more SMTP Receive Connector objects defined per server hosting the HT role. If you’re using port 25 from the BSF to HT, then you’ll want to either allow anonymous access on this receive connector, or better yet, configure the BSF to authenticate with the HT, and make sure the Permission Groups on the receive connector are configured accordingly. Alternately, a custom Receive Connector could be created for the HT server with a different port number than port 25. Furthermore with either the Default Receive Connector or a custom one, IP address restrictions could be applied.
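
As a rough illustration only (the server name, port, and IP address here are all hypothetical, so adjust to your environment), a custom receive connector restricted to the BSF’s DMZ address might be created from the Exchange Management Shell along these lines:

# Sketch: accept SMTP on this HT only from the BSF's DMZ address
New-ReceiveConnector -Name "From BSF" -Server "HT01" -Usage Custom `
    -Bindings 0.0.0.0:25 -RemoteIPRanges 192.168.100.10 `
    -PermissionGroups AnonymousUsers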

Without getting into too many technical details, that’s the gist of the inbound configuration.

Outbound

For outbound mail, configuring the Exchange organization with a Send Connector would direct outbound mail through the Barracuda device (BSF). In the Exchange Organization Configuration, Hub Transport category, create a Send Connector which specifies the appropriate Hub Transport (HT) server(s) as Source Server(s) to send to the BSF. Then specify the Smart Host option in the Send Connector configuration and supply internally resolvable and reachable names or addresses of the BSF (from the HT’s DNS perspective). Any firewalls for outbound traffic from the source HT to the BSF would need to allow SMTP on TCP port 25.
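
Again purely as a sketch with hypothetical names, the equivalent Exchange Management Shell command might look something like this:

# Sketch: route all outbound SMTP through the BSF as a smart host
New-SendConnector -Name "To BSF" -AddressSpaces "SMTP:*;1" `
    -SourceTransportServers "HT01" -DNSRoutingEnabled $false `
    -SmartHosts "bsf.example.com"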

That’s the simplified explanation. Much depends on the details of your configuration. If you have a Barracuda Spam & Virus Firewall Model 300 or higher, you may also want to consider LDAP-assisted filtering. I hope this helps.

The Power of Pipelines (with Filters)


Upon revisiting the Wrap-Function and Wrap-History functions, some questions on these arose. Let me try to address at least one of those questions.

“Your Wrap-Function definition looks like a function, but is called a filter. What does that mean? Also, how does it actually process a file or bunch of commands and turn them into a function?”


That’s two questions. Before delving into answers, let’s first go back and look at another definition of Wrap-Function, which we now call Wrap-FunctionClassic. It’s not called Classic because it’s actually classy or anything like that, but because it’s older. The distinction between the two versions, which have similar functionality, can be educational. Let’s take a look at the classic version first before dissecting the newer one and actually answering the questions.

function global:Wrap-FunctionClassic {
    param( $file=".\script.ps1",
        $fun="Wrapped", $scope="" )
    $_lines = Get-Content $file
    for( $i = 0; $i -lt $_lines.count; ++$i ){
        $_lines[$i] = "`t" + $_lines[$i]
    }
    Write-Output "function $scope$fun {"
    $_lines
    Write-Output "}"
}

Note that this function could be significantly optimized, yet as it is, it serves as a point of reference as a classic programming approach to PowerShell scripting. It gets the contents of a named file, adds a tab at the beginning of each line, and then outputs the set of lines as a function. Although it reads from a file, it does not save the output to a file yet merely uses the standard output channel for the resultant file-turned-function. In summary, Wrap-FunctionClassic is a function, and it reads from a file, but doesn’t write to a file.
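
For example, one might use it like this (the file and function names are arbitrary), redirecting standard output to capture the result:

# Wrap the lines of myscript.ps1 into a global function named MyFun
Wrap-FunctionClassic .\myscript.ps1 MyFun "global:" > wrapped.ps1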

Now, let’s take a look at a variation which is defined as a filter rather than a function. It’s called Wrap-Function. Although this version could be optimized as well (e.g. “Write-Output” doesn’t need to be explicit), look at the simplicity compared with the Wrap-FunctionClassic version.

filter global:Wrap-Function {
    param( $fun="Wrapped" )
    # include global: or other scope
    # in name as appropriate
    BEGIN{ Write-Output "function $fun {" }
    PROCESS{ "`t" + $_ }
    END{ Write-Output "}" }
}

Filters are like functions in that they may be invoked with parameters. They are called in the same way. Like awk scripts of UNIX heritage, the body of the filter can have more than one code block. This differentiates filters from functions. In fact, even if the keyword “function” had been used instead of “filter” in the definition, the presence of BEGIN, PROCESS, and/or END code blocks within a function causes it to behave as a filter. In other words, including a code block named BEGIN, PROCESS, or END in a function converts it to a filter.

But just what is the distinction between an ordinary function and a filter? A function runs through its one code block from top to bottom with whatever flow control it contains – once for all input. When a filter is invoked, its BEGIN block (if any) is run first. Then the PROCESS block is run once for each input object. Finally, the END block runs. That may sound simple, but it has immense power.

Consider the following.

function x(){ "one"; $_; "two" }
filter y(){ "one"; $_; "two" }

We could call the function x and the filter y by passing an object down the pipeline to either one.

"three" | x
"three" | y

What’s the output of the function x in this example? one, two. That’s it. No three. How about the filter? one, three, two. If that makes sense, try sending more than one object to the filter (e.g. "three","four" | y).

filter z(){
    BEGIN{ "zero" }
    PROCESS{ "one"; $_; "two" }
    END{ "infinity" }
}

"three","four" | z

The filter z includes separate BEGIN, PROCESS, and END blocks. With the filter y, the body of y was effectively assumed to be the PROCESS block. I’d recommend becoming familiar with the behavior of simple examples like these to help understand how filters process the objects input to them.
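
For reference, "three","four" | z emits the following: “zero” once from BEGIN, then one/three/two and one/four/two from PROCESS (once per input object), and “infinity” once from END.

zero
one
three
two
one
four
two
infinity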

Now that we’ve looked at a few basic filters, take another look at the Wrap-Function filter. It’s really quite straightforward once you know how filters work. The BEGIN block emits the beginning of a function declaration. The PROCESS block, which is executed for each object/line of input, emits a tab character followed by the original line. The END block emits the closing of the function definition. That’s it.

Using Wrap-Function to wrap a script or history is fairly straightforward. Let’s first take a look at converting the contents of a script file into a function with the body of that original file, and saving that resultant function into another file. Then we’ll revisit the wrapping of history into a function and saving that to a file.

get-content original.ps1 | Wrap-Function | 
out-file result.ps1
 

This pipeline simply uses Get-Content to obtain all of the lines contained in the original.ps1 file. Those lines are then sent to Wrap-Function, and the resultant function is saved by Out-File into the result.ps1 script. Many variations could be made, such as using other cmdlets or functions to get the body of the code to convert to a function.

Although this pipeline is fairly easy to type, it could be abbreviated by using various aliases and shorthand notations.

cat original.ps1 | Wrap-Function >result.ps1
 

Of course, we could define this sequence as a function for convenient use.

function Convert-ScriptToFunction( $file, $result ){ 
    Get-Content $file | Wrap-Function | Out-File $result 
}
 

And then merely invoke the function when we need to convert a script file into a function-wrapped body stored in another script file.

Convert-ScriptToFunction original.ps1 result.ps1
 

Another use of a pipeline with the Wrap-Function filter has been shown in the Wrap-History examples I’ve posted previously to this blog. The guts of it are essentially as follows.

Get-History -Count 32 `
  | ForEach-Object { $_.CommandLine } `
  | Wrap-Function RecentCommands `
  | Out-File recent.ps1


While the possibilities for filters and pipelines are seemingly endless, hopefully these few short examples have illustrated a tidbit of the power of pipelines and filters.

Wrapping History to a File


Previously, (was that really five months ago?) we looked at how to wrap a number of strings up into a function, and then just yesterday I finally wrote about how to take Windows PowerShell history and wrap it into a function for later use. Here’s where we left off:

function global:Wrap-History {
    param( $count=32, $fun="Wrapped" )
    Get-History -Count $count `
    | ForEach-Object { $_.CommandLine } `
    | Wrap-Function $fun
}

If we want to use this Wrap-History function to create a function from our recent history and save that function in a file, we could simply invoke the function and redirect the output to a file. Consider the following examples:

Wrap-History >myfun1.ps1 
Wrap-History -count 60 -fun Second >myfun2.ps1 
Wrap-History 60 Third >myfun3.ps1
 

The first example just uses Wrap-History without parameters – this assumes that we want to wrap up the 32 most recent commands into a function called “Wrapped” and save that function in a script file called myfun1.ps1.

The second example explicitly specifies a count of 60 history items (the most recent ones). Also, it names the resultant function “Second” and saves it in a file called myfun2.ps1.

Finally, the third example also uses an explicit count and function name, yet doesn’t use the parameter names -count and -fun, instead depending on positional parameters. Note that Wrap-History treats the first parameter as the count and the second one as the function name, according to the “param” block definition. This example saves the resultant function in the file myfun3.ps1.

How could we make the Wrap-History function accept a file name and perform the file redirection for us as well? Consider the following modified version of Wrap-History.

function global:Wrap-History {
  param( $count=32, $fun="Wrapped", $file=$null )
  if( $file -eq $null ){ $out = "Out-Default" }
  else{ $out = { $input | Out-File $file } }
  Get-History -Count $count `
  | ForEach-Object { $_.CommandLine } `
  | Wrap-Function $fun `
  | &$out
}

Note that I included a similar definition of Wrap-History in the course materials for Microsoft’s “Automating Windows Server 2008 Administration with Windows PowerShell” course 6434, yet with an error. Sorry about that. The above version includes the correction of including “$input | “ before the Out-File invocation in the else clause. If you’re teaching or attending that course, feel free to adjust the script file there.

Let’s take a look at how this version differs from the one at the top of this post.

The first notable difference is that a third parameter can be used to specify the file name. If used positionally, this would follow the count and resultant function name. By name, it could appear in any order.

In the middle of the expanded version is the “if” block which checks whether the file name has been supplied. If not, this version of Wrap-History emulates the simpler form and uses the Out-Default cmdlet to emit the resultant function at the end. If however a file name was supplied to Wrap-History, then the “else” clause takes the resultant function and outputs it to a file. Yes, “outputs” is a verb just like “saves,” “writes,” “emits,” and all their friends, right? Note that neither of these clauses does the actual output; they merely define a variable $out which will be invoked later. The scenario which does not save to a file just defines $out as the string value “Out-Default”, which is the name of a cmdlet. The more exciting scenario is when we’ve given Wrap-History a file name to use. In this case, the $out variable is assigned a code block rather than a string (thus the curly braces) which includes the pipeline $input | Out-File $file. Note that the values of these variables are not evaluated until the code block is actually invoked.

And now for the fun part. The Get-History pipeline at the end of the script starts off the same as in the simpler version of Wrap-History. Then we’ve added another stage to the pipeline at the end – the expression &$out. That’s the magic that invokes the variable $out. In the case where we aren’t redirecting to a file, recall from the discussion above that in that scenario, $out = “Out-Default” which means that &$out will use that string value as code to execute and just send the pipeline output (the resultant function) to the Out-Default cmdlet. In situations where we’ve given a file name to save to, note that $out will have been assigned the value { $input | Out-File $file } by this point. Therefore, &$out will send the output of the pipeline thus far (the resultant function) to this code block.

The bug I had in a previously published version of this code block is that if the code block had been just { Out-File $file } then the objects coming down the pipeline to it would not actually get written to the file, and the Out-File cmdlet would just create an empty file (actually 2 characters, carriage return and line feed, but that’s another story). The fix we included here is to use the $input variable in another level of pipeline within the code block. This takes the resultant function coming down the “top-level” pipeline (Get-History | ForEach-Object | Wrap-Function) and coerces it into Out-File in the right way.
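
To see the difference in isolation, consider this minimal demonstration (the file names are arbitrary):

# Buggy: Out-File never receives the pipeline objects; empty.txt ends up empty
"a","b" | &{ Out-File empty.txt }
# Fixed: $input forwards the objects coming down the pipeline into Out-File
"a","b" | &{ $input | Out-File full.txt }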

Do you remember those three examples of using the simpler Wrap-History with redirection? Those could be rewritten as follows using the new version.

Wrap-History -file myfun1.ps1 
Wrap-History -count 60 -fun Second `
             -file myfun2.ps1 
Wrap-History 60 Third myfun3.ps1


The ability to save recent commands to a script or function can be immensely powerful for ad hoc script development. Oftentimes we might not think of saving what we’ve just done until after we’ve done it. Also, the interactive nature of the shell lends itself well to prototyping a line at a time, which leads to the desire/need to later save what worked well and discard what didn’t.

Naturally, when saving a history of recent commands to a file, there may be some commands which you really do want in the resultant function and some you don’t. For now, we’d suggest that you just edit the output script file from Wrap-History using your favorite text editor and prune and adjust as necessary. Of course, if there’s interest, perhaps we’ll revisit this topic in the future. Let me know what you think.

Wrapping History


Once upon a time, well actually just five months ago, I posted a blog entry about wrapping code or file contents as a Windows PowerShell function. At the time I thought I’d post the next day or week with more details. Finally, here’s a bit more of the story.

First, a little review. Consider the following question:

“I’ve typed a lot of great commands in PowerShell just now. How do I save those to a file to run again as a script? Actually, if I could save all or part of my recent commands as a function in my profile, that would be great! Is that even possible?”


Previously, we looked at a Wrap-Function filter I’d written. Now let’s focus more directly on the question. When you need to deal with recent commands you’ve typed in Windows PowerShell, two facilities come to mind:

  1. History
  2. Transcripts

While using transcripts (see the cmdlet Start-Transcript) is wonderful, we’ll first focus on history. PowerShell keeps a history of the recent commands which have been entered. For each command, the history mechanism records an identifier number (Id) for the command within the current session, what you typed (CommandLine), whether the command succeeded or not (ExecutionStatus), and when the command started and stopped running (StartExecutionTime, EndExecutionTime). The cmdlets Add-History, Get-History, and Invoke-History work with such history information. Also, the alias “history” provides a shorthand for the Get-History cmdlet which might seem reminiscent of a particular history command from certain UNIX-heritage shells.
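
To see all of those recorded properties for yourself, something like the following will do:

# Show every recorded property of each history entry
Get-History | Format-List Id, CommandLine, ExecutionStatus,
    StartExecutionTime, EndExecutionTime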

Let’s focus on Get-History, because Add-History and Invoke-History are just far too much fun for us to be distracted by at the moment. Get-History just shows us what commands we’ve recently run. How do we turn those into a function or script? With a little bit of magic, of course. Read on, true fans.

function global:Wrap-History {
    param( $count=32, $fun="Wrapped" )
    Get-History -Count $count `
    | ForEach-Object { $_.CommandLine } `
    | Wrap-Function $fun
}

This Wrap-History function is a simplified form of one which will write the function to a script file if so desired, but it’s certainly functional as it is shown here.

How does it work? It uses the Get-History cmdlet to retrieve recently entered commands. The optional count parameter allows choosing how many commands should be included. Then it pipes that history information to the ForEach-Object cmdlet to select only the CommandLine property of the history objects. Next, the CommandLine attribute of each of the recent history items is piped to the Wrap-Function filter, which is given the optionally supplied name or the default name of “Wrapped.” Here is where this version of the Wrap-History function ends. If you’d like to save the resultant function as a script file, you can simply redirect the output to a file.

Stay tuned for more on wrapping functions (part 3).

Migrating Exchange to a New Domain


When teaching about Exchange Server 2007, many questions about migration and transition from old messaging platforms arise. Coming to E2K7 from GroupWise, Notes, and other systems naturally involves a number of factors to be considered, yet even coming to E2K7 from earlier versions of Exchange Server involves many possible approaches and choices. The following question is not uncommon.

We are in the process of moving to a new domain. I’m being told to look into what would be the best way to move/migrate all the users over to the new domain. One of the recommendations is to build a new Exchange box on the new domain and then move all the mailboxes over to the new Exchange server in the new domain.


My first question is “Is the new domain in the same Active Directory forest as the first?”

My second question is “Will the old domain still exist after the migration?”

Without the answers to these questions, I could describe a few scenarios leading to different directions and rather dissimilar tactics and final solutions.

However for the moment, I’ll assume that you’re talking about using a separate forest and that the original domain (and perhaps forest) will no longer exist after the migration. That’s not all that unusual – I’m not saying that everyone does that, but it’s not uncommon either.

Migrating Mailboxes to Another Forest

When the target domain is in another forest, it is also in another Exchange organization, because in Exchange Server 2007 each Active Directory forest can support one Exchange organization, and that messaging organization structure is based on and tied to the forest. Also, domain rename of Active Directory is not currently supported with Exchange Server 2007 installed in the forest, due to some hard-coded FQDNs. Those are probably some of the reasons such a migration has been recommended to you.

  1. Establish network connectivity between the forests.
  2. Associate the two forests at a domain name system (DNS) level.
  3. Consider trust relationships between the forests.
  4. Establish Exchange Server(s) in the new environment if you haven’t already done so.
  5. Configure the proper connectors for mail flow between the Exchange environments.
  6. Move mailboxes over to the new environment, creating the user accounts along with them (see the sketch after this list).
  7. Move public folders if you have them to the new environment.
  8. Move any relevant connectors from the old environment to the new.
  9. Consider the exit plan for decommissioning the old environment.
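
For step 6, here is a rough sketch of a cross-forest mailbox move using the Exchange Server 2007 Move-Mailbox cmdlet (the server, forest, and OU names are all hypothetical, and a real migration needs planning well beyond one command):

# Sketch: move all mailboxes from an old-forest server into the new forest
$src = Get-Credential   # old-forest administrator
$tgt = Get-Credential   # new-forest administrator
Get-Mailbox -Server OLDMBX01 | Move-Mailbox `
    -TargetDatabase "NEWMBX01\First Storage Group\Mailbox Database" `
    -GlobalCatalog gc.newforest.example `
    -SourceForestGlobalCatalog gc.oldforest.example `
    -NTAccountOU "OU=Migrated Users,DC=newforest,DC=example" `
    -SourceForestCredential $src -TargetForestCredential $tgt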

This list isn’t a strict guide, just a possible overview of one approach to the process. Let me know if you have questions on details or whether all of these aspects are necessary for your plan. I hope this helps!