
Archive for the ‘Microsoft’ Category

ASP.NET Development Server From ‘Here’ in PowerShell

September 9, 2009 Comments off

Long title… almost longer than the code.

I used to have an old registry setting that started up the ASP.NET Development Server from the current path; however, since I rarely open up Explorer—and then opening up Firefox was even more painful—I needed a script.

What does it do?

The script starts up the ASP.NET Development server with a random port (so you can run multiples…) at your current location.  It then activates your machine’s DEFAULT BROWSER and browses to the site.  Firefox user?  No problem.  Works like a champ!

The Script (Full Code)

$path = resolve-path .
$rand = New-Object system.random
$port = $rand.next(2048,10240)
$path_to_devserver = "C:\Program Files (x86)\Common Files\microsoft shared\DevServer\9.0\Webdev.WebServer.exe"

& $path_to_devserver /port:$port /path:$path
(new-object -com shell.application).ShellExecute("http://localhost:$port")

The $path_to_devserver can be updated—depending on 64-bit vs. 32-bit machines.  Simple, easy, and to the point.  Now, no more fumbling around to launch a quick web application!
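If you save the script as, say, devhere.ps1, a small function in your PowerShell profile makes it launchable from anywhere. A minimal sketch (the script path is a placeholder, not from the original post; point it at your own scripts folder):

# Wrap the script above in a profile function; "C:\scripts\devhere.ps1" is
# an assumed location.
function Start-DevServerHere {
    & "C:\scripts\devhere.ps1"
}
set-alias devhere Start-DevServerHere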

Ramping up with PSake

September 8, 2009 Comments off

I’ve been teetering back and forth between PSake and my trusty NAnt scripts for quite a while now.  For those not familiar with PSake, it’s build automation that makes you drunk—but in a good way. 😉  You can read James Kovacs’ original post here or check out the repository here for the latest bits.

I originally looked at rake scripts (after exposure working with Fluent NHibernate) as PowerShell is loathed in our organization—or was.  That mindset is slowly changing (being able to show people how two lines of PowerShell can crank out what was originally scoped at a week helps out); so I’m using PSake as further motivation.

My prior PSake scripts were a bit tame.  Launch msbuild, copy a few files.  With the latest release of xUnit 1.5 hitting the wires over the weekend (and a much needed x86 version for my poor, cranky Oracle libraries), I decided to bite the bullet and dig in to PSake.

I had two goals:

  1. Build a reliable framework “default.ps1” file that I could drop into almost any project and configure with little or no effort.
  2. Compile, test, and rollout updates from a single PSake command task.

I borrowed the basic layout from Ayende’s Rhino Mocks PSake; however, I couldn’t get msbuild to run correctly simply by calling it.

Here’s what I ended up with for our internal core library.  The core library isn’t so much a “utilities” container, but just as it sounds—the framework all of our applications are built on to keep connections to our various applications (HR, student systems, data warehouses, etc.) consistent, as well as hold our base FNH conventions.

CODE: Full code available on CodePaste.NET

Properties

The properties area holds all of the configuration for the PSake script.  For me, it’s common to configure $solution_name, $libraries_to_merge, and $libraries_to_copy.  With our naming standards, the $test_library should be left unchanged.  I also added in the tester information so we could change from XUnit to MBUnit (if Hell froze over or something).

properties {
  # ****************  CONFIGURE ****************
  $solution_name =           "Framework"
  $test_library =            "$solution_name.Test.dll"

  $libraries_to_merge =      "antlr3.runtime.dll", `
                             "ajaxcontroltoolkit.dll", `
                             "Castle.DynamicProxy2.dll", `
                             "Castle.Core.dll", `
                             "FluentNHibernate.dll", `
                             "log4net.dll", `
                             "system.linq.dynamic.dll", `
                             "xunit.dll", `
                             "nhibernate.caches.syscache2.dll", `
                             "cssfriendly.dll", `
                             "iesi.collections.dll", `
                             "nhibernate.bytecode.castle.dll", `
                             "oracle.dataaccess.dll"

  $libraries_to_copy =       "system.data.sqlite.dll"

  $tester_directory =        "j:\shared_libraries\xunit\msbuild"
  $tester_executable =       "xunit.console.x86.exe"
  $tools_directory =         "$tools"
  $base_directory  =         resolve-path .
  $thirdparty_directory =    "$base_directory\thirdparty"
  $build_directory =         "$base_directory\build"
  $solution_file =           "$base_directory\$solution_name.sln"
  $release_directory =       "$base_directory\release"
}

Clean and easy enough.  You’ll notice that $libraries_to_merge and $libraries_to_copy are implied string arrays.  That works out well since string arrays end up as params when passed to commands… and our $libraries_to_copy can be iterated over later in the code.
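If you want to see the string array behavior for yourself, here’s a tiny demonstration (the file names are made up):

$libs = "a.dll", "b.dll", "c.dll"   # comma-separated strings build an array
$libs.GetType().Name                # -> Object[]
& cmd /c echo $libs                 # a native command receives three separate arguments
$libs | % { $_ }                    # ...and the same array can be iterated in a pipeline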

Tasks – Default

task default -depends Release

The default task (if just running ‘psake’ without parameters) runs Release.  Easy enough.

Tasks – Clean

task Clean {
  remove-item -force -recurse $build_directory -ErrorAction SilentlyContinue | Out-Null
  remove-item -force -recurse $release_directory -ErrorAction SilentlyContinue | Out-Null
}

Clean up those build and release directories.

Tasks – Init

task Init -depends Clean {
  new-item $release_directory -itemType directory | Out-Null
  new-item $build_directory -itemType directory | Out-Null
  cp $tester_directory\*.* $build_directory
}

Restore those build and release directories that we cleaned up; then copy in our unit testing framework so we can run our tests (if necessary).

Tasks – Compile

task Compile -depends Init {
  # from http://poshcode.org/1050 (first lines to get latest versions)
  [System.Reflection.Assembly]::Load('Microsoft.Build.Utilities.v3.5, Version=3.5.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a') | Out-Null
  $msbuild = [Microsoft.Build.Utilities.ToolLocationHelper]::GetPathToDotNetFrameworkFile("msbuild.exe", "VersionLatest")

  # adding double slash for directories with spaces. Stupid MSBuild.
  &$msbuild /verbosity:minimal /p:Configuration="Release" /p:Platform="Any CPU" /p:OutDir="$build_directory"\\ "$solution_file"
}

Compile is a bit tricky.  As noted in the code, I ended up using a SharePoint example from PoSH code to get MSBuild to behave.  The standard exec methodology provided by PSake kept ignoring my parameters.  Maybe someone has a good reason… but this works.

You also see that my OutDir has TWO slashes.  It seems that directories with spaces require the second.  I’m sure this will somehow bite me later on, but it seems to be working for now. 😉

Tasks – Test

task Test -depends Compile {
  $origin_directory = pwd
  cd $build_directory
  exec .\$tester_executable "$build_directory\$test_library"
  cd $origin_directory
}

I want to thank Ayende for the idea to dump the origin directory into a parameter—brilliant.  This one is pretty simple—just calls the tester and tests.
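As an aside, Push-Location and Pop-Location can track the origin directory for you.  A sketch of the same task written that way (note it swaps psake’s exec for a plain & invocation, so it’s an alternative, not the original):

task Test -depends Compile {
  Push-Location $build_directory
  & ".\$tester_executable" "$build_directory\$test_library"
  Pop-Location
}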

Tasks – Merge

task Merge {
  $origin_directory = pwd
  cd $build_directory

  remove-item "$solution_name.merge.dll" -erroraction SilentlyContinue
  rename-item "$solution_name.dll" "$solution_name.merge.dll"

  & $tools\ilmerge\ilmerge.exe /out:"$solution_name.dll" /t:library /xmldocs /log:"$solution_name.merge.log" `
        "$solution_name.merge.dll" $libraries_to_merge

  if ($lastExitCode -ne 0) {
    throw "Error: Failed to merge assemblies!"
  }
  cd $origin_directory
}

Merge calls ILMerge and wraps all of my libraries into one.  Do I need to do this?  Nah, but for the framework, I prefer to keep everything together.  I don’t want to be chasing mis-versioned libraries around.  Again, since $libraries_to_merge is a string array, it passes each “string” as a separate parameter—which is exactly what ILMerge wants to see.

I also have ILMerge generate and keep a log of what it did—just to have.  Since the build directory gets blown away between builds (and isn’t replicated to source control), then no harm.  Space is mostly free. 😉

Tasks – Build & Release

task Build -depends Compile, Merge {
  # When I REALLY don't want to test…
}

task Release -depends Test, Merge {
  copy-item $build_directory\$solution_name.dll $release_directory
  copy-item $build_directory\$solution_name.xml $release_directory

  # copy libraries that cannot be merged
  $libraries_to_copy | % { copy-item (join-path $build_directory $_) $release_directory }
}

Build provides just that—building with no testing and no copying to the release directory.  This is more for testing out the scripts, but useful in some cases.

Release copies the library and the xml documentation out to the release directory.  It then iterates through the string array of “other” libraries (non-managed code libraries that can’t be merged, etc.) and copies them as well.

Querying Oracle using PowerShell

September 1, 2009 4 comments

Yesterday, I wrote up a quick bit of code to query out our SQL Servers.  Initially, I wanted a speedy way to hit, parse, and report back log4net logs in our “server status” scripts.

Well, never one to leave something alone, I started tinkering with Oracle support.  In our enterprise, most of our key systems sit on Oracle and there are SEVERAL opportunities for quick data retrieval routines that could help out in daily work.

Plus, doing an Oracle query in PowerShell beats the five-minute process of cranking up Oracle SQL Developer for a simple, single query. 🙂

CODE: The full source of this is available here on codepaste.net.

param (
    [string]$server = ".",
    [string]$instance = $(throw "a database name is required"),
    [string]$query
)

[System.Reflection.Assembly]::LoadWithPartialName("System.Data.OracleClient") | out-null
$connection = new-object system.data.oracleclient.oracleconnection( `
    "Data Source=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=$server)(PORT=1521)) `
    (CONNECT_DATA=(SERVICE_NAME=$instance)));User Id=USER_ID;Password=PASSWORD;");

$set = new-object system.data.dataset

$adapter = new-object system.data.oracleclient.oracledataadapter ($query, $connection)
$adapter.Fill($set)

$table = new-object system.data.datatable
$table = $set.Tables[0]

#return table
$table

I chose to use the OracleClient library for simplicity’s sake.  I could have used ODP.Net; however, that’d make my scripts FAR less portable.  Since OracleClient isn’t loaded by default in PowerShell, this script loads it.  In addition, I chose to use the TNS-less connection string as I don’t typically keep a ‘tnsnames.ora’ file on my computer.  This further adds to the portability of the script.

Past that and the change from SqlClient to OracleClient, the rest of the code is the same from the prior example.
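For reference, a hypothetical invocation looks something like this (assuming the script is saved and aliased as oq; the server and instance names are made up):

oq -server dbhost01 -instance STUDENTS -query "SELECT * FROM Schools" | format-table -autosize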

Dealing With Empty Strings and Nulls

One thing that I did run across that differed between Oracle and Microsoft SQL revolved around how empty strings were dealt with when parsing using PowerShell.

Example:

oq "SELECT * FROM Schools"

ID  NAME        PRINCIPAL_EMAIL_ADDRESS
--- ----------- -----------------------
100 School
102 School
112 School      user3@domain.net
140 School      user1@domain.net

etc.

Now, what if I wanted to just see the schools missing a principal_email_address?  I’d just rewrite my SQL query, right?  Yeah, probably, but for the sake of argument and perhaps some scripting.

oq "SELECT * FROM Schools" | ? { $_.principal_email_address -eq "" }

No results.

What? Why not?  I see two in my last query.  Unfortunately, dealing with “nulls” and empty strings can get a bit tricky when pulling from database data.  With Microsoft SQL, a text-based column (varchar, ntext, etc.) seems to handle -eq "" just fine, but Oracle is less than pleased.  @ShayLevy suggested -eq [string]::Empty but that didn’t pull through either.

From a prior experiment, I also tried -eq $null and was greeted with something very different—it returned all results. Meh.

Randomly, I tried -like $null and it worked. Well, that’s interesting.  So the value isn’t empty in Oracle, but it is “like” a null.  After a bit more digging, I discovered that the real value is -eq [DBNull]::Value.

oq "SELECT * FROM Schools" | ? { $_.principal_email_address -eq [DBNull]::Value }

ID  NAME        PRINCIPAL_EMAIL_ADDRESS
--- ----------- -----------------------
100 School
102 School

It makes sense… but more testing is required to see which is more reliable for a wide variety of data types.  I like the concept of “like null” to simulate “string empty or null”.  Further testing required. 🙂
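In the meantime, one option is wrapping the checks in a little helper so scripts read cleanly.  A sketch (the function name is mine, not from the post):

# Matches both DBNull values and genuinely empty strings.
function Test-DbEmpty($value) {
    ($value -eq [DBNull]::Value) -or ($value -eq "")
}

oq "SELECT * FROM Schools" | ? { Test-DbEmpty $_.principal_email_address }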

 

Querying SQL Server using PowerShell

August 31, 2009 2 comments

The great thing about PowerShell is direct access to objects.  For my apps, database connectivity is happily handled by NHibernate; however, that doesn’t mean we can’t take advantage of good old System.Data.SqlClient for our PowerShell scripting.

CODE: The full source of this is available here on codepaste.net.

param (
    [string]$server = ".",
    [string]$instance = $(throw "a database name is required"),
    [string]$query
)

$connection = new-object system.data.sqlclient.sqlconnection( `
    "Data Source=$server;Initial Catalog=$instance;Integrated Security=SSPI;");

$adapter = new-object system.data.sqlclient.sqldataadapter ($query, $connection)
$set = new-object system.data.dataset

$adapter.Fill($set)

$table = new-object system.data.datatable
$table = $set.Tables[0]

#return table
$table

Not too long or challenging—it’s mostly working to instantiate a quick SQL connection and pass in your query.  I even considered plugging in a check to verify the $query parameter began with SELECT, so I wouldn’t do accidental damage to a system. Maybe I’m just paranoid. 😉
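Had I added that check, it might look something like this (a sketch, not part of the posted script):

# Guard against accidental damage; -notmatch is case-insensitive by default.
if ($query -notmatch '^\s*SELECT') {
    throw "only SELECT queries are allowed here"
}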

What this little snippet allows me to do is quickly add log4net checking into some of my server monitoring PowerShell scripts.

query sqlServer myDatabase "Select count(id), logger from logs group by logger" | format-table -autosize

Notice I didn’t include the format-table command in my main query script.  Why?  I wanted to keep the flexibility to select, group, and parse the information returned by my query.  Unfortunately, it seems that the format commands break that if they’re run before a manipulation keyword.  Adding in “ft -a” isn’t difficult in a pinch.
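A quick illustration of the ordering issue (the query text is just for show):

# Works: group sees real data rows because formatting comes last.
query sqlServer myDatabase "Select logger from logs" | group logger | ft -a

# Breaks: format-table emits formatting objects, leaving group nothing useful.
query sqlServer myDatabase "Select logger from logs" | ft -a | group logger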


Quick and easy…

Other uses:

  • Customer calls up with a question about data—save time and do a quick query rather than waiting for Management Studio to wind up.
  • Keep tabs on database statistics, jobs, etc.
  • and more…

Digging into the Event Log with PowerShell

August 25, 2009 Comments off

There are a few of our applications that haven’t been converted over to log4net logging so their events still land in the good ol’ Windows Event Log.  That’s fine and was fairly easy to browse, sort, and filter using the new tools in Windows Server 2008.

Over the past few hours, however, I’ve found an even better tool for digging into the logs on short notice and searching—obviously, PowerShell.

Full source for this can be found here.

I wanted to be able to quickly query out:

  • the time – to look at trending,
  • the user – trending, and filtering if I have them on the phone,
  • the URL – shows both the application and the page the problem is occurring on,
  • the type – the exception type for quick filtering,
  • the exception – the core of the issue,
  • the details – lengthy, but can be ever so helpful even showing the line number of the code in question.

param ([string]$computerName = (gc env:computername))

function GetExceptionType($type, $logEvent)
{
 if ($type -ne "Error") { $logEvent.ReplacementStrings[17] }
 else {
        $rx = [regex]"Exception:.([0-9a-zA-Z].+)"
        $matches = $rx.match($logEvent.ReplacementStrings[0])
        $matches.Groups[1].Value
 }
}

function GetException($type, $logEvent)
{
 if ($type -ne "Error") { $logEvent.ReplacementStrings[18] }
 else {
        $rx = [regex]"Message:.([0-9a-zA-Z].+)"
        $matches = $rx.match($logEvent.ReplacementStrings[0])
        $matches.Groups[1].Value
 }
}

get-eventlog -log application -ComputerName $computerName |
    ? { $_.Source -eq "ASP.NET 2.0.50727.0" } |
    ? { $_.EntryType -ne "Information" } |
    select `
  Index, EntryType, TimeGenerated, `
  @{Name="User"; Expression={$_.ReplacementStrings[22]}}, `
  @{Name="Url"; Expression={truncate-string $_.ReplacementStrings[19] 60 }}, `
  @{Name="Type"; Expression={GetExceptionType $_.EntryType $_ }}, `
  @{Name="Exception"; Expression={GetException $_.EntryType $_ }}, `
  @{Name="Details"; Expression={$_.ReplacementStrings[29]}}

The code itself is probably pretty overworked and, I hope, can be refined as time goes on.

The two helper functions, GetExceptionType and GetException, exist because (it seems) Warnings and Information store their information in one location while Errors store theirs in one HUGE blob of text that needs to be parsed.  Those helpers provide that switch logic.

The get-eventlog logic itself is pretty straightforward:

  1. Open up the ‘Application’ EventLog on the specified computer,
  2. Filter only “ASP.NET 2.0.50727.0” sourced events,
  3. Exclude “Information” type events,
  4. Select 3 columns and generate 5 columns from expressions.

The great advantage is I can then take this file and “pipe” it into other commands.

get-aspnet-events webserver1 | select user, url, type | format-table -auto

User               Url                               Type
----               ---                               ----
domain\dlongnecker http://domain.net/Create.aspx     PreconditionException
domain\dlongnecker http://domain.net/Create.aspx     PreconditionException
domain\dlongnecker http://domain.net/View.aspx       PostconditionException
domain\dlongnecker http://domain.net/View.aspx       AssertionException

or

get-aspnet-events webserver1 | ? { $_.user -like "*dlongnecker" }

The possibilities are great—and a real time saver compared to hitting each server and looking through the GUI tool.

The code also includes a helper method I created for truncating strings available here via codepaste.  If there’s built-in truncating, I’d love to know about it.
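In case that codepaste link ever goes stale, the helper boils down to something like this (my reconstruction, not necessarily the original):

function truncate-string([string]$value, [int]$length) {
    if ($value.Length -le $length) { $value }
    else { $value.Substring(0, $length) }
}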

 

Tips for Booting/Using VHDs in Windows 7

August 6, 2009 3 comments

Both Windows 7 and Windows Server 2008 R2 (aka Windows 7 Server) support booting directly from a VHD.  This is FANTASTIC, AWESOME, and other bolded, all-caps words.  For the full details, check out Hanselman’s handy post.

I’m a HUGE user of differencing disks.  My layout follows the basic structure of:

  • system (parent/dynamically expanding)
    • environment (child of system/differencing)
      • task (child of environment/differencing)
  • Windows Server 2008 R2 (2008r2.vhd)
    • VS2008 + tools (vs2008.vhd)
      • “production” work (projectName.vhd)
      • freelance/open source work (dev1.vhd)
      • tinkering (dev3.vhd)
    • VS2010 + tools (vs2010.vhd)
      • tinkering (dev2.vhd)
  • Windows 7 (win7.vhd)
    • Simulated client “a” environment (client-a.vhd)
    • Simulated client “b” environment (client-b.vhd)

The great thing is, I have a single “2008r2.vhd” and “win7.vhd” as a baseline.  A customer calls and needs a quick mockup?  I can instantiate a new development environment in moments (or quicker via PowerShell scripts).  Who really wants to walk through reinstalling the operating system again anyway?  Not me.
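Those scripts don’t have to be fancy, either.  A sketch of creating a new child disk by feeding DISKPART a script file (the file names are illustrative; run from an elevated prompt):

# Build a DISKPART script that creates a differencing disk off the Windows 7
# base, then run it; paths here are placeholders.
$cmds = @"
create vdisk file="d:\vm\client-c.vhd" parent="d:\vm\basedrives\win7.vhd"
"@
$cmds | set-content "$env:TEMP\newvhd.txt"
diskpart /s "$env:TEMP\newvhd.txt"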

With that, here’s a few tips for situations I ran into building up my environment.

Q: I had a series of VHDs from [Virtual Server 2005 R2 | Virtual PC 2007 | The Interwebz] and they won’t work.
Correct.  Only VHDs from Hyper-V or created directly in Windows 7 or Server 2008 R2 (that R2 part is important) using DISKPART are bootable.


Q: My system will not boot after installing!  I just get a BSOD!

If you can catch the BSOD message or press F8 and turn off automatic reboot, the error reads:

“An initialization failure occurred while attempting to boot from a VHD.  The volume that hosts the VHD does not have enough free space to expand the VHD.”
What?  Huh?  We set up dynamically expanding VHDs… why would it need all of the free space?  Well, it seems that to boot from a VHD, it expands it to full capacity (without actually writing out data, I assume, since I don’t see any latency on boot-up).  If you’re like me, you probably set your “dynamically expanding disk” to a wild maximum capacity, such as 200GB.  Even if you have close to that much physical space, it’s likely that the parent/child VHD chains are split across multiple partitions/spindles.

That’s a gotcha.

Lesson: Be prudent with how you size your VHDs.  Ensure you have room for your intentions, but also ensure you have enough physical capacity.

Here’s how to fix the problem without totally reinstalling your VHD.

  1. Boot into your parent operating system and attach the VHD as a partition using either DISKPART or the Disk Management GUI.
    1. select vdisk file="d:\vm\basedrives\2008r2.vhd"
    2. attach vdisk
  2. Shrink the VHD using the Disk Management GUI (it’s just easier, trust me).  If your original maximum capacity was 200GB and you only have 150GB free, set it to 120GB or something reasonable.
  3. Use the free VHDResizer tool to trim off the excess “maximum capacity” of your newly shrunken VHD.  You can get VHDResizer here. Set the maximum size to the same size as your new partition size.
    1. VHDResizer will require you to specify a new name for the resized VHD.  After it’s done, rename the old VHD to “file_old.vhd” and the new VHD to the same as your old file to ensure the boot manager picks up the VHD.
  4. Restart and continue along with configuring your new system.
Q: The Parent Disk is Complete.  How do I create a Differencing Disk?
Creating a differencing disk is pretty easy—a few commands in DISKPART from an administrator-privileged console window and you’re set.
Before doing any of this, be sure that you’ve defragmented and run the precompactor in your VHD.  This cleans up the data and zeros out the free space so that it compacts nicely.  If you don’t want to install Virtual Server just to get the ISO image for the PreCompactor (though I recommend this just to be safe), you can download an ‘extracted’ version from eeSJae.com.  Here’s a direct link to the precompact.exe file.
  1. Using DISKPART, select your parent VHD, compact it, and create a child (differencing) disk.
    1. select vdisk file="d:\vm\basedrives\2008r2.vhd"
    2. compact vdisk
    3. create vdisk file="d:\vm\vs2008.vhd" parent="d:\vm\basedrives\2008r2.vhd"
  2. Run bcdedit /v and grab the {guid} of your existing VHD boot loader.
  3. Use BCDEDIT to replace the ‘device’ and ‘osdevice’ VHD paths.
    1. bcdedit /set {guid} device vhd=[LOCATE]\vm\vs2008.vhd
    2. bcdedit /set {guid} osdevice vhd=[LOCATE]\vm\vs2008.vhd
  4. Browse (using Windows Explorer, command window, etc) to your original, newly parent VHD (2008r2.vhd in this example) and mark it as read-only for safe keeping.
  5. Reboot and load up your new differencing disk.
Quick note:  In BCDEdit, the [LOCATE] tag is super—it allows the boot loader to FIND the location of the file rather than you specifying it.  This is great if your drive letters tend to bounce around (which they will… a bit).
Be aware that the previous note that your VHDs will expand to their full size remains.  You now, however, have the static size of your parent VHD and the “full size” of your new differencing disk (which inherits the parent’s maximum size).  If your parent is 8GB and the maximum size is 120GB, you’re now using 128GB, not 120GB.  Keep that in mind as you chain differencing disks. 🙂

Q: DVDs are annoying.  I can mount VHDs, why can’t I mount ISOs?

Who knows.  At least with Windows 7, we can actually BURN ISO images… much like 1999.  In either case, I recommend tossing SlySoft’s Virtual CloneDrive on your images (and your host).  It’s fast, mounts ISOs super easy, and saves a TON of time.

Fetching Nested Group Memberships in Active Directory

July 22, 2009 Comments off

As we’ve started using Active Directory more and more to provide single sign-on services for our web applications, group memberships have become more important.

We recently rolled out an application that took advantage of nesting groups (easier to add and manage five global groups than 10,000 individuals); however, our existing code to fetch memberships wouldn’t look at nested groups.

So if I was a member of “Student Achievement”, how could I parse the memberships of that group and determine if I was in “MIS”?

Thankfully, a bit of recursion does the trick… 🙂

As our infrastructure is entirely Windows Server 2003 and higher, I use the System.DirectoryServices.Protocols namespace and methods to connect to and parse out information from LDAP.  Because of this, I rely on SearchResult(s) rather than DirectoryEntries. 

In our environment, a “user” is defined as:

"(&(objectCategory=person)(objectClass=user)(mail=*)({0}={1}))"

Everything looks pretty plain except we require that a valid “user” have an email address.  That ensures we filter out junk/test accounts as only employees have Exchange accounts.

Groups are even easier:

"(objectCategory=group)"

If, say, I’ve queried for a single user, the groups property is populated simply by looking at the local user’s “memberOf” attribute.

private static IEnumerable<string> ParseGroupMemberships(SearchResultEntry result, int countOfGroups)
{
    for (int i = 0; i < countOfGroups; i++)
    {
        var fullGroupName = (string) result.Attributes["memberOf"][i];
        //Fully Qualified Distinguished Name looks like:
        //CN={GroupName},OU={AnOU},DC={domain},DC={suffix}
        //CN=DCI,OU=Groups,OU=Data Center,DC=usd259,DC=net
        int startGroupName = fullGroupName.IndexOf("=", 1);
        int endGroupName = fullGroupName.IndexOf(",", 1);
        if (startGroupName != -1)
        {
            string friendlyName =
                fullGroupName.Substring(startGroupName + 1, (endGroupName - startGroupName) - 1);
            yield return friendlyName;
        }
    }
}

That was fine for the primary groups (attached through memberOf); however, it didn’t look at the groups those groups were a “memberOf”. 🙂

After quite a bit of trial and error, the new method looks pretty ugly, but seems to be quite performant and reliable in tests.

private static IEnumerable<string> ParseGroupMemberships(
    SearchResultEntry result, int countOfGroups)
{
    var primaryGroups = new List<string>(countOfGroups);
    var allGroups = new List<string>();

    for (int index = 0; index < countOfGroups; index++)
    {
        primaryGroups.Add(result.Attributes[ldapGroupsAttribute][index].ToString());
        allGroups.Add(result.Attributes[ldapGroupsAttribute][index].ToString());
    }

    var connection = new ActiveDirectory().GetConnection();

    while (0 < primaryGroups.Count)
    {
        var searchRequest = new SearchRequest(distinguishedName,
                                              CreateFilterFromGroups(primaryGroups),
                                              SearchScope.Subtree,
                                              ldapGroupsAttribute);
        primaryGroups.Clear();

        var response = (SearchResponse)connection.SendRequest(searchRequest);
        if (response != null)
        {
            int entriesCount = response.Entries.Count;
            for (int entry = 0; entry < entriesCount; entry++)
            {
                DirectoryAttribute groupList =
                    response.Entries[entry].Attributes[ldapGroupsAttribute];

                if (groupList != null)
                {
                    int groupCount = groupList.Count;
                    for (int index = 0; index < groupCount; index++)
                    {
                        string dn = groupList[index].ToString();
                        if (!allGroups.Contains(dn))
                        {
                            allGroups.Add(dn);
                            primaryGroups.Add(dn);
                        }
                    }
                }
            }
        }
    }
    connection.Dispose();

    foreach (string dn in allGroups)
    {
        yield return GetFriendlyName(dn);
    }
}

Here’s a breakdown of the highlights:

var primaryGroups = new List<string>(countOfGroups);
var allGroups = new List<string>();

for (int index = 0; index < countOfGroups; index++)
{
    primaryGroups.Add(result.Attributes[ldapGroupsAttribute][index].ToString());
    allGroups.Add(result.Attributes[ldapGroupsAttribute][index].ToString());
}

This section takes the SearchResultEntry’s primary groups and adds each one of them to two lists.

  • The ‘primaryGroups’ list is exactly that—here’s a list of groups that we need to iterate over and find the nested groups. 
  • The ‘allGroups’ will hold our master list of every unique group and will provide our return value.

var searchRequest = new SearchRequest(distinguishedName,
                                      CreateFilterFromGroups(primaryGroups),
                                      SearchScope.Subtree,
                                      ldapGroupsAttribute);
primaryGroups.Clear();

This code formulates our LDAP search request. distinguishedName and ldapGroupsAttribute are two constants in my code base (for our domain’s DN and “memberOf”).  CreateFilterFromGroups takes the list of groups and concats them together—so we’re only looking at the groups we want, not everything.

Finally, we’re reusing our primaryGroups list to look for nested within nested… within nested, so clear that out—infinite loops hinder performance. 🙂

int entriesCount = response.Entries.Count;
for (int entry = 0; entry < entriesCount; entry++)
{
    DirectoryAttribute groupList =
        response.Entries[entry].Attributes[ldapGroupsAttribute];

    if (groupList != null)
    {
        int groupCount = groupList.Count;
        for (int index = 0; index < groupCount; index++)
        {
            string dn = groupList[index].ToString();
            if (!allGroups.Contains(dn))
            {
                allGroups.Add(dn);
                primaryGroups.Add(dn);
            }
        }
    }
}

Here’s our massive, disgusting block of if statements that populates the lists and keeps the while loop running as long as primaryGroups has a count > 0.

foreach (string dn in allGroups)
{
    yield return GetFriendlyName(dn);
}

Finally, use a helper method to convert the DN to a “friendly name” and return it to the caller (using yield since our method returns an IEnumerable<string>).

Running a quick test gives me:

UserAccount_Can_Get_Group_Memberships_With_Default_Security : Passed

Group count for David Longnecker is 138
Elapsed time for first query: 00:00:00.0420000

Wow, I’m in a lot of groups… O_o. The query is relatively quick (that is, with connection buildup and teardown time plus generating the rest of the user’s attributes), especially considering our AD infrastructure is far from optimal.

In addition, an LDAP query using ADUC gives the same results.

If nothing else, it’s consistent! 🙂

Performing SELECT.. WHERE IN using a Repository

June 8, 2009 Comments off

As I’ve discussed in the past, a few of my repository pattern practices are borrowed and built on the nice S#arp Architecture project.  Here’s another situation where I needed a bit more functionality.

Disclaimer:  If there’s a better way to do this—I’m all ears and emails. 🙂

By default, the FindAll method builds up the NHibernate criteria by iterating through a key/value pair.  Easy enough.

'Id, 12345' generates 'WHERE Id = 12345'.

But what happens when I want to do something with an array?

'Id, int[] {12345, 67890}' should generate 'WHERE Id IN (12345, 67890)'

Thankfully, the Restrictions class has an In method, but how can I add that flexibility to the FindAll method?

Here’s what the FindAll method looks like to start off:

public List<T> FindAll(IDictionary<string, object> propertyValuePairs)
{
    Check.Require(propertyValuePairs != null,
                  "propertyValuePairs was null or empty");
    Check.Require(propertyValuePairs.Count > 0,
                  "propertyValuePairs must contain at least one pair");

    var criteria = Session.CreateCriteria(typeof (T));
    propertyValuePairs
        .ForEach(x =>
                 criteria.Add(Restrictions.Eq(x.Key, x.Value)));

    return criteria.List<T>() as List<T>;
}

That’s nice.  Iterate through each, but assuming an Eq (Equals) relationship between the key and the value.

After a bit of dinking, checking to see if the object is a typeof(ICollection) seems to be the most reliable considering Restrictions.In(key,value) accepts Collections for the value parameter. 

This allows you to pass arrays, lists, and dictionaries.

public List<T> FindAll(IDictionary<string, object> propertyValuePairs)
{
    Check.Require(propertyValuePairs != null,
                  "propertyValuePairs was null or empty");
    Check.Require(propertyValuePairs.Count > 0,
                  "propertyValuePairs must contain at least one pair");

    ICriteria criteria = Session.CreateCriteria(typeof (T));

    propertyValuePairs
        .ForEach(x =>
                     {
                         if (x.Value.IsA<ICollection>())
                         {
                             // add WHERE key IN (value)
                             criteria.Add(Restrictions.In(x.Key, (ICollection) x.Value));
                         }
                         else
                         {
                             // add WHERE key = value
                             criteria.Add(Restrictions.Eq(x.Key, x.Value));
                         }
                     });
    return criteria.List<T>() as List<T>;
}

Here’s my (now) passing test that I used to test this logic as I built it:

[Fact]
public void can_find_students_by_array_of_student_ids()
{
    var studentsToFind = new int[] { 622100, 567944, 601466 };

    var criteria = new Dictionary<string, object>();
    criteria.Add("Id", studentsToFind);
    criteria.Add("Grade", "09");

    var sut = new StudentRepository();
    var students = sut.FindAll(criteria);

    students.Count.ShouldBeEqualTo(1);
    students.ShouldContainMatching(x => x.Id == 567944);
    students.ForEach(x =>
        Console.WriteLine("{0}, {1}".AsFormatFor(x.FullName, x.Id)));
}

Test Passed.  Woot.  The generated SQL is also nice and clean (really loving NHProf… though I trimmed out the excess columns for brevity).

SELECT this_.Id            as Id7_0_, [..]
       this_.Grade         as Grade7_0_, [..]
FROM   custom.student_lkup this_
WHERE  this_.Id in (622100 /* :p0 */,567944 /* :p1 */,601466 /* :p2 */)
       and this_.Grade = 09 /* :p3 */

Categories: .net 3.5, c#, Microsoft, NHibernate, SQL

Comparing Google and Bing

June 1, 2009 Comments off

Bing, the latest iteration of Windows Live Search, is now available for use and has been getting quite a few rave reviews.  Miguel Carrasco wrote a lengthy post discussing the benefits of Bing over Google; however, maybe I’m old fashioned, but I see a lot of the hype as overhead.

Not to nitpick, but searching DIRECTLY for product names will return advertisements and sponsored sales information—no matter the search engine; that’s exactly what Miguel’s post does (searching for a Nikon D60 camera).  Using my new fanboy item, the Palm Pre, as an example, I can see that the Google page actually has FEWER (real estate) “advertisements” than the Bing search.

Bing has a whopping 6 sponsored links… along with something called Bing cashback that I haven’t really read up on yet.  That, along with the lost space on the left, seems like a lot of waste.

Bing

Google drops off a bit of space for the 3 sponsored links, but keeps it thin and uses its integration with YouTube to show previews and group the video results to the top.

BUT… how much does that matter?  If Google’s page takes a moment more to load, do the background graphics, fancy headers, and AJAX postbacks on the Bing page make up that difference?

“Previews” in Google and Bing

An exciting point with Bing is the content previews.  Hover over a search result and it provides a bit more information.  This, however, has been in Google’s search for quite a while if you turn on the functionality.

I’ll admit, I like the hover effect.  Clean, renders relatively quickly (noticed a bit of delay on some pages, but not too bad), and seems to parse content well.  But is it ground breaking?  Nah, not really.

Google’s had it for a while, though I’ll admit, I rarely use it (I used it more while doing heavy research in school).

The difference is that Google’s requires you to turn it on.  If you haven’t used it, the dynamic filtering is FANTASTIC (and provides that handy “left side bar” that Bing is raving about).  Remember—Google was originally focused on extremely streamlined results—”more text” and AJAX postbacks would be considered evil. 😉

If you really do need more images, using Google’s “Images on the page” provides a cool look at what graphics are on the page.

Are product searches always what we do?  Nah!

How often do we actually search for a product name—especially in the workplace?  When I’m looking for my Pre, sure, but 90% of the day is spent searching for error messages, code snippets, and forum messages.  Face it, I JFGI it all day long. I encourage coworkers to JFGI. Etc.

So how do Bing and Google stack up when searching for a less commercial, more technical request?  I’ve recently dug into db4o and needed to dig up some ideas for id generation.  I remember reading a blog post about it recently, but couldn’t remember who wrote it.

Bing returned an interesting result set… but didn’t turn up (in the first 10) what I was looking for…

Nearly all of the results were root domains or directories—nothing real specific (blog posts, forum posts, etc.) from the titles.  Since the cool Bing “sorting and grouping” doesn’t appear to apply to everything quite yet (not sure if that’s context-based or a newness/lack of indexing), I was left with “all results”.

On the other hand, Google appears to put more priority on the title of the page.  Notice on the Bing search results, “Id Generation” didn’t appear in ANY of the page titles whereas they’re in all of the titles of the Google results.

The title doesn’t mean these are the BEST results; however, with news stories, blog posts, etc—the takeaway point will usually be in the title. 

The blog post I wanted showed up second in the Google results.  The Bing results referenced Tuna’s blog in a few places; however, they didn’t actually reference the post for “Id Generation in db4o”.  The one time it WAS mentioned (the 4th result in Bing), it was a stale link to the front page of Tuna’s blog—not the actual post.

Conclusion

The selling point to Google for many years now has been twofold:

1) Extremely clean, fast user interface
2) Reliable and relevant results

I moved away from AltaVista, Yahoo, and other engines years ago because they wanted to be ‘cute’ and while Bing has potential, it’s already too cute for my tastes without any real benefit.

It’ll be interesting to see how Bing grows—and how it affects Google.  It’s too early, however, to say that Bing is this saving grace—especially if you’ve never dug into all of the features of Google.

Configuring Oracle SQL Developer for Windows 7

I’m fully Vista-free now and loving it; however, Oracle SQL Developer has (once again) decided to simply be annoying to configure.

<rant>Yes, I realize Oracle hates Microsoft.  Girls, you’re both pretty—please grow up.</rant>

Anyway, after a bit of hunting, I think I’ve found a good mix for those of us who love SQL Developer for coding and testing, but don’t “use” a lot of the Oracle proprietary junk features that come with it.

Originally, I was going to include a couple of the configuration files; however, they’re spread out EVERYWHERE and, frankly, I can’t find them all. 😦  I also can’t figure out a way to import settings (without blowing my settings away first).

File Paths

As I mentioned before, some of the configuration files are spread out—everywhere.  Here’s where the most common files are located.

sqldeveloper.conf – <sqldeveloper folder>\sqldeveloper\bin\

ide.conf – <sqldeveloper folder>\ide\bin\

User-Configured Settings – %appdata%\SQL Developer\

I prefer to make my modifications in sqldeveloper.conf; however, a lot of the resources that pop up on Google make them in ide.conf.  Why?  I’m not sure.  sqldeveloper.conf simply CALLS ide.conf.  Meh.
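If you crack sqldeveloper.conf open, the hand-off is just an include directive, something along these lines (the exact relative path may vary by version):

IncludeConfFile  ../../ide/bin/ide.conf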

Fixing Memory Consumption on Minimize

I found a reference to JDeveloper (another Java utility) that discussed how JDeveloper (and similarly, SQL Developer) pages everything when you minimize the window.

To fix this, open up sqldeveloper.conf and add the following line:

AddVMOption -Dsun.awt.keepWorkingSetOnMinimize=true

Fixing Aero Basic Theme

Tired of your IDE swapping back to Aero Basic whenever you launch SQL Developer?  Me too.  For now, Oracle reports that SQL Developer doesn’t support the full Aero Theme… or does it?

To enable Aero support (or at least keep it from bouncing back to Aero Basic), open up sqldeveloper.conf and add the following line:

AddVMOption -Dsun.java2d.noddraw=true

The Oracle forums also recommend trying the following line:

AddVMOption -Dsun.java2d.ddoffscreen=false

That option, however, never resolved the issue for me.  Your mileage may vary.

Cleaning Up The UI

The default UI leaves a lot to be desired for Oracle SQL Developer.  Here’s a few UI tips to try out.  These settings are found under Tools > Preferences.

Change the Theme – Environment > Theme. 

I like Experience Blue.  It’s clean, simple, and goes well with Windows 7’s look and feel.

Change Fonts – Code Editor > …

There are quite a few fonts that can be configured.  Here’s what I changed:

Code Insight – Segoe UI, 12
Display – check ‘Enable Text Anti-Aliasing’
Fonts – Consolas, 11
Printing – Consolas, 10

Disable Unnecessary Extensions – Extensions > …

Honestly, I don’t use ANY of the extensions, so I disabled everything as well as unchecking ‘Automatically Check for Updates’.  I’ve noticed that load time for the UI is insanely fast now (well, insanely fast for a Java app on Windows).

Window Locations

The only thing that I can’t figure out how to fix is the window location and placement.  Example: When you open a new worksheet, the results area is not visible (you have to drag that frame up each time).  That annoys me to no end and I can’t find a place to ‘save current window layout’ or similar.  Ideas?

That’s it!

With that, SQL Developer loads up quickly, connects, and displays just fine in Windows 7.