
Archive for the ‘.net 3.5’ Category

ASP.NET Development Server From ‘Here’ in PowerShell

September 9, 2009

Long title… almost longer than the code.

I used to have an old registry setting that started up the ASP.NET Development Server from the current path; however, since I rarely open up Explorer—and opening up Firefox from there was even more painful—I needed a script.

What does it do?

The script starts up the ASP.NET Development Server with a random port (so you can run multiples…) at your current location.  It then activates your machine's DEFAULT BROWSER and browses to the site.  Firefox user?  No problem.  Works like a champ!

The Script (Full Code)

$path = resolve-path .
$rand = New-Object System.Random
$port = $rand.Next(2048, 10240)
$path_to_devserver = "C:\Program Files (x86)\Common Files\microsoft shared\DevServer\9.0\WebDev.WebServer.exe"

& $path_to_devserver /port:$port /path:$path
(New-Object -com Shell.Application).ShellExecute("http://localhost:$port")

The $path_to_devserver can be updated—depending on 64-bit vs. 32-bit machines.  Simple, easy, and to the point.  Now, no more fumbling around to launch a quick web application!

Ramping up with PSake

September 8, 2009

I've been teetering back and forth between PSake and my trusty NAnt scripts for quite a while now.  For those not familiar with PSake, it's build automation that makes you drunk—but in a good way. 😉  You can read James Kovacs' original post here or check out the repository here for the latest bits.

I originally looked at rake scripts (after exposure from working with Fluent NHibernate) since PowerShell is loathed in our organization—or was.  That mindset is slowly changing (being able to show people how to crank out what was originally scoped at a week in two lines of PowerShell script helps); so I'm using PSake as further motivation.

My prior PSake scripts were a bit tame.  Launch msbuild, copy a few files.  With the latest release of xUnit 1.5 hitting the wires over the weekend (and a much needed x86 version for my poor, cranky Oracle libraries), I decided to bite the bullet and dig into PSake.

I had two goals:

  1. Build a reliable framework “default.ps1” file that I could drop into almost any project and configure with little or no effort.
  2. Compile, test, and rollout updates from a single PSake command task.

I borrowed the basic layout from Ayende’s Rhino Mocks PSake; however, I couldn’t get msbuild to run correctly simply by calling it.

Here's what I ended up with for our internal core library.  The core library isn't so much a "utilities" container as it is exactly what it sounds like—the framework all of our applications are built on, keeping connections to our various systems (HR, student systems, data warehouses, etc.) consistent and holding our base FNH conventions.

CODE: Full code available on CodePaste.NET

Properties

The properties area holds all of the configuration for the PSake script.  For me, it's common to configure $solution_name, $libraries_to_merge, and $libraries_to_copy.  With our naming standards, the $test_library should be left unchanged.  I also added in the tester information so we could change from xUnit to MbUnit (if Hell froze over or something).

properties {
  # ****************  CONFIGURE ****************
  $solution_name =         "Framework"
  $test_library =          "$solution_name.Test.dll"

  $libraries_to_merge =    "antlr3.runtime.dll", `
                           "ajaxcontroltoolkit.dll", `
                           "Castle.DynamicProxy2.dll", `
                           "Castle.Core.dll", `
                           "FluentNHibernate.dll", `
                           "log4net.dll", `
                           "system.linq.dynamic.dll", `
                           "xunit.dll", `
                           "nhibernate.caches.syscache2.dll", `
                           "cssfriendly.dll", `
                           "iesi.collections.dll", `
                           "nhibernate.bytecode.castle.dll", `
                           "oracle.dataaccess.dll"

  $libraries_to_copy =     "system.data.sqlite.dll"

  $tester_directory =      "j:\shared_libraries\xunit\msbuild"
  $tester_executable =     "xunit.console.x86.exe"
  $tools_directory =       "$tools"
  $base_directory =        resolve-path .
  $thirdparty_directory =  "$base_directory\thirdparty"
  $build_directory =       "$base_directory\build"
  $solution_file =         "$base_directory\$solution_name.sln"
  $release_directory =     "$base_directory\release"
}

Clean and easy enough.  You'll notice that $libraries_to_merge and $libraries_to_copy are implicit string arrays.  That works out well since a string array is expanded into separate parameters when passed to a command… and $libraries_to_copy can be iterated over later in the code.

Tasks – Default

task default -depends Release

The default task (if just running ‘psake’ without parameters) runs Release.  Easy enough.

Tasks – Clean

task Clean {
  remove-item -force -recurse $build_directory -ErrorAction SilentlyContinue | Out-Null
  remove-item -force -recurse $release_directory -ErrorAction SilentlyContinue | Out-Null
}

Clean up those build and release directories.

Tasks – Init

task Init -depends Clean {
  new-item $release_directory -itemType directory | Out-Null
  new-item $build_directory -itemType directory | Out-Null
  cp $tester_directory\*.* $build_directory
}

Restore those build and release directories that we cleaned up; then copy in our unit testing framework so we can run our tests (if necessary).

Tasks – Compile

task Compile -depends Init {
  # from http://poshcode.org/1050 (first lines to get latest versions)
  [System.Reflection.Assembly]::Load('Microsoft.Build.Utilities.v3.5, Version=3.5.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a') | Out-Null
  $msbuild = [Microsoft.Build.Utilities.ToolLocationHelper]::GetPathToDotNetFrameworkFile("msbuild.exe", "VersionLatest")

  # adding double slash for directories with spaces. Stupid MSBuild.
  &$msbuild /verbosity:minimal /p:Configuration="Release" /p:Platform="Any CPU" /p:OutDir="$build_directory"\\ "$solution_file"
}

Compile is a bit tricky.  As noted in the code, I ended up using a SharePoint example from PoshCode to get MSBuild to behave.  The standard exec methodology provided by PSake kept ignoring my parameters.  Maybe someone has a good reason why… but this works.

You also see that my OutDir has TWO slashes.  It seems that directories with spaces require the second.  I'm sure this will somehow bite me later on, but it seems to be working for now. 😉

Tasks – Test

task Test -depends Compile {
  $origin_directory = pwd
  cd $build_directory
  exec .\$tester_executable "$build_directory\$test_library"
  cd $origin_directory
}

I want to thank Ayende for the idea of stashing the origin directory in a variable—brilliant.  This one is pretty simple—it just calls the tester and runs the tests.

Tasks – Merge

task Merge {
  $origin_directory = pwd
  cd $build_directory

  remove-item "$solution_name.merge.dll" -erroraction SilentlyContinue
  rename-item "$solution_name.dll" "$solution_name.merge.dll"

  & $tools\ilmerge\ilmerge.exe /out:"$solution_name.dll" /t:library /xmldocs /log:"$solution_name.merge.log" `
        "$solution_name.merge.dll" $libraries_to_merge

  if ($lastExitCode -ne 0) {
    throw "Error: Failed to merge assemblies!"
  }
  cd $origin_directory
}

Merge calls ILMerge and wraps all of my libraries into one.  Do I need to do this?  Nah, but for the framework, I prefer to keep everything together.  I don’t want to be chasing mis-versioned libraries around.  Again, since $libraries_to_merge is a string array, it passes each “string” as a separate parameter—which is exactly what ILMerge wants to see.

I also have ILMerge generate and keep a log of what it did—just to have.  Since the build directory gets blown away between builds (and isn't replicated to source control), there's no harm.  Space is mostly free. 😉

Tasks – Build & Release

task Build -depends Compile, Merge {
  # When I REALLY don't want to test…
}

task Release -depends Test, Merge {
  copy-item $build_directory\$solution_name.dll $release_directory
  copy-item $build_directory\$solution_name.xml $release_directory

  # copy libraries that cannot be merged
  $libraries_to_copy | % { copy-item (join-path $build_directory $_) $release_directory }
}

Build provides just that—building with no testing and no copying to the release directory.  This is more for testing out the scripts, but useful in some cases.

Release copies the library and the XML documentation out to the release directory.  It then iterates through the string array of "other" libraries (unmanaged libraries that can't be merged, etc.) and copies them as well.

Using RedGate ANTS to Profile XUnit Tests

August 5, 2009

RedGate’s ANTS Performance and Memory profilers can do some pretty slick testing, so why not automate it?  The “theory” is that if my coverage is hitting all the high points, I’m profiling all the high points and can see bottlenecks.

So, how does this work?  Since the tests are in a compiled library, I can’t just “load” the unit tests. However, you can load Xunit and run the tests.

NOTE: If you're profiling x86 libraries on an x64 machine, you'll need xUnit 1.5 CTP (or later), which includes xunit.console.x86.exe.  If you're on x86 or do not call x86 libraries, pay no attention to this notice. 😉

To begin, start up ANTS Performance Profiler and Profile a New .NET Executable.

XUnit ala ANTS Profiler

For the .NET Executable, point it towards XUnit and in the Arguments, point it towards the library you are testing.  Simple enough.

Click “Start Profiling” and let the profiling begin!

Now if I could just get the “top 10” methods to export to HTML or something so I could automate this in our reporting.

Fetching Nested Group Memberships in Active Directory

July 22, 2009

As we’ve started using Active Directory more and more to provide single sign-on services for our web applications, group memberships have become more important.

We recently rolled out an application that took advantage of nesting groups (easier to add and manage five global groups than 10,000 individuals); however, our existing code to fetch memberships wouldn’t look at nested groups.

So if I was a member of “Student Achievement”, how could I parse the memberships of that group and determine if I was in “MIS”?

Thankfully, a bit of recursion does the trick… 🙂

As our infrastructure is entirely Windows Server 2003 and higher, I use the System.DirectoryServices.Protocols namespace and methods to connect to and parse out information from LDAP.  Because of this, I rely on SearchResult(s) rather than DirectoryEntries. 
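For those who haven't used the Protocols namespace, the basic connect-and-search pattern looks something like this sketch (the server name and search base below are placeholders, not our environment):

using System.DirectoryServices.Protocols;

using (var connection = new LdapConnection("dc01.example.net"))   // placeholder server
{
    var request = new SearchRequest("DC=example,DC=net",          // placeholder search base
                                    "(objectCategory=group)",
                                    SearchScope.Subtree,
                                    "memberOf");                  // attributes to return

    var response = (SearchResponse) connection.SendRequest(request);
    foreach (SearchResultEntry entry in response.Entries)
    {
        // each entry exposes the requested attributes via entry.Attributes
    }
}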

In our environment, a “user” is defined as:

"(&(objectCategory=person)(objectClass=user)(mail=*)({0}={1}))"

Everything looks pretty plain except we require that a valid “user” have an email address.  That ensures we filter out junk/test accounts as only employees have Exchange accounts.
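Filling the {0}={1} placeholders is a simple string.Format call; the attribute and value below are made up for illustration:

var userFilter = string.Format(
    "(&(objectCategory=person)(objectClass=user)(mail=*)({0}={1}))",
    "sAMAccountName",   // hypothetical attribute to search by
    "jdoe");            // hypothetical value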

Groups are even easier:

"(objectCategory=group)"

If, say, I've queried for a single user, the groups property is populated simply by looking at the local user's "memberOf" attribute.

private static IEnumerable<string> ParseGroupMemberships(SearchResultEntry result, int countOfGroups)
{
    for (int i = 0; i < countOfGroups; i++)
    {
        var fullGroupName = (string) result.Attributes["memberOf"][i];
        // Fully Qualified Distinguished Name looks like:
        // CN={GroupName},OU={AnOU},DC={domain},DC={suffix}
        // CN=DCI,OU=Groups,OU=Data Center,DC=usd259,DC=net
        int startGroupName = fullGroupName.IndexOf("=", 1);
        int endGroupName = fullGroupName.IndexOf(",", 1);
        if (startGroupName != -1)
        {
            string friendlyName =
                fullGroupName.Substring(startGroupName + 1, (endGroupName - startGroupName) - 1);
            yield return friendlyName;
        }
    }
}

That was fine for the primary groups (attached through memberOf); however, it didn't look at the groups those groups were a "memberOf". 🙂

After quite a bit of trial and error, the new method looks pretty ugly, but it has proven quite performant and reliable in tests.

private static IEnumerable<string> ParseGroupMemberships(
    SearchResultEntry result, int countOfGroups)
{
    var primaryGroups = new List<string>(countOfGroups);
    var allGroups = new List<string>();

    for (int index = 0; index < countOfGroups; index++)
    {
        primaryGroups.Add(result.Attributes[ldapGroupsAttribute][index].ToString());
        allGroups.Add(result.Attributes[ldapGroupsAttribute][index].ToString());
    }

    var connection = new ActiveDirectory().GetConnection();

    while (0 < primaryGroups.Count)
    {
        var searchRequest = new SearchRequest(distinguishedName,
                                              CreateFilterFromGroups(primaryGroups),
                                              SearchScope.Subtree,
                                              ldapGroupsAttribute);
        primaryGroups.Clear();

        var response = (SearchResponse)connection.SendRequest(searchRequest);
        if (response != null)
        {
            int entriesCount = response.Entries.Count;
            for (int entry = 0; entry < entriesCount; entry++)
            {
                DirectoryAttribute groupList =
                    response.Entries[entry].Attributes[ldapGroupsAttribute];

                if (groupList != null)
                {
                    int groupCount = groupList.Count;
                    for (int index = 0; index < groupCount; index++)
                    {
                        string dn = groupList[index].ToString();
                        if (!allGroups.Contains(dn))
                        {
                            allGroups.Add(dn);
                            primaryGroups.Add(dn);
                        }
                    }
                }
            }
        }
    }
    connection.Dispose();

    foreach (string dn in allGroups)
    {
        yield return GetFriendlyName(dn);
    }
}

Here’s a breakdown of the highlights:

var primaryGroups = new List<string>(countOfGroups);
var allGroups = new List<string>();

for (int index = 0; index < countOfGroups; index++)
{
    primaryGroups.Add(result.Attributes[ldapGroupsAttribute][index].ToString());
    allGroups.Add(result.Attributes[ldapGroupsAttribute][index].ToString());
}

This section takes the SearchResultEntry’s primary groups and adds each one of them to two lists.

  • The ‘primaryGroups’ list is exactly that—the list of groups we still need to iterate over to find nested groups.
  • The ‘allGroups’ will hold our master list of every unique group and will provide our return value.

var searchRequest = new SearchRequest(distinguishedName,
                                      CreateFilterFromGroups(primaryGroups),
                                      SearchScope.Subtree,
                                      ldapGroupsAttribute);
primaryGroups.Clear();

This code formulates our LDAP search request.  distinguishedName and ldapGroupsAttribute are two constants in my code base (our domain's DN and "memberOf").  CreateFilterFromGroups takes the list of groups and concatenates them into a single filter—so we're only looking at the groups we want, not everything.  A sketch of that helper follows.
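The helper itself isn't shown here, but a rough sketch of what it does, ORing the group DNs together under the group filter, might look like this (hypothetical implementation, not the exact code):

private static string CreateFilterFromGroups(IEnumerable<string> groupDns)
{
    // builds (&(objectCategory=group)(|(distinguishedName=...)(distinguishedName=...)))
    var filter = new System.Text.StringBuilder("(&(objectCategory=group)(|");
    foreach (string dn in groupDns)
    {
        // production code should escape special characters per RFC 4515
        filter.AppendFormat("(distinguishedName={0})", dn);
    }
    return filter.Append("))").ToString();
}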

Finally, we're reusing our primaryGroups list to look for nested within nested… within nested, so clear that out—infinite loops hinder performance. 🙂

int entriesCount = response.Entries.Count;
for (int entry = 0; entry < entriesCount; entry++)
{
    DirectoryAttribute groupList =
        response.Entries[entry].Attributes[ldapGroupsAttribute];

    if (groupList != null)
    {
        int groupCount = groupList.Count;
        for (int index = 0; index < groupCount; index++)
        {
            string dn = groupList[index].ToString();
            if (!allGroups.Contains(dn))
            {
                allGroups.Add(dn);
                primaryGroups.Add(dn);
            }
        }
    }
}

Here's our massive, disgusting block of if statements that populates the lists and keeps the while loop running as long as primaryGroups has a count greater than zero.

foreach (string dn in allGroups)
{
    yield return GetFriendlyName(dn);
}

Finally, use a helper method to convert each DN to a "friendly name" and return it to the caller (using yield, since our method returns an IEnumerable<string>).  A sketch of that helper is below.
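GetFriendlyName follows the same substring logic as the first version of the method; assume something along these lines (a sketch, not the exact helper):

private static string GetFriendlyName(string distinguishedName)
{
    // CN=DCI,OU=Groups,OU=Data Center,DC=usd259,DC=net -> "DCI"
    int start = distinguishedName.IndexOf('=') + 1;
    int end = distinguishedName.IndexOf(',');
    return end > start
               ? distinguishedName.Substring(start, end - start)
               : distinguishedName.Substring(start);
}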

Running a quick test gives me:

UserAccount_Can_Get_Group_Memberships_With_Default_Security : Passed

Group count for David Longnecker is 138
Elapsed time for first query: 00:00:00.0420000

Wow, I'm in a lot of groups… O_o.  The query is relatively quick (that is, with connection buildup and teardown time and generating the rest of the user's attributes), especially considering our AD infrastructure is far from optimal.

In addition, an LDAP query using ADUC gives the same results.

If nothing else, it's consistent! 🙂

Filtering an Enum by Attribute

July 9, 2009

I had a curve ball thrown at me this morning—changing requirements.  It happens and was easily solved by a couple of custom attributes and a helper method.

UPDATE: I've updated the code (and explanation) for FilterEnumWithAttributeOf below to tidy it up a bit.

In our current project, there is an enum of standard, static “periods” (times of days students are in school).  Easy enough.

BeforeSchool = 0,
FirstPeriod = 1,
SecondPeriod = 2,
etc.

But what happens if we want to “query” our list down a bit… say a certain group only wanted a subset of the “periods”.

I could create an entirely different Enum — Group1Period and Group2Period.

But then FluentNHibernate's automapping would get freaked out by the Period property.

So, what about a custom attribute?

  1. I can assign multiple custom attributes to the same Enum field so I can be in Group1 and Group2 at the same time.
  2. I can keep the same Enum “Period” for my ORM layer.
  3. Now how do I query it down…?

Here’s an abstracted example of how the enum looks right now:

public enum Period
{
    [Elementary][Secondary]
    [Description("Before School")]
    BeforeSchool = 0,

    [Elementary]
    Homeroom = 12,

    [Secondary]
    [Description("1st")]
    First = 1,
}

Elementary and Secondary (our two groups, in this case) are “logicless” attributes (I’m just looking at them as flags, not passing/storing information).

[AttributeUsage(AttributeTargets.Field)]
public class ElementaryAttribute : Attribute
{
}

[AttributeUsage(AttributeTargets.Field)]
public class SecondaryAttribute : Attribute
{
}

Now, to filter out those pesky periods based on the attributes.

Update:

Old Code!

public IEnumerable<TEnum> FilterEnumWithAttributeOf<TEnum, TAttribute>()
{
    foreach (var field in
        typeof (TEnum).GetFields(BindingFlags.GetField |
                                 BindingFlags.Public |
                                 BindingFlags.Static))
    {
        foreach (var attribute in
            field.GetCustomAttributes(typeof (TAttribute), false))
        {
            yield return (TEnum) field.GetValue(null);
        }
    }
}

New Code!

public static IEnumerable<TEnum> FilterEnumWithAttributeOf<TEnum, TAttribute>()
    where TEnum : struct
    where TAttribute : class
{
    foreach (var field in
        typeof(TEnum).GetFields(BindingFlags.GetField |
                                BindingFlags.Public |
                                BindingFlags.Static))
    {
        if (field.GetCustomAttributes(typeof(TAttribute), false).Length > 0)
            yield return (TEnum)field.GetValue(null);
    }
}

Why new code?

Well, after looking over the code, I don’t need to iterate through each attribute, simply see if the field contains it (Length > 0).  If it does, then return it.  That cuts a loop out of our code and performs the same function.  I also added two generic constraints.  You can’t constrain by Enum, but struct works well.

I'm passing in two generics in this case—TEnum, the type of the Enum, and TAttribute, the type of the attribute.  Yeah, I realize that my creativity in naming is pretty low.  Work with me here, alright? 😉

Past that, the loops are pretty easy.

  1. Loop through each field of the enumeration.  Return the field (GetField) and be sure to check Public and Static fields.
  2. Loop through each custom attribute on each field (returned by GetField) and only return the fields that match the type of our attribute.  I pass along the false parameter (do not inherit) because I’m not interested in inherited attributes. You could leave this as true. YMMV.
  3. If the field's attributes contain our type, yield out the actual Enum value (a string of the field name isn't as useful).

Now, for using it…

var enums = FilterEnumWithAttributeOf<Period, ElementaryAttribute>();

foreach (var period in enums)
{
    Console.WriteLine("{0}, {1}".AsFormatFor(period, (int)period));
}

Easy enough.  ElementaryAttribute returns:

BeforeSchool, 0
Homeroom, 12
AfterSchool, 10
etc..

Running the same code, but asking for SecondaryAttribute returns:

BeforeSchool, 0
First, 1
Second, 2
etc..

Sweet.


AutoMappings in NHibernate – A Quick Runthrough

June 26, 2009

For most of my projects, at least since I’ve moved to NHibernate/Fluent NHibernate, I’ve been trapped using the existing data structures of prior iterations.  Funky naming conventions (many due to cross-cultural, international column naming), missing data relationships, and general craziness.

Having used fluent mappings (creating a class that implements ClassMap<objectType>) in the past, I found them a huge jump up from writing painful data objects, connecting them together, and recreating the wheel with "SELECT {column} FROM {table}" code.  Create a map, use the fluent methods to match column to property, and away you go.

In a recent project, I’ve had the opportunity to build a new system from the ground up.  With this, I decided to dive head first into using the AutoMappings functionality of Fluent NHibernate. 

This post is somewhat a rambling self-discussion of my explorations with AutoMappings.

What are AutoMappings?

The FluentNHibernate wiki provides a simple definition:

[…] which is a mechanism for automatically mapping all your entities based on a set of conventions.

Rather than hand-mapping each column to a property, we create conventions (rules) to map those… automatically.  Hey look, auto…mappings. 😉

How?

Using the same fluent language, configuring AutoMapping is an exercise in implementing conventions for the logical naming and handling of data.

Fluently
    .Configure()
    .Database(MsSqlConfiguration.MsSql2005
                  .ConnectionString(cs => cs
                                        .Server("server")
                                        .Database("db")
                                        .Username("user")
                                        .Password("password")
                  )
                  .UseReflectionOptimizer()
                  .UseOuterJoin()
                  .AdoNetBatchSize(10)
                  .DefaultSchema("dbo")
                  .ShowSql()
    )
    .ExposeConfiguration(raw =>
                             {
                                 // Testing/NHibernate Profiler stuffs.
                                 raw.SetProperty("generate_statistics", "true");
                                 RebuildSchema(raw);
                             })
    .Mappings(m =>
              m.AutoMappings.Add(AutoPersistenceModel
                                     .MapEntitiesFromAssemblyOf<Walkthrough>()
                                     .ConventionDiscovery.Setup(c =>
                                                                    {
                                                                        c.Add<EnumMappingConvention>();
                                                                        c.Add<ReferencesConvention>();
                                                                        c.Add<HasManyConvention>();
                                                                        c.Add<ClassMappingConvention>();
                                                                    })
                                     .WithSetup(c => c.IsBaseType = type => type == typeof (Entity)))
                  .ExportTo(@".\")
    );

As you can see above, the only difference from a fluent mappings configuration is in the actual Mappings area.  Good deal!  That helps ensure my existing work using fluent mappings could translate without too much headache.

I've specified four conventions.  Each of these conventions implements an interface that provides the methods needed to ensure your rules are applied to the correct objects.

EnumMappingConvention

internal class EnumMappingConvention : IUserTypeConvention
{
    public bool Accept(IProperty target)
    {
        return target.PropertyType.IsEnum;
    }

    public void Apply(IProperty target)
    {
        target.CustomTypeIs(target.PropertyType);
    }

    public bool Accept(Type type)
    {
        return type.IsEnum;
    }
}

The great thing about these methods is they’re fluent enough to translate to English.

Accept… targets where the property type is an enumeration.

Apply… to the target that the “Custom Type Is” the property type of the target.
  NOTE: This translates from a ClassMap into: Map(x => x.MyEnumFlag).CustomTypeIs(typeof(MyEnum));

Accept… a type that is an enumeration.

ReferencesConvention

The Reference convention handles those reference relationships between our classes (and the foreign keys).

internal class ReferencesConvention : IReferenceConvention
{
    public bool Accept(IManyToOnePart target)
    {
        return string.IsNullOrEmpty(target.GetColumnName());
    }

    public void Apply(IManyToOnePart target)
    {
        target.ColumnName(target.Property.Name + "Id");
    }
}

The most important part here is enforcing how your foreign keys are going to be named.  I prefer the simple {Object}Id format.

Car.Battery on the object side and [Car].[BatteryId] on the database side.

HasManyConvention

The HasManys are our lists, bags, and collections of objects.

internal class HasManyConvention : IHasManyConvention
{
    public bool Accept(IOneToManyPart target)
    {
        return target.KeyColumnNames.List().Count == 0;
    }

    public void Apply(IOneToManyPart target)
    {
        target.KeyColumnNames.Add(target.EntityType.Name + "Id");
        target.Cascade.AllDeleteOrphan();
        target.Inverse();
    }
}

We want to make sure that we haven’t added any other key columns (the Count == 0), and then apply both the naming convention as well as a few properties.

Cascade.AllDeleteOrphan() and Inverse() allow our parent objects (Car) to add new child objects (Car.Battery (Battery), Car.Accessories (IList<Accessory>)) without our having to save them separately.

ClassMappingConvention

Finally, the important class mapping.  This convention ensures that our tables are named properly, with pluralization.

public class ClassMappingConvention : IClassConvention
{
    public bool Accept(IClassMap target)
    {
        return true; // everything
    }

    public void Apply(IClassMap target)
    {
        target.WithTable(PluralOf(target.EntityType.Name));
    }
}

I’m using a pluralization method from one of my base libraries that I borrowed from Hudson Akridge.  This helper method works really well and I don’t need to add additional references and libraries into my application just to handle the table names.

public static string PluralOf(string text)
{
    var pluralString = text;
    var lastCharacter = pluralString.Substring(pluralString.Length - 1).ToLower();

    // y's become ies (such as Category to Categories)
    if (string.Equals(lastCharacter, "y", StringComparison.InvariantCultureIgnoreCase))
    {
        pluralString = pluralString.Remove(pluralString.Length - 1);
        pluralString += "ie";
    }

    // ch's become ches (such as Pirch to Pirches)
    if (string.Equals(pluralString.Substring(pluralString.Length - 2), "ch",
                      StringComparison.InvariantCultureIgnoreCase))
    {
        pluralString += "e";
    }

    switch (lastCharacter)
    {
        case "s":
            return pluralString + "es";
        default:
            return pluralString + "s";
    }
}

Save and build.  The schema export (the RebuildSchema call in the configuration above) will generate the SQL and/or regenerate the database based on the specifications you've provided, and you're ready to hit the ground running!  A sketch of what that helper might look like follows.
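RebuildSchema isn't shown in this post; a minimal sketch of such a helper, using NHibernate's SchemaExport (an assumption on my part about the original), would be:

private static void RebuildSchema(NHibernate.Cfg.Configuration config)
{
    // first flag scripts the generated SQL to the console; second executes it against the database
    new NHibernate.Tool.hbm2ddl.SchemaExport(config).Create(true, true);
}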


Rendering the Web Equally on Mobile Devices

June 26, 2009

I’ve been digging through the Interwebs for a while now and, I thought, had worked out all of the “kinks” of rendering on a mobile device—specifically iPhones.

The special ‘viewport’ meta tag means the world to iDevices.

<meta name="viewport" content="width=device-width" />

I’m faced with a new challenge—the Palm Pre’s built-in web browser.  My shiny new phone is great, but it isn’t without glitches.

The first glitch I’ve found appears to be a DNS issue— http://myserver/web won’t resolve; however, http://123.45.67.89/web will.  It seems to be touchy.  Most of our webs work just fine, others don’t.  I haven’t narrowed it down to a single server or architecture as it seems to be a bit of everything.  Wonky.

The next glitch is more important—the rendering.  One of our tools is a simple form-based tool that looks great on the iPhone; however, on the Pre it renders only part of the screen and "garbles" when you move around.

Palm Pre:

Garbled image

iTouch/iPhone:

I've also found that anything in an ASP.NET UpdatePanel (like those Select buttons) is unusable.  Other webs I've used (Bank of America, etc.) use AJAX just fine, so I don't think it's that—probably a coding issue I need to dig into and resolve.

UPDATE: Explicitly adding LoadScriptsBeforeUI="true" to the ASP.NET ScriptManager seems to help with this… a little.

Anyone else worked specifically with the Pre devices and rendering?  I'd appreciate any meta tags or layout ideas that worked. 🙂  The Pre isn't a common device in our organization—yet.

Benchmarks : Comparing LINQ to NHibernate Transforms/Grouping

Yesterday, I wrote about setting up NHibernate to query, group, count, and transform results and return them to a control.  Why did I go to such effort?  Well, the original LINQ query that refined the results didn't perform up to par.  As some may note, premature optimization is never a good practice, so I needed some stats to back up the claims.

Overnight, I wrote up a quick test to query up both ways and benchmark the results.  Here’s what I found.

The “test”:

public void TEMP_quick_compare_of_linq_to_nhibernate()
{
    var schoolId = 120;

    var benchmark = new Benchmark();
    using (var repository = new IncidentRepository())
    {
        benchmark.Start();
        var resultsFromLinq =
            repository.GetCountByIncidentCodeWithLinq(schoolId);
        foreach (var item in resultsFromLinq)
        {
            Console.WriteLine(item);
        }
        benchmark.Stop();
        Console.WriteLine("Linq: {0}".AsFormatFor(benchmark.ElapsedTime));

        benchmark.Start();
        var resultsFromNhibernate =
            repository.GetCountByIncidentCode(schoolId);
        foreach (var item in resultsFromNhibernate)
        {
            Console.WriteLine(item);
        }
        benchmark.Stop();
        Console.WriteLine("NHibernate: {0}".AsFormatFor(benchmark.ElapsedTime));
    }
}

Setting up the benchmark (and the NHibernate init) happens outside the timed section—that's necessary overhead.  I'm also iterating through each of the results as part of the benchmark to ensure everything is evaluated.  Past that, pretty basic.  On the database side, I've disabled statement caching so it doesn't sway the results.  (A sketch of the Benchmark class follows below.)
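The Benchmark class isn't shown here; assume a thin wrapper over System.Diagnostics.Stopwatch, something like this sketch:

public class Benchmark
{
    private readonly System.Diagnostics.Stopwatch _watch =
        new System.Diagnostics.Stopwatch();

    public void Start()
    {
        _watch.Reset();   // each Start() begins a fresh measurement
        _watch.Start();
    }

    public void Stop()
    {
        _watch.Stop();
    }

    public System.TimeSpan ElapsedTime
    {
        get { return _watch.Elapsed; }
    }
}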

With 24 records (the test data in the system), the results were pretty clear. The average of running the benchmark 100 times resulted in…

Linq: 00:00:00.7050000
NHibernate: 00:00:00.0190000

With 24 records, NHibernate was about 37x faster. 

That’s nice, but what happens in a few weeks when there are a few thousand records?  I populated a few hundred of each incident type into the system, giving me almost 4000 records (the anticipated monthly load of the system by the customer).  How’d that change our averages?

Linq: 00:00:00.8869746
NHibernate: 00:00:00.1381518

Now NHibernate is only about 6x faster than LINQ.  Going from 24 to 4,000 records, the LINQ duration jumped ~0.18 seconds (about a 25% increase) whereas NHibernate jumped ~0.12 seconds (about a 626% increase).

So, with that, my original gut feeling and assumptions were wrong.  More and more records don't really slow down the LINQ filtering… at least not by much.  The performance gap is still apparent between the two methods (0.88 sec vs. 0.13 sec); however, how much of that time is eaten up by rendering, server latency, etc., and not by the actual processing?

Grouping and Transforming Using NHibernate

June 11, 2009

Okay, I’ve got to be doing this wrong.

Call it premature optimization, but I foresee an old LINQ method becoming a performance bottleneck when we hit a few hundred thousand records—especially for the ASP.NET charting control to render in any useful time period.

So, what do I do?  I figure pushing that computation back down to the database would be a good first step.

Unfortunately, grouping, sorting, and such are a serious pain in the ass.  Unless, as I said, I’m doing it wrong.

Original Code – Grouping and Counting ala LINQ

private IList GetIncidentsGroupedByIncidentCode()
{
    using (var repository = new IncidentRepository())
    {
        var allIncidents =
            repository.GetAllBySchoolId(SessionManager.CurrentSchoolId);

        var incidentsByCode = from i in allIncidents
                              group i by i.IncidentCodeId
                              into grouping
                              orderby grouping.Count()
                              select new
                                         {
                                             IncidentCodeId = grouping.Key,
                                             Count = grouping.Count(),
                                             Description =
                                                 GetIncidentCodeDescription(grouping.Key)
                                         };
        return incidentsByCode.ToList();
    }
}

Grab all incidents (via the NHibernate repository) and use LINQ to transform them into a handy anonymous type consisting of the IncidentCodeId, a Count (by IncidentCodeId), and the Description.  The incident code description comes from an entirely different system/database—hence the method to go fetch it.

I can simply return an IList rather than specifying the type (since it’s anonymous) and get away with loading up my Chart Control—not a problem.

Newish Code – Grouping and Counting ala NHibernate

public IList GetCountByIncidentCode(int schoolId)
{
    using (var tx = Session.BeginTransaction())
    {
        var criteria = Session.CreateCriteria(typeof (Incident));

        // Only get those matching the requested SchoolId
        criteria.Add(RestrictionsHelper<Incident>.Eq(x => x.SchoolId, schoolId));

        // Setup our projections.
        // IncidentCodeId is what we're using as an Identifier.
        // Id is what we're counting, so the results of the "GroupedResult" go into Result
        // and we're grouping by IncidentCodeId
        criteria.SetProjection(Projections.ProjectionList()
                                   .Add(Projections.Property("IncidentCodeId"), "Identifier")
                                   .Add(Projections.Count("Id"), "Result")
                                   .Add(Projections.GroupProperty("IncidentCodeId")));
        // Order THAT mess by Result
        criteria.AddOrder(Order.Asc("Result"));

        // Now, since we can't use anonymous objects (??), we have to use a funky Java
        // method to transform it into a typed result.
        criteria.SetResultTransformer(Transformers.AliasToBean(typeof (GroupedResult)));

        // Convert this all to a list.
        var result = criteria.List<GroupedResult>() as List<GroupedResult>;

        // Commit… or get committed.
        tx.Commit();
        if (result != null)
        {
            // We can't do this inline (??), so go back into the list and iterate
            // through… grabbing descriptions.
            result.ForEach(x =>
                                {
                                    var description =
                                        GetIncidentCodeDescription(x.Identifier.ConvertTo<int>());
                                    x.Description = description;
                                });
        }

        // Holy crap, we're done!
        return result;
    }
}

What… the… heck?

Amazingly enough, that works (changing the chart’s column names, of course).  And it’s relatively quick… But woah, what a mess. 

It also adds annoying little ‘result’ objects into the mix. 

public class GroupedResult
{
    public int Identifier { get; set; }
    public string Description { get; set; }
    public int Result { get; set; }
}

Strongly typed is stellar, and I'm pretty sure I could make these generic—[Identifier/Description/Result] could work for counts, averages, most anything that is grouped up (see the sketch below)—but it still twitches me out a bit to have random classes sitting around just for data transformations.
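A generic version would look something like this (untested sketch):

public class GroupedResult<TIdentifier, TResult>
{
    public TIdentifier Identifier { get; set; }
    public string Description { get; set; }
    public TResult Result { get; set; }
}

The AliasToBean transformer would then target typeof(GroupedResult<int, int>) for counts, typeof(GroupedResult<int, double>) for averages, and so on.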

So, good readers—how is this REALLY supposed to work?  All that to generate the guts of:

SELECT COUNT(IncidentCodeId) as Result, IncidentCodeId
FROM Incidents
WHERE SchoolId = :schoolId
GROUP BY IncidentCodeId
ORDER BY Result

Categories: .net 3.5, c#, NHibernate, SQL

Performing SELECT.. WHERE IN using a Repository

June 8, 2009

As I've discussed in the past, a few of my repository pattern practices are borrowed from and built on the nice S#arp Architecture project.  Here's another situation where I needed a bit more functionality.

Disclaimer:  If there's a better way to do this—I'm all ears and emails. 🙂

By default, the FindAll method builds up the NHibernate criteria by iterating through key/value pairs.  Easy enough.

‘Id, 12345’ generates ‘WHERE Id = 12345’.

But what happens when I want to do something with an array?

‘Id, int[] {12345, 67890}’ should generate ‘WHERE Id IN (12345, 67890)’

Thankfully, the Restrictions class has an In method, but how can I add that flexibility to the FindAll method?

Here’s what the FindAll method looks like to start off:

public List<T> FindAll(IDictionary<string, object> propertyValuePairs)
{
    Check.Require(propertyValuePairs != null,
                  "propertyValuePairs was null or empty");
    Check.Require(propertyValuePairs.Count > 0,
                  "propertyValuePairs must contain at least one pair");

    var criteria = Session.CreateCriteria(typeof (T));
    propertyValuePairs
        .ForEach(x =>
                 criteria.Add(Restrictions.Eq(x.Key, x.Value)));

    return criteria.List<T>() as List<T>;
}

That's nice—iterate through each pair, assuming an Eq (equals) relationship between the key and the value.

After a bit of dinking around, checking to see whether the value is an ICollection seems to be the most reliable approach, considering Restrictions.In(key, value) accepts collections for the value parameter.

This allows you to pass arrays, lists, and dictionaries.

public List<T> FindAll(IDictionary<string, object> propertyValuePairs)
{
    Check.Require(propertyValuePairs != null,
                  "propertyValuePairs was null or empty");
    Check.Require(propertyValuePairs.Count > 0,
                  "propertyValuePairs must contain at least one pair");

    ICriteria criteria = Session.CreateCriteria(typeof (T));

    propertyValuePairs
        .ForEach(x =>
                     {
                         if (x.Value.IsA<ICollection>())
                         {
                             // add WHERE key IN (value)
                             criteria.Add(Restrictions.In(x.Key, (ICollection) x.Value));
                         }
                         else
                         {
                             // add WHERE key = value
                             criteria.Add(Restrictions.Eq(x.Key, x.Value));
                         }
                     });
    return criteria.List<T>() as List<T>;
}

Here’s my (now) passing test that I used to test this logic as I built it:

[Fact]
public void can_find_students_by_array_of_student_ids()
{
    var studentsToFind = new int[] { 622100, 567944, 601466 };

    var criteria = new Dictionary<string, object>();
    criteria.Add("Id", studentsToFind);
    criteria.Add("Grade", "09");

    var sut = new StudentRepository();
    var students = sut.FindAll(criteria);

    students.Count.ShouldBeEqualTo(1);
    students.ShouldContainMatching(x => x.Id == 567944);
    students.ForEach(x =>
        Console.WriteLine("{0}, {1}".AsFormatFor(x.FullName, x.Id)));
}

Test Passed.  Woot.  The generated SQL is also nice and clean (really loving NHProf… though I trimmed out the excess columns for brevity).

SELECT this_.Id            as Id7_0_, [..]
       this_.Grade         as Grade7_0_, [..]
FROM   custom.student_lkup this_
WHERE  this_.Id in (622100 /* :p0 */, 567944 /* :p1 */, 601466 /* :p2 */)
       and this_.Grade = 09 /* :p3 */

Categories: .net 3.5, c#, Microsoft, NHibernate, SQL