Archive for the ‘.net 2.0’ Category

ASP.NET Development Server From ‘Here’ in PowerShell

September 9, 2009

Long title… almost longer than the code.

I used to have an old registry setting that started up the ASP.NET Development Server from the current path; however, since I rarely open up Explorer—and opening up Firefox afterward was even more painful—I needed a script.

What does it do?

The script starts up the ASP.NET Development Server with a random port (so you can run multiple instances) at your current location.  It then activates your machine’s DEFAULT BROWSER and browses to the site.  Firefox user?  No problem.  Works like a champ!

The Script (Full Code)

# Grab the current location and pick a random port so multiple instances can run.
$path = resolve-path .
$rand = New-Object system.random
$port = $rand.next(2048,10240)
$path_to_devserver = "C:\Program Files (x86)\Common Files\microsoft shared\DevServer\9.0\WebDev.WebServer.exe"

# Start the development server, then launch the default browser at the new site.
& $path_to_devserver /port:$port /path:$path
(new-object -com shell.application).ShellExecute("http://localhost:$port")

The $path_to_devserver can be updated—depending on 64-bit vs. 32-bit machines.  Simple, easy, and to the point.  Now, no more fumbling around to launch a quick web application!
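
If you’d rather not hard-code that path, the common-files environment variables can pick the right one; a minimal sketch (the variable handling is my own, not from the original script):

# Prefer the 32-bit common files folder on x64 machines; fall back for x86.
$common = ${env:CommonProgramFiles(x86)}
if (-not $common) { $common = $env:CommonProgramFiles }
$path_to_devserver = join-path $common "microsoft shared\DevServer\9.0\WebDev.WebServer.exe"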

Ramping up with PSake

September 8, 2009

I’ve been teetering back and forth between PSake and my trusty NAnt scripts for quite a while now.  For those not familiar with PSake, it’s build automation that makes you drunk—but in a good way. 😉  You can read James Kovacs’ original post here or check out the repository here for the latest bits.

I originally looked at rake scripts (after exposure from working with Fluent NHibernate), as PowerShell is loathed in our organization—or was.  That mindset is slowly changing (being able to show people how two lines of PowerShell can replace what was originally scoped at a week of work helps); so I’m using PSake as further motivation.

My prior PSake scripts were a bit tame.  Launch msbuild, copy a few files.  With the latest release of xUnit 1.5 hitting the wires over the weekend (and a much needed x86 version for my poor, cranky Oracle libraries), I decided to bite the bullet and dig into PSake.

I had two goals:

  1. Build a reliable framework “default.ps1” file that I could drop into almost any project and configure with little or no effort.
  2. Compile, test, and rollout updates from a single PSake command task.

I borrowed the basic layout from Ayende’s Rhino Mocks PSake; however, I couldn’t get msbuild to run correctly simply by calling it.

Here’s what I ended up with for our internal core library.  The core library isn’t so much a “utilities” container, but just as it sounds—the framework all of our applications are built on; it keeps connections to our various applications (HR, student systems, data warehouses, etc.) consistent and holds our base FNH conventions.

CODE: Full code available on CodePaste.NET

Properties

The properties area holds all of the configuration for the PSake script.  For me, it’s common to configure $solution_name, $libraries_to_merge, and $libraries_to_copy.  With our naming standards, the $test_library should be left unchanged.  I also added in the tester information so we could change from XUnit to MBUnit (if Hell froze over or something).

properties {

  # ****************  CONFIGURE ****************
  $solution_name =           "Framework"
  $test_library =            "$solution_name.Test.dll"

  $libraries_to_merge =      "antlr3.runtime.dll", `
                             "ajaxcontroltoolkit.dll", `
                             "Castle.DynamicProxy2.dll", `
                             "Castle.Core.dll", `
                             "FluentNHibernate.dll", `
                             "log4net.dll", `
                             "system.linq.dynamic.dll", `
                             "xunit.dll", `
                             "nhibernate.caches.syscache2.dll", `
                             "cssfriendly.dll", `
                             "iesi.collections.dll", `
                             "nhibernate.bytecode.castle.dll", `
                             "oracle.dataaccess.dll"

  $libraries_to_copy =       "system.data.sqlite.dll"

  $tester_directory =        "j:\shared_libraries\xunit\msbuild"
  $tester_executable =       "xunit.console.x86.exe"
  $tools_directory =         "$tools"
  $base_directory  =         resolve-path .
  $thirdparty_directory =    "$base_directory\thirdparty"
  $build_directory =         "$base_directory\build"
  $solution_file =           "$base_directory\$solution_name.sln"
  $release_directory =       "$base_directory\release"
}

Clean and easy enough.  You’ll notice that $libraries_to_merge and $libraries_to_copy are implied string arrays.  That works out well: when a string array is passed to an external command, each element ends up as a separate parameter… and our $libraries_to_copy can be iterated over later in the code.
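
A quick way to see that behavior with nothing beyond stock PowerShell (the file names here are made up):

$libs = "one.dll", "two.dll", "three.dll"
& cmd /c echo $libs    # prints: one.dll two.dll three.dll -- three separate arguments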

Tasks – Default

task default -depends Release

The default task (if just running ‘psake’ without parameters) runs Release.  Easy enough.

Tasks – Clean

task Clean {
  remove-item -force -recurse $build_directory -ErrorAction SilentlyContinue | Out-Null
  remove-item -force -recurse $release_directory -ErrorAction SilentlyContinue | Out-Null
}

Clean up those build and release directories.

Tasks – Init

task Init -depends Clean {
  new-item $release_directory -itemType directory | Out-Null
  new-item $build_directory -itemType directory | Out-Null
  cp $tester_directory\*.* $build_directory
}

Restore those build and release directories that we cleaned up; then copy in our unit testing framework so we can run our tests (if necessary).

Tasks – Compile

task Compile -depends Init {
  # from http://poshcode.org/1050 (first lines to get latest versions)
  [System.Reflection.Assembly]::Load('Microsoft.Build.Utilities.v3.5, Version=3.5.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a') | Out-Null
  $msbuild = [Microsoft.Build.Utilities.ToolLocationHelper]::GetPathToDotNetFrameworkFile("msbuild.exe", "VersionLatest")

  # adding double slash for directories with spaces. Stupid MSBuild.
  &$msbuild /verbosity:minimal /p:Configuration="Release" /p:Platform="Any CPU" /p:OutDir="$build_directory"\\ "$solution_file"
}

Compile is a bit tricky.  As noted in the code, I ended up using a SharePoint example from PoSH Code to get MSBuild to behave.  The standard exec methodology provided by PSake kept ignoring my parameters.  Maybe someone has a good reason why… but this works.

You also see that my OutDir has TWO slashes.  It seems that directories with spaces require the second.  I’m sure this will somehow bite me later on, but it seems to be working for now. 😉

Tasks – Test

task Test -depends Compile {
  $origin_directory = pwd
  cd $build_directory
  exec .\$tester_executable "$build_directory\$test_library"
  cd $origin_directory
}

I want to thank Ayende for the idea to stash the original directory in a variable—brilliant.  This one is pretty simple—just calls the tester and tests.

Tasks – Merge

task Merge {
  $origin_directory = pwd
  cd $build_directory

  remove-item "$solution_name.merge.dll" -erroraction SilentlyContinue
  rename-item "$solution_name.dll" "$solution_name.merge.dll"

  & $tools\ilmerge\ilmerge.exe /out:"$solution_name.dll" /t:library /xmldocs /log:"$solution_name.merge.log" `
        "$solution_name.merge.dll" $libraries_to_merge

  if ($lastExitCode -ne 0) {
    throw "Error: Failed to merge assemblies!"
  }
  cd $origin_directory
}

Merge calls ILMerge and wraps all of my libraries into one.  Do I need to do this?  Nah, but for the framework, I prefer to keep everything together.  I don’t want to be chasing mis-versioned libraries around.  Again, since $libraries_to_merge is a string array, it passes each “string” as a separate parameter—which is exactly what ILMerge wants to see.

I also have ILMerge generate and keep a log of what it did—just to have.  Since the build directory gets blown away between builds (and isn’t replicated to source control), there’s no harm.  Space is mostly free. 😉

Tasks – Build & Release

task Build -depends Compile, Merge {
  # When I REALLY don’t want to test…
}

task Release -depends Test, Merge {
  copy-item $build_directory\$solution_name.dll $release_directory
  copy-item $build_directory\$solution_name.xml $release_directory

  # copy libraries that cannot be merged
  $libraries_to_copy | % { copy-item (join-path $build_directory $_) $release_directory }
}

Build provides just that—building with no testing and no copying to the release directory.  This is more for testing out the scripts, but useful in some cases.

Release copies the library and the XML documentation out to the release directory.  It then iterates through the string array of “other” libraries (non-managed code libraries that can’t be merged, etc.) and copies them as well.

Digging into the Event Log with PowerShell

August 25, 2009

There are a few of our applications that haven’t been converted over to log4net logging so their events still land in the good ol’ Windows Event Log.  That’s fine and was fairly easy to browse, sort, and filter using the new tools in Windows Server 2008.

Over the past few hours, however, I’ve found a bit better tool for digging into the logs on short notice and searching—obviously, PowerShell.

Full source for this can be found here.

I wanted to be able to quickly query out:

  • the time – to look at trending,
  • the user – trending, and filtering if I have them on the phone,
  • the URL – shows both the application and the page the problem is occurring on,
  • the type – the exception type for quick filtering,
  • the exception – the core of the issue,
  • the details – lengthy, but can be ever so helpful even showing the line number of the code in question.

param ([string]$computerName = (gc env:computername))

# ASP.NET health-monitoring events pack their data into ReplacementStrings;
# Warnings/Information expose the type and message at fixed indexes, while
# Errors bury everything in one big text blob that has to be regexed out.
function GetExceptionType($type, $logEvent)
{
 if ($type -ne "Error") { $logEvent.ReplacementStrings[17] }
 else {
        $rx = [regex]"Exception:.([0-9a-zA-Z].+)"
        $matches = $rx.match($logEvent.ReplacementStrings[0])
        $matches.Groups[1].Value
 }
}

function GetException($type, $logEvent)
{
 if ($type -ne "Error") { $logEvent.ReplacementStrings[18] }
 else {
        $rx = [regex]"Message:.([0-9a-zA-Z].+)"
        $matches = $rx.match($logEvent.ReplacementStrings[0])
        $matches.Groups[1].Value
 }
}

get-eventlog -log application -ComputerName $computerName |
    ? { $_.Source -eq "ASP.NET 2.0.50727.0" } |
    ? { $_.EntryType -ne "Information" } |
    select `
  Index, EntryType, TimeGenerated, `
  @{Name="User"; Expression={$_.ReplacementStrings[22]}}, `
  @{Name="Url"; Expression={truncate-string $_.ReplacementStrings[19] 60 }}, `
  @{Name="Type"; Expression={GetExceptionType $_.EntryType $_ }}, `
  @{Name="Exception"; Expression={GetException $_.EntryType $_ }}, `
  @{Name="Details"; Expression={$_.ReplacementStrings[29]}}

The code itself is probably pretty overworked and, I hope, can be refined as time goes on.

The two helper functions, GetExceptionType and GetException, exist because (it seems) Warnings and Information store their information in one location while Errors store theirs in one HUGE blob of text that needs to be parsed.  Those helpers provide that switch logic.

The get-eventlog logic itself is pretty straightforward:

  1. Open up the ‘Application’ EventLog on the specified computer,
  2. Filter only “ASP.NET 2.0.50727.0” sourced events,
  3. Exclude “Information” type events,
  4. Select 3 columns and generate 5 columns from expressions.

The great advantage is I can then take this script and “pipe” its output into other commands.

get-aspnet-events webserver1 | select user, url, type | format-table -auto

User               Url                               Type
----               ---                               ----
domain\dlongnecker http://domain.net/Create.aspx     PreconditionException
domain\dlongnecker http://domain.net/Create.aspx     PreconditionException
domain\dlongnecker http://domain.net/View.aspx       PostconditionException
domain\dlongnecker http://domain.net/View.aspx       AssertionException

or

get-aspnet-events webserver1 | ? { $_.user -like "*dlongnecker" }

The possibilities are great—and a real time saver compared to hitting each server and looking through the GUI tool.

The code also includes a helper method I created for truncating strings available here via codepaste.  If there’s built-in truncating, I’d love to know about it.
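
The linked helper isn’t reproduced in the post; a minimal sketch of what it presumably does (the name and argument order are taken from the usage above):

function truncate-string([string]$value, [int]$length)
{
  # Hand back the original string if it fits; otherwise chop it to length.
  if ($value -and $value.Length -gt $length) { $value.Substring(0, $length) }
  else { $value }
}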

 

Using RedGate ANTS to Profile XUnit Tests

August 5, 2009

RedGate’s ANTS Performance and Memory profilers can do some pretty slick testing, so why not automate it?  The “theory” is that if my coverage is hitting all the high points, I’m profiling all the high points and can see bottlenecks.

So, how does this work?  Since the tests are in a compiled library, I can’t just “load” the unit tests—but I can load xUnit and have it run the tests.

NOTE: If you’re profiling x86 libraries on an x64 machine, you’ll need xUnit 1.5 CTP (or later), which includes xunit.console.x86.exe.  If you’re on x86 or don’t call x86 libraries, pay no attention to this notice. 😉

To begin, start up ANTS Performance Profiler and Profile a New .NET Executable.

(screenshot: XUnit ala ANTS Profiler)

For the .NET Executable, point it towards XUnit and in the Arguments, point it towards the library you are testing.  Simple enough.

Click “Start Profiling” and let the profiling begin!

Now if I could just get the “top 10” methods to export to HTML or something so I could automate this in our reporting.

Fetching Nested Group Memberships in Active Directory

July 22, 2009

As we’ve started using Active Directory more and more to provide single sign-on services for our web applications, group memberships have become more important.

We recently rolled out an application that took advantage of nesting groups (easier to add and manage five global groups than 10,000 individuals); however, our existing code to fetch memberships wouldn’t look at nested groups.

So if I was a member of “Student Achievement”, how could I parse the memberships of that group and determine if I was in “MIS”?

Thankfully, a bit of recursion does the trick… 🙂

As our infrastructure is entirely Windows Server 2003 and higher, I use the System.DirectoryServices.Protocols namespace and methods to connect to and parse out information from LDAP.  Because of this, I rely on SearchResult(s) rather than DirectoryEntries. 

In our environment, a “user” is defined as:

"(&(objectCategory=person)(objectClass=user)(mail=*)({0}={1}))"

Everything looks pretty plain except we require that a valid “user” have an email address.  That ensures we filter out junk/test accounts as only employees have Exchange accounts.
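
A quick illustration of how the {0}={1} placeholders get filled; the attribute/value pair here is my own example, not from the original post:

// Hypothetical usage -- substitute whatever attribute and value
// you're searching on into the filter template.
const string userFilter =
    "(&(objectCategory=person)(objectClass=user)(mail=*)({0}={1}))";
string filter = string.Format(userFilter, "sAMAccountName", "dlongnecker");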

Groups are even easier:

"(objectCategory=group)"

If, say, I’ve queried for a single user, the groups property is populated simply by looking at the local user’s “memberOf” attribute.

private static IEnumerable<string> ParseGroupMemberships(SearchResultEntry result, int countOfGroups)
{
    for (int i = 0; i < countOfGroups; i++)
    {
        var fullGroupName = (string) result.Attributes["memberOf"][i];

        //Fully Qualified Distinguished Name looks like:
        //CN={GroupName},OU={AnOU},DC={domain},DC={suffix}
        //CN=DCI,OU=Groups,OU=Data Center,DC=usd259,DC=net
        int startGroupName = fullGroupName.IndexOf("=", 1);
        int endGroupName = fullGroupName.IndexOf(",", 1);
        if (startGroupName != -1)
        {
            string friendlyName =
                fullGroupName.Substring(startGroupName + 1, (endGroupName - startGroupName) - 1);
            yield return friendlyName;
        }
    }
}

That was fine for the primary groups (attached through memberOf); however, it didn’t look at the groups those groups were a “memberOf”. 🙂

After quite a bit of trial and error, the new method looks pretty ugly, but has proven quite performant and reliable in tests.

private static IEnumerable<string> ParseGroupMemberships(
    SearchResultEntry result, int countOfGroups)
{
    var primaryGroups = new List<string>(countOfGroups);
    var allGroups = new List<string>();

    for (int index = 0; index < countOfGroups; index++)
    {
        primaryGroups.Add(result.Attributes[ldapGroupsAttribute][index].ToString());
        allGroups.Add(result.Attributes[ldapGroupsAttribute][index].ToString());
    }

    var connection = new ActiveDirectory().GetConnection();

    while (0 < primaryGroups.Count)
    {
        var searchRequest = new SearchRequest(distinguishedName,
                                              CreateFilterFromGroups(primaryGroups),
                                              SearchScope.Subtree,
                                              ldapGroupsAttribute);
        primaryGroups.Clear();

        var response = (SearchResponse)connection.SendRequest(searchRequest);
        if (response != null)
        {
            int entriesCount = response.Entries.Count;
            for (int entry = 0; entry < entriesCount; entry++)
            {
                DirectoryAttribute groupList =
                    response.Entries[entry].Attributes[ldapGroupsAttribute];

                if (groupList != null)
                {
                    int groupCount = groupList.Count;
                    for (int index = 0; index < groupCount; index++)
                    {
                        string dn = groupList[index].ToString();
                        if (!allGroups.Contains(dn))
                        {
                            allGroups.Add(dn);
                            primaryGroups.Add(dn);
                        }
                    }
                }
            }
        }
    }
    connection.Dispose();

    foreach (string dn in allGroups)
    {
        yield return GetFriendlyName(dn);
    }
}

Here’s a breakdown of the highlights:

var primaryGroups = new List<string>(countOfGroups);
var allGroups = new List<string>();

for (int index = 0; index < countOfGroups; index++)
{
    primaryGroups.Add(result.Attributes[ldapGroupsAttribute][index].ToString());
    allGroups.Add(result.Attributes[ldapGroupsAttribute][index].ToString());
}

This section takes the SearchResultEntry’s primary groups and adds each one of them to two lists.

  • The ‘primaryGroups’ list is exactly that—here’s a list of groups that we need to iterate over and find the nested groups. 
  • The ‘allGroups’ will hold our master list of every unique group and will provide our return value.

var searchRequest = new SearchRequest(distinguishedName,
                                      CreateFilterFromGroups(primaryGroups),
                                      SearchScope.Subtree,
                                      ldapGroupsAttribute);
primaryGroups.Clear();

This code formulates our LDAP search request. distinguishedName and ldapGroupsAttribute are two constants in my code base (for our domain’s DN and “memberOf”).  CreateFilterFromGroups takes the list of groups and concats them together—so we’re only looking at the groups we want, not everything.
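
CreateFilterFromGroups isn’t shown in the post; a minimal sketch of the idea, OR-ing together one clause per group DN (LDAP special-character escaping left out):

private static string CreateFilterFromGroups(IEnumerable<string> groups)
{
    // Builds: (|(distinguishedName=CN=...)(distinguishedName=CN=...))
    var builder = new System.Text.StringBuilder("(|");
    foreach (string dn in groups)
    {
        builder.AppendFormat("(distinguishedName={0})", dn);
    }
    return builder.Append(")").ToString();
}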

Finally, we’re reusing our primaryGroups list to look for nested within nested… within nested, so clear that out—infinite loops hinder performance. 🙂

int entriesCount = response.Entries.Count;
for (int entry = 0; entry < entriesCount; entry++)
{
    DirectoryAttribute groupList =
        response.Entries[entry].Attributes[ldapGroupsAttribute];

    if (groupList != null)
    {
        int groupCount = groupList.Count;
        for (int index = 0; index < groupCount; index++)
        {
            string dn = groupList[index].ToString();
            if (!allGroups.Contains(dn))
            {
                allGroups.Add(dn);
                primaryGroups.Add(dn);
            }
        }
    }
}

Here’s our massive, disgusting block of if statements that populates the lists and keeps the while loop running as long as primaryGroups has a count > 0.

foreach (string dn in allGroups)
{
    yield return GetFriendlyName(dn);
}

Finally, use a helper method to convert the DN to a “friendly name” and return it to the caller (using yield since our method returns an IEnumerable<string>).
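
GetFriendlyName isn’t shown either; presumably it’s the same CN-parsing as the first snippet, factored into a helper. A minimal sketch:

private static string GetFriendlyName(string fullGroupName)
{
    // "CN=DCI,OU=Groups,DC=usd259,DC=net" -> "DCI"
    int start = fullGroupName.IndexOf("=", 1);
    int end = fullGroupName.IndexOf(",", 1);
    if (start == -1) return fullGroupName;
    return end > start
               ? fullGroupName.Substring(start + 1, (end - start) - 1)
               : fullGroupName.Substring(start + 1);
}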

Running a quick test gives me:

UserAccount_Can_Get_Group_Memberships_With_Default_Security : Passed

Group count for David Longnecker is 138
Elapsed time for first query: 00:00:00.0420000

Wow, I’m in a lot of groups… O_o. The query is relatively quick (that is with connection buildup and teardown time and generating the rest of the attributes of the user) especially considering our AD infrastructure is far from optimal.

In addition, an LDAP query using ADUC gives the same results.

If nothing else, it’s consistent! 🙂

Filtering an Enum by Attribute

July 9, 2009

I had a curve ball thrown at me this morning—changing requirements.  It happens and was easily solved by a couple of custom attributes and a helper method.

UPDATE: I’ve updated the code (and explanation) for FilterEnumWithAttributeOf below to tidy it up a bit.

In our current project, there is an enum of standard, static “periods” (times of days students are in school).  Easy enough.

BeforeSchool = 0,
FirstPeriod = 1,
SecondPeriod = 2,
etc.

But what happens if we want to “query” our list down a bit… say a certain group only wanted a subset of the “periods”.

I could create an entirely different Enum — Group1Period and Group2Period.

But then handling things in FluentNHibernate’s automapping would get freaked out with the Period property.

So, what about a custom attribute?

  1. I can assign multiple custom attributes to the same Enum field so I can be in Group1 and Group2 at the same time.
  2. I can keep the same Enum “Period” for my ORM layer.
  3. Now how do I query it down…?

Here’s an abstracted example of how the enum looks right now:

public enum Period
{
    [Elementary][Secondary]
    [Description("Before School")]
    BeforeSchool = 0,

    [Elementary]
    Homeroom = 12,

    [Secondary]
    [Description("1st")]
    First = 1,
}

Elementary and Secondary (our two groups, in this case) are “logicless” attributes (I’m just looking at them as flags, not passing/storing information).

[AttributeUsage(AttributeTargets.Field)]
public class ElementaryAttribute : Attribute
{
}

[AttributeUsage(AttributeTargets.Field)]
public class SecondaryAttribute : Attribute
{
}

Now, to filter out those pesky periods based on the attributes.

Update:

Old Code!

public IEnumerable<TEnum> FilterEnumWithAttributeOf<TEnum, TAttribute>()
{
    foreach (var field in
        typeof (TEnum).GetFields(BindingFlags.GetField |
                                 BindingFlags.Public |
                                 BindingFlags.Static))
    {
        foreach (var attribute in
            field.GetCustomAttributes(typeof (TAttribute), false))
        {
            yield return (TEnum) field.GetValue(null);
        }
    }
}

New Code!

public static IEnumerable<TEnum> FilterEnumWithAttributeOf<TEnum, TAttribute>()
    where TEnum : struct
    where TAttribute : class
{
    foreach (var field in
        typeof(TEnum).GetFields(BindingFlags.GetField |
                                BindingFlags.Public |
                                BindingFlags.Static))
    {
        if (field.GetCustomAttributes(typeof(TAttribute), false).Length > 0)
            yield return (TEnum)field.GetValue(null);
    }
}

Why new code?

Well, after looking over the code, I don’t need to iterate through each attribute, simply check whether the field has one (Length > 0).  If it does, then return it.  That cuts a loop out of our code and performs the same function.  I also added two generic constraints.  You can’t constrain by Enum, but struct works well.

I’m passing in two generics in this case—TEnum, the type of the Enum, and TAttribute, the type of the attribute.  Yeah, I realize that my creativity in naming is pretty low.  Work with me here, alright? 😉

Past that, the loops are pretty easy.

  1. Loop through each field of the enumeration.  Return the field (GetField) and be sure to check Public and Static fields.
  2. Loop through each custom attribute on each field (returned by GetField) and only return the fields that match the type of our attribute.  I pass along the false parameter (do not inherit) because I’m not interested in inherited attributes. You could leave this as true. YMMV.
  3. If the field’s attributes contain our type, yield out the actual Enum value (a string of the field isn’t as useful).

Now, for using it…

var enums = FilterEnumWithAttributeOf<Period, ElementaryAttribute>();

foreach (var period in enums)
{
    Console.WriteLine("{0}, {1}".AsFormatFor(period, (int)period));
}

Easy enough.  ElementaryAttribute returns:

BeforeSchool, 0
Homeroom, 12
AfterSchool, 10
etc..

Running the same code, but asking for SecondaryAttribute returns:

BeforeSchool, 0
First, 1
Second, 2
etc..

Sweet.


AutoMappings in NHibernate – A Quick Runthrough

June 26, 2009

For most of my projects, at least since I’ve moved to NHibernate/Fluent NHibernate, I’ve been trapped using the existing data structures of prior iterations.  Funky naming conventions (many due to cross-cultural, international column naming), missing data relationships, and general craziness.

Having used Fluent Mappings (creating a class that implements ClassMap<objectType>) in the past, they were a huge jump up from writing painful data objects, connecting them together, and recreating the wheel with “SELECT {column} from {table}” code.  Create a map, use the fluent methods to match column to property, and away you go.

In a recent project, I’ve had the opportunity to build a new system from the ground up.  With this, I decided to dive head first into using the AutoMappings functionality of Fluent NHibernate. 

This post is somewhat a rambling self-discussion of my explorations with AutoMappings.

What are AutoMappings?

The FluentNHibernate wiki provides a simple definition:

[…] which is a mechanism for automatically mapping all your entities based on a set of conventions.

Rather than hand-mapping each column to a property, we create conventions (rules) to map those.. automatically.  Hey look, auto…mappings.  😉

How?

Using the same fluent language, configuring AutoMapping is an exercise in implementing conventions for the logical naming and handling of data.

Fluently
    .Configure()
    .Database(MsSqlConfiguration.MsSql2005
                  .ConnectionString(cs => cs
                                              .Server("server")
                                              .Database("db")
                                              .Username("user")
                                              .Password("password")
                  )
                  .UseReflectionOptimizer()
                  .UseOuterJoin()
                  .AdoNetBatchSize(10)
                  .DefaultSchema("dbo")
                  .ShowSql()
    )
    .ExposeConfiguration(raw =>
                             {
                                 // Testing/NHibernate Profiler stuffs.
                                 raw.SetProperty("generate_statistics", "true");
                                 RebuildSchema(raw);
                             })
    .Mappings(m =>
              m.AutoMappings.Add(AutoPersistenceModel
                                     .MapEntitiesFromAssemblyOf<Walkthrough>()
                                     .ConventionDiscovery.Setup(c =>
                                                                    {
                                                                        c.Add<EnumMappingConvention>();
                                                                        c.Add<ReferencesConvention>();
                                                                        c.Add<HasManyConvention>();
                                                                        c.Add<ClassMappingConvention>();
                                                                    })
                                     .WithSetup(c => c.IsBaseType = type => type == typeof (Entity)))
                  .ExportTo(@".\")
    );

As you can see above, the only difference from a fluent mappings configuration is in the actual Mappings area.  Good deal!  That helps ensure my existing work using fluent mappings could translate without too much headache.
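
The RebuildSchema call inside ExposeConfiguration isn’t shown in the post; a minimal sketch, assuming it simply wraps NHibernate’s SchemaExport:

private static void RebuildSchema(NHibernate.Cfg.Configuration config)
{
    // Drop and re-create the database schema from the mapped entities
    // (false = don't echo the script, true = execute it against the database).
    new NHibernate.Tool.hbm2ddl.SchemaExport(config).Create(false, true);
}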

I’ve specified four conventions.  Each of these conventions has an interface that provides the necessary methods to ensure your rules are applied to the correct objects.

EnumMappingConvention

internal class EnumMappingConvention : IUserTypeConvention
{
    public bool Accept(IProperty target)
    {
        return target.PropertyType.IsEnum;
    }

    public void Apply(IProperty target)
    {
        target.CustomTypeIs(target.PropertyType);
    }

    public bool Accept(Type type)
    {
        return type.IsEnum;
    }
}

The great thing about these methods is they’re fluent enough to translate to English.

Accept… targets where the property type is an enumeration.

Apply… to the target that the “Custom Type Is” the property type of the target.
  NOTE: This translates from a ClassMap into: Map(x => x.MyEnumFlag).CustomTypeIs(typeof(MyEnum));

Accept… a type that is an enumeration.

ReferencesConvention

The Reference convention handles those reference relationships between our classes (and the foreign keys).

internal class ReferencesConvention : IReferenceConvention
{
    public bool Accept(IManyToOnePart target)
    {
        return string.IsNullOrEmpty(target.GetColumnName());
    }

    public void Apply(IManyToOnePart target)
    {
        target.ColumnName(target.Property.Name + "Id");
    }
}

The most important part here is enforcing how your foreign keys are going to be named.  I prefer the simple {Object}Id format.

Car.Battery on the object side and [Car].[BatteryId] on the database side.

HasManyConvention

The HasManys are our lists, bags, and collections of objects.

internal class HasManyConvention : IHasManyConvention
{
    public bool Accept(IOneToManyPart target)
    {
        return target.KeyColumnNames.List().Count == 0;
    }

    public void Apply(IOneToManyPart target)
    {
        target.KeyColumnNames.Add(target.EntityType.Name + "Id");
        target.Cascade.AllDeleteOrphan();
        target.Inverse();
    }
}

We want to make sure that we haven’t added any other key columns (the Count == 0), and then apply both the naming convention as well as a few properties.

Cascade.AllDeleteOrphan() and Inverse() allows our parent objects (Car) to add new child objects (Car.Battery (Battery), Car.Accessories (IList<Accessory>)) without separating them out.
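
For context, this is roughly what the convention saves you from repeating on every collection in a hand-written ClassMap (the Car/Accessories mapping here is hypothetical):

// Inside a hypothetical ClassMap<Car>:
var accessories = HasMany(x => x.Accessories);
accessories.KeyColumnNames.Add("CarId");
accessories.Cascade.AllDeleteOrphan();
accessories.Inverse();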

ClassMappingConvention

Finally, the important Class mapping.  This convention ensures that our tables are named properly, with pluralization.

public class ClassMappingConvention : IClassConvention
{
    public bool Accept(IClassMap target)
    {
        return true; // everything
    }

    public void Apply(IClassMap target)
    {
        target.WithTable(PluralOf(target.EntityType.Name));
    }
}

I’m using a pluralization method from one of my base libraries that I borrowed from Hudson Akridge.  This helper method works really well and I don’t need to add additional references and libraries into my application just to handle the table names.

public static string PluralOf(string text)
{
    var pluralString = text;
    var lastCharacter = pluralString.Substring(pluralString.Length - 1).ToLower();

    // y's become ies (such as Category to Categories)
    if (string.Equals(lastCharacter, "y", StringComparison.InvariantCultureIgnoreCase))
    {
        pluralString = pluralString.Remove(pluralString.Length - 1);
        pluralString += "ie";
    }

    // ch's become ches (such as Pirch to Pirches)
    if (string.Equals(pluralString.Substring(pluralString.Length - 2), "ch",
                      StringComparison.InvariantCultureIgnoreCase))
    {
        pluralString += "e";
    }

    switch (lastCharacter)
    {
        case "s":
            return pluralString + "es";
        default:
            return pluralString + "s";
    }
}

Save and build.  The schema export will generate the SQL and/or regenerate the database based on the specifications you’ve provided, and you’re ready to hit the ground running!

 

Rendering the Web Equally on Mobile Devices

June 26, 2009

I’ve been digging through the Interwebs for a while now and, I thought, had worked out all of the “kinks” of rendering on a mobile device—specifically iPhones.

The special ‘viewport’ meta tag means the world to iDevices.

<meta name="viewport" content="width=device-width" />

I’m faced with a new challenge—the Palm Pre’s built-in web browser.  My shiny new phone is great, but it isn’t without glitches.

The first glitch I’ve found appears to be a DNS issue— http://myserver/web won’t resolve; however, http://123.45.67.89/web will.  It seems to be touchy.  Most of our webs work just fine, others don’t.  I haven’t narrowed it down to a single server or architecture as it seems to be a bit of everything.  Wonky.

The next glitch is more important—the rendering.  One of our tools is a simple form-based tool that looks great on the iPhone; however, renders partial screen and “garbles” when you move around the screen.

Palm Pre:

(screenshot: garbled, partial-screen rendering)

iTouch/iPhone:

(screenshot: the same form rendering correctly)
I’ve also found that anything in an ASP.NET Update Panel (like those Select buttons) is unusable.  Other webs I’ve used (Bank of America, etc.) use AJAX just fine, so I don’t think it’s that—probably a coding issue I need to dig into and resolve.

UPDATE: Explicitly adding LoadScriptsBeforeUI="true" to the ASP.NET ScriptManager seems to help with this… a little.

Anyone else worked specifically with the Pre devices and rendering?  I’d appreciate any meta tags or layout ideas that worked. 🙂  The Pre isn’t a common device in our organization—yet.

Fluent NHibernate Repository of… integers?

April 21, 2009

I’d like to preface this with the fact that just because this “works” doesn’t mean it “should”.  If there’s a proper way to do this, I’m all ears. 😀

I recently needed to do some revamp to an application that queried lookup data from another data system.  The system had a collection of composite keys (age and total score) that returned a percentile score.  Easy enough; however, there are a couple dozen of these tables and I didn’t want to create a couple dozen domain objects/repositories for a SINGLE query.

Typically, the NHibernateRepository* takes a type parameter that matches the mapped object (and provides the proper return type); however, in this case, I didn’t have a type to return, simply an integer.  So why wouldn’t that work?

public class ScoreRepository : NHibernateRepository<int>, IDisposable

With that in place, I can now add a query into Session:

public int GetConceptPercentile(int age, int total)
{
    var result = Session
        .CreateSQLQuery("select perc from tblConcept where age = :age and total = :total")
        .SetInt32("age", age)
        .SetInt32("total", total)
        .UniqueResult().ConvertTo<int>();

    return result;
}

A few more of those, and our test looks like:

[Fact]
public void GetPercentiles_For_Student()
{
    using (var repository = new ScoreRepository())
    {
        var languagePercentile =
            repository.GetLanguagePercentile(ageCalc_72months.TotalMonths, 18);
        var motorPercentile =
            repository.GetMotorPercentile(ageCalc_72months.TotalMonths, 18);
        var conceptPercentile =
            repository.GetConceptPercentile(ageCalc_72months.TotalMonths, 18);

        languagePercentile.ShouldBeEqualTo(12);
        motorPercentile.ShouldBeEqualTo(17);
        conceptPercentile.ShouldBeEqualTo(10);
    }
}

Everything “appears” to be working; however, the extraneous methods that each NHibernateRepository includes (Get, GetAll, FindAll, etc) are defunct and just sitting there—very messy.

So is there a better way to use NHibernate/Fluent NHibernate WITHOUT mapping objects—those “lookup tables”?

Auto Incrementing Visual Studio Project Versions

March 9, 2009

I’m not using NAnt or anything fancy for most of my projects—so I needed a simple, MSBuild-based way to automate my version numbers in a project.

<tangent>
HOLY CRAP! Why isn’t this built into Visual Studio Pro?
</tangent>

Here we go:

1. Download the latest build of AssemblyInfoTask (download here) (was 1.0.51130.0 for me).  This is a semi-Microsoft supported MSBuild task that gives you a lot of flexibility over your AssemblyInfo.cs files.

2. Install AssemblyInfoTask.  When prompted where—install into the GAC.  If you don’t have access to the GAC on your workstation, then why aren’t you developing on a VM? 😉

3. Locate the Microsoft.VersionNumber.targets file.  If you installed to the GAC, it should be at %ProgramFiles(x86)%\MSBuild\Microsoft\AssemblyInfoTask Or %ProgramFiles%\MSBuild\Microsoft\AssemblyInfoTask (depending on your architecture).

4. Copy the Microsoft.VersionNumber.targets file into a location in your solution or project.  I recommend $(SolutionDir) so you can share it amongst all of your projects.  The guide recommends pointing to the file directly; however, you can’t modify the base Major versions that way (without setting the same major version for ALL projects you ever work on).  You can also rename it as appropriate.

“Int16s Are Too Small” Or “Why 2007 Broke Versioning” Fix

According to experts who are much smarter than me, the build version numbers are Int16s—meaning 65535 caps out the number.  Unfortunately, the year 2007 breaks this: 070101 (or 70101, for 07 Jan 01) doesn’t fit within an Int16, since 70101 > 65535.  Stellar.

The MSBuild team recommended taking out the year and simply placing a 1 in front of it.  That works; however, I really like having the year in there somewhere.

For me, I’ve placed the year into the MinorVersion.  After reviewing most of our practices, the minor version for most of our projects changes with annual maintenance OR not at all (we bump the major version).  This, if nothing else, will help standardize when it changes. 🙂  As always, YMMV.

No matter which solution you choose, you’ll need to remove the year from the BuildNumberFormats.

In your Targets file, you can change the two BuildNumberFormat lines to report out MMdd (0309, for example, today) to work around the bug; they’re the two changed lines below.  As you can see, I also added the “9” to the MinorVersion to represent 2009.

<PropertyGroup>
  <AssemblyMajorVersion>3</AssemblyMajorVersion>
  <AssemblyMinorVersion>9</AssemblyMinorVersion>
  <AssemblyBuildNumber></AssemblyBuildNumber>
  <AssemblyRevision></AssemblyRevision>
  <AssemblyBuildNumberType>DateString</AssemblyBuildNumberType>
  <AssemblyBuildNumberFormat>MMdd</AssemblyBuildNumberFormat>
  <AssemblyRevisionType>AutoIncrement</AssemblyRevisionType>
  <AssemblyRevisionFormat>00</AssemblyRevisionFormat>
</PropertyGroup>

<!-- Properties for controlling the Assembly File Version -->
<PropertyGroup>
  <AssemblyFileMajorVersion>3</AssemblyFileMajorVersion>
  <AssemblyFileMinorVersion>9</AssemblyFileMinorVersion>
  <AssemblyFileBuildNumber></AssemblyFileBuildNumber>
  <AssemblyFileRevision></AssemblyFileRevision>
  <AssemblyFileBuildNumberType>DateString</AssemblyFileBuildNumberType>
  <AssemblyFileBuildNumberFormat>MMdd</AssemblyFileBuildNumberFormat>
  <AssemblyFileRevisionType>AutoIncrement</AssemblyFileRevisionType>
  <AssemblyFileRevisionFormat>00</AssemblyFileRevisionFormat>
</PropertyGroup>

 

This results in a version string that looks like 3.9.0309.{increment}.

5. Open up your project’s solution, unload the project you want to auto-increment, and edit the project file.  Towards the end of the file, you’ll see the default MSBuild C# build path; add the location of your new .targets file in your solution directory.

<Import Project="$(SolutionDir)MyProject.VersionNumber.targets" />

6. Save, close, and reload the project.

7. Build/Rebuild your project and the AssemblyInfo.cs should update to the specified increment scheme.

You’re done!

“Too Many WebResources?” Fix 

My project references numerous resources for images and style sheets; however, having these inside of AssemblyInfo.cs seems to cause it to go haywire and throw array errors (presumably because there is more than one [assembly: WebResource()] call).

To fix this, I moved my WebResources out of AssemblyInfo.cs and into a new file under Properties called WebResources (Add New Item > Assembly Information File).  Strip out everything except the WebResources you copy in and the project now compiles like a champ.
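
For illustration, the new file ends up holding nothing but declarations like these (the resource names and content types here are hypothetical):

// Properties\WebResources.cs -- only the [assembly: WebResource] declarations live here.
using System.Web.UI;

[assembly: WebResource("MyProject.Resources.styles.css", "text/css")]
[assembly: WebResource("MyProject.Resources.logo.png", "image/png")]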

For additional setup details and options within the .targets files, the AssemblyInfoTask installer comes with a CHM help file that covers additional customizations available.