Archive

Archive for the ‘NHibernate’ Category

Populating Select Lists in ASP.NET MVC and jQuery

September 25, 2009 Comments off

I've spent the last little while finding the best way to create and populate select (option) lists using a mixture of ASP.NET MVC and jQuery.  What I've run into is that the "Key" and "Value" names are not passed along when using Json(data).

Here’s what I’m trying to pull off in jQuery: building a simple select drop down list.

var dd_activities = "<select id='dd_activities'>";
var count = data.length;
for (var i = 0; i < count; i++) {
    dd_activities += "<option value='" + data[i].Key + "'>" + data[i].Value + "</option>";
}
dd_activities += "</select>";

$("#activities").before(dd_activities);

Using some very basic key/value data:

[
  {"3","Text Value"},
  {"4","Another Text Value"},
  {"1","More boring values…"},
  {"2","Running out of values"},
  {"5","Last value…"}
]

Without any sort of name, I was at a loss on how to access the information, how to get its length, or anything.  Firebug was happy to order it up… but that didn't help.
 
My first attempt was to use a custom object, but that just felt dirty—creating NEW objects simply to return Json data.
 
My second attempt, newing up anonymous objects for the Json result, seemed to work like a champ:
 

[Authorize]
[CacheFilter(Duration = 20)]
public ActionResult GetActivitiesList()
{
    try
    {
        var results =
            _activityRepository
                .GetAll()
                .OrderBy(x => x.Target.Name).ThenBy(x => x.Name)
                .Select(x => new
                    {
                        Key = x.Id.ToString(),
                        Value = string.Format("[{0}] {1}", x.Target.Name, x.Name)
                    })
                .ToList();

        return Json(results);
    }
    catch (Exception ex)
    {
        return Json(ex.Message);
    }
}

 
Well, not beautiful, but returns a sexy Key/Value list that Json expects—and that populates our select list.
[
  {"Key":"3","Value":"Text Value"},
  {"Key":"4","Value":"Another Text Value"},
  {"Key":"1","Value":"More boring values…"},
  {"Key":"2","Value":"Running out of values"},
  {"Key":"5","Value":"Last value…"}
]
The next step was to get that out of the controller and into the data repository… pushing some of that logic back down to the database.
 

var criteria =
    Session.CreateCriteria<Activity>()
        .CreateAlias("Target", "Target")
        .Add(Restrictions.Eq("IsValid", true))
        .AddOrder(Order.Asc("Target.Name"))
        .AddOrder(Order.Asc("Name"))
        .SetMaxResults(100);

var data = criteria.List<Activity>();
var result =
    data
        .Select(x => new
            {
                Key = x.Id.ToString(),
                Value = string.Format("[{0}] {1}", x.Target.Name, x.Name)
            })
        .ToList();

tx.Commit();
return result;

 
A bit of formatting, restrictions, push the ordering back to the database, and a tidy SQL statement is created.
 
The last touch is the return type.  Since we’re returning a “List” of anonymous types, the return type of GetActivitiesList() must be an IList.
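
For reference, here's roughly what that repository method ends up looking like once the criteria code above is wrapped up.  This is a sketch; the Session and transaction plumbing comes from my repository base class.

public IList GetActivitiesList()
{
    // Sketch only: anonymous types can only travel back through the non-generic IList.
    using (var tx = Session.BeginTransaction())
    {
        var result = Session.CreateCriteria<Activity>()
            .CreateAlias("Target", "Target")
            .Add(Restrictions.Eq("IsValid", true))
            .AddOrder(Order.Asc("Target.Name"))
            .AddOrder(Order.Asc("Name"))
            .SetMaxResults(100)
            .List<Activity>()
            .Select(x => new
                {
                    Key = x.Id.ToString(),
                    Value = string.Format("[{0}] {1}", x.Target.Name, x.Name)
                })
            .ToList();

        tx.Commit();
        return result;
    }
}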
 
That shrinks down my ActionResult to a single call.
 

try
{
    return Json(_activityRepository.GetActivitiesList());
}
catch (Exception ex)
{
    return Json(ex.Message);
}

 
That works… and will work for now.  Though, I’ve marked it as a HACK in my code.  Why?  I’m honestly not sure yet.  Just a feeling.

Fluent NHibernate 1.0 RTM

August 29, 2009 Comments off

For those who haven't seen the tweets, posts, and the rest, James and crew have put the final touches on and cranked out Fluent NHibernate 1.0 RTM.

There were quite a few changes (for the better) between prior releases and the 1.0 builds—especially around auto mapping and conventions.  So far, I love the changes—much more explicit and easy to configure once you get the hang of where things are located.

Congrats to everyone involved!

Download: http://fluentnhibernate.org/downloads

(New) Wiki: http://wiki.fluentnhibernate.org/Main_Page

AutoMappings in NHibernate – A Quick Runthrough

June 26, 2009 Comments off

For most of my projects, at least since I’ve moved to NHibernate/Fluent NHibernate, I’ve been trapped using the existing data structures of prior iterations.  Funky naming conventions (many due to cross-cultural, international column naming), missing data relationships, and general craziness.

Having used Fluent Mappings (creating a class that implements ClassMap<objectType>) in the past, they were a huge jump up from writing painful data objects, connecting them together, and recreating the wheel with “SELECT {column} from {table}” code.  Create a map, use the fluent methods to match column to property, and away you go.
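
As a quick refresher, a hand-written fluent mapping looks something like the following.  This is a hypothetical Car example, not code from this project:

// Hypothetical explicit mapping; AutoMappings aim to replace classes like this.
public class CarMap : ClassMap<Car>
{
    public CarMap()
    {
        WithTable("Cars");              // explicit table name
        Id(x => x.Id);                  // primary key
        Map(x => x.Name);               // simple property-to-column match
        References(x => x.Battery);     // many-to-one reference
        HasMany(x => x.Accessories)     // one-to-many collection
            .Cascade.AllDeleteOrphan()
            .Inverse();
    }
}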

In a recent project, I’ve had the opportunity to build a new system from the ground up.  With this, I decided to dive head first into using the AutoMappings functionality of Fluent NHibernate. 

This post is somewhat a rambling self-discussion of my explorations with AutoMappings.

What are AutoMappings?

The FluentNHibernate wiki provides a simple definition:

[…] which is a mechanism for automatically mapping all your entities based on a set of conventions.

Rather than hand-mapping each column to a property, we create conventions (rules) to map those.. automatically.  Hey look, auto…mappings.  😉

How?

Using the same fluent language, configuring AutoMapping is an exercise in implementing conventions for the logical naming and handling of data.

Fluently
    .Configure()
    .Database(MsSqlConfiguration.MsSql2005
                  .ConnectionString(cs => cs
                                              .Server("server")
                                              .Database("db")
                                              .Username("user")
                                              .Password("password"))
                  .UseReflectionOptimizer()
                  .UseOuterJoin()
                  .AdoNetBatchSize(10)
                  .DefaultSchema("dbo")
                  .ShowSql())
    .ExposeConfiguration(raw =>
                             {
                                 // Testing/NHibernate Profiler stuffs.
                                 raw.SetProperty("generate_statistics", "true");
                                 RebuildSchema(raw);
                             })
    .Mappings(m =>
              m.AutoMappings.Add(AutoPersistenceModel
                                     .MapEntitiesFromAssemblyOf<Walkthrough>()
                                     .ConventionDiscovery.Setup(c =>
                                                                    {
                                                                        c.Add<EnumMappingConvention>();
                                                                        c.Add<ReferencesConvention>();
                                                                        c.Add<HasManyConvention>();
                                                                        c.Add<ClassMappingConvention>();
                                                                    })
                                     .WithSetup(c => c.IsBaseType = type => type == typeof (Entity)))
                  .ExportTo(@".\")
    );

As you can see above, the only difference from a fluent mappings configuration is in the actual Mappings area.  Good deal!  That helps ensure my existing work using fluent mappings could translate without too much headache.

I've specified four conventions.  Each of these conventions has an interface that provides the necessary methods to ensure your rules are applied to the correct objects.

EnumMappingConvention

internal class EnumMappingConvention : IUserTypeConvention
{
    public bool Accept(IProperty target)
    {
        return target.PropertyType.IsEnum;
    }

    public void Apply(IProperty target)
    {
        target.CustomTypeIs(target.PropertyType);
    }

    public bool Accept(Type type)
    {
        return type.IsEnum;
    }
}

The great thing about these methods is they’re fluent enough to translate to English.

Accept… targets where the property type is an enumeration.

Apply… to the target that the “Custom Type Is” the property type of the target.
  NOTE: This translates from a ClassMap into: Map(x => x.MyEnumFlag).CustomTypeIs(typeof(MyEnum));

Accept… a type that is an enumeration.

ReferencesConvention

The Reference convention handles those reference relationships between our classes (and the foreign keys).

internal class ReferencesConvention : IReferenceConvention
{
    public bool Accept(IManyToOnePart target)
    {
        return string.IsNullOrEmpty(target.GetColumnName());
    }

    public void Apply(IManyToOnePart target)
    {
        target.ColumnName(target.Property.Name + "Id");
    }
}

The most important part here is enforcing how your foreign keys are going to be named.  I prefer the simple {Object}Id format.

Car.Battery on the object side and [Car].[BatteryId] on the database side.

HasManyConvention

The HasManys are our lists, bags, and collections of objects.

internal class HasManyConvention : IHasManyConvention
{
    public bool Accept(IOneToManyPart target)
    {
        return target.KeyColumnNames.List().Count == 0;
    }

    public void Apply(IOneToManyPart target)
    {
        target.KeyColumnNames.Add(target.EntityType.Name + "Id");
        target.Cascade.AllDeleteOrphan();
        target.Inverse();
    }
}

We want to make sure that we haven’t added any other key columns (the Count == 0), and then apply both the naming convention as well as a few properties.

Cascade.AllDeleteOrphan() and Inverse() allow our parent objects (Car) to add new child objects (Car.Battery (Battery), Car.Accessories (IList<Accessory>)) without having to save them separately.
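
To put those two conventions in context, here's the rough shape of the domain they're aimed at.  These classes are illustrative only, not code from the project:

// Illustrative parent/child classes covered by the References and HasMany conventions.
public class Car : Entity
{
    public virtual string Name { get; set; }
    public virtual Battery Battery { get; set; }               // maps to a BatteryId column
    public virtual IList<Accessory> Accessories { get; set; }  // keyed by a CarId column on the child table
}

public class Battery : Entity
{
    public virtual string Model { get; set; }
}

public class Accessory : Entity
{
    public virtual string Description { get; set; }
    public virtual Car Car { get; set; }  // inverse side of the collection
}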

ClassMappingConvention

Finally, the important Class mapping.  This convention ensures that our tables are named properly with pluralization.

public class ClassMappingConvention : IClassConvention
{
    public bool Accept(IClassMap target)
    {
        return true; // everything
    }

    public void Apply(IClassMap target)
    {
        target.WithTable(PluralOf(target.EntityType.Name));
    }
}

I’m using a pluralization method from one of my base libraries that I borrowed from Hudson Akridge.  This helper method works really well and I don’t need to add additional references and libraries into my application just to handle the table names.

public static string PluralOf(string text)
{
    var pluralString = text;
    var lastCharacter = pluralString.Substring(pluralString.Length - 1).ToLower();

    // y's become ies (such as Category to Categories)
    if (string.Equals(lastCharacter, "y", StringComparison.InvariantCultureIgnoreCase))
    {
        pluralString = pluralString.Remove(pluralString.Length - 1);
        pluralString += "ie";
    }

    // ch's become ches (such as Perch to Perches)
    if (string.Equals(pluralString.Substring(pluralString.Length - 2), "ch",
                      StringComparison.InvariantCultureIgnoreCase))
    {
        pluralString += "e";
    }

    switch (lastCharacter)
    {
        case "s":
            return pluralString + "es";
        default:
            return pluralString + "s";
    }
}

Save and build.  The schema export will generate the SQL and/or regenerate the database based on the specifications you've provided, and you're ready to hit the ground running!
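
For completeness, RebuildSchema (referenced in the ExposeConfiguration block above) is just a little helper; a minimal sketch of it, assuming NHibernate's built-in SchemaExport tool, would be:

private static void RebuildSchema(NHibernate.Cfg.Configuration config)
{
    // true, true = script the generated SQL to the console and execute it
    // against the configured database.
    new NHibernate.Tool.hbm2ddl.SchemaExport(config).Create(true, true);
}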

 

Benchmarks : Comparing LINQ to NHibernate Transforms/Grouping

Yesterday, I wrote about setting up NHibernate to query up, group, count, and transform results and return them into a control.  Why did I go to such effort?  Well, the original LINQ query I had that refined the results didn’t perform up to par.  As some may note, premature optimization is never a good practice so I needed some stats to back up the claims.

Overnight, I wrote up a quick test to query up both ways and benchmark the results.  Here’s what I found.

The “test”:

public void TEMP_quick_compare_of_linq_to_nhibernate()
{
    var schoolId = 120;

    var benchmark = new Benchmark();
    using (var repository = new IncidentRepository())
    {
        benchmark.Start();
        var resultsFromLinq =
            repository.GetCountByIncidentCodeWithLinq(schoolId);
        foreach (var item in resultsFromLinq)
        {
            Console.WriteLine(item);
        }
        benchmark.Stop();
        Console.WriteLine("Linq: {0}".AsFormatFor(benchmark.ElapsedTime));

        benchmark.Start();
        var resultsFromNhibernate =
            repository.GetCountByIncidentCode(schoolId);
        foreach (var item in resultsFromNhibernate)
        {
            Console.WriteLine(item);
        }
        benchmark.Stop();
        Console.WriteLine("NHibernate: {0}".AsFormatFor(benchmark.ElapsedTime));
    }
}

Setting up the benchmark object (and the NHibernate Init) happens outside of the timed sections; that's necessary overhead either way.  I'm also iterating through each of the results as part of the benchmark to ensure that everything is evaluated.  Past that, pretty basic.  On the database side, I've disabled statement caching so it doesn't sway the results as much.
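
The Benchmark object above isn't anything fancy; a stand-in built on Stopwatch (my assumption of roughly what it does) would be:

using System;
using System.Diagnostics;

// Stand-in sketch for the Benchmark helper used in the test above.
public class Benchmark
{
    private readonly Stopwatch _watch = new Stopwatch();

    public void Start()
    {
        _watch.Reset();
        _watch.Start();
    }

    public void Stop()
    {
        _watch.Stop();
    }

    public TimeSpan ElapsedTime
    {
        get { return _watch.Elapsed; }
    }
}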

With 24 records (the test data in the system), the results were pretty clear. The average of running the benchmark 100 times resulted in…

Linq: 00:00:00.7050000
NHibernate: 00:00:00.0190000

With 24 records, NHibernate was about 37x faster. 

That’s nice, but what happens in a few weeks when there are a few thousand records?  I populated a few hundred of each incident type into the system, giving me almost 4000 records (the anticipated monthly load of the system by the customer).  How’d that change our averages?

Linq: 00:00:00.8869746
NHibernate: 00:00:00.1381518

Now NHibernate is only about 6x faster than LINQ.  Going from 24 to 4,000 records, the LINQ duration jumped ~0.18 seconds (roughly a 26% increase) whereas NHibernate jumped ~0.12 seconds (roughly a 626% increase).

So, with that, my original gut feeling and assumptions were wrong.  More and more records don't really slow down the LINQ filtering… at least not by much.  The performance gap is still apparent between the two methods (0.88 sec vs. 0.13 sec); however, how much of that time is eaten up by rendering, server latency, etc., and not by the actual processing?

Grouping and Transforming Using NHibernate

June 11, 2009 Comments off

Okay, I’ve got to be doing this wrong.

Call it premature optimization, but I foresee an old LINQ method becoming a performance bottleneck when we hit a few hundred thousand records—especially for the ASP.NET charting control to render in any useful time period.

So, what do I do?  I figure pushing that computation back down to the database would be a good first step.

Unfortunately, grouping, sorting, and such are a serious pain in the ass.  Unless, as I said, I’m doing it wrong.

Original Code – Grouping and Counting ala LINQ

private IList GetIncidentsGroupedByIncidentCode()
{
    using (var repository = new IncidentRepository())
    {
        var allIncidents =
            repository.GetAllBySchoolId(SessionManager.CurrentSchoolId);

        var incidentsByCode = from i in allIncidents
                              group i by i.IncidentCodeId
                              into grouping
                              orderby grouping.Count()
                              select new
                                         {
                                             IncidentCodeId = grouping.Key,
                                             Count = grouping.Count(),
                                             Description =
                                                GetIncidentCodeDescription(grouping.Key)
                                         };
        return incidentsByCode.ToList();
    }
}

Grab all incidents (using the NHibernate repository) and use LINQ to transform them into a handy anonymous type consisting of the IncidentCodeId, a Count (by IncidentCodeId), and the Description, which uses the IncidentCodeId to grab the description (the incident code description comes from an entirely different system/database, hence the method to go fetch it).

I can simply return an IList rather than specifying the type (since it’s anonymous) and get away with loading up my Chart Control—not a problem.

Newish Code – Grouping and Counting ala NHibernate

public IList GetCountByIncidentCode(int schoolId)
{
    using (var tx = Session.BeginTransaction())
    {
        var criteria = Session.CreateCriteria(typeof (Incident));

        // Only get those matching the requested SchoolId
        criteria.Add(RestrictionsHelper<Incident>.Eq(x => x.SchoolId, schoolId));

        // Setup our projections.
        // IncidentCodeId is what we're using as an Identifier.
        // Id is what we're counting, so the results of the "GroupedResult" go into Result
        // and we're grouping by IncidentCodeId
        criteria.SetProjection(Projections.ProjectionList()
                                   .Add(Projections.Property("IncidentCodeId"), "Identifier")
                                   .Add(Projections.Count("Id"), "Result")
                                   .Add(Projections.GroupProperty("IncidentCodeId")));

        // Order THAT mess by Result
        criteria.AddOrder(Order.Asc("Result"));

        // Now, since we can't use anonymous objects (??), we have to use a funky Java
        // method to transform it into a typed result.
        criteria.SetResultTransformer(Transformers.AliasToBean(typeof (GroupedResult)));

        // Convert this all to a list.
        var result = criteria.List<GroupedResult>() as List<GroupedResult>;

        // Commit… or get committed.
        tx.Commit();

        if (result != null)
        {
            // We can't do this inline (??), so go back into the list and iterate through… grabbing
            // descriptions.
            result.ForEach(x =>
                                {
                                    var description =
                                        GetIncidentCodeDescription(x.Identifier.ConvertTo<int>());
                                    x.Description = description;
                                });
        }

        // Holy crap, we're done!
        return result;
    }
}

What… the… heck?

Amazingly enough, that works (changing the chart’s column names, of course).  And it’s relatively quick… But woah, what a mess. 

It also adds annoying little ‘result’ objects into the mix. 

public class GroupedResult
{
    public int Identifier { get; set; }
    public string Description { get; set; }
    public int Result { get; set; }
}

Strongly typed is stellar and I’m pretty sure I could have some generic objects. [Identifier/Description/Result] could work for counts, averages, most anything that is grouped up, but that just twitches me out a bit to have random classes sitting around for data transformations.
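
If I did go generic, the sketch would be something like this (not wired up anywhere yet):

// Sketch of a reusable result type for grouped/aggregated projections.
public class GroupedResult<TIdentifier, TResult>
{
    public TIdentifier Identifier { get; set; }
    public string Description { get; set; }
    public TResult Result { get; set; }
}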

So, good readers—how is this REALLY supposed to work?  All that to generate the guts of:

SELECT COUNT(IncidentCodeId) as Result, IncidentCodeId
FROM Incidents
WHERE SchoolId = :schoolId
GROUP BY IncidentCodeId
ORDER BY Result

Categories: .net 3.5, c#, NHibernate, SQL

Performing SELECT.. WHERE IN using a Repository

June 8, 2009 Comments off

As I've discussed in the past, a few of my repository pattern practices are borrowed from and built on the nice S#arp Architecture project.  Here's another situation where I needed a bit more functionality.

Disclaimer:  If there’s a better way to do this—I’m all ears and emails. 🙂

By default, the FindAll method builds up the NHibernate criteria by iterating through key/value pairs.  Easy enough.

‘Id, 12345’ generates ‘WHERE Id = 12345’.

But what happens when I want to do something with an array?

‘Id, int[] {12345, 67890}’ should generate ‘WHERE Id IN (12345, 67890)’

Thankfully, the Restrictions class has an In method, but how can I add that flexibility to the FindAll method?
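
In isolation, the criteria I'm after looks like this hand-written example against a Student entity (matching the test further down):

// Hand-written equivalent of what FindAll should build up:
// WHERE Id IN (622100, 567944) AND Grade = '09'
var criteria = Session.CreateCriteria(typeof (Student));
criteria.Add(Restrictions.In("Id", new object[] { 622100, 567944 }));
criteria.Add(Restrictions.Eq("Grade", "09"));
var students = criteria.List<Student>();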

Here’s what the FindAll method looks like to start off:

public List<T> FindAll(IDictionary<string, object> propertyValuePairs)
{
    Check.Require(propertyValuePairs != null,
                  "propertyValuePairs was null or empty");
    Check.Require(propertyValuePairs.Count > 0,
                  "propertyValuePairs must contain at least one pair");

    var criteria = Session.CreateCriteria(typeof (T));
    propertyValuePairs
        .ForEach(x =>
                 criteria.Add(Restrictions.Eq(x.Key, x.Value)));

    return criteria.List<T>() as List<T>;
}

That's nice: iterate through each pair, assuming an Eq (Equals) relationship between the key and the value.

After a bit of dinking, checking whether the value is an ICollection seems to be the most reliable approach, considering Restrictions.In(key, value) accepts collections for the value parameter.

This allows you to pass arrays, lists, and dictionaries.

public List<T> FindAll(IDictionary<string, object> propertyValuePairs)
{
    Check.Require(propertyValuePairs != null,
                  "propertyValuePairs was null or empty");
    Check.Require(propertyValuePairs.Count > 0,
                  "propertyValuePairs must contain at least one pair");

    ICriteria criteria = Session.CreateCriteria(typeof (T));

    propertyValuePairs
        .ForEach(x =>
                     {
                         if (x.Value.IsA<ICollection>())
                         {
                             // add WHERE key IN (value)
                             criteria.Add(Restrictions.In(x.Key, (ICollection) x.Value));
                         }
                         else
                         {
                             // add WHERE key = value
                             criteria.Add(Restrictions.Eq(x.Key, x.Value));
                         }
                     });

    return criteria.List<T>() as List<T>;
}

Here’s my (now) passing test that I used to test this logic as I built it:

[Fact]
public void can_find_students_by_array_of_student_ids()
{
    var studentsToFind = new int[] { 622100, 567944, 601466 };

    var criteria = new Dictionary<string, object>();
    criteria.Add("Id", studentsToFind);
    criteria.Add("Grade", "09");

    var sut = new StudentRepository();
    var students = sut.FindAll(criteria);

    students.Count.ShouldBeEqualTo(1);
    students.ShouldContainMatching(x => x.Id == 567944);
    students.ForEach(x =>
        Console.WriteLine("{0}, {1}".AsFormatFor(x.FullName, x.Id)));
}

Test Passed.  Woot.  The generated SQL is also nice and clean (really loving NHProf… though I trimmed out the excess columns for brevity).

SELECT this_.Id            as Id7_0_, [..]
       this_.Grade         as Grade7_0_, [..]
FROM   custom.student_lkup this_
WHERE  this_.Id in (622100 /* :p0 */,567944 /* :p1 */,601466 /* :p2 */)
       and this_.Grade = 09 /* :p3 */

Categories: .net 3.5, c#, Microsoft, NHibernate, SQL

Fluent NHibernate Repository of… integers?

April 21, 2009 Comments off

I'd like to preface this with the fact that just because this "works" doesn't mean it "should".  If there's a proper way to do this, I'm all ears. 😀

I recently needed to do some revamp to an application that queried lookup data from another data system.  The system had a collection of composite keys (age and total score) that returned a percentile score.  Easy enough; however, there are a couple dozen of these tables and I didn’t want to create a couple dozen domain objects/repositories for a SINGLE query.

Typically, the NHibernateRepository* takes a type parameter that matches the mapped object (and provides the proper return type); however, in this case, I didn't have a type to return, simply an integer.  So why wouldn't that work?

public class ScoreRepository : NHibernateRepository<int>, IDisposable

With that in place, I can now add a query into Session:

public int GetConceptPercentile(int age, int total)
{
    var result =
        Session.CreateSQLQuery(
            "select perc from tblConcept where age = :age and total = :total")
            .SetInt32("age", age)
            .SetInt32("total", total)
            .UniqueResult().ConvertTo<int>();

    return result;
}

A few more of those, and our test looks like:

[Fact]
public void GetPercentiles_For_Student()
{
    using (var repository = new ScoreRepository())
    {
        var languagePercentile =
            repository.GetLanguagePercentile(ageCalc_72months.TotalMonths, 18);

        var motorPercentile =
            repository.GetMotorPercentile(ageCalc_72months.TotalMonths, 18);

        var conceptPercentile =
            repository.GetConceptPercentile(ageCalc_72months.TotalMonths, 18);

        languagePercentile.ShouldBeEqualTo(12);
        motorPercentile.ShouldBeEqualTo(17);
        conceptPercentile.ShouldBeEqualTo(10);
    }
}

Everything “appears” to be working; however, the extraneous methods that each NHibernateRepository includes (Get, GetAll, FindAll, etc) are defunct and just sitting there—very messy.

So is there a better way to use NHibernate/Fluent NHibernate WITHOUT mapping objects—those “lookup tables”?

Using Multiple NHibernate Session Factories

March 6, 2009 1 comment

A few months ago, I rewrote a few of my base repositories to follow some of the architecture practices of the S#arp Architecture project.  I liked how easy it was to set up NHibernate*, mark domain objects, and set up mappings, and I highly recommend checking out their project.  Unfortunately, the whole project was a bit too bulky for what I needed, and much of the functionality already existed in framework libraries we use at the office.

* When I mention NHibernate, I’m using Fluent NHibernate, currently at build 366 with a bit of modification to allow Oracle data access.  I’ve also enhanced the S#arp Architecture project to take advantage of the latest feature changes in Fluent NHibernate.

One pitfall, however, that I recently ran into was needing to access multiple NHibernate session factories at the same time—such as accessing PeopleSoft and our internal student system to pull in demographic information.  The S#arp web session storage, by default, uses a single key, so opening a separate session overwrites the prior one.

I found an open issue on the S#arp tracker (#12) for such functionality and a bit of code from Russell Thatcher to allow multiple sessions; however, I could never get it fully working—so I set off to create my own.

Updating the WebSessionStorage

To address the issue of storing multiple sessions in the HttpContext’s session state, the keys must be uniquely named when created.  To do this, I replaced the previous constructor of WebSessionStorage(HttpApplication) and added an additional parameter to create WebSessionStorage(HttpApplication, string).

This appends a name to the session key used to store the NHibernate ISession.

private readonly string CurrentSessionKey = "nhibernate.";

public WebSessionStorage(HttpApplication application, string factoryKey)
{
    application.EndRequest += Application_EndRequest;
    CurrentSessionKey += factoryKey;
}

Everything else in WebSessionStorage remains the same.
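
From memory, that remaining portion is roughly the stock S#arp shape; treat this as a sketch, since your copy may store the session slightly differently:

// Sketch of the unchanged portion: the ISession is stashed in the current
// HttpContext under CurrentSessionKey and closed at the end of the request.
public ISession Session
{
    get { return HttpContext.Current.Items[CurrentSessionKey] as ISession; }
    set { HttpContext.Current.Items[CurrentSessionKey] = value; }
}

private void Application_EndRequest(object sender, EventArgs e)
{
    var session = Session;
    if (session != null && session.IsOpen)
    {
        session.Close();
        HttpContext.Current.Items.Remove(CurrentSessionKey);
    }
}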

NOTE: This ensures that we can find the ISession within the ASP.NET context using WebSessionStorage and isn't required if you're managing the ISession yourself or using another mechanism for storage.

Updating the NHibernateSession

Since we’re looking to support multiple, keyed sessions, the NHibernateSession’s private fields of SessionFactory and SessionStorage should be changed into keyed Dictionaries.

private static readonly Dictionary<string, ISessionFactory> SessionFactories =
    new Dictionary<string, ISessionFactory>();

private static readonly Dictionary<string, ISessionStorage> SessionStorages =
    new Dictionary<string, ISessionStorage>();

To use these new dictionaries, we change the logic in the Init method to add the factory and storage to the dictionaries, rather than assigning to a single variable.

public static FluentConfiguration Init(
    this FluentConfiguration configuration,
    ISessionStorage storage,
    string factoryKey)
{
    Check.Require(storage != null,
        "The session storage was null, but must be provided.");
    Check.Require(!string.IsNullOrEmpty(factoryKey),
        "A unique key for the session factory was null, but must be provided.");

    try
    {
        if (!SessionFactories.ContainsKey(factoryKey))
        {
            SessionFactories.Add(
                factoryKey,
                configuration.BuildSessionFactory());

            SessionStorages.Add(
                factoryKey,
                storage);

            return configuration;
        }
    }
    catch (HibernateException ex)
    {
        throw new HibernateException(
            "Cannot create session factory.", ex);
    }

    return configuration;
}

Our Init statement, when called, then looks like this, passing along the factory key with the ISessionStorage:

configuration.Init(storage, "FactoryKeyNameGoesHere");

The Current public static property also needs to address the new dictionaries; however, it can't stay a property because it needs the factory key as a parameter.  I converted it into a method that accepts the factory key and returns an ISession based on what it finds.  If it cannot find an open session (is null) in the storage, it opens a new one, sets it in the storage, and returns it to the caller.

public static ISession Current(string factoryKey)
{
    var session =
        SessionStorages[factoryKey].Session ??
        SessionFactories[factoryKey].OpenSession();

    SessionStorages[factoryKey].Session = session;

    return session;
}

That’s it in the NHibernateSession.

Updating the Repository & Decorating Inheriting Repositories

The Repository base class provides functionality to the CRUD methods through a protected Session property.  This property needs to be updated to call the new Current(string) method.  However, since the Repository doesn't know which factory it belongs to, we need a way to communicate the "key" that a repository uses to ensure the correct session is accessed (so, for example, our PeopleSoft queries don't try to hit our SAP session—that confuses it. *grin*).

Russell had a great solution for this—an attribute applied to repositories that simply provides a place to store the "name" of the session.

[AttributeUsage(AttributeTargets.Class)]
public class SessionFactoryAttribute : Attribute
{
    public readonly string FactoryKey;

    public SessionFactoryAttribute(string factoryKey)
    {
        FactoryKey = factoryKey;
    }

    public static string GetKeyFrom(object myClass)
    {
        var key = string.Empty;

        // Get member info via reflection
        var info = myClass.GetType();

        // Retrieve custom attributes
        var attributes =
            info.GetCustomAttributes(
                typeof (SessionFactoryAttribute),
                true);

        // Check for attribute
        if (attributes.Length > 0)
        {
            var att =
                (SessionFactoryAttribute) attributes[0];
            key = att.FactoryKey;
        }

        return key;
    }
}

Now we simply decorate the inheriting repositories with SessionFactory.  Notice how the key matches what we used in the Init statement earlier.

[SessionFactory("FactoryKeyNameGoesHere")]
public class CourseRepository :
    NHibernateRepository<Course>, ICourseRepository, IDisposable { }

For a bit better design, you can also create a constant called, perhaps, FactoryKey or {data}FactoryKey and use that for the attributes and Init statement.  Then, if it needs to change, you change it in one place.

Inside our Configuration class (PSConfiguration, for example), we have:

public const string FactoryKey = "PSProd";

and

configuration.Init(storage, PSConfiguration.FactoryKey);

and then decorating our Repository classes:

[SessionFactory(PSConfiguration.FactoryKey)]

Now, to take advantage of our new SessionFactory attribute, we modify the Session property on our Repository base class.

protected virtual ISession Session
{
    get
    {
        var factoryKey = SessionFactoryAttribute.GetKeyFrom(this);
        return NHibernateSession.Current(factoryKey);
    }
}

Now the interface implementations (Get, Find, etc) are fetching the correct Session.
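
For illustration, a couple of the base class members reduced to their essence (a sketch, trimmed down from my actual base class):

// Sketch: base repository members simply lean on the keyed Session property.
public virtual T Get(int id)
{
    return Session.Get<T>(id);
}

public virtual T SaveOrUpdate(T entity)
{
    Session.SaveOrUpdate(entity);
    return entity;
}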

In summary, I’ve enabled accessing multiple NHibernate Session Factories from the S#arp Architecture framework (or my small implementation of it) by…

  • Updating the WebSessionStorage to accept a factory key when the session is created.
  • Updating the NHibernateSession to store multiple Factories and Storages in dictionaries and then access these dictionaries when the “current” session is requested.
  • Adding an attribute to Repositories to “name” which factory a certain repository should be selecting out of and then providing that session when called.

So far, things are working like a champ—tweaking and refactoring from here. 🙂

Part 2: Converting Generic Collections to CSVs

February 6, 2009 1 comment

Part 1 | Part 2

In my last post, I wrote up the tests needed to meet our customer's requirements; now I'm ready to flesh them out.

Objective #1 – Export Collection to CSV With All Properties

Each of my tests implements one of two extension methods, ToDelimitedFile and ToDelimitedRow.  Collections use the File version whereas single objects use the Row version.

Unfortunately, I couldn't find a 100% accurate way to use generic constraints for this—too bad we can't put in "where T : !ICollection<T>".  Hence the separate methods.

For me, extension methods sit outside of my actual classes in their own happy static {whatever}Extensions class.  The CSV Conversions are no different.

For Test #1 to pass, we need to implement the extension method.

public static string ToDelimitedFile<T>(this ICollection<T> convertible)
{
    return DelimitedExporter.SerializeCollection(convertible);
}

This paves the way for creating our next class and method, DelimitedExporter and SerializeCollection.  The DelimitedExporter class will contain all of the methods and logic for converting our collections into CSV “lines”.  Let’s implement SerializeCollection real quick with some dummy data and rerun the test.

public static string SerializeCollection<T>(ICollection<T> convertible)
{
    return "Name,Level";
}

Pass!  Test passes!

Now, of course, since the test passes, we're ready to flesh out the details of the SerializeCollection method.

Creating the Header Rows

Since we know we’ll be working with strings and doing a large number of appends, let’s start out by creating a StringBuilder to hold all of that text information.

public static string SerializeCollection<T>(IEnumerable<T> collection)
{
    var stringBuilder = new StringBuilder();

    // grab the public properties of T to create the columns
    var columns = typeof(T).GetProperties();

 

GetProperties is a method on Type that returns all public properties as PropertyInfo objects.  These properties will be used to create the column header names for our CSV.

    // build header row
    stringBuilder.Append(BuildHeaderRowFrom(columns));

 

So let’s “build” our header row.  I’ve extracted this into a separate method from the start—you could have one mega method and refactor it down later.

 

private static string BuildHeaderRowFrom(IEnumerable<PropertyInfo> columns)
{
    var headerString = string.Empty;
    var counter = 0;
    foreach (PropertyInfo column in columns)
    {
        counter++;
        headerString += column.Name;
        if (counter != columns.Count())
            headerString += ",";
    }
    headerString += Environment.NewLine;

    return headerString;
}

The BuildHeaderRowFrom method takes the PropertyInfo enumeration from our GetProperties() call and iterates through it, capturing the Name of each property and appending it to a string.

Let’s close up the SerializeCollection() method …

       return stringBuilder.ToString();

}

… and run the test again and everything should pass up to this point.

Now we can focus on the data rows.

Creating the Data Rows

Our next task in SerializeCollection is to flesh out how each enumerable object will be converted into a row in our CSV.

After our BuildHeaderRowFrom() call, let's now append the data rows with:

    // build item rows
    stringBuilder.Append(BuildDataRowsFrom(collection, columns));

BuildDataRowsFrom will take the original collection of objects (an IEnumerable<T>) and the columns we parsed for the header as parameters.

private static string BuildDataRowsFrom<T>(
    IEnumerable<T> collection,
    IEnumerable<PropertyInfo> columns)
{
    var rowString = string.Empty;
    foreach (T item in collection)
    {
        int counter = 0;
        var columnCount = columns.Count();

        foreach (PropertyInfo column in columns)
        {
            counter++;
            var value =
                column.GetValue(item, null) ?? string.Empty;

            // Append column value to the row.
            rowString += "{0}".AsFormatFor(value);

            // Append delimiter if _not_ the last column.
            if (counter != columnCount)
                rowString += ",";
        }
        rowString += Environment.NewLine;
    }
    return rowString;
}

Note: I could use a StringBuilder inside of this; however, the StringBuilder seemed to take a bit longer with the tests of 50,000ish records (max in the system I was designing this for).  A later revision may be to further examine ways to improve performance.
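
For reference, the StringBuilder flavor would be essentially the same loop with appends instead of concatenation (a sketch, not my exact test code):

// Sketch of the StringBuilder variant mentioned above.
private static string BuildDataRowsFrom<T>(
    IEnumerable<T> collection,
    IEnumerable<PropertyInfo> columns)
{
    var rowBuilder = new StringBuilder();
    var columnCount = columns.Count();

    foreach (T item in collection)
    {
        var counter = 0;
        foreach (PropertyInfo column in columns)
        {
            counter++;
            rowBuilder.Append(column.GetValue(item, null) ?? string.Empty);
            if (counter != columnCount)
                rowBuilder.Append(",");
        }
        rowBuilder.Append(Environment.NewLine);
    }

    return rowBuilder.ToString();
}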

 

So, what’s all this do?

 

For the most part, it reads as a sentence:  For each item in the collection, loop through each known column and fetch the value from that item.  If the value is null, simply use an empty string.

 

Since I'm using IEnumerable<T> rather than ICollection<T>, I don't have access to a Count property (without calling the LINQ extension method), so I've moved that call outside of the inner foreach loop.  Perhaps there's a better generic collection to be using, but that's for later revisions.

 

Run our test again, and it should pass AND we should now have data.

 

can_convert_list_of_objects_to_csv : Passed

*** ConsoleOutput ***
Name,Level
SchoolA,Elementary
SchoolB,Elementary
SchoolC,Elementary
SchoolD,Elementary

 

Excellent.

 

That’s it for this post.  In the next part, I’ll start in on the second objective—being able to specify which parameters are picked up by the DelimitedExporter.

 

 

Part 1: Converting Generic Collections To CSVs

February 6, 2009 1 comment

I recently had a request to allow “exports” of lists from one application to the user in CSV (comma separated value) format.  After a bit of dinking, here’s what I came up with; however, I’m sure a bit more work will find that it’s overkill. 🙂

Starting Off With The Tests

So what did I want to accomplish?  I had three objectives:

  1. Export a collection of objects (IList, ICollection, IEnumerable, etc) to CSV with all “properties”.
  2. Export a collection and be able to specify the properties to export.
  3. (optional) Export a single object to CSV with the same two requirements above (1, 2).

#1:

[Fact]
public void can_convert_list_of_objects_to_csv()
{
    var repository = new SchoolRepository();
    var schools = repository.GetBySchoolLevel(Level.Elementary);

    var csvOutput = schools.ToDelimitedFile();

    // check that the CSV has the headers
    // of the properties of the object
    csvOutput.ShouldContain("Name,Level");
}

#2:

[Fact]
public void can_convert_list_of_objects_to_csv_with_parameters()
{
    var repository = new SchoolRepository();
    var schools = repository.GetBySchoolLevel(Level.Elementary);

    var csvOutput = schools.ToDelimitedFile("Name");
    csvOutput.ShouldContain("Name");

    Console.Write(csvOutput);
}

#3:

[Fact]
public void can_convert_a_single_object_to_csv()
{
    var repository = new SchoolRepository();
    var school = repository.Get(120);

    var csvOutput = school.ToDelimitedRow();
    csvOutput.ShouldContain("Great School,High");

    Console.Write(csvOutput);
}

In the next post, we'll begin fleshing out the implementation behind these tests.