Archive

Archive for the ‘.net 3.5’ Category

Reusable ‘Controls’ with Sprites, CSS, and Spark Partials

October 2, 2009 Comments off

I came across SilkSprite during some of my work with BluePrintCSS and fell in love.  I’ve used the famfamfam Silk icons for ages and, with a current project, wanted to implement some quick CSS buttons and menu items using those icons.

So, with CSS-fu in hand, here’s how the button turned out.

First, creating a simple CSS button style:

.button

{

    display: inline-block;

    font-weight: bold;

    font: bold .85em/2.5em Verdana, Helvetica, sans-serif;

    text-decoration: none;

    text-indent: 10px;

    width: 150px;

    height: 2.5em;

    color: #555555;

    background-color: #EAEAD7;

    border: #CCCCCC solid 1px;

    -moz-border-radius: 5px;

    -webkit-border-radius: 5px;

}

 

.button:hover

{

    background-color: #F3F3E9;

    color: #737373;

    cursor: pointer;

}

This button is pretty simple and standard—nothing fancy.  The most important declaration in there is display: inline-block, as this allows our buttons to share the same row instead of stacking on top of each other (which is what they’d do as plain block elements).

Hover Off: CSS Button - Hover Off

Hover On: CSS Button - Hover On

Second, because I needed to include the icon INSIDE another container (the button), I modified the original SilkSprite ss_sprite and removed a bit of the extra padding.

.sprite

{

    display: inline-block;

    overflow: hidden;

    background-repeat: no-repeat;

    background-image: url(/content/img/sprites.png);

    padding-left: 25px;

    padding-top: 2px;

    height: 16px;

    max-height: 16px;

    vertical-align: middle;

}

The important things to note here are the padding-top and height properties.  Until I’m led into the light and improve my CSS-fu, I’m compensating for the font height differences by using padding-top.  For example, I want the icon to truly show up in the “middle”… and vertical-align just wasn’t pushing it far enough.

We’ll come back to actually working with SilkSprite in a moment; first, let’s work up a quick Spark partial view to display our results.  Now, you could do this without the partial views—just replace my Spark variables with the actual text we’re passing to them.  I’ll provide both examples.

Our button needs three elements to be useful:

  1. An ID, so that we can call it from jQuery or your client-side query engine of choice;
  2. A sprite name, such as ss_add (these are provided by SilkSprite); and
  3. The button text.

So, knowing that, a basic button could be rendered using:

<div id="my-button" class="button">

    <span class="sprite ss_add"></span>Add Activity

</div>

We can simply substitute out the variable elements (id, sprite class, and text) and create a Spark partial view (named _styledButton.spark and located in the /Shared view folder).

<div id="${id}" class="button">

    <span class="sprite ${sprite}"></span>${text}

</div>

In our views, it’s easy to create a button:

<styledButton text="'Add Activity'" id="'my-button'" sprite="'ss_page_add'" />

Note: You need to surround your text with single quotes (‘text’) to inform the Spark engine that the information contained in your variables is a string.

Now, we have shiny, icon’d buttons:

Hover Off:

Hover On:

Categories: HTML, MVC, Spark View Engine

Populating Select Lists in ASP.NET MVC and jQuery

September 25, 2009 Comments off

I’ve spent a bit of time lately finding the best way to create and populate select (option) lists using a mixture of ASP.NET MVC and jQuery.  What I ran into is that the “key” and “value” names are not passed along when using Json(data).

Here’s what I’m trying to pull off in jQuery: building a simple select drop down list.

var dd_activities = "<select id='dd_activities'>";
var count = data.length;
for (var i = 0; i < count; i++) {
 dd_activities += "<option value='" + data[i].Key + "'>" + data[i].Value + "</option>";
}
dd_activities += "</select>";

$("#activities").before(dd_activities);

Using some very basic key/value data:

[
 {"3","Text Value"},
 {"4","Another Text Value"},
 {"1","More boring values…"},
 {"2","Running out of values"},
 {"5","Last value…"}
]

Without any sort of name, I was at a loss on how to access the information, how to get its length, or anything.  Firebug was happy to order it up… but that didn’t help.
 
My first attempt was to use a custom object, but that just felt dirty—creating NEW objects simply to return Json data.
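For illustration only, that first attempt presumably amounted to a throwaway DTO along these lines (a hypothetical reconstruction, not the project’s actual code):

public class KeyValueResult
{
    // Hypothetical reconstruction of the "custom object" approach: the class
    // exists only to give Json() named Key/Value properties to serialize.
    public string Key { get; set; }
    public string Value { get; set; }
}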
 
My second attempt, newing up anonymous objects to shape the JSON data, seemed to work like a champ:
 

[Authorize]

[CacheFilter(Duration = 20)]

public ActionResult GetActivitiesList()

{

    try

    {

        var results =

        _activityRepository

            .GetAll()

            .OrderBy(x => x.Target.Name).ThenBy(x => x.Name)

            .Select(x => new

                {

                    Key = x.Id.ToString(),

                    Value = string.Format("[{0}] {1}", x.Target.Name, x.Name)

                })

            .ToList();

 

        return Json(results);

    }

    catch (Exception ex)

    {

        return Json(ex.Message);

    }

}

 
Well, not beautiful, but returns a sexy Key/Value list that Json expects—and that populates our select list.
[
 {"Key":"3","Value":"Text Value"},
 {"Key":"4","Value":"Another Text Value"},
 {"Key":"1","Value":"More boring values…"},
 {"Key":"2","Value":"Running out of values"},
 {"Key":"5","Value":"Last value…"}
]
The next step was to get that out of the controller and into the data repository… pushing some of that logic back down to the database.
 

var criteria =

    Session.CreateCriteria<Activity>()

    .CreateAlias("Target", "Target")

    .Add(Restrictions.Eq("IsValid", true))

    .AddOrder(Order.Asc("Target.Name"))

    .AddOrder(Order.Asc("Name"))

    .SetMaxResults(100);

 

var data = criteria.List<Activity>();

var result =

    data

        .Select(x => new

            {

                Key = x.Id.ToString(),

                Value = string.Format("[{0}] {1}", x.Target.Name, x.Name)

            })

        .ToList();

tx.Commit();

return result;

 
With a bit of formatting, a few restrictions, and the ordering pushed back down to the database, a tidy SQL statement is generated.
 
The last touch is the return type.  Since we’re returning a “List” of anonymous types, the return type of GetActivitiesList() must be the non-generic IList (there’s no type name to close IList<T> over).
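To make that concrete, here’s a minimal, self-contained sketch of the idea; the Activity shape and the in-memory source below are illustrative stand-ins, not the NHibernate-backed repository from above:

using System.Collections;
using System.Collections.Generic;
using System.Linq;

public class Activity
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string TargetName { get; set; }
}

public class ActivityRepository
{
    private readonly IEnumerable<Activity> _activities;

    public ActivityRepository(IEnumerable<Activity> activities)
    {
        _activities = activities;
    }

    // Anonymous types have no name to close IList<T> over, so the
    // signature falls back to the non-generic IList.
    public IList GetActivitiesList()
    {
        return _activities
            .OrderBy(x => x.TargetName).ThenBy(x => x.Name)
            .Select(x => new
            {
                Key = x.Id.ToString(),
                Value = string.Format("[{0}] {1}", x.TargetName, x.Name)
            })
            .ToList();
    }
}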
 
That shrinks down my ActionResult to a single call.
 

try

 {

     return Json(_activityRepository.GetActivitiesList());

 }

 catch (Exception ex)

 {

     return Json(ex.Message);

 }

 
That works… and will work for now.  Though, I’ve marked it as a HACK in my code.  Why?  I’m honestly not sure yet.  Just a feeling.

Html.Grid Rendering as Plain Text?

September 23, 2009 1 comment

Notice: Stupid, stupid moment described ahead.  Proceed with caution.

I spent a good half hour trying to figure out why my MVCContrib Html.Grid<T> wasn’t rendering.  It wasn’t throwing an error; it was simply returning the HTML markup as plain text.

  • The AutoMapper code looked good,
  • The Html.Grid<T> code looked good (it’d been templated off another page anyway, and that page was working),
  • The view model code looked good.

So why was I being greeted with garbled junk?

${Html.Grid(Model.Details) .Attributes(Id => "RoutineDetails") .Columns(column => { column.For(c => this.Button("EditDetail").Value("Edit").Id(string.Format("edit_{0}", c.Id))).DoNotEncode(); column.For(c => c.Activity.Target.Name).Named("Target Area"); column.For(c => c.Activity.Name).Named("Activity"); column.For(c => c.Sets); column.For(c => c.Weight); column.For(c => c.Repetitions); column.For(c => c.Duration); })}

That’s not an Html.Grid… well, at least not a properly rendered one.

Encoding issue? Maybe. Data issue? Perhaps.

No, the issue was typing too quickly and not paying attention.

public ActionResult New()

 {

     var viewModel = BuildRoutineNewViewModel(new Routine());

     return View();

 }

Yeah, that’s the problem… right there.  I’d forgotten to pass the view model into the View.  Apparently, when the model it’s bound to is empty or null, the Html.Grid<T> helper doesn’t throw an error—the markup just falls through and gets written out as plain text.
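The fix is as small as the mistake; the method below simply mirrors the snippet above with the missing argument supplied:

public ActionResult New()
{
    var viewModel = BuildRoutineNewViewModel(new Routine());
    return View(viewModel);   // the missing piece: actually hand the view model to the view
}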

Oddly enough, this is one of those times I’d wish the screen would have lit up red.  Lessons learned.

ASP.NET Development Server From ‘Here’ in PowerShell

September 9, 2009 Comments off

Long title… almost longer than the code.

I used to have an old registry setting that started up the ASP.NET Development Server from the current path; however, since I rarely open up Explorer—and opening up Firefox from there was even more painful—I needed a script.

What does it do?

The script starts up the ASP.NET Development Server with a random port (so you can run multiples…) at your current location.  It then activates your machine’s DEFAULT BROWSER and browses to the site.  Firefox user?  No problem.  Works like a champ!

The Script (Full Code)

$path = resolve-path .
$rand = New-Object system.random
$port = $rand.next(2048,10240)
$path_to_devserver = "C:\Program Files (x86)\Common Files\microsoft shared\DevServer\9.0\WebDev.WebServer.exe"

& $path_to_devserver /port:$port /path:$path
(new-object -com shell.application).ShellExecute("http://localhost:$port")

The $path_to_devserver can be updated—depending on 64-bit vs. 32-bit machines.  Simple, easy, and to the point.  Now, no more fumbling around to launch a quick web application!

Ramping up with PSake

September 8, 2009 Comments off

I’ve been teetering back and forth with PSake and my trusty NAnt scripts for quite a while now.  For those not familiar with PSake, it’s build automation that makes you drunk—but in a good way. 😉  You can read James Kovacs’ original post here or check out the repository here for the latest bits.

I originally looked at rake scripts (after exposure working with Fluent NHibernate) as PowerShell is loathed in our organization—or was.  That mindset is slowly changing (being able to show people how to crank out what was originally scoped at a week in two lines of PowerShell script helps out); so I’m using PSake as further motivation.

My prior PSake scripts were a bit tame.  Launch msbuild, copy a few files.  With the latest release of xUnit 1.5 hitting the wires over the weekend (and a much-needed x86 version for my poor, cranky Oracle libraries), I decided to bite the bullet and dig into PSake.

I had two goals:

  1. Build a reliable framework “default.ps1” file that I could drop into almost any project and configure with little or no effort.
  2. Compile, test, and rollout updates from a single PSake command task.

I borrowed the basic layout from Ayende’s Rhino Mocks PSake; however, I couldn’t get msbuild to run correctly simply by calling it.

Here’s what I ended up with for our internal core library.  The core library isn’t so much a “utilities” container, but just as it sounds—the framework all of our applications are built on to keep connections to our various systems (HR, student systems, data warehouses, etc.) consistent, as well as to hold our base FNH conventions.

CODE: Full code available on CodePaste.NET

Properties

The properties area holds all of the configuration for the PSake script.  For me, it’s common to configure $solution_name, $libraries_to_merge, and $libraries_to_copy.  With our naming standards, the $test_library should be left unchanged.  I also added in the tester information so we could change from XUnit to MBUnit (if Hell froze over or something).

properties {

 

  # ****************  CONFIGURE ****************

       $solution_name =           "Framework"

       $test_library =            "$solution_name.Test.dll"

       $libraries_to_merge =      "antlr3.runtime.dll", `
                                  "ajaxcontroltoolkit.dll", `
                                  "Castle.DynamicProxy2.dll", `
                                  "Castle.Core.dll", `
                                  "FluentNHibernate.dll", `
                                  "log4net.dll", `
                                  "system.linq.dynamic.dll", `
                                  "xunit.dll", `
                                  "nhibernate.caches.syscache2.dll", `
                                  "cssfriendly.dll", `
                                  "iesi.collections.dll", `
                                  "nhibernate.bytecode.castle.dll", `
                                  "oracle.dataaccess.dll"

       $libraries_to_copy =       "system.data.sqlite.dll"

       $tester_directory =        "j:\shared_libraries\xunit\msbuild"
       $tester_executable =       "xunit.console.x86.exe"
       $tools_directory =         "$tools"
       $base_directory  =         resolve-path .
       $thirdparty_directory =    "$base_directory\thirdparty"
       $build_directory =         "$base_directory\build"
       $solution_file =           "$base_directory\$solution_name.sln"
       $release_directory =       "$base_directory\release"

}

Clean and easy enough.  You’ll notice that $libraries_to_merge and $libraries_to_copy are implied string arrays.  That works out well since string arrays end up as params when passed to commands… and our $libraries_to_copy can be iterated over later in the code.

Tasks – Default

task default -depends Release

The default task (if just running ‘psake’ without parameters) runs Release.  Easy enough.

Tasks – Clean

task Clean {

  remove-item -force -recurse $build_directory -ErrorAction SilentlyContinue | Out-Null

  remove-item -force -recurse $release_directory -ErrorAction SilentlyContinue | Out-Null

}

Clean up those build and release directories.

Tasks – Init

task Init -depends Clean {

    new-item $release_directory -itemType directory | Out-Null

    new-item $build_directory -itemType directory | Out-Null

    cp $tester_directory\*.* $build_directory

}

Restore those build and release directories that we cleaned up; then copy in our unit testing framework so we can run our tests (if necessary).

Tasks – Compile

task Compile -depends Init {

       # from http://poshcode.org/1050 (first lines to get latest versions)

       [System.Reflection.Assembly]::Load('Microsoft.Build.Utilities.v3.5, Version=3.5.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a') | Out-Null

       $msbuild = [Microsoft.Build.Utilities.ToolLocationHelper]::GetPathToDotNetFrameworkFile("msbuild.exe", "VersionLatest")

       # adding double slash for directories with spaces. Stupid MSBuild.
       &$msbuild /verbosity:minimal /p:Configuration="Release" /p:Platform="Any CPU" /p:OutDir="$build_directory"\\ "$solution_file"

}

Compile is a bit tricky.  As noted in the code, I ended up using a SharePoint example from PoshCode to get MSBuild to behave.  The standard exec methodology provided by PSake kept ignoring my parameters.  Maybe someone has a good reason… but this works.

You also see that my OutDir has TWO slashes.  It seems that directories with spaces require the second.  I’m sure this will somehow bite me later on, but it seems to be working for now. 😉

Tasks – Test

task Test -depends Compile {

  $origin_directory = pwd

  cd $build_directory

  exec .\$tester_executable "$build_directory\$test_library"

  cd $origin_directory       

}

I want to thank Ayende for the idea of stashing the origin directory in a variable—brilliant.  This one is pretty simple—it just calls the tester and runs the tests.

Tasks – Merge

task Merge {

       $origin_directory = pwd

       cd $build_directory

      

       remove-item "$solution_name.merge.dll" -erroraction SilentlyContinue
       rename-item "$solution_name.dll" "$solution_name.merge.dll"

       & $tools\ilmerge\ilmerge.exe /out:"$solution_name.dll" /t:library /xmldocs /log:"$solution_name.merge.log" `
              "$solution_name.merge.dll" $libraries_to_merge

       if ($lastExitCode -ne 0) {
              throw "Error: Failed to merge assemblies!"

       }

       cd $origin_directory

}

Merge calls ILMerge and wraps all of my libraries into one.  Do I need to do this?  Nah, but for the framework, I prefer to keep everything together.  I don’t want to be chasing mis-versioned libraries around.  Again, since $libraries_to_merge is a string array, it passes each “string” as a separate parameter—which is exactly what ILMerge wants to see.

I also have ILMerge generate and keep a log of what it did—just to have.  Since the build directory gets blown away between builds (and isn’t replicated to source control), there’s no harm.  Space is mostly free. 😉

Tasks – Build & Release

task Build -depends Compile, Merge {

       # When I REALLY don’t want to test…

}

 

task Release -depends Test, Merge {

       copy-item $build_directory\$solution_name.dll $release_directory

       copy-item $build_directory\$solution_name.xml $release_directory

      

       # copy libraries that cannot be merged

       % { $libraries_to_copy } | %{ copy-item (join-path $build_directory $_) $release_directory }

      

}

Build provides just that—building with no testing and no copying to the release directory.  This is more for testing out the scripts, but useful in some cases.

Release copies the library and the XML documentation out to the release directory.  It then iterates through the string array of “other” libraries (non-managed code libraries that can’t be merged, etc.) and copies them as well.

 

 

 

Using RedGate ANTS to Profile XUnit Tests

August 5, 2009 3 comments

RedGate’s ANTS Performance and Memory profilers can do some pretty slick testing, so why not automate it?  The “theory” is that if my coverage is hitting all the high points, I’m profiling all the high points and can see bottlenecks.

So, how does this work?  Since the tests are in a compiled library, I can’t just “load” the unit tests.  However, I can load xUnit and have it run the tests.

NOTE: If you’re profiling x86 libraries on an x64 machine, you’ll need the xUnit 1.5 CTP (or later), which includes xunit.console.x86.exe.  If you’re on an x86 machine or do not call x86 libraries, pay no attention to this notice. 😉

To begin, start up ANTS Performance Profiler and Profile a New .NET Executable.

XUnit ala ANTS Profiler

For the .NET Executable, point it towards XUnit and in the Arguments, point it towards the library you are testing.  Simple enough.

Click “Start Profiling” and let the profiling begin!

Now if I could just get the “top 10” methods to export to HTML or something so I could automate this in our reporting.

Fetching Nested Group Memberships in Active Directory

July 22, 2009 Comments off

As we’ve started using Active Directory more and more to provide single sign-on services for our web applications, group memberships have become more important.

We recently rolled out an application that took advantage of nesting groups (easier to add and manage five global groups than 10,000 individuals); however, our existing code to fetch memberships wouldn’t look at nested groups.

So if I was a member of “Student Achievement”, how could I parse the memberships of that group and determine if I was in “MIS”?

Thankfully, a bit of recursion does the trick… 🙂

As our infrastructure is entirely Windows Server 2003 and higher, I use the System.DirectoryServices.Protocols namespace and methods to connect to and parse out information from LDAP.  Because of this, I rely on SearchResult(s) rather than DirectoryEntries. 

In our environment, a “user” is defined as:

"(&(objectCategory=person)(objectClass=user)(mail=*)({0}={1}))"

Everything looks pretty plain except we require that a valid “user” have an email address.  That ensures we filter out junk/test accounts as only employees have Exchange accounts.
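For a rough idea of how that format string gets used with System.DirectoryServices.Protocols, here’s a hedged sketch; the server name, base DN, and attribute/value pair are placeholders of mine, not values from our environment:

// Hedged sketch -- requires: using System.DirectoryServices.Protocols;
// Server, base DN, attribute, and value are hypothetical placeholders.
private static SearchResponse FindUser(string attribute, string value)
{
    const string userFilter =
        "(&(objectCategory=person)(objectClass=user)(mail=*)({0}={1}))";

    using (var connection = new LdapConnection("ldap.example.com"))
    {
        var request = new SearchRequest(
            "DC=example,DC=com",                         // base DN (placeholder)
            string.Format(userFilter, attribute, value), // e.g. ("sAMAccountName", "jdoe")
            SearchScope.Subtree,
            "memberOf");                                 // attributes to return

        return (SearchResponse)connection.SendRequest(request);
    }
}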

Groups are even easier:

"(objectCategory=group)"

If, say, I’ve queried for a single user, the groups property is populated simply by looking at the local user’s “memberOf” attribute.

private static IEnumerable<string> ParseGroupMemberships(SearchResultEntry result, int countOfGroups)

{

    for (int i = 0; i < countOfGroups; i++)

    {

        var fullGroupName = (string) result.Attributes["memberOf"][i];

        //Fully Qualified Distinguished Name looks like:

        //CN={GroupName},OU={AnOU},DC={domain},DC={suffix}

        //CN=DCI,OU=Groups,OU=Data Center,DC=usd259,DC=net

        int startGroupName = fullGroupName.IndexOf("=", 1);

        int endGroupName = fullGroupName.IndexOf(",", 1);

        if (startGroupName != -1)

        {

            string friendlyName =

                fullGroupName.Substring(startGroupName + 1, (endGroupName - startGroupName) - 1);

            yield return friendlyName;

        }

    }

}

That was fine for the primary groups (attached through memberOf); however, it didn’t look at the groups those groups were a “memberOf”. 🙂

After quite a bit of trial and error, the new method looks pretty ugly, but it seems to be quite performant and reliable in tests.

private static IEnumerable<string> ParseGroupMemberships(

    SearchResultEntry result, int countOfGroups)

{

    var primaryGroups = new List<string>(countOfGroups);

    var allGroups = new List<string>();

 

    for (int index = 0; index < countOfGroups; index++)

    {

        primaryGroups.Add(result.Attributes[ldapGroupsAttribute][index].ToString());

        allGroups.Add(result.Attributes[ldapGroupsAttribute][index].ToString());

    }

 

    var connection = new ActiveDirectory().GetConnection();

 

    while (0 < primaryGroups.Count)

    {

        var searchRequest = new SearchRequest(distinguishedName,

                                              CreateFilterFromGroups(primaryGroups),

                                              SearchScope.Subtree,

                                              ldapGroupsAttribute);

        primaryGroups.Clear();

 

        var response = (SearchResponse)connection.SendRequest(searchRequest);

        if (response != null)

        {

            int entriesCount = response.Entries.Count;

            for (int entry = 0; entry < entriesCount; entry++)

            {

                DirectoryAttribute groupList =

                    response.Entries[entry].Attributes[ldapGroupsAttribute];

 

                if (groupList != null)

                {

                    int groupCount = groupList.Count;

                    for (int index = 0; index < groupCount; index++)

                    {

                        string dn = groupList[index].ToString();

                        if (!allGroups.Contains(dn))

                        {

                            allGroups.Add(dn);

                            primaryGroups.Add(dn);

                        }

                    }

                }

            }

        }

    }

    connection.Dispose();

 

    foreach (string dn in allGroups)

    {

        yield return GetFriendlyName(dn);

    }

}

Here’s a breakdown of the highlights:

var primaryGroups = new List<string>(countOfGroups);

var allGroups = new List<string>();

 

for (int index = 0; index < countOfGroups; index++)

{

    primaryGroups.Add(result.Attributes[ldapGroupsAttribute][index].ToString());

    allGroups.Add(result.Attributes[ldapGroupsAttribute][index].ToString());

}

This section takes the SearchResultEntry’s primary groups and adds each one of them to two lists.

  • The ‘primaryGroups’ list is exactly that—here’s a list of groups that we need to iterate over and find the nested groups. 
  • The ‘allGroups’ will hold our master list of every unique group and will provide our return value.

var searchRequest = new SearchRequest(distinguishedName,

                                      CreateFilterFromGroups(primaryGroups),

                                      SearchScope.Subtree,

                                      ldapGroupsAttribute);

primaryGroups.Clear();

This code formulates our LDAP search request. distinguishedName and ldapGroupsAttribute are two constants in my code base (for our domain’s DN and “memberOf”).  CreateFilterFromGroups takes the list of groups and concats them together—so we’re only looking at the groups we want, not everything.

Finally, we’re reusing our primaryGroups list to look for nested within nested… within nested, so clear that out—infinite loops hinder performance. 🙂
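CreateFilterFromGroups isn’t shown in this post; assuming it simply ORs the group DNs together into one group query, a minimal sketch could look like the following (the real implementation may differ, and production code would also want to LDAP-escape the DN values):

// Hypothetical sketch -- requires: using System.Collections.Generic; using System.Text;
private static string CreateFilterFromGroups(IEnumerable<string> groupDistinguishedNames)
{
    // Builds: (&(objectCategory=group)(|(distinguishedName=...)(distinguishedName=...)))
    var filter = new StringBuilder("(&(objectCategory=group)(|");

    foreach (string dn in groupDistinguishedNames)
    {
        filter.AppendFormat("(distinguishedName={0})", dn);
    }

    filter.Append("))");
    return filter.ToString();
}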

int entriesCount = response.Entries.Count;

for (int entry = 0; entry < entriesCount; entry++)

{

    DirectoryAttribute groupList =

        response.Entries[entry].Attributes[ldapGroupsAttribute];

 

    if (groupList != null)

    {

        int groupCount = groupList.Count;

        for (int index = 0; index < groupCount; index++)

        {

            string dn = groupList[index].ToString();

            if (!allGroups.Contains(dn))

            {

                allGroups.Add(dn);

                primaryGroups.Add(dn);

            }

        }

    }

}

Here’s our massive, disgusting block of if statements that populates the lists and keeps the while loop running as long as primaryGroups returns a count > 0.

foreach (string dn in allGroups)

{

    yield return GetFriendlyName(dn);

}

Finally, use a helper method to convert the DN to a “friendly name” and return it to the caller (using yield since our method returns an IEnumerable<string>).
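GetFriendlyName isn’t listed either; a sketch that mirrors the substring logic from the first version of ParseGroupMemberships (pulling the value of the leading CN= off of the distinguished name) might look like this:

// Hypothetical sketch mirroring the earlier substring logic:
// "CN=DCI,OU=Groups,DC=usd259,DC=net" -> "DCI"
private static string GetFriendlyName(string distinguishedName)
{
    int start = distinguishedName.IndexOf("=", 1);
    int end = distinguishedName.IndexOf(",", 1);

    if (start == -1)
        return distinguishedName;

    if (end == -1)
        end = distinguishedName.Length;   // no OU/DC components to trim off

    return distinguishedName.Substring(start + 1, (end - start) - 1);
}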

Running a quick test gives me:

UserAccount_Can_Get_Group_Memberships_With_Default_Security : Passed

Group count for David Longnecker is 138
Elapsed time for first query: 00:00:00.0420000

Wow, I’m in a lot of groups… O_o. The query is relatively quick (that is, with connection buildup and teardown time and generating the rest of the user’s attributes), especially considering our AD infrastructure is far from optimal.

In addition, an LDAP query using ADUC gives the same results.

If nothing else, it’s consistent! 🙂