Archive

Archive for the ‘Visual Studio 2008’ Category

Getting buildNumber for TeamCity via AssemblyInfo

June 16, 2010 Comments off

I’m a proud psake user and love the flexibility of PowerShell during my build process. I recently had a project where I really wanted the actual build number to show up in TeamCity rather than the standard incrementing number.

In the eternal words of Jeremy Clarkson, “How hard can it be?”

On my local machine, I have a spiffy “gav” (get assembly version) command that uses Reflection to grab the assembly version.  Unfortunately, since I don’t want to rely on the local version of .NET, and I’ve already set the build number in AssemblyInfo.cs as part of my build process, I just want to fetch what’s in that file.
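For the curious, that local helper is little more than a reflection call; something along these lines (a rough sketch for illustration, not the exact command I use):

# rough sketch of a reflection-based "gav" helper (illustrative only)
function gav([string]$assemblyPath) {
  $fullPath = (Get-Item $assemblyPath).FullName
  [System.Reflection.AssemblyName]::GetAssemblyName($fullPath).Version.ToString()
}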

Regular expressions to the rescue!

Here’s the final psake task. I call it as part of my build/release tasks and it generates the right meta output that TeamCity needs for the build version. Here’s a gist of the source: http://gist.github.com/440646.

Here’s the actual source to review:

task GetBuildNumber { 
  $version = gc $base_directory\$solution_name\Properties\AssemblyInfo.cs | select-string -pattern "AssemblyVersion"
  
  $version -match '^\[assembly: AssemblyVersion\(\"(?<major>[0-9]+)\.(?<minor>[0-9]+)\.(?<revision>[0-9]+)\.(?<build>[0-9]+)\"\)\]'
  
  "##teamcity[buildNumber '{0}.{1}.{2}.{3}']" -f $matches["major"], $matches["minor"], $matches["revision"], $matches["build"]
}
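Wiring it in is just a matter of adding the task to the dependency chain; something like this (task names here are illustrative, use whatever your script already has):

# illustrative only: run GetBuildNumber as part of the normal chain
task Release -depends Compile, Test, GetBuildNumber {
  # copy artifacts, etc.
}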

Enjoy!

Using .less To Simplify BluePrintCSS

December 17, 2009 2 comments

For the past few projects, I’ve used BluePrintCSS and really liked the experience.  It forced me both to conquer my CSS layout fears (tables no more) and standardize a few of my formatting techniques that we use on our internal and external applications.  Good deal all around.

The one caveat that I really… really didn’t like was how I had to name things.

The clean class names and IDs that I had…

<div class="page">
    <div class="header">
        <div class="title">
            <h1>${H(ApplicationName)}</h1>
        </div>
        [...]
    </div>
</div>

Turned into long, drawn out classes…

<div class="container">
    <div class="span-24">
        <div class="prepend-1 span-12 column">
            <h1>${H(ApplicationName)}</h1>
        </div>
    </div>
</div>

Without the BluePrintCSS guide or the CSS files available, you couldn’t look at the classes and tell much about what was going on… and they weren’t descriptive like ‘header’ and ‘title’.

Welcome To .less (dotless)

I stumbled onto .less (aka dotless, dotlesscss, that shizzle css thingy) back in November and thought “hey, that’s cool… that’s how CSS should work” and didn’t give it much more thought.  Shortly after fav’ing it on GitHub, I noticed they pushed an update targeting BluePrintCSS compatibility.  Cool. I’ve GOT to try this out.

Getting Started with .less

The instructions on the home page (right side of the screen) are all you need.  Clone, compile, update web.config, and go!

The Benefits

So what’s the big hype?  This:

1. Import your BluePrintCSS file into your .less file (for me, it’s site.less).

@import "screen.css";

2. Simply reference any of the BluePrintCSS class styles as part of your custom styles.

#header {
    #title {
        .span-10;
        .column;
    }
 
    #menucontainer {
        .span-14;
        .column;
        .last;
        text-align: right;
    }
}
 
#left-content {
    .span-18;
    .column;
}
 
#right-boxes {
    .span-6;
    .column;
    .last;
}

3. “Then a miracle occurs…”

When dotless’ HttpHandler hits your .less file (or you use dotless.Compiler), it translates those referenced styles into their actual CSS rules.


#header #title{width:390px;float:left;margin-right:10px;}

Nice.  Plain and simple (and miraculous).

Lessons Learned

Some “lessons learned” so far:

1. Order matters.  Referencing a style before you’ve ‘created’ it will bork the interpreter.  So @imports always go at the top, and if you’re referencing within the same .less file, keep things in order.

2. Pre-compiling is fun.  For now, I’m pre-compiling my .less files without using the handler and simply sending the .css file up to our web server.  This is easily taken care of with either an MSBuild task or a psake task (a psake sketch follows this list).  Here’s an example of a quick MSBuild command that references dotless.Compiler in the solution’s “tools” directory.

$(SolutionDir)Tools\dotLess\dotless.compiler.exe -m $(ProjectDir)content\css\site.less $(ProjectDir)content\css\site.css

3. .less files need to be ‘Content’. Since VS2008 is stupid, .less files (like .spark views, etc) need to be explicitly set to have a Build Action of ‘Content’ so that the publishing process sends them up to the web server.  If you’re publishing via psake or another automation tool, then ignore this. ;)
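And, as promised above, a psake version of the same pre-compile step might look something like this (a sketch; the task name and paths are assumptions based on my layout):

task CompileLess {
  $compiler = "$base_directory\tools\dotless\dotless.compiler.exe"   # path is an assumption
  & $compiler -m "$base_directory\content\css\site.less" "$base_directory\content\css\site.css"
}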

That’s it for now.  Hit up the project site, peruse and clone the github repo, and join the discussion for .less and (finally) start applying some DRY to your CSS.

Tip: Excluding Auto-Generated Files from SourceSafe

December 9, 2009 Comments off

Being an avid git user outside the workplace, I’m used to setting up .gitignore files for the ReSharper and pre-generated sludge that finds its way into my projects.  However, until today, I had never found a clean way of handling pre-generated files with Visual SourceSafe.

Seems the answer was just in a menu that I never used (imagine that…).

Situation

For now, rather than having dotless generate my files on the fly, I’m using the compiler to compile my .less into .css files.  Since I’m using the built-in publishing features for this project (which I am replacing with psake tasks, more on that later), any files not included in the project are skipped/not copied.  That’s a bummer for my generated .css file.

The answer is to include the file in the project; however, when checked in, dotless.Compiler crashes because it can’t rewrite the file (since it’s read-only).

Solution

Exclude the file from source control. Sure, that sounds good, but how do you do that in SourceSafe?

1. Select the file, site.css in this case.

2. File > Source Control > Exclude ‘site.css’ from Source Control.

Yeah, seriously that easy.  Instead of the normal lock/checkmark, a red (-) appears by the file (which is fine) and everything compiles as expected.

I’ve used SourceSafe for years now and never saw that in there… it doesn’t look like I can wildcard files or extensions like .gitignore (or even folders–the option disappears if anything but a single file is selected), but for a one-off case like this, it works just fine.

Populating Select Lists in ASP.NET MVC and jQuery

September 25, 2009 Comments off

I’ve been working for the last bit to find the best way to create/populate select (option) lists using a mixture of ASP.NET MVC and jQuery.  What I’ve run into is that the “key” and “value” names are not passed along when using Json(data).

Here’s what I’m trying to pull off in jQuery: building a simple select drop down list.

var dd_activities = "<select id='dd_activities'>";
var count = data.length;
for (var i = 0; i < count; i++) {
    dd_activities += "<option value='" + data[i].Key + "'>" + data[i].Value + "</option>";
}
dd_activities += "</select>";

$("#activities").before(dd_activities);

Using some very basic key/value data:

[
 {"3","Text Value"},
 {"4","Another Text Value"},
 {"1","More boring values..."},
 {"2","Running out of values"},
 {"5","Last value..."}
]

Without any sort of name, I was at a loss on how to access the information, how to get its length, or anything.  FireBug was happy to order it up… but that didn’t help.
 
My first attempt was to use a custom object, but that just felt dirty—creating NEW objects simply to return Json data.
 
My second attempt, newing up anonymous objects for the Json data, seemed to work like a champ:
 

[Authorize]
[CacheFilter(Duration = 20)]
public ActionResult GetActivitiesList()
{
    try
    {
        var results =
            _activityRepository
                .GetAll()
                .OrderBy(x => x.Target.Name)
                .ThenBy(x => x.Name)
                .Select(x => new
                    {
                        Key = x.Id.ToString(),
                        Value = string.Format("[{0}] {1}", x.Target.Name, x.Name)
                    })
                .ToList();

        return Json(results);
    }
    catch (Exception ex)
    {
        return Json(ex.Message);
    }
}

 
Well, not beautiful, but it returns a sexy Key/Value list that Json expects, and one that populates our select list.
[
 {"Key":"3","Value":"Text Value"},
 {"Key":"4","Value":"Another Text Value"},
 {"Key":"1","Value":"More boring values..."},
 {"Key":"2","Value":"Running out of values"},
 {"Key":"5","Value":"Last value..."}
]
The next step was to get that out of the controller and into the data repository… pushing some of that logic back down to the database.
 

var criteria =
    Session.CreateCriteria<Activity>()
        .CreateAlias("Target", "Target")
        .Add(Restrictions.Eq("IsValid", true))
        .AddOrder(Order.Asc("Target.Name"))
        .AddOrder(Order.Asc("Name"))
        .SetMaxResults(100);

var data = criteria.List<Activity>();

var result =
    data
        .Select(x => new
            {
                Key = x.Id.ToString(),
                Value = string.Format("[{0}] {1}", x.Target.Name, x.Name)
            })
        .ToList();

tx.Commit();
return result;

 
A bit of formatting, restrictions, push the ordering back to the database, and a tidy SQL statement is created.
 
The last touch is the return type.  Since we’re returning a “List” of anonymous types, the return type of GetActivitiesList() must be an IList.
 
That shrinks down my ActionResult to a single call.
 

try
{
    return Json(_activityRepository.GetActivitiesList());
}
catch (Exception ex)
{
    return Json(ex.Message);
}

 
That works… and will work for now.  Though, I’ve marked it as a HACK in my code.  Why?  I’m honestly not sure yet.  Just a feeling.

Ramping up with PSake

September 8, 2009 Comments off

I’ve been teetering back and forth between PSake and my trusty NAnt scripts for quite a while now.  For those not familiar with PSake, it’s build automation that makes you drunk, but in a good way. ;)  You can read James Kovacs’ original post here or check out the repository here for the latest bits.

I originally looked at rake scripts (after exposure to them while working with Fluent NHibernate) since PowerShell was loathed in our organization.  That mindset is slowly changing (being able to show people how to crank out what was originally scoped at a week in two lines of PowerShell helps), so I’m using PSake as further motivation.

My prior PSake scripts were a bit tame: launch msbuild, copy a few files.  With the latest release of xUnit 1.5 hitting the wires over the weekend (and a much-needed x86 version for my poor, cranky Oracle libraries), I decided to bite the bullet and dig into PSake.

I had two goals:

  1. Build a reliable framework “default.ps1” file that I could drop into almost any project and configure with little or no effort.
  2. Compile, test, and rollout updates from a single PSake command task.

I borrowed the basic layout from Ayende’s Rhino Mocks PSake; however, I couldn’t get msbuild to run correctly simply by calling it.

Here’s what I ended up with for our internal core library.  The core library isn’t so much a “utilities” container, but just as it sounds: the framework all of our applications are built on, keeping connections to our various systems (HR, student systems, data warehouses, etc.) consistent, as well as holding our base FNH conventions.

CODE: Full code available on CodePaste.NET

Properties

The properties area holds all of the configuration for the PSake script.  For me, it’s common to configure $solution_name, $libraries_to_merge, and $libraries_to_copy.  With our naming standards, the $test_library should be left unchanged.  I also added in the tester information so we could change from XUnit to MbUnit (if Hell froze over or something).

properties {

  # ****************  CONFIGURE ****************
  $solution_name =           "Framework"
  $test_library =            "$solution_name.Test.dll"

  $libraries_to_merge =      "antlr3.runtime.dll", `
                             "ajaxcontroltoolkit.dll", `
                             "Castle.DynamicProxy2.dll", `
                             "Castle.Core.dll", `
                             "FluentNHibernate.dll", `
                             "log4net.dll", `
                             "system.linq.dynamic.dll", `
                             "xunit.dll", `
                             "nhibernate.caches.syscache2.dll", `
                             "cssfriendly.dll", `
                             "iesi.collections.dll", `
                             "nhibernate.bytecode.castle.dll", `
                             "oracle.dataaccess.dll"

  $libraries_to_copy =       "system.data.sqlite.dll"

  $tester_directory =        "j:\shared_libraries\xunit\msbuild"
  $tester_executable =       "xunit.console.x86.exe"
  $tools_directory =         "$tools"
  $base_directory  =         resolve-path .
  $thirdparty_directory =    "$base_directory\thirdparty"
  $build_directory =         "$base_directory\build"
  $solution_file =           "$base_directory\$solution_name.sln"
  $release_directory =       "$base_directory\release"
}

Clean and easy enough.  You’ll notice that $libraries_to_merge and $libraries_to_copy are implied string arrays.  That works out well since string arrays end up as params when passed to commands… and our $libraries_to_copy can be iterated over later in the code.
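As a quick illustration (hypothetical values, not part of the script), an array expands into separate arguments when handed to a native executable, and it pipes element by element:

# hypothetical illustration of both behaviors
$libs = "log4net.dll", "xunit.dll"
& ilmerge.exe /out:Framework.dll Framework.merge.dll $libs    # log4net.dll and xunit.dll become two separate arguments
$libs | % { copy-item (join-path $build_directory $_) $release_directory }   # iterate one at a time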

Tasks – Default

task default -depends Release

The default task (if just running ‘psake’ without parameters) runs Release.  Easy enough.

Tasks – Clean

task Clean {
  remove-item -force -recurse $build_directory -ErrorAction SilentlyContinue | Out-Null
  remove-item -force -recurse $release_directory -ErrorAction SilentlyContinue | Out-Null
}

Clean up those build and release directories.

Tasks – Init

task Init -depends Clean {
  new-item $release_directory -itemType directory | Out-Null
  new-item $build_directory -itemType directory | Out-Null
  cp $tester_directory\*.* $build_directory
}

Restore those build and release directories that we cleaned up; then copy in our unit testing framework so we can run our tests (if necessary).

Tasks – Compile

task Compile -depends Init {
  # from http://poshcode.org/1050 (first lines to get latest versions)
  [System.Reflection.Assembly]::Load('Microsoft.Build.Utilities.v3.5, Version=3.5.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a') | Out-Null
  $msbuild = [Microsoft.Build.Utilities.ToolLocationHelper]::GetPathToDotNetFrameworkFile("msbuild.exe", "VersionLatest")

  # adding double slash for directories with spaces. Stupid MSBuild.
  &$msbuild /verbosity:minimal /p:Configuration="Release" /p:Platform="Any CPU" /p:OutDir="$build_directory"\\ "$solution_file"
}

Compile is a bit tricky.  As noted in the code, I ended up using a SharePoint example from PoSH code to get MSBuild to behave.  The standard exec methodology provided by PSake kept ignoring my parameters.  Maybe someone has a good reason… but this works.

You also see that my OutDir has TWO slashes.  It seems that directories with spaces require the second.  I’m sure this will somehow bite me later on, but it seems to be working for now. ;)

Tasks – Test

task Test -depends Compile {
  $origin_directory = pwd
  cd $build_directory
  exec .\$tester_executable "$build_directory\$test_library"
  cd $origin_directory
}

I want to thank Ayende for the idea to dump the origin directory into a parameter—brilliant.  This one is pretty simple—just calls the tester and tests.

Tasks – Merge

task Merge {
  $origin_directory = pwd
  cd $build_directory

  remove-item "$solution_name.merge.dll" -erroraction SilentlyContinue
  rename-item "$solution_name.dll" "$solution_name.merge.dll"

  & $tools\ilmerge\ilmerge.exe /out:"$solution_name.dll" /t:library /xmldocs /log:"$solution_name.merge.log" `
        "$solution_name.merge.dll" $libraries_to_merge

  if ($lastExitCode -ne 0) {
        throw "Error: Failed to merge assemblies!"
  }
  cd $origin_directory
}

Merge calls ILMerge and wraps all of my libraries into one.  Do I need to do this?  Nah, but for the framework, I prefer to keep everything together.  I don’t want to be chasing mis-versioned libraries around.  Again, since $libraries_to_merge is a string array, it passes each “string” as a separate parameter—which is exactly what ILMerge wants to see.

I also have ILMerge generate and keep a log of what it did—just to have.  Since the build directory gets blown away between builds (and isn’t replicated to source control), then no harm.  Space is mostly free. ;)

Tasks – Build & Release

task Build -depends Compile, Merge {
  # When I REALLY don't want to test...
}

task Release -depends Test, Merge {
  copy-item $build_directory\$solution_name.dll $release_directory
  copy-item $build_directory\$solution_name.xml $release_directory

  # copy libraries that cannot be merged
  % { $libraries_to_copy } | %{ copy-item (join-path $build_directory $_) $release_directory }
}

Build provides just that—building with no testing and no copying to the release directory.  This is more for testing out the scripts, but useful in some cases.

Release copies the library and the XML documentation out to the release directory.  It then iterates through the string array of “other” libraries (non-managed code libraries that can’t be merged, etc.) and copies them as well.

Using Git (and everything else) through PowerShell

August 21, 2009 5 comments

After a discussion on Stack Overflow a few days ago (and hopefully a useful answer), I got to thinking a bit about how I use PowerShell.  It may be a bit geekish, but PowerShell starts up on Windows startup for me.  The prompt is almost always open on a second monitor–ready for whatever task I may need.

As the SO post mentioned, I also use PowerShell to connect to my Git repositories.  At the office, it has a few more customizations to hash out against our *shudder* SourceSafe */shudder* repositories, but that’s a different post.

For now, I wanted to walk through how my profile script is set up in a bit more detail than the SO post.

Creating a Profile Script

UPDATE: The full source code (plus a few extras) for this article can be found here : http://codepaste.net/53a7z6

A profile script is essentially a “startup” script for your PowerShell environment. 

By default (perhaps a registry key changes this), it’s located in %userprofile%\Documents\WindowsPowerShell and is aptly named Microsoft.PowerShell_Profile.ps1.  The naming convention between “WindowsPowerShell” and “MicrosoftPowerShell” is a bit annoying, but not a big problem.

The file is just plain text, so feel free to use your editor of choice or PowerShell ISE (Windows 7, Windows 2008 R2) for some fancy content highlighting.
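If the file doesn't exist yet, the $profile automatic variable makes it painless to create and open (a quick sketch):

# create the profile script if it isn't there yet, then open it for editing
if (-not (test-path $profile)) { new-item -path $profile -itemtype file -force | out-null }
notepad $profile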

What goes in here?

As far as I can tell, the profile is a great place to initialize global customizations:

  • environmental variables,
  • paths,
  • aliases,
  • functions that you don’t want extracted to .ps1 files,
  • customizations to the console window,
  • and, most importantly, customize the command prompt.

The Console

I use Console2 rather than the standard PowerShell prompt.  Console2 is an amazing open source alternative to the standard console and includes features such as ClearType, multiple tabs, and more.  Check it out.

I also use Live Mesh, so there are a few things that are unnecessary for most users.  Live Mesh is an online synchronization service… so my PowerShell scripts (amongst other things) stay synced between my home and work environments.

My PowerShell Prompt At Startup

Preparing the Environment

My profile script starts off by setting up a few global variables for paths.  I use a quick function to set up the parameters based on the computer I’m currently using.

# General variables
$computer = get-content env:computername

switch ($computer)
{
    "WORKCOMPUTER_NAME" {
        ReadyEnvironment "E:" "dlongnecker" $computer ; break }
    "HOMECOMPUTER_NAME" {
        ReadyEnvironment "D:" "david" $computer ; break }
    default {
        break; }
}

function ReadyEnvironment (
            [string]$sharedDrive,
            [string]$userName,
            [string]$computerName)
{
    set-variable tools "$sharedDrive\shared_tools" -scope 1
    set-variable scripts "$sharedDrive\shared_scripts" -scope 1
    set-variable rdpDirectory "$sharedDrive\shared_tools\RDP" -scope 1
    set-variable desktop "C:\Users\$userName\DESKTOP" -scope 1
    Write-Host "Setting environment for $computerName" -foregroundcolor cyan
}

Easy enough.  I’m sure I could optimize this a bit more, but it works.  Again, this wouldn’t be necessary on a single computer, but since I use LiveMesh and the same PowerShell profile on multiple computers—this keeps my paths in check.

The second step is to modify the $PATH environmental variable to point to my scripts and Git as well as add a new $HOME variable to satisfy Git’s needs.

# Add Git executables to the mix.
[System.Environment]::SetEnvironmentVariable("PATH", $Env:Path + ";" + (Join-Path $tools "\PortableGit-1.6.3.2\bin"), "Process")

# Add our scripts directory in the mix.
[System.Environment]::SetEnvironmentVariable("PATH", $Env:Path + ";" + $scripts, "Process")

# Setup Home so that Git doesn't freak out.
[System.Environment]::SetEnvironmentVariable("HOME", (Join-Path $Env:HomeDrive $Env:HomePath), "Process")

Customizing the Console Prompt

The ‘prompt’ function overrides how the command prompt is generated and allows a great deal of customization.  As I mentioned in the SO post, the inspiration for my Git prompt comes from this blog post.

I’ve added quite a few code comments in here for reference. 

function prompt {
    Write-Host("")
    $status_string = ""

    # check to see if this is a directory containing a symbolic reference,
    # fails (gracefully) on non-git repos.
    $symbolicref = git symbolic-ref HEAD
    if($symbolicref -ne $NULL) {

        # if a symbolic reference exists, snag the last bit as our
        # branch name. e.g. "[master]"
        $status_string += "GIT [" + `
            $symbolicref.substring($symbolicref.LastIndexOf("/") + 1) + "] "

        # grab the differences in this branch
        $differences = (git diff-index --name-status HEAD)

        # use a regular expression to count up the differences.
        # M`t, A`t, and D`t refer to M {tab}, etc.
        $git_update_count = [regex]::matches($differences, "M`t").count
        $git_create_count = [regex]::matches($differences, "A`t").count
        $git_delete_count = [regex]::matches($differences, "D`t").count

        # place those variables into our string.
        $status_string += "c:" + $git_create_count + `
            " u:" + $git_update_count + `
            " d:" + $git_delete_count + " | "
    }
    else {
        # Not in a Git environment, must be PowerShell!
        $status_string = "PS "
    }

    # write out the status_string with the appropriate color.
    # prompt is done!
    if ($status_string.StartsWith("GIT")) {
        Write-Host ($status_string + $(get-location) + ">") `
            -nonewline -foregroundcolor yellow
    }
    else {
        Write-Host ($status_string + $(get-location) + ">") `
            -nonewline -foregroundcolor green
    }
    return " "
}

The prompts are then color coded, so I can keep track of where I am (as if the really long prompt didn’t give it away).

Prompts

Now, with our prompts and our pathing setup to our Git directory, we have all the advantages of Git—in a stellar PowerShell package.

NOTE: I would like to point out that I use PortableGit, not the installed variety.  Since Git also moves back and forth across my Live Mesh, it seemed more reasonable to use the Portable version.  I don’t believe, however, that there would be a difference as long as the \bin directory is referenced.

Setting up Aliases—The Easy Way

Brad Wilson’s implementation of find-to-set-alias is brilliant.  Snag the script and get ready for aliasing the easy way.  I keep my most common tools aliased: Visual Studio, PowerShell ISE, and Notepad.  I mean, is there anything else?  (Well, yes, but I have Launchy for that).

Using find-to-set-alias is easy—provide a location, an executable, and an alias name:

find-to-set-alias 'c:\program files*\Microsoft Visual Studio 9.0\Common7\IDE' devenv.exe vs
find-to-set-alias 'c:\windows\system32\WindowsPowerShell\v1.0\' PowerShell_ISE.exe psise
find-to-set-alias 'c:\program files*\Notepad2' Notepad2.exe np
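I won’t reproduce Brad’s script here, but the general idea is simple enough that a stripped-down version might look something like this (a sketch of the concept, not his actual implementation):

# rough sketch of the idea behind find-to-set-alias (not Brad Wilson's actual script)
function find-to-set-alias([string]$path, [string]$executable, [string]$alias) {
  $found = get-childitem $path -filter $executable -erroraction silentlycontinue | select-object -first 1
  if ($found -ne $null) { set-alias $alias $found.FullName -scope Global }
}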

Helpers – Assembly-Info

After getting tired of loading up System.Reflection.Assembly every time I wanted to see what version of a library I had, I came up with a quick script that dumps out the name of the assembly and its version.

param(
  $file = $(throw "An assembly file name is required.")
)

$fullpath = (Get-Item $file).FullName
$assembly = [System.Reflection.Assembly]::Loadfile($fullpath)

# Get name, version and display the results
$name = $assembly.GetName()
$version = $name.version

"{0} [{1}]" -f $name.name, $version

With this, running assembly-info NHibernate.dll returns:

NHibernate [2.1.0.4000]

Nifty.

Taking it a step further, I created a quick function in my profile called ‘aia’ or ‘assembly info all’ that runs assembly-info on all .dlls in the directory.

function aia {
    get-childitem | ?{ $_.extension -eq ".dll" } | %{ ai $_ }
}

Now, in that same directory, I get:

Antlr3.Runtime [3.1.0.39271]
Castle.Core [1.1.0.0]
Castle.DynamicProxy2 [2.1.0.0]
FluentNHibernate [0.1.0.0]
Iesi.Collections [1.0.1.0]
log4net [1.2.10.0]
Microsoft.Practices.ServiceLocation [1.0.0.0]
Moq [4.0.812.4]
MySql.Data [6.0.4.0]
NHibernate.ByteCode.Castle [2.1.0.4000]
NHibernate [2.1.0.4000]
System.Data.SQLite [1.0.60.0]
System.Web.DataVisualization.Design [3.5.0.0]
System.Web.DataVisualization [3.5.0.0]
xunit [1.1.0.1323]

Stellar.

Helpers – Visual Studio “Here”

This was created totally out of laziness.  I have already setup an alias to Visual Studio (‘vs’); however, I didn’t want to type “vs .\projectName.sln”.  That’s a lot.  I mean, look at it. 

So, a quick, and admitted dirty, method to either:

  1. Open the passed solution,
  2. If multiple .sln exist in the directory, open the first one,
  3. If only one .sln exists, open that one.

I don’t often have multiple solution files in the same directory, so #3 is where I wanted to end up.

function vsh {
    param ($param)

    if ($param -eq $NULL) {
        "A solution was not specified, opening the first one found."
        $solutions = get-childitem | ?{ $_.extension -eq ".sln" }
    }
    else {
        "Opening {0} ..." -f $param
        vs $param
        break
    }

    if ($solutions.count -gt 1) {
        "Opening {0} ..." -f $solutions[0].Name
        vs $solutions[0].Name
    }
    else {
        "Opening {0} ..." -f $solutions.Name
        vs $solutions.Name
    }
}

That’s about the gist of it.  The challenge (and fun part) is to keep looking for ways to improve common processes using Git.  As those opportunities arise, I’ll toss them out here. :)

UPDATE: The full source code (plus a few extras) for this article can be found here : http://codepaste.net/53a7z6

Using RedGate ANTS to Profile XUnit Tests

August 5, 2009 3 comments

RedGate’s ANTS Performance and Memory profilers can do some pretty slick testing, so why not automate it?  The “theory” is that if my coverage is hitting all the high points, I’m profiling all the high points and can see bottlenecks.

So, how does this work?  Since the tests are in a compiled library, I can’t just “load” the unit tests. However, you can load Xunit and run the tests.

NOTE: If you’re profiling x86 libraries on an x64 machine, you’ll need the XUnit 1.5 CTP (or later), which includes xunit.console.x86.exe.  If you’re on x86 or do not call x86 libraries, pay no attention to this notice. ;)

To begin, start up ANTS Performance Profiler and Profile a New .NET Executable.

XUnit ala ANTS Profiler

For the .NET Executable, point it towards XUnit and in the Arguments, point it towards the library you are testing.  Simple enough.
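In other words, the two fields end up looking something like this (paths are hypothetical):

.NET executable:   C:\tools\xunit\xunit.console.x86.exe
Arguments:         "C:\projects\MyApp\build\MyApp.Test.dll"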

Click “Start Profiling” and let the profiling begin!

Now if I could just get the “top 10” methods to export to HTML or something so I could automate this in our reporting.

Filtering an Enum by Attribute

July 9, 2009 Comments off

I had a curve ball thrown at me this morning—changing requirements.  It happens and was easily solved by a couple of custom attributes and a helper method.

UPDATE: I’ve updated the code (and explanation) for FilterEnumWithAttributeOf below to tidy it up a bit.

In our current project, there is an enum of standard, static “periods” (times of days students are in school).  Easy enough.

BeforeSchool = 0,
FirstPeriod = 1,
SecondPeriod = 2,
etc.

But what happens if we want to “query” our list down a bit… say a certain group only wanted a subset of the “periods”.

I could create an entirely different Enum — Group1Period and Group2Period.

But then FluentNHibernate’s automapping would get freaked out by the Period property.

So, what about a custom attribute?

  1. I can assign multiple custom attributes to the same Enum field so I can be in Group1 and Group2 at the same time.
  2. I can keep the same Enum “Period” for my ORM layer.
  3. Now how do I query it down…?

Here’s an abstracted example of how the enum looks right now:

public enum Period
{
    [Elementary][Secondary]
    [Description("Before School")]
    BeforeSchool = 0,

    [Elementary]
    Homeroom = 12,

    [Secondary]
    [Description("1st")]
    First = 1,
}

Elementary and Secondary (our two groups, in this case) are “logicless” attributes (I’m just looking at them as flags, not passing/storing information).

[AttributeUsage(AttributeTargets.Field)]
public class ElementaryAttribute : Attribute
{
}

[AttributeUsage(AttributeTargets.Field)]
public class SecondaryAttribute : Attribute
{
}

Now, to filter out those pesky periods based on the attributes.

Update:

Old Code!

public IEnumerable<TEnum> FilterEnumWithAttributeOf<TEnum, TAttribute>()
{
    foreach (var field in
        typeof (TEnum).GetFields(BindingFlags.GetField |
                                 BindingFlags.Public |
                                 BindingFlags.Static))
    {
        foreach (var attribute in
            field.GetCustomAttributes(typeof (TAttribute), false))
        {
            yield return (TEnum) field.GetValue(null);
        }
    }
}

New Code!

public static IEnumerable<TEnum> FilterEnumWithAttributeOf<TEnum, TAttribute>()
    where TEnum : struct
    where TAttribute : class
{
    foreach (var field in
        typeof(TEnum).GetFields(BindingFlags.GetField |
                                BindingFlags.Public |
                                BindingFlags.Static))
    {
        if (field.GetCustomAttributes(typeof(TAttribute), false).Length > 0)
            yield return (TEnum)field.GetValue(null);
    }
}

Why new code?

Well, after looking over the code, I don’t need to iterate through each attribute; I simply need to check whether the field has one (Length > 0).  If it does, return it.  That cuts a loop out of the code and performs the same function.  I also added two generic constraints.  You can’t constrain by Enum, but struct works well.

I’m passing in two generics in this case: TEnum, the type of the enum, and TAttribute, the type of the attribute.  Yeah, I realize that my creativity in naming is pretty low.  Work with me here, alright? ;)

Past that, the loops are pretty easy.

  1. Loop through each field of the enumeration.  Return the field (GetField) and be sure to check Public and Static fields.
  2. Loop through each custom attribute on each field (returned by GetField) and only return the fields that match the type of our attribute.  I pass along the false parameter (do not inherit) because I’m not interested in inherited attributes. You could leave this as true. YMMV.
  3. If the field’s attributes contain our attribute type, yield out the actual Enum value (a string of the field isn’t as useful).

Now, for using it…

var enums = FilterEnumWithAttributeOf<Period, ElementaryAttribute>();

foreach (var period in enums)
{
    Console.WriteLine("{0}, {1}".AsFormatFor(period, (int)period));
}

Easy enough.  ElementaryAttribute returns:

BeforeSchool, 0
Homeroom, 12
AfterSchool, 10
etc..

Running the same code, but asking for SecondaryAttribute returns:

BeforeSchool, 0
First, 1
Second, 2
etc..

Sweet.


AutoMappings in NHibernate – A Quick Runthrough

June 26, 2009 Comments off

For most of my projects, at least since I’ve moved to NHibernate/Fluent NHibernate, I’ve been trapped using the existing data structures of prior iterations.  Funky naming conventions (many due to cross-cultural, international column naming), missing data relationships, and general craziness.

Having used Fluent Mappings (creating a class that implements ClassMap<objectType>) in the past, I found them a huge jump up from writing painful data objects, wiring them together, and reinventing the wheel with “SELECT {column} from {table}” code.  Create a map, use the fluent methods to match column to property, and away you go.

In a recent project, I’ve had the opportunity to build a new system from the ground up.  With this, I decided to dive head first into using the AutoMappings functionality of Fluent NHibernate. 

This post is somewhat a rambling self-discussion of my explorations with AutoMappings.

What are AutoMappings?

The FluentNHibernate wiki provides a simple definition:

[…] which is a mechanism for automatically mapping all your entities based on a set of conventions.

Rather than hand-mapping each column to a property, we create conventions (rules) to map those.. automatically.  Hey look, auto…mappings.  ;)

How?

Using the same fluent language, configuring AutoMapping is an exercise in implementing conventions for the logical naming and handling of data.

Fluently
    .Configure()
    .Database(MsSqlConfiguration.MsSql2005
                  .ConnectionString(cs => cs
                                              .Server("server")
                                              .Database("db")
                                              .Username("user")
                                              .Password("password")
                  )
                  .UseReflectionOptimizer()
                  .UseOuterJoin()
                  .AdoNetBatchSize(10)
                  .DefaultSchema("dbo")
                  .ShowSql()
    )
    .ExposeConfiguration(raw =>
                             {
                                 // Testing/NHibernate Profiler stuffs.
                                 raw.SetProperty("generate_statistics", "true");
                                 RebuildSchema(raw);
                             })
    .Mappings(m =>
              m.AutoMappings.Add(AutoPersistenceModel
                                     .MapEntitiesFromAssemblyOf<Walkthrough>()
                                     .ConventionDiscovery.Setup(c =>
                                                                    {
                                                                        c.Add<EnumMappingConvention>();
                                                                        c.Add<ReferencesConvention>();
                                                                        c.Add<HasManyConvention>();
                                                                        c.Add<ClassMappingConvention>();
                                                                    })
                                     .WithSetup(c => c.IsBaseType = type => type == typeof (Entity)))
                  .ExportTo(@".\")
    );

As you can see above, the only difference from a fluent mappings configuration is in the actual Mappings area.  Good deal!  That helps ensure my existing work using fluent mappings could translate without too much headache.

I’ve specified four conventions.  Each convention implements an interface that provides the methods needed to ensure your rules are applied to the correct objects.

EnumMappingConvention

internal class EnumMappingConvention : IUserTypeConvention
{
    public bool Accept(IProperty target)
    {
        return target.PropertyType.IsEnum;
    }

    public void Apply(IProperty target)
    {
        target.CustomTypeIs(target.PropertyType);
    }

    public bool Accept(Type type)
    {
        return type.IsEnum;
    }
}

The great thing about these methods is they’re fluent enough to translate to English.

Accept… targets where the property type is an enumeration.

Apply… to the target that the “Custom Type Is” the property type of the target.
  NOTE: This translates from a ClassMap into: Map(x => x.MyEnumFlag).CustomTypeIs(typeof(MyEnum));

Accept… a type that is an enumeration.

ReferenceConvention

The Reference convention handles those reference relationships between our classes (and the foreign keys).

internal class ReferencesConvention : IReferenceConvention
{
    public bool Accept(IManyToOnePart target)
    {
        return string.IsNullOrEmpty(target.GetColumnName());
    }

    public void Apply(IManyToOnePart target)
    {
        target.ColumnName(target.Property.Name + "Id");
    }
}

The most important part here is enforcing how your foreign keys are going to be named.  I prefer the simple {Object}Id format.

Car.Battery on the object side and [Car].[BatteryId] on the database side.

HasManyConvention

The HasManys are our lists, bags, and collections of objects.

internal class HasManyConvention : IHasManyConvention
{
    public bool Accept(IOneToManyPart target)
    {
        return target.KeyColumnNames.List().Count == 0;
    }

    public void Apply(IOneToManyPart target)
    {
        target.KeyColumnNames.Add(target.EntityType.Name + "Id");
        target.Cascade.AllDeleteOrphan();
        target.Inverse();
    }
}

We want to make sure that we haven’t added any other key columns (the Count == 0), and then apply both the naming convention as well as a few properties.

Cascade.AllDeleteOrphan() and Inverse() allow our parent objects (Car) to add new child objects (Car.Battery (Battery), Car.Accessories (IList<Accessory>)) without having to save them separately.

ClassMappingConvention

Finally, the important class mapping.  This convention ensures that our tables are named properly, with pluralization.

public class ClassMappingConvention : IClassConvention
{
    public bool Accept(IClassMap target)
    {
        return true; // everything
    }

    public void Apply(IClassMap target)
    {
        target.WithTable(PluralOf(target.EntityType.Name));
    }
}

I’m using a pluralization method from one of my base libraries that I borrowed from Hudson Akridge.  This helper method works really well and I don’t need to add additional references and libraries into my application just to handle the table names.

public static string PluralOf(string text)
{
    var pluralString = text;
    var lastCharacter = pluralString.Substring(pluralString.Length - 1).ToLower();

    // y's become ies (such as Category to Categories)
    if (string.Equals(lastCharacter, "y", StringComparison.InvariantCultureIgnoreCase))
    {
        pluralString = pluralString.Remove(pluralString.Length - 1);
        pluralString += "ie";
    }

    // ch's become ches (such as Pirch to Pirches)
    if (string.Equals(pluralString.Substring(pluralString.Length - 2), "ch",
                      StringComparison.InvariantCultureIgnoreCase))
    {
        pluralString += "e";
    }

    switch (lastCharacter)
    {
        case "s":
            return pluralString + "es";
        default:
            return pluralString + "s";
    }
}

Save and build.  The ExportSchema method will generate the SQL and/or regenerate the database based on the specifications you’ve provided to it, and you’re ready to hit the ground running!

 

A Flexible “Is In Range” Extension Method

I’m working out some business rules for an application that lets the end user choose whether to route records by the first letter of a student’s last name or by the student’s grade.

The quick extension method looks like this:

public static bool IsInRange<T>(this T value, T start, T end)
    where T : IComparable<T>
{
    return value.CompareTo(start) >= 0 && value.CompareTo(end) <= 0;
}

Our tests:

[Fact]
public void IsInRange_returns_correct_boolean_for_comparison()
{
    9.IsInRange(1, 10).ShouldBeTrue();
    "L".IsInRange("A", "J").ShouldBeFalse();
    "A".IsInRange("A", "A").ShouldBeTrue();
    "B".IsInRange("A", "A").ShouldBeFalse();
    12302.IsInRange(1, 10).ShouldBeFalse();

    "Bob".IsInRange("A", "A").ShouldBeFalse();
    "Smith".IsInRange("A", "Z").ShouldBeTrue();
}

Everything works well… I haven’t tested all of the permutations and types yet… but it gets me out of the jam I’m in right now.

Is there a better way? :D
