Archive for the ‘c#’ Category

Mashing CSVs around using PowerShell

February 15, 2012 2 comments

Since I spend most of my day in the console, PowerShell also serves as my ‘Excel’. So, continuing my recent trend of PowerShell related posts, let’s dig into a quick and easy way to parse up CSV files (or most any type of file) by creating objects!

We, of course, need a few rows of example data. Let’s use this pseudo student roster.

Example data:

Student,Code,Product,IUID,TSSOC,Date
123456,e11234,Reading,jsmith,0:18,1/4/2012
123456,e11234,Reading,jsmith,1:04,1/4/2012
123456,e11234,Reading,jsmith,0:27,1/5/2012
123456,e11234,Reading,jsmith,0:19,1/7/2012
123456,e11235,Math,jsmith,0:14,1/7/2012

Now, for reporting, I want my ‘Minutes’ to be a calculation of the TSSOC column (hours:minutes). Easy, we have PowerShell–it can split AND multiply!
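As a quick sanity check on one value from the data above, the split and multiply math works out like this:

$tssoc = "1:04"
([int]$tssoc.Split(':')[0] * 60) + [int]$tssoc.Split(':')[1]   # 60 + 4 = 64 minutes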

The code:

Begin by creating an empty array to hold our output, importing our data into the ‘pipe’, and opening up a ForEach-Object (%) loop. The final $out is our return value, echoing the array so we can see our results.

$out = @()
import-csv data_example.csv |
   % {

   }
$out

Next, let’s add in our logic to split out the hours and minutes. We have full access to the .NET string methods in PowerShell, including .Split(). .Split() returns an array, and since the value is HH:MM, the first element is the hours and the second is the minutes. The hours then need to be multiplied by 60 to convert them to minutes.

You’ll also notice the [int] casting, which ensures we can properly multiply. Give it a whirl without it and you’ll get a string of sixty 0’s or 1’s back (PowerShell repeats the string instead of multiplying).

$out = @()
import-csv data_example.csv |
   % {
	$hours = [int]$_.TSSOC.Split(':')[0] * 60
	$minutes = [int]$_.TSSOC.Split(':')[1]
   }
$out

The next step is to create a new object to contain our return values. We can use the new PowerShell v2.0 syntax to create a quick hashtable of our properties and values. Once we have our item, add it to our $out array.

$out = @()
import-csv data_example.csv |
   % {
	$hours = [int]$_.TSSOC.Split(':')[0] * 60
	$minutes = [int]$_.TSSOC.Split(':')[1]
        $item = new-object PSObject -Property @{
			Date=$_.Date;
			Minutes=($hours + $minutes);
			UserId=$_.IUID;
			StudentId=$_.Student;
			Code=$_.Code;
			Course=$_.Product
		}
	$out = $out + $item
   }

With that, we’re done. We can pipe $out to Sort-Object for a bit of sorting, grouping, table formatting, or even export it BACK out as another CSV.

$out = @()
import-csv data_example.csv |
   % {
	$hours = [int]$_.TSSOC.Split(':')[0] * 60
	$minutes = [int]$_.TSSOC.Split(':')[1]
        $item = new-object PSObject -Property @{
			Date=$_.Date;
			Minutes=($hours + $minutes);
			UserId=$_.IUID;
			StudentId=$_.Student;
			Code=$_.Code;
			Course=$_.Product
		}
	$out = $out + $item
   }
$out | sort-object Date, Code | ft -a

Quick and dirty CSV manipulation–all without opening anything but the command prompt!

UPDATE: Matt has an excellent point in the comments below. PowerShell isn’t the ‘golden hammer’ for every task; it’s about finding the right tool for the job. We’re a mixed environment (Windows, Solaris, RHEL, Ubuntu), so PowerShell only applies to our Windows boxes. However, as a .NET developer, I spend 80-90% of my time on those Windows boxes. So let’s say it’s a silver hammer. :)

Now, the code in this post looks pretty long–and hopping back and forth between notepad, the CLI, and your CSV is tiresome. I bounce back and forth between the CLI and notepad2 with the ‘ed’ and ‘ex’ functions (these commands are ‘borrowed’ from Oracle PL/SQL). More information here.

So how would I type this if my boss ran into my cube with a CSV and needed a count of Minutes?

$out=@();Import-Csv data_example.csv | % { $out += (new-object psobject -prop @{ Date=$_.Date;Minutes=[int]$_.TSSOC.Split(':')[1]+([int]$_.TSSOC.Split(':')[0]*60);UserId=$_.IUID;StudentId=$_.Student;Code=$_.Code;Course=$_.Product }) }; $out | ft -a

Now, that’s quicker to type, but a LOT harder to explain. ;) I’m sure this can be simplified down–any suggestions? If you could do automatic/implied property names, that’d REALLY cut it down.
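One idea (a sketch only, not tested against the file above): Select-Object with calculated properties gets close to ‘implied’ property names and keeps everything in a single pipeline.

$props = @{Name='Date';      Expression={$_.Date}},
         @{Name='Minutes';   Expression={([int]$_.TSSOC.Split(':')[0] * 60) + [int]$_.TSSOC.Split(':')[1]}},
         @{Name='UserId';    Expression={$_.IUID}},
         @{Name='StudentId'; Expression={$_.Student}},
         @{Name='Code';      Expression={$_.Code}},
         @{Name='Course';    Expression={$_.Product}}

import-csv data_example.csv | select-object $props | sort-object Date, Code | ft -a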

Workaround: Oracle, NHibernate, and CreateSQLQuery Not Working

January 17, 2012 Comments off

It’s difficult to sum this post up with a title. I started the morning adding (what I thought to be) a trivial feature to one of our shared repository libraries.

By the time I saw light at the end of the rabbit hole, I wasn’t sure what had happened. This is the tale of my journey. All of the code is guaranteed to work on my machine… usually. ;)

I’ve done this before–how hard could it be?

The full source code is available via a gist.

The initial need

A simple need really: take a complex query and trim it down to a model using NHibernate’s Session.CreateSQLQuery and Transformers.AliasToBean<T>.
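For reference, the shape of the call in question looks roughly like this; the SQL, the model, and the method name below are placeholders, not the actual code from the library.

using System.Collections.Generic;
using NHibernate;
using NHibernate.Transform;

public class PersonModel
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

public static class PersonQueries
{
    // Trim a raw SQL result down to a simple model via the AliasToBean transformer.
    public static IList<PersonModel> GetPeople(ISession session)
    {
        return session
            .CreateSQLQuery("select FirstName, LastName from some_complex_view")
            .SetResultTransformer(Transformers.AliasToBean<PersonModel>())
            .List<PersonModel>();
    }
}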

The problems

 

So far, the only data provider I’ve had these problems with is Oracle’s ODP, Oracle.DataAccess; I’m not sure whether the built-in System.Data.OracleClient provider behaves the same way.

 

Problem #1 – Why is EVERYTHING IN CAPS?

The first oddness I ran into seemed to be caused by the IPropertyAccessor returning the properties in ALL CAPS. When it tried to match the aliases in the array, [FIRSTNAME] != [FirstName]. Well, that’s annoying.

Workaround: Add an additional PropertyInfo[] and fetch the properties myself.

This method ignores the aliases parameter in TransformTuple and relies on a call in the constructor to populate the Transformer’s properties.

public OracleSQLAliasToBeanTransformer(Type resultClass)
{
    // [snip!]
    // this is also a PERSONAL preference to only return fields that have a valid setter.
    _fields = this._resultClass.GetProperties(Flags)
                   .Where(x => x.GetSetMethod() != null).ToArray();
}

Inside TransformTuple, I then call on _fields rather than the aliases constructor parameter.

var fieldNames = _fields.Select(x => x.Name).ToList();

// [big snip!]

_setters = new ISetter[fieldNames.Count];
for (var i = 0; i < fieldNames.Count; i++)
{
    var fieldName = fieldNames[i];
    _setters[i] = _propertyAccessor.GetSetter(_resultClass, fieldName);
}

Problem solved. Everything is proper case.

Bold assumption: I’m guessing this is coming back in upper case because Oracle, by default, stores and retrieves everything as upper case unless it’s surrounded by quotes (which has its own disadvantages).

Problem #2 – Why are my ints coming in as decimals and strings as char[]?

On this one, I’m taking a wild guess. I found a similar issue reported for Hibernate (the Java daddy of NHibernate), but didn’t see a matching NHibernate issue. It seems that the values coming in are correct, but the tuple’s data types are wrong.

For example, if an object has an integer property with a value of 0, it returns as 0M and implicitly converts to decimal.

Workaround: Use System.Convert.ChangeType(obj, T)

If I used this on every piece of code, I’d feel more guilty than I do; however, on edge cases where the standard AliasToBeanTransformer won’t work, I chalk it up to part of doing business with Oracle.

Inside the TransformTuple method, I iterate over the fields and recast each tuple member accordingly.  The only caveat is that I’m separating out enums and specifically casting them as int32. YMMV.


var fieldNames = _fields.Select(x => x.Name).ToList();
for (var i = 0; i < fieldNames.Count; i++)
{
    var fieldType = _fields[i].PropertyType;
    if (fieldType.IsEnum)
    {
        // It can't seem to handle enums, so convert them
        // to Int32 (so the enum will work).
        tuple[i] = Convert.ChangeType(tuple[i], TypeCode.Int32);
    }
    else
    {
        // Set it to the actual field type on the property we're filling.
        tuple[i] = Convert.ChangeType(tuple[i], fieldType);
    }
}

At this point, everything is recast to match the Type of the incoming property. When all is said and done, adding a bit of exception handling around this is recommended (though I’m not sure when an unexpected error might pop up here).

Problem solved. Our _setters[i].Set() can now populate our transformation and return it to the client.

Summary

Lessons learned? As Mr. Clarkson usually discovers, when it sounds easy, you’ll usually end up on fire. Keep fire extinguishers handy at all times.

Is there another way to do this? Probably. I could create a throwaway DTO with all capital letters and then use AutoMapper or the like to map it to the properly-cased objects. That, honestly, seems more mind-numbing than this (though perhaps less voodoo).
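For what it’s worth, that rejected alternative would look something like the hypothetical sketch below (the type and member names are made up), using an all-caps DTO shaped like what Oracle hands back:

using AutoMapper;

// Throwaway DTO shaped like the ALL CAPS result set.
public class PERSON_DTO
{
    public string FIRSTNAME { get; set; }
    public string LASTNAME { get; set; }
}

// The properly-cased model the application actually wants.
public class Person
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

public static class OracleDtoMapping
{
    public static void Configure()
    {
        // Classic static AutoMapper API: map the upper-case members onto the real model.
        Mapper.CreateMap<PERSON_DTO, Person>()
              .ForMember(d => d.FirstName, o => o.MapFrom(s => s.FIRSTNAME))
              .ForMember(d => d.LastName, o => o.MapFrom(s => s.LASTNAME));
    }
}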

Categories: .net 4.0, c#

Dynamically Adding Sub Reports to an ActiveReport

April 5, 2011 Comments off

I’m currently working on a project where I needed to iterate through a group of users and plug in a little sub report that contains some demographic information and a Code39 barcode.

One sub report is easy–add the control to the page, set the .Report property and away we go; however, adding multiple sub reports dynamically and getting the spacing right proved to be a bit challenging.

Warning: It “works on my machine”.

To add controls to your report dynamically, you must use the _ReportStart event of your report.

To address the spacing issue, let’s start out by specifying our ‘base’ top and the height of our sub report.

const float height = 0.605f;
var currentTop = 7.250f;

In my case, I want my sub reports to start at about 7.25″ and be ~0.605″ in height.

The actual creation of the sub report placeholder is fairly standard–new it up and assign a few properties. I’ll get into the looping a bit later.

var subReport = new SubReport
{
    CloseBorder = false,
    Height = height,
    Left = 0F,
    Width = 7.5F,
    Top = currentTop,
    Name = person.DisplayName + "_SubReport",
    ReportName = person.DisplayName + "_SubReport",
};

Notice how I’ve set the Top property to be our currentTop variable. Keep that in mind.

The next step is to new up our actual sub report object.  My sub report has two properties on it, each for a data item.  I could pass it along as an object, but it seems a bit overkill for two string properties. After the report object is assigned to our new sub report container, add the container to our details section.

var spReport = new _PersonAssignment()
{
    Name = person.DisplayName,
    EmployeeId = person.EmployeeId
};

subReport.Report = spReport;
this.Sections["detail"].Controls.Add(subReport);

Because we’ve assigned it a left, width, and top, our sub report will be added where expected.

The final piece is incrementing the ‘top’ to accommodate the height of the last sub report.

currentTop += height;

Easy.  Now the next sub report will start at 7.250″ + 0.605″ or 7.855″.  Keep in mind that the 0.605″ includes a bit of whitespace, if you need additional whitespace, pad the height number.
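If you want more breathing room between sub reports, pad the increment; the extra tenth of an inch below is an arbitrary example, not a value from the original report.

// pad the increment with a bit of extra whitespace between sub reports
currentTop += height + 0.1f;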

The full _ReportStart event looks like:

const float height = 0.605f;
var currentTop = 7.250f;
foreach (var person in Model.People)
{
    var subReport = new SubReport
    {
        CloseBorder = false,
        Height = height,
        Left = 0F,
        Width = 7.5F,
        Top = currentTop,
        Name = person.DisplayName + "_SubReport",
        ReportName = person.DisplayName + "_SubReport",
      };

    var spReport = new _PersonAssignment()
    {
        Name = person.DisplayName,
        EmployeeId = person.EmployeeId
    };

    subReport.Report = spReport;
    this.Sections["detail"].Controls.Add(subReport);
    currentTop += height;
}

Bingo.

Example of Dynamic Sub Reports

Categories: .net 3.5, c#

Getting buildNumber for TeamCity via AssemblyInfo

June 16, 2010 Comments off

I’m a proud psake user and love the flexibility of PowerShell during my build process. I recently had a project where I really wanted the assembly’s build number to show up in TeamCity rather than the standard incrementing number.

In the eternal words of Jeremy Clarkson, “How hard can it be?”

On my local machine, I have a spiffy “gav”–getassemblyversion–command that uses Reflection to grab the assembly version.  Unfortunately, since I don’t want to rely on the local version of .NET and I’ve already set the build number in AssemblyInfo.cs as part of my build process, I just want to fetch what’s in that file.
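For context, a ‘gav’ helper might look roughly like the sketch below; the original isn’t shown in the post, so treat this as an assumption.

function gav ($assemblyPath) {
    # read the assembly's metadata and return its version
    [System.Reflection.AssemblyName]::GetAssemblyName((resolve-path $assemblyPath).Path).Version
}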

Regular expressions to the rescue!

Here’s the final psake task. I call it as part of my build/release tasks and it generates the service message output that TeamCity needs for the build number. Here’s a gist of the source: http://gist.github.com/440646.

Here’s the actual source to review:

task GetBuildNumber { 
  $version = gc $base_directory\$solution_name\Properties\AssemblyInfo.cs | select-string -pattern "AssemblyVersion"
  
  $version -match '^\[assembly: AssemblyVersion\(\"(?<major>[0-9]+)\.(?<minor>[0-9]+)\.(?<revision>[0-9]+)\.(?<build>[0-9]+)\"\)\]'
  
  "##teamcity[buildNumber '{0}.{1}.{2}.{3}']" -f $matches["major"], $matches["minor"], $matches["revision"], $matches["build"]
}
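As an example, if AssemblyInfo.cs contained [assembly: AssemblyVersion("2.4.0.118")] (a made-up version), the task would write out the service message TeamCity watches for:

##teamcity[buildNumber '2.4.0.118']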

Enjoy!

Using xUnit 1.5 with .NET 4.0 RTW

April 16, 2010 Comments off

After a bit of tinkering, I finally managed to find the sweet spot for getting xUnit 1.5 to run (without errors) with projects targeted at the new .NET 4.0 Framework.

After initial solution conversion, if you run your tests with xunit.console.x86.exe (or the 64-bit version, I’m assuming), you’ll face the following helpful error:

System.BadImageFormatException: Could not load file or assembly 'Assembly.Test.dll' 
or one of its dependencies. This assembly is built by a runtime newer than the 
currently loaded runtime and cannot be loaded.
File name: 'J:\projects\Framework\build\Assembly.Test.dll'
   at System.Reflection.AssemblyName.nGetFileInformation(String s)
   at System.Reflection.AssemblyName.GetAssemblyName(String assemblyFile)
   at Xunit.Sdk.Executor..ctor(String assemblyFilename)
   at Xunit.ExecutorWrapper.RethrowWithNoStackTraceLoss(Exception ex)
   at Xunit.ExecutorWrapper.CreateObject(String typeName, Object[] args)
   at Xunit.ExecutorWrapper..ctor(String assemblyFilename, String configFilename,
      Boolean shadowCopy)
   at Xunit.ConsoleClient.Program.Main(String[] args)

What?  BadImageFormatException?  But the bitness didn’t change!

@markhneedham blogged last year how to fix the problem: updating the config files for xunit to “support” the new version.  That worked then, but the version numbers have changed.

Here’s what the new configuration files need to include:

	<startup useLegacyV2RuntimeActivationPolicy="true">
		<supportedRuntime version="v4.0.30319" />
	</startup>

The useLegacyV2RuntimeActivationPolicy attribute lets assemblies built against older runtimes (mixed-mode libraries in particular) load under the .NET 4.0 CLR. For my projects, this seems to keep Oracle.DataAccess.dll and System.Data.SQLite.dll (both x86 libraries) happy.

The supportedRuntime element denotes the current version of .NET 4.0 (30319 is the RTM build).

After that, everything runs like a champ!

xUnit.net console test runner (32-bit .NET 4.0.30319.1)
Copyright (C) 2007-9 Microsoft Corporation.

xunit.dll:     Version 1.4.9.0
Test assembly: Assembly.Test.dll

Total tests: 397, Failures: 0, Skipped: 0, Time: 7.508 seconds

Crazy Way To Set Display Order From an Enum

I’m sure someone has a better way to do this and I’m all ears; however, this seems to ‘work’ and performs well enough (YAGNI fully applied).

We have an Enum of our different “school levels”–elementary, middle, high schools, special schools, etc.  Since the Enum’s values correspond to the data values in our student system, their display/alphabetical orders do not match the common order (e.g. High School = 4, Middle School = 5).

I came up with a simple attribute that’s added to the Enum to “specify” display order and it’s worked like a champ for the past year or so.

[Description("Elementary")]
[DisplayOrder(1)]
Elementary = 2,

[Description("Middle")]
[DisplayOrder(2)]
Middle = 5,

Not at all fancy.
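The attribute itself isn’t shown above; a minimal sketch might look something like this (the exact names are assumptions):

using System;

[AttributeUsage(AttributeTargets.Field, AllowMultiple = false)]
public sealed class DisplayOrderAttribute : Attribute
{
    public int Order { get; private set; }

    public DisplayOrderAttribute(int order)
    {
        Order = order;
    }
}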

However useful it is for looping and providing context in code, what about using it for ordering OTHER “level” information from another source? So far, I haven’t found a clean way.

The “Solution”?

Use the index of our already-ordered Enum list.  In this instance, our model’s SchoolLevel property matches the Description of the Enum.

var levelsInOrder = Enum<Level>.ToList();
// AutoMapper mappings, etc.

model.Data = data
    .OrderBy(x => levelsInOrder
        .FindIndex(z => z.Description() == x.SchoolLevel));

Is it perfect? No, but it works. I could probably even refactor the Enum list and FindIndex call out for a bit more clarity.

model.Data = data.OrderBy(x => GetDisplayOrder(x.SchoolLevel));

...

private int GetDisplayOrder(string level)
{
    return Enum<Level>.ToList().FindIndex(x => x.Description() == level);
}

Is there a better way?

Categories: .net 3.0, c#, LINQ

Populating Select Lists in ASP.NET MVC and jQuery

September 25, 2009 Comments off

I’ve been working the last bit to find the best way to create/populate select (option) lists using a mixture of ASP.NET MVC and jQuery.  What I’ve run into is that the “key” and “value” tags are not passed along when using Json(data).

Here’s what I’m trying to pull off in jQuery: building a simple select drop down list.

var dd_activities = "<select id='dd_activities'>";
var count = data.length;
for (var i = 0; i < count; i++) {
  dd_activities += "<option value='" + data[i].Key + "'>" + data[i].Value + "</option>";
}
dd_activities += "</select>";

$("#activities").before(dd_activities);

Using some very basic key/value data:

[
 {"3","Text Value"},
 {"4","Another Text Value"},
 {"1","More boring values..."},
 {"2","Running out of values"},
 {"5","Last value..."}
]

Without any sort of name, I was at a loss on how to access the information, how to get its length, or anything. FireBug was happy to order it up… but that didn’t help.
 
My first attempt was to use a custom object, but that just felt dirty—creating NEW objects simply to return Json data.
 
My second attempt used anonymous objects instead and seemed to work like a champ:
 

[Authorize]
[CacheFilter(Duration = 20)]
public ActionResult GetActivitiesList()
{
    try
    {
        var results =
            _activityRepository
                .GetAll()
                .OrderBy(x => x.Target.Name).ThenBy(x => x.Name)
                .Select(x => new
                    {
                        Key = x.Id.ToString(),
                        Value = string.Format("[{0}] {1}", x.Target.Name, x.Name)
                    })
                .ToList();

        return Json(results);
    }
    catch (Exception ex)
    {
        return Json(ex.Message);
    }
}

 
Well, not beautiful, but returns a sexy Key/Value list that Json expects—and that populates our select list.
[
 {"Key":"3","Value":"Text Value"},
 {"Key":"4","Value":"Another Text Value"},
 {"Key":"1","Value":"More boring values..."},
 {"Key":"2","Value":"Running out of values"},
 {"Key":"5","Value":"Last value..."}
]
The next step was to get that out of the controller and into the data repository… pushing some of that logic back down to the database.
 

var criteria =
    Session.CreateCriteria<Activity>()
        .CreateAlias("Target", "Target")
        .Add(Restrictions.Eq("IsValid", true))
        .AddOrder(Order.Asc("Target.Name"))
        .AddOrder(Order.Asc("Name"))
        .SetMaxResults(100);

var data = criteria.List<Activity>();

var result =
    data
        .Select(x => new
            {
                Key = x.Id.ToString(),
                Value = string.Format("[{0}] {1}", x.Target.Name, x.Name)
            })
        .ToList();

tx.Commit();
return result;

 
A bit of formatting, restrictions, push the ordering back to the database, and a tidy SQL statement is created.
 
The last touch is the return type.  Since we’re returning a “List” of anonymous types, the return type of GetActivitiesList() must be an IList.
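Here’s a tiny standalone illustration of why the signature ends up as the non-generic IList (the names are made up):

using System.Collections;
using System.Linq;

public static class AnonymousListDemo
{
    // The anonymous type has no name to put inside List<T>, so the projection
    // can only be exposed through the non-generic IList interface.
    public static IList GetPairs()
    {
        return Enumerable.Range(1, 3)
            .Select(i => new { Key = i.ToString(), Value = "Item " + i })
            .ToList();
    }
}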
 
That shrinks down my ActionResult to a single call.
 

try
{
    return Json(_activityRepository.GetActivitiesList());
}
catch (Exception ex)
{
    return Json(ex.Message);
}

 
That works… and will work for now.  Though, I’ve marked it as a HACK in my code.  Why?  I’m honestly not sure yet.  Just a feeling.

Html.Grid Rendering as Plain Text?

September 23, 2009 1 comment

Notice: Stupid, stupid moment described ahead.  Proceed with caution.

I spent a good half hour trying to figure out why my MVCContrib Html.Grid<T> wasn’t rendering. It wasn’t throwing an error; it was simply returning the HTML code as plain text.

  • The AutoMapper code looked good,
  • The Html.Grid<T> code looked good (it’d be templated off another page anyway and that page was working),
  • The view model code looked good.

So why was I being greeted with garbled junk?

${Html.Grid(Model.Details) .Attributes(Id => “RoutineDetails”) .Columns(column => { column.For(c => this.Button(“EditDetail”).Value(“Edit”).Id(string.Format(“edit_{0}”, c.Id))).DoNotEncode(); column.For(c => c.Activity.Target.Name).Named(“Target Area”); column.For(c => c.Activity.Name).Named(“Activity”); column.For(c => c.Sets); column.For(c => c.Weight); column.For(c => c.Repetitions); column.For(c => c.Duration); })}

That is not an Html.Grid… well, at least not a properly rendered one.

Encoding issue? Maybe. Data issue? Perhaps.

No, the issue was typing too quick and not paying attention.

public ActionResult New()
{
    var viewModel = BuildRoutineNewViewModel(new Routine());

    return View();
}

Yeah, that’s the problem… right there.  I’d forgotten to pass the view model into the View.  Apparently the Html.Grid<T> helper simply panics if the model you’re reading from is empty or null—rather than throwing an error.
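The fix is simply handing the model to the view:

public ActionResult New()
{
    var viewModel = BuildRoutineNewViewModel(new Routine());

    return View(viewModel);
}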

Oddly enough, this is one of those times I wish the screen would have lit up red. Lessons learned.

ASP.NET Development Server From ‘Here’ in PowerShell

September 9, 2009 Comments off

Long title… almost longer than the code.

I used to have an old registry setting that started up the ASP.NET Development Server from the current path; however, since I rarely open up Explorer—and then opening up FireFox was even more painful—I needed a script.

What does it do?

The script starts up the ASP.NET Development server with a random port (so you can run multiples…) at your current location.  It then activates your machine’s DEFAULT BROWSER and browses to the site.  FireFox user?  No problem.  Works like a champ!

The Script (Full Code)

$path = resolve-path .
$rand = New-Object system.random
$port = $rand.next(2048,10240)
$path_to_devserver = "C:\Program Files (x86)\Common Files\microsoft shared\DevServer\9.0\WebDev.WebServer.exe"

& $path_to_devserver /port:$port /path:$path
(new-object -com shell.application).ShellExecute("http://localhost:$port")

The $path_to_devserver can be updated—depending on 64–bit vs. 32–bit machines.  Simple, easy, and to the point.  Now, no more fumbling around to launch a quick web application!
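For reference, on a 32-bit machine the path typically loses the (x86) segment; verify against your own install, since this is an assumption:

$path_to_devserver = "C:\Program Files\Common Files\microsoft shared\DevServer\9.0\WebDev.WebServer.exe"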

Ramping up with PSake

September 8, 2009 Comments off

I’ve been teetering back and forth between PSake and my trusty NAnt scripts for quite a while now.  For those not familiar with PSake, it’s build automation that makes you drunk—but in a good way. ;)  You can read James Kovacs’ original post here or check out the repository here for the latest bits.

I originally looked at rake scripts (after exposure to them while working with Fluent NHibernate), as PowerShell is loathed in our organization—or was.  That mindset is slowly changing (being able to show people how to crank out, in two lines of PowerShell, what was originally scoped at a week helps); so I’m using PSake as further motivation.

My prior PSake scripts were a bit tame: launch msbuild, copy a few files.  With the latest release of xUnit 1.5 hitting the wires over the weekend (and a much-needed x86 version for my poor, cranky Oracle libraries), I decided to bite the bullet and dig into PSake.

I had two goals:

  1. Build a reliable framework “default.ps1” file that I could drop into almost any project and configure with little or no effort.
  2. Compile, test, and rollout updates from a single PSake command task.

I borrowed the basic layout from Ayende’s Rhino Mocks PSake; however, I couldn’t get msbuild to run correctly simply by calling it.

Here’s what I ended up with for our internal core library.  The core library isn’t so much a “utilities” container; it’s just as it sounds—the framework all of our applications are built on, keeping connections to our various applications (HR, student systems, data warehouses, etc.) consistent as well as holding our base FNH conventions.

CODE: Full code available on CodePaste.NET

Properties

The properties area holds all of the configuration for the PSake script.  For me, it’s common to configure $solution_name, $libraries_to_merge, and $libraries_to_copy.  With our naming standards, the $test_library should be left unchanged.  I also added in the tester information so we could change from xUnit to MbUnit (if Hell froze over or something).

properties {

    # ****************  CONFIGURE ****************
    $solution_name =        "Framework"
    $test_library =         "$solution_name.Test.dll"

    $libraries_to_merge =   "antlr3.runtime.dll", `
                            "ajaxcontroltoolkit.dll", `
                            "Castle.DynamicProxy2.dll", `
                            "Castle.Core.dll", `
                            "FluentNHibernate.dll", `
                            "log4net.dll", `
                            "system.linq.dynamic.dll", `
                            "xunit.dll", `
                            "nhibernate.caches.syscache2.dll", `
                            "cssfriendly.dll", `
                            "iesi.collections.dll", `
                            "nhibernate.bytecode.castle.dll", `
                            "oracle.dataaccess.dll"

    $libraries_to_copy =    "system.data.sqlite.dll"

    $tester_directory =     "j:\shared_libraries\xunit\msbuild"
    $tester_executable =    "xunit.console.x86.exe"
    $tools_directory =      "$tools"
    $base_directory =       resolve-path .
    $thirdparty_directory = "$base_directory\thirdparty"
    $build_directory =      "$base_directory\build"
    $solution_file =        "$base_directory\$solution_name.sln"
    $release_directory =    "$base_directory\release"
}

Clean and easy enough.  You’ll notice that $libraries_to_merge and $libraries_to_copy are implied string arrays.  That works out well since string arrays end up as params when passed to commands… and our $libraries_to_copy can be iterated over later in the code.

Tasks – Default

task default -depends Release

The default task (if just running ‘psake’ without parameters) runs Release.  Easy enough.

Tasks – Clean

task Clean {
    remove-item -force -recurse $build_directory -ErrorAction SilentlyContinue | Out-Null
    remove-item -force -recurse $release_directory -ErrorAction SilentlyContinue | Out-Null
}

Clean up those build and release directories.

Tasks – Init

task Init -depends Clean {
    new-item $release_directory -itemType directory | Out-Null
    new-item $build_directory -itemType directory | Out-Null
    cp $tester_directory\*.* $build_directory
}

Restore those build and release directories that we cleaned up; then copy in our unit testing framework so we can run our tests (if necessary).

Tasks – Compile

task Compile -depends Init {
    # from http://poshcode.org/1050 (first lines to get latest versions)
    [System.Reflection.Assembly]::Load('Microsoft.Build.Utilities.v3.5, Version=3.5.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a') | Out-Null
    $msbuild = [Microsoft.Build.Utilities.ToolLocationHelper]::GetPathToDotNetFrameworkFile("msbuild.exe", "VersionLatest")

    # adding double slash for directories with spaces. Stupid MSBuild.
    &$msbuild /verbosity:minimal /p:Configuration="Release" /p:Platform="Any CPU" /p:OutDir="$build_directory"\\ "$solution_file"
}

Compile is a bit tricky.  As noted in the code, I ended up using a SharePoint example from PoshCode to get MSBuild to behave.  The standard exec methodology provided by PSake kept ignoring my parameters.  Maybe someone has a good reason… but this works.

You also see that my OutDir has TWO slashes.  It seems that directories with spaces require the second.  I’m sure this will somehow bite me later on, but it seems to be working for now. ;)

Tasks – Test

task Test -depends Compile {
    $origin_directory = pwd
    cd $build_directory
    exec .\$tester_executable "$build_directory\$test_library"
    cd $origin_directory
}

I want to thank Ayende for the idea to dump the origin directory into a variable—brilliant.  This one is pretty simple—just calls the tester and tests.

Tasks – Merge

task Merge {
    $origin_directory = pwd
    cd $build_directory

    remove-item "$solution_name.merge.dll" -erroraction SilentlyContinue
    rename-item "$solution_name.dll" "$solution_name.merge.dll"

    & $tools\ilmerge\ilmerge.exe /out:"$solution_name.dll" /t:library /xmldocs /log:"$solution_name.merge.log" `
        "$solution_name.merge.dll" $libraries_to_merge

    if ($lastExitCode -ne 0) {
        throw "Error: Failed to merge assemblies!"
    }
    cd $origin_directory
}

Merge calls ILMerge and wraps all of my libraries into one.  Do I need to do this?  Nah, but for the framework, I prefer to keep everything together.  I don’t want to be chasing mis-versioned libraries around.  Again, since $libraries_to_merge is a string array, it passes each “string” as a separate parameter—which is exactly what ILMerge wants to see.

I also have ILMerge generate and keep a log of what it did—just to have.  Since the build directory gets blown away between builds (and isn’t replicated to source control), then no harm.  Space is mostly free. ;)

Tasks – Build & Release

task Build -depends Compile, Merge {
    # When I REALLY don't want to test…
}

task Release -depends Test, Merge {
    copy-item $build_directory\$solution_name.dll $release_directory
    copy-item $build_directory\$solution_name.xml $release_directory

    # copy libraries that cannot be merged
    $libraries_to_copy | % { copy-item (join-path $build_directory $_) $release_directory }
}

Build provides just that—building with no testing and no copying to the release directory.  This is more for testing out the scripts, but useful in some cases.

Release copies the library and the XML documentation out to the release directory.  It then iterates through the string array of “other” libraries (non-managed libraries that can’t be merged, etc.) and copies them as well.

 

 

 
