Archive for the ‘c#’ Category

Mashing CSVs around using PowerShell

February 15, 2012

Since I spend most of my day in the console, PowerShell also serves as my ‘Excel’. So, continuing my recent trend of PowerShell related posts, let’s dig into a quick and easy way to parse up CSV files (or most any type of file) by creating objects!

We, of course, need a few rows of example data. Let’s use this pseudo student roster.

Example data:

Student,Code,Product,IUID,TSSOC,Date
123456,e11234,Reading,jsmith,0:18,1/4/2012
123456,e11234,Reading,jsmith,1:04,1/4/2012
123456,e11234,Reading,jsmith,0:27,1/5/2012
123456,e11234,Reading,jsmith,0:19,1/7/2012
123456,e11235,Math,jsmith,0:14,1/7/2012

Now, for reporting, I want my ‘Minutes’ to be calculated from the TSSOC column (hours:minutes). Easy, we have PowerShell–it can split AND multiply!

The code:

Begin by creating an empty array to hold our output, importing our data into the ‘pipe’, and opening up a ForEach-Object (%) loop. The final $out is our return value–echoing the array so we can see our results.

$out = @()
import-csv data_example.csv |
   % {

   }
$out

Next, let’s add in our logic to split out the hours and minutes. We have full access to the .NET string methods in PowerShell, which includes .Split(). .Split() returns an array; since we have HH:MM, the first element is our hours and the second is our minutes. Hours then need to be multiplied by 60 to convert them into minutes.

You’ll also notice the [int] casting–this ensures we can properly multiply. Give it a whirl without it and you’ll get sixty 0’s or 1’s back (PowerShell repeats the string instead of multiplying it).

$out = @()
import-csv data_example.csv |
   % {
	$hours = [int]$_.TSSOC.Split(':')[0] * 60
	$minutes = [int]$_.TSSOC.Split(':')[1]
   }
$out

The next step is to create a new object to contain our return values. We can use the new PowerShell v2.0 syntax to create a quick hashtable of our properties and values. Once we have our item, add it to our $out array.

$out = @()
import-csv data_example.csv |
   % {
	$hours = [int]$_.TSSOC.Split(':')[0] * 60
	$minutes = [int]$_.TSSOC.Split(':')[1]
	$item = new-object PSObject -Property @{
		Date=$_.Date;
		Minutes=($hours + $minutes);
		UserId=$_.IUID;
		StudentId=$_.Student;
		Code=$_.Code;
		Course=$_.Product
	}
	$out = $out + $item
   }

With that, we’re done. We can pipe the results through Sort-Object for a bit of sorting, grouping, table formatting, or even export them BACK out as another CSV.

$out = @()
import-csv data_example.csv |
   % {
	$hours = [int]$_.TSSOC.Split(':')[0] * 60
	$minutes = [int]$_.TSSOC.Split(':')[1]
	$item = new-object PSObject -Property @{
		Date=$_.Date;
		Minutes=($hours + $minutes);
		UserId=$_.IUID;
		StudentId=$_.Student;
		Code=$_.Code;
		Course=$_.Product
	}
	$out = $out + $item
   }
$out | sort-object Date, Code | ft -a

Quick and dirty CSV manipulation–all without opening anything but the command prompt!

UPDATE: Matt has an excellent point in the comments below. PowerShell isn’t the ‘golden hammer’ for every task; it’s about finding the right tool for the job. We’re a mixed environment (Windows, Solaris, RHEL, Ubuntu), so PowerShell only applies to our Windows boxes. However, as a .net developer, I spend 80-90% of my time on those Windows boxes. So let’s say it’s a silver hammer. :)

Now, the code in this post looks pretty long–and hopping back and forth between notepad, the CLI, and your CSV is tiresome. I bounce back and forth between the CLI and notepad2 with the ‘ed’ and ‘ex’ functions (these commands are ‘borrowed’ from Oracle PL/SQL). More information here.

So how would I type this if my boss ran into my cube with a CSV and needed a count of Minutes?

$out=@();Import-Csv data_example.csv | % { $out += (new-object psobject -prop @{ Date=$_.Date;Minutes=[int]$_.TSSOC.Split(':')[1]+([int]$_.TSSOC.Split(':')[0]*60);UserId=$_.IUID;StudentId=$_.Student;Code=$_.Code;Course=$_.Product }) }; $out | ft -a

Now, that’s quicker to type, but a LOT harder to explain. ;) I’m sure this can be simplified down–any suggestions? If you could do automatic/implied property names, that’d REALLY cut it down.

Workaround: Oracle, NHibernate, and CreateSQLQuery Not Working

January 17, 2012

It’s difficult to sum this post up with a title. I started the morning adding (what I thought to be) a trivial feature to one of our shared repository libraries.

By the time I saw light at the end of the rabbit hole, I wasn’t sure what had happened. This is the tale of my journey. All of the code is guaranteed to work on my machine… usually. ;)

I’ve done this before–how hard could it be?

The full source code is available via a gist.

The initial need

A simple need really: take a complex query and trim it down to a model using NHibernate’s Session.CreateSQLQuery and Transformers.AliasToBean<T>.
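For context, the intended call looks something like this–a minimal sketch, where the SQL, the column aliases, and the StudentRecord model are made up for illustration:

// using NHibernate.Transform;
// The column aliases in the SQL must match the property names on the
// target model for AliasToBean to populate them.
var records = session
    .CreateSQLQuery("select s.student_id as StudentId, s.first_name as FirstName from students s")
    .SetResultTransformer(Transformers.AliasToBean<StudentRecord>())
    .List<StudentRecord>();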

The problems

So far, the only data provider I’ve had these problems with is Oracle’s ODP: Oracle.DataAccess. I’m not sure whether the built-in System.Data.OracleClient provider behaves the same way.

Problem #1 – Why is EVERYTHING IN CAPS?

The first oddness I ran into seemed to be caused by the IPropertyAccessor returning the properties in ALL CAPS. When it tried to match the aliases in the array, [FIRSTNAME] != [FirstName]. Well, that’s annoying.

Workaround: Add an additional PropertyInfo[] and fetch the properties myself.

This method ignores the aliases parameter in TransformTuple and relies on a call in the constructor to populate the Transformer’s properties.

public OracleSQLAliasToBeanTransformer(Type resultClass)
{
    // [snip!]
    // this is also a PERSONAL preference to only return fields that have a valid setter.
    _fields = this._resultClass.GetProperties(Flags)
                   .Where(x => x.GetSetMethod() != null).ToArray();
}

Inside TransformTuple, I then call on _fields rather than the aliases parameter.

var fieldNames = _fields.Select(x => x.Name).ToList();

// [big snip!]

_setters = new ISetter[fieldNames.Count];
for (var i = 0; i < fieldNames.Count; i++)
{
    var fieldName = fieldNames[i];
    _setters[i] = _propertyAccessor.GetSetter(_resultClass, fieldName);
}

Problem solved. Everything is proper case.

Bold assumption: I’m guessing this comes back in upper case because Oracle, by default, stores and retrieves identifiers as upper case unless they’re surrounded by quotes (which has its own disadvantages).

Problem #2 – Why are my ints coming in as decimals and strings as char[]?

On this one, I’m taking a wild guess. I found a similar issue for Hibernate (the Java daddy of NHibernate), but didn’t see a matching NHibernate issue. It seems that the values coming in are correct, but the tuple’s data types are wrong.

For example, if an object has an integer 0 value, it comes back as 0M and implicitly converts to decimal.

Workaround: Use System.Convert.ChangeType(value, Type)

If I used this on every piece of code, I’d feel more guilty than I do; however, on edge cases where the standard AliasToBeanTransformer won’t work, I chalk it up to part of doing business with Oracle.

Inside the TransformTuple method, I iterate over the fields and recast each tuple member accordingly. The only caveat is that I’m separating out enums and specifically casting them to Int32. YMMV.


for (var i = 0; i < _fields.Length; i++)
{
    var fieldType = _fields[i].PropertyType;
    if (fieldType.IsEnum)
    {
        // Convert.ChangeType can't target an enum type directly, so
        // convert to Int32 and let the enum property take it from there.
        tuple[i] = Convert.ChangeType(tuple[i], TypeCode.Int32);
    }
    else
    {
        // Recast to the actual type of the property we're filling.
        tuple[i] = Convert.ChangeType(tuple[i], fieldType);
    }
}

At this point, everything is recast to match the Type of the incoming property. When all is said and done, adding a bit of exception handling around this is recommended (though I’m not sure what unexpected errors might pop up here).

Problem solved. Our _setters[i].Set() can now populate our transformation and return it to the client.
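Tying it together, the tail end of TransformTuple comes out roughly like this–a sketch of the shape, not the gist verbatim:

// New up the DTO, then push each recast tuple value through its
// matching setter.
var result = Activator.CreateInstance(_resultClass);
for (var i = 0; i < _fields.Length; i++)
{
    _setters[i].Set(result, tuple[i]);
}
return result;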

Summary

Lessons learned? As Mr. Clarkson usually discovers, when something sounds easy, you’ll probably end up on fire. Keep fire extinguishers handy at all times.

Is there another way to do this? Probably. I could create a throwaway DTO with all-capital property names, then use AutoMapper or the like to map it to the properly-cased objects. That, honestly, seems more mind-numbing than this (though perhaps less voodoo).

Categories: .net 4.0, c#

Dynamically Adding Sub Reports to an ActiveReport

April 5, 2011

I’m currently working on a project where I needed to iterate through a group of users and plug in a little sub report that contains some demographic information and a Code39 barcode.

One sub report is easy–add the control to the page, set the .Report property and away we go; however, adding multiple sub reports dynamically and getting the spacing right proved to be a bit challenging.

Warning: It “works on my machine”.

To add controls to your report dynamically, you must use the _ReportStart event of your report.
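For reference, the wiring looks something like this–a sketch, since the designer normally generates this hookup and the report name here is illustrative:

public PersonListReport()
{
    InitializeComponent();
    // Subscribe before the report runs; ReportStart fires once, before
    // any sections render, so controls added there get laid out.
    this.ReportStart += this.PersonListReport_ReportStart;
}

private void PersonListReport_ReportStart(object sender, EventArgs e)
{
    // dynamic sub report creation goes here (full listing below)
}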

To address the spacing issue, let’s start out by specifying our ‘base’ top and the height of our sub report.

const float height = 0.605f;
var currentTop = 7.250f;

In my case, I want my sub reports to start at about 7.25″ and be ~0.605″ in height.

The actual creation of the sub report placeholder is fairly standard–new it up and assign a few properties. I’ll get into the looping a bit later.

var subReport = new SubReport
{
    CloseBorder = false,
    Height = height,
    Left = 0F,
    Width = 7.5F,
    Top = currentTop,
    Name = person.DisplayName + "_SubReport",
    ReportName = person.DisplayName + "_SubReport",
};

Notice how I’ve set the Top property to be our currentTop variable. Keep that in mind.

The next step is to new up our actual sub report object.  My sub report has two properties on it, each for a data item.  I could pass it along as an object, but it seems a bit overkill for two string properties. After the report object is assigned to our new sub report container, add the container to our details section.

var spReport = new _PersonAssignment()
{
    Name = person.DisplayName,
    EmployeeId = person.EmployeeId
};

subReport.Report = spReport;
this.Sections["detail"].Controls.Add(subReport);

Because we’ve assigned it a left, width, and top, our sub report will be added where expected.

The final piece is incrementing the ‘top’ to accommodate for the height of the last sub report.

currentTop += height;

Easy.  Now the next sub report will start at 7.250″ + 0.605″ or 7.855″.  Keep in mind that the 0.605″ includes a bit of whitespace, if you need additional whitespace, pad the height number.

The full _ReportStart event looks like:

const float height = 0.605f;
var currentTop = 7.250f;
foreach (var person in Model.People)
{
    var subReport = new SubReport
    {
        CloseBorder = false,
        Height = height,
        Left = 0F,
        Width = 7.5F,
        Top = currentTop,
        Name = person.DisplayName + "_SubReport",
        ReportName = person.DisplayName + "_SubReport",
    };

    var spReport = new _PersonAssignment()
    {
        Name = person.DisplayName,
        EmployeeId = person.EmployeeId
    };

    subReport.Report = spReport;
    this.Sections["detail"].Controls.Add(subReport);
    currentTop += height;
}

Bingo.

Example of Dynamic Sub Reports

Categories: .net 3.5, c#

Getting buildNumber for TeamCity via AssemblyInfo

June 16, 2010

I’m a proud psake user and love the flexibility of PowerShell during my build process. I recently had a project where I really wanted the build number to show up in TeamCity rather than the standard incrementing number.

In the eternal words of Jeremy Clarkson, “How hard can it be?”

On my local machine, I have a spiffy “gav”–getassemblyversion–command that uses Reflection to grab the assembly version.  Unfortunately, since I don’t want to rely on the local version of .NET and I’ve already set the build number in AssemblyInfo.cs as part of my build process, I just want to fetch what’s in that file.

Regular expressions to the rescue!

Here’s the final psake task. I call it as part of my build/release tasks, and it emits the service message output that TeamCity needs to set the build number. Here’s a gist of the source: http://gist.github.com/440646.

Here’s the actual source to review:

task GetBuildNumber { 
  $version = gc $base_directory\$solution_name\Properties\AssemblyInfo.cs | select-string -pattern "AssemblyVersion"
  
  $version -match '^\[assembly: AssemblyVersion\(\"(?<major>[0-9]+)\.(?<minor>[0-9]+)\.(?<revision>[0-9]+)\.(?<build>[0-9]+)\"\)\]'
  
  "##teamcity[buildNumber '{0}.{1}.{2}.{3}']" -f $matches["major"], $matches["minor"], $matches["revision"], $matches["build"]
}

Enjoy!

Using xUnit 1.5 with .NET 4.0 RTW

April 16, 2010

After a bit of tinkering, I finally managed to find the sweet spot for getting xUnit 1.5 to run (without errors) with projects targeted at the new .NET 4.0 Framework.

After initial solution conversion, if you run your tests with xunit.console.x86.exe (or the 64-bit version, I’m assuming), you’ll face the following helpful error:

System.BadImageFormatException: Could not load file or assembly 'Assembly.Test.dll' 
or one of its dependencies. This assembly is built by a runtime newer than the
currently loaded runtime and cannot be loaded.
File name: 'J:\projects\Framework\build\Assembly.Test.dll'
   at System.Reflection.AssemblyName.nGetFileInformation(String s)
   at System.Reflection.AssemblyName.GetAssemblyName(String assemblyFile)
   at Xunit.Sdk.Executor..ctor(String assemblyFilename)
   at Xunit.ExecutorWrapper.RethrowWithNoStackTraceLoss(Exception ex)
   at Xunit.ExecutorWrapper.CreateObject(String typeName, Object[] args)
   at Xunit.ExecutorWrapper..ctor(String assemblyFilename, String configFilename,
      Boolean shadowCopy)
   at Xunit.ConsoleClient.Program.Main(String[] args)

What?  BadImageFormatException?  But the bitness didn’t change!

@markhneedham blogged last year how to fix the problem: updating the config files for xunit to “support” the new version.  That worked then, but the version numbers have changed.

Here’s what the new configuration files need to include:

	<startup useLegacyV2RuntimeActivationPolicy="true">
		<supportedRuntime version="v4.0.30319" />
	</startup>

The useLegacyV2RuntimeActivationPolicy attribute lets assemblies built against earlier runtimes–including mixed-mode assemblies–load under the v4 CLR. For my projects, this seems to keep Oracle.DataAccess.dll and System.Data.SQLite.dll (both x86 libraries) happy.

The supportedRuntime element denotes the current version of .NET 4.0 (30319 is the RTM build).

After that, everything runs like a champ!

xUnit.net console test runner (32-bit .NET 4.0.30319.1)
Copyright (C) 2007-9 Microsoft Corporation.

xunit.dll:     Version 1.4.9.0
Test assembly: Assembly.Test.dll

Total tests: 397, Failures: 0, Skipped: 0, Time: 7.508 seconds

Crazy Way To Set Display Order From an Enum

I’m sure someone has a better way to do this and I’m all ears; however, this seems to ‘work’ and performs well enough (YAGNI fully applied).

We have an Enum of our different “school levels”–elementary, middle, high schools, special schools, etc.  Since the Enum’s values correspond to the data values in our student system, their display/alphabetical orders do not match the common order (e.g. High School = 4, Middle School = 5).

I came up with a simple attribute that’s added to the Enum to “specify” display order and it’s worked like a champ for the past year or so.

[Description("Elementary")]
[DisplayOrder(1)]
Elementary = 2,

[Description("Middle")]
[DisplayOrder(2)]
Middle = 5,

Not at all fancy.
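The attribute itself is equally plain. A minimal sketch of such an attribute (the original implementation isn’t shown here):

// A hypothetical implementation of the custom attribute.
[AttributeUsage(AttributeTargets.Field, AllowMultiple = false)]
public sealed class DisplayOrderAttribute : Attribute
{
    public int Order { get; private set; }

    public DisplayOrderAttribute(int order)
    {
        Order = order;
    }
}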

However useful it is for looping and providing context in code, what about using it to order OTHER “level” information from another source? So far, I hadn’t found a clean way.

The “Solution”?

Use the index of our already-ordered Enum list.  In this instance, our model’s SchoolLevel property matches the Description of the Enum.

var levelsInOrder = Enum<Level>.ToList();
// AutoMapper mappings, etc.

model.Data = data
    .OrderBy(x => levelsInOrder
        .FindIndex(z => z.Description() == x.SchoolLevel));

Is it perfect? No, but it works. I could probably even refactor the Enum list and FindIndex call out for a bit more clarity.

model.Data = data.OrderBy(x => GetDisplayOrder(x.SchoolLevel));

...

private int GetDisplayOrder(string level)
{
    return Enum<Level>.ToList().FindIndex(x => x.Description() == level);
}
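Note that Enum<T>.ToList() and the Description() extension above aren’t framework methods. Sketches along these lines–building on the hypothetical DisplayOrderAttribute above, since the real helpers aren’t shown–would back that usage:

// using System.ComponentModel; using System.Linq;
public static class Enum<T> where T : struct
{
    // Enum values sorted by their DisplayOrder attribute.
    public static List<T> ToList()
    {
        // Fully qualified so it doesn't resolve to this Enum<T> class.
        return System.Enum.GetValues(typeof(T))
            .Cast<T>()
            .OrderBy(value => GetOrder(value))
            .ToList();
    }

    private static int GetOrder(T value)
    {
        var field = typeof(T).GetField(value.ToString());
        var attribute = (DisplayOrderAttribute)Attribute.GetCustomAttribute(
            field, typeof(DisplayOrderAttribute));
        return attribute == null ? int.MaxValue : attribute.Order;
    }
}

public static class EnumExtensions
{
    // Returns the [Description] text, falling back to the member name.
    public static string Description(this Enum value)
    {
        var field = value.GetType().GetField(value.ToString());
        var attribute = (DescriptionAttribute)Attribute.GetCustomAttribute(
            field, typeof(DescriptionAttribute));
        return attribute == null ? value.ToString() : attribute.Description;
    }
}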

Is there a better way?

Categories: .net 3.0, c#, LINQ

Populating Select Lists in ASP.NET MVC and jQuery

September 25, 2009

I’ve been working the last bit to find the best way to create/populate select (option) lists using a mixture of ASP.NET MVC and jQuery. What I’ve run into is that the “Key” and “Value” property names are not passed along when using Json(data).

Here’s what I’m trying to pull off in jQuery: building a simple select drop down list.

var dd_activities = "<select id='dd_activities'>";
var count = data.length;
for (var i = 0; i < count; i++) {
    dd_activities += "<option value='" + data[i].Key + "'>" + data[i].Value + "</option>";
}
dd_activities += "</select>";

$("#activities").before(dd_activities);

Using some very basic key/value data:

[
 {"3","Text Value"},
 {"4","Another Text Value"},
 {"1","More boring values…"},
 {"2","Running out of values"},
 {"5","Last value…"}
]

Without any sort of name, I was at a loss on how to access the information, how to get its length, or anything. FireBug was happy to order it up… but that didn’t help.

My first attempt was to use a custom object, but that just felt dirty–creating NEW objects simply to return Json data.

My second attempt–newing up anonymous objects–seemed to work like a champ:

[Authorize]
[CacheFilter(Duration = 20)]
public ActionResult GetActivitiesList()
{
    try
    {
        var results =
            _activityRepository
                .GetAll()
                .OrderBy(x => x.Target.Name)
                .ThenBy(x => x.Name)
                .Select(x => new
                    {
                        Key = x.Id.ToString(),
                        Value = string.Format("[{0}] {1}", x.Target.Name, x.Name)
                    })
                .ToList();

        return Json(results);
    }
    catch (Exception ex)
    {
        return Json(ex.Message);
    }
}

 
Well, not beautiful, but returns a sexy Key/Value list that Json expects–and that populates our select list.

[
 {"Key":"3","Value":"Text Value"},
 {"Key":"4","Value":"Another Text Value"},
 {"Key":"1","Value":"More boring values…"},
 {"Key":"2","Value":"Running out of values"},
 {"Key":"5","Value":"Last value…"}
]
The next step was to get that out of the controller and into the data repository… pushing some of that logic back down to the database.
 

var criteria =
    Session.CreateCriteria<Activity>()
        .CreateAlias("Target", "Target")
        .Add(Restrictions.Eq("IsValid", true))
        .AddOrder(Order.Asc("Target.Name"))
        .AddOrder(Order.Asc("Name"))
        .SetMaxResults(100);

var data = criteria.List<Activity>();

var result =
    data
        .Select(x => new
            {
                Key = x.Id.ToString(),
                Value = string.Format("[{0}] {1}", x.Target.Name, x.Name)
            })
        .ToList();

tx.Commit();

return result;

 
A bit of formatting, a few restrictions, the ordering pushed back to the database–and a tidy SQL statement is created.
 
The last touch is the return type.  Since we’re returning a “List” of anonymous types, the return type of GetActivitiesList() must be an IList.
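Put differently, the repository method ends up shaped roughly like this–a sketch that assumes the Session property and transaction plumbing live in the repository’s base class, which isn’t shown here:

public IList GetActivitiesList()
{
    using (var tx = Session.BeginTransaction())
    {
        var criteria = Session.CreateCriteria<Activity>()
            .CreateAlias("Target", "Target")
            .Add(Restrictions.Eq("IsValid", true))
            .AddOrder(Order.Asc("Target.Name"))
            .AddOrder(Order.Asc("Name"))
            .SetMaxResults(100);

        // A List of anonymous types still satisfies the non-generic IList.
        var result = criteria.List<Activity>()
            .Select(x => new
            {
                Key = x.Id.ToString(),
                Value = string.Format("[{0}] {1}", x.Target.Name, x.Name)
            })
            .ToList();

        tx.Commit();
        return result;
    }
}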
 
That shrinks down my ActionResult to a single call.
 

try
{
    return Json(_activityRepository.GetActivitiesList());
}
catch (Exception ex)
{
    return Json(ex.Message);
}

 
That works… and will work for now.  Though, I’ve marked it as a HACK in my code.  Why?  I’m honestly not sure yet.  Just a feeling.