
Archive for the ‘.net 4.0’ Category

Mashing CSVs around using PowerShell

February 15, 2012

Since I spend most of my day in the console, PowerShell also serves as my ‘Excel’. So, continuing my recent trend of PowerShell related posts, let’s dig into a quick and easy way to parse up CSV files (or most any type of file) by creating objects!

We, of course, need a few rows of example data. Let’s use this pseudo student roster.

Example data:

Student,Code,Product,IUID,TSSOC,Date
123456,e11234,Reading,jsmith,0:18,1/4/2012
123456,e11234,Reading,jsmith,1:04,1/4/2012
123456,e11234,Reading,jsmith,0:27,1/5/2012
123456,e11234,Reading,jsmith,0:19,1/7/2012
123456,e11235,Math,jsmith,0:14,1/7/2012

Now, for reporting, I want my ‘Minutes’ to be calculated from the TSSOC column (hours:minutes). Easy, we have PowerShell–it can split AND multiply!

The code:

Begin by creating an empty array to hold our output, importing our data into the ‘pipe’, and opening up an iteration (ForEach-Object, aliased as %). The final $out is our return value–calling our array so we can see our results.

$out = @()
import-csv data_example.csv |
   % {

   }
$out

Next, let’s add in our logic to split out the hours and minutes. We have full access to the .NET string methods in PowerShell, which includes .Split(). .Split() returns an array, so since we have HH:MM, our first number is our hours and our second number is our minutes. Hours then need to be multiplied by 60 to convert them into minutes.

You’ll also notice the [int] casting–this ensures we can properly multiply. Give it a whirl without it and you’ll get a string of sixty 0's or 1's back (PowerShell repeats the string rather than doing math).

$out = @()
import-csv data_example.csv |
   % {
	$hours = [int]$_.TSSOC.Split(':')[0] * 60
	$minutes = [int]$_.TSSOC.Split(':')[1]
   }
$out

The next step is to create a new object to contain our return values. We can use the new PowerShell v2.0 syntax to create a quick hashtable of our properties and values. Once we have our item, add it to our $out array.

$out = @()
import-csv data_example.csv |
   % {
	$hours = [int]$_.TSSOC.Split(':')[0] * 60
	$minutes = [int]$_.TSSOC.Split(':')[1]
        $item = new-object PSObject -Property @{
			Date=$_.Date;
			Minutes=($hours + $minutes);
			UserId=$_.IUID;
			StudentId=$_.Student;
			Code=$_.Code;
			Course=$_.Product
		}
	$out = $out + $item
   }

With that, we’re done. We can pipe it to Sort-Object for a bit of sorting, grouping, table formatting, or even export it BACK out as another CSV.

$out = @()
import-csv data_example.csv |
   % {
	$hours = [int]$_.TSSOC.Split(':')[0] * 60
	$minutes = [int]$_.TSSOC.Split(':')[1]
        $item = new-object PSObject -Property @{
			Date=$_.Date;
			Minutes=($hours + $minutes);
			UserId=$_.IUID;
			StudentId=$_.Student;
			Code=$_.Code;
			Course=$_.Product
		}
	$out = $out + $item
   }
$out | sort-object Date, Code | ft -a
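
And if you need a file instead of console output, exporting back out is just one more pipe (the file name is arbitrary):

$out | sort-object Date, Code | export-csv minutes_report.csv -NoTypeInformation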

Quick and dirty CSV manipulation–all without opening anything but the command prompt!

UPDATE: Matt has an excellent point in the comments below. PowerShell isn’t the ‘golden hammer’ for every task–it’s about finding the right tool for the job. We’re a mixed environment (Windows, Solaris, RHEL, Ubuntu), so PowerShell only applies to our Windows boxes. However, as a .net developer, I spend 80-90% of my time on those Windows boxes. So let’s say it’s a silver hammer. :)

Now, the code in this post looks pretty long–and hopping back and forth between notepad, the CLI, and your CSV is tiresome. I bounce back and forth between the CLI and notepad2 with the ‘ed’ and ‘ex’ functions (these commands are ‘borrowed’ from Oracle PL/SQL).

So how would I type this if my boss ran into my cube with a CSV and needed a count of Minutes?

$out=@();Import-Csv data_example.csv | % { $out += (new-object psobject -prop @{ Date=$_.Date;Minutes=[int]$_.TSSOC.Split(':')[1]+([int]$_.TSSOC.Split(':')[0]*60);UserId=$_.IUID;StudentId=$_.Student;Code=$_.Code;Course=$_.Product }) }; $out | ft -a

Now, that’s quicker to type, but a LOT harder to explain. ;) I’m sure this can be simplified down–any suggestions? If you could do automatic/implied property names, that’d REALLY cut it down.
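
One possible simplification: Select-Object’s calculated properties build the objects inline, which drops the $out array entirely. A quick sketch (same data, untested):

Import-Csv data_example.csv |
   select Date, Code,
      @{n='Minutes';e={[int]$_.TSSOC.Split(':')[0] * 60 + [int]$_.TSSOC.Split(':')[1]}},
      @{n='UserId';e={$_.IUID}},
      @{n='StudentId';e={$_.Student}},
      @{n='Course';e={$_.Product}} |
   sort-object Date, Code | ft -a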

Workaround: Oracle, NHibernate, and CreateSQLQuery Not Working

January 17, 2012

It’s difficult to sum this post up with a title. I started the morning adding (what I thought to be) a trivial feature to one of our shared repository libraries.

By the time I saw light at the end of the rabbit hole, I wasn’t sure what happened. This is the tale of my journey. All of the code is guaranteed to work on my machine… usually. ;)

I’ve done this before–how hard could it be?

The full source code is available via a gist.

The initial need

A simple need really: take a complex query and trim it down to a model using NHibernate’s Session.CreateSQLQuery and Transformers.AliasToBean<T>.
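
For context, the intended pattern looks roughly like this (the query and StudentModel are illustrative, not the actual code; Transformers lives in NHibernate.Transform):

var students = session
    .CreateSQLQuery("select s.first_name as FirstName, s.last_name as LastName from students s")
    .SetResultTransformer(Transformers.AliasToBean<StudentModel>())
    .List<StudentModel>();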

The problems

So far, the only data provider I’ve had these problems with is Oracle’s ODP: Oracle.DataAccess. I’m not sure if the built-in System.Data.OracleClient has the same quirks.

Problem #1 – Why is EVERYTHING IN CAPS?

The first oddness I ran into seemed to be caused by the IPropertyAccessor returning the properties in ALL CAPS. When it tried to match the aliases in the array, [FIRSTNAME] != [FirstName]. Well, that’s annoying.

Workaround: Add an additional PropertyInfo[] and fetch the properties myself.

This method ignores the aliases parameter in TransformTuple and relies on a call in the constructor to populate the Transformer’s properties.

public OracleSQLAliasToBeanTransformer(Type resultClass)
{
    // [snip!]
    // this is also a PERSONAL preference to only return fields that have a valid setter.
    _fields = this._resultClass.GetProperties(Flags)
                   .Where(x => x.GetSetMethod() != null).ToArray();
}

Inside TransformTuple, I then call on _fields rather than the aliases constructor parameter.

var fieldNames = _fields.Select(x => x.Name).ToList();

// [big snip!]

_setters = new ISetter[fieldNames.Count];
for (var i = 0; i < fieldNames.Count; i++)
{
    var fieldName = fieldNames[i];
    _setters[i] = _propertyAccessor.GetSetter(_resultClass, fieldName);
}

Problem solved. Everything is proper case.

Bold assumption: I’m guessing this is coming back in as upper case because Oracle, by default, stores and retrieves identifiers as upper case unless they’re surrounded by quotes–create table students gives you STUDENTS, while create table "Students" keeps the casing (and quoting has its own disadvantages).

Problem #2 – Why are my ints coming in as decimals and strings as char[]?

On this one, I’m taking a wild guess. I found a similar issue for Hibernate (the Java daddy of NHibernate), but didn’t see a matching NHibernate issue. It seems that the column types coming in are correct, but the data types in the tuple are wrong.

For example, if an object has an integer 0 value, it returns as 0M and implicitly converts to decimal.

Workaround: Use System.Convert.ChangeType(obj, conversionType)

If I used this on every piece of code, I’d feel more guilty than I do; however, on edge cases where the standard AliasToBeanTransformer won’t work, I chalk it up to part of doing business with Oracle.

Inside the TransformTuple method, I iterate over the fields and recast each tuple member accordingly. The only caveat is that I’m separating out enums and specifically casting them as Int32. YMMV.


var fieldNames = _fields.Select(x => x.Name).ToList();
for (var i = 0; i < fieldNames.Count; i++)
{
    var fieldType = _fields[i].PropertyType;
    if (fieldType.IsEnum)
    {
        // It can't seem to handle enums, so convert them
        // to Int32 (so the enum will still work)
        tuple[i] = Convert.ChangeType(tuple[i], TypeCode.Int32);
    }
    else
    {
        // set it to the actual field type on the property we're filling
        tuple[i] = Convert.ChangeType(tuple[i], fieldType);
    }
}

At this point, everything is recast to match the Type of the incoming property. When all is said and done, adding a bit of exception handling around this is recommended (though I’m not sure when an unexpected error might pop up here).

Problem solved. Our _setters[i].Set() can now populate our transformation and return it to the client.

Summary

Lessons learned? As Mr. Clarkson usually discovers, when it sounds easy, you’ll usually end up on fire. Keep fire extinguishers handy at all times.

Is there another way to do this? Probably. I could create a throwaway DTO with all capital letters, then use AutoMapper or such to map it to the properly-cased objects. That, honestly, seems more mind-numbing than this (though perhaps less voodoo).
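
For the curious, that alternative would look something like this rough sketch (all names illustrative; AutoMapper matches property names case-insensitively, so the upper-case DTO maps across cleanly):

// throwaway DTO whose upper-case properties match Oracle's aliases
public class STUDENT_DTO
{
    public string FIRSTNAME { get; set; }
    public string LASTNAME { get; set; }
}

// then map the ugly DTO onto the properly-cased model
Mapper.CreateMap<STUDENT_DTO, StudentModel>();
var students = dtos.Select(d => Mapper.Map<StudentModel>(d)).ToList();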


Using Cassette with Spark View Engine

July 21, 2011

Knapsack… *cough*… I mean Cassette is a fantastic javascript/css/coffeescript resource manager from Andrew Davey. It did, however, take a bit to figure out why it wouldn’t work with Spark View Engine. Thankfully, blogs exist to remind me of this at a later date. :)

Namely: I’d never tried to use a helper that returned void before. Helpers tend to always return either Html or a value.

I finally stumbled on a section in the Spark View Engine documentation for inline expressions.

Sometimes, when push comes to shove, you have a situation where you’re not writing output and there isn’t a markup construct for what you want to do. As a last resort you can produce code directly in-place in the generated class.

Well then, that sounds like what I want.

So our void methods, Html.ReferenceScript and Html.ReferenceStylesheet, should be written as:

#{Html.ReferenceScript("scripts/app/home.index.js");}
#{Html.ReferenceStylesheet("styles/app");}

Note the # (pound sign) and the semi-colon at the end of the statement block.

Our rendering scripts, however, use standard Spark output syntax:

${Html.RenderScripts()}
${Html.RenderStylesheetLinks()}

Now my Spark view contains the hashed Urls–in order–as it should.

 <!DOCTYPE html>
 <html>
 <head>
   <link href="/styles/app/site.css?f8f8e3a3aec6d4e07008efb57d1233562d2c4b70" type="text/css" rel="stylesheet" />
 </head>
 <body>
 <h2>Index</h2>
   <script src="/scripts/libs/jquery-1.6.2.js?eeee9d4604e71f2e01b818fc1439f7b5baf1be7a" type="text/javascript"></script>
   <script src="/scripts/app/application.js?91c81d13cf1762045ede31783560e6a46efc33d3" type="text/javascript"></script>
   <script src="/scripts/app/home.index.js?b0a66f7ba204e2dcf5261ab75934baba9cb94e51" type="text/javascript"></script>
 </body>
 </html>

Excellent.

Quick Solution Generation using PowerShell: New-Project

February 13, 2011

When I have an idea or want to prototype something, I tend to mock it up in Balsamiq, then dig right in and write some specs to see how it’d work. Unfortunately, deleting the junk Class1.cs in Library projects, clearing the plethora of excess in MVC3 webs, and so on tends to be the most time-intensive part of wiring up a quick project in .net.

All that deleting is too many steps–especially if you’re developing on the fly with a room of folks. I needed something ala command line to fit my normal workflow:

  1. o init-wrap MyProject -git
  2. cd MyProject
  3. git flow init
  4. {something to create projects, solutions, etc}
  5. o init-wrap -all
  6. {spend 5 minutes cleaning up junk files in my way}

Introducing New-Project

Yes, I know. I’m not a marketing guru. I don’t have a cool name for it.  Just a standard PowerShell convention.

Usage:

  -Library { } : Takes a string[] of names for c# class libraries to create.

  -Web { } : Takes a string[] of names for MVC3 web projects to create.

  -Solution "" : Takes a single string for your solution name.

Example:

New-Project -Library MyProj.Core, MyProj.Specs -Web MyProj.Web -Solution MyProject


What does this all do?

Well, honestly, I’m not sure how ‘reusable’ this is… the projects are pretty tailored.

Libraries

  • Libraries don’t have the annoying Class1.cs file that you always delete.
  • AssemblyInfo.cs is updated with the specified Name and Title.

MVC3 Webs

  • The web.config is STRIPPED down to the minimal (27 lines).
  • The folder structure is reorganized (removed unnecessary folders, like Controllers, which I put in libraries, not the web project).

Solution

  • This is the only one I’m actually using the VisualStudio.DTE for–it makes it super easy to create and add projects into the solution (rough sketch below).
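
For the curious, the DTE flavor of that looks something like this (my reconstruction–the ProgId and paths are assumptions, not lifted from the actual script):

$dte = New-Object -ComObject VisualStudio.DTE.10.0
$dte.Solution.Create('C:\projects', 'MyProject')
# projects then get added via $dte.Solution.AddFromTemplate(...), and finally:
$dte.Solution.SaveAs('C:\projects\MyProject\MyProject.sln')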

But there are other scaffolding systems out there–why not use them?

Most of the time, I don’t need a full system. I don’t need my objects mapped, views automatically set up, or anything else. 

Start with three empty projects, load up the Specifications project, and start driving out features. That’s how it’s supposed to work, right? So why would I want to have to pre-fill my projects ahead of time?

What’s next?

  • Error catching and handling (it’s pretty lax right now)
  • Handle setting up gitflow, openwrap, jQuery, etc. Less typing good!
  • Something… who knows. :D

Where to get it?

I’ve tossed it up on github at https://github.com/drlongnecker/New-Project. Right now it has the WOMM (“works on my machine”) warranty.


Getting buildNumber for TeamCity via AssemblyInfo

June 16, 2010

I’m a proud psake user and love the flexibility of PowerShell during my build process. I recently had a project that I really wanted the build number to show up in TeamCity rather than the standard incrementing number.

In the eternal words of Jeremy Clarkson, “How hard can it be?”

On my local machine, I have a spiffy “gav”–getassemblyversion–command that uses Reflection to grab the assembly version.  Unfortunately, since I don’t want to rely on the local version of .NET and I’ve already set the build number in AssemblyInfo.cs as part of my build process, I just want to fetch what’s in that file.
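
(As an aside, “gav” boils down to something like this–my reconstruction, so the exact shape is an assumption:)

function gav($assemblyPath) {
    # read just the assembly's identity--no full load--and return its version
    [System.Reflection.AssemblyName]::GetAssemblyName((Resolve-Path $assemblyPath).Path).Version
}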

Regular expressions to the rescue!

Here’s the final psake task. I call it as part of my build/release tasks and it generates the right meta output that TeamCity needs for the build version. Here’s a gist of the source: http://gist.github.com/440646.

Here’s the actual source to review:

task GetBuildNumber { 
  $version = gc $base_directory\$solution_name\Properties\AssemblyInfo.cs | select-string -pattern "AssemblyVersion"
  
  $version -match '^\[assembly: AssemblyVersion\(\"(?<major>[0-9]+)\.(?<minor>[0-9]+)\.(?<revision>[0-9]+)\.(?<build>[0-9]+)\"\)\]'
  
  "##teamcity[buildNumber '{0}.{1}.{2}.{3}']" -f $matches["major"], $matches["minor"], $matches["revision"], $matches["build"]
}
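
Run against, say, an AssemblyVersion of 1.4.2.305 (a made-up number), the task emits the service message TeamCity watches build output for:

##teamcity[buildNumber '1.4.2.305']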

Enjoy!

Using xUnit 1.5 with .NET 4.0 RTW

April 16, 2010

After a bit of tinkering, I finally managed to find the sweet spot for getting xUnit 1.5 to run (without errors) with projects targeted at the new .NET 4.0 Framework.

After initial solution conversion, if you run your tests with xunit.console.x86.exe (or the 64-bit version, I’m assuming), you’ll face the following helpful error:

System.BadImageFormatException: Could not load file or assembly 'Assembly.Test.dll' 
or one of its dependencies. This assembly is built by a runtime newer than the 
currently loaded runtime and cannot be loaded.
File name: 'J:\projects\Framework\build\Assembly.Test.dll'
   at System.Reflection.AssemblyName.nGetFileInformation(String s)
   at System.Reflection.AssemblyName.GetAssemblyName(String assemblyFile)
   at Xunit.Sdk.Executor..ctor(String assemblyFilename)
   at Xunit.ExecutorWrapper.RethrowWithNoStackTraceLoss(Exception ex)
   at Xunit.ExecutorWrapper.CreateObject(String typeName, Object[] args)
   at Xunit.ExecutorWrapper..ctor(String assemblyFilename, String configFilename,
      Boolean shadowCopy)
   at Xunit.ConsoleClient.Program.Main(String[] args)

What?  BadImageFormatException?  But the bitness didn’t change!

@markhneedham blogged last year how to fix the problem: updating the config files for xunit to “support” the new version.  That worked then, but the version numbers have changed.

Here’s what the new configuration files need to include:

	<startup useLegacyV2RuntimeActivationPolicy="true">
		<supportedRuntime version="v4.0.30319" />
	</startup>

The useLegacyV2RuntimeActivationPolicy attribute lets mixed-mode assemblies built against the v2 runtime load inside the v4 process. For my projects, this keeps Oracle.DataAccess.dll and System.Data.SQLite.dll (both x86, mixed-mode libraries) happy.

The supportedRuntime element denotes the current version of .NET 4.0 (30319 is the RTM build).
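
Putting it all together, the runner’s config file (xunit.console.x86.exe.config, sitting next to the executable) ends up looking roughly like this:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
	<startup useLegacyV2RuntimeActivationPolicy="true">
		<supportedRuntime version="v4.0.30319" />
	</startup>
</configuration>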

After that, everything runs like a champ!

xUnit.net console test runner (32-bit .NET 4.0.30319.1)
Copyright (C) 2007-9 Microsoft Corporation.

xunit.dll:     Version 1.4.9.0
Test assembly: Assembly.Test.dll

Total tests: 397, Failures: 0, Skipped: 0, Time: 7.508 seconds

Visual Studio 2010 & .NET Framework 4.0 Beta 1 Out

May 18, 2009

For MSDN Subscribers, VS2010 and .NET 4.0 beta 1 have hit the downloads site.

After reading through Microsoft’s “Overview” page—my biggest hope is that it doesn’t do “too much”.  Visual Studio is already a monolith of hard drive cranking pain as it is.  My Cray is still being held up by our purchasing department—I hope I don’t need it.  Maybe they took Win7’s approach that less is more and trimmed things down?

Overview: http://www.microsoft.com/visualstudio/en-us/products/2010/default.mspx

Beta Docs: http://msdn.microsoft.com/en-us/library/dd831853(VS.100).aspx

The biggest point of interest (so far, I mean—it’s still downloading) is the warning for Win7 RC:


NOTE: If you are installing Visual Studio 2010 Beta 1 on Windows 7 RC, you may receive a compatibility warning when installing SQL Server 2008. For more information and a workaround, please click here.

The link, however, sends you to an explanation of the error—which is simply that Win7 RC requires SQL Server 2008 SP1 or 2005 SP3. If I remember right, I thought the error message SAID that. I guess Microsoft is just covering all of the bases.

Finally, it looks like the “official” tag on Twitter is #vs10; however, quite a few of us have been using #vs2010.  Follow both for the full scope.

I’ve got about 20 minutes left on the download and a VM waiting for this to be loaded.  More to come!
