Archive

Archive for the ‘PowerShell’ Category

Playing nice with psake, PartCover, and TeamCity

December 2, 2010 1 comment

While code coverage isn’t the holy grail of development benchmarks, it has a place in the tool belt. We have several legacy systems where we are retrofitting tests in as we enhance and maintain the project.

I looked at NCover as a code coverage solution. NCover is a SLICK product and I really appreciated its Explorer and charting for deep dive analysis. Unfortunately, working in public education, the budget cuts just keep coming and commercial software wasn't in the cards. After finding the revitalized project on GitHub, I dug back into PartCover.net.  Shaun Wilde (@shauncv) and others have been quite active revitalizing the project and fleshing out features for .NET 4.0.

You can find the repo and installation details for PartCover.net at https://github.com/sawilde/partcover.net4.

Now, armed with code coverage reports, I had to find a way to get that information into TeamCity. Since I use psake, my builds go through TeamCity's command line runner, which doesn't have an option for importing code coverage results. That just means I'll need to handle executing PartCover inside of psake. Easy enough. Thankfully, TeamCity's Report tabs can also be customized. Time to get to work!

Step 0: Setting up PartCover for 64-bit

If you’re on a 32-bit installation, then skip this (are there people still running 32-bit installs?).

PartCover itself installs just fine; however, if you're on an x64 machine, you'll need to modify PartCover.exe and PartCover.Browser.exe to force 32-bit mode using corflags.exe.

Corflags.exe is usually located at C:\Program Files (x86)\Microsoft SDKs\Windows\v7.0A\bin if you're running Windows 7. On older versions of Windows, it might be under v6.0A.

Once you’ve located corflags.exe, simply force the 32-bit flag:

corflags /32bit+ /force PartCover.exe
corflags /32bit+ /force PartCover.Browser.exe

If you have trouble on this step, hit up Google. There are tons of articles out there hashing and rehashing this problem.
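
If you'd rather script the corflags step (handy when setting up a new build agent), a quick PowerShell sketch might look like the following. The SDK and PartCover paths here are assumptions; adjust them to your own install locations.

# Hypothetical paths - point these at your SDK bin folder and PartCover install.
$corflags = "C:\Program Files (x86)\Microsoft SDKs\Windows\v7.0A\bin\corflags.exe"
$partcoverDir = ".\tools\partcover"

# Force the 32-bit flag on both PartCover executables.
"PartCover.exe", "PartCover.Browser.exe" | % {
  & $corflags /32bit+ /force (Join-Path $partcoverDir $_)
}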

Step 1: Setting up your PartCover settings file

PartCover allows you to specify configuration settings directly at the command line, but I find that hard to track when changes are made.  Alternatively, a simple XML file allows you to keep your runner command clean.

Here’s the settings file I’m running with:

<PartCoverSettings>
  <Target>.\Tools\xunit\xunit.console.x86.exe</Target>
  <TargetWorkDir></TargetWorkDir>
  <TargetArgs> .\build\MyApp.Test.dll /teamcity /html .\build\test_output.html</TargetArgs>
  <DisableFlattenDomains>True</DisableFlattenDomains>
  <Rule>+[MyApp]*</Rule>
  <Rule>-[MyApp.Test]*</Rule>
  <Rule>-[*]*__*</Rule>
</PartCoverSettings>

Just replace MyApp with the project names/namespaces you wish to cover (and exclude).

For more details on setting up a settings file, check out the documentation that comes with PartCover.  The only real oddity I have here is the last rule <Rule>-[*]*__*</Rule>.  PartCover, by default, also picks up the dynamically created classes generated for lambda expressions.  Since my tests already exercise those, I'm hiding the results for the dynamic classes, which have two underscores (__) in their names. It's a cheap trick, but it seems to work.

Step 2: Setting up psake for your tests and coverage

My default test runner task in psake grabs a set of assemblies and iterates through them passing each to the test runner.  That hasn’t changed; however, I now need to call PartCover instead of mspec/xunit.

task Test -depends Compile {
  $test_assemblies = (gci $build_directory\* -include *Spec.dll,*Test.dll)
  if ($test_assemblies -ne $null) {
    " - Found tests/specifications..."
    foreach($test in $test_assemblies) {
      " - Executing tests and coverage on $test..."
      $testExpression = "$coverage_runner --register --settings $base_directory\partcover.xml --output $build_directory\partcover_results.xml"
      exec { invoke-expression $testExpression }
    }
  }
  else {
    " - No tests found, skipping step."
  }
}

$coverage_runner, $base_directory, and $build_directory refer to variables configured at the top of my psake script.
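
For reference, here's a sketch of what that properties block might look like; the paths are examples, not the actual project layout.

# A sketch of the psake properties block - variable names match the task above,
# but the paths are placeholders for your own layout.
properties {
  $base_directory  = resolve-path .
  $build_directory = "$base_directory\build"
  $coverage_runner = "$base_directory\tools\partcover\PartCover.exe"
}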

Now, you should be able to drop to the command line and execute your test task.  A partcover_results.xml file should appear in your build directory.  But who wants to read XML?  Where are the shiny HTML reports?

We need to create them! 🙂

Step 3: Generating the HTML report from PartCover

There are two XML style sheets included with PartCover; however, I really like Gáspár Nagy's detailed report found at http://gasparnagy.blogspot.com/2010/09/detailed-report-for-partcover-in.html. It's clean, detailed, and fast.  I've downloaded that XSLT and placed it in the PartCover directory.

Without TeamCity to do the heavy lifting, we need an XSL transformer of our own.  For my purposes, I’m using the one bundled with Sandcastle (http://sandcastle.codeplex.com/).  Install (or extract the MSI) and copy the contents to your project (ex: .\tools\sandcastle).

The syntax for xsltransform.exe is:

xsltransform.exe {xml_input} /xsl:{transform file} /out:{output html file}

In this case, since I’m storing the partcover_results.xml in my $build_directory, the full command would look like (without the line breaks):

    .\tools\sandcastle\xsltransform.exe .\build\partcover_results.xml
        /xsl:.\tools\partcover\partcoverfullreport.xslt
        /out:.\build\partcover.html

Fantastic.  Now, we want that to call every time we run our tests, so let’s add it to our psake script.

task Test -depends Compile {
  $test_assemblies = (gci $build_directory\* -include *Spec.dll,*Test.dll)
  if ($test_assemblies -ne $null) {
    " - Found tests/specifications..."
    foreach($test in $test_assemblies) {
      " - Executing tests and coverage on $test..."
      $testExpression = "$coverage_runner --register --settings $base_directory\partcover.xml --output $build_directory\partcover_results.xml"
      $coverageExpression = "$transform_runner $build_directory\partcover_results.xml /xsl:$transform_xsl /out:$build_directory\partcover.html"
      exec { invoke-expression $testExpression }
      " - Converting coverage results for $test to HTML report..."
      exec { invoke-expression $coverageExpression }
    }
  }
  else {
    " - No tests found, skipping step."
  }
}

Now we have a $coverageExpression to invoke.  The path to xsltransform.exe and the XSLT file are tied up in variables ($transform_runner and $transform_xsl) for easy updating.
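
Those two variables sit alongside the others in the properties block; a sketch (again, the paths are examples):

# Added to the psake properties block - example paths only.
$transform_runner = "$base_directory\tools\sandcastle\xsltransform.exe"
$transform_xsl    = "$base_directory\tools\partcover\partcoverfullreport.xslt"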

You should be able to run your test task again and open up the HTML file.

Step 4: Setting up the Code Coverage tab in TeamCity

The last step is the easiest. As I haven't found a way to have the Code Coverage tab automatically pick up the PartCover reports, we'll need to add a custom report tab.

In TeamCity, go under Administration > Server Configuration > Report Tabs.

Add a new report tab with the Start Page set to the name of your PartCover HTML file.  In my case, it’s partcover.html.  Remember, the base path is your build directory (which is why I’m outputting the file to the $build_directory in psake).

Commit your new psake script and PartCover to your repository (if you haven't already) and run the build; you'll see a new Code Coverage tab ready and waiting.

Screenshot: the new Code Coverage tab in TeamCity

Setting up a NuGet PowerShell Profile

November 15, 2010 Comments off

While NuGet alone is a pretty spectacular package management system, one of the hottest features involves the underlying PowerShell-based Package Manager Console (we'll call it PM from here out).  It may look fancy and have a funky PM> prompt, but it's still PowerShell.

Considering I use psake and live in PowerShell most of the day, I wanted to do a bit of customizing.  Out of the box… urr… package, PM is barebones PowerShell.

Screenshot: the out-of-the-box Package Manager Console

Well, that’s pretty boring!

The question is, how would I customize things?

The answer? Setting up a profile!

Step 1: Finding the NuGet Profile

Just like a normal PowerShell installation, the $profile variable exists in PM.

Screenshot: echoing $profile in the Package Manager Console

Okay, that was easy.  If you try to edit the file, however, it doesn't exist yet. You can either use the touch command to create an empty file and then edit it with Notepad, or simply run Notepad with the $profile path; it'll ask you to create the file. 🙂

For an example, we’ll just pipe a bit of text into our profile and see what happens.

Screenshot: piping a bit of text into the NuGet profile

Now, close Visual Studio (PM only seems to load the profile when Visual Studio first starts) and relaunch it.  PM should now welcome you!

Screenshot: the Package Manager Console greeting from the new profile

 

Step 2: Customize, customize, customize!

Now we're ready to add variables, set up custom paths and scripts, add some git-tastic support, and add modules (like psake).  There are quite a few posts on this blog about customizing PowerShell's environment if you'd like ideas to borrow.

Remember: Since we're using a separate PowerShell profile, be frugal with your commands and keep them "development centric".  For example, I don't load HyperV modules, Active Directory management commands, and other "non-Solution" things into PM. Visual Studio is slow enough; don't bog it down. 🙂

This is also a great opportunity to trim your profile down and break it into modular pieces (whether that be scripts or modules).  Keep those profiles as DRY as possible.
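
As a starting point, here's a minimal sketch of what a development-centric PM profile might contain; the module path and aliases are assumptions for illustration, not a prescription.

# Hypothetical PM profile - adjust the paths to wherever your tools live.
# Keep it light: only load what you actually use inside Visual Studio.

# Load psake so build tasks can be invoked from the PM prompt.
Import-Module "C:\tools\psake\psake.psm1"

# A couple of development-centric aliases.
Set-Alias np notepad
Set-Alias g  git

# A friendly reminder of where this profile lives.
Write-Host "PM profile loaded from $profile"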

 

A few caveats…

There do seem to be a few caveats while in the PM environment:

1. Execution, whether it’s an actual executable, a script, or piping something to more, seems to be tossed into another process, executed, and then the results returned to the PM console window. Not terrible, but something to be aware of if it seems like it’s not doing anything.

2. I can’t find a good way to manipulate the boring PM> prompt. It seems that $Host.UI is pretty much locked down. I’m hopeful that will change with further releases because not KNOWING where I am in the directory structure is PAINFUL.

Using PowerShell to Kick off SQL Server Agent Jobs

We’re in the process of a few upgrades around our office and one of those will require a full shutdown mid-business day. To expedite the shutdown, I wanted a quick way to run our backup jobs (hot backups), move them, and down the boxes.

Remember, I’m lazy–walking around our server room smacking rack switches takes effort. I want to sit and drink coffee while the world comes to a halt.

Here’s a bit of what I came up with. The script handles four things:

  • accepts a server name parameter (so I can iterate through a list of servers),
  • checks to see if the server is online (simple ping),
  • checks to see if it’s a SQL Server (looks for the MSSQLSERVER service),
  • iterates through the jobs and, if the -run switch is passed, kicks them off.

As with most, the couple of helper functions are more than the actual script itself.

param (
  [string]$server = $(Read-Host -prompt "Enter the server name"),
  [switch]$run = $false
)

# little helper function to ping the remote server.
function get-server-online ([string] $server) {
  $serverStatus = 
    (new-object System.Net.NetworkInformation.Ping).Send($server).Status -eq "Success"
  
  Write-Host "Server Online: $serverStatus"
  return $serverStatus
}

# little helper function to check to see if the remote 
# server has SQL Server running.
function is-sql-server([string] $server) {
   if (get-server-online($server)) {
    
    $sqlStatus = (gwmi Win32_service -computername $server | 
        ? { $_.name -like "MSSQLSERVER" }).Status -eq "OK"
        
    Write-Host "SQL Server Found: $sqlStatus"
    return $sqlStatus
   }
}

# uhh, because someday I may need to add more to this?
function cannot-find-server([string] $server) {
  Write-Host "Cannot contact $server..."
}

if (is-sql-server($server)) {
  
    [System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO') | Out-Null
    $srv = new-object ('Microsoft.SqlServer.Management.Smo.Server') $server
    $jobs = $srv.JobServer.Jobs | ? {$_.IsEnabled -eq $true} 

    $jobs | 
        select Name, CurrentRunStatus, LastRunOutcome, LastRunDate, NextRunDate | 
        ft -a
    
    if ($run) {
      $jobs | % { $_.Start(); "Starting job: $_ ..." }
      
    }
}
else {
  cannot-find-server($server)
}
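
To cover a couple dozen instances, the script can be fed from a simple list. Here's a hypothetical usage sketch; the file name and script name are placeholders for whatever you save things as.

# Kick off the backup jobs on every server listed in servers.txt.
# Start-SqlAgentJobs.ps1 is a placeholder name for the script above.
Get-Content .\servers.txt | ForEach-Object {
  .\Start-SqlAgentJobs.ps1 -server $_ -run
}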

Things left to do–someday…?

  • Allow picking and choosing what jobs get kicked off…

It’s not fancy, but it’ll save time backing up and shutting down a couple dozen SQL instances and ensuring they’re not missed in the chaos. 🙂

Getting buildNumber for TeamCity via AssemblyInfo

June 16, 2010 Comments off

I’m a proud psake user and love the flexibility of PowerShell during my build process. I recently had a project that I really wanted the build number to show up in TeamCity rather than the standard incrementing number.

In the eternal words of Jeremy Clarkson, “How hard can it be?”

On my local machine, I have a spiffy "gav" (get-assembly-version) command that uses Reflection to grab the assembly version.  Unfortunately, since I don't want to rely on the local version of .NET and I've already set the build number in AssemblyInfo.cs as part of my build process, I just want to fetch what's in that file.
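
For reference, that reflection-based approach looks roughly like the sketch below (the path is just an example); the catch is that it loads the compiled assembly, which is exactly the dependency I want to avoid on the build server.

# A rough sketch of a "gav"-style helper: load the built assembly and
# read its version via Reflection. The path is an example only.
function get-assembly-version([string] $path) {
  [System.Reflection.Assembly]::LoadFile((Resolve-Path $path).Path).GetName().Version
}

get-assembly-version ".\build\MyApp.dll"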

Regular expressions to the rescue!

Here's the final psake task. I call it as part of my build/release tasks and it generates the service message output that TeamCity needs to set the build number. Here's a gist of the source: http://gist.github.com/440646.

Here’s the actual source to review:

task GetBuildNumber { 
  $version = gc $base_directory\$solution_name\Properties\AssemblyInfo.cs | select-string -pattern "AssemblyVersion"
  
  $version -match '^\[assembly: AssemblyVersion\(\"(?<major>[0-9]+)\.(?<minor>[0-9]+)\.(?<revision>[0-9]+)\.(?<build>[0-9]+)\"\)\]'
  
  "##teamcity[buildNumber '{0}.{1}.{2}.{3}']" -f $matches["major"], $matches["minor"], $matches["revision"], $matches["build"]
}

Enjoy!

Creating Local Git Repositories – Yeah, it’s that simple!

November 9, 2009 Comments off

We’re in the process of evaluating several source control systems here at the office.  It’s a JOYOUS experience and I mean that… sorta.  Currently, we have a montage of files dumped into two SourceSafe repos.

At this point, most of you are either laughing, crying, or have closed your web browser so you don’t get caught reading something with the word ‘SourceSafe’ in it.  That’s cool, I get that.

As part of my demonstrations (for the various tools we're looking at), I wanted to show how simple it is to create a local Git repository.

Since I’m discussing ‘local’, I’m taking a lot of things out of the picture.

  • Authentication/authorization is handled by Windows file permissions/Active Directory.  If you can touch the repo and read it, then knock yourself out.
  • Everything is handled through file shares; no HTTP or any of that goodness.  Users are used to hitting a mapped drive for SourceSafe repos, so we're just emulating that.

So what do we need to do to create a Git repository? Three easy steps (commands).

1. Create your directory with the .git extension.

mkdir example.git

2. Change into that new directory.

cd example.git

3. Initialize the repository using Git.

git init --bare

Our new Git repository is now initialized. You can even see that PowerShell picks up on the new repository and has changed my prompt accordingly.

Local Git Repository - example.git

Now that we have example.git as our ‘remote’ repository, we’re ready to make a local clone.

git clone H:\example.git example

Now we have an empty clone (and it’s kind enough to tell us).

Empty clone.

All of our Git goodness is packed into the standard .git directory.

Git clone contents.

To test things out, let’s create a quick file, add it to our repo, and then commit it in.

First, let’s create/append an example file in our clone.

Create/append an example file to Git

(note: this is called ‘so lazy, I don’t even want to open NotePad’)

Now, we can add and verify that our newly added ‘example.text’ file shows up:

Git - add file and verify
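
The screenshots show the exact commands; roughly, the create/add/verify sequence amounts to something like this (run from the PowerShell prompt, using the example.text file mentioned above):

echo "Hello, Git!" >> example.text
git add .
git status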

Finally, commit…

git commit -a -m "this is our first commit"

First commit

… and push!

git push origin master

Push to origin repo.

The last step is to ensure that our new ‘remote’ repository can be reproduced.  Here’s a quick single line to clone our repository into another new folder, list out the contents, and verify the log has the commit we made in the first repo.
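
The screenshot below shows the actual run; the one-liner amounts to something like this (the folder name is just an example):

git clone H:\example.git example2; cd example2; ls; git log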

Finished Second Clone to Verify

It’s amazing how quick and easy it is to setup local Git repositories.  From here, we can look at the file system/Windows for authentication/authorization and focus on our migration AWAY from Visual SourceSafe. 🙂


PowerShell: Recreating SQL*Plus ‘ed’ Command

November 9, 2009 Comments off

Aside from PowerShell, I spend a bit of time day-to-day in Oracle’s SQL*Plus.  I’m slowly replacing my dependency on SQL*Plus with PowerShell scripts, functions, and such, but there are some core things that it’s just easier to bring up a console and hack out.

One thing I really love about SQL*Plus is the ‘ed’ command.  The ‘ed’ command dumps your last statement into a temp file, then opens it in Notepad.  Make your changes in Notepad, save, and rerun (in SQL*Plus, it’s a single slash and then Enter), and the frustration of lengthy command line editing is solved.

So, this is a work in progress—my attempt to simulate the same process in PowerShell.

Example Situation

Let’s say, for example, we have a long command we have been hacking away at, but are getting tired of arrowing back and forth on:

$count = 0; sq mydb "select id from reports" | % { $count++ }; $count

(Yes, yes, it’s not ‘very long’, but that’s why this is called an example!)

So, a simple row counter.

The ‘ed’ Command

To recreate the ‘ed’ experience, I need two things:

  • Access to the command line history via get-history.
  • A temporary file.

touch c:\temp.tmp

get-history | ? { $_.commandline -ne "ex" } |
  ? { $_.commandline -ne "ed" } |
  select commandline -last 1 |
  % { $_.CommandLine } > c:\temp.tmp

np c:\temp.tmp

Stepping through the code:

  1. Create a temporary file.  I used c:\temp.tmp, but you could get a bit fancier and use the system's $TEMP directory or something.
  2. Get-History.  This returns an ascending list of your command history in PowerShell by ID and CommandLine.  We want to ignore any calls to our ‘ed’ or ‘ex’ functions and grab the “last 1”—then insert it into our temp file.
  3. Finally, open up the temp file in NotePad (np is an alias on my PowerShell profile for NotePad2).

So now that I have my ed command, let’s try it out after I’ve already ran my simple row counter.

Last Command in NotePad2

Cool.  We can now make changes to this and SAVE the temp file.  The next step would be to jump back to PowerShell and execute the updated command.

The ‘ex’ Command

Since the slash has special meaning in a PowerShell window, I simply named this script ‘ex’ instead.  Ex… execute… surely I can’t forget that, right?

get-content c:\temp.tmp |
  % {
    $_
    invoke-expression $_
  }

The ‘ex’ command is pretty simple.

  1. Get the content of our temporary file from get-content.
  2. For each line (since get-content returns an array of lines), print out the line as text and then invoke (run) it using invoke-expression.

Invoke-Expression takes a string as a command (from our text file) and returns the results.  For more information on Invoke-Expression, check out MSDN.
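
To make them feel like real commands, both can be packaged as functions in your PowerShell profile. Here's a rough sketch under the same assumptions as above (c:\temp.tmp as the scratch file and np as a Notepad alias):

# Rough profile versions of 'ed' and 'ex' - same logic as the scripts above,
# just wrapped as functions. The temp path and np alias are assumptions.
function ed {
  get-history | ? { $_.commandline -ne "ex" -and $_.commandline -ne "ed" } |
    select -last 1 | % { $_.CommandLine } > c:\temp.tmp
  np c:\temp.tmp
}

function ex {
  get-content c:\temp.tmp | % { $_; invoke-expression $_ }
}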

Wrapping It Up

Now that we have our two commands—ed and ex—we’re ready for those “oh, this is short and easy” commands that turn into multi-page monsters—without fighting tabs and arrow keys.

As I said, this is a work-in-progress.  I’ll update the post as new/better ideas come up. 🙂

PowerShell : TGIF Reminder

November 6, 2009 1 comment

Jeff Hicks’ TGIF post yesterday was a fun way to learn a bit of PowerShell (casting, working with dates, etc) and also add something valuable to your professional toolbelt—knowing when to go home. 🙂

I tweaked the date output a bit to be more human readable, but also moved it from just a function to part of my UI.  I mean, I should ALWAYS know how close I am to quittin’ time, right? Especially as we joke around our office during our ‘payless weeks’.

# Determines if today is the end of friday! (fun function)
function get-tgif {
 $now = Get-Date
 # If it's Saturday (6) or Sunday (0) then Stop! It's the weekend!
 if ([int]$now.DayOfWeek -eq 6 -or [int]$now.DayOfWeek -eq 0)
 {
  "Weekend!"
 }
 else {
  # Determine when the next quittin' time is...
  [datetime]$TGIF = "{0:MM/dd/yyyy} 4:30:00 PM" -f `
   ($now.AddDays(5 - [int]$now.DayOfWeek))

  # Must be Friday, but after quittin' time, GO HOME!
  if ((get-date) -ge $TGIF) {
   "TGIF has started without you!"
  }
  else {
   # Awww, still at work--how long until
   # it's time to go to the pub?
   $diff = $TGIF - (get-date)
   "TGIF: $($diff.Days)d $($diff.Hours)h $($diff.Minutes)m"
  }
 }
}

NOTE: My “end time” is set to 4:30PM, not 5:00PM—since that’s when I escape.  Change as necessary. 😉

The code comments explain most of it.  As you can see, I added in one more check—let me know when it’s simply the weekend.  I also removed the Write-Host calls, since I simply want to return a String from the function.  I could use the function, as necessary, with formatting, and add it to other scripts and such.  For example:

Write-Host $(get-tgif) -fore red

The next step was tapping into the $Host variable.  Since I use Console2, my PowerShell window is a tab rather than the whole window.  Console2 is aware of PowerShell’s $Host.UI methods and adheres to the changes.

To add get-tgif to my prompt’s window title:

$windowTitle = "(" + $(get-tgif) + ") "
$host.ui.rawui.WindowTitle = $windowTitle

Easy enough.  Now my window title looks like (ignore the path in there for now):

TGIF countdown in WindowTitle

But that only sets it when you log in… and I want to update it (and keep that path updated as I traverse through directories).  To do that, add a function called 'prompt' to your PowerShell profile.  Prompt is processed every time the "PS>" prompt is generated and allows you a great deal of customization.  See my earlier post for further details on how I've customized my prompt to handle Git repositories.

So, move those two lines into our prompt function, and our TGIF timer now updates every time our prompt changes… keeping it pretty close to real time as you work.

function prompt {
 $windowTitle = "(" + $(get-tgif) + ") " + $(get-location).ToString()
 $host.ui.rawui.WindowTitle = $windowTitle

 […]

}

This could be tweaked to any type of countdown.  I’m sure a few of those around the office would have fun adding retirement countdowns, etc. 😉

Happy TGIF!