Archive

Posts Tagged ‘PowerShell’

Mashing CSVs around using PowerShell

February 15, 2012 2 comments

Since I spend most of my day in the console, PowerShell also serves as my ‘Excel’. So, continuing my recent trend of PowerShell-related posts, let’s dig into a quick and easy way to parse up CSV files (or most any type of file) by creating objects!

We, of course, need a few rows of example data. Let’s use this pseudo student roster.

Example data:

Student,Code,Product,IUID,TSSOC,Date
123456,e11234,Reading,jsmith,0:18,1/4/2012
123456,e11234,Reading,jsmith,1:04,1/4/2012
123456,e11234,Reading,jsmith,0:27,1/5/2012
123456,e11234,Reading,jsmith,0:19,1/7/2012
123456,e11235,Math,jsmith,0:14,1/7/2012

Now, for reporting, I want my ‘Minutes’ to be calculated from the TSSOC column (hours:minutes). Easy, we have PowerShell–it can split AND multiply!

The code:

Begin by creating an empty array to hold our output, importing our data into the ‘pipe’, and opening up an iteration (foreach) block. The final $out is our return value: echoing the array so we can see our results.

$out = @()
import-csv data_example.csv |
   % {

   }
$out

Next, let’s add in our logic to split out the hours and minutes. We have full access to the .NET string methods in PowerShell, which includes .Split(). .Split() returns an array, so since we have HH:MM, our first number is our hours and our second number is our minutes. Hours then need to be multiplied by 60 to convert them to minutes.

You’ll also notice the [int] casting, which ensures we can properly multiply. Give it a whirl without the cast and you’ll get sixty 0’s or 1’s back (PowerShell multiplies the string instead).

$out = @()
import-csv data_example.csv |
   % {
	$hours = [int]$_.TSSOC.Split(':')[0] * 60
	$minutes = [int]$_.TSSOC.Split(':')[1]
   }
$out

The next step is to create a new object to contain our return values. We can use the new PowerShell v2.0 syntax to create a quick hashtable of our properties and values. Once we have our item, add it to our $out array.

$out = @()
import-csv data_example.csv |
   % {
	$hours = [int]$_.TSSOC.Split(':')[0] * 60
	$minutes = [int]$_.TSSOC.Split(':')[1]
        $item = new-object PSObject -Property @{
			Date=$_.Date;
			Minutes=($hours + $minutes);
			UserId=$_.IUID;
			StudentId=$_.Student;
			Code=$_.Code;
			Course=$_.Product
		}
	$out = $out + $item
   }

With that, we’re done: we can pipe it through Sort-Object for a bit of sorting, grouping, table formatting, or even export it BACK out as another CSV.

$out = @()
import-csv data_example.csv |
   % {
	$hours = [int]$_.TSSOC.Split(':')[0] * 60
	$minutes = [int]$_.TSSOC.Split(':')[1]
        $item = new-object PSObject -Property @{
			Date=$_.Date;
			Minutes=($hours + $minutes);
			UserId=$_.IUID;
			StudentId=$_.Student;
			Code=$_.Code;
			Course=$_.Product
		}
	$out = $out + $item
   }
$out | sort-object Date, Code | ft -a
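Exporting back out is just one more pipe (the output file name is whatever you like):

$out | sort-object Date, Code | export-csv minutes_report.csv -NoTypeInformation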

Quick and dirty CSV manipulation–all without opening anything but the command prompt!

UPDATE: Matt has an excellent point in the comments below. PowerShell isn’t the ‘golden hammer’ for every task; it’s about finding the right tool for the job. We’re a mixed environment (Windows, Solaris, RHEL, Ubuntu), so PowerShell only applies to our Windows boxes. However, as a .net developer, I spend 80-90% of my time on those Windows boxes. So let’s say it’s a silver hammer. :)

Now, the code in this post looks pretty long–and hopping back and forth between notepad, the CLI, and your CSV is tiresome. I bounce back and forth between the CLI and notepad2 with the ‘ed’ and ‘ex’ functions (these commands are ‘borrowed’ from Oracle PL/SQL). More information here.

So how would I type this if my boss ran into my cube with a CSV and needed a count of Minutes?

$out=@();Import-Csv data_example.csv | % { $out += (new-object psobject -prop @{ Date=$_.Date;Minutes=[int]$_.TSSOC.Split(':')[1]+([int]$_.TSSOC.Split(':')[0]*60);UserId=$_.IUID;StudentId=$_.Student;Code=$_.Code;Course=$_.Product }) }; $out | ft -a

Now, that’s quicker to type, but a LOT harder to explain. ;) I’m sure this can be simplified down–any suggestions? If you could do automatic/implied property names, that’d REALLY cut it down.
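One option, if you’re on the PowerShell v3 beta (or whenever it ships), is the [pscustomobject] accelerator. It doesn’t infer property names, but it does drop the new-object noise; a sketch:

$out=@();Import-Csv data_example.csv | % { $out += [pscustomobject]@{ Date=$_.Date;Minutes=[int]$_.TSSOC.Split(':')[1]+([int]$_.TSSOC.Split(':')[0]*60);UserId=$_.IUID;StudentId=$_.Student;Code=$_.Code;Course=$_.Product } }; $out | ft -a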

DeployTo – a simple PowerShell web deployment script

February 10, 2012 1 comment

We’re constantly working to standardize how builds get pushed out to our development, UAT, and production servers. The typical ‘order of operations’ includes:

  1. compile the build
  2. backup the existing deployment
  3. copy the new deployment
  4. celebrate

Pretty simple, but with a few moving parts (git push, TeamCity pulls in, compiles, runs deployment procedures, IIS (hopefully) doesn’t explode).

Our first pass at standardizing was to bake these steps into our psake scripts, but that got tiring (and dangerous when we found a flaw).  When in doubt, refactor!

First, get the codez!

DeployTo.ps1 and an example settings.xml file.

Creating a simple deployment tool – DeployTo

The PowerShell file, DeployTo.ps1, should be located in your project, your PATH, or wherever your CI server can find it–I tend to include it in a folder we have that synchronizes to ALL of our build servers automatically via Live Mesh. You could include it with your project to ensure dependencies are always met (for public projects).

DeployTo has one expectation: that a settings.xml file (or the file passed in the Settings argument) contains a breakdown of your deployment paths.

Example:

<site>
    <name>development</name>
    <path>\\server\webs\path</path>
</site>
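Since the loop below reads $xml.settings.site, the full file presumably wraps the site nodes in a settings root, along these lines (the second site is just an example):

<settings>
    <site>
        <name>development</name>
        <path>\\server\webs\path</path>
    </site>
    <site>
        <name>production</name>
        <path>\\server2\webs\path</path>
    </site>
</settings>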

With names and paths in hand, DeployTo sets about matching the passed-in deployment location to what exists in the file. If one is found, it proceeds with the backup and deployment process.

Calling DeployTo is as simple as:

deployto development

Now, looping through our settings.xml file looking for ‘deployment’:

foreach ($site in $xml.settings.site) {
    if ($site.name.ToLower() -eq $deploy.ToLower()) {
        writeMessage ("Found deployment plan for {0} -> {1}." -f $site.name, $site.path)
	if ($SkipBackup -eq $false) {
	    backup($site)
	}
	deploy($site)
	$success = $true
	break;
    }
}

The output also lets us know what’s happening (and is helpful for diagnosing issues in your CI’s build logs).

Deploying to DEVELOPMENT
Reading settings file at settings.xml.
Testing release path at .\release.
Found deployment plan for development -> \\server\site.
Making backup of 255 file(s) at \\server\site to \\server\site-2012-02-10-105321.
Backup succeeded.
Removing existing files at \\server\site.
Copying new release to \\server\site.
Deployment succeeded.
SUCCESS!

Backing up – A safety net when things go awry.

Your builds NEVER go bad, right? Deployments work 100% of the time? Right? Sure. ;) No matter how many staging sites you test on, things can go bad on a deployment. That’s why we have BACKUPS. I could get fancy and .7z/.gzip up the files and such, but a simple directory copy gives me exactly what I need.

The backup function itself is quite simple: take the deployment directory and copy it into a new directory named with the original directory name + the current date/time.

function backup($site) {
try {
    $currentDate = (Get-Date).ToString("yyyy-MM-dd-HHmmss");
    $backupPath = $site.path + "-" + $currentDate;

    $originalCount = (gci -recurse $site.path).count

    writeMessage ("Making backup of {0} file(s) at {1} to {2}." -f $originalCount, $site.path, $backupPath)
    
    # do the actual file copy, but ignore the thumbs.db file. It's such a horrid little file.
    cp -recurse -exclude thumbs.db $site.path $backupPath

    $backupCount = (gci -recurse $backupPath).count	

    if ($originalCount -ne $backupCount) {
      writeError ("Backup failed; attempted to copy {0} file(s) and only copied {1} file(s)." -f $originalCount, $backupCount)
    }
    else {
      writeSuccess ("Backup succeeded.")
    }
}
catch
{
    writeError ("Could not complete backup. EXCEPTION: {1}" -f $_)
}
}

Deploying — copying files, plain and simple

Someday, I may have the need to be fancy. Since IIS automatically restarts the application when a new web.config is added, I don’t have any ‘logic’ to my deployment scripts. We also, for now, keep our database deployments separate from our web deployments. Deploying is just copying files; however, who wants to do that by hand? Not me.

function deploy($site) {
try {
    writeMessage ("Removing existing files at {0}." -f $site.path)

    # force, because thumbs.db is annoying
    rm -force -recurse $site.path

    writeMessage ("Copying new release to {0}." -f $site.path)

    cp -recurse -exclude thumbs.db  $releaseDirectory $site.path
    $originalCount = (gci -recurse $releaseDirectory).count
    $siteCount = (gci -recurse $site.path).count
    
    if ($originalCount -ne $siteCount)
    {
      writeError ( "Deployment failed; attempted to copy {0} file(s) and only copied {1} file(s)." -f $originalCount, $siteCount)
    }
    else {
      writeSuccess ("Deployment succeeded.")
    }
}
catch {
    writeError ("Could not deploy. EXCEPTION: {1}" -f $_)
}
}

That’s it.

One thing you’ll notice in both scripts is that I’m doing a bit of monitoring and testing.

  • Do paths exist before we begin the process?
  • Do the backed up/copied/original file counts match?
  • Did anything else go awry so we can throw a general error?

It’s a work in progress, but has met our needs quite well over the past several months with psake and TeamCity.

Posting to Campfire using PowerShell (and curl)

January 16, 2012 Comments off

I have a few tasks that kick off nightly that I wanted to post status updates into our team’s Campfire room. Thankfully, 37signals Campfire has an amazing API.  With that knowledge, time to create a quick PowerShell function!

NOTE: I use curl for this. The Linux/Unix folks likely know curl, however, I’m sure the Windows folks have funny looks on their faces. You can grab the latest curl here for Windows (the Win32 or Win64 generic versions are fine).

The full code for this function is available via gist.

I pass two parameters: the room number (though this could be tweaked to be optional if you only have one room) and the message to post.

param (
 [string]$RoomNumber = (Read-Host "The room to post to (default: 123456) "),
 [string]$Message = (Read-Host "The message to send ")
)
$defaultRoom = "123456"
if ($RoomNumber -eq "") {
 $RoomNumber = $defaultRoom
}

There are two baked-in variables, the authentication token for your posting user (we created a ‘robot’ account that we use) and the YOURDOMAIN prefix for Campfire.

$authToken = "YOUR AUTH TOKEN"
$postUrl = "https://YOURDOMAIN.campfirenow.com/room/{0}/speak.json" -f $RoomNumber

The rest is simply using curl to HTTP POST a bit of JSON back up to the web service. If you’re not familiar with the JSON data format, give this a quick read. The best way I can sum up JSON is that it’s XML objects for the web with less wrist-cutting. :)

$data = "`"{'message':{'body':'$message'}}`""

$command = "curl -i --user {0}:X -H 'Content-Type: application/json' --data {1} {2}" 
     -f $authToken, $data, $postUrl

$result = Invoke-Expression ($command)

if ($result[0].EndsWith("Created") -ne $true) {
	Write-Host "Error!" -foregroundcolor red
	$result
}
else {
	Write-Host "Success!" -foregroundcolor green
}
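Assuming the script is saved as SendTo-Campfire.ps1 (the file name is my own guess), kicking off a post looks like:

.\SendTo-Campfire.ps1 -RoomNumber 123456 -Message "Nightly backup complete!"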
[Screenshot: running SendTo-Campfire with feedback]

Indeed, there be success!

It’s important to remember that PowerShell IS extremely powerful, but can become even more powerful coupled with other available tools–even the web itself!

Changing Default Printers by Network Subnet

January 13, 2012 Comments off

Windows 7 includes a pretty handy feature for mobile devices called location-aware printing. The feature itself is pretty cool and great if you’re moving between two distinct networks (home and work, for example). However, if you’re moving within the SAME network–and the SAME wireless SSID, it doesn’t register a difference. LAP doesn’t pay attention to your IP address, just the SSID you’re connected to.

In our organization, and most large corporations, wireless access points have the same name/credentials so that users can move seamlessly through the enterprise. How can we address location-based printing then?

One of my peers recently moved into a position where she’s constantly between two buildings multiple times per day and frequently forgets to reset her default printer.

Here’s how I helped her out using a bit of PowerShell.

For the full code, check out this gist.

[Screenshot: Set-PrinterByLocation in action]

To begin, we need to specify our IP subnets and the printers associated with them. As this gets bigger (say 4-5 sites), it’d be easier to toss this into a separate file as key-value pairs and import it (see the sketch below).

$homeNet = "10.1.4.*", "OfficePrinter"
$remoteNet = "10.1.6.*", "W382_HP_Printer"
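A sketch of that import, assuming a hypothetical printers.csv with Subnet and Printer columns:

# printers.csv (hypothetical):
#   Subnet,Printer
#   10.1.4.*,OfficePrinter
#   10.1.6.*,W382_HP_Printer
$printerMap = import-csv printers.csv  # each row now has .Subnet and .Printer properties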

Next, let’s grab all of the IP addresses currently active on our computer. Since we could have both wireless and wired plugged in, this returns an array.

$ipAddress = @()
$ipAddress = gwmi win32_NetworkAdapterConfiguration |
	? { $_.IPEnabled -eq $true } |
	% { $_.IPAddress } |
	% { [IPAddress]$_ } |
	? { $_.AddressFamily -eq 'internetwork'  } |
	% { $_.IPAddressToString }

Write-Host -fore cyan "Your current network is $ipAddress."

Our last step is to switch (using the awesome -wildcard flag since we’re using wildcards ‘*’ in our subnets) based on the returned IPs. The Set-DefaultPrinter function is a tweaked version of this code from The Scripting Guy.

function Set-DefaultPrinter([string]$printerPath) {
	$printers = gwmi -class Win32_Printer -computer .
	Write-Host -fore cyan "Default Printer: $printerPath"
	$dp = $printers | ? { $_.deviceID -match $printerPath }
	$dp.SetDefaultPrinter() | Out-Null
}

switch -wildcard ($ipAddress) {
	$homeNet[0] { Set-DefaultPrinter $homeNet[1] }
	$remoteNet[0] { Set-DefaultPrinter $remoteNet[1] }
	default { Set-DefaultPrinter $homeNet[1] }

The full source code (a constantly updated version is available from gist):

$homeNet = "10.1.4.*", "OfficePrinter"
$remoteNet = "10.1.6.*", "W382_HP_Printer"

function Set-DefaultPrinter([string]$printerPath) {
	$printers = gwmi -class Win32_Printer -computer .
	Write-Host -fore cyan "Default Printer: $printerPath"
	$dp = $printers | ? { $_.deviceID -match $printerPath }
	$dp.SetDefaultPrinter() | Out-Null
}

$ipAddress = @()
$ipAddress = gwmi win32_NetworkAdapterConfiguration |
	? { $_.IPEnabled -eq $true } |
	% { $_.IPAddress } |
	% { [IPAddress]$_ } |
	? { $_.AddressFamily -eq 'internetwork'  } |
	% { $_.IPAddressToString }

Write-Host -fore cyan "Your current network is $ipAddress."

switch -wildcard ($ipAddress) {
	$homeNet[0] { Set-DefaultPrinter $homeNet[1] }
	$remoteNet[0] { Set-DefaultPrinter $remoteNet[1] }
	default { Set-DefaultPrinter $homeNet[1] }
}

Finding TODOs and Reporting in PowerShell

December 28, 2011 Comments off

After @andyedinborough shared a blog post on how to report TODOs in CruiseControl.NET, I figured there HAD to be a snazzy way to do this with TeamCity.

I already have TeamCity configured to dump out the results from our Machine.Specifications tests and PartCover code coverage (simple HTML and XML transforms), so another HTML file shouldn’t be difficult. Since we’ll end up with an HTML file, we just need to set up a custom report that looks for the file name. For more information on setting up custom reports in TeamCity, check out this post (step 3 in particular).

Let’s Dig In!

 

The code! The current working version of the code is available via this gist.

The code is designed to be placed in a file of its own. Mine is named Get-TODO.ps1.

Note: If you want to include this in another file (such as your PowerShell profile), be sure to convert the param() block into function Get-TODO( {all the params} ).

The script parameters make a few assumptions based on my own coding standards; change as necessary:

  • My projects are named {SolutionName}.Web/.Specifications/.Core and are inside a directory called {SolutionName},
  • By default, it includes several file extensions: .spark, .cs, .coffee, .js, and .rb,
  • By default, it excludes several directories: fubu-content, packages, build, and release.

It also includes a couple of flags for Html output–we’ll get to that in a bit.

param(
	#this is appalling; there has to be a better way to get the raw name of "this" directory. 
	[string]$DirectoryMask = 
		(get-location),
	[array]$Include = 
		@("*.spark","*.cs","*.js","*.coffee","*.rb"),
	[array]$Exclude = 
		@("fubu-content","packages","build","release"),
	[switch]$Html,
	[string]$FileName = "todoList.html")

Fetching Our Files

We need to grab a collection of all of our solution files. This will include the file extensions from $Include and use the directories from $DirectoryMask (which the default is everything from the current location).

$originalFiles = 
	gci -Recurse -Include $Include -Path $DirectoryMask;

Unfortunately, the -Exclude flag is…well, unhappy, in PowerShell as it doesn’t seem to work with -Recurse (or if it does, there’s voodoo involved).

How do we address voodoo issues? Regular expressions. Aww, yeah.

$withoutExcludes = $originalFiles | ? {
	if ($Exclude.count -gt 0) {
		#need moar regex
		[regex] $ex = 
			# 1: case insensitive and start of string
			# 2: join our array, using RegEx to escape special characters
			# the * allow wildcards so directory names filter out
			# and not just exact paths.
			# 3: end of string
			#  1                   2										3
			'(?i)^(.*'+(($Exclude|%{[regex]::escape($_)}) -join ".*|.*")+'.*)$'
		$_ -notmatch $ex
	}
	else { 
		$_ 
	}
}

Breathe, it’s okay. That is mostly code comments to explain what each line is doing.  This snippet dynamically creates a regular expression string based on our $Exclude parameter, then compares the current file in the iteration to the regular expression and returns the ones that do not match.  If $Exclude is empty, it simply returns all files.
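With the default excludes, the dynamically built pattern comes out to:

(?i)^(.*fubu-content.*|.*packages.*|.*build.*|.*release.*)$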

Finding our TODOs

Select-String gives us a quick way to find patterns inside text files, with helpful output such as the line of text, line number, and file name (select-string is PowerShell’s rough equivalent of grep).

$todos = $withoutExcludes | 
	% { select-string $_ -pattern "TODO:" } |
	select-object LineNumber, FileName, Line;

Let’s take all of our remaining files, look at them as STRINGs and find the pattern "TODO:".  This returns an array of MatchInfo objects. Since these have a few ancillary properties we don’t want, let’s simply grab the LineNumber, FileName, and the Line (of the match) for our reporting.

Now that we have our list, let’s pretty it up a bit.  PowerShell’s select-object has the ability to reformat objects on the fly using expressions.  I could have combined this in the last command; however, it’s a bit clearer to fetch our results THEN format them.

$formattedOutput = $todos | select-object `
	@{Name='LineNumber'; Expression={$_.LineNumber}}, `
	@{Name='FileName'; Expression={$_.FileName}}, `
	@{Name='TODO'; Expression={$_.Line.Trim() -replace '// TODO: ',''}}
  1. LineNumber: We’re keeping this line the same.
  2. FileName: Again, input->output.
  3. TODO: Here’s the big change. "Line" isn’t real descriptive, so let’s set Name=’TODO’ for this column. Second, let’s trim the whitespace off the line AND replace the string "// TODO: " with an empty string (remove it).  This cleans it up a bit.

Reporting It Out

At this point, we could simply return $formattedOutput | ft -a and have a handy table that looks like this:

[Screenshot: console table output]

Good for scripts, not so good for reporting and presenting to the boss/team (well, not my boss or team…). I added the -Html and -FileName flags to our parameters at the beginning just for this. I’ve pre-slugged the filename to todoList.html so I can set TeamCity to pick up the report automagically.  But how do we build the report?

PowerShell contains a fantastic built-in cmdlet called ConvertTo-Html that you can pipe pretty much any tabular content into and it turns it into a table.
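A quick illustration outside of this script (any object stream works):

gps | select -first 5 Name, Id | convertto-html | out-file processes.html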

 

if ($Html) {
	$head = @'
	<style> 
	body{	font-family:Segoe UI; background-color:white;} 
	table{	border-width: 1px;border-style: solid;
			border-color: black;border-collapse: collapse;width:100%;} 
	th{		font-family:Segoe Print;font-size:1.0em; border-width: 1px;
			padding: 2px;border-style: solid;border-color:black;
			background-color:lightblue;} 
	td{		border-width: 1px;padding: 2px;border-style: solid;
			border-color: black;background-color:white;} 
	</style> 
'@
	$solutionName = 
		(Get-Location).ToString().Split("\")[(Get-Location).ToString().Split("\").Count-1];

	$header = "<h1>"+$solutionName +" TODO List</h1><p>Generated on "+ [DateTime]::Now +".</p>"
	$title = $solutionName +" TODO List"
	$formattedOutput | 
              ConvertTo-HTML -head $head -body $header -title $title |
              Out-File $FileName
}
else {
	$formattedOutput
}

Our html <head> tag needs some style. Simple CSS to address that problem.  I even added a few fonts in there for kicks and placed them in $head.

As I mentioned before, my solutions are usually the name of the project–which is what I want as the $title and $header. The $solutionName is ugly right now–if anyone out there has a BETTER way to get the STRING name of the current directory (not the Path), I’m all ears.
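For what it’s worth, Split-Path may be the cleaner route:

$solutionName = Split-Path -Leaf (Get-Location)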

We take our $formattedOutput, $header, and $title and pass them into ConvertTo-Html. It looks a bit strange that we’re setting our ‘body’ as the header; however, whatever is piped into the cmdlet appends to what is passed into -body. It then writes our content out to the $FileName specified.

[Screenshot: the generated HTML TODO report]

Updating NuGet Spec’s Version Numbers in psake

December 3, 2010 Comments off

As part of our psake build process on a couple of framework libraries, I wanted to add in updating our internal NuGet repository.  The actual .nuspec file is laid out quite simply (details); however, the version number is hard coded.
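For reference, the version element lives under package/metadata in the .nuspec; a stripped-down sketch (id and version are placeholders):

<?xml version="1.0"?>
<package>
  <metadata>
    <id>MyFramework.Core</id>
    <version>1.0.0.0</version>
    <authors>Us</authors>
    <description>Internal framework library.</description>
  </metadata>
</package>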

For packages that have a static ‘major’ version, that’s not a bad deal; however, I wanted to keep our package up to date with the latest and greatest versions, so I needed to update that version element.

Since I have the full power of PowerShell at my disposal, modifying an XML file is a cakewalk. Here’s how I went about it.

function build-nuget-package {
	# update nuget spec version number
	[xml] $spec = gc $nuget_spec
	$spec.package.metadata.version = GetBuildNumber
	$spec.Save($nuget_spec)

	# rebuild the package using the updated .nuspec file.
	cd $release_directory
	exec { invoke-expression "$nuget pack $nuget_spec" }
	cd $build_directory
}

GetBuildNumber is an existing psake function I use to snag the AssemblyVersion from \Properties\AssemblyInfo.cs (and return it for things like TeamCity). $nuget and $nuget_spec are variables configured in psake that point to the nuget executable and the specification file used by the build script.

Once the package is built, in our Deploy task, we do a simple copy.

task Deploy -depends Release {
	 [ ... cut for brevity ]

	# build and deploy nuget package
	build-nuget-package
	copy-item $release_directory\*.nupkg \\server\nugetshare$
}

Now the NuGet repository is updated with a new package each time we build. I may refine it and only update the package on major version number changes or something later, but this gets the job done.

Playing nice with psake, PartCover, and TeamCity

December 2, 2010 1 comment

While code coverage isn’t the holy grail of development benchmarks, it has a place in the tool belt. We have several legacy systems where we are retrofitting tests in as we enhance and maintain the project.

I looked at NCover as a code coverage solution. NCover is a SLICK product and I really appreciated its Explorer and charting for deep dive analysis. Unfortunately, working in public education, the budget cuts just keep coming and commercial software didn’t fit the bill. After finding the revitalized project on GitHub, I dug back into PartCover.net.  Shaun Wilde (@shauncv) and others have been quite active revitalizing the project and fleshing out features for .net 4.0.

You can find the repo and installation details for PartCover.net at https://github.com/sawilde/partcover.net4.

Now, armed with code coverage reports, I had to find a way to get that information into TeamCity. Since I use psake, TeamCity’s command line runner doesn’t have the option for importing code coverage results. That just means I’ll need to handle executing PartCover inside of psake. Easy enough. Thankfully, the TeamCity Report tabs can also be customized. Time to get to work!

Step 0: Setting up PartCover for 64-bit

If you’re on a 32-bit installation, then skip this (are there people still running 32-bit installs?).

PartCover itself installs just fine; however, if you’re on an x64 machine, you’ll need to modify PartCover.exe and PartCover.Browser.exe to force 32-bit using corflags.exe.

Corflags.exe is usually located at C:\Program Files (x86)\Microsoft SDKs\Windows\v7.0A\bin if you’re running Windows 7. Older versions of Windows might be at v6.0A.

Once you’ve located corflags.exe, simply force the 32-bit flag:

corflags /32bit+ /force PartCover.exe
corflags /32bit+ /force PartCover.Browser.exe

If you have trouble on this step, hit up Google. There are tons of articles out there hashing and rehashing this problem.

Step 1: Setting up your PartCover settings file

PartCover allows you to specify configuration settings directly at the command line, but I find that hard to track when changes are made.  Alternatively, a simple XML file allows you to keep your runner command clean.

Here’s the settings file I’m running with:

<PartCoverSettings>
  <Target>.\Tools\xunit\xunit.console.x86.exe</Target>
  <TargetWorkDir></TargetWorkDir>
  <TargetArgs> .\build\MyApp.Test.dll /teamcity /html .\build\test_output.html</TargetArgs>
  <DisableFlattenDomains>True</DisableFlattenDomains>
  <Rule>+[MyApp]*</Rule>
  <Rule>-[MyApp.Test]*</Rule>
  <Rule>-[*]*__*</Rule>
</PartCoverSettings>

Just replace MyApp with the project names/namespaces you wish to cover (and exclude).

For more details on setting up a settings file, check out the documentation that comes with PartCover.  The only real oddity I have here is the last rule <Rule>-[*]*__*</Rule>.  PartCover, by default, also picks up the dynamically created classes from lambda expressions.  Since I’m already testing those, I’m hiding the results for the dynamic classes with two underscores in them __. It’s a cheap trick, but seems to work.

Step 2: Setting up psake for your tests and coverage

My default test runner task in psake grabs a set of assemblies and iterates through them passing each to the test runner.  That hasn’t changed; however, I now need to call PartCover instead of mspec/xunit.

task Test -depends Compile {
    $test_assemblies =     (gci $build_directory\* -include *Spec.dll,*Test.dll)
  if ($test_assemblies -ne $null) {
    " - Found tests/specifications..."
    foreach($test in $test_assemblies) {
      " - Executing tests and coverage on $test..."
      $testExpression = "$coverage_runner --register --settings $base_directory\partcover.xml --output $build_directory\partcover_results.xml"
      exec { invoke-expression $testExpression }
    }
  }
  else {
    " - No tests found, skipping step."
  }
}

$coverage_runner, $base_directory, and $build_directory refer to variables configured at the top of my psake script.
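Those sit in the properties block at the top of the script; a sketch, with the paths being assumptions:

properties {
    $base_directory  = resolve-path .
    $build_directory = "$base_directory\build"
    # assumed location; point this at wherever PartCover.exe actually lives
    $coverage_runner = "$base_directory\tools\partcover\PartCover.exe"
}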

Now, you should be able to drop to the command line and execute your test task.  A partcover_results.xml file should appear.  But who wants to read XML? Where are the shiny HTML reports?

We need to create them! :)

Step 3: Generating the HTML report from PartCover

There are two XML style sheets included with PartCover, however, I really like Gáspár Nagy’s detailed report found at http://gasparnagy.blogspot.com/2010/09/detailed-report-for-partcover-in.html. It’s clean, detailed, and fast.  I’ve downloaded this xslt and placed it in the PartCover directory.

Without TeamCity to do the heavy lifting, we need an XSL transformer of our own.  For my purposes, I’m using the one bundled with Sandcastle (http://sandcastle.codeplex.com/).  Install (or extract the MSI) and copy the contents to your project (ex: .\tools\sandcastle).

The syntax for xsltransform.exe is:

xsltransform.exe {xml_input} /xsl:{transform file} /out:{output html file}

In this case, since I’m storing the partcover_results.xml in my $build_directory, the full command would look like (without the line breaks):

    .\tools\sandcastle\xsltransform.exe .\build\partcover_results.xml
        /xsl:.\tools\partcover\partcoverfullreport.xslt
        /out:.\build\partcover.html

Fantastic.  Now, we want that to run every time we execute our tests, so let’s add it to our psake script.

task Test -depends Compile {
    $test_assemblies =     (gci $build_directory\* -include *Spec.dll,*Test.dll)
  if ($test_assemblies -ne $null) {
    " - Found tests/specifications..."
    foreach($test in $test_assemblies) {
      " - Executing tests and coverage on $test..."
      $testExpression = "$coverage_runner --register --settings $base_directory\partcover.xml --output $build_directory\partcover_results.xml"
      $coverageExpression = "$transform_runner $build_directory\partcover_results.xml /xsl:$transform_xsl /out:$build_directory\partcover.html"
      exec { invoke-expression $testExpression }
      " - Converting coverage results for $test to HTML report..."
      exec { invoke-expression $coverageExpression }
    }
  }
  else {
    " - No tests found, skipping step."
  }
}

Now, we have a $coverageExpression to invoke.  The xsltransform.exe and the path to the report are tied up in variables for easy updating.

You should be able to run your test task again and open up the HTML file.

Step 4: Setting up the Code Coverage tab in TeamCity

The last step is the easiest. As I haven’t found a way to have TeamCity automatically pick up the PartCover reports, we’ll need to add a custom report tab.

In TeamCity, go under Administration > Server Configuration > Report Tabs.

Add a new report tab with the Start Page set to the name of your PartCover HTML file.  In my case, it’s partcover.html.  Remember, the base path is your build directory (which is why I’m outputting the file to the $build_directory in psake).

Commit your repository with your new psake script and PartCover (if you haven’t already) and run–you’ll see a new Code Coverage tab ready and waiting.

[Screenshot: the new Code Coverage tab in TeamCity]

Setting up a NuGet PowerShell Profile

November 15, 2010 Comments off

While NuGet alone is a pretty spectacular package management system, one of the hottest features involves the underlying PowerShell-based Package Manager Console (we’ll call it PM from here out).  It may look fancy and have a funky PM> prompt, but it’s still PowerShell.

Considering I use psake and live in PowerShell most of the day, I wanted to do a bit of customizing.  Out of the box… urr… package, PM is barebones PowerShell.

[Screenshot: the default, barebones Package Manager Console]

Well, that’s pretty boring!

The question is, how would I customize things?

The answer? Setting up a profile!

Step 1: Finding the NuGet Profile

Just like a normal PowerShell installation, the $profile variable exists in PM.

[Screenshot: echoing $profile at the PM prompt]

Okay, that was easy.  If you try to edit the file, however, it doesn’t exist yet. You can either use the touch command to create an empty file, then edit it with Notepad, or simply run Notepad with the $profile path–it’ll ask you to create the file. :)
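For example, something like this gets you editing (creating the file first if it’s missing):

if (!(test-path $profile)) { new-item -path $profile -itemtype file -force | out-null }
notepad $profile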

For an example, we’ll just pipe a bit of text into our profile and see what happens.

[Screenshot: piping a test line into $profile]

Now, close Visual Studio (PM only seems to load the profile when Visual Studio first starts) and relaunch it.  PM should now welcome you!

[Screenshot: the PM console greeting on startup]

 

Step 2: Customize, customize, customize!

Now we’re ready to add variables, set up custom paths and scripts, add some git-tastic support, and add modules (like psake).  There are quite a few posts on the blog about customizing PowerShell’s environment; click here to check them out.

Remember: Since we’re using a separate PowerShell profile, be frugal with your commands and keep them "development centric".  For example, I don’t load HyperV modules, Active Directory management commands, and other "non-Solution" things into the PM. Visual Studio is slow enough–don’t bog it down. :) 

This is also a great opportunity to trim your profile down and break it into modular pieces (whether that be scripts or modules).  Keep those profiles as DRY as possible.
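Dot-sourcing keeps that painless; a sketch, with the script path being whatever you use:

# in the PM $profile: pull in just the development-centric pieces
. "$env:USERPROFILE\scripts\dev-environment.ps1"
import-module psake  # assuming psake is installed where Import-Module can find it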

 

A few caveats…

There do seem to be a few caveats while in the PM environment:

1. Execution, whether it’s an actual executable, a script, or piping something to more, seems to be tossed into another process, executed, and then the results returned to the PM console window. Not terrible, but something to be aware of if it seems like it’s not doing anything.

2. I can’t find a good way to manipulate the boring PM> prompt. It seems that $Host.UI is pretty much locked down. I’m hopeful that will change with further releases because not KNOWING where I am in the directory structure is PAINFUL.

Using PowerShell to Kick off SQL Server Agent Jobs

We’re in the process of a few upgrades around our office and one of those will require a full shutdown mid-business day. To expedite the shutdown, I wanted a quick way to run our backup jobs (hot backups), move them, and down the boxes.

Remember, I’m lazy–walking around our server room smacking rack switches takes effort. I want to sit and drink coffee while the world comes to a halt.

Here’s a bit of what I came up with. The script handles four things:

  • accepts a server name parameter (so I can iterate through a list of servers),
  • checks to see if the server is online (simple ping),
  • checks to see if it’s a SQL Server (looks for the MSSQLSERVER service),
  • iterates through the jobs and, if the -run switch is passed, kicks them off.

As with most scripts, the couple of helper functions amount to more code than the actual script itself.

param (
  [string]$server = $(Read-Host -prompt "Enter the server name"),
  [switch]$run = $false
)

# little helper function to ping the remote server.
function get-server-online ([string] $server) {
  $serverStatus = 
    (new-object System.Net.NetworkInformation.Ping).Send($server).Status -eq "Success"
  
  Write-Host "Server Online: $serverStatus"
  return $serverStatus
}

# little helper function to check to see if the remote 
# server has SQL Server running.
function is-sql-server([string] $server) {
   if (get-server-online($server)) {
    
    $sqlStatus = (gwmi Win32_service -computername $server | 
        ? { $_.name -like "MSSQLSERVER" }).Status -eq "OK"
        
    Write-Host "SQL Server Found: $sqlStatus"
    return $sqlStatus
   }
}

# uhh, because someday I may need to add more to this?
function cannot-find-server([string] $server) {
  Write-Host "Cannot contact $server..."
}

if (is-sql-server($server)) {
  
    [System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO') | Out-Null
    $srv = new-object ('Microsoft.SqlServer.Management.Smo.Server') $server
    $jobs = $srv.JobServer.Jobs | ? {$_.IsEnabled -eq $true} 

    $jobs | 
        select Name, CurrentRunStatus, LastRunOutcome, LastRunDate, NextRunDate | 
        ft -a
    
    if ($run) {
      $jobs | % { $_.Start(); "Starting job: $_ ..." }
      
    }
}
else {
  cannot-find-server($server)
}

Things left to do–someday…?

  • Allow picking and choosing what jobs get kicked off…

It’s not fancy, but it’ll save time backing up and shutting down a couple dozen SQL instances and ensuring they’re not missed in the chaos. :)
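If that someday arrives, it’s likely just a wildcard parameter away; a sketch, with $JobName being a hypothetical new parameter:

$jobs = $srv.JobServer.Jobs | ? { $_.IsEnabled -eq $true -and $_.Name -like $JobName }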

Getting buildNumber for TeamCity via AssemblyInfo

June 16, 2010 Comments off

I’m a proud psake user and love the flexibility of PowerShell during my build process. I recently had a project where I really wanted the assembly’s build number to show up in TeamCity rather than the standard incrementing build counter.

In the eternal words of Jeremy Clarkson, “How hard can it be?”

On my local machine, I have a spiffy “gav”–getassemblyversion–command that uses Reflection to grab the assembly version.  Unfortunately, since I don’t want to rely on the local version of .NET and I’ve already set the build number in AssemblyInfo.cs as part of my build process, I just want to fetch what’s in that file.
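(That helper is roughly the following; a sketch, since the original isn’t shown here.)

function gav([string]$assemblyPath) {
  # read the version from the file's metadata
  [Reflection.AssemblyName]::GetAssemblyName((resolve-path $assemblyPath).Path).Version
}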

Regular expressions to the rescue!

Here’s the final psake task. I call it as part of my build/release tasks and it generates the service message output that TeamCity needs for the build number. Here’s a gist of the source: http://gist.github.com/440646.

Here’s the actual source to review:

task GetBuildNumber { 
  $version = gc $base_directory\$solution_name\Properties\AssemblyInfo.cs | select-string -pattern "AssemblyVersion"
  
  $version -match '^\[assembly: AssemblyVersion\(\"(?<major>[0-9]+)\.(?<minor>[0-9]+)\.(?<revision>[0-9]+)\.(?<build>[0-9]+)\"\)\]'
  
  "##teamcity[buildNumber '{0}.{1}.{2}.{3}']" -f $matches["major"], $matches["minor"], $matches["revision"], $matches["build"]
}

Enjoy!
