Archive for the ‘TeamCity’ Category

DeployTo – a simple PowerShell web deployment script

February 10, 2012

We’re constantly working to standardize how builds get pushed out to our development, UAT, and production servers. The typical ‘order of operations’ includes:

  1. compile the build
  2. backup the existing deployment
  3. copy the new deployment
  4. celebrate

Pretty simple, but with a few moving parts (git push, TeamCity pulls in, compiles, runs deployment procedures, IIS (hopefully) doesn’t explode).

Our first step toward standardizing was to bake these steps into each project’s psake script, but that got tiring (and dangerous whenever we found a flaw to fix in every copy). When in doubt, refactor!

First, get the codez!

DeployTo.ps1 and an example settings.xml file.

Creating a simple deployment tool – DeployTo

The PowerShell file, DeployTo.ps1, should be located in your project, in your PATH, or wherever your CI server can find it. I tend to keep it in a folder that synchronizes automatically to ALL of our build servers via Live Mesh, though you could include it with your project to ensure dependencies are always met (handy for public projects).

DeployTo has one expectation: a settings.xml file (or the file passed in via the Settings argument) containing a breakdown of your deployment names and paths.

Example:

<settings>
    <site>
        <name>development</name>
        <path>\\server\webs\path</path>
    </site>
</settings>

With names and paths in hand, DeployTo matches the deployment name passed in against what exists in the file. If a match is found, it proceeds with the backup and deployment process.

Calling DeployTo is as simple as:

deployto development
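
Under the hood (the full script is linked above), DeployTo just needs the deployment name plus a couple of optional knobs, then loads the settings file. A rough sketch of that plumbing, using the parameter names from this post, treating the details as illustrative:

param(
    [string]$deploy,                      # deployment name to look up, e.g. development
    [string]$Settings = "settings.xml",   # settings file holding the deployment plans
    [switch]$SkipBackup)                  # skip the backup step

# load the deployment plans from the settings file
$xml = [xml](Get-Content $Settings)
$success = $false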

Now, looping through our settings.xml file looking for ‘development’:

foreach ($site in $xml.settings.site) {
    if ($site.name.ToLower() -eq $deploy.ToLower()) {
        writeMessage ("Found deployment plan for {0} -> {1}." -f $site.name, $site.path)
        if ($SkipBackup -eq $false) {
            backup($site)
        }
        deploy($site)
        $success = $true
        break;
    }
}
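
If the loop never finds a match, $success stays false and we can complain. A minimal sketch, using the script's own writeError logging helper:

if (-not $success) {
    writeError ("No deployment plan found for {0} in the settings file." -f $deploy)
}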

The output also lets us know what’s happening (and is helpful for diagnosing issues in your CI’s build logs).

Deploying to DEVELOPMENT
Reading settings file at settings.xml.
Testing release path at .\release.
Found deployment plan for development -> \\server\site.
Making backup of 255 file(s) at \\server\site to \\server\site-2012-02-10-105321.
Backup succeeded.
Removing existing files at \\server\site.
Copying new release to \\server\site.
Deployment succeeded.
SUCCESS!

Backing up – A safety net when things go awry.

Your builds NEVER go bad, right? Deployments work 100% of the time? Right? Sure. 😉 No matter how many staging sites you test on, things can go bad on a deployment. That’s why we have BACKUPS. I could get fancy and .7z/.gzip up the files, but a simple directory copy serves exactly what I need.

The backup function itself is quite simple: take the directory of deployed files and copy it into a new directory named with the original directory name plus the current date/time.

function backup($site) {
    try {
        $currentDate = (Get-Date).ToString("yyyy-MM-dd-HHmmss");
        $backupPath = $site.path + "-" + $currentDate;

        $originalCount = (gci -recurse $site.path).count

        writeMessage ("Making backup of {0} file(s) at {1} to {2}." -f $originalCount, $site.path, $backupPath)

        # do the actual file copy, but ignore the thumbs.db file. It's such a horrid little file.
        cp -recurse -exclude thumbs.db $site.path $backupPath

        $backupCount = (gci -recurse $backupPath).count

        if ($originalCount -ne $backupCount) {
            writeError ("Backup failed; attempted to copy {0} file(s) and only copied {1} file(s)." -f $originalCount, $backupCount)
        }
        else {
            writeSuccess ("Backup succeeded.")
        }
    }
    catch {
        writeError ("Could not complete backup. EXCEPTION: {0}" -f $_)
    }
}

Deploying — copying files, plain and simple

Someday, I may have the need to be fancy. Since IIS automatically restarts the application when a new web.config is dropped in, I don’t have any ‘logic’ in my deployment scripts. We also, for now, keep our database deployments separate from our web deployments. So deploying is just copying files; and who wants to do that by hand? Not me.

function deploy($site) {
    try {
        writeMessage ("Removing existing files at {0}." -f $site.path)

        # force, because thumbs.db is annoying
        rm -force -recurse $site.path

        writeMessage ("Copying new release to {0}." -f $site.path)

        cp -recurse -exclude thumbs.db $releaseDirectory $site.path

        $originalCount = (gci -recurse $releaseDirectory).count
        $siteCount = (gci -recurse $site.path).count

        if ($originalCount -ne $siteCount) {
            writeError ("Deployment failed; attempted to copy {0} file(s) and only copied {1} file(s)." -f $originalCount, $siteCount)
        }
        else {
            writeSuccess ("Deployment succeeded.")
        }
    }
    catch {
        writeError ("Could not deploy. EXCEPTION: {0}" -f $_)
    }
}

That’s it.

One thing you’ll notice in both functions is that I’m doing a bit of monitoring and testing.

  • Do paths exist before we begin the process? (see the sketch after this list)
  • Do the backed up/copied/original file counts match?
  • Did anything else go awry so we can throw a general error?
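
That first check is the ‘Testing release path’ line in the output above. A minimal sketch of it, assuming the script’s $releaseDirectory variable and writeError helper:

if (-not (Test-Path $releaseDirectory)) {
    writeError ("Could not find a release at {0}." -f $releaseDirectory)
    return
}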

It’s a work in progress, but has met our needs quite well over the past several months with psake and TeamCity.

NuGet Package Restore, Multiple Repositories, and CI Servers

January 20, 2012

I really like NuGet’s new Package Restore feature (and so do our git repositories).

We have several common libraries that we’ve moved into a private, local NuGet repository on our network. It’s really helped deal with the dependency and version nightmares between projects and developers.

I checked my first project using full package restore and our new local repositories into our CI server, TeamCity, the other day and noticed that the Package Restore feature couldn’t find the packages stored in our local repository.

At first, I thought there was a snag (permissions, network, general unhappiness) with our NuGet share, but all seemed well. To my surprise, repository locations are not stored in that swanky .nuget directory, but as part of the current user profile. %appdata%\NuGet\NuGet.Config to be precise.

Well, that’s nice on my workstation, but NETWORK SERVICE doesn’t have a profile and the All Users AppData directory didn’t seem to take effect.

The solution:

For TeamCity, at least, the solution was to set the TeamCity build agent service to run as a specific user (I chose a domain user on our network; you could use a local user as well). Once that user has a profile, go into %drive%:\users\{your service name}\appdata\roaming\nuget and modify the nuget.config file.

Here’s an example of the file:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="NuGet official package source" value="https://msft.long.url.here" />
    <add key="Student Achievement [local]" value="\\server.domain.com\shared$\nuget" />
  </packageSources>
  <activePackageSource>
    <add key="NuGet official package source" value="https://msft.long.url.here" />
  </activePackageSource>
</configuration>

Package Restore will attempt to find the packages on the ‘activePackageSource’ first, then proceed through the rest of the list.

Remember, if you have multiple build agent servers, this must be done on each server.
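
If you have more than a couple of agents, a short PowerShell loop can push your local NuGet.Config out to each one. A hedged sketch, where the agent names and the buildsvc service account are hypothetical and administrative shares are assumed reachable:

# copy the local NuGet.Config into the service account's profile on each agent
$agents = "agent01", "agent02"
foreach ($agent in $agents) {
    Copy-Item "$env:APPDATA\NuGet\NuGet.Config" `
        "\\$agent\c$\Users\buildsvc\AppData\Roaming\NuGet\NuGet.Config"
}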

Wish List: The option to include non-standard repositories as part of the .nuget folder. 🙂

Finding TODOs and Reporting in PowerShell

December 28, 2011

After @andyedinborough shared a blog post on how to report TODOs in CruiseControl.NET, I figured there HAD to be a snazzy way to do this with TeamCity.

I already have TeamCity configured to dump out the results from our Machine.Specifications tests and PartCover code coverage (simple HTML and XML transforms), so another HTML file shouldn’t be difficult. When we’re done, we’ll have an HTML file, so we just need to set up a custom report to look for the file name. For more information on setting up custom reports in TeamCity, check out this post (step 3 in particular).

Let’s Dig In!

The code! The current working version of the code is available via this gist.

The code is designed to be placed in a file of its own. Mine’s named Get-TODO.ps1.

Note: If you want to include this in another file (such as your PowerShell profile), be sure to convert the param() block into function Get-TODO( {all the params} ), as sketched below.
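
In other words, the profile-friendly version would look something like this (same parameters as the script below; the body is elided):

function Get-TODO {
    param(
        [string]$DirectoryMask = (get-location),
        [array]$Include = @("*.spark","*.cs","*.js","*.coffee","*.rb"),
        [array]$Exclude = @("fubu-content","packages","build","release"),
        [switch]$Html,
        [string]$FileName = "todoList.html")

    # ...the rest of the script body goes here...
}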

The script parameters make a few assumptions based on my own coding standards; change as necessary:

  • My projects are named {SolutionName}.Web/.Specifications/.Core and are inside a directory called {SolutionName},
  • By default, it includes several file extensions: .spark, .cs, .coffee, .js, and .rb,
  • By default, it excludes several directories: fubu-content, packages, build, and release.

It also includes a couple of flags for HTML output; we’ll get to those in a bit.

param(
	#this is appalling; there has to be a better way to get the raw name of "this" directory. 
	[string]$DirectoryMask = 
		(get-location),
	[array]$Include = 
		@("*.spark","*.cs","*.js","*.coffee","*.rb"),
	[array]$Exclude = 
		@("fubu-content","packages","build","release"),
	[switch]$Html,
	[string]$FileName = "todoList.html")

Fetching Our Files

We need to grab a collection of all of our solution files. This will include the file extensions from $Include and use the directories from $DirectoryMask (which defaults to everything from the current location).

$originalFiles = 
	gci -Recurse -Include $Include -Path $DirectoryMask;

Unfortunately, the -Exclude flag is…well, unhappy, in PowerShell as it doesn’t seem to work with -Recurse (or if it does, there’s voodoo involved).

How do we address voodoo issues? Regular expressions. Aww, yeah.

$withoutExcludes = $originalFiles | ? {
	if ($Exclude.count -gt 0) {
		#need moar regex
		[regex] $ex = 
			# 1: case insensitive and start of string
			# 2: join our array, using RegEx to escape special characters
			# the * allow wildcards so directory names filter out
			# and not just exact paths.
			# 3: end of string
			#  1                   2										3
			'(?i)^(.*'+(($Exclude|%{[regex]::escape($_)}) -join ".*|.*")+'.*)$'
		$_ -notmatch $ex
	}
	else { 
		$_ 
	}
}

Breathe, it’s okay. That is mostly code comments explaining what each line is doing. This snippet dynamically creates a regular expression from our $Exclude parameter, compares each file in the pipeline against it, and returns the ones that do not match. If $Exclude is empty, it simply returns all files.
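
For example, with the excludes trimmed down to two entries, the expression above generates:

# with $Exclude = @("packages","build"), the generated pattern is:
# (?i)^(.*packages.*|.*build.*)$
# case-insensitive, matching any path that contains 'packages' or 'build'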

Finding our TODOs

Select-String provides us with a quick way to find patterns inside text files, with helpful output such as the matching line, line number, and file name (Select-String is roughly PowerShell’s equivalent of grep).

$todos = $withoutExcludes | 
	% { select-string $_ -pattern "TODO:" } |
	select-object LineNumber, FileName, Line;

Let’s take all of our remaining files, look at them as STRINGs and find the pattern "TODO:".  This returns an array of MatchInfo objects. Since these have a few ancillary properties we don’t want, let’s simply grab the LineNumber, FileName, and the Line (of the match) for our reporting.

Now that we have our list, let’s pretty it up a bit.  PowerShell’s select-object has the ability to reformat objects on the fly using expressions.  I could have combined this in the last command; however, it’s a bit clearer to fetch our results THEN format them.

$formattedOutput = $todos | select-object `
	@{Name='LineNumber'; Expression={$_.LineNumber}}, `
	@{Name='FileName'; Expression={$_.FileName}}, `
	@{Name='TODO'; Expression={$_.Line.Trim() -replace '// TODO: ',''}}

  1. LineNumber: We’re keeping this line the same.
  2. FileName: Again, input->output.
  3. TODO: Here’s the big change. "Line" isn’t very descriptive, so let’s set Name=’TODO’ for this column. Second, let’s trim the whitespace off the line AND replace the string "// TODO: " with an empty string (remove it). This cleans it up a bit.

Reporting It Out

At this point, we could simply return $formattedOutput | ft -a and have a handy table that looks like this:

[screenshot: console output of the TODO table]

Good for scripts, not so good for reporting and presenting to the boss/team (well, not my boss or team…). I added the -Html and -FileName flags to our parameters at the beginning just for this. I’ve pre-slugged the filename as todoList.html so I can set TeamCity to pick up the report automagically. But how do we build the report?

PowerShell contains a fantastic built-in cmdlet called ConvertTo-Html that you can pipe pretty much any tabular content into and have it turned into an HTML table.

if ($Html) {
	$head = @'
	<style> 
	body{	font-family:Segoe UI; background-color:white;} 
	table{	border-width: 1px;border-style: solid;
			border-color: black;border-collapse: collapse;width:100%;} 
	th{		font-family:Segoe Print;font-size:1.0em; border-width: 1px;
			padding: 2px;border-style: solid;border-color:black;
			background-color:lightblue;} 
	td{		border-width: 1px;padding: 2px;border-style: solid;
			border-color: black;background-color:white;} 
	</style> 
'@
	$solutionName = 
		(Get-Location).ToString().Split("\")[(Get-Location).ToString().Split("\").Count-1];

	$header = "<h1>"+$solutionName +" TODO List</h1><p>Generated on "+ [DateTime]::Now +".</p>"
	$title = $solutionName +" TODO List"
	$formattedOutput |
		ConvertTo-Html -head $head -body $header -title $title |
		Out-File $FileName
}
else {
	$formattedOutput
}

Our HTML <head> tag needs some style, and simple CSS addresses that problem. I even added a few fonts in there for kicks and placed it all in $head.

As I mentioned before, my solution directory is usually named after the project, which is what I want for the $title and $header. The $solutionName code is ugly right now; if anyone out there has a BETTER way to get the STRING name of the current directory (not the full path), I’m all ears.
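
(For what it’s worth, one shorter option is to let Split-Path peel off the last segment of the current location; a quick sketch, untested against oddities like drive roots:)

$solutionName = Split-Path -Leaf (Get-Location)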

We take our $formattedOutput, $header, and $title and pass them into ConvertTo-Html. It looks a bit strange that we’re setting our ‘body’ as the header; however, whatever is piped into the cmdlet is appended to whatever is passed into -body. The cmdlet then writes our content out to the $FileName specified.

[screenshot: the generated HTML TODO report]

Playing nice with psake, PartCover, and TeamCity

December 2, 2010

While code coverage isn’t the holy grail of development benchmarks, it has a place in the tool belt. We have several legacy systems where we are retrofitting tests in as we enhance and maintain the project.

I looked at NCover as a code coverage solution. NCover is a SLICK product and I really appreciated its Explorer and charting for deep-dive analysis. Unfortunately, working in public education, the budget cuts just keep coming and commercial software didn’t fit the bill. After finding the revitalized project on GitHub, I dug back into PartCover.net. Shaun Wilde (@shauncv) and others have been quite active revitalizing the project and fleshing out features for .net 4.0.

You can find the repo and installation details for PartCover.net at https://github.com/sawilde/partcover.net4.

Now, armed with code coverage reports, I had to find a way to get that information into TeamCity. Since I use psake, TeamCity’s command line runner doesn’t have an option for importing code coverage results, which just means I’ll need to handle executing PartCover inside of psake. Easy enough. Thankfully, TeamCity’s Report tabs can also be customized. Time to get to work!

Step 0: Setting up PartCover for 64-bit

If you’re on a 32-bit installation, then skip this (are there people still running 32-bit installs?).

PartCover itself installs just fine; however, on an x64 machine you’ll need to modify PartCover.exe and PartCover.Browser.exe to force 32-bit execution using corflags.exe.

Corflags.exe is usually located at C:\Program Files (x86)\Microsoft SDKs\Windows\v7.0A\bin on Windows 7; older SDK versions may have it under v6.0A.

Once you’ve located corflags.exe, simply force the 32-bit flag:

corflags /32bit+ /force PartCover.exe
corflags /32bit+ /force PartCover.Browser.exe

If you have trouble on this step, hit up Google. There are tons of articles out there hashing and rehashing this problem.

Step 1: Setting up your PartCover settings file

PartCover allows you to specify configuration settings directly at the command line, but I find that hard to track when changes are made.  Alternatively, a simple XML file allows you to keep your runner command clean.

Here’s the settings file I’m running with:

<PartCoverSettings>
  <Target>.\Tools\xunit\xunit.console.x86.exe</Target>
  <TargetWorkDir></TargetWorkDir>
  <TargetArgs> .\build\MyApp.Test.dll /teamcity /html .\build\test_output.html</TargetArgs>
  <DisableFlattenDomains>True</DisableFlattenDomains>
  <Rule>+[MyApp]*</Rule>
  <Rule>-[MyApp.Test]*</Rule>
  <Rule>-[*]*__*</Rule>
</PartCoverSettings>

Just replace MyApp with the project names/namespaces you wish to cover (and exclude).

For more details on setting up a settings file, check out the documentation that comes with PartCover. The only real oddity I have here is the last rule, <Rule>-[*]*__*</Rule>. By default, PartCover also picks up the classes dynamically generated for lambda expressions. Since I’m already testing those, I’m hiding their results by matching the double underscores (__) in their generated names. It’s a cheap trick, but it seems to work.

Step 2: Setting up psake for your tests and coverage

My default test runner task in psake grabs a set of assemblies and iterates through them passing each to the test runner.  That hasn’t changed; however, I now need to call PartCover instead of mspec/xunit.

task Test -depends Compile {
    $test_assemblies = (gci $build_directory\* -include *Spec.dll,*Test.dll)
    if ($test_assemblies -ne $null) {
        " - Found tests/specifications..."
        foreach ($test in $test_assemblies) {
            " - Executing tests and coverage on $test..."
            $testExpression = "$coverage_runner --register --settings $base_directory\partcover.xml --output $build_directory\partcover_results.xml"
            exec { invoke-expression $testExpression }
        }
    }
    else {
        " - No tests found, skipping step."
    }
}

$coverage_runner, $base_directory, and $build_directory refer to variables configured at the top of my psake script.
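
For reference, those live in the properties block at the top of the psake script. A sketch with assumed paths (yours will differ):

properties {
    # root of the checkout and the build output directory
    $base_directory  = Resolve-Path .
    $build_directory = "$base_directory\build"
    # path to the PartCover console runner
    $coverage_runner = "$base_directory\tools\partcover\PartCover.exe"
}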

Now, you should be able to drop to the command line and execute your test task. A partcover_results.xml file should appear in your build directory. But who wants to read XML? Where are the shiny HTML reports?

We need to create them! 🙂

Step 3: Generating the HTML report from PartCover

There are two XML style sheets included with PartCover; however, I really like Gáspár Nagy’s detailed report, found at http://gasparnagy.blogspot.com/2010/09/detailed-report-for-partcover-in.html. It’s clean, detailed, and fast. I’ve downloaded this XSLT and placed it in the PartCover directory.

Without TeamCity to do the heavy lifting, we need an XSL transformer of our own.  For my purposes, I’m using the one bundled with Sandcastle (http://sandcastle.codeplex.com/).  Install (or extract the MSI) and copy the contents to your project (ex: .\tools\sandcastle).

The syntax for xsltransform.exe is:

xsltransform.exe {xml_input} /xsl:{transform file} /out:{output html file}

In this case, since I’m storing partcover_results.xml in my $build_directory, the full command would look like this (line breaks added for readability):

    .\tools\sandcastle\xsltransform.exe .\build\partcover_results.xml
        /xsl:.\tools\partcover\partcoverfullreport.xslt
        /out:.\build\partcover.html

Fantastic. Now we want that to run every time we execute our tests, so let’s add it to our psake script.

task Test -depends Compile {
    $test_assemblies = (gci $build_directory\* -include *Spec.dll,*Test.dll)
    if ($test_assemblies -ne $null) {
        " - Found tests/specifications..."
        foreach ($test in $test_assemblies) {
            " - Executing tests and coverage on $test..."
            $testExpression = "$coverage_runner --register --settings $base_directory\partcover.xml --output $build_directory\partcover_results.xml"
            $coverageExpression = "$transform_runner $build_directory\partcover_results.xml /xsl:$transform_xsl /out:$build_directory\partcover.html"
            exec { invoke-expression $testExpression }
            " - Converting coverage results for $test to HTML report..."
            exec { invoke-expression $coverageExpression }
        }
    }
    else {
        " - No tests found, skipping step."
    }
}

Now we have a $coverageExpression to invoke. The path to xsltransform.exe and the report’s XSLT are tied up in variables for easy updating.
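
Those sit alongside the other variables at the top of the script; again, a sketch with assumed paths:

# XSL transformer from Sandcastle and the downloaded report style sheet
$transform_runner = "$base_directory\tools\sandcastle\xsltransform.exe"
$transform_xsl    = "$base_directory\tools\partcover\partcoverfullreport.xslt"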

You should be able to run your test task again and open up the HTML file.

Step 4: Setting up the Code Coverage tab in TeamCity

The last step is the easiest. As I haven’t found a way to have TeamCity’s code coverage engine automatically pick up the PartCover reports, we’ll need to add a custom report tab.

In TeamCity, go under Administration > Server Configuration > Report Tabs.

Add a new report tab with the Start Page set to the name of your PartCover HTML file.  In my case, it’s partcover.html.  Remember, the base path is your build directory (which is why I’m outputting the file to the $build_directory in psake).

Commit your new psake script and PartCover to your repository (if you haven’t already) and run the build; you’ll see a new Code Coverage tab ready and waiting.
