Archive for the ‘Visual Studio 2010’ Category

NuGet Package Restore, Multiple Repositories, and CI Servers

January 20, 2012

I really like NuGet’s new Package Restore feature (and so do our git repositories).

We have several common libraries that we’ve moved into a private, local NuGet repository on our network. It’s really helped deal with the dependency and version nightmares between projects and developers.

I checked my first project using full package restore and our new local repositories into our CI server, TeamCity, the other day and noticed that the Package Restore feature couldn’t find the packages stored in our local repository.

At first, I thought there was a snag (permissions, network, general unhappiness) with our NuGet share, but all seemed well. To my surprise, repository locations are not stored in that swanky .nuget directory, but as part of the current user profile: %appdata%\NuGet\NuGet.Config, to be precise.

Well, that’s nice on my workstation, but NETWORK SERVICE doesn’t have a profile and the All Users AppData directory didn’t seem to take effect.

The solution:

For TeamCity, at least, the solution was to set the TeamCity build agent services to run as a specific user (I chose a domain user in our network; you could use a local user as well). Once you have a profile, go into %drive%:\users\{your service name}\appdata\roaming\nuget and modify the nuget.config file.

Here’s an example of the file:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="NuGet official package source" value="https://msft.long.url.here" />
    <add key="Student Achievement [local]" value="\\server.domain.com\shared$\nuget" />
  </packageSources>
  <activePackageSource>
    <add key="NuGet official package source" value="https://msft.long.url.here" />
  </activePackageSource>
</configuration>

Package Restore will attempt to find the packages on the ‘activePackageSource’ first, then proceed through the rest of the list.

Remember, if you have multiple build agent servers, this must be done on each server.
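A quick sanity check on any given agent is to open a console as the service account and make sure the config actually lists your repository (assuming the same profile location as above):

gc "$env:APPDATA\NuGet\NuGet.Config"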

Wish List: The option to include non-standard repositories as part of the .nuget folder. :)

Finding TODOs and Reporting in PowerShell

December 28, 2011

After @andyedinborough shared a blog post on how to report TODOs in CruiseControl.NET, I figured there HAD to be a snazzy way to do this with TeamCity.

I already have TeamCity configured to dump out the results from our Machine.Specifications tests and PartCover code coverage (simple HTML and XML transforms), so another HTML file shouldn’t be difficult. When we’re done, we’ll have an HTML file, and we just need to set up a custom report to look for that file name. For more information on setting up custom reports in TeamCity, check out this post (step 3 in particular).

Let’s Dig In!


The code! The current working version of the code is available via this gist.

The code is designed to be placed in a file of its own. Mine’s named Get-TODO.ps1.

Note: If you want to include this in another file (such as your PowerShell profile), be sure to convert the param() block into a function declaration, function Get-TODO, with the same parameters (a sketch follows).
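A rough sketch of that conversion, with the parameter list elided since it’s identical to the standalone script:

function Get-TODO {
	param(
		# ...the same parameters as the standalone script...
	)
	# ...the rest of the script body...
}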

The script parameters make a few assumptions based on my own coding standards; change them as necessary:

  • My projects are named {SolutionName}.Web/.Specifications/.Core and are inside a directory called {SolutionName},
  • By default, it includes several file extensions: .spark, .cs, .coffee, .js, and .rb,
  • By default, it excludes several directories: fubu-content, packages, build, and release.

It also includes a couple of flags for Html output–we’ll get to that in a bit.

param(
	#this is appalling; there has to be a better way to get the raw name of "this" directory. 
	[string]$DirectoryMask = 
		(get-location),
	[array]$Include = 
		@("*.spark","*.cs","*.js","*.coffee","*.rb"),
	[array]$Exclude = 
		@("fubu-content","packages","build","release"),
	[switch]$Html,
	[string]$FileName = "todoList.html")

Fetching Our Files

We need to grab a collection of all of our solution files. This will include the file extensions from $Include and use the directories from $DirectoryMask (which defaults to everything under the current location).

$originalFiles = 
	gci -Recurse -Include $Include -Path $DirectoryMask;

Unfortunately, the -Exclude flag is…well, unhappy in PowerShell, as it doesn’t seem to work with -Recurse (or if it does, there’s voodoo involved).

How do we address voodoo issues? Regular expressions. Aww, yeah.

$withoutExcludes = $originalFiles | ? {
	if ($Exclude.count -gt 0) {
		#need moar regex
		[regex] $ex = 
			# 1: case insensitive and start of string
			# 2: join our array, using RegEx to escape special characters
			# the * allow wildcards so directory names filter out
			# and not just exact paths.
			# 3: end of string
			#  1                   2										3
			'(?i)^(.*'+(($Exclude|%{[regex]::escape($_)}) -join ".*|.*")+'.*)$'
		$_ -notmatch $ex
	}
	else { 
		$_ 
	}
}

Breathe, it’s okay. That is mostly code comments to explain what each line is doing. This snippet dynamically creates a regular expression string based on our $Exclude parameter, then compares the current file in the iteration to the regular expression and returns the ones that do not match. If $Exclude is empty, it simply returns all files.
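For the default $Exclude list, the string this snippet builds looks like this:

(?i)^(.*fubu-content.*|.*packages.*|.*build.*|.*release.*)$

Any path containing one of those directory names anywhere in it fails the -notmatch test and gets filtered out.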

Finding our TODOs

Select-String provides us with a quick way to find patterns inside text files, with helpful output such as the matching line, the line number, and the file name (select-string is roughly PowerShell’s answer to grep).

$todos = $withoutExcludes | 
	% { select-string $_ -pattern "TODO:" } |
	select-object LineNumber, FileName, Line;

Let’s take all of our remaining files, look at them as STRINGs, and find the pattern "TODO:". This returns an array of MatchInfo objects. Since these have a few ancillary properties we don’t want, let’s simply grab the LineNumber, FileName, and the Line (of the match) for our reporting.

Now that we have our list, let’s pretty it up a bit.  PowerShell’s select-object has the ability to reformat objects on the fly using expressions.  I could have combined this in the last command; however, it’s a bit clearer to fetch our results THEN format them.

$formattedOutput = $todos | select-object `
	@{Name='LineNumber'; Expression={$_.LineNumber}}, `
	@{Name='FileName'; Expression={$_.FileName}}, `
	@{Name='TODO'; Expression={$_.Line.Trim() -replace '// TODO: ',''}}

  1. LineNumber: We’re keeping this line the same.
  2. FileName: Again, input->output.
  3. TODO: Here’s the big change. "Line" isn’t very descriptive, so let’s set Name=’TODO’ for this column. Second, let’s trim the whitespace off the line AND replace the string "// TODO: " with an empty string (remove it). This cleans it up a bit.

Reporting It Out

At this point, we could simply return $formattedOutput | ft -a and have a handy table that looks like this:

[screenshot: console output of the TODO table]

Good for scripts, not so good for reporting and presenting to the boss/team (well, not my boss or team…). I added the -Html and -FileName flags to our parameters at the beginning just for this. I’ve pre-slugged the filename as todoList.html so I can set TeamCity to pick up the report automagically. But how do we build the report?
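For reference, an invocation from the solution root would look something like this (assuming the script sits next to the solution):

.\Get-TODO.ps1 -Html -FileName todoList.html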

PowerShell contains a fantastic built-in cmdlet called ConvertTo-Html that you can pipe pretty much any tabular content into and it turns it into a table.


if ($Html) {
	$head = @'
	<style> 
	body{	font-family:Segoe UI; background-color:white;} 
	table{	border-width: 1px;border-style: solid;
			border-color: black;border-collapse: collapse;width:100%;} 
	th{		font-family:Segoe Print;font-size:1.0em; border-width: 1px;
			padding: 2px;border-style: solid;border-color:black;
			background-color:lightblue;} 
	td{		border-width: 1px;padding: 2px;border-style: solid;
			border-color: black;background-color:white;} 
	</style> 
'@
	$solutionName = 
		(Get-Location).ToString().Split("\")[(Get-Location).ToString().Split("\").Count-1];

	$header = "<h1>"+$solutionName +" TODO List</h1><p>Generated on "+ [DateTime]::Now +".</p>"
	$title = $solutionName +" TODO List"
	$formattedOutput | 
              ConvertTo-HTML -head $head -body $header -title $title |
              Out-File $FileName
}
else {
	$formattedOutput
}

Our HTML <head> tag needs some style. Simple CSS addresses that problem. I even added a few fonts in there for kicks and placed it all in $head.

As I mentioned before, my solutions are usually named after the project–which is what I want as the $title and $header. The $solutionName line is ugly right now–if anyone out there has a BETTER way to get the STRING name of the current directory (not the path), I’m all ears.
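(One candidate I haven’t fully vetted against the script yet: Split-Path -Leaf should trim a path down to its final element, so something like this might do the trick:

$solutionName = Split-Path -Leaf (Get-Location)

If it holds up, that would replace the double-Split ugliness above.)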

We take our $formattedOutput, $header, and $title and pass them into ConvertTo-Html. It looks a bit strange that we’re setting our ‘body’ as the header; however, whatever is piped into the cmdlet is appended to what’s passed into -body. It then writes our content out to the $FileName specified.

[screenshot: HTML report output]

Using Cassette with Spark View Engine

July 21, 2011

Knapsack… *cough*… I mean Cassette is a fantastic javascript/css/coffeescript resource manager from Andrew Davey. It did, however, take a bit to figure out why it wouldn’t work with Spark View Engine. Thankfully, blogs exist to remind me of this at a later date. :)

Namely, because I’d never tried to use anything that returned void before. Helpers tend to always return either Html or a value.

I finally stumbled on a section in the Spark View Engine documentation for inline expressions.

Sometimes, when push comes to shove, you have a situation where you’re not writing output and there isn’t a markup construct for what you want to do. As a last resort you can produce code directly in-place in the generated class.

Well then, that sounds like what I want.

So our void methods, Html.ReferenceScript and Html.ReferenceStylesheet, should be written as:

#{Html.ReferenceScript("scripts/app/home.index.js");}
#{Html.ReferenceStylesheet("styles/app");}

Note the # (pound sign) and the semi-colon at the end of the statement block.

Our rendering calls, however, use standard Spark output syntax:

${Html.RenderScripts()}
${Html.RenderStylesheetLinks()}

Now my Spark view contains the hashed URLs–in order–as it should.

 <!DOCTYPE html>
 <html>
 <head>
   <link href="/styles/app/site.css?f8f8e3a3aec6d4e07008efb57d1233562d2c4b70" type="text/css" rel="stylesheet" />
 </head>
 <body>
 <h2>Index</h2>
   <script src="/scripts/libs/jquery-1.6.2.js?eeee9d4604e71f2e01b818fc1439f7b5baf1be7a" type="text/javascript"></script>
   <script src="/scripts/app/application.js?91c81d13cf1762045ede31783560e6a46efc33d3" type="text/javascript"></script>
   <script src="/scripts/app/home.index.js?b0a66f7ba204e2dcf5261ab75934baba9cb94e51" type="text/javascript"></script>
 </body> 

Excellent.

Updating NuGet Spec’s Version Numbers in psake

December 3, 2010

As part of our psake build process on a couple of framework libraries, I wanted to add in updating our internal NuGet repository.  The actual .nuspec file is laid out quite simply (details); however, the version number is hard-coded.

For packages that have a static ‘major’ version, that’s not a bad deal; however, I wanted to keep our package up to date with the latest and greatest versions, so I needed to update that version element.

Since I have the full power of PowerShell at my disposal, modifying an XML file is a cakewalk. Here’s how I went about it.

function build-nuget-package {
	# update nuget spec version number
	[xml] $spec = gc $nuget_spec
	$spec.package.metadata.version = GetBuildNumber
	$spec.Save($nuget_spec)

	# rebuild the package using the updated .nuspec file.
	cd $release_directory
	exec { invoke-expression "$nuget pack $nuget_spec" }
	cd $build_directory
}

GetBuildNumber is an existing psake function I use to snag the AssemblyVersion from \Properties\AssemblyInfo.cs (and return it for things like TeamCity). $nuget and $nuget_spec are variables configured in psake that point to the nuget executable and the specification file used by the build script.
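For context, those live in the psake properties block at the top of the build script; a stripped-down sketch (the paths here are made up):

properties {
	# hypothetical locations; adjust to wherever your repo keeps them
	$nuget = ".\tools\nuget\nuget.exe"
	$nuget_spec = ".\src\MyLibrary.nuspec"
}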

Once the package is built, in our Deploy task, we do a simple copy.

task Deploy -depends Release {
	 [ ... cut for brevity ]

	# build and deploy nuget package
	build-nuget-package
	copy-item $release_directory\*.nupkg \\server\nugetshare$
}

Now the NuGet repository is updated with a new package each time we build. I may refine it and only update the package on major version number changes or something later, but this gets the job done.

Setting up a NuGet PowerShell Profile

November 15, 2010

While NuGet alone is a pretty spectacular package management system, one of the hottest features is the underlying PowerShell-based Package Manager Console (we’ll call it PM from here out).  It may look fancy and have a funky PM> prompt, but it’s still PowerShell.

Considering I use psake and live in PowerShell most of the day, I wanted to do a bit of customizing.  Out of the box… urr… package, PM is barebones PowerShell.

[screenshot: the default, barebones PM console]

Well, that’s pretty boring!

The question is, how would I customize things?

The answer? Setting up a profile!

Step 1: Finding the NuGet Profile

Just like a normal PowerShell installation, the $profile variable exists in PM.

[screenshot: the $profile path in the PM console]

Okay, that was easy.  If you try to edit the file, however, it’s empty. You can either use the touch command to create an empty file, then edit it with Notepad, or simply run Notepad with the $profile path–it’ll ask you to create the file. :)

For an example, we’ll just pipe a bit of text into our profile and see what happens.
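Something along these lines does the trick (the exact text is just a visible marker, and I’m assuming the profile’s directory already exists):

'write-host "Hello from the PM profile!"' | out-file $profile -append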

[screenshot: piping a test line into the profile]

Now, close Visual Studio (PM only seems to load the profile when Visual Studio first starts) and relaunch it.  PM should now welcome you!

[screenshot: PM greeting us on startup]


Step 2: Customize, customize, customize!

Now we’re ready to add variables, set up custom paths and scripts, add some git-tastic support, and add modules (like psake).  There are quite a few posts on the blog about customizing PowerShell’s environment; click here to check them out.

Remember: Since we’re using a separate PowerShell profile, be frugal with your commands and keep them "development centric".  For example, I don’t load HyperV modules, Active Directory management commands, and other "non-Solution" things into the PM. Visual Studio is slow enough–don’t bog it down. :) 

This is also a great opportunity to trim your profile down and break it into modular pieces (whether that be scripts or modules).  Keep those profiles as DRY as possible.
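As a sketch, a trimmed-down PM profile might do little more than this (the paths are hypothetical):

# dot-source shared, dev-centric helpers and import only what solutions need
. "$home\scripts\dev-helpers.ps1"
Import-Module psake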


A few caveats…

There do seem to be a few caveats while in the PM environment:

1. Execution, whether it’s an actual executable, a script, or piping something to more, seems to be tossed into another process, executed, and then the results returned to the PM console window. Not terrible, but something to be aware of if it seems like it’s not doing anything.

2. I can’t find a good way to manipulate the boring PM> prompt. It seems that $Host.UI is pretty much locked down. I’m hopeful that will change with further releases because not KNOWING where I am in the directory structure is PAINFUL.

Getting buildNumber for TeamCity via AssemblyInfo

June 16, 2010

I’m a proud psake user and love the flexibility of PowerShell during my build process. I recently had a project that I really wanted the build number to show up in TeamCity rather than the standard incrementing number.

In the eternal words of Jeremy Clarkson, “How hard can it be?”

On my local machine, I have a spiffy “gav”–getassemblyversion–command that uses Reflection to grab the assembly version.  Unfortunately, since I don’t want to rely on the local version of .NET and I’ve already set the build number in AssemblyInfo.cs as part of my build process, I just want to fetch what’s in that file.

Regular expressions to the rescue!

Here’s the final psake task. I call it as part of my build/release tasks and it generates the right meta output that TeamCity needs for the build version. Here’s a gist of the source: http://gist.github.com/440646.

Here’s the actual source to review:

task GetBuildNumber { 
  $version = gc $base_directory\$solution_name\Properties\AssemblyInfo.cs | select-string -pattern "AssemblyVersion"
  
  $version -match '^\[assembly: AssemblyVersion\(\"(?<major>[0-9]+)\.(?<minor>[0-9]+)\.(?<revision>[0-9]+)\.(?<build>[0-9]+)\"\)\]'
  
  "##teamcity[buildNumber '{0}.{1}.{2}.{3}']" -f $matches["major"], $matches["minor"], $matches["revision"], $matches["build"]
}
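For example, given this line in AssemblyInfo.cs:

[assembly: AssemblyVersion("1.2.3.4")]

the task emits the service message TeamCity watches for:

##teamcity[buildNumber '1.2.3.4']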

Enjoy!

Using xUnit 1.5 with .NET 4.0 RTW

April 16, 2010

After a bit of tinkering, I finally managed to find the sweet spot for getting xUnit 1.5 to run (without errors) with projects targeted at the new .NET 4.0 Framework.

After initial solution conversion, if you run your tests with xunit.console.x86.exe (or the 64-bit version, I’m assuming), you’ll face the following helpful error:

System.BadImageFormatException: Could not load file or assembly 'Assembly.Test.dll' 
or one of its dependencies. This assembly is built by a runtime newer than the 
currently loaded runtime and cannot be loaded.
File name: 'J:\projects\Framework\build\Assembly.Test.dll'
   at System.Reflection.AssemblyName.nGetFileInformation(String s)
   at System.Reflection.AssemblyName.GetAssemblyName(String assemblyFile)
   at Xunit.Sdk.Executor..ctor(String assemblyFilename)
   at Xunit.ExecutorWrapper.RethrowWithNoStackTraceLoss(Exception ex)
   at Xunit.ExecutorWrapper.CreateObject(String typeName, Object[] args)
   at Xunit.ExecutorWrapper..ctor(String assemblyFilename, String configFilename,
      Boolean shadowCopy)
   at Xunit.ConsoleClient.Program.Main(String[] args)

What?  BadImageFormatException?  But the bitness didn’t change!

@markhneedham blogged last year about how to fix the problem: updating xunit’s config files to “support” the new version.  That worked then, but the version numbers have changed.

Here’s what the new configuration files need to include:

	<startup useLegacyV2RuntimeActivationPolicy="true">
		<supportedRuntime version="v4.0.30319" />
	</startup>

The useLegacyV2RuntimeActivationPolicy attribute ensures that the latest supported runtimes are loaded. For my projects, this seems to keep Oracle.DataAccess.dll and System.Data.Sqlite.dll (both x86 libraries) happy.

The supportedRuntime element denotes the current version of .NET 4.0 (30319 is the RTM build).

After that, everything runs like a champ!

xUnit.net console test runner (32-bit .NET 4.0.30319.1)
Copyright (C) 2007-9 Microsoft Corporation.

xunit.dll:     Version 1.4.9.0
Test assembly: Assembly.Test.dll

Total tests: 397, Failures: 0, Skipped: 0, Time: 7.508 seconds

Using RedGate ANTS to Profile XUnit Tests

August 5, 2009

RedGate’s ANTS Performance and Memory profilers can do some pretty slick testing, so why not automate it?  The “theory” is that if my coverage is hitting all the high points, I’m profiling all the high points and can see bottlenecks.

So, how does this work?  Since the tests are in a compiled library, I can’t just “load” the unit tests. However, you can load Xunit and run the tests.

NOTE: If you’re profiling x86 libraries on an x64 machine, you’ll need XUnit 1.5 CTP (or later), which includes xunit.console.x86.exe.  If you’re on x86 or do not call x86 libraries, pay no attention to this notice. ;)

To begin, start up ANTS Performance Profiler and Profile a New .NET Executable.

[screenshot: XUnit ala ANTS Profiler]

For the .NET Executable, point it towards XUnit and in the Arguments, point it towards the library you are testing.  Simple enough.
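In other words, the profiler ends up launching the equivalent of this (the paths are hypothetical; substitute your own runner and test assembly):

xunit.console.x86.exe C:\projects\Framework\build\Assembly.Test.dll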

Click “Start Profiling” and let the profiling begin!

Now if I could just get the “top 10” methods to export to HTML or something so I could automate this in our reporting.

Visual Studio 2010 & .NET Framework 4.0 Beta 1 Out

May 18, 2009

For MSDN Subscribers, VS2010 and .NET 4.0 beta 1 have hit the downloads site.

After reading through Microsoft’s “Overview” page, my biggest hope is that it doesn’t do “too much”.  Visual Studio is already a monolith of hard-drive-cranking pain as it is.  My Cray is still being held up by our purchasing department—I hope I don’t need it.  Maybe they took Win7’s approach that less is more and trimmed things down?

Overview: http://www.microsoft.com/visualstudio/en-us/products/2010/default.mspx

Beta Docs: http://msdn.microsoft.com/en-us/library/dd831853(VS.100).aspx

The biggest point of interest (so far, I mean—it’s still downloading) is the warning for Win7 RC:


NOTE: If you are installing Visual Studio 2010 Beta 1 on Windows 7 RC, you may receive a compatibility warning when installing SQL Server 2008. For more information and a workaround, please click here.

The link, however, sends you to an explanation of the error—which is simply that Win7 RC requires SQL Server 2008 SP1 or 2005 SP3.  If I remember right, I thought the error message SAID that.  I guess Microsoft is just covering all of the bases.

Finally, it looks like the “official” tag on Twitter is #vs10; however, quite a few of us have been using #vs2010.  Follow both for the full scope.

I’ve got about 20 minutes left on the download and a VM waiting for this to be loaded.  More to come!
