Archive

Archive for the ‘Windows 7’ Category

Ramping up with PSake

September 8, 2009 Comments off

I’ve been teetering back and forth between PSake and my trusty NAnt scripts for quite a while now.  For those not familiar with PSake, it’s build automation that makes you drunk—but in a good way. 😉  You can read James Kovacs’ original post here or check out the repository here for the latest bits.

I originally looked at rake scripts (after exposure while working with Fluent NHibernate) since PowerShell is loathed in our organization—or was.  That mindset is slowly changing (being able to show people that a task originally scoped at a week can be done in two lines of PowerShell script helps); so I’m using PSake as further motivation.

My prior PSake scripts were a bit tame.  Launch msbuild, copy a few files.  With the latest release of xUnit 1.5 hitting the wires over the weekend (and a much needed x86 version for my poor, cranky Oracle libraries), I decided to bite the bullet and dig into PSake.

I had two goals:

  1. Build a reliable framework “default.ps1” file that I could drop into almost any project and configure with little or no effort.
  2. Compile, test, and rollout updates from a single PSake command task.

I borrowed the basic layout from Ayende’s Rhino Mocks PSake; however, I couldn’t get msbuild to run correctly simply by calling it.

Here’s what I ended up with for our internal core library.  The core library isn’t so much a “utilities” container, but just as it sounds—the framework all of our applications are built on to keep connections to our various applications (HR, student systems, data warehouses, etc) consistent, as well as hold our base FNH conventions.

CODE: Full code available on CodePaste.NET

Properties

The properties area holds all of the configuration for the PSake script.  For me, it’s common to configure $solution_name, $libraries_to_merge, and $libraries_to_copy.  With our naming standards, the $test_library should be left unchanged.  I also added in the tester information so we could change from XUnit to MBUnit (if Hell froze over or something).

properties {

  # ****************  CONFIGURE ****************
  $solution_name =           "Framework"
  $test_library =            "$solution_name.Test.dll"

  $libraries_to_merge =      "antlr3.runtime.dll", `
                             "ajaxcontroltoolkit.dll", `
                             "Castle.DynamicProxy2.dll", `
                             "Castle.Core.dll", `
                             "FluentNHibernate.dll", `
                             "log4net.dll", `
                             "system.linq.dynamic.dll", `
                             "xunit.dll", `
                             "nhibernate.caches.syscache2.dll", `
                             "cssfriendly.dll", `
                             "iesi.collections.dll", `
                             "nhibernate.bytecode.castle.dll", `
                             "oracle.dataaccess.dll"

  $libraries_to_copy =       "system.data.sqlite.dll"

  $tester_directory =        "j:\shared_libraries\xunit\msbuild"
  $tester_executable =       "xunit.console.x86.exe"
  $tools_directory =         "$tools"
  $base_directory  =         resolve-path .
  $thirdparty_directory =    "$base_directory\thirdparty"
  $build_directory =         "$base_directory\build"
  $solution_file =           "$base_directory\$solution_name.sln"
  $release_directory =       "$base_directory\release"
}

Clean and easy enough.  You’ll notice that $libraries_to_merge and $libraries_to_copy are implied string arrays.  That works out well since string arrays end up as params when passed to commands… and our $libraries_to_copy can be iterated over later in the code.
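
As a quick illustration of that behavior (the file names here are just placeholders), an array splats into separate arguments for a native executable and also pipes element-by-element:

$libs = "one.dll", "two.dll", "three.dll"

# Each element becomes its own argument to a native command:
& cmd /c echo $libs        # prints: one.dll two.dll three.dll

# ...and the same array pipes cleanly for per-item work:
$libs | % { write-host "would copy $_" }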

Tasks – Default

task default -depends Release

The default task (if just running ‘psake’ without parameters) runs Release.  Easy enough.

Tasks – Clean

task Clean {
  remove-item -force -recurse $build_directory -ErrorAction SilentlyContinue | Out-Null
  remove-item -force -recurse $release_directory -ErrorAction SilentlyContinue | Out-Null
}

Clean up those build and release directories.

Tasks – Init

task Init -depends Clean {
    new-item $release_directory -itemType directory | Out-Null
    new-item $build_directory -itemType directory | Out-Null
    cp $tester_directory\*.* $build_directory
}

Restore those build and release directories that we cleaned up; then copy in our unit testing framework so we can run our tests (if necessary).

Tasks – Compile

task Compile -depends Init {
    # from http://poshcode.org/1050 (first lines to get latest versions)
    [System.Reflection.Assembly]::Load('Microsoft.Build.Utilities.v3.5, Version=3.5.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a') | Out-Null
    $msbuild = [Microsoft.Build.Utilities.ToolLocationHelper]::GetPathToDotNetFrameworkFile("msbuild.exe", "VersionLatest")

    # adding double slash for directories with spaces. Stupid MSBuild.
    &$msbuild /verbosity:minimal /p:Configuration="Release" /p:Platform="Any CPU" /p:OutDir="$build_directory"\\ "$solution_file"
}

Compile is a bit tricky.  As noted in the code, I ended up using a SharePoint example from PoSH code to get MSBuild to behave.  The standard exec methodology provided by PSake kept ignoring my parameters.  Maybe someone has a good reason… but this works.

You also see that my OutDir has TWO slashes.  It seems that directories with spaces require the second.  I’m sure this will somehow bite me later on, but it seems to be working for now. 😉

Tasks – Test

task Test -depends Compile {
  $origin_directory = pwd
  cd $build_directory
  exec .\$tester_executable "$build_directory\$test_library"
  cd $origin_directory
}

I want to thank Ayende for the idea to stash the origin directory in a variable—brilliant.  This one is pretty simple—it just calls the tester and tests.

Tasks – Merge

task Merge {
    $origin_directory = pwd
    cd $build_directory

    remove-item "$solution_name.merge.dll" -erroraction SilentlyContinue
    rename-item "$solution_name.dll" "$solution_name.merge.dll"

    & $tools\ilmerge\ilmerge.exe /out:"$solution_name.dll" /t:library /xmldocs /log:"$solution_name.merge.log" `
        "$solution_name.merge.dll" $libraries_to_merge

    if ($lastExitCode -ne 0) {
        throw "Error: Failed to merge assemblies!"
    }
    cd $origin_directory
}

Merge calls ILMerge and wraps all of my libraries into one.  Do I need to do this?  Nah, but for the framework, I prefer to keep everything together.  I don’t want to be chasing mis-versioned libraries around.  Again, since $libraries_to_merge is a string array, it passes each “string” as a separate parameter—which is exactly what ILMerge wants to see.

I also have ILMerge generate and keep a log of what it did—just to have.  Since the build directory gets blown away between builds (and isn’t replicated to source control), there’s no harm.  Space is mostly free. 😉

Tasks – Build & Release

task Build -depends Compile, Merge {
    # When I REALLY don't want to test…
}

task Release -depends Test, Merge {
    copy-item $build_directory\$solution_name.dll $release_directory
    copy-item $build_directory\$solution_name.xml $release_directory

    # copy libraries that cannot be merged
    $libraries_to_copy | %{ copy-item (join-path $build_directory $_) $release_directory }
}

Build provides just that—building with no testing and no copying to the release directory.  This is more for testing out the scripts, but useful in some cases.

Release copies the library and the XML documentation out to the release directory.  It then iterates through the string array of “other” libraries (non-managed code libraries that can’t be merged, etc) and copies them as well.
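
With everything wired up, kicking off a build is a one-liner from the project directory.  This assumes the psake.ps1 script from the repository is on your path; the exact parameters vary a bit by psake version:

# Runs the default task, which depends on Release (compile, test, merge, copy):
.\psake.ps1

# Or run a specific task, e.g. Build, to skip testing:
.\psake.ps1 default.ps1 Build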

 

 

 

Querying Oracle using PowerShell

September 1, 2009 4 comments

Yesterday, I wrote up a quick bit of code to query out our SQL Servers.  Initially, I wanted a speedy way to hit, parse, and report back log4net logs in our “server status” scripts.

Well, never one to leave something alone, I started tinkering with Oracle support.  In our enterprise, most of our key systems sit on Oracle and there are SEVERAL opportunities for quick data retrieval routines that could help out in daily work.

Plus, doing an Oracle query in PowerShell beats the five-minute process of cranking up Oracle SQL Developer for a simple, single query. 🙂

CODE: The full source of this is available here on codepaste.net.

param (
    [string]$server = ".",
    [string]$instance = $(throw "a database name is required"),
    [string]$query
)

[System.Reflection.Assembly]::LoadWithPartialName("System.Data.OracleClient") | out-null
$connection = new-object system.data.oracleclient.oracleconnection( `
    "Data Source=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=$server)(PORT=1521)) `
    (CONNECT_DATA=(SERVICE_NAME=$instance)));User Id=USER_ID;Password=PASSWORD;");

$set = new-object system.data.dataset   

$adapter = new-object system.data.oracleclient.oracledataadapter ($query, $connection)
$adapter.Fill($set)

$table = new-object system.data.datatable
$table = $set.Tables[0]

#return table
$table

I chose to use the OracleClient library for simplicity’s sake.  I could have used ODP.Net; however, that’d make my scripts FAR less portable.  Since OracleClient isn’t loaded by default in PowerShell, this script loads it.  In addition, I chose to use the TNS-less connection string as I don’t typically keep a ‘tnsnames.ora’ file on my computer.  This further adds to the portability of the script.
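
Saved as a script and aliased (I’ll call it oq.ps1 here to match the ‘oq’ calls below; the alias and the server names are my own placeholders), usage looks like:

# Explicit server and service name:
.\oq.ps1 -server dbhost01 -instance ORCL -query "SELECT sysdate FROM dual"

# Results come back as DataRow objects, so the usual pipeline tricks apply:
.\oq.ps1 dbhost01 ORCL "SELECT * FROM Schools" | sort-object name | format-table -autosize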

Past that and the change from SqlClient to OracleClient, the rest of the code is the same from the prior example.

Dealing With Empty Strings and Nulls

One thing that I did run across that differed between Oracle and Microsoft SQL revolved around how empty strings were dealt with when parsing using PowerShell.

Example:

oq "SELECT * FROM Schools"

ID  NAME        PRINCIPAL_EMAIL_ADDRESS
--  ----        -----------------------
100 School
102 School
112 School      user3@domain.net
140 School      user1@domain.net
etc.

Now, what if I wanted to see just the schools missing a principal_email_address?  I’d just rewrite my SQL query, right?  Yeah, probably, but for the sake of argument (and some scripting practice), let’s filter in PowerShell.

oq "SELECT * FROM Schools" | ? { $_.principal_email_address -eq "" }

No results.

What? Why not?  I saw two in my last query.  Unfortunately, dealing with “nulls” and empty strings can get a bit tricky when pulling from database data.  With Microsoft SQL, a text-based column (varchar, ntext, etc) seems to handle -eq "" just fine, but Oracle is less than pleased.  @ShayLevy suggested -eq [string]::Empty, but that didn’t pull through either.

From a prior experiment, I also tried -eq $null and was greeted with something very different—it returned all results. Meh.

Randomly, I tried -like $null and it worked. Well, that’s interesting.  So the value isn’t empty in Oracle, but it is “like” a null.  After a bit more digging, I discovered that the real value is -eq [DBNull]::Value.

oq “SELECT * FROM Schools” | ? { $_.principal_email_address -eq [DBNull]::Value }

ID  NAME        PRINCIPAL_EMAIL_ADDRESS
--  ----        -----------------------
100 School
102 School

It makes sense… but more testing is required to see which is more reliable for a wide variety of data types.  I like the concept of “like null” to simulate “string empty or null”.  Further testing required. 🙂
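
Until then, a tiny filter keeps the DBNull dance out of my pipelines—a minimal sketch, assuming the column value comes back as either a string or [DBNull]::Value:

filter where-empty ([string]$column) {
    # Pass the row through when the column is DBNull or an empty string.
    $value = $_.$column
    if ($value -eq [DBNull]::Value -or [string]::IsNullOrEmpty("$value")) { $_ }
}

# Usage against the earlier query:
oq "SELECT * FROM Schools" | where-empty principal_email_address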

 

Querying SQL Server using PowerShell

August 31, 2009 2 comments

The great thing about PowerShell is direct access to objects.  For my apps, database connectivity is happily handled by NHibernate; however, that doesn’t mean we can’t take advantage of good old System.Data.SqlClient for our PowerShell scripting.

CODE: The full source of this is available here on codepaste.net.

param (
    [string]$server = ".",
    [string]$instance = $(throw "a database name is required"),
    [string]$query
)

$connection = new-object system.data.sqlclient.sqlconnection( `
    "Data Source=$server;Initial Catalog=$instance;Integrated Security=SSPI;");
   
$adapter = new-object system.data.sqlclient.sqldataadapter ($query, $connection)
$set = new-object system.data.dataset

$adapter.Fill($set)

$table = new-object system.data.datatable
$table = $set.Tables[0]

#return table
$table

Not too long or challenging—it’s mostly just instantiating a quick SQL connection and passing in your query.  I even considered plugging in a check on the $query parameter to ensure it begins with SELECT so I don’t do accidental damage to a system. Maybe I’m just paranoid. 😉
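
For the curious, that guard would only be a couple of lines (not in the posted script; purely illustrative):

# Paranoia: only allow read-only queries through.
if (-not $query.Trim().StartsWith("SELECT", [StringComparison]::OrdinalIgnoreCase)) {
    throw "Read-only queries only, please."
}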

What this query script allows me to do is quickly add log4net checking into some of my server monitoring PowerShell scripts.

query sqlServer myDatabase "Select count(id), logger from logs group by logger" | format-table -autosize

Notice I didn’t include the format-table command in my main query script.  Why?  I wanted to keep the flexibility to select, group, and parse the information returned by my query.  Unfortunately, it seems that the format commands break that if they’re run before a manipulation command.  Adding in “ft -a” isn’t difficult in a pinch.
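
For example, keeping format-table at the very end leaves real objects in the pipeline for sorting and filtering along the way (I’ve aliased the count column here so it can be sorted on):

query sqlServer myDatabase "Select count(id) as hits, logger from logs group by logger" |
    sort-object hits -descending |
    select-object -first 5 |
    format-table -autosize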


Quick and easy…

Other uses:

  • Customer calls up with a question about data—save time and do a quick query rather than waiting for Management Studio to wind up.
  • Keep tabs on database statistics, jobs, etc.
  • and more…

Digging into the Event Log with PowerShell

August 25, 2009 Comments off

There are a few of our applications that haven’t been converted over to log4net logging so their events still land in the good ol’ Windows Event Log.  That’s fine and was fairly easy to browse, sort, and filter using the new tools in Windows Server 2008.

Over the past few hours, however, I’ve found a better tool for digging into the logs on short notice and searching—obviously, PowerShell.

Full source for this can be found here.

I wanted to be able to quickly query out:

  • the time – to look at trending,
  • the user – trending, and filtering if I have them on the phone,
  • the URL – shows both the application and the page the problem is occurring on,
  • the type – the exception type for quick filtering,
  • the exception – the core of the issue,
  • the details – lengthy, but can be ever so helpful even showing the line number of the code in question.

param ([string]$computerName = (gc env:computername))

function GetExceptionType($type, $logEvent)
{
 if ($type -ne "Error") { $logEvent.ReplacementStrings[17] }
 else {
        $rx = [regex]"Exception:.([0-9a-zA-Z].+)"
        $matches = $rx.match($logEvent.ReplacementStrings[0])
        $matches.Groups[1].Value
 }
}

function GetException($type, $logEvent)
{
 if ($type -ne "Error") { $logEvent.ReplacementStrings[18] }
 else {
        $rx = [regex]"Message:.([0-9a-zA-Z].+)"
        $matches = $rx.match($logEvent.ReplacementStrings[0])
        $matches.Groups[1].Value
 }
}

get-eventlog -log application -ComputerName $computerName |
    ? { $_.Source -eq "ASP.NET 2.0.50727.0" } |
    ? { $_.EntryType -ne "Information" } |
    select `
  Index, EntryType, TimeGenerated, `
  @{Name="User"; Expression={$_.ReplacementStrings[22]}}, `
  @{Name="Url"; Expression={truncate-string $_.ReplacementStrings[19] 60 }}, `
  @{Name="Type"; Expression={GetExceptionType $_.EntryType $_ }}, `
  @{Name="Exception"; Expression={GetException $_.EntryType $_ }}, `
  @{Name="Details"; Expression={$_.ReplacementStrings[29]}}

The code itself is probably pretty overworked and, I hope, can be refined as time goes on.

The two helper functions, GetExceptionType and GetException, exist because (it seems) Warnings and Information store their details in individual replacement strings while Errors store theirs in one HUGE blob of text that needs to be parsed.  Those helpers provide that switch logic.

The get-eventlog logic itself is pretty straightforward:

  1. Open up the ‘Application’ EventLog on the specified computer,
  2. Filter only “ASP.NET 2.0.50727.0” sourced events,
  3. Exclude “Information” type events,
  4. Select 3 columns and generate 5 columns from expressions.

The great advantage is I can then take this file and “pipe” it into other commands.

get-aspnet-events webserver1 | select user, url, type | format-table -auto

User               Url                               Type
----               ---                               ----
domain\dlongnecker http://domain.net/Create.aspx     PreconditionException
domain\dlongnecker http://domain.net/Create.aspx     PreconditionException
domain\dlongnecker http://domain.net/View.aspx       PostconditionException
domain\dlongnecker http://domain.net/View.aspx       AssertionException

or

get-aspnet-events webserver1 | ? { $_.user -like "*dlongnecker" }

The possibilities are great—and a real time saver compared to hitting each server and looking through the GUI tool.

The code also includes a helper method I created for truncating strings available here via codepaste.  If there’s built-in truncating, I’d love to know about it.
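
For reference, here’s a minimal sketch of what that helper amounts to (the real version lives at the codepaste link):

function truncate-string ([string]$value, [int]$length) {
    # Return the string as-is if it already fits; otherwise chop it.
    if ($value.length -le $length) { $value }
    else { $value.substring(0, $length) }
}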

 

Using Git (and everything else) through PowerShell

August 21, 2009 5 comments

After a discussion on Stack Overflow a few days ago (and hopefully a useful answer), I got to thinking a bit about how I use PowerShell.  It may be a bit geekish, but PowerShell starts up on Windows startup for me.  The prompt is almost always open on a second monitor—ready for whatever task I may need.

As the SO post mentioned, I also use PowerShell to connect to my Git repositories.  At the office, it has a few more customizations to hash out against our *shudder* SourceSafe */shudder* repositories, but that’s a different post.

For now, I wanted to walk through how my profile script is set up in a bit more detail than the SO post.

Creating a Profile Script

UPDATE: The full source code (plus a few extras) for this article can be found here : http://codepaste.net/53a7z6

A profile script is essentially a “startup” script for your PowerShell environment. 

By default (perhaps a registry key changes this), it’s located in %userprofile%\Documents\WindowsPowerShell and is aptly named Microsoft.PowerShell_profile.ps1.  The naming convention between “WindowsPowerShell” and “MicrosoftPowerShell” is a bit annoying, but not a big problem.
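
Handily, PowerShell tracks the expected path in the $profile variable, so bootstrapping the file is a one-liner:

# Create the profile file (and its directory) if it doesn't exist yet:
if (!(test-path $profile)) { new-item -type file -path $profile -force | out-null }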

The file is just plain text, so feel free to use your editor of choice or PowerShell ISE (Windows 7, Windows 2008 R2) for some fancy content highlighting.

What goes in here?

As far as I can tell, the profile is a great place to initialize global customizations:

  • environmental variables,
  • paths,
  • aliases,
  • functions that you don’t want extracted to .ps1 files,
  • customizations to the console window,
  • and, most importantly, the command prompt.

The Console

I use Console2 rather than the standard PowerShell prompt.  Console2 is an amazing open source alternative to the standard console and includes features such as ClearType, multiple tabs, and more.  Check it out.

I also use Live Mesh, so there are a few things here that are unnecessary for most users.  Live Mesh is an online synchronization service… so my PowerShell scripts (amongst other things) stay synced between my home and work environments.

My PowerShell Prompt At Startup

Preparing the Environment

My profile script starts off by setting up a few global variables for paths.  I use a quick function to set up the parameters based on the computer I’m currently using.

# General variables
$computer = get-content env:computername

switch ($computer)
{
    "WORKCOMPUTER_NAME" {
        ReadyEnvironment "E:" "dlongnecker" $computer ; break }
    "HOMECOMPUTER_NAME" {
        ReadyEnvironment "D:" "david" $computer ; break }
    default {
        break; }
}

function ReadyEnvironment (
            [string]$sharedDrive,
            [string]$userName,
            [string]$computerName)
{
    set-variable tools "$sharedDrive\shared_tools" -scope 1
    set-variable scripts "$sharedDrive\shared_scripts" -scope 1
    set-variable rdpDirectory "$sharedDrive\shared_tools\RDP" -scope 1
    set-variable desktop "C:\Users\$userName\DESKTOP" -scope 1
    Write-Host "Setting environment for $computerName" -foregroundcolor cyan
}

Easy enough.  I’m sure I could optimize this a bit more, but it works.  Again, this wouldn’t be necessary on a single computer, but since I use Live Mesh and the same PowerShell profile on multiple computers—this keeps my paths in check.

The second step is to modify the $PATH environmental variable to point to my scripts and Git as well as add a new $HOME variable to satisfy Git’s needs.

# Add Git executables to the mix.
[System.Environment]::SetEnvironmentVariable("PATH", $Env:Path + ";" + (Join-Path $tools "\PortableGit-1.6.3.2\bin"), "Process")

# Add our scripts directory in the mix.
[System.Environment]::SetEnvironmentVariable("PATH", $Env:Path + ";" + $scripts, "Process")

# Setup Home so that Git doesn't freak out.
[System.Environment]::SetEnvironmentVariable("HOME", (Join-Path $Env:HomeDrive $Env:HomePath), "Process")

Customizing the Console Prompt

The ‘prompt’ function overrides how the command prompt is generated and allows a great deal of customization.  As I mentioned in the SO post, the inspiration for my Git prompt comes from this blog post.

I’ve added quite a few code comments in here for reference. 

function prompt {
    Write-Host("")
    $status_string = ""

    # check to see if this is a directory containing a symbolic reference,
    # fails (gracefully) on non-git repos.
    $symbolicref = git symbolic-ref HEAD
    if($symbolicref -ne $NULL) {

        # if a symbolic reference exists, snag the last bit as our
        # branch name. eg "[master]"
        $status_string += "GIT [" + `
            $symbolicref.substring($symbolicref.LastIndexOf("/") +1) + "] "

        # grab the differences in this branch
        $differences = (git diff-index --name-status HEAD)

        # use a regular expression to count up the differences.
        # M`t, A`t, and D`t refer to M {tab}, etc.
        $git_update_count = [regex]::matches($differences, "M`t").count
        $git_create_count = [regex]::matches($differences, "A`t").count
        $git_delete_count = [regex]::matches($differences, "D`t").count

        # place those variables into our string.
        $status_string += "c:" + $git_create_count + `
            " u:" + $git_update_count + `
            " d:" + $git_delete_count + " | "
    }
    else {
        # Not in a Git environment, must be PowerShell!
        $status_string = "PS "
    }

    # write out the status_string with the appropriate color.
    # prompt is done!
    if ($status_string.StartsWith("GIT")) {
        Write-Host ($status_string + $(get-location) + ">") `
            -nonewline -foregroundcolor yellow
    }
    else {
        Write-Host ($status_string + $(get-location) + ">") `
            -nonewline -foregroundcolor green
    }
    return " "
}

The prompts are then color coded, so I can keep track of where I am (as if the really long prompt didn’t give it away).

Prompts

Now, with our prompts and our pathing setup to our Git directory, we have all the advantages of Git—in a stellar PowerShell package.

NOTE: I would like to point out that I use PortableGit, not the installed variety.  Since Git also moves back and forth across my Live Mesh, it seemed more reasonable to use the Portable version.  I don’t believe, however, there would be a difference as long as the \bin directory is referenced.

Setting up Aliases—The Easy Way

Brad Wilson’s implementation of find-to-set-alias is brilliant.  Snag the script and get ready for aliasing the easy way.  I keep my most common tools aliased—Visual Studio, PowerShell ISE, and Notepad2.  I mean, is there anything else?  (Well, yes, but I have Launchy for that).

Using find-to-set-alias is easy—provide a location, an executable, and an alias name:

find-to-set-alias 'c:\program files*\Microsoft Visual Studio 9.0\Common7\IDE' devenv.exe vs
find-to-set-alias 'c:\windows\system32\WindowsPowerShell\v1.0\' PowerShell_ISE.exe psise
find-to-set-alias 'c:\program files*\Notepad2' Notepad2.exe np

Helpers – Assembly-Info

After getting tired of loading up System.Reflection.Assembly every time I wanted to see what version of a library I had, I came up with a quick script that dumps out the name of the assembly and the file version.

param(
  $file = $(throw "An assembly file name is required.")
)

$fullpath = (Get-Item $file).FullName
$assembly = [System.Reflection.Assembly]::Loadfile($fullpath)

# Get name, version and display the results
$name = $assembly.GetName()
$version = $name.version

"{0} [{1}]" -f $name.name, $version

With this, running assembly-info NHibernate.dll returns:

NHibernate [2.1.0.4000]

Nifty.

Taking it a step further, I created a quick function in my profile called ‘aia’ or ‘assembly info all’ that runs assembly-info (aliased to ‘ai’) on all .dlls in the directory.

function aia {
    get-childitem | ?{ $_.extension -eq ".dll" } | %{ ai $_ }
}

Now, in that same directory, I get:

Antlr3.Runtime [3.1.0.39271]
Castle.Core [1.1.0.0]
Castle.DynamicProxy2 [2.1.0.0]
FluentNHibernate [0.1.0.0]
Iesi.Collections [1.0.1.0]
log4net [1.2.10.0]
Microsoft.Practices.ServiceLocation [1.0.0.0]
Moq [4.0.812.4]
MySql.Data [6.0.4.0]
NHibernate.ByteCode.Castle [2.1.0.4000]
NHibernate [2.1.0.4000]
System.Data.SQLite [1.0.60.0]
System.Web.DataVisualization.Design [3.5.0.0]
System.Web.DataVisualization [3.5.0.0]
xunit [1.1.0.1323]

Stellar.

Helpers – Visual Studio “Here”

This was created totally out of laziness.  I had already set up an alias to Visual Studio (‘vs’); however, I didn’t want to type “vs .\projectName.sln”.  That’s a lot.  I mean, look at it.

So, a quick, and admittedly dirty, method to either:

  1. Open the passed solution,
  2. If multiple .sln exist in the directory, open the first one,
  3. If only one .sln exists, open that one.

I don’t often have multiple solution files in the same directory, so #3 is where I wanted to end up.

function vsh {
    param ($param)

    if ($param -eq $NULL) {
        "A solution was not specified, opening the first one found."
        $solutions = get-childitem | ?{ $_.extension -eq ".sln" }
    }
    else {
        "Opening {0} …" -f $param
        vs $param
        break
    }

    if ($solutions.count -gt 1) {
        "Opening {0} …" -f $solutions[0].Name
        vs $solutions[0].Name
    }
    else {
        "Opening {0} …" -f $solutions.Name
        vs $solutions.Name
    }
}

That’s about the gist of it.  The challenge (and fun part) is to keep looking for ways to improve common processes using Git.  As those opportunities arise, I’ll toss them out here. 🙂

UPDATE: The full source code (plus a few extras) for this article can be found here : http://codepaste.net/53a7z6

Tips for Booting/Using VHDs in Windows 7

August 6, 2009 3 comments

Both Windows 7 and Windows Server 2008 R2 (aka Windows 7 Server) support booting directly from a VHD.  This is FANTASTIC, AWESOME, and other bolded, all-caps words.  For the full details, check out Hanselman’s handy post.

I’m a HUGE user of differencing disks.  My layout follows the basic structure of:

  • system (parent/dynamically expanding)
    • environment (child of system/differencing)
      • task (child of environment/differencing)
  • Windows Server 2008 R2 (2008r2.vhd)
    • VS2008 + tools (vs2008.vhd)
      • “production” work (projectName.vhd)
      • freelance/open source work (dev1.vhd)
      • tinkering (dev3.vhd)
    • VS2010 + tools (vs2010.vhd)
      • tinkering (dev2.vhd)
  • Windows 7 (win7.vhd)
    • Simulated client “a” environment (client-a.vhd)
    • Simulated client “b” environment (client-b.vhd)

The great thing is, I have a single “2008r2.vhd” and “win7.vhd” as a baseline.  A customer calls and needs a quick mockup?  I can instantiate a new development environment in moments (or quicker via PowerShell scripts).  Who really wants to walk through reinstalling the operating system again anyway?  Not me.

With that, here’s a few tips for situations I ran into building up my environment.

Q: I had a series of VHDs from [Virtual Server 2005 R2 | Virtual PC 2007 | The Interwebz] and they won’t work.
Correct.  Only VHDs from Hyper-V or created directly in Windows 7 or Server 2008 R2 (that R2 part is important) using DISKPART are bootable.


Q: My system will not boot after installing!  I just get a BSOD!

If you can catch the BSOD message or press F8 and turn off automatic reboot, the error reads:

“An initialization failure occurred while attempting to boot from a VHD.  The volume that hosts the VHD does not have enough free space to expand the VHD.”
What?  Huh?  We set up dynamically expanding VHDs… why would it need all of the free space?  Well, it seems that to boot from a VHD, Windows expands it to full capacity (presumably zero-filling it, since I don’t see added latency on boot-up).  If you’re like me, you probably set your “dynamically expanding disk” to a wild maximum capacity, such as 200GB.  Even if you have close to that much free space, it’s likely that the parent/child VHD chains are split across multiple partitions/spindles.

That’s a gotcha.

Lesson: Be prudent with how you size your VHDs.  Ensure you have room for your intentions, but also ensure you have enough physical capacity.
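
For example, when building the baseline disk in DISKPART, pick a maximum you can actually host.  The path and size here are illustrative; maximum is in megabytes, so 40960 caps the disk at 40GB:

create vdisk file="d:\vm\basedrives\2008r2.vhd" maximum=40960 type=expandable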

Here’s how to fix the problem without totally reinstalling your VHD.

  1. Boot into your parent operating system and attach the VHD as a partition using either DISKPART or the Disk Management GUI.
    1. select vdisk file="d:\vm\basedrives\2008r2.vhd"
    2. attach vdisk
  2. Shrink the VHD using the Disk Management GUI (it’s just easier, trust me).  If your original maximum capacity was 200GB and you only have 150GB free, set it to 120GB or something reasonable.
  3. Use the free VHDResizer tool to trim off the excess “maximum capacity” of your newly shrunken VHD.  You can get VHDResizer here.  Set the maximum size to the same size as your new partition size.
    1. VHDResizer will require you to specify a new name for the resized VHD.  After it’s done, rename the old VHD to “file_old.vhd” and the new VHD to the same as your old file to ensure the boot manager picks up the VHD.
  4. Restart and continue along with configuring your new system.
Q: The Parent Disk is Complete.  How do I create a Differencing Disk?

Creating a differencing disk is pretty easy—a few commands in DISKPART from an administrator-privileged console window and you’re set.

Before doing any of this, be sure that you’ve defragmented and run the precompactor in your VHD.  This cleans up the data and zeros out the free space so that it compacts nicely.  If you don’t want to install Virtual Server to get the ISO image for the PreCompactor (though I recommend this just to be safe), you can download an ‘extracted’ version from eeSJae.com.  Here’s a direct link to the precompact.exe file.
  1. Using DISKPART, select your parent VHD, compact it, and create a child (differencing) disk.
    1. select vdisk file="d:\vm\basedrives\2008r2.vhd"
    2. compact vdisk
    3. create vdisk file="d:\vm\vs2008.vhd" parent="d:\vm\basedrives\2008r2.vhd"
  2. Run bcdedit /v and grab the {guid} of your existing VHD boot loader.
  3. Use BCDEDIT to replace the 'device' and 'osdevice' VHD paths.
    1. bcdedit /set {guid} device vhd=[LOCATE]\vm\vs2008.vhd
    2. bcdedit /set {guid} osdevice vhd=[LOCATE]\vm\vs2008.vhd
  4. Browse (using Windows Explorer, command window, etc) to your original, newly parent VHD (2008r2.vhd in this example) and mark it as read-only for safe keeping.
  5. Reboot and load up your new differencing disk.
Quick note: In BCDEdit, the [LOCATE] tag is super—it allows the boot loader to FIND the location of the file rather than you specifying it.  This is great if your drive letters tend to bounce around (which they will… a bit).

Be aware that the earlier note about VHDs expanding to their full size still applies.  You now have the static size of your parent VHD plus the “full size” of your new differencing disk (which inherits the parent’s maximum size).  If your parent is 8GB and the maximum size is 120GB, you’re now using 128GB, not 120GB.  Keep that in mind as you chain differencing disks. 🙂

Q: DVDs are annoying.  I can mount VHDs, why can’t I mount ISOs?

Who knows.  At least with Windows 7, we can actually BURN ISO images… much like 1999.  In either case, I recommend tossing SlySoft’s Virtual CloneDrive on your images (and your host).  It’s fast, mounts ISOs super easy, and saves a TON of time.

Using RedGate ANTS to Profile XUnit Tests

August 5, 2009 3 comments

RedGate’s ANTS Performance and Memory profilers can do some pretty slick testing, so why not automate it?  The “theory” is that if my coverage is hitting all the high points, I’m profiling all the high points and can see bottlenecks.

So, how does this work?  Since the tests are in a compiled library, I can’t just “load” the unit tests. However, you can load Xunit and run the tests.

NOTE: If you’re profiling x86 libraries on an x64 machine, you’ll need xUnit 1.5 CTP (or later), which includes xunit.console.x86.exe.  If you’re on x86 or do not call x86 libraries, pay no attention to this notice. 😉

To begin, start up ANTS Performance Profiler and Profile a New .NET Executable.

XUnit ala ANTS Profiler

For the .NET Executable, point it towards XUnit and in the Arguments, point it towards the library you are testing.  Simple enough.
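
Under the hood, the profiler session ends up launching the equivalent of this command line (paths here are illustrative):

xunit.console.x86.exe "C:\projects\Framework\build\Framework.Test.dll"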

Click “Start Profiling” and let the profiling begin!

Now if I could just get the “top 10” methods to export to HTML or something so I could automate this in our reporting.

Configuring Oracle SQL Developer for Windows 7

I’m fully Vista-free now and loving it; however, Oracle SQL Developer has (once again) decided to simply be annoying to configure.

<rant>Yes, I realize Oracle hates Microsoft.  Girls, you’re both pretty—please grow up.</rant>

Anyway, after a bit of hunting, I think I’ve found a good mix for those of us who love SQL Developer for coding and testing, but don’t “use” a lot of the Oracle proprietary junk features that comes with it.

Originally, I was going to include a couple of the configuration files; however, they’re spread out EVERYWHERE and, frankly, I can’t find them all. 😦  I also can’t figure out a way to import settings (without blowing my settings away first).

File Paths

As I mentioned before, some of the configuration files are spread out—everywhere.  Here’s where the most common files are located.

sqldeveloper.conf – <sqldeveloper folder>\sqldeveloper\bin\

ide.conf – <sqldeveloper folder>\ide\bin\

User-Configured Settings – %appdata%\SQL Developer\

I prefer to make my modifications in sqldeveloper.conf; however, a lot of the resources that pop up on Google make them in ide.conf.  Why?  I’m not sure.  sqldeveloper.conf simply CALLS ide.conf.  Meh.

Fixing Memory Consumption on Minimize

I found a reference to JDeveloper (another Java utility) that discussed how JDeveloper (and similarly, SQL Developer) pages everything when you minimize the window.

To fix this, open up sqldeveloper.conf and add the following line:

AddVMOption -Dsun.awt.keepWorkingSetOnMinimize=true

Fixing Aero Basic Theme

Tired of your IDE swapping back to Aero Basic whenever you launch SQL Developer?  Me too.  For now, Oracle reports that SQL Developer doesn’t support the full Aero Theme… or does it?

To enable Aero support (or at least keep it from bouncing back to Aero Basic), open up sqldeveloper.conf and add the following line:

AddVMOption -Dsun.java2d.noddraw=true

The Oracle forums also recommend trying the following line:

AddVMOption -Dsun.java2d.ddoffscreen=false

That option, however, never resolved the issue for me.  Your mileage may vary.

Cleaning Up The UI

The default UI leaves a lot to be desired for Oracle SQL Developer.  Here’s a few UI tips to try out.  These settings are found under Tools > Preferences.

Change the Theme – Environment > Theme. 

I like Experience Blue.  It’s clean, simple, and goes well with Windows 7’s look and feel.

Change Fonts – Code Editor > …

There are quite a few fonts that can be configured.  Here’s what I changed:

Code Insight – Segoe UI, 12
Display – check ‘Enable Text Anti-Aliasing’
Fonts – Consolas, 11
Printing – Consolas, 10

Disable Unnecessary Extensions – Extensions > …

Honestly, I don’t use ANY of the extensions, so I disabled everything and unchecked ‘Automatically Check for Updates’.  I’ve noticed that load time for the UI is insanely fast now (well, insanely fast for a Java app on Windows).

Window Locations

The only thing that I can’t figure out how to fix is the window location and placement.  Example: When you open a new worksheet, the results area is not visible (you have to drag that frame up each time).  That annoys me to no end and I can’t find a place to ‘save current window layout’ or similar.  Ideas?

That’s it!

With that, SQL Developer loads up quickly, connects, and displays just fine in Windows 7.

Visual Studio 2010 & .NET Framework 4.0 Beta 1 Out

May 18, 2009 Comments off

For MSDN Subscribers, VS2010 and .NET 4.0 beta 1 have hit the downloads site.

After reading through Microsoft’s “Overview” page—my biggest hope is that it doesn’t do “too much”.  Visual Studio is already a monolith of hard drive cranking pain as it is.  My Cray is still being held up by our purchasing department—I hope I don’t need it.  Maybe they took Win7’s approach that less is more and trimmed things down?

Overview: http://www.microsoft.com/visualstudio/en-us/products/2010/default.mspx

Beta Docs: http://msdn.microsoft.com/en-us/library/dd831853(VS.100).aspx

The biggest point of interest (so far, I mean—it’s still downloading) is the warning for Win7 RC:

NOTE: If you are installing Visual Studio 2010 Beta 1 on Windows 7 RC, you may receive a compatibility warning when installing SQL Server 2008. For more information and a workaround, please click here.

The link, however, sends you to an explanation of the error—which is simply that Win7 RC requires SQL Server 2008 SP1 or 2005 SP3.  If I remember right, I thought the error message SAID that.  I guess Microsoft is just covering all of the bases.

Finally, it looks like the “official” tag on Twitter is #vs10; however, quite a few of us have been using #vs2010.  Follow both for the full scope.

I’ve got about 20 minutes left on the download and a VM waiting for this to be loaded.  More to come!

Fixing Streaming In Windows 7 (.asx Files)

After tolerating the issue through the last beta and since installing RC, I decided to dedicate some time this afternoon to researching the cause and looking for a fix.

The problem:

When attempting to play any .asx streaming playlist file, Windows would report that it could not locate the stream.

“Windows Media Player cannot play the file.  The Player might not support the file type or might not support the codec that was used to compress the file.”

WMP12 Streaming Error Image

The initial response (usually) to this is that the codec is missing or corrupted.  I think I tried almost every codec pack around—no dice.

The fix:

A bit of registry “checking” is in order.  There are four keys I updated:

[HKEY_CLASSES_ROOT\.avi]
@="WMP11.AssocFile.AVI"

[HKEY_CLASSES_ROOT\.asf]
@="WMP11.AssocFile.ASF"

[HKEY_CLASSES_ROOT\.asx]
@="WMP11.AssocFile.ASX"

[HKEY_CLASSES_ROOT\.wmv]
@="WMP11.AssocFile.WMV"

Note: The @= equates to the (Default) value.
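
If you’d rather script the fix than hand-edit the registry, the same four values can be set from an elevated PowerShell prompt (a sketch of the same change; use at your own risk):

# Point the four extensions back at their WMP11 file associations.
$assoc = @{ ".avi" = "WMP11.AssocFile.AVI"; ".asf" = "WMP11.AssocFile.ASF";
            ".asx" = "WMP11.AssocFile.ASX"; ".wmv" = "WMP11.AssocFile.WMV" }

foreach ($ext in $assoc.keys) {
    set-itemproperty -path "Registry::HKEY_CLASSES_ROOT\$ext" -name "(Default)" -value $assoc[$ext]
}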

Browse the web to a streaming source (I used Solo Piano Radio—calm, relaxing music for reading and such) and launch the streaming link.  It should work like a champ—like it has for years.

So far I’ve implemented this on three different computers running Windows 7—so far, so good.

Resources:

According to the official Microsoft Windows 7 (Media) forums, it seems one of the Windows Live installers is to blame; however, if you catch the verbiage, they seem to pin it on “another vendor” without much more of a second thought.

The fact that this issue survived to RC shocks me; however, all things considered—Win7 is still running like a champ.