Archive

Archive for the ‘Windows Server’ Category

Mashing CSVs around using PowerShell

February 15, 2012 2 comments

Since I spend most of my day in the console, PowerShell also serves as my ‘Excel’. So, continuing my recent trend of PowerShell related posts, let’s dig into a quick and easy way to parse up CSV files (or most any type of file) by creating objects!

We, of course, need a few rows of example data. Let’s use this pseudo student roster.

Example data:

Student,Code,Product,IUID,TSSOC,Date
123456,e11234,Reading,jsmith,0:18,1/4/2012
123456,e11234,Reading,jsmith,1:04,1/4/2012
123456,e11234,Reading,jsmith,0:27,1/5/2012
123456,e11234,Reading,jsmith,0:19,1/7/2012
123456,e11235,Math,jsmith,0:14,1/7/2012

Now, for reporting, I want my ‘Minutes’ to be a calculation of the TSSOC column (hours:minutes). Easy, we have PowerShell; it can split AND multiply!

The code:

Begin by creating an empty array to hold our output, importing our data into the ‘pipe’, and opening up an iteration (ForEach-Object, aliased as %). The final $out is our return value: calling our array so we can see our results.

$out = @()
import-csv data_example.csv |
   % {

   }
$out

Next, let’s add in our logic to split out the hours and minutes. We have full access to the .NET string methods in PowerShell, which includes .Split(). .Split() returns an array, so since we have HH:MM, our first element is our hours and our second is our minutes. Hours then need to be multiplied by 60 to convert them to minutes.

You’ll also notice the [int] casting, which ensures we can properly multiply. Give it a whirl without the cast and you’ll get the hour digit repeated 60 times back (PowerShell repeats the string rather than multiplying it).

$out = @()
import-csv data_example.csv |
   % {
	$hours = [int]$_.TSSOC.Split(':')[0] * 60
	$minutes = [int]$_.TSSOC.Split(':')[1]
   }
$out

The next step is to create a new object to contain our return values. We can use the new PowerShell v2.0 syntax to create a quick hashtable of our properties and values. Once we have our item, add it to our $out array.

$out = @()
import-csv data_example.csv |
   % {
	$hours = [int]$_.TSSOC.Split(':')[0] * 60
	$minutes = [int]$_.TSSOC.Split(':')[1]
        $item = new-object PSObject -Property @{
			Date=$_.Date;
			Minutes=($hours + $minutes);
			UserId=$_.IUID;
			StudentId=$_.Student;
			Code=$_.Code;
			Course=$_.Product
		}
	$out = $out + $item
   }

With that, we’re done. We can pipe it to Sort-Object for a bit of sorting, grouping, table formatting, or even export it BACK out as another CSV.

$out = @()
import-csv data_example.csv |
   % {
	$hours = [int]$_.TSSOC.Split(':')[0] * 60
	$minutes = [int]$_.TSSOC.Split(':')[1]
        $item = new-object PSObject -Property @{
			Date=$_.Date;
			Minutes=($hours + $minutes);
			UserId=$_.IUID;
			StudentId=$_.Student;
			Code=$_.Code;
			Course=$_.Product
		}
	$out = $out + $item
   }
$out | sort Date, Code | ft -a

Quick and dirty CSV manipulation–all without opening anything but the command prompt!

UPDATE: Matt has an excellent point in the comments below. PowerShell isn’t the ‘golden hammer’ for every task; it’s about finding the right tool for the job. We’re a mixed environment (Windows, Solaris, RHEL, Ubuntu), so PowerShell only applies to our Windows boxes. However, as a .net developer, I spend 80-90% of my time on those Windows boxes. So let’s say it’s a silver hammer. 🙂

Now, the code in this post looks pretty long, and hopping back and forth between notepad, the CLI, and your CSV is tiresome. I bounce back and forth between the CLI and notepad2 with the ‘ed’ and ‘ex’ functions (these commands are ‘borrowed’ from Oracle’s SQL*Plus). More information here.

So how would I type this if my boss ran into my cube with a CSV and needed a count of Minutes?

$out=@();Import-Csv data_example.csv | % { $out += (new-object psobject -prop @{ Date=$_.Date;Minutes=[int]$_.TSSOC.Split(':')[1]+([int]$_.TSSOC.Split(':')[0]*60);UserId=$_.IUID;StudentId=$_.Student;Code=$_.Code;Course=$_.Product }) }; $out | ft -a

Now, that’s quicker to type, but a LOT harder to explain. 😉 I’m sure this can be simplified down–any suggestions? If you could do automatic/implied property names, that’d REALLY cut it down.
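
One idea: Select-Object’s calculated properties get us close to implied property names, since pass-through columns only need their name. A sketch against the same columns as above (same logic, untested beyond the sample data):

# sketch: calculated properties instead of new-object
Import-Csv data_example.csv |
  select Date, Code,
    @{n='Minutes';   e={ [int]$_.TSSOC.Split(':')[0] * 60 + [int]$_.TSSOC.Split(':')[1] }},
    @{n='UserId';    e={ $_.IUID }},
    @{n='StudentId'; e={ $_.Student }},
    @{n='Course';    e={ $_.Product }} |
  sort Date, Code | ft -a

Date and Code pass through by name; everything else only needs a rename or a calculation.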

NuGet Package Restore, Multiple Repositories, and CI Servers

January 20, 2012 1 comment

I really like NuGet’s new Package Restore feature (and so do our git repositories).

We have several common libraries that we’ve moved into a private, local NuGet repository on our network. It’s really helped deal with the dependency and version nightmares between projects and developers.

I checked my first project using full package restore and our new local repositories into our CI server, TeamCity, the other day and noticed that the Package Restore feature couldn’t find the packages stored in our local repository.

At first, I thought there was a snag (permissions, network, general unhappiness) with our NuGet share, but all seemed well. To my surprise, repository locations are not stored in that swanky .nuget directory, but as part of the current user profile. %appdata%\NuGet\NuGet.Config to be precise.

Well, that’s nice on my workstation, but NETWORK SERVICE doesn’t have a profile and the All Users AppData directory didn’t seem to take effect.

The solution:

For TeamCity, at least, the solution was to set the TeamCity build agent services to run as a specific user (I chose a domain user in our network; you could use a local user as well). Once you have a profile, go into %drive%:\users\{your service name}\appdata\roaming\nuget and modify the nuget.config file.

Here’s an example of the file:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="NuGet official package source" value="https://msft.long.url.here" />
    <add key="Student Achievement [local]" value="\\server.domain.com\shared$\nuget" />
  </packageSources>
  <activePackageSource>
    <add key="NuGet official package source" value="https://msft.long.url.here" />
  </activePackageSource>
</configuration>

Package Restore will attempt to find the packages on the ‘activePackageSource’ first, then proceed through the list.

Remember, if you have multiple build agent servers, this must be done on each server.
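
If you’d rather not hand-edit that file on each agent, NuGet.exe also has a sources command that writes to the profile of whoever runs it. Something like this (run under the build agent’s service account; the name and path are the examples from above) should register the repository:

nuget sources add -Name "Student Achievement [local]" -Source \\server.domain.com\shared$\nuget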

Wish List: The option to include non-standard repositories as part of the .nuget folder. 🙂

Posting to Campfire using PowerShell (and curl)

January 16, 2012 Comments off

I have a few tasks that kick off nightly that I wanted to post status updates into our team’s Campfire room. Thankfully, 37signals Campfire has an amazing API.  With that knowledge, time to create a quick PowerShell function!

NOTE: I use curl for this. The Linux/Unix folks likely know curl, however, I’m sure the Windows folks have funny looks on their faces. You can grab the latest curl here for Windows (the Win32 or Win64 generic versions are fine).

The full code for this function is available via gist.

I pass two parameters: the room number (though this could be tweaked to be optional if you only have one room) and the message to post.

param (
 [string]$RoomNumber = (Read-Host "The room to post to (default: 123456) "),
 [string]$Message = (Read-Host "The message to send ")
)
$defaultRoom = "123456"
if ($RoomNumber -eq "") {
 $RoomNumber = $defaultRoom
}

There are two baked-in variables, the authentication token for your posting user (we created a ‘robot’ account that we use) and the YOURDOMAIN prefix for Campfire.

$authToken = "YOUR AUTH TOKEN"
$postUrl = "https://YOURDOMAIN.campfirenow.com/room/{0}/speak.json" -f $RoomNumber

The rest is simply using curl to HTTP POST a bit of JSON back up to the web service. If you’re not familiar with the JSON data format, give this a quick read. The best way I can sum up JSON is that it’s XML objects for the web with less wrist-cutting. 🙂

$data = "`"{'message':{'body':'$message'}}`""

$command = "curl -i --user {0}:X -H 'Content-Type: application/json' --data {1} {2}" 
     -f $authToken, $data, $postUrl

$result = Invoke-Expression ($command)

if ($result[0].EndsWith("Created") -ne $true) {
	Write-Host "Error!" -foregroundcolor red
	$result
}
else {
	Write-Host "Success!" -foregroundcolor green
}
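
Calling it is as simple as passing the room and message (this assumes you’ve saved the script as SendTo-Campfire.ps1; the values are placeholders):

.\SendTo-Campfire.ps1 -RoomNumber "123456" -Message "Nightly tasks complete!"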
Running SendTo-Campfire with Feedback

Indeed, there be success!

It’s important to remember that PowerShell IS extremely powerful, but can become even more powerful coupled with other available tools–even the web itself!

PowerShell: Recreating SQL*Plus ‘ed’ Command

November 9, 2009 Comments off

Aside from PowerShell, I spend a bit of time day-to-day in Oracle’s SQL*Plus.  I’m slowly replacing my dependency on SQL*Plus with PowerShell scripts, functions, and such, but there are some core things where it’s just easier to bring up a console and hack them out.

One thing I really love about SQL*Plus is the ‘ed’ command.  The ‘ed’ command dumps your last statement into a temp file, then opens it in Notepad.  Make your changes in Notepad, save, and rerun (in SQL*Plus, it’s a single slash and then Enter), and the frustration of lengthy command line editing is solved.

So, this is a work in progress—my attempt to simulate the same process in PowerShell.

Example Situation

Let’s say, for example, we have a long command we have been hacking away at, but are getting tired of arrowing back and forth on:

$count = 0; sq mydb "select id from reports" | % { $count++ }; $count

(Yes, yes, it’s not ‘very long’, but that’s why this is called an example!)

So, a simple row counter.

The ‘ed’ Command

To recreate the ‘ed’ experience, I need two things:

  • Access to the command line history via get-history.
  • A temporary file.

touch c:\temp.tmp

get-history | ? { $_.commandline -ne "ex" } |
   ? { $_.commandline -ne "ed" } |
   select commandline -last 1 |
   % { $_.CommandLine } > c:\temp.tmp

np c:\temp.tmp

Stepping through the code:

  1. Create a temporary file.  I used c:\temp.tmp, but you could get a bit fancier and use the system’s $TEMP directory or something.
  2. Get-History.  This returns an ascending list of your command history in PowerShell by ID and CommandLine.  We want to ignore any calls to our ‘ed’ or ‘ex’ functions and grab the “last 1”, then insert it into our temp file.
  3. Finally, open up the temp file in NotePad (np is an alias in my PowerShell profile for NotePad2).

So now that I have my ed command, let’s try it out after I’ve already run my simple row counter.

Last Command in NotePad2

Cool.  We can now make changes to this and SAVE the temp file.  The next step would be to jump back to PowerShell and execute the updated command.

The ‘ex’ Command

Since the slash has special meaning in a PowerShell window, I simply named this script ‘ex’ instead.  Ex… execute… surely I can’t forget that, right?

get-content c:\temp.tmp |
   % {
      $_
      invoke-expression $_
   }

The ‘ex’ command is pretty simple.

  1. Get the content of our temporary file from get-content.
  2. For each line (since get-content returns an array of lines), print out the line as text and then invoke (run) it using invoke-expression.

Invoke-Expression takes a string as a command (from our text file) and returns the results.  For more information on Invoke-Expression, check out MSDN.

Wrapping It Up

Now that we have our two commands—ed and ex—we’re ready for those “oh, this is short and easy” commands that turn into multi-page monsters—without fighting tabs and arrow keys.

As I said, this is a work-in-progress.  I’ll update the post as new/better ideas come up. 🙂

Updated: PowerShell “Code-Cat” With Colors and Line Breaks

October 13, 2009 Comments off

Motivated by Jeffery Hicks’ fun with my original “code-cat” version, I decided to fill in a few of the original updates I had floating in my head.

The full code is available here via CodePaste.Net.

First off, I loved the idea of colorizing the comments in a file; however, I wanted to keep my line numbers out of the “coloring” and simply color the comments.  To do this, I had to break apart the line numbers from the actual output.

Coloring Code Cat Comments

Second, I really wanted a cleaner way to wrap lines in my code and to have those lines still line up with everything else.

Line Wraps

The full code is available via CodePaste.Net (click here), but let’s step through it and see how things work.

SplitString Function

You’ll notice at the top, I’ve included a custom helper function that takes a string and splits it into an array.  I use this to break up “long” strings into window-sized strings.

function splitString([string]$string, [int]$length)
{
 $lines = @();
 $stringLength = $string.Length;
 $position = 0;

 while ($position -lt $stringLength)
 {
  if (($position + $length) -lt $stringLength)
  {
   $lines += $string.substring($position, $length);
  }
  else
  {
   $lines += $string.substring($position);
  }
  $position += $length;
 }

 return $lines;
}

This method is actually in my PowerShell profile for use in numerous parsing activities.
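
For example, splitting a short sentence into 10-character chunks:

splitString "The quick brown fox jumps" 10
# returns "The quick ", "brown fox ", "jumps"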

Colors and Window Size

Unlike the original method, I’ve pulled out the colors into variables so they are easily configured.  I am also using the built-in PowerShell $Host object to determine the width of my console window.  I’ll use this later for the line breaks.

$codeColor = "Gray";
$counterColor = "White";
$commentColor = "Green";
$newLineColor = "Yellow";

# Window size, minus the counter, spacing, and a bit extra padding.
$maxLineSize = $Host.UI.RawUI.WindowSize.Width - 12;

Parsing The Code File

There are a few important points in the $content body, here we go!

Replacing Tabs With Spaces

I found that the Substring and Length functions got really quirky (is a tab one character or five, etc.) when they came across several tabs (tabbing is pretty common in a code file, especially some SQL files).  To remedy this, I replace tabs with five spaces.  This also allows me to control the tabs a bit more, so if I wanted to ease my code output to the left, I could use three or four spaces.

# remove tabs; they're counted as one character, not five.
$currentLine = $_.Replace("`t", "     ");

Setting the Color of Comments

Using Jeffery’s example, I’m also setting my comments (for PowerShell, C#, and TSQL) to a bright color.

# Color-code comments green (PowerShell, C#, SQL)
if ($currentLine.Trim().StartsWith("#") -or `
    $currentLine.Trim().StartsWith("//") -or `
    $currentLine.Trim().StartsWith("--"))
{
     $foregroundColor = $commentColor;
}

Separating Line Numbers From Content

I didn’t want my line numbers to show up green for comments and I wanted to treat them with a different color than the rest of the code body, so I’m taking advantage of the “-nonewline” feature of write-host. This returns the content without forcing a new line break into the result.  Very handy.

write-host ("{0:d4} | " -f $counter) -nonewline -foregroundColor $counterColor;

Checking for Line Breaks and Handling Them

The next bit is in “beta” for the time being.  The gist is that it looks at the length of the current line vs. the maximum line size we specified in the variables. 

If the current line is longer, it breaks the line into an array of “max line size” lines.  The first line is output using our normal methods; we want our counter to show up, right?  The ‘overflow’ lines are output using a fancy arrow “\-->” and do not include a counter, simply because they are not ‘new lines’.

Of course, if our line is shorter than our window size, it is simply output.

if ($currentLine.Length -ge $maxLineSize)
{
 $splitLines = splitString $currentLine $maxLineSize;
 $countOfLines = $splitLines.Length;
 for ($i = 0; $i -lt $countOfLines; $i++)
 {
  if ($i -eq 0)
  {
   # our first line doesn't look any different.
   write-host ("{0}" -f $splitLines[$i]) -foregroundColor $foregroundColor;
  }
  else
  {
   # next lines are using the fancy arrow.
   write-host ("\--> |   ") -nonewline -foregroundColor $newLineColor;
   write-host ("{0}`n" -f $splitLines[$i]) -nonewline -foregroundColor $foregroundColor;
  }
 }
}
else
{
 write-host ("{0}" -f $_) -foregroundColor $foregroundColor;
}

Again, the full code is available here via CodePaste.Net.

Like all things, a work in progress!

PowerShell – Easy Line Numbers using Cat (Get-Content)

October 8, 2009 2 comments

I often cat out code pages from the prompt when I simply want to see what’s inside or look for a method.  Cat, of course, is an alias to PowerShell’s get-content cmdlet, but cat brings me back to days in Unix and makes me feel better. 🙂

One thing that was a bit of trouble was looking for a specific line of code.

For example: looking through the debug logs, I see an error occurring at line 40 of some file.  That’s great, but who wants to count lines?  Instead, we open up Visual Studio and look.  Well, that’s a hassle too.

Here’s a quick script that prints a line number alongside each line of the file.

param ([string] $filename)
$counter = 0;
get-content $filename |
 % { $counter++; write-host "$counter`t| $_" }

Since the result of get-content becomes an enumerable line-by-line list, it’s easy to iterate through and increment a counter.

To call it, just call the script with the file name.  For mine, I called it cat-code.ps1, though I’ll probably set an alias to ‘cc’ or something later on.

.\cat-code.ps1 psake_default.ps1

Code Cat example.

From here, you could add in a check to the $Host.UI.RawUI and handle the overflows a bit better, but that’s for another day!

Automating Extracts Using External .SQL Files and PowerShell

September 29, 2009 Comments off

Rather than rely on a system task, I wanted to be able to kick off an export of one of our SQL databases on the fly, have it generate a CSV, then email the CSV to one of our developers.

Sounds like a task for PowerShell!

Why PowerShell? Well, frankly, because moving jobs around and such on our mixed-mash of SQL versions and servers is a bit annoying and I wanted to see if it worked.

So, to start off, I have a file, call it export_data.sql that contains all of the logic of my SQL query and the format I require.

I’ll also be using the Microsoft SQL query script I discussed in this blog post.

The tricky part is reading in the .sql file as a single “entity”—not the array of rows that get-content usually provides.  Thankfully, there is an easy way to do it.

@(gc 'export_data.sql' -readcount 0)

According to the PowerShell documentation, the readcount parameter serves two purposes: one is to set the number of lines to read at a time, the second (when set to zero) is to pull in all of the lines at once.  Great, that’s exactly what we need.

From there, it’s simply feeding our query to the sql-query function I wrote and exporting to CSV.  Nice enough, CSV exports are built-in to PowerShell!

The final command looks something like this:

@(gc 'export_data.sql' -readcount 0) |
   % { sql-query server db $_ } |
   export-csv dump.csv

I could then add another step to mail the CSV using the built-in command Send-MailMessage.
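
Something along these lines (a sketch only; the addresses and SMTP server are placeholders):

# placeholders: swap in real addresses and your SMTP server
Send-MailMessage -To "dev@domain.com" -From "reports@domain.com" `
    -Subject "Nightly extract" -Body "CSV attached." `
    -Attachments "dump.csv" -SmtpServer "smtp.domain.com"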

❤ PowerShell.

ASP.NET Development Server From ‘Here’ in PowerShell

September 9, 2009 Comments off

Long title… almost longer than the code.

I used to have an old registry setting that started up the ASP.NET Development Server from the current path; however, since I rarely open up Explorer—and then opening up FireFox was even more painful—I needed a script.

What does it do?

The script starts up the ASP.NET Development server with a random port (so you can run multiples…) at your current location.  It then activates your machine’s DEFAULT BROWSER and browses to the site.  FireFox user?  No problem.  Works like a champ!

The Script (Full Code)

$path = resolve-path .
$rand = New-Object system.random
$port = $rand.next(2048,10240)
$path_to_devserver = "C:\Program Files (x86)\Common Files\microsoft shared\DevServer\9.0\Webdev.WebServer.exe"

& $path_to_devserver /port:$port /path:$path
(new-object -com shell.application).ShellExecute("http://localhost:$port")

The $path_to_devserver can be updated depending on 64-bit vs. 32-bit machines.  Simple, easy, and to the point.  Now, no more fumbling around to launch a quick web application!

Ramping up with PSake

September 8, 2009 Comments off

I’ve been teetering back and forth between PSake and my trusty NAnt scripts for quite a while now.  For those not familiar with PSake, it’s build automation that makes you drunk—but in a good way. 😉  You can read James Kovacs’ original post here or check out the repository here for the latest bits.

I originally looked at rake scripts (after exposure working with Fluent NHibernate) as PowerShell is loathed in our organization—or was.  That mindset is slowly changing (being able to show people how to crank out what was originally scoped at a week in two lines of PowerShell script helps out); so I’m using PSake as further motivation.

My prior PSake scripts were a bit tame.  Launch msbuild, copy a few files.  With the latest release of xUnit 1.5 hitting the wires over the weekend (and a much needed x86 version for my poor, cranky Oracle libraries), I decided to bite the bullet and dig in to PSake.

I had two goals:

  1. Build a reliable framework “default.ps1” file that I could drop into almost any project and configure with little or no effort.
  2. Compile, test, and rollout updates from a single PSake command task.

I borrowed the basic layout from Ayende’s Rhino Mocks PSake; however, I couldn’t get msbuild to run correctly simply by calling it.

Here’s what I ended up with for our internal core library.  The core library isn’t so much a “utilities” container, but just as it sounds: the framework all of our applications are built on, keeping connections to our various applications (HR, student systems, data warehouses, etc.) consistent, as well as holding our base FNH conventions.

CODE: Full code available on CodePaste.NET

Properties

The properties area holds all of the configuration for the PSake script.  For me, it’s common to configure $solution_name, $libraries_to_merge, and $libraries_to_copy.  With our naming standards, the $test_library should be left unchanged.  I also added in the tester information so we could change from XUnit to MBUnit (if Hell froze over or something).

properties {

  # ****************  CONFIGURE ****************
  $solution_name =           "Framework"
  $test_library =            "$solution_name.Test.dll"

  $libraries_to_merge =      "antlr3.runtime.dll", `
                             "ajaxcontroltoolkit.dll", `
                             "Castle.DynamicProxy2.dll", `
                             "Castle.Core.dll", `
                             "FluentNHibernate.dll", `
                             "log4net.dll", `
                             "system.linq.dynamic.dll", `
                             "xunit.dll", `
                             "nhibernate.caches.syscache2.dll", `
                             "cssfriendly.dll", `
                             "iesi.collections.dll", `
                             "nhibernate.bytecode.castle.dll", `
                             "oracle.dataaccess.dll"

  $libraries_to_copy =       "system.data.sqlite.dll"

  $tester_directory =        "j:\shared_libraries\xunit\msbuild"
  $tester_executable =       "xunit.console.x86.exe"
  $tools_directory =         "$tools"
  $base_directory  =         resolve-path .
  $thirdparty_directory =    "$base_directory\thirdparty"
  $build_directory =         "$base_directory\build"
  $solution_file =           "$base_directory\$solution_name.sln"
  $release_directory =       "$base_directory\release"
}

Clean and easy enough.  You’ll notice that $libraries_to_merge and $libraries_to_copy are implied string arrays.  That works out well since string arrays end up as params when passed to commands… and our $libraries_to_copy can be iterated over later in the code.

Tasks – Default

task default -depends Release

The default task (if just running ‘psake’ without parameters) runs Release.  Easy enough.

Tasks – Clean

task Clean {
  remove-item -force -recurse $build_directory -ErrorAction SilentlyContinue | Out-Null
  remove-item -force -recurse $release_directory -ErrorAction SilentlyContinue | Out-Null
}

Clean up those build and release directories.

Tasks – Init

task Init -depends Clean {
  new-item $release_directory -itemType directory | Out-Null
  new-item $build_directory -itemType directory | Out-Null
  cp $tester_directory\*.* $build_directory
}

Restore those build and release directories that we cleaned up; then copy in our unit testing framework so we can run our tests (if necessary).

Tasks – Compile

task Compile -depends Init {
  # from http://poshcode.org/1050 (first lines to get latest versions)
  [System.Reflection.Assembly]::Load('Microsoft.Build.Utilities.v3.5, Version=3.5.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a') | Out-Null
  $msbuild = [Microsoft.Build.Utilities.ToolLocationHelper]::GetPathToDotNetFrameworkFile("msbuild.exe", "VersionLatest")

  # adding double slash for directories with spaces. Stupid MSBuild.
  &$msbuild /verbosity:minimal /p:Configuration="Release" /p:Platform="Any CPU" /p:OutDir="$build_directory"\\ "$solution_file"
}

Compile is a bit tricky.  As noted in the code, I ended up using a SharePoint example from PoSH Code to get MSBuild to behave.  The standard exec methodology provided by PSake kept ignoring my parameters.  Maybe someone has a good reason… but this works.

You also see that my OutDir has TWO slashes.  It seems that directories with spaces require the second.  I’m sure this will somehow bite me later on, but it seems to be working for now. 😉

Tasks – Test

task Test -depends Compile {
  $origin_directory = pwd
  cd $build_directory
  exec .\$tester_executable "$build_directory\$test_library"
  cd $origin_directory
}

I want to thank Ayende for the idea to dump the origin directory into a parameter—brilliant.  This one is pretty simple—just calls the tester and tests.

Tasks – Merge

task Merge {
  $origin_directory = pwd
  cd $build_directory

  remove-item "$solution_name.merge.dll" -erroraction SilentlyContinue
  rename-item "$solution_name.dll" "$solution_name.merge.dll"

  & $tools\ilmerge\ilmerge.exe /out:"$solution_name.dll" /t:library /xmldocs /log:"$solution_name.merge.log" `
        "$solution_name.merge.dll" $libraries_to_merge

  if ($lastExitCode -ne 0) {
    throw "Error: Failed to merge assemblies!"
  }
  cd $origin_directory
}

Merge calls ILMerge and wraps all of my libraries into one.  Do I need to do this?  Nah, but for the framework, I prefer to keep everything together.  I don’t want to be chasing mis-versioned libraries around.  Again, since $libraries_to_merge is a string array, it passes each “string” as a separate parameter—which is exactly what ILMerge wants to see.

I also have ILMerge generate and keep a log of what it did—just to have.  Since the build directory gets blown away between builds (and isn’t replicated to source control), then no harm.  Space is mostly free. 😉

Tasks – Build & Release

task Build -depends Compile, Merge {
  # When I REALLY don't want to test…
}

task Release -depends Test, Merge {
  copy-item $build_directory\$solution_name.dll $release_directory
  copy-item $build_directory\$solution_name.xml $release_directory

  # copy libraries that cannot be merged
  $libraries_to_copy | % { copy-item (join-path $build_directory $_) $release_directory }
}

Build provides just that—building with no testing and no copying to the release directory.  This is more for testing out the scripts, but useful in some cases.

Release copies the library and the XML documentation out to the release directory.  It then iterates through the string array of “other” libraries (non-managed code libraries that can’t be merged, etc.) and copies them as well.

Querying Oracle using PowerShell

September 1, 2009 4 comments

Yesterday, I wrote up a quick bit of code to query out our SQL Servers.  Initially, I wanted a speedy way to hit, parse, and report back log4net logs in our “server status” scripts.

Well, never one to leave something alone, I started tinkering with Oracle support.  In our enterprise, most of our key systems sit on Oracle and there are SEVERAL opportunities for quick data retrieval routines that could help out in daily work.

Plus, doing an Oracle query in PowerShell beats the five-minute process of cranking up Oracle SQL Developer for a simple, single query. 🙂

CODE: The full source of this is available here on codepaste.net.

param (
    [string]$server = ".",
    [string]$instance = $(throw "a database name is required"),
    [string]$query
)

[System.Reflection.Assembly]::LoadWithPartialName("System.Data.OracleClient") | out-null
$connection = new-object system.data.oracleclient.oracleconnection( `
    "Data Source=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=$server)(PORT=1521)) `
    (CONNECT_DATA=(SERVICE_NAME=$instance)));User Id=USER_ID;Password=PASSWORD;");

$set = new-object system.data.dataset

# Fill returns the row count; pipe it to out-null so only the table is output.
$adapter = new-object system.data.oracleclient.oracledataadapter ($query, $connection)
$adapter.Fill($set) | out-null

$table = $set.Tables[0]

# return table
$table

I chose to use the OracleClient library for simplicity’s sake.  I could have used ODP.Net; however, that’d make my scripts FAR less portable.  Since OracleClient isn’t loaded by default in PowerShell, this script loads it.  In addition, I chose to use the TNS-less connection string as I don’t typically keep a ‘tnsnames.ora’ file on my computer.  This further adds to the portability of the script.

Past that and the change from SqlClient to OracleClient, the rest of the code is the same from the prior example.

Dealing With Empty Strings and Nulls

One thing I did run across that differed between Oracle and Microsoft SQL was how empty strings are handled when parsing with PowerShell.

Example:

oq "SELECT * FROM Schools"

ID  NAME        PRINCIPAL_EMAIL_ADDRESS
--  ----        -----------------------
100 School
102 School
112 School      user3@domain.net
140 School      user1@domain.net

etc.

Now, what if I wanted to just see the schools missing a principal_email_address?  I’d just rewrite my SQL query, right?  Yeah, probably, but for the sake of argument and perhaps some scripting.

oq "SELECT * FROM Schools" | ? { $_.principal_email_address -eq "" }

No results.

What? Why not?  I see two in my last query.  Unfortunately, dealing with “nulls” and empty strings can get a bit tricky when pulling from database data.  With Microsoft SQL, a text-based column (varchar, ntext, etc.) seems to handle -eq "" just fine, but Oracle is less than pleased.  @ShayLevy suggested -eq [string]::Empty but that didn’t pull through either.

From a prior experiment, I also tried -eq $null and was greeted with something very different—it returned all results. Meh.

Randomly, I tried -like $null and it worked. Well, that’s interesting.  So the value isn’t empty in Oracle, but it is “like” a null.  After a bit more digging, I discovered that the real value is -eq [DBNull]::Value.

oq "SELECT * FROM Schools" | ? { $_.principal_email_address -eq [DBNull]::Value }

ID  NAME        PRINCIPAL_EMAIL_ADDRESS
--  ----        -----------------------
100 School
102 School
It makes sense… but more testing is required to see which is more reliable for a wide variety of data types.  I like the concept of “like null” to simulate “string empty or null”.  Further testing required. 🙂
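
In the meantime, both checks can be wrapped into a small filter (a sketch only; where-empty is my own name for it, it leans on the oq function from above, and it treats DBNull and null/empty strings the same):

# sketch: match DBNull *or* null/empty strings on a named property
filter where-empty ([string]$property) {
    $value = $_.$property
    if ($value -eq [DBNull]::Value -or [string]::IsNullOrEmpty($value)) { $_ }
}

oq "SELECT * FROM Schools" | where-empty principal_email_address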