Archive

Archive for the ‘Microsoft’ Category

PowerShell : Add Recursive Directories to PATH

October 19, 2009 Comments off

As I continue to go script crazy (a bit) with PowerShell, I’ve noticed that my general scripts directory has grown quite a bit and is becoming unmanageable.  Being a lover of neat and tidy, I wanted to separate ‘common’ scripts from ‘server-based’ and ‘task’ scripts.

Sounds easy enough, but that breaks part of my PowerShell profile—the part where I add the scripts directory to the path.  Moving those files out of that directory, even if they’re in subdirectories, would take them out of the path and I’d lose my [a..b]-{tab} goodness.

So, how to add the recursive structure to PATH?

It’s easy!

In my Profile, $scripts is a variable that contains a path to my ‘standard’ scripts directory based on whatever computer I’m currently logged into.
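For the curious, a hypothetical sketch of that profile bit might look like the following; the computer name and paths here are made up:

# Hypothetical: pick the scripts directory based on the current machine.
switch ($Env:COMPUTERNAME) {
    "WORKSTATION01" { $scripts = "C:\scripts" }
    default         { $scripts = "$HOME\scripts" }
}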

From there, simply use the built-in features of PowerShell.

gci $scripts |
    ? { $_.psIsContainer } |
    % { [System.Environment]::SetEnvironmentVariable("PATH",
        $Env:Path + ";" + $_.FullName, "Process") }

Verbatim: for each child object in $scripts that is a container, append the full name to PATH.

Exactly what I wanted and works like a champ.  Now I can move my scripts around, create new directories, the works—and each time I restart my console, it’ll detect the new directories and simply work—exactly like a shell should.
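To sanity-check the result, something like this shows the trailing entries that were just appended:

$Env:Path.Split(";") | select-object -last 5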

Categories: Microsoft, PowerShell

PowerShell – Easy Line Numbers using Cat (Get-Content)

October 8, 2009 2 comments

I often cat out code pages from the prompt when I simply want to see what’s inside or look for a method.  Cat, of course, is an alias to PowerShell’s get-content cmdlet, but cat brings me back to days in Unix and makes me feel better. 🙂

One thing that was a bit of a trouble was looking for a specific line of code.

For example: looking through the debug logs, I see an error occurring at line 40 of some file.  That’s great, but who wants to count lines?  Instead, we open up Visual Studio and look.  Well, that’s a hassle too.

Here’s a quick script that prints each line of the file alongside its line number.

param ([string] $filename)
$counter = 0
get-content $filename |
    % { $counter++; write-host "$counter`t| $_" }

Since the result of get-content becomes an enumerable line-by-line list, it’s easy to iterate through and increment a counter.

To call it, just invoke the script with the file name.  For mine, I called it cat-code.ps1, though I’ll probably set an alias to ‘cc’ or something later on.

.\cat-code.ps1 psake_default.ps1

Code Cat example.
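If you do want that shorter alias, one line in the profile should cover it (a sketch; adjust the path to wherever you keep the script, and check get-alias for collisions first):

set-alias cc C:\scripts\cat-code.ps1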

From here, you could add in a check to the $Host.UI.RawUI and handle the overflows a bit better, but that’s for another day!
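For anyone who can’t wait for another day, here’s an untested sketch of what that check might look like: truncate each line at the console width instead of letting it wrap (the embedded tab makes the length math approximate).

param ([string] $filename)
$counter = 0
$width = $Host.UI.RawUI.WindowSize.Width
get-content $filename | % {
    $counter++
    $line = "$counter`t| $_"
    # truncate anything wider than the console instead of wrapping
    if ($line.Length -gt $width) { $line = $line.Substring(0, $width - 1) }
    write-host $line
}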

Automating Extracts Using External .SQL Files and PowerShell

September 29, 2009 Comments off

Rather than rely on a system task, I wanted to be able to kick off an export of one of our SQL databases on the fly, have it generate a CSV, then email the CSV to one of our developers.

Sounds like a task for PowerShell!

Why PowerShell? Well, frankly, because moving jobs around and such on our mishmash of SQL versions and servers is a bit annoying and I wanted to see if it worked.

So, to start off, I have a file, call it export_data.sql that contains all of the logic of my SQL query and the format I require.

I’ll also be using the Microsoft SQL query script I discussed in this blog post.

The tricky part is reading in the .sql file as a single “entity”—not the array of rows that get-content usually provides.  Thankfully, there is an easy way to do it.

@(gc 'export_data.sql' -readcount 0)

According to the PowerShell documentation, the readcount parameter serves two purposes: one is to set the number of lines to read at a time; the other (when set to zero) is to pull in all of the lines at once.  Great, that’s exactly what we need.
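An alternative, if the readcount trick feels too clever: .NET will hand back the whole file as a single string in one call.  A sketch, using resolve-path since .NET doesn’t share PowerShell’s current location:

$sql = [System.IO.File]::ReadAllText((resolve-path 'export_data.sql'))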

From there, it’s simply feeding our query to the sql-query function I wrote and exporting to CSV.  Nice enough, CSV exports are built-in to PowerShell!

The final command looks something like this:

@(gc 'export_data.sql' -readcount 0) |
   % { sql-query server db $_ } |
   export-csv dump.csv

I could then add another step to mail the CSV using the built-in Send-MailMessage cmdlet.
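Something like this would probably do it; the addresses and SMTP server below are placeholders:

Send-MailMessage -To "dev@example.com" -From "reports@example.com" `
    -Subject "Data extract" -Body "CSV attached." `
    -Attachments dump.csv -SmtpServer smtp.example.com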

❤ PowerShell.

ASP.NET Development Server From ‘Here’ in PowerShell

September 9, 2009 Comments off

Long title… almost longer than the code.

I used to have an old registry setting that started up the ASP.NET Development Server from the current path; however, since I rarely open up Explorer—and then opening up FireFox was even more painful—I needed a script.

What does it do?

The script starts up the ASP.NET Development server with a random port (so you can run multiples…) at your current location.  It then activates your machine’s DEFAULT BROWSER and browses to the site.  FireFox user?  No problem.  Works like a champ!

The Script (Full Code)

$path = resolve-path .
$rand = New-Object system.random
$port = $rand.next(2048,10240)
$path_to_devserver = "C:\Program Files (x86)\Common Files\microsoft shared\DevServer\9.0\Webdev.WebServer.exe"

& $path_to_devserver /port:$port /path:$path
(new-object -com shell.application).ShellExecute("http://localhost:$port")

The $path_to_devserver can be updated depending on 64-bit vs. 32-bit machines.  Simple, easy, and to the point.  Now, no more fumbling around to launch a quick web application!
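If you bounce between 32-bit and 64-bit machines, a sketch like this could pick the right path automatically (both paths assume the stock VS 2008 install location):

# Pick whichever DevServer path exists on this machine.
$candidates = "C:\Program Files (x86)\Common Files\microsoft shared\DevServer\9.0\Webdev.WebServer.exe",
              "C:\Program Files\Common Files\microsoft shared\DevServer\9.0\Webdev.WebServer.exe"
$path_to_devserver = $candidates | ? { test-path $_ } | select-object -first 1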

Ramping up with PSake

September 8, 2009 Comments off

I’ve been teetering back and forth between PSake and my trusty NAnt scripts for quite a while now.  For those not familiar with PSake, it’s build automation that makes you drunk—but in a good way. 😉  You can read James Kovacs’ original post here or check out the repository here for the latest bits.

I originally looked at rake scripts (after exposure working with Fluent NHibernate) as PowerShell is loathed in our organization—or was.  That mindset is slowly changing (being able to show people how to crank out what was originally scoped at a week in two lines of PowerShell script helps out); so I’m using PSake as further motivation.

My prior PSake scripts were a bit tame: launch msbuild, copy a few files.  With the latest release of xUnit 1.5 hitting the wires over the weekend (and a much-needed x86 version for my poor, cranky Oracle libraries), I decided to bite the bullet and dig into PSake.

I had two goals:

  1. Build a reliable framework “default.ps1” file that I could drop into almost any project and configure with little or no effort.
  2. Compile, test, and rollout updates from a single PSake command task.

I borrowed the basic layout from Ayende’s Rhino Mocks PSake; however, I couldn’t get msbuild to run correctly simply by calling it.

Here’s what I ended up with for our internal core library.  The core library isn’t so much a “utilities” container, but just as it sounds—the framework all of our applications are built on, keeping connections to our various applications (HR, student systems, data warehouses, etc.) consistent as well as holding our base FNH conventions.

CODE: Full code available on CodePaste.NET

Properties

The properties area holds all of the configuration for the PSake script.  For me, it’s common to configure $solution_name, $libraries_to_merge, and $libraries_to_copy.  With our naming standards, the $test_library should be left unchanged.  I also added in the tester information so we could change from XUnit to MBUnit (if Hell froze over or something).

properties {

  # ****************  CONFIGURE ****************
  $solution_name =           "Framework"
  $test_library =            "$solution_name.Test.dll"

  $libraries_to_merge =      "antlr3.runtime.dll", `
                             "ajaxcontroltoolkit.dll", `
                             "Castle.DynamicProxy2.dll", `
                             "Castle.Core.dll", `
                             "FluentNHibernate.dll", `
                             "log4net.dll", `
                             "system.linq.dynamic.dll", `
                             "xunit.dll", `
                             "nhibernate.caches.syscache2.dll", `
                             "cssfriendly.dll", `
                             "iesi.collections.dll", `
                             "nhibernate.bytecode.castle.dll", `
                             "oracle.dataaccess.dll"

  $libraries_to_copy =       "system.data.sqlite.dll"

  $tester_directory =        "j:\shared_libraries\xunit\msbuild"
  $tester_executable =       "xunit.console.x86.exe"
  $tools_directory =         "$tools"
  $base_directory  =         resolve-path .
  $thirdparty_directory =    "$base_directory\thirdparty"
  $build_directory =         "$base_directory\build"
  $solution_file =           "$base_directory\$solution_name.sln"
  $release_directory =       "$base_directory\release"
}

Clean and easy enough.  You’ll notice that $libraries_to_merge and $libraries_to_copy are implied string arrays.  That works out well: a string array expands into separate parameters when passed to an external command, and $libraries_to_copy can be iterated over later in the code.

Tasks – Default

task default -depends Release

The default task (if just running ‘psake’ without parameters) runs Release.  Easy enough.

Tasks – Clean

task Clean {
  remove-item -force -recurse $build_directory -ErrorAction SilentlyContinue | Out-Null
  remove-item -force -recurse $release_directory -ErrorAction SilentlyContinue | Out-Null
}

Clean up those build and release directories.

Tasks – Init

task Init -depends Clean {
  new-item $release_directory -itemType directory | Out-Null
  new-item $build_directory -itemType directory | Out-Null
  cp $tester_directory\*.* $build_directory
}

Recreate the build and release directories we just cleaned, then copy in our unit testing framework so we can run our tests (if necessary).

Tasks – Compile

task Compile -depends Init {
  # from http://poshcode.org/1050 (first lines to get latest versions)
  [System.Reflection.Assembly]::Load('Microsoft.Build.Utilities.v3.5, Version=3.5.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a') | Out-Null
  $msbuild = [Microsoft.Build.Utilities.ToolLocationHelper]::GetPathToDotNetFrameworkFile("msbuild.exe", "VersionLatest")

  # adding double slash for directories with spaces. Stupid MSBuild.
  &$msbuild /verbosity:minimal /p:Configuration="Release" /p:Platform="Any CPU" /p:OutDir="$build_directory"\\ "$solution_file"
}

Compile is a bit tricky.  As noted in the code, I ended up using a SharePoint example from PoSH Code to get MSBuild to behave.  The standard exec methodology provided by PSake kept ignoring my parameters.  Maybe someone has a good reason why… but this works.

You also see that my OutDir has TWO slashes.  It seems that directories with spaces require the second.  I’m sure this will somehow bite me later on, but it seems to be working for now. 😉

Tasks – Test

task Test -depends Compile {
  $origin_directory = pwd
  cd $build_directory
  exec .\$tester_executable "$build_directory\$test_library"
  cd $origin_directory
}

I want to thank Ayende for the idea of stashing the origin directory in a variable first—brilliant.  This one is pretty simple: it just calls the tester and runs the tests.

Tasks – Merge

task Merge {
  $origin_directory = pwd
  cd $build_directory

  remove-item "$solution_name.merge.dll" -erroraction SilentlyContinue
  rename-item "$solution_name.dll" "$solution_name.merge.dll"

  & $tools\ilmerge\ilmerge.exe /out:"$solution_name.dll" /t:library /xmldocs /log:"$solution_name.merge.log" `
      "$solution_name.merge.dll" $libraries_to_merge

  if ($lastExitCode -ne 0) {
    throw "Error: Failed to merge assemblies!"
  }
  cd $origin_directory
}

Merge calls ILMerge and wraps all of my libraries into one.  Do I need to do this?  Nah, but for the framework, I prefer to keep everything together.  I don’t want to be chasing mis-versioned libraries around.  Again, since $libraries_to_merge is a string array, it passes each “string” as a separate parameter—which is exactly what ILMerge wants to see.

I also have ILMerge generate and keep a log of what it did—just to have.  Since the build directory gets blown away between builds (and isn’t replicated to source control), there’s no harm.  Space is mostly free. 😉

Tasks – Build & Release

task Build -depends Compile, Merge {
  # When I REALLY don’t want to test…
}

task Release -depends Test, Merge {
  copy-item $build_directory\$solution_name.dll $release_directory
  copy-item $build_directory\$solution_name.xml $release_directory

  # copy libraries that cannot be merged
  $libraries_to_copy | % { copy-item (join-path $build_directory $_) $release_directory }
}

Build provides just that—building with no testing and no copying to the release directory.  This is more for testing out the scripts, but useful in some cases.

Release copies the library and the XML documentation out to the release directory.  It then iterates through the string array of “other” libraries (non-managed code libraries that can’t be merged, etc.) and copies them as well.


Querying Oracle using PowerShell

September 1, 2009 4 comments

Yesterday, I wrote up a quick bit of code to query out our SQL Servers.  Initially, I wanted a speedy way to hit, parse, and report back log4net logs in our “server status” scripts.

Well, never one to leave something alone, I started tinkering with Oracle support.  In our enterprise, most of our key systems sit on Oracle and there are SEVERAL opportunities for quick data retrieval routines that could help out in daily work.

Plus, doing an Oracle query in PowerShell beats the five-minute process of cranking up Oracle SQL Developer for a simple, single query. 🙂

CODE: The full source of this is available here on codepaste.net.

param (
    [string]$server = ".",
    [string]$instance = $(throw "a database name is required"),
    [string]$query
)

[System.Reflection.Assembly]::LoadWithPartialName("System.Data.OracleClient") | out-null
$connection = new-object system.data.oracleclient.oracleconnection( `
    "Data Source=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=$server)(PORT=1521)) `
    (CONNECT_DATA=(SERVICE_NAME=$instance)));User Id=USER_ID;Password=PASSWORD;");

$adapter = new-object system.data.oracleclient.oracledataadapter ($query, $connection)
$set = new-object system.data.dataset

# Fill returns the row count; discard it so it stays out of the pipeline output
$adapter.Fill($set) | out-null

# return the first result table
$set.Tables[0]

I chose to use the OracleClient library for simplicity’s sake.  I could have used ODP.NET; however, that’d make my scripts FAR less portable.  Since OracleClient isn’t loaded by default in PowerShell, this script loads it.  In addition, I chose to use the TNS-less connection string since I don’t typically keep a ‘tnsnames.ora’ file on my computer.  This further adds to the portability of the script.

Past that and the change from SqlClient to OracleClient, the rest of the code is the same from the prior example.

Dealing With Empty Strings and Nulls

One thing that I did run across that differed between Oracle and Microsoft SQL revolved around how empty strings were dealt with when parsing using PowerShell.

Example:

oq "SELECT * FROM Schools"

ID  NAME        PRINCIPAL_EMAIL_ADDRESS
--  ----        -----------------------
100 School
102 School
112 School      user3@domain.net
140 School      user1@domain.net

etc.

Now, what if I wanted to just see the schools missing a principal_email_address?  I’d just rewrite my SQL query, right?  Yeah, probably, but for the sake of argument and perhaps some scripting.

oq "SELECT * FROM Schools" | ? { $_.principal_email_address -eq "" }

No results.

What? Why not?  I see two in my last query.  Unfortunately, dealing with “nulls” and empty strings can get a bit tricky when pulling from database data.  With Microsoft SQL, a text-based column (varchar, ntext, etc.) seems to handle -eq "" just fine, but Oracle is less than pleased.  @ShayLevy suggested -eq [string]::Empty, but that didn’t pull through either.

From a prior experiment, I also tried -eq $null and was greeted with something very different—it returned all results. Meh.

Randomly, I tried -like $null and it worked.  Well, that’s interesting.  So the value isn’t empty in Oracle, but it is “like” a null.  After a bit more digging, I discovered that the real value is -eq [DBNull]::Value.

oq "SELECT * FROM Schools" | ? { $_.principal_email_address -eq [DBNull]::Value }

ID  NAME        PRINCIPAL_EMAIL_ADDRESS
--  ----        -----------------------
100 School
102 School

It makes sense… but more testing is required to see which approach is most reliable across a wide variety of data types.  I like the concept of “like null” to simulate “string empty or null”. 🙂
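Until then, here’s a rough sketch of a filter that treats all three flavors of “missing” the same (untested against every data type, as noted above):

# Treat DBNull, null, and empty string as missing; the property name is passed in.
filter where-empty ([string]$property) {
    $value = $_.$property
    if ($value -eq [DBNull]::Value -or $value -eq $null -or "$value" -eq "") { $_ }
}

oq "SELECT * FROM Schools" | where-empty principal_email_address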

 

Querying SQL Server using PowerShell

August 31, 2009 2 comments

The great thing about PowerShell is direct access to objects.  For my apps, database connectivity is happily handled by NHibernate; however, that doesn’t mean we can’t take advantage of good old System.Data.SqlClient for our PowerShell scripting.

CODE: The full source of this is available here on codepaste.net.

param (
    [string]$server = ".",
    [string]$instance = $(throw "a database name is required"),
    [string]$query
)

$connection = new-object system.data.sqlclient.sqlconnection( `
    "Data Source=$server;Initial Catalog=$instance;Integrated Security=SSPI;");

$adapter = new-object system.data.sqlclient.sqldataadapter ($query, $connection)
$set = new-object system.data.dataset

# Fill returns the row count; discard it so it stays out of the pipeline output
$adapter.Fill($set) | out-null

# return the first result table
$set.Tables[0]

Not too long or challenging—most of the work is instantiating a quick SQL connection and passing in your query.  I even considered plugging in a check on the $query parameter to ensure it began with SELECT, so I wouldn’t do accidental damage to a system.  Maybe I’m just paranoid. 😉
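For the record, that paranoid check would only take a couple of lines; a sketch, placed right after the param block:

# Guard: only allow queries that start with SELECT (-notmatch is case-insensitive).
if ($query -notmatch '^\s*select\b') {
    throw "read-only queries only, please"
}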

What this little snippet allows me to do is quickly add log4net checking into some of my server monitoring PowerShell scripts.

query sqlServer myDatabase "Select count(id), logger from logs group by logger" | format-table -autosize

Notice I didn’t include the format-table command in my main query script.  Why?  I wanted to keep the flexibility to select, group, and parse the information returned by my query.  Unfortunately, the format commands break that if they’re run before a manipulation command.  Adding in "ft -a" isn’t difficult in a pinch.
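In other words, do the manipulation first and save the formatting for the very end of the pipeline.  A quick example against the query above (the column alias is my own addition):

query sqlServer myDatabase "Select count(id) as hits, logger from logs group by logger" |
    sort-object hits -descending |
    format-table -autosize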


Quick and easy…

Other uses:

  • Customer calls up with a question about data—save time and do a quick query rather than waiting for Management Studio to wind up.
  • Keep tabs on database statistics, jobs, etc.
  • and more…