Archive for the ‘Microsoft’ Category

Mashing CSVs around using PowerShell

February 15, 2012

Since I spend most of my day in the console, PowerShell also serves as my ‘Excel’. So, continuing my recent trend of PowerShell related posts, let’s dig into a quick and easy way to parse up CSV files (or most any type of file) by creating objects!

We, of course, need a few rows of example data. Let’s use this pseudo student roster.

Example data:


Now, for reporting, I want my ‘Minutes’ to be a calculation of the TSSOC column (hours:minutes). Easy, we have PowerShell–it can split AND multiply!

The code:

Begin by creating an empty array to hold our output, importing our data into the ‘pipe’, and opening up an iteration (for each) function. The final $out is our return value–calling our array so we can see our results.

$out = @()
import-csv data_example.csv |
   % {


Next, let’s add in our logic to split out the hours and minutes. We have full access to the .NET string methods in PowerShell, which includes .Split(). .Split() returns an array, so since we have HH:MM, our first number is our hours and our second number is our minutes. Hours then need to be multiplied by 60 to return the “minutes per hour.”

You’ll also notice the [int] casting–this ensures we can properly multiply. Give it a whirl without it and you’ll get 60 0’s or 1’s back (it multiplies the string).
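A quick console check shows why the cast matters:

```powershell
"2" * 3        # string replication: "222"
[int]"2" * 3   # actual math: 6
```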

$out = @()
import-csv data_example.csv |
   % {
	$hours = [int]$_.TSSOC.Split(':')[0] * 60
	$minutes = [int]$_.TSSOC.Split(':')[1]

The next step is to create a new object to contain our return values. We can use the new PowerShell v2.0 syntax to create a quick hashtable of our properties and values. Once we have our item, add it to our $out array.

$out = @()
import-csv data_example.csv |
   % {
	$hours = [int]$_.TSSOC.Split(':')[0] * 60
	$minutes = [int]$_.TSSOC.Split(':')[1]
        $item = new-object PSObject -Property @{
			Date=$_.Date;
			Minutes=($hours + $minutes);
			UserId=$_.IUID;
			StudentId=$_.Student;
			Code=$_.Code;
			Course=$_.Product
		}
	$out = $out + $item

With that, we’re done. We can pipe $out to Sort-Object for a bit of sorting, grouping, table formatting, or even export it BACK out as another CSV.

$out = @()
import-csv data_example.csv |
   % {
	$hours = [int]$_.TSSOC.Split(':')[0] * 60
	$minutes = [int]$_.TSSOC.Split(':')[1]
	$item = new-object PSObject -Property @{
		Date=$_.Date;
		Minutes=($hours + $minutes);
		UserId=$_.IUID;
		StudentId=$_.Student;
		Code=$_.Code;
		Course=$_.Product
	}
	$out = $out + $item
   }
$out | sort Date, Code | ft -a

Quick and dirty CSV manipulation–all without opening anything but the command prompt!

UPDATE: Matt has an excellent point in the comments below. PowerShell isn’t the ‘golden hammer’ for every task; it’s about finding the right tool for the job. We’re a mixed environment (Windows, Solaris, RHEL, Ubuntu), so PowerShell only applies to our Windows boxes. However, as a .net developer, I spend 80-90% of my time on those Windows boxes. So let’s say it’s a silver hammer. 🙂

Now, the code in this post looks pretty long–and hopping back and forth between notepad, the CLI, and your CSV is tiresome. I bounce back and forth between the CLI and notepad2 with the ‘ed’ and ‘ex’ functions (these commands are ‘borrowed’ from Oracle PL/SQL). More information here.

So how would I type this if my boss ran into my cube with a CSV and needed a count of Minutes?

$out=@();Import-Csv data_example.csv | % { $out += (new-object psobject -prop @{ Date=$_.Date;Minutes=[int]$_.TSSOC.Split(':')[1]+([int]$_.TSSOC.Split(':')[0]*60);UserId=$_.IUID;StudentId=$_.Student;Code=$_.Code;Course=$_.Product }) }; $out | ft -a

Now, that’s quicker to type, but a LOT harder to explain. 😉 I’m sure this can be simplified down–any suggestions? If you could do automatic/implied property names, that’d REALLY cut it down.
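One possible trim, sketched with Select-Object calculated properties (same column names assumed; not a tested shortening of the original):

```powershell
Import-Csv data_example.csv |
    Select-Object Date, Code,
        @{n='UserId';    e={ $_.IUID }},
        @{n='StudentId'; e={ $_.Student }},
        @{n='Course';    e={ $_.Product }},
        @{n='Minutes';   e={ [int]$_.TSSOC.Split(':')[0] * 60 + [int]$_.TSSOC.Split(':')[1] }} |
    ft -a
```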

The Post-Certification Era?

February 13, 2012

Oh look, starting off with a disclaimer. This should be good!

These are patterns I’ve noticed in our organization over the past ten years–ranging from hardware to software to development technical staff. These are my observations, experiences with recruiting, and a good dash of my opinions. I’m certain there are exceptions. If you’re an exception, you get a cookie. 🙂

This isn’t specifically focused on Microsoft’s certifications. We’re a .NET shop, but we’re also an Oracle shop, a Solaris shop, and a RHEL shop. So many certification opportunities, so little training dollars.

Finally, I’ll also throw out that I have a few certifications. When I made my living as a full-time consultant and contractor and was just getting started, they were the right thing to do (read on for why). Years later … things have changed.

Evaluating The Post-Certification Era

In today’s development ecosystem, certifications seem to play a nearly unmentionable role outside of college recruitment offices and general practice consulting agencies. While certifications provide a baseline for those just entering the field, I rarely see established developers (read: >~2 years experience) heading out to the courseware to seek a new certification.

Primary reasons for certifications: entry into the field and “saleability”.
Entry into the field – provides a similar baseline to compare candidates for entry-level positions.

Example: An entry-level developer vs. hiring an experienced enterprise architect. For an entry-level developer, a certification usually provides a baseline of skills.

For an experienced architect, however, past project experience, core understanding of architecture practices, examples of work in open source communities, and scenario-based knowledge provides the best gauge of skills.

“Saleability” of certifications allows consulting agencies to “one up” other organizations, but a roster of certifications usually doesn’t reflect the actual real-world skills necessary for implementation.

Example: We had a couple of fiascos years back with a very reputable consulting company filled with certified developers who simply couldn’t wrap those skills into a finished product. We managed to bring the project back in-house and get our customers squared away, but it broke the working relationship we had with that consulting company.

Certifications provide a baseline for experience and expertise similar to college degrees.
Like in college, being able to cram and pass a certification test is a poor indicator (or replacement) for handling real-life situations.

Example: Many certification “crammers” and boot camps are available for a fee–rapid memorization and passing of tests.  I do not believe that these prepare you for actual situations AND do not prepare you to continue to expand your knowledge base.

Certifications are outdated before they’re even released.
Test-makers and publishers cannot keep up with technology at its current pace. The current core Microsoft certifications focus on v2.0 technologies (though they are slowly being updated to 4.0).

I’m sure it’s a game of tag between the DivDev and Training teams up in Redmond. We, as developers, push for new features faster, but the courseware can only be written/edited/reviewed/approved so quickly.

In addition, almost all of our current, production applications are .NET applications; however, a great deal of functionality is derived from open-source and community-driven projects that go beyond the scope of a Microsoft certification.

Certifications do not account for today’s open-source/community environment.
A single “Microsoft” certification does not cover a large majority of the programming practices and tools used in modern development.

Looking beyond Microsoft allows us the flexibility to find the right tool/technology for the task. In nearly every case, these alternatives provide a cost savings to the district.

Example: Many sites that we develop now feature non-Microsoft ‘tools’ from the ground up.

  • web engine: FubuMVC, OpenRasta, ASP.NET MVC
  • view engine: Spark, HAML
  • dependency injection/management: StructureMap, Ninject, Cassette
  • source control: git, hg
  • data storage: NHibernate, RavenDB, MySQL
  • testing: TeamCity, MSpec, Moq, Jasmine
  • tooling: PowerShell, rake

This doesn’t even take into consideration the extensive use of client-side programming technologies, such as JavaScript.

A more personal example: I’ve used NHibernate/FluentNHibernate for years now. Fluent mappings, auto mappings, insane conventions and more fill my day-to-day data modeling. NH meets our needs in spades and, since many of our objects talk to vendor views and Oracle objects, Entity Framework doesn’t meet our needs. If I wanted our team to dig into the Microsoft certification path, we’d have to dig into Entity Framework. Why would I want to waste everyone’s time?

This same question applies to many of the plug-and-go features of .NET, especially since most certification examples focus on arcane things that most folks would look up in a time of crisis anyway and not on the meat and potatoes of daily tasks.

Certifications do not account for the current scope of modern development languages.
Knowing how to tell an integer from a string, and when to call a certain method, are skills that cross language and vendor boundaries.  A typical Student Achievement project contains anywhere from three to six different languages–only one of those being a Microsoft-based language.

Whether it’s Microsoft’s C#, Sun’s Java, JavaScript, Ruby, or any number of scripting languages implemented in our department–there are ubiquitous core skills to cultivate.

Cultivating the Post-Certification Developer

In a “Google age”, knowing how and why components optimally fit together provides far more value than syntax and memorization. If someone needs a code syntax explanation, a quick search reveals the answer. For something more destructive, such as modifications to our Solaris servers, I’d PREFER our techs look up the syntax–especially if it’s something they do once a decade. There are no heroes when a backwards bash flag formats an array. 😉

Within small development shops, such as ours, a large percentage of development value-added skills lie in enterprise architecture, domain expertise, and understanding design patterns–typical skills not covered on technology certification exams.

Rather than focusing on outdated technologies and unused skills, a modern developer and development organization can best be ‘grown’ through active community involvement.  Active community involvement provides a post-certification developer with several learning tools:

Participating in open-source projects allows the developer to observe, comment, and learn from other professional developers using modern tools and technologies.

Example: Submitting a code example to an open source project where a dozen developers pick it apart and, if necessary, provide feedback on better coding techniques.

Developing a social network of professional developers provides an instant feedback loop for ideas, new technologies, and best practices. Blogging, and reading blogs, allows a developer to cultivate their programming skill set with a world-wide echo chamber.

Example: A simple message on Twitter about an error in a technology released that day can garner instant feedback from a project manager at that company, prompting email exchanges, telephone calls, and the necessary steps to resolve the problem directly from the developer who implemented the feature in the new technology.

Participating in community-driven events such as webinars/webcasts, user groups, and open space discussions. These groups bolster existing social networks and provide knowledge transfer of best practices and patterns on current subjects as well as provide networking opportunities with peers in the field.

Example: Community-driven events provide both a medium to learn and a medium to give back to the community through talks and online sessions.  This helps build both a mentoring mentality in developers as well as a drive to fully understand the inner-workings of each technology.


While certifications can provide a bit of value–especially getting your foot in the door, I don’t see many on the resumes coming across my desk these days. Most, especially the younger crowd, flaunt their open source projects, hacks, and adventures with ‘technology X’ as a badge of achievement rather than certifications. In our shop and hiring process, that works out well. I doubt it’s the same everywhere.

Looking past certifications in ‘technology X’ to long-term development value-added skills adds more bang to the resume, and the individual, than any finite-lived piece of paper.

NuGet Package Restore, Multiple Repositories, and CI Servers

January 20, 2012

I really like NuGet’s new Package Restore feature (and so do our git repositories).

We have several common libraries that we’ve moved into a private, local NuGet repository on our network. It’s really helped deal with the dependency and version nightmares between projects and developers.

I checked my first project using full package restore and our new local repositories into our CI server, TeamCity, the other day and noticed that the Package Restore feature couldn’t find the packages stored in our local repository.

At first, I thought there was a snag (permissions, network, general unhappiness) with our NuGet share, but all seemed well. To my surprise, repository locations are not stored in that swanky .nuget directory, but as part of the current user profile. %appdata%\NuGet\NuGet.Config to be precise.

Well, that’s nice on my workstation, but NETWORK SERVICE doesn’t have a profile and the All Users AppData directory didn’t seem to take effect.

The solution:

For TeamCity, at least, the solution was to set the TeamCity build agent services to run as a specific user (I chose a domain user in our network, you could use a local user as well). Once you have a profile, go into %drive%:\users\{your service name}\appdata\roaming\nuget and modify the nuget.config file.

Here’s an example of the file:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="NuGet official package source" value="" />
    <add key="Student Achievement [local]" value="\\\shared$\nuget" />
  </packageSources>
  <activePackageSource>
    <add key="NuGet official package source" value="" />
  </activePackageSource>
</configuration>

Package Restore will attempt to find the packages on the ‘activePackageSource’ first then proceed through the list.

Remember, if you have multiple build agent servers, this must be done on each server.

Wish List: The option to include non-standard repositories as part of the .nuget folder. 🙂

Changing Default Printers by Network Subnet

January 13, 2012

Windows 7 includes a pretty handy feature for mobile devices called location-aware printing (LAP). The feature itself is pretty cool and great if you’re moving between two distinct networks (home and work, for example). However, if you’re moving within the SAME network–and the SAME wireless SSID–it doesn’t register a difference. LAP doesn’t pay attention to your IP address, just the SSID you’re connected to.

In our organization, and most large corporations, wireless access points have the same name/credentials so that users can move seamlessly through the enterprise. How can we address location-based printing then?

One of my peers recently moved into a position where she bounces between two buildings multiple times per day and frequently forgets to reset her default printer.

Here’s how I helped her out using a bit of PowerShell.

For the full code, check out this gist.

Set-PrinterByLocation in action!

To begin, we need to specify our IP subnets and the printers associated to them. As this gets bigger (say 4-5 sites), it’d be easier to toss this into a separate file as a key-value pair and import it.

$homeNet = "10.1.4.*", "OfficePrinter"
$remoteNet = "10.1.6.*", "W382_HP_Printer"
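If that mapping moved into a separate file, the lookup could sketch out like this (printers.csv is a hypothetical file with Subnet and Printer columns; $ipAddress is gathered later in this post):

```powershell
# printers.csv (hypothetical):
#   Subnet,Printer
#   10.1.4.*,OfficePrinter
#   10.1.6.*,W382_HP_Printer
$sites = Import-Csv printers.csv

# Find the first site whose subnet pattern matches one of our IPs.
$site = $sites | ? { $ipAddress -like $_.Subnet } | select -first 1
if ($site) { Set-DefaultPrinter $site.Printer }
```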

Next, let’s grab all of the IP addresses currently active on our computer. Since we could have both wireless and wired plugged in, this returns an array.

$ipAddress = @()
$ipAddress = gwmi win32_NetworkAdapterConfiguration |
	? { $_.IPEnabled -eq $true } |
	% { $_.IPAddress } |
	% { [IPAddress]$_ } |
	? { $_.AddressFamily -eq 'internetwork'  } |
	% { $_.IPAddressToString }

Write-Host -fore cyan "Your current network is $ipAddress."

Our last step is to switch (using the awesome -wildcard flag since we’re using wildcards ‘*’ in our subnets) based on the returned IPs. The Set-DefaultPrinter function is a tweaked version of this code from The Scripting Guy.

function Set-DefaultPrinter([string]$printerPath) {
	$printers = gwmi -class Win32_Printer -computer .
	Write-Host -fore cyan "Default Printer: $printerPath"
	$dp = $printers | ? { $_.deviceID -match $printerPath }
	$dp.SetDefaultPrinter() | Out-Null
}

switch -wildcard ($ipAddress) {
	$homeNet[0] { Set-DefaultPrinter $homeNet[1] }
	$remoteNet[0] { Set-DefaultPrinter $remoteNet[1] }
	default { Set-DefaultPrinter $homeNet[1] }
}
The full source code (and a constantly updated version available from gist).

$homeNet = "10.1.4.*", "OfficePrinter"
$remoteNet = "10.1.6.*", "W382_HP_Printer"

function Set-DefaultPrinter([string]$printerPath) {
	$printers = gwmi -class Win32_Printer -computer .
	Write-Host -fore cyan "Default Printer: $printerPath"
	$dp = $printers | ? { $_.deviceID -match $printerPath }
	$dp.SetDefaultPrinter() | Out-Null
}

$ipAddress = @()
$ipAddress = gwmi win32_NetworkAdapterConfiguration |
	? { $_.IPEnabled -eq $true } |
	% { $_.IPAddress } |
	% { [IPAddress]$_ } |
	? { $_.AddressFamily -eq 'internetwork'  } |
	% { $_.IPAddressToString }

Write-Host -fore cyan "Your current network is $ipAddress."

switch -wildcard ($ipAddress) {
	$homeNet[0] { Set-DefaultPrinter $homeNet[1] }
	$remoteNet[0] { Set-DefaultPrinter $remoteNet[1] }
	default { Set-DefaultPrinter $homeNet[1] }
}

Tip: Excluding Auto-Generated Files from SourceSafe

December 9, 2009

Being an avid git user outside the workplace, I’m used to setting up .gitignore files for the ReSharper files and pre-generated sludge that find their way into my projects.  However, until today, I never found a clean way of handling pre-generated files with Visual SourceSafe.

Seems the answer was just in a menu that I never used (imagine that…).


For now, rather than having dotless generate my files on the fly, I’m using the compiler to compile my .less into .css files.  Since I’m using the built-in publishing features (which, I am replacing with psake tasks–more on that later) for this project, any files not included in the project are skipped/not copied.  That’s a bummer for my generated .css file.

The answer is to include the file in the project; however, when checked in, dotless.Compiler crashes because it can’t rewrite the file (since it’s read-only).


Exclude the file from source control. Sure, that sounds good, but how do you do that using SourceSafe?

1. Select the file, site.css in this case.

2. File > Source Control > Exclude ‘site.css’ from Source Control.

Yeah, seriously that easy.  Instead of the normal lock/checkmark, a red (-) appears by the file (which is fine) and everything compiles as expected.

I’ve used SourceSafe for years now and never saw that in there… it doesn’t look like I can wildcard files or extensions like .gitignore (or even folders–the option disappears if anything but a single file is selected), but for a one-off case like this, it works just fine.

PowerShell: Recreating SQL*Plus ‘ed’ Command

November 9, 2009

Aside from PowerShell, I spend a bit of time day-to-day in Oracle’s SQL*Plus.  I’m slowly replacing my dependency on SQL*Plus with PowerShell scripts, functions, and such, but there are some core things that it’s just easier to bring up a console and hack out.

One thing I really love about SQL*Plus is the ‘ed’ command.  The ‘ed’ command dumps your last statement into a temp file, then opens it in Notepad.  Make your changes in Notepad, save, and rerun (in SQL*Plus, it’s a single slash and then Enter), and the frustration of lengthy command line editing is solved.

So, this is a work in progress—my attempt to simulate the same process in PowerShell.

Example Situation

Let’s say, for example, we have a long command we have been hacking away at, but are getting tired of arrowing back and forth on:

$count = 0; sq mydb "select id from reports" | % { $count++ }; $count

(Yes, yes, it’s not ‘very long’, but that’s why this is called an example!)

So, a simple row counter.

The ‘ed’ Command

To recreate the ‘ed’ experience, I need two things:

  • Access to the command line history via get-history.
  • A temporary file.

touch c:\temp.tmp

get-history |
	? { $_.commandline -ne "ex" } |
	? { $_.commandline -ne "ed" } |
	select commandline -last 1 |
	% { $_.CommandLine } > c:\temp.tmp

np c:\temp.tmp

Stepping through the code:

  1. Create a temporary file.  I used c:\temp.tmp, but you could get a bit fancier and use the systems $TEMP directory or something.
  2. Get-History.  This returns an ascending list of your command history in PowerShell by ID and CommandLine.  We want to ignore any calls to our ‘ed’ or ‘ex’ functions and grab the “last 1”—then insert it into our temp file.
  3. Finally, open up the temp file in NotePad (np is an alias on my PowerShell profile for NotePad2).
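Bundled into a profile function, the steps above might look like this:

```powershell
function ed {
	# Grab the last real command (skipping calls to ed/ex themselves).
	get-history |
		? { $_.CommandLine -ne "ed" -and $_.CommandLine -ne "ex" } |
		select -last 1 |
		% { $_.CommandLine } > c:\temp.tmp
	np c:\temp.tmp
}
```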

So now that I have my ed command, let’s try it out after I’ve already ran my simple row counter.

Last Command in NotePad2

Cool.  We can now make changes to this and SAVE the temp file.  The next step would be to jump back to PowerShell and execute the updated command.

The ‘ex’ Command

Since the slash has special meaning in a PowerShell window, I simply named this script ‘ex’ instead.  Ex… execute… surely I can’t forget that, right?

get-content c:\temp.tmp |
	% {
		$_
		invoke-expression $_
	}
The ‘ex’ command is pretty simple.

  1. Get the content of our temporary file from get-content.
  2. For each line (since get-content returns an array of lines), print out the line as text and then invoke (run) it using invoke-expression.

Invoke-Expression takes a string as a command (from our text file) and returns the results.  For more information on Invoke-Expression, check out MSDN.
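For instance:

```powershell
$cmd = '(1..10 | measure-object -sum).Sum'
invoke-expression $cmd   # evaluates the string as a command: 55
```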

Wrapping It Up

Now that we have our two commands—ed and ex—we’re ready for those “oh, this is short and easy” commands that turn into multi-page monsters—without fighting tabs and arrow keys.

As I said, this is a work-in-progress.  I’ll update the post as new/better ideas come up. 🙂

PowerShell : TGIF Reminder

November 6, 2009

Jeff Hicks’ TGIF post yesterday was a fun way to learn a bit of PowerShell (casting, working with dates, etc) and also add something valuable to your professional toolbelt—knowing when to go home. 🙂

I tweaked the date output a bit to be more human readable, but also moved it from just a function to part of my UI.  I mean, I should ALWAYS know how close I am to quittin’ time, right? Especially as we joke around our office during our ‘payless weeks’.

# Determines if it's quittin' time yet! (fun function)
function get-tgif {
 $now = Get-Date
 # If it's Saturday or Sunday then Stop! It's the weekend!
 # (note: [int]DayOfWeek is 0 for Sunday, 6 for Saturday)
 if ([int]$now.DayOfWeek -eq 6 -or [int]$now.DayOfWeek -eq 0) {
  "It's the weekend!"
 }
 else {
  # Determine when the next quittin' time is...
  [datetime]$TGIF = "{0:MM/dd/yyyy} 4:30:00 PM" -f `
   (($now.AddDays(5 - [int]$now.DayOfWeek)))
  # Must be Friday, but after quittin' time, GO HOME!
  if ((get-date) -ge $TGIF) {
   "TGIF has started without you!"
  }
  else {
   # Awww, still at work--how long until
   # it's time to go to the pub?
   $diff = $($TGIF - (get-date))
   "TGIF: $($diff.Days)d $($diff.Hours)h $($diff.Minutes)m"
  }
 }
}

NOTE: My “end time” is set to 4:30PM, not 5:00PM—since that’s when I escape.  Change as necessary. 😉

The code comments explain most of it.  As you can see, I added in one more check—let me know when it’s simply the weekend.  I also removed the Write-Host calls, since I simply want to return a String from the function.  I could use the function, as necessary, with formatting, and add it to other scripts and such.  For example:

Write-Host $(get-tgif) -fore red

The next step was tapping into the $Host variable.  Since I use Console2, my PowerShell window is a tab rather than the whole window.  Console2 is aware of PowerShell’s $Host.UI methods and adheres to the changes.

To add get-tgif to my prompt’s window title:

$windowTitle = "(" + $(get-tgif) + ") "
$host.ui.rawui.WindowTitle = $windowTitle

Easy enough.  Now my window title looks like (ignore the path in there for now):

TGIF countdown in WindowTitle

But that only sets it when you log in… and I want to update it (and keep that path updated as I traverse through directories).  To do that add a function called ‘prompt’ to your PowerShell Profile.  Prompt is processed every time the “PS>” is generated and allows you a great deal of customization.  See the post here for further details on how I’ve customized my prompt to handle Git repositories.

So, move those two lines into our prompt function, and our TGIF timer now updates every time our prompt changes… keeping it pretty close to real time as you work.

function prompt {
 $windowTitle = "(" + $(get-tgif) + ") " + $(get-location).ToString()
 $host.ui.rawui.WindowTitle = $windowTitle
}



This could be tweaked to any type of countdown.  I’m sure a few of those around the office would have fun adding retirement countdowns, etc. 😉

Happy TGIF!

PowerShell : Add Recursive Directories to PATH

October 19, 2009

As I continue to go script crazy (a bit) with PowerShell, I’ve noticed that my general scripts directory has grown quite a bit and is becoming unmanageable.  Being a lover of neat and tidy, I wanted to separate ‘common’ scripts from ‘server-based’ and ‘task’ scripts.

Sounds easy enough, but that breaks part of my PowerShell profile—the part where I add the scripts directory to the path.  Moving those files out of that directory, even if they’re in subdirectories, would take them out of the path and I’d lose my [a..b]-{tab} goodness.

So, how to add the recursive structure to PATH?

It’s easy!

In my Profile, $scripts is a variable that contains a path to my ‘standard’ scripts directory based on whatever computer I’m currently logged into.

From there, simply use the built-in features of PowerShell.

gci $scripts |
	? { $_.psIsContainer } |
	% { [System.Environment]::SetEnvironmentVariable("PATH",
		$Env:Path + ";" + $_.FullName, "Process") }

Verbatim: for each child object in $scripts that is a container, append the full name to PATH.

Exactly what I wanted and works like a champ.  Now I can move my scripts around, create new directories, the works—and each time I restart my console, it’ll detect the new directories and simply work—exactly like a shell should.
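After restarting the console, a quick sanity check confirms the new directories made it in:

```powershell
# List every PATH entry that lives under the scripts directory.
$Env:Path -split ';' | ? { $_ -like "$scripts*" }
```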

Categories: Microsoft, PowerShell

PowerShell – Easy Line Numbers using Cat (Get-Content)

October 8, 2009

I often cat out code pages from the prompt when I simply want to see what’s inside or look for a method.  Cat, of course, is an alias to PowerShell’s get-content cmdlet, but cat brings me back to days in Unix and makes me feel better. 🙂

One thing that was a bit of a trouble was looking for a specific line of code.

For example: Looking through the debug logs, I see an error occurring at line 40 of some file.  That’s great, but who wants to count lines?  Instead, we open up Visual Studio and look.  Well, that’s a hassle too.

Here’s a quick script that does a simple line count with the line in the file.

param ([string] $filename)
$counter = 0;
get-content $filename |
 % { $counter++; write-host "$counter`t| $_" }

Since the result of get-content becomes an enumerable line-by-line list, it’s easy to iterate through and increment a counter.

To call it, just call the method with the file name.  For mine, I called it cat-code.ps1, though I’ll probably set an alias to ‘cc’ or something later on.

.\cat-code.ps1 psake_default.ps1

Code Cat example.

From here, you could add in a check to the $Host.UI.RawUI and handle the overflows a bit better, but that’s for another day!

Automating Extracts Using External .SQL Files and PowerShell

September 29, 2009

Rather than rely on a system task, I wanted to be able to kick off an export of one of our SQL databases on the fly, have it generate a CSV, then email the CSV to one of our developers.

Sounds like a task for PowerShell!

Why PowerShell? Well, frankly, because moving jobs around and such on our mixed-mash of SQL versions and servers is a bit annoying and I wanted to see if it worked.

So, to start off, I have a file, call it export_data.sql that contains all of the logic of my SQL query and the format I require.

I’ll also be using the Microsoft SQL query script I discussed in this blog post.

The tricky part is reading in the .sql file as a single “entity”—not the array of rows that get-content usually provides.  Thankfully, there is an easy way to do it.

@(gc 'export_data.sql' -readcount 0)

According to the PowerShell documentation, the readcount parameter serves two purposes: one is to set the number of lines to read at a time, the second (when set to zero) is to pull in all of the lines at once.  Great, that’s exactly what we need.
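The pipeline difference between the two forms, sketched:

```powershell
# Default: each line is its own pipeline object.
gc 'export_data.sql' | % { <# runs once per line #> }

# -ReadCount 0: one batch; $_ is the entire array of lines.
gc 'export_data.sql' -readcount 0 | % { <# runs once #> }
```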

From there, it’s simply feeding our query to the sql-query function I wrote and exporting to CSV.  Nice enough, CSV exports are built-in to PowerShell!

The final command looks something like this:

@(gc 'export_data.sql' -readcount 0) | 
   % { sql-query server db $_ } |
   export-csv dump.csv

I could then add another field to mail the csv using the built-in command Send-MailMessage.
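That last step might look something like this (the server and addresses are, of course, made up):

```powershell
Send-MailMessage -SmtpServer "mail.example.com" `
	-From "reports@example.com" -To "developer@example.com" `
	-Subject "Data extract" -Body "Nightly CSV attached." `
	-Attachments dump.csv
```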

❤ PowerShell.