Archive for the ‘General Development’ Category

Mashing CSVs around using PowerShell

February 15, 2012

Since I spend most of my day in the console, PowerShell also serves as my ‘Excel’. So, continuing my recent trend of PowerShell related posts, let’s dig into a quick and easy way to parse up CSV files (or most any type of file) by creating objects!

We, of course, need a few rows of example data. Let’s use this pseudo student roster.

Example data:

Student,Code,Product,IUID,TSSOC,Date
123456,e11234,Reading,jsmith,0:18,1/4/2012
123456,e11234,Reading,jsmith,1:04,1/4/2012
123456,e11234,Reading,jsmith,0:27,1/5/2012
123456,e11234,Reading,jsmith,0:19,1/7/2012
123456,e11235,Math,jsmith,0:14,1/7/2012

Now, for reporting, I want a ‘Minutes’ value calculated from the TSSOC column (hours:minutes). Easy–we have PowerShell, and it can split AND multiply!

The code:

Begin by creating an empty array to hold our output, importing our data into the pipeline, and opening up a ForEach-Object loop (the % alias). The final $out is our return value–echoing the array so we can see our results.

$out = @()
import-csv data_example.csv |
   % {

   }
$out

Next, let’s add in our logic to split out the hours and minutes. We have full access to the .NET string methods in PowerShell, including .Split(). .Split() returns an array, and since the value is HH:MM, the first element is our hours and the second is our minutes. The hours then need to be multiplied by 60 to convert them into minutes.

You’ll also notice the [int] casting–this ensures we can properly multiply. Give it a whirl without it and you’ll get a string of 60 0s or 1s back (PowerShell repeats the string instead of doing the math).

$out = @()
import-csv data_example.csv |
   % {
	$hours = [int]$_.TSSOC.Split(':')[0] * 60
	$minutes = [int]$_.TSSOC.Split(':')[1]
   }
$out

The next step is to create a new object to contain our return values. We can use the new PowerShell v2.0 syntax to create a quick hashtable of our properties and values. Once we have our item, add it to our $out array.

$out = @()
import-csv data_example.csv |
   % {
	$hours = [int]$_.TSSOC.Split(':')[0] * 60
	$minutes = [int]$_.TSSOC.Split(':')[1]
        $item = new-object PSObject -Property @{
			Date=$_.Date;
			Minutes=($hours + $minutes);
			UserId=$_.IUID;
			StudentId=$_.Student;
			Code=$_.Code;
			Course=$_.Product
		}
	$out = $out + $item
   }

With that, we’re done. We can run the results through Sort-Object for a bit of sorting, grouping, or table formatting, or even export them BACK out as another CSV.

$out = @()
import-csv data_example.csv |
   % {
	$hours = [int]$_.TSSOC.Split(':')[0] * 60
	$minutes = [int]$_.TSSOC.Split(':')[1]
        $item = new-object PSObject -Property @{
			Date=$_.Date;
			Minutes=($hours + $minutes);
			UserId=$_.IUID;
			StudentId=$_.Student;
			Code=$_.Code;
			Course=$_.Product
		}
	$out = $out + $item
   }
$out | sort-object Date, Code | ft -a

Quick and dirty CSV manipulation–all without opening anything but the command prompt!

UPDATE: Matt has an excellent point in the comments below. PowerShell isn’t the ‘golden hammer’ for every task; it’s about finding the right tool for the job. We’re a mixed environment (Windows, Solaris, RHEL, Ubuntu), so PowerShell only applies to our Windows boxes. However, as a .NET developer, I spend 80-90% of my time on those Windows boxes. So let’s say it’s a silver hammer. :)

Now, the code in this post looks pretty long–and hopping back and forth between notepad, the CLI, and your CSV is tiresome. I bounce back and forth between the CLI and notepad2 with the ‘ed’ and ‘ex’ functions (these commands are ‘borrowed’ from Oracle PL/SQL). More information here.

So how would I type this if my boss ran into my cube with a CSV and needed a count of Minutes?

$out=@();Import-Csv data_example.csv | % { $out += (new-object psobject -prop @{ Date=$_.Date;Minutes=[int]$_.TSSOC.Split(':')[1]+([int]$_.TSSOC.Split(':')[0]*60);UserId=$_.IUID;StudentId=$_.Student;Code=$_.Code;Course=$_.Product }) }; $out | ft -a

Now, that’s quicker to type, but a LOT harder to explain. ;) I’m sure this can be simplified down–any suggestions? If you could do automatic/implied property names, that’d REALLY cut it down.
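
For what it’s worth, one way to trim it further (a sketch in the same spirit, not tested against the original file) is to let Select-Object build the objects with calculated properties–that drops the manual $out array entirely:

Import-Csv data_example.csv |
    Select-Object Date, Code,
        @{Name='StudentId'; Expression={$_.Student}},
        @{Name='UserId';    Expression={$_.IUID}},
        @{Name='Course';    Expression={$_.Product}},
        @{Name='Minutes';   Expression={ [int]$_.TSSOC.Split(':')[0] * 60 + [int]$_.TSSOC.Split(':')[1] }} |
    ft -a

Date and Code pass straight through by name; only the renamed or computed columns need the hashtable syntax.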

The Post-Certification Era?

February 13, 2012

Oh look, starting off with a disclaimer. This should be good!

These are patterns I’ve noticed in our organization over the past ten years–ranging from hardware to software to development technical staff. These are my observations, experiences with recruiting, and a good dash of my opinions. I’m certain there are exceptions. If you’re an exception, you get a cookie. :)

This isn’t specifically focused on Microsoft’s certifications. We’re a .NET shop, but we’re also an Oracle shop, a Solaris shop, and a RHEL shop. So many certification opportunities, so little training dollars.

Finally, I’ll also throw out that I have a few certifications. When I made my living as a full-time consultant and contractor and was just getting started, they were the right thing to do (read on for why). Years later … things have changed.

Evaluating The Post-Certification Era

In today’s development ecosystem, certifications seem to play a nearly unmentionable role outside of college recruitment offices and general-practice consulting agencies. While certifications provide a baseline for those just entering the field, I rarely see established developers (read: >~2 years experience) heading out to the courseware to seek a new certification.

Primary reasons for certifications: entry into the field and “saleability”.
Entry into the field – provides a similar baseline to compare candidates for entry-level positions.

Example: Compare hiring an entry-level developer vs. an experienced enterprise architect. For the entry-level developer, a certification usually provides a baseline of skills.

For an experienced architect, however, past project experience, core understanding of architecture practices, examples of work in open source communities, and scenario-based knowledge provides the best gauge of skills.

“Saleability” of certifications lets consulting agencies “one-up” other organizations, but the certified staff usually lack the actual real-world skills necessary for implementation.

Example: We had a couple of fiascos years back with a very reputable consulting company filled with certified developers who simply couldn’t wrap those skills into a finished product. We managed to bring the project back in-house and get our customers squared away, but it broke the working relationship we had with that consulting company.

Certifications provide a baseline for experience and expertise similar to college degrees.
As in college, being able to cram for and pass a certification test is a poor indicator of (or replacement for) the ability to handle real-life situations.

Example: Many certification “crammers” and boot camps are available for a fee–rapid memorization aimed at passing the test. I do not believe these prepare you for actual situations, nor do they prepare you to continue expanding your knowledge base.

Certifications are outdated before they’re even released.
Test-makers and publishers cannot keep up with technology at its current pace. The current core Microsoft certifications focus on v2.0 technologies (though they are slowly being updated to 4.0).

I’m sure it’s a game of tag between the DevDiv and Training teams up in Redmond. We, as developers, push for new features faster, but the courseware can only be written/edited/reviewed/approved so quickly.

In addition, almost all of our current, production applications are .NET applications; however, a great deal of functionality is derived from open-source and community-driven projects that go beyond the scope of a Microsoft certification.

Certifications do not account for today’s open-source/community environment.
A single “Microsoft” certification does not cover the majority of the programming practices and tools used in modern development.

Looking beyond Microsoft allows us the flexibility to find the right tool/technology for the task. In nearly every case, these alternatives provide a cost savings to the district.

Example: Many sites that we develop now feature non-Microsoft ‘tools’ from the ground up.

  • web engine: FubuMVC, OpenRasta, ASP.NET MVC
  • view engine: Spark, HAML
  • dependency injection/management: StructureMap, Ninject, Cassette
  • source control: git, hg
  • data storage: NHibernate, RavenDB, MySQL
  • testing: TeamCity, MSpec, Moq, Jasmine
  • tooling: PowerShell, rake

This doesn’t even take into consideration the extensive use of client-side programming technologies, such as JavaScript.

A more personal example: I’ve used NHibernate/FluentNHibernate for years now. Fluent mappings, auto mappings, insane conventions and more fill my day-to-day data modeling. NH meets our needs in spades and, since many of our objects talk to vendor views and Oracle objects, Entity Framework doesn’t meet our needs. If I wanted our team to dig into the Microsoft certification path, we’d have to dig into Entity Framework. Why would I want to waste everyone’s time?

This same question applies to many of the plug-and-go features of .NET, especially since most certification examples focus on arcane things that most folks would look up in a time of crisis anyway and not on the meat and potatoes of daily tasks.

Certifications do not account for the current scope of modern development languages.
Being able to tell an integer from a string, and knowing when to call a certain method, crosses language and vendor boundaries.  A typical Student Achievement project contains anywhere from three to six different languages–only one of those being a Microsoft-based language.

Whether it’s Microsoft’s C#, Sun’s Java, JavaScript, Ruby, or any number of scripting languages implemented in our department–there are ubiquitous core skills to cultivate.

Cultivating the Post-Certification Developer

In a “Google age”, knowing how and why components optimally fit together provides far more value than syntax and memorization. If someone needs a code syntax explanation, a quick search reveals the answer. For something more destructive, such as modifications to our Solaris servers, I’d PREFER our techs look up the syntax–especially if it’s something they do once a decade. There are no heroes when a backwards bash flag formats an array. ;)

Within small development shops, such as ours, a large percentage of development value-added skills lie in enterprise architecture, domain expertise, and understanding design patterns–typical skills not covered on technology certification exams.

Rather than focusing on outdated technologies and unused skills, a modern developer and development organization can best be ‘grown’ through active community involvement.  Active community involvement provides a post-certification developer with several learning tools:

Participating in open-source projects allows the developer to observe, comment, and learn from other professional developers using modern tools and technologies.

Example: Submitting a code example to an open source project where a dozen developers pick it apart and, if necessary, provide feedback on better coding techniques.

Developing a social network of professional developers provides an instant feedback loop for ideas, new technologies, and best practices. Blogging, and reading blogs, allows a developer to cultivate their programming skill set with a world-wide echo chamber.

Example: A simple message on Twitter about an error in a technology released that day can garner instant feedback from a project manager at that company, prompting email exchanges, telephone calls, and the necessary steps to resolve the problem directly from the developer who implemented the feature in the new technology.

Participating in community-driven events such as webinars/webcasts, user groups, and open space discussions. These groups bolster existing social networks and provide knowledge transfer of best practices and patterns on current subjects as well as provide networking opportunities with peers in the field.

Example: Community-driven events provide both a medium to learn and a medium to give back to the community through talks and online sessions.  This helps build both a mentoring mentality in developers as well as a drive to fully understand the inner-workings of each technology.

Summary

While certifications can provide a bit of value–especially for getting your foot in the door–I don’t see many on the resumes coming across my desk these days. Most, especially the younger crowd, flaunt their open source projects, hacks, and adventures with ‘technology X’ as a badge of achievement rather than certifications. In our shop and hiring process, that works out well. I doubt it’s the same everywhere.

Looking past certifications in ‘technology X’ to long-term development value-added skills adds more bang to the resume, and the individual, than any finite-lived piece of paper.

Quick Solution Generation using PowerShell: New-Project

February 13, 2011

When I have an idea or want to prototype things, I tend to mock it up in Balsamiq, then dig right in and write some specs to see how it’d work.  Unfortunately, deleting the junk Class1.cs in library projects, the plethora of excess in MVC3 webs, and the like tends to be the most time-intensive part of wiring up a quick project in .NET.

All that deleting is too many steps–especially if you’re developing on the fly with a room of folks. I needed something command-line driven to fit my normal workflow:

  1. o init-wrap MyProject -git
  2. cd MyProject
  3. git flow init
  4. {something to create projects, solutions, etc}
  5. o init-wrap -all
  6. {spend 5 minutes cleaning up junk files in my way}

Introducing New-Project

Yes, I know. I’m not a marketing guru. I don’t have a cool name for it.  Just a standard PowerShell convention.

Usage:

  -Library { } : Takes a string[] of names for c# class libraries to create.

  -Web { } : Takes a string[] of names for MVC3 web projects to create.

  -Solution "" : Takes a single string for your solution name.

Example:

New-Project -Library MyProj.Core, MyProj.Specs -Web MyProj.Web -Solution MyProject

SiteScaffolding

 

What does this all do?

Well, honestly, I’m not sure how ‘reusable’ this is… the projects are pretty tailored.

Libraries

  • Libraries don’t have the annoying Class1.cs file that you always delete.
  • AssemblyInfo.cs is updated with the specified Name and Title.

MVC3 Webs

  • The web.config is STRIPPED down to the minimum (27 lines).
  • The folder structure is reorganized (removed unnecessary folders, like Controllers, which I put in libraries, not the web project).

Solution

  • This is the only one I’m actually using the VisualStudio.DTE for–it makes it super easy to create and add projects into the solution.
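
For the curious, the DTE piece boils down to something like this. It’s a rough sketch rather than the actual New-Project internals–the ProgID is version-specific (VS2010 shown) and the paths are made up for the example:

# spin up a hidden Visual Studio instance via COM (adjust the ProgID for your VS version)
$dte = New-Object -ComObject VisualStudio.DTE.10.0
$dte.SuppressUI = $true

# create an empty solution, then pull an existing project file into it
$sln = $dte.Solution
$sln.Create("C:\projects\MyProject", "MyProject")
$sln.AddFromFile("C:\projects\MyProject\MyProj.Core\MyProj.Core.csproj", $false)

# save the .sln and shut the hidden instance down
$sln.SaveAs("C:\projects\MyProject\MyProject.sln")
$dte.Quit()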

But there are other scaffolding systems out there–why not use them?

Most of the time, I don’t need a full system. I don’t need my objects mapped, views automatically set up, or anything else. 

Start with three empty projects, load up the Specifications project, and start driving out features.  That’s how it’s supposed to work, right?  So why would I want to have to pre-fill my projects ahead of time?

 

What’s next?

  • Error catching and handling (it’s pretty lax right now)
  • Handle setting up gitflow, openwrap, jQuery, etc. Less typing good!
  • Something… who knows. :D

 

Where to get it?

I’ve tossed it up on github at https://github.com/drlongnecker/New-Project.  Right now it has the WOMM warranty.


Updating NuGet Spec’s Version Numbers in psake

December 3, 2010

As part of our psake build process on a couple of framework libraries, I wanted to add in updating our internal NuGet repository.  The actual .nuspec file is laid out quite simply; however, the version number is hard-coded.

For packages that have a static ‘major’ version, that’s not a bad deal; however, I wanted to keep our package up to date with the latest and greatest versions, so I needed to update that version element.

Since I have the full power of PowerShell at my disposal, modifying an XML file is a cakewalk. Here’s how I went about it.

function build-nuget-package {
	# update nuget spec version number
	[xml] $spec = gc $nuget_spec
	$spec.package.metadata.version = GetBuildNumber
	$spec.Save($nuget_spec)

	# rebuild the package using the updated .nuspec file.
	cd $release_directory
	exec { invoke-expression "$nuget pack $nuget_spec" }
	cd $build_directory
}

GetBuildNumber is an existing psake function I use to snag the AssemblyVersion from \Properties\AssemblyInfo.cs (and report back to things like TeamCity). $nuget and $nuget_spec are variables configured in psake that point to the NuGet executable and the specification file used by the build script.
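
If you need a starting point for that kind of helper, here’s a bare-bones sketch (not the actual GetBuildNumber; $source_directory is a hypothetical psake variable pointing at the project source):

function GetBuildNumber {
    # scrape the AssemblyVersion attribute out of AssemblyInfo.cs
    $info = Get-Content "$source_directory\Properties\AssemblyInfo.cs" | Out-String
    if ($info -match 'AssemblyVersion\("(?<version>[^"]+)"\)') {
        return $matches['version']
    }
    throw "AssemblyVersion attribute not found"
}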

Once the package is built, in our Deploy task, we do a simple copy.

task Deploy -depends Release {
	 [ ... cut for brevity ]

	# build and deploy nuget package
	build-nuget-package
	copy-item $release_directory\*.nupkg \\server\nugetshare$
}

Now the NuGet repository is updated with a new package each time we build. I may refine it and only update the package on major version number changes or something later, but this gets the job done.

Playing nice with psake, PartCover, and TeamCity

December 2, 2010

While code coverage isn’t the holy grail of development benchmarks, it has a place in the tool belt. We have several legacy systems where we are retrofitting tests in as we enhance and maintain the project.

I looked at NCover as a code coverage solution. NCover is a SLICK product and I really appreciated its Explorer and charting for deep-dive analysis. Unfortunately, working in public education, the budget cuts just keep coming and commercial software didn’t fit the bill. After finding the revitalized project on GitHub, I dug back into PartCover.net.  Shaun Wilde (@shauncv) and others have been quite active revitalizing the project and fleshing out features for .NET 4.0.

You can find the repo and installation details for PartCover.net at https://github.com/sawilde/partcover.net4.

Now, armed with code coverage reports, I had to find a way to get that information into TeamCity. Since I use psake, the command line runner doesn’t have the option for importing code coverage results. That just means I’ll need to handle executing PartCover inside of psake. Easy enough. Thankfully, the TeamCity Reports tabs can also be customized. Time to get to work!

Step 0: Setting up PartCover for 64-bit

If you’re on a 32-bit installation, then skip this (are there people still running 32-bit installs?).

PartCover itself installs just fine; however, if you’re on an x64 machine, you’ll need to modify PartCover.exe and PartCover.Browser.exe to force 32-bit mode using corflags.exe.

Corflags.exe is usually located at C:\Program Files (x86)\Microsoft SDKs\Windows\v7.0A\bin if you’re running Windows 7. Older versions of Windows might have it under v6.0A.

Once you’ve located corflags.exe, simply force the 32-bit flag:

corflags /32bit+ /force PartCover.exe
corflags /32bit+ /force PartCover.Browser.exe

If you have trouble on this step, hit up Google. There are tons of articles out there hashing and rehashing this problem.

Step 1: Setting up your PartCover settings file

PartCover allows you to specify configuration settings directly at the command line, but I find that hard to track when changes are made.  Alternatively, a simple XML file allows you to keep your runner command clean.

Here’s the settings file I’m running with:

<PartCoverSettings>
  <Target>.\Tools\xunit\xunit.console.x86.exe</Target>
  <TargetWorkDir></TargetWorkDir>
  <TargetArgs> .\build\MyApp.Test.dll /teamcity /html .\build\test_output.html</TargetArgs>
  <DisableFlattenDomains>True</DisableFlattenDomains>
  <Rule>+[MyApp]*</Rule>
  <Rule>-[MyApp.Test]*</Rule>
  <Rule>-[*]*__*</Rule>
</PartCoverSettings>

Just replace MyApp with the project names/namespaces you wish to cover (and exclude).

For more details on setting up a settings file, check out the documentation that comes with PartCover.  The only real oddity I have here is the last rule <Rule>-[*]*__*</Rule>.  PartCover, by default, also picks up the dynamically created classes from lambda expressions.  Since I’m already testing those, I’m hiding the results for the dynamic classes with two underscores in them __. It’s a cheap trick, but seems to work.

Step 2: Setting up psake for your tests and coverage

My default test runner task in psake grabs a set of assemblies and iterates through them passing each to the test runner.  That hasn’t changed; however, I now need to call PartCover instead of mspec/xunit.

task Test -depends Compile {
    $test_assemblies =     (gci $build_directory\* -include *Spec.dll,*Test.dll)
  if ($test_assemblies -ne $null) {
    " - Found tests/specifications..."
    foreach($test in $test_assemblies) {
      " - Executing tests and coverage on $test..."
      $testExpression = "$coverage_runner --register --settings $base_directory\partcover.xml --output $build_directory\partcover_results.xml"
      exec { invoke-expression $testExpression }
    }
  }
  else {
    " - No tests found, skipping step."
  }
}

$coverage_runner, $base_directory, and $build_directory refer to variables configured at the top of my psake script.
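
For context, those are set in the properties block at the top of the script–something like this sketch (the paths are placeholders, not the real layout):

properties {
    # placeholder paths -- adjust to your own tree
    $base_directory  = resolve-path .
    $build_directory = "$base_directory\build"
    $coverage_runner = "$base_directory\tools\partcover\PartCover.exe"
}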

Now, you should be able to drop to the command line and execute your test task.  A partcover.xml file should appear.  But who wants to read XML? Where are the shiny HTML reports?

We need to create them! :)

Step 3: Generating the HTML report from PartCover

There are two XML style sheets included with PartCover; however, I really like Gáspár Nagy’s detailed report found at http://gasparnagy.blogspot.com/2010/09/detailed-report-for-partcover-in.html. It’s clean, detailed, and fast.  I’ve downloaded this XSLT and placed it in the PartCover directory.

Without TeamCity to do the heavy lifting, we need an XSL transformer of our own.  For my purposes, I’m using the one bundled with Sandcastle (http://sandcastle.codeplex.com/).  Install (or extract the MSI) and copy the contents to your project (ex: .\tools\sandcastle).

The syntax for xsltransform.exe is:

xsltransform.exe {xml_input} /xsl:{transform file} /out:{output html file}

In this case, since I’m storing the partcover_results.xml in my $build_directory, the full command would look like (without the line breaks):

    .\tools\sandcastle\xsltransform.exe .\build\partcover_results.xml
        /xsl:.\tools\partcover\partcoverfullreport.xslt
        /out:.\build\partcover.html

Fantastic.  Now, we want that transform to run every time we run our tests, so let’s add it to our psake script.

task Test -depends Compile {
    $test_assemblies =     (gci $build_directory\* -include *Spec.dll,*Test.dll)
  if ($test_assemblies -ne $null) {
    " - Found tests/specifications..."
    foreach($test in $test_assemblies) {
      " - Executing tests and coverage on $test..."
      $testExpression = "$coverage_runner --register --settings $base_directory\partcover.xml --output $build_directory\partcover_results.xml"
      $coverageExpression = "$transform_runner $build_directory\partcover_results.xml /xsl:$transform_xsl /out:$build_directory\partcover.html"
      exec { invoke-expression $testExpression }
      " - Converting coverage results for $test to HTML report..."
      exec { invoke-expression $coverageExpression }
    }
  }
  else {
    " - No tests found, skipping step."
  }
}

Now we have a $coverageExpression to invoke.  The paths to xsltransform.exe and to the report are tied up in variables for easy updating.
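
Those two sit in the same properties block as the coverage variables–again, a sketch with placeholder paths:

properties {
    # ...existing properties from earlier, plus the transform bits:
    $transform_runner = "$base_directory\tools\sandcastle\xsltransform.exe"
    $transform_xsl    = "$base_directory\tools\partcover\partcoverfullreport.xslt"
}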

You should be able to run your test task again and open up the HTML file.

Step 4: Setting up the Code Coverage tab in TeamCity

The last step is the easiest. As I haven’t found a way to have TeamCity’s code coverage automatically pick up the PartCover reports, we’ll need to add a custom report tab.

In TeamCity, go under Administration > Server Configuration > Report Tabs.

Add a new report tab with the Start Page set to the name of your PartCover HTML file.  In my case, it’s partcover.html.  Remember, the base path is your build directory (which is why I’m outputting the file to the $build_directory in psake).

Commit your new psake script and PartCover to your repository (if you haven’t already) and run a build–you’ll see a new Code Coverage tab ready and waiting.


Using .less To Simplify BluePrintCSS

December 17, 2009

For the past few projects, I’ve used BluePrintCSS and really liked the experience.  It forced me both to conquer my CSS layout fears (tables no more) and standardize a few of my formatting techniques that we use on our internal and external applications.  Good deal all around.

The one caveat that I really… really didn’t like was how I had to name things.

The clean class names and IDs that I had…

<div class="page">
    <div class="header">
        <div class="title">
            <h1>${H(ApplicationName)}</h1>
        </div>
        [...]
    </div>
</div>

Turned into long, drawn out classes…

<div class="container">
    <div class="span-24">
        <div class="prepend-1 span-12 column">
            <h1>${H(ApplicationName)}</h1>
        </div>
    </div>
</div>

Without the BluePrintCSS guide or the CSS files available, you couldn’t look at the classes and tell much of what was going on… and they weren’t descriptive like ‘header’ and ‘title’.

Welcome To .less (dotless)

I stumbled onto .less (aka dotless, dotlesscss, that shizzle css thingy) back in November and thought “hey, that’s cool… that’s how CSS should work” and didn’t give it much more thought.  Shortly after fav’ing it on GitHub, I noticed they pushed an update targeting BluePrintCSS compatibility.  Cool–I’ve GOT to try this out.

Getting Started with .less

The instructions on the home page (right side of the screen) are all you need.  Clone, compile, update web.config, and go!

The Benefits

So what’s the big hype?  This:

1. Import your BluePrintCSS file into your .less file (for me, it’s site.less).

@import "screen.css";

2. Simply reference any of the BluePrintCSS class styles as part of your custom styles.

#header {
    #title {
        .span-10;
        .column;
    }
 
    #menucontainer {
        .span-14;
        .column;
        .last;
        text-align: right;
    }
}
 
#left-content {
    .span-18;
    .column;
}
 
#right-boxes {
    .span-6;
    .column;
    .last;
}

3. “Then a miracle occurs…”

When dotless’ HttpHandler hits your .less file (or you run dotless.Compiler), it translates those referenced styles into their actual CSS rules.


#header #title{width:390px;float:left;margin-right:10px;}

Nice.  Plain and simple (and miraculous).

Lessons Learned

Some “lessons learned” so far:

1. Order matters.  Referencing a style before you’ve ‘created’ it will bork the interpreter. So @imports always go at the top, and if you’re referencing within the same .less file, keep things in order.

2. Pre-compiling is fun.  For now, I’m pre-compiling my .less files without using the handler and simply sending the CSS file up to our web server.  This is easily taken care of with either an MSBuild task or a psake task (a rough psake version follows the MSBuild example below).  Here’s an example of a quick MSBuild task that references the dotless.Compiler in the solution’s “tools” directory.

$(SolutionDir)Tools\dotLess\dotless.compiler.exe -m $(ProjectDir)content\css\site.less $(ProjectDir)content\css\site.css
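
And the psake flavor is about the same size–a sketch, with $solution_directory and $project_directory standing in for wherever your build script keeps those paths:

task CompileLess {
    # run the dotless compiler from the solution's tools directory
    exec {
        & "$solution_directory\Tools\dotLess\dotless.compiler.exe" -m `
            "$project_directory\content\css\site.less" `
            "$project_directory\content\css\site.css"
    }
}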

3. .less files need to be ‘Content’. Since VS2008 is stupid, .less files (like .spark views, etc) need to be explicitly set to have a Build Action of ‘Content’ so that the publishing process sends them up to the web server.  If you’re publishing via psake or another automation tool, then ignore this. ;)

That’s it for now.  Hit up the project site, peruse and clone the github repo, and join the discussion for .less and (finally) start applying some DRY to your CSS.

Tip: Excluding Auto-Generated Files from SourceSafe

December 9, 2009

Being an avid git user outside the workplace, I’m used to setting up .gitignore files for the ReSharper and pre-generated sludge that finds its way into my projects.  However, until today, I never found a clean way of handling pre-generated files with Visual SourceSafe.

Seems the answer was just in a menu that I never used (imagine that…).

Situation

For now, rather than having dotless generate my files on the fly, I’m using the compiler to compile my .less into .css files.  Since I’m using the built-in publishing features (which I am replacing with psake tasks–more on that later) for this project, any files not included in the project are skipped/not copied.  That’s a bummer for my generated .css file.

The answer is to include the file in the project; however, when checked in, dotless.Compiler crashes because it can’t rewrite the file (since it’s read-only).

Solution

Exclude the file from source control. Sure, that sounds good, but how do you do that with SourceSafe?

1. Select the file, site.css in this case.

2. File > Source Control > Exclude ‘site.css’ from Source Control.

Yeah, it’s seriously that easy.  Instead of the normal lock/checkmark, a red (-) appears next to the file (which is fine) and everything compiles as expected.

I’ve used SourceSafe for years now and never saw that in there… it doesn’t look like I can wildcard files or extensions like .gitignore (or even folders–the option disappears if anything but a single file is selected), but for a one-off case like this, it works just fine.

Stop using MS Paint for your demos – Get Balsamiq Mocks!

November 25, 2009

Ever had your boss walk in and want a ‘mockup’ of a product the next day? In the past, I’d whip something up either in Visio (*cough* or Paint *cough*) and give a basic sketch of how screens and flow would work.

Over the past year, I’ve read a lot of rave reviews around Balsamiq Mockups, but hadn’t had a chance to try it out. 

I emailed Balsamiq to ask how their licensing structure worked.  I was interested in purchasing a personal copy (budgets are tight here at the office—cuts in public education are running deep) and curious if I could still use it for “work-related” activities. To my surprise, I had nearly an instant response thanking me for my interest and providing me a free license to use here at work (educational). That’s freaking AWESOME and greatly appreciated and further motivates me to pick up a copy to use for consulting and personal projects.

So, next comes usage.  Up until now, I’ve tinkered with the ‘web demo’ version of Balsamiq. I was pleased that, on moving to the desktop version, the layout, tools, and functionality remained almost identical. No lost time or learning curve.

How It Saved The Day

Last week, I was hit up by the opening situation.  “Hey, we need a demo of how you’d do this… and we need it later this afternoon to present to the lawyers, {big boss A}, and {big boss B}…”

For something to hit that level, I’d usually opt to bypass the Big Chief and Crayon as well as Paint and simply mock up a UI on the web.  Unfortunately, for this, the time wasn’t there; however, Balsamiq was up to the task.

Rough (very rough) specs to screens, screens transformed into flow, and added into a PowerPoint for a quick presentation, acceptance by all parties involved, and hero fanfare.  We’re now a week later and using those mockups to generate our UI screens—and the customer loves how well things translate without any ‘surprises’. Excellent.

UPDATE: I just pulled down the 1.6.46 version of Balsamiq and it exports to PDF. Sweet.

Features I Love

Linked Screens – A bit of a counter to one of the ‘Things I Wish It Did’ is the ability to link mockup screens together with ‘links’.  It’s great for demos to provide a true flow of how things fit together.

In this example, we’re showing a simulation of the login screen. You can see the ‘link’ icon on the Login button. In a demo, clicking the Login button takes us to our next screen—just like it would (if authenticated right ;)) in our app.

Balsamiq Mockups - Linked Screens

Sticky Notes – On a few screens, showing business logic is a bit challenging. That’s where I love sticky note comments. I’m a huge fan of whiteboarding and sticky notes and am thrilled I can bring that into my mocks. It’s also great when exporting and being able to keep track of comments and ideas as the team is working through a mock.

Balsamiq Mockups - Sticky Notes

Easily Populated Controls – Data grids, buttons, tabs—the common elements of a user interface and all easily populated with test data.  No longer do I need to draw lines in Paint or try to copy/paste screen clips out of Excel to get a decent looking grid view—a bit of text and commas and a snazzy grid appears!

Balsamiq Mockups - Grid

Things I Wish It Did

Master Pages – Most UI layouts have the same headers/footers. You can easily reproduce this by ‘cloning’ the current mockup; however, when the customer walks in and wants to move the logo from the left side to the right side, you now have 10… 20… 100 individual screens to move it on. I’d love to be able to designate a screen as the ‘parent’.

… wow, that’s about all I can come up with!

Comic Sans Makes Me Cry… and How to Fix it

One thing that really threw me about Mockups was the fact that it used Comic Sans. Seriously? Thankfully, I’m not the first to ask this and they’ve provided a stellar walkthrough on how to use whatever font you desire.

My configuration file looks like:

<config>
 <fontFace>Segoe UI</fontFace>
 <rememberWindowSize>true</rememberWindowSize>
 <useCookies>true</useCookies>
</config>

I opted for Vista/Windows 7’s clean Segoe UI font.  Having my mockups look like my scribbles on paper wasn’t that important to me.  Professional earns more points than crayons on a Big Chief tablet. ;)

Conclusions

I’ve found myself, since picking up Balsamiq Mockups, simply using it for everything.  Sitting in planning meetings and sketching out reports, screens, even data flows.  With a little creativity, you can sketch out almost anything.  For the things you can’t, I’m ‘crayoning’ it using my Bamboo tablet and drawing out what I want–then importing it as an image. Sweet.

If you haven’t checked out Balsamiq, give it a run–it’s an amazing tool.

 

Creating Local Git Repositories – Yeah, it’s that simple!

November 9, 2009

We’re in the process of evaluating several source control systems here at the office.  It’s a JOYOUS experience and I mean that… sorta.  Currently, we have a montage of files dumped into two SourceSafe repos.

At this point, most of you are either laughing, crying, or have closed your web browser so you don’t get caught reading something with the word ‘SourceSafe’ in it.  That’s cool, I get that.

As part of my demonstrations (for the various tools we’re looking at), I wanted to show how simple it was to create a local Git repository.

Since I’m discussing ‘local’, I’m taking a lot of things out of the picture.

  • Authentication/authorization is handled by Windows file permissions/Active Directory.  If you can touch the repo and read it, then knock yourself out.
  • Everything is handled through file shares–no HTTP or any of that goodness.  Users are used to hitting a mapped drive for SourceSafe repos; we’re just emulating that.

So what do we need to do to create a Git repository? Three easy steps (commands).

1. Create your directory with the .git extension.

mkdir example.git

2. Change into that new directory.

cd example.git

3. Initialize the repository using Git.

git init --bare

Our new Git repository is now initialized. You can even see that PowerShell picks up on the new repository and has changed my prompt accordingly.

Local Git Repository - example.git

Now that we have example.git as our ‘remote’ repository, we’re ready to make a local clone.

git clone H:\example.git example

Now we have an empty clone (and it’s kind enough to tell us).

Empty clone.

All of our Git goodness is packed into the standard .git directory.

Git clone contents.

To test things out, let’s create a quick file, add it to our repo, and then commit it in.

First, let’s create/append an example file in our clone.

Create/append an example file to Git

(note: this is called ‘so lazy, I don’t even want to open NotePad’)

Now, we can add and verify that our newly added ‘example.text’ file shows up:

Git - add file and verify
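
If the screenshots don’t come through, the whole sequence–creating the file, staging it, and checking status–was along these lines (a rough reconstruction; the exact commands in the screenshots may differ):

# 'so lazy I don't even want to open Notepad' -- append a line to a new file
"adding some example text" | Add-Content example.text

# stage the file and verify it shows up as a new file
git add example.text
git status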

Finally, commit…

git commit -a -m "this is our first commit"

First commit

… and push!

git push origin master

Push to origin repo.

The last step is to ensure that our new ‘remote’ repository can be reproduced.  Here’s a quick single line to clone our repository into another new folder, list out the contents, and verify the log has the commit we made in the first repo.

Finished Second Clone to Verify

It’s amazing how quick and easy it is to set up local Git repositories.  From here, we can look at the file system/Windows for authentication/authorization and focus on our migration AWAY from Visual SourceSafe. :)


PowerShell : TGIF Reminder

November 6, 2009

Jeff Hicks’ TGIF post yesterday was a fun way to learn a bit of PowerShell (casting, working with dates, etc) and also add something valuable to your professional toolbelt—knowing when to go home. :)

I tweaked the date output a bit to be more human readable, but also moved it from just a function to part of my UI.  I mean, I should ALWAYS know how close I am to quittin’ time, right? Especially as we joke around our office during our ‘payless weeks’.

# Determines if today is the end of friday! (fun function)
function get-tgif {
 $now = Get-Date
 # If it's Saturday (6) or Sunday (0) then Stop! It's the weekend!
 if ([int]$now.DayOfWeek -eq 6 -or [int]$now.DayOfWeek -eq 0)
 {
  "Weekend!"
 }
 else {
  # Determine when the next quittin' time is...
  [datetime]$TGIF = "{0:MM/dd/yyyy} 4:30:00 PM" -f `
   (($now.AddDays( 5 - [int]$now.DayOfWeek)) )

  # Must be Friday, but after quittin' time, GO HOME!
  if ((get-date) -ge $TGIF) {
   "TGIF has started without you!"
  }
  else {
   # Awww, still at work--how long until
   # it's time to go to the pub?
   $diff = $($TGIF - (get-date))
   "TGIF: $($diff.Days)d $($diff.Hours)h $($diff.Minutes)m"
  }
 }
}

NOTE: My “end time” is set to 4:30PM, not 5:00PM—since that’s when I escape.  Change as necessary. ;)

The code comments explain most of it.  As you can see, I added in one more check—let me know when it’s simply the weekend.  I also removed the Write-Host calls, since I simply want to return a String from the function.  I could use the function, as necessary, with formatting, and add it to other scripts and such.  For example:

Write-Host $(get-tgif) -fore red

The next step was tapping into the $Host variable.  Since I use Console2, my PowerShell window is a tab rather than the whole window.  Console2 is aware of PowerShell’s $Host.UI methods and adheres to the changes.

To add get-tgif to my prompt’s window title:

$windowTitle = "(" + $(get-tgif) + ") "
$host.ui.rawui.WindowTitle = $windowTitle

Easy enough.  Now my window title looks like (ignore the path in there for now):

TGIF countdown in WindowTitle

But that only sets it when you log in… and I want to update it (and keep that path updated as I traverse through directories).  To do that, add a function called ‘prompt’ to your PowerShell profile.  Prompt is processed every time the “PS>” prompt is generated and allows you a great deal of customization.  See the post here for further details on how I’ve customized my prompt to handle Git repositories.

So, move those two lines into our prompt function, and our TGIF timer now updates every time our prompt changes… keeping it pretty close to real time as you work.

function prompt {
 $windowTitle = "(" + $(get-tgif) + ") " + $(get-location).ToString()
 $host.ui.rawui.WindowTitle = $windowTitle

[…]

}

This could be tweaked to any type of countdown.  I’m sure a few of those around the office would have fun adding retirement countdowns, etc. ;)

Happy TGIF!
