Search Results

Keyword: ‘TEST’

Using RedGate ANTS to Profile XUnit Tests

August 5, 2009 3 comments

RedGate’s ANTS Performance and Memory profilers can do some pretty slick testing, so why not automate it?  The “theory” is that if my coverage is hitting all the high points, I’m profiling all the high points and can see bottlenecks.

So, how does this work?  Since the tests are in a compiled library, I can't just "load" the unit tests into the profiler. However, I can load the xUnit console runner and let it run the tests against the library.

NOTE: If you're profiling x86 libraries on an x64 machine, you'll need the XUnit 1.5 CTP (or later), which includes xunit.console.x86.exe.  If you're on an x86 machine or do not call x86 libraries, pay no attention to this notice. 😉

To begin, start up ANTS Performance Profiler and Profile a New .NET Executable.

XUnit ala ANTS Profiler

For the .NET Executable, point it towards XUnit and in the Arguments, point it towards the library you are testing.  Simple enough.
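For example, the two fields might look something like this (paths here are hypothetical; substitute the location of your xUnit console runner and your compiled test assembly):

.NET executable: C:\Tools\xunit\xunit.console.exe
Arguments:       "C:\Projects\MyApp\MyApp.Tests\bin\Debug\MyApp.Tests.dll"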

Click “Start Profiling” and let the profiling begin!

Now if I could just get the “top 10” methods to export to HTML or something so I could automate this in our reporting.

Testing/Profiling From the Web Server

October 7, 2008 Comments off

Running ANTS Profiler or gathering other information is fine and good against the local IIS on my workstation; however, sometimes it’s good to see how things perform on server-class systems.  Our development/testing servers are, in many cases, nearly identical to our production servers to provide the most accurate baselines possible. 

For the past few weeks, I've wanted to run performance profiling locally on the development server; however, I could never get single sign-on authentication to work.  The domain authentication box would pop up, not accept the password, and throw an HTTP 401.1 denied message.

Until recently, I simply shrugged it off as something I'd dinked with and broken on the server—and that it probably needed to be reimaged.  A bit of Googling pointed out that, as with most things, this “behavior is by design” in Server 2003 SP1 and higher.

From KB 896861:

This issue occurs if you install Microsoft Windows XP Service Pack 2 (SP2) or Microsoft Windows Server 2003 Service Pack 1 (SP1). Windows XP SP2 and Windows Server 2003 SP1 include a loopback check security feature that is designed to help prevent reflection attacks on your computer. Therefore, authentication fails if the FQDN or the custom host header that you use does not match the local computer name.

Yep, that’s me—we use host headers for most of our sites, e.g. app1.domain.local.

The KB article lists two “workarounds”:

  • Disable the loopback check
  • Specify host names

I tried the second option first; however, I couldn’t get it to accept more than three names and didn’t like having to modify the registry (even on a development server) every time we rolled a new app to it to test.

The first option, simply disabling the check, did work; however, it's not something I'd roll out to a non-development box (our dev servers are on a separate IP network and are pretty much hidden from everything else outside this building).
Method 1: Disable the loopback check
Follow these steps:
 
1. Click Start, click Run, type regedit, and then click OK.
2. In Registry Editor, locate and then click the following registry key:
 
 HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa
3. Right-click Lsa, point to New, and then click DWORD Value.
4. Type DisableLoopbackCheck, and then press ENTER.
5. Right-click DisableLoopbackCheck, and then click Modify.
6. In the Value data box, type 1, and then click OK.
7. Quit Registry Editor, and then restart your computer.
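
If you'd rather not click through regedit, the same change can be made from a command prompt (this is just the steps above as a one-liner; you'll still need the restart):

reg add HKLM\SYSTEM\CurrentControlSet\Control\Lsa /v DisableLoopbackCheck /t REG_DWORD /d 1 /f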
 

Test Coverage Tools in VS2008

Perhaps I spend a bit too much time “working” and not enough time learning the ins and outs of my tools.  I stumbled across the Code Coverage Results tab this morning and was quite pleased.

The Code Coverage metrics count which lines of code are not exercised by tests in your project.  I've seen 3rd party tools that do this, but never found it inside of Visual Studio.  To use these tools, I believe you must use the Visual Studio testing framework (which I do).

Here’s how to get it working.

1. Enable Code Coverage in the local test run configuration for your test project.

Test > Edit Test Run Configurations > Your Test Configuration

Enabling Code Coverage

Click on Code Coverage and check the libraries to include in the code coverage routines.

2. Execute your Test Project.  After the tests are complete, right-click one of the tests and select “Code Coverage Results”.

Select Code Coverage Results

3. Use the Code Coverage Results window to analyze the coverage of your unit tests.

The Code Coverage Results window (with the default columns) tells you your:

  • Not Covered Blocks
  • Not Covered Blocks %
  • Covered Blocks
  • Covered Blocks %

Code Coverage Results

In the image above, you can see that ERC.Models has VERY poor coverage.  That’s a LINQ library that, quite honestly, DOES have poor coverage as all of the code is automatically generated.  The implementation of the Model (in ERC.Controllers) has quite good coverage, but has room for improvement.

I can further drill down into the ERC.Controllers namespace and see that I left out the ReportsController.  I remember creating the tests for the controller, but I added a quick method to it and forgot to update the test. 

For this controller, with only a few lines, it’s easy to spot the problem but what about a namespace or class with thousands of lines of code?  This is where the code highlighting comes in handy.

4. Use Code Highlighting to pick out the missed lines of code.  Click on the Code Highlighting button to toggle highlighting on and off.

Code highlighting toggle button.

The code highlighting button, seen to the left, toggles red highlights on and off in your code.  This only works for code that is included in your code coverage metrics, but helps the developer find those little code blocks that may have been overlooked.

In my ReportsController, I remember adding a quick method, but forgetting to update the test.  I can open up that controller and see that the untested code is now highlighted.

Untested code is highlighted red.

From here, I can go back, add or update the appropriate tests, and rerun.

It’s a simple feature, but GREAT to see it built in—especially now that I know it’s there!  The only caveat is that I wish you could (or wish I knew how to) exclude  pre-generated code, such as LINQ code.

Exploring the ASP.NET MVC Preview 2 – Tests and Mocks

March 21, 2008 Comments off

The changes made around testing and mocking are a big step in the right direction from Preview 1. 

Mocking Controller Tests

In Preview 1, it was more common to create subclass testers for each Controller object—a very time-consuming task.  Mocks "worked"—but, honestly, just acted a bit odd.  Our ViewEngine still requires faking out, but Moq helps us along with the rest.

To start off, we’ll “Moq” up a Contact view and get the test to fail, then fix it (aka: implement it) to pass the test.

[TestMethod]
public void Contact_ReturnsContactView()
{
    // Create a Moq instance of our HomeController.
    var controller = new Mock<HomeController>();

    // Create an instance of our FakeViewEngine.
    // There has got to be a way to do this without
    // 'faking' it. 😦
    var fakeViewEngine = new ViewHelper.FakeViewEngine();

    // Set the ViewEngine of our mock object to the fakeViewEngine.
    controller.Object.ViewEngine = fakeViewEngine;

    // Using the extension method from the MvcMockHelpers class,
    // set the controller context.
    controller.Object.SetFakeControllerContext();

    // Invoke the "Contact" method.
    controller.Object.Contact();

    // Assert that Contact() actually returned a view named "Contact".
    Assert.AreEqual(
        "Contact",
        fakeViewEngine.ViewContext.ViewName);
}

This is part of the unit testing I still don't like—that FakeViewEngine class.  The class consists of a ViewContext and get/set properties because, by default, IViewEngine only exposes RenderView()—it doesn't allow direct access to the ViewContext.

I suppose you could override IViewEngine and implement your own—but, for testability, is there harm in exposing those as read-only?
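
For reference, here's roughly what that fake looks like. This is a minimal sketch assuming the Preview 2 IViewEngine interface; your ViewHelper.FakeViewEngine may differ slightly:

public class FakeViewEngine : IViewEngine
{
    // Capture the ViewContext so the test can inspect the rendered view's name.
    public ViewContext ViewContext { get; private set; }

    public void RenderView(ViewContext viewContext)
    {
        ViewContext = viewContext;
    }
}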

At this point, our Contact() method call is invalid (as it doesn't exist in the HomeController class).

For now, so our test will run, we’ll add in an empty method to the HomeController class.

public void Contact()
{
}

Now, we can run the test.  As expected, it fails because the object (the ViewPage) doesn’t exist yet.

Test method Contact_ReturnsContactView threw exception:  System.NullReferenceException: Object reference not set to an instance of an object..

Now, let's set off to resolve the error.

Note: The templates are very precise.  This is a content page (linked to a master page) and a view page rendering the view of an MVC controller.  Be sure to pick the right item template for your task.

In our Views > Home directory, we need to add a new MVC View Content Page item named Contact.

After the View page itself has been added, add the RenderView call in the HomeController class by modifying the empty method added earlier.

public void Contact()
{
    RenderView("Contact");
}

Now, rerun our test!

Testing Routes

There isn't any mocking (yet) in our Route tests; however, we do use some of the helper methods Scott Hanselman wrote about and that I prepackaged (see the First Glance post for downloads).  I've created two quick and simple tests:

  • Does RegisterRoutes successfully register routes to the RouteTable?
  • Does the Default.aspx mapping work at the root?

Before we get started, the tests require a simple TestInitialize (or SetUp, depending on your testing tool) to initialize the RouteCollection object.

private RouteCollection routes;

[TestInitialize()]
public void TestInitialize()
{
    routes = new RouteCollection();
}

RegisterRoutes()

Our RegisterRoutes test method is extremely easy:

[TestMethod]
public void RoutesRegistered()
{
    Assert.IsTrue(routes.Count == 0);

    RouteManager.RegisterRoutes(routes);

    Assert.IsTrue(routes.Count > 0);
}

Simply initialized, our RouteCollection should be empty (Count == 0) and, after RegisterRoutes is called, should be populated with routes.  Easy enough, and it ensures that our RegisterRoutes method is doing its job.
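
For context, RouteManager.RegisterRoutes is simply a static helper that adds the application's routes to whatever RouteCollection it is handed. A minimal sketch might look like the following (the actual defaults in your RouteManager will vary):

public static class RouteManager
{
    public static void RegisterRoutes(RouteCollection routes)
    {
        var defaultRouteHandler = new MvcRouteHandler();

        // The standard controller/action/id route.
        routes.Add(new Route("{controller}/{action}/{id}", defaultRouteHandler)
        {
            Defaults = new RouteValueDictionary(
                new { controller = "Home", action = "Index", id = "" })
        });
    }
}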

ContactRoot_MapsToHomeView()

To test our routes, we must do two things: register our route and then fake the navigation to a specific URL—in this example, “~/Contact.aspx”.   This is really useful during transitions from other frameworks to MVC—especially if you have preprinted letterhead, business cards, etc. with a specific URL on it. 🙂

To bypass the default “controller/action/id” logic of the MVC framework, we can specify exact names, paths, etc. 

First, let’s build our test.

[TestMethod]
public void ContactRoot_MapsToHomeView()
{
    RouteManager.RegisterRoutes(routes);

    RouteData routeData =
        routes.GetRouteData(
            MockHelper.FakeHttpContext("~/Contact.aspx"));

    // Check to see if a route exists for the specified URL.
    Assert.IsNotNull(routeData);

    // Check to see that the controller matches our expectation.
    Assert.AreEqual("Home", routeData.Values["Controller"]);

    // Check to see that the action matches our expectation.
    Assert.AreEqual("Contact", routeData.Values["Action"]);
}

This test checks three things to pass:

  • does the URL specified exist in the route data (is there a valid route?),
  • does it call the anticipated controller,
  • and does it call the anticipated action?

All three are required to pass as a “valid route”.

When we try to run the test, the first to fail is our Assert.IsNotNull—a route does not exist for our Contact.aspx page.  So let's add a route to get this test to pass.

routes.Add(new Route("Contact.aspx", defaultRouteHandler)
{
    Defaults =
        new RouteValueDictionary(
            new
            {
                controller = "Home",
                action = "Contact",
                id = ""
            })
});

Success!

Working with tests for both Controllers and Routes is quite a bit easier now—both due to continued improvement to the MVC framework and the work in the community with helper and extension methods.

For more details on MVC Testing, check out Scott Hanselman’s great webcast on asp.net.

Edit: Don't bother with the asp.net web site in IE 8; it's totally borked up and, even worse, Silverlight won't load properly in Firefox 2 (it's an "invalid browser").  Break out IE7 for this venture.

The .NET MVC Framework – Part 2 – Testing Complex Routes and Controllers

December 18, 2007 2 comments

This is part two of the series looking at the new .NET MVC framework.  In the last post, I discussed a bit of background of the MVC framework, setting up routes, and creating the first new controller and view for the project.

Before I begin, I want to point out that I’m using Rhino Mocks 3.3.  Works great so far—if you run into any issues with this code, please post up and let me know.

Testing Routes

Phil Haack, the Senior Program Manager for the ASP.NET team, has created a great blog post (and attached helper methods) for testing routes using Rhino Mocks.  I highly suggest reading his post before proceeding.

I’m using his “MvcMockHelpers” and “TestHelper” classes—they’re fantastic and, maybe if we ask nicely, might find their way into the MSTest or Mvc framework itself.

After those two files are added into your project, add a new Unit Test template to your project.  I'll call mine, similar to Phil's, RouteTests (that way I might not lose it or forget what it does. ;)).

Using the AssertRoute of Phil’s TestHelper class, we can quickly and easily see if the RouteTable we’ve specified is working.

[TestMethod]
public void CanMapNormalControllerAndDefaultActionRoute()
{
    RouteCollection routes = new RouteCollection();
    RouteManager.RegisterRoutes(routes);

    TestHelper.AssertRoute(
        routes,
        "home",
        new { controller = "home", action = "index" });
}

AssertRoute, when used like this, has three parameters.

"routes" – passes the RouteCollection generated from our RouteManager—we could explicitly define additional routes if we wanted and inject them here.

"home" – the controller/view we want to render.  It could just as easily say "galleries/view/12".

new {} anonymous type – this is the expected RETURN of the assert; for this example, by using the specified route and calling "home", we expect the MVC application to return the "home" controller and respond with the "index" action.

I've also taken the RouteCollection and RouteManager and pulled those two lines of code out into a "BuildDefaultRoutes" method.  I can call that method when needed OR skip it and build my own.

private RouteCollection BuildDefaultRoutes()
{
    RouteCollection defaultRoutes = new RouteCollection();
    RouteManager.RegisterRoutes(defaultRoutes);
    return defaultRoutes;
}

Now, what about our Galleries/Show route?  We want to verify that it's a valid path.

[TestMethod]
public void CanMap_Galleries()
{
    TestHelper.AssertRoute(BuildDefaultRoutes(), "galleries/show",
        new { controller = "galleries", action = "show" });
}

Galleries test #1 - Passed!

Good deal, “galleries/show” will route to the controller and action I expect.

Now, what if I want to test something a bit more unique—I want to be able to handle the CURRENT urls that are being passed to the WebStorage gallery at http://photos.tiredstudent.com.

~/WebStorageHandler.ashx?id=166&tb=false&type=Photo

The query string contains three important parts of information as we progress—the ID of the object in the database, whether or not to generate a thumbnail, and what type of object it is (so it knows how to handle the stream—something that needs fixed).

So, we can use our RoutesTest to build our test, have it fail, and then build the right route to make the test pass.

[TestMethod]
public void CanMap_OldWebStorageHandler()
{
    // Develop our test using our new route.
    TestHelper.AssertRoute(
        BuildDefaultRoutes(),
        "WebStorageHandler.ashx?id=166&tb=false&type=Photo",
        new
        {
            controller = "galleries",
            action = "CatchHandler"
        });
}

In this test, we're looking for that specific URL and want it to route to the CatchHandler action on the Galleries controller, passing along the ID of 166 (since, by default, the URL routes read the query string if the parameters can't be found in the path).

Now that our test is in there (and fails), what kind of route and controller action would we need to add?

routes.Add(new Route
{
    Url = "WebStorageHandler.ashx",
    Defaults = new
    {
        controller = "galleries",
        action = "CatchHandler"
    },
    RouteHandler = typeof(MvcRouteHandler)
});

[ControllerAction]
public void CatchHandler(int id, bool? tb)
{
    if (tb == true)
    {
        RedirectToAction(new
        {
            action = "ShowThumbnail",
            id = id
        });
    }
    else
    {
        RedirectToAction(new
        {
            action = "Show",
            id = id
        });
    }
}

We can test the route, controller, and actions by prefabbing two real URLs taken from the live Photo site:

WebStorageHandler.ashx?id=166&tb=false&type=Photo successfully redirects to /galleries/Show/166

and

WebStorageHandler.ashx?id=166&tb=true&type=Photo successfully redirects to /galleries/ShowThumbnail/166

Good deal!

Warning: I'll be posting a follow-up later today that describes a current issue with mocking up these sorts of complex routes.  If you attempt to pass query string parameters as your expectations, and your parameters are not EXACT within your route information, the test will fail.  The follow-up will describe how to pull in the query string parameters.

This route intercepts anything looking for WebStorageHandler.ashx and forwards it on to the Galleries controller/CatchHandler action—passing along the rest as parameters.

Testing Controllers

Writing accurate Controller tests also has its own unique challenges with the MVC Framework.  Phil has posted on his blog that the current breakage in Mocks should be fixed soon; however, the subclass techniques seem to be gaining popularity [David Hayden's post is interesting and has a good debate going in its comments as well].

To create the subclasses, create a new class and inherit from the base controller.

Here’s an example using the GalleriesController.

public class GalleriesControllerTester : GalleriesController
{
    public string ActualViewName;
    public string ActualMasterName;
    public object ActualViewData;
    public string RedirectToActionValues;

    protected override void RenderView(string viewName,
        string masterName, object viewData)
    {
        ActualViewName = viewName;
        ActualMasterName = masterName;
        ActualViewData = viewData;
    }

    protected override void RedirectToAction(object values)
    {
        RedirectToActionValues = values.ToString();
    }
}

For now, and until later CTPs allow me to mock these up (or at least use an interface/base class), I'm placing these in a separate class called ControllerTesters.  That's not necessary (I noticed David and Phil both placing theirs directly inside the {x}ControllerTest classes).  Either way, you'll need a {x}ControllerTester for every controller. 😦

[TestMethod]
public void CanViewGalleries_Show()
{
    GalleriesControllerTester controller = new GalleriesControllerTester();
    controller.Show(1);
    Assert.AreEqual("Show", controller.ActualViewName);
}

So what does this tell us?

Using the Tester subclass, we can assert whether or not the .Show() action and resulting view match what we expect.

Conclusion

This covers creating unit tests for your routes, and your controllers/views; you might be wondering why I don’t have any testing of the models.  Since unit testing LINQ-to-SQL isn’t specific to MVC, I’ll leave that out for now—there’s plenty of information on that out there.

 

When Unit Tests Go Mad

October 11, 2007 1 comment

It’s going to be one of those “ehh, wtf” days.  I even tried AreEqualing the object to itself… I wonder if my objects are having a multiple personality crisis. 😦

Assert.AreEqual failed.
Expected:<Sis.Schools.School>. Actual:<Sis.Schools.School>.

Swamped, but appreciating unit tests…

October 1, 2007 Comments off

The past three weeks have been almost mind-numbing between work and school.  I've had a few points where I honestly haven't been sure whether to focus on school work (since I graduate in a few weeks) or work work (since employment is a good thing).

Beyond that, I finished some major revisions this morning to a series of common libraries I use at work.  Various domain objects and such that simplify common application creation.  Today provided an awesome opportunity for other "non-believers" to see the advantages of unit tests in preventing regression errors and simply that green glowy sense of goodness when you compile, test, and everything's happy—without changing the tests.  Also, the library is in Visual Studio 2008 using their new test environment.  It's a bit too "Team System"-ish for my liking, but it's nice and quick.  I miss the dual panes of NUnit, though.

I’ve got a few posts drafted up and hope to get those posted up tomorrow.

 


The Post-Certification Era?

February 13, 2012 1 comment

Oh look, starting off with a disclaimer. This should be good!

These are patterns I’ve noticed in our organization over the past ten years–ranging from hardware to software to development technical staff. These are my observations, experiences with recruiting, and a good dash of my opinions. I’m certain there are exceptions. If you’re an exception, you get a cookie. 🙂

This isn’t specifically focused on Microsoft’s certifications. We’re a .NET shop, but we’re also an Oracle shop, a Solaris shop, and a RHEL shop. So many certification opportunities, so little training dollars.

Finally, I’ll also throw out that I have a few certifications. When I made my living as a full-time consultant and contractor and was just getting started, they were the right thing to do (read on for why). Years later … things have changed.

Evaluating The Post-Certification Era

In today's development ecosystem, certifications seem to play a nearly unmentionable role outside of college recruitment offices and general-practice consulting agencies. While certifications provide a baseline for those just entering the field, I rarely see established developers (read: >~2 years experience) heading out to the courseware to seek a new certification.

Primary reasons for certifications: entry into the field and “saleability”.
Entry into the field – provides a similar baseline to compare candidates for entry-level positions.

Example: Hiring an entry-level developer vs. hiring an experienced enterprise architect. For an entry-level developer, a certification usually provides a baseline of skills.

For an experienced architect, however, past project experience, core understanding of architecture practices, examples of work in open source communities, and scenario-based knowledge provides the best gauge of skills.

"Saleability" of certifications allows consulting agencies to "one up" other organizations, but those certifications usually don't reflect the actual real-world skills necessary for implementation.

Example: We had a couple of fiascos years back with a very reputable consulting company filled with certified developers who simply couldn't wrap those skills into a finished product. We managed to bring the project back in-house and get our customers squared away, but it broke the working relationship we had with that consulting company.

Certifications provide a baseline for experience and expertise similar to college degrees.
Like in college, being able to cram and pass a certification test is a poor indicator (or replacement) for handling real-life situations.

Example: Many certification “crammers” and boot camps are available for a fee–rapid memorization and passing of tests.  I do not believe that these prepare you for actual situations AND do not prepare you to continue to expand your knowledge base.

Certifications are outdated before they’re even released.
Test-makers and publishers cannot keep up with technology at its current pace. The current core Microsoft certifications focus on v2.0 technologies (though they are slowly being updated to 4.0).

I'm sure it's a game of tag between the DevDiv and Training teams up in Redmond. We, as developers, push for new features faster, but the courseware can only be written/edited/reviewed/approved so quickly.

In addition, almost all of our current, production applications are .NET applications; however, a great deal of functionality is derived from open-source and community-driven projects that go beyond the scope of a Microsoft certification.

Certifications do not account for today’s open-source/community environment.
A single “Microsoft” certification does not cover a large majority of the programming practices and tools used in modern development.

Looking beyond Microsoft allows us the flexibility to find the right tool/technology for the task. In nearly every case, these alternatives provide a cost savings to the district.

Example: Many sites that we develop now feature non-Microsoft ‘tools’ from the ground up.

  • web engine: FubuMVC, OpenRasta, ASP.NET MVC
  • view engine: Spark, HAML
  • dependency injection/management: StructureMap, Ninject, Cassette
  • source control: git, hg
  • data storage: NHibernate, RavenDB, MySQL
  • testing: TeamCity, MSpec, Moq, Jasmine
  • tooling: PowerShell, rake

This doesn’t even take into consideration the extensive use of client-side programming technologies, such as JavaScript.

A more personal example: I’ve used NHibernate/FluentNHibernate for years now. Fluent mappings, auto mappings, insane conventions and more fill my day-to-day data modeling. NH meets our needs in spades and, since many of our objects talk to vendor views and Oracle objects, Entity Framework doesn’t meet our needs. If I wanted our team to dig into the Microsoft certification path, we’d have to dig into Entity Framework. Why would I want to waste everyone’s time?

This same question applies to many of the plug-and-go features of .NET, especially since most certification examples focus on arcane things that most folks would look up in a time of crisis anyway and not on the meat and potatoes of daily tasks.

Certifications do not account for the current scope of modern development languages.
Being able to tell an integer from a string, and knowing when to call a certain method, crosses language and vendor boundaries.  A typical Student Achievement project contains anywhere from three to six different languages–only one of those being a Microsoft-based language.

Whether it’s Microsoft’s C#, Sun’s Java, JavaScript, Ruby, or any number of scripting languages implemented in our department–there are ubiquitous core skills to cultivate.

Cultivating the Post-Certification Developer

In a “Google age”, knowing how and why components optimally fit together provides far more value than syntax and memorization. If someone needs a code syntax explanation, a quick search reveals the answer. For something more destructive, such as modifications to our Solaris servers, I’d PREFER our techs look up the syntax–especially if it’s something they do once a decade. There are no heroes when a backwards bash flag formats an array. 😉

Within small development shops, such as ours, a large percentage of development value-added skills lie in enterprise architecture, domain expertise, and understanding design patterns–typical skills not covered on technology certification exams.

Rather than focusing on outdated technologies and unused skills, a modern developer and development organization can best be ‘grown’ by active community involvement.  Active community involvement provides a post-certification developer with several learning tools:

Participating in open-source projects allows the developer to observe, comment, and learn from other professional developers using modern tools and technologies.

Example: Submitting a code example to an open source project where a dozen developers pick it apart and, if necessary, provide feedback on better coding techniques.

Developing a social network of professional developers provides an instant feedback loop for ideas, new technologies, and best practices. Blogging, and reading blogs, allows a developer to cultivate their programming skill set with a world-wide echo chamber.

Example: A simple message on Twitter about an error in a technology released that day can garner instant feedback from a project manager at that company, prompting email exchanges, telephone calls, and the necessary steps to resolve the problem directly from the developer who implemented the feature in the new technology.

Participating in community-driven events such as webinars/webcasts, user groups, and open space discussions. These groups bolster existing social networks and provide knowledge transfer of best practices and patterns on current subjects as well as provide networking opportunities with peers in the field.

Example: Community-driven events provide both a medium to learn and a medium to give back to the community through talks and online sessions.  This helps build both a mentoring mentality in developers as well as a drive to fully understand the inner-workings of each technology.

Summary

While certifications can provide a bit of value–especially for getting your foot in the door–I don't see many on the resumes coming across my desk these days. Most, especially the younger crowd, flaunt their open source projects, hacks, and adventures with 'technology X' as a badge of achievement rather than certifications. In our shop and hiring process, that works out well. I doubt it's the same everywhere.

Looking past certifications in ‘technology X’ to long-term development value-added skills adds more bang to the resume, and the individual, than any finite-lived piece of paper.

DeployTo – a simple PowerShell web deployment script

February 10, 2012 1 comment

We’re constantly working to standardize how builds get pushed out to our development, UAT, and production servers. The typical ‘order of operations’ includes:

  1. compile the build
  2. backup the existing deployment
  3. copy the new deployment
  4. celebrate

Pretty simple, but with a few moving parts (git push, TeamCity pulls in, compiles, runs deployment procedures, IIS (hopefully) doesn’t explode).

One step to standardize this has been to add these steps into our psake scripts, but that got tiring (and dangerous when we found a flaw).  When in doubt, refactor!

First, get the codez!

DeployTo.ps1 and an example settings.xml file.

Creating a simple deployment tool – DeployTo

The PowerShell file, DeployTo.ps1, should be located in your project, your PATH, or wherever your CI server can find it–I tend to include it in a folder we have that synchronizes to ALL of our build servers automatically via Live Mesh. You could include it with your project to ensure dependencies are always met (for public projects).

DeployTo has one expectation, that a settings.xml file (or file passed in the Settings argument) will contain a breakdown of your deployment paths.

Example:

<settings>
    <site>
        <name>development</name>
        <path>\\server\webs\path</path>
    </site>
</settings>

With names and paths in hand, DeployTo sets about matching the passed-in deployment location to what exists in the file. If one is found, it proceeds with the backup and deployment process.

Calling DeployTo is as simple as:

deployto development

Now, looping through our settings.xml file looking for ‘development’:

foreach ($site in $xml.settings.site) {
    if ($site.name.ToLower() -eq $deploy.ToLower()) {
        writeMessage ("Found deployment plan for {0} -> {1}." -f $site.name, $site.path)
        if ($SkipBackup -eq $false) {
            backup($site)
        }
        deploy($site)
        $success = $true
        break;
    }
}

The output also lets us know what’s happening (and is helpful for diagnosing issues in your CI’s build logs).

Deploying to DEVELOPMENT
Reading settings file at settings.xml.
Testing release path at .\release.
Found deployment plan for development -> \\server\site.
Making backup of 255 file(s) at \\server\site to \\server\site-2012-02-10-105321.
Backup succeeded.
Removing existing files at \\server\site.
Copying new release to \\server\site.
Deployment succeeded.
SUCCESS!

Backing up – A safety net when things go awry.

Your builds NEVER go bad, right? Deployments work 100% of the time? Right? Sure. 😉 No matter how many staging sites you test on, things can go bad on a deployment. That's why we have BACKUPS. I could get fancy and .7z/.gzip up the files and such, but a simple directory copy serves exactly what I need.

The backup function itself is quite simple–take a directory listing of the files and copy them into a new directory named with the original directory name + the current date/time.

function backup($site) {
try {
    $currentDate = (Get-Date).ToString("yyyy-MM-dd-HHmmss");
    $backupPath = $site.path + "-" + $currentDate;

    $originalCount = (gci -recurse $site.path).count

    writeMessage ("Making backup of {0} file(s) at {1} to {2}." -f $originalCount, $site.path, $backupPath)
    
    # do the actual file copy, but ignore the thumbs.db file. It's such a horrid little file.
    cp -recurse -exclude thumbs.db $site.path $backupPath

    $backupCount = (gci -recurse $backupPath).count	

    if ($originalCount -ne $backupCount) {
      writeError ("Backup failed; attempted to copy {0} file(s) and only copied {1} file(s)." -f $originalCount, $backupCount)
    }
    else {
      writeSuccess ("Backup succeeded.")
    }
}
catch
{
    writeError ("Could not complete backup. EXCEPTION: {1}" -f $_)
}
}

Deploying — copying files, plain and simple

Someday, I may have the need to be fancy. Since IIS automatically recycles the application when a new web.config is added, I don't have any 'logic' in my deployment scripts. We also, for now, keep our database deployments separate from our web view deployments. For now, deploying is just copying files; however, who wants to do that by hand? Not me.

function deploy($site) {
try {
    writeMessage ("Removing existing files at {0}." -f $site.path)

    # force, because thumbs.db is annoying
    rm -force -recurse $site.path

    writeMessage ("Copying new release to {0}." -f $site.path)

    cp -recurse -exclude thumbs.db  $releaseDirectory $site.path
    $originalCount = (gci -recurse $releaseDirectory).count
    $siteCount = (gci -recurse $site.path).count
    
    if ($originalCount -ne $siteCount)
    {
      writeError ( "Deployment failed; attempted to copy {0} file(s) and only copied {1} file(s)." -f $originalCount, $siteCount)
    }
    else {
      writeSuccess ("Deployment succeeded.")
    }
}
catch {
    writeError ("Could not deploy. EXCEPTION: {1}" -f $_)
}
}

That’s it.

One thing you'll notice in both scripts is that I am doing a bit of monitoring and testing.

  • Do paths exist before we begin the process?
  • Do the backed up/copied/original file counts match?
  • Did anything else go awry so we can throw a general error?

It’s a work in progress, but has met our needs quite well over the past several months with psake and TeamCity.

Posting to Campfire using PowerShell (and curl)

January 16, 2012 Comments off

I have a few tasks that kick off nightly that I wanted to post status updates into our team’s Campfire room. Thankfully, 37signals Campfire has an amazing API.  With that knowledge, time to create a quick PowerShell function!

NOTE: I use curl for this. The Linux/Unix folks likely know curl, however, I’m sure the Windows folks have funny looks on their faces. You can grab the latest curl here for Windows (the Win32 or Win64 generic versions are fine).

The full code for this function is available via gist.

I pass two parameters: the room number (though this could be tweaked to be optional if you only have one room) and the message to post.

param (
 [string]$RoomNumber = (Read-Host "The room to post to (default: 123456) "),
 [string]$Message = (Read-Host "The message to send ")
)
$defaultRoom = "123456"
if ($RoomNumber -eq "") {
 $RoomNumber = $defaultRoom
}

There are two baked-in variables, the authentication token for your posting user (we created a ‘robot’ account that we use) and the YOURDOMAIN prefix for Campfire.

$authToken = "YOUR AUTH TOKEN"
$postUrl = "https://YOURDOMAIN.campfirenow.com/room/{0}/speak.json" -f $RoomNumber

The rest is simply using curl to HTTP POST a bit of JSON back up to the web service. If you’re not familiar with the JSON data format, give this a quick read. The best way I can sum up JSON is that it’s XML objects for the web with less wrist-cutting. 🙂
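
Conceptually, the payload being posted is just this bit of JSON (sample message shown; the script builds it with single quotes to dodge a round of escaping):

{"message":{"body":"Nightly import finished successfully."}}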

$data = "`"{'message':{'body':'$message'}}`""

$command = "curl -i --user {0}:X -H 'Content-Type: application/json' --data {1} {2}" 
     -f $authToken, $data, $postUrl

$result = Invoke-Expression ($command)

if ($result[0].EndsWith("Created") -ne $true) {
	Write-Host "Error!" -foregroundcolor red
	$result
}
else {
	Write-Host "Success!" -foregroundcolor green
}
Running SendTo-CampFire

Running SendTo-Campfire with Feedback

Indeed, there be success!

Success!

It’s important to remember that PowerShell IS extremely powerful, but can become even more powerful coupled with other available tools–even the web itself!