
Finding TODOs and Reporting in PowerShell

December 28, 2011 Comments off

After @andyedinborough shared a blog post on how to report TODOs in CruiseControl.NET, I figured there HAD to be a snazzy way to do this with TeamCity.

I already have TeamCity configured to dump out the results from our Machine.Specifications tests and PartCover code coverage (simple HTML and XML transforms), so another HTML file shouldn't be difficult. When we're done, we'll have an HTML file, and we just need to set up a custom report to look for the file name. For more information on setting up custom reports in TeamCity, check out this post (step 3 in particular).

Let’s Dig In!

 

The code! The current working version of the code is available via this gist.

The code is designed to be placed in a file of its own. Mine's named Get-TODO.ps1

Note: If you want to include this in another file (such as your PowerShell profile), be sure to convert the param() block into function Get-TODO( {all the params} ).
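A minimal sketch of that conversion, using the same parameters the script declares below:

function Get-TODO {
	param(
		[string]$DirectoryMask = (get-location),
		[array]$Include = @("*.spark","*.cs","*.js","*.coffee","*.rb"),
		[array]$Exclude = @("fubu-content","packages","build","release"),
		[switch]$Html,
		[string]$FileName = "todoList.html")
	# ...the rest of the script body goes here, unchanged...
}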

The script parameters make a few assumptions based on my own coding standards; change as necessary:

  • My projects are named {SolutionName}.Web/.Specifications/.Core and are inside a directory called {SolutionName},
  • By default, it includes several file extensions: .spark, .cs, .coffee, .js, and .rb,
  • By default, it excludes several directories: fubu-content, packages, build, and release.

It also includes a couple of flags for Html output–we’ll get to that in a bit.

param(
	#this is appalling; there has to be a better way to get the raw name of "this" directory. 
	[string]$DirectoryMask = 
		(get-location),
	[array]$Include = 
		@("*.spark","*.cs","*.js","*.coffee","*.rb"),
	[array]$Exclude = 
		@("fubu-content","packages","build","release"),
	[switch]$Html,
	[string]$FileName = "todoList.html")

Fetching Our Files

We need to grab a collection of all of our solution files. This will include the file extensions from $Include and use the directories from $DirectoryMask (which defaults to everything from the current location).

$originalFiles = 
	gci -Recurse -Include $Include -Path $DirectoryMask;

Unfortunately, the -Exclude flag is…well, unhappy in PowerShell, as it doesn't seem to work with -Recurse (or if it does, there's voodoo involved).

How do we address voodoo issues? Regular expressions. Aww, yeah.

$withoutExcludes = $originalFiles | ? {
	if ($Exclude.count -gt 0) {
		#need moar regex
		[regex] $ex = 
			# 1: case insensitive and start of string
			# 2: join our array, using RegEx to escape special characters
			# the * allow wildcards so directory names filter out
			# and not just exact paths.
			# 3: end of string
			#  1                   2										3
			'(?i)^(.*'+(($Exclude|%{[regex]::escape($_)}) -join ".*|.*")+'.*)$'
		$_ -notmatch $ex
	}
	else { 
		$_ 
	}
}

Breathe, it's okay. That is mostly code comments to explain what each line is doing. This snippet dynamically creates a regular expression string based on our $Exclude parameter, then compares the current file in the iteration to the regular expression and returns the ones that do not match. If $Exclude is empty, it simply returns all files.
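To make that concrete, with the default $Exclude list the expression that gets built evaluates to:

# @("fubu-content","packages","build","release") becomes:
'(?i)^(.*fubu-content.*|.*packages.*|.*build.*|.*release.*)$'

Any file whose full path contains one of those directory names fails the -notmatch test and drops out.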

Finding our TODOs

Select-String provides us with a quick way to find patterns inside of text files along with helpful output such as the line of text, line number, and file name (select-string is PowerShell's answer to grep).

$todos = $withoutExcludes | 
	% { select-string $_ -pattern "TODO:" } |
	select-object LineNumber, FileName, Line;

Let’s take all of our remaining files, look at them as STRINGs and find the pattern "TODO:".  This returns an array of MatchInfo objects. Since these have a few ancillary properties we don’t want, let’s simply grab the LineNumber, FileName, and the Line (of the match) for our reporting.

Now that we have our list, let’s pretty it up a bit.  PowerShell’s select-object has the ability to reformat objects on the fly using expressions.  I could have combined this in the last command; however, it’s a bit clearer to fetch our results THEN format them.

$formattedOutput = $todos | select-object `
	@{Name='LineNumber'; Expression={$_.LineNumber}}, `
	@{Name='FileName'; Expression={$_.FileName}}, `
	@{Name='TODO'; Expression={$_.Line.Trim() -replace '// TODO: ',''}}

  1. LineNumber: We're keeping this line the same.
  2. FileName: Again, input->output.
  3. TODO: Here's the big change. "Line" isn't very descriptive, so let's set Name='TODO' for this column. Second, let's trim the whitespace off the line AND replace the string "// TODO: " with an empty string (i.e., remove it). This cleans it up a bit.

Reporting It Out

At this point, we could simply return $formattedOutput | ft -a and have a handy table that looks like this:

[Screenshot: the TODO list table in console output]

Good for scripts, not so good for reporting and presenting to the boss/team (well, not my boss or team…). I added the -Html and -FileName flags to our parameters at the beginning just for this. I've pre-slugged the filename to todoList.html so I can set TeamCity to pick up the report automagically. But how do we build the report?

PowerShell contains a fantastic built-in cmdlet called ConvertTo-Html that you can pipe pretty much any tabular content into, and it turns it into an HTML table.
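If you haven't used it before, a quick standalone example (nothing to do with our TODOs) shows the idea:

# any object list becomes an HTML table
get-process | select-object Name, Id | ConvertTo-Html | Out-File processes.html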

 

if ($Html) {
	$head = @'
	<style> 
	body{	font-family:Segoe UI; background-color:white;} 
	table{	border-width: 1px;border-style: solid;
			border-color: black;border-collapse: collapse;width:100%;} 
	th{		font-family:Segoe Print;font-size:1.0em; border-width: 1px;
			padding: 2px;border-style: solid;border-color:black;
			background-color:lightblue;} 
	td{		border-width: 1px;padding: 2px;border-style: solid;
			border-color: black;background-color:white;} 
	</style> 
'@
	$solutionName = 
		(Get-Location).ToString().Split("\")[(Get-Location).ToString().Split("\").Count-1];

	$header = "<h1>"+$solutionName +" TODO List</h1><p>Generated on "+ [DateTime]::Now +".</p>"
	$title = $solutionName +" TODO List"
	$formattedOutput | 
              ConvertTo-HTML -head $head -body $header -title $title |
              Out-File $FileName
}
else {
	$formattedOutput
}

Our html <head> tag needs some style, and a bit of simple CSS addresses that problem. I even added a few fonts in there for kicks and placed it all in $head.

As I mentioned before, my solution directory is usually the name of the project–which is what I want as the $title and $header. The $solutionName is ugly right now–if anyone out there has a BETTER way to get the STRING name of the current directory (not the full path), I'm all ears.
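One candidate I've run across since writing this: Split-Path with the -Leaf flag should hand back just the final directory name, though I haven't swapped it into the script yet:

$solutionName = Split-Path (Get-Location) -Leaf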

We take our $formattedOutput, $header, and $title and pass them into ConvertTo-Html. It looks a bit strange that we're setting our 'body' as the header; however, whatever is piped into the cmdlet is appended to what is passed into -body. It then writes our content out to the $FileName specified.

[Screenshot: the generated HTML TODO report]

Using Cassette with Spark View Engine

July 21, 2011 Comments off

Knapsack… *cough*… I mean Cassette is a fantastic javascript/css/coffeescript resource manager from Andrew Davey. It did, however, take a bit to figure out why it wouldn’t work with Spark View Engine. Thankfully, blogs exist to remind me of this at a later date. 🙂

Namely, because I'd never tried to use anything that returned void before. Helpers tend to always return either Html or a value.

I finally stumbled on a section in the Spark View Engine documentation for inline expressions.

Sometimes, when push comes to shove, you have a situation where you’re not writing output and there isn’t a markup construct for what you want to do. As a last resort you can produce code directly in-place in the generated class.

Well then, that sounds like what I want.

So our void methods, Html.ReferenceScript and Html.ReferenceStylesheet, should be written as:

#{Html.ReferenceScript("scripts/app/home.index.js");}
#{Html.ReferenceStylesheet("styles/app");}

Note the # (pound sign) and the semi-colon at the end of the statement block.

Our rendering calls, however, use standard Spark output syntax:

${Html.RenderScripts()}
${Html.RenderStylesheetLinks()}

Now my Spark view contains the hashed Urls–in order–as it should.

 <!DOCTYPE html>
 <html>
 <head>
   <link href="/styles/app/site.css?f8f8e3a3aec6d4e07008efb57d1233562d2c4b70" type="text/css" rel="stylesheet" />
 </head>
 <body>
 <h2>Index</h2>
   <script src="/scripts/libs/jquery-1.6.2.js?eeee9d4604e71f2e01b818fc1439f7b5baf1be7a" type="text/javascript"></script>
   <script src="/scripts/app/application.js?91c81d13cf1762045ede31783560e6a46efc33d3" type="text/javascript"></script>
   <script src="/scripts/app/home.index.js?b0a66f7ba204e2dcf5261ab75934baba9cb94e51" type="text/javascript"></script>
 </body> 

Excellent.

Review: Asus Transformer TF-101

tl;dr: Awesome!

I wanted to give my review after at least a week to settle in and integrate into my daily ‘flow’.

Why the Asus Transformer?

I’ve looked at more tablets than I can remember, teetering between the Motorola Xoom, Apple iPad2, Notion Ink Adam, and the Transformer.  All of these seemed like great devices and had their own pros; however, the Transformer sold me on a few of its flashy innovations:

  • Asus seems dedicated to constant updates. They’ve already pushed out Android Honeycomb 3.1 OTA and it updated like a champ.
  • The docking station/keyboard is brilliant; it really bridges the gap between netbook and tablet while adding incredible battery life.
  • The mix of hardware performance for an incredible price point ($499 US for the 32GB unit and $149 US for the keyboard/docking station at Amazon.com).
  • Flash support. Yes, I’m looking at you iPad2.

Photo of the Transformer... and swanky placemat backdrop.

 

How has the Transformer changed things?

I’ve noticed it’s changed things in the oddest of ways. I haven’t opened my personal laptop since I got the Transformer (it looks big and powerful, but it’s just old and … well, old).

Around the house, I've found myself using the Transformer in place of my EVO. I use it in the kitchen to look up recipes, check for coupons, make note of my groceries in Mighty Shopper, and listen to music. Other times, like for this post, I use it for simple word processing, web browsing, and email. More functionality and less squinting.

[Photo: work computer size comparison]

At work, I've used Polaris Office for note taking, presentations (via the HDMI out), web browsing and research as I wander around the office with folks, and email. The remote desktop client (2X Client) allows me to remote into servers, my office desktop, and client machines for diagnostic work.

With Android 3.1, the web browser engine finally supports NTLM/Windows authentication so I can log into our corporate web sites (I hope this is coming to the Gingerbread builds soon).

I still have my phone with me through all of this… the Transformer hasn't replaced the portability and features of my phone, but augmented what I can do without an actual desktop computer.

 

The Hardware

You can read the unit's specifications, so I won't dive into them too much. There are a few things to mention.

The bulk

Overall, the unit (and keyboard dock) is quite light compared to a normal 14" laptop. The tablet itself is very comfortable to carry around (long days here at work, so I've been pacing around while researching on the web).

The keyboard dock

[Photo: the keyboard dock]

I was a bit worried that the keyboard dock would be uncomfortable to use; however, that's not the case. The keys are well spaced apart (once you get used to the 'special' keys at the top–I keep accidentally locking the unit when I hit backspace) and the monitor can tilt back far enough for easy reading.

It’s important to note that the keyboard dock’s additional battery life is due to the dock charging the main unit while it’s docked.  This is advantageous.  Wander around and run the battery down a bit on the tablet, then dock and let the keyboard charge things up, detach, and you have a fully charged tablet again.

The POWER!

It's quite disappointing that the Transformer has a proprietary USB cable, and even more so that it's only about a meter long. This hassle is aggravated by the fact that you cannot charge the unit via USB while it's on, so if you need a charge AND need to use it, you're trapped a meter from your electrical connection or extension cord.

Thankfully, with the 14-16 hours of battery life, charging isn’t a central focus of the day (unlike my EVO which is dead by the time I get to work).

The video output

Less of a big deal and more of an 'oh' moment when digging for cables. My EVO and a few other video devices all use micro HDMI, so I had to pick up a 1.3a mini HDMI cable for the Transformer.

The screen

I love the screen. It's a bit hard to see in bright light (it's not a Kindle for outdoor reading), and the fingerprints are a bit out of control. I plan on getting a screen protector which, I hope, will help.

The camera

The camera is only 5MP (compared to the 8MP in my EVO) and takes 4:3 photos rather than widescreen. That aside, it's quite a good camera for general 'here I am!' sorts of photos–like lounging on the hammock in the back yard on a peaceful evening.

Example Transformer camera image--my back yard.

The lack of right click

I understand… I really do. The Android device doesn’t have a context for ‘right-click’; however, I dream of the day when it does so that the remote desktop clients have right-click.

Keeping it safe

I picked up a Belkin 10" Netbook Sleeve when ordering and highly recommend it as a carry case for the Transformer. The unit, whether you're carrying just the tablet or the tablet and keyboard dock, fits very well into the sleeve, and the zips/material feel and look good.

 

The Software

Out of the box, the Transformer walks you through a simple configuration–much like my EVO (or any Android phone). Setting up email, syncing contacts, and such worked just fine. I was impressed that the unit also synced my Google bookmarks. That’s an added bonus.

The basics are included:

  • Email (Enterprise, Gmail, etc.) – The layout and flow of both mail applications is the same and works VERY well in landscape mode.
  • Calendaring – nice, clean "Outlook-esque" layout.
  • Contacts – standard Android contacts client.
  • Polaris Office – bundled Office client for DOC, XLS, and PPT. Works quite well (and what I used for this post). It can also read/write to Google Docs, Dropbox, and a few other cloud services. At this time, I’m getting 400 errors trying to save BACK to Google Docs; however, from the forums, it appears that it is a Google API issue.
  • MyReader – Simple eBook/PDF reader akin to Aldiko.
  • MyNet – network-based HD streaming client. Detects my PlayOn devices and streams perfectly.
  • MyCloud – a subscription-based, unlimited storage system for music and files. Transformer includes a free 1 year subscription. I haven’t dug into this much as I have LiveSync and DropBox.

In addition to the boxed software, there are a few applications that I’d recommend (and seem to work quite well with the Transformer and Honeycomb):

  • Dolphin Browser HD – A bit cleaner interface for web browsing, with plugin modules (3.1 addresses some rendering issues, though).

  • Amazon Kindle Reader – Great for reading Kindle books on the run (and without your Kindle). The update for Android 3.1 works even better!
  • Pandora – Great and solid music client.
  • Grooveshark – Formatting is a bit off, but works fantastic to query and stream your favorite tunes.
  • Google Reader – Great tablet support for full-screen reading.
  • imo Chat beta – Best IM client I’ve found for any type of Android device.
  • Plume – best Twitter client that takes advantage of the notifications and wide screen layout.
  • DropBox – the Android DropBox client is pretty stellar.
  • PrintBot – a free (with a cheap paid version) application to print to network devices over WiFi. Works great and, hopefully, future versions will allow for multiple printers.

NOTE: Similar to the Gingerbread break, Netflix is defunct on Honeycomb as well. I had hopes (dreams) that 3.1 would fix it, but alas, it doesn't seem so. Netflix is adamant that a new version is in the works. I hope so. It was great to have mobile movies on my cell phone, and the Transformer seems ideal.

Upgrading to Android Honeycomb 3.1

On 10 June 2011, Asus released Android Honeycomb 3.1 for the Transformer. The official version is v8.4.4.5 (the SKUs differ by locale).

There are two options: wait for the OTA update to reach your device, or apply the update manually.

I'm impatient, so I opted for the manual method. The process was quite simple:

  1. Download the 8.4.4.5 zip file from the internet.
  2. On a microSD card create an \asus\update directory and copy the ZIP file into that directory.
  3. Insert the microSD card into the Transformer. You should see a notification that an update is available.
  4. The device will reboot a few times and then be good to go.

There are a few highlights to 3.1 (detailed by the release notes) that I really dig:

  1. The browser (and any browser relying on its WebKit) now supports NTLM/Windows Authentication. AWESOME for work!
  2. (Most) widgets are now resizable. I’ve come across a few that won’t resize.
  3. Speed! 3.1 generally feels faster.  Multitasking across a dozen apps with notifications flying and music playing doesn’t seem to bother it.

Summary

With the tablet upgraded to 3.1, the applications that I like, and features that keep impressing me, I think the Transformer will continue to be a great fit. The keyboard matches my dire need to rapidly input information (take notes, respond to emails) and the portability matches my recent need to roam free of wires and surf the web. 

The Transformer is a fantastic bridge between the smallest of laptops and the largest of cell phones and, unfortunately, is what I’d hoped the Google Chromebook would have been.  Don’t get me wrong, I like my Chromebook–it was free and I like free–but the Android-based Transformer can actually DO the things I need to do in my day-to-day life.

Dynamically Adding Sub Reports to an ActiveReport

April 5, 2011 Comments off

I’m currently working on a project where I needed to iterate through a group of users and plug in a little sub report that contains some demographic information and a Code38 barcode.

One sub report is easy–add the control to the page, set the .Report property and away we go; however, adding multiple sub reports dynamically and getting the spacing right proved to be a bit challenging.

Warning: It “works on my machine”.

To add controls to your report dynamically, you must use the _ReportStart event of your report.

To address the spacing issue, let’s start out by specifying our ‘base’ top and the height of our sub report.

const float height = 0.605f;
var currentTop = 7.250f;

In my case, I want my sub reports to start at about 7.25″ and be ~0.605″ in height.

The actual creation of the sub report placeholder is fairly standard–new it up and assign a few properties. I’ll get into the looping a bit later.

var subReport = new SubReport
{
    CloseBorder = false,
    Height = height,
    Left = 0F,
    Width = 7.5F,
    Top = currentTop,
    Name = person.DisplayName + "_SubReport",
    ReportName = person.DisplayName + "_SubReport",
};

Notice how I’ve set the Top property to be our currentTop variable. Keep that in mind.

The next step is to new up our actual sub report object.  My sub report has two properties on it, each for a data item.  I could pass it along as an object, but it seems a bit overkill for two string properties. After the report object is assigned to our new sub report container, add the container to our details section.

var spReport = new _PersonAssignment()
{
    Name = person.DisplayName,
    EmployeeId = person.EmployeeId
};

subReport.Report = spReport;
this.Sections["detail"].Controls.Add(subReport);

Because we’ve assigned it a left, width, and top, our sub report will be added where expected.

The final piece is incrementing the ‘top’ to accommodate for the height of the last sub report.

currentTop += height;

Easy.  Now the next sub report will start at 7.250″ + 0.605″ or 7.855″.  Keep in mind that the 0.605″ includes a bit of whitespace, if you need additional whitespace, pad the height number.

The full _ReportStart event looks like:

const float height = 0.605f;
var currentTop = 7.250f;
foreach (var person in Model.People)
{
    var subReport = new SubReport
    {
        CloseBorder = false,
        Height = height,
        Left = 0F,
        Width = 7.5F,
        Top = currentTop,
        Name = person.DisplayName + "_SubReport",
        ReportName = person.DisplayName + "_SubReport",
      };

    var spReport = new _PersonAssignment()
    {
        Name = person.DisplayName,
        EmployeeId = person.EmployeeId
    };

    subReport.Report = spReport;
    this.Sections["detail"].Controls.Add(subReport);
    currentTop += height;
}

Bingo.

Example of Dynamic Sub Reports

Categories: .net 3.5, c#

Quick Solution Generation using PowerShell: New-Project

February 13, 2011 1 comment

When I have an idea or want to prototype things, I tend to mock it up in Balsamiq, then dig right in and write some specs to see how it'd work. Unfortunately, deleting the junk Class1.cs in Library projects, the plethora of excess in MVC3 webs, and such tends to be the most time-intensive part of wiring up a quick project in .net.

All that deleting is too many steps–especially if you're developing on the fly with a room full of folks. I needed something command-line based to fit my normal workflow:

  1. o init-wrap MyProject -git
  2. cd MyProject
  3. git flow init
  4. {something to create projects, solutions, etc}
  5. o init-wrap -all
  6. {spend 5 minutes cleaning up junk files in my way}

Introducing New-Project

Yes, I know. I’m not a marketing guru. I don’t have a cool name for it.  Just a standard PowerShell convention.

Usage:

  -Library { } : Takes a string[] of names for c# class libraries to create.

  -Web { } : Takes a string[] of names for MVC3 web projects to create.

  -Solution "" : Takes a single string for your solution name.

Example:

New-Project -Library MyProj.Core, MyProj.Specs -Web MyProj.Web -Solution MyProject

[Screenshot: the generated solution scaffolding]

 

What does this all do?

Well, honestly, I’m not sure how ‘reusable’ this is… the projects are pretty tailored.

Libraries

  • Libraries don’t have the annoying Class1.cs file that you always delete.
  • AssemblyInfo.cs is updated with the specified Name and Title (a rough sketch of that update follows this list).
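The sketch–the regex and paths here are approximations, not the exact script:

# stamp the library name into AssemblyInfo.cs ($name is the new project's name)
$assemblyInfo = ".\$name\Properties\AssemblyInfo.cs"
(Get-Content $assemblyInfo) `
	-replace 'AssemblyTitle\(".*"\)', "AssemblyTitle(""$name"")" `
	-replace 'AssemblyProduct\(".*"\)', "AssemblyProduct(""$name"")" |
	Set-Content $assemblyInfo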

MVC3 Webs

  • The web.config is STRIPPED down to the minimum (27 lines).
  • The folder structure is reorganized (removed unnecessary folders, like Controllers, which I put in libraries, not the web project).

Solution

  • This is the only piece that actually uses the VisualStudio.DTE–it makes it super easy to create the solution and add projects into it (see the sketch below).
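A minimal sketch of the DTE portion, assuming Visual Studio 2010 (the ProgID version and paths will vary by machine):

# spin up a headless Visual Studio, create the solution, add a project
$dte = New-Object -ComObject VisualStudio.DTE.10.0
$dte.Solution.Create("C:\code\MyProject", "MyProject")
$dte.Solution.AddFromFile("C:\code\MyProject\MyProj.Core\MyProj.Core.csproj")
$dte.Solution.SaveAs("C:\code\MyProject\MyProject.sln")
$dte.Quit()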

But there are other scaffolding systems out there–why not use them?

Most of the time, I don’t need a full system. I don’t need my objects mapped, views automatically set up, or anything else. 

Start with three empty projects, load up the Specifications project, and start driving out features.  That’s how it’s supposed to work, right?  So why would I want to have to pre-fill my projects ahead of time?

 

What’s next?

  • Error catching and handling (it’s pretty lax right now)
  • Handle setting up gitflow, openwrap, jQuery, etc. Less typing good!
  • Something… who knows. 😀

 

Where to get it?

I’ve tossed it up on github at https://github.com/drlongnecker/New-Project.  Right now it has the WOMM warranty.


Cr-48 : The First Twenty Four

December 18, 2010 7 comments
I was shocked to see the Cr-48 sitting out on the front porch last night. I tend to pre-order things and forget–so I figured it'd happened again. Instead, Silicon Valley Santa came early. Apparently the first group of testers weren't notified, as they wanted to see how wacky we'd go on the web. I'm a good little lemming–hence the blog post. 🙂

After a full day of use, I wanted to join the thousands blogging about the Cr-48 and jot down some of my impressions–how it worked, what DID work, and what didn’t work. I’ve made an effort to do most everything on the Cr-48 today–including this blog post. Man, I miss Live Writer. 😛

Unboxing and Initial Setup

Out of the box, the machine just ‘starts’.  Fantastic. That’s how unboxing a new device should be. Unfortunately, I hit a brick wall immediately after. My network is locked down pretty tight and one of those security measures only allows authorized MAC addresses to attempt to authenticate.

“How the heck do I get the MAC address on this thing?”

It wasn’t on the screen, so I looked all over the unit. Nothing. Yanked the battery… nothing. Seriously?

DIDN’T WORK: There needs to be a way to spit out the network ifconfig information on startup. If the device is appealing to geeks, then give the geeks some love.

After hacking my own network, resetting the security schema, logging in, and resetting it, I was golden. The out-of-box experience (OOBE) for the Cr-48 is actually a lot better than most of the netbooks I've worked with, if nothing else for its single 'environment/task' configuration. I guess that's both a pro and con of having low expectations for the device (e.g. I don't expect it to be more than a glorified web browser).

WORKS: Instant update to the new version of ChromeOS, restart, sync everything, and ready-to-go perfection.

DIDN'T WORK: When the Cr-48 asks to take your picture, do it–even if you just hold up something or take a photo of empty air. Why? Because ChromeOS doesn't have a built-in mechanism to MODIFY the photo afterwards. If you skip the initial picture, you have to totally format and restart if you want to have a snazzy photo.

Rooting to Developer Mode

Now that I’m logged in and see my data, I want power. *insert evil laugh here*

Thankfully, swapping to developer mode is a FEATURE of the system. You can't do it in software; it's a physical switch near the battery. The only thing I don't like is the sad computer screen when you start. Seriously, Google? My computer doesn't need to be sad, it needs to have a magic wand or a hoverboard when it's in root mode.

As the instructions explain, you should do this immediately, as it wipes the user partition (takes a few minutes). After that, you're good to go.

At this point, you can swap to console windows just like a regular Linux device. Hit up VT2 by hitting Ctrl-Alt- -> (the forward button at the top, like where F2 would be). Log in and hack away or, as the prompt says, “have fun and send patches!”

NOTE: Hit Ctrl-Alt- Speaker Mute Button (LOL). It takes you to the console where the kernel messages are being spewed out. Interesting if you’re a bit geeky.

WORKS: Empowering users who WANT to root the unit is fantastic. I didn’t have to stand on one hand to control my device.

Using the Device

It’s hard to not like the package.  It’s a bit plain, but that’s expected for a prototype device created on a budget (they sent out ~60k of these for free… I’m sure Silicon Valley Santa’s elves made it ‘good enough’).

Screen: The 12.1” screen is crazy bright on full brightness. A couple of notches down from max is fine in a bright room. It’s clear and clean as well. Nice.

Keyboard: I actually love the keyboard. The search button in place of CAPS LOCK is brilliant. When does anyone actually USE CAPS LOCK these days (except for the trolls on forum posts)? The search button doubles as a 'new window' button when you think about it, which really speeds up navigation once you get used to it.

Touchpad: No, this is a touchbad. I hope the real units either fix this or do away with it. Want to right-click in Google Docs and fix your typos? Not a chance. Right-click is a mythical beast that only works when you don't want it to. Have larger, warm hands (me!)? You'll be rocketing your cursor all around the screen while you type. Know where to 'click'? I sure didn't. There's no clue that the bottom of the pad is the buttons. I guess MacBooks are the same way–maybe this is a PC-user snafu. For now, this is easily solved by an itty bitty Targus mouse I use for traveling.

DIDN'T WORK: The touchbad gets a DIDN'T WORK all of its own.

Sound: I have Pandora constantly running (via the extension) and it's not too bad. This bad boy isn't running Bose or Klipsch speakers, but it's good enough for sitting and surfing. The headphone port has a nice range and no background static (which my Dell laptop is notorious for having).

Battery: Since my last 'plug-in', which was about 5 hours ago, the meter says I have 1:23 hours left on the clock. That includes having a USB mouse plugged in, 6 polling tabs open, Pandora constantly playing, and writing this post in Google Docs. So if things go to plan, the life will be about 6.5 hours. For netbooks, that's not unheard of; however, I'm sure having the monitor a bit brighter, streaming music, and having HootSuite, Mail, StackOverflow, Groups, Reader, Docs, and four search tabs all running for that time is a bit more than a normal netbook operates with. It's far more than my normal laptop would survive–and it has two high-capacity batteries.

Performance: Honestly? Considering it's the backbone of the entire system, it's kinda slow. Even though it's running a 1.66GHz Atom with 2GB of memory and an SSD, it FEELS slow–which is really all that matters. Opening a tab shouldn't take a few seconds. Page rendering on Amazon.com shouldn't load one little image at a time.

I’m wondering if that’s because the AR928X card is stuck in 802.11g mode instead of 802.11n mode (Google search is filled with posts about the cards not kicking up to N-mode on Linux). Hopefully this will be addressed in time with patches.

WORKS: Normal operation, once you get used to the flow of the device, isn't bad though. Learning the keyboard commands is REQUIRED–if you're a mouser, even with an external mouse, you're going to weep at how slow it can be to get around.

UX: Using the Cr-48

The user experience of the device is … different. There's no other way to put it. That's not a bad thing, either. I spend a majority of my computer time either coding, gaming, or doing 'all other tasks'. I still have my desktop for coding (VS2010 brings my powerhouse computer to its knees–I have no expectations that a mobile device could run it) and gaming; however, this little computer fits the 'all other tasks' category well.

To do that, however, requires a bit of change. Here are a few of the things I’ve discovered to help me use the Cr-48.

Do not try to use tabs for everything. Using tabs is great–we love tab-based browsing, but think outside that. Remember that you can open new windows (Ctrl-N). Think of these new windows as multiple ‘desktops’. For example, I’m running Google Docs in one, my standard ‘browsing’ in another, and my ‘communication’ apps (HootSuite, Mail, Groups, Reader) in another. It allows me to easily Alt-Tab between work environments while still keeping each separate.

Become a keyboard junkie. Press Ctrl-Alt-/ on the keyboard to pop up a SWEET interactive keyboard map (every app should have this).  These are the keys for the ChromeOS. On top of that, learn the keyboard shortcuts of Mail, Groups, Reader, your twitter application, whatever other site you use (e.g. github.com has some pretty slick keyboard shortcuts to move around their site totally mouseless). This really speeds things up.

Some good ChromeOS keys to get started:

  • Alt-Up/Alt-Down : Page Up/Page Down
  • Alt-Tab : Switch between windows
  • Ctrl-Tab : Switch between tabs
  • Alt-D : Focus on Location bar (and select so you can start typing)
  • Alt-Backspace : Delete

Remember the built-in hard keys. Since the Cr-48 is missing function (F-key) keys, it has a row of hard keys built in. Back, forward, refresh, full screen, swap window, brightness, and sound controls are right at your fingertips (literally, if your hands are on home row). Full screen and back/forward are pretty snazzy once you get used to them there.

Find the app or extension that fits your requirements. I’ll talk more about extensions and apps here in a moment; however, use the Chrome Web Store and explore. There are a few cases where your application may offer BOTH an extension and app. Try both. One may work better than the other for your use.

Extensions

Since the launch of the Chrome Web Store, there’s been a lot of discussion about what an extension is… and what an app is… and why we shouldn’t all just use bookmarks like we have been.

Since ChromeOS syncs to your Chrome profile, anything you already sync shows up automatically. I had my Goo.gl Shortener, Web Developer, Mail, Evernote, glee, and Pandora extensions right OOBE.

DIDN’T WORK: The only extension that’s been a bit wonky has been glee.  It doesn’t seem to properly detect input areas–so pressing the gleeKey (‘g’) at any time pops up the box. Bummer ‘cause glee would be rockin’ on a netbook considering how keyboard-oriented it is.

WORKS: This is more of a recommendation than a works, but still. If you use Pandora, use the extension, not the app. Why? Because the Pandora app just loads up pandora.com. On a normal computer, that’s fine, but this is a Linux-based platform. What happens when Flash starts up on a Linux-based platform? Kittens are killed. The app also requires a full tab to operate while the extension sits happily in the extension bar, blaring your tunes.

Apps

I’ll admit, my initial response to the Chrome Web apps was “wut”? Why would I want fancy bookmarks? With the Cr-48, that’s still pretty much the case. There are a few apps which seem to run as actual applications, like TweetDeck’s ChromeDeck. Everything else seems to just be a bookmark to a site. That is a huge bummer. I’m hoping as the Cr-48 and other ChromeOS systems are released that this changes.

An example. As I discussed above, the Pandora application is tragic. It loads Pandora.com which promptly dies due to Flash.  The extension is rockin’, but there’s room for improvement.  ChromeOS supports panels (little pop-ups at the bottom of the screen). That’d be a great place for some Pandora action.

Random Thoughts

Multimedia is sorta meh. Flash sucks on here. I expect that on a Linux device, but I'd assume the owners of YouTube would have worked out SOMETHING magical. No, sir. That means poor performance for YouTube and Hulu and, well, 90% of the rest of the web. Unfortunately, since this is a Linux box, Silverlight doesn't natively work either (I haven't tried installing Moonlight yet… though I don't have high hopes). That counts out Netflix. It's a bit of a letdown because the Atom processors are touted for being able to do multimedia on-the-go. I honestly think this machine COULD do it if it wasn't for the Flash/Silverlight limitation on Linux, and that really hurts to say.

WORKS: For some reason, the Flash player (using FlowPlayer) on Tekpub.com is BRILLIANT and works like a champ. I don't know enough about how the compression/streaming/playback works on FlowPlayer to say why–though it's something I'm going to look into for a few sites I have using Flash movies.

The window switching animation is confusing. Okay, I’ll accept that I may be dense, but it just doesn’t feel natural. Rather than looping left-to-right through your windows, when you get to the end, you slingshot back to the first… so if you have three, you see left-left-right-left-left-right. That’s fine once you get used to it, but the right (the one that’s going from last to first) LOOKS like it’s going BACK. Just an odd UX thing.

Mounting storage is… awkward. I’m assuming this is a prototype issue and won’t affect the production devices, but the device is SO LOCKED DOWN that using the SD card slot is a real pain. I’m thinking this is to encourage cloud storage, which is fine, but if that’s the case, why is the SD card slot there? I can download files in Chrome, but moving them onto the removable device requires some magic.

But I want something from {device}… For this article, figuring out how the heck to get photos on here was a REAL pain. I have a phone that can take snazzy widescreen photos, but getting them to the web is a bit of a pain outside of TwitPic. I have a Sony digital camera, but plugging it into the USB port on here was futile. I wasn’t going to swap the front web cam around to snap photos–that was just silly. So how can a blogger use this and attach photos? I’m still working that out–note the lack of photos in this stream of text post.

Give us a right-click keyboard button! If right-clicking is going to be such a pain, give us the ‘properties’ right-click key that we’ve gotten used to on Windows keyboards for so long. I’ve REALLY been missing that (especially while typing this post to fix typos–I guess I should be less lazy and spell the words right the first time).

That’s it for the first twenty-four hours and see, all this cloud computing talk and I haven’t once said “to the cloud!”… oh, damn.

Updating NuGet Spec’s Version Numbers in psake

December 3, 2010 Comments off

As part of our psake build process on a couple of framework libraries, I wanted to add a step that updates our internal NuGet repository. The actual .nuspec file is laid out quite simplistically (details); however, the version number is hard-coded.

For packages that have a static ‘major’ version, that’s not a bad deal; however, I wanted to keep our package up to date with the latest and greatest versions, so I needed to update that version element.

Since I have the full power of PowerShell at my disposal, modifying an XML file is a cakewalk. Here’s how I went about it.

function build-nuget-package {
	# update nuget spec version number
	[xml] $spec = gc $nuget_spec
	$spec.package.metadata.version = GetBuildNumber
	$spec.Save($nuget_spec)

	# rebuild the package using the updated .nuspec file.
	cd $release_directory
	exec { invoke-expression "$nuget pack $nuget_spec" }
	cd $build_directory
}

GetBuildNumber is an existing psake function I use to snag the AssemblyVersion from \Properties\AssemblyInfo.cs (and return it for things like TeamCity). $nuget and $nuget_spec are variables configured in psake that point to the nuget executable and the specification file used by the build script.
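For reference, a stripped-down version of what GetBuildNumber does–the path and regex here are approximations of my setup, not the exact function:

function GetBuildNumber {
	# scrape the version out of AssemblyInfo.cs
	$match = select-string ".\Properties\AssemblyInfo.cs" `
		-pattern 'AssemblyVersion\("([0-9\.]+)"\)' |
		select-object -first 1
	return $match.Matches[0].Groups[1].Value
}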

Once the package is built, in our Deploy task, we do a simple copy.

task Deploy -depends Release {
	 [ ... cut for brevity ]

	# build and deploy nuget package
	build-nuget-package
	copy-item $release_directory\*.nupkg \\server\nugetshare$
}

Now the NuGet repository is updated with a new package each time we build. I may refine it and only update the package on major version number changes or something later, but this gets the job done.

Playing nice with psake, PartCover, and TeamCity

December 2, 2010 1 comment

While code coverage isn’t the holy grail of development benchmarks, it has a place in the tool belt. We have several legacy systems where we are retrofitting tests in as we enhance and maintain the project.

I looked at NCover as a code coverage solution. NCover is a SLICK product and I really appreciated its Explorer and charting for deep-dive analysis. Unfortunately, working in public education, the budget cuts just keep coming and commercial software didn't fit the bill. After finding the revitalized project on GitHub, I dug back into PartCover.net. Shaun Wilde (@shauncv) and others have been quite active revitalizing the project and fleshing out features for .net 4.0.

You can find the repo and installation details for PartCover.net at https://github.com/sawilde/partcover.net4.

Now, armed with code coverage reports, I had to find a way to get that information into TeamCity. Since I use psake, the command line runner doesn’t have the option for importing code coverage results. That just means I’ll need to handle executing PartCover inside of psake. Easy enough. Thankfully, the TeamCity Reports tabs can also be customized. Time to get to work!

Step 0: Setting up PartCover for 64-bit

If you’re on a 32-bit installation, then skip this (are there people still running 32-bit installs?).

PartCover itself installs just fine; however, if you're on an x64 machine, you'll need to modify PartCover.exe and PartCover.Browser.exe to force 32-bit using corflags.exe.

Corflags.exe is usually located at C:\Program Files (x86)\Microsoft SDKs\Windows\v7.0A\bin if you’re running Windows 7. Older versions of Windows might be at v6.0A.

Once you’ve located corflags.exe, simply force the 32-bit flag:

corflags /32bit+ /force PartCover.exe
corflags /32bit+ /force PartCover.Browser.exe

If you have trouble on this step, hit up Google. There are tons of articles out there hashing and rehashing this problem.

Step 1: Setting up your PartCover settings file

PartCover allows you to specify configuration settings directly at the command line, but I find that hard to track when changes are made.  Alternatively, a simple XML file allows you to keep your runner command clean.

Here’s the settings file I’m running with:

<PartCoverSettings>
  <Target>.\Tools\xunit\xunit.console.x86.exe</Target>
  <TargetWorkDir></TargetWorkDir>
  <TargetArgs> .\build\MyApp.Test.dll /teamcity /html .\build\test_output.html</TargetArgs>
  <DisableFlattenDomains>True</DisableFlattenDomains>
  <Rule>+[MyApp]*</Rule>
  <Rule>-[MyApp.Test]*</Rule>
  <Rule>-[*]*__*</Rule>
</PartCoverSettings>

Just replace MyApp with the project names/namespaces you wish to cover (and exclude).

For more details on setting up a settings file, check out the documentation that comes with PartCover.  The only real oddity I have here is the last rule <Rule>-[*]*__*</Rule>.  PartCover, by default, also picks up the dynamically created classes from lambda expressions.  Since I’m already testing those, I’m hiding the results for the dynamic classes with two underscores in them __. It’s a cheap trick, but seems to work.

Step 2: Setting up psake for your tests and coverage

My default test runner task in psake grabs a set of assemblies and iterates through them passing each to the test runner.  That hasn’t changed; however, I now need to call PartCover instead of mspec/xunit.

task Test -depends Compile {
  $test_assemblies = (gci $build_directory\* -include *Spec.dll,*Test.dll)
  if ($test_assemblies -ne $null) {
    " - Found tests/specifications..."
    foreach($test in $test_assemblies) {
      " - Executing tests and coverage on $test..."
      $testExpression = "$coverage_runner --register --settings $base_directory\partcover.xml --output $build_directory\partcover_results.xml"
      exec { invoke-expression $testExpression }
    }
  }
  else {
    " - No tests found, skipping step."
  }
}

$coverage_runner, $base_directory, and $build_directory refer to variables configured at the top of my psake script.
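For context, they're declared in a properties block at the top of the script; mine look roughly like this (the tool paths are examples from my layout, adjust to taste):

properties {
	$base_directory  = resolve-path .
	$build_directory = "$base_directory\build"
	$coverage_runner = "$base_directory\tools\partcover\PartCover.exe"
}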

Now, you should be able to drop to the command line and execute your test task. A partcover_results.xml file should appear. But who wants to read XML? Where are the shiny HTML reports?

We need to create them! 🙂

Step 3: Generating the HTML report from PartCover

There are two XML style sheets included with PartCover; however, I really like Gáspár Nagy's detailed report found at http://gasparnagy.blogspot.com/2010/09/detailed-report-for-partcover-in.html. It's clean, detailed, and fast. I've downloaded this XSLT and placed it in the PartCover directory.

Without TeamCity to do the heavy lifting, we need an XSL transformer of our own.  For my purposes, I’m using the one bundled with Sandcastle (http://sandcastle.codeplex.com/).  Install (or extract the MSI) and copy the contents to your project (ex: .\tools\sandcastle).

The syntax for xsltransform.exe is:

xsltransform.exe {xml_input} /xsl:{transform file} /out:{output html file}

In this case, since I’m storing the partcover_results.xml in my $build_directory, the full command would look like (without the line breaks):

    .\tools\sandcastle\xsltransform.exe .\build\partcover_results.xml
        /xsl:.\tools\partcover\partcoverfullreport.xslt
        /out:.\build\partcover.html

Fantastic. Now, we want that to run every time we run our tests, so let's add it to our psake script.

task Test -depends Compile {
  $test_assemblies = (gci $build_directory\* -include *Spec.dll,*Test.dll)
  if ($test_assemblies -ne $null) {
    " - Found tests/specifications..."
    foreach($test in $test_assemblies) {
      " - Executing tests and coverage on $test..."
      $testExpression = "$coverage_runner --register --settings $base_directory\partcover.xml --output $build_directory\partcover_results.xml"
      $coverageExpression = "$transform_runner $build_directory\partcover_results.xml /xsl:$transform_xsl /out:$build_directory\partcover.html"
      exec { invoke-expression $testExpression }
      " - Converting coverage results for $test to HTML report..."
      exec { invoke-expression $coverageExpression }
    }
  }
  else {
    " - No tests found, skipping step."
  }
}

Now we have a $coverageExpression to invoke. The paths to xsltransform.exe and the report are tied up in variables for easy updating.
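Those live in the same properties block as the rest; something along these lines:

properties {
	$transform_runner = "$base_directory\tools\sandcastle\xsltransform.exe"
	$transform_xsl    = "$base_directory\tools\partcover\partcoverfullreport.xslt"
}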

You should be able to run your test task again and open up the HTML file.

Step 4: Setting up the Code Coverage tab in TeamCity

The last step is the easiest. As I haven't found a way to have the Code Coverage tab automatically pick up the PartCover reports, we'll need to add a custom report.

In TeamCity, go under Administration > Server Configuration > Report Tabs.

Add a new report tab with the Start Page set to the name of your PartCover HTML file.  In my case, it’s partcover.html.  Remember, the base path is your build directory (which is why I’m outputting the file to the $build_directory in psake).

Commit your repository with your new psake script and PartCover (if you haven’t already) and run–you’ll see a new Code Coverage tab ready and waiting.

[Screenshot: the new Code Coverage tab in TeamCity]

jQuery: Function to auto-capitalize on ‘keyup’ event

November 16, 2010 Comments off

While putting the finishing touches on a view today, the customer asked if we could force the user’s CAPS LOCK on for a case-specific field. Well, no, not necessarily, but we could caps the characters as they go in.

This should be easy enough, right?  Right!

The function:

jQuery.fn.autoCap = function () {
    // 'this' is already a jQuery collection inside a plugin, so iterate
    // it directly and return it to keep the plugin chainable.
    return this.each(function () {
        $(this).bind('keyup', function () {
            $(this).val($(this).val().toUpperCase());
        });
    });
};

Usage:

$("#address-state").autoCap();

It’s easy enough to bind to the event and fire off toUpperCase(). There may be a more optimal way, but this seems to fit the bill and profiled well.

Is there a better way? 🙂

Categories: JavaScript, jquery

Setting up a NuGet PowerShell Profile

November 15, 2010 Comments off

While NuGet alone is a pretty spectacular package management system, one of the hottest features involves the underlying PowerShell Package Console (we’ll call it PM from here out).  It may look fancy and have a funky PM> prompt, but it’s still PowerShell.

Considering I use psake and live in PowerShell most of the day, I wanted to do a bit of customizing.  Out of the box… urr… package, PM is barebones PowerShell.

[Screenshot: the default, barebones PM console]

Well, that’s pretty boring!

The question is, how would I customize things?

The answer? Setting up a profile!

Step 1: Finding the NuGet Profile

Just like a normal PowerShell installation, the $profile variable exists in PM.
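Echoing it spits back the console's profile path–on my machine it looks something like:

PM> $profile
C:\Users\{you}\Documents\WindowsPowerShell\NuGet_profile.ps1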


Okay, that was easy.  If you try to edit the file, however, it’s empty. You can either use the touch command to create an empty file, then edit it with Notepad, or simply run Notepad with the $profile path–it’ll ask you to create the file. 🙂

For an example, we’ll just pipe a bit of text into our profile and see what happens.
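Something along these lines does the trick (the greeting is just an example):

PM> "write-host 'Welcome to the Package Manager Console!'" | out-file $profile -append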


Now, close Visual Studio (PM only seems to load the profile when Visual Studio first starts) and relaunch it.  PM should now welcome you!

[Screenshot: the PM console showing the welcome message on startup]

 

Step 2: Customize, customize, customize!

Now we're ready to add variables, set up custom paths and scripts, add some git-tastic support, and add modules (like psake). There are quite a few posts on the blog about customizing PowerShell's environment; check them out.
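As a starting point, mine boils down to something like this–the module path and alias are examples, not gospel:

# NuGet_profile.ps1 – keep it lean; Visual Studio loads this at startup
import-module "$env:USERPROFILE\tools\psake\psake.psm1"
set-alias ss select-string
write-host "Development profile loaded."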

Remember: Since we’re using a separate PowerShell profile, be frugal with your commands and keep them "development centric".  For example, I don’t load HyperV modules, Active Directory management commands, and other "non-Solution" things into the PM. Visual Studio is slow enough–don’t bog it down. :) 

This is also a great opportunity to trim your profile down and break it into modular pieces (whether that be scripts or modules).  Keep those profiles as DRY as possible.

 

A few caveats…

There do seem to be a few caveats while in the PM environment:

1. Execution, whether it’s an actual executable, a script, or piping something to more, seems to be tossed into another process, executed, and then the results returned to the PM console window. Not terrible, but something to be aware of if it seems like it’s not doing anything.

2. I can’t find a good way to manipulate the boring PM> prompt. It seems that $Host.UI is pretty much locked down. I’m hopeful that will change with further releases because not KNOWING where I am in the directory structure is PAINFUL.
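For comparison, a stock PowerShell host lets you override the prompt function like this–a trick the PM host currently appears to ignore:

function prompt {
	"PS $(get-location)> "
}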