Archive for the ‘Hardware and Software’ Category

Review: Asus Transformer TF-101

tl;dr: Awesome!

I wanted to wait at least a week before reviewing so the Transformer could settle in and integrate into my daily 'flow'.

Why the Asus Transformer?

I’ve looked at more tablets than I can remember, teetering between the Motorola Xoom, Apple iPad2, Notion Ink Adam, and the Transformer.  All of these seemed like great devices and had their own pros; however, the Transformer sold me on a few of its flashy innovations:

  • Asus seems dedicated to constant updates. They’ve already pushed out Android Honeycomb 3.1 OTA and it updated like a champ.
  • The docking station/keyboard is brilliant and really bridges the gap between netbook and tablet and adds incredible battery life.
  • The mix of hardware performance at an incredible price point ($499 US for the 32GB unit and $149 US for the keyboard/docking station).
  • Flash support. Yes, I'm looking at you, iPad2.

Photo of the Transformer... and swanky placemat backdrop.


How has the Transformer changed things?

I’ve noticed it’s changed things in the oddest of ways. I haven’t opened my personal laptop since I got the Transformer (it looks big and powerful, but it’s just old and … well, old).

Around the house, I’ve found myself using the Transformer in place of my EVO. I use it in the kitchen to look up recipes, check for coupons, make note of my groceries in Mighty Shopper, and listen to music. Other times, like for this post, for simple word processing, web browsing, and email. More functionality and less squinting.

Work computer size comparison.

At work, I've used Polaris Office for note taking, presentations (via the HDMI out), web browsing and research as I wander around the office with folks, and email. The remote desktop client (2X Client) lets me remote into servers, my office desktop, and client machines for diagnostic work.

With Android 3.1, the web browser engine finally supports NTLM/Windows authentication so I can log into our corporate web sites (I hope this is coming to the Gingerbread builds soon).

I still have my phone with me through all this; the Transformer hasn't replaced the portability and features of my phone, but it has augmented what I can do without an actual desktop computer.


The Hardware

You can read the unit's specifications elsewhere, so I won't dive into them too much. There are a few things worth mentioning.

The bulk

Overall, the unit (and keyboard dock) is quite light compared to a normal 14" laptop.  The tablet itself is very comfortable to carry around (long days here at work, so I've been pacing around while researching on the web).

The keyboard dock

Image of the keyboard dock.

I was a bit worried that the keyboard dock would be uncomfortable to use; however, that's not the case.  The keys are well spaced apart (once you get used to the 'special' keys at the top; I keep accidentally locking the unit when I hit backspace) and the screen can tilt back far enough for easy reading.

It’s important to note that the keyboard dock’s additional battery life is due to the dock charging the main unit while it’s docked.  This is advantageous.  Wander around and run the battery down a bit on the tablet, then dock and let the keyboard charge things up, detach, and you have a fully charged tablet again.


It's quite disappointing that the Transformer has a proprietary USB cable, and even more so that the cable is only about a meter long.  The hassle is aggravated by the fact that you cannot charge the unit via USB while it's on, so if you need a charge AND need to use it, you're trapped a meter from your electrical connection or extension cord.

Thankfully, with the 14-16 hours of battery life, charging isn’t a central focus of the day (unlike my EVO which is dead by the time I get to work).

The video output

Less of a big deal and more of an 'oh' moment when digging for cables.  My EVO and a few other video devices all use micro HDMI, so I had to pick up a 1.3a mini HDMI cable for the Transformer.

The screen

I love the screen. It's a bit hard to see in bright light (it's not a Kindle for outdoor reading), and the fingerprints are a bit out of control. I plan on getting a screen protector which, I hope, will help.

The camera

The camera is only 5MP (compared to the 8MP in my EVO) and takes 4:3 photos rather than widescreen. That aside, it's quite a good camera for general 'here I am!' sorts of photos, like lounging on the hammock in the back yard on a peaceful evening.

Example Transformer camera image--my back yard.

The lack of right click

I understand… I really do. The Android device doesn’t have a context for ‘right-click’; however, I dream of the day when it does so that the remote desktop clients have right-click.

Keeping it safe

I picked up a Belkin 10" Netbook Sleeve when ordering and highly recommend it as a carry case for the Transformer.  The unit, whether you're carrying just the tablet or the tablet and keyboard dock, fits very well into the sleeve, and the zips/material feel and look good.


The Software

Out of the box, the Transformer walks you through a simple configuration–much like my EVO (or any Android phone). Setting up email, syncing contacts, and such worked just fine. I was impressed that the unit also synced my Google bookmarks. That’s an added bonus.

The basics are included:

  • Email (Enterprise, Gmail, etc.) – The layout and flow of both mail applications is the same and works VERY well in landscape mode.
  • Calendaring – nice, clean "Outlook-esque" layout.
  • Contacts – standard Android contacts client.
  • Polaris Office – bundled Office client for DOC, XLS, and PPT. Works quite well (and is what I used for this post). It can also read/write to Google Docs, Dropbox, and a few other cloud services. At this time, I'm getting 400 errors trying to save BACK to Google Docs; however, from the forums, it appears to be a Google API issue.
  • MyReader – Simple eBook/PDF reader akin to Aldiko.
  • MyNet – network-based HD streaming client. Detects my PlayOn devices and streams perfectly.
  • MyCloud – a subscription-based, unlimited storage system for music and files. Transformer includes a free 1 year subscription. I haven’t dug into this much as I have LiveSync and DropBox.

In addition to the boxed software, there are a few applications that I’d recommend (and seem to work quite well with the Transformer and Honeycomb):

  • Dolphin Browser HD – A bit cleaner interface for web browsing, plus plugin modules (3.1 addresses some of its rendering issues, though).


  • Amazon Kindle Reader – Great for reading Kindle books on the run (and without your Kindle). The update for Android 3.1 works even better!
  • Pandora – Great and solid music client.
  • Grooveshark – Formatting is a bit off, but works fantastic to query and stream your favorite tunes.
  • Google Reader – Great tablet support for full-screen reading.
  • imo Chat beta – Best IM client I’ve found for any type of Android device.
  • Plume – best Twitter client that takes advantage of the notifications and wide screen layout.
  • DropBox – the Android DropBox client is pretty stellar.
  • PrintBot – a free (with an inexpensive paid version) application to print to network devices over WiFi. Works great and, hopefully, future versions will allow for multiple printers.
    NOTE: Similar to the Gingerbread break, Netflix is defunct on Honeycomb as well. I had hopes (dreams) that 3.1 would fix it, but alas, it doesn't seem so. Netflix is adamant that a new version is in the works. I hope so. It was great to have mobile movies on my cell phone, and the Transformer seems ideal.

Upgrading to Android Honeycomb 3.1

On 10 June 2011, Asus released Android Honeycomb 3.1 for the Transformer. The official version is v8.4.4.5 (the SKUs are separated by locale).

There are two options: wait for the OTA push, or apply the update manually.

I'm impatient, so I opted for the manual method.  The process was quite simple:

  1. Download the zip file from the internet.
  2. On a microSD card create an \asus\update directory and copy the ZIP file into that directory.
  3. Insert the microSD card into the Transformer. You should see a notification that an update is available.
  4. The device will reboot a few times and then be good to go.
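For reference, steps 2 and 3 boil down to a two-line copy job. Here's a sketch in PowerShell, assuming the card mounts as E: and using a hypothetical file name for the downloaded ZIP:

```powershell
# Hypothetical drive letter and ZIP file name -- adjust for your download.
New-Item -ItemType Directory -Path 'E:\asus\update' -Force | Out-Null
Copy-Item "$env:USERPROFILE\Downloads\" 'E:\asus\update\'
```

Once the card is back in the Transformer, the update notification should appear on its own.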

There are a few highlights to 3.1 (detailed by the release notes) that I really dig:

  1. The browser (and any browser relying on its WebKit engine) now supports NTLM/Windows Authentication. AWESOME for work!
  2. (Most) widgets are now resizable. I’ve come across a few that won’t resize.
  3. Speed! 3.1 generally feels faster.  Multitasking across a dozen apps with notifications flying and music playing doesn’t seem to bother it.


With the tablet upgraded to 3.1, the applications that I like, and features that keep impressing me, I think the Transformer will continue to be a great fit. The keyboard matches my dire need to rapidly input information (take notes, respond to emails) and the portability matches my recent need to roam free of wires and surf the web. 

The Transformer is a fantastic bridge between the smallest of laptops and the largest of cell phones and, unfortunately, is what I’d hoped the Google Chromebook would have been.  Don’t get me wrong, I like my Chromebook–it was free and I like free–but the Android-based Transformer can actually DO the things I need to do in my day-to-day life.

Cr-48 : The First Twenty Four

December 18, 2010 7 comments
I was shocked to see the Cr-48 sitting out on the front porch last night.  I tend to pre-order things and forget, so I figured that it'd happened again.  Instead, Silicon Valley Santa came early. Apparently the first group of testers weren't notified, as they wanted to see how wacky we went on the web. I'm a good little lemming; hence the blog post. 🙂

After a full day of use, I wanted to join the thousands blogging about the Cr-48 and jot down some of my impressions–how it worked, what DID work, and what didn’t work. I’ve made an effort to do most everything on the Cr-48 today–including this blog post. Man, I miss Live Writer. 😛

Unboxing and Initial Setup

Out of the box, the machine just ‘starts’.  Fantastic. That’s how unboxing a new device should be. Unfortunately, I hit a brick wall immediately after. My network is locked down pretty tight and one of those security measures only allows authorized MAC addresses to attempt to authenticate.

“How the heck do I get the MAC address on this thing?”

It wasn’t on the screen, so I looked all over the unit. Nothing. Yanked the battery… nothing. Seriously?

DIDN’T WORK: There needs to be a way to spit out the network ifconfig information on startup. If the device is appealing to geeks, then give the geeks some love.

After hacking my own network, resetting the security schema, logging in, and resetting it, I was golden. The out-of-box experience (OOBE) for the Cr-48 is actually a lot better than most of the netbooks I've worked with, if nothing else for its single 'environment/task' configuration. I guess that's both a pro and a con of having low expectations for the device (e.g. I don't expect it to be more than a glorified web browser).

WORKS: Instant update to the new version of ChromeOS, restart, sync everything, and ready-to-go perfection.

DIDN'T WORK: When the Cr-48 asks to take your picture, do it, even if you just hold up something or take a photo of empty air. Why?  Because ChromeOS doesn't have a built-in mechanism to MODIFY the photo afterwards. If you skip the initial picture, you have to totally format and restart if you want to have a snazzy photo.

Rooting to Developer Mode

Now that I’m logged in and see my data, I want power. *insert evil laugh here*

Thankfully, swapping to developer mode is a FEATURE of the system.  You can't do it via software; it's a physical switch near the battery. The only thing I don't like is the sad computer screen when you start.  Seriously, Google? My computer doesn't need to be sad, it needs to have a magic wand or a hoverboard when it's in root mode.

As the instructions explain, you should do this immediately as it wipes the user partition (takes a few minutes). After that, you're good to go.

At this point, you can swap to console windows just like on a regular Linux device. Hit up VT2 with Ctrl-Alt-Forward (the forward button at the top, where F2 would be). Log in and hack away or, as the prompt says, "have fun and send patches!"

NOTE: Hit Ctrl-Alt-[Speaker Mute] (LOL). It takes you to the console where the kernel messages are being spewed out. Interesting if you're a bit geeky.

WORKS: Empowering users who WANT to root the unit is fantastic. I didn’t have to stand on one hand to control my device.

Using the Device

It’s hard to not like the package.  It’s a bit plain, but that’s expected for a prototype device created on a budget (they sent out ~60k of these for free… I’m sure Silicon Valley Santa’s elves made it ‘good enough’).

Screen: The 12.1” screen is crazy bright on full brightness. A couple of notches down from max is fine in a bright room. It’s clear and clean as well. Nice.

Keyboard: I actually love the keyboard. The search button in place of CAPS lock is brilliant. When does anyone actually USE CAPS lock these days (except for the trolls on forum posts)? The search button doubles as a 'new window' key, which really speeds up navigation once you get used to it.

Touchpad: No, this is a touchbad. I hope the real units either fix this or do away with it. Want to right click in Google Docs and fix your typos? Not a chance. Right-click is a mythical beast that only works when you don't want it to. Have larger, warm hands (me!)? You'll be rocketing your cursor all around the screen while you type. Know where to 'click'? I sure didn't. There's no clue that the bottom of the pad holds the buttons. I guess MacBooks are the same way, so maybe this is a PC user snafu. For now, it's easily solved by an itty-bitty Targus mouse I use for traveling.

DIDN'T WORK: The touchbad gets a DIDN'T WORK all of its own.

Sound: I have Pandora constantly running (via extension) and it's not too bad. This bad boy isn't running Bose or Klipsch, but it's good enough for sitting and surfing. The headphone port has a nice range and no background static (which my Dell laptop is notorious for).

Battery: Since my last 'plug-in', which was about 5 hours ago, the meter says I have 1:23 hours left on the clock. That includes having a USB mouse plugged in, 6 polling tabs open, Pandora constantly playing, and writing this post in Google Docs.  So if things go to plan, the life will be about 6.5 hours. For netbooks, that's not unheard of; however, I'm sure having the monitor a bit brighter, streaming music, and having HootSuite, Mail, StackOverflow, Groups, Reader, Docs, and four search tabs all running for that time is a bit more than a normal netbook handles. It's far more than my normal laptop would survive, and it has two high-capacity batteries.

Performance: Honestly? Considering it's the backbone of the entire system, it's kinda slow.  Given that it's running a 1.66GHz Atom with 2GB of memory and an SSD, it FEELS slow, which is really all that matters.  Opening a tab shouldn't take a few seconds. Page rendering shouldn't load one little image at a time.

I’m wondering if that’s because the AR928X card is stuck in 802.11g mode instead of 802.11n mode (Google search is filled with posts about the cards not kicking up to N-mode on Linux). Hopefully this will be addressed in time with patches.

WORKS: For normal operation, once you get used to the flow of the device, it's not bad. Learning the keyboard commands is REQUIRED; if you're a mouser, even with an external mouse, you're going to weep at how slow it can be to get around.

UX: Using the Cr-48

The user experience of the device is … different. There's no other way to put it. That's not a bad thing either. I spend a majority of my computer time either coding, gaming, or "all other tasks". I still have my desktop for coding (VS2010 brings my powerhouse computer to its knees; I have no expectations a mobile device could run it) and gaming; however, this little computer fits the "all other tasks" category well.

To do that, however, requires a bit of change. Here are a few of the things I’ve discovered to help me use the Cr-48.

Do not try to use tabs for everything. Using tabs is great–we love tab-based browsing, but think outside that. Remember that you can open new windows (Ctrl-N). Think of these new windows as multiple ‘desktops’. For example, I’m running Google Docs in one, my standard ‘browsing’ in another, and my ‘communication’ apps (HootSuite, Mail, Groups, Reader) in another. It allows me to easily Alt-Tab between work environments while still keeping each separate.

Become a keyboard junkie. Press Ctrl-Alt-/ on the keyboard to pop up a SWEET interactive keyboard map (every app should have this).  These are the keys for ChromeOS. On top of that, learn the keyboard shortcuts of Mail, Groups, Reader, your Twitter application, and whatever other sites you use (some have pretty slick keyboard shortcuts for moving around totally mouseless). This really speeds things up.

Some good ChromeOS keys to get started:

  • Alt-Up/Alt-Down : Page Up/Page Down
  • Alt-Tab : Switch between windows
  • Ctrl-Tab : Switch between tabs
  • Alt-D : Focus on Location bar (and select so you can start typing)
  • Alt-Backspace : Delete

Remember the built-in hard keys. Since the Cr-48 is missing function (F-key) keys, it has a row of hard keys built in. Back, forward, refresh, full screen, swap window, brightness, and sound controls right at your fingertips (literally, if your hands are on home row). Full screen and back/forward are pretty snazzy once you get used to them there.

Find the app or extension that fits your requirements. I’ll talk more about extensions and apps here in a moment; however, use the Chrome Web Store and explore. There are a few cases where your application may offer BOTH an extension and app. Try both. One may work better than the other for your use.


Since the launch of the Chrome Web Store, there’s been a lot of discussion about what an extension is… and what an app is… and why we shouldn’t all just use bookmarks like we have been.

Since ChromeOS syncs to your Chrome profile, anything you already sync shows up automatically. I had my Shortener, Web Developer, Mail, Evernote, glee, and Pandora extensions right OOBE.

DIDN’T WORK: The only extension that’s been a bit wonky has been glee.  It doesn’t seem to properly detect input areas–so pressing the gleeKey (‘g’) at any time pops up the box. Bummer ‘cause glee would be rockin’ on a netbook considering how keyboard-oriented it is.

WORKS: This is more of a recommendation than a 'works', but still. If you use Pandora, use the extension, not the app. Why? Because the Pandora app just loads up the website. On a normal computer, that's fine, but this is a Linux-based platform. What happens when Flash starts up on a Linux-based platform? Kittens are killed. The app also requires a full tab to operate while the extension sits happily in the extension bar, blaring your tunes.


I’ll admit, my initial response to the Chrome Web apps was “wut”? Why would I want fancy bookmarks? With the Cr-48, that’s still pretty much the case. There are a few apps which seem to run as actual applications, like TweetDeck’s ChromeDeck. Everything else seems to just be a bookmark to a site. That is a huge bummer. I’m hoping as the Cr-48 and other ChromeOS systems are released that this changes.

An example. As I discussed above, the Pandora application is tragic. It loads the website, which promptly dies due to Flash.  The extension is rockin', but there's room for improvement.  ChromeOS supports panels (little pop-ups at the bottom of the screen). That'd be a great place for some Pandora action.

Random Thoughts

Multimedia is sorta meh. Flash sucks on here. I expect that on a Linux device, but I'd assume the owners of YouTube would have worked out SOMETHING magical. No, sir.  That means poor performance for YouTube and Hulu and, well, 90% of the rest of the web. Unfortunately, since this is a Linux box, Silverlight doesn't natively work either (I haven't tried installing Moonlight yet… though I don't have high hopes). That counts out Netflix. It's a bit of a letdown because Atom processors are touted as being able to do multimedia on-the-go. I honestly think this machine COULD do it if it weren't for the Flash/Silverlight limitation on Linux, and that really hurts to say.

WORKS: For some reason, one site's Flash player (using FlowPlayer) is BRILLIANT and works like a champ. I don't know enough about how the compression/streaming/playback works in FlowPlayer to say why, though it's something I'm going to look into for a few sites I have using Flash movies.

The window switching animation is confusing. Okay, I’ll accept that I may be dense, but it just doesn’t feel natural. Rather than looping left-to-right through your windows, when you get to the end, you slingshot back to the first… so if you have three, you see left-left-right-left-left-right. That’s fine once you get used to it, but the right (the one that’s going from last to first) LOOKS like it’s going BACK. Just an odd UX thing.

Mounting storage is… awkward. I’m assuming this is a prototype issue and won’t affect the production devices, but the device is SO LOCKED DOWN that using the SD card slot is a real pain. I’m thinking this is to encourage cloud storage, which is fine, but if that’s the case, why is the SD card slot there? I can download files in Chrome, but moving them onto the removable device requires some magic.

But I want something from {device}… For this article, figuring out how the heck to get photos on here was a REAL pain. I have a phone that can take snazzy widescreen photos, but getting them to the web is a bit of a pain outside of TwitPic. I have a Sony digital camera, but plugging it into the USB port on here was futile. I wasn’t going to swap the front web cam around to snap photos–that was just silly. So how can a blogger use this and attach photos? I’m still working that out–note the lack of photos in this stream of text post.

Give us a right-click keyboard button! If right-clicking is going to be such a pain, give us the ‘properties’ right-click key that we’ve gotten used to on Windows keyboards for so long. I’ve REALLY been missing that (especially while typing this post to fix typos–I guess I should be less lazy and spell the words right the first time).

That’s it for the first twenty-four hours and see, all this cloud computing talk and I haven’t once said “to the cloud!”… oh, damn.

Stop using MS Paint for your demos – Get Balsamiq Mocks!

November 25, 2009 3 comments

Ever had your boss walk in and want a ‘mockup’ of a product the next day? In the past, I’d whip something up either in Visio (*cough* or Paint *cough*) and give a basic sketch of how screens and flow would work.

Over the past year, I’ve read a lot of rave reviews around Balsamiq Mockups, but hadn’t had a chance to try it out. 

I emailed Balsamiq to inquire about how their licensing structure worked.  I was interested in purchasing a personal copy (budgets are tight here at the office; cuts in public education are running deep) and curious whether I could still use it for "work-related" activities. To my surprise, I had a nearly instant response thanking me for my interest and providing a free license to use here at work (educational). That's freaking AWESOME, greatly appreciated, and further motivates me to pick up a copy for consulting and personal projects.

So, next comes usage.  Up until now, I’ve tinkered with the ‘web demo’ version of Balsamiq. I was pleased that, on moving to the desktop version, the layout, tools, and functionality remained almost identical. No lost time or learning curve.

How It Saved The Day

Last week, I was hit with the opening situation.  "Hey, we need a demo of how you'd do this… and we need it later this afternoon to present to the lawyers, {big boss A}, and {big boss B}…"

For something to hit that level, I’d usually opt to bypass the Big Chief and Crayon as well as Paint and simply mock up a UI on the web.  Unfortunately, for this, the time wasn’t there; however, Balsamiq was up to the task.

Rough (very rough) specs to screens, screens transformed into flow, and added into a PowerPoint for a quick presentation, acceptance by all parties involved, and hero fanfare.  We’re now a week later and using those mockups to generate our UI screens—and the customer loves how well things translate without any ‘surprises’. Excellent.

UPDATE: I just pulled down the 1.6.46 version of Balsamiq and it exports to PDF. Sweet.

Features I Love

Linked Screens – A bit of a counter to one of the ‘Things I Wish It Did’ is the ability to link mockup screens together with ‘links’.  It’s great for demos to provide a true flow of how things fit together.

In this example, we’re showing a simulation of the login screen. You can see the ‘link’ icon on the Login button. In a demo, clicking the Login button takes us to our next screen—just like it would (if authenticated right ;)) in our app.

Balsamiq Mockups - Linked Screens

Sticky Notes – On a few screens, showing business logic is a bit challenging. That’s where I love sticky notes comments. I’m a huge fan of whiteboarding and sticky notes and am thrilled I can bring that into my mocks. It’s also great when exporting and being able to keep track of comments and ideas as the team is working through a mock.

Balsamiq Mockups - Sticky Notes

Easily Populated Controls – Data grids, buttons, tabs—the common elements of a user interface and all easily populated with test data.  No longer do I need to draw lines in Paint or try to copy/paste screen clips out of Excel to get a decent looking grid view—a bit of text and commas and a snazzy grid appears!

Balsamiq Mockups - Grid

Things I Wish It Did

Master Pages – Most UI layouts have the same headers/footers. You can easily reproduce this by ‘cloning’ the current mockup; however, when the customer walks in and wants to move the logo from the left side to the right side, you now have 10… 20… 100 individual screens to move it on. I’d love to be able to designate a screen as the ‘parent’.

… wow, that’s about all I can come up with!

Comic Sans Makes Me Cry… and How to Fix it

One thing that really threw me about Mockups was the fact that it used Comic Sans. Seriously? Thankfully, I’m not the first to ask this and they’ve provided a stellar walkthrough on how to use whatever font you desire.

My configuration file looks like:

 <fontFace>Segoe UI</fontFace>

I opted for Vista/Windows 7’s clean Segoe UI font.  Having my mockups look like my scribbles on paper wasn’t that important to me.  Professional earns more points than crayons on a Big Chief tablet. 😉


I’ve found myself, since picking up Balsamiq Mockups, simply using it for everything.  Sitting in planning meetings and sketching out reports, screens, even data flows.  With a little creativity, you can sketch out almost anything.  For the things you can’t, I’m ‘crayoning’ it using my Bamboo tablet and drawing out what I want–then importing it as an image. Sweet.

If you haven't checked out Balsamiq, give it a run; it's an amazing tool.


Automating Extracts Using External .SQL Files and PowerShell

September 29, 2009 Comments off

Rather than rely on a system task, I wanted to be able to kick off an export of one of our SQL databases on the fly, have it generate a CSV, then email the CSV to one of our developers.

Sounds like a task for PowerShell!

Why PowerShell? Well, frankly, because moving jobs around and such on our mixed-mash of SQL versions and servers is a bit annoying and I wanted to see if it worked.

So, to start off, I have a file, call it export_data.sql that contains all of the logic of my SQL query and the format I require.

I’ll also be using the Microsoft SQL query script I discussed in this blog post.

The tricky part is reading in the .sql file as a single “entity”—not the array of rows that get-content usually provides.  Thankfully, there is an easy way to do it.

@(gc 'export_data.sql' -readcount 0)

According to the PowerShell documentation, the readcount parameter serves two purposes: one is to set the number of lines to read at a time, the second (when set to zero) is to pull in all of the lines at once.  Great, that’s exactly what we need.
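A quick sketch of the difference, using the same file:

```powershell
# Normally Get-Content emits one object per line, so the pipeline
# runs once for each line of the file:
gc 'export_data.sql' | % { $_ }

# With -readcount 0, all the lines arrive as a single batch, so the
# pipeline runs once with the whole file -- which is what we want when
# the query must go to the server as one statement:
@(gc 'export_data.sql' -readcount 0) | % { "got $($_.Count) lines in one pass" }
```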

From there, it's simply feeding our query to the sql-query function I wrote and exporting to CSV.  Nicely enough, CSV exports are built into PowerShell!

The final command looks something like this:

@(gc 'export_data.sql' -readcount 0) |
   % { sql-query server db $_ } |
   export-csv dump.csv

I could then add another step to mail the CSV using the built-in Send-MailMessage cmdlet.
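A sketch of that mail step (the addresses and SMTP server here are hypothetical placeholders):

```powershell
# Hypothetical recipients and SMTP host -- substitute your own.
Send-MailMessage -To '' `
    -From '' `
    -Subject 'Data extract' `
    -Body 'Requested CSV attached.' `
    -Attachments 'dump.csv' `
    -SmtpServer ''
```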

❤ PowerShell.

Querying Oracle using PowerShell

September 1, 2009 4 comments

Yesterday, I wrote up a quick bit of code to query out our SQL Servers.  Initially, I wanted a speedy way to hit, parse, and report back log4net logs in our “server status” scripts.

Well, never one to leave something alone, I started tinkering with Oracle support.  In our enterprise, most of our key systems sit on Oracle and there are SEVERAL opportunities for quick data retrieval routines that could help out in daily work.

Plus, doing an Oracle query in PowerShell beats the five-minute process of cranking up Oracle SQL Developer for a simple, single query. 🙂

CODE: The full source of this is available here on

param (
    [string]$server = ".",
    [string]$instance = $(throw "a database name is required"),
    [string]$query = $(throw "a query is required")
)

[System.Reflection.Assembly]::LoadWithPartialName("System.Data.OracleClient") | out-null

$connection = new-object System.Data.OracleClient.OracleConnection("Data Source=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=$server)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=$instance)));User Id=USER_ID;Password=PASSWORD;")

$set = new-object System.Data.DataSet
$adapter = new-object System.Data.OracleClient.OracleDataAdapter($query, $connection)
$adapter.Fill($set) | out-null

$table = $set.Tables[0]

# return the table
$table

I chose to use the OracleClient library for simplicity's sake.  I could have used ODP.Net; however, that would make my scripts FAR less portable.  Since OracleClient isn't loaded by default in PowerShell, this script loads it.  In addition, I chose to use the TNS-less connection string since I don't typically keep a 'tnsnames.ora' file on my computer.  This further adds to the portability of the script.
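For comparison, the same connection details expressed as a tnsnames.ora entry (with a hypothetical alias name) would look something like this:

```
MYDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = myserver)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = myinstance))
  )
```

Skipping the alias file means the script carries everything it needs to connect.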

Past that and the change from SqlClient to OracleClient, the rest of the code is the same from the prior example.

Dealing With Empty Strings and Nulls

One thing that I did run across that differed between Oracle and Microsoft SQL revolved around how empty strings were dealt with when parsing using PowerShell.


oq "SELECT * FROM Schools"

  ----        -----------------------
  100 School
  102 School
  112 School
  140 School


Now, what if I wanted to see just the schools missing a principal_email_address?  I'd just rewrite my SQL query, right?  Yeah, probably, but let's do it in PowerShell for the sake of argument (and some scripting practice).

oq "SELECT * FROM Schools" | ? { $_.principal_email_address -eq "" }

No results.

What? Why not?  I see two in my last query.  Unfortunately, dealing with "nulls" and empty strings can get a bit tricky when pulling from database data.  With Microsoft SQL, a text-based column (varchar, ntext, etc) seems to handle -eq "" just fine, but Oracle is less than pleased.  @ShayLevy suggested -eq [string]::Empty but that didn't pull through either.

From a prior experiment, I also tried -eq $null and was greeted with something very different: it returned all results. Meh.

Randomly, I tried -like $null and it worked. Well, that's interesting.  So the value isn't empty in Oracle, but it is "like" a null.  After a bit more digging, I discovered that the real test is -eq [DBNull]::Value.

oq "SELECT * FROM Schools" | ? { $_.principal_email_address -eq [DBNull]::Value }


 ----        -----------------------

100 School
102 School

It makes sense… but more testing is required to see which is more reliable for a wide variety of data types.  I like the concept of “like null” to simulate “string empty or null”.  Further testing required. 🙂
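One way to hedge against both behaviors is a tiny predicate that treats DBNull, $null, and empty strings uniformly.  This is a sketch of my own, not from the original scripts, and the Test-BlankField name is made up:

```powershell
# Hypothetical helper (not part of the oq script): returns $true when a
# database field is DBNull, $null, or an empty string.
function Test-BlankField ($value) {
    ($value -eq [DBNull]::Value) -or [string]::IsNullOrEmpty($value)
}

# Usage sketch: filter rows whose email is "blank" in either the
# SQL Server or the Oracle sense.
# oq "SELECT * FROM Schools" | ? { Test-BlankField $_.principal_email_address }
```

That keeps the pipeline filter readable while the database-specific weirdness stays in one place.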


Querying SQL Server using PowerShell

August 31, 2009 2 comments

The great thing about PowerShell is direct access to objects.  For my apps, database connectivity is happily handled by NHibernate; however, that doesn’t mean we can’t take advantage of good old System.Data.SqlClient for our PowerShell scripting.

CODE: The full source of this is available here on

param (
    [string]$server = ".",
    [string]$instance = $(throw "a database name is required"),
    [string]$query = $(throw "a query is required")
)

$connection = new-object System.Data.SqlClient.SqlConnection `
    ("Data Source=$server;Initial Catalog=$instance;Integrated Security=SSPI;")
$adapter = new-object System.Data.SqlClient.SqlDataAdapter ($query, $connection)
$set = new-object System.Data.DataSet
$adapter.Fill($set) | out-null

$table = $set.Tables[0]

# return the table
$table

Not too long or challenging—it's mostly working to instantiate a quick SQL connection and pass in your query.  I even considered plugging in a check on the $query parameter to ensure it began with SELECT so I wouldn't do accidental damage to a system. Maybe I'm just paranoid. 😉

What this little snippet allows me to do is quickly add log4net checking into some of my server monitoring PowerShell scripts.

query sqlServer myDatabase "Select count(id), logger from logs group by logger" | format-table -autosize

Notice I didn't include the format-table command in my main query script.  Why?  I wanted to keep the flexibility to select, group, and parse the information returned by my query.  Unfortunately, the format commands break that flexibility if they're run before a manipulation command.  Adding in "ft -a" isn't difficult in a pinch.
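The breakage is easy to demonstrate: format-table emits formatting records rather than the original rows, so any Where-Object downstream of it has no properties to match on.  A quick sketch (the sample objects are made up for illustration):

```powershell
# Three fake "log summary" rows standing in for real query results.
$rows = 1..3 | % { new-object psobject -property @{ Id = $_; Logger = "App$_" } }

# Filtering AFTER format-table finds nothing: the pipeline now carries
# formatting objects that have no 'Id' property.
$broken = $rows | format-table | ? { $_.Id -gt 1 }

# Filtering first and formatting last works as expected.
$rows | ? { $_.Id -gt 1 } | format-table -autosize
```

That's why the format commands belong at the very end of the pipeline.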


Quick and easy…

Other uses:

  • Customer calls up with a question about data—save time and do a quick query rather than waiting for Management Studio to wind up.
  • Keep tabs on database statistics, jobs, etc.
  • and more…

Digging into the Event Log with PowerShell

August 25, 2009 Comments off

There are a few of our applications that haven’t been converted over to log4net logging so their events still land in the good ol’ Windows Event Log.  That’s fine and was fairly easy to browse, sort, and filter using the new tools in Windows Server 2008.

I’ve found a bit better tool, however, over the past few hours for digging into the logs on short notice and searching—obviously, PowerShell.

Full source for this can be found here.

I wanted to be able to quickly query out:

  • the time – to look at trending,
  • the user – trending, and filtering if I have them on the phone,
  • the URL – shows both the application and the page the problem is occurring on,
  • the type – the exception type for quick filtering,
  • the exception – the core of the issue,
  • the details – lengthy, but can be ever so helpful even showing the line number of the code in question.

param ([string]$computerName = (gc env:computername))

function GetExceptionType($type, $logEvent)
{
    if ($type -ne "Error") { $logEvent.ReplacementStrings[17] }
    else {
        $rx = [regex]"Exception:.([0-9a-zA-Z].+)"
        $matches = $rx.match($logEvent.ReplacementStrings[0])
        $matches.Groups[1].Value
    }
}

function GetException($type, $logEvent)
{
    if ($type -ne "Error") { $logEvent.ReplacementStrings[18] }
    else {
        $rx = [regex]"Message:.([0-9a-zA-Z].+)"
        $matches = $rx.match($logEvent.ReplacementStrings[0])
        $matches.Groups[1].Value
    }
}

get-eventlog -log application -ComputerName $computerName |
    ? { $_.Source -eq "ASP.NET 2.0.50727.0" } |
    ? { $_.EntryType -ne "Information" } |
    select `
  Index, EntryType, TimeGenerated, `
  @{Name="User"; Expression={$_.ReplacementStrings[22]}}, `
  @{Name="Url"; Expression={truncate-string $_.ReplacementStrings[19] 60 }}, `
  @{Name="Type"; Expression={GetExceptionType $_.EntryType $_ }}, `
  @{Name="Exception"; Expression={GetException $_.EntryType $_ }}, `
  @{Name="Details"; Expression={$_.ReplacementStrings[29]}}

The code itself is probably pretty overworked and, I hope, can be refined as time goes on.

The two helper functions, GetExceptionType and GetException, exist because (it seems) Warning and Information events store their details in discrete replacement strings, while Errors cram theirs into one HUGE blob of text that needs to be parsed.  Those helpers provide that switch logic.

The get-eventlog logic itself is pretty straightforward:

  1. Open up the ‘Application’ EventLog on the specified computer,
  2. Filter only “ASP.NET 2.0.50727.0” sourced events,
  3. Exclude “Information” type events,
  4. Select 3 columns and generate 5 columns from expressions.

The great advantage is I can then take this file and “pipe” it into other commands.

get-aspnet-events webserver1 | select user, url, type | format-table -auto

User               Url                               Type
----               ---                               ----
domain\dlongnecker     PreconditionException
domain\dlongnecker     PreconditionException
domain\dlongnecker       PostconditionException
domain\dlongnecker       AssertionException


get-aspnet-events webserver1 | ? { $_.user -like "*dlongnecker" }

The possibilities are great—and a real time saver compared to hitting each server and looking through the GUI tool.

The code also includes a helper method I created for truncating strings available here via codepaste.  If there’s built-in truncating, I’d love to know about it.
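For reference, a minimal sketch of what such a truncation helper might look like.  This is my reconstruction, not the posted codepaste source, so treat the details as assumptions:

```powershell
# Hypothetical reconstruction of a truncate-string helper: returns the
# original string if it fits, otherwise the first $length characters.
function truncate-string ([string]$value, [int]$length) {
    if ($value.Length -le $length) { $value }
    else { $value.Substring(0, $length) }
}
```

It's the same trick used above to keep long URLs from blowing out the table layout.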


Using Git (and everything else) through PowerShell

August 21, 2009 5 comments

After a discussion on Stack Overflow a few days ago (and hopefully a useful answer), I got to thinking a bit about how I use PowerShell.  It may be a bit geekish, but PowerShell starts up on Windows startup for me.  The prompt is almost always open on a second monitor–ready for whatever task I may need.

As the SO post mentioned, I also use PowerShell to connect to my Git repositories.  At the office, it has a few more customizations to hash out against our *shudder* SourceSafe *shudder* repositories, but that's a different post.

For now, I wanted to walk through how my profile script is set up in a bit more detail than the SO post.

Creating a Profile Script

UPDATE: The full source code (plus a few extras) for this article can be found here:

A profile script is essentially a “startup” script for your PowerShell environment. 

By default (perhaps a registry key changes this), it's located in %userprofile%\Documents\WindowsPowerShell and is aptly named Microsoft.PowerShell_profile.ps1.  The naming mismatch between "WindowsPowerShell" and "Microsoft.PowerShell" is a bit annoying, but not a big problem.

The file is just plain text, so feel free to use your editor of choice or PowerShell ISE (Windows 7, Windows 2008 R2) for some fancy content highlighting.
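If the file doesn't exist yet, PowerShell won't create it for you.  The built-in $profile variable holds the expected path, so a sketch like this (run from any prompt) will bootstrap one:

```powershell
# $profile holds the path to the current user's profile script for this host.
# Create the file (and its folder) if it doesn't exist yet, then open it.
if (!(Test-Path $profile)) {
    New-Item -ItemType file -Force $profile
}

notepad $profile
```

The -Force switch is what creates the intervening WindowsPowerShell folder if it's missing.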

What goes in here?

As far as I can tell, the profile is a great place to initialize global customizations:

  • environmental variables,
  • paths,
  • aliases,
  • functions that you don’t want extracted to .ps1 files,
  • customizations to the console window,
  • and, most importantly, the command prompt.

The Console

I use Console2 rather than the standard PowerShell prompt.  Console2 is an amazing open source alternative to the standard console and includes features such as ClearType, multiple tabs, and more.  Check it out.

I also use Live Mesh, so there are a few things in my profile that are unnecessary for most users.  Live Mesh is an online synchronization service… so my PowerShell scripts (amongst other things) stay synced between my home and work environments.

My PowerShell Prompt At Startup

Preparing the Environment

My profile script starts off by setting up a few global variables to paths.  I use a quick function to set up the parameters based on the computer I'm currently using.

# General variables

$computer = get-content env:computername

function ReadyEnvironment (
    [string]$sharedDrive,
    [string]$userName,
    [string]$computerName)
{
    set-variable tools "$sharedDrive\shared_tools" -scope 1
    set-variable scripts "$sharedDrive\shared_scripts" -scope 1
    set-variable rdpDirectory "$sharedDrive\shared_tools\RDP" -scope 1
    set-variable desktop "C:\Users\$userName\DESKTOP" -scope 1

    Write-Host "Setting environment for $computerName" -foregroundcolor cyan
}

switch ($computer)
{
    # "WORK-PC" and "HOME-PC" are placeholder hostnames; substitute your own.
    "WORK-PC" {
        ReadyEnvironment "E:" "dlongnecker" $computer ; break }

    "HOME-PC" {
        ReadyEnvironment "D:" "david" $computer ; break }

    default {
        Write-Host "Unrecognized computer; environment not set." ;
        break }
}
Easy enough.  I'm sure I could optimize this a bit more, but it works.  Again, this wouldn't be necessary on a single computer, but since I use Live Mesh and the same PowerShell profile on multiple computers, this keeps my paths in check.

The second step is to modify the $PATH environment variable to point to my scripts and Git, as well as to add a new $HOME variable to satisfy Git's needs.

# Add Git executables to the mix.
[System.Environment]::SetEnvironmentVariable("PATH", $Env:Path + ";" + (Join-Path $tools "\PortableGit-\bin"), "Process")

# Add our scripts directory to the mix.
[System.Environment]::SetEnvironmentVariable("PATH", $Env:Path + ";" + $scripts, "Process")

# Set up HOME so that Git doesn't freak out.
[System.Environment]::SetEnvironmentVariable("HOME", (Join-Path $Env:HomeDrive $Env:HomePath), "Process")

Customizing the Console Prompt

The ‘prompt’ function overrides how the command prompt is generated and allows a great deal of customization.  As I mentioned in the SO post, the inspiration for my Git prompt comes from this blog post.

I’ve added quite a few code comments in here for reference. 

function prompt {

    $status_string = ""

    # check to see if this is a directory containing a symbolic reference;
    # fails (gracefully) on non-git repos.
    $symbolicref = git symbolic-ref HEAD

    if ($symbolicref -ne $NULL) {

        # if a symbolic reference exists, snag the last bit as our
        # branch name, e.g. "[master]"
        $status_string += "GIT [" + `
            $symbolicref.substring($symbolicref.LastIndexOf("/") + 1) + "] "

        # grab the differences in this branch
        $differences = (git diff-index --name-status HEAD)

        # use a regular expression to count up the differences.
        # M`t, A`t, and D`t refer to M {tab}, etc.
        $git_update_count = [regex]::matches($differences, "M`t").count
        $git_create_count = [regex]::matches($differences, "A`t").count
        $git_delete_count = [regex]::matches($differences, "D`t").count

        # place those counts into our string.
        $status_string += "c:" + $git_create_count + `
            " u:" + $git_update_count + `
            " d:" + $git_delete_count + " | "
    }
    else {
        # Not in a Git repository; must be plain PowerShell!
        $status_string = "PS "
    }

    # write out the status_string with the appropriate color.
    # prompt is done!
    if ($status_string.StartsWith("GIT")) {
        Write-Host ($status_string + $(get-location) + ">") `
            -nonewline -foregroundcolor yellow
    }
    else {
        Write-Host ($status_string + $(get-location) + ">") `
            -nonewline -foregroundcolor green
    }

    return " "
}

The prompts are then color coded, so I can keep track of where I am (as if the really long prompt didn’t give it away).


Now, with our prompts and our pathing setup to our Git directory, we have all the advantages of Git—in a stellar PowerShell package.

NOTE: I would like to point out that I use PortableGit, not the installed variety.  Since Git also moves back and forth across my Live Mesh, it seemed more reasonable to use the portable version.  I don't believe, however, there would be a difference as long as the \bin directory is referenced.

Setting up Aliases—The Easy Way

Brad Wilson's implementation of find-to-set-alias is brilliant.  Snag the script and get ready for aliasing the easy way.  I keep my most common tools aliased—Visual Studio, PowerShell ISE, and Notepad2.  I mean, is there anything else?  (Well, yes, but I have Launchy for that).

Using find-to-set-alias is easy—provide a location, an executable, and an alias name:

find-to-set-alias 'c:\program files*\Microsoft Visual Studio 9.0\Common7\IDE' devenv.exe vs

find-to-set-alias 'c:\windows\system32\WindowsPowerShell\v1.0\' PowerShell_ISE.exe psise

find-to-set-alias 'c:\program files*\Notepad2' Notepad2.exe np

Helpers – Assembly-Info

After getting tired of loading up System.Reflection.Assembly every time I wanted to see what version of a library I had, I came up with a quick script that dumps out the name of the assembly and its file version.


param (
    [string]$file = $(throw "An assembly file name is required.")
)

$fullpath = (Get-Item $file).FullName
$assembly = [System.Reflection.Assembly]::LoadFile($fullpath)

# Get the name and version, and display the results.
$name = $assembly.GetName()
$version = $name.version

"{0} [{1}]" -f $name.name, $version

With this, running assembly-info NHibernate.dll returns:

NHibernate []


Taking it a step further, I created a quick function in my profile called 'aia' ('assembly info all') that runs assembly-info on every .dll in the current directory.

function aia {
    get-childitem | ?{ $_.extension -eq ".dll" } | %{ ai $_ }
}

Now, in that same directory, I get:

Antlr3.Runtime []
Castle.Core []
Castle.DynamicProxy2 []
FluentNHibernate []
Iesi.Collections []
log4net []
Microsoft.Practices.ServiceLocation []
Moq [4.0.812.4]
MySql.Data []
NHibernate.ByteCode.Castle []
NHibernate []
System.Data.SQLite []
System.Web.DataVisualization.Design []
System.Web.DataVisualization []
xunit []


Helpers – Visual Studio “Here”

This was created totally out of laziness.  I have already setup an alias to Visual Studio (‘vs’); however, I didn’t want to type “vs .\projectName.sln”.  That’s a lot.  I mean, look at it. 

So, a quick, and admittedly dirty, method to either:

  1. Open the passed solution,
  2. If multiple .sln exist in the directory, open the first one,
  3. If only one .sln exists, open that one.

I don’t often have multiple solution files in the same directory, so #3 is where I wanted to end up.

function vsh {

    param ($param)

    if ($param -ne $NULL) {
        "Opening {0} ..." -f $param
        vs $param
        return
    }

    "A solution was not specified; opening the first one found."
    $solutions = get-childitem | ?{ $_.extension -eq ".sln" }

    if ($solutions.count -gt 1) {
        "Opening {0} ..." -f $solutions[0].Name
        vs $solutions[0].Name
    }
    else {
        "Opening {0} ..." -f $solutions.Name
        vs $solutions.Name
    }
}



That's about the gist of it.  The challenge (and fun part) is to keep looking for ways to improve common processes using Git.  As those opportunities arise, I'll toss them out here. 🙂


Tips for Booting/Using VHDs in Windows 7

August 6, 2009 3 comments

Both Windows 7 and Windows Server 2008 R2 (aka Windows 7 Server) support booting directly from a VHD.  This is FANTASTIC, AWESOME, and other bolded, all-caps words.  For the full details, check out Hanselman’s handy post.

I’m a HUGE user of differencing disks.  My layout follows the basic structure of:

  • system (parent/dynamically expanding)
    • environment (child of system/differencing)
      • task (child of environment/differencing)
  • Windows Server 2008 R2 (2008r2.vhd)
    • VS2008 + tools (vs2008.vhd)
      • “production” work (projectName.vhd)
      • freelance/open source work (dev1.vhd)
      • tinkering (dev3.vhd)
    • VS2010 + tools (vs2010.vhd)
      • tinkering (dev2.vhd)
  • Windows 7 (win7.vhd)
    • Simulated client “a” environment (client-a.vhd)
    • Simulated client “b” environment (client-b.vhd)

The great thing is, I have a single "2008r2.vhd" and "win7.vhd" as a baseline.  A customer calls and needs a quick mockup?  I can instantiate a new development environment in moments (or quicker via PowerShell scripts).  Who really wants to walk through reinstalling the operating system again anyway?  Not me.
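The "via PowerShell scripts" part can be as simple as generating a DISKPART script on the fly.  This is a hypothetical sketch of my own (the function name and paths are made up), not one of the scripts from the post:

```powershell
# Hypothetical helper: builds the DISKPART command that creates a
# differencing VHD from a (read-only) parent image.
function Get-ChildVhdScript ([string]$parent, [string]$child) {
    "create vdisk file=`"$child`" parent=`"$parent`""
}

# To actually run it (requires an elevated console):
#   Get-ChildVhdScript "d:\vm\basedrives\2008r2.vhd" "d:\vm\dev4.vhd" |
#       set-content "$env:temp\mkvhd.txt"
#   diskpart /s "$env:temp\mkvhd.txt"
```

Splitting the string-building from the diskpart invocation keeps the helper easy to test and log.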

With that, here’s a few tips for situations I ran into building up my environment.

Q: I had a series of VHDs from [Virtual Server 2005 R2 | Virtual PC 2007 | The Interwebz] and they won’t work.
Correct.  Only VHDs from Hyper-V or created directly in Windows 7 or Server 2008 R2 (that R2 part is important) using DISKPART are bootable.

Q: My system will not boot after installing!  I just get a BSOD!

If you can catch the BSOD message or press F8 and turn off automatic reboot, the error reads:

"An initialization failure occurred while attempting to boot from a VHD.  The volume that hosts the VHD does not have enough free space to expand the VHD."

What?  Huh?  We set up dynamically expanding VHDs… why would it need all of that free space?  Well, it seems that to boot from a VHD, Windows expands it to its full maximum capacity (zero-filling, I assume, since I don't see any latency on boot-up).  If you're like me, you probably set your "dynamically expanding disk" to a wild maximum capacity, such as 200GB.  Even if your VHDs never get close to that size, the host volume must still have that much free space, and it's likely that the parent/child VHD chains are split across multiple partitions/spindles.

That’s a gotcha.

Lesson: Be prudent with how you size your VHDs.  Ensure you have room for your intentions, but also ensure you have enough physical capacity.

Here’s how to fix the problem without totally reinstalling your VHD.

  1. Boot into your parent operating system and attach the VHD as a partition using either DISKPART or the Disk Management GUI.
    1. select vdisk file=”d:\vm\basedrives\2008r2.vhd”
    2. attach vdisk
  2. Shrink the VHD using the Disk Management GUI (it’s just easier, trust me).  If your original maximum capacity was 200GB and you only have 150GB free, set it to 120GB or something reasonable.
  3. Use the free VHDResizer tool to trim off the excess “maximum capacity” of your newly shrunken VHD.  You can get VHDResizer here. Set the maximum size to the same size as your new partition size.
    1. VHDResizer will require you to specify a new name for the resized VHD.  After it’s done, rename the old VHD to “file_old.vhd” and the new VHD to the same as your old file to ensure the boot manager picks up the VHD.
  4. Restart and continue along with configuring your new system.
Q: The Parent Disk is Complete.  How do I create a Differencing Disk?

Creating a differencing disk is pretty easy: a few commands in DISKPART from an administrator-privileged console window and you're set.

Before doing any of this, be sure that you've defragmented and run the precompactor in your VHD.  This cleans up the data and zeros out the free space so that it compacts nicely.  If you don't want to install Virtual Server to get the ISO image for the precompactor (though I recommend this just to be safe), you can download an 'extracted' version; here's a direct link to the precompact.exe file.
  1. Using DISKPART, select your parent VHD, compact it, and create a child (differencing) disk.
    1. select vdisk file=”d:\vm\basedrives\2008r2.vhd”
    2. compact vdisk
    3. create vdisk file=”d:\vm\vs2008.vhd” parent=”d:\vm\basedrives\2008r2.vhd”
  2. Run bcdedit /v and grab the {guid} of your existing VHD boot loader.
  3. Use BCDEDIT to replace the ‘device’ and ‘osdevice’ VHD paths.
    1. bcdedit /set {guid} device vhd=[LOCATE]\vm\vs2008.vhd
    2. bcdedit /set {guid} osdevice vhd=[LOCATE]\vm\vs2008.vhd
  4. Browse (using Windows Explorer, command window, etc) to your original, newly parent VHD (2008r2.vhd in this example) and mark it as read-only for safe keeping.
  5. Reboot and load up your new differencing disk.
Quick note:  In BCDEdit, the [LOCATE] tag is super; it allows the boot loader to FIND the location of the file rather than you specifying it.  This is great if your drive letters tend to bounce around (which they will… a bit).

Be aware that the earlier caveat still applies: your VHDs will expand to their full size at boot.  You now, however, have the static size of your parent VHD plus the "full size" of your new differencing disk (which inherits the parent's maximum size).  If your parent is 8GB and the maximum size is 120GB, you're now using 128GB, not 120GB.  Keep that in mind as you chain differencing disks. 🙂

Q: DVDs are annoying.  I can mount VHDs, why can’t I mount ISOs?

Who knows.  At least with Windows 7, we can actually BURN ISO images… much like 1999.  In either case, I recommend tossing SlySoft’s Virtual CloneDrive on your images (and your host).  It’s fast, mounts ISOs super easy, and saves a TON of time.

Configuring Oracle SQL Developer for Windows 7

I’m fully Vista-free now and loving it; however, Oracle SQL Developer has (once again) decided to simply be annoying to configure.

<rant>Yes, I realize Oracle hates Microsoft.  Girls, you’re both pretty—please grow up.</rant>

Anyway, after a bit of hunting, I think I've found a good mix for those of us who love SQL Developer for coding and testing, but don't "use" a lot of the Oracle proprietary junk features that come with it.

Originally, I was going to include a couple of the configuration files; however, they're spread out EVERYWHERE and, frankly, I can't find them all. 🙁  I also can't figure out a way to import settings (without blowing my settings away first).

File Paths

As I mentioned before, some of the configuration files are spread out—everywhere.  Here’s where the most common files are located.

sqldeveloper.conf – <sqldeveloper folder>\sqldeveloper\bin\

ide.conf – <sqldeveloper folder>\ide\bin\

User-Configured Settings – %appdata%\SQL Developer\

I prefer to make my modifications in sqldeveloper.conf; however, a lot of the resources that pop up on Google make them in ide.conf.  Why?  I’m not sure.  sqldeveloper.conf simply CALLS ide.conf.  Meh.

Fixing Memory Consumption on Minimize

I found a reference to JDeveloper (another Java utility) that discussed how JDeveloper (and similarly, SQL Developer) pages everything when you minimize the window.

To fix this, open up sqldeveloper.conf and add the following line:

AddVMOption -Dsun.awt.keepWorkingSetOnMinimize=true

Fixing Aero Basic Theme

Tired of your IDE swapping back to Aero Basic whenever you launch SQL Developer?  Me too.  For now, Oracle reports that SQL Developer doesn’t support the full Aero Theme… or does it?

To enable Aero support (or at least keep it from bouncing back to Aero Basic), open up sqldeveloper.conf and add the following line:

AddVMOption -Dsun.java2d.noddraw=true

The Oracle forums also recommend trying the following line:

AddVMOption -Dsun.java2d.ddoffscreen=false

That option, however, never resolved the issue for me.  Your mileage may vary.

Cleaning Up The UI

The default UI leaves a lot to be desired for Oracle SQL Developer.  Here’s a few UI tips to try out.  These settings are found under Tools > Preferences.

Change the Theme – Environment > Theme. 

I like Experience Blue.  It’s clean, simple, and goes well with Windows 7’s look and feel.

Change Fonts – Code Editor > …

There are quite a few fonts that can be configured.  Here’s what I changed:

Code Insight – Segoe UI, 12
Display – check ‘Enable Text Anti-Aliasing’
Fonts – Consolas, 11
Printing – Consolas, 10

Disable Unnecessary Extensions – Extensions > …

Honestly, I don't use ANY of the extensions, so I disabled everything as well as unchecking 'Automatically Check for Updates'.  I've noticed that load time for the UI is insanely fast now (well, insanely fast for a Java app on Windows).

Window Locations

The only thing that I can’t figure out how to fix is the window location and placement.  Example: When you open a new worksheet, the results area is not visible (you have to drag that frame up each time).  That annoys me to no end and I can’t find a place to ‘save current window layout’ or similar.  Ideas?

That’s it!

With that, SQL Developer loads up quickly, connects, and displays just fine in Windows 7.