Archive for the ‘Enterprise Architecture’ Category

The Post-Certification Era?

February 13, 2012

Oh look, starting off with a disclaimer. This should be good!

These are patterns I’ve noticed in our organization over the past ten years–ranging from hardware to software to development technical staff. These are my observations, experiences with recruiting, and a good dash of my opinions. I’m certain there are exceptions. If you’re an exception, you get a cookie. 🙂

This isn’t specifically focused on Microsoft’s certifications. We’re a .NET shop, but we’re also an Oracle shop, a Solaris shop, and a RHEL shop. So many certification opportunities, so few training dollars.

Finally, I’ll also throw out that I have a few certifications. When I made my living as a full-time consultant and contractor and was just getting started, they were the right thing to do (read on for why). Years later … things have changed.

Evaluating The Post-Certification Era

In today’s development ecosystem, certifications seem to play a nearly unmentionable role outside of college recruitment offices and general-practice consulting agencies. While certifications provide a baseline for those just entering the field, I rarely see established developers (read: >~2 years of experience) heading back to the courseware to seek a new certification.

Primary reasons for certifications: entry into the field and “saleability”.
Entry into the field – provides a similar baseline to compare candidates for entry-level positions.

Example: Hiring an entry-level developer vs. hiring an experienced enterprise architect. For an entry-level developer, a certification usually provides a baseline of skills.

For an experienced architect, however, past project experience, a core understanding of architecture practices, examples of work in open source communities, and scenario-based knowledge provide the best gauge of skills.

“Saleability” of certifications allows consulting agencies to “one up” other organizations, but those certifications usually don’t reflect the actual real-world skills necessary for implementation.

Example: We had a couple of fiascos years back with a very reputable consulting company staffed with certified developers who simply couldn’t wrap those skills into a finished product. We managed to bring the project back in-house and get our customers squared away, but it broke our working relationship with that consulting company.

Certifications provide a baseline for experience and expertise similar to college degrees.
Like in college, being able to cram for and pass a certification test is a poor indicator of (or replacement for) the ability to handle real-life situations.

Example: Many certification “crammers” and boot camps are available for a fee—rapid memorization and passing of tests. I do not believe these prepare you for actual situations AND they do not prepare you to continue expanding your knowledge base.

Certifications are outdated before they’re even released.
Test-makers and publishers cannot keep up with technology at its current pace. The current core Microsoft certifications focus on v2.0 technologies (though they are slowly being updated to 4.0).

I’m sure it’s a game of tag between the DevDiv and Training teams up in Redmond. We, as developers, push for new features faster, but the courseware can only be written/edited/reviewed/approved so quickly.

In addition, almost all of our current, production applications are .NET applications; however, a great deal of functionality is derived from open-source and community-driven projects that go beyond the scope of a Microsoft certification.

Certifications do not account for today’s open-source/community environment.
A single “Microsoft” certification does not cover a large majority of the programming practices and tools used in modern development.

Looking beyond Microsoft allows us the flexibility to find the right tool/technology for the task. In nearly every case, these alternatives provide a cost savings to the district.

Example: Many sites that we develop now feature non-Microsoft ‘tools’ from the ground up.

  • web engine: FubuMVC, OpenRasta, ASP.NET MVC
  • view engine: Spark, HAML
  • dependency injection/management: StructureMap, Ninject, Cassette
  • source control: git, hg
  • data storage: NHibernate, RavenDB, MySQL
  • testing: TeamCity, MSpec, Moq, Jasmine
  • tooling: PowerShell, rake

This doesn’t even take into consideration the extensive use of client-side programming technologies, such as JavaScript.

A more personal example: I’ve used NHibernate/FluentNHibernate for years now. Fluent mappings, auto mappings, insane conventions and more fill my day-to-day data modeling. NH meets our needs in spades and, since many of our objects talk to vendor views and Oracle objects, Entity Framework doesn’t meet our needs. If I wanted our team to dig into the Microsoft certification path, we’d have to dig into Entity Framework. Why would I want to waste everyone’s time?
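For a taste of what those fluent mappings look like, here’s a minimal sketch—one small class per entity. The entity, table, and column names are hypothetical stand-ins, not our actual schema:

```csharp
using FluentNHibernate.Mapping;

// Hypothetical entity map for a WebFile-style entity.
public class WebFileMap : ClassMap<WebFile>
{
    public WebFileMap()
    {
        Table("WEB_FILES");          // an Oracle table or vendor view
        Id(x => x.Id);               // primary key
        Map(x => x.FileName);        // simple property-to-column mappings
        Map(x => x.UploadedOn);
        References(x => x.Gallery);  // many-to-one association
    }
}
```

Auto mappings push this even further—conventions generate most of these maps for you.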

This same question applies to many of the plug-and-go features of .NET, especially since most certification examples focus on arcane things that most folks would look up in a time of crisis anyway and not on the meat and potatoes of daily tasks.

Certifications do not account for the current scope of modern development languages.
Being able to tell an integer from a string, and knowing when to call a certain method, crosses language and vendor boundaries. A typical Student Achievement project contains anywhere from three to six different languages—only one of them a Microsoft-based language.

Whether it’s Microsoft’s C#, Sun’s Java, JavaScript, Ruby, or any number of scripting languages implemented in our department–there are ubiquitous core skills to cultivate.

Cultivating the Post-Certification Developer

In a “Google age”, knowing how and why components optimally fit together provides far more value than syntax and memorization. If someone needs a code syntax explanation, a quick search reveals the answer. For something more destructive, such as modifications to our Solaris servers, I’d PREFER our techs look up the syntax–especially if it’s something they do once a decade. There are no heroes when a backwards bash flag formats an array. 😉

Within small development shops, such as ours, a large percentage of development value-added skills lie in enterprise architecture, domain expertise, and understanding design patterns–typical skills not covered on technology certification exams.

Rather than focusing on outdated technologies and unused skills, a modern developer and development organization can best be ‘grown’ through active community involvement. Active community involvement provides a post-certification developer with several learning tools:

Participating in open-source projects allows the developer to observe, comment, and learn from other professional developers using modern tools and technologies.

Example: Submitting a code example to an open source project where a dozen developers pick it apart and, if necessary, provide feedback on better coding techniques.

Developing a social network of professional developers provides an instant feedback loop for ideas, new technologies, and best practices. Blogging, and reading blogs, allows a developer to cultivate their programming skill set with a world-wide echo chamber.

Example: A simple message on Twitter about an error in a technology released that day can garner instant feedback from a project manager at that company—prompting email exchanges, telephone calls, and the steps necessary to resolve the problem directly with the developer who implemented the feature.

Participating in community-driven events such as webinars/webcasts, user groups, and open space discussions bolsters existing social networks, provides knowledge transfer of best practices and patterns on current subjects, and offers networking opportunities with peers in the field.

Example: Community-driven events provide both a medium to learn and a medium to give back to the community through talks and online sessions.  This helps build both a mentoring mentality in developers as well as a drive to fully understand the inner-workings of each technology.


While certifications can provide a bit of value—especially in getting your foot in the door—I don’t see many on the resumes coming across my desk these days. Most, especially the younger crowd, flaunt their open source projects, hacks, and adventures with ‘technology X’ as a badge of achievement rather than certifications. In our shop and hiring process, that works out well. I doubt it’s the same everywhere.

Looking past certifications in ‘technology X’ to long-term development value-added skills adds more bang to the resume, and the individual, than any finite-lived piece of paper.

Certified ScrumMaster – Workshop Reflections

August 20, 2008

Over the past couple of years, I’ve slowly added various agile practices into our workflow at the office.  Some have taken off really well, others—not so much.  Change is difficult, especially when few see any value in the change and even fewer would “use” the changes (in an organization where anyone can just say “no” to leadership and that’s accepted).

In spite of this, I was really excited to attend the Certified ScrumMaster workshop when it came to town this week—I’ve been trying to get to one for a few months now, but something always came up and travel was impossible.

The Class

The firehose was at full blast the entire workshop.  I honestly think we could have spread this out over a couple weeks and still had more to learn.  The workshop itself focused not only on the “mechanics” of Scrum, but our own experiences.  We spent a good deal of time describing our own issues, experiences, and ideas—and how scrum could be used in each situation.

It helped to have a very diverse (yet small) group to draw experiences from.  My experience deals with consulting and small groups; we had a few large-group implementers, a medium-group implementer, and a consultant.  I enjoyed seeing both the parallels and challenges from each side and storing a bit of it away—just in case I’m not in a “small group” forever. 🙂

Overall, the class helped me focus—opening up what I didn’t know and realizing that I have a long way to go until I can fully walk in the CSM shoes.  I appreciated Mike’s candidness in teaching both the “prescribed” way as well as sharing his insights into how this works in the real world.  That brief look into reality will help brace us better than anything we’ll find in a manual.

Most memorable experience: the video of us singing the SPAM song.  Seriously—if that ends up on YouTube…

The Instructor

Mike Vizdos, of Implementing Scrum fame, led an excellent session.  Using real world examples, pulling from the class, and forcing us to not only attend—but participate—made the entire experience worthwhile.  I hope that Mike will be teaching other courses in our area—I’d be very interested in taking another one of his courses.

What’s Next

While we’re not “certified” yet, my next goal is to earn it—even if it’s just in my own mind.  We have several projects coming in the next few months that vary in scope—some large, some small.  As I’ve learned in the past few days, focusing on organizational change and building on small successes can be key to the acceptance of Scrum in the enterprise, and I plan to work with that.  The more I can demonstrate the benefits and gain acceptance—even if it’s NOT at the speed I’d like—the better.

Even if you don’t plan on being a “ScrumMaster” but are using Scrum in your organization, I’d recommend taking one of Mike’s CSM courses if he’s in town.

Backwards TDD Command in VS2008?

March 28, 2008

From ScottGu’s list of links today, I read through John W. Powell’s “10 Tips to Boost Your Productivity with C# and Visual Studio 2008.” 

There are a few goodies in there that I’d never seen/heard of, but one just makes me twitch a bit and seems TOTALLY backwards (and I’ve never noticed the feature before now).

Create New Tests in VS2008 – There’s a context menu item available in code-behind files or classes called “Create Unit Tests…” that will prefab a unit test for the current class.

Oh really? 

I really do appreciate Microsoft’s continued efforts to work towards TDD and agile techniques—but doesn’t this seem backwards?  DDT?  Development Driven Tests? 

On the other hand, having a way to generate tests at all is pretty nifty—but what about those who may not understand how they’re generated or what the tests are actually doing?  Simply having “tests” isn’t the solution; having a concrete understanding of what’s being tested and the expected outcomes is.

I’d be far more impressed to click on my test while it’s in red and see a scaffold of implementation magically appear (which, can basically be done using ReSharper).


Using Generics to Update DataBoundControls – A Prototype

March 24, 2008

NOTE: This is a prototype, an idea, a random thought expressed aloud (well, in type).  The code explains a concept and isn’t “tested” or production worthy (in my opinion). 

Feedback is always appreciated. 🙂  This is also what happens when I have a week off work and come back with ‘ideas.’

I find that a few of my projects, like WebGallery2, share similar functionality.  On pages with GridViews, ListViews (which I’m slowly replacing my GridViews with), or other DataBoundControls, I follow a common theme for data binding:

If the data set will be cached or in session, is that session/cache null OR has the data set been explicitly modified?

  1. true – regenerate the data set and repopulate session/cache.
  2. false – read the current data set from session/cache to the control.

In a single instance, the code to do this might look like:

private void BindList(bool hasBeenModified, string sessionVariable)
{
    // If our session variable is null or the data has
    // been explicitly modified, then rebuild the session variable.
    if (Session[sessionVariable] == null || hasBeenModified)
    {
        Session[sessionVariable] =
            db.GetWebFilesByGalleryName(Request.QueryString["id"]);
    }

    resultsListView.DataSource =
        Session[sessionVariable] as List<WebFile>;
    resultsListView.DataBind();
}




This would then be called with:

BindList(true, "CurrentGallery");

However, this code really bothers me. 

The data source (the db.GetWebFilesByGalleryName method), data bound control (the resultsListView), and the type of the data source (List<WebFile>) are all hard coded.

How could this helper method use generics to add a bit of reusability?  What about when I want to use a GridView instead of a ListView, or have a List<Gallery>, List<String>, or string[] of information?

First Attempt

The first attempt works.  It takes the generics as anticipated and is rather easy to use.

protected void BindDataControl<TDataControlType, TEnumerableType>(
    bool hasBeenModified, string sessionVariable,
    TDataControlType dataControl, TEnumerableType dataSource)
    where TDataControlType : DataBoundControl, new()
    where TEnumerableType : IEnumerable
{
    // Add the dataSource to session.
    if (Session[sessionVariable] == null || hasBeenModified)
    {
        Session.Add(sessionVariable, dataSource);
    }

    // Read the data from session and bind the data control.
    dataControl.DataSource = Session[sessionVariable];
    dataControl.DataBind();
}




The method signature here has both generic type parameters and standard parameters.

  • TDataControlType has a generic constraint requiring it to be DataBoundControl or a subclass of it (GridView, ListView, etc.).
  • TEnumerableType requires the source to implement IEnumerable (List, Array, etc.).

Here are a few examples of using this bit of code:

var gv = new GridView();

BindDataControl<GridView, Array>(
    true,    // this is new, so build a session variable.
    "test",  // the session variable
    gv,      // our GridView instance
    new[]    // the data source, an array.
        { "hello", "world" });

When rendered, we have a simple GridView with our two data items.

GridView and Array data source.

What about a more complicated example using collections and a ListView? In the current build of the WebStorage2 project, the galleries are built in a similar method (see this post for more details).  I could just as easily replace the logic in Show.aspx’s Page_PreRender with:

BindDataControl<ListView, List<WebFile>>(
    true,
    "CurrentGallery",
    resultsListView,
    db.GetWebFilesByGalleryName(Request.QueryString["id"]));

So what’s the downfall to this method? 

Downfall #1: It’s a performance nightmare. AFAIK, when passing the result of a method call (GetWebFilesByGalleryName is a method on my LINQ DataContext), it is evaluated immediately.  So, with that in mind, hasBeenModified is irrelevant—the session may not NEED updating, but the method will still go out, search the database, and return the results.  That’s a bad deal.

What I don’t know—and am not sure how to check—is whether the lazy/delayed loading in LINQ would balance this out at all.  Ideas?

Second Attempt

The second attempt adds a bit more “generic” and a lot more reflection.  By adding a reference to System.Reflection, we can simply pass a string naming a “builder” method rather than the method itself—thus avoiding prefabricating the data source when it’s not really needed.

protected void BindDataControl<TDataControlType, TEnumerableType>(
    bool hasBeenModified,
    string sessionVariable,
    TDataControlType dataControl,
    string dataSourceMethod)
    where TDataControlType : DataBoundControl, new()
    where TEnumerableType : IEnumerable
{
    // If session is null or has been modified (thus invalidated),
    // update the session state.
    if (Session[sessionVariable] == null || hasBeenModified)
    {
        // Invoke the specified method that
        // creates our data source.
        var data = Page.GetType().InvokeMember(
            dataSourceMethod,
            BindingFlags.InvokeMethod |
            BindingFlags.NonPublic |
            BindingFlags.Instance,
            null, this, null);

        // Add it to session.
        Session.Add(sessionVariable, data);
    }

    // Read the data from session and bind the data control.
    dataControl.DataSource = Session[sessionVariable];
    dataControl.DataBind();
}





In this method, the Page.GetType().InvokeMember method iterates through the methods on the page, finds the one that matches the string name passed to it, and executes it. 

Then, with our “data” results, the rest is the same as the previous method.

Unfortunately, I can no longer pass the LINQ lookup directly because the scope of InvokeMember is limited to the calling page.  I’ll need to create another little method—called GetResults in this case—to do the query for me.

protected List<WebFile> GetResults()
{
    return db.GetWebFilesByGalleryName(Request.QueryString["id"]);
}

And the updated BindDataControl method:

BindDataControl<ListView, List<WebFile>>(
    true, "CurrentGallery", resultsListView, "GetResults");

Now, since our call no longer contains the method that fetches our data—simply a string—the results are not refetched each time the method is called, only when the requirements are met further in the code.

Downfall #1: This method requires an additional “helper” method on every page to fetch the data.  You can’t access methods outside of the page—or can you?

Downfall #2: What happens if you need to pass parameters through InvokeMember?  You CAN, but the syntax is nasty and becomes even more difficult if the parameters are not always in the same order (which they likely wouldn’t be if you’re using generics).
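For reference, here’s a sketch of what that parameter-passing syntax looks like (GetResultsById is a hypothetical helper that takes a single string):

```csharp
// Arguments travel as a positional object[] — the caller must know
// the target method's exact parameter order, which is what makes
// this approach so brittle. GetResultsById is hypothetical.
var data = Page.GetType().InvokeMember(
    "GetResultsById",
    BindingFlags.InvokeMethod |
    BindingFlags.NonPublic |
    BindingFlags.Instance,
    null,
    this,
    new object[] { Request.QueryString["id"] });
```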

Third Attempt

The third attempt looks more like the signature from Hell than a real method.  There had to be a way around the “stuck on this page” snafu of the second attempt… and there was: specify the class, too, using a generic.

protected void BindDataControl
    <TDataControlType, TEnumerableType, TDataSourceClass>(
    bool hasBeenModified, string sessionVariable,
    TDataControlType dataControl,
    string dataSourceMethod,
    TDataSourceClass dataSourceClass)
    where TDataControlType : DataBoundControl, new()
    where TEnumerableType : IEnumerable, new()
    where TDataSourceClass : class
{
    // If session is null or has been modified
    // (thus invalidated), update the session state.
    if (Session[sessionVariable] == null || hasBeenModified)
    {
        // Invoke the specified method that
        // creates our data source.
        var data = dataSourceClass.GetType().InvokeMember(
            dataSourceMethod,
            BindingFlags.InvokeMethod |
            BindingFlags.NonPublic |
            BindingFlags.Public |
            BindingFlags.Instance,
            null, dataSourceClass, null);

        // Add it to session.
        Session.Add(sessionVariable, data);
    }

    // Read the data from session and bind the data control.
    dataControl.DataSource = Session[sessionVariable];
    dataControl.DataBind();
}





Good grief. 

This method adds a third generic parameter—TDataSourceClass—as well as an additional constraint.  I’ve also added an additional BindingFlag—Public—since most of the methods in LINQ DataContext classes are declared public.

Rather than pulling from this.Page, we’re now calling InvokeMember on the parameter class and returning the results to the calling page.  There’s one other change—rather than passing “this” as the invocation target, we’re referencing the class passed along in the call—the dataSourceClass.

BindDataControl<ListView, List<WebFile>, WebGalleryDataContext>(
    true, "CurrentGallery", lv, "GetWebFiles", db);

Here we have two additions: the generic type parameter TDataSourceClass, which takes the LINQ data context type and defines the type of the dataSourceClass argument passed later in the call.

“Verbalized”, the method’s generics read: BindDataControl to a ListView with the expected data format of a List<WebFile> using the WebGalleryDataContext class. 

The parameters (which could be reordered to make better sense) read: the data is new or has changed, so store the results in “CurrentGallery” and return them to ‘lv’ (the ListView object on the web form).  Fetch the data with GetWebFiles from the instance of db (the WebGalleryDataContext object instantiated previously in the page).

Downfall #1: Methods are still required—you cannot pass a simple data source to the BindDataControl method.

Downfall #2: Additional coding is required on the BindDataControl methods to handle parameters.

This third attempt handles our most complex request—but what about the original “hello”/”world” array request?  It can be done, but, as previously mentioned, it requires extracting the array into a method outside of the call.


protected void Page_Load(object sender, EventArgs e)
{
    var gv = new GridView();

    BindDataControl<GridView, ArrayList, Page>(
        true, "junk", gv, "Get", this.Page);
}

protected ArrayList Get()
{
    return new ArrayList { "hello", "world" };
}


To reference methods that exist in the same code-behind page, this.Page and the Page class offer the correct class access.

So, is this the best way to do it?  Probably not!  How would you tidy this up or rewrite it?  I’m interested!
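One possible tidy-up—sketched and untested, like everything else in this post: pass a Func&lt;TEnumerableType&gt; instead of a method-name string. The delegate defers evaluation (avoiding the first attempt’s eager fetch) without any reflection, helper-method lookups, or parameter-ordering headaches:

```csharp
protected void BindDataControl<TDataControlType, TEnumerableType>(
    bool hasBeenModified, string sessionVariable,
    TDataControlType dataControl,
    Func<TEnumerableType> dataSourceFactory)
    where TDataControlType : DataBoundControl
    where TEnumerableType : IEnumerable
{
    // The factory is only invoked when the session entry is
    // missing or has been invalidated — no eager database hit.
    if (Session[sessionVariable] == null || hasBeenModified)
        Session[sessionVariable] = dataSourceFactory();

    dataControl.DataSource = Session[sessionVariable];
    dataControl.DataBind();
}

// Usage — the lambda captures the query without executing it:
BindDataControl<ListView, List<WebFile>>(
    false, "CurrentGallery", resultsListView,
    () => db.GetWebFilesByGalleryName(Request.QueryString["id"]));
```

Parameters stay compile-time type-checked, too—no strings, no object[].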

MOSS 2007 and Wishing I Was “In the Know”

January 17, 2008

A rant in the joys of communication and Microsoft Office SharePoint Server 2007 configuration.

It was determined that SSP (Shared Services Providers) would run internally on 8081.  We were told nothing ran on that port in our enterprise.  I then spent FAR too much time (not going to say how much, for the sake of my ego) fiddling with why I couldn’t get the SSP services to work in MOSS 2007.

We were lied to like the step-children we are…

After finally just hitting the root of the URL (/ssp/admin/ is the default shortcut), I discovered one of our enterprise “monitoring” suites had a web service running on that port… which means it’s running on that port on every server and desktop in our enterprise.  wtf.  Oh, and the people who were “in the know”… knew, but didn’t feel it was important or whatever to tell us.

So, now come the joys of ripping the SSP out of MOSS and reconfiguring it on a different port (and praying THAT one isn’t taken).


On a side note, I’ll have a new article posted up pretty soon.  The article goes into a bit of detail on setting up a small server farm with MOSS—everything from initial installation to setting up Active Directory profiles, search services, indexing, and updating to the latest Service Pack 1.  After the past week of dinking with this, I now see why Bill English’s MOSS 2007 Administrator’s Guide is 1155 pages and heavy enough to beat someone with.  Good book, by the way—just a bit difficult to follow as there’s no “order” to it.

[UPDATE: While out scraping ice off my car, I had an idea to help myself be more “in the know”.  I use TCPView quite often to see what processes are going where—well, TCPView shows the ports! Just do a bit of monitoring, see where different services are, and go for it.  The fancy alternative, of course, would be to set up Ethereal, set a filter for "tcp.port == {your port here}", and let it run for a day or so.]

So many .NET Frameworks – Which is which?

December 10, 2007

Over the past few weeks, Microsoft has been hammering out frameworks and structures for .NET development.

Microsoft Voltron Volta

Early last week, Microsoft Live Labs released the prototype of Microsoft Volta, a “let’s do AJAX and control the DOM without writing JavaScript” framework that is pretty cool.  I’ve whipped up a few examples that I’ll post up later today.

The dream is that the system manages the tiered architecture of the design—and automagically refactors your code on the fly.  Think of the syntax as an odd mix of Astoria (web-based data services), Nikhil Kothari’s Script# access to the DOM from C#-esque code, and Volta’s new twist of tags and attributes for async transactions—all mixed into one big application.

So, is this the platform of the future?  AJAX-ala-C# and full DOM control with automagic architecture separation?

And yes, I keep accidentally calling it Voltron.

Microsoft MVC Framework (ala 3.5 Extensions CTP)

Finally!  Late last night, ScottGu announced that the .NET 3.5 Extensions were available; read all about it and download the bits from his blog.

The MVC framework sits at the same .NET revolutionary stage as LINQ (in my opinion)—something that .NET has been missing for quite a while.  During my short stints with Java, the clear break between the layers of development made swapping forms in and out quite simple.  I followed a similar path with .NET development, but it required a bit more work and didn’t flow quite as easily (and you had to hand-code all the handlers).

So, is this the platform of the future?  True tiered separation at design time and a step closer to the Ruby/Java world?

Silverlight 1.1 2.0 Alpha

While Silverlight still boasts the 1.1 version number, the drastic changes between the two versions (and the rumors from Microsoft) mean it will probably be 2.0 when it hits CTP.  I would like to have a few good Silverlight examples to show (the prototypes are fun to play with); however, I’ve yet to get the VS2008 Extensions to work—I can create a project, but it won’t accept that I have Silverlight installed.  :(

So, is this the platform of the future?  Rich, interactive applications using XAML markup and XML templates?

.NET WebForms Development

This isn’t new, but the changes in .NET 3.5 for WebForms and AJAX mean we can’t discount this medium.  I, for one, am still more comfortable with this than the “new fangled” technologies and find the latest tools out there (VS2008, the new controls in .NET 3.5, and even the extensions that keep coming) are making WebForms easier and easier to create.  Also, while the “new” may be cool, we have a slew of existing applications that can’t be forgotten.

So, is this the platform of the future?  The pluggable framework and web forms that allow for the easy creation of anything from personal web sites to enterprise applications?


For now, my gut feeling is that the “platform of the future” is whatever works for the situation.  I can think of a few of our minor “web applications” that have no need for the complexity of Volta, MVC, or Silverlight; however, I cannot ignore the specific appeal of each of these technologies for future projects.  I do not believe they REPLACE our current WebForms—they simply add additional tools to the toolkit (and requirements for us to learn).

It keeps it new and exciting!


“Unsupported topology” in SharePoint 2003

October 2, 2007

Okay, so we’re reupping our SharePoint 2003 environment to a better environment.  Why not MOSS 2007?  Ehh, quite honestly, because that’s politics and would make kittens cry.  But, that’s not the point of this post.

Our planned setup was:

  • Two web-front end servers, each running the web service.
  • One search server, running the search service.
  • One index server, running the indexing and job services.
  • Two SQL servers, clustered and attached to terabytes of space in the SAN.

It looked good; the analyzer balked, but according to KB887164, Microsoft’s TechNet article on the subject, it’s a working, suggested scenario.

At the bottom of that article, it says:

Note After you follow these steps and then locate Configure Farm Topology (FarmTopologyView.aspx) on the Central Administration Web site, you may receive one or both of the following messages:

The current topology is not supported.
You selected an invalid server farm configuration.

These warnings may be safely ignored.

Yeah, well, that’s a lie. While running in an “unsupported topology,” you can’t add portal sites or perform backup/restore operations.  That’s not unsupported, that’s non-functioning.

I found an old blog post that has a thread regarding layout and that error.  The MVP helping the poster, Shane Young, responds with:

Technically it will let you setup this environment.  But then SharePoint will give you an error message telling you that you have an unsupported config.  Whenever SharePoint is displaying that error message it will not allow you to create new portals and it will not allow you to use its backup utility.  This is just how it works.

That’s just how it works, huh?  No recommendations or ideas?  Searching through Bill English’s SPS 2003 Resource Kit is of no use either.  In fact, the book shows an architecture of EXACTLY what I want (p. 116) with no mention of issues or failures.

If you keep digging (in the Resource Kit), page 282 lists the “supported” topologies.  For a large farm, it seems that there are three supported topologies:

  • small (1 all-in-one box + 1 sql source)
  • medium (2 all-in-one boxes + 1 sql source)
  • large (2–8 web boxes, 2–4 search boxes, 1–4 index boxes (plus 1 job) + 1+ sql source)

Notice that there are no in-betweens?  The medium can’t break apart the roles and the large requires a minimum of two search boxes. 

So, for now, my solution is to pull the Search box out and fall back to a hybrid of the medium configuration that seems to work.

  • 2 web + search boxes
  • 1 index + job box
  • 1 sql source

That configuration makes Backup and Restore and the portal creation tools happy, but it seems like such a waste to put those search engines on the web boxes when I have a server sitting here for it.

So, for those interested—unsupported messages may safely be ignored as long as you’re not planning on USING your SharePoint Portal.

So many new technologies… where to start?

June 12, 2007

The past few months have kept developers hopping with new technologies.  It’s actually difficult to know which cool new technology to latch onto and learn—because it seems they come and go overnight.  Summertime is typically a bit quieter at my office, so what’s on my list to play with this summer?  The numbers in parentheses are roughly the priority I have for learning each one—1000 is my highest priority and it tapers down from there.


  • Scrum (1000) – I’d love to take some time this fall and go to a ScrumMaster course in Denver, CO and really immerse myself in it.  Many of the practices are odd to apply here, but over the next few months we’re slowly going to be adding more developers into the mix—and more collaborative work.
  • TDD (800) – Even on the small scale that I’ve used, I see so much value to TDD; I just need to find a way to work it into our environment and methods.
  • Interactive Testing (400) – I’ve been rolling out NUnit tests for quite a while now, but never really saw the value in tests that I had to KNOW the answer to… The mocks (Rhino) look pretty interesting.
  • Architecture (600) – A lot of what I do now focuses more on building architecture and laying out systems rather than the actual coding (I still code my own projects, yeah…).  I am NOT as educated on this as I’d like to be, but honestly am not sure where to go beyond books and reviewing how others do it.  There are no leaders in our organization for this.  Hmmm.
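As a quick illustration of what mocks add over tests where you already KNOW the answer, here is a sketch in Python’s unittest.mock rather than NUnit/Rhino (quicker to write down; all the names are invented for the example). The point is that you assert how a collaborator was used, not a pre-computed result:

```python
from unittest.mock import Mock

# Hypothetical collaborator: a mail gateway our order service depends on.
def place_order(order, mailer):
    """Process an order and notify the customer via the mail gateway."""
    mailer.send(to=order["email"], subject="Order received")
    return "ok"

# The mock records calls, so the test checks the interaction itself:
mailer = Mock()
result = place_order({"email": "a@example.com"}, mailer)
mailer.send.assert_called_once_with(to="a@example.com", subject="Order received")
print(result)  # prints: ok
```

Rhino Mocks promises the same style of interaction verification for C#, which is what makes it interesting to me.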


  • ReSharper (1000) – I’ve used it for … hmm… two years now for C#; but the more I read, the more I realize I’ve never really “used” it.  I need to change that.
  • Orcas (500) – Not really a “got to know,” but I just need to explore everything that’s changed in Orcas.


  • WCSF (800) – I’ve only used the Web Client Software Factory a few times, and for nothing that went into production. 
  • Acropolis / SCSF (600) – Ehh, this is a case where it seems like one technology is beating up another.  Acropolis looks VERY cool and slick on Vista for little applications, but I haven’t narrowed down what it offers yet besides “slickness”.
  • NHibernate (800) – I need to figure out what the pros and cons are to NHibernate or if I just want to focus on LINQ for SQL/Objects/everything.
  • Windsor/Castle/MonoRail (400) – While not things that I see getting implemented here at the office, the technologies greatly interest me.  I’ve been a bit captivated with RoR for a while now; I just haven’t had the time to investigate it. 
  • Silverlight (500) – Just coolness factor.  I’ve loathed Flash for years and welcome a replacement. 
  • .NET 3.x (1000) – 3.0 came and went without much notice and now we’re to 3.5.  I hope to tackle some WPF, WCF, (D)LINQ before it’s already passé.
  • SharePoint 2007 (400) – We’ll eventually migrate from SPS2003 to MOSS2007. I need to spend some time with the technology and learn it.

Looks like a busy summer.

The Security Development Lifecycle : Oil Change or Culture Change?

Dave Ladd provides an interesting picture of how architecting (and “selling”) security to the CxOs isn’t so much about promoting technology as about promoting culture change.

I have worked on security and privacy initiatives at Microsoft for a number of years, but it wasn’t until I came to the Security Engineering group to work on the Security Development Lifecycle that I realized I don’t actually work on security. To be clear, I do many of the tasks that one might associate with security – look at bugs, evaluate tools, provide guidance and the like – but it’s more accurate to say that I (along with everyone else in Security Engineering and Communications) am in the culture change business.

The Security Development Lifecycle : Oil Change or Culture Change?

This is a very interesting concept, especially in my field of education.  Many of our vendors, peer districts, and such are baffled by our rigorous standards for security—both in our development and our infrastructure.  I’ve spoken with only a handful of districts that treat security as a strategy—not a byproduct—of their overall technology architecture.

Why is that?  First off, I believe a lot of it comes from our CIO’s passion for security and doing things “right.”  FERPA and privacy are at the forefront of our concerns—as is staying out of the courtroom for any mishaps.  Our applications must be protected not only from the deviants of the Internet, but from the 50,000 students and 10,000 staff members who are using our systems.  Are they all “out to get us”?  Nah, not usually—but typically the most innocent of individuals is the first to find the biggest security hole.

Second is resources.  Technology is highly valued in our environment and has almost infinite funding given proper documentation and a good sales pitch to the executive levels and board.  Because of that, our physical and software infrastructures have many of the latest and greatest gizmos, gadgets, and such to protect our information.

What’s missing?  For us, it’s standardization in both development and implementation.  We’re still struggling to fully grasp what it means to write “secure” code; however, that changes every day as we become more adept at development and, like most, learn from our mistakes.


Do you panhandle?


I remembered a quote that seemed fitting to a meeting I had this week. I had to dig a bit in my books to find it, but… it matches our entire environment to a ‘T’. 

“The information technology organization should not have to go tin cupping to its user community for support for an enterprise information architecture.  If the business community cannot see the clear value of the enterprise information architecture, then the business is not yet ready for it. However, there are many things that can be done to educate business management to the point where they can see value.”


Building Enterprise Information Architectures: Reengineering Information Systems by Melissa A. Cook

Unfortunately, the current mindset is to tin cup (beg) for everything from our users rather than build a collaborative relationship.  So, who does the task of changing that mindset fall to?  The CIO, the department supervisors, or the workers who interact with the customers?  Perhaps all of them?


I believe it requires everyone’s effort, from all levels of the organizational chart, to get buy-in from both internal customers (aka your peers in the technology department) and your true customers (whether they are cross-department or external).  The higher levels reiterate the vision, create desire within the organization, and prove value (or show it outweighs the opportunity costs), while the levels below provide proofs of concept, analysis, and ROI for implementing these processes.


