
Archive for the ‘SQL’ Category

Using PowerShell to Kick off SQL Server Agent Jobs

We’re in the process of a few upgrades around our office and one of those will require a full shutdown mid-business day. To expedite the shutdown, I wanted a quick way to run our backup jobs (hot backups), move them, and down the boxes.

Remember, I’m lazy–walking around our server room smacking rack switches takes effort. I want to sit and drink coffee while the world comes to a halt.

Here’s a bit of what I came up with. The script handles four things:

  • accepts a server name parameter (so I can iterate through a list of servers),
  • checks to see if the server is online (simple ping),
  • checks to see if it’s a SQL Server (looks for the MSSQLSERVER service),
  • iterates through the jobs and, if the -run switch is passed, kicks them off.

As with most of my scripts, the couple of helper functions add up to more code than the actual script itself.

param (
  [string]$server = $(Read-Host -prompt "Enter the server name"),
  [switch]$run = $false
)

# little helper function to ping the remote server.
function get-server-online ([string] $server) {
  $serverStatus = 
    (new-object System.Net.NetworkInformation.Ping).Send($server).Status -eq "Success"
  
  Write-Host "Server Online: $serverStatus"
  return $serverStatus
}
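
As an aside, on PowerShell 2.0+ the Test-Connection cmdlet can do roughly the same check in one line, if you'd rather skip the .NET Ping object:

# roughly the same ping check with a cmdlet (PowerShell 2.0+)
$serverStatus = Test-Connection -ComputerName $server -Count 1 -Quiet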

# little helper function to check to see if the remote 
# server has SQL Server running.
function is-sql-server([string] $server) {
   if (get-server-online($server)) {
    
    $sqlStatus = (gwmi Win32_service -computername $server | 
        ? { $_.name -like "MSSQLSERVER" }).Status -eq "OK"
        
    Write-Host "SQL Server Found: $sqlStatus"
    return $sqlStatus
   }
}

# uhh, because someday I may need to add more to this?
function cannot-find-server([string] $server) {
  Write-Host "Cannot contact $server..."
}

if (is-sql-server($server)) {
  
    [System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO') | Out-Null
    $srv = new-object ('Microsoft.SqlServer.Management.Smo.Server') $server
    $jobs = $srv.JobServer.Jobs | ? {$_.IsEnabled -eq $true} 

    $jobs | 
        select Name, CurrentRunStatus, LastRunOutcome, LastRunDate, NextRunDate | 
        ft -a
    
    if ($run) {
      $jobs | % { $_.Start(); "Starting job: $_ ..." }
      
    }
}
else {
  cannot-find-server($server)
}
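
Calling it ends up looking something like this (assuming you saved it as, say, start-sql-jobs.ps1, a name I just made up, and SQLBOX01 is a placeholder server):

# just report the enabled jobs and their statuses
.\start-sql-jobs.ps1 -server SQLBOX01

# actually kick them off
.\start-sql-jobs.ps1 -server SQLBOX01 -run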

Things left to do–someday…?

  • Allow picking and choosing what jobs get kicked off… (one rough sketch below)
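
One rough sketch of how that could look, assuming a hypothetical -jobNames parameter listing the jobs to start:

param (
  [string]$server = $(Read-Host -prompt "Enter the server name"),
  [string[]]$jobNames = @(),   # hypothetical: names of jobs to start; empty means all
  [switch]$run = $false
)

# ...and later, filter the job list before kicking anything off:
$jobs = $srv.JobServer.Jobs |
    ? { $_.IsEnabled -and (($jobNames.Count -eq 0) -or ($jobNames -contains $_.Name)) }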

It’s not fancy, but it’ll save time backing up and shutting down a couple dozen SQL instances and ensuring they’re not missed in the chaos. 🙂

Automating Extracts Using External .SQL Files and PowerShell

September 29, 2009 Comments off

Rather than rely on a system task, I wanted to be able to kick off an export of one of our SQL databases on the fly, have it generate a CSV, then email the CSV to one of our developers.

Sounds like a task for PowerShell!

Why PowerShell? Well, frankly, because moving jobs around and such on our mixed-mash of SQL versions and servers is a bit annoying and I wanted to see if it worked.

So, to start off, I have a file, call it export_data.sql that contains all of the logic of my SQL query and the format I require.

I’ll also be using the Microsoft SQL query script I discussed in this blog post.

The tricky part is reading in the .sql file as a single “entity”—not the array of rows that get-content usually provides.  Thankfully, there is an easy way to do it.

@(gc 'export_data.sql' -readcount 0)

According to the PowerShell documentation, the readcount parameter serves two purposes: one is to set the number of lines sent through the pipeline at a time; the other (when set to zero) is to send all of the lines through at once.  Great, that's exactly what we need.
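
A quick sanity check (mine, not from the original post) shows the difference in how many objects come down the pipeline:

gc 'export_data.sql' | measure-object               # Count = number of lines
gc 'export_data.sql' -readcount 0 | measure-object  # Count = 1 -- the whole file as one array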

From there, it's simply feeding our query to the sql-query function I wrote and exporting to CSV.  Nicely enough, CSV exports are built in to PowerShell!

The final command looks something like this:

@(gc 'export_data.sql' -readcount 0) | 
   % { sql-query server db $_ } |
   export-csv dump.csv

I could then add another step to mail the CSV using the built-in Send-MailMessage cmdlet.
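
Something along these lines (the server and addresses are placeholders, obviously):

Send-MailMessage -To "dev@example.com" -From "sql@example.com" `
    -Subject "Data extract" -Body "CSV attached." `
    -Attachments dump.csv -SmtpServer smtp.example.com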

❤ PowerShell.

Querying SQL Server using PowerShell

August 31, 2009 2 comments

The great thing about PowerShell is direct access to objects.  For my apps, database connectivity is happily handled by NHibernate; however, that doesn’t mean we can’t take advantage of good old System.Data.SqlClient for our PowerShell scripting.

CODE: The full source of this is available here on codepaste.net.

param (
    [string]$server = ".",
    [string]$instance = $(throw "a database name is required"),
    [string]$query
)

$connection = new-object system.data.sqlclient.sqlconnection( `
    "Data Source=$server;Initial Catalog=$instance;Integrated Security=SSPI;");

$adapter = new-object system.data.sqlclient.sqldataadapter ($query, $connection)
$set = new-object system.data.dataset

# Fill returns the number of rows fetched; send it to out-null
# so it doesn't end up in the function's output.
$adapter.Fill($set) | out-null

# return the first (only) table in the set
$table = $set.Tables[0]
$table

Not too long or challenging—most of the work is instantiating a quick SQL connection and passing in your query.  I even considered plugging in a check on the $query parameter to ensure it began with SELECT so I couldn't do accidental damage to a system. Maybe I'm just paranoid. 😉

What this little snippet allows me to do is quickly add log4net checking into some of my server monitoring PowerShell scripts.

query sqlServer myDatabase "Select count(id), logger from logs group by logger" | format-table -autosize

Notice I didn't include the format-table command in my main query script.  Why?  I wanted to keep the flexibility to select, group, and parse the information returned by my query.  Unfortunately, it seems that the format commands break that if they're run before a manipulation command.  Adding in "ft -a" isn't difficult in a pinch.
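
For instance (a made-up query shape, with a column alias so there's something to sort on), this still works because the formatting happens last:

query sqlServer myDatabase "Select count(id) as hits, logger from logs group by logger" |
    ? { $_.hits -gt 10 } |
    sort hits -descending |
    ft -a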


Quick and easy…

Other uses:

  • Customer calls up with a question about data—save time and do a quick query rather than waiting for Management Studio to wind up.
  • Keep tabs on database statistics, jobs, etc.
  • and more…

AutoMappings in NHibernate – A Quick Runthrough

June 26, 2009 Comments off

For most of my projects, at least since I’ve moved to NHibernate/Fluent NHibernate, I’ve been trapped using the existing data structures of prior iterations.  Funky naming conventions (many due to cross-cultural, international column naming), missing data relationships, and general craziness.

Having used Fluent Mappings (creating a class that implements ClassMap<objectType>) in the past, I found them a huge jump up from writing painful data objects, connecting them together, and reinventing the wheel with "SELECT {column} from {table}" code.  Create a map, use the fluent methods to match column to property, and away you go.

In a recent project, I’ve had the opportunity to build a new system from the ground up.  With this, I decided to dive head first into using the AutoMappings functionality of Fluent NHibernate. 

This post is somewhat a rambling self-discussion of my explorations with AutoMappings.

What are AutoMappings?

The FluentNHibernate wiki provides a simple definition:

[…] which is a mechanism for automatically mapping all your entities based on a set of conventions.

Rather than hand-mapping each column to a property, we create conventions (rules) to map those.. automatically.  Hey look, auto…mappings.  😉

How?

Using the same fluent language, configuring AutoMapping is an exercise in implementing conventions for the logical naming and handling of data.

Fluently
    .Configure()
    .Database(MsSqlConfiguration.MsSql2005
                  .ConnectionString(cs => cs
                                              .Server("server")
                                              .Database("db")
                                              .Username("user")
                                              .Password("password")
                  )
                  .UseReflectionOptimizer()
                  .UseOuterJoin()
                  .AdoNetBatchSize(10)
                  .DefaultSchema("dbo")
                  .ShowSql()
    )
    .ExposeConfiguration(raw =>
                             {
                                 // Testing/NHibernate Profiler stuffs.
                                 raw.SetProperty("generate_statistics", "true");
                                 RebuildSchema(raw);
                             })
    .Mappings(m =>
              m.AutoMappings.Add(AutoPersistenceModel
                                     .MapEntitiesFromAssemblyOf<Walkthrough>()
                                     .ConventionDiscovery.Setup(c =>
                                                                    {
                                                                        c.Add<EnumMappingConvention>();
                                                                        c.Add<ReferencesConvention>();
                                                                        c.Add<HasManyConvention>();
                                                                        c.Add<ClassMappingConvention>();
                                                                    })
                                     .WithSetup(c => c.IsBaseType = type => type == typeof (Entity)))
                  .ExportTo(@".\")
    );

As you can see above, the only difference from a fluent mappings configuration is in the actual Mappings area.  Good deal!  That helps ensure my existing work using fluent mappings could translate without too much headache.

I've specified four conventions.  Each of these conventions has an interface that provides the necessary methods to ensure your rules are applied to the correct objects.

EnumMappingConvention

internal class EnumMappingConvention : IUserTypeConvention
{
    public bool Accept(IProperty target)
    {
        return target.PropertyType.IsEnum;
    }

    public void Apply(IProperty target)
    {
        target.CustomTypeIs(target.PropertyType);
    }

    public bool Accept(Type type)
    {
        return type.IsEnum;
    }
}

The great thing about these methods is they’re fluent enough to translate to English.

Accept… targets where the property type is an enumeration.

Apply… to the target that the “Custom Type Is” the property type of the target.
  NOTE: This translates from a ClassMap into: Map(x => x.MyEnumFlag).CustomTypeIs(typeof(MyEnum));

Accept… a type that is an enumeration.

ReferencesConvention

The Reference convention handles those reference relationships between our classes (and the foreign keys).

internal class ReferencesConvention : IReferenceConvention
{
    public bool Accept(IManyToOnePart target)
    {
        return string.IsNullOrEmpty(target.GetColumnName());
    }

    public void Apply(IManyToOnePart target)
    {
        target.ColumnName(target.Property.Name + "Id");
    }
}

The most important part here is enforcing how your foreign keys are going to be named.  I prefer the simple {Object}Id format.

Car.Battery on the object side and [Car].[BatteryId] on the database side.
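
To make that concrete, here's a sketch (mine, not from the post) of the sort of entity these conventions expect, using the Car/Battery/Accessory examples from above:

public class Car : Entity
{
    // ReferencesConvention: maps to a [BatteryId] foreign key column
    public virtual Battery Battery { get; set; }

    // HasManyConvention: the child table gets a [CarId] key column
    public virtual IList<Accessory> Accessories { get; set; }
}

And ClassMappingConvention (below) would put Car in a [Cars] table and Accessory in [Accessories].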

HasManyConvention

The HasManys are our lists, bags, and collections of objects.

internal class HasManyConvention : IHasManyConvention
{
    public bool Accept(IOneToManyPart target)
    {
        return target.KeyColumnNames.List().Count == 0;
    }

    public void Apply(IOneToManyPart target)
    {
        target.KeyColumnNames.Add(target.EntityType.Name + "Id");
        target.Cascade.AllDeleteOrphan();
        target.Inverse();
    }
}

We want to make sure that we haven’t added any other key columns (the Count == 0), and then apply both the naming convention as well as a few properties.

Cascade.AllDeleteOrphan() and Inverse() allow our parent objects (Car) to add new child objects (Car.Battery (Battery), Car.Accessories (IList<Accessory>)) without separating them out.

ClassMappingConvention

Finally, the important Class mapping.  This convention ensures that our tables are named properly, with pluralization.

public class ClassMappingConvention : IClassConvention
{
    public bool Accept(IClassMap target)
    {
        return true; // everything
    }

    public void Apply(IClassMap target)
    {
        target.WithTable(PluralOf(target.EntityType.Name));
    }
}

I’m using a pluralization method from one of my base libraries that I borrowed from Hudson Akridge.  This helper method works really well and I don’t need to add additional references and libraries into my application just to handle the table names.

public static string PluralOf(string text)
{
    var pluralString = text;
    var lastCharacter = pluralString.Substring(pluralString.Length - 1).ToLower();

    // y's become ies (such as Category to Categories)
    if (string.Equals(lastCharacter, "y", StringComparison.InvariantCultureIgnoreCase))
    {
        pluralString = pluralString.Remove(pluralString.Length - 1);
        pluralString += "ie";
    }

    // ch's become ches (such as Pirch to Pirches)
    if (string.Equals(pluralString.Substring(pluralString.Length - 2), "ch",
                      StringComparison.InvariantCultureIgnoreCase))
    {
        pluralString += "e";
    }

    switch (lastCharacter)
    {
        case "s":
            return pluralString + "es";
        default:
            return pluralString + "s";
    }
}
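
For reference, a few spot checks of the helper above (my own, not from the original post):

PluralOf("Incident");  // "Incidents"
PluralOf("Category");  // "Categories" (the y/ies rule)
PluralOf("Class");     // "Classes" (the trailing-s rule)
PluralOf("Pirch");     // "Pirches" (the ch rule)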

Save and build.  The schema rebuild (the RebuildSchema call above) will generate the SQL and/or regenerate the database based on the specifications you've provided, and you're ready to hit the ground running!

 

Benchmarks: Comparing LINQ to NHibernate Transforms/Grouping

Yesterday, I wrote about setting up NHibernate to query up, group, count, and transform results and return them into a control.  Why did I go to such effort?  Well, the original LINQ query I had that refined the results didn't perform up to par.  As some may note, premature optimization is never a good practice, so I needed some stats to back up the claims.

Overnight, I wrote up a quick test to query up both ways and benchmark the results.  Here’s what I found.

The “test”:

public void TEMP_quick_compare_of_linq_to_nhibernate()
{
    var schoolId = 120;

    var benchmark = new Benchmark();
    using (var repository = new IncidentRepository())
    {
        benchmark.Start();
        var resultsFromLinq =
            repository.GetCountByIncidentCodeWithLinq(schoolId);
        foreach (var item in resultsFromLinq)
        {
            Console.WriteLine(item);
        }
        benchmark.Stop();
        Console.WriteLine("Linq: {0}".AsFormatFor(benchmark.ElapsedTime));

        benchmark.Start();
        var resultsFromNhibernate =
            repository.GetCountByIncidentCode(schoolId);
        foreach (var item in resultsFromNhibernate)
        {
            Console.WriteLine(item);
        }
        benchmark.Stop();
        Console.WriteLine("NHibernate: {0}".AsFormatFor(benchmark.ElapsedTime));
    }
}

Setting up the benchmark object (and the NHibernate init) happens outside the timed sections—it's necessary overhead.  I'm also iterating through each of the results as part of the benchmark to ensure everything is actually evaluated. Past that, pretty basic.  On the database side, I've disabled statement caching so it doesn't sway the results as much.
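
The Benchmark class itself isn't shown here; a minimal sketch that satisfies the members the test uses (Start, Stop, ElapsedTime) could just wrap System.Diagnostics.Stopwatch:

using System;
using System.Diagnostics;

public class Benchmark
{
    private readonly Stopwatch _watch = new Stopwatch();

    // Reset on Start so each measurement stands alone.
    public void Start()
    {
        _watch.Reset();
        _watch.Start();
    }

    public void Stop()
    {
        _watch.Stop();
    }

    public TimeSpan ElapsedTime
    {
        get { return _watch.Elapsed; }
    }
}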

With 24 records (the test data in the system), the results were pretty clear. The average of running the benchmark 100 times resulted in…

Linq: 00:00:00.7050000
NHibernate: 00:00:00.0190000

With 24 records, NHibernate was about 37x faster. 

That’s nice, but what happens in a few weeks when there are a few thousand records?  I populated a few hundred of each incident type into the system, giving me almost 4000 records (the anticipated monthly load of the system by the customer).  How’d that change our averages?

Linq: 00:00:00.8869746
NHibernate: 00:00:00.1381518

Now NHibernate is only about 6x faster than LINQ.  Going from 24 to 4,000 records, the LINQ duration jumped ~.18 seconds, about a 26% increase, whereas NHibernate's jumped ~.12 seconds, about a 626% increase.

So, with that, my original gut feeling and assumptions were wrong.  More and more records don't really slow down the LINQ filtering.. at least not by much.  The performance gap between the two methods is still apparent (.88 sec vs. .13 sec); however, how much of that time is eaten up by rendering, server latency, etc., and not by the actual processing?

Grouping and Transforming Using NHibernate

June 11, 2009 Comments off

Okay, I’ve got to be doing this wrong.

Call it premature optimization, but I foresee an old LINQ method being a performance bottleneck when we hit a few hundred thousand records—especially for the ASP.NET charting control to render in any useful time period.

So, what do I do?  I figure pushing that computation back down to the database would be a good first step.

Unfortunately, grouping, sorting, and such are a serious pain in the ass.  Unless, as I said, I’m doing it wrong.

Original Code – Grouping and Counting ala LINQ

private IList GetIncidentsGroupedByIncidentCode()
{
    using (var repository = new IncidentRepository())
    {
        var allIncidents =
            repository.GetAllBySchoolId(SessionManager.CurrentSchoolId);

        var incidentsByCode = from i in allIncidents
                              group i by i.IncidentCodeId
                              into grouping
                                  orderby grouping.Count()
                                  select new
                                             {
                                                 IncidentCodeId = grouping.Key,
                                                 Count = grouping.Count(),
                                                 Description =
                                                    GetIncidentCodeDescription(grouping.Key)
                                             };
        return incidentsByCode.ToList();
    }
}

Grab all incidents (using the NHibernate repository) and use LINQ to transform them into a handy new anonymous type consisting of the IncidentCodeId, a Count (by IncidentCodeId), and the Description, which uses the IncidentCodeId to grab the description (the incident code description comes from an entirely different system/database, hence the method to go fetch it).

I can simply return an IList rather than specifying the type (since it’s anonymous) and get away with loading up my Chart Control—not a problem.

Newish Code – Grouping and Counting ala NHibernate

public IList GetCountByIncidentCode(int schoolId)
{
    using (var tx = Session.BeginTransaction())
    {
        var criteria = Session.CreateCriteria(typeof (Incident));

        // Only get those matching the requested SchoolId
        criteria.Add(RestrictionsHelper<Incident>.Eq(x => x.SchoolId, schoolId));

        // Setup our projections.
        // IncidentCodeId is what we're using as an Identifier.
        // Id is what we're counting, so the results of the "GroupedResult" go into Result
        // and we're grouping by IncidentCodeId
        criteria.SetProjection(Projections.ProjectionList()
                                   .Add(Projections.Property("IncidentCodeId"), "Identifier")
                                   .Add(Projections.Count("Id"), "Result")
                                   .Add(Projections.GroupProperty("IncidentCodeId")));

        // Order THAT mess by Result
        criteria.AddOrder(Order.Asc("Result"));

        // Now, since we can't use anonymous objects (??), we have to use a funky Java
        // method to transform it into a typed result.
        criteria.SetResultTransformer(Transformers.AliasToBean(typeof (GroupedResult)));

        // Convert this all to a list.
        var result = criteria.List<GroupedResult>() as List<GroupedResult>;

        // Commit… or get committed.
        tx.Commit();

        if (result != null)
        {
            // We can't do this inline (??), so go back into the list and iterate through… grabbing
            // descriptions.
            result.ForEach(x =>
                                {
                                    var description =
                                        GetIncidentCodeDescription(x.Identifier.ConvertTo<int>());
                                    x.Description = description;
                                });
        }

        // Holy crap, we're done!
        return result;
    }
}

What… the… heck?

Amazingly enough, that works (changing the chart’s column names, of course).  And it’s relatively quick… But woah, what a mess. 

It also adds annoying little ‘result’ objects into the mix. 

public class GroupedResult
{
    public int Identifier { get; set; }
    public string Description { get; set; }
    public int Result { get; set; }
}

Strongly typed is stellar and I’m pretty sure I could have some generic objects. [Identifier/Description/Result] could work for counts, averages, most anything that is grouped up, but that just twitches me out a bit to have random classes sitting around for data transformations.
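
If I ever chase that generic-object idea, it might look something like this (a sketch only, untested):

public class GroupedResult<TIdentifier, TResult>
{
    public TIdentifier Identifier { get; set; }
    public string Description { get; set; }
    public TResult Result { get; set; }
}

// counts:   GroupedResult<int, int>
// averages: GroupedResult<int, double>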

So, good readers—how is this REALLY supposed to work?  All that to generate the guts of:

SELECT COUNT(IncidentCodeId) as Result, IncidentCodeId
FROM Incidents
WHERE SchoolId = :schoolId
GROUP BY IncidentCodeId
ORDER BY Result
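
For comparison, an HQL sketch of my own (untested) that would express the same query more directly than the criteria pile above:

var results = Session.CreateQuery(
        @"select i.IncidentCodeId, count(i.Id)
          from Incident i
          where i.SchoolId = :schoolId
          group by i.IncidentCodeId
          order by count(i.Id) asc")
    .SetInt32("schoolId", schoolId)
    .List();

// each row comes back as an object[] of { IncidentCodeId, count }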

Categories: .net 3.5, c#, NHibernate, SQL

Performing SELECT.. WHERE IN using a Repository

June 8, 2009 Comments off

As I’ve discussed in the past, a few of my repository pattern practices are borrowed and built on the nice S#arp Architecture project.  Here’s another situation where I needed a bit more functionality.

Disclaimer:  If there’s a better way to do this—I’m all ears and emails. 🙂

By default, the FindAll method builds up the NHibernate criteria by iterating through key/value pairs.  Easy enough.

‘Id, 12345’ generates ‘WHERE Id = 12345’.

But what happens when I want to do something with an array?

‘Id, int[] {12345, 67890}’ should generate ‘WHERE Id IN (12345, 67890)’

Thankfully, the Restrictions class has an In method, but how can I add that flexibility to the FindAll method?

Here’s what the FindAll method looks like to start off:

public List<T> FindAll(IDictionary<string, object> propertyValuePairs)
{
    Check.Require(propertyValuePairs != null,
                  "propertyValuePairs was null or empty");
    Check.Require(propertyValuePairs.Count > 0,
                  "propertyValuePairs must contain at least one pair");

    var criteria = Session.CreateCriteria(typeof (T));
    propertyValuePairs
        .ForEach(x =>
                 criteria.Add(Restrictions.Eq(x.Key, x.Value)));

    return criteria.List<T>() as List<T>;
}

That's nice.  It iterates through each pair, but assumes an Eq (Equals) relationship between the key and the value.

After a bit of dinking, checking to see if the value is an ICollection seems to be the most reliable approach, considering Restrictions.In(key, value) accepts collections for the value parameter.

This allows you to pass arrays, lists, and dictionaries.

public List<T> FindAll(IDictionary<string, object> propertyValuePairs)
{
    Check.Require(propertyValuePairs != null,
                  "propertyValuePairs was null or empty");
    Check.Require(propertyValuePairs.Count > 0,
                  "propertyValuePairs must contain at least one pair");

    ICriteria criteria = Session.CreateCriteria(typeof (T));

    propertyValuePairs
        .ForEach(x =>
                     {
                         if (x.Value.IsA<ICollection>())
                         {
                             // add WHERE key IN (value)
                             criteria.Add(Restrictions.In(x.Key, (ICollection) x.Value));
                         }
                         else
                         {
                             // add WHERE key = value
                             criteria.Add(Restrictions.Eq(x.Key, x.Value));
                         }
                     });

    return criteria.List<T>() as List<T>;
}

Here’s my (now) passing test that I used to test this logic as I built it:

[Fact]
public void can_find_students_by_array_of_student_ids()
{
    var studentsToFind = new int[] { 622100, 567944, 601466 };

    var criteria = new Dictionary<string, object>();
    criteria.Add("Id", studentsToFind);
    criteria.Add("Grade", "09");

    var sut = new StudentRepository();
    var students = sut.FindAll(criteria);

    students.Count.ShouldBeEqualTo(1);
    students.ShouldContainMatching(x => x.Id == 567944);
    students.ForEach(x =>
        Console.WriteLine("{0}, {1}".AsFormatFor(x.FullName, x.Id)));
}

Test Passed.  Woot.  The generated SQL is also nice and clean (really loving NHProf… though I trimmed out the excess columns for brevity).

SELECT this_.Id            as Id7_0_, [..]
       this_.Grade         as Grade7_0_, [..]
FROM   custom.student_lkup this_
WHERE  this_.Id in (622100 /* :p0 */,567944 /* :p1 */,601466 /* :p2 */)
       and this_.Grade = 09 /* :p3 */

Categories: .net 3.5, c#, Microsoft, NHibernate, SQL