A Day In The Lyf

…the lyf so short, the craft so longe to lerne

Posts Tagged ‘SQL Server’

Funcletize This!

I was recently involved in troubleshooting a bug in our staging environment. We had some code that worked in every environment we had put it in, except staging. Once there, the equivalent of an update on a field (performed through LINQ in C#) was greeted by a ChangeConflictException.

I’m embarrassed by how long it took to figure out what was wrong. It was obviously an optimistic locking problem, and as soon as I saw the exception I even suggested it was because the UPDATE statement wasn’t updating anything. Optimistic locking works by adding extra fields to the WHERE clause to make sure that the data hasn’t changed since you loaded it. If one of those fields has changed, the WHERE clause won’t match anything, and the O/RM will assume that somebody has changed the data behind your back and throw an exception.
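
To make that concrete, here’s a minimal sketch of the kind of UPDATE an O/RM issues under optimistic locking (the table and column names are invented for illustration):

UPDATE Articles
SET    Title = 'New title'
WHERE  Id = 42
  AND  Title = 'Old title'  -- the value originally read
-- If another session changed Title in the meantime, zero rows match,
-- the reported row count is 0, and the O/RM raises a conflict.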

It turns out that failing to match any rows with the given filter isn’t the only way that LINQ will think no rows were updated; it’s also dependent on the NOCOUNT option in SQL Server. If the database is configured to have NOCOUNT ON, then the number of rows affected by each query won’t be sent back to the client. LINQ interprets this lack of information as 0 rows being updated, and thus throws the ChangeConflictException.
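
You can see the behavior in isolation with a sketch like this (table and column names invented; with NOCOUNT ON, the rows-affected count never reaches the client, so ADO.NET’s ExecuteNonQuery returns -1 and LINQ to SQL concludes nothing was updated):

SET NOCOUNT ON

UPDATE Articles
SET    Title = 'New title'
WHERE  Id = 42
-- The update still happens, and @@ROWCOUNT is still maintained on the
-- server, but no "(1 row(s) affected)" message is sent back:
SELECT @@ROWCOUNT AS RowsAffected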

In itself, the bug wasn’t very interesting. What is interesting is what we saw when we opened Reflector to look at the LINQ code around the exception:

IExecuteResult IProvider.Execute(Expression query)
{
    // …
    query = Funcletizer.Funcletize(query);
}

Love it. Uniquifiers, Funcletizers, and Daemonizers of the world unite.

Written by Brandon Byars

October 26, 2008 at 12:13 pm

Posted in .NET, Database


Configuring ActiveRecord to work with SQL Server 2005

As much as possible, I like a zero-install configuration. In other words, I want to be able to simply check out a codebase, run an automated build process, and start working. Ideally, I’d like to be able to do that on a clean machine.

It doesn’t always work, of course. For instance, even though most of our production code is written in .NET, we use Ruby extensively for automation. Installing Ruby is one of those dependencies that we live with. But installing Ruby isn’t enough; we also need Rails (or at least ActiveRecord) for some data management scripts we have, Watir, and some fixes for ActiveRecord to work with SQL Server.

All of that can be done fairly easily by somebody who knows what they’re doing, but new developers often don’t know what they’re doing, and I strive to be dispensable. We wrote a script that configures Ruby and the necessary libraries for our development environment.

First, we needed to install the necessary gems. This is quite easy to do on the command line, but it took me a little digging before I figured out how to do it in code:

needed_gems = ["rails"]

require "rubygems"
Gem.manage_gems

needed_gems.each do |gem|
  puts "Installing gem #{gem}..."
  Gem::GemRunner.new.run ["install", gem, "--include-dependencies"]
end

SQL Server requires an ADO adapter that doesn’t ship with Ruby; you can read about it on the Rails wiki (the URL is in the comment below). All that’s needed is to download the ADO.rb file (which we keep versioned next to our setup script) and copy it to the right place, creating the directory if needed:

setup_dir = File.dirname(__FILE__)
ruby_dir = "C:/ruby/lib/ruby"
ruby_version = 1.8

# The ADO adapter needed for using dbi (the standard db access)
# with SQL Server, which does not come with ruby by default.
# See http://wiki.rubyonrails.org/rails/pages/HowtoConnectToMicrosoftSQLServer.

puts "creating ADO adapter..."
ado_dir = "#{ruby_dir}/site_ruby/#{ruby_version}/DBD/ADO"
system "if not exist #{ado_dir} mkdir #{ado_dir}"
system "copy /Y #{setup_dir}/ADO.rb #{ado_dir}"

Finally, we use SQL Server 2005, and we want to use Windows Authentication for all of our Ruby scripts. Neither SQL Server 2005 nor Windows Authentication is supported by rails out of the box. The problem, described on the SQL Server wiki for rails, is the connection string rails builds. At first, we were taking the suggestions of some of the comments on the wiki and changing the sqlserver_adapter.rb file that ships with rails. This obviously isn’t ideal, so now we monkey-patch it in our code that accesses the database:

module DBI
  # We have to monkey patch this because the SQL Server adapter that comes
  # with rails (active_record\connection_adapters\sqlserver_adapter.rb)
  # doesn't work with SQL Server 2005 or with Integrated Security yet.
  class << self
    alias_method :original_connect, :connect

    def connect(driver_url, username, password)
      # Change to SQL 2005
      driver_url.sub! "SQLOLEDB", "SQLNCLI"

      # Change to Windows Authentication
      driver_url.sub! /User Id=[^;]*;Password=[^;]*;/, "Integrated Security=SSPI;"

      original_connect(driver_url, username, password)
    end
  end
end

And that’s it. You still can’t check out the codebase and start working on a clean machine, but it’s not bad: install Ruby, run setup.rb, and all the database code is patched to deal with our environment.

Written by Brandon Byars

October 23, 2007 at 7:13 pm

.NET Database Migrations

Pramod Sadalage and Scott Ambler have suggested using a series of numbered change scripts to version your database. Start with a base schema, and give every subsequent change its own change script, taking the next number in sequence. That version number is stored in a table in the database, which makes updating easy: you run, in order, every change script with a number greater than the version stored in your database.
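
In SQL terms, the update logic boils down to something like this sketch (it uses the SchemaVersion table shown later in this post; the version number and comment are invented):

-- Find the version this database is at:
SELECT MAX(Version) AS CurrentVersion FROM SchemaVersion

-- Run each change script numbered above CurrentVersion, in order,
-- recording each one as it completes:
INSERT INTO SchemaVersion (Version, MigrationDate, Comment)
VALUES (4, GETDATE(), 'add user_id to articles')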

The Ruby on Rails team implemented this technique in their migrations code. It’s quite elegant. This blog uses a Rails application called Typo; here’s one of its migrations:

class AddArticleUserId < ActiveRecord::Migration
  def self.up
    add_column :articles, :user_id, :integer

    puts "Linking article authors to users"
    Article.find(:all).each do |a|
      u=User.find_by_name(a.author)
      if(u)
        a.user=u
        a.save
      end
    end
  end

  def self.down
    remove_column :articles, :user_id
  end
end

That migration is called 3_add_article_user_id.rb, where 3 is the version number. Notice that it’s written in Ruby, not in SQL. It adds a column called user_id to the articles table and updates the data. The data update is particularly interesting—we get to use the ActiveRecord O/RM code instead of having to do it in SQL (although you can use SQL if you need to). The Rails migration code can also rollback changes; that’s what the down method is for.

The problem I’ve always had with this scheme is that we have many database objects that I’d like to version in their own files in our source control system. For example, here’s our directory structure:

db/
  functions/
  migrations/
  procedures/
  triggers/
  views/

We have several files in each directory, and it’s convenient to keep them that way so we can easily check the Subversion log and see the history of changes for each database object. For us to use the migrations scheme above, we’d have to create a stored procedure in one migration and later alter it in a separate migration. Since the two migrations would be in separate files, our source control wouldn’t give us a version history of that stored procedure.

We came up with a hybrid solution. Schema changes to the tables use a migration scheme like Rails. Database objects are versioned in separate files. Both the schema changes and the peripheral database object changes are updated when we update the database.

For this to work, we have to be a little careful with how we create the database objects. We want the scripts to work regardless of whether we’re creating an object for the first time or updating it, which means ALTER statements won’t work. The solution is simply to drop the object if it exists and then recreate it, which is a fairly common pattern, as the sketch below shows.
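
For example, a stored procedure script kept under db/procedures might look like this (the procedure and table names are invented for illustration):

-- Drop-and-create keeps the script re-runnable whether or not the
-- procedure already exists:
IF EXISTS (SELECT * FROM sysobjects WHERE name = 'GetArticles' AND type = 'P')
    DROP PROCEDURE GetArticles
GO

CREATE PROCEDURE GetArticles
AS
    SELECT * FROM Articles
GO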

I wrote an NAnt and MSBuild task to do the dirty work. It runs both the schema migrations and the database object updates. Both are optional, so if migrations are all you want, that’s all you need to use. It expects all migrations to be in the same directory and to match the pattern 1.comment.sql, where 1 is the version number. The version number is stored in a database table whose default name is SchemaVersion, with the following structure:

CREATE TABLE SchemaVersion (
  Version int,
  MigrationDate datetime,
  Comment varchar(255)
)

I’ve only tested it on SQL Server, but I think the task should work for other DBMSs as well (it uses OLEDB). Migrations can contain batches (separated by the SQL Server GO command) and are run transactionally, as in the sketch below. Unlike the Rails example, the .NET migrations use SQL, and I don’t yet have any rollback functionality.
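
For illustration, a migration file named to match the pattern, say 4.add_article_status.sql (the name and columns are invented), might contain two batches:

-- Batch 1: the schema change
ALTER TABLE Articles ADD Status int NOT NULL DEFAULT 0
GO

-- Batch 2: the data fix-up; it must be a separate batch because the
-- new column doesn't exist yet when the first batch is parsed
UPDATE Articles SET Status = 1 WHERE PublishedDate IS NOT NULL
GO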

You can include any extra SQL files you want in the DatabaseObjects property. Both NAnt and MSBuild have convenient ways to recursively add all files matching an extension.

Here’s an NAnt example:

<target name="migrate" description="Update the database">
    <loadtasks assembly="Migrations.dll" />
    <migrateDatabase
        connectionString="${connectionString}"
        migrationsDirectory="db/migrations"
        commandTimeout="600"
        batchSeparator="go">
        <fileset>
            <include name="db/functions/**/*.sql"/>
            <include name="db/procedures/**/*.sql"/>
            <include name="db/triggers/**/*.sql"/>
            <include name="db/views/**/*.sql"/>
        </fileset>
    </migrateDatabase>
</target>

And here it is using MSBuild:

<ItemGroup>
    <DatabaseObjects Include="db/functions/**/*.sql"/>
    <DatabaseObjects Include="db/procedures/**/*.sql"/>
    <DatabaseObjects Include="db/triggers/**/*.sql"/>
    <DatabaseObjects Include="db/views/**/*.sql"/>
</ItemGroup>

<Target Name="dbMigrate">
    <MigrateDatabase 
        ConnectionString="$(ConnectionString)"
        MigrationsDirectory="db/migrations"
        DatabaseObjects="@(DatabaseObjects)"
        CommandTimeout="600"
        TableName="version_info" />
</Target>

The source code and binaries can be found here.

Written by Brandon Byars

April 14, 2007 at 10:35 pm

Ruby and SQL DMO

We have a rather large legacy system that we are trying to inject a healthy dose of agility into. One of the biggest challenges has always been versioning the database, which is large, clumsy, and cluttered. We managed to get the rest of the code under a continuous integration scheme, but every time somebody made a schema change, or even updated a stored procedure needed for some tests to work, manual intervention was needed.

Pramod Sadalage and Martin Fowler wrote one of the first articles on iterative database design, and Pramod later teamed up with Scott Ambler to collaborate on Refactoring Databases. The advice, adopted by the Ruby on Rails team, was to create a separate migration file for each schema change and number them sequentially. For example, the first migration would be 1.sql, then 2.sql, and so on. You could store the latest migration run against a database in a version table, which would make updating as easy as running, in order, every migration whose version number is greater than the one stored in that table.

We managed to do something similar, although it required a number of changes. But before any of that could work, we needed a starting point: a base schema to create a new developer database. We use SQL Server 2000, so initially I simply had Enterprise Manager generate a SQL script for me. Not only did it not work (I don’t think it sorted the dependencies correctly), it was a ghastly nightmare to look at.

Why do standard vendor-supplied code generation tools create such ugly code?

I decided to do the code generation myself using SQL DMO (the same COM interfaces Enterprise Manager was using, just poorly). I’d successfully used Ruby and ERB for code generation before, and discovered I could make very nice looking code (Code Generation in Action by Jack Herrington describes the concept nicely). Within just a couple of hours, I had a SQL script that not only looked much nicer than anything Enterprise Manager spits out, but worked to boot.

First, I needed to connect to the database I wanted to dump into a SQL file:

require 'win32ole'

class SqlServer
  def initialize(server, catalog)
    sql = WIN32OLE.new("SQLDMO.SQLServer")
    sql.LoginSecure = 1
    sql.Connect(server)
    @db = sql.Databases(catalog)
  end
end

This uses Windows Authentication to connect to the appropriate SQL Server database. The DMO interface, described here, is one of those clumsy, wandering APIs commonly developed when VBScript is your target language (warning, the MSDN website for DMO only seems to work in IE, and poorly at that). I decided to wrap some of the classes to make it easier to use. First, a superclass:

class DBObject
  def initialize(object)
    @object = object
  end

  def name
    return @object.Name unless @object.Name =~ / /
    "[#{@object.Name}]"
  end

  def beautify(text)
    # strip brackets if they're not needed, and remove the .dbo prefix
    text.gsub!(/\[([^ ]*?)\]/, '\1')
    text.gsub(/dbo\./, '')
  end
end

Here I provide one of the keys to generating code that looks appealing. I really don’t like the noise of every table and column name being bracketed just in case it contains a space. The following looks ugly to me:

SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[tablename] (
    [id] [int] NOT NULL,
    [description] [varchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL
) ON [PRIMARY]

That drives me wild. I don’t ever change any of the settings for ANSI nulls and the like, and I couldn’t care less what the collation is so long as it matches the database default. Nor do I care about the file group. It’s hard to see through all the noise.

Here’s what I want:

CREATE TABLE tablename (
    id int NOT NULL,
    description varchar(50) NOT NULL
)

Our name and beautify methods will help us achieve prettier code. Here’s the most important subclass:

class Table < DBObject
  attr_reader :columns, :constraints, :keys, :indexes, :references

  def initialize(table, connection)
    super(table)
    @columns, @constraints, @keys = [], [], []
    table.Columns.each { |col| @columns << Column.new(col) }
    table.Checks.each { |check| @constraints << CheckConstraint.new(check) }
    table.Keys.each { |key| @keys << Key.new(key, connection) }
    get_indexes(table)
    get_references
  end

  private
  def get_references
    @references = []
    @keys.each do |key|
      if "" != key.referenced_table and name != key.referenced_table
        @references << key.referenced_table unless references.include?(key.referenced_table)
      end
    end
  end

  def get_indexes(table)
    @indexes = []
    # the Indexes collection includes objects already in Keys and statistics
    keys = @keys.map { |key| key.name }
    table.Indexes.each do |index|
      if not keys.include?(index.Name)
        if index.Type == 16 || index.Type == 0
          @indexes << Index.new(index)
        end
      end
    end
  end
end

You can find the classes it depends on by downloading all of the code here. Notice, however, that a database connection is needed for the Key constructor. As far as I could tell, there is no way, using nothing more than DMO, to find out whether a key cascades deletes; I had to query the INFORMATION_SCHEMA views to find that information.
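
A query along these lines exposes the delete rule (a sketch; the downloadable code may phrase it differently):

-- DELETE_RULE is 'CASCADE' for foreign keys that cascade deletes:
SELECT CONSTRAINT_NAME, DELETE_RULE
FROM   INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS
WHERE  DELETE_RULE = 'CASCADE'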

For our script to work, we’re going to need to order our dependencies correctly. The SQL script will fail if we try to add a foreign key to a table that doesn’t yet exist. The following should do the trick:

class SqlServer
  # returns a topological sort with parent tables in front of child tables
  def self.topologically_sorted(tables)
    sorted = []

    # We need a hash to navigate the references field, which gives only names
    table_hash = {}
    tables.each { |table| table_hash[table.name] = table }

    # first add all root tables to sorted
    sorted += tables.find_all { |table| 0 == table.references.length }

    while sorted.length < tables.length
      sorted += tables.find_all do |table|
        if sorted.include?(table)
          result = false
        else
          # all dependencies must already be in sorted
          dependencies = table.references.map { |ref| table_hash[ref] }
          result = (nil == dependencies.find { |ref| not sorted.include?(ref) })
        end
        result
      end
    end
    sorted
  end
end

Now, our code is as simple as binding to some ERB templates:

require 'erb'
require 'db_object'

class SchemaWriter
  def initialize(server, catalog)
    @db = SqlServer.new server, catalog
  end

  def generate_create_script
    generate_code_for(@db.user_defined_datatypes, "create_udt")
    generate_code_for(@db.rules, "create_rule")
    generate_code_for(SqlServer.topologically_sorted(@db.user_tables), "create_table")
  end

  def generate_code_for(objects, template_name)
    file_name = template_name + ".erb"
    template = ""
    File.open(file_name) { |file| template = file.read }
    objects.each do |object|
      erb = ERB.new(template, nil, '-')
      puts erb.result(binding)
    end
  end
end

if $0 == __FILE__
  writer = SchemaWriter.new(ARGV[0], ARGV[1])
  writer.generate_create_script
end

As an example, here’s the create_table.erb template:

create table <%= object.name %>(
<% object.columns.each_with_index do |column, i| -%>
    <%= column.name %> <%= column.text %><%= "," unless i == object.columns.length - 1 %>
<% end -%>
)

<% object.keys.each do |key| -%>
alter table <%= object.name %> add constraint <%= key.name %> <%= key.text %>
<% end -%>
<% object.constraints.each do |constraint| -%>
alter table <%= object.name %> add constraint <%= constraint.name %> <%= constraint.text %>
<% end -%>
<% object.indexes.each do |index| -%>
<%= index.text %>
<% end -%>

Written by Brandon Byars

March 17, 2007 at 3:28 pm

Posted in Code Generation, Database, Ruby

