A Day In The Lyf

…the lyf so short, the craft so longe to lerne

Posts Tagged ‘MSBuild’

Managing Config Files

There’s a discussion on the altdotnet Yahoo group about managing configuration files: how do you update multiple configuration files with the appropriate values when deploying to a different environment?

The solution I hit on was to create a custom MSBuild task. When called from our build script, it looks something like this:

<ItemGroup>
    <ConfigFiles Include="$(DeployDir)/**/*.exe.config"/>
    <ConfigFiles Include="$(DeployDir)/**/*.dll.config"/>
    <ConfigFiles Include="$(DeployDir)/**/web.config"/>
</ItemGroup>

<ItemGroup>
    <HibernateFiles Include="$(DeployDir)/**/hibernate.cfg.xml"/>
</ItemGroup>

<ItemGroup>
    <Log4NetFiles Include="$(DeployDir)/**/log4net.config"/>
</ItemGroup>

<Target Name="UpdateConfig">
    <UpdateConfig
        ConfigFiles="@(ConfigFiles)"
        ConfigMappingFile="$(MSBuildProjectDirectory)\config\config.xml"
        Environment="$(Environment)" />
    <UpdateConfig
        ConfigFiles="@(HibernateFiles)"
        ConfigMappingFile="$(MSBuildProjectDirectory)\config\hibernate_config.xml"
        Environment="$(Environment)"
        NamespaceUri="urn:nhibernate-configuration-2.2"
        NamespacePrefix="hbm" />
    <UpdateConfig
        ConfigFiles="@(Log4NetFiles)"
        ConfigMappingFile="$(MSBuildProjectDirectory)\config\log4net_config.xml"
        Environment="$(Environment)" />
</Target>

Notice that each call to UpdateConfig takes the list of config files to change and a config mapping file. The mapping file defines, for each environment, the values to substitute into those config files. Here's an example of a mapping file:

<configOptions>
    <add xpath="configuration/appSettings/add[@key='dbserver']">
        <staging>
            <add key="dbserver" value="stagingServer"/>
        </staging>
        <production>
            <add key="dbserver" value="productionServer"/>
        </production>
    </add>
</configOptions>

Each config file is scanned for every XPath expression in the mapping file. On each match, the entire node (and all its child nodes) in the original config file is replaced with the node under the appropriate environment tag in the mapping file. It's a bit verbose, but simple enough, and it supports as many environments as you want to have.

The MSBuild task itself is fairly simple, delegating most of its work to a separate object called XmlMerger:

private void MergeChanges()
{
    foreach (ITaskItem item in ConfigFiles)
    {
        string configFile = item.ItemSpec;
        XmlDocument configFileDoc = LoadXmlDocument(configFile);
        XmlDocument configMappingDoc = LoadXmlDocument(ConfigMappingFile);

        XmlMerger merger = new XmlMerger(configFileDoc, configMappingDoc);
        if (!string.IsNullOrEmpty(NamespaceUri) && !string.IsNullOrEmpty(NamespacePrefix))
            merger.AddNamespace(NamespacePrefix, NamespaceUri);

        merger.Merge(Environment.ToLower());
        configFileDoc.Save(configFile);
    }
}
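
LoadXmlDocument isn't shown above; a minimal version might look like this sketch:

// A minimal LoadXmlDocument helper.  The original listing omits it,
// so this is an assumption about its behavior: load the file into a
// DOM with no special whitespace or validation handling.
private XmlDocument LoadXmlDocument(string path)
{
    XmlDocument document = new XmlDocument();
    document.Load(path);
    return document;
}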

XmlMerger just finds the nodes that need updating and replaces them from the mapping file. Notice that it also accepts namespace information (see the NHibernate example in the build script snippet above), which is occasionally needed:

public class XmlMerger
{
    private readonly XmlDocument configFile;
    private readonly XmlDocument configMapping;
    private readonly XmlNamespaceManager namespaces;

    public XmlMerger(XmlDocument configFile, XmlDocument configMapping)
    {
        this.configFile = configFile;
        this.configMapping = configMapping;
        namespaces = new XmlNamespaceManager(configFile.NameTable);
    }

    public void AddNamespace(string prefix, string uri)
    {
        namespaces.AddNamespace(prefix, uri);
    }

    public void Merge(string environment)
    {
        foreach (XmlNode mappingNode in configMapping.SelectNodes("/configOptions/add"))
        {
            string xpath = mappingNode.Attributes["xpath"].Value;
            XmlNode replacementNode = FindNode(mappingNode, environment).FirstChild;
            XmlNode nodeToReplace = configFile.SelectSingleNode(xpath, namespaces);
            if (nodeToReplace != null)
                ReplaceNode(nodeToReplace, replacementNode);
        }
    }

    private void ReplaceNode(XmlNode nodeToReplace, XmlNode replacementNode)
    {
        nodeToReplace.InnerXml = replacementNode.InnerXml;

        // Remove attributes on nodeToReplace that aren't in replacementNode.
        // There's probably a cleaner solution, but I didn't see it.
        for (int i = nodeToReplace.Attributes.Count - 1; i >= 0; i--)
        {
            if (replacementNode.Attributes[nodeToReplace.Attributes[i].Name] == null)
                nodeToReplace.Attributes.RemoveAt(i);
        }

        foreach (XmlAttribute attribute in replacementNode.Attributes)
        {
            if (nodeToReplace.Attributes[attribute.Name] == null)
                nodeToReplace.Attributes.Append(configFile.CreateAttribute(attribute.Name));

            nodeToReplace.Attributes[attribute.Name].Value = attribute.Value;
        }
    }

    private XmlNode FindNode(XmlNode node, string xpath)
    {
        XmlNode result = node.SelectSingleNode(xpath);
        if (result == null)
            throw new ApplicationException("Missing node for " + xpath);
        return result;
    }
}
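
To make the replacement semantics concrete, here's a minimal sketch of using XmlMerger by hand, outside of MSBuild (the file names are placeholders, and assume using System.Xml is in scope):

// Hypothetical standalone usage.  Suppose app.config contains:
//   <appSettings>
//     <add key="dbserver" value="localhost"/>
//   </appSettings>
XmlDocument config = new XmlDocument();
config.Load("app.config");

XmlDocument mapping = new XmlDocument();
mapping.Load("config.xml");     // the mapping file shown earlier

XmlMerger merger = new XmlMerger(config, mapping);
merger.Merge("staging");

// The matched appSettings entry has been replaced, so the saved file
// now contains: <add key="dbserver" value="stagingServer"/>
config.Save("app.config");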

That's it. Now the whole process is hands-free, so long as you remember to update the mapping file when needed. The config files we put into Subversion are set up for the development environment (everything is localhost), so anybody can check out our code and start working without having to tweak a bunch of settings first. The deployment process calls our build script, which ensures that the appropriate config values get changed.


Written by Brandon Byars

January 10, 2008 at 9:39 pm

Posted in .NET, Configuration Management


.NET Database Migrations

Pramod Sadalage and Scott Ambler have suggested using a series of numbered change scripts to version your database. Start with a base schema, and every subsequent change gets its own change script, grabbing the next number. That version number is stored in a table in the database, which makes it easy to update—you just run all change scripts, in order, greater than the version stored in your database.
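
In outline, the update algorithm looks something like this (a sketch in C# with made-up helper names and a made-up Migration type, not code from any particular tool):

// A sketch of the versioned-migration idea.  GetCurrentVersion,
// GetMigrationsInOrder, Run, and SetCurrentVersion are hypothetical.
int currentVersion = GetCurrentVersion();   // e.g., SELECT MAX(Version) FROM the version table

foreach (Migration migration in GetMigrationsInOrder())
{
    // Apply only the change scripts numbered after the database's version.
    if (migration.Version > currentVersion)
    {
        Run(migration);
        SetCurrentVersion(migration.Version);
    }
}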

The Ruby on Rails team implemented this technique in their migrations code. It’s quite elegant. This blog uses a Rails application called Typo; here’s one of its migrations:

class AddArticleUserId < ActiveRecord::Migration
  def self.up
    add_column :articles, :user_id, :integer

    puts "Linking article authors to users"
    Article.find(:all).each do |a|
      u = User.find_by_name(a.author)
      if u
        a.user = u
        a.save
      end
    end
  end

  def self.down
    remove_column :articles, :user_id
  end
end

That migration is called 3_add_article_user_id.rb, where 3 is the version number. Notice that it’s written in Ruby, not in SQL. It adds a column called user_id to the articles table and updates the data. The data update is particularly interesting—we get to use the ActiveRecord O/RM code instead of having to do it in SQL (although you can use SQL if you need to). The Rails migration code can also rollback changes; that’s what the down method is for.

The problem I’ve always had with this scheme is that we have many database objects that I’d like to version in their own files in our source control system. For example, here’s our directory structure:

db/
  functions/
  migrations/
  procedures/
  triggers/
  views/

We have several files in each directory, and it's convenient to keep them that way so we can check the Subversion log and see the history of changes for each database object. To use the migrations scheme above, we'd have to create a stored procedure in one migration and later alter it in a separate migration. Since the two migrations would live in separate files, our source control wouldn't give us a version history of that stored procedure.

We came up with a hybrid solution: schema changes to tables use a Rails-like migration scheme, while database objects are versioned in their own files. Both the schema migrations and the database object scripts are applied when we update the database.

For this to work, we have to be a little careful about how we create the database objects. We want the scripts to work whether the object is being created for the first time or being updated, which means ALTER statements won't do. The solution is simply to drop the object if it exists and then create it (e.g., IF OBJECT_ID('MyProcedure') IS NOT NULL DROP PROCEDURE MyProcedure, followed by the CREATE PROCEDURE). This is a fairly common pattern.

I wrote both an NAnt and an MSBuild task to do the dirty work. The task runs both the schema migrations and the database object updates; each is optional, so if migrations are all you want, that's all you need to use. It expects all migrations to live in the same directory and to match the pattern 1.comment.sql, where 1 is the version number. The version number is stored in a database table whose default name is SchemaVersion, with the following structure:

CREATE TABLE SchemaVersion (
  Version int,
  MigrationDate datetime,
  Comment varchar(255)
)
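
Pulling the version number out of a file name like 1.comment.sql takes only a couple of lines; for example (a sketch, assuming System.IO and System.Text.RegularExpressions are in scope):

// Sketch: extract the leading version number from a migration file
// named like "12.add_customer_table.sql".
private static int GetVersion(string path)
{
    Match match = Regex.Match(Path.GetFileName(path), @"^(\d+)\.");
    if (!match.Success)
        throw new ArgumentException(path + " is not a valid migration file name");
    return int.Parse(match.Groups[1].Value);
}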

I’ve only tested it on SQL Server, but I think the task should work with other DBMSs as well (it uses OLEDB). Migrations can contain batches (separated by the SQL Server GO command) and are run transactionally. Unlike the Rails example, the .NET migrations are written in SQL, and I don't yet have any rollback functionality.
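
The batch handling boils down to something like the following sketch (a simplification with minimal error handling, not the task's actual code; assume System.Data.OleDb and System.Text.RegularExpressions are in scope):

// Split a script on GO separators and run its batches in one transaction.
public static void RunScript(string connectionString, string script)
{
    // GO is a client-side batch separator, not SQL, so it can't be sent
    // to the server; split the script into individual batches instead.
    string[] batches = Regex.Split(script, @"^\s*GO\s*$",
        RegexOptions.Multiline | RegexOptions.IgnoreCase);

    using (OleDbConnection connection = new OleDbConnection(connectionString))
    {
        connection.Open();
        OleDbTransaction transaction = connection.BeginTransaction();
        try
        {
            foreach (string batch in batches)
            {
                if (batch.Trim().Length == 0)
                    continue;

                OleDbCommand command = new OleDbCommand(batch, connection, transaction);
                command.ExecuteNonQuery();
            }
            transaction.Commit();
        }
        catch
        {
            transaction.Rollback();
            throw;
        }
    }
}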

You can include any extra SQL files you want in the DatabaseObjects property. Both NAnt and MSBuild have convenient ways to recursively add all files matching an extension.

Here’s an NAnt example:

<target name="migrate" description="Update the database">
    <loadtasks assembly="Migrations.dll" />
    <migrateDatabase
        connectionString="${connectionString}"
        migrationsDirectory="db/migrations"
        commandTimeout="600"
        batchSeparator="go">
        <fileset>
            <include name="db/functions/**/*.sql"/>
            <include name="db/procedures/**/*.sql"/>
            <include name="db/triggers/**/*.sql"/>
            <include name="db/views/**/*.sql"/>
        </fileset>
    </migrateDatabase>
</target>

And here it is using MSBuild:

<ItemGroup>
    <DatabaseObjects Include="db/functions/**/*.sql"/>
    <DatabaseObjects Include="db/procedures/**/*.sql"/>
    <DatabaseObjects Include="db/triggers/**/*.sql"/>
    <DatabaseObjects Include="db/views/**/*.sql"/>
</ItemGroup>

<Target Name="dbMigrate">
    <MigrateDatabase 
        ConnectionString="$(ConnectionString)"
        MigrationsDirectory="db/migrations"
        DatabaseObjects="@(DatabaseObjects)"
        CommandTimeout="600"
        TableName="version_info" />
</Target>

The source code and binaries can be found here.

Written by Brandon Byars

April 14, 2007 at 10:35 pm