
The Other MD3.Create Procedures

Metadata Driven Database Deployments (MD3) is my build-and-deploy mechanism for SQL Server-based applications.  Check out the other posts in my MD3 blog series.  In the last few posts we covered MD3 and "state" and how MD3 deploys "stateful database objects" using The Four Rules.  If you handle The Four Rules correctly (and MD3 can greatly help you with this) then your database deployments should be a breeze.  No more hand-crafted, error-prone DDL.  In this post we are going to cover the other MD3.Create* procedures, which help with database deployments and The Four Rules.  

The Other Stateful Database Objects

Here are the other stateful database objects and how MD3 handles their deployment.  In most cases this is handled with an MD3.Create* procedure.  These procedures deploy your stateful objects "smartly".  

  • Tables/columns: Tables are created using standard DDL and can be scripted using SSMS tools.  Columns can be added or altered using MD3.CreateColumn; this is discussed further in the post MD3 Script Idempotency and MD3.CreateColumn (source code).  
  • Indexes: This was covered in the post MD3 and "state" using MD3.CreateIndex.  Source code.  
  • Primary Keys: We'll cover this below.  Source code for MD3.CreatePrimaryKey.  
  • Foreign Keys: We'll cover this below.  Source code for MD3.CreateForeignKey.  
  • CHECK constraints: We'll cover this below.  Source code for MD3.CreateCheckConstraint.  
  • DEFAULT constraints: Handled with MD3.CreateDefaultConstraint, and covered below.  This may not seem like a "stateful" database object, but it is.  Source code.  
  • Model Data (states/provinces, valid order statuses, other "lookup" data): Covered in the post MD3 Model Data Patterns.  

Each of the above referenced MD3.Create procedures handles The Four Rules correctly.  Let's look at these routines.  

MD3.CreatePrimaryKey

Conceptually, primary keys are very similar to indexes.  You can cluster them, give them a name, give them compound keys, put them on a filegroup, etc.  Therefore it shouldn't be surprising that MD3.CreatePrimaryKey takes the exact same parameters as MD3.CreateIndex.  
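For illustration, a call might look something like this (the parameter names here are illustrative, not the actual MD3 signature; see the linked source code for the real one):

    EXEC MD3.CreatePrimaryKey
        @SchemaName = 'dbo',
        @TableName  = 'Example',
        @PKName     = 'PK_Example',
        @KeyColumns = 'ExampleID',    -- compound keys as a comma-separated list
        @IsClustered = 1,
        @FillFactor  = 90,
        @FileGroupName = 'PRIMARY';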

The difference is what each procedure does under-the-covers.  If you change a PK you may actually have to deal with objects that depend on it.  Here's a list of objects that are dependent on PKs and how MD3 handles PK changes.  You can of course change this functionality to suit your requirements:  

  • If the table is replicated then the table must be removed from any subscribed publications.  Currently MD3 merely throws an error if this situation exists. 
  • MD3.CreatePrimaryKey removes all FKs that reference the PK.  Later, when MD3.CreateForeignKey is executed (remember, MD3 handles the dependent ordering of objects for you) the FKs will be recreated properly with the new keys.  
  • If we are changing the PK to be clustered, we first drop any existing clustered index on the table.  As with foreign keys, MD3 handles the dependencies for you...meaning the dropped clustered index can be recreated later as a non-clustered index if required.  

In the next post, MD3.CreatePrimaryKey in Action, I'll show you specific use cases where MD3 can save you vast amounts of time and rework when you have to modify a primary key.  Stay tuned.  

MD3.CreateForeignKey

FKs are stateful database objects because they require a properly-applied "parent" key to be in place first.  FKs can also be very expensive to recreate if the associated tables are large.  Here are the parameters (properties) for MD3.CreateForeignKey:
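(The screenshot of the parameter list didn't survive this extract.  A hedged sketch of a call follows; only @AllowNoCheck is confirmed by the text below, the other parameter names are illustrative:)

    EXEC MD3.CreateForeignKey
        @FKName       = 'FK_OrderDetail_Order',
        @SchemaName   = 'dbo',
        @TableName    = 'OrderDetail',
        @Columns      = 'OrderID',
        @RefSchemaName = 'dbo',
        @RefTableName  = 'Order',
        @RefColumns    = 'OrderID',
        @AllowNoCheck  = 0;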

Like the other stateful object creator routines, MD3.CreateForeignKey properly implements The Four Rules.  The parameters should be self-explanatory, except maybe @AllowNoCheck.  Constraints can be either TRUSTED or UNTRUSTED.  When you create a FK using ALTER TABLE...WITH CHECK ADD CONSTRAINT you are saying that existing data in the table must adhere to the constraint, as well as any future data.  When you create a FK using ALTER TABLE...WITH NOCHECK ADD CONSTRAINT you are allowing existing data to violate the constraint; it will be enforced for future data only.  This is an untrusted constraint.  There are two reasons why untrusted constraints are frowned upon:  

  1. It means you may have data that violates your constraint and you should probably determine why that is, and fix it.  
  2. Some query plans will be less optimal with an untrusted constraint, leading to possible performance issues.  

MD3 by default does NOT allow untrusted FKs to be created.  However, the @AllowNoCheck parameter allows you to override that, if needed.  MD3 will first try to create the constraint using WITH CHECK and then fall back to WITH NOCHECK if you have @AllowNoCheck set.  MD3 throws a WARNING in this case that you can then research later.  
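For reference, here is the difference in plain DDL, plus how to find and later fix untrusted FKs (generic example code, not MD3 source; the table names are made up):

    -- trusted: existing rows are validated (sys.foreign_keys.is_not_trusted = 0)
    ALTER TABLE dbo.OrderDetail WITH CHECK
        ADD CONSTRAINT FK_OrderDetail_Order
        FOREIGN KEY (OrderID) REFERENCES dbo.[Order] (OrderID);

    -- untrusted: existing rows are NOT validated (is_not_trusted = 1)
    -- (you would run one or the other, not both)
    ALTER TABLE dbo.OrderDetail WITH NOCHECK
        ADD CONSTRAINT FK_OrderDetail_Order
        FOREIGN KEY (OrderID) REFERENCES dbo.[Order] (OrderID);

    -- find untrusted FKs to research later
    SELECT name FROM sys.foreign_keys WHERE is_not_trusted = 1;

    -- once the offending data is fixed, make the constraint trusted again
    ALTER TABLE dbo.OrderDetail WITH CHECK CHECK CONSTRAINT FK_OrderDetail_Order;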

MD3.CreateCheckConstraint

You may wonder why CHECK constraints are stateful database objects.  The reason is that recreating a CHECK constraint on a large table can be EXTREMELY time consuming as all data must be read and evaluated.  We also need to take into consideration whether the constraint is trusted or not, just like with foreign keys.  Here is the set of parameters that MD3.CreateCheckConstraint uses:  
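(The parameter list image didn't survive this extract.  An illustrative sketch of a call; none of these parameter names are confirmed by the source, so check the linked code:)

    EXEC MD3.CreateCheckConstraint
        @SchemaName     = 'dbo',
        @TableName      = 'Example',
        @ConstraintName = 'CK_Example_FooNonNegative',
        @CheckDefinition = '([Foo] >= 0)',
        @AllowNoCheck   = 0;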

 

MD3.CreateDefaultConstraint

Defaults are also treated as stateful objects, mostly because they are declared very similarly to CHECK constraints and the deployment code is very similar.  
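A generic idempotent pattern for a DEFAULT constraint looks like this (illustrative DDL, not the MD3 source; MD3.CreateDefaultConstraint wraps this kind of logic for you):

    IF NOT EXISTS (SELECT 1 FROM sys.default_constraints
                   WHERE parent_object_id = OBJECT_ID(N'dbo.Example')
                     AND name = N'DF_Example_Foo')
        ALTER TABLE dbo.Example
            ADD CONSTRAINT DF_Example_Foo DEFAULT (0) FOR Foo;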

Summary

In MD3 and "state" I covered how MD3 handles stateful database objects, specifically indexes.  In this post I covered how MD3 deploys the other stateful database objects like PKs, FKs, CHECK constraints, and defaults.  In MD3 Script Idempotency and MD3.CreateColumn I'm going to cover what is probably the most important stateful database object...the table.  In the next post, MD3.CreatePrimaryKey in Action, I'll show you specific use cases where using MD3 instead of hand-crafted DDL can save you a lot of grief.  


You have just read "The Other MD3.Create Procedures" on davewentzel.com. If you found this useful please feel free to subscribe to the RSS feed.  

MD3 and The Four Rules

In the last few posts I covered what MD3 is (Metadata Driven Database Deployments) and why it is better than hand-crafted DDL.  In this post we are going to cover The Four Rules of Database Deployments.  Regardless of whether you decide to use MD3 or not, if you understand The Four Rules your database deployments will run much smoother.  

 
Some Background
In the last post (MD3 and "state") we covered a scenario where we had to write really complex DDL if we wanted to deploy an index change properly.  We need an index, called MyIndex, created on dbo.Example with a fill_factor of 90, on column Foo, with included_column Bux.  Here is the DDL:
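(The DDL screenshot is missing from this extract; a reconstruction from the description above:)

    IF NOT EXISTS (SELECT 1 FROM sys.indexes
                   WHERE object_id = OBJECT_ID(N'dbo.Example')
                     AND name = N'MyIndex')
        CREATE NONCLUSTERED INDEX MyIndex ON dbo.Example (Foo)
            INCLUDE (Bux)
            WITH (FILLFACTOR = 90);
    -- ...and this still doesn't handle a rename, a fill factor change, or a new
    -- included column.  Each of those needs its own hand-crafted branch.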
 
DDL is kinda esoteric and difficult to remember, as seen above.  DML for master/seed/model/system data that you may need to deliver is similarly complex (we'll look at this problem in the post MD3 Model Data Patterns).  In MD3 DDL is much simpler.  We just "declare" the properties of our object and let MD3 determine what needs to be done to get us to "model".  Basically, we enter "metadata" about our object into an MD3 table (for instance MD3.Indexes), which looks very similar to the "sys" system tables (in this case sys.indexes).  Here's a screenshot that shows how to add an index to the HumanResources.Department table:   
 
 
Pretty simple.  We have a metadata table called MD3.Indexes that looks VERY similar to sys.indexes.  We "declare" our new index with its desired properties right into that table, then we run MD3.  Under the covers MD3 is making calls to MD3.CreateIndex (which you can call instead of populating the MD3.Indexes table) similar to the nearby graphic that is adding the index nciFoo to dbo.Example.  
 
Each parameter to MD3.CreateIndex (and each column in the MD3.Indexes table; see the above screenshot) is a metadata property that we can declare for an index.  These properties cover just about every option you can think of regarding an index.  There are some properties missing...for instance, there is no option for filtered indexes because I've never really found a good use case in my career where I absolutely needed a filtered index.  Since MD3 is open source you could always add this functionality if you needed it.  Here are some other index properties that are missing and why:
  • WHERE <filter predicate>: filtered indexes.  You can always add this yourself.  
  • IGNORE_DUP_KEY: I've never found a good use for this option.  
  • STATISTICS_NORECOMPUTE: I've never found a need for this.  
  • DROP_EXISTING ON|OFF: MD3.CreateIndex will smartly determine whether this should be set to ON or OFF based on what is being asked.  
  • ONLINE ON|OFF: MD3.CreateIndex will determine whether this option should be used for you, based on whether the index follows the rules for ONLINE index creation.  Here is another case where MD3 is a huge help.  

So the question is...how does MD3 actually build the DDL under the covers and what makes it so much better than hand-crafted DDL? 

The Four Rules of Stateful Object Deployment
 
If you always follow The Four Rules when you code your DDL you will always have a successful deployment (no failures!) and you will never do unnecessary DDL. MD3 does all of this for you.  Assume you have some DDL that you need to apply:  
  1. If the object exists by name and by "properties", then silently continue; there is no work to do.  In other words, there was no change in the object between deployments.  The "properties" are the parameters to the MD3.Create procedures (see the screenshot above).  
  2. If an object exists with the same "properties", but the name is different, then RENAME the object.  Example: I have an index with the same column list, order, fill factor, on the same filegroup, with the same compression, etc...but it is named differently.  All MD3 does is run sp_rename.  This scenario occurs, for instance, when your DBA adds an emergency index to a table and you check in the same index structure, but you use your naming conventions.  
  3. If the object exists by name, but the "properties" are different, ALTER the object accordingly.  Example: you have an index with the same name and mostly the same properties, but you need to change the INCLUDED column list and FILLFACTOR.  MD3 handles this without esoteric, hand-crafted DDL.  Usually an ALTER can bring the current object up to the desired properties; otherwise DROP/CREATE is used.  The most important thing to remember is that YOU don't have to code the DDL and worry about covering all possible scenarios.  
  4. Else, CREATE the object with the properties as defined.  At this point MD3 realizes this is a missing object and creates it according to your declarations.  
If you don't do The Four Rules correctly you risk, at a minimum, recreating expensive objects at every deployment.  You don't need MD3 to do a DDL deployment, but it makes it a lot easier and less error-prone.  
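To make the rules concrete, here's a highly simplified sketch of the decision logic for a single-column index (illustrative only...the real logic lives in MD3.CreateIndex and compares many more properties):

    DECLARE @TableName  sysname = N'dbo.Example',
            @IndexName  sysname = N'nciFoo',
            @KeyColumn  sysname = N'Foo',
            @FillFactor tinyint = 90;

    -- find any index on the table whose (single) key column and fill factor
    -- already match the declared properties
    DECLARE @ExistingName sysname =
       (SELECT TOP (1) i.name
          FROM sys.indexes i
          JOIN sys.index_columns ic ON ic.object_id = i.object_id
                                   AND ic.index_id  = i.index_id
          JOIN sys.columns c        ON c.object_id  = ic.object_id
                                   AND c.column_id  = ic.column_id
         WHERE i.object_id = OBJECT_ID(@TableName)
           AND ic.key_ordinal = 1
           AND c.name = @KeyColumn
           AND i.fill_factor = @FillFactor);

    IF @ExistingName = @IndexName
        PRINT 'Rule 1: exists by name and properties -- silently continue';
    ELSE IF @ExistingName IS NOT NULL
    BEGIN
        DECLARE @OldName nvarchar(1000) = @TableName + N'.' + @ExistingName;
        EXEC sp_rename @OldName, @IndexName, N'INDEX';   -- Rule 2: rename
    END
    ELSE IF INDEXPROPERTY(OBJECT_ID(@TableName), @IndexName, 'IndexID') IS NOT NULL
        PRINT 'Rule 3: exists by name, properties differ -- ALTER (or DROP/CREATE)';
    ELSE
        PRINT 'Rule 4: missing -- CREATE with the declared properties';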
 
An Example
Let's take our example code again and determine what MD3 will do in various situations regarding The Four Rules.  The below table is kinda like the "unit tests" for MD3.CreateIndex (tsqlt unit tests available on request).  
 
  • dbo.Example table does not exist: FAILURE.  This scenario will be caught during continuous integration testing and will never happen during a production deployment (see Continuous Integration Testing with MD3).  Covers none of The Four Rules.  
  • Index nciFoo does not exist: the index is created with the declared properties.  Covers Rule 4.  
  • An index named dbaFoo exists with the same properties: the index is renamed to nciFoo using sp_rename.  This ensures that a duplicate index is not created with the same properties.  Covers Rule 2.  
  • nciFoo exists, but with different "properties" than what is declared: when possible the index is ALTERed; otherwise the existing index is dropped and the desired index is created.  In some cases, depending on what is being changed, an index cannot be ALTERed.  Covers Rule 3.  
  • nciFoo exists with identical properties: silently continue; there is no work to do.  Covers Rule 1.  
 
Summary
In this post we looked at The Four Rules of stateful database object deployments.  If you follow The Four Rules when you deploy your objects you will find that you have fewer errors and more reliable, repeatable deployments.  You don't have to use MD3 to use The Four Rules.  You can use any tool, even plain 'ol DDL.  MD3 just makes everything a whole lot easier.  
 
In the next post, The Other MD3.Create Procedures, we'll cover how The Four Rules are applied to other stateful database objects.  In a later post we'll cover how MD3's model data patterns work with The Four Rules to ensure your model/system/lookup data deployments work just like your other stateful object deployments.  
 
 
 

You have just read "MD3 and The Four Rules" on davewentzel.com. If you found this useful please feel free to subscribe to the RSS feed.  

MD3 and "state"

Metadata Driven Database Deployments (MD3) is my build-and-deploy mechanism for SQL Server-based applications.  Check out the other posts in my MD3 blog series.  In the last post (Why do we need another database deployment tool like MD3?) I covered some reasons why database deployments give DBAs and developers fits.  In a nutshell, it's because we write too much hand-crafted DDL.  Even when we use third-party GUI tools to manage our deployments we still hit edge cases where the migration isn't smooth.  So why are schema changes so difficult?  

Database Object State and Builds
 
Quick Links
Introduction to Metadata Driven Database Deployments
Download MD3 (with samples)
Documentation on CodePlex/git
Presentation on MD3
Other Posts about MD3
AdventureWorks2013 reverse-engineered into MD3 (takes 5 mins to do with ANY database)

When you "build" your Java app (...or C# or whatever...) you are really just compiling the code on your machine.  If compilation is successful then you can rest assured that it will compile on, basically, any machine (OS and processor architecture might be exceptions).  The deployment is simply a matter of copying your compiled code.  In the .NET world you usually build a Windows Installer package or perform an "xcopy deployment".  Build and deploy is really easy, as is building the necessary "CI loops" (...they run your unit tests...) to ensure your builds aren't broken when a developer checks in a change.  

 
Databases work differently.  Just because your scripts compile and deploy on your machine/database does not mean they will deploy anywhere else.
 
Why?
 
Java code, when it compiles, couldn't care less about the "state" of the previous build.  The code is “stateless”.  Some database code, on the other hand, is “stateless” while some is “stateful”.  Database developers need to worry about “history”, or "state", which is the existing data in the tables that must be maintained into the next release.  The code must do more than just compile, it must be respectful of existing data.  And data isn't the only stateful database object.  You don't want to recreate a bunch of indexes needlessly either.
 
Stateless Objects:
  • Views
  • Procedures
  • Functions
  • Model Data (sometimes called seed/lookup/master data).  An example is a list of states and abbreviations.  This data rarely changes and when it does it changes via an approved process (ie, a Master Data Management update).

Stateful Objects:
  • Tables/columns
  • Indexes
  • Constraints (PKs/FKs/CHECK)
  • Model Data (if the data can be altered by the customer).  An example is order status codes.  If you allow your customer to change (or add to) their model data then you don't want to "reset" their model data with your updates...or...maybe you do, depending on the scenario.  
 
Stateless Object Patterns
Stateless db objects are the easiest to deploy...you simply compile them.  You do not care about what the previous version of the object looks like, you simply create the new version, overwriting the old.  Just like you would overwrite the last Java build with the current one.  Traditionally you code stateless objects using a pattern similar to the graphic to the right.  
 
Since the object, in this case a stored procedure, is stateless we can simply drop it and re-create it.  When doing that we lose our permissions so we must re-apply them.  It would be nice if we had the CREATE OR REPLACE pattern for Transact-SQL like Oracle does.  CREATE OR REPLACE is simplified syntax that obviates the need for re-applying permissions and checking if the object exists.  Oh well...maybe in the next release of SQL Server.  
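The traditional pattern referenced above (the graphic didn't survive this extract) is the familiar drop/recreate/re-grant, something like this (proc and role names are made up):

    IF OBJECT_ID(N'dbo.MyProc', N'P') IS NOT NULL
        DROP PROCEDURE dbo.MyProc;
    GO
    CREATE PROCEDURE dbo.MyProc
    AS
    BEGIN
        SELECT 1 AS JustAnExample;
    END;
    GO
    -- the DROP also dropped the permissions, so re-apply them
    GRANT EXECUTE ON dbo.MyProc TO SomeAppRole;   -- role name is illustrative
    GO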
 
Generally you should deploy your stateless objects near the end of your deployment.  There are some exceptions to this rule:  
  1. You may have object dependencies which require certain stateless objects to be deployed in a specific order.  For instance, if viewA references functionB which in turn references viewC then you have to ensure those objects are applied in a certain order.  
  2. if you have SCHEMABINDING applied to any of your objects and you try to first manipulate a stateful object (such as a table) you'll need to drop the object with the SCHEMABINDING clause first.  

MD3 handles all of this for you and in the demo files I give examples of this and how it is very simple to handle these edge cases. 

Stateful Objects and Patterns
As mentioned before, Java has no "stateful" code...if the code compiles you are good...if it doesn't compile you have problems.  In the database world it is the stateful code that causes database developers fits.  
 
An example of "stateful" database code that can be maddening is index DDL.  Let's see why that is.  You can download the example code here.  We start by creating a very simple table with a few columns.  Most developers just create their tables and then either worry about the indexes later or let the DBAs worry about it when performance becomes a problem.  At a later time the developers realize that they need to add an index to the Foo column.  If you are using a tool like Visual Studio db projects then you can use very simple CREATE INDEX syntax.  Otherwise you need to do an "existence check" to ensure that a subsequent execution of your script won't generate an "object already exists" error.  
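(The code screenshots didn't survive this extract; reconstructions of each step follow.)  The basic "existence check" looks something like this:

    IF NOT EXISTS (SELECT 1 FROM sys.indexes
                   WHERE object_id = OBJECT_ID(N'dbo.Example')
                     AND name = N'nciFoo')
        CREATE NONCLUSTERED INDEX nciFoo ON dbo.Example (Foo);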
 
 
The "existence check" syntax isn't terribly difficult to handle...yet.  Now let's say you notice a lot of page splitting and realize that your table design and data access patterns indicate that you need to change the fillfactor to 90.  Quick...do you remember where the fillfactor settings are stored in the metadata tables so you can modify the "existence check"?  You could just code up something quickly like this:  
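(A reconstruction of the missing snippet:)

    IF EXISTS (SELECT 1 FROM sys.indexes
               WHERE object_id = OBJECT_ID(N'dbo.Example')
                 AND name = N'nciFoo')
        DROP INDEX nciFoo ON dbo.Example;
    CREATE NONCLUSTERED INDEX nciFoo ON dbo.Example (Foo)
        WITH (FILLFACTOR = 90);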
 
 
The problem is that you will drop and recreate the index at every deployment.  Again, a tool like VS db projects will obviate the need for this.  Better "existence checking" code would be something like this:    
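(Again reconstructed; this time we only rebuild when the fill factor is wrong:)

    IF EXISTS (SELECT 1 FROM sys.indexes
               WHERE object_id = OBJECT_ID(N'dbo.Example')
                 AND name = N'nciFoo'
                 AND fill_factor <> 90)
        DROP INDEX nciFoo ON dbo.Example;

    IF NOT EXISTS (SELECT 1 FROM sys.indexes
                   WHERE object_id = OBJECT_ID(N'dbo.Example')
                     AND name = N'nciFoo')
        CREATE NONCLUSTERED INDEX nciFoo ON dbo.Example (Foo)
            WITH (FILLFACTOR = 90);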
 
 
Our code is getting a little more "advanced" and much more "hand-crafted" than I am comfortable with.  
 
Here's a common scenario...once your index is released to production your DBAs notice that you have a bunch of Bookmark Lookups causing customers to report application slowness and they believe that they should add a couple of included columns to your index.  The DBAs are nice enough to tell you that they've done this and that you need to make this new index "model code".  You determine this is the new index creation code you need: 
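(A reconstruction; the included column name is made up since the post doesn't list the columns the DBAs added:)

    IF EXISTS (SELECT 1 FROM sys.indexes i
               WHERE i.object_id = OBJECT_ID(N'dbo.Example')
                 AND i.name = N'nciFoo'
                 AND (i.fill_factor <> 90
                      OR NOT EXISTS (SELECT 1
                                     FROM sys.index_columns ic
                                     JOIN sys.columns c
                                       ON c.object_id = ic.object_id
                                      AND c.column_id = ic.column_id
                                     WHERE ic.object_id = i.object_id
                                       AND ic.index_id = i.index_id
                                       AND ic.is_included_column = 1
                                       AND c.name = N'Bar')))
        DROP INDEX nciFoo ON dbo.Example;

    IF NOT EXISTS (SELECT 1 FROM sys.indexes
                   WHERE object_id = OBJECT_ID(N'dbo.Example')
                     AND name = N'nciFoo')
        CREATE NONCLUSTERED INDEX nciFoo ON dbo.Example (Foo)
            INCLUDE (Bar)
            WITH (FILLFACTOR = 90);
    -- ...and we still haven't checked key order, uniqueness, compression,
    -- filegroup, or extra included columns.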
 
 
Yikes!  That's getting to be really complicated code.  
 
Here's a slightly different example.  Your DBA calls you and tells you that he saw a gigantic "blocking storm" and needed to add an index on the Bar column.  You agree with his analysis and you both decide to make this a model index.  The first block of code is what the DBA added (note the index name doesn't follow your naming conventions) and the second block is the code you made "model".  Do you see the problem?  
 
 
You really should've renamed his index, if his index exists.  Now you risk a duplicate index during a deployment.  
 
A Better Way to Handle Stateful Objects
 
DDL is kinda esoteric and difficult to remember, as seen above.  DML for master/seed/model/system data that you may need to deliver is similarly complex (we'll look at this problem later).  In MD3 DDL is much simpler.  Wouldn't it be nice if you could just enter the "properties" of the desired object in a metadata table and SQL Server just did what it needed to do to make that object for you?  MD3 does that for you.  Here's a screenshot that shows how to add an index to the HumanResources.Department table:   
 
 
Pretty simple.  We have a metadata table called MD3.Indexes that looks VERY similar to sys.indexes.  We "declare" our new index with its desired properties right into that table, then we run MD3.  
 

Now let's say you want to alter that index to change your compression settings from NONE to ROW...just update the entry in MD3.Indexes and rerun MD3. Need to add an Included column?  Change the NULL above to the name of the column you want to include and rerun MD3.  Maybe the index is no longer needed...just change the DropIndex column to 1.  
Under-the-covers MD3 is simply cursor-ing over the metadata table (MD3.Indexes) and determining if there is any work to do (more on how this is done in the next section).  It does this by calling the MD3 stored procedures that do the actual work...for instance, EXEC MD3.CreateIndex.  You can call these procedures and use them yourself without using the MD3.Indexes metadata table.  Here's an example call that creates nciFoo.  Let's say you decide to change this index to have a FillFactor of 90...how do you think you would do that?  That's right, change the parameter and run the procedure.  Want to change the filegroup (or partition)...take a guess at the process?  No complex hand-crafted DDL is required.  
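(The example call lives in a missing graphic; here is a sketch with illustrative parameter names, not the confirmed MD3 signature:)

    EXEC MD3.CreateIndex
        @SchemaName = 'dbo',
        @TableName  = 'Example',
        @IndexName  = 'nciFoo',
        @KeyColumns = 'Foo',        -- 'Foo,Bar' would make it a compound key
        @IncludedColumns = NULL,
        @IsUnique   = 0,
        @IsClustered = 0,
        @FillFactor = 90,
        @FileGroupName = 'PRIMARY';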
 
So the question is...how does MD3 actually build the DDL under the covers and what makes it so much better than hand-crafted DDL?  MD3 uses The Four Rules.  We'll cover MD3 and The Four Rules in the next post.

You have just read "MD3 and 'state'" on davewentzel.com. If you found this useful please feel free to subscribe to the RSS feed.  

Why do we need another database deployment tool like MD3?

This is the next post in my MD3 series (Metadata Driven Database Deployment).  MD3 is my build-and-deploy system that I've evolved over the past 15 years. Quick history...I worked at an ISV and we supported 4 versions of our software (Version X back to Version X-3) and we needed to support upgrading any of those versions to "current". We had about 40 customers with the average db size ~40GB. To make matters even more complicated we supported either SQL Server or Oracle.  Both had the same schema...mostly.

Quick Links
Introduction to Metadata Driven Database Deployments
Download MD3 (with samples)
Documentation on CodePlex/git
Presentation on MD3
Other Posts about MD3
AdventureWorks2013 reverse-engineered into MD3 (takes 5 mins to do with ANY database)

So we needed a tool to upgrade a database reliably without having non-database developers writing complex, error-prone, hand-crafted DDL. Most of our data developers knew Oracle or SQL Server, but few knew both.  I learned a lot about what works and what doesn't when upgrading a database in a complex data environment like this. I created a database CI loop (also included with MD3) before CI loops were sexy and ubiquitous. Every day we tested our db scripts against real customer databases and ensured everything upgraded and functioned properly. We never worried about the actual customer production upgrade event because our CI loops already upgraded the customer hundreds of times (albeit on older, stripped, and scrubbed databases).  

I've deployed versions of this tool at many different employers and clients. Each has been radically successful.  

  • The developers love how easy it is to make schema changes without hand-crafting DDL.  It is totally flexible and customizable to whatever you need.
  • DBAs and app admins love it because it is scriptable, has customizable logging, and shows the SQL that is being executed.  Nothing is hidden from the DBA.   
  • Management likes the fact that it requires zero training and creates a repeatable, reliable process.  

The latest incarnation, which I haven't yet open-sourced, is a version that will do a near Zero Downtime database upgrade. Contact me if you might be interested in this.  

Why do we need another database deployment tool?

There are lots of database deployment tools out there.  They each work great...mostly.  But development teams and DBAs don't trust them when it comes to complex schema changes.  Don't believe me?  Assume the following scenario...you need to store new data in your database due to a change in business requirements.  There are lots of ways you could do this but you honestly believe the best way is to refactor and split the largest table in your database into 3 new tables.  
 
How likely are you to propose this solution to your team?  My guess is not very likely.  The reasons are simple: 
  • Modifying a table's schema requires you to "port" the old database structure to the new one, and the data along with it.  This is fraught with danger.  Perhaps each version of the table has also experienced schema changes which you also need to take into consideration.  You have to then change keys (both primary and foreign) and clustering as well as all of the non-clustered indexes.  What if each customer/version has a different set of indexes and keys?  
  • Then you need to modify each database object that accesses the table.  All of your stored procs, functions, and views.  Then you need to modify your Java (or C# or whatever).  All of this scares the average database developer.  
  • Since this is a large table how can we guarantee this won't cause our customer excessive downtime while we migrate the data?  How do we verify the customers' data afterwards?  
NoSQL:  "RDBMS schemas are too hard to change"
 
Altering schemas scares developers so much that an entire cottage industry grew around the fact that relational databases are too darn hard to "evolve" with ever-changing business rules.  One of the primary reasons the NoSQL movement (whenever I hear this I think, "technology or bowel?") grew, especially document and key-value stores, is that they touted the fact that their technologies allowed you to modify your schemas without needing to worry about modifying all of that data access code that scares us all.  And there was no need to "port" the data from the old schema to the new.  
 
But relational schemas really aren't that difficult to evolve.  You really don't need to write a bunch of complex hand-crafted DDL.  And you don't need a third-party tool either where you hit edge cases where your data won't migrate to the new schema properly.  So why are schema changes so difficult?  
 
Just Enough to Whet Your Appetite
MD3 deployments are "declarative", meaning you simply declare what you want your object to look like and MD3 figures out the best DDL commands to run (and you can review them) to get your schema to that destination.  Here is a simple example.  Assume we have a table named dbo.Example and we decide a non-clustered index on Foo would be beneficial to performance.  See the nearby graphic to see how easy this is with MD3.  (Actually, the command is even easier than that, and I'll show you that in the next post.)  But MD3 gets even better.  Let's say a DBA already noticed a performance problem and indexed Foo already but named the index dbaFooPerfProblem.  In that case the index will be renamed.  You won't get a duplicate index.  Let's say later you decide to make nciFoo into a compound index with Foo,Bar as the keys.  Simply changing Line 21 to 'Foo,Bar' will automatically change the existing nciFoo, if it exists, or build a new nciFoo with the declared properties, if it doesn't exist.  
 
In the next post I'll cover MD3 and "state", which is the key to a good database deployment routine.  
 

You have just read "Why do we need another database deployment tool like MD3?" on davewentzel.com. If you found this useful please feel free to subscribe to the RSS feed.  

Introduction to Metadata Driven Database Deployments

MD3 (Metadata Driven Database Deployment) is my build-and-deploy system that I've evolved over the past 15 years. Deploying database code tends to be challenging for most organizations, especially ISVs that may support multiple versions of their software at many customers.  MD3 handles this all very elegantly.  Over the next few posts I'll cover why I think MD3 is the best choice for your database deployments.  

Table of Contents

  • Introduction to Metadata Driven Database Deployments: This page.  
  • Why do we need another database deployment tool like MD3?: Covers some scenarios where traditional database deployment tools fail and how MD3 makes hand-crafted DDL a thing of the past.  
  • MD3 and "state": In this post we cover stateful vs stateless database objects.  It is the stateful objects (those with "history" that we need to consider) that cause our database deployments grief.  We'll cover how MD3 handles this problem.  
  • MD3 and The Four Rules: The Four Rules will ensure you always deploy your stateful database objects correctly.  You don't need MD3 to use The Four Rules, but it helps.  
  • The Other MD3.Create Procedures: Covers the other "stateful" database objects and how MD3 handles their deployment.  
  • MD3.CreatePrimaryKey in Action: Shows some use cases with primary keys where MD3 can really save you a lot of time and rework.  We also further demonstrate how The Four Rules work.  
  • MD3 Script Idempotency and MD3.CreateColumn: We cover the last MD3.Create procedure in this post, as well as the one aspect of MD3 scripts that is mandatory...scripts must be idempotent, which is just a fancy way of saying that all scripts must be rerunnable.
  • The OO Design Principles of MD3: There are certain object-oriented design principles I incorporated into MD3.  These principles are what set MD3 apart from other database deployment tools.  
  • MD3 Model Data Patterns: Model data is the data, usually static, that you deliver with your application.  MD3 deploys this data in much the same way as other stateful objects are deployed.  We cover how this is done in this post.  
  • RunMD3.ps1: The MD3 Executor: RunMD3.ps1 is a short Powershell script that simply executes each .sql file in each subfolder in your deployment folder.  In this post I'll cover how it does what it does, and why.  
  • MD3 Deployments in Action: In this post we put everything together and actually show how deployments work with MD3.  
  • How to Reverse-Engineer Your Database with MD3: The hardest part of any new deployment tool is getting your database migrated into the new structure.  This is actually easy with MD3.  I'll demo how to reverse-engineer AdventureWorks into the MD3 format, which takes about a couple of hours to do.  
  • Continuous Integration Testing with MD3: CI loops are a definite development best practice, yet few shops do this with their database code.  And if they do CI loops they may not be doing them correctly.  In this post I'll share some secrets to doing really good CI testing with your database code and then I'll show you how to make it really great with MD3.  
  • MD3: Why you should care about declarative database deployments: If you still don't see the value of declarative/properties/metadata-driven database deployments I'll give you some additional arguments in this post.  
  • MD3 Extensibility Stories: I've deployed MD3 at numerous employers and clients and it has always been a success.  In this post I'll recount some stories about how I saved tons of man-hours refactoring database schemas using MD3.  
  • What are the benefits of using MD3 over my existing solution?: You don't have to use the entirety of MD3 to get some of its benefits.  You can pull out just the pieces that you feel may help you.  This is a summary post that reviews all of the benefits of the tool.  

 

Quick Links
Download MD3 (with samples)
Documentation on CodePlex/git
Presentation on MD3
Other Posts about MD3
AdventureWorks2013 reverse-engineered into MD3 (takes 5 mins to do with ANY database)

You have just read "Introduction to Metadata Driven Database Deployments" on davewentzel.com. If you found this useful please feel free to subscribe to the RSS feed.  

Fun With Transactions

Demo files can be found here

This is a short presentation on some of the confusing aspects of transaction handling in SQL Server.  We start with transaction naming:  

Transaction Names

Even on a case-insensitive database this will fail:  
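(The snippet is in the demo files; a reconstruction consistent with the description:)

    BEGIN TRANSACTION MyTran;
    ROLLBACK TRANSACTION MYTRAN;   -- fails: transaction names are always case-sensitive,
                                   -- even on a case-insensitive instance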

But this will not fail:  
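(Another reconstruction:)

    BEGIN TRANSACTION MyTran;
    ROLLBACK TRANSACTION MyTran;   -- exact-case match, so this succeeds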

And even this will work because transaction names are totally ignored when issuing a COMMIT:
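(Reconstruction:)

    BEGIN TRANSACTION MyTran;
    COMMIT TRANSACTION SomeOtherName;   -- succeeds anyway: COMMIT ignores the name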

Autonomous Transactions

A sorely missed feature of SQL Server is native "autonomous transactions."  This is where you "pause" the current transaction and spin up a new transaction.  When the new transaction "completes" (commits or rolls back) then we continue the paused transaction.  Some use cases:

  • out-of-band logging/auditing
  • custom sequence generators

.NET and Java developers have the concept of REQUIRES_NEW (Propagation.REQUIRES_NEW in Spring, TransactionScopeOption.RequiresNew in .NET) to "pause" an existing transaction and start a new, autonomous transaction.  Under the covers this is implemented as a new spid with potentially different transaction semantics.  I give some demos on how to use Linked Servers in SQL Server to generate your own autonomous transactions. 
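The demos aren't reproduced here, but the core trick is a loopback linked server with transaction promotion disabled so remote calls run on their own spid and transaction (a sketch; the server name is made up and a dbo.AuditLog table is assumed to exist):

    DECLARE @self sysname = @@SERVERNAME;
    EXEC sp_addlinkedserver @server = N'loopback', @srvproduct = N'',
         @provider = N'SQLNCLI', @datasrc = @self;
    EXEC sp_serveroption N'loopback', N'rpc out', N'true';
    -- the key setting: don't enlist the remote call in the caller's transaction
    EXEC sp_serveroption N'loopback', N'remote proc transaction promotion', N'false';
    GO
    BEGIN TRAN;
    -- ...do some work that might roll back...
    EXEC (N'INSERT dbo.AuditLog (Msg) VALUES (''logged out-of-band'');') AT loopback;
    ROLLBACK;   -- the work rolls back, but the audit row survives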

Implicit Transactions

Implicit Transactions (IT) cause inordinate confusion.  At their worst they can cause what appears to be massive blocking events on your server.  Most developers and data access technologies do not use IT, the notable exception being the Java world (Spring JDBC, WebSphere, etc).  When SET IMPLICIT_TRANSACTIONS ON is set a transaction is started for you...if it is needed.  That "if" statement is what causes problems because the "if" assumptions are not properly documented by Microsoft.  If your driver forces you to use IT then ensure that whenever a connection is pulled from the pool that sp_reset_connection is being called.  This will reset the IT setting.  If this isn't done then you can "leak" connections where a developer might have turned OFF IT to get finer-grain control.  That's a big problem.  
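A quick generic illustration of the behavior (not the post's exact demo script):

    SET IMPLICIT_TRANSACTIONS ON;
    SELECT TOP (1) name FROM sys.objects;   -- this SELECT silently opened a transaction
    SELECT @@TRANCOUNT AS OpenTranCount;    -- returns 1
    COMMIT;                                 -- forget this and the locks are held
    SET IMPLICIT_TRANSACTIONS OFF;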

I have lots of demos on the confusing aspects of implicit transactions as well as how to monitor your server for IT anomalies that may cause blocking or "abandoned spid" problems for you.  

Demo files can be found here

 

 


You have just read "Fun With Transactions" on davewentzel.com. If you found this useful please feel free to subscribe to the RSS feed.  


Presenting at PSSUG next week

On Tuesday, December 2, 2014 beginning at 5:30 I'll be doing a short presentation on "Transaction Trivia" (I'll probably come up with a better title later) at the monthly PSSUG meeting at Microsoft's Malvern office.

Service Broker Demystified - Services

Services and the [DEFAULT] contract can be very confusing.  In this post I'll show you why that is and some simple ways to resolve the problems in your mind.  Then we'll look at how to model send-only and receive-only services, which is another constraint you can use in your SSB design. 

SSIS and data contracts

Have you ever tried to use a stored proc with a temp table in an SSIS package?  Did it work?  SSIS is severely hampered by the fact that this doesn't work seamlessly.  In this post I'll demo the problem and I'll give you the best fix for it.  The objective is to run a stored proc from SSIS and optionally send the output to a text file. 

Repro 

  • Create a very simple stored proc.  You can download the repro script here.  Our stored proc simply builds a temp table, inserts a few rows and then reads those rows as a result set to the caller.  No parameters...very simple.  
  • Create a new SSIS package.  
  • Add a Data Flow Task 
  • Double-click it.  
  • Add an "OLE DB Source" to the designer
  • Connect to your instance/db where you have dbo.SampleDataContract installed
  • Change the "Data access mode" to SQL Command and enter the call to our stored proc
  • Click "Preview..."
  • You should immediately generate an error 
  • The error text is:  

Exception from HRESULT:0xC020204A
Error code: 0x80004005
Description: "Invalid object name ''.".
Unable to retrieve column information from the data source. Make sure your target table in the database is available.  

This error occurs because you decided to use a temp table (#temptable...the kind in tempdb) somewhere in the text of your procedure.  That's not allowed.  SSIS (and SSRS and a number of ORM tools, etc) attempt to read the metadata about your procedure to determine what the output will look like (number of columns, names, and data types).  The error is telling you that the metadata could not be found for the temp table that you are attempting to use.  

BTW, this will not happen if you use table variables.  However, there are performance ramifications of using table variables in some versions of SQL Server.  

Data Contracts to the Rescue

I've written about Data Contracts for Stored Procedures before.  Quick recap...in the Oracle world you CANNOT return a result set from a stored procedure.  Instead you create a package to do this.  In the package "header" you declare the "data contract" (inputs and outputs, including ordinal position, name, and datatype).  Then, in the package body you write the actual code and that code must adhere to the declared data contract.  In the example, emp_actions is the package and it implements 2 procedures, a cursor (which is nothing like a SQL Server cursor, it is more like an ADO cursor in that it is a "CURrent Set Of Records"...ie a result set) and the output format of the cursor, which is known as a record type.  Note that the package body actually outputs the data when someone queries for the desc_salary result set.  

When I first started working with Oracle I thought this syntax was EXTREMELY cumbersome.  Too much typing of redundant information (a CURSOR is declared over a TYPE and then the actual query has to be bound to the CURSOR and to the TYPE).  Too much can go wrong here.  

I thought this was cumbersome until I started working with SQL Server in the Nineties.  SQL Server then only partially declared the full data contract.  Stored procs declared the inputs but not the outputs.  In later releases M$ graciously gave us functions and table-valued parameters which FULLY implement data contracts.  But they never fixed procedures to fully support data contracts.  Why do we need data contracts for stored procs?  See this connect item for a list of reasons...but having a true data contract solves the SSIS problem I noted above.  In this post a blogger lists exactly what he would want to see in a fully implemented data contract for stored procedures.  That's a great list.  

While having full-blown data contracts would be nice, you don't have to wait for M$ to roll your own data contracts and solve some of these issues.  Here's how I do it.  At the top of your stored procedure add a little code block that can NEVER execute that defines your data contract.  When these "smart tools" like SSIS, SSRS, and most ORMs try to read your metadata they see that you have a SELECT statement and determine that is the "shape" of the result set.  Even though the SELECT code block is in an IF block that will never be executed (note the IF 1 = 0).  Just doing that little bit of work is enough to fool EVERY "smart tool" that looks for result set metadata.  
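Here is what that looks like for the repro's dbo.SampleDataContract (the proc body is approximated from the description; the column names are made up):

    CREATE PROCEDURE dbo.SampleDataContract
    AS
    BEGIN
        -- the "data contract": this block can never execute, but metadata
        -- readers (SSIS, SSRS, ORMs) use it to infer the result set shape
        IF 1 = 0
            SELECT CAST(NULL AS int)         AS Id,
                   CAST(NULL AS varchar(50)) AS Name;

        CREATE TABLE #SampleData (Id int, Name varchar(50));
        INSERT INTO #SampleData (Id, Name) VALUES (1, 'Foo'), (2, 'Bar');

        SELECT Id, Name FROM #SampleData;   -- must match the contract above
    END;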

Mocking up a data contract really fixes this?

Yes.  Let's demo.  Go back to your OLE DB Source component in SSIS.  Click the "Preview..." button again.  Last time we got an error.  This time we get our result set as expected.  

Summary

Creating your own mocked-up data contract is an easy way to overcome tools that attempt to read the metadata of your procedures and fail because they contain references to a temp table.  If your SSIS packages require the IsSorted flag you can even put an ORDER BY clause on your "data contract" and your output will automatically have the IsSorted flag set.  

 


You have just read "SSIS and data contracts" on davewentzel.com. If you found this useful please feel free to subscribe to the RSS feed.  

Service Broker Demystified - Why do we need Services and Queues?

People claim that they don't want to use Service Broker because it is too complicated. I started a blog series called Service Broker Demystified because SSB really isn't that difficult if you understand some basic concepts. Folks don't understand why both a "Service" and a "Queue" are needed.  Why not just have one object?  In this post I'll show you why and give you a quick rule-of-thumb to avoid the confusion.  
