DaveWentzel.com            All Things Data

May 2013

Adding a Server to Central Management Server from TSQL

I've seen lots of PoSH scripts that do this, but unfortunately, not everyone knows PoSH, or is comfortable using it.  This is a simple SQL script to add a server to an existing group in your Central Management Server.  We have made this a discrete step in our server provisioning process.  

Here is the listing of user-modifiable variables:

The "magic" is a procedure that MS has graciously provided us that does all of the heavy-lifting:

 

This can certainly be improved upon.  One simple thing to make it more robust would be to allow it to add new groups, or to validate the groups being passed in.  I'll leave that as an exercise for the reader.


BGP and BGPSec...you really should care about this

The future openness of the Internet is at stake.  The Secure Inter-Domain Routing Working Group is working on a replacement for the BGP protocol called BGPSEC.  This is one of those little-known technologies that affect you every day.  You really don't want one group with consolidated power over the Internet.  

What is BGP?


They Should Call Them Evil Spools

In my last post, SCHEMABINDING Performance Considerations, I mentioned problems with Eager Spools.  I've written about these many times.  They really should be called Evil Spools.  To be clear, they do serve an important purpose, but they can often be eliminated without detrimental data quality side effects.  

This is just a really quick follow-up to that last post (here) that covers some other Eager Spool conditions and methods to remove them from query plans.  Most of this I've written about before.  

What is an Eager Spool? 

  • gets a "read-consistent" view of the data.  Solves infinite loop and "Halloween Protection" problems with UPDATE statements. 
  • HP/Eager Spools are also required for:
    • INSERT statements where the target table is also referenced in the SELECT statement.  
    • DELETE statements with self-join relationships.  
    • certain CTEs (which is one reason why CTE performance is often worse than the alternatives).  I avoid CTEs whenever possible because of this (the exception is recursive CTEs, which really are very elegant)
    • recursive CTEs (unfortunately, HP is needed in this situation to get the read-consistent view of the data).  
    • many "Hole-filling queries"...I got this term from an excellent blog series Paul White did on the Halloween Problem.  He does a fabulous job on all of this.  More on "hole-filling queries" in a moment...
  • a "blocking" operator where that "branch" of the query serializes behind the Spool.  The Eager Spool MUST be satisfied before continuing.  An operator is non-blocking if it can take a single row and pass it to the next operator (ex Nested Loops).   An operator is blocking if the next pipeline step cannot occur until the entire operator is satisfied (ex Hash Match...probing cannot occur before the entire hash is built)
  • uses tempdb
  • (usually) uses rebinds which are worse than rewinds
  • since it is a "blocking operator" it will take more locks and longer-duration locks.  

 

 

Hole-Filling Queries

A hole-filling query is one where you INSERT rows into a table only when the key does not already exist.  The implementation is usually something like this:  

INSERT INTO #TempTable...WHERE NOT EXISTS (select ObjId FROM #TempTable)

The query plan will include an Eager Spool to ensure we have a read-consistent view of the data.  These can be fixed by using a LEFT JOIN and a WHERE ObjId IS NULL clause.  
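As a sketch (the table and column names here are made up), the two forms look something like this:

--hole-filling with NOT EXISTS: the target table is also read in the SELECT,
--so the plan typically includes an Eager Spool
INSERT INTO #TempTable (ObjId, ObjName)
SELECT s.ObjId, s.ObjName
FROM dbo.SourceTable s
WHERE NOT EXISTS (SELECT 1 FROM #TempTable t WHERE t.ObjId = s.ObjId);

--rewritten with a LEFT JOIN and an IS NULL filter
INSERT INTO #TempTable (ObjId, ObjName)
SELECT s.ObjId, s.ObjName
FROM dbo.SourceTable s
LEFT JOIN #TempTable t ON t.ObjId = s.ObjId
WHERE t.ObjId IS NULL;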
 
A MERGE statement is a far more elegant fix, though (h/t to Paul White for pointing this out in his blog post).  For the optimizer to skip the Eager Spool:
  • the WHEN NOT MATCHED BY TARGET clause *must exactly* match the USING/ON clause, and
  • the target must have *a* unique key (anything will do); see the sketch below.  
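Again with hypothetical names, the pattern is (note the PRIMARY KEY on the target, which satisfies the unique key requirement):

--target needs a unique key for the optimizer to skip Halloween Protection
CREATE TABLE #TempTable (ObjId int NOT NULL PRIMARY KEY, ObjName sysname NOT NULL);

MERGE #TempTable AS t
USING (SELECT s.ObjId, s.ObjName FROM dbo.SourceTable s) AS src
   ON src.ObjId = t.ObjId
WHEN NOT MATCHED BY TARGET THEN
    INSERT (ObjId, ObjName) VALUES (src.ObjId, src.ObjName);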

Here is an example of a query plan that performed a hole-filling query:  

 

That's a very expensive Eager Spool.  

I rewrote the query using a LEFT JOIN and look at the difference: 

The wall-clock time improvement was just as staggering.  

SCHEMABINDING Performance Considerations

If you use TSQL UDFs (and in many cases you really shouldn't), make sure you use WITH SCHEMABINDING.  The performance improvement is STAGGERING.  Let's look at an example.  Here we have a typical scalar UDF: it takes a series of inputs and nicely formats a telephone number.  Very simple.

For the demo I have 2 versions of this function...one WITH SCHEMABINDING and one without.  Here is the demo file.  And here are the functions I will use for the demo.  
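The two functions boil down to something like this (names and bodies are illustrative sketches, not the exact demo code):

--unbound version: SQL Server will assume it may access system and user data
CREATE FUNCTION dbo.FormatPhoneNo (@AreaCode char(3), @Exchange char(3), @Line char(4))
RETURNS varchar(14)
AS
BEGIN
    RETURN '(' + @AreaCode + ') ' + @Exchange + '-' + @Line;
END;
GO

--schemabound version: identical logic, but WITH SCHEMABINDING lets SQL Server
--mark it as accessing no system or user data
CREATE FUNCTION dbo.FormatPhoneNo_SB (@AreaCode char(3), @Exchange char(3), @Line char(4))
RETURNS varchar(14)
WITH SCHEMABINDING
AS
BEGIN
    RETURN '(' + @AreaCode + ') ' + @Exchange + '-' + @Line;
END;
GO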

After looking at the UDF it is clear that it does NOT actually access any data in our database.  We built a new db from scratch and haven't created any tables yet, so that should be obvious.  

Whenever you create a UDF, two metadata properties are flagged...whether or not the function accesses system data and whether it accesses user data.  Here's where it gets interesting.  If we don't specify SCHEMABINDING then SQL Server marks our function as accessing system and user data.  It is basically the DEFAULT.  
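You can check those two flags yourself (using the sketched function names from above):

--1 = the function is treated as accessing that class of data
SELECT o.name,
       OBJECTPROPERTYEX(o.object_id, 'SystemDataAccess') AS SystemDataAccess,
       OBJECTPROPERTYEX(o.object_id, 'UserDataAccess') AS UserDataAccess
FROM sys.objects AS o
WHERE o.name IN (N'FormatPhoneNo', N'FormatPhoneNo_SB');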

Why does SQL Server do this?  There is, I guess, no way for SQL Server to know that anything referenced by the function has not changed since the function was created.  SQL Server would need to do this check on EVERY invocation causing a big performance hit.  So the safe thing to do is to mark the function as accessing system and user data.  You can think of this as marking the UDF as UNSAFE.  Again, I'm assuming this is the rationale the folks at Microsoft used. 

Whenever those values are set to 1, any query plan referencing the UDF will have an extra "Eager Spool" operator (I've written about these lots of times) inserted around the UDF if it is accessed as part of an INSERT, UPDATE, or DELETE statement.  Why?  "Halloween Protection".

What is Halloween Protection?

I've written about Halloween Protection before.  In its simplest form, the Halloween Problem occurs when we read and write the keys of a common index.  Think of an Employee table with an index on Salary.  The requirement is to give everyone with a salary > 20000 a 20% raise.  
 
UPDATE Employee SET Salary = Salary * 1.20 WHERE Salary > 20000
 
The index on Salary will be used to find the rows to be updated.  But the process of updating a row's Salary will cause the index key to migrate to a new location, which could cause the row to be re-read and re-updated, even causing an infinite loop.  Halloween Protection is the process (required to honor the update semantics defined in the ANSI SQL standard) where a read-consistent view of the updateable keys is squirreled away in temporary storage and then used as the replacement for the WHERE clause condition.  
 
The process of building a read-consistent view of the data is implemented in SQL Server as an "Eager Spool".  These are a performance nightmare and should be eliminated whenever possible.  
 
How is this relevant to unbound scalar UDFs?
 
If a UDF is not schemabound then SQL Server must assume that the UDF may actually change data referenced in the outer query's FROM clause, meaning that we see an Eager Spool in our query plan.  
 
Continuing the Example
 
Let's build a PhoneNo table with columns that will map to our scalar UDF.  Let's also throw some junk data into the table.  
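Something along these lines (the column names are assumptions):

CREATE TABLE dbo.PhoneNo (
    PhoneNoID int IDENTITY(1,1) PRIMARY KEY,
    AreaCode char(3) NOT NULL,
    Exchange char(3) NOT NULL,
    Line char(4) NOT NULL
);

--throw some junk data into the table
INSERT INTO dbo.PhoneNo (AreaCode, Exchange, Line)
SELECT RIGHT('000' + CAST(ABS(CHECKSUM(NEWID())) % 1000 AS varchar(3)), 3),
       RIGHT('000' + CAST(ABS(CHECKSUM(NEWID())) % 1000 AS varchar(3)), 3),
       RIGHT('0000' + CAST(ABS(CHECKSUM(NEWID())) % 10000 AS varchar(4)), 4)
FROM sys.all_objects;   --a convenient row source for a few thousand rows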
 
 
Just running a simple SELECT will not give us an Eager Spool.  Nothing is changing so we don't need to worry about it.  A standard shared lock during the read (assuming you are running in read committed mode) is good enough.  
 
But let's assume we need to INSERT the output of our scalar UDF call into a temp table, possibly for further processing in a stored procedure.  The example code below makes 4 different calls to our 4 different scalar functions.  In this screenshot we call the unbound function.  Note the Eager Spool.  
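A trimmed-down sketch of two of those four calls, using the illustrative function names from above:

--hypothetical temp table to hold the formatted output
CREATE TABLE #Formatted (FormattedNo varchar(14) NOT NULL);

--TEST 1: unbound function...expect an Eager Spool in this INSERT's plan
INSERT INTO #Formatted (FormattedNo)
SELECT dbo.FormatPhoneNo(p.AreaCode, p.Exchange, p.Line)
FROM dbo.PhoneNo AS p;

--TEST 2: schemabound function...no Eager Spool should be needed
INSERT INTO #Formatted (FormattedNo)
SELECT dbo.FormatPhoneNo_SB(p.AreaCode, p.Exchange, p.Line)
FROM dbo.PhoneNo AS p;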
 
 
This makes sense given that the query engine must handle Halloween Protection.  It only costs 4% of the total query plan, but this is a simple example (and please hold that "only 4%" thought for a moment).  Now let's run the schemabound function:  
 
 
Cool.  No Eager Spool is needed this time, which makes sense.  
 
OK, but the Eager Spool was only 4% of the query plan.  No big deal.  
 
Wrong!
 
If we run the two tests again, but run DBCC FREEPROCCACHE first, we can see what our performance numbers look like: 
 
 
TEST 1 is the plan with the Eager Spool.  That is incredibly poor performance, due simply to forgetting to schemabind your function.  
 
Summary
 
In the example files (here and here) I also have calls to scalar UDFs that access user data.  SCHEMABINDING these UDFs doesn't yield much of a performance improvement.  In fact, in some cases the performance is EXACTLY the same, almost indicating that SCHEMABINDING makes no difference on scalar UDFs that access persisted table data.  Other times performance is slightly better.  On very rare occasions I do see staggering improvements, but it is rare.  I'm not sure what the pattern is...I'm still researching.  
 
Ultimately, the correct solution is to *not* use scalar UDFs unless you absolutely need to.  Performance is ALWAYS better using an alternative such as inlining the function logic or using a stored procedure.  YMMV.  
 
My next post will be a deeper dive into Eager Spools.  

ALTER SCHEMA TRANSFER for Zero Downtime Database Upgrades

Our software has a Zero Downtime requirement for database upgrades.  In the past it took us hours to upgrade our databases from one release of our software to the next.  After doing some basic process-improvement stuff (like taking log backups instead of full backups after taking the system down) we next worked on making sure our index changes were able to utilize the ONLINE=ON option.  We could even refactor/change the normalization of tables and keep the data in sync using triggers and Service Broker.  There were a few things we just couldn't seem to do while the application was online.  This blog post is about how, I think, you can perform just about ANY database refactoring while your application is online, without incurring a noticeable performance penalty or affecting concurrency.  

But first, some background.  Our software/database is deployed at about 50 customers, ranging in size from ~100GB to 13TB.  Each customer can be, unfortunately, on a different version of the software at any point in time, and some customers even run multiple versions of the software.  The database must be upgradeable from any version to the "current".  The software and databases can be hosted at the customer site or in our own "cloud".  I've written in the past about how we reduced downtime when migrating from SQL 2005 to SQL 2008.  I also wrote about my database porting process numerous times and how it handles upgrading any version of any database to the "current" build.  Here specifically I compare and contrast it with Data Tier Applications.  All of these things get us "reduced" downtime, but not "zero" downtime.  I did a write-up a number of years ago about a skunkworks project I led where we tried to get to the elusive "zero" downtime.  

Using the method I'm about to outline we finally got to "zero" downtime.  Quick definition of "zero"...it's actually not "zero" downtime, it's "near-zero".  We actually require our application and database to be quiesced for about 10 seconds while we "true-up" the system and make the switch.  In theory we could use this technique for total zero downtime database upgrades if we spent a little more time on "rolling upgrades" of our Java tier, but the cost outweighed the benefits.  It turns out 10 seconds (or so) of downtime was a good enough requirement.  

Darwin

On to the approach.  We decided to call this process "Darwin" since it takes the current database and "evolves" it under the covers.  I am a firm believer in evolutionary database design.  The Darwin approach is to:

  • Use a separate "Stage" schema to create your n+1 release database code changes.  
  • Use a mechanism such as replication, triggers, or Service Broker to keep the data synchronized if a table's schema is changing.  
  • Do a "check-out" of some kind to ensure that Darwin is in-place correctly and data is "evolving".  
  • When you are ready to switch over simply turn off your data synchronization mechanism and issue ALTER SCHEMA TRANSFER commands to transfer out the old versions of your code to a "Deprecated" schema and then transfer your "Stage" schema to your production schemas.  The ALTER SCHEMA TRANSFER is a near-instantaneous metadata operation.  
  • Rerun your permission (GRANT) scripts.  These are the only objects that are not carried-forward with ALTER SCHEMA TRANSFER.  
  • If anything goes wrong run the reverse ALTER SCHEMA TRANSFER commands and your system is totally reverted without having to resort to a database RESTORE.  

Real World Example

For the example I show next we will assume that our production schema is "dbo" and our data synchronization mechanism is simple transactional replication.  The problem is we have a table called dbo.BatchTot that has a series of columns that hold monetary data that are defined as decimal(9,4).  The requirements have changed and now those columns must be declared as decimal(19,4).  The table holds ~2 billion rows at our largest customer.  

If you attempt to run a command like "ALTER TABLE dbo.BatchTot ALTER COLUMN Blah decimal(19,4) NOT NULL" where Blah is currently a decimal(9,4), a "size of data" operation occurs and the ENTIRE table must be rewritten to accommodate the larger datatype.  While that occurs, access to the table is serialized (Sch-M locks) and you risk running out of tran log space.  

Many of our architects proposed using what I call the "deprecation model" to accomplish this.  In essence:

  • we add new cols (called BlahNEW for instance) to hold the values from the original columns 
  • put a trigger on the table to keep the cols in sync
  • write a little SQL that will cause every row to be "updated" and fire the trigger.  
  • Change all code to use the BlahNEW cols.  
  • Drop the Blah cols

I hate the deprecation model because it requires you to change all of your references from Blah to BlahNEW.  In a large application this is really hard to test.  But the real issue is a "people" problem.  As deadlines slip we of course decide that we can reduce testing to maintain project deadlines.  That's a mistake.  Or we decide that there is no time to write the code that drops the old cols, so they stay there forever, along with the deprecation code in the Java tier.  

It's just a bad pattern in my opinion.  

So we decided to use Darwin.  

Pre-Downtime

These steps run while the system is up.  They are not impactful to online users.  First, we create our Stage and Deprecated schemas.  Stage is where our new objects will go.  Deprecated is where the old dbo objects will be placed when we switch over to the new version of the software.  
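Roughly:

--new objects for release n+1 go in Stage; the current dbo objects will
--eventually be moved to Deprecated
CREATE SCHEMA Stage AUTHORIZATION dbo;
GO
CREATE SCHEMA Deprecated AUTHORIZATION dbo;
GO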

 

Now we will create all of our n+1 version objects in the database.  Remember, these are applied to the Stage schema.  For simplicity I really just care about BatchTot and the new length of my columns.  
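A sketch (the real table obviously has more columns; these are stand-ins):

--n+1 version of the table with the widened monetary columns
CREATE TABLE Stage.BatchTot (
    BatchTotID bigint NOT NULL PRIMARY KEY,
    TotAmount decimal(19,4) NOT NULL,    --was decimal(9,4) in dbo.BatchTot
    TotPaid decimal(19,4) NOT NULL       --was decimal(9,4) in dbo.BatchTot
);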

 

Now we need a mechanism to keep the data synchronized.  My choice for this is transactional replication.  So we create a new publication with an article of dbo.BatchTot.  The subscriber is the same database.  The destination is simply Stage.BatchTot.  The publication's sync_method should be 'database snapshot' so that snapshotting the table holds the shortest-duration locks possible on dbo.BatchTot.  
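A hedged sketch of the calls (distribution, agent, and security configuration are omitted, and you should verify the parameters against your environment):

--publication over dbo.BatchTot, snapshotted from a database snapshot so the
--initial sync holds the shortest possible locks on the source table
EXEC sp_addpublication
     @publication = N'Darwin_BatchTot',
     @sync_method = N'database snapshot',
     @repl_freq = N'continuous',
     @status = N'active';

EXEC sp_addarticle
     @publication = N'Darwin_BatchTot',
     @article = N'BatchTot',
     @source_owner = N'dbo',
     @source_object = N'BatchTot',
     @destination_owner = N'Stage',
     @destination_table = N'BatchTot';

--the subscriber is the same server and database; the destination is Stage.BatchTot
DECLARE @subscriber sysname = @@SERVERNAME,
        @destdb sysname = DB_NAME();

EXEC sp_addsubscription
     @publication = N'Darwin_BatchTot',
     @subscriber = @subscriber,
     @destination_db = @destdb,
     @sync_type = N'automatic';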

Once replication is synchronized you can do a "check-out" of your new application if desired.  You are done with the pre-downtime steps.  The beauty is that if anything goes wrong we have not impacted our production users of the current version of our software.  Replication adds only a slight performance penalty, but you should of course determine what is best for your needs.  

 

Downtime

When you are ready for your downtime you simply need to quiesce your system.  This means that you have no active transactions and the distribution agents have completed.  We are "truing-up" the system.   

Now we run two ALTER SCHEMA TRANSFER commands.  The first moves the current version objects to the Deprecated schema from dbo.  The second moves the Stage schema objects to dbo.  This process moves all "dependent" objects as well to the destination schema.  By "dependent" I mean any indexes, constraints, and keys.  It does not, however, move permissions.  Those are "lost" during the transfer, so you simply need to reapply them.  I would assume this is a conscious decision by Microsoft to ensure we don't introduce any gaping security holes.  
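For the BatchTot example, the downtime commands are essentially just this (YourAppRole is a stand-in for whatever roles/users you actually grant to):

--move the current (old) table out of the way, then promote the Stage copy;
--both are metadata-only operations and complete almost instantly
ALTER SCHEMA Deprecated TRANSFER dbo.BatchTot;
ALTER SCHEMA dbo TRANSFER Stage.BatchTot;

--permissions are not carried over, so reapply the GRANTs
GRANT SELECT, INSERT, UPDATE, DELETE ON dbo.BatchTot TO YourAppRole;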

As mentioned above, we now also need to tear down replication.  

Next Steps

You can also remove the Deprecated and Stage schemas whenever it is convenient.  I usually keep them around "just in case" for a few days.  

The ALTER SCHEMA TRANSFER process never seems to take us longer than 7 seconds, even for large tables or Stage schemas with many objects.  I assume under-the-covers that Microsoft has implemented this as a "metadata-only" operation.  

Summary

I have only scratched the surface of what can be done with "Darwin" and ALTER SCHEMA TRANSFER.  I believe this is an extremely simple method to get to near-zero downtime database upgrades.  It is not as costly as Oracle's GoldenGate and is conceptually simple to understand for most people.  It even has a built-in "rollback" mechanism.  I hope this blog post was interesting and helps you in the future.  

Me, Younger

Explains a lot...

 


TSQL Unit Testing Tool Comparisons

 

I once had a manager who liked to say, "I want to see checked-in code with tests that prove it doesn't work."  Very Yogi Berra-esque.  

I'm a big proponent of unit testing my TSQL code.  The questions become:

  • What tools are available for the data professional?  
  • Which tool is the "best"?  

Tool Availability

First off, there are a lot of unit testing tools that handle database testing, but are not necessarily TSQL unit testing tools.  An example would be DbFit.  This tool allows your Fit/Fitnesse tests to actually execute against your database.  This is cool, and valuable, but really doesn't help me unit test a stored procedure.  Another example is psunit.  This tool allows you to write unit tests for your PoSH code.  I've seen lots of people, myself included, use psunit as their basic assertion framework to test tsql.  Again, this is cool, but it isn't a tool designed specifically for testing tsql.  Frankly, any tool that follows the "Arrange/Act/Assert" model can be retrofitted to call a database and unit test your SQL.  

So, for this blog post, I'm not going to include unit testing tools that only peripherally touch the database; I only want to compare unit testing tools that specifically focus on TSQL.  A very strong argument can be made that db unit tests should be done in something like DbFit so that the db unit tests are stored and executed with the Java tests and code.  This is exactly how DataDude handles it.  I understand the rationale, I am simply questioning whether this is always the best approach.  In many organizations the db experts are not the Java experts, so keeping the tests separate makes a lot of sense in some circumstances too.  Further, I want the best tsql unit testing tool, not the best unit testing tool for my development environment that also handles tsql.  

What Makes a Good TSQL/database TDD Tool

These are just my opinions:

  • adheres to the AAA (Arrange, Act, Assert) Model for unit testing
  • Has a robust assertion engine that is specific to the needs of a data developer.  For example, support for asserting that a result set matches the expected result set, down to the column ordinals, names, and datatypes is important.  
  • Does not require injecting special assertion or debugging code into your stored procedure.  We don't need that stuff in our production code.  
  • Mocking data should be simple.  
  • Mocking views and stored procs so they return a given output for testing.  
  • has support for automatically setting up (Arranging) and tearing down test data without being destructive to existing data in the database.  So basically I don't need to backup/restore a test database to run my tests.  (Never run unit tests on a production system).  
  • Shouldn't require you to learn another language or tool, such as PoSH to write or run tests.  It should just work in SSMS.  
  • Generates a standard output that is human-readable and automatable so other CI Tools can call it.  Bonus points if it can return HTML/XML so I can plug the output into a build automation tool's standard reporting.  
  • Good documentation and tutorials.  Most data professionals have no idea how to start with TDD and unit testing their TSQL code.  (You could probably make the same case for Java Jocks too).  

Evaluating The Available Tools

The only two tools I've found that are specifically unit testing tools for tsql and stored procedures are TSQLUnit and TSQLT.  

Feature-by-feature comparison of TSQLUnit and TSQLT:

  • Open source (so it's easy to look at the source code and extend it for your needs): TSQLUnit: Yes.  TSQLT: Mostly; the table-assertion functionality is in CLR code that is not open source, but the rest of the product is written entirely in tsql, so you are using tsql to test tsql.
  • Mocks for data: TSQLUnit: Yes.  TSQLT: Yes (tsqlt.FakeTable).
  • Stored proc and view mocking: TSQLUnit: No.  TSQLT: Yes (tsqlt.SpyProcedure)...this is very ingenious.
  • Has the ability to "arrange" and "cleanup" test data: TSQLUnit: Yes.  TSQLT: Yes.
  • Can compare result sets: TSQLUnit: Yes.  TSQLT: Yes.
  • Test output to XML/HTML: TSQLUnit: No.  TSQLT: Yes.
  • Tests can be written and executed entirely in SSMS: TSQLUnit: No, requires Python.  TSQLT: Yes; there is even a plugin helper available from RedGate.
  • Support from industry vendors: TSQLUnit: Not that I'm aware of.  TSQLT: RedGate's release of SQLTest is fully supported.
  • Documentation: TSQLUnit: Kind of sparse.  TSQLT: Lots of documentation and tutorials.
  • Support (and ongoing viability): TSQLUnit: Doesn't seem to have much new development lately.  TSQLT: The two authors actively respond to bug and feature requests (full disclosure...I was a co-worker with one of the authors).
  • Deployment: TSQLUnit: Test code is in the dbo schema with your production objects.  TSQLT: Uses separate schemas that map closely to test classes, just like Java; they are only deployed if your tests are run, and if you follow the test class creation pattern in their tutorial you will have no remaining testing artifacts in your database.

 

Clearly I think the world of TSQLT.  Even if you do not do TDD or don't understand unit testing, you really should download the tsqlt code and peruse it.  I know that I learned quite a bit about transact-sql by looking through their code.  It's just totally ingenious.  If you are a database guy and are not familiar with TDD or unit tests then try using it when you write your next stored proc.  Your unit tests will likely be amateurish but your code will be much easier to maintain and will have fewer bugs.  In a few years you'll look back and laugh at your early unit tests.  That's ok, it's a journey.  
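If you want to see the mechanics, here is a hedged sketch of a tSQLt test (the table, columns, and test class name are made up; tSQLt must already be installed, and the real dbo.Sale table must exist before it can be faked):

EXEC tSQLt.NewTestClass 'testSales';
GO
CREATE PROCEDURE testSales.[test sales are summed per customer]
AS
BEGIN
    --Arrange: replace the real table with an empty fake (no constraints, no
    --triggers) and load only the rows this test cares about
    EXEC tSQLt.FakeTable 'dbo.Sale';
    INSERT INTO dbo.Sale (CustomerId, Amount) VALUES (1, 10.00), (1, 15.00);

    --Act: in a real test this would call your procedure, view, or function
    DECLARE @actual decimal(19,4) = (SELECT SUM(Amount) FROM dbo.Sale WHERE CustomerId = 1);

    --Assert
    DECLARE @expected decimal(19,4) = 25.00;
    EXEC tSQLt.AssertEquals @expected, @actual;
END;
GO
EXEC tSQLt.Run 'testSales';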

Pay Off Your Mortgage in 9 Years

A co-worker heard on the radio that there is some kind of program that will pay off your mortgage in 9 years, legally, but it costs a few hundred dollars in fees.  He was thinking of enrolling.  I told him that with some math, a basic understanding of finance, and a few simple "tricks" he could accomplish the same results for free.

I'm not sure exactly how this particular "9 year payoff" program works, but I'll give you some broad strokes about how I would set up a similar program and bilk the unwitting homeowner. I also want to mention up-front that even though the math shows that these tricks work, you can still get burned. I'll give examples later. Lastly, I've read Dave Ramsey's stuff and I HIGHLY doubt he would approve of some of these tricks. I like Dave Ramsey and generally agree with his advice, but on a couple of things, like the simple finance tricks I'll show you, I really disagree with his advice.

Caveat emptor.

The Basic Idea Behind Paying Off Your Mortgage Sooner

In one sentence, these programs work by having you use a second mortgage to PAY OFF your first mortgage. This will save you lots of interest payments. Some critics think this is just shifting debt from one lender to another. It is, but it works. And if you are still paying PMI then this makes even more sense. But first you have to understand some basic finance.  Note in the first sentence that I said "PAY OFF" and not "PAY DOWN".  If you are not paying off the entire mortgage with your second mortgage, then what I'm going to show you is foolhardy.  

Basic Mortgage Concepts

These "pay off your mortgage fast" programs are popular right now because interest rates are EXTREMELY low. If rates go up the popularity of these plans will go down. Why?

If you have a second mortgage (commonly called a home equity loan or line of credit...I'll assume a HELOC [home equity line of credit] is your "second mortgage" for the rest of this blog) odds are the interest rate is MUCH lower than your primary mortgage. If yours isn't, now is the time to remedy that situation by refinancing. These programs will only work if your HELOC rate is lower than your primary mortgage's rate. Most HELOCs have a variable interest rate. Even so, it's likely your rate is capped at some percentage. Some HELOCs have a "fixed rate partition" as well, meaning you pay a fee or a bit higher interest rate for the peace of mind that your rate is fixed regardless of the future of interest rates. These programs really only work if you have a fixed rate, or a variable rate that is capped below your mortgage's. If not, be VERY careful before you move forward. 

 

How Mortgages Work

Assume you borrow $200,000 for 30 years with a 6% mortgage. You will pay $231,676 just in interest payments. If you have extra funds, never pay DOWN a mortgage, only pay it OFF. I'll get to the reason later.
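For reference, those numbers fall straight out of the standard amortization formula: the monthly payment is P*r/(1 - (1+r)^-n), where P = 200,000, r = 0.06/12 = 0.005, and n = 360.  That works out to about $1,199 per month, or roughly $431,676 over the life of the loan, of which $231,676 is interest.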

But, let's break that rule for this example. Given the mortgage example, let's say that after 6 months you inherit $5000 and decide to pay DOWN your mortgage. That will actually erase almost $23,000 from the interest over the remaining 29.5 years. Perhaps you could do better in the stock market, but that's a nice 450% return, albeit over 29 years. That's about 16% per year on a simple (non-compounded) basis. Guaranteed money.  It's likely that most people would take that kind of guaranteed return over 29 years vs any other investment out there.  

But it gets better. Instead of inheriting the $5000, let's say you take it from your HELOC. If the HELOC has a lower rate, you are saving money. Let's say every year in January you take $5,000 from your HELOC and make an extra principal payment on your mortgage, then pay off that entire $5000 by the next January.  If you keep doing that, you ultimately pay off your mortgage in about 15 years, saving about $120,000 in interest.  But trust me, you actually DO NOT want to do this...more on why soon.  

Amortization vs Simple Interest

Here's the reason why this trick does work...HELOCs are "simple interest" loans whereas your mortgage is an amortized loan.  

An amortized loan has payments that are made on a periodic basis (generally monthly) and interest is calculated on a monthly basis. A simple interest loan also has payments made on a periodic basis (generally monthly) but the interest is calculated on the average daily balance. A mortgage usually also allows for a grace period for the receipt of payment without penalty. Generally if a mortgage is due on the 1st you really have until the 15th to pay. A simple interest loan generally has no late penalty, the interest simply keeps accruing daily.  Therefore, if you have a 15 day grace period on your mortgage, you really shouldn't pay your mortgage until the 15th.  I NEVER pay my mortgage until the 15th.  And utilizing the grace period does NOT negatively affect your credit rating.  

A home mortgage uses amortization to define payments. This is very different from a simple interest loan, such as a car loan...or a HELOC. Assume your mortgage payment is due on the first of the month. If you pay it 10 days early every month you will NOT pay it off meaningfully sooner (technically you will: the last payment arrives 10 days early, so the loan is paid off 10 days early).

A simple interest auto loan or HELOC is different. If you can make your payment even ONE day sooner every month you will actually pay off your loan sooner. At a minimum your PAYOFF AMOUNT for the last month should be significantly less.

One More Quick Helpful Hint

If you have direct deposit into your checking account from your employer and you have a HELOC, why not just have your direct deposit go right into your HELOC account?  Not all banks will do this.  Mine does.  I freely admit to everyone that I live paycheck-to-paycheck.  I leave no extra cash-cushion in my checking account.  Checking accounts pay no (or very little) interest.  If you live paycheck-to-paycheck then you know that the first few days after you get paid you have tons of money in your account, but that balance slowly reaches zero by the next paycheck.  You can save a lot of interest by having that money work for you, reducing your HELOC's average daily balance.  I like having my money work for me with no extra risk.  

Basic Finance
It's a basic axiom of investing that, all things being equal, a guaranteed return of 5% is better than a guaranteed return of 4%. In the real world all things are not equal. One person may pay a 4% mortgage rate vs his neighbor paying 5% for various reasons. But it all comes down to risk. The reason you are paying any interest is because the lender is assuming some risk that you may default. So, it's probably common sense that if you had $100 to invest you'd rather make 5% than 4%, all other things being equal.  But it's also likely that the higher-yielding investment has a little more risk too.  

People struggle with the fact that costs work the same way. If I can invest at a guaranteed 5%, but my mortgage is 6%, it should be obvious that paying DOWN my mortgage should occur before I invest. But that is actually wrong, like I said earlier.  I'll get to that soon, trust me.

Why Use Your HELOC to Pay Off Your Mortgage?

If your mortgage is 5% and your HELOC is 4% and you have enough HELOC funds available, then you should PAY OFF your mortgage using your HELOC. The whole trick is that you move whatever funds you have available from lower-yielding investments to higher-yielding investments.  This is how these "pay off your mortgage in x years" plans work.  Now you know.  

Pitfalls and Caveats
Pitfall Number One - Never Pay DOWN a Mortgage, Only Pay it OFF

Now we are finally getting to the controversial stuff.  

Never pay DOWN a mortgage. Only pay it OFF. The exception is if you pay PMI. An interest rate can be thought of as a risk premium you pay the bank in the event you default. By paying DOWN a loan you are still paying the bank the same rate, but now you are assuming more of the default risk.  You are giving the bank a free ride.  

But you are probably thinking about the example above where I paid off my mortgage in half the time by using $5000 annually from my HELOC.  Well, that does work mathematically...you are getting a great return, but you are taking on MUCH MORE risk.  

Think of it this way. Property values can go down.  By paying extra principal, but not all of the principal, the bank gets more liquid cash and less of a depreciating asset.  This is the key point that most people forgot about during the real estate boom of the 2000's.  You may even be reading this right now saying that I'm wrong, that the value of YOUR house will never go down.  You'll never lose your job and risk foreclosure.  Read on.  

When the housing market tanked in the US in 2007-2009 banks could only foreclose on some "underwater" houses, but not all. Why? Something called loan-loss reserves (think of them as "bad debt contingencies") that banks needed to maintain. If a loan is not "performing" then the bank must have more cash-on-hand to cover the default.  This is a regulatory issue for the banks; it helps to deter "bank runs" where people remove their cash from the banks because the bank is (or appears to be) insolvent.  The easiest way for a bank to become insolvent is to have too many non-performing loans.  

If more houses are in forbearance then more loan-loss reserves are needed for regulatory purposes. So the banks did a lot of borderline-illegal things to keep loans from looking "bad" on their books. Many families were allowed to live in their homes for years without paying a nickel on their mortgage because the banks COULD NOT foreclose on them due to the increased loan-loss reserves that would be needed. That's right, people lived in their homes without paying their mortgages.  Others did "strategic defaults" and "short-sales" to further improve their position with their bank.  (You can google all of this on your own.)

Now think about it for a second...if there are 2 homes that need to be foreclosed on, but the bank can only foreclose on one of them, will it foreclose on the family with the higher or lower "loan-to-value" (ie, how much equity is in the home)? The bank will always foreclose on the home with more homeowner equity. The bank can wring more value from it with less of a hit to their reserves. This means that homeowners who practice paying DOWN their first mortgage early have increased their risk of foreclosure if the economy or housing market tanks again.  So again, assume two mortgages are in forbearance.  One has a single payment left on it after 30 years.  The other has 20 years of payments remaining.  Which will be the candidate for foreclosure?  That's right, the mortgage with one remaining payment.  

Remember, even if you only have a single payment left on your mortgage, and you don't pay it, the bank gets your house and all of your equity and you get nothing.  So, by PAYING DOWN your mortgage you are helping the bank eliminate their risk.  

There is an old saying in finance..."If I owe a hundred dollars to the bank and can't pay, I have a problem. If I owe a million dollars and can't pay, THE BANK has a problem."  

More Pitfalls to This System

  • None of this will work if you have prepayment penalty clauses in your mortgage. I've never seen a conventional mortgage have this, but I know that some mortgage brokers con people into seemingly better terms by sneaking in these clauses. Do some research first.
  • If your variable rate goes higher than your mortgage rate you have a problem. Watch for signs that interest rates are rising. I don't anticipate rates going higher for at least a few more years. You can always refi later if they do.  Consider a HELOC with a "fixed rate" option if possible.  There are risks using this method.  There are always increased risks with increased returns.  
  • Dave Ramsey would say that you don't get the motivation and the psychological win of paying off a creditor completely when you follow this kind of system. Bunk! If you need motivation, keep a spreadsheet of all of the interest you are NOT paying to some bank. Remember that you should always pay off the debt with the higher rate, higher balance, and longer term before any other debt. That's simple math. Worry about winning the war, not some small battle.
  • All first mortgage interest is income tax deductible in the US. For a second mortgage/HELOC, only the interest on the first $100,000 of debt is deductible.  Tax considerations also affect returns.  
  • If your credit is terrible then removing a creditor from your report (ie paying off your HELOC first) might be a better idea. It depends on your situation.  
  • I use my HELOC as my "emergency fund." It's where I get my liquid cash in a pinch and pay only a small amount of interest. If you do too then note that you'll need another source for your "emergency fund".  Again, the Dave Ramseys of the world will tell you not to use a HELOC this way.  I'll again ask you whether it is smarter to have an "emergency fund" in a savings account earning 0.10% interest, or working for you eliminating HELOC interest.  
  • Don't liquidate your savings or investments to pay DOWN your mortgage. Odds are you will earn more money investing the cash. That's why these tricks use a HELOC...you aren't using the money anyway, might as well use it to pay off something with a higher interest rate. But using that concept you could also invest in stocks (or anything else) as long as your return is greater than your HELOC rate and the interest savings from PAYING OFF your mortgage. This is very dangerous to do since stocks can go down. I've done this in the past with stunning results. You can too.  All you do is find a stock with a dividend yield higher than your HELOC. Then determine how much it would cost (google is your friend) to buy an at-the-money, one year put option on your stock. If the cost of the put and the HELOC's rate are less than the dividend yield, you have a riskless return on your investment. There are lots of stocks where this can be done.  

But that's another writeup.  Bottom line...your HELOC and home equity should be viewed as a source of funds that can be used as a tool to help you in your financial goals.  


Autonomous Transactions

IMO the most obnoxious shortcoming of Transact-SQL (after maybe the goofy error-handling that won't allow you to rethrow any and every error) is the lack of native autonomous transactions.  I wrote about this here and here.  I've only ever found 2 good reasons for autonomous transactions, but they are HUGE:  

  1. The ability to ROLLBACK a long-running transaction but maintain any error logging for later analysis.  The workaround, and it has a lot of obnoxious limitations, is to log to a @TableVariable and persist that to an actual table after you have done your ROLLBACK.  
  2. The ability to spawn off a separate transaction to get more keys from a central keystore.  In SQL 2012 we have SEQUENCEs, which function the same way; unfortunately, not all ORMs and custom-built in-house code will be able to use them.  At least not right away.  

The problem with Number 2 is that we can't write stored procedures that spawn the separate transaction needed to get more keys.  Obviously there are more uses for autonomous transactions, but let's continue the demo.  In this screenshot I have a call to a key generator from within a transaction.  I ask for 10 keys and I can see my LastUsedObjId has been incremented to 645 as expected.  

Now let's assume that call is a long-running query.  Assume that another similar call is made from another connection/transaction while the first spid is executing.  What will happen?  Blocking...

 

OK.  Let's issue a ROLLBACK on both connections.  It turns out that SQL 2008 and above can get you an autonomous transaction if you are OK with using a Loopback linked server.  Here I create a loopback linked server and ensure that RPC OUT is set.  The magic is 'remote proc transaction promotion'.  This setting ensures that MSDTC does not enlist any calls to this linked server into a distributed transaction.  Essentially, it just spawned off an autonomous transaction for you.  
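The setup is roughly this (the linked server name is arbitrary):

--create a loopback linked server pointing at the local instance
DECLARE @local sysname = @@SERVERNAME;

EXEC master.dbo.sp_addlinkedserver
     @server = N'loopback',
     @srvproduct = N'',
     @provider = N'SQLNCLI',
     @datasrc = @local;

--allow remote procedure calls over the linked server
EXEC master.dbo.sp_serveroption N'loopback', N'rpc out', N'true';

--the magic: do NOT promote calls over this linked server to a distributed
--(MSDTC) transaction...this is what gives us the autonomous transaction
EXEC master.dbo.sp_serveroption N'loopback', N'remote proc transaction promotion', N'false';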

So, let's demo this.  All we need to do is alter our original code slightly to call the linked server.  Here is how I do it:  
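Something like this (GetNextObjIds and its parameter are stand-ins for the real key-generator procedure):

--build the 4-part name at run time so nothing is hard-coded per environment
DECLARE @sql nvarchar(4000);

SET @sql = N'EXEC loopback.' + QUOTENAME(DB_NAME()) + N'.dbo.GetNextObjIds @HowMany = 10;';

EXEC (@sql);   --the call runs over the loopback linked server in its own transaction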

Notice I build the stored proc call in a variable that simply builds my 4 part name call.  I have nothing "hard-coded" with this...it will work on a test, prod, or dev environment, which aids in supportability.  Note that my LastUsedObjId was 655 and it was incremented correctly to 665.  Now if I open a new window/spid and paste the code and execute it:  

Note that the spid is NOT blocked this time.  That's because the call to the sequence generator was made in an autonomous transaction.  Note that the new spid correctly saw the first spid's updated LastUsedObjId and properly incremented its LastUsedObjId. 

That's what we wanted.