
January 2013

Object Relational Mappers Series - Goofy Syntax

This is the next post in my series on ORMs. ORM vendors say our developers should be focused on solving business problems, not on data access issues. I agree. So instead of learning SQL, we should learn the ORM vendor's query language? That seems insane to me. Is the ORM language vendor-neutral? Is there an ANSI standard? Is OQL compatible with HQL? Nope.

Blocking and Contention with sysjobhistory

IMHO Microsoft has a "bug": whenever a job completes and its status is logged to sysjobhistory, a check is done to see whether purging is enabled, and if so a purge is performed to retain the configured amount of job history. For most jobs this works just fine. However, the configuration Microsoft provides is often too coarse-grained, especially for customers with MANY jobs that execute every minute or more often. In those situations the purging of sysjobhistory consumes an excessive amount of resources, and usually you don't really need to maintain any of this history at all.

In these cases you may want finer-grained control over which jobs you maintain history for, as well as how long you maintain it. That's what I have written to overcome this problem. But first, how do you know if you are affected by this "bug"?

Blocking on sysjobhistory

If you are monitoring blocking on your system you will see contention against sysjobhistory coming from the procedure called sp_jobhistory_row_limiter.  This code will run whenever a job completes execution and you have job history logging enabled.  Within that stored procedure is the following code block:

SELECT @current_rows_per_job = COUNT(*)
FROM msdb.dbo.sysjobhistory with (TABLOCKX)
WHERE (job_id = @job_id)  

Notice the TABLOCKX? That is the problem: it takes an exclusive table lock, which essentially serializes all access against that table. So if you have a lot of frequently-executed jobs, this becomes a bottleneck and blocks.
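To check whether you're hitting this on your own system, you can look for sessions blocked behind a session running sp_jobhistory_row_limiter. Here's a minimal sketch of such a check; adapt it as needed:

-- Find blocked sessions and show what their blocker is running.
-- If the blocker's text contains sp_jobhistory_row_limiter, you are hitting this issue.
SELECT blocked.session_id          AS blocked_session,
       blocked.blocking_session_id AS blocker_session,
       blocked.wait_type,
       blocked.wait_time,
       blocker_text.text           AS blocker_sql
FROM sys.dm_exec_requests blocked
JOIN sys.dm_exec_requests blocker
  ON blocker.session_id = blocked.blocking_session_id
CROSS APPLY sys.dm_exec_sql_text(blocker.sql_handle) blocker_text
WHERE blocked.blocking_session_id <> 0
  AND blocker_text.text LIKE N'%sp_jobhistory_row_limiter%';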

The code I have created (available here for download) works around these issues. Specifically, it:

  1. Turns off job history log limits if they are enabled. This means YOU are responsible for purging.
  2. Creates a new table that lets you specify, at the JobName or Category level, exactly how much history to maintain (see the sketch after this list).
  3. Lets you specify a (default) value that is applied when a JobName or Category has no entry.
  4. Targets a number of days of history to maintain rather than a number of rows. Logically, I think "days of history" makes more sense than a row count (just my opinion), and the code is a bit more efficient since it can purge by date instead of computing a target row count.
  5. Lets you set different "days to maintain" for success vs. failure rows. This is important: if I run a job every minute I may not care about saving ANY successes, but I may want to see multiple years' worth of failures.
  6. Runs as its own SQL Agent job. I would rather have made this a Service Broker "task", but oh well.
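To make the design concrete, here is a minimal sketch of what the retention table and a date-based purge might look like. These names and columns are illustrative only, not the actual objects in the download:

-- Hypothetical retention table; the real download's schema may differ.
CREATE TABLE dbo.JobHistoryRetention (
    JobOrCategoryName sysname NOT NULL PRIMARY KEY,  -- a JobName, a Category, or '(default)'
    DaysToKeepSuccess int     NOT NULL,
    DaysToKeepFailure int     NOT NULL
);

-- A noisy every-minute job: keep no successes, two years of failures.
INSERT dbo.JobHistoryRetention VALUES (N'EveryMinuteJob', 0, 730);
INSERT dbo.JobHistoryRetention VALUES (N'(default)', 30, 365);

-- Purge sketch: delete success rows older than the configured retention.
-- sysjobhistory.run_date is an int in YYYYMMDD form, hence the conversion.
DELETE jh
FROM msdb.dbo.sysjobhistory jh
JOIN msdb.dbo.sysjobs j
  ON j.job_id = jh.job_id
JOIN dbo.JobHistoryRetention r
  ON r.JobOrCategoryName = j.name
WHERE jh.run_status = 1   -- 1 = succeeded
  AND jh.run_date < CONVERT(int, CONVERT(char(8), DATEADD(DAY, -r.DaysToKeepSuccess, GETDATE()), 112));

A similar DELETE with run_status = 0 and DaysToKeepFailure handles the failure rows, and a fallback join against the '(default)' row covers jobs with no entry.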

I have also included a series of tSQLt tests. These are database unit tests. I feel that all code should be unit tested, not just to prove that it works, but also as a self-documenting tool. There are a lot of nuances in this code and I have them well-documented in the unit tests.
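If you haven't seen tSQLt before, a test is just a stored procedure in a test-class schema. A trivial, hypothetical example of the shape (not one of the actual tests in the download):

EXEC tSQLt.NewTestClass 'JobHistoryPurgeTests';
GO
CREATE PROCEDURE JobHistoryPurgeTests.[test default retention is used when a job has no entry]
AS
BEGIN
    -- FakeTable isolates the test from any real data in the table.
    EXEC tSQLt.FakeTable 'dbo.JobHistoryRetention';
    INSERT dbo.JobHistoryRetention (JobOrCategoryName, DaysToKeepSuccess, DaysToKeepFailure)
    VALUES (N'(default)', 30, 365);

    DECLARE @days int;
    SELECT @days = DaysToKeepSuccess
    FROM dbo.JobHistoryRetention
    WHERE JobOrCategoryName = N'(default)';

    EXEC tSQLt.AssertEquals @Expected = 30, @Actual = @days;
END;
GO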

I hope someone finds this code useful.  


Object Relational Mappers Series - The Most Egregious Issues

This is the next post in my series on ORMs. I'm going to start with my two biggest problems with ORMs: dynamic SQL and debugging. Most data architects who hate ORMs cite performance as their Number One ORM concern. I disagree; debugging is the bigger problem. Let's see why.

Dynamic SQL vs Stored Procedures

 

Object Relational Mappers Series - Arguments for an ORM

This is the second post in my series on ORMs.

ORM advocates will tell you something like the following:  

Object Relational Mappers Series - Overview

It's been about 5 years since I last blogged about ORMs (Object Relational Mappers).  You can read that article here.  At the time my focus was mainly on the performance aspect of why ORMs can be bad.  Since then I've learned quite a bit.  I never bothered to make note of those tidbits of wisdom so I thought it was time for another blog series on ORMs.

Resource Governor

I just started researching Resource Governor (new in SQL 2008); this is a collection of notes. We are trying to determine the easiest and most effective way to take a handful of problem queries and constrain them a bit.
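Before the notes, here's roughly the shape of the moving parts: a resource pool caps resources, a workload group maps to a pool, and a classifier function routes sessions into groups at login. A minimal sketch, where all names and numbers are hypothetical placeholders:

-- A pool that caps the problem queries' CPU and memory (illustrative numbers).
CREATE RESOURCE POOL ProblemQueryPool
    WITH (MAX_CPU_PERCENT = 20, MAX_MEMORY_PERCENT = 25);
GO
CREATE WORKLOAD GROUP ProblemQueryGroup
    USING ProblemQueryPool;
GO
-- The classifier lives in master, runs at login, and returns a workload group name.
CREATE FUNCTION dbo.fnClassifier() RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    RETURN CASE WHEN SUSER_SNAME() = N'ReportingUser'  -- hypothetical problem login
                THEN N'ProblemQueryGroup'
                ELSE N'default'
           END;
END;
GO
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fnClassifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;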

What is Resource Governor?

 


Myths: SQL Server Index Fragmentation and REORG vs REBUILD

I think there are a lot of misconceptions around index fragmentation in SQL Server. I've written about this before here, here, and here. But I guess I was not clear enough. Some people swear by REORGs, some swear by REBUILDs. This should not be a religious debate; they do different things and solve different problems.

In many (if not most) cases we want to eliminate fragmentation to make index scans perform better. When an index is logically fragmented (the left-to-right linked list is out of order) we incur additional physical IO, which manifests as PAGEIOLATCH_* waits and far more logical/physical IO than should reasonably be necessary to satisfy the query. I frankly do not care if an index is 98.99999% fragmented if I'm only doing an index seek (WHERE ObjId = @Param). In that case fragmentation is MEANINGLESS: it does not need to be fixed and will not cause perf problems. An "index scan" is something like (WHERE ObjId BETWEEN x AND y). The other ancillary problem is that data is pulled into memory a page at a time, so if my pages are compact I can fit more rows into memory.

Myth #1:  If your index is >x% fragmented, use REBUILD, not REORG.  

The claim: if your fragmentation is above x%, use REBUILD, else use REORG. That's just not true. The thresholds BOL quotes are essentially made-up numbers, not hard rules.
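Whatever threshold you decide on, at least measure the fragmentation yourself before acting. A sketch (the database and table names are placeholders):

-- avg_fragmentation_in_percent is the logical fragmentation BOL's thresholds refer to.
SELECT i.name AS index_name,
       ps.avg_fragmentation_in_percent,
       ps.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID(N'dbo.Orders'), NULL, NULL, 'LIMITED') ps
JOIN sys.indexes i
  ON i.object_id = ps.object_id
 AND i.index_id  = ps.index_id;

And remember from above: if the index is only ever used for seeks, the number this returns is largely irrelevant.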

 

Myth #2:  Physically ordered indexes are better, therefore I must use REBUILD
 
Let's assume an index is fragmented such that its pages on disk, in left-to-right order, are (Page 18, 27, 2, 19, 78, 47, 118, 419, 12, 900, 40). On an index scan the disk will thrash on this (SAN, SSDs, and disk virtualization technologies can sometimes help or hurt the situation, but let's assume good ol' rotational media). REORG fixes the logical order (Page 1, 15, 18, 27, 100, 107, 450) by shuffling pages, but REBUILD makes them physically ordered as well (Page 1, 2, 3, 4, 5, 6). Logical ordering is all we really require to avoid the disk thrashing and PAGEIOLATCH_* waits; we get almost no additional benefit from having things physically ordered. Sounds weird, but it's true. I have even seen people claim that if your data is physically ordered you will get larger I/Os, hence greater throughput. I haven't seen any tests to prove or disprove this, and I don't have the time to run them, but I think the premise is wrong. I have written previously here that sequential reads will get you larger I/Os than random reads, which is a general axiom of I/O theory. When REORG puts things in logical order you should therefore get a sequential read. Sequential read equates to logical ordering in my mind, not physical ordering. That's an important distinction.
 
Myth #3:  REBUILD is better at compaction
 
Both REBUILD and REORG have a "compaction" phase. REORG's is very simple: if a page is not referenced, remove it. REBUILD, when possible, removes "air" from data pages and gets them to more closely resemble their FILLFACTOR settings. REBUILD is "better" at compaction because it can allocate *new* data pages to do the compaction; it isn't limited to shuffling *existing* pages. The last difference is that REBUILD tries to eliminate mixed extents (extents whose 8 data pages belong to more than one object). In SHOWCONTIG you'll see mixed-extent problems as Extents Scanned being much less than Extent Switches. This all boils down to REORG doing LOGICAL-only ordering and REBUILD doing PHYSICAL ordering. But honestly, the performance difference between the two is negligible on an INDEX SCAN, which is the usual problem you are trying to solve. Logical ordering is all you need, which is why MS gave us REORG/INDEXDEFRAG back in SQL 2000 to complement the existing tools. They either needed to give us that, or make all REBUILDs ONLINE, which they didn't do until 2005, and even then there are lots of caveats.
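If you still use SHOWCONTIG (it's deprecated, but handy), the counters in question come from something like this (table name is a placeholder):

-- Compare Extents Scanned vs. Extent Switches in the output.
DBCC SHOWCONTIG ('dbo.Orders') WITH TABLERESULTS, ALL_INDEXES;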
 
Myth #4:  REBUILD gets me back to my original FILLFACTOR, which is always good
 
REBUILD has a huge side effect no one talks about, and it can kill performance. Note in the last paragraph where I said REBUILD really just gets your pages back to their original FILLFACTORs. Let's think about that for a sec. If my current data pages don't reflect my declared FILLFACTOR, doesn't that mean I didn't design my FILLFACTOR correctly? It's not SQL Server's fault!!! I didn't properly factor page splits, row modifications, and row deletions into my design. When a data page does not match its FILLFACTOR, you have a design issue: over time a whole bunch of performance-degrading page splits occurred. If you run REBUILD you are removing the "air" that has naturally accumulated in your data pages over time. Post-REBUILD you have a nice compact index, but SQL Server has to start page splitting again, which causes even worse performance. And what happens over time? ...you have a fragmented index that doesn't match its FILLFACTOR. What do you do? You run REBUILD. And the vicious cycle continues. You never really solve anything this way. So you really need to determine whether your declared FILLFACTORs are correct.
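One way to check is to compare your declared FILLFACTOR to how full the leaf pages actually are. A sketch (names are placeholders):

-- fill_factor = 0 means the default (100). Compare it to actual page fullness.
SELECT i.name AS index_name,
       i.fill_factor AS declared_fill_factor,
       ps.avg_page_space_used_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID(N'dbo.Orders'), NULL, NULL, 'SAMPLED') ps
JOIN sys.indexes i
  ON i.object_id = ps.object_id
 AND i.index_id  = ps.index_id
WHERE ps.index_level = 0;  -- leaf level only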
 
There are cases where a REBUILD/resetting of FILLFACTORs might be a good idea: after a conversion event, a mass data-loading operation, etc. But rarely on a steady-state system.
 
This is the standard industry approach to fragmentation in most RDBMSs. Oracle, for example, pushes you toward the same solution: fix your PCTFREE (their FILLFACTOR equivalent) and reload the table. Again, it's a design problem.
 
Myth #5: REORG holds shorter locks, so it is better
 
REBUILD WITH (ONLINE=ON) generally doesn't hold locks that affect concurrency either; it needs only short-lived schema locks at the very beginning and end of the operation.
 
Myth #6: One takes longer than the Other, Which is Why I Prefer the Latter
 
I've seen this myth stated both ways: "REBUILD takes longer" and "REORG takes longer". If an index is gigantic and heavily fragmented, common sense dictates that REORG will take longer because it needs to visit and shuffle each page, while REBUILD just creates a nice, new, compact index. But I really don't care which takes longer; I'm more interested in ensuring I don't affect concurrency and that the final product is what I need.
 
Myth #7:  One takes more transaction log space, so it should be avoided
 
This correlates roughly with Myth #6, and I've seen it touted that each one is the tran log hog. Again, I don't see why it matters; I want the best index at the end of the day. It is true, though, that REBUILD holds a longer transaction, so you risk log_reuse waits and replication delays.
 
Summary
 
So, which should you use? It depends. You really have to understand the problems each is designed to solve. In general, though, I have some guidance (the basic commands are sketched after this list):
  • On steady-state systems I prefer REORG.
  • If my FILLFACTORs are wrong I prefer REBUILD WITH (ONLINE=ON) so I can fix those fillfactors.
  • If I just did a bulk load of data I like to do a REBUILD to suck out the air and get the pages nice and tight.
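For reference, the basic commands behind that guidance look like this (index and table names are placeholders):

-- Steady-state maintenance: logical reordering, always online, small transactions.
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders REORGANIZE;

-- Fixing a badly-designed fillfactor while staying online (ONLINE=ON is Enterprise Edition only).
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders REBUILD WITH (FILLFACTOR = 90, ONLINE = ON);

-- After a bulk load: rebuild everything to compact the pages.
ALTER INDEX ALL ON dbo.Orders REBUILD;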