
Service Broker

Monitoring Service Broker

 

This is an update to my old post Service Broker Monitoring Routine.  The SB monitoring routine from that post was a little difficult to understand and tried to integrate setting up Service Broker with the monitoring of a Service Broker solution.  Bad idea.  This version of my monitoring routine is far easier to understand.  
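To give a flavor of the kinds of checks a routine like this performs (this is a minimal sketch, not the actual routine from the post), you can poll the built-in catalog views for undeliverable messages and broken conversations:

    -- messages stuck in the sender's transmission queue, with the delivery error
    SELECT to_service_name, enqueue_time, transmission_status
    FROM sys.transmission_queue
    WHERE transmission_status <> '';

    -- conversations sitting in an error or disconnected state
    SELECT conversation_handle, far_service, state_desc
    FROM sys.conversation_endpoints
    WHERE state_desc IN ('ERROR', 'DISCONNECTED_INBOUND', 'DISCONNECTED_OUTBOUND');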

Service Broker Demystified - Services

Services and the [DEFAULT] contract can be very confusing.  In this post I'll show you why that is and some simple ways to keep them straight in your mind.  Then we'll look at how to model send-only and receive-only services, which is another constraint you can use in your SSB design. 
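As a rough illustration (the object names here are made up, not from the post): a service created without any contract can only ever initiate dialogs, which makes it effectively send-only, while a service bound to a contract can also be the target of dialogs on that contract.

    -- a service with no contract list can only initiate conversations (send-only)
    CREATE QUEUE dbo.InitiatorQueue;
    CREATE SERVICE InitiatorService ON QUEUE dbo.InitiatorQueue;

    -- a service bound to a contract can be the target of dialogs on that contract
    CREATE QUEUE dbo.TargetQueue;
    CREATE SERVICE TargetService ON QUEUE dbo.TargetQueue ([DEFAULT]);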

Service Broker Demystified - Why do we need Services and Queues?

People claim that they don't want to use Service Broker because it is too complicated. I started a blog series called Service Broker Demystified because SSB really isn't that difficult if you understand some basic concepts. Folks don't understand why both a "Service" and a "Queue" are needed.  Why not just have one object?  In this post I'll show you why and give you a quick rule of thumb to avoid the confusion.  
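A minimal sketch (hypothetical names) shows the division of labor: the queue is the physical storage for messages, the service is the logical, addressable endpoint, and several services can happily share one queue.

    -- the queue is the storage structure that actually holds the messages
    CREATE QUEUE dbo.OrderQueue;

    -- the services are the addressable endpoints that dialogs are sent to;
    -- more than one service can deliver into the same queue
    CREATE SERVICE OrderEntryService  ON QUEUE dbo.OrderQueue ([DEFAULT]);
    CREATE SERVICE OrderImportService ON QUEUE dbo.OrderQueue ([DEFAULT]);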

Service Broker Demystified - Why is there no ALTER CONTRACT statement?

The concept of "contracts" in Service Broker is initially confusing to most data professionals.  I like to think of a contract as a constraint applied to a message.  But unlike a table constraint, you can't ALTER a contract.  Why not?  Because a contract is really more like a "legally-binding contract," and there are good reasons why a binding contract can't (and shouldn't) be altered after the fact.
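In practice the workaround is versioning rather than altering. A sketch, with hypothetical names:

    -- there is no ALTER CONTRACT; instead you create a new "version" of the contract...
    CREATE MESSAGE TYPE OrderRequestV2 VALIDATION = WELL_FORMED_XML;
    CREATE CONTRACT OrderContractV2 (OrderRequestV2 SENT BY INITIATOR);

    -- ...and let the existing service accept it alongside the old contract
    ALTER SERVICE OrderService (ADD CONTRACT OrderContractV2);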

Service Broker Demystified - Contracts and Message Types

Contracts and Message Types are the "table constraints" of the Service Broker world.  Like table constraints, they aren't required, but they keep you from doing stupid stuff with your Service Broker design.  In this post I'll cover some confusing aspects of contracts and message types.  
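As a quick sketch of how the pieces fit together (hypothetical names): the message type optionally validates the payload, and the contract restricts which side of the conversation may send which message type.

    -- a message type can validate its payload, much like a CHECK constraint
    CREATE MESSAGE TYPE OrderRequest VALIDATION = WELL_FORMED_XML;
    CREATE MESSAGE TYPE OrderReceipt VALIDATION = NONE;

    -- the contract pins each message type to the side allowed to send it
    CREATE CONTRACT OrderContract
    (
        OrderRequest SENT BY INITIATOR,
        OrderReceipt SENT BY TARGET
    );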

Service Broker Demystified - [DEFAULT] is not the DEFAULT

Contracts and Message Types have defaults, but the default is not [DEFAULT]...at least not always.  That trips up folks who are new to Service Broker.  In this post I'll clear up the confusion and give you some tricks to keep things straight.  
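A small sketch of the asymmetry (service names are hypothetical): omitting the contract on BEGIN DIALOG or the message type on SEND falls back to [DEFAULT], but omitting the contract list on CREATE SERVICE does not.

    -- omitting ON CONTRACT here means the dialog uses the [DEFAULT] contract
    DECLARE @h UNIQUEIDENTIFIER;
    BEGIN DIALOG CONVERSATION @h
        FROM SERVICE InitiatorService
        TO SERVICE 'TargetService';

    -- omitting MESSAGE TYPE here means the message uses the [DEFAULT] message type
    SEND ON CONVERSATION @h ('<payload/>');

    -- but a service created with no contract list does NOT get [DEFAULT];
    -- the target must list it explicitly to accept these dialogs
    CREATE SERVICE TargetService ON QUEUE dbo.TargetQueue ([DEFAULT]);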

Service Broker Demystified - SET ENABLE_BROKER vs NEW_BROKER

Service Broker can be really confusing and causes data professionals to shy away from using it.  One example is setting up SSB.  It's easy, until you need to make a copy of your database.  Then the fun begins.  In this post I'll explain when you want to use SET ENABLE_BROKER vs SET NEW_BROKER.
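Roughly, the two options look like this (MyDatabase is a placeholder name):

    -- re-enable the broker and keep the database's existing broker identifier
    -- (the usual choice when the database is still the "original")
    ALTER DATABASE MyDatabase SET ENABLE_BROKER WITH ROLLBACK IMMEDIATE;

    -- hand out a brand-new broker identifier and throw away existing conversations
    -- (the usual choice for a restored copy that must not impersonate the original)
    ALTER DATABASE MyDatabase SET NEW_BROKER WITH ROLLBACK IMMEDIATE;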

Service Broker Demystified - Encryption Weirdness

Encryption is one of those things that makes Service Broker difficult to learn.  Setting up encryption correctly is a lengthy process, but there are ways to safely skip encryption altogether.  In this post I'll show you the issues with encryption in SSB and how to avoid and fix the problems.  
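As a rough sketch (service names are made up), the two usual workarounds are to turn dialog encryption off, or to keep the default and give the database the master key it needs:

    -- workaround 1: skip dialog encryption entirely (fine within a single instance)
    DECLARE @h UNIQUEIDENTIFIER;
    BEGIN DIALOG CONVERSATION @h
        FROM SERVICE InitiatorService
        TO SERVICE 'TargetService'
        WITH ENCRYPTION = OFF;

    -- workaround 2: keep the default (ENCRYPTION = ON) but create the database
    -- master key that Service Broker needs to build session keys
    CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<use a strong password>';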

Service Broker Demystified Series

This blog series will focus on the common complaints I hear about SQL Server Service Broker...namely that it is too confusing for a data professional to master.  The fact that there are no (good) GUI or monitoring tools doesn't help either.  In this series I'll try to simplify things and explain why things work the way they do.  SSB is a really great technology and I find new uses for it almost every day.  

Apache Flume

During my evaluation of NoSQL solutions the biggest hurdle I had, by far, was loading data into Hadoop.  The easiest way I found to do this was using Apache Flume.  But it took me a long time to figure out that my approach to data loading was wrong.  As an RDBMS data architect I was biased toward using techniques to load Hadoop that I would use to load a SQL Server.  I first tried scripting (which is how most of the NoSQL solutions have you load their sample data, but which tends not to work well with your "real" data), but the learning curve was too high for a proof-of-concept.  I then tried using ETL tools like Informatica and had better success, but it was still too cumbersome.  

I began thinking like a NoSQL guy and decided to use HiveQL.  I had better success getting the data into Hadoop...but now I had to get the data out of my RDBMS in a NoSQL-optimized format that I could quickly use HiveQL against.  

As my journey continued I thought about how I would get data into Hadoop if we ever deployed it as a real RDBMS complement.  We would probably write something that "listened" for "interesting" data on the production system and then put it into Hadoop.  That listener is Apache Flume.  Why not just point Flume at our Service Broker events and scrape the data that way?  I had that up and running in a few hours.  

Flume works by having an agent running in a JVM listen for events (such as Service Broker messages relayed to a JMS system).  The events are queued up in a "channel", and a "sink" then drains the channel and writes the events to Hadoop/HDFS/HBase (or whatever you use for persistence).  So why put a channel in the middle?  Flexibility and asynchronicity.  The channels have disaster-recovery mechanisms built in, and as your throughput needs change you can configure your agents to fan in (more agents talk to fewer channels) or fan out (fewer agents talk to more channels).  The former is great if you have, for instance, multiple listeners that only need to talk to one Hadoop instance.  The latter is good if you have one source system that you want to talk to multiple Hadoop instances, or if you need to route messages to a standby channel while you do maintenance on your Hadoop instance.  
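To make that concrete, here is a trimmed-down sketch of an agent configuration wired up this way (the JMS connection details and HDFS tuning are omitted, so treat it as an outline rather than a working config):

    # one agent: a source feeding a durable channel, drained by an HDFS sink
    agent1.sources  = src1
    agent1.channels = ch1
    agent1.sinks    = sink1

    # the source listens for incoming events (a JMS queue in this scenario)
    agent1.sources.src1.type     = jms
    agent1.sources.src1.channels = ch1

    # the channel buffers events durably on disk until the sink drains them
    agent1.channels.ch1.type = file

    # the sink writes the buffered events into HDFS
    agent1.sinks.sink1.type      = hdfs
    agent1.sinks.sink1.channel   = ch1
    agent1.sinks.sink1.hdfs.path = /flume/events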

This means that Flume is very scalable and can handle constant data streams effectively without worry of data loss.  This is great for something like streaming stock quotes.  Flume is by far the easiest way to load data quickly into Hadoop.  
