
Data Architecture

The CAP Theorem

Here is a little more about my experience evaluating NoSQL solutions.  It is important to understand the CAP Theorem.  CAP is an acronym for Consistency, Availability, and Partition tolerance.  
 
In a distributed data architecture you can only have 2 of these 3 at any one time.  You, as the data architect, need to choose which 2 are the most important to your solution.  Based on your requirements you can then choose which NoSQL solution is best for you.  
 
If nodes lose contact with each other (a network partition), will your data still be queryable (available), or will it remain consistent (accurate, but possibly not queryable until the partition heals)?  
 
As a relational guy this makes sense to me.  What distributed data guys have figured out is that the CAP Theorem can be bent by saying that you can actually have all 3, if you don't expect to have all 3 at any particular exact moment in time.  With many NoSQL solutions the query language will have an option to allow a given request to honor availability over consistency or vice versa, depending on your requirements.  So, if I must have query accuracy then my system will be unavailable during a network partition, but if I can sacrifice accuracy/consistency then I can tolerate a network partition.  
 
It's usually not that simple unfortunately.  It's generally not a wise idea, for hopefully obvious reasons, to write data to one and only one node in the distributed datastore.  If we have, say, 64 nodes and every write only goes to one of those nodes, we have zero resiliency if a network partition occurs.  That node's data is lost until the node comes back online (and you may even need to restore a backup).
 
Instead, a distributed datastore will want each write to be acknowledged on more than one node before the client is notified of the write's success.  How many nodes a write must occur on is an implementation detail, but clearly writes will take a little longer in systems designed like this.  It's similar to a two-phase commit (2PC).  
 
This "multiple node write" issue also means that if we query for a specific scalar data element that any two nodes may have different values depending on which was updated last.  This means that these datastores, while allowing queries to be leveraged against all nodes (map) and then merged to determine the correct version (reduce) will require a synchronization and versioning mechanism such as a vector clock.  I'll discuss vector clocks and other synchronization mechanisms in the next post.  
 
Other implementations skip the "2PC" altogether: they write to a single node and replicate asynchronously to the other nodes in the background, perhaps using a messaging engine such as Service Broker or JMS.  Obviously this is prone to consistency problems during a network partition.  In this type of system the designer is clearly valuing write performance over consistency of data, and a query may not always return transactionally consistent data.  
 
The system designer has enormous latitude in how to bend the CAP Theorem rules based on requirements.  Most of us RDBMS people don't like this kind of flexibility because we don't like it when certain rules, like ACID, are broken.  The interesting thing is, if you think about it, you can come very close to guaranteeing all three in a distributed datastore if both readers and writers execute against at least half of the total nodes, plus one (a "quorum").  With 64 nodes, for example, every read and every write would have to touch at least 33 nodes, so any read is guaranteed to overlap the most recent write.  Of course, read and write performance will be at its worst, which largely negates the reason for distributing the data in the first place.  IMHO.  
 
An area where a distributed datastore may take liberties that freak out us RDBMS folks is that a write may not *really* be a write.  In the RDBMS world writes follow ACID, meaning a write can't be considered successful if it has only been written to buffer memory; it has to be hardened to the REDO/transaction log and acknowledged from there.  We've all been told in years past to "make sure your disk controller has battery backup so you don't lose your write cache in the event of failure".  
 
Well, it's possible to "bend" the write rules a little too.  Perhaps we are willing to tolerate the edge case where we lose a write because of a failure between the time the buffer was dirtied and when it was flushed to disk.  So some of these solutions let you configure whether you need a "durable write" vs a plain ol' "write".  Again, the system designer needs to consciously make the tradeoff between durability and performance.  This is a setting I've never seen in an RDBMS, nor even seen requested as a feature; it's just a totally foreign concept.  And in many cases the write/durable-write setting can be changed at the request level, offering even more flexibility.  In fact, it appears as though Microsoft is getting on board with this concept: it looks like Hekaton will support "delayed durability".  
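Based on the SQL Server 2014 CTP documentation, the syntax looks roughly like this.  This is just a sketch (the database, table, and column names are made up, and the details could change before RTM):

  -- Allow delayed durability at the database level (DISABLED | ALLOWED | FORCED).
  ALTER DATABASE MyAppDb SET DELAYED_DURABILITY = ALLOWED;

  -- Then opt in per transaction: the COMMIT returns before the log records are
  -- hardened to disk, trading a small window of possible data loss for lower
  -- commit latency.  This is the "write" vs "durable write" knob described above.
  BEGIN TRANSACTION;
      UPDATE dbo.SomeTable SET StatusCode = 1 WHERE SomeKey = 42;
  COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);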
 
I see one last concern with these solutions: the cascading failure problem.  If a node goes down, its requests are routed to Node', which effectively doubles that node's load.  That has the potential to bring Node' down, which causes Node" to service three times its normal load, and so on.  This can be overcome with more nifty algorithms; it's just worth noting that it's never wise to overtax your nodes.  This is why NoSQL people always opt for more nodes on cheaper commodity hardware. 

Presentation on Metadata Driven Database Deployments is tonight

As a reminder, I am giving a presentation on metadata driven database deployments using my tool at 5:30 tonight at Microsoft's Malvern office.  The presentation and source code can be downloaded from CodePlex.  

See you there.

--dave

Handling Conflicts with Eventual Consistency and Distributed Systems

In distributed data architectures like some NoSQL solutions, a mechanism is needed to detect and resolve conflicting updates on different nodes.  There are many different ways to do this.  You don't need to understand the nitty gritty details of how this is done, but understanding the basic concepts is fundamental to understanding the strengths and limitations of your distributed data system.  In this post I'll cover some of the methods.
 
Distributed Hash Tables
 
BitTorrent's distributed tracker uses this technology.  In fact, some key-value systems like Cassandra and memcached (I'll cover both in a later post) are essentially giant DHTs.  A data element's key is passed through a common hashing function, and the resulting hash determines which node (or nodes) is responsible for storing that element.  
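To make the idea concrete for us SQL folks, here is a minimal sketch that uses T-SQL's CHECKSUM as a stand-in for a real hash function (a production DHT would use consistent hashing so that adding or removing a node doesn't reshuffle every key; the key and node count are made up):

  -- Hash the key and map the hash onto one of 64 nodes.
  DECLARE @Key varchar(100) = 'customer:12345';
  DECLARE @NodeCount int = 64;

  SELECT ABS(CHECKSUM(@Key) % @NodeCount) AS ResponsibleNode;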
 
Quorum Protocol
 
Very simply, technologies that use QP must "commit" on n nodes for the transaction to be considered successful.  The "commit" can be handled in one of two ways:
  1. Like a traditional two-phase commit (2PC).  In this case the write is not acknowledged until a quorum of n nodes acknowledges the change.  Obviously this can introduce a high amount of transaction latency.  
  2. The quorum is obtained on the read side.  When the data is read, enough nodes are consulted to establish a quorum.  Again, latency is introduced, just in a different place.  Note that if the write quorum W and the read quorum R satisfy W + R > n, any read is guaranteed to see at least one copy of the latest write.  
 
Gossip Protocol
 
This method allows nodes to become aware of node crashes or of new nodes joining the distributed system.  Changes to data are propagated to a set of known neighbors, who in turn propagate them to a different set of neighbors.  After a certain period of time the data view becomes consistent.  The problem is that the more nodes the system contains, the longer updates take to propagate, which means "eventual consistency" takes longer and the possibility of conflicts increases.  
 
Vector Clocks
 
A vector clock is probably the simplest way to handle conflict resolution in a distributed system.  A vector clock is a token that distributed systems pass around to keep the order of conflicting updates intact.  You could just timestamp the updates and let the last update win...if your requirements are that simple.  But if the servers are geographically disparate it may be impossible to keep their clocks synchronized.  Even using something like NTP (Network Time Protocol) on a LAN may not keep the clocks synchronized tightly enough.  Vector clocks tag the data event so that conflicts can be handled logically by the application developer.  This is loosely analogous to the way a distributed version control system like Git tracks divergent histories.  
 
There are many ways to implement vector clocks but the simplest way is for the client to stamp its data event with a "tag" that contains what the client knows about all of the other clients in the distributed system at that point in time.  A typical tag may look like this:
 
client1:50;client2:100;client3:78;client4:90
 
Assuming this was client1, the above tag indicates that client1 knows client2 is on its "Revision 100", client3 is on its "Revision 78", client4 is on its "Revision 90", and it is itself on "Revision 50".  "Revision" in this sense is a monotonically increasing identifier specific to that node.  You could use a GUID, but GUIDs aren't ordered, so that is rarely done.  
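Since this is a SQL blog, here is a minimal sketch of the comparison rule, with each vector clock modeled as a set of (node, revision) rows.  The values are made up and a real implementation lives inside the datastore, not in T-SQL; this just shows the logic:

  -- Two versions of the same data element, each tagged with a vector clock.
  DECLARE @ClockA TABLE (NodeName varchar(20), Revision int);
  DECLARE @ClockB TABLE (NodeName varchar(20), Revision int);

  INSERT @ClockA VALUES ('client1',50),('client2',100),('client3',78),('client4',90);
  INSERT @ClockB VALUES ('client1',51),('client2',100),('client3',78),('client4',90);

  -- B descends from A (no conflict) if every revision in B is >= the matching
  -- revision in A.  If neither clock descends from the other, the updates are
  -- concurrent and the application must resolve the conflict.
  SELECT CASE WHEN NOT EXISTS (
           SELECT 1
           FROM @ClockA a
           LEFT JOIN @ClockB b ON b.NodeName = a.NodeName
           WHERE ISNULL(b.Revision, 0) < a.Revision)
         THEN 'B descends from A: keep B, no conflict'
         ELSE 'Concurrent updates: the application must resolve the conflict'
         END AS ComparisonResult;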
 
 
Problems with Vector Clocks
 
A vector clock tag grows very large as more clients participate in the system, and as the system generates more messages.  Some vector clock-based systems like Riak have a pruning algorithm to keep the vector clock tags more compact. Vector clocks are also used by Amazon's Dynamo architecture.  

Eventual Consistency or ACIDs and BASEs

This is the next post in my NoSQL series.  A fundamental concept endorsed by many NoSQL solutions is "eventual consistency."  Each product can implement eventual consistency however it likes.  RDBMS people are familiar with the ACID properties (Atomicity, Consistency, Isolation, Durability).  NoSQL people like to bend those rules a bit, and the resulting properties are usually called BASE (Basically Available, Soft state, Eventually consistent).  

So, when would you use BASE transactions?  

One case is when your network is not reliable.  For instance, transactions that must span your local network as well as a cloud provider.  Local updates can be queued, batched, and sent later.  While the remote transactions are in flight the "system" is in an inconsistent state, but eventually all updates will be applied and the system will be in balance again.  

Inventory systems can usually withstand eventual consistency, so they are good candidates.  

In general, consider BASE transactions in cases where ACID transactions are not immediately required and the data eventually becomes consistent and is never lost.  

Data Models and Data Organization Methods

This is the next post in my series on NoSQL alternatives to RDBMSs.  How a datastore persists its data can be roughly divided into the following categories.  There may be others, depending on your perspective.  It can also be argued that some of the types below are subtypes of others.  
 
Relational
  • Description:  We probably all know this by now.
  • Schema:  Has schema
  • Relationships are modeled as:  Predefined
  • Data is:  Typed
  • Examples:  SQL Server, MySQL, Oracle
  • Possible disadvantages:  Everybody has different opinions on this.

Hierarchical
  • Description:  Probably the oldest electronic data storage mechanism.  Data is modeled in a pyramid fashion (parent and child records).  With the advent of the relational model most hierarchical data stores died out.  However, XML started a bit of a renaissance in hierarchical data stores.  Many NoSQL solutions could be categorized as hierarchical.
  • Schema:  Has schema
  • Relationships are modeled as:  Predefined
  • Data is:  Typed
  • Examples:  IBM IMS, the Windows registry, XML
  • Possible disadvantages:  Relationships among children are not permitted.  Extreme schema rigidity; adding a new data property usually requires rebuilding the entire data set.

Network
  • Description:  Very similar to hierarchical datastores.  It was the next evolution of the hierarchical model, allowing multiple parents to have multiple children, much like a graph.  Many healthcare data stores are based on MUMPS, which falls into this category.  "Data" is stored as data plus methods (instructions on what to do with the data).
  • Schema:  Has schema
  • Relationships are modeled as:  Predefined
  • Data is:  Typed
  • Examples:  CODASYL, Intersystems Cache, and other object-oriented databases
  • Possible disadvantages:  Never gained much traction because IBM wouldn't embrace it and the relational model came along and displaced both.

Graph
  • Description:  Could be considered the next generation of network data stores.  These are prevalent in the social-networking space, where applications like Twitter, LinkedIn, and Facebook want to visualize your "network" for you.  Nodes represent users who have relationships (the edges) to each other.  Modeling this relationally is challenging at best; a graph system can query these structures easily.  Basically, if you can model your data on a whiteboard, then a graph database can model it too.
  • Schema:  It depends (but generally no schema)
  • Relationships are modeled as:  Edges
  • Data is:  Ad hoc
  • Examples:  Neo4j
  • Possible disadvantages:  Data is usually untyped.  Very complex.

Document
  • Description:  Designed for storing document-oriented data, such as XML, or semi-structured data.
  • Schema:  No schema
  • Relationships are modeled as:  None (generally)
  • Data is:  Typed
  • Examples:  CouchDB, MongoDB
  • Possible disadvantages:  Does not support ad hoc reporting tools, ie Crystal Reports.

Columnar
  • Description:  Stores its data as columns or groups of columns, rather than as rows.  Good for OLAP applications and anywhere aggregation queries are important.
  • Schema:  Has schema
  • Relationships are modeled as:  Predefined (similar to relational)
  • Data is:  Typed
  • Examples:  HBase
  • Possible disadvantages:  Generally not good for OLTP applications.

Key-Value
  • Description:  Stores its data like an EAV model.  Entirely schema-less.  Data can be stored as byte streams, so you can persist your programming language's objects directly in the key-value store.
  • Schema:  No schema
  • Relationships are modeled as:  Links via keys
  • Data is:  Semi-typed
  • Examples:  Redis, Riak

Tutorial D, D, Rel and Their Relation to NoSQL

I mentioned in my last NoSQL post that developers complain that SQL is a lousy declarative language.  On some levels I agree with that.  But those same developers then damn all relational theory and data stores because of shortcomings in the SQL language.  This is wrong.  It is usually vendor implementations of SQL that are the problem.  There are relational languages other than SQL.  This post will cover some of them and how they are better than SQL in many regards.  

Tutorial D is not really a language implementation, but rather a specification co-authored by CJ Date and Hugh Darwen.  It describes the characteristics that a good declarative, relational language should have.  Date devised Tutorial D as an educational aid in one of his books.  The "D", I believe, stands for "data".  Date believed that SQL was not a particularly good language for expressing relational concepts; D was supposed to be a better alternative to SQL.  

What are some of SQL's shortcomings according to Date?  D was supposed to eliminate the object-relational impedance mismatch that occurs between SQL and OO languages.  The best we have in SQL today is various ORM implementations, which are kludgy at best.  Some other SQL shortcomings: 
  • being able to declare and query vectors/arrays directly in D.  In other words, you don't pass a result set back to Java and load it into an array to display it. 
  • better transitive closure query capabilities
  • handling recursion better.  For years SQL had no recursion at all; some vendors now have "recursive CTEs", for instance, that help.  
  • better readability than SQL.  Correlated subqueries are really hard to "walk".  Don't believe me?  Look at some of the SQL generated by ORM tools.  Tutorial D allows sections of code to be "variable-ized", almost like a macro.  
  • You can "pipeline" and string commands together.  Think of the HAVING clause in SQL.  It is really just a WHERE clause generated against a previously aggregated set.  HAVING is unneeded and doesn't exist in D.  In reality you can actually pipeline in SQL using derived tables, CTEs, and temp tables.  So you can basically have a derived table that has a GROUP BY clause on it, and then a WHERE clause on that derived table.  In this regard I really like the HAVING clause...but to each his own.  
  • JOINs in D are always NATURAL JOINs, which means the ON clause is never needed.  I really, REALLY wish TSQL had this feature.  A NATURAL JOIN joins on the commonly named attributes (typically the PK:FK columns), so the ON clause never has to be spelled out.  This really reduces the code clutter.  In D you can always express an alternate JOIN condition; you simply use the WHERE clause.  
  • "Presentation" elements are part of the D language, ie, you can pass a properly formatted (perhaps with css tags) result set that can be used directly by your application's display engine.  Oracle does this by embedding, for instance, PL/SQL directly in Oracle Forms.  I really hated this when I used in last century, but it does work.  And some people swear by it.  
  • D just looks more like Java code, which can make it appealing for Java jocks.  This makes it scary for report writers, though.  Surprisingly, many NoSQL query languages do the exact same thing.  And one of the big complaints about NoSQL solutions is their lack of good report-writing tools (there are no "plug-ins" that allow you to query, for instance, CouchDB directly from Crystal Reports).   
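To make the pipelining point concrete, here is the same aggregate filter written with HAVING and then "pipelined" through a derived table with a plain WHERE clause (AdventureWorks-style names, purely for illustration):

  -- With HAVING.
  SELECT CustomerID, SUM(TotalDue) AS TotalSales
  FROM Sales.SalesOrderHeader
  GROUP BY CustomerID
  HAVING SUM(TotalDue) > 10000;

  -- Pipelined: aggregate in a derived table, then filter it with WHERE.
  SELECT agg.CustomerID, agg.TotalSales
  FROM (
      SELECT CustomerID, SUM(TotalDue) AS TotalSales
      FROM Sales.SalesOrderHeader
      GROUP BY CustomerID
  ) AS agg
  WHERE agg.TotalSales > 10000;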

There is currently no production-grade implementation of D; it is a prescriptive list of qualities that a good relational query language should espouse, and it is meant to be educational.  Rel is an open-source implementation of Tutorial D, written in Java, but I'm not aware of any commercial product that formally supports it.  

Perhaps it is time that SQL people understand what the NoSQL people hate about SQL.  Then we can focus on providing extensions that cover some of the things listed above that would make all of us more productive without having to abandon the many good things of the relational model.  

What exactly is wrong with SQL and RDBMSs?

This is my third post on my evaluation of NoSQL products for a client who wishes to replace their (rather expensive) SQL Servers.  Many relational guys ask themselves, "why all the fuss about NoSQL?  What is wrong with relational data stores?"  I felt the same way 15 years ago when people wanted to replace RDBMSs with XML stores.  But the movement is underway because people feel the relational model is flawed.  In my mind, NoSQL is not a direct replacement for relational data stores; NoSQL has its place, just like relational does.  But we relational guys are making it tough on ourselves.  We don't want to understand WHY people like relational alternatives.    

This post is a series of complaints about us relational people.  These complaints bother lots of people...the same people who decide which data persistence engine gets chosen for new projects.  If we relational guys don't change our ways we are going to be out of jobs.  The complaints, in no particular order:  

Relational database Schemas are Inflexible

I've been to numerous NoSQL presentations and this is the key point that these vendors drive home.  

How many times have you asked your DBA to add a new column to a table and they refused?  With <insert NoSQL product here> we are schema-less.  We don't care if you add new data elements to your schema or not, because we have no schema.  Add whatever data elements you want, whenever you want.  We'll support it.  

The fact is, relational databases can do this too.  It's not the RDBMS that is inflexible, it is the DBA.  More on that in the next section.  

There are no valid reasons why a relational database schema has to be inflexible.  The reasons are man-made contrivances.  I have many blog posts showing that you can add new columns and value them while a system is up, with ZERO impact to the users.  

There are some schema changes that are disallowed by the major RDBMS vendors.  For instance, adding a non-nullable column to a billion-row table generally means some downtime.  But that is not a flaw of the relational model; that is a flaw in the vendor's implementation of the relational model.  There is nothing in relational theory that limits schema flexibility.  The problem is that data modelers and data architects are not sufficiently well-versed in how to decouple the logical model from the physical model so that evolutionary relational databases can be a reality.  These people want to tie table structure to its on-disk representation.  Data architects and data modelers need to update their skills.  
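To show what I mean, here is a minimal sketch of an "online" column add (the table and column names are made up): add the column as NULLable, which is a metadata-only change, backfill it in small batches so you never hold long blocking locks, and only then tighten the constraint.

  -- Metadata-only change: no data movement, no long locks.
  ALTER TABLE dbo.Orders ADD StatusCode tinyint NULL;

  -- Backfill in small batches while the system stays online.
  WHILE 1 = 1
  BEGIN
      UPDATE TOP (5000) dbo.Orders
      SET StatusCode = 1
      WHERE StatusCode IS NULL;

      IF @@ROWCOUNT = 0 BREAK;
  END

  -- Only now enforce NOT NULL (this validates existing rows but requires no backfill).
  ALTER TABLE dbo.Orders ALTER COLUMN StatusCode tinyint NOT NULL;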

DBAs, Data Architects, and Database Developers SUCK

I don't feel this way, but many of our fellow software developers do.  Sorry, but it's true.  

We DBAs tend to ask too many questions and we make developers think differently about their data.  They don't like that.  We data professionals like to think we are doing our employers a valuable service by asking the difficult questions about new data requirements, but in reality we are not.  Developers are fighting us at every turn and we don't even realize it.  

We've all seen this happen:  A developer/analyst requests a new column in a table.  What does the DBA do?  

  • We ask tons of questions about WHY the data is needed.  
  • We deny the request because:
    • the requestor wanted to call it "Status" and our naming conventions require the name to be StsVal.  We direct the requestor to read the 2000-page "Data Standards and Naming Conventions Document, V3.15".  I'm not against standards; I'm against the expectation that EVERYONE knows EVERY standard.  Let's be a little more helpful.  
    • the requestor wanted to use the BIT datatype but our standard is to use TINYINT.  
    • the requestor did not specify an appropriate DEFAULT for the new column.  
    • the new column was requested to be NOT NULLable and that is disallowed.  
    • the change will require too much data conversion code to be written.  
  • After the request is DENIED we ask the requestor to make the necessary changes and resubmit for the next Change Review Board meeting, which is held the first Tuesday of every month.  So basically, wait a month.  

Get the picture?  DBAs are notorious for not being helpful.  Sorry, but that's my opinion, and the perception of many others too.  

Don't believe me?  Then why do we all see so many of the following traits in our databases:  

  • varchar(max) columns everywhere, because if the requestor asks for varchar(50) and later needs it to be varchar(100), the DBA will deny the request to change it.  Better to ask for "too much" than "too little".  
  • varchar(max) columns that end up storing JSON, XML, or data in some homespun markup language.  Developers do this to avoid data modeling and DBAs.  
  • EAVs.  An EAV (entity-attribute-value) table has almost infinite schema flexibility (see the sketch after this list).  
  • Tables with ZERO DRI.  The developers don't want to admit that there might be relationships to existing data because then they'll need to deal with DRI.  
  • Data Hoarding.  We constantly see tables with no data lifecycle management.  It's easier for a developer to say that data must always be maintained without being honest about the data retention requirements with the DBAs.  
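To illustrate the EAV bullet above, this is the classic workaround (names made up).  "Adding a column" becomes just inserting a row, which is exactly why developers reach for it, and also why you lose typing, defaults, and referential integrity:

  -- One table holds every "column" for every entity.
  CREATE TABLE dbo.EntityAttributeValue (
      EntityID   int          NOT NULL,
      Attribute  varchar(100) NOT NULL,
      Value      varchar(max) NULL,   -- everything degenerates to a string
      CONSTRAINT PK_EAV PRIMARY KEY (EntityID, Attribute)
  );

  -- A new "column" appears with a simple INSERT: no DBA, no change review board.
  INSERT dbo.EntityAttributeValue (EntityID, Attribute, Value)
  VALUES (12345, 'Status', 'Active');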

To my DBA friends, PLEASE change your attitudes.  

I Can't "Discover" My Schema in a Relational Database

This is another complaint I hear too often.  Developers want to experiment and discover their schemas without a lot of up-front formal modeling.  And I agree.  And I do that every day too.  Nobody says you need to have DBA-approval to build some tables on your scrum server and do a proof-of-concept.  When you are ready then you should get proper change management approval.  This argument is another function of rigid policies and processes.  

Many companies practice rigid "waterfall" development...they MUST do data modeling before any coding is done.  And that is often WRONG.  In these companies the model is ALWAYS found to be lacking late in the development phase: the schema can't support all of the data requirements perfectly.  But again, the developers and analysts fear the DBAs, so they "work around" the schema deficiencies.  

The result?  

  • Structures that are difficult to query
  • Structures with poor referential integrity
  • Poor performance

Did you ever notice there aren't a lot of jobs for NoSQL DBAs?  That's because the NoSQL vendors don't want DBAs; DBAs limit the creative process.  But flexibility to assist in prototyping and experimentation is not a function solely of NoSQL.  

SQL is hard to use and is not expressive enough

This is true.  But it is getting better.  Doing "paging" in SQL 10 years ago required lots of lines of code and performed poorly.  Now we have constructs like TOP and LIMIT that are easier to use.  And the SQL language keeps improving; for instance, SQL Server 2012's OFFSET/FETCH already lets you page through a result set without needing a CTE first.  
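Here is the old ROW_NUMBER-in-a-CTE paging pattern next to that OFFSET/FETCH syntax (table and column names are made up):

  -- Old pattern: number the rows in a CTE, then filter on the row number.
  WITH NumberedOrders AS (
      SELECT OrderID, OrderDate,
             ROW_NUMBER() OVER (ORDER BY OrderDate DESC) AS rn
      FROM dbo.Orders
  )
  SELECT OrderID, OrderDate
  FROM NumberedOrders
  WHERE rn BETWEEN 21 AND 30;

  -- Newer pattern: the third page of 10 rows, no CTE required.
  SELECT OrderID, OrderDate
  FROM dbo.Orders
  ORDER BY OrderDate DESC
  OFFSET 20 ROWS FETCH NEXT 10 ROWS ONLY;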

The NATURAL JOIN syntax would be a welcome addition too.  

Here are some other things SQL really needs:

  • array structures
  • macros to remove code duplication efficiently
  • performant scalar functions

It is getting better.  And I defy you to tell me that Hive or Pig is more expressive than SQL.  

And, of course, we always have ORMs to help us avoid hand-crafting SQL for mundane, repetitive tasks.  

Summary

There are good reasons to use a NoSQL solution.  There are also bad reasons.  This blog post was an attempt to outline a few egregious reasons why people choose NoSQL solutions.  These are all issues of perception and education.  

Structuring Your ETL like a Queue

This is a follow-on to Performant ETL and SSIS Patterns.  I really need to do a longer, explanatory post on this.  Two of the largest performance problems I see with SSIS packages are their lack of parallelism and the fact that they are written to run large batches during a defined time window.  The former is totally unnecessary; the latter is unnecessary if you structure your processing smartly.  

Performant ETL and SSIS Patterns

It's a helluva job market out there right now if you have ETL, SSIS, DataStage, or equivalent experience.  I guess you can make some generalized deductions about this:

  • more companies are trying to integrate their data stores.
  • more companies need to copy data from OLTP to OLAP systems.  
  • It's hard to find good ETL people.  

Unfortunately, too many job postings ask candidates to have experience with a specific ETL tool such as SSIS or DataStage.  Too many candidates have great tooling experience but very little grounding in ETL best practices, regardless of the chosen tool.  I've been called in a lot lately to help fix various ETL processes.  Each one uses a different ETL tool, and each one exhibits the same terrible ETL anti-patterns.  When I fix those anti-patterns everything just magically runs better.  I have yet to touch actual ETL code.  

To quickly summarize the most egregious issue: developers are doing too much work in the ETL tool and not enough work in their RDBMS.  The RDBMS will almost always do things faster than the ETL tool can.  There are few exceptions to this rule (string manipulation and regular expressions, for instance, are better in most ETL tools than in SQL).   

I've written tons of blog posts (here's a link to an entire series of ETL Best Practices) about how to do performant ETL with good patterns.  However, I find myself constantly searching my blog to find a succinct list of things to check whenever I'm brought into another ETL engagement.  Here's the biggies:

  • ETL Best Practices.  I'm not going to list all of them.  That's a long blog post.  You should go reference that.  
  • Do more in SQL and less in the ETL tool.  Examples:
    • SSIS is not available in SQL Express 
    • Sorts are better handled in the RDBMS.  Use an ORDER BY clause in your SQL statement instead of relying on your ETL tool to sort.  If using SSIS, mark the OLEDB source's output metadata as sorted (IsSorted and SortKeyPosition) so downstream components know the data is already ordered.  
    • Any set-based, relational operation will be faster in SQL than in your ETL tool.  Your RDBMS will likely automatically determine the best parallelism and memory management to use.  On the contrary, you'll never get this right in your ETL tool.  
    • Big packages are not good.  If your SSIS package is swapping to disk then you are not running efficiently.  There is a SSIS performance counter called "Buffers Spooled".  It should always stay at 0.  Always.  Otherwise you are using your swap file.  
    • If you run Integration Services on your SQL Server then use the "SQL Server Destination" vs the "OLEDB destination".  Performance is markedly better.  
  • Understand and optimize your data structures.  Examples:
    • Understand locking, Lock Escalation, and transaction isolation semantics for your given RDBMS.  If using SSIS then understand the difference between BATCHSIZE vs ROWS_PER_BATCH and ensure they are optimized for your destination RDBMS.  
    • Partitioned tables.  Understand them and try to use them if your destination is SQL Server.  The SWITCH statement is your friend, maybe your best friend (see the sketch after this list).  
    • Make sure you are always doing Minimally Logged Operations whenever possible.  And always verify with testing and performance monitoring.  Every RDBMS has different rules for what equates to a minimally logged operation.
    • Determine whether it is faster to disable/drop your indexes before your data loads.  There is lots of conflicting guidance and there is no substitute for testing.  I've found that every engagement needs a different setting.   Here is some additional guidance you may not find elsewhere:
      • A commit size of 0 is fastest on heaps
      • if you can't use 0 because you have concurrency concerns then use the highest value you can to reduce the overhead of multiple batches and transaction control.  
      • A commit size of 0 on a clustered index is a bad idea because all incoming rows must be sorted.  This will likely cause spooling to tempdb which should be avoided.  
  • There are network-related settings that are invaluable for performance:
    • jumbo frames will increase your packet payload from 1500 bytes/frame to 9000 bytes/frame.  
    • Change sqlConnection.PacketSize.  The default is 4096 bytes.  That's small for moving around large amounts of data.  32767 would be better.  
    • use network affinity at the OS level.  
  • When using the OLEDB source, always set the Data Access Mode to "SQL Command", never "table or view".  Performance is almost always better.  With a SQL command, sp_prepare is called under the covers, which gives the optimizer a better shot at generating a good execution plan.  "Table or view" does the equivalent of a SET ROWCOUNT 1 just to get its column metadata, and that can lead to bad execution plans.  
  • You should seriously consider restructuring your ETL processes so they work like a queue.  I'll cover this in the next post, Structuring Your ETL like a Queue.  
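Since the SWITCH statement comes up on almost every engagement, here is a minimal sketch of the load-then-SWITCH pattern.  Every object name, the partition number, and the boundary dates are made up; the staging table must match the target's structure, indexes, and filegroup:

  -- Load the staging table; TABLOCK helps qualify for minimal logging.
  TRUNCATE TABLE dbo.FactSales_Staging;

  INSERT INTO dbo.FactSales_Staging WITH (TABLOCK)
  SELECT SaleDate, CustomerID, SalesAmount
  FROM dbo.FactSales_Incoming;

  -- A trusted check constraint proves the staged rows fit the target partition.
  ALTER TABLE dbo.FactSales_Staging WITH CHECK
    ADD CONSTRAINT CK_FactSales_Staging_SaleDate
    CHECK (SaleDate >= '20130101' AND SaleDate < '20130201');

  -- The switch itself is a metadata operation: near-instant regardless of row count.
  ALTER TABLE dbo.FactSales_Staging
    SWITCH TO dbo.FactSales PARTITION 5;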

Are some new SQL Server features a response to the NoSQL movement?


Indubitably!

The last few releases of SQL Server, in my mind, were focused heavily on business intelligence.  And I believe the best SQL Server jobs in the last few years are focused on BI and data warehousing.  OLTP does not seem to be where the new features and money are these days.  Here is a quick list of relatively new SQL Server features targeted to "bigger data":

  • Partitioning.  Yes, you can use it for OLTP, but specific features like SWITCH are meant specifically for data warehouses.   
  • Sparse Columns.  Great for denormalized data structures.  
  • Compression.  Again, you can use it for OLTP, but it seems more tailored for OLAP applications.  
  • Parallel Data Warehouse
  • And there are just way too many obvious BI and SSIS enhancements to list them all.  PowerPivot, for instance.  

But there is a shift.  I see many more new features centered around "BigData" and as a response to the NoSQL movement (if it is a movement, is it of type religious...or bowel?).  Here are some examples:

  • StreamInsight:  allows you to analyze streams of incoming data in real-time.  
  • HDInsight:  The HD is for Hadoop, and is available only on Azure.  
  • Columnstore indexes:  This is where your data tuples are organized by column instead of by row, which aids aggregations over bigger, read-mostly data sets.  MS first gave us the feature as read-only; how long will it be before columnstore indexes are updateable (it looks like that may be SQL Server 2014)?  Or, like SAP HANA, will data magically migrate from row-oriented to column-oriented storage, and back, as the engine optimizes your data for you?  (See the sketch after this list.)  
  • Hekaton:  in-memory tables.  Microsoft removed the DBCC PINTABLE feature a few releases ago, but it was a good idea at the time, and it still is.
  • SQL Server 2014 will introduce "delayed transaction durability" to support "extreme write workloads" (http://msdn.microsoft.com/en-us/library/dn449490(v=sql.120).aspx).  The latest CTP already includes the feature, and it looks like it is only for Hekaton, for now.  This is clearly meant to help SQL Server compete with NoSQL products and their "eventual consistency"/non-durable write features.  

  • It looks like SQL Server 2014 will support the in-memory buffer pool being extended to SSDs.  This is a great alternative to a RAMdrive.  I hope this feature makes it to RTM.  
  • Hybrid cloud features.  One of the touted benefits of NoSQL is your ability to deploy it in-house and then migrate it into the cloud.  This is definitely something Microsoft has latched on to.  
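To make the columnstore item above concrete, here is a minimal sketch (the table and column names are made up) of a nonclustered columnstore index as it exists in SQL Server 2012:

  CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_FactSales
  ON dbo.FactSales (SaleDate, CustomerID, SalesAmount);

  -- In 2012 the index is read-only: once it exists, INSERT/UPDATE/DELETE against
  -- dbo.FactSales fail until the index is disabled or dropped.  That is why an
  -- updateable columnstore (expected in SQL Server 2014) is such a big deal.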
