I have developed and delivered these presentations for local User Groups as well as private consulting engagements. Where I have links, you may copy, change, or reference this material as you like. A simple attribution is always appreciated. For sessions without links I simply haven’t had the time to get the materials organized properly with customer-identifiable information removed, etc. Feel free to contact me if you want additional details.
Links to My Speaker Profiles
|Session Title||Abstract||Target Audience||Duration||Additional Information|
|Practical DevOps Using Azure DevTest Labs||Azure DevTest Labs is a great way to solve common development environment challenges. Self-service deployment can be done quickly AND cost-effectively. By using templates, artifacts, and “formulas” you can deploy the latest version of your application, whether Windows or Linux. This is great for development, testing, training, demos, and even lightweight DR environments. We’ll show you how to get started with DevTest Labs regardless of whether you use Visual Studio.||DevOps||2-4 hours|
|Self-Service Analytics: Best Practices--2018||This is from my livestream presentation at Azure DataFest.||Data Scientists, Data Engineers||1-2 hours||Link to presentation materials|
|Azure SQL Data Warehouse: Performance Tuning an MPP System||Azure SQL Data Warehouse is an MPP version of Microsoft SQL Server running in Azure. MPP systems have unique performance tuning requirements that are sometimes counter-intuitive to the traditional data professional. In this session I show you what makes an MPP system different by showing you how to tune it for your workload.||Data Professionals||1-2 hours||Download|
|Self-Service BI||Self-service analytics is possible when you give your staff the tools they need to do data discovery and data sandboxing. I accomplish this using Data Lakes (not necessarily using an expensive Hadoop implementation) and Kappa Architecture. Kappa allows ETL/ELT to be accomplished without expensive ETL developers. ETL can be done by your staff as part of data sandboxing. I'll put all of this together for you in this webinar and give you some case studies of implementations I've done with my clients.||Data Analysts, Developers, Program Managers||1 hour||Watch Webinar|
|Data Science for the Rest of Us||I show you just enough "theory" so you can begin your Data Science Journey. We cover tooling that anyone can learn quickly. We cover a few great uses in different industries that you can begin using today.||Data Analysts and Developers||1-2 hours||Presentation|
|Big Data and Hadoop||Don’t know where to start with your Hadoop journey? Is your company considering Hadoop and you want to get up to speed quickly? Just want to modernize your skills? If you answered YES to any of these then this session is for you. Hadoop is a hot skill in the data space but it’s challenging to learn both the new technologies (like Spark and Hive) as well as the modern concepts (like Lambda/Kappa and “streams”). We’ll break down the most important concepts that you need to know and can start using in your job TODAY, even if you don’t have a Hadoop cluster. We’ll do an overview of the important tooling and show you how to spin up a sandbox in minutes.||IT Pros with a Desire to Learn Something New||2-4 hours|
|So You Want to Be a Data Scientist?||Does R seem like an alien language to you? Does data science terminology seem overly confusing? Would you like to learn more about data science but are scared of the math? Fact is, you’re probably doing “data science” today. You don’t need to know a lot of R or Python to be an effective data scientist. We’ll cover important terminology and use cases and then dive in by exploring data with tools you are already using. We’ll deploy a modern data science workstation in just 10 minutes. Finally, we’ll put it all together and create a predictive web service using Azure Machine Learning.||Developers and Analysts||2-4 hours|
|Implicit Transactions||Have you ever seen sp_reset_connection in Profiler and wondered what it does? Did you ever see an "orphaned spid" with an open transaction and wondered where it came from? Do you run jdbc or dblib data access technologies? If you answered yes to any of these then it is important to understand how implicit transactions work. You may be surprised that transaction handling doesn't always work the way you think it does.||Intermediate to advanced DBAs and Developers||15-60 mins|
|Teach Yourself Service Broker in 15 Minutes||You know that asynchronous data eventing is the future. You've heard about Service Broker but it seems too complex and you're not even sure when you should use it. You don't need to understand esoteric queues, services, activators, and conversations to get a basic Service Broker solution up and running quickly and reliably. By the end of this session we'll have a service set up to do asynchronous triggers and another service that will "tickle" you every minute, and we'll be able to install and tear them down with a single command.||DBAs and Developers||15-60 mins|
|Another Way to do DDL||DDL commands can be cumbersome to write. In this session I'll show you a method to do a "properties-based" DDL deployment. By the end of the session we'll have a stored procedure that reliably handles DDL without the need to understand esoteric DDL syntax. Certainly you can use "scripting wizards" and "SQL compare" tools to do this, but I'll show you some things a custom DDL deployer can handle that the other tools can't.||DBAs and Developers||1-2 hours||CodePlex code and presentation|
|NOLOCK||Most of us know that the use of NOLOCK and READ UNCOMMITTED is discouraged. But did you know that NOLOCK does not mean 'no locks' are taken? Did you know that a simple SELECT...WITH (NOLOCK) can cause blocking? Did you know that queries using NOLOCK can actually fail and must be retried just like a deadlock? Did you know that the use of NOLOCK can even cause (spurious) data consistency errors in your logs? We'll look at some examples and I'll show you a good pattern to always use if you absolutely MUST use NOLOCK.||Developers||15-30 mins|
|Zero Downtime Database Upgrades||Did you ever need to migrate billions of rows of data from one table structure to another with zero downtime? Did you ever need to refactor multiple huge tables without impacting your online users? Wouldn't it be nice to "stage" the "next" version of your application alongside your existing production objects and just "transfer" them in with almost no downtime? In this presentation I'll show you a little known feature of SQL Server that can do all of this for you...ALTER SCHEMA TRANSFER.||Advanced DBAs and Developers||1-2 hours|
|Slay Your UDF Halloween Hobgoblins with SCHEMABINDING||You probably know that scalar UDFs can be evil because of their RBAR (row by agonizing row) nature. But do you know why? The overhead exists to guard against the "Halloween Problem". But you probably don't care about that. If you use SCHEMABINDING on your UDFs you can see dramatic performance improvements, and you don't need to worry about Halloween Hobgoblins either.||Developers||15-30 mins|
|Get the Actual SQL From a Prepared Execution Call||Did you ever try to run Profiler to help a programmer debug a stored procedure only to find that the stored procedure is never called? Instead you see a bunch of "sp_execute 15" commands. I'll show you some quick methods you can use to decode "sp_execute 15" into "EXEC YourProcedure".||Intermediate to Advanced DBAs and Developers||30 mins|
|Batch Processing and Error Handling||Do you have stored procedures that batch load millions of rows at a time? When one row throws an error the entire batch will roll back. In this session I'll show you what I think is the optimal batch size, as well as a pattern that efficiently finds error rows without aborting entire batches.||Intermediate to Advanced Developers||15 mins|
|Diagnosing the SQL Server Connectivity Stack||Connecting to your SQL Server can become daunting if your environment is customized. How do you troubleshoot if your server listens on an alternative port? How can you determine if a firewall is causing your connection to fail? How do you troubleshoot if your client doesn't have SQL tools installed? What if your client is Apache/Linux? Why does my handshake with SSPI fail, and why am I shaking its hand anyway? Whenever you have connectivity issues the key is to follow the "stack diagnosis" process. In this session we will cover the SQL Server communication stack and how to troubleshoot it using basic network diagnostic tools found on any OS.||DBAs||30-60 mins||Troubleshooting Connectivity|
|Hadoop for the RDBMS Expert||Struggling to learn Hadoop? Is your company considering a Hadoop deployment and you want to get up to speed quickly? Just want to update your IT skills? If you answered YES to any of these then this session is for you. Hadoop is the hot new skill in the data space, and it's easy to understand the basics and apply what you already know about SQL Server to get up to speed quickly. We'll cover the latest trends in Big Data and the newest technologies in the Hadoop stack. We'll show you some great use cases for Hadoop, how to spin up a test cluster in a few minutes for free, and how to integrate SQL Server quickly. We'll also cover cases when Hadoop is NOT the right solution for a given problem.||Devs, DBAs, BI/Reporting Experts||120 mins|
|U-SQL in Azure Data Lake||Microsoft created a new language called U-SQL to make big data processing easier on Azure Data Lake. U-SQL provides the power of SQL and the extensibility of C# to make processing of any data - structured or unstructured - easier. U-SQL can be run over your Data Lake without a dedicated HDInsight cluster. It can even query blob storage and Azure SQL Database. We'll show you why this offering is so compelling, plus the development tooling, language capabilities, and use cases. We'll also spend time networking and discussing general Hadoop questions relevant to data professionals who want to learn more about Big Data.||Devs, Data Architects||120 mins|