Hash Diff calculation with SQL Server (Datavault Series)

Updates

31 January, 2017: Really good news! The 8k limitation on HASHBYTES has been removed in SQL Server 2016!

Today I would like to discuss Hash Diff calculation using SQL Server 2014, from my own experience.

My purpose is NOT to be complete and/or to replicate the book “Building a scalable data warehouse with Data Vault 2.0”. For full details please read the book.

However, I will give a short introduction for those who hardly know what I am talking about.

Introduction

Hash differences or Hash diffs

A hash difference is a hash value of all the descriptive data of a row in a satellite, plus the business key column(s).

A hash difference column is used to detect changes in a new version of a satellite’s row for a particular business key, and can be used in a comparison instead of doing a comparison on every individual descriptive attribute.
The main reason to use hash diff columns is to improve performance when loading new satellite rows (of which one or more attributes have changed).
The more columns the satellite has, the higher the performance gain will be.
To support data warehouse automation patterns, hash diffs can also be used for satellites with only a few or even one descriptive attribute.

I think a different name, like RowHash, RecordHash, DataHash or AttributeValuesHash, would better describe the contents of the column, because the value itself is not a difference, but that’s a different (relatively unimportant) discussion.

Hash keys

This article is not about Hash keys. However, to be complete on hashing as far as Data Vault 2.0 is concerned, I will give a short introduction to them as well.

Hash keys replace sequence numbers (generated by the database engine) of the Data Vault 1.0 standard. They support geographically distributed data warehouses, as well as integration with big data environments like Hadoop.

A hash key is a hash value of the business key column(s) used in a Hub or Link. The advantage is that it can be calculated deterministically from the business key, which enables parallel loading and a multiplatform / distributed datawarehouse architecture. For instance a customer with code NL123 will have the same hash key on all distributed parts of the data warehouse. Because the hash key always has the same length and is stored as a (fixed length) CHAR column, performance is usually better than when the business key column(s) are used directly as primary key.
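To give an idea, here is a minimal sketch of a hash key calculation (the table and column names are illustrative):

Calculating_a_hash_key.sql

SELECT
      UPPER(CONVERT(CHAR(32),
        HASHBYTES('MD5', UPPER(CONVERT(NVARCHAR(50), [CustomerCode])))
      , 2)) AS [CustomerHashKey]
    , [CustomerCode]
FROM [stg].[Customer]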

How to calculate a hash difference in T-SQL

For full details about Endianness, differences of hash functions between different platforms, how to cope with new columns and other technical details I refer to the book mentioned above. If you want to work with Data Vault 2.0 you will need the book anyway.

As said, a hash difference is a hash value of all the descriptive data of a row in a satellite, plus the business key column(s). To do the hashing, we have to put all the column values together in one nvarchar value, and then apply a hashing function to it. This is the short version and the basis for understanding it.

DV-S01E02-hashdiff

Between the different column values you have to use a delimiter, preferably one that does not occur in the column values. Otherwise you have to “escape” the delimiter in column values.
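For example, if the semicolon is your delimiter but can also occur in the data, you could escape it first. A sketch (the escape sequence '\;' is an arbitrary, illustrative choice):

Escaping_the_delimiter.sql

SELECT REPLACE(CONVERT(NVARCHAR(50), [Column01]), N';', N'\;') AS [EscapedValue]
FROM [stg].[bla]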

If you want to do a case insensitive compare, you should convert all values to either upper- or lowercase. But this means that if in the source system a customer name was “hans michiels” and is changed to “Hans Michiels”, you will not detect the change and therefore not store it in the Data Vault. So whether or not this is desired behaviour depends on the contents of the satellite and your business view on it.

The hash diff calculation results in a binary(16) value, when using the hashing algorithm MD5 (which is recommended). To enable better cross-platform- and tools support, this is then converted to a hexadecimal string of 32 characters, stored as a CHAR(32) column.

In all cases you must also convert the hexadecimal hash value to either all uppercase or all lowercase, because not all convert functions give the same output: some return lower- and others uppercase. When comparing values that were made by different hash functions (read: on different systems), making them all uppercase (or all lowercase) ensures the values do not differ in casing only, which could lead to a false negative answer to the “are the rows the same” question.

I understand this is all dry and boring nerd stuff if you do not see an example, so .. a simple example:

Calculating_a_hash_diff.sql

SELECT
      [bla]
    , UPPER(
        CONVERT(CHAR(32),
          HASHBYTES('MD5',
            UPPER(
              CONCAT( ISNULL(CONVERT(NVARCHAR, [BusinessKeyOfHub]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), [Column01]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), [Column02]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), [Column03]), N'')
              ) -- END CONCAT
            ) -- END UPPER
          ) -- END HASHBYTES
        , 2) -- END CONVERT
      ) -- END UPPER
FROM [stg].[bla]

So far nothing shocking, but there are a few pitfalls. If you keep them in mind, you will be an excellent hasher soon. 😉

CONVERT(NVARCHAR without specifying the size

Do this only for values of 30 characters or less; otherwise the value will be cut off, which could lead to wrong hash diff values that do not change when the attribute value changes. It is for instance okay for integers, decimals (unless the precision is more than 28; mind the decimal separator and a possible negative sign), datetime stamps and bits. When converting (n)varchar columns the safest thing to do is to follow the original size; for unique identifiers use 38 or 40.
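You can see the default length of 30 in action with a quick test:

Truncation_demo.sql

-- CONVERT(NVARCHAR, ...) without a size defaults to NVARCHAR(30),
-- so the value is silently cut off after 30 characters.
DECLARE @value NVARCHAR(100) = N'This string is longer than thirty characters';
SELECT CONVERT(NVARCHAR, @value)      AS [SilentlyTruncated]
     , CONVERT(NVARCHAR(100), @value) AS [FullValue]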

Beware of CONCAT

Beware of concatenating (N)VARCHAR(MAX), (N)TEXT and XML columns using the CONCAT function. It appears that only the first 8000 bytes of (N)VARCHAR(MAX) columns end up in the concatenated result.
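A quick way to check this behaviour on your own version and platform:

Concat_check.sql

-- Compare the length before and after concatenation of a long NVARCHAR(MAX) value.
-- If the second length is smaller, the truncation applies to your version.
DECLARE @big NVARCHAR(MAX) = REPLICATE(CONVERT(NVARCHAR(MAX), N'x'), 10000);
SELECT LEN(@big)               AS [OriginalLength]
     , LEN(CONCAT(N'a', @big)) AS [ConcatenatedLength]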

And even worse, HASHBYTES ..

HASHBYTES will only hash an NVARCHAR value with a length of 4000 or less. Above that, the error “String or binary data would be truncated” occurs. This is really important to keep in mind.
Update: this limitation of 8000 bytes (4000 unicode characters) on any string hashed with the HASHBYTES function has been removed in SQL Server 2016! Now you can hash larger (N)VARCHAR values!
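You can verify the limit (or its removal) with a one-liner:

Hashbytes_limit_demo.sql

-- 5000 unicode characters = 10000 bytes: on SQL Server 2014 and earlier this
-- fails with the truncation error, on SQL Server 2016 and later it succeeds.
DECLARE @long NVARCHAR(MAX) = REPLICATE(CONVERT(NVARCHAR(MAX), N'x'), 5000);
SELECT HASHBYTES('MD5', @long) AS [HashValue]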

Cope with the HASHBYTES and CONCAT limitations

4000 characters is quite something, right? You can only get into trouble with very wide satellites (a lot of columns) and/or usage of (N)VARCHAR(MAX), (N)TEXT or XML columns (hereafter called ‘unlimited length columns’).

So, what is wise to do? Some measures you can take are:

Make your satellites not too wide

This is a best practice, and not only from a hashing function perspective. Split the satellites by source system, then by rate of change. Keep in mind that the business key column(s) and all satellite column values are converted to NVARCHAR and then (including all delimiters) must not exceed 4000 characters if you want to use the HASHBYTES function.

Size unlimited length columns according to the actual maximum length

What I mean is: if, for instance, a country name is stored in an unlimited length column in the source system, you know that a size of 100 will be long enough. When in doubt, just do some data profiling on the source to see what the maximum length actually is. Use that size plus a safety margin for the satellite column size. Problem gone.
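Such data profiling can be as simple as this (object names are illustrative):

Data_profiling.sql

-- Find the actual maximum length of an 'unlimited length' source column.
SELECT MAX(LEN([CountryName])) AS [MaxActualLength]
FROM [src].[Address]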

Handle true unlimited length columns

Put true unlimited length columns (for instance free entry ‘memo’ fields or xml columns in source systems) in their own satellite! Otherwise, if any other (small) attribute changes, the 4k memo is copied, even when it was not changed. This wastes storage. But that alone is not enough: as said, with HASHBYTES, above 8000 bytes (4000 unicode characters) you are still f*c**d. Luckily there is a system function in the master database which uses the MD5 algorithm and takes a binary value as input. So to compute a MD5 hash value for a text column with unlimited length, you could do something like:

Hash_for_memo.sql

SELECT master.sys.fn_repl_hash_binary(CONVERT(VARBINARY(MAX),
[MyHugeMemoField])) AS [HashDiff]
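To store the result in the same hexadecimal CHAR(32) format as the HASHBYTES based hash diffs, you can wrap it in the same way (a sketch, with an illustrative table name):

Hash_for_memo_char32.sql

SELECT UPPER(CONVERT(CHAR(32),
         master.sys.fn_repl_hash_binary(CONVERT(VARBINARY(MAX), [MyHugeMemoField]))
       , 2)) AS [HashDiff]
FROM [stg].[bla]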

But don’t make the mistake to use the function [sys].[fn_repl_hash_binary] for all hashing, because the performance of HASHBYTES is much better!
On my machine HASHBYTES is three times faster than [sys].[fn_repl_hash_binary] on the same dataset.
You can check for yourself using this script:

Performance_comparison.sql

USE [msdb]
GO

DECLARE @start DATETIME2(7)
DECLARE @stop DATETIME2(7)
DECLARE @elapsed1 BIGINT
DECLARE @elapsed2 BIGINT
--\-------------------------------------------------------------------------
---) TEST PERFORMANCE of HASHBYTES function.
--/-------------------------------------------------------------------------
SELECT @start = SYSDATETIME();
SELECT TOP 1000000
      UPPER(
        CONVERT(CHAR(32),
          HASHBYTES('MD5',
            --UPPER(
              CONCAT( ISNULL(CONVERT(NVARCHAR(128), o1.[name]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o1.[object_id]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o1.[principal_id]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o1.[schema_id]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o1.[parent_object_id]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o1.[type]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o1.[type_desc]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o1.[create_date], 126), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o1.[modify_date], 126), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o1.[is_ms_shipped]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o1.[is_published]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o1.[is_schema_published]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(128), o2.[name]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o2.[object_id]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o2.[principal_id]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o2.[schema_id]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o2.[parent_object_id]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o2.[type]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o2.[type_desc]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o2.[create_date], 126), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o2.[modify_date], 126), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o2.[is_ms_shipped]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o2.[is_published]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o2.[is_schema_published]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(128), o3.[name]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o3.[object_id]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o3.[principal_id]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o3.[schema_id]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o3.[parent_object_id]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o3.[type]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o3.[type_desc]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o3.[create_date], 126), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o3.[modify_date], 126), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o3.[is_ms_shipped]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o3.[is_published]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o3.[is_schema_published]), N'')
              ) -- END CONCAT
            --) -- END UPPER
          ) -- END HASHBYTES
        , 2) -- END CONVERT
      ) -- END UPPER
  FROM [sys].[objects] o1
  CROSS JOIN [sys].[objects] o2
  CROSS JOIN [sys].[objects] o3;

SELECT @stop = SYSDATETIME();
SELECT @elapsed1 = DATEDIFF(MICROSECOND,@start, @stop);

--\-------------------------------------------------------------------------
---) TEST PERFORMANCE of [master].[sys].[fn_repl_hash_binary] function.
--/-------------------------------------------------------------------------
SELECT @start = SYSDATETIME();
SELECT TOP 1000000
      UPPER(
        CONVERT(CHAR(32),
          master.sys.fn_repl_hash_binary(CONVERT(VARBINARY(MAX),
            --UPPER(
              CONCAT( ISNULL(CONVERT(NVARCHAR(128), o1.[name]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o1.[object_id]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o1.[principal_id]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o1.[schema_id]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o1.[parent_object_id]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o1.[type]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o1.[type_desc]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o1.[create_date], 126), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o1.[modify_date], 126), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o1.[is_ms_shipped]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o1.[is_published]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o1.[is_schema_published]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(128), o2.[name]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o2.[object_id]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o2.[principal_id]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o2.[schema_id]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o2.[parent_object_id]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o2.[type]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o2.[type_desc]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o2.[create_date], 126), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o2.[modify_date], 126), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o2.[is_ms_shipped]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o2.[is_published]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o2.[is_schema_published]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(128), o3.[name]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o3.[object_id]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o3.[principal_id]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o3.[schema_id]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o3.[parent_object_id]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o3.[type]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o3.[type_desc]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o3.[create_date], 126), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o3.[modify_date], 126), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o3.[is_ms_shipped]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o3.[is_published]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o3.[is_schema_published]), N'')
                 ) -- END CONCAT
            --) -- END UPPER
          ) -- END CONVERT (VARBINARY(MAX))
          ) -- END fn_repl_hash_binary
        , 2) -- END CONVERT
      ) -- END UPPER
  FROM [sys].[objects] o1
  CROSS JOIN [sys].[objects] o2
  CROSS JOIN [sys].[objects] o3;

SELECT @stop = SYSDATETIME();
SELECT @elapsed2 = DATEDIFF(MICROSECOND,@start, @stop);

SELECT @elapsed1 AS [Elapsed_HashBytes]
, @elapsed2 AS [Elapsed_fn_repl_hash_binary]
, 'HASHBYTES is ' + CONVERT(VARCHAR(20), CONVERT(DECIMAL(19, 2), 1.0 * @elapsed2 / @elapsed1))
+ ' times faster than [fn_repl_hash_binary]' AS [Result]

That’s about it.
Just a few more things:

  • If you use a different hashing algorithm (e.g. SHA1) you cannot use the sys.fn_repl_hash_binary function, in that case you will have to seek your solution in a C# script task or CLR stored procedure, doing the hashing in .NET code.
  • Don’t use CONCAT if you have more than one column in the satellite containing [MyHugeMemoField], or you still can run into problems.

Conclusion / Wrap up

I have explained in short why Hash diff columns are used in a Data Vault 2.0 satellite, and how these values can be calculated.

After that, you have read about some limitations of the CONCAT and HASHBYTES functions when used for calculating Hash diff values, and how to cope with these limitations.

Credits
My thanks go to Vincent Kuiper for reviewing the initial draft of this article.

(c) 2016 hansmichiels.com – Do not steal the contents – spread the link instead – thank you.

Data Vault and Dimensional Modelling, a happy marriage! (Data Vault Series)

We need ambassadors!

I was at a one-day course recently and spoke to a professional working in the business intelligence domain.
He said that he had just followed a two-week course for datawarehouse architects.
I asked him, what did they tell about Data Vault?
His answer was shocking: “Not so much ..”

If recently trained datawarehouse architects do not get trained to use Data Vault in a datawarehouse architecture, there definitely is a lot of missionary work to do, because the Data Vault can add so much value to your datawarehouse solution.
And more importantly, it is not a choice between a Kimball dimensional model OR a Linstedt Data Vault. Use both and combine their strengths!

Intended audience

This is a strong call for everybody who reads this to share this post with at least one person you know working in the business intelligence domain using star schemas (a.k.a. dimensional modelling) directly fed from source systems.

What are the strengths of a Dimensional Model (a.k.a. Star Schema)?

  • Because the dimensional model is already around for decades, a lot of front end tools have an excellent support for it.
  • A dimensional model is designed for reporting, slicing and dicing, for getting data out. That is what it is good at.

    This becomes clear by the following examples:

    • Query performance is usually better than that of other models, like Data Vault or Relational Datawarehouse, especially for aggregations.
    • A star model is an excellent source for Cube databases, like SSAS, especially when using a multidimensional model.
    • Date selections and comparisons of periods (e.g. Q1 this year versus Q1 last year) are easier than in Data Vault or Relational models.
  • It enables self service BI. Give business (power) users (direct) access to a dimensional model, and there is a good chance that they can get data out in the format they want.

What are the strengths of a Data Vault?

  • A Data Vault is meant to capture all the data all the time. Therefore it stores all data from the source systems, captures the changes in it, and keeps a full history of all changes (with the side note that multiple changes between two load cycles will be seen as one).
  • A Data Vault captures all data exactly as it is stored in the source systems: “the good, the bad and the ugly”, with no business rules applied to it (except, in some cases, hard business rules like data types). This makes all data auditable.
  • Data Vault 2.0 supports cross platform integration with other systems (e.g. Hadoop) and decentralized datawarehouses, for instance in different global regions.
  • With Data Vault 2.0, working agile is easy: in sprints, building the system in increments.
  • Data Vault has very good support for data integration: satellites hanging off the same Hub can contain data from different source systems.

So? What are the strengths of a datawarehouse architecture using both?

  • You can change your mind! If your business rules change (and it happens!), just reload your dimensional model from the Data Vault with the new rules applied.
  • You have one version of the facts. If you have multiple dimensional models, you have not.
  • Data history is not like a destroyed monument. With a Data Vault, you capture all data from the source systems, and all changes applied to it. Source systems usually reflect only the current state of the data, so without a change capturing central storage like Data Vault, your history is gone forever!
  • Your dimensional model does not have to serve two purposes: data storage and presentation of information. These two tasks can be in conflict: for instance, there is currently no reporting need for all the data that you could grab from the source systems, so you leave it out of your star schema. But if the question comes tomorrow, you have no data, no history!
  • You have full auditability. Because the (raw) Data Vault stores the data from source systems unmodified, every measure and every dimensional attribute can be traced back to the source it came from and the moment it was loaded. This makes it possible to refute claims that the data is not correct. Maybe now it will become clear that the old datawarehouse- or reporting system has lied for years!

How can you use both?

The answer is simple: by using a multi-tier architecture.
Data from source systems is first extracted to a Staging area, before it is moved into the Enterprise Data Warehouse using a Data Vault Model (with or without some optional components).
From there it will be distributed to dimensional models (star schemas) and cubes, and whilst the data is on its way, business rules can be applied.
DV-S01E02-multi-tier-architecture
Multi-tier architecture using both Data Vault and Dimensional Modelling techniques.
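To make the last step a bit more tangible, here is a minimal sketch of loading a dimension from a Raw Data Vault hub and satellite while applying business rules on the way out. All object names and rules are just illustrative examples:

SELECT
      h.[CustomerHashKey]
    , h.[CustomerCode]
    , UPPER(s.[CustomerName])           AS [CustomerName] -- business rule: standardize casing
    , COALESCE(s.[Country], N'Unknown') AS [Country]      -- business rule: handle missing values
FROM [dv].[HubCustomer] h
JOIN [dv].[SatCustomer] s
  ON s.[CustomerHashKey] = h.[CustomerHashKey]
WHERE s.[LoadEndDate] IS NULL -- current satellite rows only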

Conclusion / Wrap up

I have written this post to create more awareness about using both Data Vault and Dimensional Modelling or Star Schemas in a data warehouse architecture.
I have listed strengths and benefits of Data Vault, Dimensional Modelling and of an architecture using both.

(c) 2016 hansmichiels.com – Do not steal the contents – spread the link instead – thank you.

A Plug and Play Logging Solution (SSIS Series)

Introduction

Ever had to investigate an error that occurred during the execution of a SSIS package? Then you will know that logging is the key to knowing what happened.

And not only logging in SSIS itself, but also in stored procedures, C# script tasks and C# console applications you might have written that are executed from within the package.
SSIS-S01E01-log-story
So the log tells you a story (I once heard “so it reads like a thrilling boys’ book”, I liked that expression, GJ).

Also you do not want to reinvent the wheel for every SSIS project, but want to use a standard solution. Plug and Play, instead of spending sprints on “utilities”.

Feature list

The features in short (as compared with what SSIS offers out of the box):

  1. It’s a Plug And Play SSIS Logging Solution in most cases: no additional programming in SSIS Packages is required*), so can be easily used for all existing packages! Only when you have multiple master packages with the same name, you will have to do a little of set up work.
  2. Log messages are labeled with a message class (a sort of severity of the message) for easy filtering.
  3. Changing the maximum message class to log is possible without redeploying any SSIS Packages (for instance enable VERBOSE logging for trouble shooting, then set it back to DEBUG or INFO).
  4. Log messages are labeled with a retention class for easy purging. There is also an “audit” retention class for messages that may never be purged/deleted.
  5. Purging can be automated, so no scheduling is needed.
  6. Logging is Integrated/centralized for SSIS Packages, stored procedures and other home made software.
  7. Deduplicated logging can be enabled. If it is, identical SSIS log messages (e.g. first from the OLE DB Destination, then from the Data Flow Task, and then from the SSIS Package) are, as much as possible, logged only once.
  8. Master/child relations for SSIS packages can be configured, so that log entries from master- and childpackages can be retrieved easily together in a single timeline. Out of the box this works using a naming convention (the master package must have “master” in its name), but it can be fine-tuned for every package that runs.
  9. Easy configuration by executing a stored procedure, to set retention periods, maximum log class, automatic purging on/off, deduplicated logging on/off, and naming convention for the master package.
  10. Storage of start- and endtimes both in the local timezone and in UTC.

*) As far as the logging that SSIS does is concerned. If you want logging in your own stored procedures and executables you need to implement it first.

Quick Start Guide

  1. Download the installation solution script here.
  2. Execute the entire script to install on a new database [logdb]. To install on an existing database, skip the part of the script where the database is created and execute the rest against the existing database. All objects of the solution are created in database schema [log] (except for stored procedure [dbo].[sp_ssis_addlogentry]; this name is needed by SSIS).
  3. Enable logging in your SSIS Packages if you have not already done that. Use a logging provider for SQL Server and select all events for all components of the package (more details below).
  4. Configure the connection managers of your SSIS Packages to use the database in which you installed the solution. Alternatively you can execute the stored procedure [log].[spCreateCrossRefs] to redirect logging for existing packages to the logging solution without any changes in your SSIS packages. See the User Manual for more details.

Steps to follow when you want to enable logging in your SSIS Package:

SSIS-S01E01-enable_logging

User Manual

Introduction

The “hansmichiels.com Plug And Play Logging Solution” provides Plug and Play logging for SSIS Packages, stored procedures and other home made software, like command line applications or web applications.
After installing the solution, as explained in the Quick Start Guide, you can manage the solution using a few stored procedures, or accept the default settings and run it out of the box.

Understanding MessageClass and RetentionClass

MessageClass can have one of these 6 values:

  • 1 = Fatal Error (SSIS OnError event)
  • 2 = Non-fatal error (reserved for user defined logging for non-fatal errors, not used by SSIS packages)
  • 3 = Warning (SSIS OnWarning event)
  • 4 = Info message (SSIS: PackageStart, PackageEnd, OnInformation events)
  • 5 = Debug message (SSIS: OnPreValidate, OnPostValidate, OnPreExecute, OnPostExecute events)
  • 6 = Verbose messages (SSIS: Pipeline, Diagnostic and OnProgress events, typically more than 90% of all SSIS log messages)

RetentionClass aligns with MessageClass because it also has values 1 to 6, and by default the RetentionClass will be equal to the MessageClass. However, when you do custom logging in your own stored procedures or executable programs, you can provide a different value, for instance MessageClass 4 (INFO) with RetentionClass 1 if you want the message to be kept longer than the retention period of RetentionClass 4.

In addition there is a RetentionClass 0 for messages that may never be purged/deleted.

There are two ways to prevent log messages from being purged/deleted:
  1. By setting the configuration setting @RetentionPeriodClassN (where N is from 1 to 6) to 0. Then ALL messages for that RetentionClass will never be purged. However, when the configuration setting is changed to a different value, the next time spPurgeLog is executed the messages can still be deleted.
  2. By explicitly adding log entries with RetentionClass 0. Those entries do not depend on configuration settings, and are never deleted, unless you delete them directly from the [log].[Entry] table.




Understanding the stored procedures and user defined function of the solution

Stored procedure [log].[spConfigure]

With this stored procedure you can change a number of configuration settings which are parameters of the stored procedure.

SSIS-S01E01-configure
Configuration: a child can do the laundry

You only have to provide values for the configuration settings (parameters) that you want to change. Other settings will remain unchanged.

You can set the following parameters:

  • @ExecutionGuid: Guid used for logging the configure operation. Can be NULL.
  • @MaxMessageClass: The maximum MessageClass to log. For instance 5 means that log entries with MessageClass=6 (VERBOSE) are not logged. The default value is 5 (DEBUG).
  • @RetentionPeriodClass1: Hours to keep a log entry if the [RetentionClass] column has value 1. Set the value to 0 if the log message may never be deleted. The default value is 0.
  • @RetentionPeriodClass2: The same for [RetentionClass] = 2. The default value is 0.
  • @RetentionPeriodClass3: The same for [RetentionClass] = 3. The default value is 4320 hours or 180 days.
  • @RetentionPeriodClass4: The same for [RetentionClass] = 4. The default value is 2160 hours or 90 days.
  • @RetentionPeriodClass5: The same for [RetentionClass] = 5. The default value is 336 hours or 14 days.
  • @RetentionPeriodClass6: The same for [RetentionClass] = 6. The default value is 168 hours or 7 days. Verbose messages are typically more than 90% of all messages, so a short retention period is recommended, if they are logged at all.
  • @DefaultMasterSourcePattern: default naming convention for master packages. Use the wildcard % for name parts. The default value is ‘%Master%’.
  • @EnableAutoPurgeOnPackageEnd: If enabled, spPurgeLog will be executed every time a SSIS Package ends. This keeps the log clean without additional scheduling effort, but the downside is that spPurgeLog might be executed more often than when it is scheduled daily. This is not harmful, but could have a negative impact on package performance. The default value is 0 (false).
  • @EnableDeduplicatedLogging: If enabled, a staging table [log].[EntryStaging] will be created and duplicate SSIS log messages (with different sources) will be deduplicated as much as possible. In some situations duplicates may still occur, but a lot less than when this setting is not enabled. The default value is 0 (false).
  • @ResetAllToDefaultValues: Use 1 to reset all values to the values of the DEFAULT constraints on the [log].[Config] table. In that case all other parameter values are ignored.

Examples of use:

EXECUTE [log].[spConfigure] @ResetAllToDefaultValues = 1;

EXECUTE [log].[spConfigure]
@EnableDeduplicatedLogging = 1,
@EnableAutoPurgeOnPackageEnd = 1,
@MaxMessageClass = 6,
@RetentionPeriodClass2 = 8640,
@RetentionPeriodClass6 = 48;




Stored procedure [log].[spAddLogEntry]

You can use this stored procedure to implement logging in your own stored procedures or executable programs.

It is advised that your own stored procedures which are called from within SSIS Packages have a parameter @ExecutionGuid (with datatype UNIQUEIDENTIFIER). Provide the value from the package variable System::ExecutionInstanceGUID.
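A minimal skeleton of such a stored procedure could look like this (the procedure name and body are illustrative):

CREATE PROCEDURE [dbo].[spMyEtlStep]
    @ExecutionGuid UNIQUEIDENTIFIER -- value of System::ExecutionInstanceGUID
AS
BEGIN
    DECLARE @Source NVARCHAR(1024) = N'dbo.spMyEtlStep';
    DECLARE @Message NVARCHAR(2048) = N'Starting: ' + @Source;
    EXECUTE [log].[spAddLogEntry] @Message = @Message, @ExecutionGuid = @ExecutionGuid,
        @Source = @Source, @MessageClass = 4;

    -- ... the actual work of the procedure goes here ...

    SET @Message = N'Finished: ' + @Source;
    EXECUTE [log].[spAddLogEntry] @Message = @Message, @ExecutionGuid = @ExecutionGuid,
        @Source = @Source, @MessageClass = 4;
END
GO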

You can set the following parameters:

  • @Event: The log event, can be NULL. If you leave it empty, the event name will be based on the value for @MessageClass.
  • @Source: The log source.
  • @SourceGuid: The log source guid, can be NULL.
  • @ExecutionGuid: A global unique identifier, to be able to distinguish log messages of different jobs running in parallel.
  • @StartTime: Can be NULL, if NULL the current date/time is used.
  • @EndTime: Can be NULL, if NULL the value of @StartTime is used (which can be set to current date/time).
  • @MessageClass: The MessageClass of the message: 1=Fatal Error, 2=Error, 3=Warning, 4=Info, 5=Debug, 6=Verbose
  • @RetentionClass: A numeric indicator for the time after which the log messages can be deleted or purged. Values align with MessageClass, and in addition there is a RetentionClass with value 0 for log messages that may never be deleted, e.g. for auditing purposes. Can be NULL, if NULL the value provided for MessageClass will be used.
  • @Message: The log message.

Examples of use:

-- At the beginning of a stored procedure:
DECLARE @Source NVARCHAR(1024) = 'log.spCreateCrossRefs';
DECLARE @Message NVARCHAR(2048) = 'Starting: ' + @Source;
EXECUTE [log].[spAddLogEntry] @Message = @Message, @ExecutionGuid = @ExecutionGuid,
@Source = @Source, @MessageClass = 4;

-- At the end of a stored procedure:
SELECT @Message = 'Finished: ' + @Source;
EXECUTE [log].[spAddLogEntry] @Message = @Message, @ExecutionGuid = @ExecutionGuid,
@Source = @Source, @MessageClass = 4;

-- To log something specific:
SELECT @Message = 'Purged ' + CAST(@RowCount AS NVARCHAR) + ' log entries with RetentionClass ' + CAST(@RetentionClass AS NVARCHAR) + '.';
EXECUTE [log].[spAddLogEntry] @Message = @Message, @ExecutionGuid = @ExecutionGuid,
@Source = @Source, @MessageClass = 4, @RetentionClass = 0;




Stored procedure [log].[spPurgeLog]

Schedule the execution of this stored procedure daily when you do not want to set EnableAutoPurgeOnPackageEnd to 1 (true).

This stored procedure has a parameter @ExecutionGuid used for logging the purge operation, but you can omit the parameter; a new unique identifier value will then be used.


Example of use:

EXECUTE [log].[spPurgeLog]


Stored procedure [log].[spCreateCrossRefs]

Stored procedure to create synonyms in a database and create or replace the stored procedure [dbo].[sp_ssis_addlogentry] (hereafter collectively called: the cross references).

Use this stored procedure for two purposes:

  1. If you want to do logging in your own stored procedures in the database specified with parameter @DatabaseName
  2. If you want to redirect logging for SSIS Packages that use the database specified with parameter @DatabaseName for logging. In this way those packages will use the logging solution without the need to change the connection string.

You can set the following parameters:

  • @ExecutionGuid: Guid used for logging the configure operation. Can be NULL.
  • @DatabaseName: the name of the database in which the cross references must be created.
  • @DebugMode: if 1, more details will be shown in the output window.
  • @SimulationMode: if 1, the SQL statements are only shown in the output window, but not executed.
  • @DropOnly: if 1, the cross references are only dropped, but not (re)created.



Example of use:

USE [logdb]
GO
EXECUTE [log].[spCreateCrossRefs]
    @DatabaseName = 'MyOtherDb'
  , @DebugMode = 1
  , @SimulationMode = 1
  , @DropOnly = 0
GO


Stored procedure [log].[spSetMasterSourcePattern]

While packages run, the table [log].[Source] is filled with Source (= SSIS Package) and MasterSource information, based on the configuration setting @DefaultMasterSourcePattern.

If the value for @DefaultMasterSourcePattern is correct for all packages that run, you do not have to change anything.

However when the Master/child Source relations need to be changed, you can use stored procedure [log].[spSetMasterSourcePattern] to configure master- and child package relationships.

The definition of a master package is a package that executes another SSIS Package, the child package. When going more than one level deep, only the ultimate parent package (the one that starts everything else) has to be defined, the “package tree” is not relevant. For instance Package A executes Package B, and Package B executes Package C. In this situation A should be defined as master package for C.
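Translated into a call (the package names are illustrative):

EXECUTE [log].[spSetMasterSourcePattern]
  @Source = 'PackageC', @MasterSourcePattern = 'PackageA'
GO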

You can set the following parameters:

  • @ExecutionGuid: Guid used for logging the configure operation. Can be NULL.
  • @Source: The log source or log source pattern. For a pattern use a % wildcard, e.g. “Load%” for all sources of which the name starts with “Load”.
  • @SourceGuid: Source Guid, if NULL Source will be retrieved with the value of @Source only.
  • @MasterSourcePattern: The master source to use for this @Source/@SourceGuid.
  • @MasterSourceGuid: Master Source Guid; if NULL the connection to the Master Source will be made using only the @MasterSourcePattern. You only need to configure @MasterSourceGuid if you have multiple master packages in different projects with the same name (e.g. all called “Master.dtsx”) that can run at the same time. In that situation you have to link child- to masterpackages using both @MasterSourcePattern and @MasterSourceGuid.

The easiest way to configure your master- and childpackages is to use spConfigure to set @DefaultMasterSourcePattern so that it matches your masterpackage name.
Then run your masterpackage once. In this way the table [log].[Source] is already filled with all executed (master- and child)packages, so you do not have to do that yourself.
Then use the examples below to fine-tune, if necessary.
Examples of use:

--\
---) Set MasterSourcePattern for all sources (use only after SSIS Packages have run for the first time).
--/
EXECUTE [log].[spSetMasterSourcePattern]
  @Source = '%', @MasterSourcePattern = 'Master%'
GO

-- Then undo for the master sources themselves
EXECUTE [log].[spSetMasterSourcePattern]
  @Source = 'Master%', @MasterSourcePattern = NULL
GO

--\
---) Set MasterSourcePattern for one source.
--/
-- Without SourceGuid (use only after SSIS Packages have run for the first time).
EXECUTE [log].[spSetMasterSourcePattern]
  @Source = 'LoadDimCustomer', @MasterSourcePattern = 'DWHMaster'
GO

-- With SourceGuid and MasterSourceGuid (can be used prior to running any packages).
EXECUTE [log].[spSetMasterSourcePattern]
  @Source = 'LoadDimCustomer',
  @SourceGuid = '7480D07D-5099-4B76-9404-17C2AFD0E603', -- PackageID property of SSIS Package.
  @MasterSourcePattern = 'Master',
  @MasterSourceGuid = '0F429F36-784D-4F05-8D5A-9374609A32B9' -- PackageID property of your master SSIS Package.
GO




Stored procedures [log].[spOnExecutionStart] and [log].[spOnExecutionEnd]

Used internally by the logging solution.



User defined function [log].[fnLogEntry]

User defined function to select log entries.

This function has the following parameters:

  • @Source: Packagename to select log entries for (including log entries of subpackages). Can be NULL. If NULL then the last execution of any Package will be selected.
  • @LastXExecutions: A number to indicate for how many executions you want to see the log. The default value is 1.
  • @MaxMessageClass: Filter on MessageClass, for instance use 2 to show only log messages with a MessageClass less than or equal to 2. The default value is 6.
  • @IncludeRetention0: If 1, also messages with RetentionClass 0 (never delete) will be shown, regardless of their MessageClass.

Example of use:

SELECT * FROM [log].[fnLogEntry] ('Master', 1, 4, 1)
GO




How to implement logging in own executables

Logging in own executables (for instance C# Console Applications) is fairly easy to set up.
Keep the following in mind (a T-SQL sketch of the calls follows the list):

  • Generate a new guid every time the executable runs, and use it as value for @ExecutionGuid.
  • Assign the executable a persistent SourceGuid, and use it as value for @SourceGuid.
  • Use the executable name as value for @Source. While the executable runs, you can use lower level names (e.g. class and/or method names) for @Source, but still use the executable @SourceGuid, and not a different one.
  • When the executable starts, use the stored procedure spAddLogEntry to add an entry with @Event = 'ExecutionStart'. This triggers built-in facilities of the logging solution.
  • Just before the executable stops, use the stored procedure spAddLogEntry to add an entry with @Event = 'ExecutionEnd'.
  • When the application is called from within a SSIS Package, use spSetMasterSourcePattern to create a master/child relationship.
  • If you are interested in a C# code example just let me know, and I will dedicate a future blog post to it.
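Below is a minimal T-SQL sketch of the calls an executable would send. The executable name and the SourceGuid are illustrative; the SourceGuid must be generated once and then stay the same for every run of the executable:

DECLARE @ExecutionGuid UNIQUEIDENTIFIER = NEWID(); -- new guid for every run
DECLARE @SourceGuid UNIQUEIDENTIFIER = '9D3A2E1B-0000-4000-8000-000000000001'; -- persistent per executable
DECLARE @Source NVARCHAR(1024) = N'MyConsoleApp';

EXECUTE [log].[spAddLogEntry] @Event = 'ExecutionStart', @Message = N'Starting: MyConsoleApp',
  @ExecutionGuid = @ExecutionGuid, @Source = @Source, @SourceGuid = @SourceGuid, @MessageClass = 4;

-- ... the work of the executable, possibly with more spAddLogEntry calls ...

EXECUTE [log].[spAddLogEntry] @Event = 'ExecutionEnd', @Message = N'Finished: MyConsoleApp',
  @ExecutionGuid = @ExecutionGuid, @Source = @Source, @SourceGuid = @SourceGuid, @MessageClass = 4;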


Download the logging solution script here.

How about updates?

I intend to update the solution when needed.
If you find bugs or have interesting additions, I would be thankful if you share them with me, so I can improve the logging solution.

(c) 2016 hansmichiels.com – Do not steal the contents – spread the link instead – thank you.