A Plug and Play Logging Solution (SSIS Series)

Introduction

Ever had to investigate an error that occurred during the execution of an SSIS package? Then you know that logging is the key to finding out what happened.

And not only logging in SSIS itself, but also in stored procedures, C# script tasks, and C# console applications that you might have written and that are executed from within the package.
SSIS-S01E01-log-story
So the log tells you a story (I once heard “it reads like a thrilling boys’ book”; I liked that expression, GJ).

Also, you do not want to reinvent the wheel for every SSIS project, but want to use a standard solution: Plug and Play instead of spending sprints on “utilities”.

Feature list

The features in short (as compared with what SSIS offers out of the box):

  1. It’s a Plug And Play SSIS Logging Solution in most cases: no additional programming in SSIS Packages is required*), so it can easily be used for all existing packages! Only when you have multiple master packages with the same name do you have to do a little setup work.
  2. Log messages are labeled with a message class (a sort of severity of the message) for easy filtering.
  3. Changing the maximum message class to log is possible without redeploying any SSIS Packages (for instance, enable VERBOSE logging for troubleshooting, then set it back to DEBUG or INFO).
  4. Log messages are labeled with a retention class for easy purging. There is also an “audit” retention class for messages that may never be purged/deleted.
  5. Purging can be automated, so no scheduling is needed.
  6. Logging is Integrated/centralized for SSIS Packages, stored procedures and other home made software.
  7. Deduplicated logging can be enabled; when it is, identical SSIS log messages (e.g. first from the OLE DB Destination, then from the Data Flow Task, and then from the SSIS Package) are, as much as possible, logged only once.
  8. Master/child relations for SSIS packages can be configured, so that log entries from master and child packages can easily be retrieved together in a single timeline. Out of the box this works using a naming convention (the master package must have “master” in its name), but it can be fine-tuned for every package that runs.
  9. Easy configuration by executing a stored procedure, to set retention periods, maximum log class, automatic purging on/off, deduplicated logging on/off, and naming convention for the master package.
  10. Storage of start and end times in both the local time zone and UTC.

*) As far as the logging that SSIS does is concerned. If you want logging in your own stored procedures and executables, you need to implement it there first.

Quick Start Guide

  1. Download the installation solution script here.
  2. Execute the entire script to install on a new database [logdb]. To install on an existing database, skip the part of the script where the database is created and execute the rest against the existing database. All objects the solution creates are placed in database schema [log] (except for stored procedure [dbo].[sp_ssis_addlogentry]; this name is required by SSIS).
  3. Enable logging in your SSIS Packages if you have not already done so. Use a SQL Server log provider and select all events for all components of the package (more details below).
  4. Configure the connection managers of your SSIS Packages to use the database in which you installed the solution. Alternatively you can execute the stored procedure [log].[spCreateCrossRefs] to redirect logging for existing packages to the logging solution without any changes in your SSIS packages. See the User Manual for more details.

Directly to User Manual

Steps to follow when you want to enable logging in your SSIS Package:

SSIS-S01E01-enable_logging

User Manual

Table Of Contents

Introduction

The “hansmichiels.com Plug And Play Logging Solution” provides Plug and Play logging for SSIS Packages, stored procedures and other homemade software, like command line applications or web applications.
After installing the solution, as explained in the Quick Start Guide, you can manage the solution using a few stored procedures, or accept the default settings and run it out of the box.

Understanding MessageClass and RetentionClass

MessageClass can have one of these 6 values:

  • 1 = Fatal Error (SSIS OnError event)
  • 2 = Non-fatal error (reserved for user defined logging for non-fatal errors, not used by SSIS packages)
  • 3 = Warning (SSIS OnWarning event)
  • 4 = Info message (SSIS: PackageStart, PackageEnd, OnInformation events)
  • 5 = Debug message (SSIS: OnPreValidate, OnPostValidate, OnPreExecute, OnPostExecute events)
  • 6 = Verbose messages (SSIS: Pipeline, Diagnostic and OnProgress events, typically more than 90% of all SSIS log messages)

RetentionClass aligns with MessageClass: it also has values 1 to 6, and by default the RetentionClass is equal to the MessageClass. However, when you do custom logging in your own stored procedures or executable programs, you can provide a different value, for instance MessageClass 4 (INFO) combined with RetentionClass 1 if you want the message to be kept longer than the retention period for RetentionClass 4.

In addition there is a RetentionClass 0 for messages that may never be purged/deleted.

There are two ways to prevent log messages from being purged/deleted:
  1. By setting the configuration setting @RetentionPeriodClassN (where N is from 1 to 6) to 0. Then ALL messages for that RetentionClass will never be purged. However, once the configuration setting is changed to a different value, the messages can still be deleted the next time spPurgeLog is executed.
  2. By explicitly adding log entries with RetentionClass 0. Those entries do not depend on configuration settings, and are never deleted, unless you delete them directly from the [log].[Entry] table.
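Both approaches can be sketched as follows (a sketch using the parameter names from this manual; the source name and message are hypothetical):

```sql
-- Way 1: set the retention period for RetentionClass 1 to 0,
-- so ALL RetentionClass 1 messages are kept until the setting is changed back.
EXECUTE [log].[spConfigure] @RetentionPeriodClass1 = 0;

-- Way 2: add an individual entry with RetentionClass 0,
-- which is never purged, regardless of the configuration settings.
DECLARE @ExecutionGuid UNIQUEIDENTIFIER = NEWID();
EXECUTE [log].[spAddLogEntry]
    @Message = 'Yearly figures approved.'  -- hypothetical audit message
  , @ExecutionGuid = @ExecutionGuid
  , @Source = 'MyStoredProcedure'          -- hypothetical source name
  , @MessageClass = 4                      -- INFO
  , @RetentionClass = 0;                   -- audit: never purge
```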



Back to Table of Contents

Understanding the stored procedures and user defined function of the solution

Stored procedure [log].[spConfigure]

With this stored procedure you can change a number of configuration settings which are parameters of the stored procedure.

SSIS-S01E01-configure
Configuration: a child can do the laundry

You only have to provide values for the configuration settings (parameters) that you want to change. Other settings will remain unchanged.

You can set the following parameters:

  • @ExecutionGuid: Guid used for logging the configure operation. Can be NULL.
  • @MaxMessageClass: The maximum MessageClass to log. For instance, 5 means that log entries with MessageClass=6 (VERBOSE) are not logged. The default value is 5 (DEBUG).
  • @RetentionPeriodClass1: Hours to keep a log entry if the [RetentionClass] column has value 1. Set the value to 0 if the log message may never be deleted. The default value is 0.
  • @RetentionPeriodClass2: The same for [RetentionClass] = 2. The default value is 0.
  • @RetentionPeriodClass3: The same for [RetentionClass] = 3. The default value is 4320 hours (180 days).
  • @RetentionPeriodClass4: The same for [RetentionClass] = 4. The default value is 2160 hours (90 days).
  • @RetentionPeriodClass5: The same for [RetentionClass] = 5. The default value is 336 hours (14 days).
  • @RetentionPeriodClass6: The same for [RetentionClass] = 6. The default value is 168 hours (7 days). Verbose messages typically make up more than 90% of all messages, so a short retention period is recommended, if they are logged at all.
  • @DefaultMasterSourcePattern: Default naming convention for master packages. Use the wildcard % for name parts. The default value is ‘%Master%’.
  • @EnableAutoPurgeOnPackageEnd: If enabled, spPurgeLog will be executed every time an SSIS Package ends. This keeps the log clean without any additional scheduling effort, but the downside is that spPurgeLog might be executed more often than when it is scheduled daily. This is not harmful, but could have a negative impact on package performance. The default value is 0 (false).
  • @EnableDeduplicatedLogging: If enabled, a staging table [log].[EntryStaging] will be created and duplicate SSIS log messages (with different sources) will be deduplicated as much as possible. In some situations duplicates may still occur, but far fewer than when this setting is not enabled. The default value is 0 (false).
  • @ResetAllToDefaultValues: Use 1 to reset all values to the values of the DEFAULT constraints on the [log].[Config] table. In that case all other parameter values are ignored.

Examples of use:

EXECUTE [log].[spConfigure] @ResetAllToDefaultValues = 1;

EXECUTE [log].[spConfigure]
@EnableDeduplicatedLogging = 1,
@EnableAutoPurgeOnPackageEnd = 1,
@MaxMessageClass = 6,
@RetentionPeriodClass2 = 8640,
@RetentionPeriodClass6 = 48;



Back to Table of Contents

Stored procedure [log].[spAddLogEntry]

You can use this stored procedure to implement logging in your own stored procedures or executable programs.

It is advised that your own stored procedures that are called from within SSIS Packages have a parameter @ExecutionGuid (with datatype UNIQUEIDENTIFIER). Provide the value of the package variable System::ExecutionInstanceGUID.

You can set the following parameters:

  • @Event: The log event, can be NULL. If you leave it empty, the event name will be based on the value for @MessageClass.
  • @Source: The log source.
  • @SourceGuid: The log source guid, can be NULL.
  • @ExecutionGuid: A globally unique identifier, used to distinguish log messages of different jobs running in parallel.
  • @StartTime: Can be NULL, if NULL the current date/time is used.
  • @EndTime: Can be NULL, if NULL the value of @StartTime is used (which can be set to current date/time).
  • @MessageClass: The MessageClass of the message: 1=Fatal Error, 2=Error, 3=Warning, 4=Info, 5=Debug, 6=Verbose.
  • @RetentionClass: A numeric indicator for the time after which the log messages can be deleted or purged. Values align with MessageClass, and in addition there is a RetentionClass with value 0 for log messages that may never be deleted, e.g. for auditing purposes. Can be NULL, if NULL the value provided for MessageClass will be used.
  • @Message: The log message.

Examples of use:

-- At the beginning of a stored procedure:
DECLARE @Source NVARCHAR(1024) = 'log.spCreateCrossRefs';
DECLARE @Message NVARCHAR(2048) = 'Starting: ' + @Source;
EXECUTE [log].[spAddLogEntry] @Message = @Message, @ExecutionGuid = @ExecutionGuid,
@Source = @Source, @MessageClass = 4;

-- At the end of a stored procedure:
SELECT @Message = 'Finished: ' + @Source;
EXECUTE [log].[spAddLogEntry] @Message = @Message, @ExecutionGuid = @ExecutionGuid,
@Source = @Source, @MessageClass = 4;

-- To log something specific:
SELECT @Message = 'Purged ' + CAST(@RowCount AS NVARCHAR) + ' log entries with RetentionClass ' + CAST(@RetentionClass AS NVARCHAR) + '.';
EXECUTE [log].[spAddLogEntry] @Message = @Message, @ExecutionGuid = @ExecutionGuid,
@Source = @Source, @MessageClass = 4, @RetentionClass = 0;



Back to Table of Contents

Stored procedure [log].[spPurgeLog]

Schedule the execution of this stored procedure daily if you do not want to set EnableAutoPurgeOnPackageEnd to 1 (true).

This stored procedure has a parameter @ExecutionGuid used for logging the purge operation, but you can omit the parameter; a new unique identifier value will then be used.


Example of use:



EXECUTE [log].[spPurgeLog]

Back to Table of Contents

Stored procedure [log].[spCreateCrossRefs]

Stored procedure to create synonyms in a database and to create or replace the stored procedure [dbo].[sp_ssis_addlogentry] (hereafter collectively called “the cross references”).

Use this stored procedure for two purposes:

  1. If you want to do logging in your own stored procedures in the database specified with parameter @DatabaseName.
  2. If you want to redirect logging for SSIS Packages that use the database specified with parameter @DatabaseName for logging. In this way those packages use the logging solution without the need to change the connection string.

You can set the following parameters:

  • @ExecutionGuid: Guid used for logging the configure operation. Can be NULL.
  • @DatabaseName: the name of the database in which the cross references must be created.
  • @DebugMode: if 1, more details are shown in the output window.
  • @SimulationMode: if 1, the SQL statements are only shown in the output window, not executed.
  • @DropOnly: if 1, the cross references are only dropped, not (re)created.



Example of use:

USE [logdb]
GO
EXECUTE [log].[spCreateCrossRefs]
    @DatabaseName = 'MyOtherDb'
  , @DebugMode = 1
  , @SimulationMode = 1
  , @DropOnly = 0
GO

Back to Table of Contents

Stored procedure [log].[spSetMasterSourcePattern]

While packages run, the table [log].[Source] is filled with Source (= SSIS Package) and MasterSource information, based on the configuration setting @DefaultMasterSourcePattern.

If the value for @DefaultMasterSourcePattern is correct for all packages that run, you do not have to change anything.

However, when the master/child Source relations need to be changed, you can use stored procedure [log].[spSetMasterSourcePattern] to configure master and child package relationships.

The definition of a master package is a package that executes another SSIS Package, the child package. When going more than one level deep, only the ultimate parent package (the one that starts everything else) has to be defined; the intermediate “package tree” is not relevant. For instance, Package A executes Package B, and Package B executes Package C. In this situation A should be defined as the master package for C.

You can set the following parameters:

  • @ExecutionGuid: Guid used for logging the configure operation. Can be NULL.
  • @Source: The log source or log source pattern. For a pattern use a % wildcard, e.g. “Load%” for all sources of which the name starts with “Load”.
  • @SourceGuid: Source Guid, if NULL Source will be retrieved with the value of @Source only.
  • @MasterSourcePattern: The master source to use for this @Source/@SourceGuid.
  • @MasterSourceGuid: Master Source Guid; if NULL, the connection to the Master Source will be made using only the @MasterSourcePattern. You only need to configure @MasterSourceGuid if you have multiple master packages in different projects with the same name (e.g. all called “Master.dtsx”) that can run at the same time. In that situation you have to link child to master packages using both @MasterSourcePattern and @MasterSourceGuid.

The easiest way to configure your master and child packages is to use spConfigure to set @DefaultMasterSourcePattern so that it matches your master package name.
Then run your master package once. In this way the table [log].[Source] is already filled with all executed master and child packages, so you do not have to do that yourself.
Then use the examples below to fine-tune, if necessary.

Examples of use:

--\
---) Set MasterSourcePattern for all sources (use only after SSIS Packages have run for the first time).
--/
EXECUTE [log].[spSetMasterSourcePattern]
  @Source = '%', @MasterSourcePattern = 'Master%'
GO

-- Then undo for the master sources themselves
EXECUTE [log].[spSetMasterSourcePattern]
  @Source = 'Master%', @MasterSourcePattern = NULL
GO

--\
---) Set MasterSourcePattern for one source.
--/
-- Without SourceGuid (use only after SSIS Packages have run for the first time).
EXECUTE [log].[spSetMasterSourcePattern]
  @Source = 'LoadDimCustomer', @MasterSourcePattern = 'DWHMaster'
GO

-- With SourceGuid and MasterSourceGuid (can be used prior to running any packages).
EXECUTE [log].[spSetMasterSourcePattern]
  @Source = 'LoadDimCustomer',
  @SourceGuid = '7480D07D-5099-4B76-9404-17C2AFD0E603', -- PackageID property of SSIS Package.
  @MasterSourcePattern = 'Master',
  @MasterSourceGuid = '0F429F36-784D-4F05-8D5A-9374609A32B9' -- PackageID property of your master SSIS Package.
GO



Back to Table of Contents

Stored procedures [log].[spOnExecutionStart] and [log].[spOnExecutionEnd]

Used internally by the logging solution.


Back to Table of Contents

User defined function [log].[fnLogEntry]

User defined function to select log entries.

This function has the following parameters:

  • @Source: Package name to select log entries for (including log entries of subpackages). Can be NULL. If NULL, the last execution of any package will be selected.
  • @LastXExecutions: A number indicating for how many executions you want to see the log. The default value is 1.
  • @MaxMessageClass: Filter on MessageClass; for instance, use 2 to show only log messages with a MessageClass less than or equal to 2. The default value is 6.
  • @IncludeRetention0: If 1, messages with RetentionClass 0 (never delete) will also be shown, regardless of their MessageClass.

Example of use:

SELECT * FROM [log].[fnLogEntry] ('Master', 1, 4, 1)
GO



Back to Table of Contents

How to implement logging in own executables

Logging in own executables (for instance C# Console Applications) is fairly easy to set up.
Keep the following in mind:

  • Generate a new guid every time the executable runs, and use it as the value for @ExecutionGuid.
  • Assign the executable a persistent SourceGuid, and use it as the value for @SourceGuid.
  • Use the executable name as the value for @Source. While the executable runs, you can use lower-level names (e.g. class and/or method names) for @Source, but still use the executable’s @SourceGuid, not a different one.
  • When the executable starts, use the stored procedure spAddLogEntry to add an entry with @Event = 'ExecutionStart'. This triggers built-in facilities of the logging solution.
  • Just before the executable stops, use the stored procedure spAddLogEntry to add an entry with @Event = 'ExecutionEnd'.
  • When the application is called from within an SSIS Package, use spSetMasterSourcePattern to create a master/child relationship.
  • If you are interested in a C# code example just let me know, and I will dedicate a future blog post to it.
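In T-SQL terms, the sequence of calls such an executable issues can be sketched as follows (the source name and guid value are hypothetical; see the spAddLogEntry parameters above):

```sql
-- Hypothetical values; an executable would pass these over its database connection.
DECLARE @ExecutionGuid UNIQUEIDENTIFIER = NEWID();  -- a new guid for every run
DECLARE @SourceGuid UNIQUEIDENTIFIER = '6F9619FF-8B86-D011-B42D-00C04FC964FF'; -- persistent, per executable
DECLARE @Source NVARCHAR(1024) = 'MyConsoleApp.exe';

-- When the executable starts:
EXECUTE [log].[spAddLogEntry] @Event = 'ExecutionStart', @Message = 'Execution started.',
  @ExecutionGuid = @ExecutionGuid, @Source = @Source, @SourceGuid = @SourceGuid, @MessageClass = 4;

-- ... regular log entries while the executable runs ...

-- Just before the executable stops:
EXECUTE [log].[spAddLogEntry] @Event = 'ExecutionEnd', @Message = 'Execution finished.',
  @ExecutionGuid = @ExecutionGuid, @Source = @Source, @SourceGuid = @SourceGuid, @MessageClass = 4;
```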


Download the logging solution script here.

How about updates?

I intend to update the solution when needed.
If you find bugs or have interesting additions, I would be thankful if you can share those with me so I can improve the logging solution.

(c) 2016 hansmichiels.com – Do not steal the contents – spread the link instead – thank you.

Isolate cross database “databasename” dependencies using synonyms and schemas (Data Vault Series)

Intended audience

This post could be interesting for professionals working on data warehouse projects using the Microsoft SQL Server database engine. It assumes that you have at least a basic understanding of Data Vault principles.

Introduction

Those who have read the book “Building a Scalable Data Warehouse with Data Vault 2.0” by Dan Linstedt and Michael Olschimke will know that database schemas are used in the examples in the book for the “raw” and “biz” (business) layers of the Data Vault.

If you do not know this book, buy it; if you are the intended audience you should read it (no, I have no commercial interest in saying this), so check it out on Amazon.

What is done in the book is putting tables or views in the database schemas “raw” and “biz”, respectively, making it transparent whether a satellite is virtual (and at least partially computed) or persisted in a physical table.

Putting these data layers in different schemas is a great starting point for my interdatabase connectivity concept.

What data layers can we have

  • Staging area
  • Raw Data Vault
  • Business (Data) Vault
  • Metrics Vault
  • Error Vault
  • Information Marts

A rule of thumb is that each layer resides in its own database, but multiple layers can also share one database. What should be avoided is spreading one layer over multiple databases. This does not apply to different Information Marts though, which should each be put in their own database.

Whether to put layers together can depend, among other things, on backup requirements (the biz layer could for instance be restored from a database backup OR by doing some reprocessing from the Raw Data Vault, which could lead to the decision to back up the database less frequently).

A typical set up could be:

  • Staging Area – separate database, “stg” database schema.
  • Raw Data Vault – together with Business Vault, “raw” database schema.
  • Business Vault – together with Raw Vault, “biz” database schema.

To keep the example simple I will leave the other data layers out for now.

What is the problem?

To be short: SQL statements copying data to a different database.

When using data flows in SSIS, this problem does not occur.

Why is it bad?

Database names are prefixed in FROM clauses, causing a dependency on the database name.

So what, I still don’t see the problem

Scenarios where it can become a hassle to manage this include:

  • Running different environments on the same database instance (e.g. development and test), distinguishing the environments by database name. When deploying code or copying databases, all interdatabase dependencies must be changed.
  • Running different environments for multiple clients in a BIAAS (Business Intelligence As A Service) solution on the same database instance.

So, let’s divide data layers using database schemas…

This is an example of the initial set up.

Dv01E01_dbschema1
Please note that you should keep all objects (tables, views, stored procedures, UDFs) from one data layer together in one database.

Now, to pull data from the “stg” to the “raw” layer, SQL statements could be like (simplified example):

Insert.sql
INSERT INTO [raw].[TableName]
SELECT ..
FROM [EDW_STG_TEST].[stg].[TableName]

But, as you can see, the database name is prefixed to the Staging table. And this is a problem (at least it makes managing the solution more complex).

How can we solve this?

Managing interdatabase dependencies using synonyms

First, a little explanation from MSDN:

A synonym is a database object that serves the following purposes:

  • Provides an alternative name for another database object, referred to as the base object, that can exist on a local or remote server.
  • Provides a layer of abstraction that protects a client application from changes made to the name or location of the base object.

Read more on MSDN.

What if we made a synonym for the staging table in the “EDW_DV_TEST” database, also in a “stg” schema in that database?
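Creating such a synonym is a single statement, executed in the EDW_DV_TEST database (a sketch, assuming the table name from the example above):

```sql
-- In the EDW_DV_TEST database: a synonym in the [stg] schema
-- that points to the actual table in the staging database.
CREATE SYNONYM [stg].[TableName]
  FOR [EDW_STG_TEST].[stg].[TableName];
```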

Then we could omit the database name from the SQL statement:

Insert.sql
INSERT INTO [raw].[TableName]
SELECT ..
FROM [stg].[TableName]

That is what we want!

In a picture:

Dv01E01_dbschema2

The good news is that creating the synonyms can be automated with a script, so a change of the Staging database name only requires running a SQL script in the EDW_DV_TEST database to drop and recreate the synonyms for the staging database objects!

Another advantage is that you could merge different layers into one database or split them into different databases. For example, all code will keep working if you move all Staging tables to the EDW_DV_TEST database, thereby replacing the synonyms already present in the stg schema with actual tables. All you would have to do is change the connection string for the processes that fill the staging tables.

In a similar way, you could decide to separate the raw and biz layer into different databases.

Script to create or change the synonyms

DV-S01E01-Create_synonyms_for_objects_in_a_different_database.sql

--\
---) Author: Hans Michiels
---) Creates synonyms for tables, views, user defined functions
---) and stored procedures in a different database.
---) All synonyms in the specified schema are dropped
---) prior to (re)creation.
--/
/*
(c) Copyright 2016 - hansmichiels.com
 
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
 
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
 
You should have received a copy of the GNU General Public License
along with this program. If not, see http://www.gnu.org/licenses/.
*/

USE [AdventureWorks2014] -- Replace with database you want to create the synonyms in.
GO

--\--------------------------------------------------------------------------
---) DECLARATIONS
--/--------------------------------------------------------------------------

DECLARE @SourceDatabase SYSNAME
DECLARE @SourceSchema SYSNAME
DECLARE @SynonymsSchema SYSNAME

DECLARE @SynonymsPrefix NVARCHAR(128)
DECLARE @SynonymName SYSNAME

DECLARE @CurSchemaName SYSNAME
DECLARE @CurObjectName SYSNAME
DECLARE @CurObjectType CHAR(2)

DECLARE @nsql NVARCHAR(4000)
DECLARE @DebugPrint NVARCHAR(500)
DECLARE @SimulationMode CHAR(1)

--\--------------------------------------------------------------------------
---) CONFIGURATION: adjust the variable values to meet your needs.
--/--------------------------------------------------------------------------

SELECT @SourceDatabase = 'AW2014_Staging' -- Database in which objects exist
                                          -- for which synonyms must be created.
     , @SourceSchema = 'stg' -- Schema name of source objects.
     , @SynonymsSchema = 'stg' -- Schema name to put synonyms in.
     , @SynonymsPrefix = '' -- Optional Synonyms name prefix.
     , @SimulationMode = 'Y' -- Use Y if you only want the SQL statement in the
                               -- output window without it being executed.

--\--------------------------------------------------------------------------
---) INITIALIZATION
--/--------------------------------------------------------------------------

IF NOT EXISTS(SELECT 1 FROM sys.schemas WHERE name = @SynonymsSchema)
BEGIN
    SELECT @nsql ='CREATE SCHEMA ' + QUOTENAME(@SynonymsSchema) + ' AUTHORIZATION [dbo];';
    IF @SimulationMode = 'Y' PRINT '----- SIMULATION MODE: SCRIPT IS NOT EXECUTED ------ ';
    PRINT @nsql;
    IF @SimulationMode != 'Y' EXECUTE sp_executesql @nsql;
END

--\--------------------------------------------------------------------------
---) MAIN PART I: Drop all existing synonyms in the specified schema.
--/--------------------------------------------------------------------------

DECLARE ExistingObjCursor CURSOR LOCAL STATIC FOR
    SELECT syn.name AS [SynonymName]
    FROM sys.synonyms syn
    JOIN sys.schemas sch
      ON syn.schema_id = sch.schema_id
    WHERE sch.name = @SynonymsSchema
 
IF @SimulationMode = 'Y' PRINT '----- SIMULATION MODE: SCRIPT IS NOT EXECUTED ------ ';

SET @DebugPrint = '-- Dropping all existing synonyms in schema ' + QUOTENAME(@SynonymsSchema)
PRINT REPLICATE('-', LEN(@DebugPrint))
PRINT @DebugPrint
PRINT REPLICATE('-', LEN(@DebugPrint))

OPEN ExistingObjCursor
FETCH NEXT FROM ExistingObjCursor INTO @SynonymName

WHILE @@FETCH_STATUS = 0
BEGIN

    SET @nsql = 'IF OBJECT_ID(''' + QUOTENAME(@SynonymsSchema) + '.' + QUOTENAME(@SynonymName) + ''', ''SN'') IS NOT NULL'
          + ' DROP SYNONYM ' + QUOTENAME(@SynonymsSchema) + '.' + QUOTENAME(@SynonymName) + ';'

    PRINT @nsql;
    IF @SimulationMode != 'Y' EXECUTE sp_executesql @nsql;

    FETCH NEXT FROM ExistingObjCursor INTO @SynonymName
END

CLOSE ExistingObjCursor
DEALLOCATE ExistingObjCursor

--\--------------------------------------------------------------------------
---) MAIN PART II: Recreate all synonyms in the specified schema.
--/--------------------------------------------------------------------------

IF OBJECT_ID('tempdb..#SourceObjects') IS NOT NULL BEGIN
    DROP TABLE #SourceObjects
END

CREATE TABLE #SourceObjects
    ( [SchemaName] SYSNAME NOT NULL
    , [ObjectName] SYSNAME NOT NULL
    , [ObjectType] CHAR(2) NOT NULL
    )

-- Insert USER objects
SELECT @nsql =
'INSERT INTO #SourceObjects([SchemaName], [ObjectName], [ObjectType])'
+ ' SELECT sch.name AS [SchemaName]'
+ ' , obj.name AS [ObjectName]'
+ ' , obj.type AS [ObjectType]'
+ ' FROM [' + @SourceDatabase + '].sys.objects obj'
+ ' JOIN [' + @SourceDatabase + '].sys.schemas sch'
+ ' ON sch.schema_id = obj.schema_id '
+ ' WHERE sch.name = ''' + @SourceSchema + ''''
+ ' AND obj.type IN ( '
+ ' ''FN'' /* SQL_SCALAR_FUNCTION */ '
+ ' , ''P '' /* SQL_STORED_PROCEDURE */ '
+ ' , ''IF'' /* SQL_INLINE_TABLE_VALUED_FUNCTION */ '
+ ' , ''TF'' /* SQL_TABLE_VALUED_FUNCTION */ '
+ ' , ''U '' /* USER_TABLE */ '
+ ' , ''V '' /* VIEW */ '
+ ' )'
+ ' ORDER BY obj.type, sch.name, obj.name'

PRINT '/*'
EXECUTE sp_executesql @nsql;
PRINT '*/'
DECLARE ObjectCursor CURSOR LOCAL STATIC FOR
    SELECT [SchemaName], [ObjectName], [ObjectType]
    FROM #SourceObjects
 
IF @SimulationMode = 'Y' PRINT '----- SIMULATION MODE: SCRIPT IS NOT EXECUTED ------ ';

SET @DebugPrint = '-- Recreate synonyms in schema ' + QUOTENAME(@SynonymsSchema)
PRINT REPLICATE('-', LEN(@DebugPrint))
PRINT @DebugPrint
PRINT REPLICATE('-', LEN(@DebugPrint))

OPEN ObjectCursor
FETCH NEXT FROM ObjectCursor INTO @CurSchemaName, @CurObjectName, @CurObjectType

WHILE @@FETCH_STATUS = 0
BEGIN

    SELECT @SynonymName = @SynonymsPrefix + @CurObjectName
    
    SET @nsql = 'CREATE SYNONYM ' + QUOTENAME(@SynonymsSchema) + '.' + QUOTENAME(@SynonymName)
              + ' FOR ' + QUOTENAME(@SourceDatabase) + '.' + QUOTENAME(@CurSchemaName) + '.' + QUOTENAME(@CurObjectName) + ';'

    PRINT @nsql;
    IF @SimulationMode != 'Y' EXECUTE sp_executesql @nsql;

    FETCH NEXT FROM ObjectCursor INTO @CurSchemaName, @CurObjectName, @CurObjectType
END

CLOSE ObjectCursor
DEALLOCATE ObjectCursor
GO

--\--------------------------------------------------------------------------
---) CLEAN UP
--/--------------------------------------------------------------------------
IF OBJECT_ID('tempdb..#SourceObjects') IS NOT NULL BEGIN
    DROP TABLE #SourceObjects
END

Download the script here.

Conclusion / wrap up

Database synonyms can be used to isolate interdatabase dependencies into objects that can be easily updated with a changed database name.

This makes managing the Data Vault solution easier.

This concept can also be of use in non-Data Vault database environments.

(c) 2016 hansmichiels.com – Do not steal the contents – spread the link instead – thank you.

Top 10 Design Mistakes seen in OLTP production systems (SQL Server Series)

Introduction


If you have ever had to source your data warehouse from OnLine Transaction Processing (OLTP) databases, I suppose this top 10 will be a “joy” of recognition for you.

Top 10 Design Mistakes seen in OLTP production systems

1. Not having referential integrity enforced

Without actual foreign key constraints in the database, you will get orphaned records sooner or later (e.g. orderline rows for already deleted orders). This is bad for data quality and can hide bugs in the front-end software maintaining the data.
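For example, with hypothetical order tables, a constraint like the following makes orphaned orderline rows impossible:

```sql
-- Hypothetical table and column names: with this constraint in place,
-- an orderline row can no longer refer to a deleted order.
ALTER TABLE [dbo].[OrderLine]
  ADD CONSTRAINT [FK_OrderLine_Order]
  FOREIGN KEY ([OrderId]) REFERENCES [dbo].[Order] ([OrderId]);
```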

Back to top

2. Foreign key columns having a totally different name than the primary key they refer to

Especially when point 1 applies, it is difficult to tell that a foreign key column “account_id” refers to the “customer_id” column in the customer table. When reverse engineering an OLTP design, this does not really help you understand it.

Back to top

3. Foreign key columns missing “Id”, “Key” or “Guid” in the name

I personally dislike a column name “customer” when the column has an integer or globally unique identifier datatype. While you can conclude from the datatype that the column cannot contain the name itself, there are many situations where you only see the column name, and then it can be confusing.

Back to top

4. Foreign key columns meaning more than one thing

More than once, in standard software that is highly adjustable or configurable, the data model is “flexible”, e.g. link tables where object_id can refer to a customer, an employee, or yet something else. Apart from the problem this gives in enforcing referential integrity without a physical constraint, it makes it more difficult to understand the model and/or get data out.


5. Accessing the datamodel through low-performance all-in-one views, protecting base tables from direct access

They exist! Views with 700+ columns, probably joining 20+ tables, to give an all-in-one view for an account. And still, you then need an extra column from an already joined table that is not in the list of 700+ columns. You can guess what this leads to: suboptimal queries with poor performance. Whereas, if you were allowed to write a view directly on the tables, you could optimize it for your needs.


6. Oversized column lengths

Ever seen a varchar(max) column for a country code with a maximum length of 3? I have. While this is an extreme example, oversized columns lead to several problems:

  • SQL Server handles the column differently. Without going into too much detail, a char(3) column for the country code would be stored much more efficiently than a varchar(max) column.
  • You do not always know how much space must be reserved for the column on reports and so on.
  • If you use the value in concatenated columns (e.g. business keys), you need to size it according to the physical maximum length, even when the actual data will not fill it up.


Oversizing can happen:

  • when front end C# developers use “code first”, and the column length is not defined in the code. Therefore I recommend that, when “code first” is used, a database developer or DBA has a critical look at the generated physical data model before going live.
  • when the column size is not specified in design documents.
  • when people just don’t care to size the column properly. Oversizing seems like a safe thing to do, if the column is big enough, no one will complain!
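As a starting point for a review, a query like the sketch below (using the standard catalog views, where a max_length of -1 indicates an (n)varchar(max) column) lists the unlimited-length string columns in the current database, so a database developer or DBA can check whether that length is really needed:

```sql
-- Sketch: list (n)varchar(max) columns in the current database for review.
SELECT s.[name] AS [schema_name],
       t.[name] AS [table_name],
       c.[name] AS [column_name],
       ty.[name] AS [type_name]
FROM sys.columns c
JOIN sys.tables t ON t.[object_id] = c.[object_id]
JOIN sys.schemas s ON s.[schema_id] = t.[schema_id]
JOIN sys.types ty ON ty.[user_type_id] = c.[user_type_id]
WHERE ty.[name] IN ('varchar', 'nvarchar')
  AND c.[max_length] = -1 -- -1 means (max)
ORDER BY s.[name], t.[name], c.[name]
```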


7. Oversized INT Datatypes

Or: To Smallint or not to Smallint. When reference tables only contain a few rows, a smallint or even tinyint datatype for the primary key will do! If the primary key is used in foreign key constraints of large tables (with many rows), less storage is needed and indexes will be smaller, all leading to better performance.

Unfortunately, SQL Server does not support unsigned smallint, int and bigint datatypes, but nobody stops you from starting to count at the minimum negative value, so at -32768 instead of 1 for a smallint! Just change the seed part of the IDENTITY specification. If you do this, you can store up to 65536 rows in a table with a smallint primary key.


--\
---) Author: Hans Michiels
---) Script to demonstrate how to start an
---) identity column with a negative number
--/

IF OBJECT_ID('[dbo].[A65kRowsTable]', 'U') IS NOT NULL
DROP TABLE [dbo].[A65kRowsTable]
GO
CREATE TABLE [dbo].[A65kRowsTable](
[ID] SMALLINT IDENTITY(-32768, 1) NOT NULL,
[SomeColumn] VARCHAR(50) NOT NULL,
CONSTRAINT [PK_dbo_A65kRowsTable_ID] PRIMARY KEY CLUSTERED
(
[ID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
INSERT INTO [dbo].[A65kRowsTable] ( [SomeColumn] )
VALUES ('-32768 is the first ID.')
GO
SELECT [ID], [SomeColumn]
FROM [dbo].[A65kRowsTable]
GO


If you do this trick with an INT, you can store about 4.2 billion rows, and you might in some cases be able to prevent the need for a bigint.


8. Lack of unique indexes for business or natural keys

Again a data quality issue. If the primary key is a surrogate key (an autonumbered integer type), a table can still have one or more other columns that make a row unique, for example a social security number in a citizens table. Not enforcing this constraint via a unique index can lead to two citizens with the same social security number! This can hide bugs in the front end software, fraud, or both. Anyway, it is not wise to take the risk that this happens.
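A minimal sketch, with illustrative table and column names, of a surrogate primary key combined with a unique index on the natural key:

```sql
-- Sketch: surrogate primary key plus an enforced natural key.
CREATE TABLE [dbo].[Citizen] (
  [CitizenId] INT IDENTITY(1, 1) NOT NULL,
  [SocialSecurityNumber] CHAR(9) NOT NULL,
  [FullName] VARCHAR(100) NOT NULL,
  CONSTRAINT [PK_dbo_Citizen] PRIMARY KEY CLUSTERED ([CitizenId] ASC)
)
GO
-- A second citizen with the same social security number is now rejected.
CREATE UNIQUE NONCLUSTERED INDEX [UIX_dbo_Citizen_SocialSecurityNumber]
  ON [dbo].[Citizen] ([SocialSecurityNumber] ASC)
GO
```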


9. Guids stored as (n)varchar

A guid, short for Globally Unique IDentifier, is a generated value of 16 bytes, guaranteed to be globally unique. It is sometimes used for primary key columns, instead of integer datatypes.
I will leave it open whether this is wise to do; there might be some use cases where it is a valid approach, but I would only do this if using integers is not an option.

SQL Server has the UNIQUEIDENTIFIER datatype to store guids. Internally, this datatype uses 16 bytes, but the value is represented as a hexadecimal string of 36 characters (32 for the 16 bytes, with 4 hyphens in between), sometimes surrounded by curly brackets, e.g. {935D9FA8-2C77-4C34-894C-8FCDA8E47F19}.
But if the guid is stored in a varchar(36) column, it will use 36 bytes for the value plus 2 bytes overhead, so 38 bytes in total. Compared with the 16 bytes for the UNIQUEIDENTIFIER, you can imagine that storage space and index size will increase and performance will decrease.
Not to mention nvarchar(36), the Unicode variant of varchar, which uses 2 bytes for every character plus 2 bytes overhead, so 74 bytes in total. This is massive! I have seen nvarchar(36) used in a production system to store a guid and would classify it as a “My first database” mistake. Only very inexperienced developers, hardly knowing anything about datatypes, would do this. Don’t be one of them!
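You can verify the value sizes yourself with DATALENGTH (a small sketch; the 2 bytes of per-row overhead for variable-length columns come on top of what DATALENGTH reports):

```sql
-- Sketch: compare the storage needed for the same guid value in three datatypes.
DECLARE @guid UNIQUEIDENTIFIER = NEWID()
SELECT DATALENGTH(@guid)                          AS [uniqueidentifier_bytes], -- 16
       DATALENGTH(CONVERT(VARCHAR(36), @guid))    AS [varchar_bytes],          -- 36
       DATALENGTH(CONVERT(NVARCHAR(36), @guid))   AS [nvarchar_bytes]          -- 72
```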



10. Where’s the DBA?

This point is more about daily operations than about the design, but I still want to mention it. If you are using SQL Server databases, you need to maintain them. A simple maintenance plan is easy to set up; it will do index and statistics maintenance for you, make backups and can prevent the transaction log file from growing excessively. If you don’t have an internal DBA, hire one to set it up and monitor your SQL Server instances.

(c) 2016 hansmichiels.com – Do not steal the contents – spread the link instead – thank you.

When Shit Hits The SQL Agent Jobstep (T-SQL Scripting Series)

Problems

Due to a bug in SQL Server, a SQL Agent jobstep that executes an SSIS package with a DateTime parameter can become unusable on a Windows server with European regional settings, more specifically a dd-mm-yyyy date format.
The jobstep is stored with the datetime parameter value in a d-m-yyyy hh:nn:ss format.
This gives two problems:

  1. During execution the following error occurs: String was not recognized as a valid DateTime
  2. When the jobstep is opened for editing, the same error occurs: “Microsoft SQL Server Management Studio String was not recognized as a valid DateTime. (mscorlib)”

Solution

Correct the jobstep in the msdb database: convert the datetime to a yyyy-mm-dd format, which does not cause this error when the job is executed or when the jobstep is opened for editing.
You have to run this script every time you have changed the jobstep using SQL Server Management Studio.


--\
---) Author: Hans Michiels
---) With this script you can correct a SQL Agent jobstep that has a DateTime parameter
---) and would otherwise be uneditable due to a bug in SQL Server.
---) You have to run this script every time you have changed the jobstep using SQL Server Management Studio.
--/
/*
(c) Copyright 2016 - hansmichiels.com
 
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
 
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
 
You should have received a copy of the GNU General Public License
along with this program. If not, see http://www.gnu.org/licenses/.
*/

UPDATE js

  SET command = REPLACE(js.command, expr3.original_date, expr5.corrected_date)

FROM [msdb].[dbo].[sysjobsteps] AS js

  CROSS APPLY ( SELECT
      CHARINDEX('(DateTime)', js.command) AS [datetime_pos]
      ) AS expr1

  CROSS APPLY ( SELECT
      CHARINDEX(';"\"', js.command, expr1.datetime_pos + 1) AS [start_pos],
      CHARINDEX(' ', js.command, expr1.datetime_pos + 1) AS [end_pos]
      ) AS expr2

  CROSS APPLY ( SELECT
      SUBSTRING(js.command, expr2.start_pos + 4, expr2.end_pos - expr2.start_pos - 4) AS [original_date]
      ) AS expr3

  CROSS APPLY ( SELECT
      CASE
      WHEN SUBSTRING(expr3.[original_date], 2, 1) = '-' AND SUBSTRING(expr3.[original_date], 4, 1) = '-'
        THEN '0' + SUBSTRING(expr3.[original_date], 1, 1) + '-0' + SUBSTRING(expr3.[original_date], 3, 6)
      WHEN SUBSTRING(expr3.[original_date], 2, 1) = '-' AND SUBSTRING(expr3.[original_date], 5, 1) = '-'
        THEN '0' + expr3.[original_date]
      WHEN SUBSTRING(expr3.[original_date], 3, 1) = '-' AND SUBSTRING(expr3.[original_date], 5, 1) = '-'
        THEN SUBSTRING(expr3.[original_date], 1, 3) + '0' + SUBSTRING(expr3.[original_date], 4, 6)
      ELSE expr3.[original_date]
      END AS [fixedwidth_date]
      ) AS expr4

  CROSS APPLY ( SELECT
      CONVERT(CHAR(10),
        CONVERT(DATE, SUBSTRING(expr4.[fixedwidth_date], 7, 4)
                    + SUBSTRING(expr4.[fixedwidth_date], 4, 2)
                    + SUBSTRING(expr4.[fixedwidth_date], 1, 2), 112)
                    , 120) AS [corrected_date]
    ) expr5

WHERE js.subsystem = 'SSIS'
  AND expr1.datetime_pos > 0
  AND expr2.start_pos > 0
  AND expr2.end_pos > expr2.start_pos
  AND SUBSTRING(expr4.fixedwidth_date, 3, 1) = '-'
  AND SUBSTRING(expr4.fixedwidth_date, 6, 1) = '-'

Download the script here.

Known limitations

This script only works for jobsteps with a single DateTime parameter.

Conclusion / Wrap up

With this script you can correct a SQL Agent jobstep that has a DateTime parameter and would otherwise be uneditable due to a bug in SQL Server.
You have to run this script every time you have changed the jobstep using SQL Server Management Studio.

(c) 2016 hansmichiels.com – Do not steal the contents – spread the link instead – thank you.