All posts by Hans Michiels

Hans is an independent Business Intelligence and Data Warehouse Consultant and Microsoft SQL Server Consultant, working in the Netherlands. He has been working in the software industry since 1996 and with SQL Server since 2001. Since 2008 his primary focus has been on data warehouse and business intelligence projects using Microsoft technology, with a Data Vault and Kimball architecture. He has a special interest in Data Warehouse Automation and metadata driven solutions.

  • Certified in Data Vault Modeling: Certified Data Vault 2.0 Practitioner, CDVDM (aka Data Vault 1.0)
  • Certified in MS SQL Server: MCSA (Microsoft Certified Solutions Associate) SQL Server 2012, MCITP Business Intelligence Developer 2005/2008, MCITP Database Developer 2005/2008, MCITP Database Administrator 2005/2008

Fixing corrupt csv files in a SSIS Package (SSIS Series)

(Promotional)

Big Data & Analytics insight 2016: save the date for an insightful conference
(as a speaker I was asked to do a bit of promotion ..)

Introduction

A few weeks ago my blog post was about using a C# Script Component to detect duplicates in a Data Flow Task.

This week I want to discuss its brother, the C# Script Task, which can be used in the Control Flow.

Again, it is extremely flexible: basically you embed a small C# program inside your package, and you can use the entire .NET Framework for all kinds of things. I used it to fix malformed csv source files, before processing them in a Data Flow Task.

Pros and Problems

Csv (comma separated) files have their pros and problems, compared with other formats like XML.

A few pros are that they are usually smaller in size than XML files, and easier and faster to load. These are two reasons why csv files are still used a lot these days, despite the fact that XML files are in some cases more reliable and more flexible as far as the file contents are concerned.

Unfortunately csv files are not so good at holding memo/comment fields that might contain carriage return/line feeds. Memo fields with those characters do not stay on one line in the file, which makes the csv file more difficult to process (at least the standard flat file source cannot handle it in all cases).

Also the structure is very strict. If a column is added to a csv source file, your Data Flow Task will break.

Solution

In this post I will discuss a possible solution for both problems:

  1. carriage return/line feeds in contents of csv files
  2. changing file formats over time

This post assumes that you have worked with SSIS before, know what For Each Loop Containers and Data Flow Tasks do, and recognize the csv problems above. It is a big advantage if you can read and understand C# code.

Okay, let’s start!

Scenario

My demo is based on the following hypothetical scenario:

  1. You need to process multiple CustomerDetails.csv files.
  2. In this file a pipe (|) is used as separator between the columns.
  3. The file contains a header line: SocialSecurityNumber|FirstName|LastName|Gender|Residence|Comments
  4. The Comments column is added later, your package also needs to be able to load historic files without the Comments column.
  5. CustomerDetails.csv files come from hundreds of locations and systems (hereafter called: sources) worldwide. Fixing the csv files in the export processes at all these locations is impossible. Even if some sources deliver correct csv files, your SSIS package cannot rely on the fact that all sources will do so.

Overview

Here are a few pictures of the end result:
SSIS-S01E03-109
The connection managers that are needed

SSIS-S01E03-110
The Control Flow
SSIS-S01E03-111
The Data Flow

Detailed steps

Parameters and variables

First you need to add a few parameters and variables to the package.

Parameters:

  • SourceFolder: The folder that the For Each Loop Container reads for csv files.
  • ArchiveFolder: The folder to which files are moved after being processed, if no error occurred.
  • ErrorFolder: The folder to which files are moved that caused an error during processing.

All parameters are Required and their Data type is String.

SSIS-S01E03-113
The parameters of the package

Variables:

  • CRLF: This variable will be filled with a Carriage Return/Line Feed in the Script Task.
  • CRLF_Replacer: This is the character string that is used to replace CarriageReturn/LineFeeds in the Comment column of the file. It must be a character string that does not occur in the Comment field. Therefore ~CRLF~ is used. If you want a different character string, you can change it to something else.
  • SourcePathAndFilename: The full path and filename of the original file to process. It is filled by the For Each Loop Container. The value set at design time is unimportant.
  • PathAndFilenameToProcess: The full path and filename of the file to process. This can be either the original file, or the modified temporary file. The Flat File Connection Manager uses this path and filename. The value set at design time is unimportant.

The Data type of all variables is String.

SSIS-S01E03-114
SSIS variables needed to do the job

The Control Flow – For Each Loop Container

Add a For Each Loop Container FELC – Load sourcefiles and configure it as a Foreach File Enumerator. Set Retrieve file name to Fully qualified.

SSIS-S01E03-115

On the Collection pane, just below the Enumerator type, press the ellipsis next to Expressions. In the Property Expressions Editor configure the Property Directory to be set by
Expression @[$Package::SourceFolder].
SSIS-S01E03-116

On the Variable Mappings pane, map variable User::SourcePathAndFilename to Index 0
SSIS-S01E03-102
For Each Loop Container configuration

The Control Flow – Script Task

Next, add a Script Task SCR – Fix csv file if needed.
In the Script Task Editor, configure the following:

  • ReadOnlyVariables: User::CRLF_Replacer,User::SourcePathAndFilename
  • ReadWriteVariables: User::CRLF,User::PathAndFilenameToProcess

SSIS-S01E03-117
Variable configuration in the Script Task Editor

Then press the Edit Script… button.
Don’t get scared by the amount of C# code; if you keep a cool head, it will be fixed in 10 minutes!

Adjust the script as follows:

  • In the Namespaces region, if not already there, add:
    using System.IO;
    using System.Text;


  • SSIS-S01E03-108
    Replace the highlighted part above with the code below.

    Now remove public void Main() completely and replace it with the code underneath (to avoid formatting issues due to html rendering, please download the original C# script here).
    Then check the code for the !!! ATTENTION !!! comments. These are points in the script that you might need to change.

    ScriptTask.cs

      /*
      (c) Copyright 2016 - hansmichiels.com
     
      This program is free software: you can redistribute it and/or modify
      it under the terms of the GNU General Public License as published by
      the Free Software Foundation, either version 3 of the License, or
      (at your option) any later version.
     
      This program is distributed in the hope that it will be useful,
      but WITHOUT ANY WARRANTY; without even the implied warranty of
      MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
      GNU General Public License for more details.
     
      You should have received a copy of the GNU General Public License
      along with this program. If not, see http://www.gnu.org/licenses/.
      */

     
      /// <summary>
      /// This method is called when this script task executes in the control flow.
      /// Before returning from this method, set the value of Dts.TaskResult to indicate success or failure.
      /// To open Help, press F1.
      /// </summary>
     public void Main()
     {
      // Copy a few SSIS variables to C# string variables.
       string sourcePathAndFilename = Dts.Variables["User::SourcePathAndFilename"].Value.ToString();
       string crlfReplacer = Dts.Variables["User::CRLF_Replacer"].Value.ToString();
     
       // Set the variable that needs to hold a CarriageReturn/LineFeed.
       Dts.Variables["User::CRLF"].Value = string.Format("\r\n");
     
       // Call the method that checks if the sourcefile is corrupt and if so, fixes it.
       FixFileIfCorrupt(sourcePathAndFilename, crlfReplacer);
     
       // Set returnvalue for script task.
      Dts.TaskResult = (int)ScriptResults.Success;
     }
     
      /// <summary>
      /// Csv-files can have lines with carriage return-line feeds, causing errors during the load.
      /// By pre-investigating the file we can correct the file and prevent these errors from occurring.
      /// Source: http://www.hansmichiels.com/2016/05/22/fixing-corrupt-csv-files-in-a-ssis-package-ssis-series/
      /// Author: Hans Michiels
      /// License: GNU General Public License, see http://www.gnu.org/licenses/
      /// </summary>
      private void FixFileIfCorrupt(string sourcePathAndFilename, string crlfReplacer)
      {
       string header = string.Empty;
       string headerAppend = string.Empty;
       string detailAppend = string.Empty;
     
       // Initialize pathAndFilenameToProcess with sourcePathAndFilename. Overwrite later if needed.
       string pathAndFilenameToProcess = sourcePathAndFilename;
     
       // Check if the header of the file is as expected, and return the header row through a ref parameter.
       // If the file header is not as expected, do not touch the file and exit this method.
       if (!CheckAndReturnHeader(sourcePathAndFilename, ref header, ref headerAppend, ref detailAppend)) return;
       
       // Get the encoding of the current file. We need this if we have to create a corrected file.
       var encoding = GetEncoding(sourcePathAndFilename);
     
       // Read all lines of the file into a string array.
       // !!! ATTENTION !!! WARNING: only do this when the files to process are relatively small.
       // !!! ATTENTION !!! When in doubt, test it and monitor memory usage.
       // (I suppose files up to 50~100 MB might not be a big problem, but larger files might be; testing is the way to find out)
       string[] lines = File.ReadAllLines(sourcePathAndFilename, encoding);
     
       // Use a variable to set when the file has changed due to corrections this method makes.
       bool fileHasChanged = false;
       bool lineIsOkay = true;
     
       // appendbuffer is used to keep parts of 'broken' lines.
       string appendbuffer = string.Empty;
     
       // Get lower- and upperbound of the string array into variables.
       int lower = lines.GetLowerBound(0);
       int upper = lines.GetUpperBound(0);
     
       // Loop through all lines in the file from back to front!
       for (int i = upper; i >= lower; i--)
       {
        // If there is still some text in the buffer, append it to the line and make the buffer empty again.
        if (appendbuffer != string.Empty)
        {
          lines[i] = lines[i] + appendbuffer;
          appendbuffer = string.Empty;
          fileHasChanged = true;
        }
     
        // reset lineIsOkay variable, set to true underneath when the line passes the 'line ok' checks.
        lineIsOkay = false;
     
        // !!! ATTENTION !!!
        // Here you need to add some custom code.
        // Describe one or more properties of a correct line, e.g.
        // - first 8 characters are numeric;
        // - The length is at least 12;
        // - It contains at least x delimiters;
        // etc.
        // Then convert this check to C# code. Use your imagination if needed!
     
        // I check for:
        // - the length to be more than 12 characters.
        // - the first 8 characters (the social security number) to be numeric.
        // - at least 4 pipes.
        if (lines[i].Length > 12)
        {
          string checkString = lines[i].Substring(0, 8);
          int intOutput;
     
          lineIsOkay = (Int32.TryParse(checkString, out intOutput));
     
          if (lineIsOkay)
          {
           // Check that the line contains at least 4 delimiters.
           lineIsOkay = (lines[i].Split('|').Length - 1 >= 4);
          }
        }
     
        if ((!lineIsOkay) && (i > lower))
        {
          // Line is NOT OK.
          // I have to append this line to the previous one, except when it is the header.
          // To do this put this line into appendbuffer variable.
          // I use a replacement string for the removed carriage return/line feed.
          // This will be converted back to a carriage return/line feed in the Data Flow.
          appendbuffer = string.Format("{0}{1}", crlfReplacer, lines[i]);
          lines[i] = string.Empty; // Package can handle empty lines.
          fileHasChanged = true;
        }
     
       }
     
       // !!! ATTENTION !!! If you do not have old file formats you can remove this block.
       // Code to handle the old file format, without a Comments column.
       // The code below reformats the file so that it is always in the new file format, with an empty Comments column.
       if (headerAppend != string.Empty)
       {
        // This is an old format without the Comments column. Append Comments column header to header and a delimiter (pipe) to detail rows.
        fileHasChanged = true;
     
        for (int i = upper; i >= lower; i--)
        {
         if (lines[i].Length > 0)
         {
          if (i > lower)
          {
            lines[i] += detailAppend;
          }
          else
          {
            lines[i] += headerAppend;
          }
         }
        }
       }
     
       if (fileHasChanged)
       {
        // Here the modified/corrected file is stored to a temporary file, so a "tmp" extension is added to the modified filename.
        // For auditing purposes I keep the original file and not the corrected one,
        // because the latter can always be recreated when needed.
        // Also, when a bug would exist in the creation of the corrected file
        // (initially I had a problem with codepages, which should now be solved),
        // the original file might get lost forever.
        pathAndFilenameToProcess = String.Format("{0}.tmp", sourcePathAndFilename);
     
        // Now create a new corrected csv file.
        File.WriteAllLines(pathAndFilenameToProcess, lines, encoding);
       }
     
       // Also change the SSIS Package variable, because the Flat File Source uses it.
       Dts.Variables["User::PathAndFilenameToProcess"].Value = pathAndFilenameToProcess;
     
      }
     
      /// <summary>
      /// This method checks if the header is as expected. If not, it returns false.
      /// It also sets the values of the by-ref parameters 'header', 'headerAppend' and 'detailAppend'.
      /// Source: http://www.hansmichiels.com/2016/05/22/fixing-corrupt-csv-files-in-a-ssis-package-ssis-series/
      /// Author: Hans Michiels
      /// License: GNU General Public License, see http://www.gnu.org/licenses/
      /// </summary>
      /// <param name="sourcePathAndFilename"></param>
      /// <param name="header"></param>
      /// <param name="headerAppend"></param>
      /// <param name="detailAppend"></param>
      private bool CheckAndReturnHeader(string sourcePathAndFilename,
        ref string header, ref string headerAppend, ref string detailAppend)
      {
       // Initialize return values.
       header = string.Empty;
       headerAppend = string.Empty;
       detailAppend = string.Empty;
     
       // Open the file to read the header line, then close the file again.
       using (StreamReader reader = new StreamReader(sourcePathAndFilename))
       {
        header = reader.ReadLine();
        reader.Close();
       }
       // !!! ATTENTION !!! Here you could do a check on (parts) of the header,
       // so that you do not change files with an unexpected format.
       if (!header.ToLower().StartsWith("socialsecuritynumber|firstname|lastname|gender|residence"))
       {
        return false;
       }
       // !!! ATTENTION !!! Here you could check for old file formats.
       // Old formats can only be handled when one or more new columns were added at the end of the csv lines.
       // In the example, there is a file format without the Comments column still being delivered.
       // By appending the Comments column with an empty value for those files, they can be processed
       // by the same Data Flow Task using only one Flat File Source Connection.
       if (!header.ToLower().Contains("|comments"))
       {
        // This is an old format without the Comments column.
        // Append Comments column header to header and a delimiter (pipe) to detail rows.
        headerAppend = "|Comments";
        detailAppend = "|";
       }
       return true;
      }
     
      /// <summary>
      /// Determines a text file's encoding by analyzing its byte order mark (BOM).
      /// Defaults to the encoding for the OS's current ANSI code page when no BOM is found.
      /// Source: http://stackoverflow.com/questions/3825390/effective-way-to-find-any-files-encoding
      /// With thanks to the author '2Toad'.
      /// </summary>
      /// <param name="filename">The text file to analyze.</param>
      /// <returns>The detected encoding.</returns>
      public static Encoding GetEncoding(string filename)
      {
       // Read the BOM
       var bom = new byte[4];
       using (var file = new FileStream(filename, FileMode.Open, FileAccess.Read))
       {
        file.Read(bom, 0, 4);
       }
     
       // Analyze the BOM
       if (bom[0] == 0x2b && bom[1] == 0x2f && bom[2] == 0x76) return Encoding.UTF7;
       if (bom[0] == 0xef && bom[1] == 0xbb && bom[2] == 0xbf) return Encoding.UTF8;
       if (bom[0] == 0xff && bom[1] == 0xfe) return Encoding.Unicode; //UTF-16LE
       if (bom[0] == 0xfe && bom[1] == 0xff) return Encoding.BigEndianUnicode; //UTF-16BE
       if (bom[0] == 0 && bom[1] == 0 && bom[2] == 0xfe && bom[3] == 0xff) return Encoding.UTF32;
       // return Encoding.GetEncoding(1252); // Without BOM, default to Windows Codepage 1252
       return Encoding.Default; // Without BOM, the encoding for the OS's current ANSI code page is returned by default.
      }

    When you have adjusted the script for your file, choose Rebuild from the BUILD menu in the VstaProjects – Microsoft Visual Studio Window. Then close that Window. You return to the Script Task Editor, close it by pressing OK.

    Developing the Data Flow Task

    Add a Data Flow Task to the Control Flow and name it DFT – Import Customer Details.
    Then add the following components to it:

    Flat File Source FF_SRC – CustomerDetails

    From the SSIS Toolbox, drag a Flat File Source to the Data Flow area and create a new Flat File connection manager.
    SSIS-S01E03-121

    If you want to rebuild the demo, download the csv files (and C# script) here.
    Configure the Flat File Connection as shown in the picture below:
    SSIS-S01E03-118
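    Note: the exact settings are only visible in the screenshot and the downloadable package. Because the Script Task writes the file to load into the variable User::PathAndFilenameToProcess, the connection manager presumably gets an expression on its ConnectionString property pointing to @[User::PathAndFilenameToProcess]; treat this as an assumption on my side and verify it against the demo package.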

    Choose Advanced in the pane on the left and configure the properties for each column as follows:

    • SocialSecurityNumber: DataType string [DT_STR], OutputColumnWidth 9
    • FirstName: DataType Unicode string [DT_WSTR], OutputColumnWidth 50
    • LastName: DataType Unicode string [DT_WSTR], OutputColumnWidth 50
    • Gender: DataType string [DT_STR], OutputColumnWidth 1
    • Residence: DataType Unicode string [DT_WSTR], OutputColumnWidth 60
    • Comments: DataType Unicode string [DT_WSTR], OutputColumnWidth 1000

    SSIS-S01E03-120
    Example of column properties

    Conditional Split CSPL – Filter out empty lines

    We use a Conditional Split to filter out empty lines in the csv file. Because SocialSecurityNumber is a required column, we assume that the line is empty (or at least useless) if SocialSecurityNumber is empty.
    So add an Output named Empty lines with Condition SocialSecurityNumber == "". If you like, you can check whether multiple columns are empty.
    Rename the default Output to Non-empty lines.

    SSIS-S01E03-122
    Conditional Split Configuration

    Derived Column DC – Re-insert CRLF

    Because the (corrected) file might contain replacements for carriage return/line feeds, we are going to replace them with the original carriage return/line feeds, before the value is inserted into the database table.
    In the Derived Column Transformation Editor, add a derived column, configured as follows:

    • Derived Column Name: Comments
    • Derived Column: Replace ‘Comments’
    • Expression: REPLACE(Comments,@[User::CRLF_Replacer],@[User::CRLF])

    SSIS-S01E03-123
    The Derived Column Transformation Editor

    OLEDB Destination OLE_DST stg DimCustomer

    To rebuild the demo, create a table to store the rows from the data flow task,
    and add an OLE DB Destination that uses this table.

    destination_table.sql

    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    SET ANSI_PADDING ON
    GO
    CREATE TABLE [stg].[DimCustomer](
     [SocialSecurityNumber] [char](9) NOT NULL,
     [FirstName] [nvarchar](50) NULL,
     [LastName] [nvarchar](50) NULL,
     [Gender] [char](1) NULL,
     [Comments] [nvarchar](1000) NULL,
     [Residence] [nvarchar](60) NULL,
     CONSTRAINT [PK_DimCustomer_CustomerKey] PRIMARY KEY CLUSTERED
    (
     [SocialSecurityNumber] ASC
    )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO
    SET ANSI_PADDING OFF
    GO

    The Control Flow – File System Tasks

    Finally the File System Tasks archive the original file and delete the corrected file (if it exists).
    For auditing purposes I keep the original file and not the corrected one, because the latter can always be recreated when needed.
    Also, when a bug would exist in the creation of the corrected file (initially I had a problem with codepages, which should now be solved), the original file might get lost forever.

    Please go back to the picture of the Control Flow at the beginning of this post to see how the File System Tasks are connected.
    Next are the screenshots that should help you configure the File System Tasks.

    SSIS-S01E03-124
    Configuration of “FST – Move original file to archive folder”

    SSIS-S01E03-125
    Configuration of “FST – Move original file to error folder”

    SSIS-S01E03-126
    Precedence constraint of “FST – Delete temporary file” (you need to set it twice).
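    Note: the exact precedence constraint expression is only visible in the screenshot. Presumably both incoming constraints use an expression along the lines of @[User::PathAndFilenameToProcess] != @[User::SourcePathAndFilename], so that the delete only runs when a corrected temporary file was actually created; this is an assumption, so check it against the screenshot or the downloadable package.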

    SSIS-S01E03-128
    Configuration of “FST – Delete temporary file”

    The proof

    To show you that it works, I run the package in debug mode.
    To be able to have a look at the temporary/corrected files afterwards, I have disabled the File System Task that deletes this file.
    SSIS-S01E03-130

    When I open the “CustomerDetails.csv.tmp” file after the package has run, I can see that line 15 was corrected.
    From the second screenshot you can see that the CRLF_Replacer value was inserted twice.
    SSIS-S01E03-131
    SSIS-S01E03-132

    And when I run a query on the destination table, I can see that the Comments column contains the original CarriageReturn/LineFeeds!
    SSIS-S01E03-134

    By the way, all the data in the csv files is generated from random firstnames, lastnames and residences and fake social security numbers. Any match with existing persons would be based solely on coincidence.

    Conclusion / Wrap up

    Csv files are still useful today, because they have some advantages compared with XML files.
    However, they also have their limitations, which can be frustrating when loading those files with a Data Flow Task.
    In this blog post I have demonstrated how to overcome a few of those limitations, namely:

    1. carriage return/line feeds in contents of csv files
    2. changing file formats over time

    The solution is based on using a C# Script Task.

    Please be aware that I do not pretend this is the best solution.
    If you are very familiar with C#, you could do all of the processing in the Script Task, using SqlBulkCopy, and you would not need a temporary file.
    However, the Data Flow Task offers some advantages, like lookups you can use without the need to script them.
    If you create good Interface Specification Documents beforehand (for instance specifying how to handle carriage return/line feeds in column values), you can require all sources to deliver correct csv files, and then they do not have to be corrected.

    Download the C# script and csv files here.

    (c) 2016 hansmichiels.com – Do not steal the contents – spread the link instead – thank you.

A naming convention for virtual machines and home networks (Windows User Series)

Introduction

If you are a professional working in the Information Technology Domain you might have set up virtual machines to test different software versions, do Proofs of Concept or just use them for the ease of backup and restore.
But what name do you give those virtual machines?
As long as there are only a few, it doesn’t really matter.

Challenge

It becomes a bit of a challenge when you have different virtual machines, each with a specific setup, e.g. different versions of the same software.
Recognizable?

Solution

I want to share my naming convention that I use for virtual machines and my home network.
It is pulled together from different sources on the internet, as well as my own insights, so don’t be surprised if you see parts of other standards.

My device names consist of the following parts:
1. Device type
2. Operating System Line
3. Operating System Version
4. Separator
5. Purpose of the device
6. Sequence number

A naming convention for virtual machines and home networks

1. Device type

Minimum and maximum length

Device type is a single character.

Domain values

Namepart Description
V Virtual Machine
S Physical Server
L Physical Laptop
D Physical Desktop
C Cloud Computer
P Printer
T Tablet Computer
M Mobile Phone

2. Operating System Line

Minimum and maximum length

Operating System Line consists of exactly two characters.

Domain values

Namepart Description
WS Windows Server
WC Windows Desktop/Client
LS Linux Server
LC Linux Client
OX OS X

3. Operating System Version

Minimum and maximum length

Operating System Version consists of 1 to 4 characters.

Domain values

For Windows devices use the internal version number:

Namepart Description

For Windows Server:
51 Windows Server 2003
52 Windows Server 2003 R2
60 Windows Server 2008
61 Windows Server 2008 R2
62 Windows Server 2012
63 Windows Server 2012 R2
100 Windows Server 2016

A "C" can be added to server versions if the "Core" version is installed, e.g. 63C.

For Windows desktop/client:
51 Windows XP
52 Windows XP Professional x64
60 Windows Vista
61 Windows 7
62 Windows 8
63 Windows 8.1
100 Windows 10

Get other/older versions from Wikipedia.

4. Separator

Minimum and maximum length

The separator is a single character: a hyphen (-).

Domain values

Must be a hyphen (-).

5. Purpose of the device

Minimum and maximum length

Purpose of the device consists of 1 to 4 characters.

Domain values

For Servers:

Namepart Description
DC Domain Controller
FS File Server
PS Print Server
WEB Web Server
ORA Oracle database
SQL SQL Server database
DB other database(s)
EXH Microsoft Exchange
SHP Microsoft Sharepoint
CTX Citrix Server
ESX VMware ESX Server
HV Windows Hyper-V Host

This part is optional for desktop devices.
If implemented, you could use:

Namepart Description
STD or S Standard
DEV or D Used for software development
GRD or G Used for graphical design

6. Sequence number

Minimum and maximum length

The sequence number consists of 1 to 6 characters.

WIN-S01E03-image3
Picture credits: © Yan Zommer | Dreamstime Stock Photos
Numbering your Windows instances.

Domain values

A sequence number making the name unique.
You can decide to make it unique based on purpose only, so you could have:
DWC61-STD001 and LWC61-STD002 (so you do not use 001 for the latter).
If you have a limited number of devices, you could put some intelligence in the number so you know which version of the ‘purpose delivering’ software is running.

Example names

Example name Explanation
DWS63C-HV01 Physical desktop running Windows Server 2012R2 Core, being a Hyper V Host with sequence number 01.
VWS63-SQL141 Hyper V Virtual machine running Windows Server 2012R2 with SQL Server, sequence number 141 (intelligent number, indicates that it is my first virtual machine with SQL Server 2014 installed, I would use 08x for SQL 2008, 09x for SQL 2008R2, 12x for SQL 2012, you get the idea). You could use this if the number of virtual machines per SQL version is less than 10.
VWC100-DEV151 Hyper V Virtual machine running Windows 10, used for development, with sequence number 151 (intelligent number, 15 indicates it’s my Visual Studio 2015 virtual machine, as I have only one of those, 151 is an excellent number).
LWC61-STD0554 Physical Laptop with Windows 7, with software for a ‘standard workspace’, sequence number 0554.

WIN-S01E03-image1
A real life example.

Warning/disclaimer

Do NOT use this naming convention for enterprise networks! Problems that could arise if you use it include:

  • A physical location is not part of this naming convention. In enterprise networks servers often have a physical location reference in their name.
  • The operating system is in the name. This is fine for your own lab virtual machines and home computers, but can be very inconvenient in company networks where the device name is in an inventory system. Enterprises typically do not want devices to be renamed when the Operating System is upgraded.

So.. use this naming convention only for the intended purpose, and take advantage of it.

Conclusion / Wrap up

I posted the naming convention for virtual and physical computer devices that I use, “AS IS”.
I do not claim it is perfect or useful in all cases.
Use the parts you find convenient and adjust them further to meet your requirements.
I would be thankful if you would share your adjustments with me, so I can update this post.
I discourage using this standard in enterprise networks. For more details see the warning/disclaimer above.

(c) 2016 hansmichiels.com – Do not steal the contents – spread the link instead – thank you.

Detect duplicates in a Data Flow Task (SSIS Series)

Intended audience

This article assumes that you have at least a basic understanding of SSIS, and the Data Flow Task in particular. C# knowledge is an advantage but not required.

Introduction

If you ever used the T-SQL windowing function ROW_NUMBER() OVER (PARTITION BY ..) to find duplicates in a query result, this post might be of interest to you.

This is because not all sources with potential duplicate entries (e.g. business keys) are SQL Server databases. If the source is a csv or Excel file, ROW_NUMBER will not automatically be your friend (it might be when you use OPENROWSET, but I think that is a rather complicated solution).

C# script components are extremely flexible yet fairly easy to understand, develop and maintain.
This is why this is my preferred solution to tackle the problem above.
Let’s begin!

How to detect duplicates in a Data Flow Task using a C# Script component

First an overview of the end result (the Data Flow of the Data Flow Task). I will tell you step by step how to achieve this using SSIS 2014. I did not make a print screen of the Control Flow because it only contains one Data Flow Task.
Please note that I enabled two Data Viewers on both outputs of the Conditional Split component, so you can see the data that flows through (screenshot later).
SSIS-S01E02-image01
For the demo I will use the following components in a dataflow. To solve your problem just pick out the things you need.

  • An OLE DB Source
  • A script component (the most interesting part!)
  • An (optional) conditional split
  • Two OLE DB Destinations: one for the first occurrences of each business key, and one for the duplicates.

An OLE DB Source

My example is based on AdventureWorks. I select customers from the [AdventureWorksDW] database, and make sure there are duplicates by using UNION ALL in the SQL statement.
In the real world it would be more likely that your source would not be a SQL Server query, but for instance a csv- or Excel file.

OLEDB Source.sql

SELECT
  [CustomerKey], [CustomerAlternateKey] AS [CustomerBusinessKey],
  [Title], [FirstName], [MiddleName], [LastName],
  [NameStyle], [BirthDate], [MaritalStatus], [Suffix],
  [Gender], [EmailAddress], [YearlyIncome]
FROM
  [dbo].[DimCustomer]
UNION ALL
SELECT
  [CustomerKey], [CustomerAlternateKey] AS [CustomerBusinessKey],
  [Title], [FirstName], [MiddleName], [LastName],
  [NameStyle], [BirthDate], [MaritalStatus], [Suffix],
  [Gender], [EmailAddress], [YearlyIncome]
FROM
  [dbo].[DimCustomer]
UNION ALL
SELECT
  [CustomerKey], [CustomerAlternateKey] AS [CustomerBusinessKey],
  [Title], [FirstName], [MiddleName], [LastName],
  [NameStyle], [BirthDate], [MaritalStatus], [Suffix],
  [Gender], [EmailAddress], [YearlyIncome]
FROM
  [dbo].[DimCustomer]
  WHERE CustomerKey % 2 = 0

SQL Statement for the OLE DB Source

A script component (the most interesting part!)

From the SSIS Toolbox drag a Script Component to the Data flow surface. In the “Select Script Component Type” choose “Transformation”.

SSIS-S01E02-image02

To use a column value in the script, you have to define it as an input column. Select the column you want to check for duplicate values with Usage Type ReadOnly. In the demo this is the CustomerBusinessKey column.
SSIS-S01E02-image03

We also need a new column to store the occurrence number for the CustomerBusinessKey. To do this, in the Script Transformation Editor, Select Inputs and Outputs on the left side, then unfold Output 0 and Output Columns, respectively.
Then click on the Add column button to add a column with DataType four-byte signed integer [DT_I4]. Give the new column a clear name. In the demo it is called CustomerBusinessKeyOccurrence.
SSIS-S01E02-image03b

After these preparations we can start writing the C# script. Press the Edit script … button to do so.
SSIS-S01E02-image04

Adjust the script as follows (the changes are also marked inside red blocks in the picture below):

  • In the Namespaces region, add:
    using System.Collections.Generic;
  • Above PreExecute(), add:
    // The column to count duplicates for will be stored as Key (with string datatype) of the dictionary.
    // The occurrence counter will be stored as Value of the dictionary (with int datatype).
    private Dictionary<string, int> m_dictOccurrencesCount;
  • Inside PreExecute(), add:
    // Initialize the dictionary, otherwise it will be null when accessed.
    m_dictOccurrencesCount = new Dictionary<string, int>();

SSIS-S01E02-image05
Script adjustments – part 1

Then adjust the script as follows:

  • Inside Input0_ProcessInputRow(Input0Buffer Row), add:
    // Count occurrences of CustomerBusinessKeys.
    if (m_dictOccurrencesCount.ContainsKey(Row.CustomerBusinessKey))
    {
      // If the CustomerBusinessKey of the current row is already in the dictionary,
      // increase the counter with 1:
      m_dictOccurrencesCount[Row.CustomerBusinessKey]++;
      // Also assign the new value to the CustomerBusinessKeyOccurrence column,
      // so it can be used in the Data flow later.
      Row.CustomerBusinessKeyOccurrence = m_dictOccurrencesCount[Row.CustomerBusinessKey];
    }
    else
    {
      // If the CustomerBusinessKey of the current row is not yet in the dictionary,
      // add it, and set the counter to 1
      m_dictOccurrencesCount.Add(Row.CustomerBusinessKey, 1);
      // Also assign 1 to the CustomerBusinessKeyOccurrence column.
      Row.CustomerBusinessKeyOccurrence = 1;
    }

SSIS-S01E02-image06
Script adjustments – part 2

When the script is ready, Choose Rebuild from the BUILD menu in the VstaProjects – Microsoft Visual Studio Window. Then close that Window. You return to the Script Transformation Editor, close it by pressing OK.

An (optional) conditional split

Optionally you can add a Conditional Split.
I use the conditional split to store the first occurrences for every CustomerBusinessKey in one table and all duplicates in another table.
    Both tables only serve to show that the script component and conditional split do their jobs properly. In real projects, you might develop this differently, for example by storing only the first occurrence.
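    (The screenshot below holds the exact settings; based on the data viewer results shown later, where the first occurrence flow has CustomerBusinessKeyOccurrence equal to 1 and the duplicates flow has 2 or higher, the split condition is presumably CustomerBusinessKeyOccurrence == 1 for the first occurrences output, with the remaining rows going to the duplicates output.)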

SSIS-S01E02-image07
Conditional Split Configuration

Two OLE DB Destinations: one for the first occurrences of each business key, and one for the duplicates

    I basically created two temporary tables by composing a SELECT TOP 0 .. INTO statement from the OLE DB Source SQL Statement, because the result of the demo needs to flow somewhere.

SSIS-S01E02-OLEDB-Source.sql
IF OBJECT_ID('[dbo].[DimCustomerFirstOne]', 'U') IS NOT NULL
  DROP TABLE [dbo].[DimCustomerFirstOne]
GO
SELECT TOP 0
  [CustomerKey], [CustomerAlternateKey] + 'B' AS [CustomerBusinessKey],
  [Title], [FirstName], [MiddleName], [LastName],
  [NameStyle], [BirthDate], [MaritalStatus], [Suffix],
  [Gender], [EmailAddress], [YearlyIncome], CONVERT(int, 0) AS [OccurrenceNo]
INTO [dbo].[DimCustomerFirstOne]
FROM
  [dbo].[DimCustomer]
GO

IF OBJECT_ID('[dbo].[DimCustomerDuplicates]', 'U') IS NOT NULL
  DROP TABLE [dbo].[DimCustomerDuplicates]
GO
SELECT TOP 0
  [CustomerKey], [CustomerAlternateKey] + 'B' AS [CustomerBusinessKey],
  [Title], [FirstName], [MiddleName], [LastName],
  [NameStyle], [BirthDate], [MaritalStatus], [Suffix],
  [Gender], [EmailAddress], [YearlyIncome], CONVERT(int, 0) AS [OccurrenceNo]
INTO [dbo].[DimCustomerDuplicates]
FROM
  [dbo].[DimCustomer]
GO

OLE DB Destination OLE_DST FirstOne uses the [dbo].[DimCustomerFirstOne] table.
OLE DB Destination OLE_DST Duplicate uses the [dbo].[DimCustomerDuplicates] table.

    Let it flow, let it flow, let it flow

Okay, time to test.
    When data flows, the "First occurrence" and "Duplicates" Data Viewers pop up. Then I can check that all the rows are divided correctly over both tables, based on the value of the column CustomerBusinessKeyOccurrence: exactly 1 or higher than 1.
    When I run it with the Data Viewers disabled, the package execution finishes literally in a split second. So the performance is excellent.
Test succeeded!
SSIS-S01E02-image08
The Data flow in Debug Mode

SSIS-S01E02-image09b
Data Viewer for First occurrence flow: CustomerBusinessKeyOccurrence is 1 for all rows.

SSIS-S01E02-image10b
Data Viewer for Duplicates flow: CustomerBusinessKeyOccurrence is 2 or higher for all rows.

Other considerations

    If multiple columns make a row unique, concatenate all column values with a delimiter, and then add the concatenated result to the dictionary m_dictOccurrencesCount.
    For instance, you could check for duplicates using the customer name parts (first name, middle name and last name), the birth date and the gender. This is a softer check than one on a real business key, but sometimes you do not have one.
Of course, for this to work, you must add those columns as ReadOnly input columns to the script component.
You can create a concatenated value as follows:
Inside Input0_ProcessInputRow(Input0Buffer Row), start with:

// Concatenate FirstName, MiddleName, LastName, BirthDate and Gender, using a | as delimiter.
    string businessKey = string.Format("{0}|{1}|{2}|{3:yyyyMMdd}|{4}",
      Row.FirstName, Row.MiddleName, Row.LastName, Row.BirthDate, Row.Gender).ToUpper();

The concatenated value is converted to uppercase, so that differences in lower- or uppercase lead to the same value.
Then use the businessKey variable (instead of Row.CustomerBusinessKey) in the if statement as shown above.
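    For clarity, a minimal sketch (not literally from the article) of how the earlier check could look when the concatenated businessKey variable is used as the dictionary key; the logic is unchanged, only the key differs:

    // Count occurrences of the concatenated business key.
    if (m_dictOccurrencesCount.ContainsKey(businessKey))
    {
      m_dictOccurrencesCount[businessKey]++;
      Row.CustomerBusinessKeyOccurrence = m_dictOccurrencesCount[businessKey];
    }
    else
    {
      m_dictOccurrencesCount.Add(businessKey, 1);
      Row.CustomerBusinessKeyOccurrence = 1;
    }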

If the concatenated value becomes very long, you could calculate a hash value and add that to the dictionary instead (thanks for the suggestion, Joost). This would use less memory and might be faster, but the only way to find out if this is the case is testing .. testing ..
    (I did not test this, to keep the demo simple and because performance did not seem to be an issue.)
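    As an illustration only (not code from the article, and the choice of MD5 is my own assumption), hashing the concatenated key before adding it to the dictionary could look something like the sketch below. It uses fully qualified names so no extra using statements are needed; for performance you would probably create the hash object once in PreExecute() instead of per row.

    // Hash the concatenated business key to get a short, fixed-length dictionary key.
    using (var md5 = System.Security.Cryptography.MD5.Create())
    {
      byte[] hashBytes = md5.ComputeHash(System.Text.Encoding.UTF8.GetBytes(businessKey));
      // Convert the 16 hash bytes to a 32 character hexadecimal string.
      string hashedKey = BitConverter.ToString(hashBytes).Replace("-", string.Empty);
      // Use hashedKey instead of businessKey as the key for m_dictOccurrencesCount.
    }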

Thinking a bit further, you could use this concept also to calculate running totals.
Just configure the dictionary to hold a decimal instead of an int and you can calculate running totals for numeric values (e.g. sales figures) easily.
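    A minimal sketch of that idea (not from the article; SalesAmount and RunningTotal are hypothetical input and output columns that you would have to add to the script component yourself):

      • Above PreExecute(), add:
        private Dictionary<string, decimal> m_dictRunningTotals;
      • Inside PreExecute(), add:
        m_dictRunningTotals = new Dictionary<string, decimal>();
      • Inside Input0_ProcessInputRow(Input0Buffer Row), add:
        // Accumulate the sales amount per business key and assign the running total to the output column.
        if (m_dictRunningTotals.ContainsKey(Row.CustomerBusinessKey))
        {
          m_dictRunningTotals[Row.CustomerBusinessKey] += Row.SalesAmount;
        }
        else
        {
          m_dictRunningTotals.Add(Row.CustomerBusinessKey, Row.SalesAmount);
        }
        Row.RunningTotal = m_dictRunningTotals[Row.CustomerBusinessKey];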

Conclusion / Wrap up

In SSIS you can use a Script Component within a Data Flow Task to detect duplicates in the source data.
This can be an alternative to the T-SQL ROW_NUMBER() function, when the source is not a SQL Server Database.

Credits
My thanks go to Joost van Rossum for reviewing the initial draft of this article. His input was very useful.

(c) 2016 hansmichiels.com – Do not steal the contents – spread the link instead – thank you.

When reverse engineering is a necessary evil .. (SQL Server Series)

.. or: How to find data (text, guids, numbers) in your entire SQL Server database ..

Introduction

A number of years ago I had to make queries on an OLTP database used by third parties’ software. As a person preferring data warehouses for too many reasons to mention here, I am not proud of it, but sometimes these requests just happen. If you have the same now and then, you are not the only one 😉 .

Problem

A little simplified, this is what happened:
I started asking around, is there documentation? “No”, there wasn’t.
Is there a data dictionary? “No”, there wasn’t, either.
Can I have access to the database? “Yes, here’s the connection string.”
So I made a connection, but .. what do all these tables mean? How are they related? What are the foreign keys? The model didn’t tell me: there were no physical constraints, and the table and column names were unclear.
Do you have anything else for me to see how this works? “Well, we have a PROD and a TEST environment, you can have read only access to PROD and read/write to the TEST.”
Okay, that is something ..

MSSQL-S01E02-testconnection
Picture credits: © David Coleman | Dreamstime Stock Photos
When there is only a (test)connection..

Towards a solution

Could you show me how you use the system? Luckily the employee of my client was very helpful and showed me how he used the application.
I tried to get a grip on the SQL Statements caused by his actions, but due to a lack of permissions on the database, this didn’t work.
This could be arranged, but it could take a while before I actually would have temporary "elevated" permissions.
As it was just a small piece of "in-between" work, that seemed an inefficient route to follow.
Okay, I have seen what you did, can I do that on the TEST as well? "Yeah, go ahead."
So I started to make an order, using silly names and descriptions, and choosing weird, seldom-used products. So, what’s next?
Ok, if I can find where my description “First order of Hans” is stored, I can start to map the frontend to the backend, or make clear for myself where the entered information is stored in the database.

Solution

This is where this script became useful. It finds all occurrences of one or more text values in a SQL Server database.

WARNING: Do not use in production environments, as this script can potentially run pretty long (10 to 20 minutes when I used it) and can have a negative impact on database performance in general (so other users might regret that you are running the script).
So use it in a test environment when you can.

SearchTextInDatabase.sql

--\
---) SearchTextInDatabase script.
---) Author: Hans Michiels
---) Searches for one or more texts in all (n)(var)char and (n)text
---) columns of a SQL Server database.
---) WARNING: Can potentially run very long, my advice is not
---) to run this script in production environments!
--/
/*
(c) Copyright 2016 - hansmichiels.com
 
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
 
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
 
You should have received a copy of the GNU General Public License
along with this program. If not, see http://www.gnu.org/licenses/.
*/

USE [AdventureWorks2014]
GO

--\-----------------------------------------------------------------------------------
---) DECLARATIONS
--/-----------------------------------------------------------------------------------
DECLARE @schema_name SYSNAME
DECLARE @table_name SYSNAME
DECLARE @column_name SYSNAME
DECLARE @nsql NVARCHAR(4000)
DECLARE @param_def NVARCHAR(500)
DECLARE @search_text NVARCHAR(1000)
DECLARE @search_texts TABLE ([search_text] NVARCHAR(1000))
DECLARE @debug_print NVARCHAR(500)
DECLARE @results_count INT = 0

IF OBJECT_ID('tempdb..#search_results') IS NOT NULL BEGIN
    DROP TABLE #search_results
END
CREATE TABLE #search_results
    ( [search_text] NVARCHAR(1000)
    , [schema_name] NVARCHAR(128)
    , [table_name] NVARCHAR(128)
    , [column_name] NVARCHAR(128)
    , [the_value] NVARCHAR(1000)
    )

DECLARE columns_cursor CURSOR LOCAL STATIC FOR
    SELECT sch.name AS [schema_name]
    , tbl.name AS [table_name]
    , col.name AS [column_name]
    FROM sys.tables tbl
     JOIN sys.schemas sch
        ON sch.schema_id = tbl.schema_id
      JOIN sys.columns col
        ON col.object_id = tbl.object_id
    WHERE col.system_type_id IN (175, 167, 231, 239, 35, 99) -- (N)(VAR)CHAR, (N)TEXT

--\-----------------------------------------------------------------------------------
---) INITIALIZATION
---) Specify the texts to search for below.
---) Use the wildcard % if you want to do a 'like' search.
--/-----------------------------------------------------------------------------------
INSERT INTO @search_texts

  SELECT '%Assembly%'
  UNION SELECT 'First order of Hans'

DECLARE search_cursor CURSOR LOCAL STATIC FOR
    SELECT search_text
    FROM @search_texts

--\-----------------------------------------------------------------------------------
---) MAIN
--/-----------------------------------------------------------------------------------
OPEN columns_cursor
FETCH NEXT FROM columns_cursor INTO @schema_name, @table_name, @column_name

WHILE @@FETCH_STATUS = 0
BEGIN

    SET @debug_print = 'Processing ' + @table_name + '.' + @column_name;
    PRINT REPLICATE('-', LEN(@debug_print))
    PRINT @debug_print
    PRINT REPLICATE('-', LEN(@debug_print))
    
    SET @nsql = 'INSERT INTO #search_results SELECT DISTINCT @inner_search_text, '
        + '''' + @schema_name + ''' AS [schema_name], '
        + '''' + @table_name + ''' AS [table_name], '
        + '''' + @column_name + ''' AS [column_name], CONVERT(NVARCHAR(1000), ' + QUOTENAME(@column_name) + ') AS [the_value] '
        + ' FROM ' + QUOTENAME(@schema_name) + '.' + QUOTENAME(@table_name) + ' WITH (NOLOCK) '
        + ' WHERE ' + QUOTENAME(@column_name) + ' LIKE @inner_search_text;';

    OPEN search_cursor
    FETCH NEXT FROM search_cursor INTO @search_text

    WHILE @@FETCH_STATUS = 0
    BEGIN
        PRINT ''
        PRINT '--> Processing search text ''' + @search_text + '''';
        SET @param_def = N'@inner_search_text NVARCHAR(1000)';
        EXECUTE sp_executesql @nsql, @param_def,
                              @inner_search_text = @search_text;

        FETCH NEXT FROM search_cursor INTO @search_text
    END
    
    CLOSE search_cursor

    FETCH NEXT FROM columns_cursor INTO @schema_name, @table_name, @column_name
    
END

CLOSE columns_cursor
DEALLOCATE columns_cursor

SELECT
      t.schema_name
    , t.table_name
    , t.column_name
    , t.the_value
    , 'SELECT * FROM ' + QUOTENAME(t.schema_name) + '.' + QUOTENAME(t.table_name)
      + ' WHERE ' + QUOTENAME(t.column_name) + ' = ''' + REPLACE(t.the_value, '''', '''''') + '''' AS [select_statement]
 FROM #search_results t

GO

Some time later, I also made two similar versions for searching unique identifiers and numeric values.
I know the scripts might not look so ‘pretty’, but they do the job when needed.

SearchUniqueIdentifierInDatabase.sql

--\
---) SearchUniqueIdentifierInDatabase script.
---) Author: Hans Michiels
---) Searches for one or more unique identifiers in all UNIQUEIDENTIFIER columns
---) of a SQL Server database.
---) WARNING: Can potentially run very long, my advice is not
---) to run this script in production environments!
--/
/*
(c) Copyright 2016 - hansmichiels.com
 
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
 
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
 
You should have received a copy of the GNU General Public License
along with this program. If not, see http://www.gnu.org/licenses/.
*/

USE [AdventureWorks2014]
GO

--\-----------------------------------------------------------------------------------
---) DECLARATIONS
--/-----------------------------------------------------------------------------------
DECLARE @schema_name SYSNAME
DECLARE @table_name SYSNAME
DECLARE @column_name SYSNAME
DECLARE @nsql NVARCHAR(4000)
DECLARE @param_def NVARCHAR(500)
DECLARE @search_value UNIQUEIDENTIFIER
DECLARE @search_values TABLE ([search_value] UNIQUEIDENTIFIER)
DECLARE @debug_print NVARCHAR(500)
DECLARE @results_count INT = 0

IF OBJECT_ID('tempdb..#search_results') IS NOT NULL BEGIN
    DROP TABLE #search_results
END
CREATE TABLE #search_results
    ( [search_value] UNIQUEIDENTIFIER
    , [schema_name] NVARCHAR(128)
    , [table_name] NVARCHAR(128)
    , [column_name] NVARCHAR(128)
    , [the_value] UNIQUEIDENTIFIER
    )

DECLARE columns_cursor CURSOR LOCAL STATIC FOR
    SELECT sch.name AS [schema_name]
    , tbl.name AS [table_name]
    , col.name AS [column_name]
    FROM sys.tables tbl
     JOIN sys.schemas sch
        ON sch.schema_id = tbl.schema_id
      JOIN sys.columns col
        ON col.object_id = tbl.object_id
                              -- UNIQUEIDENTIFIER
    WHERE col.system_type_id IN (36)

--\-----------------------------------------------------------------------------------
---) INITIALIZATION
---) Specify the unique identifiers to search for below.
--/-----------------------------------------------------------------------------------
INSERT INTO @search_values
SELECT 'D081136E-38D6-4D42-8FFD-19A6A8FA61E6'
UNION SELECT '9AADCB0D-36CF-483F-84D8-585C2D4EC6E9'
UNION SELECT '32A54B9E-E034-4BFB-B573-A71CDE60D8C0'
UNION SELECT '4C506923-6D1B-452C-A07C-BAA6F5B142A4'

DECLARE search_cursor CURSOR LOCAL STATIC FOR
    SELECT search_value
    FROM @search_values

--\-----------------------------------------------------------------------------------
---) MAIN
--/-----------------------------------------------------------------------------------
OPEN columns_cursor
FETCH NEXT FROM columns_cursor INTO @schema_name, @table_name, @column_name

WHILE @@FETCH_STATUS = 0
BEGIN

    SET @debug_print = 'Processing ' + @table_name + '.' + @column_name;
    PRINT REPLICATE('-', LEN(@debug_print))
    PRINT @debug_print
    PRINT REPLICATE('-', LEN(@debug_print))
    
    SET @nsql = 'INSERT INTO #search_results SELECT DISTINCT @inner_search_value, '
        + '''' + @schema_name + ''' AS [schema_name], '
        + '''' + @table_name + ''' AS [table_name], '
        + '''' + @column_name + ''' AS [column_name], ' + QUOTENAME(@column_name) + ' AS [the_value] '
        + ' FROM ' + QUOTENAME(@schema_name) + '.' + QUOTENAME(@table_name) + ' WITH (NOLOCK) '
        + ' WHERE ' + QUOTENAME(@column_name) + ' = @inner_search_value ;';

    OPEN search_cursor
    FETCH NEXT FROM search_cursor INTO @search_value

    WHILE @@FETCH_STATUS = 0
    BEGIN
        PRINT ''
        PRINT '--> Processing search value ''' + CONVERT(NVARCHAR(40), @search_value) + '''';
        SET @param_def = N'@inner_search_value UNIQUEIDENTIFIER';
        EXECUTE sp_executesql @nsql, @param_def,
                              @inner_search_value = @search_value;

        FETCH NEXT FROM search_cursor INTO @search_value
    END
    
    CLOSE search_cursor

    FETCH NEXT FROM columns_cursor INTO @schema_name, @table_name, @column_name
    
    SELECT @results_count = (SELECT COUNT(*) FROM #search_results);
    
END

CLOSE columns_cursor
DEALLOCATE columns_cursor

SELECT
      t.schema_name
    , t.table_name
    , t.column_name
    , t.the_value
    , 'SELECT * FROM ' + QUOTENAME(t.schema_name) + '.' + QUOTENAME(t.table_name)
      + ' WHERE ' + QUOTENAME(t.column_name) + ' = ''' + REPLACE(t.the_value, '''', '''''') + '''' AS [select_statement]
 FROM #search_results t

GO


SearchNumberInDatabase.sql

--\
---) SearchNumberInDatabase script.
---) Author: Hans Michiels
---) Searches for one or more numeric values in all TINYINT, SMALLINT,
---) INT, FLOAT, DECIMAL and NUMERIC columns of a SQL Server database.
---) WARNING: Can potentially run very long, my advice is not
---) to run this script in production environments!
--/
/*
(c) Copyright 2016 - hansmichiels.com
 
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
 
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
 
You should have received a copy of the GNU General Public License
along with this program. If not, see http://www.gnu.org/licenses/.
*/

USE [AdventureWorks2014]
GO

--\-----------------------------------------------------------------------------------
---) DECLARATIONS
--/-----------------------------------------------------------------------------------
DECLARE @schema_name SYSNAME
DECLARE @table_name SYSNAME
DECLARE @column_name SYSNAME
DECLARE @nsql NVARCHAR(4000)
DECLARE @param_def NVARCHAR(500)
DECLARE @search_value DECIMAL(38, 2)
DECLARE @search_values TABLE ([search_value] DECIMAL(38, 2))
DECLARE @debug_print NVARCHAR(500)
DECLARE @results_count INT = 0
DECLARE @max_difference DECIMAL(38, 2) -- Use to find values 'near by'

IF OBJECT_ID('tempdb..#search_results') IS NOT NULL BEGIN
    DROP TABLE #search_results
END
CREATE TABLE #search_results
    ( [search_value] DECIMAL(38, 2)
    , [schema_name] NVARCHAR(128)
    , [table_name] NVARCHAR(128)
    , [column_name] NVARCHAR(128)
    , [the_value] DECIMAL(38, 2)
    )

DECLARE columns_cursor CURSOR LOCAL STATIC FOR
    SELECT sch.name AS [schema_name]
    , tbl.name AS [table_name]
    , col.name AS [column_name]
    FROM sys.tables tbl
     JOIN sys.schemas sch
        ON sch.schema_id = tbl.schema_id
      JOIN sys.columns col
        ON col.object_id = tbl.object_id
                              -- TINYINT, SMALLINT, INT, BIGINT
    WHERE col.system_type_id IN (48, 52, 56, 127
                              -- REAL, MONEY, FLOAT, DECIMAL, NUMERIC, SMALLMONEY, BIGINT
                              --, 59, 60, 62, 106, 108, 122, 127
                                )

--\-----------------------------------------------------------------------------------
---) INITIALIZATION
---) Specify the numeric values to search for below "INSERT INTO @search_values".
---) Also set a value for @max_difference
---) @max_difference makes it possible to find nearby numbers.
--/-----------------------------------------------------------------------------------
SELECT @max_difference = 1.0
INSERT INTO @search_values
  SELECT 755
  --UNION SELECT 97867563

DECLARE search_cursor CURSOR LOCAL STATIC FOR
    SELECT search_value
    FROM @search_values

-------------------------------------------------------------------------------------
-- MAIN
-------------------------------------------------------------------------------------
OPEN columns_cursor
FETCH NEXT FROM columns_cursor INTO @schema_name, @table_name, @column_name

WHILE @@FETCH_STATUS = 0
BEGIN

    SET @debug_print = 'Processing ' + @table_name + '.' + @column_name;
    PRINT REPLICATE('-', LEN(@debug_print))
    PRINT @debug_print
    PRINT REPLICATE('-', LEN(@debug_print))
    
    SET @nsql = 'INSERT INTO #search_results SELECT DISTINCT @inner_search_value, '
        + '''' + @schema_name + ''' AS [schema_name], '
        + '''' + @table_name + ''' AS [table_name], '
        + '''' + @column_name + ''' AS [column_name], CONVERT(DECIMAL(38, 2), ' + QUOTENAME(@column_name) + ') AS [the_value] '
        + ' FROM ' + QUOTENAME(@schema_name) + '.' + QUOTENAME(@table_name) + ' WITH (NOLOCK) WHERE ABS(' + QUOTENAME(@column_name) + ' - @inner_search_value) <= ' + CAST(@max_difference AS VARCHAR(50)) + ';';

    PRINT @nsql
    
    OPEN search_cursor
    FETCH NEXT FROM search_cursor INTO @search_value

    WHILE @@FETCH_STATUS = 0
    BEGIN
        PRINT ''
        PRINT '--> Processing search value ''' + CONVERT(NVARCHAR(40), @search_value) + '''';
        SET @param_def = N'@inner_search_value DECIMAL(38, 2)';
        EXECUTE sp_executesql @nsql, @param_def,
                              @inner_search_value = @search_value;

        FETCH NEXT FROM search_cursor INTO @search_value
    END
    
    CLOSE search_cursor

    FETCH NEXT FROM columns_cursor INTO @schema_name, @table_name, @column_name
    
    SELECT @results_count = (SELECT COUNT(*) FROM #search_results);
    
END

CLOSE columns_cursor
DEALLOCATE columns_cursor

SELECT
      t.schema_name
    , t.table_name
    , t.column_name
    , t.the_value
    , 'SELECT * FROM ' + QUOTENAME(t.schema_name) + '.' + QUOTENAME(t.table_name) + ' WHERE ' + QUOTENAME(t.column_name) + ' = ''' + REPLACE(t.the_value, '''', '''''') + ''''
 FROM #search_results t

GO

Download all scripts here.

Conclusion / Wrap up

When you have to understand a SQL Server data model, for instance to get data out, and there is no documentation but only a test environment and a database connection available, these scripts can help you find where data entered in the front-end software is stored in the database.
A different use case is a database that uses unique identifiers for primary and foreign keys, while the foreign key relationships are not physically enforced or are flexible. In that case you can search for a specific unique identifier and see where it is used in the database.

(c) 2016 hansmichiels.com – Do not steal the contents – spread the link instead – thank you.

Stop being so precise! and more about using Load(end)dates (Datavault Series)

Introduction

My web site was offline a few days ago. Sorry for that.
I got this email from my webhosting provider: “A new message or response with subject: User *** has used up 153% of their bandwidth ..”.
Shortly after they suspended my account. It was suspended for about 8 hours. I have upgraded the network traffic limit.
All right, when all bandwidth was consumed .. then somebody must be reading my blog. Thank you for that.

This week’s post is about the LoadDate and LoadEndDate.
Actually there are two things to be nerdy on:

  1. End dating without gaps
  2. Getting the best value for bytes on DATETIME2 precision

By the way, these topics apply to SQL Server, the examples are made using SQL Server 2014.

1. End dating without gaps

How end dating works

In a Data Vault Satellite, different subsequent versions of a row in the source system are distinguished through a LoadDate (which usually contains a date/time value). So the HashKey of the corresponding Hub Row plus the LoadDate are the primary key for the satellite.
However to get the right version out when querying for a specific date, this requires a relatively difficult query, and this can also have a negative impact on performance.
This is why the LoadEndDate is an optional column of a Satellite, to make querying (read: getting data out) easier and better performing.

It is important to remember that the LoadEndDate is not the date/time the load (process) ended, but the date/time the row was replaced by a newer row for the same business entity in the hub. What’s in a name; if I had to choose I would just call it EndDate, but LoadEndDate is the standard, and once you know it, it is not a problem.

There are two ways to determine the value for this LoadEndDate:

  1. Exclusive: the LoadEndDate is the LoadDate of the new row that replaces this one, minus a small time fraction. Using the exclusive method enables the use of the BETWEEN keyword in Transact-SQL to get the right row out, for example:

    WHERE @SnapshotDate BETWEEN [LoadDate] AND [LoadEndDate]

    This is the method that is used in most Data Vault examples.

  2. Inclusive: the LoadEndDate is exactly equal to the LoadDate of the new row that replaces this one. This requires no computation when updating the LoadEndDate, but disqualifies the BETWEEN keyword, for getting data out you need to do something like:

    WHERE @SnapshotDate >= [LoadDate] AND @SnapshotDate < [LoadEndDate]
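
To make the exclusive method more concrete, here is a minimal T-SQL sketch of the end dating itself, assuming a satellite [dv].[SatCustomer] with columns [CustomerHashKey], [LoadDate] and [LoadEndDate] of type DATETIME2(2), where open rows have a NULL LoadEndDate (all names are hypothetical):

End_dating_exclusive_sketch.sql

-- Set the LoadEndDate of every open row to the LoadDate of the next row
-- for the same business entity, minus one centisecond (the grain of DATETIME2(2)).
UPDATE cur
   SET cur.[LoadEndDate] = DATEADD(MILLISECOND, -10, nxt.[NextLoadDate])
  FROM [dv].[SatCustomer] cur
 CROSS APPLY ( SELECT MIN(n.[LoadDate]) AS [NextLoadDate]
               FROM [dv].[SatCustomer] n
               WHERE n.[CustomerHashKey] = cur.[CustomerHashKey]
                 AND n.[LoadDate] > cur.[LoadDate]
             ) nxt
 WHERE cur.[LoadEndDate] IS NULL
   AND nxt.[NextLoadDate] IS NOT NULL;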

Problem

There is a (theoretical) problem when using the exclusive method.
If you subtract too much, there will be a tiny time gap between the LoadEndDate and the LoadDate of the subsequent row.
I fully admit this is a theoretical problem, because the chances that you need the row that is valid exactly at this moment are astronomically small.
(There is this not so hypothetical situation that your load starts at midnight, the LoadDate is set before the first second of the new day has passed, and you subtract a second.
Then you do not find a record when using the exact date (without time fraction) to get the row out. But you need a lot of bad luck for this to happen.)

Solution

Still, if you are a purist, you want to do it right, certainly because the solution is so simple.
If you make the “grain” of subtraction from the LoadDate equal to the precision of the LoadDate, there is no gap.
E.g.
– Subtract 1 second from DATETIME2(0)
– Subtract 1 centisecond from DATETIME2(2) ***TIP***
– Subtract 1 millisecond from DATETIME2(3)
– Subtract 1 microsecond from DATETIME2(6)
– Subtract 100 nanoseconds from DATETIME2(7)

Examples in T-SQL:

Exclusive enddate without gaps.sql

DECLARE @dt20 DATETIME2(0) = '2016-04-13 20:52:17'
DECLARE @dt22 DATETIME2(2) = '2016-04-13 20:52:17.00'
DECLARE @dt23 DATETIME2(3) = '2016-04-13 20:52:17.000'
DECLARE @dt26 DATETIME2(6) = '2016-04-13 20:52:17.000000'
DECLARE @dt27 DATETIME2(7) = '2016-04-13 20:52:17.0000000'

SELECT @dt20, DATEADD(SECOND, -1, @dt20)
SELECT @dt22, DATEADD(MILLISECOND, -10, @dt22)
SELECT @dt23, DATEADD(MILLISECOND, -1, @dt23)
SELECT @dt26, DATEADD(MICROSECOND, -1, @dt26)
SELECT @dt27, DATEADD(NANOSECOND, -100, @dt27)

2. Getting the best value for bytes on DATETIME2 precision

This is about a “smart” precision to choose for your LoadDate and LoadEndDate columns. Unlike the older DATETIME datatype, DATETIME2 uses less or more bytes for storage depending on the precision you specify.
The Storage size is: 6 bytes for precisions less than 3; 7 bytes for precisions 3 and 4. All other precisions require 8 bytes. (I shamelessly pasted this from MSDN).
In the book “Building a scalable data warehouse with Data Vault 2.0” a lot of examples use DATETIME2(7). You can ask yourself why. Why do you need to be precise to 100 nanoseconds? I dare to say that in most cases (except when loading realtime or near-realtime streaming data into your Data Vault), seconds would be precise enough.
But looking back a few lines, to the storage sizes, DATETIME2(0) uses 6 bytes, but DATETIME2(2) ALSO uses 6 bytes. So with the latter you get a higher precision for the same storage size. And for daily loads, the centiseconds precision that DATETIME2(2) is providing, is really precise enough, believe me.
So DATETIME2(2) gives us the best value for bytes!
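
If you want to verify these storage sizes yourself, the little check below (just a sketch) should return 6, 6, 7 and 8 bytes respectively on SQL Server 2014:

Datetime2_storage_sizes.sql

DECLARE @p0 DATETIME2(0) = SYSDATETIME()
DECLARE @p2 DATETIME2(2) = SYSDATETIME()
DECLARE @p4 DATETIME2(4) = SYSDATETIME()
DECLARE @p7 DATETIME2(7) = SYSDATETIME()

-- DATALENGTH returns the number of bytes used to store each value.
SELECT DATALENGTH(@p0) AS [Bytes_DATETIME2_0]
     , DATALENGTH(@p2) AS [Bytes_DATETIME2_2]
     , DATALENGTH(@p4) AS [Bytes_DATETIME2_4]
     , DATALENGTH(@p7) AS [Bytes_DATETIME2_7]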


DV-S01E04-precise
Why would your LoadDates and LoadEndDates be more precise than needed ?..


In this way you can save two bytes, compared with a DATETIME2(7), which uses 8 bytes and is used in the book. Because the LoadDate is in the primary key of satellite tables, also the primary key index will be smaller.
In PIT Tables, which usually have multiple LoadDates, the storage gain is even bigger.
And what counts for money, “who does not respect a small gain, isn’t worth a big one”, also counts for the small performance gains you can get from this kind of tweaks. Never forget that! Multiple smaller tweaks might add up to a noticeable performance improvement!

Conclusion / Wrap up

In this blog post you have read about the two methods for enddating: Inclusive and Exclusive.
Also you could read how you can use the Exclusive method for end dating in Data Vault satellites, without having gaps in the timeline.
Finally I discussed which precision will in most cases be good enough for LoadDate and LoadEndDate columns.

(c) 2016 hansmichiels.com – Do not steal the contents – spread the link instead – thank you.

Hash Diff calculation with SQL Server (Datavault Series)

Updates

Change log

31 January, 2017 – Really good news! The 8k limitation on HASHBYTES has been removed in SQL Server 2016!

Today I would like to discuss Hash Diff calculation using SQL Server 2014, from my own experience.

My purpose is NOT to be complete and/or to replicate book “Building a scalable data warehouse with Data Vault 2.0”. For full details please read the book.

However I will give a short intro for those who hardly know what I am talking about.

Introduction

Hash differences or Hash diffs

A hash difference is a hash value of all the descriptive data of a row in a satellite, plus the business key column(s).

A hash difference column is used to detect changes in a new version of a satellite’s row for a particular business key, and can be used in a comparison instead of doing a comparison on every individual descriptive attribute.
The main reason to use hash diff columns is to improve performance when loading new satellite rows (of which one or more attributes have changed).
The more columns the satellite has, the higher the performance gain will be.
To support data warehouse automation patterns, hash diffs can also be used for satellites with only a few or even one descriptive attribute.

I think a different name, like RowHash, RecordHash, DataHash or AttributeValuesHash, would better describe the contents of the column, because the value itself is not a difference, but that’s a different (relatively unimportant) discussion.

Hash keys

This article is not about Hash keys, however to be complete on hashing as far as Data Vault 2.0 is concerned, I will give a short introduction on that as well.

Hash keys replace sequence numbers (generated by the database engine) of the Data Vault 1.0 standard. They support geographically distributed data warehouses, as well as integration with big data environments like Hadoop.

A hash key is a hash value of the business key column(s) used in a Hub or Link. The advantage is that it is predictable, which enables parallel loading and a multiplatform / distributed datawarehouse architecture. For instance a customer with code NL123 will have the same hash key on all distributed parts of the data warehouse. Because the hash key always has the same length and is stored as a (fixed length) CHAR column, performance is usually better than when the business key column(s) are directly used as primary key.

How to calculate a hash difference in T-SQL

For full details about Endianness, differences of hash functions between different platforms, how to cope with new columns and other technical details I refer to the book mentioned above. If you want to work with Data Vault 2.0 you will need the book anyway.

As said, a hash difference is a hash value of all the descriptive data of a row in a satellite, plus the business key column(s). To do the hashing, we have to put all the column values together in one nvarchar value, and then apply a hashing function on it. This is the short version and the base to understand it.

DV-S01E02-hashdiff

Between the different column values you have to use a delimiter, preferably one that does not occur in the column values. Otherwise you have to “escape” the delimiter in column values.

If you want to do a case insensitive compare, you should convert all values to either upper- or lowercase. But this means that if in the source system a customer name was “hans michiels” and is changed to “Hans Michiels”, you will not detect the change and therefore do not store it in the Data Vault. So it depends on the contents of the satellite and your business view on it if this is desired behaviour or not.

The hash diff calculation results in a binary(16) value, when using the hashing algorithm MD5 (which is recommended). To enable better cross-platform- and tools support, this is then converted to a hexadecimal string of 32 characters, stored as a CHAR(32) column.

In all cases you must also convert the hexadecimal hash value to all upper- or all lowercase, because not all convert functions give the same output; some produce lower- and others uppercase. When comparing values that were made by different hash functions (read: on different systems), making them all uppercase (or lowercase) ensures the values do not differ in casing only (which could lead to a false negative answer to the “are the rows the same” question).

I understand this is all dry and boring nerd stuff if you do not see an example, so .. a simple example:

Calculating_a_hash_diff.sql

SELECT
      [bla]
    , UPPER(
        CONVERT(CHAR(32),
          HASHBYTES('MD5',
            UPPER(
              CONCAT( ISNULL(CONVERT(NVARCHAR, [BusinessKeyOfHub]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), [Column01]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), [Column02]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), [Column03]), N'')
              ) -- END CONCAT
            ) -- END UPPER
          ) -- END HASHBYTES
        , 2) -- END CONVERT
      ) -- END UPPER
FROM [stg].[bla]

So far nothing shocking, but there are a few pitfalls. If you keep them in mind, you will be an excellent hasher soon. 😉

CONVERT(NVARCHAR without specifying the size

Do this only for values of 30 or less characters, otherwise the value will be cut off, which could lead to wrong hash diff values that do not change when the attribute value changes. It is for instance okay for integers, decimals (unless the precision is more than 28; mind the decimal separator and possible negative sign), datetime stamps, and bits. When converting (n)varchar the safest thing to do is follow the original size; for unique identifiers use 38 or 40.
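
A small demonstration of this truncation behaviour (just a sketch with a made-up value):

Convert_without_size_pitfall.sql

DECLARE @AttributeValue NVARCHAR(60) = N'This attribute value is longer than thirty characters';

-- Without a size, CONVERT(NVARCHAR, ...) defaults to NVARCHAR(30) and silently truncates.
SELECT CONVERT(NVARCHAR, @AttributeValue)     AS [TruncatedTo30Chars]
     , CONVERT(NVARCHAR(60), @AttributeValue) AS [CompleteValue]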

Beware for CONCAT

Beware for concatenation of (N)VARCHAR(MAX), (N)TEXT and XML columns using the CONCAT function. It appears that only the first 8000 bytes of (N)VARCHAR(MAX) columns are in the concatenated result.

And even worse, HASHBYTES ..

HASHBYTES can only hash an NVARCHAR value with a length of 4000 or less. Above that, the error “String or binary data would be truncated” occurs. This is really important to keep in mind.
Update: the 8000 bytes (or 4000 unicode characters) limitation on any string hashed with the HASHBYTES function has been removed in SQL Server 2016! Now you can hash larger (N)VARCHAR values!

Cope with the HASHBYTES and CONCAT limitations

4000 characters is quite something, right? Only with very wide satellites (a lot of columns) and/or usage of (N)VARCHAR(MAX), (N)TEXT or XML columns (hereafter called ‘unlimited length columns’) you can get into trouble.

So, what is wise to do? Some measures you can take are:

Make your satellites not too wide

This is a best practice, not only from a hashing function perspective. Split the satellites by source system, then by rate of change. Keep in mind that the business key column(s) and all satellite column values are converted to NVARCHAR and then (including all delimiters) may not exceed 4000 characters if you want to use the HASHBYTES function.

Size unlimited length columns according to the actual maximum length

What I mean is: if, for instance, a country name is stored in an unlimited length column in the source system, you know that a size of 100 will be long enough. When in doubt, just do some data profiling on the source to see what the maximum length is. Use that size plus a safety margin for the satellite column size. Problem gone.
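
A minimal data profiling sketch for this, assuming a source table [src].[Customer] with an unlimited length column [CountryName] (both names are made up for this example):

Profile_actual_max_length.sql

-- Find the actual maximum length, then size the satellite column with a safety margin.
SELECT MAX(LEN([CountryName])) AS [MaxActualLength]
     , COUNT(*)                AS [NumberOfRows]
  FROM [src].[Customer]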

Handle true unlimited length columns

Put true unlimited length columns (for instance free entry ‘memo’ fields or xml columns in source systems) in their own satellite! Otherwise if any other (small) attribute changes, the 4k memo is copied, even when it was not changed. This wastes storage. But that alone is not enough; as said, with HASHBYTES, above 8000 bytes (4000 when unicode) you are still f*c**d. Luckily there is a system function in the master database, which uses the MD5 algorithm and takes a binary value as input. So to compute a MD5 hash value for a text column with unlimited length, you could do something like:

Hash_for_memo.sql

SELECT master.sys.fn_repl_hash_binary(CONVERT(VARBINARY(MAX),
[MyHugeMemoField])) AS [HashDiff]

But don’t make the mistake to use the function [sys].[fn_repl_hash_binary] for all hashing, because the performance of HASHBYTES is much better!
On my machine HASHBYTES is three times faster than [sys].[fn_repl_hash_binary] on the same dataset.
You can check for yourself using this script:

Performance_comparison.sql

USE [msdb]
GO

DECLARE @start DATETIME2(7)
DECLARE @stop DATETIME2(7)
DECLARE @elapsed1 BIGINT
DECLARE @elapsed2 BIGINT
--\-------------------------------------------------------------------------
---) TEST PERFORMANCE of HASHBYTES function.
--/-------------------------------------------------------------------------
SELECT @start = GETDATE();
SELECT TOP 1000000
      UPPER(
        CONVERT(CHAR(32),
          HASHBYTES('MD5',
            --UPPER(
              CONCAT( ISNULL(CONVERT(NVARCHAR(128), o1.[name]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o1.[object_id]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o1.[principal_id]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o1.[schema_id]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o1.[parent_object_id]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o1.[type]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o1.[type_desc]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o1.[create_date], 126), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o1.[modify_date], 126), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o1.[is_ms_shipped]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o1.[is_published]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o1.[is_schema_published]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(128), o2.[name]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o2.[object_id]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o2.[principal_id]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o2.[schema_id]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o2.[parent_object_id]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o2.[type]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o2.[type_desc]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o2.[create_date], 126), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o2.[modify_date], 126), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o2.[is_ms_shipped]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o2.[is_published]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o2.[is_schema_published]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(128), o3.[name]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o3.[object_id]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o3.[principal_id]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o3.[schema_id]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o3.[parent_object_id]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o3.[type]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o3.[type_desc]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o3.[create_date], 126), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o3.[modify_date], 126), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o3.[is_ms_shipped]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o3.[is_published]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o3.[is_schema_published]), N'')
              ) -- END CONCAT
            --) -- END UPPER
          ) -- END HASHBYTES
        , 2) -- END CONVERT
      ) -- END UPPER
  FROM [sys].[objects] o1
  CROSS JOIN [sys].[objects] o2
  CROSS JOIN [sys].[objects] o3;

SELECT @stop = GETDATE();
SELECT @elapsed1 = DATEDIFF(MICROSECOND,@start, @stop);

--\-------------------------------------------------------------------------
---) TEST PERFORMANCE of [master].[sys].[fn_repl_hash_binary] function.
--/-------------------------------------------------------------------------
SELECT @start = GETDATE();
SELECT TOP 1000000
      UPPER(
        CONVERT(CHAR(32),
          master.sys.fn_repl_hash_binary(CONVERT(VARBINARY(MAX),
            --UPPER(
              CONCAT( ISNULL(CONVERT(NVARCHAR(128), o1.[name]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o1.[object_id]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o1.[principal_id]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o1.[schema_id]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o1.[parent_object_id]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o1.[type]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o1.[type_desc]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o1.[create_date], 126), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o1.[modify_date], 126), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o1.[is_ms_shipped]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o1.[is_published]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o1.[is_schema_published]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(128), o2.[name]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o2.[object_id]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o2.[principal_id]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o2.[schema_id]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o2.[parent_object_id]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o2.[type]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o2.[type_desc]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o2.[create_date], 126), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o2.[modify_date], 126), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o2.[is_ms_shipped]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o2.[is_published]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o2.[is_schema_published]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(128), o3.[name]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o3.[object_id]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o3.[principal_id]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o3.[schema_id]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o3.[parent_object_id]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o3.[type]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o3.[type_desc]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o3.[create_date], 126), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o3.[modify_date], 126), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o3.[is_ms_shipped]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o3.[is_published]), N'')
                    , ';', ISNULL(CONVERT(NVARCHAR(50), o3.[is_schema_published]), N'')
                 ) -- END CONCAT
            --) -- END UPPER
          ) -- END CONVERT TO VARBINARY(MAX)
          ) -- END fn_repl_hash_binary
        , 2) -- END CONVERT
      ) -- END UPPER
  FROM [sys].[objects] o1
  CROSS JOIN [sys].[objects] o2
  CROSS JOIN [sys].[objects] o3;

SELECT @stop = GETDATE();
SELECT @elapsed2 = DATEDIFF(MICROSECOND,@start, @stop);

SELECT @elapsed1 AS [Elapsed_HashBytes]
, @elapsed2 AS [Elapsed_fn_repl_hash_binary]
, 'HASHBYTES is ' + CONVERT(VARCHAR(20), CONVERT(DECIMAL(19, 2), 1.0 * @elapsed2 / @elapsed1))
+ ' times faster than [fn_repl_hash_binary]' AS [Result]

That’s about it.
Just a few more things:

  • If you use a different hashing algorithm (e.g. SHA1) you cannot use the sys.fn_repl_hash_binary function; in that case you will have to seek your solution in a C# script task or CLR stored procedure, doing the hashing in .NET code.
  • Don’t rely on CONCAT if the satellite contains more than one unlimited length column like [MyHugeMemoField], or you can still run into the truncation problems described above (a sketch for that situation follows below).
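
For that last situation, a hedged sketch of an alternative: skip CONCAT and concatenate VARBINARY(MAX) values directly before hashing (the memo column names are hypothetical):

Hash_for_multiple_memos.sql

SELECT master.sys.fn_repl_hash_binary(
         CONVERT(VARBINARY(MAX), ISNULL([MyHugeMemoField1], N''))
       + CONVERT(VARBINARY(MAX), N';') -- delimiter between the columns
       + CONVERT(VARBINARY(MAX), ISNULL([MyHugeMemoField2], N''))
       ) AS [HashDiff]
FROM [stg].[bla]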

Conclusion / Wrap up

I have explained in short why Hash diff columns are used in a Data Vault 2.0 satellite, and how these values can be calculated.

After that, you have read about some limitations of the CONCAT and HASHBYTES function when used for calculating Hash diff values, and how to cope with these limitations.

Credits
My thanks go to Vincent Kuiper for reviewing the initial draft of this article.

(c) 2016 hansmichiels.com – Do not steal the contents – spread the link instead – thank you.

Data Vault and Dimensional Modelling, a happy marriage! (Data Vault Series)

We need ambassadors!

I was at this one day course recently and spoke to a professional working in the business intelligence domain.
He said that he had just followed a two week course for datawarehouse architects.
I asked him, what did they tell about Data Vault?
His answer was shocking: “Not so much ..”

If recently trained datawarehouse architects do not get trained to use Data Vault in a datawarehouse architecture, there definitely is a lot of missionary work to do.
Because the Data Vault can add so much value to your datawarehouse solution. 
And more important, it is not a choice between a Kimball dimensional model OR a Linstedt Data Vault. Use both and combine their strengths!

Intended audience

This is a strong call for everybody who reads this to share this post with at least one person you know working in the business intelligence domain using star schemas (a.k.a. dimensional modelling) directly fed from source systems.

What are the strengths of a Dimensional Model (a.k.a. Star Schema)?

  • Because the dimensional model is already around for decades, a lot of front end tools have an excellent support for it.
  • A dimensional model is designed for reporting, slicing and dicing, for getting data out. That is what it is good at.

    This becomes clear by the following examples:

    • Query performance is usually better than of other models, like Data Vault or Relational Datawarehouse, especially for aggregations.
    • A star model is an excellent source for Cube databases, like SSAS, especially when using a multidimensional model.
    • Date selections and comparison of periods (e.g. Q1 this year versus Q1 Last Year) is easier than in Data Vault or Relational models.
  • It enables self service BI. Give business (power) users (direct) access to a dimensional model, and there is a good chance that they can get  data out in the format they want.

What are the strengths of a Data Vault?

  • A Data Vault is meant to capture all the data all the time. Therefore it stores all data from the source systems, captures the changes in it, and keeps a full history of all changes (with the sidemark that multiple changes between two load cycles will be seen as one).
  • A Data Vault captures all data exactly as it is stored in the source systems: “the good, the bad and the ugly”, with no business rules applied to it (except, in some cases, hard business rules like data types). This makes all data auditable.
  • Data Vault 2.0 supports cross platform integration with other systems (e.g. Hadoop) and decentralized datawarehouses, for instance in different global regions.
  • With Data Vault 2.0 working agile is easy, in sprints, building the system in increments.
  • Data Vault has very good support for data integration: satellites hanging off the same Hub can contain data from different source systems.

So? What are the strengths of a datawarehouse architecture using both?

  • You can change your mind! If your business rules change (and it happens!), just reload your dimensional model from the Data Vault with the new rules applied.
  • You have one version of the facts. If you have multiple dimensional models, you do not.
  • Data history is not like a destroyed monument. With a Data Vault, you capture all data from the source systems, and all changes applied to it. Source systems usually reflect only the current state of the data, so without a change capturing central storage like Data Vault, your history is gone forever!
  • Your dimensional model does not have to serve two purposes: data storage and presentation of information. These two tasks can be in conflict; for instance, there is currently no reporting need for all the data you could grab from the source systems, so you leave it out of your star schema. But if the question comes tomorrow, you have no data, no history!
  • You have full auditability. Because the (raw) Data Vault stores the data from source systems unmodified, every measure and every dimensional attribute can be traced back to the source it came from, and when. This makes it possible to refute claims that the data is not correct. Maybe now it will become clear that the old datawarehouse- or reporting system has lied for years!

How can you use both?

The answer is simple: by using a multi-tier architecture.
Data from source systems is first extracted to a Staging area, before it is moved into the Enterprise Data Warehouse using a Data Vault Model (with or without some optional components).
From there it will be distributed to dimensional models (star schemas) and cubes, and whilst the data is on its way, business rules can be applied.
DV-S01E02-multi-tier-architecture
Multi-tier architecture using both Data Vault and Dimensional Modelling techniques.

Conclusion / Wrap up

I have written this post to create more awareness about using both Data Vault and Dimensional Modelling or Star Schemas in a data warehouse architecture.
I have listed strengths and benefits of Data Vault, Dimensional Modelling and of an architecture using both.

(c) 2016 hansmichiels.com – Do not steal the contents – spread the link instead – thank you.

A Plug and Play Logging Solution (SSIS Series)

Introduction

Ever had to investigate an error that occurred during execution of a SSIS package? Then you will know that logging is the key to know what happened.

And not only logging in SSIS itself, but also in stored procedures, c# script tasks, c# console applications you might have written and are executed from within the package.
SSIS-S01E01-log-story
So the log tells you a story (I once heard “so it reads like a thrilling boys’ book”, I liked that expression, GJ).

Also you do not want to reinvent the wheel for every SSIS project, but want to use a standard solution. Plug and Play instead of spending sprints on “utilities”.

Feature list

The features in short (as compared with what SSIS offers out of the box):

  1. It’s a Plug And Play SSIS Logging Solution in most cases: no additional programming in SSIS Packages is required*), so it can easily be used for all existing packages! Only when you have multiple master packages with the same name will you have to do a little set-up work.
  2. Log messages are labeled with a message class (a sort of severity of the message) for easy filtering.
  3. Changing the maximum message class to log is possible without redeploying any SSIS Packages (for instance enable VERBOSE logging for trouble shooting, then set it back to DEBUG or INFO).
  4. Log messages are labeled with a retention class for easy purging. There is also an “audit” retention class for messages that may never be purged/deleted.
  5. Purging can be automated, so no scheduling is needed.
  6. Logging is Integrated/centralized for SSIS Packages, stored procedures and other home made software.
  7. Deduplicated logging can be enabled. If this is done, identical SSIS log messages (e.g. first from the OLE DB Destination, then from the Data Flow Task, and then from the SSIS Package) are, as much as possible, logged only once.
  8. Master/child relations for SSIS packages can be configured, so that log entries from master- and child packages can easily be retrieved together in a single timeline. Out of the box this works using a naming convention (the master package must have “master” in its name), but it can be fine-tuned for every package that runs.
  9. Easy configuration by executing a stored procedure, to set retention periods, maximum log class, automatic purging on/off, deduplicated logging on/off, and naming convention for the master package.
  10. Storage of both start- and endtimes of the current timezone as well as UTC start- and endtimes.

*) As far as the logging that SSIS does is concerned. If you want logging in your own stored procedures and executables you need to implement it first.

Quick Start Guide

  1. Download the installation solution script here.
  2. Execute the entire script to install on new database [logdb]. To install on existing database, skip the part of the script where the database is created and execute the rest against an existing database. All objects the solution creates are created in database schema [log] (except for stored procedure [dbo].[sp_ssis_addlogentry], this name is needed by SSIS).
  3. Enable logging in your SSIS Packages if you have not already done that. Use a logging provider for SQL Server and select all events for all components of the package (more details below).
  4. Configure the connection managers of your SSIS Packages to use the database in which you installed the solution. Alternatively you can execute the stored procedure [log].[spCreateCrossRefs] to redirect logging for existing packages to the logging solution without any changes in your SSIS packages. See the User Manual for more details.

Directly to User Manual

Steps to follow when you want to enable logging in your SSIS Package:

SSIS-S01E01-enable_logging

User Manual

Table Of Contents

Introduction

The “hansmichiels.com Plug And Play Logging Solution” provides Plug and Play logging for SSIS Packages, stored procedures and other home made software, like command line applications or web applications.
After installing the solution, as explained in the Quick Start Guide, you can manage the solution using a few stored procedures, or accept the default settings and run it out of the box.

Understanding MessageClass and RetentionClass

MessageClass can have one of these 6 values:

  • 1 = Fatal Error (SSIS OnError event)
  • 2 = Non-fatal error (reserved for user defined logging for non-fatal errors, not used by SSIS packages)
  • 3 = Warning (SSIS OnWarning event)
  • 4 = Info message (SSIS: PackageStart, PackageEnd, OnInformation events)
  • 5 = Debug message (SSIS: OnPreValidate, OnPostValidate, OnPreExecute, OnPostExecute events)
  • 6 = Verbose messages (SSIS: Pipeline, Diagnostic and OnProgress events, typically more than 90% of all SSIS log messages)

RetentionClass aligns with MessageClass because it also has values 1 to 6, and by default the RetentionClass will be equal to the MessageClass. However, when you do custom logging in your own stored procedures or executable programs, you can provide a different value for it, for instance MessageClass 4 (INFO) but RetentionClass 1 if you want the message to be kept longer than the retention period for RetentionClass 4.

In addition there is a RetentionClass 0 for messages that may never be purged/deleted.

There are two ways to prevent that log messages are purged/deleted:
  1. By setting the configuration setting @RetentionPeriodClassN (where N is from 1 to 6) to 0. Then ALL messages for that RetentionClass will never be purged. However when the configuration setting is changed to a different value, the next time spPurgeLog is executed, the messages can still be deleted.
  2. By explicitly adding log entries with RetentionClass 0. Those entries do not depend on configuration settings, and are never deleted, unless you delete them directly from the [log].[Entry] table.



Back to Table of Contents

Understanding the stored procedures and user defined function of the solution

Stored procedure [log].[spConfigure]

With this stored procedure you can change a number of configuration settings which are parameters of the stored procedure.

SSIS-S01E01-configure
Configuration: a child can do the laundry

You only have to provide values for the configuration settings (parameters) that you want to change. Other settings will remain unchanged.

You can set the following parameters:

  • @ExecutionGuid: Guid used for logging the configure operation. Can be NULL.
  • @MaxMessageClass: The maximum MessageClass to log. For instance 5 means that log entries with MessageClass=6 (VERBOSE) are not logged. The default value is 5 (DEBUG).
  • @RetentionPeriodClass1: Hours to keep a log entry if the [RetentionClass] column has value 1. Set the value to 0 if the log message may never be deleted. The default value is 0.
  • @RetentionPeriodClass2: The same for [RetentionClass] = 2. The default value is 0.
  • @RetentionPeriodClass3: The same for [RetentionClass] = 3. The default value is 4320 hours or 180 days.
  • @RetentionPeriodClass4: The same for [RetentionClass] = 4. The default value is 2160 hours or 90 days.
  • @RetentionPeriodClass5: The same for [RetentionClass] = 5. The default value is 336 hours or 14 days.
  • @RetentionPeriodClass6: The same for [RetentionClass] = 6. The default value is 168 hours or 7 days. Verbose messages are typically more than 90% of all messages, so a short retention period is recommended, if they are logged at all.
  • @DefaultMasterSourcePattern: default naming convention for master packages. Use the wildcard % for name parts. The default value is ‘%Master%’.
  • @EnableAutoPurgeOnPackageEnd: If enabled, spPurgeLog will be executed every time a SSIS Package ends. This keeps the log clean without an additional effort for scheduling, but the downside is that spPurgeLog might be executed more often than when it is scheduled daily. This is not harmful, but could have a negative impact on package performance. The default value is 0 (false).
  • @EnableDeduplicatedLogging: If enabled, a staging table [log].[EntryStaging] will be created and duplicate SSIS log messages (with different sources) will be deduplicated as much as possible. Still, in some situations duplicates may occur, but a lot less than when this setting is not enabled. The default value is 0 (false).
  • @ResetAllToDefaultValues: Use 1 to reset all values to the values of the DEFAULT constraints on the [log].[Config] table. In that case all other parameter values are ignored.

Examples of use:

EXECUTE [log].[spConfigure] @ResetAllToDefaultValues = 1;

EXECUTE [log].[spConfigure]
@EnableDeduplicatedLogging = 1,
@EnableAutoPurgeOnPackageEnd = 1,
@MaxMessageClass = 6,
@RetentionPeriodClass2 = 8640,
@RetentionPeriodClass6 = 48;



Back to Table of Contents

Stored procedure [log].[spAddLogEntry]

You can use this stored procedure to implement logging in your own stored procedures or executable programs.

It is advised that your own stored procedures which are called from within SSIS Packages have a parameter @ExecutionGuid (with datatype UNIQUEIDENTIFIER). Provide the value from the package variable System::ExecutionInstanceGUID.
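
For example, a stored procedure called from an Execute SQL Task could look like the sketch below; the procedure name and body are hypothetical, and the SSIS task is assumed to map System::ExecutionInstanceGUID to the @ExecutionGuid parameter:

CREATE PROCEDURE [dbo].[spLoadSomething]
    @ExecutionGuid UNIQUEIDENTIFIER
AS
BEGIN
    DECLARE @Source NVARCHAR(1024) = N'dbo.spLoadSomething';
    DECLARE @Message NVARCHAR(2048) = N'Starting: ' + @Source;
    EXECUTE [log].[spAddLogEntry] @Message = @Message, @ExecutionGuid = @ExecutionGuid,
        @Source = @Source, @MessageClass = 4;

    -- ... the actual work of the stored procedure goes here ...

    SELECT @Message = N'Finished: ' + @Source;
    EXECUTE [log].[spAddLogEntry] @Message = @Message, @ExecutionGuid = @ExecutionGuid,
        @Source = @Source, @MessageClass = 4;
END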

You can set the following parameters:

  • @Event: The log event, can be NULL. If you leave it empty, the event name will be based on the value for @MessageClass.
  • @Source: The log source.
  • @SourceGuid:The log source guid, can be NULL.
  • @ExecutionGuid: A global unique identifier to be able to distinguish log messages of different jobs running in parallel.
  • @StartTime: Can be NULL, if NULL the current date/time is used.
  • @EndTime: Can be NULL, if NULL the value of @StartTime is used (which can be set to current date/time).
  • @MessageClass: The MessageClass of the message: 1=Fatal Error, 2=Error, 3=Warning, 4=Info, 5=Debug, 6=Verbose
  • @RetentionClass: A numeric indicator for the time after which the log messages can be deleted or purged. Values align with MessageClass, and in addition there is a RetentionClass with value 0 for log messages that may never be deleted, e.g. for auditing purposes. Can be NULL, if NULL the value provided for MessageClass will be used.
  • @Message: The log message.

Examples of use:

-- At the beginning of a stored procedure:
DECLARE @Source NVARCHAR(1024) = 'log.spCreateCrossRefs';
DECLARE @Message NVARCHAR(2048) = 'Starting: ' + @Source;
EXECUTE [log].[spAddLogEntry] @Message = @Message, @ExecutionGuid = @ExecutionGuid,
@Source = @Source, @MessageClass = 4;

-- At the end of a stored procedure:
SELECT @Message = 'Finished: ' + @Source;
EXECUTE [log].[spAddLogEntry] @Message = @Message, @ExecutionGuid = @ExecutionGuid,
@Source = @Source, @MessageClass = 4;

-- To log something specific:
SELECT @Message = 'Purged ' + CAST(@RowCount AS NVARCHAR) + ' log entries with RetentionClass ' + CAST(@RetentionClass AS NVARCHAR) + '.';
EXECUTE [log].[spAddLogEntry] @Message = @Message, @ExecutionGuid = @ExecutionGuid,
@Source = @Source, @MessageClass = 4, @RetentionClass = 0;



Back to Table of Contents

Stored procedure [log].[spPurgeLog]

Schedule the execution of this stored procedure daily when you do not want to set EnableAutoPurgeOnPackageEnd to 1 (true).

This stored procedure has a parameter @ExecutionGuid used for logging the purge operation, but you can omit the parameter, then a new unique identifier value will be used.


Example of use:



EXECUTE [log].[spPurgeLog]

Back to Table of Contents

Stored procedure [log].[spCreateCrossRefs]

Stored procedure to create synonyms in a database and create or replace the stored procedure [dbo].[sp_ssis_addlogentry] (hereafter collectively called: the cross references)

Use this stored procedure for two purposes:

  1. If you want to do logging in your own stored procedures in the database specified with parameter @DatabaseName
  2. If you want to redirect logging for SSIS Packages that use the database specified with parameter @DatabaseName for logging. In this way those packages will use the logging solution without the need to change the connection string.

You can set the following parameters:

  • @ExecutionGuid: Guid used for logging the configure operation. Can be NULL.
  • @DatabaseName: the name of the database in which the cross references must be created.
  • @DebugMode: if 1, more details will be shown in the output window
  • @SimulationMode: if 1, only the SQL Statements are shown in the output window, but they are not executed.
  • @DropOnly: if 1, the cross references are only dropped, but not (re)created.



Example of use:

USE [logdb]
GO
EXECUTE [log].[spCreateCrossRefs]
    @DatabaseName = 'MyOtherDb'
  , @DebugMode = 1
  , @SimulationMode = 1
  , @DropOnly = 0
GO

Back to Table of Contents

Stored procedure [log].[spSetMasterSourcePattern]

While packages run, the table [log].[Source] is filled with Source (= SSIS Package) and MasterSource information, based on the configuration setting @DefaultMasterSourcePattern.

If the value for @DefaultMasterSourcePattern is correct for all packages that run, you do not have to change anything.

However when the Master/child Source relations need to be changed, you can use stored procedure [log].[spSetMasterSourcePattern] to configure master- and child package relationships.

The definition of a master package is a package that executes another SSIS Package, the child package. When going more than one level deep, only the ultimate parent package (the one that starts everything else) has to be defined; the “package tree” in between is not relevant. For instance: Package A executes Package B, and Package B executes Package C. In this situation A should be defined as the master package for C.

You can set the following parameters:

  • @ExecutionGuid: Guid used for logging the configure operation. Can be NULL.
  • @Source: The log source or log source pattern. For a pattern use a % wildcard, e.g. “Load%” for all sources of which the name starts with “Load”.
  • @SourceGuid: Source Guid, if NULL Source will be retrieved with the value of @Source only.
  • @MasterSourcePattern: The master source to use for this @Source/@SourceGuid
  • @MasterSourceGuid: Master Source Guid, if NULL the connection to the Master Source will be made using only the @MasterSourcePattern. You only need to configure @MasterSourceGuid, if you have multiple master packages in different projects with the same name (e.g. all called “Master.dtsx”) that can run at the same time. In that situation you have to link child- to masterpackages using both @MasterSourcePattern and @MasterSourceGuid.

The easiest way to configure your master- and child packages is to use spConfigure to set @DefaultMasterSourcePattern so that it matches your master package name.
Then run your master package once. In this way the table [log].[Sources] is already filled with all executed (master and child) packages, so you do not have to do that yourself.
Then use the examples below to fine-tune, if necessary.
Examples of use:

--\
---) Set MasterSourcePattern for all sources (use only after SSIS Packages have run for the first time).
--/
EXECUTE [log].[spSetMasterSourcePattern]
  @Source = '%', @MasterSourcePattern = 'Master%'
GO

-- Then undo for the master sources themselves
EXECUTE [log].[spSetMasterSourcePattern]
  @Source = 'Master%', @MasterSourcePattern = NULL
GO

--\
---) Set MasterSourcePattern for one source.
--/
-- Without SourceGuid (use only after SSIS Packages have run for the first time).
EXECUTE [log].[spSetMasterSourcePattern]
  @Source = 'LoadDimCustomer', @MasterSourcePattern = 'DWHMaster'
GO

-- With SourceGuid and MasterSourceGuid (can be used prior to running any packages).
EXECUTE [log].[spSetMasterSourcePattern]
  @Source = 'LoadDimCustomer',
  @SourceGuid = '7480D07D-5099-4B76-9404-17C2AFD0E603', -- PackageID property of SSIS Package.
  @MasterSourcePattern = 'Master',
  @MasterSourceGuid = '0F429F36-784D-4F05-8D5A-9374609A32B9' -- PackageID property of your master SSIS Package.
GO



Back to Table of Contents

Stored procedures [log].[spOnExecutionStart] and [log].[spOnExecutionEnd]

Used internally by the logging solution.


Back to Table of Contents

User defined function [log].[fnLogEntry]

User defined function to select log entries.

This function has the following parameters:

  • @Source: Packagename to select log entries for (including log entries of subpackages). Can be NULL. If NULL then the last execution of any Package will be selected.
  • @LastXExecutions: A number to indicate for how many executions you want to see the log. The default value is 1.
  • @MaxMessageClass: Filter on MessageClass, for instance use 2 to show only log messages with a MessageClass less than or equal to 2. The default value is 6.
  • @IncludeRetention0: If 1, also messages with RetentionClass 0 (never delete) will be shown, regardless of their MessageClass.

Example of use:

SELECT * FROM [log].[fnLogEntry] ('Master', 1, 4, 1)
GO



Back to Table of Contents

How to implement logging in own executables

Logging in own executables (for instance C# Console Applications) is fairly easy to set up.
Keep the following in mind:

  • Generate a new guid every time the executable runs, and use it as value for @ExecutionGuid.
  • Assign the executable a persistent SourceGuid, and use it as value for @SourceGuid.
  • Use the executable name as value for @Source. While the executable runs, you can use lower level names (e.g. class and/or method names) for @Source, but still use the executable @SourceGuid, and not a different one.
  • When the executable starts, use the stored procedure spAddLogEntry to add an entry with @Event='ExecutionStart'. This triggers built-in facilities of the logging solution.
  • Just before the executable stops, use the stored procedure spAddLogEntry to add an entry with @Event='ExecutionEnd' (see the sketch after this list).
  • When the application is called from within a SSIS Package, use spSetMasterSourcePattern to create a master/child relationship.
  • If you are interested in a C# code example just let me know, and I will dedicate a future blog post to it.
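
To illustrate the pattern independent of the programming language, here is a hedged T-SQL sketch of the calls such an executable would make over its database connection (the executable name and SourceGuid are made up):

DECLARE @ExecutionGuid UNIQUEIDENTIFIER = NEWID(); -- a new guid for every run
DECLARE @SourceGuid UNIQUEIDENTIFIER = 'A1B2C3D4-0000-0000-0000-000000000001'; -- persistent per executable

EXECUTE [log].[spAddLogEntry] @Event = 'ExecutionStart', @Source = 'MyConsoleApp.exe',
    @SourceGuid = @SourceGuid, @ExecutionGuid = @ExecutionGuid,
    @MessageClass = 4, @Message = 'MyConsoleApp.exe started.';

-- ... the executable does its work, possibly adding more log entries ...

EXECUTE [log].[spAddLogEntry] @Event = 'ExecutionEnd', @Source = 'MyConsoleApp.exe',
    @SourceGuid = @SourceGuid, @ExecutionGuid = @ExecutionGuid,
    @MessageClass = 4, @Message = 'MyConsoleApp.exe finished.';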

 

Download the logging solution script here.

How about updates?

I intend to update the solution when needed.
If you find bugs or have interesting additions, I would be thankful if you can share those with me so I can improve the logging solution.

(c) 2016 hansmichiels.com – Do not steal the contents – spread the link instead – thank you.

Isolate cross database “databasename” dependencies using synonyms and schemas (Data Vault Series)

Intended audience

This post could be interesting for professionals working on datawarehouse projects using the Microsoft SQL Server database engine. This post assumes that you have at least a basic understanding of Data Vault principles.

Introduction

Those who have read the book “Building a Scalable Data Warehouse with Data Vault 2.0” by Dan Linstedt and Michael Olschimke will know that database schemas are used in the examples in the book for the “raw” and “biz” (business) layer of the Data Vault.

If you do not know this book, buy it; if you are the intended audience you should read it (no, I have no commercial interest in saying this), so check it out on Amazon.

What is done in the book is putting tables or views in the database schemas “raw” and “biz”, respectively, making it transparent whether a satellite is virtual (and at least partially computed) or persisted in a physical table.

Putting these data layers in different schemas is a great starting point for my interdatabase connectivity concept.

What data layers can we have

  • Staging area
  • Raw Data Vault
  • Business (Data) Vault
  • Metrics Vault
  • Error Vault
  • Information Marts

A rule of thumb is that each layer resides in its own database, but multiple layers can also be combined in one database. What should be avoided is spreading one layer over multiple databases. This does not apply to different Information Marts though, which should each be put in their own database.

Whether to put layers together can depend, amongst other things, on backup requirements (the biz layer could for instance be restored from a database backup OR by doing some reprocessing from the Raw Data Vault, which could lead to the decision to back up the database less frequently).

A typical set up could be:

  • Staging Area – separate database, “stg” database schema.
  • Raw Data Vault – together with Business Vault, “raw” database schema.
  • Business Vault – together with Raw Vault, “biz” database schema.

To keep the example simple I will leave the other data layers out for now.

What is the problem?

To be short: SQL statements copying data to a different database.

When using data flows in SSIS, this problem does not occur.

Why is it bad?

Database names are prefixed in FROM clauses, causing a dependency on the database name.

So what, I still don’t see the problem

Scenarios where it can become a hassle to manage this include:

  • Running different environments on the same database instance (e.g. development and test), distinguishing the environments by database name. When deploying code or copying databases, all interdatabase dependencies must be changed.
  • Running different environments for multiple clients in a BIAAS (Business Intelligence As A Service) solution on the same database instance.

So, let’s divide data layers using database schemas ..

This is an example of the initial set up.

Dv01E01_dbschema1
Please note that you should keep all objects (tables, views, stored procedures, udf’s) from one data layer together in one database.

Now, to pull data from the “stg” to the “raw” layer, SQL statements could be like (simplified example):

Insert.sql
INSERT INTO [raw].[TableName]
SELECT ..
FROM [EDW_STG_TEST].[stg].[TableName]

But, as you can see the database name is prefixed before the Staging table. And this is a problem (at least it makes managing the solution more complex).

How can we solve this?

Managing interdatabase dependencies using synonyms

First, a little explanation from MSDN:

A synonym is a database object that serves the following purposes:

  • Provides an alternative name for another database object, referred to as the base object, that can exist on a local or remote server.
  • Provides a layer of abstraction that protects a client application from changes made to the name or location of the base object.

Read more on MSDN.

What if we had made a synonym for the staging table in the “EDW_DV_TEST” database, also in a “stg” schema in that database?

Then we could omit the database name from the SQL statement:

Insert.sql
INSERT INTO [raw].[TableName]
SELECT ..
FROM [stg].[TableName]
That is what we want!
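
The synonym used above could be created like this (a minimal example for one table; the script further below automates this for all objects):

-- Executed in the EDW_DV_TEST database:
CREATE SYNONYM [stg].[TableName]
    FOR [EDW_STG_TEST].[stg].[TableName];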

In a picture:

Dv01E01_dbschema2

The good news is, creating the synonyms can be automated with a script, so a change of the Staging database name only requires running a SQL script in the EDW_DV_TEST to drop and recreate the synonyms for the staging database objects!

One other advantage is that you could merge different layers into one database or split them into different databases. For example, all code will keep on working if you move all Staging tables to the EDW_DV_TEST database, thereby replacing the synonyms already there in the stg schema with actual tables. All you would have to do is change the connection string for the processes that fill the staging tables.

In a similar way, you could decide to separate the raw and biz layer into different databases.

Script to create or change the synonyms

DV-S01E01-Create_synonyms_for_objects_in_a_different_database.sql

--\
---) Author: Hans Michiels
---) Creates synonyms for tables, views, user defined functions
---) and stored procedures in a different database.
---) All synonyms in the specified schema are dropped
---) prior to (re)creation.
--/
/*
(c) Copyright 2016 - hansmichiels.com
 
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
 
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
 
You should have received a copy of the GNU General Public License
along with this program. If not, see http://www.gnu.org/licenses/.
*/

USE [AdventureWorks2014] -- Replace with database you want to create the synonyms in.
GO

--\--------------------------------------------------------------------------
---) DECLARATIONS
--/--------------------------------------------------------------------------

DECLARE @SourceDatabase SYSNAME
DECLARE @SourceSchema SYSNAME
DECLARE @SynonymsSchema SYSNAME

DECLARE @SynonymsPrefix NVARCHAR(128)
DECLARE @SynonymName SYSNAME

DECLARE @CurSchemaName SYSNAME
DECLARE @CurObjectName SYSNAME
DECLARE @CurObjectType CHAR(2)

DECLARE @nsql NVARCHAR(4000)
DECLARE @DebugPrint NVARCHAR(500)
DECLARE @SimulationMode CHAR(1)

--\--------------------------------------------------------------------------
---) CONFIGURATION: adjust the variable values to meet your needs.
--/--------------------------------------------------------------------------

SELECT @SourceDatabase = 'AW2014_Staging' -- Database in which objects exist
                                          -- for which synonyms must be created.
     , @SourceSchema = 'stg' -- Schema name of source objects.
     , @SynonymsSchema = 'stg' -- Schema name to put synonyms in.
     , @SynonymsPrefix = '' -- Optional Synonyms name prefix.
     , @SimulationMode = 'Y' -- Use Y if you only want the SQL statement in the
                               -- output window without it being executed.

--\--------------------------------------------------------------------------
---) INITIALIZATION
--/--------------------------------------------------------------------------

IF NOT EXISTS(SELECT 1 FROM sys.schemas WHERE name = @SynonymsSchema)
BEGIN
    SELECT @nsql ='CREATE SCHEMA ' + QUOTENAME(@SynonymsSchema) + ' AUTHORIZATION [dbo];';
    IF @SimulationMode = 'Y' PRINT '----- SIMULATION MODE: SCRIPT IS NOT EXECUTED ------ ';
    PRINT @nsql;
    IF @SimulationMode != 'Y' EXECUTE sp_executesql @nsql;
END

--\--------------------------------------------------------------------------
---) MAIN PART I: Drop all existing synonyms in the specified schema.
--/--------------------------------------------------------------------------

DECLARE ExistingObjCursor CURSOR LOCAL STATIC FOR
    SELECT syn.name AS [SynonymName]
    FROM sys.synonyms syn
    JOIN sys.schemas sch
      ON syn.schema_id = sch.schema_id
    WHERE sch.name = @SynonymsSchema
 
IF @SimulationMode = 'Y' PRINT '----- SIMULATION MODE: SCRIPT IS NOT EXECUTED ------ ';

SET @DebugPrint = '-- Dropping all existing synonyms in schema ' + QUOTENAME(@SynonymsSchema)
PRINT REPLICATE('-', LEN(@DebugPrint))
PRINT @DebugPrint
PRINT REPLICATE('-', LEN(@DebugPrint))

OPEN ExistingObjCursor
FETCH NEXT FROM ExistingObjCursor INTO @SynonymName

WHILE @@FETCH_STATUS = 0
BEGIN

    SET @nsql = 'IF OBJECT_ID(''' + QUOTENAME(@SynonymsSchema) + '.' + QUOTENAME(@SynonymName) + ''', ''SN'') IS NOT NULL'
          + ' DROP SYNONYM ' + QUOTENAME(@SynonymsSchema) + '.' + QUOTENAME(@SynonymName) + ';'

    PRINT @nsql;
    IF @SimulationMode != 'Y' EXECUTE sp_executesql @nsql;

    FETCH NEXT FROM ExistingObjCursor INTO @SynonymName
END

CLOSE ExistingObjCursor
DEALLOCATE ExistingObjCursor

--\--------------------------------------------------------------------------
---) MAIN PART II: Recreate all synonyms in the specified schema.
--/--------------------------------------------------------------------------

IF OBJECT_ID('tempdb..#SourceObjects') IS NOT NULL BEGIN
    DROP TABLE #SourceObjects
END

CREATE TABLE #SourceObjects
    ( [SchemaName] SYSNAME NOT NULL
    , [ObjectName] SYSNAME NOT NULL
    , [ObjectType] CHAR(2) NOT NULL
    )

-- Insert USER objects
SELECT @nsql =
'INSERT INTO #SourceObjects([SchemaName], [ObjectName], [ObjectType])'
+ ' SELECT sch.name AS [SchemaName]'
+ ' , obj.name AS [ObjectName]'
+ ' , obj.type AS [ObjectType]'
+ ' FROM [' + @SourceDatabase + '].sys.objects obj'
+ ' JOIN [' + @SourceDatabase + '].sys.schemas sch'
+ ' ON sch.schema_id = obj.schema_id '
+ ' WHERE sch.name = ''' + @SourceSchema + ''''
+ ' AND obj.type IN ( '
+ ' ''FN'' /* SQL_SCALAR_FUNCTION */ '
+ ' , ''P '' /* SQL_STORED_PROCEDURE */ '
+ ' , ''IF'' /* SQL_INLINE_TABLE_VALUED_FUNCTION */ '
+ ' , ''TF'' /* SQL_TABLE_VALUED_FUNCTION */ '
+ ' , ''U '' /* USER_TABLE */ '
+ ' , ''V '' /* VIEW */ '
+ ' )'
+ ' ORDER BY obj.type, sch.name, obj.name'

PRINT '/*'
EXECUTE sp_executesql @nsql;
PRINT '*/'
DECLARE ObjectCursor CURSOR LOCAL STATIC FOR
    SELECT [SchemaName], [ObjectName], [ObjectType]
    FROM #SourceObjects
 
IF @SimulationMode = 'Y' PRINT '----- SIMULATION MODE: SCRIPT IS NOT EXECUTED ------ ';

SET @DebugPrint = '-- Recreate synonyms in schema ' + QUOTENAME(@SynonymsSchema)
PRINT REPLICATE('-', LEN(@DebugPrint))
PRINT @DebugPrint
PRINT REPLICATE('-', LEN(@DebugPrint))

OPEN ObjectCursor
FETCH NEXT FROM ObjectCursor INTO @CurSchemaName, @CurObjectName, @CurObjectType

WHILE @@FETCH_STATUS = 0
BEGIN

    SELECT @SynonymName = @SynonymsPrefix + @CurObjectName
    
    SET @nsql = 'CREATE SYNONYM ' + QUOTENAME(@SynonymsSchema) + '.' + QUOTENAME(@SynonymName)
              + ' FOR ' + QUOTENAME(@SourceDatabase) + '.' + QUOTENAME(@CurSchemaName) + '.' + QUOTENAME(@CurObjectName) + ';'

    PRINT @nsql;
    IF @SimulationMode != 'Y' EXECUTE sp_executesql @nsql;

    FETCH NEXT FROM ObjectCursor INTO @CurSchemaName, @CurObjectName, @CurObjectType
END

CLOSE ObjectCursor
DEALLOCATE ObjectCursor
GO

--\--------------------------------------------------------------------------
---) CLEAN UP
--/--------------------------------------------------------------------------
IF OBJECT_ID('tempdb..#SourceObjects') IS NOT NULL BEGIN
    DROP TABLE #SourceObjects
END

Download the script here.

Conclusion / wrap up

Database synonyms can be used to isolate cross-database dependencies in objects that can easily be updated when a database name changes.

This makes managing the Data Vault solution easier.

This concept can also be of use in non-Data Vault database environments.

(c) 2016 hansmichiels.com – Do not steal the contents – spread the link instead – thank you.

Top 10 Design Mistakes seen in OLTP production systems (SQL Server Series)

Introduction


If you have ever had to source your datawarehouse from OnLine Transaction Processing (OLTP) databases, I suppose this top 10 will bring a “joy” of recognition for you.

Top 10 Design Mistakes seen in OLTP production systems

1. Not having referential integrity enforced

Without actual foreign key constraints in the database, you will get orphaned records sooner or later (e.g. order line rows for orders that were already deleted). This is bad for data quality and can hide bugs in the front-end software that maintains the data.
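
As a minimal sketch (the table and column names are made up for illustration, they are not from a real system), enforcing the relation between order lines and orders takes only one constraint:

-- Hypothetical tables, for illustration only.
ALTER TABLE [dbo].[OrderLine]
    ADD CONSTRAINT [FK_dbo_OrderLine_OrderId]
    FOREIGN KEY ([OrderId]) REFERENCES [dbo].[Order] ([OrderId]);
-- From now on an order line can no longer refer to a deleted or non-existing order,
-- and deleting an order that still has order lines will fail.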


2. Foreign key columns having a totally different name than the primary key they refer to

Especially when point 1 applies, it is difficult to tell that a foreign key column “account_id” refers to the “customer_id” column in the customer table. When reverse engineering an OLTP design, this does not really help you to understand it.


3. Foreign key columns missing “Id”, “Key” or “Guid” in the name

I personally dislike a column named “customer” when it has an integer or globally unique identifier datatype. Although you can conclude from the datatype that the column cannot contain the customer name, there are many situations where you only see the column name, and then it can be confusing.


4. Foreign key columns meaning more than one thing

More than once, in standard software that is highly adjustable or configurable, the data model is “flexible”: for example, link tables where object_id can refer to a customer, an employee, or something else entirely. Apart from the fact that referential integrity cannot be enforced with a physical constraint, it makes it more difficult to understand the model and/or to get data out.
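
A minimal sketch of such a construction (the table and column names are invented for illustration): because object_id can point to different tables depending on object_type, no foreign key constraint can be declared on it.

-- Hypothetical "flexible" link table, for illustration only.
CREATE TABLE [dbo].[ObjectNote](
    [ObjectNoteId] INT IDENTITY(1, 1) NOT NULL CONSTRAINT [PK_dbo_ObjectNote] PRIMARY KEY,
    [ObjectType]   VARCHAR(20) NOT NULL, -- 'Customer', 'Employee', ...
    [ObjectId]     INT NOT NULL,         -- refers to dbo.Customer OR dbo.Employee,
                                         -- so no FOREIGN KEY constraint can be defined here.
    [NoteText]     VARCHAR(1000) NOT NULL
);
-- Getting the data out requires conditional joins, for example:
-- LEFT JOIN [dbo].[Customer] cst ON n.[ObjectType] = 'Customer' AND cst.[CustomerId] = n.[ObjectId]
-- LEFT JOIN [dbo].[Employee] emp ON n.[ObjectType] = 'Employee' AND emp.[EmployeeId] = n.[ObjectId]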


5. Accessing the data model through low-performance all-in-one views, protecting base tables from direct access

They exist! Views with 700+ columns, probably joining 20+ tables, meant to give an all-in-one view of an account. And still, you then need an extra column from an already joined table that is not in the list of 700+ columns. You can guess what this leads to: suboptimal queries with poor performance. Whereas, if you were allowed to write a view directly on the base tables, you could optimize it for your needs.


6. Oversized column lengths

Ever seen a varchar(max) column for a country code with a maximum length of 3? I have. While this is an extreme example, oversized columns lead to several problems (a small example follows after the lists below):

  • SQL Server handles the column differently. Without going into too much detail, a char(3) column for the country code would be stored much more efficiently than a varchar(max) column.
  • You do not always know how much space must be reserved for the column on reports and so on.
  • If you use the value in concatenated columns (e.g. business keys), you need to size the concatenation according to the physical maximum size, even when the actual data will never fill it up.


Oversizing can happen:

  • when front-end C# developers use “code first” and the column length is not defined in the code. Therefore I recommend that, when “code first” is used, a database developer or DBA takes a critical look at the generated physical data model before going live.
  • when the column size is not specified in design documents.
  • when people just don’t care to size the column properly. Oversizing seems like a safe thing to do: if the column is big enough, no one will complain!
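
As a small sketch of the country code example mentioned above (the names are illustrative), a properly sized column is simply:

-- Oversized: a varchar(max) column for a 3-character ISO country code.
-- [CountryCode] VARCHAR(MAX) NOT NULL
-- Properly sized: a fixed length of 3, stored and indexed much more efficiently.
CREATE TABLE [dbo].[Country](
    [CountryCode] CHAR(3) NOT NULL CONSTRAINT [PK_dbo_Country] PRIMARY KEY,
    [CountryName] VARCHAR(100) NOT NULL
);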


7. Oversized INT Datatypes

Or: to smallint or not to smallint. When reference tables contain only a few rows, a smallint or even tinyint datatype for the primary key will do! If that primary key is used in foreign key constraints of large tables (with many rows), less storage is needed and indexes will be smaller, all leading to better performance.

Unfortunately, SQL Server does not support unsigned smallint, int and bigint datatypes, but nobody stops you from starting to count at the minimum negative value, so at -32768 instead of 1 for a smallint! Just change the seed part of the IDENTITY specification. If you do this, you can store up to 65,536 rows in a table with a smallint primary key.

MSSQL-S01E01-Top 10 Design Mista…

--\
---) Author: Hans Michiels
---) Script to demonstrate how to start an
---) identity column with a negative number
--/

IF OBJECT_ID('[dbo].[A65kRowsTable]', 'U') IS NOT NULL
    DROP TABLE [dbo].[A65kRowsTable]
GO
CREATE TABLE [dbo].[A65kRowsTable](
    [ID] SMALLINT IDENTITY(-32768, 1) NOT NULL, -- seed at the minimum negative value
    [SomeColumn] VARCHAR(50) NOT NULL,
    CONSTRAINT [PK_dbo_A65kRowsTable_ID] PRIMARY KEY CLUSTERED
    (
        [ID] ASC
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
INSERT INTO [dbo].[A65kRowsTable] ( [SomeColumn] )
VALUES ('-32768 is the first ID.')
GO
SELECT [ID], [SomeColumn]
FROM [dbo].[A65kRowsTable]
GO


If you apply this trick to an INT, you can store about 4.3 billion rows and you might in some cases be able to avoid the need for a bigint.


8. Lack of unique indexes for business or natural keys

Again a data quality issue. If the primary key is a surrogate key (an autonumbered integer type), a table can still have one or more columns that make a row unique: for example, a social security number in a citizens table. Not enforcing this constraint with a unique index can lead to two citizens with the same social security number! This can hide bugs in the front-end software, fraud, or both. Anyway, it is not wise to take the risk that this happens.
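
A minimal sketch (the table and column names are invented for illustration): one unique index on the business key is enough to enforce this.

-- Hypothetical citizens table with a surrogate primary key.
CREATE TABLE [dbo].[Citizen](
    [CitizenId] INT IDENTITY(1, 1) NOT NULL CONSTRAINT [PK_dbo_Citizen] PRIMARY KEY,
    [SocialSecurityNumber] CHAR(9) NOT NULL,
    [FullName] VARCHAR(100) NOT NULL
);
-- The business/natural key is enforced with a unique index:
CREATE UNIQUE NONCLUSTERED INDEX [IX_dbo_Citizen_SocialSecurityNumber]
    ON [dbo].[Citizen] ([SocialSecurityNumber]);
-- Inserting a second citizen with the same social security number now raises an error instead of creating bad data.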


9. Guids stored as (n)varchar

A guid, short for Globally Unique IDentifier, is a generated value of 16 bytes that is guaranteed to be globally unique. It is sometimes used for primary key columns instead of integer datatypes.
I will leave it open whether this is a wise thing to do; there might be some use cases where it is a valid approach, but I would only do this if using integers is not an option.

SQL Server has the UNIQUEIDENTIFIER datatype to store guids. Internally, this datatype uses 16 bytes, but the value is represented as a hexadecimal string of 36 characters (32 for the 16 bytes, with 4 hyphens in between), sometimes surrounded by curly brackets, e.g. {935D9FA8-2C77-4C34-894C-8FCDA8E47F19}.
But if the guid is stored in a varchar(36) column, it will use 36 bytes for the value plus 2 bytes overhead, so 38 bytes in total. Compared with the 16 bytes for the UNIQUEIDENTIFIER, you can imagine that storage space and index size will increase and performance will decrease.
Not to mention nvarchar(36), the Unicode variant of varchar, which uses 2 bytes for every character plus 2 bytes overhead, so 74 bytes in total. This is massive! I have seen nvarchar(36) used in a production system to store a guid and would classify it as a “My first database” mistake. Only very inexperienced developers, who hardly know anything about datatypes, would do this. Don’t be one of them!
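
A quick way to verify these value sizes yourself (a small sketch, not from the original post; DATALENGTH returns the bytes used by the value itself, excluding the 2 bytes of variable-length overhead):

-- Compare the storage size of the same guid in different datatypes.
DECLARE @guid UNIQUEIDENTIFIER = NEWID();
SELECT DATALENGTH(@guid)                        AS [uniqueidentifier_bytes] -- 16
     , DATALENGTH(CONVERT(VARCHAR(36), @guid))  AS [varchar36_bytes]        -- 36
     , DATALENGTH(CONVERT(NVARCHAR(36), @guid)) AS [nvarchar36_bytes];      -- 72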



10. Where’s the DBA?

This point is more about daily operations than about design, but I still want to mention it. If you are using SQL Server databases, you need to maintain them. A simple maintenance plan is easy to set up and will do index and statistics maintenance for you, make backups, and can prevent the transaction log file from growing excessively. If you don’t have an internal DBA, hire one to set this up and to monitor your SQL Server instances.
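
As a small sketch of the kind of check such maintenance takes care of (this query is illustrative, not part of a maintenance plan itself), you can list fragmented indexes in the current database like this:

-- List indexes with more than 30% fragmentation in the current database.
SELECT OBJECT_NAME(ips.object_id) AS [TableName]
     , ix.name                    AS [IndexName]
     , ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ips
JOIN sys.indexes ix
  ON ix.object_id = ips.object_id
 AND ix.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 30
  AND ix.name IS NOT NULL -- skip heaps
ORDER BY ips.avg_fragmentation_in_percent DESC;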

(c) 2016 hansmichiels.com – Do not steal the contents – spread the link instead – thank you.