Tag Archives: mssql

Using alternate (non-time) triggers to start a SQL Agent job (SQL Server Series)

Updates

Since the first publication of this post, I have made a number of modifications to the tool:

Change log

Date Changes
25 September, 2016 Major update:
(1) extended to use an unlimited number of jobtrigger conditions;
(2) added SQLAgentJobResult trigger conditions;
(3) configuration via config file no longer supported.
28 January, 2017 Major update:
(1) added “Running” as a state that can be checked for a job (in the previous version the possible values were “Succeeded”, “Failed” or “Completed”);
(2) added jobcondition property “reverse_condition”, which enables negative triggers, e.g. a file must not be found, a job must not be running;
(3) added a check that the jobs used for jobtriggers and jobtrigger conditions actually exist; if not, an error is thrown. This prevents a silent failure when a job is renamed or removed after a jobtrigger has been made.

Introduction

Today some SQL Agent stuff. I thought the SQL Agent worked just great, until I got this question: can you start this job when a file is dropped in folder X, but only if another process has already done something in the database? Huh?

Problem

With the SQL Agent I can only schedule a job using time triggers, while I need different triggers like
the existence of one or more files, a certain database state and/or other SQL Agent Job(s) having succeeded, failed or just completed.

Solution

So I needed something that checked if a file exists in a folder, and if something was done in the database, and if both are true, starts a SQL Agent job.
I decided to create a C# Console Application named SQLAgentJobStarter that could do this job for me. But I wanted it to be a bit more flexible, using one or more trigger conditions of different types, like a file trigger or a sql trigger, or both (as in the requirement above).
Later on I extended this with jobtrigger conditions, and I changed the definitions a bit. A trigger can now have one or more conditions of different types:

  • file jobtrigger condition
  • sql jobtrigger condition
  • SQLAgentJobResult jobtrigger condition

The different condition types can be combined in one jobtrigger, e.g. SQLAgentJob “Job_A” must have succeeded and file test.txt must be in folder C:\temp.

In this article I want to concentrate on instructing you how to use this tool, instead of going into depth from a coding point of view.
I have made this tool an open source project on GitHub, so you can still look at the code (SQLAgentJobStarter_VisualStudio2015_SourceCode.zip) if you want, but that’s up to you.

DISCLAIMER: please use this tool at your own risk. It is in a beta state; you should not use it in production environments until you have convinced yourself that it does what it should do.

Installation and demonstration of the tool

One important comment first. This tool is not a Windows Service. It is “just” a Console Application that needs to be scheduled using a SQL Agent job or Windows Task Scheduler.
Having said that, what are we going to do?

  • We are going to create two tables in the msdb database: [dbo].[sysjobtriggers] and [dbo].[sysjobtrigger_conditions]. All other SQL Agent job data is also in this database, so this seems a fair place. Still I want to emphasize the importance of backing up this database on a regular basis, preferably daily. But if you have a DBA and a maintenance plan set up, this might be needless to say.
  • We are going to create a few stored procedures in the msdb database to maintain those tables.
  • We are going to install the executable HansMichiels.com.Tools.SQLAgentJobStarter.exe and corresponding config file in the C:\Program Files\SQLAgentJobStarter\MSSQL13.MSSQLSERVER folder. MSSQL13.MSSQLSERVER represents the SQL Version number and Instance name, in the example a default instance of SQL Server 2016. Change it to the name of your SQL Server version and instance. In this way you can install the tool side by side for different SQL Instances on the same machine.
  • Before you start using the tool, check the SQLConnectionString in the HansMichiels.com.Tools.SQLAgentJobStarter.exe.config file. By default, this is configured as follows: key="SQLConnectionString" value="Data Source=(local);Initial Catalog=msdb;Integrated Security=SSPI;". This is okay for a local default SQL Server instance, but not usable for a named instance; in that case change Data Source to ServerName\InstanceName.
  • We are going to schedule the tool using a SQL Agent job.
  • Now you are ready to rock! All that needs to be done is adding jobtriggers and jobtrigger_conditions to the msdb database. But to help you with this, I will show you a demo now.

Installation – technical steps

  • Download all the needed files here.
  • Unzip the files in SQLAgentJobStarter_ProgramFiles.zip, and copy them to a folder on your server, I used C:\Program Files\SQLAgentJobStarter\MSSQL13.MSSQLSERVER.
  • Review the script msdb_install.sql and if needed adjust the foldername here: @command=N'"C:\Program Files\SQLAgentJobStarter\MSSQL13.MSSQLSERVER\HansMichiels.com.Tools.SQLAgentJobStarter.exe"',
  • After reviewing, execute the script msdb_install.sql
  • Review the SQL Agent job ##SQLAgentJobStarter##, especially the schedule. For the demo the polling interval of the schedule is 1 minute, but I advise to set it to 3 or 5 minutes for production environments, because that interval is usually good enough.

    Also enable the schedule; it is installed in a disabled state.

So far, so good. The tool is installed.

Installation – preparation for the demo.

If you want to join the demo, also do the following:

  • execute script demo_setup_sqlagentjobs.sql: this script will set up dummy SQL Agent jobs that are used for the demo.
  • execute script demo_setup_storedprocedures.sql: this script will create a database [MSSQL_S01E04] with some stored procedures that are used for sql triggers.
  • create a folder C:\Temp, if it does not exist yet, and create a file with a .txt extension in it.
  • execute script demo_setup_sysjobtriggers.sql: this script will add demo rows to the tables [dbo].[sysjobtriggers] and [dbo].[sysjobtrigger_conditions]
  • enable all schedules. To do this you can right-click on the Jobs node of SQL Server Agent and click on Manage Schedules, then select all the schedules for the demo jobs (their names start with Schedule for MSSQL_S01E04) and the schedule for ##SQLAgentJobStarter##, if not enabled yet.


The Manage Schedules menu-item ..


Check the Enabled checkbox for the applicable schedules ..

The jobs that are created so far

The tables in msdb that hold the trigger information ..

The stored procedures in msdb used to maintain the jobtriggers ..

The [MSSQL_S01E04] database with stored procedures, used for the demo.

The C:\Temp folder, that should contain one or more .txt files ..

Viewing the demo results.

Now just wait for about 10 minutes.
In the meantime I can tell you the expected test results:

  • MSSQL_S01E04_Job01_SqlConditionTrue, MSSQL_S01E04_Job02_FileConditionTrue and MSSQL_S01E04_Job03_FileAndSqlConditionTrue will be executed every time after the job ##SQLAgentJobStarter## has run, because their triggers (C:\Temp\*.txt and [MSSQL_S01E04].[sp].[SqlTriggerExample]) both return True.
  • MSSQL_S01E04_Job04_TwoJobSuccessConditions will only be executed after both triggering jobs have succeeded.
  • MSSQL_S01E04_Job05_JobFailureCondition will only be executed after the triggering job has failed.
  • MSSQL_S01E04_Job06_JobConditionThatNeedsToComplete will only be executed after the triggering job has either succeeded or failed.
  • MSSQL_S01E04_Job07_RunningJob will only be executed when the triggering job is running.
  • MSSQL_S01E04_Job08_NonRunningJob will only be executed when the triggering job is not running. Because job 7 and 8 both have the same triggering job, either MSSQL_S01E04_Job07_RunningJob or MSSQL_S01E04_Job08_NonRunningJob will be started after the job ##SQLAgentJobStarter## has run.
  • MSSQL_S01E04_Job09_SqlConditionFalse will never be executed because the jobtrigger uses a sql trigger using stored procedure [MSSQL_S01E04].[sp].[SqlTriggerFalse], that always returns 0 (false).
  • MSSQL_S01E04_Job10_ReversedFileCondition_RunsWhenFileDoesNotExist will only be executed when the file “C:\Temp\NowJob10DoesNotRun.txt” does not exist.

Let’s see if it worked out!


Viewing the job history of job 1 to 3..
In the job history I can see that:

  • MSSQL_S01E04_Job01_SqlConditionTrue, MSSQL_S01E04_Job02_FileConditionTrue and MSSQL_S01E04_Job03_FileAndSqlConditionTrue were executed every time after the job ##SQLAgentJobStarter## had run.
  • ##SQLAgentJobStarter##, the job that caused this, was also executed every minute, as scheduled.


Viewing the job history of job 4..

  • MSSQL_S01E04_Job04 was only executed after both triggering jobs had succeeded.


Viewing the job history of job 5..

  • MSSQL_S01E04_Job05 was only executed after the triggering job had failed.


Viewing the job history of job 6..

  • MSSQL_S01E04_Job06 was executed after the triggering job had either succeeded or failed.


Viewing the job history of job 7 and 8..

  • MSSQL_S01E04_Job07_RunningJob was executed while the triggering job was still running (job MSSQL_S01E04_Trigger_for_Job07_and_Job08 runs every other minute for 30 seconds).


Viewing the job history of job 9..

  • MSSQL_S01E04_Job09_SqlConditionFalse was never executed: it is checked on the left, but does not occur on the right.


Viewing the job history of job 10 when the file “C:\Temp\NowJob10DoesNotRun.txt” does not exist.

Viewing the job history of job 10 after the file “C:\Temp\NowJob10DoesNotRun.txt” is created.

  • MSSQL_S01E04_Job10_ReversedFileCondition_RunsWhenFileDoesNotExist was only executed when the file “C:\Temp\NowJob10DoesNotRun.txt” did not exist.

But there is more. The tool will write to the Windows Eventlog when an error occurs.
To view the Windows Eventlog, press <Windows Key>+R, then type eventvwr


Example of errors that are logged.

Make sure you clean up your server! Use the script demo_cleanup.sql for that; otherwise the tool will keep on executing the demo jobs. Also change the interval of the ##SQLAgentJobStarter## job from 1 to 3 or 5 minutes.

Understanding the stored procedures of the solution

Stored procedure [dbo].[usp_upsert_jobtrigger]

With this stored procedure you can insert or update a jobtrigger.

You can set the following parameters:

  • @trigger_id: If provided as input, it will be used for checking if an update or insert needs to be done. If left null or smaller than or equal to 0, it will be filled in the stored procedure.
  • @enabled: Only enabled jobtriggers are processed. Values 0 or 1 (required).
  • @name: Name for the jobtrigger (unique + required). It is used for checking if an update or insert needs to be done when @trigger_id is null or smaller than or equal to 0.
  • @sqlagent_jobname: The name of the SQL Agent job that must be started when all jobtrigger conditions are true (required).
  • @processing_order: Indicates the order in which jobtriggers must be processed (required). Duplicate processing order values are allowed.
  • @description: An optional description.
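
To give you an idea, a call could look like the sketch below. The trigger and job names are made up for illustration, and it is assumed that @trigger_id may be passed as NULL (as described above); see demo_setup_sysjobtriggers.sql for real, working examples.

USE [msdb]
GO
-- Hypothetical example: create (or update) a jobtrigger that starts the job 'Job_B'.
EXEC [dbo].[usp_upsert_jobtrigger]
  @trigger_id = NULL,             -- NULL: let the stored procedure assign the id
  @enabled = 1,
  @name = N'Start Job_B',         -- made-up trigger name
  @sqlagent_jobname = N'Job_B',   -- made-up name of the job that must be started
  @processing_order = 10,
  @description = N'Starts Job_B when all its jobtrigger conditions are true.';
GO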

Stored procedure [dbo].[usp_upsert_jobtrigger_condition]

With this stored procedure you can insert or update a jobtrigger condition of any type.

You can set the following parameters:

  • @condition_id: If provided as input, it will be used for checking if an update or insert needs to be done. If left null or smaller than or equal to 0, it will be filled in the stored procedure.
  • @trigger_id: The trigger_id of the jobtrigger to which the jobtrigger condition belongs.
  • @enabled: Only enabled jobtrigger conditions are processed. Values 0 or 1 (required).
  • @processing_order: Indicates the order in which jobtrigger conditions must be processed (required). Duplicate processing order values are allowed.
  • @description: An optional description.
  • Note regarding the parameters underneath: You must provide values for the one @sql* parameter OR the three @file* parameters OR the three @job* parameters.
  • @sql_stored_procedure: The fully qualified name (including database name) of a stored procedure that returns a single row with a single column named [SqlTriggerResult] of the BIT datatype.
  • @file_watch_folder: The folder that is searched for files.
  • @file_recurse_subfolders: Indicates whether subfolders of the watch folder must also be checked.
  • @file_name_pattern: A filename or filename pattern with wildcards of the file that must exist for the triggercondition to be true.
  • @job_name: The name of a SQLAgentjob that must be completed with the desired value for @job_result for the triggercondition to be true.
  • @job_result: Domain values [S | F | C], meaning Succeeded, Failed or Completed (= Succeeded or Failed).
  • @job_result_maxage: An age in seconds, the maximum time ago the job may have completed. Use -1 for unlimited. For instance, suppose you want a SQL Agent job to start when 4 other jobs, that are scheduled daily and partially run simultaneously, have all succeeded.
    The time difference between the first and the last estimated completion time is 60 minutes, and the time difference between the first scheduled starttime and the last estimated completion time is 120 minutes.
    The value must then be at least 60 minutes (plus a safety margin), so the condition still holds for the job that completed first, and at most 22 hours (24 hours minus 120 minutes), so that yesterday’s results are not counted. Any value in between, recalculated as seconds, would be fine.
  • @reverse_condition: If the value 1 (true) is used, the condition must have the opposite result to trigger the job to start. For example, a file must not be found or a job must not be running.
    A few things are important to understand:

    1. Job results “Succeeded”, “Failed” or “Completed” check for a result; job state “Running” (or not) checks for a current state. Do not get confused about this.
    2. @reverse_condition does not work for job results “Succeeded” and “Failed”, and there is no need to: the opposite of “Succeeded” is “Failed”, so you should implement it that way. “Not Succeeded”, if it were implemented with @reverse_condition = 1, would mean any result except “Succeeded”, so “Failed” or “No execution found”, which is rarely what you want.
    3. @reverse_condition does not work for “Completed” for the same reason: “Not Completed”, if it were implemented with @reverse_condition = 1, would mean any state except “Completed”, so “No execution found” or a job that is still running. Instead of “Not Completed” you could check for “Running”.
    4. If @job_result = ‘R’ (Running) and @reverse_condition = 0, then @job_result_maxage is not used: when the job is running (and the condition is met), @sqlagent_jobname is only started if it was not already started during the current execution of the running job (so a running job will not cause the other job to run more than once).
    5. If @job_result = ‘R’ (Running) and @reverse_condition = 1, then @job_result_maxage is used differently: when the job is not running (and the condition is met), @sqlagent_jobname is only started if it was not started during the last @job_result_maxage seconds. Therefore it is best to use “Not Running” only in conjunction with some other condition.
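
As an illustration, the sketch below adds a jobtrigger condition of type ‘sql’ to a jobtrigger, using the demo stored procedure [MSSQL_S01E04].[sp].[SqlTriggerExample]. The trigger_id is made up and it is assumed that the unused @file* and @job* parameters can simply be omitted; demo_setup_sysjobtriggers.sql contains real, working examples.

USE [msdb]
GO
-- Hypothetical example: a sql condition for the jobtrigger with trigger_id 1.
EXEC [dbo].[usp_upsert_jobtrigger_condition]
  @condition_id = NULL,           -- NULL: let the stored procedure assign the id
  @trigger_id = 1,                -- made-up trigger_id
  @enabled = 1,
  @processing_order = 10,
  @description = N'The staging load must be ready.',
  @sql_stored_procedure = N'[MSSQL_S01E04].[sp].[SqlTriggerExample]',
  @reverse_condition = 0;
GO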

Stored procedure [dbo].[usp_upsert_sql_jobtrigger_condition]

With this stored procedure you can insert or update a jobtrigger condition of type ‘sql’ (using a stored procedure).

You can set the following parameters:
@condition_id, @trigger_id, @enabled, @processing_order, @description, @sql_stored_procedure, @reverse_condition: as described above.

Stored procedure [dbo].[usp_upsert_file_jobtrigger_condition]

With this stored procedure you can insert or update a jobtrigger condition of type ‘file’.

You can set the following parameters:
@condition_id, @trigger_id, @enabled, @processing_order, @description, @file_watch_folder, @file_recurse_subfolders, @file_name_pattern, @reverse_condition: as described above.

Stored procedure [dbo].[usp_upsert_jobresult_jobtrigger_condition]

With this stored procedure you can insert or update a jobtrigger condition of the SQLAgentJobResult type.

You can set the following parameters:
@condition_id, @trigger_id, @enabled, @processing_order, @description, @job_name, @job_result, @job_result_maxage, @reverse_condition: as described above.
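
Again a made-up sketch, this time showing how the dedicated upsert stored procedures could be called for a file condition and a SQLAgentJobResult condition (the parameter names are the ones listed above; see demo_setup_sysjobtriggers.sql for real, working examples):

USE [msdb]
GO
-- Hypothetical example: a .txt file must exist in C:\Temp.
EXEC [dbo].[usp_upsert_file_jobtrigger_condition]
  @condition_id = NULL,
  @trigger_id = 1,                -- made-up trigger_id
  @enabled = 1,
  @processing_order = 20,
  @description = N'A .txt file must exist in C:\Temp.',
  @file_watch_folder = N'C:\Temp',
  @file_recurse_subfolders = 0,
  @file_name_pattern = N'*.txt',
  @reverse_condition = 0;

-- Hypothetical example: job 'Job_A' must have succeeded during the last hour (3600 seconds).
EXEC [dbo].[usp_upsert_jobresult_jobtrigger_condition]
  @condition_id = NULL,
  @trigger_id = 1,                -- made-up trigger_id
  @enabled = 1,
  @processing_order = 30,
  @description = N'Job_A must have succeeded during the last hour.',
  @job_name = N'Job_A',           -- made-up job name
  @job_result = 'S',
  @job_result_maxage = 3600,
  @reverse_condition = 0;
GO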

Stored procedure [dbo].[usp_delete_jobtrigger]

With this stored procedure you can delete a jobtrigger by trigger_id or name.

Stored procedure [dbo].[usp_delete_jobtrigger_condition]

With this stored procedure you can delete a jobtrigger condition by using the condition_id.

Stored procedure [dbo].[usp_delete_jobtrigger_conditions_by_trigger_id]

With this stored procedure you can delete all jobtrigger conditions for a given trigger_id.

Examples of usage of the stored procedures

Among the files you can download there is the file demo_setup_sysjobtriggers.sql, which uses the upsert stored procedures for different use cases.

Conclusion / Wrap up

I have tried to explain how the SQLAgentJobStarter tool can be used for starting SQL Agent jobs triggered by the existence of one or more files, by the result of a stored procedure execution or by the execution result of one or more other SQL Agent Jobs.
A few important side notes I still have to make:

  1. In real life scenarios, make sure the files you are polling for are moved after processing, so that the tool does not try to start the job again.
  2. To create stored procedures that are used as a sql trigger you can use the storedprocedure_template.sql file as a template.
  3. To create your own jobtriggers and triggerconditions please check demo_setup_sysjobtriggers.sql in which some of the stored procedures are used.
  4. I am aware that the source code of the tool might need some refactoring. It does the job, however.
  5. Anyone that can help me with creating a graphical user interface to manage the jobtriggers and triggerconditions using C# and WinForms is more than welcome. If you are interested, please contact me. I do not have enough time to do this myself.

(c) 2016/2017 hansmichiels.com – Do not steal the contents – spread the link instead – thank you.

Zeros, bloody zeros! (Data Vault Series)

Introduction

I must admit I have a weakness for British humour.
When I had to cope with leading zeros in business keys some time ago, I spontaneously came up with the title of this post, not knowing that it would serve as such.
For those who do not know, “Meetings, bloody meetings” is a British comedy training film in which John Cleese plays a main role. It was made in 1976, and a remake was made in 2012.
It tells in a funny way what can go wrong at meetings and how you can do better, check it out if you can.

Mr John Cleese

But, obviously, this post is not about meetings but about zeros.

Problem

I can be short about that: leading zeros in business key values.
For instance a customer number is delivered to the data warehouse as 0001806 (instead of 1806).
This would not be a problem if it was always delivered exactly like that. But to be honest, you cannot and will not know that upfront. Even if this is the case now, it might not be in the future.
When other tools are used, leading zeros could suddenly disappear (for instance when a csv file is modified using Excel), or (more rarely) the number of leading zeros could change (01806, 00001806). When this happens you have a problem, because for the data warehouse 01806, 0001806, 00001806 and 1806 are all different business keys! Even if you have only two variants, it is already a problem.
Because every business key gets a different row in the hub, and this customer now exists multiple times!

(No acting here, this is how I look sometimes)

Solution

If you are familiar with Data Vault, you might already think of same-as-links to solve this.
But I think the solution should be implemented earlier, to avoid having multiple hub rows.
Simply always remove leading zeros when the source column is (part of) a business key (either primary or foreign key) and appears to be a number or ID but is delivered as a string/varchar. In this way 1806 will always be 1806! And I think it is pretty impossible that 001806 and 1806 would refer to two different customers.
Unless, of course, they would come from different source systems. But in that situation, depending on leading zeros would be a bad thing to do anyway, because if the leading zeros were dropped, satellite rows of different customers (from different source systems) could end up connected to the same hub row! In that situation, in a non-integrated Raw Vault, it would be better to prefix the business key with the source system code and remove the leading zeros, for instance CRM.1806 and ERP.1806.
In all cases, you can still store the original value (with leading zeros) as an ordinary attribute in a satellite for auditing reasons.

How to implement the solution

There are many ways to remove leading zeros. When I was searching for this I had two requirements:

  • No casting from and to an integer may take place; otherwise all business keys would need to be numeric, which would make the solution less reliable.
  • No function, routine or assembly may be called, this could negatively impact performance. I was looking for an “inline” conversion.

After some research I found an expression that was the same for SQL and SSIS and quite okay (T-SQL version by Robin Hames, my credits for his work), but it appeared to change a string with only one or more zeros to an empty string. And because a 0 can have a meaning – and is certainly different from an empty string – this is undesired behavior, IMHO.
So I had to add some logic to it: a CASE expression in T-SQL and an inline condition (format {condition} ? {true part} : {false part} ) in the SSIS expression.
Furthermore I came up with a different method for T-SQL as well, using the PATINDEX function, which is more compact than the other solution.
For SSIS I still use the ‘Robin Hames’ method, because the PATINDEX function is not available in SSIS Expressions.
So .. this is what it has become:

T-SQL

Remove_leading_zeros.sql

SELECT
    example.[id_with_leading_zeros],
   CASE
      WHEN LTRIM(example.[id_with_leading_zeros]) = '' THEN ''
      WHEN PATINDEX( '%[^0 ]%', example.[id_with_leading_zeros]) = 0 THEN '0'
      ELSE SUBSTRING(example.[id_with_leading_zeros], PATINDEX('%[^0 ]%', example.[id_with_leading_zeros]), LEN(example.[id_with_leading_zeros]))
   END AS [id_without_zeros_method1],

   CASE
      WHEN LTRIM(example.[id_with_leading_zeros]) = '' THEN ''
      WHEN PATINDEX( '%[^0 ]%', example.[id_with_leading_zeros]) = 0 THEN '0'
      ELSE REPLACE(REPLACE(LTRIM(REPLACE(-- Robin Hames' method
            REPLACE(LTRIM(example.[id_with_leading_zeros]), ' ', '!#!') -- replace existing spaces with a string that does not occur in the column value, I have chosen '!#!'
            , '0', ' ') -- replace '0' with ' '
            ) -- end of LTRIM to remove leading '0's that have been changed to ' 's
            , ' ', '0') -- change ' ' back to '0'
            , '!#!', ' ') -- change '!#!' back to ' '
   END AS [id_without_zeros_method2]
FROM
    (
    SELECT
        TOP 1000 RIGHT('00000000000' + CONVERT(NVARCHAR(12), object_id), 14) AS [id_with_leading_zeros]
    FROM
        master.sys.objects
    UNION
    SELECT N' 00000 '
    UNION
    SELECT N'00'
    UNION
    SELECT N' '
    UNION
    SELECT ' 0099990 A '
    UNION
    SELECT '-5550'
    ) example

SSIS Expression (can be used in Derived Column)

(LTRIM(REPLACE(id_with_leading_zeros,"0", "")) == "" && LTRIM(id_with_leading_zeros) != "") ? "0" : REPLACE(REPLACE(LTRIM(REPLACE(REPLACE(LTRIM(id_with_leading_zeros)," ","!#!"),"0"," "))," ","0"),"!#!"," ")

In a Derived Column Transformation this looks for instance like this

Conclusion / Wrap up

In this post I have motivated why I think you should remove leading zeros from business keys when data is loaded from source systems to a data warehouse.
This post also contains different ways to remove leading zeros: two for T-SQL and one as an SSIS expression.

(c) 2016 hansmichiels.com – Do not steal the contents – spread the link instead – thank you.

SQL Server 2016 tempdb configurations benchmark (SQL Server Series)

Introduction

When you have installed SQL Server 2016 you will have noticed that the configuration for tempdb has changed.
Microsoft made a proper tempdb configuration easy to do via the GUI during installation.

Today I want to go through the settings offered and will measure performance of different setups.

What are the options?

During installation you can set the following values:
For TempDB datafiles:

Number of files: 4
Initial size (MB): 8
Autogrowth (MB): 64
Data directories: Default data directory, in my case C:\MSSQL\Instance root\MSSQL13.MSSQLSERVER\MSSQL\Data

For TempDB logfile:

Initial size (MB): 8
Autogrowth (MB): 64
Log directory: Default data directory, in my case C:\MSSQL\Instance root\MSSQL13.MSSQLSERVER\MSSQL\Data

Configuration screen during SQL Server 2016 installation (I changed the directories already).

So what is the best setup?

This subject is not new for SQL Server 2016, but the installation procedure now just gives a better opportunity to set it up right the first time.
There are roughly five differentiators in the setup:

  • Initial size (MB)
  • Autogrowth (MB)
  • Number of data files
  • Location of data files
  • Read/write performance of the different storage media

What are the considerations when setting these values?

Initial size (MB)

A larger initial size will accomplish two things:

  1. It will take longer before the file will have the need to grow.
  2. The file can be stored less fragmented on disk.

If you have enough disk space, the larger the better; there is no real downside to making the initial size quite large. More about this in my conclusion.

Autogrowth (MB)

With autogrowth the choice is a bit more delicate, as both a small and a large value have pros and cons, as pointed out below.

Small autogrowth value: more frequent, smaller growth actions cause less noticeable hiccups (pro), but the tempdb data files will become more fragmented (con).
Large autogrowth value: the tempdb data files will become less fragmented (pro), but less frequent, larger growth actions might cause noticeable hiccups (con).

Number of data files

In general, the recommendation is one data file per (logical) processor. You can configure more files than the number of (logical) processors, but do this only if you notice that this gives a better performance.
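
If you are not sure how many logical processors the SQL Server instance sees, you can for instance check it like this:

-- Number of logical processors visible to this SQL Server instance.
SELECT cpu_count AS [logical_processors]
FROM sys.dm_os_sys_info;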

Location of data files

Locating the data files on different drives can have two effects:

  1. A lower disk queue length per drive in Windows (in all scenarios, also with virtual disks residing on the same physical drive).
  2. Physical writes to the disk drives can go in parallel (only with different physical drives).

The tests

The testcases

Creating combinations of all differentiators would just create *a lot* of test scenarios.
So I will be a bit pragmatic.
The test cases are chosen so that each can be compared with at least one other test case that differs in exactly one differentiator.

Test # | # Data files | Init size (MB) | Autogrowth (MB) | # Folders | Scenario
0 | 4 | 8 (x4) | 64 | 1 | Default installation (data files on OS disk)
1 | 4 | 8 (x4) | 64 | 1 | Data files on dedicated drive
2 | 4 | 8 (x4) | 64 | 4 | Data files on more drives
3 | 4 | 2048 (x4) | 64 | 4 | Data files on more drives + Larger initial size
4 | 4 | 8 (x4) | 256 | 4 | Data files on more drives + Larger autogrowth
5 | 4 | 2048 (x4) | 64 | 1 | Data files on dedicated drive + Larger initial size
6 | 4 | 2048 (x4) | 64 | 1 | Data files on dedicated drive + Larger initial size + TempDB on different SSD
7 | 8 | 2048 (x8) | 64 | 1 | Data files on dedicated drive + Larger initial size + More data files
8 | 4 | 8 (x4) | 64 | 1 | Default installation (data files on OS disk) + Larger initial size

Test results

I will present two graphs as well as the raw data. I think especially the first graph gives some useful insights, as it compares all test cases with test case 0 (default installation), by using index 100 for test case 0. If a different test case performs better, its index will be less than 100. Also, in this way, the average result of both testsets (A and B) can be shown.
I left test case 6 out of the graph; later I will tell why.
Index-based comparison of test results

Execution times of test cases for testset A and B

Test results figures

Test # | Scenario | Testresult testset A (ms) | Testresult testset B (ms)
0 | Default installation | 30359 | 232795
1 | Data files on dedicated drive | 28344 | 261467
2 | Data files on more drives | 22514 | 248484
3 | Data files on more drives + Larger initial size | 28040 | 210223
4 | Data files on more drives + Larger autogrowth | 24173 | 200632
5 | Data files on dedicated drive + Larger initial size | 19608 | 192587
6 | Data files on dedicated drive + Larger initial size + TempDB on different SSD | 123289 | 387646
7 | Data files on dedicated drive + Larger initial size + More data files | 19692 | 211501
8 | Default installation + Larger initial size | 20149 | 361844

Conclusion

Data files on dedicated drive + Larger initial size (test 5) gives the best performance for both testsets. Putting tempdb files on multiple disks cannot beat this, although the differences are relatively small.

Surprisingly, performance decreases dramatically when the virtual disk on which tempdb is stored is moved to a different (faster) SSD (test 6)! I have no explanation for this. Therefore I will leave test 6 out of the analysis; by doing this, physical storage is the same for all test cases and changes in performance are caused only by the other differentiators.

My most important conclusion is that the default values offered during installation are quite okay, except for two things:

The Initial size (MB)

You should really set the initial size to a big number (gigabytes if possible). Note that if you leave the number of files at 4 (which I recommend), the space used is four times the size that you enter for Initial size.
If you have a dedicated disk for tempdb (which is recommended), you might want to set the initial size of the data file(s) to fill up the disk. You should then disable Autogrowth.
Also, on a production server, check the size of the tempdb files after a week of uptime (provided the SQL instance has not been restarted in the meantime). Say the tempdb size of all files together (this can also be one file) is 22GB. You could then set your initial size to 28GB in total, so 7GB per file if you configured 4 files. This way, tempdb never has to grow during normal operation.
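
As a sketch, using the default logical file names that SQL Server 2016 setup creates (tempdev, temp2, temp3 and temp4; check tempdb.sys.database_files for the actual names on your server), setting a 7GB initial size per data file and disabling Autogrowth could look like this:

-- Check the current tempdb files and their logical names first.
SELECT name, physical_name, size / 128 AS [size_mb], growth
FROM tempdb.sys.database_files;

-- Set the initial size to 7GB per data file and disable autogrowth (FILEGROWTH = 0).
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 7GB, FILEGROWTH = 0);
ALTER DATABASE tempdb MODIFY FILE (NAME = temp2, SIZE = 7GB, FILEGROWTH = 0);
ALTER DATABASE tempdb MODIFY FILE (NAME = temp3, SIZE = 7GB, FILEGROWTH = 0);
ALTER DATABASE tempdb MODIFY FILE (NAME = temp4, SIZE = 7GB, FILEGROWTH = 0);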

Dedicated disk

It is a common best practice to put tempdb on a separate disk, so that tempdb read/write actions do not have to compete with other read/write actions to that disk in the Windows disk queue.

Other settings are somewhat less important but … think for yourself (my disclaimer)

In my opinion, neither putting tempdb files on multiple different disks nor setting Autogrowth to a different value offers much performance benefit.
The Autogrowth value is even irrelevant if the initial size is so large that tempdb never has to grow during normal operation.
But every server is different! Test and think for yourself.
My test results are presented “AS IS” in the hope they will be useful as a starting point for further analysis of your own. I am not responsible for any disadvantage you might have from using my test results for your tempdb setup.

What if SQL Server is already installed and/or it is an older version?

No worries, you can still change the tempdb configuration.
You can move and add files using SQL scripts.
Initial size and Autogrowth you can also set with Management Studio.
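
For example, moving the tempdb files to a dedicated drive and adding an extra data file could look like the sketch below. The path T:\TempDB is made up and the logical names are the SQL Server defaults; the new file locations are only used after the next restart of the SQL Server service.

-- Move the tempdb data and log files to a dedicated drive (made-up path T:\TempDB).
-- The change takes effect after the next restart of the SQL Server service.
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'T:\TempDB\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'T:\TempDB\templog.ldf');

-- Add an extra data file, for instance when the number of logical processors is higher than the number of data files.
ALTER DATABASE tempdb ADD FILE (NAME = temp5, FILENAME = 'T:\TempDB\temp5.ndf', SIZE = 7GB, FILEGROWTH = 0);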
Go to properties of tempdb

Here you can change Initial size and Autogrowth

By the way, I noticed that SQL Server does not delete moved tempdb files. So if you move tempdb files, you should delete the files in the old location, so that they do not use disk space and do not cause confusion for administrators.

Test justification

  • All tests are done on a Hyper V Virtual Machine running Windows Server 2012 R2 and SQL Server 2016 Developer Edition.
  • Change of physical storage type is done by moving vhdx files to different physical disks, and then changing the file location in the settings of the Hyper V Virtual Machine (while it is not running, of course).
  • Each test is done on a fresh copy of the Virtual Machine, so the starting situation is equal for every test (e.g. no tempdb growth caused by previous tests).
  • Tests were done with two sets of scripts (set A and B) that inserted, updated and deleted rows in temporary tables in tempdb. Set A consists of three sql scripts, and set B of five.
  • All scripts were developed so that all tempdb inserts started at the same time (with maybe only milliseconds difference in start time). This ensures that every test run was executed in the same way.
  • All scripts were executed in SQL Server Management Studio.
  • The first file of tempdb is stored on its own (virtual) disk except for testcase 0, where tempdb files are on the C-drive where Windows is installed.
  • When spreading files over multiple virtual disks, the disks created for database files, log files and backup files were used. This was done for pragmatic reasons during the test and was defendable, because no other activity on databases and log files was going on during the test. It is however not recommended to spread tempdb like that in a production environment, unless every disk is dedicated to tempdb only.
  • The host machine had the same applications open during every test, being a Windows Explorer and Microsoft Excel. No other applications that used CPU and could influence performance of the virtual machines were running.
  • The tempdb log file was placed on a separate disk (the same one) for each test. Therefore it was not a subject of the performance investigation; including it would have at least doubled the number of test cases.

Wrap up

Microsoft did a good job in improving the setup options for tempdb in the installation procedure.
It is a bit of a pity that they did not go the last mile, giving the tempdb data files a more realistic Initial size.
Luckily this is easy to change during installation, and it is a quick win for anyone installing SQL Server 2016.

Download all the used scripts here.

(c) 2016 hansmichiels.com – Do not steal the contents – spread the link instead – thank you.

Zip sourcefiles in a SSIS Package (SSIS Series)

Introduction

As I am in an (un)zipping mood, this week the opposite problem of last week.
The example is not as ready-to-use as the one in my previous post, as it depends on choices made in the development process of your other SSIS packages.
Still I hope you can adjust this example to match your needs.

Problem

If you have imported and archived (csv or xml) files, these files can take up a lot of disk space. The archive folder can also become overwhelmingly full with tens of thousands of files, which can make it slow to access and difficult to manage.

Solution

To tackle this problem I have made a fully configurable SSIS package, that can be executed from a different package with an Execute Package Task.
The package is made using SSIS 2014.

The zipping itself is done with 7za.exe, for more information on how to get and install this free program see my previous blog post.

Scenario

When I used this concept, a different SSIS package moved files to a subfolder of an archive folder after they were processed, but their name was prefixed with the LoadCycleID with leading zeros, for instance 001345_customers.xml.

The archive folder and subfolder were separated, because the archive folder was an environment variable and the same for all packages, while the subfolder was different for different SSIS packages, and contained the name of the source system of the files.
If you only have one archive folder, just change the example to use that instead of the concatenation of archive folder and subfolder.

Overview

Here is a picture of the end result:

The Control Flow

Detailed steps

Parameters and variables

First you need to add a few parameters and variables to the package.

Parameters:

Name Data type Required Description
_7zaPathAndFilename String False The full path and filename of 7za.exe.
ArchiveFolder String True The root folder for archiving processed files.
ArchiveSubFolder String True The subfolder of the ArchiveFolder in which the files to be zipped exist, the subfolder can for instance be named after a source system like SAP or CRM.
LoadCycleID Integer True The ID of the LoadCycle that is processing these files. In your situation it could also be called ExecutionID, SessionID and so on, the name is not really important.

Package parameters

Variables:

Although not all variables might be strictly necessary, I like the concept of being able to configure all values in one place.
The data type of all variables is String, except LoadCycleID which is an Integer.
Every variable uses an expression to set its value.
Here is a detailed list of all variables and the expression used for each one.

Name Expression
ArchiveFilesToDeleteInFolder @[$Package::ArchiveFolder] + "\\" + @[$Package::ArchiveSubFolder]
ArchiveFilesToDeletePattern @[User::LoadCycleIDString] + "*"
ArchiveFilesToZipPattern @[$Package::ArchiveFolder] + "\\" + @[$Package::ArchiveSubFolder] + "\\" + @[User::LoadCycleIDString] + "*.*"
ArchiveFilesWorkingDirectory @[$Package::ArchiveFolder]
ArchiveZipFile @[User::LoadCycleIDString] + "_" + @[$Package::ArchiveSubFolder] + ".zip"
ArchiveZipPathFile_Final @[$Package::ArchiveFolder] + "\\" +@[$Package::ArchiveSubFolder] + "\\" + @[User::LoadCycleIDString] + "_" + @[$Package::ArchiveSubFolder] + ".zip"
ArchiveZipPathFile_Temp @[$Package::ArchiveFolder] + "\\" + @[User::LoadCycleIDString] + "_" + @[$Package::ArchiveSubFolder] + ".zip"
LoadCycleID @[$Package::LoadCycleID]
LoadCycleIDString RIGHT("00000" + (DT_STR, 10, 1252) @[$Package::LoadCycleID], 6)

Important note: if you expect that the value for LoadCycleID will grow beyond 999999 over time, you should add more leading zeros to LoadCycleIDString and increase the number 6, the second parameter of the RIGHT function, accordingly.

SSIS variables of the package

The Control Flow

The control flow is quite straightforward: only three tasks and we’re done.

EPR Zip files in archive subfolder

This task creates one zipfile containing multiple other files in the ArchiveSubFolder.
From the SSIS Toolbox drag an Execute Process Task to the Control Flow panel and open the Execute Process Task Editor.
On the General and Process panes you do not have to change anything.

No changes needed here

On the Expressions pane, press the ellipsis next to Expressions. In the Property Expressions Editor add the following:

Property Expression
Arguments "a "+ "\"" + @[User::ArchiveZipFile] + "\" " + "\"" + @[User::ArchiveFilesToZipPattern] + "\" -mx=9"
Executable @[$Package::_7zaPathAndFilename]
WorkingDirectory @[User::ArchiveFilesWorkingDirectory]

Please note that in the Arguments expression -mx=9 is used. This is the compression level, level 9 means “ultra” compression. This level gives the smallest zip files but is also the slowest compression level. You can for instance also use 5 for “normal” or 3 for “fast” compression, if this is what you want.

Expressions for the Execute Process Task

SCR Delete uncompressed files

This task deletes the files that were added to the zip file in the previous task.

In the Script Task Editor, add both variables User::ArchiveFilesToDeleteInFolder and User::ArchiveFilesToDeletePattern to ReadOnlyVariables, then press Edit Script ….

Then in the namespaces region add one namespace, as indicated below.

Namespaces region
#region Namespaces
using System;
using System.Data;
using Microsoft.SqlServer.Dts.Runtime;
using System.Windows.Forms;
// Added namespace:
using System.IO;
#endregion

Adjust public void main() as follows:

public void Main

public void Main()
{
  // Copy SSIS variable values to C# string variables.
  string archiveFilesToDeleteInFolder = Dts.Variables["User::ArchiveFilesToDeleteInFolder"].Value.ToString();
  string archiveFilesToDeletePattern = Dts.Variables["User::ArchiveFilesToDeletePattern"].Value.ToString();

  // Delete each file that just has been added to the zip archive.
  foreach (string f in Directory.EnumerateFiles(archiveFilesToDeleteInFolder, archiveFilesToDeletePattern))
  {
    File.Delete(f);
  }

 Dts.TaskResult = (int)ScriptResults.Success;
}


WARNING: Make sure that no files that meet the file name pattern of files to add to the zip file are added to the ArchiveSubFolder while or after the zipfile is (being) created. Otherwise files that are NOT zipped might get deleted!

Download the script here.

FST Move zip file

The File System Task moves the zip file to the ArchiveSubFolder, because the zipfile is initially created in a temporary folder and not in the ArchiveSubFolder.
This is to prevent it from being deleted by accident, in case you use a value for ArchiveFilesToDeletePattern that ends with .*
Configure the File System Task as shown in the picture below:

File System Task Configuration


Let the car drive

Time to test the package and show the results in pictures (if you don’t mind).

Before: files of LoadCycleID 001346 need to be zipped ..

Execute the package ..

All csv files replaced with one zip file ..

.. that contains the csv files ..

Conclusion / Wrap up

In this blog post you could read the steps needed to build a SSIS package that creates a zip file containing a number of files in an archive(sub)folder.
This makes the archive smaller and easier to manage.
However beware that you only delete files that were added to the zip file. For more details see the warning above.

(c) 2016 hansmichiels.com – Do not steal the contents – spread the link instead – thank you.

Unzip specific files from lots of zipped archives (SSIS Series)

Introduction

This week I want to discuss the trouble you may have when searching hundreds of zip files for specific files inside them.
Recognizable?
At least I faced this problem some time ago, and solved it using SSIS and the commandline version of 7-zip, named 7za.exe.
The choice for SSIS was purely based on convenience, as you could also tackle this problem using PowerShell, C# or almost any other scripting language.
But I thought SSIS is just a neat and quick way to do this.

Problem

Finding specific files in huge amounts of zip files.

Solution

To tackle this problem I have made a fully configurable SSIS package, that can be executed with different parameters.
The main task of the package is to extract specific files (based on a file name pattern) from zipfiles in a folder and – optionally – its subfolders.
The unzipping itself is done with 7za.exe, the commandline version of 7-Zip, the great open source zip program that has existed for several years and is used by millions of users.

By the way, the package is made using SSIS 2014.

Preparations: download and install 7za

It is important to download this package from the official 7-Zip web site, just to avoid getting a copy from some deceitful web site infected with malware.
To go to the official 7-zip download page click on this link:
7-zip download page
Most important is the 7-Zip Extra: standalone console version.
For the example I have used version 16.02 (2016-05-21) for Windows, but it should also work with version 9.20 (2010-11-18), then called 7-Zip Command Line Version.

If you download the newest version, you also need software to unzip the 7z160-extra.7z file, for instance the regular 32-bit or 64-bit version of 7-zip, which can also be downloaded on the same page.

Download page of 7-zip

I assume you will use the newest version 7z160-extra.7z
Once unzipped, just take the time to read License.txt, readme.txt and history.txt.
Then copy the three files 7zxa.dll, 7za.dll and 7za.exe to a folder on the Windows computer where SSIS runs, usually a Windows Server.
If you run a x64 version of Windows, you can take the x64 version of the 7za files, which are in a subfolder “x64” of the unzipped files.

Unzipped 7za files ..

I have created a subfolder “7za” in the “C:\Program files” folder of my virtual Windows Server, and copied the x64-version of the three mentioned files to it, but you can put them in any folder you like.
When you have done this, the real fun can start ..

Overview

Here is a picture of the end result:

The Control Flow

Detailed steps

Parameters and variables

First you need to add a few parameters and variables to the package.

Parameters:

Name Data type Required Description
_7zaPathAndFilename String False The full path and filename of 7za.exe.
FilenamePattern String True The filename pattern of files you want to unzip.
SourceRootFolder String True The main folder where you want to search for zip files.
TraverseSubfolders Boolean False Indication if you want to look only in SourceRootFolder, or also in its subfolders.
UnzipRootFolder String True The folder to which files are unzipped. Important: this may NOT be a subfolder of SourceRootFolder.
ZipFilenamePattern String True The filename pattern of the zipfiles from which you want to unzip files.

Package parameters

Variables:

Name Data type Description
FLC_UnzipFolder String The subfolder where the files will be unzipped. This is a subfolder of the UnzipRootFolder, with the same relative subpath as the zipfile has in SourceRootFolder. Additionally, the zipfilename has been replaced with a subfolder that is the same as the zipfilename without the file extension.
FLC_ZipPathAndFileName String The zipfile full path, that is target of investigation if any files with FilenamePattern are zipped inside.

SSIS variables needed to do the job

Developing the package

FELC Unzip files

This foreach loop container finds all zip files.
In the Foreach Loop Editor, select Collection in the left pane.
As we want to make the solution generic by using all those parameters mentioned above, we have to add a few expressions.

Foreach Loop container Collection configuration

Add the following three expressions (in my opinion the editor is not very user-friendly, the window is too small, but it can be done):

Property Expression
Directory @[$Package::SourceRootFolder]
FileSpec @[$Package::ZipFilenamePattern]
Recurse @[$Package::TraverseSubfolders]

Please note that these expressions will set the values for Folder, Files and the Traverse subfolders checkbox in the Enumerator configuration part of the screen (inside the purple square in the picture above).
If you close and reopen the Foreach Loop Editor, you will notice that the configuration fields now use the values from the corresponding parameters, except the value Fully qualified for Retrieve file name, which is hardcoded.

Example of how to set an expression..

Then, still in the Foreach Loop Editor, select Variable Mappings in the left pane and map variable User::FLC_ZipPathAndFileName to Index 0, as shown in the picture below.

Variable Mappings

EXPR Set Unzip folder

The next nifty thing that needs to be done, is creating a folder to store the unzipped files. As we do not know if zipped files with the same name exist in multiple zip files, the safest thing to do is to create a directory structure identical to the directory structure of the zip files.
Additionally, the basename of the zip file (so without file extension) is used as an extra subfolder. In this way we are sure that unzipped files with the same name can be stored without overwriting other ones.

To do this, from the SSIS Toolbox drag an Expression Task inside the Foreach Loop container.
Open the Expression Task and paste this expression into the Expression textbox.

@[User::FLC_UnzipFolder] =
REPLACE(LEFT(@[User::FLC_ZipPathAndFileName],
FINDSTRING(@[User::FLC_ZipPathAndFileName], "."
+ TOKEN(@[User::FLC_ZipPathAndFileName], ".",
TOKENCOUNT(@[User::FLC_ZipPathAndFileName], ".")),1) -1),
@[$Package::SourceRootFolder], @[$Package::UnzipRootFolder])

Then press the Evaluate Expression button to check that the expression is valid.

Configuring the Expression Task

FSYS Create Unzip folder

In the previous step we have composed the full path to unzip files into, but this folder structure might not exist yet.
Therefore we use a File System Task named FSYS Create Unzip folder to create the folder structure.
The good news is that the File System Task can create multiple (sub)folders at once.
Configure it as shown in the picture.
Configuration of FSYS Create Unzip folder

EPR Unzip Selected Files

Below you see a few screenshots for the configuration of the Execute Process Task named EPR Unzip Selected Files.
The funny thing is, that in the Process pane, you do not have to change anything. Executable and Arguments are set using expressions (coming next) and all the other properties can keep their default values.

Process configuration.

Setting Executable and Arguments using expressions
Arguments (copy-friendly):

"-y x \"" + @[User::FLC_ZipPathAndFileName] + "\" -o" + @[User::FLC_UnzipFolder] + " " + @[$Package::FilenamePattern] + " -r"

SCR Remove Empty folders

There is one problem with the steps done so far: if a zipfile does not contain a file that matches the FilenamePattern, the folder structure is created for nothing.
Although you could just leave these empty folders, I think the solution is a bit better if they are deleted.
To do this, from the SSIS Toolbox drag a Script Task inside the Foreach Loop container.

In the Script Task Editor, add both variables User::FLC_UnzipFolder and $Package::UnzipRootFolder to ReadOnlyVariables

In the Script Task Editor window, press Edit Script ….

Please note that you can download the C# script here..

Then in the namespaces region add two namespaces, as indicated below.

Namespaces region
#region Namespaces
using System;
using System.Data;
using Microsoft.SqlServer.Dts.Runtime;
using System.Windows.Forms;
// Added namespaces:
using System.IO;
using System.Linq;
#endregion

Adjust public void main() as follows:

public void Main

public void Main()
{
  // Copy SSIS variable value of FLC_UnzipFolder to C# string variable.
  string unzipFolder = Dts.Variables["User::FLC_UnzipFolder"].Value.ToString();

  // Source: http://stackoverflow.com/questions/22520761/delete-all-empty-directories-in-specified-path
  // My credits go to author "Jon" from Berlin.
  // Because not all zipfiles might contain files with the FilenamePattern we are looking for, an empty path might have been created.
  // This script deletes these empty folders, so that in the end only folders will exist with unzipped files or subfolders in it.
  var di = new DirectoryInfo(unzipFolder);
  var root = Dts.Variables["$Package::UnzipRootFolder"].Value.ToString(); // no trailing slash!
  while (di.FullName != root
          && !di.EnumerateFiles().Any()
          && !di.EnumerateDirectories().Any())
  {
    di.Delete();
    di = di.Parent;
  }

 Dts.TaskResult = (int)ScriptResults.Success;
}

Download the script here.

The proof of the pudding is in the eating

Time to do some testing.
I have created a few folders with subfolders and subsubfolders. You can see the folder structure in the last picture below.
As you might have noticed, the default values of my parameters were already prepared for the test case: I want to unzip only customer files from zip files with 201605 in the name.

Executing the package in Debug mode..

Below you can see the result. Only customer files are unzipped. The subfolders under the Archive folder that contain no customer zip files (but only currency files), do not exist under the Unzipped folder.

Folder structure of Archive and Unzipped rootfolders..

Unzipped “Customer” files ..

Important notes for using with huge amounts of zip files

From my experience I must also warn you. If you run this package on a folder containing gigabytes of zipfiles, there are two things to be aware of:

  1. The execution of the package can potentially take a long time; several hours is not unusual.
  2. Make sure you have enough disk space. Suppose you have the following hypothetical scenario: zip files: 100GB, average zip compression rate: 20%, percentage of files to be unzipped: 30%. In this case you are going to unzip files that have a zipped size of 30% of 100GB, which is 30GB. Once unzipped, their size will be 30GB * (1/0.20) = 150GB. You need to have that disk space available on the disk where the Unzipped folder resides.

Conclusion / Wrap up

With this SSIS package you can selectively unzip files from archived zip files.
When using it, make sure you have enough disk space, and be aware that the package can run for several hours, depending on the number and (average) size of the zip files to process.

(c) 2016 hansmichiels.com – Do not steal the contents – spread the link instead – thank you.

Adding custom header and footer to csv files (SSIS Series)

Introduction

This article wasn’t planned. SSIS again? No, I wanted to write a post about something else.
Nevertheless I got this question on one of my previous posts and thought I might as well create a blog post about it.

Problem

“Hi Hans,
I have created my ssis package for flat file destination. My problem is that I want to add header and footer in my flat file destination to send it to FTP location format will be

0|MMDDYYYY|HHMMSS
1|Row1
1|Row2
1|Row3
1|ROW4
9|6

The last line contains no of row count including header and trailer
Please let me know how to do that.. I am completely stuck with that and no good luck.. “

So to summarize, both the header and the footer line of the csv file need to contain dynamic content.
Also each row starts with a ‘row type’: 0 for the header, 1 for every detail row, and 9 for the footer.

Solution

As an example I just use a simple Data Flow Task with an OLEDB Source connected to table DimCurrency from the AdventureWorksDW database.
The data then flows to a Script Component used as a Destination.
Here are a few printscreens to demonstrate what I mean. The Script Component will create the file.
The package is made using SSIS 2014.

Developing the package

The demo package just contains one Data Flow Task

Design of the Data Flow Task

OLEDB Source: DimCurrency from the AdventureWorksDW database

OLEDB Source: select all three columns

When you drag the Script Component to the Data Flow, you will get this dialog. Use the Script Component as a Destination

Variables that the package contains.

In the Script Transformation Editor, add both variables User::HeaderTimestamp and User::TargetCsvPathAndFileName to ReadOnlyVariables

Now comes the most interesting part! We use the endless possibilities of C# to fix our problem.
In the Script Transformation Editor window, press Edit Script ….

Please note that you can download the entire script here, so you do not have to copy the (sometimes malformatted) code from this webpage..

Then in the namespaces region add two namespaces, as indicated below.

Namespaces region
#region Namespaces
using System;
using System.Data;
using Microsoft.SqlServer.Dts.Pipeline.Wrapper;
using Microsoft.SqlServer.Dts.Runtime.Wrapper;
// Added namespaces:
using System.IO;
using System.Text;
#endregion

Above public override void PreExecute() add the copyright notice and the following two regions: Configuration and Private variables.
Please note that these regions do not exist yet, so you have to copy the code including the #region and #endregion lines.

New regions
  /*
  (c) Copyright 2016 - hansmichiels.com
 
  This program is free software: you can redistribute it and/or modify
  it under the terms of the GNU General Public License as published by
  the Free Software Foundation, either version 3 of the License, or
  (at your option) any later version.
 
  This program is distributed in the hope that it will be useful,
  but WITHOUT ANY WARRANTY; without even the implied warranty of
  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
  GNU General Public License for more details.
 
  You should have received a copy of the GNU General Public License
  along with this program. If not, see http://www.gnu.org/licenses/.
  */

 
  #region ***** Configuration *****

  // Here you can change the column delimiter and the text qualifier easily.
  private const string _delimiter = "|";
  private const string _textQualifier = "";
  // To set the text qualifier to " use:
  // private const string _textQualifier = "\""; // I recommend to use a text qualifier.
  
  // Every x rows, write the buffer to the file, and clear the buffer.
  // This is faster than writing every individual line to the file.
  // The higher the value of _bufferRows is, the more memory will be used and the less write actions are done to the csv file.
  // A higher value is a good idea if you have enough memory in your server.
  private const int _bufferRows = 100000;

  #endregion ***** Configuration *****

  #region Private variables

  // Variable to hold the path and filename
  private string _fileName;
  // Variable to count the rows
  private int _rowCount;
  // StringBuilder to hold a buffer of text that still needs to be written to the csv file.
  private StringBuilder _fileContentsBuffer;

  #endregion Private variables

PreExecute() is called before the data starts to flow through. Here we do some preparation and compose the headerline of the file.

PreExecute()

  /// <summary>
  /// This method is called once, before rows begin to be processed in the data flow.
  /// </summary>
  public override void PreExecute()
  {
    base.PreExecute();

    // Initialize variables
    _rowCount = 0;
    _fileContentsBuffer = new StringBuilder();
    _fileName = Variables.TargetCsvPathAndFileName;
    DateTime headerTimestamp = Variables.HeaderTimestamp;

    // Append the new line to the buffer.
    // This will be written to file with the first detail rows.
    // If there are no detail rows, no file will be created.
    _fileContentsBuffer.AppendFormat("{1}0{1}{0}{1}{2:ddMMyyyy}{1}{0}{1}{2:HHmmss}{1}\r\n", _delimiter, _textQualifier, headerTimestamp);
    }

PostExecute() is called after the data has flowed through. Here we append the last lines in the _fileContentsBuffer variable and the footerline to the file.

PostExecute()
  /// <summary>
  /// This method is called after all the rows have passed through this component.
  ///
  /// You can delete this method if you don't need to do anything here.
  /// </summary>
  public override void PostExecute()
  {
    base.PostExecute();
    // If no detail rows, bail out. No file will be created when there are no detail rows.
    if (_rowCount == 0) return;

    using (StreamWriter sw = File.AppendText(_fileName))
    {
      // Append a footer line with the rowcount, example value: 9|105
      // The rowcount is a count of the detail rows, but the footer count
      // needs to include header and footer, so we add 2 to the value of _rowCount.
      _fileContentsBuffer.AppendFormat("{1}9{1}{0}{1}{2}{1}", _delimiter, _textQualifier, _rowCount + 2);
      sw.Write(_fileContentsBuffer.ToString());
      _fileContentsBuffer.Clear();
    }
  }

Input0_ProcessInputRow(Input0Buffer Row) is the place where all the data processing takes place. As this script is the Destination for the data, we need to do something with the data ourselves: there is no Flat File Destination that will do this for us.
So here we append every detail row to the _fileContentsBuffer variable. Once in a while, we write the buffer to the file, and clear the buffer again.

Input0_ProcessInputRow

  /// <summary>
  /// This method is called once for every row that passes through the component from Input0.
  /// </summary>
  /// <param name="Row">The row that is currently passing through the component</param>
  public override void Input0_ProcessInputRow(Input0Buffer Row)
  {
    // Increase the rowcounter
    _rowCount++;

    // Append the new line to the buffer.
    // The format needs to be like “{1}[middle part]{1}\r\n”, so that the row starts and ends with the text qualifier.
    // Replace [middle part] with {1}{0}{1}{x} for every column that you want to append to the file, where x is 2 for column 1, 3 for column 2 and so on.
    _fileContentsBuffer.AppendFormat("{1}1{1}{0}{1}{2}{1}{0}{1}{3}{1}{0}{1}{4}{1}\r\n", _delimiter, _textQualifier,
      SafeValue(Row.CurrencyKey), // First column, {2} in the format
      SafeValue(Row.CurrencyAlternateKey), // Second column, {3} in the format
      SafeValue(Row.CurrencyName) // Third column, {4} in the format
             // etc.
      );
    
    // Every x rows, write the buffer to the file, and clear the buffer.
    if (_rowCount % _bufferRows == 0)
    {
      using (StreamWriter sw = File.AppendText(_fileName))
      {
      sw.Write(_fileContentsBuffer.ToString());
      _fileContentsBuffer.Clear();
      }
    }
  }

Multiple overloaded versions of the SafeValue method (for different datatypes like string, int and DateTime; you can add more when needed) add escape characters to the column values where necessary.

SafeValue method and overloads
  /// <summary>
  /// Escapes a double quote delimiter in a column value by doubling it.
  /// </summary>
  /// <param name="value"></param>
  /// <returns></returns>
  private string SafeValue(string value)
  {
    // If _textQualifier is null or empty, just return the value.
    if (string.IsNullOrEmpty(_textQualifier)) return value;

    if (value.Contains(_textQualifier))
    {
    // If the value contains one or more text qualifiers,
    // escape them by replacing them with two text qualifiers.
    return value.Replace(_textQualifier, _textQualifier + _textQualifier);
    }
    return value;
  }

  /// <summary>
  /// Overloaded method for DateTime value (meant as an example)
  /// For other data types, create new overloaded methods,
  /// and in that overloaded method convert the value to string,
  /// before calling the 'SafeValue' method with a string parameter.
  /// </summary>
  /// <param name="value"></param>
  /// <returns></returns>
  private string SafeValue(DateTime value)
  {
    return SafeValue(value.ToString("yyyy-MM-dd HH:mm:ss.fff"));
  }

  /// <summary>
  /// Overloaded method for int value
  /// </summary>
  /// <param name="value"></param>
  /// <returns></returns>
  private string SafeValue(int value)
  {
    return SafeValue(value.ToString());
  }
}
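
To support other data types, follow the same pattern: convert the value to a string and call the existing SafeValue(string) method. As an illustration, a hypothetical overload for decimal values (not part of the downloadable script) could look like this:

SafeValue overload for decimal (example)
  /// <summary>
  /// Overloaded method for decimal value (hypothetical example).
  /// </summary>
  /// <param name="value"></param>
  /// <returns></returns>
  private string SafeValue(decimal value)
  {
    // Use the invariant culture, so the decimal separator does not depend on the regional settings of the server.
    return SafeValue(value.ToString(System.Globalization.CultureInfo.InvariantCulture));
  }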

When the script is ready, Choose Rebuild from the BUILD menu in the VstaProjects – Microsoft Visual Studio Window. Then close that Window. You return to the Script Transformation Editor, close it by pressing OK.

Testing the package

I simply run the package in debug mode. 105 currency rows are added to the csv file.
Including the header and footer, the csv file contains 107 rows, and this is the number that is written to the footer line.

SSIS-S01E04-110
Running the package in debug mode

SSIS-S01E04-111
First and last part of the created csv file.

Please note that to change the delimiter and text qualifier you only have to make two changes in the ***** Configuration ***** region.

Configuration
  #region ***** Configuration *****

  // Here you can change the column delimiter and the text qualifier easily.
  private const string _delimiter = ";";
  private const string _textQualifier = "\"";
. . .

  #endregion ***** Configuration *****

After this change the file will look as follows:
SSIS-S01E04-112
First and last part of the created csv file with a different delimiter and text qualifier.

I also did a test with a source query that returned over 1.1 million rows, and the csv file of 25 MB was created in just 3 seconds (on a core i7 laptop with SSD).
So my conclusion is that the performance of this solution is very satisfying.

Download the C# script for the script component here.

Conclusion / Wrap up

In this post I have demonstrated how to use SSIS 2014 to create a csv file with a dynamic header (with a timestamp) and footer line (including a rowcount) using a Script Component as a Destination in a Data Flow Task.

(Promotional)

Big Data & Analytics insight 2016: save the date for an insightful conference
(as a speaker I was asked to do a bit of promotion ..)

(c) 2016 hansmichiels.com – Do not steal the contents – spread the link instead – thank you.

Fixing corrupt csv files in a SSIS Package (SSIS Series)

(Promotional)

Big Data & Analytics insight 2016: save the date for an insightful conference
(as a speaker I was asked to do a bit of promotion ..)

Introduction

A few weeks ago my blog post was about using a C# Script Component to detect duplicates in a Data Flow Task.

This week I want to discuss its brother, the C# Script Task, which can be used in the Control Flow.

Again, extremely flexible: basically you embed a small C# program inside your package, and you can use the entire .NET framework for all kinds of things. I used it to fix malformed csv source files, before processing them in a Data Flow Task.

Pros and Problems

Csv (comma separated) files have their pros and problems, as compared with other formats, like XML.

A few pros are that they are usually smaller in size than XML files, and easier and faster to load. These are two reasons why csv files still are used a lot these days, despite the fact that XML files are in some cases more reliable, and more flexible as far as the file contents are concerned.

Unfortunately csv files are not so good at holding memo/comment fields that might contain carriage return/line feeds. Memo fields with those characters do not stay on one line in the file, which makes the csv file more difficult to process (at least the standard Flat File Source can not handle it in all cases).

Also the structure is very strict. If a column is added to a csv source file, your Data Flow Task will break.

Solution

In this post I will discuss a possible solution for both problems:

  1. carriage return/line feeds in contents of csv files
  2. changing file formats over time

This post assumes that you have worked with SSIS before, know what For Each Loop Containers and Data Flow Tasks do, and recognize the csv problems above. It is a big advantage if you can read and understand C# code.

Okay, let’s start!

Scenario

My demo is based on the following hypothetical scenario:

  1. You need to process multiple CustomerDetails.csv files.
  2. In this file a pipe (|) is used as separator between the columns.
  3. The file contains a header line: SocialSecurityNumber|FirstName|LastName|Gender|Residence|Comments
  4. The Comments column is added later, your package also needs to be able to load historic files without the Comments column.
  5. CustomerDetails.csv files come from hundreds of locations and systems (hereafter called: sources) world wide. Fixing the csv files in the export processes at all these locations is impossible. Even if some sources deliver correct csv files, your SSIS package cannot rely on the fact that all sources will do so.

Overview

Here are a few pictures of the end result:
SSIS-S01E03-109
The connection managers that are needed

SSIS-S01E03-110
The Control Flow
SSIS-S01E03-111
The Data Flow

Detailed steps

Parameters and variables

First you need to add a few parameters and variables to the package.

Parameters:

  • SourceFolder: The folder that the For Each Loop Container reads for csv files.
  • ArchiveFolder: The folder to which files are moved after being processed, if no error occurred.
  • ErrorFolder: The folder to which files are moved that caused an error during processing.

All parameters are Required and their Data type is String.

SSIS-S01E03-113
The parameters of the package

Variables:

  • CRLF: This variable will be filled with a Carriage Return/Line Feed in the Script Task.
  • CRLF_Replacer: This is the character string that is used to replace CarriageReturn/LineFeeds in the Comment column of the file. It must be a character string that does not occur in the Comment field. Therefore ~CRLF~ is used. If you want a different character string, you can change it to something else.
  • SourcePathAndFilename: The full path and filename of the original file to process. It is filled by the For Each Loop Container. The value set at design time is unimportant.
  • PathAndFilenameToProcess: The full path and filename of the file to process. This can be either the original file, or the modified temporary file. The Flat File Connection Manager uses this path and filename. The value set at design time is unimportant.

The Data type of all variables is String.

SSIS-S01E03-114
SSIS variables needed to do the job

The Control Flow – For Each Loop Container

Add a For Each Loop Container FELC – Load sourcefiles and configure it as a Foreach File Enumerator. Retrieve file name Fully qualified.

SSIS-S01E03-115

On the Collection pane, just below the Enumerator type, press the ellipsis next to Expressions. In the Property Expressions Editor configure the Property Directory to be set by
Expression @[$Package::SourceFolder].
SSIS-S01E03-116

On the Variable Mappings pane, map variable User::SourcePathAndFilename to Index 0
SSIS-S01E03-102
For Each Loop Container configuration

The Control Flow – Script Task

Next, add a Script Task SCR – Fix csv file if needed.
In the Script Task Editor, configure the following:

  • ReadOnlyVariables: User::CRLF_Replacer,User::SourcePathAndFilename
  • ReadWriteVariables: User::CRLF,User::PathAndFilenameToProcess

SSIS-S01E03-117
Variable configuration in the Script Task Editor

Then press the Edit Script… button.
Don’t get scared by the amount of C# code; if you keep your head cool, it will be fixed in 10 minutes!

Adjust the script as follows:

  • In the Namespaces region, if not already there, add:
    using System.IO;
    using System.Text;


  • SSIS-S01E03-108
    Replace the highlighted part above with the code below

    Now remove public void Main() completely and replace it with the code underneath (to avoid malformatting due to html rendering, please download the original csharp script here).
    Then check the code for !!! ATTENTION !!! comment. These are points in the script that you might need to change.

    ScriptTask.cs

      /*
      (c) Copyright 2016 - hansmichiels.com
     
      This program is free software: you can redistribute it and/or modify
      it under the terms of the GNU General Public License as published by
      the Free Software Foundation, either version 3 of the License, or
      (at your option) any later version.
     
      This program is distributed in the hope that it will be useful,
      but WITHOUT ANY WARRANTY; without even the implied warranty of
      MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
      GNU General Public License for more details.
     
      You should have received a copy of the GNU General Public License
      along with this program. If not, see http://www.gnu.org/licenses/.
      */

     
      /// <summary>
      /// This method is called when this script task executes in the control flow.
      /// Before returning from this method, set the value of Dts.TaskResult to indicate success or failure.
      /// To open Help, press F1.
      /// </summary>
     public void Main()
     {
      // Copy a few SSIS variables to C# string variables.
       string sourcePathAndFilename = Dts.Variables["User::SourcePathAndFilename"].Value.ToString();
       string crlfReplacer = Dts.Variables["User::CRLF_Replacer"].Value.ToString();
     
       // Set the variable that needs to hold a CarriageReturn/LineFeed.
       Dts.Variables["User::CRLF"].Value = string.Format("\r\n");
     
       // Call the method that checks if the sourcefile is corrupt and if so, fixes it.
       FixFileIfCorrupt(sourcePathAndFilename, crlfReplacer);
     
       // Set returnvalue for script task.
      Dts.TaskResult = (int)ScriptResults.Success;
     }
     
      /// <summary>
      /// Csv-file can have lines with carriage return-linefeeds, causing errors during the load.
      /// By preinvestigating the file we can correct the file and prevent these errors from occurring.
      /// Source: http://www.hansmichiels.com/2016/05/22/fixing-corrupt-csv-files-in-a-ssis-package-ssis-series/
      /// Author: Hans Michiels
      /// License: GNU General Public License, see http://www.gnu.org/licenses/
      /// </summary>
      private void FixFileIfCorrupt(string sourcePathAndFilename, string crlfReplacer)
      {
       string header = string.Empty;
       string headerAppend = string.Empty;
       string detailAppend = string.Empty;
     
       // Initialize pathAndFilenameToProcess with sourcePathAndFilename. Overwrite later if needed.
       string pathAndFilenameToProcess = sourcePathAndFilename;
     
       // Check if the header of the file is as expected, and return the header row through a ref parameter.
       // If the file header is not as expected, do not touch the file and exit this method.
       if (!CheckAndReturnHeader(sourcePathAndFilename, ref header, ref headerAppend, ref detailAppend)) return;
       
       // Get the encoding of the current file. We need this if we have to create a corrected file.
       var encoding = GetEncoding(sourcePathAndFilename);
     
       // Read all lines of the file into a string array.
       // !!! ATTENTION !!! WARNING: only do this when the files to process are relatively small.
       // !!! ATTENTION !!! When in doubt, test it and monitor memory usage.
       // (I suppose files up to 50~100 MB might not be a big problem but larger might, but testing is the way to find out)
       string[] lines = File.ReadAllLines(sourcePathAndFilename, encoding);
     
       // Use a variable to set when the file has changed due to corrections this method makes.
       bool fileHasChanged = false;
       bool lineIsOkay = true;
     
       // appendbuffer is used to keep parts of 'broken' lines.
       string appendbuffer = string.Empty;
     
       // Get lower- and upperbound of the string array into variables.
       int lower = lines.GetLowerBound(0);
       int upper = lines.GetUpperBound(0);
     
       // Loop through all lines in the file from back to front!
       for (int i = upper; i >= lower; i--)
       {
        // If there is still some text in the buffer, append it to the line and make the buffer empty again.
        if (appendbuffer != string.Empty)
        {
          lines[i] = lines[i] + appendbuffer;
          appendbuffer = string.Empty;
          fileHasChanged = true;
        }
     
        // reset lineIsOkay variable, set to true underneath when the line passes the 'line ok' checks.
        lineIsOkay = false;
     
        // !!! ATTENTION !!!
        // Here you need to add some custom code.
        // Describe one or more properties of a correct line, e.g.
        // - first 8 characters are numeric;
        // - The length is at least 12;
        // - It contains at least x delimiters;
        // etc.
        // Then convert this check to C# code. Use your imagination if needed!
     
        // I check for:
         // - the length to be more than 12 characters.
        // - the first 8 characters (the social security number) to be numeric.
        // - at least 4 pipes.
        if (lines[i].Length > 12)
        {
          string checkString = lines[i].Substring(0, 8);
          int intOutput;
     
          lineIsOkay = (Int32.TryParse(checkString, out intOutput));
     
          if (lineIsOkay)
          {
           // Check that the line contains at least 4 delimiters.
           lineIsOkay = (lines[i].Split('|').Length - 1 >= 4);
          }
        }
     
        if ((!lineIsOkay) && (i > lower))
        {
          // Line is NOT OK.
          // I have to append this line to the previous one, except when it is the header.
          // To do this put this line into appendbuffer variable.
          // I use a replacement string for the removed carriage return/line feed.
          // This will be converted back to a carriage return/line feed in the Data Flow.
           appendbuffer = string.Format("{0}{1}", crlfReplacer, lines[i]);
          lines[i] = string.Empty; // Package can handle empty lines.
          fileHasChanged = true;
        }
     
       }
     
       // !!! ATTENTION !!! If you do not have old fileformats you can remove this block.
       // Code to handle the old file format, without a Comments column.
       // The code below reformats the file so that it become always in new fileformat, with an empty comment.
       if (headerAppend != string.Empty)
       {
        // This is an old format without the Comments column. Append Comments column header to header and a delimiter (pipe) to detail rows.
        fileHasChanged = true;
     
        for (int i = upper; i >= lower; i--)
        {
         if (lines[i].Length > 0)
         {
          if (i > lower)
          {
            lines[i] += detailAppend;
          }
          else
          {
            lines[i] += headerAppend;
          }
         }
        }
       }
     
       if (fileHasChanged)
       {
         // Here the modified/corrected file is stored to a temporary file, so a "tmp" extension is added to the modified filename.
        // For auditing purposes I keep the original file and not the corrected one,
        // because the latter can always be recreated when needed.
        // Also, when a bug would exist in the creation of the corrected file
        // (initially I had a problem with codepages, which should now be solved),
        // the original file might get lost forever.
         pathAndFilenameToProcess = String.Format("{0}.tmp", sourcePathAndFilename);
     
        // Now create a new corrected csv file.
        File.WriteAllLines(pathAndFilenameToProcess, lines, encoding);
       }
     
       // Also change the SSIS Package variable, because the Flat File Source uses it.
        Dts.Variables["User::PathAndFilenameToProcess"].Value = pathAndFilenameToProcess;
     
      }
     
      /// <summary>
      /// This method checks if the header is as expected. If not, return false.
      /// Also the value of by ref parameters 'header', 'headerAppend' and 'detailAppend' is set.
      /// Source: http://www.hansmichiels.com/2016/05/22/fixing-corrupt-csv-files-in-a-ssis-package-ssis-series/
      /// Author: Hans Michiels
      /// License: GNU General Public License, see http://www.gnu.org/licenses/
       /// </summary>
       /// <param name="sourcePathAndFilename"></param>
       /// <param name="header"></param>
       /// <param name="headerAppend"></param>
       /// <param name="detailAppend"></param>
      private bool CheckAndReturnHeader(string sourcePathAndFilename,
        ref string header, ref string headerAppend, ref string detailAppend)
      {
       // Initialize return values.
       header = string.Empty;
       headerAppend = string.Empty;
       detailAppend = string.Empty;
     
       // Open the file to read the header line, then close the file again.
       using (StreamReader reader = new StreamReader(sourcePathAndFilename))
       {
        header = reader.ReadLine();
        reader.Close();
       }
       // !!! ATTENTION !!! Here you could do a check on (parts) of the header,
       // so that you do not change files with an unexpected format.
        if (!header.ToLower().StartsWith("socialsecuritynumber|firstname|lastname|gender|residence"))
       {
        return false;
       }
       // !!! ATTENTION !!! Here you could check for old file formats.
       // Old formats can only be handled when one or more new columns were added at the end of the csv lines.
       // In the example, there is a file format without the Comments column still being delivered.
       // By appending the Comments column with an empty value for those files, they can be processed
       // by the same Data Flow Task using only one Flat File Source Connection.
        if (!header.ToLower().Contains("|comments"))
       {
        // This is an old format without the Comments column.
        // Append Comments column header to header and a delimiter (pipe) to detail rows.
         headerAppend = "|Comments";
         detailAppend = "|";
       }
       return true;
      }
     
      /// <summary>
      /// Determines a text file's encoding by analyzing its byte order mark (BOM).
      /// Defaults to ASCII when detection of the text file's endianness fails.
      /// Source: http://stackoverflow.com/questions/3825390/effective-way-to-find-any-files-encoding
       /// With thanks to the author '2Toad'.
      /// </summary>
       /// <param name="filename">The text file to analyze.</param>
      /// <returns>The detected encoding.</returns>
      public static Encoding GetEncoding(string filename)
      {
       // Read the BOM
       var bom = new byte[4];
       using (var file = new FileStream(filename, FileMode.Open, FileAccess.Read))
       {
        file.Read(bom, 0, 4);
       }
     
       // Analyze the BOM
       if (bom[0] == 0x2b && bom[1] == 0x2f && bom[2] == 0x76) return Encoding.UTF7;
       if (bom[0] == 0xef && bom[1] == 0xbb && bom[2] == 0xbf) return Encoding.UTF8;
       if (bom[0] == 0xff && bom[1] == 0xfe) return Encoding.Unicode; //UTF-16LE
       if (bom[0] == 0xfe && bom[1] == 0xff) return Encoding.BigEndianUnicode; //UTF-16BE
       if (bom[0] == 0 && bom[1] == 0 && bom[2] == 0xfe && bom[3] == 0xff) return Encoding.UTF32;
       // return Encoding.GetEncoding(1252); // Without BOM, default to Windows Codepage 1252
       return Encoding.Default; // Without BOM, the encoding for the OS's current ANSI code page is returned by default.
      }

    When you have adjusted the script for your file, choose Rebuild from the BUILD menu in the VstaProjects – Microsoft Visual Studio Window. Then close that Window. You return to the Script Task Editor, close it by pressing OK.

    Developing the Data Flow Task

    Add a Data Flow Task to the Control Flow and name it DFT – Import Customer Details.
    Then add the following components to it:

    Flat File Source FF_SRC – CustomerDetails

    From the SSIS Toolbox, drag a Flat File Source to the Data Flow area and create a new Flat File connection manager.
    SSIS-S01E03-121

    If you want to rebuild the demo, download the csv files (and C# script) here.
    Configure the Flat File Connection as shown in the picture below:
    SSIS-S01E03-118

    Choose Advanced in the pane on the left and configure the properties for each column as follows:

    • SocialSecurityNumber: DataType string [DT_STR], OutputColumnWidth 9
    • FirstName: DataType Unicode string [DT_WSTR], OutputColumnWidth 50
    • LastName: DataType Unicode string [DT_WSTR], OutputColumnWidth 50
    • Gender: DataType string [DT_STR], OutputColumnWidth 1
    • Residence: DataType Unicode string [DT_WSTR], OutputColumnWidth 60
    • Comments: DataType Unicode string [DT_WSTR], OutputColumnWidth 1000

    SSIS-S01E03-120
    Example of column properties

    Conditional Split CSPL – Filter out empty lines

    We use a Conditional Split to filter out empty lines in the csv file. Because the SocialSecurityNumber is a required column, we assume that the line is empty (at least it is useless) if SocialSecurityNumber is empty.
    So add an Output named Empty lines with Condition SocialSecurityNumber == "". If you like, you can check on multiple columns to be empty.
    Rename the default Output to Non-empty lines.

    SSIS-S01E03-122
    Conditional Split Configuration

    Derived Column DC – Re-insert CRLF

    Because the (corrected) file might contain replacements for carriage return/line feeds, we are going to replace them with the original carriage return/line feeds, before the value is inserted into the database table.
    In the Derived Column Transformation Editor, add a derived column, configured as follows:

    • Derived Column Name: Comments
    • Derived Column: Replace ‘Comments’
    • Expression: REPLACE(Comments,@[User::CRLF_Replacer],@[User::CRLF])

    SSIS-S01E03-123
    The Derived Column Transformation Editor

    OLEDB Destination OLE_DST stg DimCustomer

    To rebuild the demo, create a table to store the rows from the data flow task,
    and add an OLEDB Destination that uses this table.

    destination_table.sql

    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    SET ANSI_PADDING ON
    GO
    CREATE TABLE [stg].[DimCustomer](
     [SocialSecurityNumber] [char](9) NOT NULL,
     [FirstName] [nvarchar](50) NULL,
     [LastName] [nvarchar](50) NULL,
     [Gender] [char](1) NULL,
     [Comments] [nvarchar](1000) NULL,
     [Residence] [nvarchar](60) NULL,
     CONSTRAINT [PK_DimCustomer_CustomerKey] PRIMARY KEY CLUSTERED
    (
     [SocialSecurityNumber] ASC
    )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO
    SET ANSI_PADDING OFF
    GO

    The Control Flow – File System Tasks

    Finally the File System Tasks archive the original file and delete the corrected file (if it exists).
    For auditing purposes I keep the original file and not the corrected one, because the latter can always be recreated when needed.
    Also, when a bug would exist in the creation of the corrected file (initially I had a problem with codepages, which should now be solved), the original file might get lost forever.

    Please go back to the picture of the Control Flow at the beginning of this post, to see how the File System Tasks are connected.
    Next are the printscreens that should help you to configure the File System Tasks.

    SSIS-S01E03-124
    Configuration of “FST – Move original file to archive folder”

    SSIS-S01E03-125
    Configuration of “FST – Move original file to error folder”

    SSIS-S01E03-126
    Precedence constraint of “FST – Delete temporary file” (you need to set it twice).

    SSIS-S01E03-128
    Configuration of “FST – Delete temporary file”

    The proof

    To show you that it works, I run the package in debug mode.
    To be able to have a look at the temporary/corrected files afterwards, I have disabled the File System Task that deletes this file.
    SSIS-S01E03-130

    When I open the “CustomerDetails.csv.tmp” file after the package has run, I can see that line 15 was corrected.
    From the second screenshot you can see that the CRLF_Replacer value was inserted twice.
    SSIS-S01E03-131
    SSIS-S01E03-132

    And when I run a query on the destination table, I can see that the Comments column contains the original CarriageReturn/LineFeeds!
    SSIS-S01E03-134

    By the way, all the data in the csv files is generated from random firstnames, lastnames and residences and fake social security numbers. Any match with existing persons would be based solely on coincidence.

    Conclusion / Wrap up

    Csv files still are useful today, because they have some advantages when compared with XML files.
    However they also have their limitations, which can be frustrating when loading those files with a Data Flow Task.
    In this blog post I have demonstrated how to overcome a few of those limitations, namely:

    1. carriage return/line feeds in contents of csv files
    2. changing file formats over time

    The solution is based on using a C# Script Task.

    Please be aware that I do not pretend this is the best solution.
    If you are very familiar with C#, you could do all of the processing in the Script Task, using SqlBulkCopy (see the sketch below), and you would not need a temporary file.
    However the Data Flow Task offers some advantages, like lookups you can use, without the need to do this in a script.
    If you create good Interface Specification Documents beforehand (for instance describing how to handle carriage return/line feeds in column values) you can demand that all sources deliver correct csv files, so they do not have to be corrected.
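
    For readers who want to explore that alternative: below is a minimal sketch of what such a SqlBulkCopy call could look like. The method, the connectionString parameter and the assumption that a DataTable was filled while parsing the csv file are all hypothetical; this is not part of the package described in this post.

    SqlBulkCopy sketch (example)
    using System.Data;
    using System.Data.SqlClient;

    /// <summary>
    /// Bulk inserts the rows of a DataTable into the staging table (hypothetical example).
    /// </summary>
    private static void BulkInsert(string connectionString, DataTable dataTable)
    {
      using (SqlBulkCopy bulkCopy = new SqlBulkCopy(connectionString))
      {
        // Assumption: the columns of the DataTable match the columns of [stg].[DimCustomer].
        bulkCopy.DestinationTableName = "[stg].[DimCustomer]";
        bulkCopy.WriteToServer(dataTable);
      }
    }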

    Download the C# script and csv files here.

    (c) 2016 hansmichiels.com – Do not steal the contents – spread the link instead – thank you.

Detect duplicates in a Data Flow Task (SSIS Series)

Intended audience

This article assumes that you have at least a basic understanding of SSIS, and the Data Flow Task in particular. C# knowledge is an advantage but not required.

Introduction

If you ever used the T-SQL windowing function ROW_NUMBER() OVER (PARTITION BY ..) to find duplicates in a query result, this post might be of interest for you.

Not all sources with potential duplicate entries (e.g. business keys) come from a SQL Server database, though. If the source is a csv or Excel file, ROW_NUMBER will not automatically be your friend (it might be when you use OPENROWSET, but I think that is a rather complicated solution).

C# script components are extremely flexible yet fairly easy to understand, develop and maintain.
This is why this is my preferred solution to tackle the problem above.
Let’s begin!

How to detect duplicates in a Data Flow Task using a C# Script component

First an overview of the end result (the Data Flow of the Data Flow Task). I will tell you step by step how to achieve this using SSIS 2014. I did not make a print screen of the Control Flow because it only contains one Data Flow Task.
Please note that I enabled two Data Viewers on both outputs of the Conditional Split component, so you can see the data that flows through (screenshot later).
SSIS-S01E02-image01
For the demo I will use the following components in a dataflow. To solve your problem just pick out the things you need.

  • An OLE DB Source
  • A script component (the most interesting part!)
  • An (optional) conditional split
  • Two OLE DB Destinations: one for the first occurrences of each business key, and one for the duplicates.

An OLE DB Source

My example is based on AdventureWorks. I select customers from the [AdventureWorksDW] database, and make sure there are duplicates by using UNION ALL in the SQL statement.
In the real world it would be more likely that your source would not be a SQL Server query, but for instance a csv- or Excel file.

OLEDB Source.sql

SELECT
  [CustomerKey], [CustomerAlternateKey] AS [CustomerBusinessKey],
  [Title], [FirstName], [MiddleName], [LastName],
  [NameStyle], [BirthDate], [MaritalStatus], [Suffix],
  [Gender], [EmailAddress], [YearlyIncome]
FROM
  [dbo].[DimCustomer]
UNION ALL
SELECT
  [CustomerKey], [CustomerAlternateKey] AS [CustomerBusinessKey],
  [Title], [FirstName], [MiddleName], [LastName],
  [NameStyle], [BirthDate], [MaritalStatus], [Suffix],
  [Gender], [EmailAddress], [YearlyIncome]
FROM
  [dbo].[DimCustomer]
UNION ALL
SELECT
  [CustomerKey], [CustomerAlternateKey] AS [CustomerBusinessKey],
  [Title], [FirstName], [MiddleName], [LastName],
  [NameStyle], [BirthDate], [MaritalStatus], [Suffix],
  [Gender], [EmailAddress], [YearlyIncome]
FROM
  [dbo].[DimCustomer]
  WHERE CustomerKey % 2 = 0

SQL Statement for the OLE DB Source

A script component (the most interesting part!)

From the SSIS Toolbox drag a Script Component to the Data flow surface. In the “Select Script Component Type” choose “Transformation”.

SSIS-S01E02-image02

To use a column value in the script, you have to define it as an input column. Select the column you want to check for duplicate values with Usage Type ReadOnly. In the demo this is the CustomerBusinessKey column.
SSIS-S01E02-image03

We also need a new column to store the occurrence number for the CustomerBusinessKey. To do this, in the Script Transformation Editor, Select Inputs and Outputs on the left side, then unfold Output 0 and Output Columns, respectively.
Then click on the Add column button to add a column with DataType four-byte signed integer [DT_I4]. Give the new column a clear name. In the demo it is called CustomerBusinessKeyOccurrence.
SSIS-S01E02-image03b

After these preparations we can start writing the C# script. Press the Edit script … button to do so.
SSIS-S01E02-image04

Adjust the script as follows (the changes are also marked inside red blocks in the picture below):

  • In the Namespaces region, add:
    using System.Collections.Generic;
  • Above PreExecute(), add:
    // The column to count duplicates for will be stored as Key (with string datatype) of the dictionary.
    // The occurrence counter will be stored as Value of the dictionary (with int datatype).
    private Dictionary<string, int> m_dictOccurrencesCount;
  • Inside PreExecute(), add:
    // Initialize the dictionary, otherwise it will be null when accessed.
    m_dictOccurrencesCount = new Dictionary<string, int>();

SSIS-S01E02-image05
Script adjustments – part 1

Then adjust the script as follows:

  • Inside Input0_ProcessInputRow(Input0Buffer Row), add:
    // Count occurrences of CustomerBusinessKeys.
    if (m_dictOccurrencesCount.ContainsKey(Row.CustomerBusinessKey))
    {
      // If the CustomerBusinessKey of the current row is already in the dictionary,
      // increase the counter with 1:
      m_dictOccurrencesCount[Row.CustomerBusinessKey]++;
      // Also assign the new value to the CustomerBusinessKeyOccurrence column,
      // so it can be used in the Data flow later.
      Row.CustomerBusinessKeyOccurrence = m_dictOccurrencesCount[Row.CustomerBusinessKey];
    }
    else
    {
      // If the CustomerBusinessKey of the current row is not yet in the dictionary,
      // add it, and set the counter to 1
      m_dictOccurrencesCount.Add(Row.CustomerBusinessKey, 1);
      // Also assign 1 to the CustomerBusinessKeyOccurrence column.
      Row.CustomerBusinessKeyOccurrence = 1;
    }

SSIS-S01E02-image06
Script adjustments – part 2

When the script is ready, Choose Rebuild from the BUILD menu in the VstaProjects – Microsoft Visual Studio Window. Then close that Window. You return to the Script Transformation Editor, close it by pressing OK.

An (optional) conditional split

Optionally you can add a Conditional Split.
I use the conditional split to store the first occurrences for every CustomerBusinessKey in one table and all duplicates in another table.
Both tables are only to show you that the script task and conditional split do their jobs properly. In real projects, you might develop this differently, for example by storing only the first occurrence.

SSIS-S01E02-image07
Conditional Split Configuration

Two OLE DB Destinations: one for the first occurrences of each business key, and one for the duplicates

I basically created two tables by composing a SELECT TOP 0 .. INTO statement from the OLE DB Source SQL Statement, because the result of the demo needs to flow somewhere.

SSIS-S01E02-OLEDB-Source.sql
IF OBJECT_ID('[dbo].[DimCustomerFirstOne]', 'U') IS NOT NULL
  DROP TABLE [dbo].[DimCustomerFirstOne]
GO
SELECT TOP 0
  [CustomerKey], [CustomerAlternateKey] + 'B' AS [CustomerBusinessKey],
  [Title], [FirstName], [MiddleName], [LastName],
  [NameStyle], [BirthDate], [MaritalStatus], [Suffix],
  [Gender], [EmailAddress], [YearlyIncome], CONVERT(int, 0) AS [OccurrenceNo]
INTO [dbo].[DimCustomerFirstOne]
FROM
  [dbo].[DimCustomer]
GO

IF OBJECT_ID('[dbo].[DimCustomerDuplicates]', 'U') IS NOT NULL
  DROP TABLE [dbo].[DimCustomerDuplicates]
GO
SELECT TOP 0
  [CustomerKey], [CustomerAlternateKey] + 'B' AS [CustomerBusinessKey],
  [Title], [FirstName], [MiddleName], [LastName],
  [NameStyle], [BirthDate], [MaritalStatus], [Suffix],
  [Gender], [EmailAddress], [YearlyIncome], CONVERT(int, 0) AS [OccurrenceNo]
INTO [dbo].[DimCustomerDuplicates]
FROM
  [dbo].[DimCustomer]
GO

OLE DB Destination OLE_DST FirstOne uses the [dbo].[DimCustomerFirstOne] table.
OLE DB Destination OLE_DST Duplicate uses the [dbo].[DimCustomerDuplicates] table.

Let it flow, let it flow, let it flow

Okay, time to test.
When data flows, the “First occurrence” and “Duplicates” Data Viewers pop up. Then I can check that all the rows are divided well over both tables based on the value of column CustomerBusinessKeyOccurrence: exactly 1 or higher than 1.
When I run it with the Data Viewers disabled, the package execution finished literally in a split second. So the performance is excellent.
Test succeeded!
SSIS-S01E02-image08
The Data flow in Debug Mode

SSIS-S01E02-image09b
Data Viewer for First occurrence flow: CustomerBusinessKeyOccurrence is 1 for all rows.

SSIS-S01E02-image10b
Data Viewer for Duplicates flow: CustomerBusinessKeyOccurrence is 2 or higher for all rows.

Other considerations

If multiple columns make a row unique, concatenate all column values with a delimiter, and then add the concatenated result to the dictionary m_dictOccurrencesCount.
For instance you could check duplicates using the customer nameparts (firstname, middlename and lastname), the birthdate and the gender. This is a softer check than on a real business key, but sometimes you do not have that.
Of course, for this to work, you must add those columns as ReadOnly input columns to the script component.
You can create a concatenated value as follows:
Inside Input0_ProcessInputRow(Input0Buffer Row), start with:

// Concatenate FirstName, MiddleName, LastName, BirthDate and Gender, using a | as delimiter.
string businessKey = string.Format("{0}|{1}|{2}|{3:yyyyMMdd}|{4}",
  Row.FirstName, Row.MiddleName, Row.LastName, Row.BirthDate, Row.Gender).ToUpper();

The concatenated value is converted to uppercase, so that differences in lower- or uppercase lead to the same value.
Then use the businessKey variable (instead of Row.CustomerBusinessKey) in the if statement as shown above.

If the concatenated value becomes very long, you could calculate a hash value and add that to the dictionary instead (thanks for the suggestion, Joost). This would use less memory and might be faster, but the only way to find out if this is the case is testing .. testing ..
(I did not test this, to keep the demo simple and because performance did not seem to be an issue).
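
A minimal sketch of such a hash helper, as an illustration only (it is not part of the demo package and needs the extra namespaces System.Security.Cryptography and System.Text):

HashKey method (example)
  /// <summary>
  /// Returns a short hash string for a (potentially long) concatenated business key (hypothetical example).
  /// </summary>
  private string HashKey(string concatenatedKey)
  {
    using (SHA256 sha = SHA256.Create())
    {
      byte[] hashBytes = sha.ComputeHash(Encoding.UTF8.GetBytes(concatenatedKey));
      // Base64 keeps the dictionary key at a fixed, short length (44 characters), regardless of the length of the input.
      return Convert.ToBase64String(hashBytes);
    }
  }

You would then use HashKey(businessKey) instead of businessKey when accessing the dictionary.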

Thinking a bit further, you could use this concept also to calculate running totals.
Just configure the dictionary to hold a decimal instead of an int and you can calculate running totals for numeric values (e.g. sales figures) easily.
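
A minimal sketch of that idea, assuming (for this example only) that a numeric input column SalesAmount and a decimal output column RunningTotal were added to the Script Component:

Running total sketch (example)
  private Dictionary<string, decimal> m_dictRunningTotals;

  public override void PreExecute()
  {
    base.PreExecute();
    m_dictRunningTotals = new Dictionary<string, decimal>();
  }

  public override void Input0_ProcessInputRow(Input0Buffer Row)
  {
    // Add the amount of the current row to the total for this business key.
    if (m_dictRunningTotals.ContainsKey(Row.CustomerBusinessKey))
    {
      m_dictRunningTotals[Row.CustomerBusinessKey] += Row.SalesAmount;
    }
    else
    {
      m_dictRunningTotals.Add(Row.CustomerBusinessKey, Row.SalesAmount);
    }
    // Assign the running total to the output column, so it can be used further on in the Data Flow.
    Row.RunningTotal = m_dictRunningTotals[Row.CustomerBusinessKey];
  }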

Conclusion / Wrap up

In SSIS you can use a Script Component within a Data Flow Task to detect duplicates in the source data.
This can be an alternative to the T-SQL ROW_NUMBER() function, when the source is not a SQL Server Database.

Credits
My thanks go to Joost van Rossum for reviewing the initial draft of this article. His input was very useful.

(c) 2016 hansmichiels.com – Do not steal the contents – spread the link instead – thank you.

When reverse engineering is a necessary evil .. (SQL Server Series)

.. or: How to find data (text, guids, numbers) in your entire SQL Server database ..

Introduction

A number of years ago I had to make queries on an OLTP database used by third parties’ software. As a person preferring data warehouses for too many reasons to mention here, I am not proud of it, but sometimes these requests just happen. If you have the same now and then, you are not the only one 😉 .

Problem

A little simplified, this is what happened:
I started asking around, is there documentation? “No”, there wasn’t.
Is there a data dictionary? “No”, there wasn’t, either.
Can I have access to the database? “Yes, here’s the connection string.”
So I made a connection, but .. what do all these tables mean? How are they related? What are the foreign keys? The model didn’t tell me: there were no physical constraints, and the table- and column names were unclear.
Do you have anything else for me to see how this works? “Well, we have a PROD and a TEST environment, you can have read only access to PROD and read/write to the TEST.”
Okay, that is something ..

MSSQL-S01E02-testconnection
Picture credits: © David Coleman | Dreamstime Stock Photos
When there is only a (test)connection..

Towards a solution

Could you show me how you use the system? Luckily the employee of my client was very helpful and showed me how he used the application.
I tried to get a grip on the SQL Statements caused by his actions, but due to a lack of permissions on the database, this didn’t work.
This could be arranged, but it could take a while before I actually would have temporary “elevated” permissions.
As it was just a small piece of “in-between” work, that seemed an inefficient route to follow.
Okay, I have seen what you did, can I do that on the TEST as well? “Yeah, go ahead.”
So I started to make an order, using silly names and descriptions, and choosing weird, seldom used products. So, what’s next?
Ok, if I can find where my description “First order of Hans” is stored, I can start to map the frontend to the backend, or make clear for myself where the entered information is stored in the database.

Solution

This is where this script became useful. It finds all occurrences of one or more text values in a SQL Server database.

WARNING: Do not use in production environments, as this script can potentially run pretty long (10 to 20 minutes when I used it) and can have a negative impact on database performance in general (so other users might regret that you are running the script).
So use it in a test environment when you can.

SearchTextInDatabase.sql

--\
---) SearchTextInDatabase script.
---) Author: Hans Michiels
---) Searches for one or more texts in all (n)(var)char and (n)text
---) columns of a SQL Server database.
---) WARNING: Can potentially run very long, my advice is not
---) to run this script in production environments!
--/
/*
(c) Copyright 2016 - hansmichiels.com
 
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
 
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
 
You should have received a copy of the GNU General Public License
along with this program. If not, see http://www.gnu.org/licenses/.
*/

USE [AdventureWorks2014]
GO

--\-----------------------------------------------------------------------------------
---) DECLARATIONS
--/-----------------------------------------------------------------------------------
DECLARE @schema_name SYSNAME
DECLARE @table_name SYSNAME
DECLARE @column_name SYSNAME
DECLARE @nsql NVARCHAR(4000)
DECLARE @param_def NVARCHAR(500)
DECLARE @search_text NVARCHAR(1000)
DECLARE @search_texts TABLE ([search_text] NVARCHAR(1000))
DECLARE @debug_print NVARCHAR(500)
DECLARE @results_count INT = 0

IF OBJECT_ID('tempdb..#search_results') IS NOT NULL BEGIN
    DROP TABLE #search_results
END
CREATE TABLE #search_results
    ( [search_text] NVARCHAR(1000)
    , [schema_name] NVARCHAR(128)
    , [table_name] NVARCHAR(128)
    , [column_name] NVARCHAR(128)
    , [the_value] NVARCHAR(1000)
    )

DECLARE columns_cursor CURSOR LOCAL STATIC FOR
    SELECT sch.name AS [schema_name]
    , tbl.name AS [table_name]
    , col.name AS [column_name]
    FROM sys.tables tbl
     JOIN sys.schemas sch
        ON sch.schema_id = tbl.schema_id
      JOIN sys.columns col
        ON col.object_id = tbl.object_id
    WHERE col.system_type_id IN (175, 167, 231, 239, 35, 99) -- (N)(VAR)CHAR, (N)TEXT

--\-----------------------------------------------------------------------------------
---) INITIALIZATION
---) Specify the texts to search for below.
---) Use the wildcard % if you want to do a 'like' search.
--/-----------------------------------------------------------------------------------
INSERT INTO @search_texts

  SELECT '%Assembly%'
  UNION SELECT 'First order of Hans'

DECLARE search_cursor CURSOR LOCAL STATIC FOR
    SELECT search_text
    FROM @search_texts

--\-----------------------------------------------------------------------------------
---) MAIN
--/-----------------------------------------------------------------------------------
OPEN columns_cursor
FETCH NEXT FROM columns_cursor INTO @schema_name, @table_name, @column_name

WHILE @@FETCH_STATUS = 0
BEGIN

    SET @debug_print = 'Processing ' + @table_name + '.' + @column_name;
    PRINT REPLICATE('-', LEN(@debug_print))
    PRINT @debug_print
    PRINT REPLICATE('-', LEN(@debug_print))
    
    SET @nsql = 'INSERT INTO #search_results SELECT DISTINCT @inner_search_text, '
        + '''' + @schema_name + ''' AS [schema_name], '
        + '''' + @table_name + ''' AS [table_name], '
        + '''' + @column_name + ''' AS [column_name], CONVERT(NVARCHAR(1000), ' + QUOTENAME(@column_name) + ') AS [the_value] '
        + ' FROM ' + QUOTENAME(@schema_name) + '.' + QUOTENAME(@table_name) + ' WITH (NOLOCK) '
        + ' WHERE ' + QUOTENAME(@column_name) + ' LIKE @inner_search_text;';

    OPEN search_cursor
    FETCH NEXT FROM search_cursor INTO @search_text

    WHILE @@FETCH_STATUS = 0
    BEGIN
        PRINT ''
        PRINT '--> Processing search text ''' + @search_text + '''';
        SET @param_def = N'@inner_search_text NVARCHAR(1000)';
        EXECUTE sp_executesql @nsql, @param_def,
                              @inner_search_text = @search_text;

        FETCH NEXT FROM search_cursor INTO @search_text
    END
    
    CLOSE search_cursor

    FETCH NEXT FROM columns_cursor INTO @schema_name, @table_name, @column_name
    
END

CLOSE columns_cursor
DEALLOCATE columns_cursor

SELECT
      t.schema_name
    , t.table_name
    , t.column_name
    , t.the_value
    , 'SELECT * FROM ' + QUOTENAME(t.schema_name) + '.' + QUOTENAME(t.table_name)
      + ' WHERE ' + QUOTENAME(t.column_name) + ' = ''' + REPLACE(t.the_value, '''', '''''') + '''' AS [select_statement]
 FROM #search_results t

GO

Some time later, I also made two similar versions for searching unique identifiers and numeric values.
I know the scripts might not look so ‘pretty’, but they do the job when needed.

SearchUniqueIdentifierInDatabase.sql

--\
---) SearchUniqueIdentifierInDatabase script.
---) Author: Hans Michiels
---) Searches for one or more unique identifiers in all UNIQUEIDENTIFIER columns
---) of a SQL Server database.
---) WARNING: Can potentially run very long, my advice is not
---) to run this script in production environments!
--/
/*
(c) Copyright 2016 - hansmichiels.com
 
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
 
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
 
You should have received a copy of the GNU General Public License
along with this program. If not, see http://www.gnu.org/licenses/.
*/

USE [AdventureWorks2014]
GO

--\-----------------------------------------------------------------------------------
---) DECLARATIONS
--/-----------------------------------------------------------------------------------
DECLARE @schema_name SYSNAME
DECLARE @table_name SYSNAME
DECLARE @column_name SYSNAME
DECLARE @nsql NVARCHAR(4000)
DECLARE @param_def NVARCHAR(500)
DECLARE @search_value UNIQUEIDENTIFIER
DECLARE @search_values TABLE ([search_value] UNIQUEIDENTIFIER)
DECLARE @debug_print NVARCHAR(500)
DECLARE @results_count INT = 0

IF OBJECT_ID('tempdb..#search_results') IS NOT NULL BEGIN
    DROP TABLE #search_results
END
CREATE TABLE #search_results
    ( [search_value] UNIQUEIDENTIFIER
    , [schema_name] NVARCHAR(128)
    , [table_name] NVARCHAR(128)
    , [column_name] NVARCHAR(128)
    , [the_value] UNIQUEIDENTIFIER
    )

DECLARE columns_cursor CURSOR LOCAL STATIC FOR
    SELECT sch.name AS [schema_name]
    , tbl.name AS [table_name]
    , col.name AS [column_name]
    FROM sys.tables tbl
     JOIN sys.schemas sch
        ON sch.schema_id = tbl.schema_id
      JOIN sys.columns col
        ON col.object_id = tbl.object_id
                              -- UNIQUEIDENTIFIER
    WHERE col.system_type_id IN (36)

--\-----------------------------------------------------------------------------------
---) INITIALIZATION
---) Specify the unique identifiers to search for below.
--/-----------------------------------------------------------------------------------
INSERT INTO @search_values
SELECT 'D081136E-38D6-4D42-8FFD-19A6A8FA61E6'
UNION SELECT '9AADCB0D-36CF-483F-84D8-585C2D4EC6E9'
UNION SELECT '32A54B9E-E034-4BFB-B573-A71CDE60D8C0'
UNION SELECT '4C506923-6D1B-452C-A07C-BAA6F5B142A4'

DECLARE search_cursor CURSOR LOCAL STATIC FOR
    SELECT search_value
    FROM @search_values

--\-----------------------------------------------------------------------------------
---) MAIN
--/-----------------------------------------------------------------------------------
OPEN columns_cursor
FETCH NEXT FROM columns_cursor INTO @schema_name, @table_name, @column_name

WHILE @@FETCH_STATUS = 0
BEGIN

    SET @debug_print = 'Processing ' + @table_name + '.' + @column_name;
    PRINT REPLICATE('-', LEN(@debug_print))
    PRINT @debug_print
    PRINT REPLICATE('-', LEN(@debug_print))
    
    SET @nsql = 'INSERT INTO #search_results SELECT DISTINCT @inner_search_value, '
        + '''' + @schema_name + ''' AS [schema_name], '
        + '''' + @table_name + ''' AS [table_name], '
        + '''' + @column_name + ''' AS [column_name], ' + QUOTENAME(@column_name) + ' AS [the_value] '
        + ' FROM ' + QUOTENAME(@schema_name) + '.' + QUOTENAME(@table_name) + ' WITH (NOLOCK) '
        + ' WHERE ' + QUOTENAME(@column_name) + ' = @inner_search_value ;';

    OPEN search_cursor
    FETCH NEXT FROM search_cursor INTO @search_value

    WHILE @@FETCH_STATUS = 0
    BEGIN
        PRINT ''
        PRINT '--> Processing search value ''' + CONVERT(NVARCHAR(40), @search_value) + '''';
        SET @param_def = N'@inner_search_value UNIQUEIDENTIFIER';
        EXECUTE sp_executesql @nsql, @param_def,
                              @inner_search_value = @search_value;

        FETCH NEXT FROM search_cursor INTO @search_value
    END
    
    CLOSE search_cursor

    FETCH NEXT FROM columns_cursor INTO @schema_name, @table_name, @column_name
    
    SELECT @results_count = (SELECT COUNT(*) FROM #search_results);
    
END

CLOSE columns_cursor
DEALLOCATE columns_cursor

SELECT
      t.schema_name
    , t.table_name
    , t.column_name
    , t.the_value
    , 'SELECT * FROM ' + QUOTENAME(t.schema_name) + '.' + QUOTENAME(t.table_name)
      + ' WHERE ' + QUOTENAME(t.column_name) + ' = ''' + REPLACE(t.the_value, '''', '''''') + '''' AS [select_statement]
 FROM #search_results t

GO


SearchNumberInDatabase.sql

--\
---) SearchNumberInDatabase script.
---) Author: Hans Michiels
---) Searches for one or more numeric values in all TINYINT, SMALLINT,
---) INT and BIGINT columns (and optionally FLOAT, DECIMAL and NUMERIC columns) of a SQL Server database.
---) WARNING: Can potentially run very long, my advice is not
---) to run this script in production environments!
--/
/*
(c) Copyright 2016 - hansmichiels.com
 
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
 
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
 
You should have received a copy of the GNU General Public License
along with this program. If not, see http://www.gnu.org/licenses/.
*/

USE [AdventureWorks2014]
GO

--\-----------------------------------------------------------------------------------
---) DECLARATIONS
--/-----------------------------------------------------------------------------------
DECLARE @schema_name SYSNAME
DECLARE @table_name SYSNAME
DECLARE @column_name SYSNAME
DECLARE @nsql NVARCHAR(4000)
DECLARE @param_def NVARCHAR(500)
DECLARE @search_value DECIMAL(38, 2)
DECLARE @search_values TABLE ([search_value] DECIMAL(38, 2))
DECLARE @debug_print NVARCHAR(500)
DECLARE @results_count INT = 0
DECLARE @max_difference DECIMAL(38, 2) -- Used to find values 'nearby'

IF OBJECT_ID('tempdb..#search_results') IS NOT NULL BEGIN
    DROP TABLE #search_results
END
CREATE TABLE #search_results
    ( [search_value] DECIMAL(38, 2)
    , [schema_name] NVARCHAR(128)
    , [table_name] NVARCHAR(128)
    , [column_name] NVARCHAR(128)
    , [the_value] DECIMAL(38, 2)
    )

DECLARE columns_cursor CURSOR LOCAL STATIC FOR
    SELECT sch.name AS [schema_name]
    , tbl.name AS [table_name]
    , col.name AS [column_name]
    FROM sys.tables tbl
     JOIN sys.schemas sch
        ON sch.schema_id = tbl.schema_id
      JOIN sys.columns col
        ON col.object_id = tbl.object_id
                              -- TINYINT, SMALLINT, INT, BIGINT
    WHERE col.system_type_id IN (48, 52, 56, 127
                              -- Uncomment to also search REAL, MONEY, FLOAT, DECIMAL, NUMERIC and SMALLMONEY columns:
                              --, 59, 60, 62, 106, 108, 122
                                )

--\-----------------------------------------------------------------------------------
---) INITIALIZATION
---) Specify the numbers to search for below “INSERT INTO @search_values”
---) Also set a value for @max_difference
---) @max_difference makes it possible to find nearby numbers.
--/-----------------------------------------------------------------------------------
SELECT @max_difference = 1.0
INSERT INTO @search_values
  SELECT 755
  --UNION SELECT 97867563

DECLARE search_cursor CURSOR LOCAL STATIC FOR
    SELECT search_value
    FROM @search_values

-------------------------------------------------------------------------------------
-- MAIN
-------------------------------------------------------------------------------------
OPEN columns_cursor
FETCH NEXT FROM columns_cursor INTO @schema_name, @table_name, @column_name

WHILE @@FETCH_STATUS = 0
BEGIN

    SET @debug_print = 'Processing ' + @table_name + '.' + @column_name;
    PRINT REPLICATE('-', LEN(@debug_print))
    PRINT @debug_print
    PRINT REPLICATE('-', LEN(@debug_print))
    
    SET @nsql = 'INSERT INTO #search_results SELECT DISTINCT @inner_search_value, '
        + '''' + @schema_name + ''' AS [schema_name], '
        + '''' + @table_name + ''' AS [table_name], '
        + '''' + @column_name + ''' AS [column_name], CONVERT(DECIMAL(38, 2), ' + QUOTENAME(@column_name) + ') AS [the_value] '
        + ' FROM ' + QUOTENAME(@schema_name) + '.' + QUOTENAME(@table_name) + ' WITH (NOLOCK) WHERE ABS(' + QUOTENAME(@column_name) + ' - @inner_search_value) <= ' + CAST(@max_difference AS VARCHAR(50)) + ';';

    PRINT @nsql
    
    OPEN search_cursor
    FETCH NEXT FROM search_cursor INTO @search_value

    WHILE @@FETCH_STATUS = 0
    BEGIN
        PRINT ''
        PRINT '--> Processing search value ''' + CONVERT(NVARCHAR(40), @search_value) + '''';
        SET @param_def = N'@inner_search_value DECIMAL(38, 2)';
        EXECUTE sp_executesql @nsql, @param_def,
                              @inner_search_value = @search_value;

        FETCH NEXT FROM search_cursor INTO @search_value
    END
    
    CLOSE search_cursor

    FETCH NEXT FROM columns_cursor INTO @schema_name, @table_name, @column_name
    
    SELECT @results_count = (SELECT COUNT(*) FROM #search_results);
    
END

CLOSE columns_cursor
DEALLOCATE columns_cursor

SELECT
      t.schema_name
    , t.table_name
    , t.column_name
    , t.the_value
    , 'SELECT * FROM ' + QUOTENAME(t.schema_name) + '.' + QUOTENAME(t.table_name)
      + ' WHERE ' + QUOTENAME(t.column_name) + ' = ' + CONVERT(NVARCHAR(50), t.the_value) AS [select_statement]
 FROM #search_results t

GO

Download all scripts here.

Conclusion / Wrap up

When you have to understand a SQL Server data model, for instance to get data out, and there is no documentation but only a test environment and a database connection available, these scripts can help you find where data entered in the front-end software is stored in the database.
A different use case is a database that uses unique identifiers for primary and foreign keys, while the foreign key relationships are not physically enforced and/or are flexible. In that case you can search for a specific unique identifier and see where it is used in the database.
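
For the second use case you would typically only adjust the INITIALIZATION section of the script before running it. A minimal sketch, assuming the unique identifier variant of the script declares a @search_values table variable just like the number variant above (the GUID shown is only a placeholder):

-- Hypothetical example: trace where this specific unique identifier is used.
INSERT INTO @search_values
  SELECT CONVERT(UNIQUEIDENTIFIER, '6F9619FF-8B86-D011-B42D-00C04FC964FF')
  -- Add more values with UNION SELECT if needed.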

(c) 2016 hansmichiels.com – Do not steal the contents – spread the link instead – thank you.

Stop being so precise! and more about using Load(end)dates (Datavault Series)

Introduction

My web site was offline a few days ago. Sorry for that.
I got this email from my webhosting provider: “A new message or response with subject: User *** has used up 153% of their bandwidth ..”.
Shortly after, they suspended my account; it was suspended for about 8 hours. I have upgraded the network traffic limit.
All right, if all the bandwidth was consumed, then somebody must be reading my blog. Thank you for that.

This week’s post is about the LoadDate and LoadEndDate.
Actually there are two things to get nerdy about:

  1. End dating without gaps
  2. Getting the best value for bytes on DATETIME2 precision

By the way, these topics apply to SQL Server, the examples are made using SQL Server 2014.

1. End dating without gaps

How end dating works

In a Data Vault satellite, subsequent versions of a row in the source system are distinguished through a LoadDate (which usually contains a date/time value). So the HashKey of the corresponding hub row plus the LoadDate form the primary key of the satellite.
However, getting the right version out when querying for a specific date requires a relatively difficult query, and this can also have a negative impact on performance.
This is why the LoadEndDate is an optional column of a satellite: it makes querying (read: getting data out) easier and better performing.

It is important to remember that the LoadEndDate is not the date/time the load (process) ended, but the date/time the row was replaced by a newer row for the same business entity in the hub. What’s in a name? If I had to choose I would just call it EndDate, but LoadEndDate is the standard, and once you know it, it is not a problem.
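
To make this a bit more tangible, here is a minimal satellite sketch. The schema, table and column names, and the CHAR(32) hash key (which assumes an MD5 hash stored as hexadecimal text), are illustrative assumptions, not a prescribed standard:

CREATE TABLE [dv].[SatCustomer]                   -- the [dv] schema is assumed to exist
    ( [CustomerHashKey] CHAR(32)      NOT NULL    -- hash key of the corresponding hub row
    , [LoadDate]        DATETIME2(2)  NOT NULL    -- date/time the row was loaded
    , [LoadEndDate]     DATETIME2(2)  NULL        -- date/time the row was replaced by a newer row
    , [RecordSource]    NVARCHAR(100) NOT NULL
    , [CustomerName]    NVARCHAR(100) NULL        -- example descriptive attribute
    , CONSTRAINT [PK_SatCustomer]
        PRIMARY KEY CLUSTERED ([CustomerHashKey], [LoadDate])
    )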

There are two ways to determine the value for this LoadEndDate:

  1. Exclusive: the LoadEndDate is the LoadDate of the new row that replaces this one, minus a small time fraction. Using the exclusive method enables the use of the BETWEEN keyword in Transact-SQL to get the right row out, for example:

    WHERE @SnapshotDate BETWEEN [LoadDate] AND [LoadEndDate]

    This is the method that is used in most Data Vault examples.

  2. Inclusive: the LoadEndDate is exactly equal to the LoadDate of the new row that replaces this one. This requires no computation when updating the LoadEndDate, but rules out the BETWEEN keyword; to get data out you need to do something like the query below (a combined sketch of both methods follows after this list):

    WHERE @SnapshotDate >= [LoadDate] AND @SnapshotDate < [LoadEndDate]
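
Putting the two methods side by side against the example satellite sketched earlier (a sketch only; @SnapshotDate, the table name and the assumption that the current row has a NULL LoadEndDate are mine, not part of a standard):

DECLARE @SnapshotDate DATETIME2(2) = '2016-04-01 00:00:00.00'

-- Exclusive method: BETWEEN works; the open (current) row with a NULL LoadEndDate
-- is translated to a high end date.
SELECT s.*
FROM [dv].[SatCustomer] s
WHERE @SnapshotDate BETWEEN s.[LoadDate] AND ISNULL(s.[LoadEndDate], '9999-12-31')

-- Inclusive method: >= and < instead of BETWEEN.
SELECT s.*
FROM [dv].[SatCustomer] s
WHERE @SnapshotDate >= s.[LoadDate]
  AND @SnapshotDate <  ISNULL(s.[LoadEndDate], '9999-12-31')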

Problem

There is a (theoretical) problem when using the exclusive method.
If you subtract too much, there will be a tiny time gap between the LoadEndDate and the LoadDate of the subsequent row.
I fully admit this is a theoretical problem, because the chances that you need the row that was valid at exactly that moment are astronomically small.
(There is the not-so-hypothetical situation that your load starts at midnight, the LoadDate is set before the first second of the new day has passed, and you subtract a second.
Then you do not find a row when using the exact date (without a time fraction) to get the row out. But you need a lot of bad luck for this to happen.)
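
A small sketch to illustrate the gap (the dates and the centisecond precision are just example assumptions): the old row is end dated with the new row's LoadDate minus one whole second, so a snapshot date inside that second matches neither row.

DECLARE @NewLoadDate    DATETIME2(2) = '2016-04-14 00:00:00.00'
DECLARE @OldLoadEndDate DATETIME2(2) = DATEADD(SECOND, -1, @NewLoadDate)  -- 2016-04-13 23:59:59.00
DECLARE @SnapshotDate   DATETIME2(2) = '2016-04-13 23:59:59.50'           -- falls inside the subtracted second

SELECT CASE
         WHEN @SnapshotDate > @OldLoadEndDate AND @SnapshotDate < @NewLoadDate
         THEN 'In the gap: the snapshot date matches neither the old row nor the new row'
         ELSE 'Covered by one of the rows'
       END AS [result]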

Solution

Still, if you are a purist you want to do it right, certainly because the solution is so simple.
If you make the “grain” of subtraction equal to the precision of the LoadDate, there is no gap.
For example:
– Subtract 1 second from DATETIME2(0)
– Subtract 1 centisecond from DATETIME2(2) ***TIP***
– Subtract 1 millisecond from DATETIME2(3)
– Subtract 1 microsecond from DATETIME2(6)
– Subtract 100 nanoseconds from DATETIME2(7)

Examples in T-SQL:

Exclusive enddate without gaps.sql

DECLARE @dt20 DATETIME2(0) = '2016-04-13 20:52:17'
DECLARE @dt22 DATETIME2(2) = '2016-04-13 20:52:17.00'
DECLARE @dt23 DATETIME2(3) = '2016-04-13 20:52:17.000'
DECLARE @dt26 DATETIME2(6) = '2016-04-13 20:52:17.000000'
DECLARE @dt27 DATETIME2(7) = '2016-04-13 20:52:17.0000000'

SELECT @dt20, DATEADD(SECOND, -1, @dt20)        -- subtract 1 second
SELECT @dt22, DATEADD(MILLISECOND, -10, @dt22)  -- subtract 1 centisecond (10 ms)
SELECT @dt23, DATEADD(MILLISECOND, -1, @dt23)   -- subtract 1 millisecond
SELECT @dt26, DATEADD(MICROSECOND, -1, @dt26)   -- subtract 1 microsecond
SELECT @dt27, DATEADD(NANOSECOND, -100, @dt27)  -- subtract 100 nanoseconds
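
And this is roughly how the exclusive method could be applied when end dating rows of the example satellite from earlier: a sketch only, not a prescribed loading pattern. It assumes DATETIME2(2) columns and uses LEAD(), which requires SQL Server 2012 or higher.

-- Set LoadEndDate to the next LoadDate of the same hub key, minus 1 centisecond (10 ms).
;WITH versions AS
    ( SELECT [LoadEndDate]
           , LEAD([LoadDate]) OVER (PARTITION BY [CustomerHashKey] ORDER BY [LoadDate]) AS [NextLoadDate]
      FROM [dv].[SatCustomer]
    )
UPDATE versions
SET [LoadEndDate] = DATEADD(MILLISECOND, -10, [NextLoadDate])
WHERE [NextLoadDate] IS NOT NULL
  AND ( [LoadEndDate] IS NULL
        OR [LoadEndDate] <> DATEADD(MILLISECOND, -10, [NextLoadDate]) )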

2. Getting the best value for bytes on DATETIME2 precision

This is about choosing a “smart” precision for your LoadDate and LoadEndDate columns. Unlike the older DATETIME datatype, DATETIME2 uses fewer or more bytes for storage depending on the precision you specify.
The storage size is 6 bytes for precision less than 3, 7 bytes for precision 3 or 4, and 8 bytes for all other precisions (I shamelessly pasted this from MSDN).
In the book “Building a scalable data warehouse with Data Vault 2.0” a lot of examples use DATETIME2(7). You can ask yourself why: why do you need to be precise to 100 nanoseconds? I dare to say that in most cases (except when loading real-time or near-real-time streaming data into your Data Vault), seconds would be precise enough.
But looking back a few lines at the storage sizes: DATETIME2(0) uses 6 bytes, but DATETIME2(2) ALSO uses 6 bytes. So with the latter you get higher precision for the same storage size. And for daily loads, the centisecond precision that DATETIME2(2) provides is really precise enough, believe me.
So DATETIME2(2) gives us the best value for bytes!
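
If you want to check this on your own server, DATALENGTH should reflect these storage sizes (a quick sketch; the expected values in the comments are based on the documented sizes quoted above):

DECLARE @d0 DATETIME2(0) = SYSDATETIME()
DECLARE @d2 DATETIME2(2) = SYSDATETIME()
DECLARE @d4 DATETIME2(4) = SYSDATETIME()
DECLARE @d7 DATETIME2(7) = SYSDATETIME()

SELECT DATALENGTH(@d0) AS [bytes_precision_0]  -- expected: 6
     , DATALENGTH(@d2) AS [bytes_precision_2]  -- expected: 6, same size but higher precision
     , DATALENGTH(@d4) AS [bytes_precision_4]  -- expected: 7
     , DATALENGTH(@d7) AS [bytes_precision_7]  -- expected: 8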


[Image DV-S01E04-precise: Why would your LoadDates and LoadEndDates be more precise than needed?]


In this way you save two bytes compared with DATETIME2(7), which uses 8 bytes and is used in the book. Because the LoadDate is part of the primary key of satellite tables, the primary key index will also be smaller.
In PIT tables, which usually contain multiple LoadDates, the storage gain is even bigger.
And what goes for money (“who does not respect a small gain, isn’t worth a big one”) also goes for the small performance gains you can get from this kind of tweak. Never forget that! Multiple small tweaks might add up to a noticeable performance improvement!

Conclusion / Wrap up

In this blog post you have read about the two methods for end dating: inclusive and exclusive.
You could also read how to use the exclusive method for end dating in Data Vault satellites without creating gaps in the timeline.
Finally, I discussed which DATETIME2 precision will in most cases be good enough for LoadDate and LoadEndDate columns.

(c) 2016 hansmichiels.com – Do not steal the contents – spread the link instead – thank you.