Executing your own .NET console application from SSIS (SSIS Series)

Introduction

The SSIS Script Task is a very powerful component to use in an SSIS package, and most of the time you can achieve what you want with it.
Still, I have faced a few situations where a Script Task was not the best solution. In those cases I made a .NET console application and executed it from within an SSIS package.

The full article is posted on SQLShack.com, so you can read further there ..

The Data Warehouse Automation Tools Survey Results (Data Warehouse Series)

Introduction

As you might know, I started a survey on the usage of and opinions about Data Warehouse Automation (DWA) tools.
Today I am presenting the results.
I sincerely want to thank all participants for sharing their thoughts.

The participants

First some insight into the persons that filled in the survey. There were not that many of you: 62 completed surveys.

In statistics there is a formula to compute the minimum sample size (number of participants), given a certain margin of error, reliability level and total population size. If I fill in 10% for the error margin, 90% for the reliability level and 20,000 for the total population (this number does not have much impact on the end result), the minimum sample size should be 68.
Please note that with a 10% error margin and a 90% reliability level, there is a considerable chance that the answers are not representative for the entire population of data warehouse professionals.
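
For those who want to verify the numbers: this is roughly Cochran's sample size formula with a finite population correction. Below is a small T-SQL sketch; the z-value of 1.645 for a 90% reliability level and the exact formula are assumptions on my part, online sample size calculators may round slightly differently.

minimum_sample_size.sql

DECLARE @z FLOAT = 1.645; -- z-score belonging to a 90% reliability (confidence) level
DECLARE @p FLOAT = 0.5;   -- assumed proportion (worst case)
DECLARE @e FLOAT = 0.10;  -- 10% error margin
DECLARE @N FLOAT = 20000; -- total population
-- Sample size for an infinite population, approximately 67.65:
DECLARE @n0 FLOAT = POWER(@z, 2) * @p * (1 - @p) / POWER(@e, 2);
-- Finite population correction; returns 68:
SELECT CEILING(@n0 / (1 + (@n0 - 1) / @N)) AS [MinimumSample];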

Participants by country

Participants are spread over 23 countries on several continents, with a concentration in the Netherlands and the United States.

Detailed list:

Country No of participants
Argentina 1
Australia 1
Austria 1
Belgium 3
Brazil 1
Bulgaria 1
Denmark 1
Germany 3
Ghana 1
Hungary 1
Iceland 1
India 4
Lithuania 1
Netherlands 20
New Zealand 1
Norway 1
Romania 1
Slovakia 1
Spain 1
Switzerland 3
Turkey 1
United Kingdom 2
United States 10

Job roles of participants

Almost half of the participants selected Data Warehouse Architect as best describing their current job role.
Furthermore there were 12 ETL developers and a number of other job roles, as shown in the graph.

Exact counts:

Job role No of participants
BI & Analytics/Data Science manager 2
BI Consultant 1
Business analyst 1
Data warehouse architect 30
Director 2
ETL Developer 12
Front end BI Developer 1
Manager 1
Multi-disciplined BI Professional 4
Project manager 6
SQL Developer 1

Familiarity with the investigated tools

The first question to answer is: how familiar are the respondents with these tools?
I used 5 levels of familiarity, each with a short name and explanation. The option values have the following meaning:

  1. Don’t know: I do not know this product.
  2. Heard of: I have heard of this product but I have not worked in a project where it was tried or used.
  3. Tried: I have tried this product but not used it in production environments.
  4. Used: I have worked in a project where this product was used.
  5. Using: I am currently working in a project where this product is being used.

The results in a graph:

And the exact figures, first absolute, then percentages:

Issuer – Productname Don’t know Heard of Tried Used Using
AnalytiX DS – CatfX / Data Vault Code Gen Bundle 44 14 3 0 0
Attunity – Attunity Compose (formerly BiReady) 36 21 4 0 1
Gamma Systems – Data Warehouse Studio 55 5 1 0 0
Javlin – CloverETL 51 7 2 1 0
Magnitude – Magnitude Kalido 39 20 1 1 0
Qosqo – Quipu 43 12 3 2 1
TimeXtender – TimeXtender 42 14 3 0 2
Trivadis – biGenius 52 5 1 1 2
Varigence – BimlExpress / Bidshelper 43 3 4 3 8
Varigence – Mist / BimlStudio 46 4 9 1 1
WhereScape – RED 35 10 6 5 5
Average 44.2 10.5 3.4 1.3 1.8
Issuer – Productname Don’t know Heard of Tried Used Using
AnalytiX DS – CatfX / Data Vault Code Gen Bundle 72.1% 23% 4.9% 0% 0%
Attunity – Attunity Compose (formerly BiReady) 58.1% 33.9% 6.5% 0% 1.6%
Gamma Systems – Data Warehouse Studio 90.2% 8.2% 1.6% 0% 0%
Javlin – CloverETL 83.6% 11.5% 3.3% 1.6% 0%
Magnitude – Magnitude Kalido 63.9% 32.8% 1.6% 1.6% 0%
Qosqo – Quipu 70.5% 19.7% 4.9% 3.3% 1.6%
TimeXtender – TimeXtender 68.9% 23% 4.9% 0% 3.3%
Trivadis – biGenius 85.2% 8.2% 1.6% 1.6% 3.3%
Varigence – BimlExpress / Bidshelper 70.5% 4.9% 6.6% 4.9% 13.1%
Varigence – Mist / BimlStudio 75.4% 6.6% 14.8% 1.6% 1.6%
WhereScape – RED 57.4% 16.4% 9.8% 8.2% 8.2%
Average 72.3% 17.2% 5.6% 2.1% 2.9%

I think the most remarkable finding is that most of the tools are not very well known: on average, 72.3% of the participants answered “Don’t know”.

Product satisfaction

The product satisfaction question showed that there are no ‘bad’ tools. All tools fulfill a need, and the scores range from 3 to 5 (on a scale of 1 to 5). So even the products with the lowest mark for satisfaction still score a ‘C’ (or, as we say in the Netherlands, a ‘vijfje’).

Value for money

Value for money is obviously a very subjective matter. What one person finds expensive, another might consider cheap. Most vendors are not transparent about their prices.
Marks vary from 1.80 to 3.46. What we can conclude from this is that some tools are perceived as offering relatively low value for money, while others offer better value for money, according to the participants of the survey.
The products with an average score of 3 or higher (again on a scale of 1 to 5) are both Varigence products (BimlExpress/Bidshelper and Mist/BimlStudio), AnalytiX DS (CatfX / Data Vault Code Gen Bundle) and Trivadis (biGenius).

By the way, the question in the survey used the following scale: 1 = cheap and 5 = expensive. I translated this into a value-for-money score by reversing the answers (so 1 became 5, 2 became 4, and vice versa). In this way, a higher score means a ‘cheaper’ pricing perception, and thus better value for money.
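
For those who want to apply the same reversal to their own survey data: the reversed score is simply 6 minus the original answer. A one-line T-SQL sketch, in which the table and column names are hypothetical:

value_for_money_score.sql

SELECT 6 - [PriceScore] AS [ValueForMoneyScore] -- 1 becomes 5, 2 becomes 4, 3 stays 3, and so on
FROM [dbo].[SurveyAnswers]; -- hypothetical table holding the raw price scores (1 = cheap, 5 = expensive)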

Other tools that were mentioned

A number of other tools were also mentioned, many of them only once or twice.
It is remarkable that tools which do not meet the “DWA Tool” definition were mentioned as well, for instance the Microsoft BI products.
What I learned from this is that I should have given a definition of what I meant by a “Data Warehouse Automation Tool”.
From Wikipedia:
“Data warehouse automation or DWA refers to the process of accelerating and automating the data warehouse development cycles, while assuring quality and consistency. DWA is believed to provide automation of the entire lifecycle of a data warehouse, from source system analysis to testing to documentation. It helps improve productivity, reduce cost, and improve overall quality.”

Worth mentioning is the open source tool ODE. I didn’t know it, but tried it recently. It’s a SQL-based, metadata-driven tool without a graphical user interface, and I could not get an “AdventureWorks” example working due to errors that occurred while working through the example. So I think this tool needs to become more stable, or should at least have a better manual on how to use it. But it’s open source, and that puts a smile on my face.
From a company/human resources perspective, working with such a tool can however lead to hidden costs, because using it can be more laborious than working with a commercial, ‘more stable’ product. And once you start changing the source code (for yourself or your company, not as a member of the open source development team), you can’t upgrade to newer versions anymore and are stuck with maintaining your own copy of the tool. And if you join the open source team, you are now doing two projects instead of one!
In the Netherlands we have an old-fashioned saying, “Bezint eer gij begint”, which means “think well before you start”, and I think it is applicable here as well. A license for a commercial tool might be worth the money.

What also appears to be the case is that a number of these tools are not for sale, but are tools that consultancy firms use for doing their projects.

List of all the other tools that were mentioned:

Issuer – Productname Mentions
Adante – Datawarehouse on Demand 1
Automic – One Automation 2
Birst – Birst 1
Centennium – TBD engine (template based development) 1
Cubido – Cubikit 1
DIKW – DHW Accelerator 1
Grip Op Data B.V. – GDIP 1
Infent – Infenture Insight 1
Informatica – Powercenter 2
Informatica – B2b data exchange 1
Informatica – Informatica 1
Informatica – MFT 1
I-refact – I-refactory 1
Kadenza – Smart Datawarehouse generator 1
Microsoft – MSBI(SSIS,SSAS,SSRS) 9
Microsoft – Power BI 1
Optimalbi Limited – ODE (Open source) 1
Oracle – ODI 1
Oracle – OWB 1
Own build – Excelbased tool 1
Pentaho – Kettle 2
Pentaho – Big Data 1
Rabobank – Foundation Toolkit 1
SAP – Data Services 2
SAP – Information Steward 1
Tableau – Server 1
Talend – Data Fabric 1
Talend – Open Studio 1
TopBi – Data Vault Generator 1
TopBI – HSG 1
Varigence – BimlFlex 1
Varigence – BimlOnline 1
Viz Analytics – Data Warehouse Studio 1

Comments given about all the tools

Participants could give their opinion about the tools in free text format – and they did!
Here are just the bare comments.

Attunity – Attunity Compose (formerly BiReady)
  • “Like features, makes work flow easily “
Qosqo – Quipu
  • “This one could be split up between the free and paid variant for this research.”
  • “Most advanced options on data vault modeling and generation in one tool. Does not support data marts, requires Groovy knowledge to change templates.”
TimeXtender – TimeXtender
  • “vendor lock-in, no best practices applied when generating SSAS cubes”
  • “lock in, very good features, easy to use (even for business users)”
Trivadis – biGenius
  • “if they extend SSIS generation no lock in, very flexibel, fast, cool”
  • “No Vendor-Lock (generated Code can be scheduled and executed native), Microsoft and Oracle-Support, Data Vault Support”
  • “Integrated product, short learning curve, easy to adapt”
Varigence – BimlExpress / Bidshelper
  • “Great to building SSIS packages.”
  • “Free, but you need to write the generating code first. Ideal for consultants though.”
  • “Nice combination, but needs to mature.”
  • “no lock in, very cool that you generate SSIS/SSAS, free, a lot of work coding, not for everyone”
  • “Very flexible, but long learning curve”
  • “Second best tool to generate SSIS packages. Requires a lot of exceptions if you want to alter parts of code for specific tables. Debugging is hell.”
Varigence – Mist / BimlStudio
  • “Has potential but is progressing slowly. Pricing is fair.”
  • “No vendor lock-in, as the result of BIML are just SSIS packages”
  • “More mature version.”
  • “easier, expensive, nice you can easily re-engineer from existing DWH”
  • “There is no reverse engineering module.We can not convert old package to use BimlStudio”
  • “Very flexible”
  • “Best tool to generate SSIS packages. Still requires a lot of manual labor for low level changes.”
WhereScape – RED
  • “We were able to increase development speed with a factor 3. “
  • “Vendor-Lock (needs App-Server for Scheduling, Processing), Microsoft not fully supported (only Beta), Data Vault Support”
  • “Creates stored procedures which clutters db’s, powerful tool, but has some annoyances”
  • “Several products which are not completely integrated, some features not “ready”, several limitations (especially when working with datasets, expensive for what you get, only based overseas (except smalll base in UK)”
  • “You have to click and type almost everything, then it “generates” a stored procedure. Also stores its repository in the same database and schema as your data warehouse.”
  • “Changed projects into initiatives enabling business analytics. “
  • “Like lineage, like support for multiple generation types (such as data vault), dislike performance, dislike excessive storage, dislike cost of consulting, dislike unreliability, dislike UI, dislike lack of adapters, dislike reliance on database procedures, dislike scheduler, dislike target specific repository”

Comments for other tools (not all available for sale)

Adante – Datawarehouse on Demand
  • “Very easy to use automation tool for Dimensional modeling”
Centennium – TBD engine (template based development)
  • “SQL server based metadata based staging, vault en datamart generator. Developed by Centennium, for the larger part made adult by me.”
I-refact – I-refactory
  • “A very solid, well thought solution for the data management. It’s not just for dwh, but for the whole data platform of your company.”
Kadenza – Smart Datawarehouse generator
  • “Model driven generator c-sharp”
Talend – Data Fabric
  • “Talend is a very friendly for completing complex operations like Data Integration, Data Quality and Big Data (towards Hadoop) with minimum programming skills”
Viz Analytics – Data Warehouse Studio
  • “Custom built from the ground up, with own ETL engine, very quick to build and implement DWs.”

Used technologies

One of the questions was about the technology being used (from the Gartner Magic Quadrant for BI and Analytics). As more than one answer was allowed, the total number of responses is higher than the number of participants: some participants use two or more technologies.
In the Top 5 we see Microsoft, Oracle, Tableau, SAP and Qlik. To be honest, nothing that surprises me.

Technology Mentions
Alteryx 7
Birst 3
Board International 3
Datameer 3
IBM 11
Information Builders 3
Microsoft 47
MicroStrategy 7
Oracle 25
Pentaho 6
Qlik 13
SAP 16
SAS 5
Sisense 1
Tableau 20
TIBCO Software 2
Yellowfin 1
Zoomdata 1

Conclusion / Wrap up

With this survey I tried to give the business intelligence community more insight into the available Data Warehouse Automation tools.
Due to the low number of participants, the conclusions come with large uncertainty margins, which I think is a pity.
Nevertheless, this investigation could be a good starting point for further analysis of DWA tools, or for a DWA tool selection.

(c) 2017 hansmichiels.com – Do not steal the contents – spread the link instead – thank you.

How to mimic a wildcard search on Always Encrypted columns with Entity Framework

Introduction

The title of this post should have been “How to implement wildcard search functionality with Always Encrypted, make deterministic encryption safer, and load initial data using SqlBulkInsert”, but as you understand, that’s just too long for a title.

A number of years ago I built a web application that stored “always encrypted” data in VARBINARY columns, before Microsoft offered this feature out of the SQL Server box. So in case the database server was compromised by hackers, no client details could be revealed.

Just like Microsoft’s solution, my Data Access Layer, built in .NET, did the encryption and decryption transparently for the other code that accessed it. And I had the same challenge: how to search on encrypted columns.

So it was with (maybe more than average) interest that I looked forward to finding out how the Always Encrypted feature of SQL Server 2016 works.

The full article is posted on SQLShack.com, so you can read further there ..

Temporal Table applications in SQL Data Warehouse environments (SQL Server Series)

Today the subject of investigation is the Temporal Table, a new feature in SQL Server 2016. My focus will be mainly on how to use it in data warehouse environments, but some general information passes by as well.
I want to cover the following topics:

  1. What is a temporal table (in short)?
  2. Can I use a temporal table for a table in the PSA (Persistent Staging Area)?
  3. Can I use a temporal table for a Data Vault Satellite?
  4. Is using temporal tables for full auditing in an OLTP system a good idea?

The full article is posted on SQLShack.com, so read further there ..

Using a Persistent Staging Area: What, Why, and How

Change log

Date Changes
21 February, 2017 Due to a very useful discussion on LinkedIn with Dan Linstedt, I have made some changes to this article to emphasize that a PSA does not have a permanent place in any EDW
(with the exception, as Dan suggests, of the Hadoop NoSQL platform used for slower storage mechanisms).

Today I want to discuss the Persistent Staging Area in an Enterprise Data Warehouse, or PSA for short. First I want to give credit to Roelant Vos, who has blogged about this subject before. He triggered me to take this concept further, to see “what’s in it for me” and to do a proof of concept using SQL Server 2016.

What is a Persistent Staging Area or PSA?

As an introduction, I want to explain what a (non-persistent) Staging Area is. Some of you will already know; so be it. A Staging Area is a “landing zone” for data flowing into a data warehouse environment. The data in a Staging Area is only kept there until it is successfully loaded into the data warehouse. Between two loads, all staging tables are emptied again (or dropped and recreated before the next load).
A Persistent Staging Area on the other hand, is a Staging Area that is not wiped out between loads: it contains full history from the source(s) that deliver data to it.


Picture credits: © creativecommonsstockphotos | Dreamstime Stock Photos
The “Staging Area” for passengers on an airport ..

Why would you want to make use of a PSA, and what are its pros and cons?

A PSA makes it possible to reload the next layer in the Data Warehouse (for example a Data Vault) when requirements change during development of the Data Vault / EDW, without going back to the original source(s). The original source(s) might not contain full history anymore (e.g. because it is an OLTP system or a web service returning only current data), or such a reload might be cumbersome because the source data is delivered as a large number of flat files.

As you might know, for data warehouses with (semi-)structured data I always advocate a multi-tier architecture, including a Data Vault for storing the full history of unmodified source data, and one or more dimensional models (virtual if performance allows, otherwise physical) as data feeds for cubes, reports, dashboards and BI tools.

So, I hear you think: a Data Vault already contains full history, so why would you need a PSA? The answer is simple: because building a Data Vault takes time. You do not load all data on day 1; most likely there are uncertainties about the model and your resources are restricted. So you build the Data Vault in increments. But you can already start capturing as much source data as you can! When Data Warehouse Automation tools are used, this is fairly easy to build. So even though you do not take the data any further into your data warehouse than the (Persistent) Staging Area, you already start collecting your history of data.

Note: a different reason to use a PSA could be that your source systems contain high data volumes and are geographically spread: you could then create a PSA on premises, near the source system, and export only delta data from it – each row would get an I, U or D indication for Insert, Update and Delete – to massively reduce data traffic between the source and the EDW. I will not discuss this use case today, because the pros and cons would then be completely different, and time travelling and full reloads would not be applicable.

Using a PSA has its pros and cons. To mention a few:

Pros:

  • You collect data, even for those parts of the data warehouse you have not properly designed and developed yet.
  • You can clear the Data Vault, or even change the model completely (this can happen, for instance when business concepts initially were not well understood), create new ETL Code to load data to the modified Data Vault and do a replay of loading full history of the data captured in the PSA.
  • It is important that the PSA is implemented with changed rows only. If done so, it does not have excessive storage needs. So, just like for a Data Vault Satellite, a new row is only added to a PSA table when something has changed for the primary (surrogate or natural) key. A start and end date/time can be used to implement this.
  • A (relational) PSA should have an optional, temporary character, and is not part of the core EDW tiers. A (relational) PSA should have an optional, temporary character, and is not part of the core EDW tiers! This was written twice on purpose! Do not forget! It is useful during building the EDW, but must be disarmed when the EDW has gone into production and has reached a maintenance phase. A good way to make sure this happens is to plan a decommission date and stick to the plan (can also be a plan like “after sources X, Y and Z are connected and are data feeds in production for 3 months”). Do not keep the PSA around for no good reason.
    In this way, most of the cons below are very likely to be not applicable!

Cons:

  • You can argue that you are in fact building a second data warehouse.
  • When sources change over time, your PSA might not reflect the current structure of the source system(s) anymore (e.g. a column is removed in the source system but you leave it in the PSA so that you can add new data with the column empty but keep the data in this column for older rows). This might be confusing.
  • When source systems are not used anymore (for instance when accounting software is replaced by a similar product from a different vendor), you need to add the tables of the new system to the PSA, while you also keep the tables of the old system, which will not receive new data anymore. Now you need to keep the ETL code that loads from the old system’s PSA tables, in case a full reload is needed, and you also need to develop code to get data from the new system’s PSA tables, and you might need logic to determine which tables should be loaded for which point in time. As you can understand, the maintenance of your PSA can now start to become a heavy burden. So, going back to the pros, make sure your PSA is optional and can be disarmed when the maintenance effort no longer justifies the benefits.

How could you implement a PSA?

To give an idea of how this fits into my favourite architecture, I’ve added the PSA to it.
For a normal load the PSA is just loaded from the Staging Area (preferably in parallel with loading the Enterprise Data Warehouse from the Staging Area).
A Virtual Staging Access Layer (for example implemented with views) ensures that ETL code for loading the Enterprise Data Warehouse can switch easily between the Staging Area and the PSA for loading data. An example of how to implement this follows below.


How the PSA fits in the EDW architecture

For the next schema I owe you an explanation, because I know it’s a bit weird to mix architecture with processing in one schema, but in this case I thought it made sense and could clarify how to reload the EDW from the PSA: no sources or Staging Area are involved, but instead a time travelling mechanism queries the data from the PSA through the Virtual Staging Access Layer, to replay all loads that have been done so far, but with modified ETL code to load data into a modified or extended Data Vault.


Reloading the DWH from the Persistent Staging Area

To set up time travelling, you could create a list of load sessions and their start date/times, then subtract a small time fraction and use that value as the point-in-time date/time value to retrieve all rows from the previous load.
Note: for the DATETIME2 columns, I use a precision of centiseconds, so DATETIME2(2). For justification see one of my older blog posts: Stop being so precise! and more about using Load(end)dates (Datavault Series). If needed you can use a higher precision for your data warehouse.

In a table:

Load no LoadStartDatetime Point-In-Time value
1 2017-02-01 02:00:01.74 2017-02-02 02:00:00.88
2 2017-02-02 02:00:00.89 2017-02-02 02:00:02.11
3 (last load so far) 2017-02-03 02:00:02.12 9999-12-31 23:59:59.98 (! not 99 !)

The Time Travel icon with the arrows in the picture above refers to the process that loops through all Start date/time values chronologically and triggers execution of the load process from PSA to EDW for every Point-In-Time value.
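
Such a list could be derived with a query like the sketch below. The load-session log table [psa].[LoadSession] and its columns are assumptions on my part; the 10 milliseconds that are subtracted correspond to one centisecond, the DATETIME2(2) precision mentioned above.

point_in_time_values.sql

SELECT
    ls.LoadNo,
    ls.LoadStartDatetime,
    ISNULL(
        -- Start date/time of the next load minus one centisecond ..
        DATEADD(MILLISECOND, -10, LEAD(ls.LoadStartDatetime) OVER (ORDER BY ls.LoadStartDatetime)),
        -- .. or the 'end of time' value for the last load so far:
        CONVERT(DATETIME2(2), '9999-12-31 23:59:59.98')
    ) AS PointInTimeValue
FROM
    [psa].[LoadSession] ls;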

SQL Server 2016 implementation: how could you implement loading the PSA from the Staging Area?

This logic is very similar to loading a Data Vault Satellite. For each staging table you should create a “History” version in the PSA database, with two extra columns (EffectiveStartDts and EffectiveEndDts) and a primary key defined as the primary key of the staging table plus the EffectiveStartDts. Most concepts of loading a satellite can be applied: you add rows that are new, insert new versions of rows that have changed, and mark absent rows (missing in the source) as such. For the PSA I would suggest simply updating the end date/time, so that the row will not be in the selection when a later point in time is used. Also, there is no need to create a separate “hub” table for each staging table.
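
A sketch of what such a “History” table could look like for the [stg].[Customer] staging table used in the example further below; the column list is shortened and the data types are assumptions:

CREATE_TABLE_psa_CustomerHistory.sql

CREATE TABLE [psa].[CustomerHistory]
(
    [CustomerID] INT NOT NULL,                 -- primary key of the staging table
    [FirstName] NVARCHAR(50) NULL,
    -- .. the other columns of the staging table ..
    [RowHash] VARBINARY(16) NULL,
    [EffectiveStartDts] DATETIME2(2) NOT NULL, -- extra column: start of validity
    [EffectiveEndDts] DATETIME2(2) NOT NULL,   -- extra column: end of validity
    CONSTRAINT [PK_psa_CustomerHistory]
        PRIMARY KEY CLUSTERED ([CustomerID], [EffectiveStartDts]) -- staging primary key plus EffectiveStartDts
);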

SQL Server 2016 implementation: how could you implement the historic reload from the PSA?

In a previous blog post, Isolate cross database “databasename” dependencies using synonyms and schemas, I already advocated the concept of putting each tier of the EDW in its own database schema, and using synonyms to achieve interconnectivity between objects in different databases. In the rest of this article I will assume that you understand this concept (if not, you could read that blog post first).

With some imagination, we can use a modified version of this concept by creating the Virtual Staging Access Layer in the PSA database. This virtual layer is needed for loading data from the PSA instead of the Staging Area.
Suppose the Staging database has a database schema stg, and that schema contains all tables that can be loaded into the EDW / Raw Data Vault.
What we can do is something similar in the PSA database: create a stg schema, but instead of tables this schema contains views with the same names and columns as the tables in the Staging database! These views select the rows that are valid at a certain point in time.

The challenge of retrieving only the rows valid at a point in time is an easy one: to the PSA database we add a table [stg].[PointInTime] with only one row and a [CurrentPointInTime] column of datatype DATETIME2. Now we can build a SELECT statement on the PSA table (same columns as the staging table, but with start and end date/time stamps to hold historic, changed and deleted rows) with a CROSS JOIN on the PointInTime table and a WHERE clause that selects only the rows valid at the given point in time.

An example of what such a view could look like:

CREATE_VIEW_stg_Customer.sql

CREATE VIEW [stg].[Customer]
AS
/*
==========================================================================================
Author: Hans Michiels
Create date: 15-FEB-2017
Description: View that has exactly the same structure as a table with the same name
             in the Staging Database.
             This view can be used for full reloads of all data in the PSA Database.
==========================================================================================
*/

SELECT
    [CustomerID],
    [FirstName],
    [Initials],
    [MiddleName],
    [SurName],
    [DateOfBirth],
    [Gender],
    [SocialSecurityNumber],
    [Address],
    [PostalCode],
    [Residence],
    [StateOrProvince],
    [Country],
    [RowHash]
FROM
    [psa].[CustomerHistory] hist
CROSS JOIN
    [stg].[PointInTime] pit
WHERE
    hist.EffectiveStartDts <= pit.CurrentPointInTime
    AND hist.EffectiveEndDts >= pit.CurrentPointInTime -- When INCLUSIVE enddating is used.
    -- AND hist.EffectiveEndDts > pit.CurrentPointInTime -- When EXCLUSIVE enddating is used.
GO

The next steps are:

  1. Make sure your ETL code to load the data has a mechanism to skip loading data from the sources to the Staging Area and from the Staging Area to the PSA when the PSA is used instead of the normal Staging Area. For instance, you could add a view [stg].[IsPSA] to both the Staging database and the PSA database, returning a single row with a single value, “False” or “True” respectively. In your package you could retrieve this value using the Staging connection manager, and change the flow of your package depending on whether the Staging database or the PSA database is connected.
  2. Build a time travelling mechanism (this could be a FullReloadMaster.dtsx package) that updates the stg.PointInTime table for every point-in-time date/time, and after every update starts the ETL process (e.g. a Master.dtsx SSIS package) with a changed connection string, connected to the PSA instead of the “normal” Staging database!

Here is a possible implementation of the [stg].[IsPSA] view, based on the fact that the PSA database will have _PSA_ in the database name and the staging database will not. This allows you to use the same version of the view for both the Staging and the PSA database.

CREATE_VIEW_stg_IsPSA.sql

CREATE VIEW [stg].[IsPSA]
AS
/*
==========================================================================================
Author: Hans Michiels
Create date: 10-FEB-2017
Description: View that returns one row with a bit column [IsPSA] that indicates
             if the current database is the PSA or the STG.
==========================================================================================
*/

SELECT
  CONVERT(BIT,
    CASE
    WHEN CHARINDEX('_PSA_', DB_NAME()) > 0 THEN 1
    ELSE 0
    END) AS [IsPSA]
GO


Example of what a FullReloadMaster.dtsx package could look like. Important is that the Staging database connection string is a parameter of Master.dtsx and is passed in by FullReloadMaster.dtsx, but now it is a connection to the PSA database.

Now the next challenge is how to get this working. If you use an SSIS data flow with an OLE DB Source (and a destination), it is not so difficult: you can just change the connection string of the OLE DB Source. I have done a proof of concept and it worked! You do not have to redeploy SSIS packages. If views in the PSA database are used with exactly the same structure as the tables in the Staging database, the SSIS package agrees with the “metadata” and the “VS_NEEDSNEWMETADATA” or “VS_ISBROKEN” errors do not occur. Of course you could also make views for reading the Staging tables in the Staging database. This would create an additional layer of abstraction, because then you read the data from views in both the Staging Area and the PSA, and you could, when needed, implement views whose structure differs from the underlying tables.

If you use SQL code (e.g. stored procedures) this will be a little more work, but still doable. You can follow one of the following strategies for storing the SQL Code:

  • a push strategy: SQL code is in the PSA database, and should then also be in the Staging database. From there, you push the data to the raw Data Vault.
  • a pull strategy: SQL code is in the raw Data Vault. From there you pull the data from the Staging Area or PSA. You will need to switch via synonyms: drop the stg synonyms and recreate them pointing to the PSA database before a historic reload is done (see the sketch after this list). Afterwards, the reverse action needs to take place to let them point to the Staging database again.
  • an outsider strategy: SQL code is in a separate database, and has synonyms for both the staging and the raw Data Vault tables and views. Now the same switching via synonyms can be used.
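
A sketch of such a synonym switch; the database name [EDW_PSA] and the object names are made up for the example:

switch_stg_synonym_to_psa.sql

-- Before a historic reload: let the stg synonym point to the view in the PSA database.
IF OBJECT_ID(N'[stg].[Customer]', N'SN') IS NOT NULL
    DROP SYNONYM [stg].[Customer];
CREATE SYNONYM [stg].[Customer] FOR [EDW_PSA].[stg].[Customer];
-- Afterwards, recreate the synonym pointing to the table in the Staging database again (the reverse action).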

A few considerations:

  • a push strategy forces you to install your SQL code twice: in the Staging and in the PSA database. Also, a Staging database might not feel like the “right place” for stored procedures.
  • a pull strategy and an outsider strategy force you to change synonyms, something that normally only has to be done during a deployment, but now also before a full reload (and the reverse action afterwards).

I have a slight preference for switching via synonyms, assuming that a full reload will not happen often.
Still, using the push strategy could be a valid choice too.

Conclusion / Wrap up

In this post you could read more about the concept of a Persistent Staging Area. Although I have not used it in production environments yet, I think it can be very useful in some projects.
Important to keep in mind:

  • Make sure that your data warehouse can survive if the PSA is switched off. So do not expose it to end users or make it a mandatory part of your ETL Code.
  • Plan a decommission moment when the PSA is created and stick to the plan. Do not keep the PSA around for no good reason.
  • Use code generation tools to update the PSA from the staging area to limit the development effort to set it up.
  • Virtualize the read access to the staging tables, for instance by using views.

If you stick to these “rules”, I think a PSA can prove its value for your Enterprise Data Warehouse!

(c) 2017 hansmichiels.com – Do not steal the contents – spread the link instead – thank you.

Is SSDT ready for Visual Studio 2017 RC already? (Tooling Series)

Introduction

At the moment of writing, Microsoft released Visual Studio 2017 RC (Release Candidate) a week ago. It is the fourth release of a Visual Studio 2017 “preview” since November 2016.
The final release is planned for the first half of 2017 (source: Redmond Channel Partner website).
Time to see what’s in the box!
And I am especially interested in how it will cooperate with SQL Server Data Tools (SSDT).

Download and installation

Visual Studio 2017 RC1 can be downloaded here. It’s a web installer, so during installation it will download the components you have selected.
Installation is quite straightforward.
A quiet install (with my PowerShell script) does not work yet; I did not put effort into finding out why.
I just started the executable for a normal install.
The number of components you can install is impressive.


Installable parts of Visual Studio 2017 ..

I have selected Data storage and processing and .NET desktop development (for my open source console applications).
After selection they appear in the summary on the right (see picture below).


Installation summary ..

The installation doesn’t take too long; I did not use a stopwatch, but I think it took about 10 to 15 minutes.

What draws the attention?

I have chosen the blue color theme, and Visual Studio 2017 looks very similar to Visual Studio 2015.


Choosing the color theme ..

What is new is a Visual Studio Installer shortcut in the “Apps” of Windows.


Visual Studio Installer shortcut ..
You can use it to modify your installation easily.


Visual Studio Installer screen ..

Although slightly out of scope, I couldn’t help opening one of my console apps, SQLAgentJobStarter, in this version to check if any upgrade or modification would be necessary. And the answer is: no. It just opens.
What draws my attention are features in the editor that in the past could only be achieved with third-party tools. For example, a code change is suggested, and a preview of the change is shown before you apply it. Useful!

Visual Studio suggests code changes and shows a preview of the change ..

Is SSDT ready for Visual Studio 2017?

No. Sorry.
I downloaded SQL Server Data Tools (17.0 RC1); the description says “Includes support for SQL Server vNext CTP1, but not recommended for production use.”
Apparently it is more related to “SQL Server vNext” (SQL Server 2018?) than to “Visual Studio vNext”.
And it’s just SSDT for Visual Studio 2015.
It doesn’t even install, probably because VS 2015 is not installed.

SSDT (17.0 RC1) does not install ..

But is this bad? Again, no. Just have some patience.
SSDT has a different release cycle than Visual Studio. On Microsoft’s SSDT blog there is no news on any SSDT activity for VS 2017 yet.
I have read on a forum (sorry, I could not find the link anymore) that it usually takes a few months after a new release of Visual Studio before an update for SSDT becomes available.

Conclusion / Wrap up

So, you can already install VS 2017, but for BI projects (SSIS, SSRS, SSAS) you will still have to use SSDT for VS 2015.
Visual Studio 2017 is expected to be released in the first half of 2017; add a few months for SSDT to catch up.

(c) 2017 hansmichiels.com – Do not steal the contents – spread the link instead – thank you.

SQL Agent Tips and Recap (SQL Server Series)

Introduction

If I had planned this upfront, I might have made a series about SQL Agent, but I didn’t.
So far this is what my blog contains about SQL Agent:

There are two reasons why I picked up this subject again today:

  • I want to share some tips on using SQL Agent Schedules.
  • I have made a major update to the SQLAgentJobStarter tool (the first post was about this tool) and wanted to bring it to your attention.

Tips about schedules

There are two things about SQL Agent job schedules that “got” me in the past:

  • Using meaningless names for schedules like “Schedule” because I thought it did not matter.
  • Use a scripted SQL Agent job with a schedule as a starting point for a new job.

But both ways of working have disadvantages, which you can read about below.

Using meaningless names for schedules

SQL Server Management Studio has an option for managing schedules, which is really convenient.

However, if your schedules have meaningless names, you have to click through every schedule (the number in the last column is a hyperlink) to find out which job uses it.

On the other hand, if you add the job name to the schedule name (I use the format “Schedule for <jobname>”), this is what you see! It looks far clearer to me.

Use a scripted SQL Agent job with a schedule as a starting point for a new job

What I did: script a job (that had a schedule), change the job name in the script, and in that way create a new job to edit further.
After doing this, I painfully found out that both jobs shared the same schedule! I changed the schedule of one job, and the change was made for both jobs!
This was caused by the (same) value for @schedule_uid in both scripts/jobs.
So if you work this way, make sure to generate a new GUID for the second job (for instance with the T-SQL function NEWID()), to avoid the schedule being shared.

The scripted value for @schedule_uid ..
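
Generating a fresh GUID to paste into the @schedule_uid parameter of the second job’s script is as simple as:

generate_new_schedule_uid.sql

SELECT NEWID() AS [NewScheduleUid]; -- copy this value into the scripted @schedule_uid of the second job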

Using alternate (non-time) triggers to start a SQL Agent job

My tool SQLAgentJobStarter has got a major update:

  1. added “Running” as a state that can be checked for a job (in the previous version the values were “Succeeded”, “Failed” or “Completed”);
  2. added the jobcondition property “reverse_condition”, which enables negative triggers, e.g. a file may not be found, or a job may not be running;
  3. a check is done that the jobs used for jobtriggers and jobtrigger conditions actually exist; if not, an error is thrown. This can prevent a silent failure when a job is renamed or removed after a jobtrigger has been made.

Read more here..

Conclusion / Wrap up

Because a lot of my time was consumed by the SQLAgentJobStarter improvements, this article is a short one.
Still, I hope I have shared some useful insights about SQL Server Agent job schedules.

(c) 2017 hansmichiels.com – Do not steal the contents – spread the link instead – thank you.

How a SSIS Lookup works different from a T-SQL JOIN (SSIS Series)

Introduction

With a Lookup in a data flow in an SSIS package you can achieve a similar result as with a T-SQL JOIN clause in a SELECT statement: you can check if a row exists in a different dataset, based on one or more (business / natural / surrogate) key columns. However, there are a number of differences. One of these differences cost me several hours of investigation some time ago, when I had a lookup in SSIS that did not find any rows. I did not understand why, because a similar T-SQL statement with a JOIN found all rows. What could be different (it appeared to be a CHAR column connected to a VARCHAR)?
So today I am going to compare the two.

Subjects of comparison:

  1. Perform a lookup when the datatypes are different
  2. Perform a lookup on a different SQL Server Instance
  3. Perform a lookup when the join is not an equi-join
  4. When the source is not SQL Server
  5. When the joined columns differ in Case (Uppercase / Lowercase / Mixed case)

Simple setup for two lookup transformations using the AdventureWorks database

Perform a lookup when the datatypes are different

A Transact-SQL query with a join is more forgiving than an SSIS Lookup when different datatypes are joined.

Connect an nvarchar to a varchar column

In an SSIS Lookup this does not work, because an error is shown during design:

In T-SQL, this join can be made and will work:

different_datatypes.sql
SELECT
    *
FROM
    [Production].[STG_Product] stg
JOIN
    [Production].[ProductCategory] pc
    ON pc.name = stg.VARCHAR_ProductCategoryName

Connect a fixed-length CHAR column to a variable-length VARCHAR column (or NCHAR to NVARCHAR)

In SSIS the lookup can be developed, but it will not work in practice when the CHAR column value does not have the maximum length (and thus has trailing spaces).
A “join” of an nvarchar to an nchar column ..
.. will not work!

In T-SQL, this join can be made and will work:

fixed_length_column.sql
SELECT
    *
FROM
    [Production].[STG_Product] stg
JOIN
    [Production].[ProductCategory] pc
    ON pc.name = stg.NCHAR_ProductCategoryName
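
A possible workaround (my own suggestion, not part of the original comparison) is to trim the fixed-length column before it reaches the Lookup, for instance in the source query of the data flow, so that the Lookup compares variable-length values on both sides. The length of 50 is an assumption:

trim_fixed_length_column.sql

SELECT
    stg.*,
    -- Use this trimmed column in the Lookup mapping instead of the NCHAR column:
    CONVERT(NVARCHAR(50), RTRIM(stg.NCHAR_ProductCategoryName)) AS ProductCategoryNameTrimmed
FROM
    [Production].[STG_Product] stg;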

Perform a lookup on a different SQL Server Instance

In an SSIS Lookup you can use different connection managers for the source and the Lookup transformation, so this is easy to do.

In T-SQL this would be more complex to implement; for instance, you could use linked servers if your security policy allows this.

Perform a lookup when the join is not an equi-join

An equi-join is a join where the column values that are joined simply must be equal to each other.

By default an SSIS Lookup will do an equi-join. In the visual editor you connect one or more columns from the Available Input Columns to the Available Lookup Columns. Designed in this way, it will be an equi-join.
There are two ways I know of to implement a different kind of join:

  1. in the Advanced Tab of the Lookup Transformation Editor you can modify the SQL statement.
  2. use an OLE DB Command in the Data Flow, but this can have a negative impact on performance.

In T-SQL you can do more complex joins, for instance with a BETWEEN clause for an EffectiveStartDate and EffectiveEndDate.
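
A sketch of such a non-equi-join in T-SQL; the table and column names are just examples, not taken from the demo setup above:

non_equi_join.sql

SELECT
    src.*
FROM
    [stg].[SalesTransaction] src
JOIN
    [dv].[SAT_Customer] sat
    ON sat.CustomerBusinessKey = src.CustomerBusinessKey
    AND src.TransactionDate BETWEEN sat.EffectiveStartDate AND sat.EffectiveEndDate;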

When the source is not SQL Server

In an SSIS Lookup you can use different connection managers for the source and the Lookup transformation, so this is easy to do.

In T-SQL this would be practically impossible, or at least much more complex to accomplish (e.g. with OPENROWSET).

When the joined columns differ in Case

An SSIS Lookup is case-sensitive when Full cache is used, so differences in casing cause lookup values not to be found.
However, when you switch to No Cache or Partial Cache, the lookup can become case-insensitive! Keith Mescha brought this to my attention (see below). Thank you, Keith.
So the safest thing to do is to convert the values of the columns you connect from the Available Input Columns to the Available Lookup Columns to either upper- or lowercase on both sides. In T-SQL you can use the LOWER() and UPPER() functions for this; in SSIS you can add a Derived Column to your data flow, where you can also use the LOWER() and UPPER() functions. Depending on your needs, you can replace the existing column or add the upper- or lowercase value as a new column.

Example of a derived column that converts the value of a column of the Available Input Columns to uppercase.

In T-SQL the result of this join will depend on the collation settings of the SQL Server instance or of the column(s) involved in the join.
By default, SQL Server is installed with a case-insensitive collation (e.g. Latin1_General_CI_AS, where CI stands for Case Insensitive), and when the collation is not overruled at column level, the join will work case-insensitively.
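
If you want the comparison to be case-insensitive regardless of collation settings, you can apply the UPPER() trick on both sides in T-SQL as well (a sketch, reusing the column names from the earlier examples):

case_insensitive_join.sql

SELECT
    *
FROM
    [Production].[STG_Product] stg
JOIN
    [Production].[ProductCategory] pc
    ON UPPER(pc.name) = UPPER(stg.VARCHAR_ProductCategoryName);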

Performance considerations

From my own experience I know that SSIS Lookups can be faster than T-SQL joins when developed correctly. This means:

  • Use full cache whenever the dataset allows this (so memory usage is acceptable given the amount of RAM on the server)
  • In the query of the lookup transformation, only include the columns that are needed in the Lookup.

Conclusion / Wrap up

In this blog post I have compared the SSIS Lookup transformation with the T-SQL JOIN.
In general, T-SQL is more flexible and more forgiving, but when you need to go outside the SQL Server instance for the source or the lookup, it is easier to use an SSIS Lookup. Just beware of the peculiarities of the SSIS Lookup, as described in this article.
Performance-wise, an SSIS Lookup can perform better than a T-SQL join, but this of course depends on a lot of design factors in both your Lookup and your SQL database.

(c) 2017 hansmichiels.com – Do not steal the contents – spread the link instead – thank you.

Free download: Blog Yearbook 2016

I have made a compilation of all my blog posts of the past year, and it was more work than you would think.
Especially because it also contains a previously unpublished article.

After filling in the form you can download the e-book for free.


I promise you the following:

  • I will not give or sell your email address to any third party.
  • I will not spam you.
  • If I start with a newsletter, you can easily unsubscribe or change the frequency.

(c) 2016 hansmichiels.com – Do not steal the contents – spread the link instead – thank you.

Making your SQL Agent job scripts generic for all servers (SQL Server Series)

Introduction

If you schedule jobs and you are using SQL Server (not the Express Edition), SQL Agent is your best friend. I think it is easier to use than the Windows Task Scheduler for most use cases.
The GUI that SQL Server Management Studio offers for creating and editing SQL Agent jobs is also quite good, I think.

Problem

However, after the agent jobs are created, the SQL Agent Job scripting done by SQL Server Management Studio has a number of issues (Microsoft developers of the SQL Server team, please read!).
Where shall I start ..

  • I like to write repeatable scripts that raise an error only when something is really wrong, so that no errors are thrown when the script executes as expected. The generated scripts however use brute force to delete any existing jobs: if a job does not exist yet, an error occurs. This is not what I would call a real error, because it is only logical that a job which does not exist yet cannot be deleted. A simple IF statement could check if the job exists and delete it only if it does (see the sketch after this list).
  • Even worse, the generated scripts use the @job_id, a unique identifier, to delete the job. By definition, this will work only once! After the job is (re)created, the job_id will be different. It would be much better to use the @job_name as parameter for sp_delete_job (yes, this is possible); it is just that the SQL Server developer team made the choice to use the @job_id .. 🙁
  • Because scripts always delete and (re)create, instead of alter/modify, your entire job history is gone when you change anything in your script and execute it on your server (sorry, no workaround for this).
  • Any (SQL) Server-specific values, like SSIS environment IDs and SQL instance names, can make your scripts suitable for only one server: the one you created the job on with the SSMS GUI.
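
To illustrate the first two points, a repeatable drop could look like the sketch below; ‘MyJob’ is of course a placeholder:

drop_job_if_exists.sql

IF EXISTS (SELECT 1 FROM msdb.dbo.sysjobs WHERE [name] = N'MyJob')
BEGIN
    EXEC msdb.dbo.sp_delete_job @job_name = N'MyJob'; -- delete by name instead of by @job_id
END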

Solution

Given all the challenges above, I was looking for a way so that:

  • I could create the SQL Agent jobs with the SSMS GUI.
  • Then script the jobs and put them under source control.
  • Then also make sure I can install the SQL Agent jobs on a different server (development/test/production) with the same (nearly) unmodified script.

For this to work, we are again visiting the system objects area of the msdb database.

The plan is roughly as follows:

  1. Create a wrapper stored procedure for sp_add_jobstep.
  2. Create a generic install script that is partially manual labour (which you have to do only once) and partially a script generated by SSMS for creating the jobs.

Create a wrapper stored procedure for sp_add_jobstep.

This wrapper stored procedure, called [dbo].[usp_add_jobstep_wrapper], consists of the following three parts:

  • PART 1: Check if a temp table #jobstep_replacements exists with project-/customer-specific replacements: this temporary table can be created in the script that creates the job steps, and contains information about textual replacements to be done in any (N)VARCHAR parameter of the stored procedure sp_add_jobstep (a sketch of this table follows after this list). It would be neater to pass a table variable as a parameter to the stored procedure, but that would make adjusting scripted jobs more work, because a parameter would have to be added to the EXECUTE [dbo].[usp_add_jobstep_wrapper] command.
  • PART 2: Fix environment references in SSIS commands: a feature that is open for improvement in a future SQL Server Service Pack is that when you create a job step that executes an SSIS package and you use a reference to an environment, a technical ID is used in the script, which will most likely be different on another server. This is also not very easy to fix manually: you have to look up all the new IDs with a query and then change them in the script. Good as occupational therapy, but not so good for your productivity.
    So this part of the stored procedure fixes this /ENVREFERENCE for you.
  • PART 3: After doing the replacements in the parameter values and fixing the /ENVREFERENCE, the built-in stored procedure sp_add_jobstep is executed.
    This implies that we can simply execute [dbo].[usp_add_jobstep_wrapper] instead of [dbo].[sp_add_jobstep], and PART 1 and 2 will be done as extras.
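
A sketch of what the #jobstep_replacements temp table could look like; the column names follow the cursor in the stored procedure below, the replacement values in the INSERT are just hypothetical examples:

create_temptable_jobstep_replacements.sql

CREATE TABLE #jobstep_replacements
(
    [id] INT IDENTITY(1, 1) NOT NULL,
    [sort_order] INT NOT NULL,
    [subsystems] NVARCHAR(128) NULL,    -- e.g. N'SSIS', or N'*' / empty for all subsystems
    [replace_scope] NVARCHAR(128) NULL, -- e.g. N'@command', or N'*' / empty for all parameters
    [scripted_value] NVARCHAR(128) NOT NULL,
    [replace_value] NVARCHAR(128) NOT NULL
);

INSERT INTO #jobstep_replacements
    ([sort_order], [subsystems], [replace_scope], [scripted_value], [replace_value])
VALUES
    (10, N'SSIS', N'@command', N'DEV_SQLSERVER', N'PRD_SQLSERVER'); -- hypothetical server names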

And here is the wrapper stored procedure (it can also be downloaded).

msdb.dbo.usp_add_jobstep_wrapper.sql

--\
---) hansmichiels.com [msdb].[dbo].[usp_add_jobstep_wrapper]
---) Author: Hans Michiels
---) Stored procedure that can help to make SQL Server Agent job scripts usable for multiple servers.
--/
/*
(c) Copyright 2016 - hansmichiels.com
 
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
 
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
 
You should have received a copy of the GNU General Public License
along with this program. If not, see http://www.gnu.org/licenses/.
*/

USE [msdb]
GO
IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[usp_add_jobstep_wrapper]') AND type in (N'P', N'PC'))
  DROP PROCEDURE [dbo].[usp_add_jobstep_wrapper]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER OFF
GO
CREATE PROCEDURE [dbo].[usp_add_jobstep_wrapper]
  @job_id UNIQUEIDENTIFIER = NULL, -- Must provide either this or job_name
  @job_name sysname = NULL, -- Must provide either this or job_id
  @step_id INT = NULL, -- The proc assigns a default
  @step_name sysname,
  @subsystem NVARCHAR(40) = N'TSQL',
  @command NVARCHAR(max) = NULL,
  @additional_parameters NVARCHAR(max) = NULL,
  @cmdexec_success_code INT = 0,
  @on_success_action TINYINT = 1, -- 1 = Quit With Success, 2 = Quit With Failure, 3 = Goto Next Step, 4 = Goto Step
  @on_success_step_id INT = 0,
  @on_fail_action TINYINT = 2, -- 1 = Quit With Success, 2 = Quit With Failure, 3 = Goto Next Step, 4 = Goto Step
  @on_fail_step_id INT = 0,
  @server sysname = NULL,
  @database_name sysname = NULL,
  @database_user_name sysname = NULL,
  @retry_attempts INT = 0, -- No retries
  @retry_interval INT = 0, -- 0 minute interval
  @os_run_priority INT = 0, -- -15 = Idle, -1 = Below Normal, 0 = Normal, 1 = Above Normal, 15 = Time Critical)
  @output_file_name NVARCHAR(200) = NULL,
  @flags INT = 0, -- 0 = Normal,
                                                     -- 1 = Encrypted command (read only),
                                                     -- 2 = Append output files (if any),
                                                     -- 4 = Write TSQL step output to step history,
                                                     -- 8 = Write log to table (overwrite existing history),
                                                     -- 16 = Write log to table (append to existing history)
                                                     -- 32 = Write all output to job history
                                                     -- 64 = Create a Windows event to use as a signal for the Cmd jobstep to abort
  @proxy_id INT = NULL,
  @proxy_name sysname = NULL,
  -- mutual exclusive; must specify only one of above 2 parameters to
  -- identify the proxy.
  @step_uid UNIQUEIDENTIFIER = NULL OUTPUT
AS
BEGIN
  DECLARE @retval INT

  DECLARE @sort_order INT
  DECLARE @max_sort_order INT
  DECLARE @subsystems NVARCHAR(128)
  DECLARE @replace_scope NVARCHAR(128)
  DECLARE @scripted_value NVARCHAR(128)
  DECLARE @replace_value NVARCHAR(128)
  DECLARE @message NVARCHAR(MAX)
  DECLARE @divider_length INT = 120

  DECLARE @folder_name NVARCHAR(128)
  DECLARE @project_name NVARCHAR(128)
  DECLARE @startpos INT
  DECLARE @endpos INT
  DECLARE @endpos_project INT
  DECLARE @reference_id INT = -1
  DECLARE @new_reference NVARCHAR(128)
  DECLARE @new_command NVARCHAR(MAX)
  
  PRINT ' ';
  PRINT REPLICATE('-', @divider_length);
  PRINT 'Stored proc : [usp_add_jobstep_wrapper]'
  PRINT 'Copyright : (c) 2016 - hansmichiels.com'
  PRINT 'License : GNU General Public License, see http://www.gnu.org/licenses/'

  IF @job_name IS NOT NULL
  BEGIN
  -- PRINT 'Stored proc:
    SELECT @message = 'Job : ' + @job_name;
  END ELSE BEGIN
    SELECT @message = 'Job : ' + ISNULL((SELECT TOP 1 j.name FROM dbo.sysjobs j WHERE job_id = @job_id), N'(unknown)');
  END
  PRINT @message;
  SELECT @message = 'Jobstep : ' + @step_name;
  PRINT @message;
  PRINT ' ';

  --\---------------------------------------------------------------------------------------------------------
  ---) PART 1: Check if temp table #jobstep_replacements exists with project-/customer specific replacements.
  --/---------------------------------------------------------------------------------------------------------

  IF OBJECT_ID('tempdb..#jobstep_replacements', 'U') IS NULL
  BEGIN

    PRINT 'No jobstep_replacements were found, installing original script.';

  END ELSE BEGIN

    PRINT 'If any replacements are made, they will be shown below.';

    DECLARE replacements_cursor CURSOR LOCAL STATIC FOR
    SELECT TOP (2147483647) -- Otherwise ORDER BY might not work.
        CASE
          WHEN ISNULL(v.[subsystems], N'') IN (N'*', N'')
          THEN N'*'
          ELSE N',' + v.[subsystems] + N','
          END AS [subsystems],
        CASE
          WHEN ISNULL(v.[replace_scope], N'') IN (N'*', N'')
          THEN N'*'
          ELSE N',' + v.[replace_scope] + N','
          END AS [replace_scope],
        v.[scripted_value],
        v.[replace_value]
    FROM
        #jobstep_replacements v
    ORDER BY
        v.sort_order,
        v.id;
              
    OPEN replacements_cursor;

    FETCH NEXT FROM replacements_cursor
      INTO @subsystems, @replace_scope, @scripted_value, @replace_value;

    WHILE @@FETCH_STATUS = 0
    BEGIN

      IF (@subsystems = N'*' OR CHARINDEX(N',' + @subsystem + N',', @subsystems, 1) > 0)
          AND @replace_value IS NOT NULL
          AND @scripted_value != @replace_value
      BEGIN

        IF (@replace_scope = N'*' OR CHARINDEX(N',@command,', @replace_scope, 1) > 0) AND CHARINDEX(@scripted_value, @command, 1) > 0
        BEGIN
            SET @message = 'Replacement in @command: ' + @scripted_value + ' by ' + @replace_value;
            PRINT ' '
            PRINT @message;
            PRINT 'Old @command: ' + @command;
            SELECT @command = REPLACE(@command, @scripted_value, @replace_value);
            PRINT 'New @command: ' + @command;
        END

        IF (@replace_scope = N'*' OR CHARINDEX(N',@additional_parameters,', @replace_scope, 1) > 0) AND CHARINDEX(@scripted_value, @additional_parameters, 1) > 0
        BEGIN
            SET @message = 'Replacement in @additional_parameters: ' + @scripted_value + ' by ' + @replace_value;
            PRINT ' '
            PRINT @message;
            PRINT 'Old @additional_parameters: ' + @additional_parameters;
            SET @additional_parameters = REPLACE(@additional_parameters, @scripted_value, @replace_value);
            PRINT 'New @additional_parameters: ' + @additional_parameters;
        END

        IF (@replace_scope = N'*' OR CHARINDEX(N',@server,', @replace_scope, 1) > 0) AND CHARINDEX(@scripted_value, @server, 1) > 0
        BEGIN
            SET @message = 'Replacement in @server: ' + @scripted_value + ' by ' + @replace_value;
            PRINT ' '
            PRINT @message;
            PRINT 'Old @server: ' + @server;
            SET @server = REPLACE(@server, @scripted_value, @replace_value);
            PRINT 'New @server: ' + @server;
        END

        IF (@replace_scope = N'*' OR CHARINDEX(N',@database_name,', @replace_scope, 1) > 0) AND CHARINDEX(@scripted_value, @database_name, 1) > 0
        BEGIN
            SET @message = 'Replacement in @database_name: ' + @scripted_value + ' by ' + @replace_value;
            PRINT ' '
            PRINT @message;
            PRINT 'Old @database_name: ' + @database_name;
            SET @database_name = REPLACE(@database_name, @scripted_value, @replace_value);
            PRINT 'New @database_name: ' + @database_name;
        END

        IF (@replace_scope = N'*' OR CHARINDEX(N',@database_user_name,', @replace_scope, 1) > 0) AND CHARINDEX(@scripted_value, @database_user_name, 1) > 0
        BEGIN
            SET @message = 'Replacement in @database_user_name: ' + @scripted_value + ' by ' + @replace_value;
            PRINT ' '
            PRINT @message;
            PRINT 'Old @database_user_name: ' + @database_user_name;
            SET @database_user_name = REPLACE(@database_user_name, @scripted_value, @replace_value);
            PRINT 'New @database_user_name: ' + @database_user_name;
        END

        IF (@replace_scope = N'*' OR CHARINDEX(N',@proxy_name,', @replace_scope, 1) > 0) AND CHARINDEX(@scripted_value, @proxy_name, 1) > 0
        BEGIN
            SET @message = 'Replacement in @proxy_name: ' + @scripted_value + ' by ' + @replace_value;
            PRINT ' '
            PRINT @message;
            PRINT 'Old @proxy_name: ' + @proxy_name;
            SET @proxy_name = REPLACE(@proxy_name, @scripted_value, @replace_value);
            PRINT 'New @proxy_name: ' + @proxy_name;
        END

        IF (@replace_scope = N'*' OR CHARINDEX(N',@output_file_name,', @replace_scope, 1) > 0) AND CHARINDEX(@scripted_value, @output_file_name, 1) > 0
        BEGIN
            SET @message = 'Replacement in @output_file_name: ' + @scripted_value + ' by ' + @replace_value;
            PRINT ' '
            PRINT @message;
            PRINT 'Old @output_file_name: ' + @output_file_name;
            SET @output_file_name = REPLACE(@output_file_name, @scripted_value, @replace_value);
            PRINT 'New @output_file_name: ' + @output_file_name;
        END

      END

      FETCH NEXT FROM replacements_cursor
        INTO @subsystems, @replace_scope, @scripted_value, @replace_value;
      
    END

    CLOSE replacements_cursor
    DEALLOCATE replacements_cursor
          
  END

  --\---------------------------------------------------------------------------------------------------------
  ---) PART 2: Fix environment references in ssis commands.
  --/---------------------------------------------------------------------------------------------------------
         
  --\
  ---) First check if there is something to do
  --/
  IF @subsystem = N'SSIS' AND CHARINDEX(N'/ENVREFERENCE', @command, 1) > 0 AND CHARINDEX(N'/ISSERVER "\"\SSISDB\', @command, 1) > 0
  BEGIN
    --\
    ---) Pull out @folder_name and @project_name from the @command variable value
    --/
    SELECT
      @startpos = CHARINDEX(N'/ISSERVER "\"\SSISDB\', @command, 1) + LEN(N'/ISSERVER "\"\SSISDB\'),
      -- @endpos = CHARINDEX(N'dtsx\""', @command, @startpos),
      @endpos = CHARINDEX(N'\', @command, @startpos + 1),
      @endpos_project = CHARINDEX(N'\', @command, @endpos + 1),
      @folder_name = SUBSTRING(@command, @startpos, @endpos - @startpos),
      @project_name = SUBSTRING(@command, @endpos + 1, @endpos_project - @endpos - 1);

      --\
      ---) Armed with the correct @folder_name and @project_name get the environment reference id.
      --/
      SELECT @reference_id = ISNULL((
          SELECT TOP 1
              er.reference_id
          FROM
              SSISDB.catalog.environments AS env
          JOIN
              SSISDB.catalog.folders AS fld
              ON fld.folder_id = env.folder_id
          JOIN
              SSISDB.catalog.projects AS prj
              ON prj.folder_id = fld.folder_id
          JOIN
              SSISDB.catalog.environment_references AS er
              ON er.project_id = prj.project_id
          WHERE
              fld.name = @folder_name
              AND prj.name = @project_name
      ), -1)

  END

  --\
  ---) If a valid environment reference id was found ..
  --/
  IF @reference_id != -1
  BEGIN
    SELECT
      --\
      ---) .. adjust the /ENVREFERENCE part of the @command so that it is followed by the retrieved @reference_id.
      --/
      @startpos = CHARINDEX(N'/ENVREFERENCE ', @command, 1),
      @endpos = CHARINDEX(N' ', @command, @startpos + LEN(N'/ENVREFERENCE ') + 1);

      SELECT
          @new_reference = '/ENVREFERENCE ' + CONVERT(NVARCHAR, @reference_id),
          @new_command = LEFT(@command, @startpos - 1) + @new_reference + SUBSTRING(@command, @endpos, 8000);
      IF @new_command != @command
      BEGIN
          SET @message = 'Replacement in @command: fixing /ENVREFERENCE';
          PRINT ' '
          PRINT @message;
          PRINT 'Old @command: ' + @command;
          PRINT 'New @command: ' + @new_command;
          SET @command = @new_command;
      END
  END

  --\---------------------------------------------------------------------------------------------------------
  ---) PART 3: Now we have done our trick with the parameter values,
  ---) execute the built-in stored procedure sp_add_jobstep.
  --/---------------------------------------------------------------------------------------------------------

  EXECUTE @retval = dbo.sp_add_jobstep
      @job_id = @job_id,
      @job_name = @job_name,
      @step_id = @step_id,
      @step_name = @step_name,
      @subsystem = @subsystem,
      @command = @command,
      @additional_parameters = @additional_parameters,
      @cmdexec_success_code = @cmdexec_success_code,
      @on_success_action = @on_success_action,
      @on_success_step_id = @on_success_step_id,
      @on_fail_action = @on_fail_action,
      @on_fail_step_id = @on_fail_step_id,
      @server = @server,
      @database_name = @database_name,
      @database_user_name = @database_user_name,
      @retry_attempts = @retry_attempts,
      @retry_interval = @retry_interval,
      @os_run_priority = @os_run_priority,
      @output_file_name = @output_file_name,
      @flags = @flags,
      @proxy_id = @proxy_id,
      @proxy_name = @proxy_name,
      @step_uid = @step_uid OUTPUT

  RETURN(@retval)
END
GO

Create a generic install script

This install script consists of the following parts:

PART 1: Create temporary table #jobstep_replacements with specific replacements.

This is the part where you customize the script for your own environments.
Two things require further explanation: subsystems and replace scopes.
Each jobstep is defined for a specific subsystem. Valid values are:

Full name Code
Operating system (CmdExec) CmdExec
PowerShell PowerShell
Replication Distributor Distribution
Replication Merge Merge
Replication Queue Reader QueueReader
Replication Snapshot Snapshot
SQL Server Analysis Services Command ANALYSISCOMMAND
SQL Server Analysis Services Query ANALYSISQUERY
SQL Server Integration Services Package SSIS
Transact-SQL script (T-SQL) TSQL

The subsystem codes are used in SQL scripts generated by SSMS to create jobsteps.
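
If you want to double-check which subsystem codes your instance accepts, you can query SQL Server Agent's metadata directly (a quick sanity check, not part of the install script):

SELECT subsystem
FROM msdb.dbo.syssubsystems
ORDER BY subsystem;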

Replace scope is not an official term; I just call it that.
Valid values are the names of the (n)varchar parameters of sp_add_jobstep, so:

  • @command
  • @additional_parameters
  • @server
  • @database_name
  • @database_user_name
  • @proxy_name
  • @output_file_name

The temporary table #jobstep_replacements contains the columns [subsystems] and [replace_scope].
Both can be filled with a comma-separated list to control the subsystems and parameters in which the replacement should take place. A * wildcard means “all subsystems” or “all parameters”.
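
For example, a single catch-all rule that uses the * wildcard in both columns could look like this (a minimal sketch; the two folder paths are just illustrative values):

INSERT INTO #jobstep_replacements ([subsystems], [replace_scope], [scripted_value], [replace_value])
VALUES (N'*', N'*', N'F:\Files\Development', N'F:\Files\Test');

This single row would replace the development folder path in every (n)varchar parameter of every jobstep, regardless of subsystem.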

An example of part 1:

MSSQL-E08-jobscript-part1

--\------------------------------------------------------------------------------------
---) PART 1: Create temporary table #jobstep_replacements with specific replacements.
--/------------------------------------------------------------------------------------

IF OBJECT_ID('tempdb..#jobstep_replacements', 'U') IS NOT NULL
DROP TABLE #jobstep_replacements;

IF @@SERVERNAME != 'VWS63-SQL161' -- The server where the script was generated.
BEGIN
    CREATE TABLE #jobstep_replacements
    --\------------------------------------------------------------------------------------
    ---) IMPORTANT; do not change the structure of this temporary table because
    ---) stored procedure [usp_add_jobstep_wrapper] depends on it.
    --/------------------------------------------------------------------------------------
    (
    [id] INT IDENTITY(1, 1),
    [subsystems] NVARCHAR(128) NOT NULL, -- Use wildcard * or comma separated list with subsystem codes e.g. SSIS,TSQL (don't use spaces!)
    /*
    SUBSYSTEMS
    -------------------------------------- ---------------------
    Full name Code
    -------------------------------------- ---------------------
    Operating system (CmdExec) CmdExec
    PowerShell PowerShell
    Replication Distributor Distribution
    Replication Merge Merge
    Replication Queue Reader QueueReader
    Replication Snapshot Snapshot
    SQL Server Analysis Services Command ANALYSISCOMMAND
    SQL Server Analysis Services Query ANALYSISQUERY
    SQL Server Integration Services Package SSIS
    Transact-SQL script (T-SQL) TSQL
    */

    [replace_scope] NVARCHAR(128) NOT NULL, -- Use wildcard * or comma separated list with parameter names e.g. @command,@proxy_name (don't use spaces!)
    /*
    --------------------------------------
    REPLACE-SCOPES
    --------------------------------------
    @command
    @additional_parameters
    @server
    @database_name
    @database_user_name
    @proxy_name
    @output_file_name
    */

    [scripted_value] NVARCHAR(128) NOT NULL,
    [replace_value] NVARCHAR(128) NULL,
    [sort_order] INT NOT NULL DEFAULT(100)
    );
END

--\
---) Fill the temporary table, based on the current SQL Server Instance name.
---) Include the server on which the scripts were made in one IF statement, so the records can be updated later.
--/
IF @@SERVERNAME = 'VWS63-SQL161\TABULAR'
BEGIN
    INSERT INTO #jobstep_replacements
    ( [sort_order], [subsystems], [replace_scope], [scripted_value], [replace_value])
    VALUES
      (1, N'SSIS',
          N'@command',
          N'VWS63-SQL161\"',
          N'VWS63-SQL161\TABULAR\"'),

      (2, N'SSIS',
          N'@command',
          N'DEV',
          N'TST'),

     (3, N'TSQL',
          N'@command,@database_name',
          N'Demo_DEV',
          N'Demo_TST'),

     (4, N'PowerShell',
          N'@command',
          N'F:\Files\Development',
          N'F:\Files\Test')
END
ELSE IF @@SERVERNAME = 'PRODSERVER'
BEGIN
    INSERT INTO #jobstep_replacements
    ( [sort_order], [subsystems], [replace_scope], [scripted_value], [replace_value])
    VALUES
      (1, N'SSIS',
          N'@command',
          N'VWS63-SQL161\"',
          N'VWS63-SQL163\PRD\"'),

      (2, N'SSIS',
          N'@command',
          N'DEV',
          N'PRD'),

     (3, N'TSQL',
          N'@command,@database_name',
          N'Demo_DEV',
          N'Demo_PRD'),

     (4, N'PowerShell',
          N'@command',
          N'F:\Files\Development',
          N'F:\Files\Production')
END

PART 2: Smart sp_delete_job statements, using @job_name and IF EXISTS.

These statements can be generated with a SQL query, which is included as a comment in the MSSQL-E08-jobscript.sql script (download the scripts here). Just select the query and execute it, then copy and paste the result from the query output window.

An example of part 2:

MSSQL-E08-jobscript-part2

--\------------------------------------------------------------------------------------
---) PART 2: Smart sp_delete_job statements, using @job_name and IF EXISTS
--/------------------------------------------------------------------------------------

USE [msdb]
GO
/* -- Query to generate sp_delete_job statements:
SELECT
    CASE sql.line_no
      WHEN 1 THEN 'IF EXISTS (SELECT 1 FROM msdb.dbo.sysjobs WHERE [name] = N'''+ name + ''')'
      WHEN 2 THEN ' EXEC msdb.dbo.sp_delete_job @job_name=N'''+ name + ''', @delete_unused_schedule=1;'
      WHEN 3 THEN 'GO'
    END AS [drop_sql_for_copy_and_paste]
FROM
    msdb.dbo.sysjobs j
CROSS JOIN -- To force new lines in the output, a cross join with 3 line_no rows is used.
    (SELECT 1 AS [line_no] UNION SELECT 2 UNION SELECT 3) AS [sql]
WHERE
    j.name like 'MSSQL-E08%' -- IMPORTANT: here you filter on the jobs to create delete statements for.
ORDER BY
    j.name,
    sql.line_no
*/

IF EXISTS (SELECT 1 FROM msdb.dbo.sysjobs WHERE [name] = N'MSSQL-E08-Job1-SSIS')
  EXEC msdb.dbo.sp_delete_job @job_name=N'MSSQL-E08-Job1-SSIS', @delete_unused_schedule=1;
GO
IF EXISTS (SELECT 1 FROM msdb.dbo.sysjobs WHERE [name] = N'MSSQL-E08-Job2-TSQL')
  EXEC msdb.dbo.sp_delete_job @job_name=N'MSSQL-E08-Job2-TSQL', @delete_unused_schedule=1;
GO
IF EXISTS (SELECT 1 FROM msdb.dbo.sysjobs WHERE [name] = N'MSSQL-E08-Job3-Powershell')
  EXEC msdb.dbo.sp_delete_job @job_name=N'MSSQL-E08-Job3-Powershell', @delete_unused_schedule=1;
GO

PART 3: Create jobs, script generated by SSMS.

Here you paste the job creation script that was generated by SQL Server Management Studio. After pasting, replace sp_add_jobstep with usp_add_jobstep_wrapper.

An example of part 3:

MSSQL-E08-jobscript-part3

--\------------------------------------------------------------------------------------
---) PART 3: Create jobs, script generated by SSMS
--/------------------------------------------------------------------------------------
--\
---) IMPORTANT NOTE: You can generate the part below as follows:
---) In SQL Server Management Studio select a SQL Agent job, then press F7
---) In the Object Explorer Details Pane, select the jobs you want to create a script for.
---) Then right-click and in the context menu select:
---) > Script Job As > CREATE To > New Query Editor Window
---) In the script generated with SQL Server Management Studio, replace
---) sp_add_jobstep
---) by
---) usp_add_jobstep_wrapper
--/
USE [msdb]
GO

/****** Object: Job [MSSQL-E08-Job1-SSIS] Script Date: 19-12-2016 18:59:58 ******/
BEGIN TRANSACTION
DECLARE @ReturnCode INT
SELECT @ReturnCode = 0
/****** Object: JobCategory [[Uncategorized (Local)]] Script Date: 19-12-2016 18:59:58 ******/
IF NOT EXISTS (SELECT name FROM msdb.dbo.syscategories WHERE name=N'[Uncategorized (Local)]' AND category_class=1)
BEGIN
EXEC @ReturnCode = msdb.dbo.sp_add_category @class=N'JOB', @type=N'LOCAL', @name=N'[Uncategorized (Local)]'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback

END

DECLARE @jobId BINARY(16)
EXEC @ReturnCode = msdb.dbo.sp_add_job @job_name=N'MSSQL-E08-Job1-SSIS',
  @enabled=1,
  @notify_level_eventlog=0,
  @notify_level_email=0,
  @notify_level_netsend=0,
  @notify_level_page=0,
  @delete_level=0,
  @description=N'No description available.',
  @category_name=N'[Uncategorized (Local)]',
  @owner_login_name=N'VWS63-SQL161\Hans', @job_id = @jobId OUTPUT
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
/****** Object: Step [AnyPackage.dtsx] Script Date: 19-12-2016 18:59:58 ******/
EXEC @ReturnCode = msdb.dbo.usp_add_jobstep_wrapper @job_id=@jobId, @step_name=N'AnyPackage.dtsx',
  @step_id=1,
  @cmdexec_success_code=0,
  @on_success_action=1,
  @on_success_step_id=0,
  @on_fail_action=2,
  @on_fail_step_id=0,
  @retry_attempts=0,
  @retry_interval=0,
  @os_run_priority=0, @subsystem=N'SSIS',
  @command=N'/ISSERVER "\"\SSISDB\DEV\SSIS-E10-P2\AnyPackage.dtsx\"" /SERVER "\"VWS63-SQL161\"" /ENVREFERENCE 4 /Par "\"$ServerOption::LOGGING_LEVEL(Int16)\"";1 /Par "\"$ServerOption::SYNCHRONIZED(Boolean)\"";True /CALLERINFO SQLAGENT /REPORTING E',
  @database_name=N'master',
  @flags=0
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_update_job @job_id = @jobId, @start_step_id = 1
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_jobschedule @job_id=@jobId, @name=N'Schedule for MSSQL-E08-Job1-SSIS',
  @enabled=0,
  @freq_type=4,
  @freq_interval=1,
  @freq_subday_type=1,
  @freq_subday_interval=0,
  @freq_relative_interval=0,
  @freq_recurrence_factor=0,
  @active_start_date=20161216,
  @active_end_date=99991231,
  @active_start_time=60000,
  @active_end_time=235959,
  @schedule_uid=N'6d24a58d-f800-4341-ab5b-41fbe4923da8'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_jobserver @job_id = @jobId, @server_name = N'(local)'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
COMMIT TRANSACTION
GOTO EndSave
QuitWithRollback:
    IF (@@TRANCOUNT > 0) ROLLBACK TRANSACTION
EndSave:

GO

/****** Object: Job [MSSQL-E08-Job2-TSQL] Script Date: 19-12-2016 18:59:58 ******/
BEGIN TRANSACTION
DECLARE @ReturnCode INT
SELECT @ReturnCode = 0
/****** Object: JobCategory [[Uncategorized (Local)]] Script Date: 19-12-2016 18:59:58 ******/
IF NOT EXISTS (SELECT name FROM msdb.dbo.syscategories WHERE name=N'[Uncategorized (Local)]' AND category_class=1)
BEGIN
EXEC @ReturnCode = msdb.dbo.sp_add_category @class=N'JOB', @type=N'LOCAL', @name=N'[Uncategorized (Local)]'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback

END

DECLARE @jobId BINARY(16)
EXEC @ReturnCode = msdb.dbo.sp_add_job @job_name=N'MSSQL-E08-Job2-TSQL',
  @enabled=1,
  @notify_level_eventlog=0,
  @notify_level_email=0,
  @notify_level_netsend=0,
  @notify_level_page=0,
  @delete_level=0,
  @description=N'No description available.',
  @category_name=N'[Uncategorized (Local)]',
  @owner_login_name=N'VWS63-SQL161\Hans', @job_id = @jobId OUTPUT
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
/****** Object: Step [Demo_DEV] Script Date: 19-12-2016 18:59:58 ******/
EXEC @ReturnCode = msdb.dbo.usp_add_jobstep_wrapper @job_id=@jobId, @step_name=N'Demo_DEV',
  @step_id=1,
  @cmdexec_success_code=0,
  @on_success_action=1,
  @on_success_step_id=0,
  @on_fail_action=2,
  @on_fail_step_id=0,
  @retry_attempts=0,
  @retry_interval=0,
  @os_run_priority=0, @subsystem=N'TSQL',
  @command=N'SELECT * from sys.objects',
  @database_name=N'Demo_DEV',
  @flags=0
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_update_job @job_id = @jobId, @start_step_id = 1
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_jobschedule @job_id=@jobId, @name=N'Schedule for MSSQL-E08-Job2-TSQL',
  @enabled=0,
  @freq_type=4,
  @freq_interval=1,
  @freq_subday_type=1,
  @freq_subday_interval=0,
  @freq_relative_interval=0,
  @freq_recurrence_factor=0,
  @active_start_date=20161216,
  @active_end_date=99991231,
  @active_start_time=70000,
  @active_end_time=235959,
  @schedule_uid=N'a25c43d5-8543-4723-903b-beeb6d9a07a3'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_jobserver @job_id = @jobId, @server_name = N'(local)'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
COMMIT TRANSACTION
GOTO EndSave
QuitWithRollback:
    IF (@@TRANCOUNT > 0) ROLLBACK TRANSACTION
EndSave:

GO

/****** Object: Job [MSSQL-E08-Job3-Powershell] Script Date: 19-12-2016 18:59:58 ******/
BEGIN TRANSACTION
DECLARE @ReturnCode INT
SELECT @ReturnCode = 0
/****** Object: JobCategory [[Uncategorized (Local)]] Script Date: 19-12-2016 18:59:58 ******/
IF NOT EXISTS (SELECT name FROM msdb.dbo.syscategories WHERE name=N'[Uncategorized (Local)]' AND category_class=1)
BEGIN
EXEC @ReturnCode = msdb.dbo.sp_add_category @class=N'JOB', @type=N'LOCAL', @name=N'[Uncategorized (Local)]'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback

END

DECLARE @jobId BINARY(16)
EXEC @ReturnCode = msdb.dbo.sp_add_job @job_name=N'MSSQL-E08-Job3-Powershell',
  @enabled=1,
  @notify_level_eventlog=0,
  @notify_level_email=0,
  @notify_level_netsend=0,
  @notify_level_page=0,
  @delete_level=0,
  @description=N'No description available.',
  @category_name=N'[Uncategorized (Local)]',
  @owner_login_name=N'VWS63-SQL161\Hans', @job_id = @jobId OUTPUT
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
/****** Object: Step [Powershell step] Script Date: 19-12-2016 18:59:59 ******/
EXEC @ReturnCode = msdb.dbo.usp_add_jobstep_wrapper @job_id=@jobId, @step_name=N'Powershell step',
  @step_id=1,
  @cmdexec_success_code=0,
  @on_success_action=1,
  @on_success_step_id=0,
  @on_fail_action=2,
  @on_fail_step_id=0,
  @retry_attempts=0,
  @retry_interval=0,
  @os_run_priority=0, @subsystem=N'PowerShell',
  @command=N'dir "F:\Files\Development\*.csv"',
  @database_name=N'master',
  @flags=0
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_update_job @job_id = @jobId, @start_step_id = 1
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_jobschedule @job_id=@jobId, @name=N'Schedule for MSSQL-E08-Job3-Powershell',
  @enabled=0,
  @freq_type=4,
  @freq_interval=1,
  @freq_subday_type=1,
  @freq_subday_interval=0,
  @freq_relative_interval=0,
  @freq_recurrence_factor=0,
  @active_start_date=20161216,
  @active_end_date=99991231,
  @active_start_time=110000,
  @active_end_time=235959,
  @schedule_uid=N'a6f0e0ca-d27a-4d3f-a349-4a53c0392541'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_jobserver @job_id = @jobId, @server_name = N'(local)'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
COMMIT TRANSACTION
GOTO EndSave
QuitWithRollback:
    IF (@@TRANCOUNT > 0) ROLLBACK TRANSACTION
EndSave:

GO

It’s demo time ..

It is also time to set up a demo; otherwise you just have to take my word that this works, and I like to prove it.
For this I have deployed two SSIS projects from an earlier blog post to two SQL Server instances on my virtual server. One is the default instance MSSQLSERVER and the other is called TABULAR (it has SSAS installed in Tabular mode).
For the demo, however, I will pretend that the default instance is my development environment and the TABULAR instance is my test environment.
On the default instance I create three SQL Agent jobs, with an SSIS, a TSQL and a PowerShell jobstep respectively.

Jobs created for the demo ..

Then I script the jobs, paste the result into MSSQL-E08-jobscript.sql and replace sp_add_jobstep with usp_add_jobstep_wrapper.

Generating a CREATE script for the jobs ..

After doing the necessary preparation to install usp_add_jobstep_wrapper on both instances, I can create the SQL Agent jobs on the TABULAR instance.
The output window shows which replacements were done.
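
Before running the job script on an instance, it can be useful to verify that the wrapper is actually there. A minimal sanity check (not part of the original scripts) could look like this:

USE [msdb]
GO
IF OBJECT_ID(N'dbo.usp_add_jobstep_wrapper', N'P') IS NULL
  RAISERROR('usp_add_jobstep_wrapper is not installed on this instance; install it before running the job script.', 16, 1);
GO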

As you can see in the output window, multiple replacements will be applied one after the other ..

Replacement in the TSQL jobstep ..

Replacement in the PowerShell jobstep..

I can still use the script on the default instance; there, simply no replacements are done.

No replacements done on the default instance where the CREATE script was generated ..

Download the scripts here.

Conclusion / Wrap up

In this blog post you could read how to cope with some peculiarities of SQL Server Management Studio when it generates SQL Agent job scripts.
I have explained how you can make such a script generic for all environments, which also makes it more suitable for putting under source control.
To achieve this, a special stored procedure, [dbo].[usp_add_jobstep_wrapper], is installed in the [msdb] database, and a generic install script is assembled that is partly hand-written and partly pasted from the SSMS-generated script.

And, of course, have a merry Christmas and a happy New Year!

(c) 2016 hansmichiels.com – Do not steal the contents – spread the link instead – thank you.