(Read Part 1 here...)
The current architecture of the ETL (not written by me) is a series of MS SQL job steps, mostly calls to stored procedures. To measure how long each step took, I used SQL Profiler. From that, I determined the best places among the 50+ steps to update the ETL_StepHistory table, and updated the corresponding stored procedures to write to it.
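For reference, the history table only needs to capture a description and a timestamp. A minimal sketch of what ETL_StepHistory might look like (the actual schema and column names in my database are not shown in this post, so these are assumptions):

```sql
-- Hypothetical schema for ETL_StepHistory; column names are placeholders.
CREATE TABLE dbo.ETL_StepHistory
(
    StepHistoryId INT IDENTITY(1,1) PRIMARY KEY,
    StepComment   VARCHAR(255) NOT NULL,   -- description shown in the web app
    RecordedAt    DATETIME     NOT NULL DEFAULT (GETDATE())
);
```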
I also created a few stored procedures:
RecordETLSteps, which takes a parameter @comment (varchar(255))
This proc is called at six spots spread throughout the ETL steps, passing whatever description I want to display, for example: "pulling data to fill lookup tables" or "loading forecast data".
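A sketch of what RecordETLSteps boils down to, assuming the hypothetical ETL_StepHistory columns above (my actual proc may differ slightly):

```sql
-- Sketch only: logs one step description with a timestamp.
CREATE PROCEDURE dbo.RecordETLSteps
    @comment VARCHAR(255)
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.ETL_StepHistory (StepComment, RecordedAt)
    VALUES (@comment, GETDATE());
END
GO

-- Example call from inside one of the ETL job steps:
EXEC dbo.RecordETLSteps @comment = 'pulling data to fill lookup tables';
```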
TestJobStatus, which takes a parameter @jobname (varchar(255)), so I could potentially reuse it to check any other SQL job. I had nothing else in mind at the moment, but in my mind it's always a good idea to make even simple procedures reusable.
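One common way to implement a check like this is to query the SQL Agent tables in msdb. The sketch below is my reconstruction, not necessarily the exact proc I wrote; note that msdb.dbo.sysjobactivity keeps a row per Agent session, so a production version should also filter to the most recent session:

```sql
-- Sketch: returns 1 if the named SQL Agent job is currently running, 0 otherwise.
CREATE PROCEDURE dbo.TestJobStatus
    @jobname VARCHAR(255)
AS
BEGIN
    SET NOCOUNT ON;
    IF EXISTS
    (
        SELECT 1
        FROM msdb.dbo.sysjobactivity ja
        JOIN msdb.dbo.sysjobs j ON j.job_id = ja.job_id
        WHERE j.name = @jobname
          AND ja.start_execution_date IS NOT NULL
          AND ja.stop_execution_date IS NULL   -- started but not yet finished
    )
        RETURN 1;   -- job is running
    RETURN 0;       -- job is not running
END
```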
In my web app, I do a dreaded Thread.Sleep(2000) and then check the step history table, displaying the details in an asp:Repeater inside an asp:UpdatePanel. The page then checks the job status using the above proc, passing in the name of the SQL job. If the return value indicates the job has completed, I re-enable the "Run ETL" button for all users, and save the status "Job completed, sending e-mail" to the steps data table and the page display.
The e-mail is sent from within SQL to an AD group of all users who would need to know the ETL just finished.
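Sending mail from within SQL Server is typically done through Database Mail. A minimal sketch, where the profile name and recipient address are placeholders for whatever is configured on the server:

```sql
-- Placeholder profile and recipients; an AD distribution group works
-- as the recipient address once Database Mail is configured.
EXEC msdb.dbo.sp_send_dbmail
    @profile_name = 'ETLMailProfile',
    @recipients   = 'etl-users@example.com',
    @subject      = 'ETL finished',
    @body         = 'The nightly ETL job has completed.';
```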
Stay tuned for Part 3 where I'll discuss the MicroStrategy piece, including code snippets.