Tuesday, November 30, 2010

Connecting to a Web Service, you CAN set a timeout on the client...


This information would have been helpful, ohhhhh, a few months back, after countless (and random, of course) timeout occurrences and lots of hair loss and shoulder shrugging.

Background: I built a .NET console application that downloads our HR data from a “major” HR provider. This project involved security certificates (purchasing and exchanging them) on the provider's servers as well as our client server, plus logins and passwords and research, research, research. The company provided sample code, which basically showed how to query one module at a time.

But my task was to download all the data for all employees every day, including data for terminated employees. We have over 600 active employees, and each employee obviously has multiple rows in most of the modules of data, sometimes more than one per module. I call them modules for lack of a better term. This equates to downloading approximately 70,000 records once a day. The connection and download take approximately 9 minutes. Of that, I'd say a few minutes are spent waiting for responses; I'd guess the actual work takes about 5 minutes. Not bad, I think, but I don't have much to compare it to at this point.

The provider does not provide a “changed records” mechanism (or at least they did not tell me they have one – although many of the hurdles I came across on this journey turned out to have simple answers once the question was asked enough times; maybe it just had to be asked of the right person?). Anyway, I stray… One of the first errors I would receive, randomly, was a timeout while attempting to download data from a module. And one particular module would always time out, so I had to break up its requests by choosing an additional filter.

So, things seemed to be pretty stable, with a timeout occurring once every week or two. I could handle that, especially since when I asked them the question and sent them the error messages, I was given the response “it’s not on our side”… Time passed, and the timeouts increased to the point where I couldn’t get that one module, previously broken down into four filtered requests, to even budge.

I sought help (and empathy) from our network engineer, who monitored the traffic back and forth, and I also re-wrote most of the application to build up the SQL INSERT commands into one lengthy string and execute it after each module completed, so I wasn’t firing off single INSERT statements continually. Still, nothing seemed to help. With debug print statements I narrowed it down and could show that there was over a 1.5 minute delay between sending a request and receiving a response on that one pesky module. Everything I’d researched stated the timeout value was set inside IIS on the provider's server.

After compiling my error list and debug statements into a very convincing e-mail to our HR application provider, I finally received a response from them telling me to up my timeout setting to 5 minutes… Huh?

And this is what you get with someone who learns C# from a book and sample code, rather than having an expert mentor you…

The fix was the simple addition of this line in each module call that downloads data:

[ServiceName] proxy = new [ServiceName]();
proxy.Timeout = 300000; // timeout is in milliseconds (300,000 ms = 5 minutes)

Yep, just that simple. Months of frustration fixed with 23 characters. Sometimes you feel like an idiot at the end of the day. Live and learn.

Thursday, November 4, 2010

Determine Size of All Tables in a database...

This script has come in handy a few times, enough to warrant me posting this so I don't have to remember where I put it. Elephant wishes...
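For reference, a query along these lines returns the same information (a sketch using sys.dm_db_partition_stats; not necessarily the exact script I keep stashed away, but it gives row counts and space used for every table):

-- Row counts and space used for every table in the current database
SELECT
    s.name + '.' + t.name AS TableName,
    SUM(CASE WHEN ps.index_id IN (0, 1) THEN ps.row_count ELSE 0 END) AS [RowCount],
    SUM(ps.reserved_page_count) * 8 AS ReservedKB,
    SUM(ps.used_page_count) * 8 AS UsedKB
FROM sys.dm_db_partition_stats ps
JOIN sys.tables t ON ps.object_id = t.object_id
JOIN sys.schemas s ON t.schema_id = s.schema_id
GROUP BY s.name, t.name
ORDER BY ReservedKB DESC;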

Wednesday, September 22, 2010

Find Slow Running Queries - great T-SQL script find!

If you've ever used sp_who and/or sp_who2, you'll appreciate the insight from this free stored procedure Adam Machanic wrote and shared, sp_whoisactive. Way more useful than the out-of-the-box procedures.

There is a great video tutorial and a link to the free download here: brentozar. It even covers a way to save the results over time to a table.
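If you just want to kick the tires before watching the video, the basic calls are below (parameter names are per the version I used; check the documentation that comes with the download, and the destination table name is just a placeholder):

-- Snapshot of everything currently running
EXEC dbo.sp_WhoIsActive;

-- Same, but include the estimated plan for each active request
EXEC dbo.sp_WhoIsActive @get_plans = 1;

-- Save the results to a table over time (the destination table must already exist)
EXEC dbo.sp_WhoIsActive @destination_table = 'dbo.WhoIsActiveHistory';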

Monday, July 5, 2010

SQL DBA Best Practices - Primary not your Default filegroup

I rarely see this one implemented correctly in the field, and if things go wrong it will cripple you.

After creating a database, change the default filegroup from primary to anything else. If you only have one filegroup, create a secondary one. Then alter that filegroup to become the default.

My normal procedure is to create the database, specifying a secondary file group in the Filegroups page in the New Database screen in SSMS. After that, I create a file in the new Filegroup and make that new Filegroup the default.
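If the database already exists with only the primary filegroup, the secondary filegroup itself can also be added with T-SQL first (a quick sketch, using the same placeholder names as the statements below):

--add a new filegroup to an existing database
ALTER DATABASE [DBNAME]
ADD FILEGROUP [FileGroupName];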

--alter database, add a file to the new filegroup
ALTER DATABASE [DBNAME]
ADD FILE (NAME = [NewFileName], FILENAME = 'D:\Data\[NewFileName].ndf')
TO FILEGROUP [FileGroupName]; --(the name used when creating the filegroup)

--make that filegroup the default
ALTER DATABASE [DBNAME]
MODIFY FILEGROUP [FileGroupName] DEFAULT;
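A quick way to double-check which filegroup ended up as the default:

--is_default = 1 marks the current default filegroup
SELECT name, is_default
FROM sys.filegroups;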

Reasoning: When the database is created, the primary filegroup is by default marked as the default filegroup, in which all new objects are stored. The primary filegroup also happens to be where all the system objects are held.

So once you create a database and make any other filegroup the default, all the system objects (the system tables for the database) sit in their own filegroup with I/O isolation. Seeing as these system objects do not change as often as all the other database objects, this minimizes write activity to the primary data file and helps reduce corruption due to hardware failures.

Another benefit this will give you is to be able to do maintenance at the filegroup level. More to come about the benefits of filegroups and files in a future post.

Saturday, June 12, 2010

Push the button... One way to automate boring manual processes. (Part 4 of 4)

Final thoughts on automating this process...

(Read Part 1, Part 2 and Part 3)

I rarely give up, rarely say things like "I can't"... so with persistence comes a big payoff.
  • I've created a simple front end giving the finance users the ability to "push" their own button.
  • I make sure the ETL isn't already running.
  • I save all button pushes to a history table for displaying in the web app.
  • I display some feedback to the user (or any visitor for that matter), letting them know what major milestone the application is working on.
  • I've combined .NET, T-SQL, MicroStrategy .dll's, a SQL Server Job, a Windows Scheduled Task, and a SQL stored proc to send the final e-mail notifying all Finance Managers the process is complete.
In case you need it, code to send e-mail (after making sure this isn't the standard nightly ETL that runs on a schedule):
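(A rough sketch; the Database Mail profile, the recipient address, and the @IsNightlyRun flag are placeholders for whatever your environment and calling proc use.)

-- In the real proc this flag would be passed in by the on-demand path
DECLARE @IsNightlyRun bit;
SET @IsNightlyRun = 0;

-- Only send the "button pushed" e-mail for on-demand runs,
-- not for the standard nightly scheduled ETL
IF @IsNightlyRun = 0
BEGIN
    EXEC msdb.dbo.sp_send_dbmail
        @profile_name = 'ETL Mail Profile',             -- placeholder Database Mail profile
        @recipients   = 'FinanceManagers@company.com',  -- placeholder AD distribution list
        @subject      = 'ETL has been run...',
        @body         = 'The on-demand ETL and MicroStrategy refresh have completed.';
END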
It brings a smile to my face when I wake up in the morning to several e-mails notifying me that someone overseas has pushed their own button without waking me. 

Sunday, June 6, 2010

Push the button... One way to automate boring manual processes. (Part 3 of 4)

Alright, on to the MicroStrategy part of this puzzle.

(Read Part 1, or Part 2)

You'd need the MicroStrategy SDK license if you want to be compliant with the usage of this code. FYI - we were on MSTR 9.0.209.
  1. I added an operating system command (CmdExec) step to the end of my SQL Job and called it Run Batch File to Refresh MSTR.
  2. I created the code in C#, just a simple console application, called MSTR_Refresh.exe and deployed it to the same server that held our MicroStrategy Intelligence Server service.
  3. I created a Windows Scheduled Task on the MicroStrategy Intelligence Server which runs the code, located locally, along with the necessary MicroStrategy dll: Interop.DSSCOMMMasterLib.dll
  4. The program defines the connections and calls ExpireAllCaches(0), clearing the report cache.
  5. And finally it updates all cubes. 
  6. For this particular example, I only needed this on one project, but you could loop through multiple if needed. 
As always, back everything up before attempting this. Enjoy!

Clear All MicroStrategy Caches:


Update MicroStrategy Intelligence Cubes:


Saturday, June 5, 2010

Push the button... One way to automate boring manual processes. (Part 2 of 4)

(Read Part 1 here...)

The current architecture of the ETL (not written by me) is a series of MS SQL job steps, mostly consisting of calls to stored procedures. To determine the time each step took, I used SQL Profiler. I determined the best places inside the 50+ steps to update the ETL_StepHistory table, and updated the corresponding stored procedures to write to it.

I also created a few stored procedures:

RecordETLSteps, which takes a parameter @comment (varchar(255)).
This proc is called at six spots inside the ETL steps, each call passing whatever description I want to display, for example: pulling data to fill lookup tables, or loading forecast data.
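A rough sketch of what that proc amounts to (the ETL_StepHistory column names here are stand-ins based on the description in Part 1, not the exact schema):

CREATE PROCEDURE dbo.RecordETLSteps
    @comment varchar(255)
AS
BEGIN
    SET NOCOUNT ON;
    -- ETL_StepHistory: step identity, description, date/time (see Part 1)
    INSERT INTO dbo.ETL_StepHistory (StepDescription, StepDate)
    VALUES (@comment, GETDATE());
END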

TestJobStatus, which takes a parameter @jobname (varchar(255)) - so I could potentially reuse it to check any other SQL Job. I had nothing in mind at the moment, but it's always a good idea to make even simple procedures reusable, in my mind.
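Something along these lines for the job check (a sketch against the msdb job tables; the real proc may differ in the details):

CREATE PROCEDURE dbo.TestJobStatus
    @jobname varchar(255)
AS
BEGIN
    SET NOCOUNT ON;
    -- Returns 1 if the named SQL Agent job is currently executing, 0 if not
    IF EXISTS (
        SELECT 1
        FROM msdb.dbo.sysjobs j
        JOIN msdb.dbo.sysjobactivity ja ON ja.job_id = j.job_id
        WHERE j.name = @jobname
          -- only look at the most recent SQL Agent session
          AND ja.session_id = (SELECT MAX(session_id) FROM msdb.dbo.syssessions)
          AND ja.start_execution_date IS NOT NULL
          AND ja.stop_execution_date IS NULL
    )
        SELECT 1 AS IsRunning
    ELSE
        SELECT 0 AS IsRunning;
END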

In my web app, I do a dreaded Thread.Sleep(2000) and then check the step history table, displaying the details in an asp:Repeater inside an asp:UpdatePanel. The page then checks the job status using the above proc, passing in the name of the SQL Job. If the return value states the job has completed, I enable the "Run ETL" button once again for all users, and save the status "Job completed, sending e-mail" into the steps table and page display. The e-mail is sent from within SQL to an AD group of all users who need to know the ETL just finished.

Stay tuned for Part 3 where I'll discuss the MicroStrategy piece, including code snippets.

Tuesday, June 1, 2010

Push the button... One way to automate boring manual processes. (Part 1 of 4)

I don't know about you, but I don't like doing things over and over and over again. 

When I first arrived at my current job, in addition to the nightly ETL run, whenever a finance manager uploaded his or her data through a custom application, they would send an email to me to "push the button". Since the company is worldwide, some of these requests would come after my normal working hours. The ETL process consisted of kicking off an MS SQL job, waiting until the job completed its nearly 50 steps, then logging into MicroStrategy Desktop to clear the report cache and intelligent cubes. Once that process was completed, if I was still paying attention in the middle of coding, I would notice and send all the finance managers in the company an email stating "ETL has been run..."

Well after the fifth time of "pushing the button," I'd had enough.

Plan:
Create a simple internal web application to give finance managers the ability to press their own button. This would kick off the ETL, then the MicroStrategy processes, and at the end send the email to all concerned.

Hurdles:
  • I needed to make sure only one person could push the button and the rest would be locked out from pushing the button while it was already running.
  • I wanted a way of keeping track of who pushed the button when (so a history).
  • The web app needed to be locked down with two levels of security:
    • Admin, for users to see history
    • Regular finance users - and I didn't want to maintain the users myself, but utilize AD groups.
  • How to interface with MicroStrategy...

I piggy-backed on our existing MicroStrategy web server as the internal web server to host the front end, and had my system admin create an internal DNS entry for it for ease of access. I also procured the MicroStrategy SDK license so I had access to their documentation and .dll's.

Building the web app:

Security: I locked down a subfolder (Admin) of the new site using a &lt;location&gt; element with &lt;authorization&gt; rules in web.config, which locked the admin pieces down from everyone else. Then, under the &lt;authorization&gt; tag at the main level, I added my AD group for the finance managers and denied all others.

Two main pages to start with for the general users: an ETL_Run page and an ETL_History page. The history page is a simple list view, using a SQL connection, that reads from a simple history table I created.

New Data Objects:

Two new tables: ETL_History and ETL_StepHistory. The ETL_History table has just an identity column, plus the date and user. The ETL_StepHistory table is updated at each major milestone in the ETL, to give the user some feedback as to where the process is, since it can take up to 30 minutes to complete. The front end just pings this ETL_StepHistory table and displays the data; when a user kicks off a new ETL, the table is truncated. It has a step identity column, a description, and date/time columns.
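Roughly what the two tables look like (a sketch; the column names are my stand-ins for the identity, date, user, and description columns described above):

CREATE TABLE dbo.ETL_History (
    HistoryID int IDENTITY(1,1) PRIMARY KEY,       -- identity column
    RunDate   datetime NOT NULL DEFAULT GETDATE(),
    RunUser   varchar(128) NOT NULL                -- who pushed the button
);

CREATE TABLE dbo.ETL_StepHistory (
    StepID          int IDENTITY(1,1) PRIMARY KEY, -- step identity column
    StepDescription varchar(255) NOT NULL,         -- milestone text shown in the web app
    StepDate        datetime NOT NULL DEFAULT GETDATE()
);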

Tune in for Push the button, Part 2, coming soon.