
Archive for the ‘SQL Mayhem’ Category

Recently, I tweeted about having to restore fourteen databases in SQL Server 2000. I wasn’t surprised to receive some good-natured ribbing about still having databases in SQL Server 2000, and I’m okay with that… for the most part. Yes, I know and agree that it is not good (major understatement of the year/decade) to still be on SQL Server 2000. Why? Not just because it’s woefully out-of-date, unsupported by Microsoft, and plain miserable to deal with at times, but also because the later versions have some wonderful features that it’d be awesome to take advantage of. Not to mention the embarrassment of admitting that, yes, we still use SQL Server 2000. At least we’re not running SQL Server 6.5 or 7.0, right? That I know of, at least. That has got to count for something.

It could be worse, right?

While I’m at it, I might as well confess, err… mention that we are also running SQL Server 2005, 2008, 2008 R2, and 2012… and Oracle 11gR2… and MS Access (don’t ask what versions, please. For the love of all things data, please don’t ask!) What? No SQL 2014? Nope. Not yet, anyway. We are painfully aware that having so many versions isn’t necessarily a good thing. Why can’t I do a merge or a CTE? Oh yeah… I’m in a SQL 2000 database. Ick. I am hopeful that we can come up with a good plan in the next year (or so) to somehow consolidate all (at least some!) of these instances into something more recent. At least management is aware of our need to consolidate/migrate to the current century. It’s a start. Plus there are quite a few other important projects in progress at the moment.

But Why?

So you may be wondering (or not)… for the love of all things SQL, why are you still on SQL Server 2000?! I admit that I envy anyone in a position where they can keep upgrading to the newer versions soon after they come out, or at least before they are ridiculed, err… aren’t supported anymore. So what can prevent someone from upgrading? Many things. Others may have different reasons, but some of the major ones I’ve encountered include:

The 90s Called. They Want Their VAX Back. (a.k.a. Higher Priority Projects)

Seriously, I don’t go anywhere near it.

What could be worse than running SQL Server 2000? How about a legacy VAX system from the 1990s? Now what if your core business applications, including the database, were still running on said VAX system? And what if the five-year project to migrate off said VAX turned into a ten-plus-year project for various reasons and it’s still not done? That is a very long story and I’m not blaming anyone. Stuff happens. Let’s just move on and get it done. Yep. That trumps the SQL Server 2000 boxes by at least a few years. At least most of one of those major applications is on SQL Server… 2005. I know. I know. Yes, that one needs to be upgraded as well. <insert painful sigh here> The good news is we’ve renewed our efforts to get those apps off the VAX much more quickly. I’m hoping it’ll happen in the next year or two. Positive thoughts. Positive thoughts. And, no. It’s not going to SQL 2000 or 2005… itsgoingtooracle11gr2orpossibly12c. All kidding aside, I’m okay with that. No, really. I am. The “why” on that one is a bit of a long story, but I’m good with it, and that’s probably where we’ll end up putting everything.

Oh, What a Tangled Web We Weave

Here is another reason why upgrading isn’t as easy or as straightforward as one may think… or hope… or wish. One of our main production SQL Server systems is still running on SQL 2005. I avoid in-place upgrades if at all possible because of the risks involved, so I’d much rather migrate this system to another box running SQL Server 2012 (at least) or even *gasp* 2014. So what is delaying us from at least moving to another bigger/better/more beautiful box? All the flippin’ interfaces. When I stop to count up all the different applications and systems that access this box and the databases on it, my eyes start to glaze over, my brain starts to throb, and I feel a bit faint. It would be doable, but I strongly believe it would be a fairly large/nightmarish undertaking; one that we must start at some point in this decade, err… soon.

This box contains at least five major databases. Of these five major databases, one very critical database is heavily used by multiple applications for different departments (I’ve counted at least four so far). The interfaces to it (I’ve counted twelve off the top of my head) would have to be analyzed and tested.  Their respective databases may need to be upgraded as well at some point.   That database is also replicated to two different servers (SQL 2005 and SQL 2008), internal and external. There are jobs to import/export data to/from other servers. There are linked servers from this one to other servers. Other servers link to it as well. Can’t forget the SSRS servers that also use it. At least those are running on SQL 2008 R2.


Altogether, this major system is actually spread across at least eight different servers of various database versions, including Ingres, Oracle, SQL 2000, SQL 2005, SQL 2008, SQL 2008 R2, and SQL 2012. To say it’s a bit of a mess is an understatement. And that’s just one system. We currently support over 70 SQL Server instances of various versions, not to mention Oracle and MS Access. Some are in-house developed and some are vendor-supported. Some we control and some are out of our control.

Kill It! Kill It With Fire!

So how did it get that way? I’m not really sure because I inherited a lot of this and don’t have the background/history as to why certain decisions were made… or I may have just blocked it from memory. It could just be the result of “put this here, put this there,” and before you know it you have a Frankenstein of a system/environment. Having said that, I’m trying not to focus so much on the “how or why” but more on “where do we go from here?” and “how can we make this better?” “Kill it with fire” is not a preferred option… yet. So please put the pitchforks away… for now.

Where to start…

I am hopeful that we can start on a plan to tackle these out-of-date servers once we’ve finally moved everything off the VAX or even sooner. Unfortunately, it’s going to take some time for all of it. Not to mention production issues that pop up and being pulled in on other important projects (so-and-so bought a new application with a new database and now we have to put it somewhere and figure out the licensing and by the way you need to learn Oracle and you need to learn APEX and SOA and how that’s going to fit into everything and don’t forget we need to migrate those MS Access databases to Oracle because we’re thankfully not supporting MS Access anymore…)  

Please don’t take this wrong. I am actually looking forward to learning more about Oracle and APEX and SOA and whatever comes next. It can just be a bit overwhelming at times. In addition, I don’t say it enough, but I am extremely thankful to have a wonderful and very talented team whom I can count on to help make important decisions and help us get to where we need to be – in the current century or at least somewhere closer, if not beyond. :-D

Here’s To You!

So if you’re on the latest and greatest versions, I commend you. No, seriously. I do. No jokes. No sarcasm. It’s great. Really. It is. However, if you’re like us and are still running on old versions that aren’t supported anymore, I feel for you. It can be painful and challenging. If you’re like me and tend to feel depressed, err… embarrassed by the old technology, don’t be. Instead, hold your head up high. You’re probably doing the best you can with what you’re given and for your specific environment. Some things are out of our control. We can only do our best to get where we need to be. Hopefully, you won’t need a miracle to make it happen – just some support from your peers and quite possibly a few shots of your favorite beverage from time to time. So good luck and may all your databases stay online… and up-to-date.

Read Full Post »

The Situation

Umm… we have a problem

It’s 11 am. You’re sitting at your desk at work (where else would you be?) trying to determine which of the 20 help desk tickets you’re actively working on to tackle next. That includes at least 5 actual projects (I think it’s 5), not just fixing various issues. Not to mention trying to figure out how to explain to someone why you shouldn’t include a “rollback transaction” as part of error handling in a stored procedure that contains just your run-of-the-mill select query. It’s all part of learning, right? Oh, and don’t forget that Oracle conference call at 1:30 pm. Did I mention your new minion, err… Oracle DBA has some good, valid questions for you about the Oracle system as well? Wait. Did you talk to the Access guy yet about the tasks he’s working on? Oh yeah. You did that earlier this morning. Given all that, you’re actually feeling pretty good because one of the projects you’ve been working on went live that morning with no problems. Go team! However, before you can say “I wonder what I should have for lunch today,” you have 3 developers and 1 manager at your desk (or was that 2 developers and 2 managers?) asking for help with a SQL Server performance issue. It’s actually pretty important considering the end users are on site doing user acceptance testing for a major system release. Dun dun dun…

The Issue

A stored procedure runs fine on Server A but times out on Server B. Both are non-production servers. Both servers have SQL Server 2005 build 9.0.4035. Note: I included the build instead of the service pack level because I didn’t want to look it up and I don’t have it memorized yet. Did I mention we’re running SQL Server 2000, 2005, 2008, 2008 R2, and soon-to-include 2012? Oh, and that’s for somewhere around 73 instances and 800+ databases. Oh, and then there’s Oracle Exadata. Continuing on… The databases are identical because they were restored from the same backup file. Still, you verify that the record counts match and the structures match. No problems there. You can run the procedure within SSMS (SQL Server Management Studio) just fine on both servers. No problem. You break down and give the developer db_owner permissions on both databases just to prove it’s not a permissions issue. Plus it’s not production. So no worries. They had no problems running the procedure in SSMS on both servers. However, when the procedure is executed from the application or from within Visual Studio (2010, I believe), it times out on Server B. There are no error messages here, there, or anywhere. Not in a log. Not in a… Where was I? Oh yeah…
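
Side note: if, like me, you don’t have build numbers memorized, SERVERPROPERTY will hand you the build, the service pack level, and the edition in one quick query:

-- build, service pack level, and edition in one shot
SELECT SERVERPROPERTY('ProductVersion') AS build,
       SERVERPROPERTY('ProductLevel')   AS servicePackLevel,
       SERVERPROPERTY('Edition')        AS edition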

What about the stored procedure itself? It returns two datasets from two queries. From doing a SQL Profiler trace I found it was getting hung up on the first one. The first query is a select with inner joins on four other tables. Nothing too complicated, at least. I probably shouldn’t mention the two optional parameters are included in the inner join clause instead of using a where clause.
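
To give you an idea of the shape of it (all procedure, table, and column names below are made up), it looks something like this, with the optional parameters tucked into the joins’ ON clauses rather than a WHERE clause:

CREATE PROCEDURE dbo.usp_GetImportantStuff
    @optionalParam1 int = NULL,
    @optionalParam2 int = NULL
AS
BEGIN
    -- first result set: inner joins with the optional parameters buried in the ON clauses
    SELECT o.OrderId, o.OrderDate, c.CustomerName, p.ProductName
    FROM dbo.Orders o
    INNER JOIN dbo.Customers c
        ON c.CustomerId = o.CustomerId
        AND (@optionalParam1 IS NULL OR c.RegionId = @optionalParam1)
    INNER JOIN dbo.OrderLines ol
        ON ol.OrderId = o.OrderId
    INNER JOIN dbo.Products p
        ON p.ProductId = ol.ProductId
        AND (@optionalParam2 IS NULL OR p.CategoryId = @optionalParam2)

    -- second result set would go here
END

Catch-all predicates like those tend to make life interesting for the optimizer, which is worth keeping in mind for what comes next.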

One of These Things Isn’t Like the Other

One of these things isn’t like the other…

There are so very many pieces to look at and consider, but this is what I did. I probably should come up with a good checklist for the future so I’m not scrambling. Good intentions and all that, right?

So what could possibly differ between these two systems?  The record counts on the tables are the same. The structures are the same. The indexes are identical. Hmm.
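
For the record-count piece, if you don’t feel like running SELECT COUNT(*) against every table, something along these lines on each server (SQL 2005 and later) will do the trick:

-- quick row counts per table; run on each server and compare the output
SELECT t.name AS tableName, SUM(ps.row_count) AS totalRows
FROM sys.dm_db_partition_stats ps
INNER JOIN sys.tables t ON t.object_id = ps.object_id
WHERE ps.index_id IN (0, 1)   -- heap or clustered index only
GROUP BY t.name
ORDER BY t.name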

Maybe something with memory? The cache? Could it be the execution plan? I attempted to retrieve the actual plan from Server B and guess what happened? It kept running and running and running. Just like what the developer experienced. I had no problems retrieving the actual execution plan from Server A, though. It ran in about 5 seconds. Double hmm.
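
By the way, you don’t have to click buttons in SSMS for any of this; the estimated plan can be pulled without executing anything, and the actual plan comes back alongside the results when you do execute (procedure and parameter names are placeholders):

-- Estimated plan: returns the XML plan without running the procedure
SET SHOWPLAN_XML ON
GO
EXEC dbo.usp_GetImportantStuff @optionalParam1 = NULL, @optionalParam2 = NULL
GO
SET SHOWPLAN_XML OFF
GO

-- Actual plan: runs the procedure and returns the plan with runtime information
SET STATISTICS XML ON
GO
EXEC dbo.usp_GetImportantStuff @optionalParam1 = NULL, @optionalParam2 = NULL
GO
SET STATISTICS XML OFF
GO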

So I generated the estimated plan from both systems with no problem and compared them. Gee. They were completely different. That wasn’t a huge shock, but it was still somewhat surprising considering the usage on both systems should be about the same. What was interesting was that the plan on Server B said an index was missing on one of the tables. Really? The index is there, but it turns out the number of statistics on that table was different from the number on the same table on the other server. So why were the statistics so different? We have maintenance plans in place on both servers to reorganize the indexes and update the statistics every Sunday. They ran on both servers this last weekend just fine. They should be the same, but for giggles I thought I’d check them. Guess what? They were different. Dude! Different how? Different in that on Server B the plan updated the statistics and then reorganized the indexes. This is the server where the procedure hangs when executed. On Server A, the indexes are reorganized before the statistics are updated. Wow. Could this be it? I think it very well could be the problem!
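
If you ever want to do the same comparison yourself, a quick sketch like this (the table name is a placeholder) lists every statistic on a table and when it was last updated; run it on both servers and eyeball the differences:

-- statistics on a table and when each was last updated
SELECT s.name AS statName,
       s.auto_created AS autoCreated,
       STATS_DATE(s.object_id, s.stats_id) AS lastUpdated
FROM sys.stats s
WHERE s.object_id = OBJECT_ID('dbo.SomeImportantTable')
ORDER BY s.name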

The Test

So on Server B, I reorganized the indexes on only the tables used by this procedure and then updated the statistics. Guess what? I could then easily retrieve the actual execution plan without it hanging. I then asked the developer to try executing the procedure. Ya know what? It ran just fine! Sweet!
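
In case it helps anyone else, what I ran on Server B boils down to something like this for each table the procedure touches, in this order (the table name is a placeholder; choose your own sampling options):

-- reorganize the indexes first...
ALTER INDEX ALL ON dbo.SomeImportantTable REORGANIZE

-- ...then update the statistics
UPDATE STATISTICS dbo.SomeImportantTable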

Hindsight is 20/20

Should I have gathered a bunch of info from DMVs and what-not first? Yep. Probably. There’s a million things I probably should have done, but considering the time crunch and sheer number of other tasks that have fallen to me, I think I did okay. I solved the problem and made everyone happy so they can continue testing and I can continue on my merry way.

But Whyyyyy?

Now that is the million dollar question. Why does it matter if you reorganize your indexes before updating statistics? Well… you probably won’t like this but I’m going to save that for another post on another day. :-) That’s  my plan anyway.

Do I really want to look under the covers?

Hey, Ya’All Ain’t Gonna Believe This!

I do have to give a shout out to our new minion, oops. I mean Oracle DBA. Even though he doesn’t know SQL Server, he asked very intelligent questions which helped me to think through the process and what could be wrong. We made a pretty good team today which is awesome in itself. :-)

Read Full Post »

Warning!

It’s Friday and I’m feeling a bit wacky (what else is new, right?). So… I thought I would write about something that’s been on my list for a few months now and put an amusing spin on it. Well, at least I find it amusing. :-)

Several months ago, we inherited several database servers from another department. Our job was to bring them up to our standards when they were brought onto our network. Luckily, they all had SQL Server 2005, 2008, or 2008 R2 installed on them. Whew! It was a very enlightening experience considering these servers were not set up by database administrators. However, whoever set them up didn’t do all that bad of a job, all things considered. Note: These servers were also brought onto our domain from a different one, which involved lots of Active Directory account additions that affected accounts on these servers.

Anyway, to help relieve some of the stress, I couldn’t resist putting together a list of steps to be performed while bringing the servers onto our network and up to our standards. This list does not contain everything we did but it’s somewhat close. On a serious note, many months of planning and hard work went into this project by all of IT. There was quite a bit more that went into it.  This is but a small slice of our part. While this list was written in jest, there may actually be a few useful nuggets of information in there. Disclaimer: I put this list together for fun to release some stress. We did not actually partake of every step outlined. I’m hopeful you can spot the “what we actually did” steps versus the “wishful thinking” steps. ;-)

 Generic Work Breakdown Steps (WBS) for Database Server Integration:

DBA Survival Kit Option #1
  1. Retrieve your DBA Survival Kit. It should contain the following items:
    • 1 shot glass
    • 1-3 large bags of dark chocolate, dependent on the duration of the integration and the number of non-DBAs involved
    • 1 large bottle of your choice beverage
    • 1 tin of breath mints
    • 1-3 rolls of duct tape, dependent on the duration of the integration and the number of non-DBAs involved
    • 1 sarcastic ball (note: it is like a Magic 8 ball but displays sarcastic answers such as “whatever” or “ask me if I care”)
    • 1 pillow and blanket set
    • Music playlist of your choice
  2. Retrieve your stash of chocolate and partake of one piece for quality assurance testing.
    • Test a few more pieces just to be sure it’s worthy.
  3. Open the bottle of your choice beverage. Help yourself to one shot to ensure it’s of good quality.
  4. Obtain all SQL login passwords including the SA account.
  5. Start music playlist.
  6. Ignore the voices in your head.
  7. Verify/add domain account as a local admin onto the server for the Database team to use to manage the servers.
  8. Turn on the SQL Agent, if it’s disabled.
    • If it’s been turned off, smack person responsible upside the head unless they have a good reason why it’s been disabled.
  9. Change all SQL-related service accounts to the domain account dedicated to running the SQL services.
    • If the service accounts were running under any local accounts, find out why.
    • If it’s the wrong answer, smack person responsible upside the head.
    • Help yourself to a piece of chocolate.
  10. Manually back up all databases on all instances to a designated area including all system databases (a bare-bones T-SQL sketch of this step follows the list).
    • Make note of where you put them for future reference. Feel free to be descriptive.
    • Note: “where the sun doesn’t shine” doesn’t count.
    • Tell the voices in your head to shut up.
  11. Script all logins and save the script to the network.
    • Again, make note of where you put it.
  12. Add the SQL Server domain account to the SQL Server instance as sysadmin.
  13. If they brought their own chocolate, add your team’s Active Directory (AD) accounts to the SQL Server instances as sysadmin.
  14. Coordinate with the applications team to determine how the applications are connecting to the databases.
    • May need to run Profiler traces.
    • Help yourself to a generous shot of your choice beverage.
  15. Work with the applications team during the changing of all sysadmin SQL account passwords including the SA account since it is possible (and very likely) applications are using those accounts.
    • Have some more chocolate… followed by another shot of your choice beverage… or four…

      DBA Survival Kit Option #2

  16. Work with the application team during the addition of any new accounts and disabling the old accounts to ensure the application still works.
    • Add new AD accounts.
    • Set permissions for new AD accounts.
    • Change database owners.
      • WARNING! WARNING! Changing the database owner may break something!
      • Down one shot of your choice beverage per database owner changed followed by a few more pieces of chocolate.
    • Disable the old AD accounts.
    • Pray to the SQL gods everything still works.
    • Help yourself to another shot of your choice beverage just in case. Down another one to appease the SQL gods. Better safe than sorry, right?
  17. Configure / reconfigure database mail
    • Send a test email to the server admins informing them the database servers have unanimously decided the DBA team is worthy of more chocolate and it would please the database servers greatly if chocolate was sent to the DBAs… preferably good quality dark chocolate… on a regular basis…

      A tutu? Seriously?

  18. Verify the original database assessment.
    • Note any changes or discrepancies.
    • Help yourself to two more shots of your choice beverage – one for you and one for the bacon slinging monkey dancing around on your desk wearing a pink tutu. Keep the duct tape out of the monkey’s reach…
  19. Inform your minions, err… the applications team that they have now been bequeathed permission to allow the peasants, err… users to test their logins to ensure they are able to access the databases through the applications.
  20. Work with them to troubleshoot any database-related issues. Keep the sarcastic ball in reach.
    • Finish off another shot of your choice beverage.
    • Scarf down more chocolate.
    • Repeat until the issues are resolved or you run out of your choice beverage and chocolate, whichever comes first.
  21. Set up maintenance plans, if they don’t exist.
    • Ensure the maintenance plans work.
    • Cross your fingers and toes and everything else you can think of.
    • Finish off your choice beverage and chocolate.

      ahhhh... where's that duct tape? hey! who took my blanket?

  22. Retrieve your blanket and pillow making yourself comfy.
  23. Fling mints at the monkey to keep it from dancing an Irish jig on your bladder.
  24. Apply duct tape to keep yourself from falling off the face of the earth.
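
Since step 10 is one of the few items above that isn’t entirely in jest, here’s a bare-bones sketch of it: back up every online database on the instance, system databases included, to one designated spot. The share path and file naming are placeholders, and there’s no error handling, compression, or verification; season to taste.

-- bare-bones version of step 10: back up every online database (except tempdb)
-- to a designated share; path and file naming are placeholders
DECLARE @dbName sysname, @sql nvarchar(1000)

DECLARE dbList CURSOR FOR
    SELECT name
    FROM sys.databases
    WHERE state_desc = 'ONLINE'
      AND name <> 'tempdb'

OPEN dbList
FETCH NEXT FROM dbList INTO @dbName
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = N'BACKUP DATABASE ' + QUOTENAME(@dbName)
             + N' TO DISK = N''\\SomeBackupShare\Integration\' + @dbName + N'.bak'''
    EXEC (@sql)
    FETCH NEXT FROM dbList INTO @dbName
END
CLOSE dbList
DEALLOCATE dbList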

Read Full Post »

Some of  you may or may not have heard, but for those who haven’t, we are getting Oracle databases at work. And the crowd goes wild!  I’m really excited and actually looking forward to it. It’s a great opportunity to learn something new which is one thing I love about my job. I’m really happy we’re getting training too. Yay! It will be nice to take some classes for once since I haven’t really had any formal SQL Server training. It’s all been on the job training with the exception of one Reporting Services architecture class. I understand there will be consultants to help us out along the way too. I’m really hoping to learn a lot from them. I’ve had some less than stellar experiences in the past with consultants at other jobs, but I’m going to strive to be positive that it will work out just fine in this case. Call me naive but I really do hope it works out. Crossing fingers and toes and everything else I can think of!

Uhh... sorry, but I'm a little busy right now...

It’s my understanding that we’ll have both Windows and Linux boxes. The Oracle version we’re getting is Oracle 11g R2. I was very excited to hear we’re getting Linux. In one of my past jobs, I did a little roundabout UNIX administration and really enjoyed it.  Feel the power! Muhahahaha… Now I need to dust off the cobwebs. So far Linux is extremely similar to the UNIX I used way back when. Hopefully the learning curve won’t be all that bad. It may help that my husband is currently taking IT classes and this semester it so happens he’s taking a UNIX class. Note to self: need to swipe…err, borrow his UNIX book… Luckily, I have a little experience with Oracle 9i from several years ago. However, it was just writing stored procedures in PL/SQL for reports. Now we need to learn to be Oracle Administrators first and then development comes later on. What is really truly going to help us out is that our manager is Oracle certified (even though it was a while ago) and our newest DBA has experience with Oracle. At this point, I think we need all the help we can get. :-)

Our first big Oracle project just kicked off this last week. The timeline is very aggressive and tight, with production scheduled for mid-September. Plus we have current ongoing projects that will keep moving forward at the same time, in addition to any production issues that arise. We’re also taking over the administration of several SQL Servers from other departments that are moving onto our network, including one SQL 2008 cluster. That will keep us pretty busy too. Whew!

Having said all that, it’s time for the bad news. I honestly don’t know how much time I’ll be able to devote to my blog in the near future. I want you all to know I haven’t abandoned writing on this blog, but I will try to post when I can. Beware there may be some Oracle lurking in there somewhere with SQL clustering and what not. You have been warned! :-)

My main purpose in writing this post is that I’m really curious what your experiences with Oracle have been. Please feel free to comment! I’m looking forward to reading your comments and opinions – the good, the bad, and hopefully not too ugly. :-)

Read Full Post »

Better late than never, right? :-)

Day 3 – So far I’ve managed to survive the work week with the rest of my team out of town. Overall, it was fairly quiet. Either word got around that my DBA buddy gave me duct tape as part of my survival kit, or everyone thought I’d be really busy holding down the SQL fort by myself. Regardless, it was nice to have a quiet day.

I managed to get quite a bit of work done and didn’t make a fool of myself during the manager’s meeting, thankfully. Note: I’m not a manager. I just took my boss’s place in her absence. Although, it does amuse me that I receive mail at work indicating I’m an IT manager. Have no idea where that came from since I’ve never claimed to be any sort of manager. I tend to forget to water the plants in the office when my DBA buddy is away. So there goes my career as plant manager. Get it? Ha! Bad joke. Sorry.

The only SQL issue I ran into was attempting to grant the right permissions for certain users on one of our SQL Server 2008 development servers. Certain users have db_datareader and db_datawriter permissions on certain databases on this server. They needed the ability to view the list of tables via SQL Server Management Studio (SSMS). Since there are thousands of tables (no exaggeration) in each of these databases, I didn’t really want to grant view definition on every single table. Yes, I could script it but I was hoping for a better/easier/quicker way. There are 2-4 schemas that own different tables. So instead, I granted them view definition on each schema thinking it would solve my problem. Nope. It didn’t work.

After researching it a bit, I ended up just granting them view definition on each database which shouldn’t hurt since it is development. It appears to have worked just fine. If anyone has better ideas, I’d love to hear them. :-) 
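
For the curious, the two flavors of grant look something like this (the schema and user names are made up); the first is what I tried per schema, and the second is the database-level grant I ended up going with:

-- attempt #1: VIEW DEFINITION on each schema that owns tables
GRANT VIEW DEFINITION ON SCHEMA::app TO [DOMAIN\SomeDeveloper]

-- what I ended up doing: VIEW DEFINITION at the database level
GRANT VIEW DEFINITION TO [DOMAIN\SomeDeveloper]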

I will admit that I am relieved that the office is closed tomorrow. Now I just need to work on my CTE presentation for the user group meeting next week. :-)

Read Full Post »

Meant to post this yesterday evening. Oops!

Day 2 – Today was fairly uneventful. The highlight of my day was when I found my stash of pistachios. There is a saying that you are what you eat. I like to eat nuts like pistachios. Therefore, I am nuts. Hmm… that could explain quite a few things.

Ever have one of those days when your code fix breaks something else? Yeah, it was one of those days. Thankfully, I resolved the coding issues… for now. It turned out that I just needed some good test cases.

So far there haven’t been any major production issues. I don’t think the servers have realized the rest of the team is out of town. They are blissfully unaware, so far. I probably just jinxed myself. Note to self: stock up on pixie dust.

I finally broke out the chocolate. Been sharing it with our adopted developer. I think she needs it more than I. She keeps laughing and giggling at her computer which may or may not be a good sign, depending on how you look at it. Note to self: stay away from the project she’s working on, if at all possible.

Found some odd code in a function that doesn’t make a lot of sense.  Here is basically what I found:

set @currentDate = dateadd(dd,datediff(dd,1,@currentDate),1)
set @ageInDays = datediff(dd,@dateOfBirth,@currentDate)

What I don’t understand is what was done to @currentDate prior to calculating the age in days. When I add some code to test it out, it doesn’t seem to do anything to the current date or age calculation. I am guessing they are trying to add one day to the current date. To test it out, here is what I did:

declare @dateOfBirth datetime
,@currentDate datetime
,@ageInDays int

set @dateOfBirth = '1/1/1975'
set @currentDate = getdate()

select @currentDate currentDatePrior

set @currentDate = dateadd(dd,datediff(dd,1,@currentDate),1)
set @ageInDays = datediff(dd,@dateOfBirth,@currentDate)

select @currentDate currentDateAfter, @ageInDays ageInDays

Here are the results from running the above code:

The only difference I can see is that after using the dateadd(), the time is removed from the currentDate. Whether or not that was the intention, I do not know. If so, then there are different ways of doing this. To me, it would be a bit more straightforward to do this:

select cast(convert(varchar(10), getdate(), 101) as datetime) sameDateDifferentWay

You would get the same result and it may be a bit more obvious what the intention is. This is a great example of why comments would be beneficial. Then again, that’s just my opinion. :-)
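
For completeness, here are a couple of other common ways to get just the date; the second one requires SQL Server 2008 or later and returns a date rather than a datetime:

-- classic "floor to midnight" idiom; works on any version
select dateadd(dd, datediff(dd, 0, getdate()), 0) as flooredToMidnight

-- SQL Server 2008 and later: just cast to the date data type
select cast(getdate() as date) as justTheDate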

Read Full Post »

Running Amok! Amok! Amok!

Day 1 – My partners-in-crime, err… co-workers have left me in charge of our little data kingdom whilst they rub elbows with the SQL elite at the PASS conference in Seattle. Luckily for everyone, they anticipated my needs by bequeathing to me a survival kit in their absence. Surprisingly, I succumbed to the cookies much later than anticipated. It’s doubtful the chocolate will last the week. May have to resort to less-than-savory methods to acquire more chocolate, if necessary. Considering it’s a 3-day work week, that may not be what is referred to as a “good sign.” The duct tape has proved invaluable already. One developer asked what was in the kit. When shown the duct tape, she left post-haste. Hmm… must use this to my advantage.

So far it has been unnaturally quiet. I managed to get some work done today with few interruptions, which led to actual *gasp* progress! Note to self: create mini-replicas of the database team so the servers do not notice they are absent. It is also essential to locate pixie dust in the event of a server revolt.

Per request from one of our devoted minions, I created a script *an evil script?* to kill multiple user-spawned processes for a specific database. I shall document it here in my journal for posterity. Author’s Note: There is no guarantee a user process won’t have a SPID less than 50 or that a system process won’t have a SPID greater than 50, so I’m striking my original comment about it and removing that filter from the code below. (The struck comment read: “Must ensure the process is greater than 50 to preserve system processes.”)

--SQL Server 2000:
declare @processList varchar(8000)  -- roomy enough for one kill command per connection
set @processList = ''
if exists(select 1
          from master.dbo.sysprocesses
          where db_name(dbid) = 'kingdomOfNee')
          -- and spid > 50)
begin
    -- build one long string of kill commands, one per connection to the database
    select @processList = @processList + ' kill ' + cast(spid as varchar(10)) + ' '
    from master.dbo.sysprocesses
    where db_name(dbid) = 'kingdomOfNee'
    -- and spid > 50

    execute(@processList)

    -- check for stragglers; this should return no rows
    select spid from master.dbo.sysprocesses where db_name(dbid) = 'kingdomOfNee' --and spid > 50
end

In addition, the code above will work great for SQL Server 2000. For SQL Server 2005 and later, there’s a DMV for that. It’s called sys.dm_exec_sessions. Big thanks to Aaron Bertrand for reminding me of its existence. :-) What’s great about it is that it has a column called “is_user_process” that tells you if that process is *you guessed it* a user process. If the value is 1, it’s a user process. If it’s 0, it’s a system process. Simple, right? The only issue I have with using it is that it doesn’t have a column for the database id. The sys.dm_exec_requests DMV does, but that DMV returns information on active requests – not all connections. This particular server has more than one user database on it and I need to drop the connections to just the one database. Luckily, there is yet another option that Mr. Bertrand reminded me of. Yes, I’m having too many chocolate deprivation moments lately. :-)
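
For what it’s worth, one workaround on SQL 2005/2008 is to lean on sys.dm_exec_sessions for the is_user_process flag and borrow the database id from sysprocesses (a rough sketch; and I believe sys.dm_exec_sessions finally gets its own database_id column in SQL Server 2012):

-- rough sketch: kill user sessions in one database by joining the DMV to sysprocesses
declare @killList varchar(8000)
set @killList = ''
select @killList = @killList + ' kill ' + cast(s.session_id as varchar(10)) + ' '
from sys.dm_exec_sessions s
inner join master.dbo.sysprocesses p on p.spid = s.session_id
where s.is_user_process = 1
and db_name(p.dbid) = 'kingdomOfNee'

execute(@killList)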

Another option is to set the database to single-user mode with immediate rollback. The immediate rollback part will roll back any transactions that are running at that time. This should drop all connections to the database. The actual command looks like this:

ALTER DATABASE [KingdomOfNee] SET SINGLE_USER WITH ROLLBACK IMMEDIATE

To put it back to allow multiple connections, the command looks like this:

ALTER DATABASE [KingdomOfNee] SET MULTI_USER

It should work. Will need to think about it more and run it by my team since the script will have to run via a job at o’dark-thirty in the morning during the weekend.

Must find time on days off to work on my first ever presentation for our user group meeting next week. My DBA buddy informed me she will actually attend if I speak on CTEs (Common Table Expressions). Hmm… mind control experiment recommencement may prove to be successful. My evil plan will soon be a reality. It is only a matter of time… muhahaha… Note to self: find mindless minions to do evil bidding.

Sort-of-Serious Side Note: Rumors of my demise have been greatly exaggerated. No, really. Well, provided anyone noticed. Kidding! :-) This last month has been fraught with kin folk and perilous quests (i.e. family time and work projects). Therefore, I haven’t had time to work on my blog. It is my desire to remedy that. Therefore, I plan to at least ease my way back to blogging through a little journal of what happens when left in charge of the databases for a week since the rest of my team is in Seattle attending the PASS conference. My intent is to finish my little series on my daily database server script very soon, hopefully this Friday or possibly the next. Worst case, I should just aim for “before the end of the year… this year.” :-)

Read Full Post »

