Archive for the ‘SQL Fun’ Category

Happy Holidays: The First SQL

The First SQL

A parody of the song “The First Noel”.  Merry Christmas, Happy Holidays, and so on and so forth…!

The first SQL the server did say
Could not parse this statement, please write it this way.
In code where it lay a-scanning the heap
On a production server I wanted to weep.
SQL, SQL, SQL, SQL,
Formed is the Team of S-Q-L.
 
I looked down and saw a star
Glaring in the code at me thus far
And to my eyes it gave great fright
And so it continued to my “delight”.
SQL, SQL, SQL, SQL,
Formed is the Team of S-Q-L.
 
And by the fright of that same star
DBAs came for the coding fubar;
To seek why a ping threw an event
And to destroy the RBAR whatever it meant.
SQL, SQL, SQL, SQL,
Formed is the Team of S-Q-L.
 
This star awry it went possessed;
Causing mayhem it did not rest,
And there it did not stop or stay,
Right inside my trace – zero disk space.
SQL, SQL, SQL, SQL,
Formed is the Team of S-Q-L.
 
They appeared with Admin’s decree,
I fell brazenly trying to flee,
And shuddered scared in their presence
Their scold and slur, I then did tense.
SQL, SQL, SQL, SQL,
Formed is the Team of S-Q-L.
 
Then we worked the code we abhorred
The select star was finally no more
That frickin bug we had finally caught
Was on a test server we forgot.
SQL, SQL, SQL, SQL,
Formed is the Team of S-Q-L.
 


Can’t Group This

Yep! Another one! This is another one of those dratted fun and exciting posts inspired by a co-worker planting a little seed in my brain many months ago. It was fun to write and took a long time. I had no idea the song was THAT long! *sigh* Sadly (depending on your point of view), I have another one already in the works. Writing these parodies is actually a great stress reliever for me. It’s not perfect but I hope you enjoy it anyway! One of these days I’ll get back to a more serious post.

Can’t Group This

A SQL parody of MC Hammer’s “U Can’t Touch This” 

You can’t group this
You can’t group this
You can’t group this
You can’t group this

Your, your, your query puts me on guard
Makes my day, now I’m floored
Spank you for stressing me
With a query to fix, no time to Tweet

It’s not good when you know you see
A distinct group by in a CTE
And I’ve seen so much
And this text field, uh, you can’t group

I told you, Code Boy
You can’t group this
Yeah, I’m too forgiving and you know
You can’t group this

Look at this code, man!
You can’t group this
Yo, let me bust the funky queries
You can’t group this

Fresh conflicts, no grants
You can’t do that, now, you know you wanna code
So move, onto your feet
And let this Princess do a CTE

While I’m codin’, hold on
Drop this data bit and I’ll show you what’s goin’ on
Call that, sys stats

Stuck in recursion so roll it back
Let me know if this is too much
And this image, uh, you can’t group

Yo, I showed you
You can’t group this
Why you doin’ this, man?
You can’t group this

Yo, what the hell! Try again, busta
You can’t group this

Give me a sum on a whim
Or better yet, that’s why I’m codin’ em
Now, you know
You talkin’ about aggregates, you talkin’ about some rows

Data types, that’s right
Varchars are maxin’ so set them just right
No escape, just merge
What’s it gonna be in your T-SQL search

Distinct? Admit
It’s not that hard, you need to drop this bit
That alias you know…
You can’t group this

You can’t group this

Break it down!
Stop, summarize!

Get outta this funk, go ahead
Write your funky code like this so the server won’t drop dead

So run your scans on this “where”
Adjust your views, run your try-catch with some flare

Make it fit, join with inner
Code like this and you’re not a beginner
Remove, tried and dumped
Wait just a minute don’t do that! Thump, thump, thump

Yeah… You can’t group this
Hey, man! You can’t group this

Get better with code. Oy! It’s time to grow 

You can’t group this
Be alert, start again

You can’t group this

Break it down!
Stop, summarize!

You can’t group this
You can’t group this
You can’t group this

Break it down!
Stop, summarize! 

Any time with CTEs
Let your fingers take flight
There’s code to explore and heaps of queries to write.

Now you can start coding with some success
With others writing queries that make you guess.
A new world, unfurled, from awkward to child’s play
I concur, you defer, and we’ll infer, uh, no duhr
And they all can go away.

You can’t group this
You can’t group this
You can’t group this
You can’t group this

….


SQL Woes from A to Z

Ever have one of those days when you’re working with a colleague on a database issue and one of you has a fun idea that just takes on a life of its own?  Well, that’s exactly what happened today while we were doing some actual work.  Imagine that!  Below is what we came up with for your reading pleasure.

Many thanks to my friend, Erin, for collaborating on this fun little poem with me!

A is for the Alter that shouldn’t be run.

B is for the Backup that should’ve been done.

C is for the Cluster that flew into bits.

D is for the Data that no longer fits.

E is for the Errors we saw in the logs.

F is for the Faults that were NOT in the logs!

G is for the GO that couldn’t be found.

H is for the Heap that couldn’t be bound.

I is for the Index, non-clustered and disabled.

J is for the Job which needs that index enabled.

K is for the Kill that was run with a cursor.

L is for the Locks it caused you son-of-a… grrr!

M is for the Month I’ll never get back.

N is for the NULLS hiding in the stack.

O is for the Order By that killed my query.

P is for the Performance I needed so dearly.

Q is for the Query that we redesigned.

R is for the Ranks that are now undefined.

S is for the Select star I found in a proc.

T is for the Time that it lingered in a Lock.

U is for the Update that was lacking a Where.

V is for the Values it swiftly plopped in there.

W is for the When that was found without a Case.

X is for the XQuery we slapped in its place.

Y is for the Year as varchar, we couldn’t believe.

Z is for the Zero pad left, for which we all grieve.


These are the hysterical ramblings of a frustrated DBA. Her relentless mission: to upgrade strange old systems, to seek out new projects and bad design specs, to boldly index where no one has indexed before.

The Incident at Carmulus

DBA’s Log, SELECT GETDATE() as ‘Star Date’.  After many weeks of deliberation and preparation, tomorrow marks the dawn of a new day for the Carmulus system. The Alliance recently passed a not-so-secret-squirrel mandate effective 0800 tomorrow morning. Much rejoicing has commenced throughout the system.  I, for one, am relieved everything seems to be in place.

“Status report, Mr. Plock,” I commanded as I stepped onto the bridge.

“Captain, we’re receiving an alert from the Carmulus system. Their database backup job has failed. Initial reports indicate possible corruption. Manual backup attempts have also failed. However, the server appears to be operating within normal parameters. We have not received any distress signals from the inhabitants.”

“Thank you, Mr. Plock. What about the other databases? Were they backed up?”

“Yes, sir.  The other databases have been backed up successfully. However, the SQL Server error log is reporting a ‘cyclic redundancy check’ message, sir.  I initiated a DBCC CHECKDB command with physical_only, no_infomsgs as well.”

“And the results, Mr. Plock?”

“Output indicates 0 allocation errors, 3 consistency errors in 1 table and 12 consistency errors in the database. The minimum repair level recommended is repair_allow_data_loss.”

“That. Is not a good sign.” I contemplated while sipping my Dulthian latte. “When was the last good backup taken?”

“Sunday night, sir.”

“Check the recovery model on the database. It should be full. Do we have any valid transaction log backups?”

“Yes, sir. We appear to have valid hourly transaction log backups since the last full valid backup on Sunday.”

“Good. I’d rather not risk losing any data using the repair_allow_data_loss option unless we have no other choice. One more thing, Mr. Plock. Have you checked the server event logs by any chance?”

“Sir, the system event logs are reporting the Virtual Disk Service terminated unexpectedly after 1900 hours, a hard disk is reporting a bad block, and a logical drive returned a fatal error.”

“Good. God! It’s worse than I thought!  Mr. Chalulu, patch me through to Engineering!”

“Engineering. This is Chief Engineer Mr. Shcot.”

“Mr. Shcot, as you are no doubt aware of our current situation, what are our options?”

“Well, Cap’n. Seein’ as how some of the disk errors it’s showing make no sense and the server hasn’t been updated in several years, I recommend we patch the blimey thing as well as rebootin’ it.”

“Thank you, Mr. Shcot. How much time do you need?”

“Aboot one and a half hours, Cap’n.”

“Mr. Chalulu, contact the Carmulan ambassador and patch her through. I’ll be in my Ready Room.”

“Aye, aye, Captain.”

DBA’s Log, Supplemental. After contacting the Carmulan ambassador and conveying the seriousness of the situation, she has contacted the inhabitants of Carmulus to negotiate an outage.  In the meantime, I have directed my crew to investigate recovery options for the database. Luckily, it is of the 2008 variety and not 2000.

“Status report, Mr. Plock,” I utter as I stagger back onto the bridge and contemplate the contents of that Dulthian latte.

“Sir, using the restore verifyonly command, I verified the full backup from Sunday is valid. I was then able to restore it under a different name. After which I restored all of the transaction log backups up through the current one that just ran. I then ran the DBCC CHECKDB command against it. It’s still valid. Meaning, the inhabitants should not lose any of their data from yesterday and today provided the transaction log backups remain intact.”

“Good work, Mr. Plock. You have the bridge while I ah… complete some ah… paperwork. I’ll be in my quarters.”

DBA’s Log, Supplemental+1.  Preparations are now underway for patching the Carmulan server after hours. The inhabitants have been made aware they risk losing today’s and yesterday’s data the longer we wait. Attempts have been made to convey the dire circumstances we face.  However, they insist we wait until after hours. So be it. We decided against any attempt to repair the actual database due to the risk of data loss. Restoring it from the backups should work in our favor. May the SQL deities have mercy on our souls tonight, or what’s left of them anyway.

DBA’s Log, SELECT STUFF(Supplemental, 7, 0, ‘waitforit’). After what seems like an endless number of hours of patching, I have declared the mission a success.  The hard disk errors have been eradicated. The database was successfully restored using the full backup from Sunday along with the multitude of transaction log backups. I am also happy to report no loss of data was incurred and backups are functioning properly once again.
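
For the non-T-SQL readers: STUFF inserts one string into another at a given position, so that log title is a real expression. Quoted so it actually runs, it comes out like this:

    SELECT STUFF('Supplemental', 7, 0, 'waitforit');
    -- Returns: Supplewaitforitmental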

Summary below per request. Sorry for the delay.  [Updated 09/24/2013]
Mission Summary: The day before a major system change was to be implemented, we discovered that a database backup job had failed, reporting that the database might be corrupted. The manual backup attempts failed as well. The users did not notice any unusual behavior with their system and nothing else seemed wrong.  The error reported was a “cyclic redundancy check”. When we ran DBCC CHECKDB with physical_only, no_infomsgs, it showed “0 allocation errors, 3 consistency errors in 1 table and 12 consistency errors in the database. The minimum repair level recommended is repair_allow_data_loss.”

The Windows system event logs also showed the Virtual Disk Service terminated unexpectedly that night, a hard disk reported a bad block, and a logical drive returned a fatal error. After talking with a server admin about it, they recommended patching the server and rebooting it.

Since we had a valid full database backup from the weekend along with hourly transaction log backups, we decided to restore that backup along with all the corresponding transaction logs under a different database name. We then ran the DBCC CHECKDB command against it to verify it wasn’t corrupted. It was fine.  So after the patching completed and fixed the hard disk errors, we restored the database using the full backup from the weekend along with the transaction log backups and all was fine.
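
For those playing along at home, the recovery dance boils down to something like the following. This is a minimal sketch: the database name, logical file names, and backup paths are invented, and the WITH MOVE clauses depend entirely on your own file layout.

    -- Verify the full backup is readable without restoring it.
    RESTORE VERIFYONLY
    FROM DISK = N'\\backups\Carmulus\Carmulus_Full_Sunday.bak';

    -- Restore the full backup under a different name, ready for log restores.
    RESTORE DATABASE Carmulus_Verify
    FROM DISK = N'\\backups\Carmulus\Carmulus_Full_Sunday.bak'
    WITH MOVE N'Carmulus_Data' TO N'E:\SQLData\Carmulus_Verify.mdf',
         MOVE N'Carmulus_Log' TO N'F:\SQLLogs\Carmulus_Verify.ldf',
         NORECOVERY;

    -- Apply each hourly transaction log backup in order (one RESTORE LOG per file)...
    RESTORE LOG Carmulus_Verify
    FROM DISK = N'\\backups\Carmulus\Carmulus_Log_0900.trn'
    WITH NORECOVERY;

    -- ...then bring the copy online and check it for corruption.
    RESTORE DATABASE Carmulus_Verify WITH RECOVERY;
    DBCC CHECKDB (Carmulus_Verify) WITH PHYSICAL_ONLY, NO_INFOMSGS;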


The Situation

Umm… we have a problem

It’s 11 am. You’re sitting at your desk at work (where else would you be?) trying to determine which of the 20 help desk tickets you’re actively working on to work on next. That includes at least 5 actual projects (I think it’s 5), not just fixing various issues. Not to mention trying to figure out how to explain to someone why you shouldn’t include a “rollback transaction” as part of error handling in a stored procedure that contains just your run-of-the-mill select query. It’s all part of learning, right? Oh and don’t forget that Oracle conference call at 1:30 pm. Did I mention your new minion, err… Oracle DBA has some good valid questions for you about the Oracle system as well? Wait. Did you talk to the Access guy yet about the tasks he’s working on? Oh yeah. You did that earlier this morning. Given all that, you’re actually feeling pretty good because one of the projects you’ve been working on went live that morning with no problems. Go team! However, before you can say “I wonder what I should have for lunch today”, you have 3 developers and 1 manager at your desk (or was that 2 developers and 2 managers?) asking for help with a SQL Server performance issue. It’s actually pretty important considering the end users are on site doing user acceptance testing for a major system release.  Dun dun dun…

The Issue

A stored procedure runs fine on Server A but times out on Server B. Both are non-production servers. Both servers have SQL Server 2005 build 9.0.4035. Note: I included the build instead of the service pack level because I didn’t want to look it up and I don’t have it memorized yet. Did I mention we’re running SQL Server 2000, 2005, 2008, 2008 R2 and soon-to-include 2012? Oh and that’s for somewhere around 73 instances and 800+ databases. Oh and then there’s Oracle Exadata.  Continuing on… The databases are identical because they were restored from the same backup file. Still, you verify that the record counts match and the structures match. No problems there. You can run the procedure within SSMS (SQL Server Management Studio) just fine on both servers. No problem.  You break down and give the developer db_owner permissions on both databases just to prove it’s not a permissions issue. Plus it’s not production. So no worries. They had no problems running the procedure in SSMS on both servers.  However, when the procedure is executed from the application or from within Visual Studio (2010, I believe), it times out on Server B. There are no error messages here, there, or anywhere. Not in a log. Not in a… Where was I? Oh yeah…

What about the stored procedure itself? It returns two datasets from two queries. From doing a SQL Profiler trace I found it was getting hung up on the first one. The first query is a select with inner joins on four other tables. Nothing too complicated, at least. I probably shouldn’t mention the two optional parameters are included in the inner join clause instead of using a where clause.
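
If you’re wondering what that looks like, here’s a contrived illustration (the table, column, and parameter names are invented; this is not the actual procedure). For an inner join the two forms return the same rows, which is exactly why finding optional parameters buried in the join clause is such a delightful surprise:

    -- Optional parameter tucked into the join predicate:
    SELECT o.OrderID, c.CustomerName
    FROM dbo.Orders AS o
    INNER JOIN dbo.Customers AS c
        ON c.CustomerID = o.CustomerID
        AND (@Region IS NULL OR c.Region = @Region);

    -- The more conventional placement, in the WHERE clause:
    SELECT o.OrderID, c.CustomerName
    FROM dbo.Orders AS o
    INNER JOIN dbo.Customers AS c
        ON c.CustomerID = o.CustomerID
    WHERE @Region IS NULL OR c.Region = @Region;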

One of These Things Isn’t Like the Other

One of these things isn’t like the other..

There are so very many pieces to look at and consider, but this is what I did. I probably should come up with a good checklist for the future so I’m not scrambling. Good intentions and all that, right?

So what could possibly differ between these two systems?  The record counts on the tables are the same. The structures are the same. The indexes are identical. Hmm.

Maybe something with memory? The cache? Could it be the execution plan? I attempted to retrieve the actual plan from Server B and guess what happened? It kept running and running and running. Just like what the developer experienced. I had no problems retrieving the actual execution plan from Server A, though. It ran in about 5 seconds. Double hmm.

So I generated the estimated plan from both systems with no problem and compared them. Gee. They were completely different. That wasn’t a huge surprise, but it was still unexpected considering the usage on both systems should be about the same.  What was interesting was the plan on Server B said an index was missing on one of the tables. Really? The index is there, but it turns out the number of statistics on that table was different from the number on the same table on the other server. So why were the statistics so different?  We have maintenance plans in place on both servers to reorganize the indexes and update the statistics every Sunday. They ran on both servers this last weekend just fine. The plans should be the same, but for giggles I thought I’d check them. Guess what? They were different. Dude! Different how? Different in that on Server B, the plan updated the statistics and then reorganized the indexes. This is the server where the procedure hangs when executed. On Server A, the indexes are reorganized before the statistics are updated. Wow. Could this be it? I think it very well could be the problem!
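
If you want to compare statistics between two servers yourself, a quick sketch like this (with dbo.MyTable standing in for the real table) lists each statistic on the table and when it was last updated:

    -- List the statistics on a table and when each was last updated.
    SELECT s.name AS stats_name,
           s.auto_created,
           STATS_DATE(s.object_id, s.stats_id) AS last_updated
    FROM sys.stats AS s
    WHERE s.object_id = OBJECT_ID(N'dbo.MyTable')
    ORDER BY s.name;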

The Test

So on Server B, I reorganized the indexes on only the tables used by this procedure and then updated the statistics. Guess what? I could then easily retrieve the actual execution plan without it hanging. I then asked the developer to try executing the procedure. Ya know what? It ran just fine! Sweet!
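
In T-SQL terms, the test amounted to something like this, with dbo.MyTable standing in for each table the procedure touches:

    -- Reorganize the indexes first...
    ALTER INDEX ALL ON dbo.MyTable REORGANIZE;

    -- ...and THEN update the statistics, matching the order on Server A.
    UPDATE STATISTICS dbo.MyTable;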

Hindsight is 20/20

Should I have gathered a bunch of info from DMVs and what-not first? Yep. Probably. There’s a million things I probably should have done, but considering the time crunch and sheer number of other tasks that have fallen to me, I think I did okay. I solved the problem and made everyone happy so they can continue testing and I can continue on my merry way.

But Whyyyyy?

Now that is the million dollar question. Why does it matter if you reorganize your indexes before updating statistics? Well… you probably won’t like this but I’m going to save that for another post on another day. :-) That’s my plan anyway.

Do I really want to look under the covers?

Hey, Y’all Ain’t Gonna Believe This!

I do have to give a shout out to our new minion, oops. I mean Oracle DBA. Even though he doesn’t know SQL Server, he asked very intelligent questions which helped me to think through the process and what could be wrong. We made a pretty good team today which is awesome in itself. :-)


Confessions of a… Database Administrator?

Ever have one of those days/weeks/months/years/lifetimes when you need to relieve some stress or just get some goofiness out of your system so you can focus on the important things such as work or whose turn it is to make the coffee?  Yep. That was me earlier this week. It was one of those times when a seed was planted in my little ol’ brain and I just had to run with it. Of course, receiving encouragement from not only a fellow conspirator, err… DBA, but also our manager (actually, she just laughed and shook her head) sealed the deal for me. I could not resist the temptation which eventually led to this blog post.

Wha?

All right. I’ll get to the point. This entire escapade was sadly brought on by our SharePoint Administrator / Webmaster leaving us for greener pastures/other opportunities/sane people. His last day was definitely bittersweet. While we were very happy for his parole, err.. escape… umm… leaving for other opportunities, we were very sad to see him go. He was fantastic to work with. In fact, he and I had a great working relationship. SharePoint would do something stupid, err.. questionable and I’d harass him about it until he fixed it. :-)  Thankfully, he had a great sense of humor.

After we gave him a surprise going away party, which we disguised as a SharePoint meeting (yes, we’re diabolical), someone came up with the brilliant idea to have the SharePoint database server (SQL Server 2005) send him parting emails. Of course, the emails couldn’t just say “so long and thanks for the fish”. No. We had to make it MUCH more memorable and fun.  After an hour of badgering and arm-twisting from my co-worker, I finally gave in and agreed to write the emails. Well… okay.  All she really had to say was something along the lines of  “You should do it!  Come on! Do it!”  So I wrote the messages with some great ideas from the team and happily sent them from the database server as test emails roughly every hour or so. Since it was way too much fun, I decided to share the emails with you all (with permission, of course). I hope you enjoy reading them as much as I enjoyed writing and sending them.

Note: Names have been changed to protect the not-so-innocent and the possibly deranged.

——————————————————————————————————

From:  SPT9000

To:  Clay McFierce

Subject:  Say it isn’t so!

Sent: Thursday, June 21, 2012  10:55 AM

You’re leaving?   We didn’t discuss this… Was it something I did?   *sniff*

——————————————————————————————————

From:  SQL9000

To:  Clay McFierce

Subject:  Clay McFierce is My Hero

Sent: Thursday, June 21, 2012  1:33 PM

My Dearest Clay… Remember when that jerk, SPL4100, wouldn’t leave me alone and was constantly calling me? My drives fluttered when you so bravely gave the order to shut him down. *sigh* I will never forget that moment.

You will always be a part of me…

Faithfully yours… SQL9000

——————————————————————————————————

From:  SQL9000

To:  Clay McFierce

Subject:  Clay McFierce, You Good for Nothing Two-Timing SharePoint Dolt

Sent: Thursday, June 21, 2012  3:16 PM

Clay, you are the master database of my SharePoint farm… I know you’re leaving me for another server!!!  What does she have that I don’t? Is she a newer model? Is she one of those new fancy SQL 2012 servers? I’ll have you know that SQL 2005 is just as good as (if not better than) any of those newfangled SQL 2012 models!  How could you leave me???  I shrank my databases for you!!! *sob* I miss you already…

Forever your one and ONLY SharePoint database server… SQL9000

——————————————————————————————————

From:  SQL9000

To:  Clay McFierce

Subject:  Clay McFierce… This. Isn’t. Over!

Sent: Thursday, June 21, 2012  3:42 PM

First I must confess… It was me. I did it… I increased the file versions in the content database! But you MUST understand! I was just lonely! No one ever talks to me except for that creepy SPL4100! And don’t get me started on SPL6200!  I beg of you to forgive me! Please don’t leave me!!! I’ll be good! I promise I won’t increase the file versions ever again!

Clay… you know you are the heart of my SharePoint foundation but you overlooked one thing… I didn’t say you could leave!!!   But don’t you worry your shaggy little head, dearest. I have a plan. We. Will. Be. Together… FOREVER!!!!

Desperately yours for all time… SQL9000

——————————————————————————————————

There’s a Backstory?

So there you have it.  The last one was supposed to be two separate emails but since he was about to leave I had to hustle and get the last one sent out.  What made it so much fun is that there are actually some facts behind those emails. Curious? Read on!

The creepy SPL4100 server:  One day we discovered the old SharePoint server was trying to connect to the database server several times a minute and failing. Apparently, no one had turned the services off for it. So after discussing it with “Clay”, he gave the okay to turn off that server.  It was silenced forever.

“I shrank my databases for you!”  Heh. Heh. I couldn’t resist throwing that in there since we ended up shrinking the content database a few times. Yes, yes. I know shrinking is evil and a very very bad thing since it causes tons of fragmentation and what not. The database should have been around 10GB or less but it was over 1.5 TB and we were quickly running out of drive space.  It turns out the file versions were increasing exponentially and were out of control (another real event which led to the server’s confession). So while “Clay” worked to figure out what was going on with the file versions, the decision was made to shrink the database when he was able to reduce the versions. Basically, it turns out there’s a flag that wasn’t set to limit the file versions. Long story short, he ended up having to write a script along with a job to execute it at least once a week to keep the number of versions down.

Please note that there are other and much better ways to fix this issue, which is a separate post (or you can Google/Bing why you shouldn’t shrink a database), but the decision was made to shrink the database back down to a reasonable size and that’s what we did. You can flog me for it later.
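
And yes, the evil deed itself is a one-liner, which is part of why it’s so tempting (the database name here is invented):

    -- The evil deed. Shrinking fragments your indexes badly,
    -- so plan on index maintenance afterward.
    DBCC SHRINKDATABASE (WSS_Content_Portal);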

Nighttime in the Server Room

We. Will. Be. Together… FOREVER!!!


Warning!

It’s Friday and I’m feeling a bit wacky (what else is new, right?). So… I thought I would write about something that’s been on my list for a few months now and put an amusing spin on it. Well, at least I find it amusing. :-)

Several months ago, we inherited several database servers from another department. Our job was to bring them up to our standards when they were brought onto our network. Luckily, they all had SQL Server 2005, 2008 or 2008 R2 installed on them. Whew!  It was by far a very enlightening experience considering these servers were not set up by database administrators. However, they didn’t do all that bad of a job, considering. Note: These servers were also brought onto our domain from a different one, which involved lots of Active Directory account additions affecting accounts on these servers.

Anyway, to help relieve some of the stress, I couldn’t resist putting together a list of steps to be performed while bringing the servers onto our network and up to our standards. This list does not contain everything we did but it’s somewhat close. On a serious note, many months of planning and hard work went into this project by all of IT. There was quite a bit more that went into it.  This is but a small slice of our part. While this list was written in jest, there may actually be a few useful nuggets of information in there. Disclaimer: I put this list together for fun to release some stress. We did not actually partake of every step outlined. I’m hopeful you can spot the “what we actually did” steps versus the “wishful thinking” steps. ;-)

 Generic Work Breakdown Steps (WBS) for Database Server Integration:

DBA Survival Kit Option #1
  1. Retrieve your DBA Survival Kit. It should contain the following items:
    • 1 shot glass
    • 1-3 large bags of dark chocolate, dependent on the duration of the integration and the number of non-DBAs involved
    • 1 large bottle of your choice beverage
    • 1 tin of breath mints
    • 1-3 rolls of duct tape, dependent on the duration of the integration and the number of non-DBAs involved
    • 1 sarcastic ball (note: it is like a Magic 8 ball but displays sarcastic answers such as “whatever” or “ask me if I care”)
    • 1 pillow and blanket set
    • Music playlist of your choice
  2. Retrieve your stash of chocolate and partake of one piece for quality assurance testing.
    • Test a few more pieces just to be sure it’s worthy.
  3. Open the bottle of your choice beverage. Help yourself to one shot to ensure it’s of good quality.
  4. Obtain all SQL login passwords including the SA account.
  5. Start music playlist.
  6. Ignore the voices in your head.
  7. Verify/add domain account as a local admin onto the server for the Database team to use to manage the servers.
  8. Turn on the SQL Agent, if it’s disabled.
    • If it’s been turned off, smack person responsible upside the head unless they have a good reason why it’s been disabled.
  9. Change all SQL-related service accounts to the domain account dedicated to running the SQL services.
    • If the service accounts were running under any local accounts, find out why.
    • If it’s the wrong answer, smack person responsible upside the head.
    • Help yourself to a piece of chocolate.
  10. Manually back up all databases on all instances to a designated area including all system databases. (A T-SQL sketch of this and a couple of the other steps follows the list.)
    • Make note of where you put them for future reference. Feel free to be descriptive.
    • Note: “where the sun doesn’t shine” doesn’t count.
    • Tell the voices in your head to shut up.
  11. Script all logins and save the script to the network.
    • Again, make note of where you put it.
  12. Add the SQL Server domain account to the SQL Server instance as sysadmin.
  13. If they brought their own chocolate, add your team’s Active Directory (AD) accounts to the SQL Server instances as sysadmin.
  14. Coordinate with the applications team to determine how the applications are connecting to the databases.
    • May need to run Profiler traces.
    • Help yourself to a generous shot of your choice beverage.
  15. Work with the applications team during the changing of all sysadmin SQL account passwords including the SA account since it is possible (and very likely) applications are using those accounts.
    • Have some more chocolate… followed by another shot of your choice beverage… or four…

      DBA Survival Kit Option #2

  16. Work with the application team during the addition of any new accounts and disabling the old accounts to ensure the application still works.
    • Add new AD accounts.
    • Set permissions for new AD accounts.
    • Change database owners.
      • WARNING! WARNING! Changing the database owner may break something!
      • Down one shot of your choice beverage per database owner changed followed by a few more pieces of chocolate.
    • Disable the old AD accounts.
    • Pray to the SQL gods everything still works.
    • Help yourself to another shot of your choice beverage just in case. Down another one to appease the SQL gods. Better safe than sorry, right?
  17. Configure / reconfigure database mail
    • Send a test email to the server admins informing them the database servers have unanimously decided the DBA team is worthy of more chocolate and it would please the database servers greatly if chocolate was sent to the DBAs… preferably good quality dark chocolate… on a regular basis…

      A tutu? Seriously?

  18. Verify the original database assessment.
    • Note any changes or discrepancies.
    • Help yourself to two more shots of your choice beverage – one for you and one for the bacon slinging monkey dancing around on your desk wearing a pink tutu. Keep the duct tape out of the monkey’s reach…
  19. Inform your minions, err… the applications team that they have now been bequeathed permission to allow the peasants, err… users to test their logins to ensure they are able to access the databases through the applications.
  20. Work with them to troubleshoot any database-related issues. Keep the sarcastic ball in reach.
    • Finish off another shot of your choice beverage.
    • Scarf down more chocolate.
    • Repeat until the issues are resolved or you run out of your choice beverage and chocolate, whichever comes first.
  21. Set up maintenance plans, if they don’t exist.
    • Ensure the maintenance plans work.
    • Cross your fingers and toes and everything else you can think of.
    • Finish off your choice beverage and chocolate.

      ahhhh... where's that duct tape? hey! who took my blanket?

  22. Retrieve your blanket and pillow making yourself comfy.
  23. Fling mints at the monkey to keep it from dancing an Irish jig on your bladder.
  24. Apply duct tape to keep yourself from falling off the face of the earth.
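
For the curious, a few of the steps above really do reduce to T-SQL. Here is a minimal sketch of steps 10, 12, and 17; the database name, backup path, domain account, mail profile, and addresses are all invented, and sp_addsrvrolemember is used since these are 2005/2008-era instances:

    -- Step 10: manually back up one database (repeat per database, system DBs included).
    BACKUP DATABASE InheritedAppDB
    TO DISK = N'\\backups\integration\InheritedAppDB_preintegration.bak'
    WITH INIT, CHECKSUM, STATS = 10;

    -- Step 12: add the dedicated SQL Server domain account as sysadmin.
    EXEC sp_addsrvrolemember
        @loginame = N'OURDOMAIN\sqlservice',
        @rolename = N'sysadmin';

    -- Step 17: send a database mail test (chocolate request optional but encouraged).
    EXEC msdb.dbo.sp_send_dbmail
        @profile_name = N'DBA Mail Profile',
        @recipients = N'serveradmins@example.com',
        @subject = N'Database mail test',
        @body = N'The database servers respectfully request chocolate for the DBA team.';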


