Confessions of a… Database Administrator?

Ever have one of those days/weeks/months/years/lifetimes when you need to relieve some stress or just get some goofiness out of your system so you can focus on the important things such as work or whose turn it is to make the coffee?  Yep. That was me earlier this week. It was one of those times when a seed was planted in my little ol’ brain and I just had to run with it. Of course, receiving encouragement from not only a fellow conspirator, err… DBA, but also our manager (actually, she just laughed and shook her head) sealed the deal for me. I could not resist the temptation, which eventually led to this blog post.


All right. I’ll get to the point. This entire escapade was sadly brought on by our SharePoint Administrator / Webmaster leaving us for greener pastures/other opportunities/sane people. His last day was definitely bittersweet. While we were very happy for his parole, err.. escape… umm… leaving for other opportunities, we were very sad to see him go. He was fantastic to work with. In fact, he and I had a great working relationship. SharePoint would do something stupid, err.. questionable and I’d harass him about it until he fixed it. :-)  Thankfully, he had a great sense of humor.

After we gave him a surprise going away party, which we disguised as a SharePoint meeting (yes, we’re diabolical), someone came up with the brilliant idea to have the SharePoint database server (SQL Server 2005) send him parting emails. Of course, the emails couldn’t just say “so long and thanks for all the fish”. No. We had to make it MUCH more memorable and fun.  After an hour of badgering and arm-twisting from my co-worker, I finally gave in and agreed to write the emails. Well… okay.  All she really had to say was something along the lines of  “You should do it!  Come on! Do it!”  So I wrote the messages with some great ideas from the team and happily sent them from the database server as test emails roughly every hour or so. Since it was way too much fun, I decided to share the emails with you all (with permission, of course). I hope you enjoy reading them as much as I enjoyed writing and sending them.

Note: Names have been changed to protect the not-so-innocent and the possibly deranged.
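For the curious, the messages went out through plain old Database Mail. A minimal sketch of how one of them might have been sent (the profile name and address below are placeholders, not our actual configuration):

```sql
-- Hedged sketch: sending a parting email via Database Mail (SQL Server 2005+).
-- 'DBA_Mail_Profile' and the recipient address are hypothetical.
EXEC msdb.dbo.sp_send_dbmail
    @profile_name = 'DBA_Mail_Profile',
    @recipients   = 'clay.mcfierce@example.com',
    @subject      = 'Say it isn''t so!',
    @body         = 'You''re leaving? We didn''t discuss this... *sniff*';
```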


From:  SPT9000

To:  Clay McFierce

Subject:  Say it isn’t so!

Sent: Thursday, June 21, 2012  10:55 AM

You’re leaving?   We didn’t discuss this… Was it something I did?   *sniff*


From:  SQL9000

To:  Clay McFierce

Subject:  Clay McFierce is My Hero

Sent: Thursday, June 21, 2012  1:33 PM

My Dearest Clay… Remember when that jerk, SPL4100, wouldn’t leave me alone and was constantly calling me? My drives fluttered when you so bravely gave the order to shut him down. *sigh* I will never forget that moment.

You will always be a part of me…

Faithfully yours… SQL9000


From:  SQL9000

To:  Clay McFierce

Subject:  Clay McFierce, You Good for Nothing Two-Timing SharePoint Dolt

Sent: Thursday, June 21, 2012  3:16 PM

Clay, you are the master database of my SharePoint farm… I know you’re leaving me for another server!!!  What does she have that I don’t? Is she a newer model? Is she one of those new fancy SQL 2012 servers? I’ll have you know that SQL 2005 is just as good as (if not better than) any of those newfangled SQL 2012 models!  How could you leave me???  I shrank my databases for you!!! *sob* I miss you already…

Forever your one and ONLY SharePoint database server… SQL9000


From:  SQL9000

To:  Clay McFierce

Subject:  Clay McFierce… This. Isn’t. Over!

Sent: Thursday, June 21, 2012  3:42 PM

First I must confess… It was me. I did it… I increased the file versions in the content database! But you MUST understand! I was just lonely! No one ever talks to me except for that creepy SPL4100! And don’t get me started on SPL6200!  I beg of you to forgive me! Please don’t leave me!!! I’ll be good! I promise I won’t increase the file versions ever again!

Clay… you know you are the heart of my SharePoint foundation but you overlooked one thing… I didn’t say you could leave!!!   But don’t you worry your shaggy little head, dearest. I have a plan. We. Will. Be. Together… FOREVER!!!!

Desperately yours for all time… SQL9000


There’s a Backstory?

So there you have it.  The last one was supposed to be two separate emails, but since he was about to leave I had to hustle and get the last one sent out.  What made it so much fun is that there are actually some facts behind those emails. Curious? Read on!

The creepy SPL4100 server:  One day we discovered the old SharePoint server was trying to connect to the database server several times a minute and failing. Apparently, no one had turned the services off for it. So after discussing it with “Clay”, he gave the okay to turn off that server.  It was silenced forever.

“I shrank my databases for you!”  Heh. Heh. I couldn’t resist throwing that in there since we ended up shrinking the content database a few times. Yes, yes. I know shrinking is evil and a very very bad thing since it causes tons of fragmentation and whatnot. The database should have been around 10GB or less but it was over 1.5 TB and we were quickly running out of drive space.  It turns out the file versions were increasing exponentially and were out of control (another real event, which led to the server’s confession). So while “Clay” worked to figure out what was going on with the file versions, the decision was made to shrink the database once he was able to reduce the versions. Basically, it turns out there was a flag that wasn’t set to limit the file versions. Long story short, he ended up having to write a script, along with a job to execute it at least once a week, to keep the number of versions down.

Please note that there are other and much better ways to fix this issue, which deserves a separate post (or you can Google/Bing why you shouldn’t shrink a database), but the decision was made to shrink the database back down to a reasonable size and that’s what we did. You can flog me for it later.
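For anyone wondering what the dreaded shrink looked like, here’s a minimal sketch (the database and file names are placeholders, not our real ones):

```sql
-- Hedged sketch only; 'WSS_Content' and its logical data file name are placeholders.
USE WSS_Content;
-- Shrink the data file down to roughly 10 GB (the target size is in MB).
DBCC SHRINKFILE (N'WSS_Content_Data', 10240);
-- Shrinking fragments your indexes heavily, so rebuild/reorganize them afterwards.
```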

Nighttime in the Server Room

We. Will. Be. Together… FOREVER!!!

T-SQL Tuesday #30 – Ethics

I decided to get my SQL self in gear and finally participate in T-SQL Tuesday. Yep. It’s my first one. This time Chris Shaw (Blog | Twitter) is hosting it, and the topic is ethics. For me that means knowing what is right and wrong and conducting yourself in a manner befitting your position.

Considering ethics is quite a broad topic, I figured I’d narrow it down to my very own experiences involving ethics. Having worked in IT for around 14 years, one would think I’d have run into my fair share of ethical situations. As I thought about it over the last week, I was surprised I really couldn’t think of all that many. Or maybe I’ve had more, but they don’t stick out for whatever reason. There was this one time at SQL boot camp…  The majority of what came to mind occurred before I became a DBA and transpired during my time as a developer, believe it or not. Some of my experiences involve what other people did that was most likely unethical and/or not all that moral. Personally, I do not believe I have done anything unethical… that I know of… there’s still time for my evil plan….

You Want Me To Do What?

The first situation that comes to mind occurred at a prior job before I became a DBA. I was asked to develop a simple application that would ultimately show employee rankings to be used during layoffs. Yeah. Talk about a sensitive issue. I was explicitly told to tell no one what I was working on. And I didn’t. I’ll admit there were times when it was a bit tempting since I knew quite a few people on the list. I am human but I did have a specific job to do and, besides, I wouldn’t want to be the person to have to potentially deliver the bad news to someone. Of course, I couldn’t help wondering where I fell on the list. At least I knew where I ranked. It would have been really funny and sad if it turned out my ranking put me in a position to be laid off. Can you imagine? I don’t know about anyone else, but it’s hard to imagine what one would do in specific situations until one is there and experiences it first hand. If that had happened to me, I’d probably just do my job as asked and let whatever happens happen.

Are You Serious?

Speaking of lay offs, have you ever been told you’re getting laid off yet they let you work for the next two weeks? Yep. That happened to me at a different job. I will be the first to admit I was not happy and I did my fair share of grumbling and complaining. Considering I worked on the financial system, I could have done considerable damage had I been of lesser moral fiber. But no. I did not. I did my job as normal for the next two weeks. They took a huge risk letting us (I wasn’t the only one) work for the next two weeks. I am guessing they trusted us to not cause havoc. As far as I know, no one gave them reason to regret that decision.

Seriously? They Did What?

The next one that comes to mind again occurred at a prior job and strikes me as kind of funny, in a way. Probably because it blew my mind a bit, and I never in my wildest dreams thought I’d be asked to do something like this. I don’t know where it fits in this discussion except for under the umbrella of “hush and blush”. Meaning, it’s another don’t tell anyone about this and if you’re not comfortable with it, it’s not a problem and we’ll ask someone else to do it. Oh and it involves morally objectionable content; hence, the “blush” part. I know you’re just dying to know what I’m talking about.

One day management approached me with a task to search a database for a given set of words written on a piece of paper. These words could not be spoken out loud and, to be quite honest, some made me blush. Yeah. I think you’ve got the idea. Apparently, some people had files of an extremely questionable nature on their work computers. This database contained a list of computers with file names on them that needed to be searched. After thinking about it for a minute or so, I agreed to do the search. Someone had to do it and they trusted me enough to keep it quiet. I’m glad they also asked me if I were comfortable with doing it in the first place. It was an interesting experience, to say the least. I may have even learned a new word or two that day. In the end, it was just another part of the job. I was just surprised some people apparently didn’t know better than to keep that kind of stuff on their work computers.

Your Mission…

Here’s a fun one. Well, not really. Have you ever been volunteered to participate in a super secret task with a group of co-workers in which you, once again, were sworn to secrecy for good reason? Yeah. That was me at a prior job. Anyway, our mission was to confiscate computers from people in a specific department for a very good reason that I probably shouldn’t really go into detail on.  Let’s just say it involved potential improper use of money.

Have you ever had to go up to some stranger and tell them “Hi there. I’m from IT and I’m here to take your computer. Sorry. I can’t really say why. I’ve just been told to take your computer with me.”  Granted, we said something much nicer than that but you get the idea. Surprisingly, most people I talked to that day were really nice and actually took it in stride. I’m not sure what happened afterwards but it was still uncomfortable and a bit awkward for me considering what I knew. If it were my computer being taken, I sure would like to know why. So it was difficult to not say something. However, I didn’t want to get myself in trouble since I wasn’t sure what the ramifications would be, and I didn’t really want to find out the hard way.

The Moral of the Story is…

There you have it. Some of my most memorable experiences involving ethics and possibly even morality. To me, both can have some gray areas. In general, I believe most (apparently not all) people know right from wrong. However, it may not hurt for companies to have an explicit definition of what is considered acceptable and unacceptable for their employees. I don’t recall if I’ve ever had to sign an ethics agreement except in regards to HIPAA (Health Insurance Portability and Accountability Act). Still, it’s probably a good idea. What do you think?

Um, We Have a Problem…

This last weekend was pretty rough for my entire team. One of our most critical production systems took a dive on Friday morning. Meaning, the database went down unexpectedly and wouldn’t come back up. When I got the call Friday night around 8 pm that we would be working in shifts and I was needed at work that night, I knew it was bad, very bad. This was the first time in three years (that I could remember) that I had to go into work after hours for a production issue. That’s actually pretty good, in my opinion, considering I know other DBAs end up doing quite a bit of after hours support for their systems. I don’t like to speak for others but it seemed pretty rough on all four of us. I don’t think anyone got much sleep the entire weekend; however, we managed to get through it and the system was back up and running by Monday afternoon. I really am lucky to be part of such a great team. My co-workers put in quite a few long hours starting on Wednesday, which is just amazing to me. I wasn’t involved until Friday night and I was exhausted after only three nights. I can only imagine how they’re feeling.

Should I?

Night #3... Observations...

To be honest, I’m not sure I should even be writing a blog post about this issue for various reasons. One reason being that my role was that of minimal support. This is an Oracle system, which is new for us, and I know very little about Oracle administration. So my main role was to be a second set of eyes for my manager who worked the night shift with me. I’m very thankful she was there with me. It also really helped that she has prior Oracle experience and some training. I really didn’t do much except double-check what my manager was doing, answer phone calls from Oracle support, and type in whatever commands the support people asked me to. Hmm… That may explain the odd voices that told me to do strange things when my manager stepped away. Yes, I did my best to take note of what they were asking me to do, which was mostly querying things… thankfully.

Secondly, we worked in shifts with me being on the night shift. Add to that my limited knowledge of Oracle, and it was difficult for me to keep track of everything that was going on the entire time except for knowing we were having quite a few issues with the system. So I don’t have a lot of technical details that I’m sure some people would love to hear about. Sorry about that.

But Why?

So why am I writing this? I thought it would be good to document what we went through, at least in general, in case anyone else experiences the same or similar issues. I also thought it would be a somewhat decent way to share what I learned. Granted, it’s not much but it’s something. Also, I’m not placing blame anywhere or pointing fingers. Every system experiences issues (at least I would think so) at some point. This is just one of those times.


Since I’m still pretty tired, hopefully what I write makes at least some sense. I have limited knowledge of Oracle and the everyday workings of the system, so please keep that in mind. Right now I’m mainly supporting SQL Server but am slowly learning more about Oracle. If I get something wrong, please let me know. This blog post is from my point of view so it’s possible I got something wrong somewhere. If I did, I apologize and will fix it as quickly as I can.

So What in Server Name Happened?

Night #4... Midnight Ramblings

First, I’ll state that this occurred on an Exadata machine with Oracle RAC (Real Application Cluster). It’s been in production since December and we’re running Oracle 11gR2.

From what I understand, the whole issue seems to have started on Wednesday when users were reporting inconsistent query results. They would run a query and get back a certain number of results. They would run the exact same query again and get 0 records back. This would happen repeatedly. One of my co-workers, who knows Oracle pretty well, researched and worked on it for quite some time and contacted Oracle support about it. I believe the theory was that it had something to do with the optimizer.

At some point on Thursday, ASM (Automatic Storage Management) went down but then it came back up. It sounds like it had something to do with a flash disk error. An engineer was sent out, and I understand the issue was fixed.  Note:  ASM is basically a file storage system.

Then for some reason, the database terminated unexpectedly with an ORA-600 error Friday morning and would not open up afterwards. Note:  I was told that ORA-600 errors are generic errors that don’t usually tell you much. Great, huh?

At some point, Oracle determined that a duplicate or bad record was inserted into a system table called props$. As of this moment, no one knows how it got there or when. Since we had no idea this table even existed, we were not auditing it. However, I believe we are auditing it now. Apparently, having this extra record caused the database to not open back up when it terminated unexpectedly on Friday. Note: I believe props$ is basically a database properties table. As my manager explained to me and if I understood her, it’s like having your master database in SQL Server become corrupted. However, getting it back up and running is more complicated in Oracle than it is in SQL Server.
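If you’re curious what a check for that rogue record might look like, here’s a hedged sketch. This is just how I’d go looking for duplicates in that table; I don’t know exactly what Oracle support ran:

```sql
-- Look for duplicate property names in sys.props$ (run as SYSDBA).
-- There should normally be exactly one row per property name.
SELECT name, COUNT(*) AS copies
FROM   sys.props$
GROUP  BY name
HAVING COUNT(*) > 1;
```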

The Plan

Night #5... A Plan is Formed...

So the plan was two-fold. One part was to find a good database backup that did not have that extra record in it so we could restore it to production, if necessary. The second part was to determine how this happened and to see if someone could open up the production database without having to resort to restoring the backup.  Note: we were doing full backups nightly.

In addition to all of this, the file system on the Linux box containing our backups wouldn’t mount for some reason. So we had to copy a database backup file to a Windows media server, which took about 2 hours. At least that worked and we could see the backup files from the Exadata machine.

Anyway, the database from the first restore attempt would not open. So they tried another one. To keep a very long story at least somewhat short, they were successful in restoring a backup to our test Exadata machine and recovering data from it and the archived logs, in addition to recovering data from the online redo log (kind of like transaction logs, as I understand it) of the corrupted database. That means we only lost 5 minutes’ worth of data. I think that is just plain awesome considering everything that happened over the weekend. And so far no one knows how this extra record ended up in that table. Hopefully it won’t happen again. I’m crossing my fingers, toes, and eyes.😉
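At its very simplest, a restore-and-recover like that looks something like this in RMAN. This is a bare-bones sketch; the real process was far longer and much more manual than four lines:

```
RUN {
  RESTORE DATABASE;
  RECOVER DATABASE;               # applies archived and online redo as available
  ALTER DATABASE OPEN RESETLOGS;  # open with a fresh incarnation of the redo logs
}
```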

A Day to Day Pictorial

Please note that I don’t mean to oversimplify the process. It was a very long and manual process to restore and recover the database. Everyone worked very hard on getting this to work. It doesn’t seem like a very straightforward process to me, but that could just be me. Also when I refer to “they”, I’m referring to my team in conjunction with Oracle support. Everyone worked well together to get it figured out.

Overall, it sounds like we also have a few bugs and need to do some patching very soon. The support team we worked with seemed to be very professional and helpful. There were quite a few bumps along the way but we survived and the issue was fixed. That’s the important thing to remember.

Hey! I Learned Something!

On the bright side, I actually learned some useful stuff over the weekend!  I now know:

  • how to use PuTTY (the terminal client, not the oh-so-cool kids’ toy or paste-like substance… it’s probably a good thing I didn’t have any of the gooey kind in my reach this weekend)
  • leaving sticky notes in someone else’s cube late at night is a great stress reliever and a great way to keep one’s sense of humor intact (note: not all of the sticky notes were written by me; some were written by my co-workers)
  • how to start RMAN (recovery manager):  rman target /
  • that management appreciates sticky notes and saw the humor in it (whew!)
  • what RMAN scripts look like
  • that you can’t have leading spaces in RMAN scripts or bad things happen (mostly just errors)
  • how to set the Oracle environment in Linux: . oraenv
  • where the pfile (parameter file) is and how to edit it along with the init file (scary thought)
  • how to look around the ASM file system:  asmcmd (command line utility)
  • that ASM contains an “M”, not two “S”s (gotta love typos)
  • how to start SQL*Plus to run SQL commands: sqlplus / as sysdba
  • to be careful when Google’ing props$ (psst…don’t put a space before the $… seriously, nothing bad happens… just an attempt at wacky late night humor)
  • that not only am I part of a fantastic team who put in tons of hours on this issue, but that we also have a great management staff who were very supportive and helpful during this time.

So that was my weekend. It was rough but we survived and learned some things in the process.  Huh… I can’t believe I wrote this on my lunch hour. Usually it takes me longer than that to write a post.

Hey! Nice RAC!

What? Another post in less than a week? Yep! Don’t faint from shock!😉  Besides, I’m overdue for a mostly serious post.  Oh and as for the title of this little post? Trust me. It could have been much, much worse.😉  

Since we’ve had Oracle for a few months now and have one production Oracle system, I thought it’s about time to write a little of what I’ve learned so far. Granted, it’s probably enough to fill a thimble since I’m mainly still supporting SQL Server.  It seems a bit funny to me, in a way, but I’m learning about Oracle pretty much how I learned SQL Server – from experienced co-workers, reading, awesome people on Twitter (thank you!), more reading, and good old-fashioned playing around.

In case anyone is wondering, we are now owners of Oracle 11g R2 on Exadata Database Machines. So what’s an Exadata? It’s basically a super duper uber powerful storage server optimized specifically for Oracle Databases to run on. It appears a lot of processing is offloaded to the hardware. I’m not going to regurgitate all the nitty-gritty specs but you can read all about them here.

A Cluster O’ Fun

It's all fun and games until someone loses a node

We also have an Oracle cluster running on said Exadata box, and I believe there is a plan to get a data warehouse going on one as well. That sounds like it could be fun actually. I had also heard something about us possibly supporting SSAS (SQL Server Analysis Services) for a department. No, that won’t get confusing at all!

The Oracle cluster is actually referred to as a RAC, which stands for Real Application Cluster. It’s composed of something called Oracle Clusterware and Oracle ASM (Automatic Storage Management). Together they comprise the Oracle Grid Infrastructure. As I understand it, the Clusterware is what makes the cluster. No, really? What was your first clue?  That basically means you’ve got a database on shared storage and multiple servers can access it at the same time. If one node (host server) goes down, the other one(s) can still access it.  The ASM part is basically the file system and volume manager. It includes striping (automatic), mirroring (optional), rebalancing and so on. It basically manages the files for you so you don’t have to.
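One quick way to see the nodes of a RAC, which I picked up along the way (hedged; I’m still new at this):

```sql
-- Each row in gv$instance is one running instance, i.e., one node of the RAC.
SELECT inst_id, instance_name, host_name, status
FROM   gv$instance;
```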

SQL vs Oracle

So what’s an Oracle cluster like compared to a SQL Server cluster? Sorry, but I really can’t tell you just yet. Yeah, I’m bummed too. When it comes to performance, it’s my understanding that there really isn’t anything out there to compare to an Exadata box. It’s fairly unique. Therefore, one can’t really compare this particular cluster to a SQL Server cluster in terms of performance and what have you. I honestly couldn’t tell you anything about its creation or setup since I wasn’t really all that involved. Hey, someone has to make sure the SQL Servers are still behaving.🙂 Once I get a better grasp on it, I may be able to write something about it as compared to a cluster from a technical aspect but not performance-wise. Time will tell.  However, I would love to hear from anyone who has Exadata and/or RAC experience.🙂

The Verdict?

A cookie by any other name is still a cookie... they just come in different flavors

So what do I think of Oracle so far?  You know how some relationships start off somewhat rocky? Well, this one isn’t any different. However, that’s not necessarily a bad thing. It’s just that I really haven’t had a lot of interaction with it yet, so I can’t say one way or the other. My initial impression is that it is way more involved and complicated to manage than SQL Server. That could just be me, though. Overall, I’m viewing this as a great opportunity to learn something new, which is great since I love to learn new things.  :-)  In my opinion, relational databases are fundamentally similar: once you have the basic concepts down, it’s a matter of figuring out and learning how to administer each one in its own environment, which isn’t always easy. But that’s just my opinion.🙂


It’s Friday and I’m feeling a bit wacky (what else is new, right?). So… I thought I would write about something that’s been on my list for a few months now and put an amusing spin on it. Well, at least I find it amusing.🙂

Several months ago, we inherited several database servers from another department. Our job was to bring them up to our standards when they were brought onto our network. Luckily, they all had SQL Server 2005, 2008 or 2008 R2 installed on them. Whew!  It was by far a very enlightening experience considering these servers were not set up by database administrators. However, they didn’t do all that bad a job, considering. Note: These servers were also brought onto our domain from a different one, which involved lots of Active Directory account additions affecting accounts on these servers.

Anyway, to help relieve some of the stress, I couldn’t resist putting together a list of steps to be performed while bringing the servers onto our network and up to our standards. This list does not contain everything we did but it’s somewhat close. On a serious note, many months of planning and hard work went into this project by all of IT. There was quite a bit more that went into it.  This is but a small slice of our part. While this list was written in jest, there may actually be a few useful nuggets of information in there. Disclaimer: I put this list together for fun to release some stress. We did not actually partake of every step outlined. I’m hopeful you can spot the “what we actually did” steps versus the “wishful thinking” steps. 😉

Generic Work Breakdown Structure (WBS) Steps for Database Server Integration:

DBA Survival Kit Option #1
  1. Retrieve your DBA Survival Kit. It should contain the following items:
    • 1 shot glass
    • 1-3 large bags of dark chocolate, dependent on the duration of the integration and the number of non-DBAs involved
    • 1 large bottle of your choice beverage
    • 1 tin of breath mints
    • 1-3 rolls of duct tape, dependent on the duration of the integration and the number of non-DBAs involved
    • 1 sarcastic ball (note: it is like a Magic 8 ball but displays sarcastic answers such as “whatever” or “ask me if I care”)
    • 1 pillow and blanket set
    • Music playlist of your choice
  2. Retrieve your stash of chocolate and partake of one piece for quality assurance testing.
    • Test a few more pieces just to be sure it’s worthy.
  3. Open the bottle of your choice beverage. Help yourself to one shot to ensure it’s of good quality.
  4. Obtain all SQL login passwords including the SA account.
  5. Start music playlist.
  6. Ignore the voices in your head.
  7. Verify/add domain account as a local admin onto the server for the Database team to use to manage the servers.
  8. Turn on the SQL Agent, if it’s disabled.
    • If it’s been turned off, smack person responsible upside the head unless they have a good reason why it’s been disabled.
  9. Change all SQL-related service accounts to the domain account dedicated to running the SQL services.
    • If the service accounts were running under any local accounts, find out why.
    • If it’s the wrong answer, smack person responsible upside the head.
    • Help yourself to a piece of chocolate.
  10. Manually back up all databases on all instances to a designated area including all system databases.
    • Make note of where you put them for future reference. Feel free to be descriptive.
    • Note: “where the sun doesn’t shine” doesn’t count.
    • Tell the voices in your head to shut up.
  11. Script all logins and save the script to the network.
    • Again, make note of where you put it.
  12. Add the SQL Server domain account to the SQL Server instance as sysadmin.
  13. If they brought their own chocolate, add your team’s Active Directory (AD) accounts to the SQL Server instances as sysadmin.
  14. Coordinate with the applications team to determine how the applications are connecting to the databases.
    • May need to run Profiler traces.
    • Help yourself to a generous shot of your choice beverage.
  15. Work with the applications team during the changing of all sysadmin SQL account passwords including the SA account since it is possible (and very likely) applications are using those accounts.
    • Have some more chocolate… followed by another shot of your choice beverage… or four…

      DBA Survival Kit Option #2

  16. Work with the application team during the addition of any new accounts and disabling the old accounts to ensure the application still works.
    • Add new AD accounts.
    • Set permissions for new AD accounts.
    • Change database owners.
      • WARNING! WARNING! Changing the database owner may break something!
      • Down one shot of your choice beverage per database owner changed followed by a few more pieces of chocolate.
    • Disable the old AD accounts.
    • Pray to the SQL gods everything still works.
    • Help yourself to another shot of your choice beverage just in case. Down another one to appease the SQL gods. Better safe than sorry, right?
  17. Configure / reconfigure database mail
    • Send a test email to the server admins informing them the database servers have unanimously decided the DBA team is worthy of more chocolate and it would please the database servers greatly if chocolate was sent to the DBAs… preferably good quality dark chocolate… on a regular basis…

      A tutu? Seriously?

  18. Verify the original database assessment.
    • Note any changes or discrepancies.
    • Help yourself to two more shots of your choice beverage – one for you and one for the bacon slinging monkey dancing around on your desk wearing a pink tutu. Keep the duct tape out of the monkey’s reach…
  19. Inform your minions, err… the applications team that they have now been bequeathed permission to allow the peasants, err… users to test their logins to ensure they are able to access the databases through the applications.
  20. Work with them to troubleshoot any database-related issues. Keep the sarcastic ball in reach.
    • Finish off another shot of your choice beverage.
    • Scarf down more chocolate.
    • Repeat until the issues are resolved or you run out of your choice beverage and chocolate, whichever comes first.
  21. Set up maintenance plans, if they don’t exist.
    • Ensure the maintenance plans work.
    • Cross your fingers and toes and everything else you can think of.
    • Finish off your choice beverage and chocolate.

      ahhhh... where's that duct tape? hey! who took my blanket?

  22. Retrieve your blanket and pillow, making yourself comfy.
  23. Fling mints at the monkey to keep it from dancing an Irish jig on your bladder.
  24. Apply duct tape to keep yourself from falling off the face of the earth.
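For anyone actually following along with the checklist (monkey and duct tape optional), the database owner and test email steps above can be sketched in T-SQL roughly like this. The database, profile, account, and email names here are made up for illustration — swap in your own:

```sql
-- Check current database owners before changing anything (step 16)
SELECT name, SUSER_SNAME(owner_sid) AS current_owner
FROM sys.databases;

-- Change the owner of one database (SQL Server 2005 and later)
-- WARNING! WARNING! As noted above, this may break something -- test first!
ALTER AUTHORIZATION ON DATABASE::[SharePoint_Config]
    TO [DOMAIN\NewServiceAccount];

-- Send a test email via Database Mail (step 17)
EXEC msdb.dbo.sp_send_dbmail
    @profile_name = 'DBA Mail Profile',      -- your configured mail profile
    @recipients   = 'dba.team@example.com',
    @subject      = 'Database Mail test',
    @body         = 'The database servers formally request more chocolate.';
```

One shot of your choice beverage per statement executed successfully, of course.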

S’More SQL, Please!

Ready for SQL on the Slopes?

If you haven’t already heard, SQL Saturday #104 is coming to Colorado Springs on January 7th, 2012. Woo hoo! Hold onto your beanies (knit hats for those of you wondering)! It’s the first SQL Saturday of the New Year and it’s going to be fantastic! Personally, I’m in awe of the mind-boggling speaker and session lineup we have! Kudos to Jeremy (Web | Twitter) and Chris (Blog | Twitter) for the great selection! It could not have been an easy task given all the great submissions. If you haven’t already, check it out now! Please note that the schedule isn’t written in stone and is subject to change. Last year we had 3 session tracks. With all the impressive submissions we received, we’ve expanded to 5 (yes, 5!) tracks covering Business Intelligence, Database Development, and Administration! Are you in awe yet? Yes? No? Read on!

Interested in Professional Development? If you said “YES!”, then you’re in luck! We’re also including time slots for you to review your resume with a professional resume writer! Sweet! How do you sign up? No worries! It’ll be easy. When you arrive and check in at the registration table, there will be a sign-up sheet for time slots. Please note that it will be on a first-come, first-served basis.

Room to Grow

Sneak Preview... Shhhh!

For those who remember the rooms we used last year, guess what? For SQL Saturday #104, we have claimed a bigger and even better main room that even comes with *gasp* an actual stage! What’s even better news? It’s in a separate area of the facility, so we shouldn’t hear what’s going on outside of the room. That means we shouldn’t hear any mayhem generated by the bumper cars or go karts. Since I really like you guys, I’ll even give you a sneak peek of what it looks like. I snuck a few pictures while we were touring the facility. Yeah, I’m sneaky like that… Keep in mind the configuration / layout of the room may change by then. This is just to give you an idea.

Food for Thought… and for Your Tummy

For SQL Saturday #66, we had a great spread of food (not pizza!), and I bet it’ll be just as good or even better this time.

What makes lunch even better is that we’ll have a few round table discussions including WIT (Women in Technology), Professional Development, and Sponsors. Curious as to who will be at the WIT round table? Can you hazard a few guesses? Yes? No? Joining the WIT round table will be none other than Karen Lopez (Web | Twitter), Meredith Ryan-Smith (Blog | Twitter), myself (scary, I know), and a special guest to be announced soon. And the crowd goes wild! This will be my first time participating in a round table discussion so please be nice and patient with me.🙂

Update: We are very happy to announce that Thomas LaRock (Blog | Twitter) has agreed to participate in the WIT round table! Yay! Also, chances are fairly good I won’t be able to participate. Instead, I’ll probably be running around helping to make sure everything is running smoothly.

Stress Reliever?

Gabe and Doug having way too much fun (is there such a thing?) at Laser Tag during SQL Saturday #66

What about stuff like Laser Tag, Mini Golf, Go Karts and so on, you ask? Have no fear! Since it was a huge success last year, we’re including the fun social interactions once again during breaks. So if you’re up to chasing after fellow SQL peeps during Laser Tag or challenging them to a round of go karts, we’ve got you covered. As for what games we’ll actually be including, you’ll just have to show up to find out!😉

#SQL Ski

I’ve heard that Colorado boasts some of the best skiing in the country, but don’t take my word for it. Check it out for yourself! Join us on Sunday, January 8th, as we board the “SQL Bus” to Monarch Mountain for a fun-filled day of skiing! Not much for skiing? Then just hang out and relax! What could be better than watching your fellow SQL peeps fall on their… err, SSAS… ?😉 Say no more? You’re ready to sign up? Great! Sign up here! Please note that this ski trip is an optional paid outing.

Don’t wait! We need you to register for SQL Saturday so we can get a good idea of how many people to expect and can plan accordingly. Can’t wait to see you there!🙂

Last but most definitely not least, I want to give a huge thanks to our wonderful sponsors. Without them, this wouldn’t be possible. Please take a few minutes of your time to visit their web sites.









Chris Gosnell Photography


Colorado Springs Convention and Visitors Bureau

Gabby Communications


Party! Party! Party!

If you’re in the southern Colorado area, please join us for the Colorado Springs SQL Server User Group’s annual Holiday Party on Wednesday, December 7th! We’ll be meeting in the VIP Bowling area at Mr. Biggs starting at 5:30 pm for a great dinner (can you say ribs? Nom nom nom), Bowling and Laser Tag courtesy of our wonderful friends at redgate! Thank you, redgate! Bring your family! We need you to sign up here (it’s free) so we can get a good idea of how many people to plan for. See you there!

Oh and I probably shouldn’t mention there’s a full bar nearby… 😉

Someone coined the hashtag “#SQLFamily” on Twitter not so long ago. So as part of “Meme Monday”, Thomas LaRock a.k.a. SQL Rockstar (Twitter | Blog) asked people to write about “What #SQLFamily means to me.” I’ve read some great blog posts so far. I don’t know what more I can contribute or say that hasn’t already been said, but I’ll give it a shot.

In some ways I feel being a part of SQL family is like living in a bag of mixed nuts. It’s probably a very poor analogy, and I hope no one takes offense, but I believe we’re all at least a little nuts in some way. Obviously, some people are nuttier than others, which can make for very entertaining and enlightening conversations. It can also explain why we all get along so well, for the most part. Say what? What I mean is that no matter how different we all are (peanuts vs. pecans) or where we come from (from Canada to Slovenia), we have at least one thing in common – a love for SQL Server.

A little squirrelly? Nutty? Why not?

In my opinion (and others’), SQL family is about people who are not just willing to help but who enjoy helping others. Their dedication really shows, from being approachable at events to writing articles and answering questions on Twitter. They may exist, but I don’t know of any other technical community where you can get an entire day of fantastic free training like the SQL Saturday events.  *cough* <insert shameless plug for SQL Saturday #104> Don’t forget to register here! Submit your abstracts now! </insert>😀

As with any large group, you have a variety of nuts… err, people. Some are pretty outgoing and some are fairly shy. You have people with expert-level knowledge, those just starting to learn, and others in between. However, I’ve found that if you make an effort to get to know people regardless of their level of expertise, you can be rewarded with great friendships and ample entertainment. These are people you can turn to when you have an issue you need help with, whether it’s technical or not, and whether it’s at conventions (SQL Rally, PASS, etc.) or online via Twitter, forums, or blogs. We argue. We laugh! We cry. We share. It really is a great thing. *group hug!*

Having said all that, no one is perfect and neither is any family or community. Every group has its share of issues and problems, but what I feel makes the SQL community stand out is its willingness to share knowledge and how much it enjoys doing so. It may sound a bit cheesy, and even though I’m not quite sure where I fit in (maybe as everyone’s nutty little sister), I am proud to be a part of this wonderful family.🙂