Growing, Growing, Gone!

For the last several months, I’ve been blogging when I can about the script I use during my server checks. The last post on this subject was all about processes. This week it’s all about file growth, as in database file growth.

I wrote this script when I first became a DBA, which was several years ago. I’m glad I’ve been going through my script because now I realize it is in serious need of updating. These particular queries definitely need some work. I found myself looking at them and wondering what the *bleep* I was thinking when I wrote them. Then I decided I had to cut myself some slack. They aren’t that bad. The majority of the servers I administered back then were SQL Server 2000, and I learned on the job as I went since we didn’t have much of a training budget. Besides, to err is human, right? Life is all about learning and trying new things. At least that’s what I keep telling myself.

Speaking of learning new things, I learned something very important from my DBA buddy recently. There is no such thing as a blonde moment. It’s actually a chocolate deprivation moment. 😉 It’s my new favorite quote. Aaanannd now back to our regularly scheduled blog post…

At my current place of employment, we have special jobs that run on every SQL Server database server to email us when the database and/or log files have passed a specified threshold in terms of size. I can’t share these with you because I didn’t write them. Hint: they use DBCC SHOWFILESTATS. However, I’ll share with you my old code and what I’m updating it with now that I’ve learned a thing or two.

Why do we care about file growth? If your data and/or log files are growing too frequently, this could cause fragmentation. When we have lots of fragmentation, we tend to have performance issues. When your data pages are heavily fragmented, it could take SQL Server a while to find the data it’s looking for.

To control how much your data and log files grow, you set the “Enable Autogrowth” option for each file. It’s located on the Files page under Database Properties. See the picture below for an example.

[Screenshot: Database Properties, Files page, showing the Enable Autogrowth setting]
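If you’d rather skip the clicking, you can set the same thing through T-SQL with ALTER DATABASE. Here’s a minimal sketch; the database name and logical file name below are made up, so swap in your own:

-- MyDatabase and MyDatabase_Data are made-up names for illustration
alter database MyDatabase
modify file (name = MyDatabase_Data, filegrowth = 256MB, maxsize = 10GB);

A lot of folks like a fixed growth increment like that instead of a percentage, so each growth event stays a predictable size.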

Whether we’ve built the server from the ground up or have inherited it, it’s a good idea to know what the data volume is or will be. In some cases, it’s probably a best guess or some calculations based on specs from the vendor or an estimate from the clients. However, when it comes down to it, observing and making adjustments as you go is probably what ends up happening.

Once you have autogrowth set, you may want to set up something to monitor the growth. If you want to write your own script to run regularly or on demand, there are a few different ways to check file growth through T-SQL.

You can use the DBCC SHOWFILESTATS command to get the total versus used extents and do a simple calculation to see what’s free. If you’re wondering what exactly an extent is, I’ll try to explain without going into too much detail. SQL Server stores data in 8 KB pages, and an extent is basically 8 of those pages physically lined up next to each other, like ducks in a row, for 64 KB total. There ya go. Actually, it can be quite an in-depth topic, but I won’t delve into that here. At least not today.
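If you want to try that calculation yourself, here’s a rough sketch. The temp table columns below match what DBCC SHOWFILESTATS has returned on the versions I’ve used, but double-check the column list on your version before trusting the numbers:

-- run this in the database you want to check
create table #filestats
(fileid int
,[filegroup] int
,totalextents int
,usedextents int
,[name] sysname
,[filename] nvarchar(260));

insert into #filestats
exec ('dbcc showfilestats');

select [name]
,totalextents
,usedextents
,(totalextents - usedextents) * 64 / 1024.0 free_space_MB -- an extent is 8 pages x 8 KB = 64 KB
from #filestats;

drop table #filestats;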

Now here is the query I wrote several years ago to check file growth:

select dbid
,fileid
,size
,maxsize
,(maxsize - size) size_difference
,growth
,name
,filename
from master.sys.sysaltfiles
where maxsize > 0
and dbid = 8;

Normally I run it for all databases but for simplicity today I limited it to just one database. To be completely honest, my old code has a group by and having in it but no aggregate. Not sure why I did that. *shrug* Nobody is perfect. So when this query is run, here is what the results look like:

[Screenshot: results of the sysaltfiles query]

I’ll usually include another condition where the size difference is less than a certain number, like 500. That way I know the file is getting close to its maximum size. The sysaltfiles table is mostly for SQL Server 2000 but will still work in later versions as a compatibility view. However, the catalog view that should be used for SQL Server 2005 and later is sys.master_files. It contains basically the same information. Here’s what a query for it could look like:

select database_id
,file_id
,type
,type_desc
,size
,max_size
,growth
,name
,physical_name
from sys.master_files
where database_id = 8;

And here is what the results look like:

[Screenshot: results of the sys.master_files query]

For both, the size and max_size values are in 8 KB pages. If max_size is 0, no further growth is allowed for that file (and if growth is 0, autogrowth is turned off). If max_size is -1, growth is unrestricted, which means the file will grow and grow until it runs out of disk space, which may not be a good thing.
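And here’s one way to sketch that “getting close to its max size” check I mentioned earlier, this time against sys.master_files. The 500 is just an arbitrary threshold in pages; pick whatever number makes sense for your environment:

select db_name(database_id) database_name
,name
,size
,max_size
,(max_size - size) pages_left_before_cap
from sys.master_files
where max_size > 0
and (max_size - size) < 500;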

Want to see those numbers in MB? No problem! Just divide by 128. Why 128? Simply because there are 128 pages per MB. Now let’s take a look:

select database_id
,file_id
,type
,type_desc
,size / 128 size_in_MB
,max_size / 128 max_size_in_MB
,growth / 128 growth_in_MB
,name
,physical_name
from sys.master_files
where database_id = 8;

[Screenshot: results of the sys.master_files query with sizes converted to MB]

You can also get the same information from sys.database_files. What’s the difference between sys.master_files and sys.database_files? The sys.master_files catalog view will show you file information for all your databases. The sys.database_files catalog view will show you the file information for the database you are currently in.
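So if you’re already sitting in the database you care about, a quick query could look like this (the database name is just for illustration):

use MyDatabase; -- made-up name, point it at whatever database you're checking
go
select name
,type_desc
,size / 128 size_in_MB
,max_size / 128 max_size_in_MB
,growth / 128 growth_in_MB
,physical_name
from sys.database_files;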

You may be wondering “is there a DMV for that”? Why, yes, there is… sort of. You’ll see why I say “sort of” later.  It’s called sys.dm_db_file_space_usage and here is one way to query on it:

select database_id
,db_name(database_id) database_name
,unallocated_extent_page_count
,user_object_reserved_page_count
,internal_object_reserved_page_count
,mixed_extent_page_count
from sys.dm_db_file_space_usage;

And the results:

[Screenshot: results of the sys.dm_db_file_space_usage query for tempdb]

It shows a more in-depth look at file space use in terms of extents, such as the number of unallocated pages in the extents, mixed extents, and so on. So basically, pages that are unallocated are your free pages. Don’t get excited just yet. There’s one BIG drawback. It’s for tempdb only. Yes, you read that right. If you don’t believe me, try it for yourself. It’ll only return data for tempdb. *sigh* Yeah, I know. It’d be pretty cool if you could use it for other databases. Oh well.
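If you want to see that tempdb free space in MB instead of pages, the same divide-by-128 trick from earlier works here, too:

select db_name(database_id) database_name
,sum(unallocated_extent_page_count) / 128 free_space_MB
from sys.dm_db_file_space_usage
group by database_id;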

There is a whole heck of a lot more on this subject. I thought I’d just go over some basics for now. Plus it’ll give me more to write on another time.

As for any shenanigans at work while our boss was out of town for the last week and a half… sorry to disappoint you all, but we actually behaved ourselves and got quite a bit of work done. Yes, we were productive! Yay coffee!

We started the week pretty good. When someone came to us with an issue, we managed to be empathetic towards them. It doesn’t work in production when you said you tested it during integration last week? I bet you feel really sad right now. By the middle of the week, we were mostly sympathetic. What? You need sysadmin rights in production? Bummer for you. Towards the end of the week, we may have ended up a bit umm… pathetic. What do YOU think WE think about what you just did? Seriously, we didn’t really say that to anyone except for each other… which is how my DBA buddy ended up cracking the screen on her iPhone. Let’s just say I’m really good at ducking. Gotcha! I’m kidding. She dropped it. No, really. She did drop it. Work replaced it, though. So all is well.

Speaking of all being well, tomorrow is SQL Saturday #52 in Denver! Woo hoo! I’m really looking forward to it! Granted, I’m not thrilled about the 1 ½ hour drive I have to get there at 0’dark thirty tomorrow morning, but I’ll manage. It’ll be worth it not only for the great stuff I’ll learn about SQL Server but also because I’ll get to meet more fellow SQL geeks, err… peeps! See you all tomorrow! Here’s hoping I don’t offend anyone or get myself blacklisted anywhere! Seriously, I’ll behave myself. Unless I end up over caffeinating myself, then all bets are off! I probably should apologize in advance. 😉

Updated: To view the prior post in this series, please see “I Distinctly Detect a Percolating Process”. Next up is “Take Those Jobs and Check ‘Em”.


