For the last several months, I’ve been blogging when I can about the script I use during my server checks. The last post on this subject was all about processes. This week it’s all about file growth, as in database file growth.
I wrote this script when I first became a DBA, several years ago. I’m glad I’ve been going through my script because now I realize it is in serious need of updating. These particular queries definitely need it. I found myself looking at these queries and wondering what the *bleep* I was thinking when I wrote them. Then I decided I had to cut myself some slack. They aren’t that bad. The majority of the servers I administered back then were SQL Server 2000, and I learned on the job as I went since we didn’t have much of a training budget. Besides, to err is human, right? Life is all about learning and trying new things. At least that’s what I keep telling myself.
Speaking of learning new things, I learned something very important from my DBA buddy recently. There is no such thing as a blonde moment. It’s actually a chocolate deprivation moment.😉 It’s my new favorite quote. Aaaand now back to our regularly scheduled blog post…
At my current place of employment, we have special jobs that run on every SQL Server database server to email us when the database and/or log files have passed a specified threshold in terms of size. I can’t share these with you because I didn’t write them. Hint: they use DBCC SHOWFILESTATS. However, I’ll share with you my old code and what I’m updating it with now that I’ve learned a thing or two.
Why do we care about file growth? If your data and/or log files are growing too frequently, each growth event takes time and can stall activity while the new space is initialized (log files always have to be zero-initialized). Frequent small growths also tend to fragment the files on disk, and for log files they pile up an excessive number of virtual log files. When we have lots of fragmentation, we tend to have performance issues, and it can take SQL Server a while to find the data it’s looking for.
To control how much your data and log files grow, you set the “Enable Autogrowth” option for the database. It’s located under Database Properties. See picture below for an example.
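If you’d rather script it than click through the GUI, the same setting can be changed with ALTER DATABASE. Here’s a sketch; the database and logical file names (SalesDB, SalesDB_Data) are made-up examples, so substitute your own:

```sql
-- set a fixed growth increment and a size cap instead of the defaults
-- (SalesDB and SalesDB_Data are hypothetical names)
alter database SalesDB
modify file (name = SalesDB_Data, filegrowth = 256MB, maxsize = 10GB);
```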
Whether we’ve built the server from the ground up or have inherited it, it’s a good idea to have an idea of what the data volume is or will be. In some cases, it’s probably a best guess or some calculations based on specs from the vendor or an estimate from the clients. However, when it comes down to it, observing and making adjustments as you go is probably what ends up happening.
Once you have it set, you may want to set something up to monitor the growth. If you want to write your own script to run regularly or on demand, there are a few different ways to check file growth through T-SQL.
You can use the DBCC SHOWFILESTATS command to get the total versus used extents and do a simple calculation to see what’s free. If you’re wondering what exactly an extent is, I’ll try to explain without going into too much detail. SQL Server stores data and such in 8 KB pages. An extent is basically 8 of those pages physically lined up next to each other, like ducks in a row, for 64 KB total. There ya go. Actually, it can be quite an in-depth topic but I won’t delve into that here. At least not today.
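If you want to play with DBCC SHOWFILESTATS yourself, one common trick is to dump its output into a temp table and do the free-space math from there. This is just a sketch; the command is undocumented, so the output columns (Fileid, FileGroup, TotalExtents, UsedExtents, Name, FileName) are assumptions based on what it returns on the versions I’ve used:

```sql
-- capture DBCC SHOWFILESTATS and compute free space per data file
-- (undocumented command; the column list may vary by version)
create table #filestats (
    Fileid int,
    [FileGroup] int,
    TotalExtents int,
    UsedExtents int,
    LogicalName sysname,
    FilePath nvarchar(520)
);

insert into #filestats
exec ('dbcc showfilestats');

select LogicalName,
    TotalExtents * 64 / 1024.0 as total_MB,              -- one extent = 64 KB
    (TotalExtents - UsedExtents) * 64 / 1024.0 as free_MB
from #filestats;

drop table #filestats;
```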
Now here is the query I wrote several years ago to check file growth:

select name, size, maxsize, growth
,(maxsize - size) size_difference
from sysaltfiles
where maxsize > 0
and dbid = 8;
Normally I run it for all databases but for simplicity today I limited it to just one database. To be completely honest, my old code has a group by and having in it but no aggregate. Not sure why I did that. *shrug* Nobody is perfect. So when this query is run, here is what the results look like:
I’ll usually include another condition where the max difference is less than a certain number like 500. That way I know it’s getting close to growing again. The sysaltfiles table is mostly for SQL Server 2000 but will still work in later versions. However, the catalog view that should be used for SQL Server 2005 and later is sys.master_files. It contains basically the same information. Here’s what a query for it could look like:
select name, size, max_size, growth
from sys.master_files
where database_id = 8;
And here is what the results look like:
For both, the size and max_size values are in 8 KB pages. Now if max_size is 0, that means the file isn’t allowed to grow at all. If it’s a -1, that means unrestricted growth. This means it will grow and grow until it runs out of disk space, which may not be a good thing. (If you want to know whether autogrowth itself is turned off, look at the growth column instead; a value of 0 there means the file is a fixed size.)
Want to see those numbers in MB? No problem! Just divide by 128. Why 128? Simply because a page is 8 KB, and 1,024 KB / 8 KB = 128 pages per MB. Now let’s take a look:
select name
,size / 128 size_in_MB
,max_size / 128 max_size_in_MB
,growth / 128 growth_in_MB -- only valid when growth is in pages, not percent
from sys.master_files
where database_id = 8;
You can also get the same information from sys.database_files. What’s the difference between sys.master_files and sys.database_files? The sys.master_files catalog view will show you file information for all your databases. The sys.database_files catalog view will show you the file information for the database you are currently in.
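To see the difference in action, here’s the current-database flavor. It’s the same idea, with no database_id filter needed since you only get files for the database you’re in:

```sql
-- run from inside whichever database you care about
select name
,size / 128 size_in_MB
,max_size / 128 max_size_in_MB
,growth / 128 growth_in_MB
from sys.database_files;
```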
You may be wondering, “is there a DMV for that?” Why, yes, there is… sort of. You’ll see why I say “sort of” later. It’s called sys.dm_db_file_space_usage and here is one way to query on it:
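For example, a simple starting point could be something like this (the column names come straight from the view, and everything it reports is in 8 KB pages):

```sql
-- page counts per file, broken out by how the extents are being used
select database_id
,file_id
,unallocated_extent_page_count
,user_object_reserved_page_count
,internal_object_reserved_page_count
,version_store_reserved_page_count
,mixed_extent_page_count
from sys.dm_db_file_space_usage;
```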
And the results:
It shows a more in-depth look at the file space use in terms of extents, such as the number of unallocated pages in the extents, mixed extents, and so on. So basically, pages that are unallocated are your free pages. Don’t get excited just yet. There’s one BIG drawback. It’s for tempdb only. Yes, you read that right. If you don’t believe me, try it for yourself. It’ll only return data for tempdb. *sigh* Yeah, I know. It’d be pretty cool if you could use it for other databases. Oh well.
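Since the unallocated pages are the free ones, you can apply the same divide-by-128 trick here to see tempdb’s free space in MB:

```sql
-- free space per tempdb file, in MB (128 pages per MB)
select file_id
,unallocated_extent_page_count / 128 free_MB
from sys.dm_db_file_space_usage;
```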
There is a whole heck of a lot more on this subject. I thought I’d just go over some basics for now. Plus it’ll give me more to write on another time.
As for any shenanigans at work while our boss was out of town for the last week and a half… sorry to disappoint you all, but we actually behaved ourselves and got quite a bit of work done. Yes, we were productive! Yay coffee!
We started the week pretty good. When someone came to us with an issue, we managed to be empathetic towards them. It doesn’t work in production when you said you tested it during integration last week? I bet you feel really sad right now. By the middle of the week, we were mostly sympathetic. What? You need sysadmin rights in production? Bummer for you. Towards the end of the week, we may have ended up a bit umm… pathetic. What do YOU think WE think about what you just did? Seriously, we didn’t really say that to anyone except for each other… which is how my DBA buddy ended up cracking the screen on her iPhone. Let’s just say I’m really good at ducking. Gotcha! I’m kidding. She dropped it. No, really. She did drop it. Work replaced it, though. So all is well.
Speaking of all being well, tomorrow is SQL Saturday #52 in Denver! Woo hoo! I’m really looking forward to it! Granted, I’m not thrilled about the 1 ½ hour drive I have to get there at 0’dark thirty tomorrow morning, but I’ll manage. It’ll be worth it not only for the great stuff I’ll learn about SQL Server but also because I’ll get to meet more fellow SQL geeks, err… peeps! See you all tomorrow! Here’s hoping I don’t offend anyone or get myself blacklisted anywhere! Seriously, I’ll behave myself. Unless I end up over caffeinating myself, then all bets are off! I probably should apologize in advance.😉