Recently, I was doing some testing/playing around on a particular database on my home test servers. After I was done, I checked out the “Data/Log Files Autogrowth/Autoshrink” portion of the “Disk Usage” report for the database, which is shown below.
While I might expect to see results like this on a test box, I would be mortified if I saw it on a production box. As a small challenge to your SQL Server administration skills, just by looking at the above data, what can you deduce from it? What do you think is going on, and why, and how might it affect the database’s performance? In addition, what might you do to prevent the problem in the first place? Let’s see who can come up with the best explanation. Please post your responses below.
PS: If you are not familiar with this report, I suggest you learn more about it. It is available from SSMS by right-clicking on a database, and then selecting Reports | Standard Reports | Disk Usage. Such a simple report can make it easy to identify some specific database configuration problems, as demonstrated in the example above.
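For readers who prefer a query to a report: the autogrowth data shown in this report comes from the default trace, and a query along these lines (a hedged sketch, assuming the default trace is still enabled, as it is out of the box) returns similar information:

```sql
-- Read autogrow/autoshrink events from the default trace,
-- which is where the Disk Usage report sources its data.
DECLARE @trace_path NVARCHAR(260);
SELECT @trace_path = path
FROM sys.traces
WHERE is_default = 1;

SELECT te.name                     AS event_name,
       t.DatabaseName,
       t.FileName,
       t.StartTime,
       t.Duration / 1000           AS duration_ms,  -- Duration is in microseconds
       (t.IntegerData * 8) / 1024  AS change_mb     -- IntegerData is in 8 KB pages
FROM sys.fn_trace_gettable(@trace_path, DEFAULT) AS t
JOIN sys.trace_events AS te
  ON t.EventClass = te.trace_event_id
WHERE te.name IN ('Data File Auto Grow', 'Log File Auto Grow',
                  'Data File Auto Shrink', 'Log File Auto Shrink')
ORDER BY t.StartTime DESC;
```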
8 thoughts on “What’s Wrong with this Picture: A Database Misconfiguration Well Demonstrated”
Your file growth settings for your log are too small. You’re going to have a large number of VLFs. It’s growing much too often.
That looks like a log configured with default size (tiny), 10% autogrow, and a database in full recovery with no log backups.
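One quick way to confirm the VLF symptom described above is to count the virtual log files directly. A minimal sketch, assuming SQL Server 2016 SP2 or later (`'YourDatabase'` is a placeholder name; on older versions, `DBCC LOGINFO` returns one row per VLF instead):

```sql
-- Count the virtual log files (VLFs) in a database's transaction log.
-- A log grown in many tiny increments will show hundreds or thousands of VLFs.
SELECT COUNT(*) AS vlf_count
FROM sys.dm_db_log_info(DB_ID('YourDatabase'));
```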
Effect on performance: the log can’t use instant file initialization, so that growth takes time, and data modifications may have to wait until the log growth is complete. If a modification times out, that could result in a log-full error.
Preventing it: switch to simple recovery or schedule log backups, set the autogrow increment to a sensible value for the size of the log, pre-size the log to a sensible size for the expected data volumes and transactional activity, and monitor for log usage nearing 100% so that it can be grown manually.
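The prevention steps above could be sketched in T-SQL as follows; database, file, and path names and the sizes are all placeholders to adapt to your environment:

```sql
-- Option 1: switch to simple recovery if point-in-time recovery is not needed.
ALTER DATABASE YourDatabase SET RECOVERY SIMPLE;

-- Option 2: stay in full recovery and take regular log backups
-- (normally on a schedule, not a one-off).
BACKUP LOG YourDatabase TO DISK = N'X:\Backups\YourDatabase_log.trn';

-- Either way: pre-size the log and use a fixed-increment autogrowth
-- (a percentage grows ever-larger amounts as the file grows).
ALTER DATABASE YourDatabase
MODIFY FILE (NAME = YourDatabase_log, SIZE = 8GB, FILEGROWTH = 512MB);
```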
Looks like you have autogrowth set at a percentage, and the log file size was not set properly to start with.
When I teach my “Best Practices Every DBA Must Know” session, one of the things I focus on is the importance of pre-sizing both your MDF and LDF files to an appropriate size, so that a database doesn’t have to use autogrow to grow them. In many, many cases, I see databases that were created at their default sizes and growth settings, and over time, both the MDF and LDF files have grown using autogrows exclusively. As stated above, this not only hurts performance during autogrowth (note some of the very long autogrowths above), it leads to physical file fragmentation for both MDFs and LDFs; and for LDFs, it also leads to having an excessive number of VLFs. All of these can contribute to poor performance. A little prevention up front can head off a lot of problems. If you’re not familiar with the best practice of pre-sizing MDF and LDF files, you can download my presentation on my blog for more information.
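The pre-sizing practice described above looks like this at database-creation time. A minimal sketch: the database name, file paths, and sizes are illustrative placeholders, not recommendations for any particular workload:

```sql
-- Create the database with files pre-sized for expected volume,
-- rather than inheriting the tiny model-database defaults.
CREATE DATABASE SalesDB
ON PRIMARY
    (NAME = SalesDB_data,
     FILENAME = N'D:\Data\SalesDB.mdf',
     SIZE = 20GB, FILEGROWTH = 2GB)
LOG ON
    (NAME = SalesDB_log,
     FILENAME = N'L:\Log\SalesDB.ldf',
     SIZE = 4GB, FILEGROWTH = 512MB);
```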
I think what is going on is that the database is in full recovery mode and the transaction log is not being backed up. This will cause the transaction log to keep growing until it potentially fills up the disk. Other possible causes might be replication or database mirroring that has stopped, preventing log records from being cleared.
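A quick way to distinguish between the causes listed above: SQL Server records why each database’s log cannot currently be truncated, so a query like this (a simple sketch) narrows it down:

```sql
-- log_reuse_wait_desc values such as LOG_BACKUP, REPLICATION, and
-- DATABASE_MIRRORING correspond to the possible causes discussed above.
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases;
```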
I discovered this report section a while back and profiled the source query, which I have blogged HERE, with some modifications.
I’ll agree with what’s above but it is crucial to note that the last autogrowth was 5.5 minutes after the previous one. If you’re filling 4GB of log space in five minutes you have more wrong than file size and autogrowth settings. Before doing anything else see what’s currently running that’s eating up the log space (something along the lines of sp_who2 or sp_WhoIsActive). Once it’s been identified see what needs to be done to stop it from running and to handle data modification that’s already been done (either rollback or just restore to last good backup).
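In addition to sp_who2 or sp_WhoIsActive, the identification step above can be done with the transaction DMVs, which show how much log each open transaction has generated. A hedged sketch (`'YourDatabase'` is a placeholder):

```sql
-- Find open transactions in a database, ordered by log bytes generated,
-- to identify what is eating the log space.
SELECT st.session_id,
       dt.database_transaction_begin_time,
       dt.database_transaction_log_bytes_used
FROM sys.dm_tran_database_transactions AS dt
JOIN sys.dm_tran_session_transactions AS st
  ON dt.transaction_id = st.transaction_id
WHERE dt.database_id = DB_ID('YourDatabase')
ORDER BY dt.database_transaction_log_bytes_used DESC;
```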
The next step is to fix that process. Consider using the bulk-logged recovery model if this is a bulk load where point-in-time recovery isn’t necessary. If this isn’t a bulk load, you may need to make code fixes to the process. Once the root cause has been handled, set the autogrowth properties as discussed above.
For performance concerns, in addition to those mentioned in previous posts, you either have a runaway process that’s consuming resources better used for legitimate processes, or you are doing a large load that could use the more efficient bulk-logged recovery model.
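The bulk-logged approach mentioned above is typically a temporary switch around the load, something along these lines (a sketch with placeholder names; a log backup after switching back re-establishes the point-in-time recovery chain):

```sql
-- Switch to bulk-logged only for the duration of the bulk load.
ALTER DATABASE YourDatabase SET RECOVERY BULK_LOGGED;

-- ... perform the bulk load here ...

-- Switch back to full recovery and take a log backup immediately.
ALTER DATABASE YourDatabase SET RECOVERY FULL;
BACKUP LOG YourDatabase TO DISK = N'X:\Backups\YourDatabase_log.trn';
```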
Thanks to all of the people who responded with valuable insights into this very common problem.
These responses are quite valuable. I would also like to add that rebuilding an index online vs. offline can also affect the log. To my understanding, you’ll need twice the amount of space for an online rebuild of an index. This also goes hand in hand with having the right recovery model in place during your index maintenance operations. Depending on the recovery model, index maintenance (CREATE INDEX, ALTER INDEX REBUILD) can be minimally logged.
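For reference, the rebuild options mentioned above look like this. A hedged sketch with placeholder index and table names; note that minimal logging applies to offline rebuilds under the simple or bulk-logged recovery models, while rebuilds under full recovery are fully logged:

```sql
-- Online rebuild: the table stays available, but more log and extra
-- space are consumed than with an offline rebuild.
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders
REBUILD WITH (ONLINE = ON, SORT_IN_TEMPDB = ON);

-- Offline rebuild: can be minimally logged under SIMPLE or BULK_LOGGED.
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders
REBUILD WITH (ONLINE = OFF);
```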