Recently, I was doing some testing/playing around on a particular database on my home test servers. After I was done, I checked out the “Data/Log Files Autogrowth/Autoshrink” portion of the “Disk Usage” report for the database, which is shown below.
While I might expect to see results like this on a test box, I would be mortified if I saw it on a production box. As a small challenge to your SQL Server administration skills, just by looking at the above data, what can you deduce? What do you think is going on, and why, and how might it affect the database’s performance? In addition, what might you do to prevent the problem in the first place? Let’s see who can come up with the best explanation. Please post your responses below.
PS: If you are not familiar with this report, I suggest you learn more about it. It is available from SSMS by right-clicking on a database, and then selecting Reports | Standard Reports | Disk Usage. Such a simple report can make it easy to identify some specific database configuration problems, as demonstrated in the example above.
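One common way to prevent the kind of repeated autogrowth activity this report can reveal is to pre-size your files and use a sensible fixed growth increment. The following is a minimal sketch of that idea; the database name, logical file names, and sizes are hypothetical and should be replaced with values appropriate for your own environment and workload.

```sql
-- Pre-size the data and log files, and replace percentage-based
-- autogrowth with a fixed increment, so that autogrowth becomes a
-- rare safety net rather than a routine event.
-- (Database and logical file names below are placeholders.)
ALTER DATABASE MyDatabase
MODIFY FILE (NAME = MyDatabase_Data, SIZE = 10240MB, FILEGROWTH = 512MB);

ALTER DATABASE MyDatabase
MODIFY FILE (NAME = MyDatabase_Log, SIZE = 2048MB, FILEGROWTH = 256MB);
```

The exact sizes matter less than the principle: grow files deliberately, during a maintenance window, rather than letting frequent small autogrowth events interrupt production activity and fragment your files.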
In SQL Server 2005 and later, there is a database option called “forced parameterization”. When it is turned on, it overrides the default “simple parameterization” normally used by SQL Server. It does this by forcing most SELECT, INSERT, UPDATE, and DELETE statements to be parameterized, potentially boosting performance of databases by reducing the number of query compiles and recompiles that need to occur.
As with many “options” in SQL Server, whether this option should be turned on for a particular database is not a simple decision. So I was curious to see how many DBAs actually employ this option. I ran the above poll on my website, and had two surprises. The first surprise is that about 19% of respondents said they use it, which I thought was high. The second surprise is that about 44% of respondents had not even heard of this option.
If you are one of the 44% who aren’t familiar with forced parameterization, I suggest you read the following to learn more about it. While this database option can be very handy in specific cases, it can also hurt performance in other databases. If you try this option, be sure to run a benchmark before implementing it, then another benchmark after implementing it, and see if you get the behavior you expect. Of course, perform this experiment in a test environment, not in production.
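For reference, turning the option on (and off again) is straightforward; the database name below is a placeholder. You can also check the current setting for all your databases from the sys.databases catalog view.

```sql
-- Check the current setting (0 = simple, 1 = forced).
SELECT name, is_parameterization_forced
FROM sys.databases
WHERE name = N'MyDatabase';

-- Turn forced parameterization on for the database.
ALTER DATABASE MyDatabase SET PARAMETERIZATION FORCED;

-- Revert to the default simple parameterization.
ALTER DATABASE MyDatabase SET PARAMETERIZATION SIMPLE;
```

Because the change affects plan compilation for most queries in the database, bracket it with the before-and-after benchmarks described above rather than flipping it on in production and hoping for the best.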
Two years ago when SQL Server MVP Deep Dives: Volume 1 was released, I had the opportunity to contribute two chapters. In SQL Server MVP Deep Dives Volume 2, I not only had the opportunity to contribute a chapter, but the privilege of being the section editor of the “Performance Tuning and Optimization” section of the book. I would like to thank all of the following writers in my section for all their hard work, and for getting me their drafts on schedule. I also want to thank Grant Fritchey, who helped me coordinate the technical editing of the chapters.
For some time now, I have been working on a new book called How to Perform a SQL Server Health Check. I have written over 300 pages of it, and I am only about half way done. In the meantime, I have published a short excerpt from the book on Simple-Talk.com. The article focuses specifically on database properties, and includes a T-SQL script I wrote, along with a PowerShell script written by Phil Factor, that will help you collect your existing database property settings. This is just a taste of the things to come from my new book. Now, I just need to find some extra time to finish writing it.
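To give a flavor of what such a collection script looks at, here is a small sketch that pulls a handful of commonly reviewed database properties from the sys.databases catalog view (available in SQL Server 2005 and later). The columns shown are a sample I chose for illustration, not the full script from the article.

```sql
-- Review a few key property settings for every database on the instance.
SELECT name,
       compatibility_level,
       recovery_model_desc,
       page_verify_option_desc,
       is_auto_shrink_on,        -- auto shrink should almost always be off
       is_auto_close_on,         -- auto close should almost always be off
       is_parameterization_forced
FROM sys.databases
ORDER BY name;
```

Running a query like this across all your instances is a quick first pass at the kind of configuration review a full health check expands upon.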
I have been a judge for the Exceptional DBA of the Year Awards ever since its inception, and one of the questions asked on the application form is “What is the biggest mistake you have made as a DBA, and how did you deal with it?” This question produces some interesting answers, many of which provide important lessons for all DBAs. I have collected some of those responses from past years (not from the current 2011 Exceptional DBA of the Year Awards entries), which I think you will find educational. I have edited these entries for readability and to remove any potentially confidential information. Hopefully you will learn from these DBAs’ mistakes, and not make the same ones yourself.
Ever since the first file was written to a floppy disk drive using DOS, physical file fragmentation has been a problem. Essentially, when the OS writes a file to a disk subsystem and contiguous clusters are not available, parts of the file are written elsewhere on the disk. When a file is stored on disk in this non-contiguous manner, the file is considered to be physically fragmented. Physical file fragmentation can add load to your I/O subsystem and reduce performance, because the disks have to work harder to read and write data. This is because the heads on the disk drive have to jump around to different physical locations on the disk to find all of the data, instead of reading and writing the data sequentially, as they can when the data is contiguous.