Excerpt from Upcoming Book by Grant Fritchey Called “Learn SQL Server in a Month of Lunches”

Later this year, a new book by author Grant Fritchey, called Learn SQL Server in a Month of Lunches, will be released by Manning Publications. Below is a preliminary excerpt from the book on “Creating a Full Backup”. The book is geared toward readers new to SQL Server.

 

Creating a Full Backup

The full backup is a complete copy of the database. It copies everything: not just all the data, but all the tables, all the indexes, and all the stored procedures; everything that makes up the database gets copied. That’s what makes it so important for protection in the event of some type of catastrophic loss. As with almost everything else in SQL Server, you can perform a full backup using either the SQL Server Management Studio (SSMS) graphical user interface (GUI), or you can use Transact-SQL (T-SQL) to make the backups happen. Let’s start with the GUI.
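The excerpt goes on to walk through the GUI, but for reference, the T-SQL form is a single statement. Here is a minimal sketch, assuming a database named MyDatabase and a placeholder backup path you would replace with your own:

-- Take a full backup of a hypothetical database named MyDatabase;
-- the destination path is a placeholder.
BACKUP DATABASE MyDatabase
TO DISK = 'C:\Backups\MyDatabase_Full.bak'
WITH INIT,       -- overwrite any existing backup sets in this file
     CHECKSUM,   -- validate page checksums while writing the backup
     STATS = 10; -- report progress every 10 percent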

Continue reading

When Was the Last Time You Removed an Index Because It Was Never Used?


Indexes can be great for boosting the performance of a query, but an index that is never used can drag down an instance’s performance, because unused indexes must be maintained just like any other index. Anytime a row is inserted, updated, or deleted, every index on the table must be maintained. If a lot of records are inserted or updated in a table, an index may experience page splitting. During index rebuilds or reorganizations, indexes are defragmented. During DBCC CHECKDB, indexes have to be checked for their integrity. During all of these processes, all data modifications have to be logged in the transaction log. And of course, indexes take up space in the data cache and on disk. All of these resources are wasted on unused indexes, and that can hurt the overall performance of a SQL Server instance.
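To find candidates for removal, you can compare sys.indexes against the index usage DMV. A sketch along these lines can help (note that the DMV’s counters reset when the instance restarts, so make sure the server has been up long enough to cover your full workload, including infrequent jobs like month-end reports):

-- Nonclustered indexes in the current database with no seeks, scans,
-- or lookups since the last restart, ordered by maintenance cost.
SELECT OBJECT_NAME(i.object_id)  AS table_name,
       i.name                    AS index_name,
       ISNULL(s.user_updates, 0) AS updates_since_restart
FROM sys.indexes AS i
LEFT JOIN sys.dm_db_index_usage_stats AS s
       ON s.object_id = i.object_id
      AND s.index_id = i.index_id
      AND s.database_id = DB_ID()
WHERE OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
  AND i.index_id > 1  -- skip heaps (0) and clustered indexes (1)
  AND ISNULL(s.user_seeks, 0) + ISNULL(s.user_scans, 0)
      + ISNULL(s.user_lookups, 0) = 0
ORDER BY updates_since_restart DESC;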

Continue reading

Do You Ever Physically Defragment Your SQL Server MDF & LDF Files?


Ever since the first file was written to a floppy disk drive under DOS, physical file fragmentation has been a problem. Essentially, when the OS writes a file to a disk subsystem and contiguous clusters are not available, parts of the file are written elsewhere on the disk. When a file is stored on disk in this non-contiguous manner, the file is considered to be physically fragmented. Physical file fragmentation can add load to your I/O subsystem and reduce performance because the disks have to work harder to read and write data: the heads of the disk drive have to jump around to different physical locations on the disk to find all of the data, instead of reading and writing the data sequentially, as they can when the data is contiguous.
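Defragmentation itself happens at the operating system level, outside SQL Server, but you can at least list the physical files to examine from inside the instance. A quick sketch (size is stored in 8 KB pages, hence the arithmetic):

-- List the physical data and log files for every database on the
-- instance, with their sizes in MB.
SELECT DB_NAME(database_id) AS database_name,
       type_desc            AS file_type,  -- ROWS (MDF/NDF) or LOG (LDF)
       physical_name,
       size * 8 / 1024      AS size_mb
FROM sys.master_files
ORDER BY database_name, file_type;

You would then point an OS-level defragmentation tool at those paths, ideally while the SQL Server service is stopped, since such tools generally cannot safely defragment files that are held open.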

Continue reading

SQLServerCentral Best Practices Clinic

I have just begun a new article series at SQLServerCentral.com called the SQLServerCentral.com Best Practices Clinic. What is this series all about? The goal is to gather community feedback on different ways to optimize the two-node cluster that runs the backend databases for SQLServerCentral.com and Simple-Talk.com. If you would like to participate in this SQL Server community experiment, please check out the article and provide your feedback.

Microsoft Codename Atlanta Attempts to Help Out DBAs By Providing Basic Assistance

I have always recommended that DBAs be as proactive as possible, catching potential and actual problems as soon as possible, before they can negatively affect a SQL Server’s performance or availability. In fact, I am working on a new book, tentatively titled How to Perform a SQL Server Health Check, which will provide a large number of checklists DBAs can use to verify that their SQL Server instances are correctly configured according to generally accepted SQL Server best practices.

Continue reading

How Often Do You Back Up Your Transaction Logs?

[Poll results chart: how often DBAs back up their transaction logs]

Whenever I speak about database maintenance, I always recommend that DBAs back up their production database transaction logs at least once an hour. As with most things DBA-related, there is no single answer that will meet every SQL Server environment’s needs. The interval between transaction log backups depends on many factors, including, but not limited to, how much data the organization is willing to lose, how active the database is, and so on.

When I set up database maintenance plans myself, I generally back up at 15-minute intervals, although sometimes I deviate from this, depending on a variety of factors.
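The log backup itself is a one-line statement that a maintenance plan or SQL Server Agent job then runs on whatever schedule you choose. A minimal sketch, assuming a hypothetical database named MyDatabase and a placeholder destination path:

-- Back up the transaction log of MyDatabase; this requires the
-- FULL or BULK_LOGGED recovery model.
BACKUP LOG MyDatabase
TO DISK = 'C:\Backups\MyDatabase_Log.trn'
WITH CHECKSUM,   -- validate page checksums while writing the backup
     STATS = 25; -- report progress every 25 percent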

In my most recent poll, I asked DBAs how often they back up their transaction logs, and the results are shown in the chart above.

It is interesting to see the choices made by the DBAs who completed the poll. If you add up the results, nearly 77% follow my recommendation of backing up transaction logs at least once an hour, which I think is great.

A few other DBAs chose different transaction log backup intervals. Just over 2% of the DBAs said they never back up their transaction logs. I hope this is because their databases use the simple recovery model, and not because they don’t know what they are doing.
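If you are not sure which recovery model your databases are using, a quick catalog query will tell you:

-- Show the recovery model of every database on the instance. Only
-- databases in FULL or BULK_LOGGED need transaction log backups.
SELECT name, recovery_model_desc
FROM sys.databases
ORDER BY name;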

A surprising 6% or so said that they don’t even know what a transaction log backup is. Hopefully, after seeing this post, they will do some research and find out why transaction log backups are so important.