Since I wrote the first part of this blog series, SQLServerCentral.com (SSC) SQL Servers have been upgraded, and performance is much better than it was on the older hardware. In fact, all hardware bottlenecks have gone away. Before the move to the new hardware, CPU and disk I/O were constantly pegged, and now, both resources are more than comfortably keeping up with their load. Memory and network utilization had never been a problem with the old hardware.
As I mentioned in Part 1, we used to have two SQL Server instances running multiple databases on two different servers. These older servers had identical specs, as follows:
- Two, 2GHz AMD Dual Core CPUs (a total of four cores)
- 4GB RAM (using the /3GB switch)
- Three locally-attached RAID 1 arrays (with system files, SQL Server binaries, MDF, LDF, BAK, and TRN files scattered among them)
- Windows 2003 (32-bit)
- SQL Server 2005 (32-bit)
While most of the databases ran on one of the two servers, one of the larger and more problematic databases (because of its performance issues) ran by itself on the second server. In essence, we were dividing the total database load between two different physical servers. While we had two servers, they ran independently and were not clustered.
When the Red Gate IT staff selected the new hardware (with my input), their choice of hardware and configurations was limited because the SSC servers are hosted with a large ISP. After much consideration, they selected two new servers with the following identical hardware:
- Two, 2.26GHz Quad Core Intel CPUs (a total of eight cores)
- 24GB RAM
- Two, locally-attached RAID 1 arrays (array 1 holds the system files and SQL Server binaries, and array 2 is used to make local backups, before they are moved off the server)
- Two, RAID 10 SAN arrays (array 1 holds the MDF files, and array 2 holds the LDF files)
- One, RAID 1 SAN array (used exclusively for tempdb)
- Windows 2008 (64-bit)
- SQL Server 2008 (64-bit)
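To make the storage layout concrete, here is a hedged T-SQL sketch of how a database might be created on this kind of configuration. The drive letters, paths, and database name are hypothetical, not the actual SSC paths:

```sql
-- Hypothetical drive letters: E: = RAID 10 SAN array for MDF files,
-- F: = RAID 10 SAN array for LDF files.
CREATE DATABASE SSCExample
ON PRIMARY
(
    NAME = SSCExample_Data,
    FILENAME = 'E:\MSSQL\Data\SSCExample.mdf'
)
LOG ON
(
    NAME = SSCExample_Log,
    FILENAME = 'F:\MSSQL\Logs\SSCExample.ldf'
);

-- Moving tempdb to its dedicated RAID 1 SAN array (T: here is also
-- hypothetical); the change takes effect after the instance restarts.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILENAME = 'T:\MSSQL\tempdb.mdf');
ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, FILENAME = 'T:\MSSQL\templog.ldf');
```

Separating the data files, log files, and tempdb onto different arrays is what lets sequential log writes and random data reads avoid competing for the same spindles.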
Unlike the previous two-server setup, the two new servers have been configured as an active/active cluster. Currently, all of the databases from both of the older server instances are running on the first instance of SQL Server, on node 1 of the cluster. The second instance of SQL Server, running on node 2 of the cluster, is not currently being used (other than for the MDW database used by the SQL Server 2008 Data Collector). This second instance will soon be put to use to support the Simple-Talk website, which is still running on older hardware and software at a different ISP; that move will be made later this year. So, right now, all of the SSC databases are running on a single instance of SQL Server, instead of two instances, as before. Even with the extra load of running all of the SSC databases on a single instance, the new hardware can more than handle the total load.
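On a clustered instance like this, you can confirm the cluster membership and see which physical node is currently hosting an instance with standard SERVERPROPERTY metadata, for example:

```sql
-- IsClustered returns 1 when the instance is part of a failover cluster;
-- ComputerNamePhysicalNetBIOS reports the node currently hosting it.
SELECT SERVERPROPERTY('IsClustered')                 AS IsClustered,
       SERVERPROPERTY('ComputerNamePhysicalNetBIOS') AS CurrentNode;
```

This is a handy sanity check after a failover, since the instance's virtual name stays the same even when it moves to the other node.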
The CPU and disk I/O issues we had with the older servers have been greatly eased by the new hardware. The doubling of the number of cores (plus their additional speed) has eliminated the CPU bottleneck. The addition of more, faster spindles, and the isolation of the MDF files, LDF files, and tempdb on separate arrays, has substantially reduced disk I/O contention, eliminating that bottleneck.
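One way to verify that this kind of file isolation is actually in place is to list every database file and its physical location from the instance-wide catalog view, a quick sketch:

```sql
-- List each database's data (ROWS) and log (LOG) files with their
-- physical paths, so you can confirm which array each one lives on.
SELECT DB_NAME(database_id) AS database_name,
       type_desc,           -- ROWS or LOG
       physical_name
FROM sys.master_files
ORDER BY database_id, file_id;
```

Running this against tempdb's rows in particular confirms it has landed on its dedicated array.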
In Part 3 of this blog series, I will discuss the type of database maintenance we perform on these databases.