
Performance Improvement: Bigger and Better

  • August 3, 2009
  • By Robert Bogue

Splitting Workloads

While discussing the potential bottlenecks, I mentioned things that could be done to address issues with memory, disk, and network. Adding CPUs for CPU issues is relatively obvious. However, all of these scaling techniques work within a single server; they won't help once you've maxed that server out. In this section we'll look at strategies for improving performance and scalability by splitting workloads across multiple servers.

Splitting Workloads to Their Own Servers

The first thing to do when you're trying to improve performance and scalability is to isolate workloads onto their own servers. If the application pieces servicing the users and the database share a server, you'll want to split those roles, because doing so eliminates the competition for the same finite resources. However, doing this may cause issues if you've not tested it. Suddenly you'll have to deal with double-hop issues, where authentication can't be retransmitted from one server to another. The NTLM protocol in Windows can't be forwarded from one server to the next, so while the database is on the same machine as the application all is well, but once you move the database to its own server the application breaks because the users can't log into the database. Obviously, you can switch the database over to using a single SQL account rather than Active Directory accounts. You can also change authentication over to Kerberos, which does allow for the delegation of authentication (when the application pool account is trusted for delegation).
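For example, here's a minimal sketch in Python (with hypothetical server and account names) of the configuration change involved: switching the connection string from integrated Windows authentication to a dedicated SQL login sidesteps the NTLM double hop entirely.

```python
def connection_string(server, database, sql_user=None, sql_password=None):
    """Build an ODBC-style connection string.

    With no SQL credentials, Integrated Security (NTLM/Kerberos) is
    requested -- which fails across a double hop under NTLM. Supplying
    a dedicated SQL login avoids delegation altogether.
    """
    base = f"Server={server};Database={database};"
    if sql_user is None:
        return base + "Integrated Security=SSPI;"
    return base + f"User Id={sql_user};Password={sql_password};"

# Same machine: integrated authentication works fine.
local = connection_string("localhost", "AppDb")

# Database moved to its own server: use a SQL login instead.
remote = connection_string("dbserver01", "AppDb", "app_svc", "s3cret")
```

The server names and the `app_svc` account are placeholders; the point is that the fix is a connection-string change, not an application change.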

Despite this, the move from a single server with all of the workloads on it to a set of servers is relatively trivial. It's also, generally speaking, relatively easy to test in development, either by using a centralized database server from the developer workstations or by pointing one developer's machine at another's for databases.

Multiple Front End Web Servers

It's probably no surprise that if you have one web server you can improve performance and scalability by adding another. Certainly this works up to a point. However, it means that you have to consider a set of challenges that weren't important when the solution was on a single server. We've already covered load balancers and their settings. We've also covered session state and caching, both of which are dramatically impacted by the decision to move to multiple front end web servers. (See Performance Improvement — Session State and Performance Improvement — Caching.)

Migrating to a multiple server model — horizontal scaling — is the most effective way to improve the performance of a web application when the front end web servers are the bottleneck. However, because moving from one front end web server to multiple front end web servers means so much upheaval, it's also a relatively expensive and risky way to improve performance and scalability.
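To illustrate why session state is so impacted, here's a minimal round-robin dispatch sketch in Python (hypothetical server names). Real load balancers add health checks and affinity options, but the core behavior is the same: consecutive requests from one client land on different servers.

```python
import itertools

class RoundRobinBalancer:
    """Minimal round-robin dispatcher over a pool of front end servers.

    Illustrates why in-process session state breaks behind a balancer:
    a client's second request may be routed to a different server that
    has never seen that client's session.
    """

    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)

    def next_server(self):
        return next(self._cycle)

lb = RoundRobinBalancer(["web01", "web02"])
hits = [lb.next_server() for _ in range(4)]
# Requests alternate between the two servers, so session state kept
# in-process on web01 is invisible to a request routed to web02.
```

This is why moving to multiple front end servers forces a decision about session state: sticky sessions at the balancer, or an out-of-process session store shared by all servers.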

Multiple Backend Database Servers

Database platforms are specifically designed to be scalable because they often become the central point on the back end that is hard to break apart. Database software is heavily optimized and designed to leverage all of the resources available to it.

Memory, used aggressively for caching, is a key way that database servers mitigate the need to access the relatively slower disks. They also optimize reads by organizing requests in a way that is easier for the disk to service. By clustering requests and ordering them in a relatively straight line across the disk (called elevator seeking), database servers make better use of the disk resources they have.
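The elevator-seeking idea can be sketched as a simple SCAN ordering of pending requests. This is a simplified illustration of the technique, not any particular database engine's implementation:

```python
def elevator_order(requests, head):
    """Order disk requests SCAN-style ('elevator seeking').

    Serve all requests at or beyond the head position in one ascending
    sweep, then sweep back through the remainder in descending order,
    instead of jumping back and forth across the disk as
    first-come-first-served would.
    """
    ahead = sorted(r for r in requests if r >= head)
    behind = sorted((r for r in requests if r < head), reverse=True)
    return ahead + behind

# Head at track 53; pending requests listed in arrival order.
order = elevator_order([98, 183, 37, 122, 14, 124, 65, 67], head=53)
# One ascending sweep (65..183), then one descending sweep (37, 14).
```

Serving the same requests in arrival order would drag the head back and forth across the platter; the elevator ordering turns that into two smooth sweeps.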

However, despite these optimizations there will come a point at which it's simply impossible to scale a single server any further. The server will run out of CPU capacity or network bandwidth. Breaking the back end onto multiple database servers is one way to exceed this scalability limit. This is done by selecting some data that isn't related to the other data (so there's no need for cross-database queries) and moving it to another database server (or cluster of database servers).

Commonly, any databases being used for session state or caching are moved to another database server, while the primary data stores for the application are left on the same server as long as possible. Coordinated caches and database-backed session state generate a very high volume of IO operations, and because these databases are by definition unrelated to the other data in the system, they are ideal candidates to move to other database servers or clusters.
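Moving those databases is largely a configuration exercise. Here's a sketch (with hypothetical database and server names) of routing logical databases to physical servers:

```python
# Hypothetical mapping of logical databases to physical servers.
# Moving the high-IO, self-contained session and cache databases off
# the primary server requires only a configuration change, not a
# schema change, precisely because nothing cross-queries them.
DB_SERVERS = {
    "AppData":      "dbserver01",  # primary application data stays put
    "SessionState": "dbserver02",  # high IO, unrelated to other data
    "Cache":        "dbserver02",
}

def server_for(database):
    """Resolve which database server hosts a given logical database."""
    return DB_SERVERS[database]
```

Because the application already addresses these stores by logical name, repointing `SessionState` and `Cache` at a second server (or cluster) is invisible to the rest of the code.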

Summary

Performance improvement often means breaking things down and building them back up. You'll have to break the problem down until you have a single, definable issue that you can develop strategies for resolving. It also means building up more servers, more memory, and more hardware to make more resources available for handling the load.

About the Author

Robert Bogue, MS MVP Microsoft Office SharePoint Server, MCSE, MCSA:Security, etc., has contributed to more than 100 book projects and numerous other publishing projects. Robert's latest book is The SharePoint Shepherd's Guide for End Users. You can find out more about the book at http://www.SharePointShepherd.com. Robert blogs at http://www.thorprojects.com/blog. You can reach Robert at Rob.Bogue@thorprojects.com.


Tags: Windows, server, CPU, performance management, performance






