Grid computing is not a new concept. Universities, research centers, and even some businesses have used it for several decades under different names; recent incarnations have gone by names such as distributed computing, and the idea is remarkably similar to parallel processing as well. If you know what those are, you probably have a good idea of what grid computing is.
Grid computing takes many computers on a network and divides a computing task among them for speed. With enough individual PCs working together, the grid essentially becomes a supercomputer. In fact, if you look at a list of the top “supercomputers” in the world (see the further reference links at the end), you’ll see a list dominated by PCs working together in grids of thousands of machines.
While you can build a grid computing network from identical PCs (like most of the grid networks in the list of top supercomputers), another advantage of grid computing is that it can use many different kinds of PCs. As long as you can write a local desktop application for a given hardware and OS platform, the data each PC returns after processing is usually in a standard format, such as XML. So PCs running Mac OS, Linux, or Windows, as well as workstations running other OSs, can all collaborate in a grid system, given the proper local application for each.
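To make the platform-neutral result format concrete, here is a minimal sketch in Python of how a client on any OS might serialize its computed results as XML before returning them to the central server. The element and attribute names here are invented for illustration; a real grid application would define its own schema.

```python
import platform
import xml.etree.ElementTree as ET

def package_results(chunk_id, values):
    """Serialize one chunk's results as XML; any OS can produce this."""
    root = ET.Element("result", chunk=str(chunk_id), os=platform.system())
    for v in values:
        ET.SubElement(root, "value").text = str(v)
    return ET.tostring(root, encoding="unicode")

# A Linux, Windows, or Mac client all emit the same structure.
xml_payload = package_results(42, [3.14, 2.71])
```

Because the payload is plain XML, the server can parse results identically no matter which platform produced them.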
Two of the most popular and trendy applications to take advantage of grid computing recently are the Search for Extra-Terrestrial Intelligence SETI@home project and the United Devices Cancer Research Project (see the further reference links at the end). Each of these projects is typical of how grid computing works and of the application needs driving it. Both the SETI and cancer research applications involve processing massive amounts of data. In the case of SETI, the data is millions of hours’ worth of radio-telescope data; in the case of cancer research, it is millions of combinations of chemical data used in the search for treatment methods.
When processing this much data, the application designer can take one of two approaches: design the application to run on a single, very expensive supercomputer, or design it to run on many connected, inexpensive computers. A grid computing solution takes the second approach, spreading the work efficiently among many less expensive machines.
In the cases of SETI and cancer research, “less expensive” actually translates to “free” in terms of processing time because of how the applications are marketed. Both are distributed as free Internet downloads that users install on their PCs so they can take part in solving great problems for humanity. The applications run as screen savers when the PCs aren’t in use, using the PCs’ unused processor time to work on a chunk of data. Each time a PC finishes a chunk, the application sends the results back to a central server, flagging anything that might merit follow-up, and the server provides the PC with a new chunk of data to work on.
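The fetch-process-report cycle just described can be sketched in a few lines. This is a simulation only: the analysis function, the “interesting result” threshold, and the in-memory stand-ins for the central server are all invented for illustration, and a real client such as SETI@home’s has far more machinery for scheduling, checkpointing, and verification.

```python
import queue

def run_client(get_chunk, send_result, pc_is_idle):
    """One volunteer PC's work cycle: while idle, fetch a chunk,
    process it, report the result, and ask for the next chunk."""
    while pc_is_idle():
        chunk = get_chunk()
        if chunk is None:           # server has no more work
            break
        result = sum(chunk)         # stand-in for the real analysis
        interesting = result > 100  # flag anything worth follow-up
        send_result(chunk, result, interesting)

# Simulate the central server with simple in-memory stand-ins.
work = queue.Queue()
for c in ([1, 2, 3], [50, 60], [4, 4]):
    work.put(c)

results = []
run_client(
    get_chunk=lambda: work.get() if not work.empty() else None,
    send_result=lambda chunk, r, flag: results.append((r, flag)),
    pc_is_idle=lambda: True,        # pretend the screen saver is on
)
# results now holds (6, False), (110, True), (8, False)
```

The key point is that the client is a dumb loop: all coordination lives on the server side, which is what lets millions of heterogeneous PCs participate.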
The free model works great for worthy causes; people will donate spare cycles to SETI and cancer research. But grid computing is also finding its way into commercial applications. (The cancer research application from United Devices is in fact a showcase project, proving the grid computing concept to help the company sell its grid computing suite.) IBM and many other major vendors are pushing grid computing solutions. For commercial applications, there are three business models for grid computing that will work. The first is grid computing as a service: your application data is sent to a hosting service and you pay for processing time.
The second model uses your own company’s spare PC processing power. Whether your company has 10 PCs or 100,000, the concept is the same: design and install a grid application that runs on all of your PCs at night, on weekends, or any time a PC isn’t in active use. Grid applications can even be designed to run in the background on underutilized PCs. Does your company have a lot of PCs mostly used for word processing or other applications with low processor usage? Those computers can run grid applications 24 hours a day: in the background while a user is typing, and as the main processor task when the PC isn’t in use.
The third model is similar to the second: if you need a dedicated grid, to rival the power of a supercomputer for example, you build a grid network from the ground up with PCs dedicated to that task.
Programming a grid computing application involves many additional complexities not involved in a single-machine application. As a developer, some of the additional aspects you’ll need to plan for include:
- Dividing and combining data and results: In a grid solution, you have to determine how to parcel the data into chunks, send those chunks to the individual computers on the grid, and recombine the results they send back. This will probably require processing by one or more dedicated machines.
- Data security: If the data you will be processing needs to be secure, you need a way of ensuring that the PC user can’t intercept or unbundle the raw data from your application.
- Application security: You need to be tremendously careful that the application you write to run on the grid PCs is secure and can’t be hijacked or hacked, turning your application into a launching pad for zombie attacks, worms, and other malware.
- Testing: Not that you don’t rigorously test every application you release on unsuspecting users, but if you are going to have hundreds, thousands, or millions of PCs chewing on your data, you can bet they aren’t going to be identically configured test beds. You need solid testing methodology and processes to ensure both a smooth application experience for the user and that your data isn’t being corrupted by a bug.
- Redundancy and capacity planning: If you send a chunk of data to a PC and that PC never finishes processing it, your data still needs to be processed. You need to decide whether to send redundant chunks out for multiple analysis, or whether to wait a given time for a response and then resend the chunk to the next available PC. You also need to determine how quickly each PC (remember, they aren’t identically configured) can process a chunk, and decide whether to send different-sized batches to each PC based on its capacity.
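The redundancy point above can be sketched as a server-side tracker that hands out chunks, remembers when each went out, and re-queues any chunk whose result hasn’t come back within a deadline. All of the names here are invented for illustration; a production scheduler would also handle duplicate results, per-PC capacity, and persistence.

```python
import time

class ChunkScheduler:
    """Hand out chunks and re-queue any that time out unanswered."""

    def __init__(self, chunks, timeout_seconds):
        self.pending = list(chunks)   # chunks not yet handed out
        self.in_flight = {}           # chunk id -> time it was dispatched
        self.done = set()
        self.timeout = timeout_seconds

    def next_chunk(self, now=None):
        now = time.time() if now is None else now
        # Reassign any chunk whose PC never answered in time.
        for cid, sent_at in list(self.in_flight.items()):
            if now - sent_at > self.timeout:
                del self.in_flight[cid]
                self.pending.append(cid)
        if not self.pending:
            return None
        cid = self.pending.pop(0)
        self.in_flight[cid] = now
        return cid

    def report_result(self, cid):
        self.in_flight.pop(cid, None)
        self.done.add(cid)

sched = ChunkScheduler(chunks=["a", "b"], timeout_seconds=60)
first = sched.next_chunk(now=0)     # "a" goes to some PC
second = sched.next_chunk(now=10)   # "b" goes to another PC
sched.report_result(second)         # "b" comes back; "a" never does
retry = sched.next_chunk(now=120)   # "a" timed out and is reissued
```

Passing `now` explicitly makes the timeout logic easy to test; in production the default `time.time()` path would be used.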
Many companies are selling grid solutions. These help you deploy and manage the applications you run on the grid and manage the resulting network of computers. All of the major computing vendors (including IBM, Sun, HP, Microsoft, and Intel) have some initiative in grid computing and are involved in sponsoring grid computing research organizations.
For further reference, see the following resources:
Jim Minatel is a freelance writer for Developer.com in addition to working with Wiley and WROX publishing.