
Resource Management: An Introduction


Basic Idea

You have acquired a set of resources, but to be effective, you need to come up with a scheme for using them. Remember that you may not have the resources “forever,” as is the case with dynamic resources, so you are best advised to make the most of what you have for as long as you have it. Even if the resources are dedicated resources, you still need a way to ensure high utilization. Resource managers exist to achieve this goal. A typical resource manager uses some sort of scheduler to ensure proper usage1 of resources by increasing their utilization. Scheduling is the concept of sharing a scarce resource amongst users without starving any of them, while at best giving the impression that every user has access to all of what that resource has to offer. This becomes challenging when the number of users increases dramatically or when the duration of the jobs varies greatly. What makes the challenge even greater is that scheduling problems are mostly NP-Complete, with only a very limited number of scenarios considered to fall under the P-type problem domain.
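To make the no-starvation idea concrete, here is a minimal round-robin sketch in Java. The Task type, the job names, and the equal one-unit time slices are all illustrative assumptions, not a real scheduler:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Minimal round-robin sketch: every queued task gets a fixed time slice
// in turn, so no task waits forever (no starvation).
public class RoundRobinDemo {
    record Task(String name, int remainingUnits) {} // hypothetical task type

    public static void main(String[] args) {
        Queue<Task> ready = new ArrayDeque<>();
        ready.add(new Task("short-job", 1));
        ready.add(new Task("long-job", 3));
        ready.add(new Task("medium-job", 2));

        final int slice = 1; // one unit of work per turn
        while (!ready.isEmpty()) {
            Task t = ready.poll();
            int left = t.remainingUnits() - slice;
            System.out.println("ran " + t.name() + ", remaining=" + Math.max(left, 0));
            if (left > 0) {
                ready.add(new Task(t.name(), left)); // back of the line, never starved
            }
        }
    }
}
```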

Single-criterion scheduling problems are those where the user is interested in maximizing or minimizing only one thing, or criterion (for example, minimizing the flow time or the completion time). Many scenarios, machine shop or otherwise, require more than one criterion to be optimized. For example, on a multi-processing machine, you want to minimize startup time and, at the same time, minimize the completion time of all the tasks. There are times when these two criteria conflict; in other words, you might need to suspend a task, thus delaying its completion, to start a newly arrived task. The point is that “sacrifices” must be made, and that is the point of heuristic-type algorithms; they aim to minimize the overall sacrifice one has to make while optimizing everything near-perfectly. This does not always work, but considering the problem domain, it is a very good attempt at solving the unsolvable. As you might expect, scheduling shares a number of ideas with optimization theory.
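One common heuristic family handles conflicting criteria with a weighted sum: score each candidate against every criterion and pick the lowest total. A minimal sketch follows; the weights and names are arbitrary illustrations, not any particular scheduler's method:

```java
// Sketch of a weighted-sum heuristic for two conflicting criteria:
// expected startup delay vs. expected completion time. The weights are
// arbitrary; a real scheduler would tune or expose them.
public class WeightedScore {
    static final double STARTUP_WEIGHT = 0.3;
    static final double COMPLETION_WEIGHT = 0.7;

    // Lower score is better; the scheduler picks the option minimizing it.
    static double score(double startupSeconds, double completionSeconds) {
        return STARTUP_WEIGHT * startupSeconds + COMPLETION_WEIGHT * completionSeconds;
    }

    public static void main(String[] args) {
        // Resource A starts fast but finishes slowly; B is the opposite.
        System.out.println("A: " + score(1.0, 120.0)); // 84.3
        System.out.println("B: " + score(10.0, 90.0)); // 66.0 -- B wins
    }
}
```

Note how B wins despite its slower startup, because the completion-time weight dominates; that is the “sacrifice” being made explicit.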

Quality of Service (QoS) for Grid computing has a special meaning because it no longer applies only to network resources. Compute, data, and network resources need to be managed together, and there needs to be a mechanism that provides a quantifiable way of dictating QoS across all three domains. Scheduling systems thus need to take QoS guarantees into account when scheduling tasks across resources and administrative domains. This is further complicated in globally distributed and/or dense systems, where scheduling itself becomes more difficult and meeting QoS guarantees becomes even more complex.
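To illustrate what a quantifiable QoS specification spanning all three domains might look like, here is a small sketch; every field, name, and unit is a hypothetical choice:

```java
// Hypothetical QoS descriptor covering the three resource domains the
// article mentions; fields and units are illustrative only.
public record QosRequirement(
        int minCpus,             // compute: minimum processors
        long minMemoryMb,        // compute: minimum free memory
        long minDiskMb,          // data: scratch space for staged input
        double minBandwidthMbps  // network: sustained transfer rate
) {
    boolean satisfiedBy(int cpus, long memMb, long diskMb, double bwMbps) {
        return cpus >= minCpus && memMb >= minMemoryMb
                && diskMb >= minDiskMb && bwMbps >= minBandwidthMbps;
    }
}
```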

Think of an operating system and how it schedules various threads or processes on the CPU. As the number of CPUs increases, the problem becomes more difficult, but the concept is still the same. There are a number of different scheduling algorithms, but I will not cover them in this article. The main focus here is to break down a resource manager into its core components and talk about how these components work together to achieve a single goal: high resource utilization.

Resource Manager Components

Conceptually speaking, the resource manager is very simple:

  • Queue incoming tasks
  • Keep a record of available resources
  • Match resources with the incoming tasks (scheduler)
  • Queue results

That is not to say it is easy to design or write a resource manager, but from a conceptual standpoint it is a simple enough design that you can relate to. Figure 1 depicts this architecture.

Figure 1: Anatomy of a Resource Manager
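Here is a minimal Java sketch of those four components wired together; every class and method name is invented for illustration:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Minimal sketch of the four conceptual components: a task queue, a record
// of available resources, a matching step, and a result queue.
public class ResourceManagerSketch {
    record Task(String id) {}
    record Resource(String id) {}
    record Result(String taskId, String resourceId) {}

    private final Queue<Task> taskQueue = new ArrayDeque<>();      // incoming tasks
    private final Queue<Resource> available = new ArrayDeque<>();  // idle resources
    private final Queue<Result> resultQueue = new ArrayDeque<>();  // finished work

    void submit(Task t) { taskQueue.add(t); }      // queue incoming tasks
    void register(Resource r) { available.add(r); } // record available resources

    // The "scheduler": match a queued task with an idle resource.
    void scheduleOnce() {
        if (!taskQueue.isEmpty() && !available.isEmpty()) {
            Task t = taskQueue.poll();
            Resource r = available.poll();
            resultQueue.add(new Result(t.id(), r.id())); // pretend it ran; queue result
            available.add(r);                            // resource is idle again
        }
    }
}
```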

There are a number of ways this architecture can be realized, but the one thing to keep in mind is that network queuing theory plays a major role. If the influx of tasks is greater than the rate at which your processing engine can off-load them, the client queue will get backed up and you will start to lose tasks. This is the same behavior you would see in a router placed in a network with large amounts of data transfer. Congestion control is implicit in the case of a resource manager because a resource only becomes ready, and requests the next task, once its current task has been completed. This makes the environment easier to reason about: a backlog of tasks waiting to be processed is a clear indication that you need to add more resources to handle the heavy load of incoming tasks.
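A small sketch of that queuing behavior, assuming a bounded client queue (the capacity of 4 is arbitrary): when tasks arrive faster than they drain, the excess is simply lost.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch: a bounded client queue. If tasks arrive faster than the
// processing engine drains them, offer() fails and tasks are lost.
public class BackpressureDemo {
    public static void main(String[] args) {
        BlockingQueue<String> clientQueue = new ArrayBlockingQueue<>(4);
        int lost = 0;
        for (int i = 1; i <= 10; i++) {           // burst of 10 tasks, nothing draining
            if (!clientQueue.offer("task-" + i)) {
                lost++;                            // queue full: task dropped
            }
        }
        System.out.println("queued=" + clientQueue.size() + ", lost=" + lost);
    }
}
```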

Your goal in this article is not to build a resource manager, but rather to gain a clearer understanding of how one actually works and what its main components are. Focus on the overall flow for now; you will delve into the details in the subsequent sections.

The flow is something like the following (a sketch of this lifecycle appears after the list):

  1. Resources log on to the Grid resource manager.
  2. Basic resource information is sent to the resource manager, such as OS type, amount of free memory, number of CPUs, and a number of other parameters that depend on the Resource Manager involved.
  3. Data and any updates are synchronized between the resource manager and the resource.
  4. The resource goes into a waiting queue, ready to be assigned a task.
  5. The resource manager updates the table of available resources with the new resource.
  6. The scheduling engine assigns a task to the resource if and when a new task is available.
  7. The resource gets the task and the data, loads the appropriate service, and executes the task.
  8. The task result is sent back to the client.
  9. The resource is ready for another task.
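The nine steps, compressed into a hypothetical Java sketch; none of the method or message names come from a real Grid product:

```java
// The nine-step flow as a resource-side state machine. Every name here
// is invented; real Grid middleware defines its own protocol.
public class ResourceLifecycle {
    enum State { WAITING, RUNNING }

    State state;

    void logOn(GridResourceManager rm, ResourceInfo info) {
        rm.register(info);        // steps 1-2: log on, send OS/memory/CPU info
        rm.synchronize(info);     // step 3: sync data and updates
        state = State.WAITING;    // steps 4-5: waiting queue; manager updates its table
    }

    void run(Task task) {
        state = State.RUNNING;              // step 6: scheduler assigned a task
        Object result = task.execute();     // step 7: load the service, execute
        task.client().send(result);         // step 8: result back to the client
        state = State.WAITING;              // step 9: ready for another task
    }

    // Placeholder types so the sketch is self-contained.
    interface GridResourceManager { void register(ResourceInfo i); void synchronize(ResourceInfo i); }
    interface ResourceInfo {}
    interface Client { void send(Object result); }
    interface Task { Object execute(); Client client(); }
}
```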

Resources’ Information Database

This is not really a database in most cases, because it is subject to a high degree of change. As resources come and go, this database must be updated. The information feeds directly into the scheduling engine used to schedule tasks onto the available resources.

A resource needs to make itself available for it to be scheduled. When the resource logs on to the Grid, it sends its information to the resource manager. The resource manager uses this information to figure out what kinds of tasks can run on that specific resource. For example, if you are talking about a Windows-based server, perhaps you can run Java and Windows-compiled applications. For Linux, the situation is a bit different. The same applies to the number of processors available, the amount of memory, the version of the OS, and many other system parameters that are used by the resource manager, and later by the scheduler, to determine the suitability of that resource for a given task.
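A sketch of what such a logon report might look like; the field names and the canRun logic are assumptions for illustration, not any vendor's format:

```java
import java.util.Map;

// Sketch of the information a resource might report when it logs on.
// Field names are illustrative; every Resource Manager defines its own set.
public record ResourceDescriptor(
        String osType,             // e.g. "Windows", "Linux"
        String osVersion,
        int cpuCount,
        long freeMemoryMb,
        Map<String, String> extras // vendor-specific parameters
) {
    // A Windows box might accept Java and Windows-compiled tasks; Linux differs.
    boolean canRun(String taskKind) {
        if (taskKind.equals("java"))  return true;                  // assumes a JVM everywhere
        if (taskKind.equals("win32")) return osType.equals("Windows");
        if (taskKind.equals("elf"))   return osType.equals("Linux");
        return false;
    }
}
```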

The difficulty arises when you try to find the best fit possible, and it grows with the number of system parameters you use to determine how fit a resource really is. The number of parameters is directly proportional to the scheduling granularity and complexity. If the scheduler must take many parameters into consideration before making a final decision, this adds overhead to the scheduler and increases system complexity. One way this is mitigated is through the use of hashtables and lookup tables, so that information can be found faster and more efficient updates are possible.
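One way to realize the lookup-table idea is to bucket idle resources under a coarse capability key, turning the scheduler's search into a hash lookup instead of a linear scan. A minimal sketch, where keying on OS type alone is a simplifying assumption:

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;

// Sketch of the lookup-table idea: bucket idle resources by a coarse key
// (here just the OS type) so matching is a hash lookup, not a full scan.
public class CapabilityIndex {
    private final Map<String, Queue<String>> idleByOs = new HashMap<>();

    void markIdle(String osType, String resourceId) {
        idleByOs.computeIfAbsent(osType, k -> new ArrayDeque<>()).add(resourceId);
    }

    // Returns a matching idle resource id, or null if none is available.
    String claim(String requiredOs) {
        Queue<String> bucket = idleByOs.get(requiredOs);
        return (bucket == null) ? null : bucket.poll();
    }
}
```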

Clients’ Information Database

This is a bit simpler because the number of clients is usually much smaller than the number of resources in a given Grid infrastructure. You may have a couple thousand resources, but only a handful of clients accessing and using them. From the perspective of client manageability, the resource manager has a simpler task. From the point of view of high availability, however, that is not the case. Resource managers follow, or at least should follow, a “fire and forget” methodology. This means that a client sends a task to the resource manager and, from that moment on, the resource manager is in charge of making sure that the task gets done. This poses a number of challenges for the resource manager. The way this problem is solved varies from vendor to vendor, and, as always, the solution involves a performance vs. reliability tradeoff.
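The reliability half of that tradeoff is often addressed by persisting a task before acknowledging it, so the manager can recover and retry after a crash. A sketch under that assumption, with an invented DurableStore interface; paying a durable write on every submit is exactly the performance cost being traded:

```java
// Sketch of the "fire and forget" contract: the manager persists the task
// before acknowledging, so it can retry after a crash. All names invented.
public class FireAndForgetFrontEnd {
    interface DurableStore { void save(String taskId, byte[] payload); }

    private final DurableStore store;

    FireAndForgetFrontEnd(DurableStore store) { this.store = store; }

    // Returns only after the task is durable; the client may now "forget" it.
    String submit(byte[] payload) {
        String taskId = java.util.UUID.randomUUID().toString();
        store.save(taskId, payload);   // reliability: survives a manager crash
        return taskId;                 // acknowledgment to the client
    }
}
```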

Scheduling Process

This is where the rubber meets the road. You have client demands; you have resources; what you need to do now is match your supply with your demand. Sounds easy enough, don't you think? It is, for the most part, easy. As mentioned before, the number of criteria you use to reach the best decision directly impacts the complexity of your system. The more criteria, the more complex the scheduler will be. Generally speaking, you need to worry about the type of the target Operating System, the CPU type, the amount of memory, and available resources such as access to a file system or database. At the end of the day, a scheduler is just one big finite state machine (FSM), with the input being the task and the final state being the target resource that gets assigned that task.
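The matching step can be sketched as a filter over exactly the criteria listed above (OS type, CPUs, memory), taking the first resource that fits. All names here are illustrative, not any vendor's API:

```java
import java.util.List;
import java.util.Optional;

// Sketch of the matching step: filter idle resources by the task's
// requirements, then take the first fit.
public class Matcher {
    record Requirements(String os, int cpus, long memoryMb) {}
    record Candidate(String id, String os, int cpus, long memoryMb) {}

    static Optional<Candidate> firstFit(Requirements req, List<Candidate> idle) {
        return idle.stream()
                .filter(c -> c.os().equals(req.os()))          // OS must match
                .filter(c -> c.cpus() >= req.cpus())           // enough processors
                .filter(c -> c.memoryMb() >= req.memoryMb())   // enough memory
                .findFirst();
    }
}
```

Each added filter is one more criterion, which illustrates how scheduling granularity and complexity grow together.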

A number of vendors give you the capability to tune the scheduler. Things such as the protocol used for communication between the components, the amount of memory to consume whilst scheduling, and the ability to provide user-based filtering mechanisms are just some examples. Keep in mind that scheduling is, and will remain, a difficult problem (NP-complete, for those interested) and it can only be solved heuristically. This basically means that schedulers make some basic assumptions and then decide what request goes to what resource. When you choose a resource manager, make sure that the vendor allows you to change the assumptions used to solve the optimization problem that is scheduling.
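What such tunables might look like, gathered into one illustrative configuration record; none of these knob names belong to a real product:

```java
// Illustration of the kinds of knobs a vendor might expose for tuning
// the scheduler; all names and defaults here are hypothetical.
public record SchedulerConfig(
        String transportProtocol,   // e.g. "tcp" or "http" between components
        long maxSchedulerMemoryMb,  // memory budget while scheduling
        boolean userFilteringOn,    // enable user-based filtering hooks
        double startupWeight,       // heuristic weight on startup time
        double completionWeight     // heuristic weight on completion time
) {
    static SchedulerConfig defaults() {
        return new SchedulerConfig("tcp", 512, false, 0.3, 0.7);
    }
}
```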

Summary

I have started to talk about the components that make up a Grid. The focus was mainly on the resource manager at this point and, as you probably noticed, it is not an easy problem to tackle. Each implementation of a resource manager varies; each vendor likes to add its own flavor to the mix and tackles the problem a little differently than the next. Keep in mind that a solution that worked a few years back may not apply today. There are a number of advancements in the field of Grid and High Performance Computing, and you are only beginning to scratch the surface. Keep your eyes open for the new advancements and changes occurring in this field almost every day.

Endnote

1 I use the terms scheduler and resource manager interchangeably throughout this article. Although scheduling (and the scheduler subsystem) is only one part of the resource manager, it is the most important part.
