
CE Power Conservation Strategies

  • By Nancy Nicolaisen

We began this group of examples with stern admonishments about keeping things lightweight: Use memory sparingly and efficiently, release allocations as quickly as possible, and find ways to condense the lavish desktop Windows feature set to which we've become accustomed. With a bit of experience, it becomes clear that practices such as these are the only way to keep from running out of space on a CE device. There is another, equally dire, circumstance we must avoid: Running out of "juice."

Consider this. A pair of double As powers the CE devices I own, with a watch battery for backup. Both run MIPS RISC series processors, support PC card storage and various peripherals, and integrate modem access to networks. To my way of thinking, this is next door to druid magic, and, in fact, the whole setup does rely to a certain extent on a bit of "smoke and mirrors": the Windows CE power management strategy. It takes an extraordinary amount of parsimony to keep a sophisticated computing device running for weeks at a time on wattage that would scarcely keep a flashlight burning for a couple of hours. Ultimately, all of our conservation efforts under CE share the aim of making the most of a very limited power supply.

In this lesson, we explore the application programmer's largely collaborative role in extending the battery life of the device. The task sounds overwhelming, but it is mostly a matter of staying out of the way and allowing CE's inherently low-power design to do its work.

CE Power Management Design Assumptions

Power conservation is so important that, for the most part, the CE operating system takes it out of our hands. At the heart of the Windows CE design is a power management mechanism that puts the device into the lowest power consumption mode that is appropriate to the current state of its use. For example, when a device is "off," it consumes only enough power to maintain volatile memory. When the device is actively executing applications, it enters a low power consumption mode whenever the processor is idle. Surprisingly enough, even a busy device idles the processor around 90% of the time.

The original architects of CE counted on certain power consumption patterns when they designed the operating system. Here is a summary of the original design assumptions:

  • The device is in use less than two hours per day.
  • The duration of individual device usage events ranges from five minutes to one hour.
  • The display is fully powered when the device is in use.
  • When the device is in use, the processor is busy 10% of the time, idle 90% of the time.
  • The device has main batteries plus a backup battery.
  • There is no nonvolatile, writable memory.
  • Battery life targets are calculated without accounting for add-in devices such as PC cards or Flash storage.

In a nutshell, CE devices prolong battery life by doing nothing as often as possible! Typical Windows CE applications collaborate in this scheme passively, employing a simple and elegant strategy. Here's how it works.

CE's Blocking Message Loop

Let's start by defining a few terms. A process is a single instance of an executing application. Each process has its own address space, in which reside its code and data. A thread is a unit of execution, and each process has at least one thread. Threads belong to a specific process and share the memory resources of the process that creates them. In a multitasking system, individual threads run consecutively, each for a small increment of time, or timeslice. Threads are allotted a timeslice based on the operating system's scheduling algorithm.

The scheduling algorithm is designed to give preference to time-critical threads (device drivers, for example) and to defer lower priority threads. By default, application threads are fairly low priority and receive a brief but adequate timeslice. When a thread isn't scheduled, it is blocked from executing. A blocked thread uses no processor resources. From a user's viewpoint, the overall effect of consecutively scheduling threads is that the system runs smoothly, apparently running end user applications and handling operating system housekeeping "simultaneously." If all goes as planned, it's not unlike animation—at 16 frames per second, Mickey Mouse sails his tugboat serenely into the sunset.

There are two key points here, and they are somewhat subtle, so we'll emphasize them:

  • In a multitasking environment, threads that "hog" the processor by inflating their priority or the duration of their timeslice will degrade system performance because other threads "starve." The appearance of simultaneity depends on each thread getting frequent, short timeslices.
  • In a CE system, threads that "hog" the processor keep it from idling and will rapidly drain power supplies.

In a typical, single-threaded Win32 application, we don't have to worry much about either of these problems, because the GetMessage() function that we call in our message processing code automatically blocks the application's one and only thread whenever it is waiting for a message to be delivered. In practice, the thread spends far more than the assumed 90% of its time blocked, so simply doing the usual thing conserves battery power.

// Main message loop blocks the thread if there are no messages:
while (GetMessage(&msg, NULL, 0, 0))
   if (!TranslateAccelerator(msg.hwnd, hAccelTable, &msg))
   {
      TranslateMessage(&msg);
      DispatchMessage(&msg);
   }

By contrast, the function PeekMessage() doesn't block the application's single thread, and a loop like this one will dramatically reduce battery life:


   if (!PeekMessage(&msg, NULL, 0, 0, PM_REMOVE))
   {
      //doing something or other...
   }


This article was originally published on May 12, 2004
