There are a lot of things that you can do to improve your Win32 application’s runtime memory use under CE. However, you can also reap some fairly impressive gains in memory efficiency simply by being aware of where the linker puts data in your program’s executable.
An executable file basically has two kinds of things in it: executable code and data. On all of the Windows hosts, code is “pure,” meaning that you can’t modify it at runtime. For this reason, the part of a Windows executable file that contains the code is designated “read only,” so it is of little concern to us here. The rest of the executable file is devoted to data, of which there are two kinds: read-only data, such as literal strings and constants; and read/write data, which is usually either data with global scope or data defined by using the static keyword.
Because access permissions are set on a page-by-page basis once the program is in memory, these data storage areas in your program have to be created in page-sized increments. At a minimum, your executable file contains one page for read-only data and one page for read/write data. This means there may well be some unused space in the data storage areas of your executable file, and you could profit by using it instead of making allocation requests.
On the other side of the coin, you may discover that one or the other of your data sections is just slightly larger than one page, which causes the allocation of a lot of unnecessary space. Moving things around a bit may shrink your data sections and get you in under that critical threshold.
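The page arithmetic behind this is worth making concrete. Here is a minimal sketch; the helper names and the 1 Kb `PAGE_SIZE` are assumptions for illustration (many early CE devices used 1 Kb pages; others use 4 Kb), not part of any CE API:

```c
#include <stddef.h>

/* Assumed page size for illustration; query the real value at runtime
   with GetSystemInfo() on an actual device. */
#define PAGE_SIZE 1024

/* Pages needed to hold a section of the given length, rounded up. */
size_t pages_needed(size_t section_bytes)
{
    return (section_bytes + PAGE_SIZE - 1) / PAGE_SIZE;
}

/* Unused bytes left in the last page of the section. */
size_t slack_bytes(size_t section_bytes)
{
    return pages_needed(section_bytes) * PAGE_SIZE - section_bytes;
}
```

A section that is just one byte over a page boundary costs an entire extra page, which is why trimming or repacking data near that threshold pays off.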
So, how do you find out how your application file lays out the data sections?
Set the linker output option for generating a mapfile; this contains data section names, lengths, and base addresses.
Here’s how to interpret the parts of the link map that show how your program’s data is organized.
Figure 1: An Abbreviated Link Map
ExeFileMem

 Timestamp is 3bbcb82d (Thu Oct 04 13:27:41 2001)

 Preferred load address is 00010000

 Start         Length     Name      Class
 0001:00000000 00000c48H  .text     CODE
 0002:00000000 00000054H  .rdata    DATA
 0002:00000054 00000028H  .idata$2  DATA
 0002:0000007c 00000014H  .idata$3  DATA
 0002:00000090 00000088H  .idata$4  DATA
 0002:00000118 0000023cH  .idata$6  DATA
 0002:00000354 00000000H  .edata    DATA
 0003:00000000 00000088H  .idata$5  DATA
 0003:00000088 00000004H  .CRT$XCA  DATA
 0003:0000008c 00000004H  .CRT$XCZ  DATA
 0003:00000090 00000004H  .CRT$XIA  DATA
 0003:00000094 00000004H  .CRT$XIZ  DATA
 0003:00000098 00000004H  .CRT$XPA  DATA
 0003:0000009c 00000004H  .CRT$XPZ  DATA
 0003:000000a0 00000004H  .CRT$XTA  DATA
 0003:000000a4 00000004H  .CRT$XTZ  DATA
 0003:000000a8 00000010H  .data     DATA
 0003:000000b8 00000810H  .bss      DATA
 0004:00000000 00000104H  .pdata    DATA
 0005:00000000 000001f0H  .rsrc$01  DATA
 0005:000001f0 000005d4H  .rsrc$02  DATA
First, let’s look at the two places read-only data is stored: the .rdata section and the resource data sections, .rsrc$01 and .rsrc$02.
Here are a few lines from the ExeFileMem example. Basically, it is a typical “Hello World” generated application, but with some data declarations added for the purposes of producing the mapfile above.
// Global Variables:
HINSTANCE hInst;          // The current instance
HWND hwndCB;              // The command bar handle

//make a large global allocation
int iLargeIntArray[512];

//the const keyword means this
//datum is read-only
const int iConstDataItem = 1;
Notice the last declaration, iConstDataItem. This variable is declared with the const keyword, which makes it non-modifiable.
Porting Tip: Non-modifiable data is placed in the program section .rdata.
We’ve seen repeatedly that memory allocation operations typically have a granularity of one page. To designate an item or group of items as “read only,” they have to reside together in their own page(s), because a page is the smallest increment on which access permissions can be set. Any datum that we declare with the const keyword has to end up in a read-only data section, because that’s the only way to ensure it is protected from modification.
Now, examine the map file line that gives the length of the .rdata section:
 Start         Length     Name    Class
 0002:00000000 00000054H  .rdata  DATA
The length of this section is 54h (84 decimal); however, when loaded, this section will consume an entire page of program memory. Yikes! We need to find some other non-modifiable data we can move to this page, or this space will be wasted.
The first possibility that springs to mind is string data; most applications have a good deal of it, stored as resources. Resources have their own section in the executable file layout. Here are the map file lines that apply to them.
 Start         Length     Name      Class
 0005:00000000 000001f0H  .rsrc$01  DATA
 0005:000001f0 000005d4H  .rsrc$02  DATA
There’s a good reason not to move string data out of the resource files and into the static data area. CE provides us with a special version of the LoadString() function used to load string resources that allows you to read the string in place. Calling LoadString() like this returns a pointer to the string:
LPCTSTR pStringResource;

//if the buffer parameter is NULL, LoadString returns a
//pointer to the constant string, in place
pStringResource = (LPCTSTR)LoadString( hInstance, IDS_STRING, NULL, 0 );
You can use the string, but you can’t write back to it. Always use this function to load string resources you don’t intend to modify.
There are lots of other possibilities for moving data to the .rdata section, but leave string resources in the resource section so you can use the CE version of LoadString().
Read/Write Data Sections
Any item that has greater than function-level scope or is declared with the static keyword resides in the read/write data sections of the executable file. In addition, the loader requires access to something less than 100 bytes of this data area.
There are two kinds of read/write data in your program: the initialized items live in the .data section, and the uninitialized items live in the .bss section. These sections share a region of memory, so the sum of their lengths is what matters, not the individual sizes. Here are their lines in the map file:
 Start         Length     Name   Class
 0003:000000a8 00000010H  .data  DATA
 0003:000000b8 00000810H  .bss   DATA
The sum of their lengths is 820h, or 2080 decimal. Again, you can see that we are just a bit beyond the next lowest page size increment, 2048. At runtime, it will require three pages (assuming a 1 Kb page size) to store this data. If we could move an item or two into stack-based storage, or possibly make something constant, we could save a page.
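Because .data and .bss share one region, the page count is driven by their combined length. A small sketch of that calculation (the helper name and 1 Kb page size are illustrative assumptions):

```c
#include <stddef.h>

#define PAGE_SIZE 1024  /* assumed 1 Kb CE page size, for illustration */

/* Pages consumed by the shared read/write region: .data plus .bss. */
size_t rw_pages(size_t data_len, size_t bss_len)
{
    size_t total = data_len + bss_len;
    return (total + PAGE_SIZE - 1) / PAGE_SIZE;
}
```

With the lengths from the map file (10h + 810h = 820h, or 2080 bytes), the region costs three pages; trimming just 32 bytes brings the total to 2048 and saves a whole page.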
Porting Tip: Here is a good, rough-cut checklist for squeezing wasted memory out of your application’s executable file:
- Explicitly declare all data that is functionally constant by using the const keyword.
- Recompile and see how use of const changes the size of the sections.
- Adjust the size of the read/write data section: Shrink it by moving invariant data to const and other data to the stack. Fill it by using leftover space for buffering that otherwise would have been allocated from the heap.
- Leave 100 free bytes in the read/write section for use by the loader.
- Use CE-style LoadString() to access string resources in place.
- Put comments in all your source files that remind you to look at a load map if you change code.
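The checklist item about filling leftover read/write space can be sketched as follows. The buffer name and its 900-byte size are hypothetical, chosen to soak up slack you might observe in your own map file; the point is that a static buffer lands in .bss and uses page space you are already paying for, instead of a heap allocation:

```c
#include <string.h>

/* Hypothetical scratch buffer sized to fill slack in the read/write
   data section; it lands in .bss and costs no additional pages. */
static char g_scratch[900];

/* Copy a message into the static buffer where we might otherwise
   have allocated from the heap. */
char *format_status(const char *msg)
{
    strncpy(g_scratch, msg, sizeof(g_scratch) - 1);
    g_scratch[sizeof(g_scratch) - 1] = '\0';
    return g_scratch;
}
```

Check the map file after adding such a buffer: if the .data/.bss total crosses a page boundary, the buffer is costing you a page rather than filling one.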
Low Memory Conditions
Even when we’ve done all we can to make our program memory efficient, memory can become dangerously low. When making allocation requests, you must assume that they can fail, and make provisions for handling the failure gracefully. You should always test the return value of a memory allocation function.
Porting Tip: Search your code for allocation function calls, and make sure all returns are tested. Provide a handler for graceful cleanup if allocation fails.
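The pattern the tip describes is simple but easy to let slip. A minimal sketch, using malloc() as a stand-in for whichever CE allocator you actually call (LocalAlloc, HeapAlloc, VirtualAlloc); the function name is hypothetical:

```c
#include <stdio.h>
#include <stdlib.h>

/* Every allocation is tested; failure routes to graceful handling
   instead of a crash. Returns 1 on success, 0 on failure. */
int load_record(size_t record_size)
{
    char *buf = malloc(record_size);
    if (buf == NULL) {
        /* graceful handling: report, release partial work, bail out */
        fprintf(stderr, "allocation of %lu bytes failed\n",
                (unsigned long)record_size);
        return 0;
    }
    /* ... use buf ... */
    free(buf);
    return 1;
}
```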
Running out of memory is a more serious (and more likely) situation on CE than it is on any other Windows platform. There is no disk-based virtual memory to swap pages to, so the operating system constantly monitors the allocation status of physical memory. CE uses several mechanisms to conserve memory as it becomes more scarce, but for our purposes two of these are important: allocation request filtering and the WM_HIBERNATE message.
Allocation Request Filtering
Allocation request filtering prevents a single application from taking all available memory with a single large allocation request. Basically, it does this by enforcing a flexible upper limit on the amount of memory an application can request. As memory becomes scarcer, the system lowers the limit on maximum allocation size.
You may see the effects of allocation filtering if you attempt to make an allocation with VirtualAlloc() and it fails even though the result of a call to VirtualQuery() seems to show that there is enough free memory to satisfy your request. In fact, in any kind of low-memory situation, VirtualAlloc family functions will fail first, so always test returns and provide a failure handler.
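One way to live with allocation request filtering is to retry a failed request at progressively smaller sizes until you reach the minimum you can work with. A sketch of that fallback, with malloc() standing in for VirtualAlloc() and the function name being hypothetical:

```c
#include <stdlib.h>

/* Try to allocate 'want' bytes; on failure, halve the request until
   it succeeds or falls below 'minimum'. Reports the size actually
   obtained through 'got'. Caller must free() a non-NULL result. */
void *alloc_or_shrink(size_t want, size_t minimum, size_t *got)
{
    size_t size = want;
    while (size >= minimum) {
        void *p = malloc(size);
        if (p != NULL) {
            *got = size;
            return p;
        }
        size /= 2;  /* filtering may allow a smaller request */
    }
    *got = 0;
    return NULL;    /* caller handles total failure gracefully */
}
```

Designing buffers so they work at a range of sizes makes this kind of degradation painless.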
Warning Applications About Low Memory With the WM_HIBERNATE Message
Scarcity of memory is a situation that is always looming on a CE device. To make the memory-constrained environment manageable, three things need to happen:
- Applications must strive to be memory efficient.
- The system must notify applications when a memory crisis is developing.
- The applications must have a chance to clean up and gracefully relinquish whatever memory they can.
The WM_HIBERNATE message is CE’s way of giving notice to applications that the system is entering a low-memory state. For a notification about a memory shortage to be useful, you must get it in time to do something about it. For example, the application may need to tell the user to close files, to relinquish whatever memory it can, and to save its state in preparation for an orderly close. For this reason, applications begin to get the WM_HIBERNATE message when available memory falls below the “hibernation threshold,” which is about 128 Kb (this threshold is set slightly higher if the page size is greater than 1 Kb). At 64 Kb of free memory, we cross the “low memory” threshold: the system more aggressively requests active applications to hibernate, attempts to free pages in the system heap, and, as a last resort, tries to shrink the stack. Finally, at about 16 Kb, the system displays the “out of memory” dialog and begins forcibly closing down applications.
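The cascade of thresholds just described can be summarized in a few lines of C. The enum names and function are illustrative only, and the values assume a 1 Kb page size, as the text notes:

```c
/* Hypothetical encoding of the CE free-memory thresholds
   (approximate values, assuming a 1 Kb page size). */
typedef enum {
    MEM_OK,         /* plenty of free memory              */
    MEM_HIBERNATE,  /* < 128 Kb: WM_HIBERNATE sent        */
    MEM_LOW,        /* <  64 Kb: aggressive reclaim       */
    MEM_CRITICAL    /* <  16 Kb: out-of-memory dialog,    */
                    /*           forced application close */
} MemState;

MemState mem_state(unsigned long free_kb)
{
    if (free_kb < 16)  return MEM_CRITICAL;
    if (free_kb < 64)  return MEM_LOW;
    if (free_kb < 128) return MEM_HIBERNATE;
    return MEM_OK;
}
```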
When an application first receives the hibernate notification, chances are good that you can clean up, relinquish memory, and prepare for possible close. In all likelihood, you’ll still be able to allocate small amounts of memory from your local heap—remember, WM_HIBERNATE is sent as a result of a page allocation crisis, but that doesn’t mean that all committed pages are full. If you design your code so that you are never dependent (or at least not for long) on holding large allocations of memory, you can respond to WM_HIBERNATE by completing operations that are in progress, freeing memory, and possibly saving application state.
Porting Tip: If you have been reasonably careful about your use of allocated memory, responding to WM_HIBERNATE may not be too difficult.
When you get a WM_HIBERNATE message, your first response should be to free any allocations made with VirtualAlloc(). These pages go back to the allocation pool immediately, and so may resolve the immediate crisis. Next, if possible, finish any pending operations that make use of a private heap, and relinquish the heap. Freeing memory devoted to a private heap also returns pages to the allocation pool immediately, so freeing a private heap may be worth incurring some data loss.
Graphic objects, especially bitmaps, consume large amounts of memory, so discard those, along with created brushes, pens, and device contexts next.
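A hibernate handler built along these lines might look like the following sketch. The cache structure is hypothetical, and free() stands in for the real release calls a CE application would make (VirtualFree, HeapDestroy, DeleteObject, DeleteDC):

```c
#include <stdlib.h>

/* Hypothetical registry of an application's reclaimable allocations. */
typedef struct {
    void *big_buffer;   /* stands in for a VirtualAlloc'd region   */
    void *cached_data;  /* stands in for private-heap contents     */
} AppCaches;

/* Release everything reclaimable, largest first, and report how many
   allocations were returned to the system. Safe to call repeatedly. */
size_t on_hibernate(AppCaches *c)
{
    size_t freed = 0;
    if (c->big_buffer != NULL) {
        free(c->big_buffer);        /* VirtualFree in a real app    */
        c->big_buffer = NULL;
        freed++;
    }
    if (c->cached_data != NULL) {
        free(c->cached_data);       /* HeapDestroy in a real app    */
        c->cached_data = NULL;
        freed++;
    }
    return freed;
}
```

Nulling each pointer as it is released keeps the handler idempotent, which matters because the system may send WM_HIBERNATE more than once as conditions worsen.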
If you want to warn the user that a low memory situation exists when you first get the WM_HIBERNATE message, you can invoke the system low memory dialog using this function:
int SHShowOutOfMemory(HWND hwndOwner, UINT grfFlags);
The parameters, in the order shown, are the handle to the window that will serve as the parent of the dialog, and a reserved value that must be set to zero. Also, though the declaration shows an integer return, there is no meaningful return value. The advantage of using this function is that it comes to you with absolutely no overhead. Putting this dialog up may cause the user to close another application, which may in turn solve your memory shortage problems. At the very least, it provides feedback and won’t consume additional memory.
In the next installment, you’ll see how to use CE’s memory reconnaissance tools to monitor your application’s exposure to low-memory conditions.
About the Author
Nancy Nicolaisen is a software engineer who has designed and implemented highly modular Windows CE products that include features such as full remote diagnostics, CE-side data compression, dynamically constructed user interface, automatic screen size detection, and entry-time data validation.
In addition to writing for Developer.com, she has written several books including Making Win 32 Applications Mobile.
# # #