Small Memory Allocation
Memory allocation looks like an easy subject, but only apparently so: memory management has always been a notorious source of trouble in C/C++. The problem is exacerbated even further on platforms with (very) limited memory, such as BREW.
Both heap and stack memory are scarce resources in BREW—phones' specifications are extremely convincing documents in this respect. That's why memory allocation (a terminology note: allocation implies deallocation) is of paramount importance. Our coverage of this subject is guided by the patterns described in , referred to throughout the article in capitals.
We will review the main techniques, revisit some of the 'usual suspects,' and offer possible solutions to memory fragmentation, access speed, and exhaustion issues.
Automatic storage (MEMORY DISCARD) plays a central role in all handling idioms and has definite advantages:
- It is fast (this is essentially compile-time memory: all allocations amount to mere pointer arithmetic).
- It is safe to use (provides automatic allocation/deallocation).
Being so fast should make it very appealing for BREW use (special allocators have been designed to exploit this memory for very fast access ). Unfortunately, this is not the BREW way. The stack is so small that recursion is not recommended, and substantial memory allocations have to be made from the heap. But automatic handling of C++ destruction makes stack allocation an indispensable tool for safe resource manipulation, better known as RAII (the "resource acquisition is initialization" technique, TC++PL3 section 14.4). In previous articles, we presented several examples using this technique—BrewVector and BrewString are above all safe wrappers around memory pools. Later in this article, we'll see how to generalize this technique and use it to increase memory deallocation efficiency.
MEMORY DISCARD usage is based on placement new. Please note that different signatures are possible for placement new:
void* operator new(size_t nbytes, Allocator* pool);
void* operator new(size_t nbytes, Allocator& pool);
There is a lesser-known placement delete that matches placement new:
void operator delete(void* v, Allocator* pool);
and so on, one for each placement new signature.
If an exception is thrown during object initialization as part of a placement new call—for example, during constructor invocation on a class object instance—a matching placement delete is called, with the same arguments and values as the placement new.
FIXED ALLOCATION is a very interesting technique, based on allocating all the necessary memory at the beginning of the program and releasing it at the end, so there is no need for temporary or intermediate memory allocations. There are definite benefits here:
- It is safe—obviously this is an all-or-nothing technique.
- It is deterministic and fast—no heap allocations.
Of course, in most cases this idiom is unacceptable on its own, but notably FIXED ALLOCATION can easily be combined with other techniques. Possible scenarios are based on so-called "rainy day funds"—pre-allocated pools of memory that guarantee execution of critical paths of code. An interesting example reported in  is pre-allocating all the objects needed for a critical operation (dialing 911), thus guaranteeing its availability. As explained later, FIXED ALLOCATION is also central to the instrumentation of allocators.
VARIABLE ALLOCATION is synonymous with what is currently known as heap allocation. This is the technique most used in BREW—based on helper functions from the MALLOC/FREE family, or on global/class new/delete implemented on top of the same MALLOC/FREE. There are a number of issues with VARIABLE ALLOCATION:
1. Usually, VARIABLE ALLOCATION incurs memory overhead as well as processor-time overhead
2. Memory exhaustion
3. Memory fragmentation
Items 2 and 3 are of particular interest to our discussion, but some of the techniques presented are pertinent to Item 1, too. Memory exhaustion is generally strongly related to deallocation (or lack of it) and it will be analyzed later.
Memory fragmentation is a significant problem with VARIABLE ALLOCATION. There are two kinds of fragmentation:
- Internal—memory waste inside an allocated structure
- External—memory waste between two allocated structures, as this space is too small to be reused
Interestingly enough, internal and external fragmentation are in balance; in other words, trying to decrease external fragmentation results in an increase in internal fragmentation, complicating the matter even more.
Obviously, the main concern is external fragmentation and there are different strategies to deal with it, all implying first of all that there is a way to measure fragmentation:
- Do nothing if you are not affected. This is the simplest advice and the most effective for small programs or when other allocation techniques (see FIXED ALLOCATION) are available.
- Try to keep allocations/deallocations together.
- Avoid interleaving large blocks with small ones.
- Avoid allocating other objects before releasing large temporaries (see Qualcomm BREW Online Knowledge Base for an example). That's why realloc might not be your friend.
- Use special techniques. And here is where allocators come in handy. Usually, memory fragmentation is the result of small blocks (often temporaries) forming a blocking pattern between larger blocks. Allocating small objects from a separate heap allows more control over the process. Another important raison d'être of allocators is speed—replacing standard allocators with fine-tuned, fast memory factories. On top of this, using specialized allocators for small objects has the added benefit of avoiding extra per-object overhead (see  for a description and a full implementation). MEMORY DISCARD and POOLED ALLOCATION are examples of such techniques.