
More Game Coding Tidbits and Style That Saved My Butt

  • April 1, 2005
  • By Mike McShaffry

Using Memory Mapped Files

A memory mapped file maps a region of virtual address space onto a file. The entire contents of the file can then be read and written as easily as if they were entirely in memory. Win32 and Linux both support memory mapped files. Instead of writing your own low-level memory cache, use memory mapped files to load any resources or data into your game. Since Windows uses memory mapping to load .EXE and .DLL files, it's a fair bet that the algorithm is fairly efficient. These files are especially useful for larger data sets that come in chunks of 512KB or more. Don't use them for anything smaller. Check your system for the default size of a virtual memory page; that is the smallest chunk of memory a memory mapped file manipulates.
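
On a POSIX system you can query that page size directly. This is a minimal sketch (the function name is mine, not from any API); on Win32 the equivalent query is GetSystemInfo:

```cpp
#include <unistd.h>   // sysconf (POSIX)

// The smallest chunk a memory mapped file works in is one virtual
// memory page. On Win32, GetSystemInfo reports the page size and the
// (larger) allocation granularity that MapViewOfFile offsets must obey.
long GetPageSize()
{
    return sysconf(_SC_PAGESIZE);
}
```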

Under Win32, you can open a read-only memory mapped file with this code:

//Open a memory mapped file
//Please forgive the lack of error checking code,
//you should always check return values.
//"myfile.dat" is a placeholder filename.

HANDLE fileHandle = CreateFile(
   "myfile.dat",
   GENERIC_READ,
   FILE_SHARE_READ,
   NULL,
   OPEN_EXISTING,           //the file must already exist
   FILE_ATTRIBUTE_NORMAL,
   NULL);

//Create a mapping object from the file

HANDLE fileMapping = CreateFileMapping(
   fileHandle,
   NULL,
   PAGE_READONLY,
   0, 0,
   NULL);

//Map the file to memory pointer

char *fileBase = (char *) MapViewOfFile(
   fileMapping,
   FILE_MAP_READ,
   0, 0,
   0);


Once you have a valid memory map, you can read from the file by accessing the variable fileBase, as shown here:

//Read the first 50 bytes and copy them to dest.
memcpy(dest, fileBase, 50);

The operating system doesn't copy the entire contents of the file into memory. Instead, it uses the same code used to manage virtual memory. The only thing that is different is the name of the file on secondary storage.
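
On Linux the same technique goes through the POSIX mmap API. Here's a sketch of an equivalent to the Win32 code above; the helper name MapFileReadOnly is illustrative, and as before the error checking is minimal:

```cpp
#include <sys/mman.h>   // mmap, munmap
#include <sys/stat.h>   // fstat
#include <fcntl.h>      // open
#include <unistd.h>     // close
#include <cstddef>      // size_t

// Map an existing file read-only; returns the base pointer or nullptr.
// outSize receives the file length so the caller can munmap it later.
char *MapFileReadOnly(const char *path, size_t *outSize)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return nullptr;

    struct stat st;
    fstat(fd, &st);
    *outSize = (size_t) st.st_size;

    char *fileBase = (char *) mmap(nullptr, st.st_size, PROT_READ,
                                   MAP_PRIVATE, fd, 0);

    // The mapping holds its own reference; the descriptor can close now.
    close(fd);
    return (fileBase == MAP_FAILED) ? nullptr : fileBase;
}
```

Just as on Win32, reading through the returned pointer pages the file in on demand.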


Don't memory-map a file if it exists on CD-ROM. If the CD is ejected, your memory map will become invalid. If the CD is ejected while data is being read, your application will suffer a crash. Sadly, the crash will happen before Windows has a chance to send the WM_DEVICECHANGE message, which is always sent when the CD is ejected.

Tales from the Pixel Mines

An alternative to MapViewOfFile, MapViewOfFileEx, has an additional parameter that lets a programmer specify the exact base address of the memory map. Ultima IX's original resource cache made use of this parameter, but in a very bad way. I know because I wrote it. Each resource file assumed that it would exist at a specific base address, which allowed me to code data structures, including pointers, directly as data in the memory mapped file. This seemed like a great idea until I realized Ultima IX would no longer run on Windows NT: its allowable base address range was completely different from Windows 95's. I knew when I coded the thing that it was a dream too good to be true. I eliminated the base-address-specific assumptions from the code and data files, and everything worked just fine.

Writing Your Own Memory Manager

Most games extend the provided memory management system. The biggest reasons to do this are performance, efficiency, and improved debugging. Default memory managers in the C runtime are designed to run fairly well in a wide range of memory allocation scenarios. They tend to break down under the load of computer games, where allocations and deallocations of relatively tiny memory blocks can be fast and furious.


You shouldn't purposely design your game to create a worst case scenario for the memory manager in the C runtime. I've known a few programmers who find it a fun and challenging task to write their own memory manager. I'm not one of them, and you should resist the urge, too. The memory manager in the C runtime doesn't suck completely, and it has gone through more programming and debugging time than you'll have even if your game is in development for years. Instead, write a system that is much simpler in design and features, or simply extend the memory manager that is already there.

Tales from the Pixel Mines

Ultima VII: The Black Gate had a legendary memory manager: the VooDoo Memory Management System. It was written by a programmer who used to work on guided missile systems for the Department of Defense, a brilliant and dedicated engineer. U7 ran in good old DOS back in the days when protected mode was the neat new thing. VooDoo was a true 32-bit memory system for a 16-bit operating system, and the only problem with it was that you had to read and write memory locations with assembly code, since the Borland compiler didn't understand 32-bit pointers. It was done this way because U7 couldn't really exist in a 16-bit memory space; there were atomic data structures larger than 64KB. For all its hoopla, VooDoo was actually pretty simple, and it only provided the most basic memory management features. The name was a testament to the fact that it worked at all; it wasn't exactly supported by the operating system or the Borland compilers.

VooDoo MM for Ultima VII is a great example of writing a simple memory manager to solve a specific problem. It didn't support multithreading, it assumed that memory blocks were large, and it wasn't written to support a high number or frequency of allocations.

A standard memory manager, like the one in the C runtime, must support multithreading. Each time the memory manager's data structures are accessed or changed they must be protected with critical sections, allowing only one thread to allocate or deallocate memory at a time. All this extra code is time consuming, especially if you malloc and free very frequently. Most games are multithreaded to support sound systems, but don't necessarily need a multithreaded memory manager for every part of the game. A single threaded memory manager that you write yourself might be a good solution.
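
To make the trade-off concrete, here is a sketch, entirely illustrative, of the same free-list allocation path with and without a lock. The general-purpose allocator must take the lock on every call; the single-threaded version skips it entirely:

```cpp
#include <mutex>
#include <cstddef>
#include <cstdlib>

// All blocks are a fixed 64 bytes in this sketch, and memory handed to
// the pool is recycled rather than returned to the system.
struct FreeNode { FreeNode *next; };

struct SingleThreadedPool
{
    FreeNode *freeList = nullptr;

    void *Alloc()                 // no lock: safe from one thread only
    {
        if (!freeList)
            return std::malloc(64);
        FreeNode *n = freeList;
        freeList = n->next;
        return n;
    }
    void Free(void *p)
    {
        FreeNode *n = (FreeNode *) p;
        n->next = freeList;
        freeList = n;
    }
};

struct LockedPool
{
    SingleThreadedPool pool;
    std::mutex lock;              // the "critical section" described above

    void *Alloc() { std::lock_guard<std::mutex> g(lock); return pool.Alloc(); }
    void Free(void *p) { std::lock_guard<std::mutex> g(lock); pool.Free(p); }
};
```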

Simple memory managers can use a doubly linked list as the basis for keeping track of allocated and free memory blocks. The C runtime uses a more complicated system to reduce the algorithmic complexity of searching through the allocated and free blocks that could be as small as a single byte. Your memory blocks might be either more regularly shaped, fewer in number, or both. This creates an opportunity to design a simpler, more efficient system.

Default memory managers must assume that deallocations happen approximately as often as allocations, and they might happen in any order and at any time. Their data structures have to keep track of a large number of blocks of available and used memory. Any time a piece of memory changes state from used to available the data structures must be quickly traversed. When blocks become available again, the memory manager must detect adjacent available blocks and merge them to make a larger block. Finding free memory of an appropriate size to minimize wasted space can be extremely tricky. Since default memory managers solve these problems to a large extent, their performance isn't as high as another memory manager that can make more assumptions about how and when memory allocations occur.
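
Here is a sketch of that merge step, with illustrative names: the free list is kept sorted by address, and freeing a block coalesces it with any free neighbor that touches it on either side:

```cpp
#include <cstddef>

// Header stored at the front of every free block in one contiguous pool.
struct Block { size_t size; Block *next; };

Block *freeHead = nullptr;

void FreeBlock(Block *b)
{
    // Walk the address-sorted free list to find b's neighbors.
    Block *prev = nullptr, *cur = freeHead;
    while (cur && cur < b) { prev = cur; cur = cur->next; }

    // Merge forward: b ends exactly where the next free block starts.
    if (cur && (char *) b + b->size == (char *) cur) {
        b->size += cur->size;
        b->next = cur->next;
    } else {
        b->next = cur;
    }

    // Merge backward: the previous free block ends where b starts.
    if (prev && (char *) prev + prev->size == (char *) b) {
        prev->size += b->size;
        prev->next = b->next;
    } else if (prev) {
        prev->next = b;
    } else {
        freeHead = b;
    }
}
```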

If your game can allocate and deallocate most of its dynamic memory space at once, you can write a memory manager based on a data structure no more complicated than a singly linked list. You'd never use something this simple in a more general case, of course, because a singly linked list has O(n) algorithmic complexity. That would cripple any memory management system used in the general case.
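
Here's what that might look like, a hedged sketch of an allocator that pushes every block onto a singly linked list and releases them all with one call, say at level unload. The names are illustrative:

```cpp
#include <cstdlib>
#include <cstddef>

// One small header per block links it into the list; the caller's
// memory starts immediately after the header.
struct Allocation { Allocation *next; };

struct FrameAllocator
{
    Allocation *head = nullptr;

    void *Alloc(size_t bytes)
    {
        Allocation *a =
            (Allocation *) std::malloc(sizeof(Allocation) + bytes);
        a->next = head;
        head = a;
        return a + 1;
    }

    void FreeAll()    // the O(n) walk happens once, not per free
    {
        while (head) {
            Allocation *next = head->next;
            std::free(head);
            head = next;
        }
    }
};
```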

A good reason to extend a memory manager is to add some debugging features. Two features that are common include adding additional bytes before and after the allocation to track memory corruption, or to track memory leaks. The C runtime adds only one byte before and after an allocated block, which might be fine to catch those pesky x+1 and x-1 errors but doesn't help for much else. If the memory corruption seems pretty random, and most of them sure seem that way, you can increase your odds of catching the culprit by writing a custom manager that adds more bytes to the beginning and ending of each block. In practice the extra space is set to a small number, even one byte, in the release build.
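
A sketch of the guard-byte idea; the 16-byte fence size and the 0xFD fill pattern are my choices for illustration, not anything mandated:

```cpp
#include <cstdlib>
#include <cstring>
#include <cstddef>

const size_t GUARD_BYTES = 16;
const unsigned char GUARD_FILL = 0xFD;

// Allocate the caller's block with a filled fence on each side.
void *DebugAlloc(size_t bytes)
{
    unsigned char *raw =
        (unsigned char *) std::malloc(bytes + 2 * GUARD_BYTES);
    std::memset(raw, GUARD_FILL, GUARD_BYTES);                       // front
    std::memset(raw + GUARD_BYTES + bytes, GUARD_FILL, GUARD_BYTES); // back
    return raw + GUARD_BYTES;
}

// Returns true if neither fence was overwritten.
bool CheckGuards(void *p, size_t bytes)
{
    unsigned char *raw = (unsigned char *) p - GUARD_BYTES;
    for (size_t i = 0; i < GUARD_BYTES; ++i) {
        if (raw[i] != GUARD_FILL) return false;
        if (raw[GUARD_BYTES + bytes + i] != GUARD_FILL) return false;
    }
    return true;
}

void DebugFree(void *p)
{
    std::free((unsigned char *) p - GUARD_BYTES);
}
```

Checking the fences at free time, or on a periodic sweep, narrows a "random" corruption down to the block that was trampled.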


Anything you do differently from the debug and release builds can change the behavior of bugs from one build target to another. Murphy's Law dictates that the bug will only appear in the build target that is hardest, or even impossible, to debug.

Another common extension to memory managers is leak detection. MFC redefines the new operator to add __FILE__ and __LINE__ information to each allocated memory block in debug mode. When the memory manager is shut down all the unfreed blocks are printed out in the output window in the debugger. This should give you a good place to start when you need to track down a memory leak.
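
Outside MFC you can get the same effect with an explicit macro. This portable sketch, with names of my own invention, records __FILE__ and __LINE__ per block and reports whatever was never freed:

```cpp
#include <cstdio>
#include <cstdlib>
#include <cstddef>
#include <map>

struct AllocInfo { const char *file; int line; size_t bytes; };
std::map<void *, AllocInfo> g_allocs;

void *TrackedAlloc(size_t bytes, const char *file, int line)
{
    void *p = std::malloc(bytes);
    g_allocs[p] = { file, line, bytes };
    return p;
}

void TrackedFree(void *p)
{
    g_allocs.erase(p);
    std::free(p);
}

// The macro captures the caller's location, as MFC's debug new does.
#define TRACKED_ALLOC(bytes) TrackedAlloc((bytes), __FILE__, __LINE__)

// Call at shutdown: anything still recorded was never freed.
size_t ReportLeaks()
{
    for (const auto &a : g_allocs)
        std::printf("LEAK: %zu bytes allocated at %s(%d)\n",
                    a.second.bytes, a.second.file, a.second.line);
    return g_allocs.size();
}
```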

If you decide to write your own memory manager keep the following points in mind:

  • Data structures: Choose the data structure that matches your memory allocation scenario. If you traverse a large number of free and available blocks very frequently, choose a hash table or tree-based structure. If you hardly ever traverse the list to find free blocks, you can get away with a simple list. Store the data structure separately from the memory pool; that way a wild write into the pool leaves your memory manager's bookkeeping intact.
  • Single/multithreaded access: Don't forget to add appropriate code to protect your memory manager from multithreaded access if you need it. Eliminate the protections if you are sure access to the memory manager will only happen from a single thread and you'll gain some performance.
  • Debug and testing: Allocate a little additional memory before and after the block to detect memory corruption. Add caller information to the debug memory blocks; at a minimum you should use __FILE__ and __LINE__ to track where the allocation occurred.
