
Simple Rules that Boost Your Mobile Application's Performance

  • November 10, 2006
  • By Alex Gusev

Performance and Its Killers

As you may guess, performance in general is a set of characteristics that can be measured in some way. It applies both to devices and to applications during their execution: RAM usage, boot time, and so forth. For mobile applications in particular, device characteristics and user expectations often impose tough requirements, while the resources available on the device are very limited. Therefore, mobile applications should be designed carefully and should exploit every opportunity to improve their performance.

This article demonstrates a few simple but nevertheless useful techniques for doing so. File I/O operations, heap usage, and heavy loops are just a few of the 'performance killers' worth noting. Most of them are easily avoided once you keep an eye on them. Reviewing these common mistakes may help you significantly boost your program's performance.

File I/O

Basic reads and writes

If you have ever moved an existing application from, for example, Windows Mobile 2003 SE to Windows Mobile 5.0, you could not help but notice that all file operations became dramatically slower. The same effect can be observed when you start working with a flash card instead of the device's internal memory (when that memory is not itself flash, of course). The reason is simple: every read/write operation transfers whole flash blocks, regardless of how much data you actually want to read from or save to the card. Knowing this block size and sizing the buffers in your application accordingly can therefore dramatically increase the throughput of I/O operations.

In versions prior to Windows CE 5.0, you could obtain such information only by communicating with the device driver directly through the DeviceIoControl API. In Windows CE 5.0, you luckily have the GetVolumeInfo

BOOL GetVolumeInfo(
   LPCWSTR pszRootPath,
   CE_VOLUME_INFO_LEVEL InfoLevel,
   LPCE_VOLUME_INFO lpVolumeInfo);

function that fills in the CE_VOLUME_INFO structure:

typedef struct _CE_VOLUME_INFO {
   DWORD cbSize;
   DWORD dwAttributes;
   DWORD dwFlags;
   DWORD dwBlockSize;
   // ... remaining members omitted here
} CE_VOLUME_INFO, *PCE_VOLUME_INFO;

for the given pszRootPath of the file system, where dwBlockSize is the block size of your flash in bytes.

Simple calculations show that if you store, for example, 4 bytes at a time until you reach 1 KB in total, with a 512-byte flash block size (a typical value for flash), it takes 256 calls. But if you buffer the data into 512-byte chunks, you end up with only two 'write' operations. Each such 'read/write' usually requires a call into the kernel. A single I/O may be quick enough, but the accumulated cost of hundreds or thousands of read/write operations can easily make your application as slow as a snail.
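The arithmetic above can be verified with a small sketch. The `CountingSink` and `BufferedWriter` classes below are hypothetical stand-ins for a flash device and a block-sized write buffer; they only count how many "kernel calls" the device sees:

```cpp
#include <cstddef>
#include <cstring>
#include <vector>

// Toy device sink: counts how many "device writes" (kernel calls) occur.
struct CountingSink {
    std::vector<unsigned char> data;
    int writes = 0;
    void Write(const unsigned char* p, std::size_t n) {
        data.insert(data.end(), p, p + n);
        ++writes;
    }
};

// Accumulates small writes into a block-sized buffer and flushes whole
// blocks, so the device sees one call per block, not one per small write.
class BufferedWriter {
public:
    BufferedWriter(CountingSink& sink, std::size_t blockSize)
        : sink_(sink), buf_(blockSize), used_(0) {}

    void Write(const unsigned char* p, std::size_t n) {
        while (n > 0) {
            std::size_t room  = buf_.size() - used_;
            std::size_t chunk = n < room ? n : room;
            std::memcpy(&buf_[used_], p, chunk);
            used_ += chunk;
            p += chunk;
            n -= chunk;
            if (used_ == buf_.size()) Flush();   // block full: one device call
        }
    }
    void Flush() {
        if (used_ > 0) {
            sink_.Write(&buf_[0], used_);
            used_ = 0;
        }
    }
private:
    CountingSink& sink_;
    std::vector<unsigned char> buf_;
    std::size_t used_;
};
```

Writing 1 KB as 256 four-byte pieces directly costs 256 device calls; routed through a 512-byte `BufferedWriter`, the same data reaches the device in just two calls.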

Another good example of when buffering pays off is using it together with compression or encryption. Many such algorithms consume and produce fixed-size blocks, so buffering fits them naturally. In some situations, it may even be worth reading a whole block from flash, making the required changes, and then storing it back to achieve better performance.
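The read-modify-write idea at the end of that paragraph can be sketched as follows. `BlockDevice` is a hypothetical in-memory stand-in for flash that only transfers whole blocks, and the patch is assumed to fit inside one block:

```cpp
#include <cstddef>
#include <vector>

const std::size_t kBlockSize = 512;   // assumed typical flash block size

// Toy block device: like flash, it transfers whole blocks only.
struct BlockDevice {
    std::vector<unsigned char> storage;
    explicit BlockDevice(std::size_t blocks)
        : storage(blocks * kBlockSize, 0) {}
    void ReadBlock(std::size_t idx, unsigned char* out) {
        for (std::size_t i = 0; i < kBlockSize; ++i)
            out[i] = storage[idx * kBlockSize + i];
    }
    void WriteBlock(std::size_t idx, const unsigned char* in) {
        for (std::size_t i = 0; i < kBlockSize; ++i)
            storage[idx * kBlockSize + i] = in[i];
    }
};

// Read-modify-write: fetch the whole block once, patch it in memory,
// and store it back once -- two device operations in total.
void PatchBytes(BlockDevice& dev, std::size_t offset,
                const unsigned char* src, std::size_t n) {
    std::size_t block  = offset / kBlockSize;
    std::size_t within = offset % kBlockSize;
    std::vector<unsigned char> buf(kBlockSize);
    dev.ReadBlock(block, &buf[0]);
    for (std::size_t i = 0; i < n; ++i)
        buf[within + i] = src[i];
    dev.WriteBlock(block, &buf[0]);
}
```

However many bytes you change within the block, the device still sees exactly one read and one write.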

Hidden I/O operations

Inefficient file I/O may also be well hidden. This is especially true for C++ applications, where serialization of a complex object such as a list or an array is implemented in terms of its items; for example, when operator >> is overloaded for a single item. This can result in many small, time-consuming reads or writes. Consider the following example:

CSampleList list;
stream >> list;

The problem lies in the container's implementation of operator >>:

stream >> count;
TSomeObject obj;
for (int i = 0; i < count; i++)
{
   stream >> obj;
   list.Add(obj);
}

As you can see, it calls a further operator >> on every container's item, which obviously costs too much in terms of performance. As a reasonable alternative approach, you can consider the following example:

stream >> count;
stream.Read(pData, count * sizeof(TSomeObject));

This code will work much faster, but the price is that TSomeObject must be a plain, fixed-layout type whose in-memory format exactly matches the stored format.
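A runnable sketch of this bulk approach is shown below. `Stream`, `SaveList`, and `LoadList` are hypothetical names (the article does not define the stream class); `Stream` counts its calls so the difference is visible: two calls per direction regardless of item count, instead of one per item:

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// A plain, fixed-layout item, as the bulk approach requires.
struct TSomeObject {
    std::int32_t id;
    std::int32_t value;
};

// Toy byte stream standing in for a file stream; counts its calls.
struct Stream {
    std::vector<unsigned char> bytes;
    std::size_t pos = 0;
    int calls = 0;
    void Write(const void* p, std::size_t n) {
        const unsigned char* b = static_cast<const unsigned char*>(p);
        bytes.insert(bytes.end(), b, b + n);
        ++calls;
    }
    void Read(void* p, std::size_t n) {
        std::memcpy(p, &bytes[pos], n);
        pos += n;
        ++calls;
    }
};

// Bulk-writes the count, then the whole array in a single call.
void SaveList(Stream& s, const std::vector<TSomeObject>& list) {
    std::int32_t count = static_cast<std::int32_t>(list.size());
    s.Write(&count, sizeof(count));
    s.Write(&list[0], count * sizeof(TSomeObject));
}

// Bulk-reads the count, then the whole array in a single call.
void LoadList(Stream& s, std::vector<TSomeObject>& list) {
    std::int32_t count = 0;
    s.Read(&count, sizeof(count));
    list.resize(count);
    s.Read(&list[0], count * sizeof(TSomeObject));
}
```

Note that this trick is safe only while TSomeObject stays free of pointers, virtual functions, and padding surprises.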

Using memory-mapped files

As a logical continuation of buffered I/O, another option for increasing I/O speed is to memory-map the file you are writing to. This can serve as a convenient cache mechanism if your application can survive some data loss in the event of a reset or power failure. You then benefit from large blocks of data being written to the flash when the mapped file is finally stored. For more details, please refer to the following APIs:

LPVOID WINAPI MapViewOfFile(
   HANDLE hFileMappingObject,
   DWORD dwDesiredAccess,
   DWORD dwFileOffsetHigh,
   DWORD dwFileOffsetLow,
   DWORD dwNumberOfBytesToMap);

BOOL WINAPI UnmapViewOfFile(
   LPCVOID lpBaseAddress);

BOOL WINAPI FlushViewOfFile(
   LPCVOID lpBaseAddress,
   DWORD dwNumberOfBytesToFlush);

HANDLE WINAPI CreateFileMapping(
   HANDLE hFile,
   LPSECURITY_ATTRIBUTES lpFileMappingAttributes,
   DWORD flProtect,
   DWORD dwMaximumSizeHigh,
   DWORD dwMaximumSizeLow,
   LPCTSTR lpName);

HANDLE WINAPI CreateFile(
   LPCTSTR lpFileName,
   DWORD dwDesiredAccess,
   DWORD dwShareMode,
   LPSECURITY_ATTRIBUTES lpSecurityAttributes,
   DWORD dwCreationDisposition,
   DWORD dwFlagsAndAttributes,
   HANDLE hTemplateFile);

Heap Usage

Now, let's turn to more general areas. On embedded systems, the stack size is often limited, so the heap must be used instead. Nevertheless, used without care, the heap can itself cause performance problems. Consider the following code snippet:

while (expr)
{
   CSomeObject *pObj = new CSomeObject;
   // ... use pObj ...
   delete pObj;
}

If such a loop executes for a large number of iterations, all those heap calls are redundant and lead to heap fragmentation. Consider instead the following scenario, which reuses temporary variables where possible:

CSomeObject *pObj = new CSomeObject;
while (expr)
{
   // ... reuse pObj ...
}
delete pObj;
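The savings can be made visible by instrumenting allocation. In this sketch, `CSomeObject` is a hypothetical sample class whose overloaded operator new counts heap hits; `ProcessAll` hoists the allocation out of the loop as described above:

```cpp
#include <cstddef>
#include <new>
#include <string>

// Sample class with instrumented allocation, so heap hits can be counted.
struct CSomeObject {
    std::string scratch;
    static int allocations;
    void* operator new(std::size_t n) {
        ++allocations;                  // count every heap allocation
        return ::operator new(n);
    }
    void operator delete(void* p) { ::operator delete(p); }
    void Process(int i) { scratch.assign(i % 8, 'x'); }
};
int CSomeObject::allocations = 0;

// Allocates once outside the loop and reuses the object inside it.
void ProcessAll(int iterations) {
    CSomeObject* pObj = new CSomeObject;
    for (int i = 0; i < iterations; ++i)
        pObj->Process(i);               // reuse, not new/delete per pass
    delete pObj;
}
```

With new/delete inside the loop, a thousand iterations would mean a thousand allocations; hoisted out, it is exactly one.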

