# Java vs. C++: The Performance Showdown

### What About Floating Point?

You’ve seen how Java and C++ compare in integer arithmetic, but what about floating point? Let’s face it, you would struggle to find an application that uses only integer arithmetic—most programs rely on floating point computation at some point (even if it is for the simple purpose of averaging two numbers).

Using the same approach as in the previous example, consider the results of generating some random floating point numbers, multiplying them with each other, measuring the time it takes, and then averaging those times. This time though, to ensure you are “properly” multiplying floating point numbers, generate two sets of random numbers and multiply them with each other:

```
private void generateRandoms()
{
    randoms = new double[N_GENERATED];
    for( int i = 0; i < N_GENERATED; i++ )
    {
        randoms[i] = Math.random();
    }
    multiply = new double[N_MULTIPLY];
    for( int i = 0; i < N_MULTIPLY; i++ )
    {
        multiply[i] = Math.random();
    }
}

private void javaCompute()
{
    double result = 0;
    for( int i = 0; i < N_MULTIPLY; i++ )
    {
        for( int j = 0; j < N_GENERATED; j++ )
            result = randoms[j] * multiply[i];
    }
}
```

The above code (which you can find in `DoubleMaths.java`) generates the following result on my laptop:

`Java computing took 47`

In other words, it takes on average 47 milliseconds to perform about 10,000,000 floating-point operations (multiplications) in Java (roughly twice as much time as it took to perform integer arithmetic operations).

Now, let’s look at how well C++ performs when it comes to this (find the code in the DoubleMaths project included in the JavaVsCPP.zip code download):

`C computing took 0.001477`

Again, this is the result of using the compiler optimization for speed; disabling any compiler optimization renders:

`C computing took 84.734633`

So, with optimization enabled, C++ floating-point performance is essentially indistinguishable from its integer performance: a decent compiler renders little difference between integer and floating point arithmetic, and what's more, the generated code is over 30,000 times faster than the Java version! (Since `result` is never used, the optimizer may well have eliminated the multiplication loop entirely, which would explain such an extreme figure.) However, you do have to choose your compiler carefully, or you could end up with execution times nearly twice as long as those of the Java code!

### Number Comparison

In terms of computations, so far C++ seems to be winning. But how do the two languages perform when it comes to number comparisons? Consider two examples:

- Two integers are used in an `if` statement that performs a simple assignment when the condition is true.
- An `if` statement tests an expression involving floating point numbers.

For the first example, you will generate a series of random integer numbers and traverse the array, comparing the previous number in the series with the current one. If the current one is bigger, you'll simply store it in a variable. Thus, at the end of the traversal, you will have found the largest number in the series.

You'll be using the same number-generation method as in the previous examples:

```
/**
 * Generate random numbers
 */
private void generateRandoms()
{
    randoms = new int[N_GENERATED];
    for( int i = 0; i < N_GENERATED; i++ )
    {
        randoms[i] = (int)(i * Math.random());
    }
}
```

The same goes for the execution-time averaging, where you will use the same method again. The way you execute the test differs, however. You could of course allocate a long array of integers (a few tens of millions or so) and traverse it as described above, but then you would run into another issue: indexed memory-access time (explained shortly). Instead, you will use a small array of ints (100 in this case), repeat the operation 100,000 times, and time the whole run.

In Java, the code comes down to this:

```
public static void main( String args[] )
{
    IntComparison comp = new IntComparison();
    comp.generateRandoms();
    long timeJava[] = new long[N_ITERATIONS];
    long start, end;
    for( int i = 0; i < N_ITERATIONS; i++ )
    {
        start = System.currentTimeMillis();
        for( int j = 0; j < N_REPEAT; j++ )
            comp.javaCompare();
        end = System.currentTimeMillis();
        timeJava[i] = (end - start);
    }
    System.out.println( "Java compare took " + testTime(timeJava) );
}
```

Running the above generates the following result on my laptop:

`Java compare took 50`

So, on average it takes 50 milliseconds to perform 100,000 x 100 = 10 million integer comparisons.

Let’s have a look at the result of a similar implementation in C++ (find the source in the IntComparison project included in the JavaVsCPP.zip code download):

`C computing took 0.001971`

Draw your own conclusions, but remember that the code was compiled using "optimize for speed" settings.

### Indexed Memory Addressing

So far, you’ve been looking at access to small chunks of data, but how does the memory allocation and access model perform when it comes to indexed data (i.e., arrays)? To find out, iterate over an array with millions of items and measure the time it takes to access each element. For the purpose of this exercise, accessing an element means reading its value, storing it into a variable, and then writing it back into the array.

So the actual function you are going to measure looks like this in Java:

```
private void javaTraverse()
{
    int temp = 0;
    for( int i = 0; i < N_ELEMS; i++ )
    {
        temp = array[i];
        array[i] = temp;
    }
}
```

Running the above code (found in `ArraysAccess.java`) renders the following result:

`Java traverse took 53`

So it takes on average 53 milliseconds to traverse an array of 10 million entries! Implementing the equivalent C++ code (ArraysAccess project included in the JavaVsCPP.zip code download) is a bit different from the previous examples because C++ allows for up to 65,535 elements in an array by default. To overcome that, this example uses a bit of the Windows API again and incorporates the `GlobalAlloc` function, which allows for the allocation of large chunks of memory:

```
int main(int argc, char* argv[])
{
int * randoms;
HGLOBAL h = GlobalAlloc( GPTR, sizeof(int) * N_GENERATED );
randoms = (int *)h;
generate_randoms( randoms );
CStopWatch watch;
long double timeNative[N_ITERATIONS];
for( int i = 0; i < N_ITERATIONS; i++ )
{
watch.Start();
nativeTraverse(randoms);
watch.Stop();
timeNative[i] = watch.GetDuration();
}
printf( "C traversing took %lf\n", test_times(timeNative) );
GlobalFree( h );
return 0;
}
```

As you can see, you’re simply allocating 10 million ints using `GlobalAlloc`, and you then traverse the array in the same manner as you do in Java. The average result of this operation is:

`C traversing took 10.857639`

So, it is about five times faster than Java. (However, compile this code with optimization disabled and the timings take about 2-3 times longer than those recorded in Java!)


*This article was originally published on January 6, 2010*