
Using JOONE for Artificial Intelligence Programming

  • By Jeff Heaton


Few programmers have not been intrigued by Artificial Intelligence programming at one point or another. Many programmers who become interested in AI are quickly put off by the complexity of the algorithms involved. In this article, we will examine an open source project for Java that can simplify much of this complexity.

The Java Object Oriented Neural Network (JOONE) is an open source project that offers a highly adaptable neural network for Java programmers. The JOONE project source code is covered by the GNU Lesser General Public License (LGPL). In a nutshell, this means that the source code is freely available and that you pay no royalties to use JOONE. JOONE can be downloaded from http://joone.sourceforge.net/.

JOONE allows you to create neural networks easily from a Java program. JOONE supports many features, such as multithreading and distributed processing, so it can take advantage of multiprocessor computers and spread the processing load across multiple machines.

Neural Networks

JOONE implements an artificial neural network in Java. An artificial neural network seeks to emulate the function of the biological neural networks that make up the brains of nearly all higher life forms on Earth. Neural networks are made up of neurons. A diagram of an actual neuron is shown in Figure 1.

Figure 1: A biological neuron

As you can see from Figure 1, the neuron is made up of a core cell body and several long connectors. The junctions at which these connectors meet other neurons are called synapses, and they are how neurons connect among themselves. Neural networks, both biological and artificial, work by transferring signals from neuron to neuron across these synapses.


In this article, you will be shown a simple example of how to use JOONE. The topic of neural networks is very broad and covers many different applications; here, we will use JOONE to solve a simple pattern recognition problem, which is one of the most common uses for neural networks.

In pattern recognition, the neural network is presented with a pattern to see whether it can recognize that pattern. The network should be able to recognize the pattern even when it has been distorted in some way. This is similar to a human's ability to recognize something such as a traffic signal: a human can recognize a traffic signal in rain, daylight, or darkness. Even though each of these images looks considerably different, the human mind determines that they show the same object.

When programming JOONE, you generally work with two types of objects. Neuron layer objects represent a layer of one or more neurons that share similar characteristics. A neural network usually has two or more such layers. These layers are connected by synapses, which carry the pattern to be recognized from layer to layer.

Synapses do not just transmit the pattern unchanged from one neuron layer to the next. A synapse scales each element of the pattern, causing certain elements to be transmitted less effectively to the next layer than they would otherwise be. These scaling factors, usually called weights, form the memory of the neural network. By adjusting the weights, which are stored in the synapses, the behavior of the neural network is altered.

Synapses also play another role in JOONE. In JOONE, it is useful to think of synapses as data conduits. Just as synapses carry patterns from one neuron layer to another, specialized versions of the synapse are used to carry patterns both into and out of the neural network. You will now be shown how a simple single layer neural network can be constructed to recognize a pattern.

Training the Neural Network

For the purposes of this article, we will teach JOONE to recognize a very simple pattern: a binary Boolean operation, XOR. The XOR operation's truth table is summarized below.

  X     Y     X XOR Y
  0     0        0
  0     1        1
  1     0        1
  1     1        0

As you can see from the preceding table, the XOR operator will only be true, indicated by a value of one, when X and Y hold different values. In all other cases, the XOR operator evaluates to false, indicated by a zero. By default, JOONE takes its input from text files stored on your system. These text files are read by using a special synapse called the FileInputSynapse. To train for the XOR problem, you must construct an input file that contains the data shown above. This file is shown in Listing 1.
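The truth table above is exactly what Java's bitwise XOR operator computes on 0/1 inputs. A short standalone snippet (the class and method names here are illustrative, not part of JOONE) makes the target behavior concrete:

```java
// Illustrative helper: the function the neural network will be trained to learn.
public class XorTable {
    // XOR is 1 exactly when the two inputs differ.
    static int xor(int x, int y) {
        return x ^ y;
    }

    public static void main(String[] args) {
        int[][] rows = {{0, 0}, {0, 1}, {1, 0}, {1, 1}};
        for (int[] row : rows) {
            System.out.println(row[0] + " XOR " + row[1] + " = " + xor(row[0], row[1]));
        }
    }
}
```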

Listing 1: Input file for the XOR problem


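A plausible form of this input file, assuming FileInputSynapse reads semicolon-delimited numeric columns (the delimiter and number formatting are assumptions here), with the two inputs followed by the expected output on each row:

    0.0;0.0;0.0
    0.0;1.0;1.0
    1.0;0.0;1.0
    1.0;1.0;0.0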
We will now examine a simple program that teaches JOONE to recognize the XOR operation and produce the correct result. Training involves presenting the XOR problem to the neural network and observing the result. If the result is not what was expected, the training algorithm adjusts the weights stored in the synapses. The difference between the actual output of the neural network and the anticipated output is called the error. Training continues until the error falls below an acceptable level, generally a percentage such as 10%.

The training process begins by setting up the neural network. The input, hidden, and output layers must all be created.

    // First, create the three layers
    input = new SigmoidLayer();
    hidden = new SigmoidLayer();
    output = new SigmoidLayer();

As you can see, each of the layers is created using the JOONE object SigmoidLayer. Sigmoid layers produce an output by applying the sigmoid (logistic) function, 1/(1 + e^-x), to their input. JOONE contains additional layer types, other than the sigmoid layer, that you may choose to use.

Next, each of these layers is given a name. These names will be helpful later for identifying the layer during debugging.
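A sketch of that step, assuming JOONE's Layer class exposes a setLayerName method:

    // Give each layer a name for debugging
    input.setLayerName("input");
    hidden.setLayerName("hidden");
    output.setLayerName("output");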


Each layer must now be defined. We will specify the number of "rows" in each of the layers. This number of rows corresponds to the number of neurons in the layer.
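Assuming JOONE's setRows method sets the neuron count of a layer, this step would look like the following:

    // Set the number of neurons ("rows") in each layer
    input.setRows(2);
    hidden.setRows(3);
    output.setRows(1);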


As you can see from the preceding code, the input layer has two neurons, the hidden layer has three hidden neurons, and the output layer contains one neuron. It makes sense for the neural network to contain two input neurons and one output neuron because the XOR operator accepts two parameters and results in one value.

To make use of the neuron layers, we must also construct synapses. In this example, we will have two synapses: one from the input layer to the hidden layer, and one from the hidden layer to the output layer. These synapses are created with the following lines of code.

    // input -> hidden conn.
    FullSynapse synapse_IH = new FullSynapse();
    // hidden -> output conn.
    FullSynapse synapse_HO = new FullSynapse();

Just as was the case with the neuron layers, synapses can also be given names to assist in debugging. The following lines of code name the newly created synapses.
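Assuming the synapse objects expose a setName method, much as the layers do, the naming would look like this:

    // Name the synapses for debugging
    synapse_IH.setName("IH");
    synapse_HO.setName("HO");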


Finally, we must connect the synapses to the appropriate neuron layers. The following lines of code do this.

    // Connect the input layer with the hidden layer
    input.addOutputSynapse(synapse_IH);
    hidden.addInputSynapse(synapse_IH);

    // Connect the hidden layer with the output layer
    hidden.addOutputSynapse(synapse_HO);
    output.addInputSynapse(synapse_HO);

Now that the neural network has been created, we must create a Monitor object that will regulate the neural network. The following lines of code create the Monitor object.

    // Create the Monitor object and set the learning parameters
    monitor = new Monitor();
    monitor.setLearningRate(0.8);  // typical values for this example
    monitor.setMomentum(0.3);


This article was originally published on November 21, 2002
