
Using JOONE for Artificial Intelligence Programming

  • By Jeff Heaton

The learning rate and momentum are parameters that control how training proceeds. JOONE uses the backpropagation learning method; for more information on the learning rate or the momentum, refer to the backpropagation algorithm.

This monitor object should be assigned to each of the neuron layers. The following lines of code do this.
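A minimal sketch of the snippet, assuming the `Layer` objects are named `input`, `hidden`, and `output` as earlier in the article:

```java
// Assign the shared monitor to each layer of the network
input.setMonitor(monitor);
hidden.setMonitor(monitor);
output.setMonitor(monitor);
```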


Like many of the Java objects themselves, the JOONE monitor allows listeners to be added to it. As training progresses, JOONE will notify the listeners as to the progress of the training. For this simple example, we use:
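A sketch of the listener registration, assuming the enclosing class implements JOONE's NeuralNetListener interface:

```java
// Register this class to receive progress notifications during training
monitor.addNeuralNetListener(this);
```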


We must now set up the input synapse. As previously mentioned, we will use a FileInputSynapse to read from a disk file. Disk files are not the only sort of input that JOONE can accept; JOONE is very flexible with regard to input sources. To allow JOONE to accept other input types, you simply create a new specialized synapse for that input. For this example, we will use the FileInputSynapse, which is first instantiated.

    inputStream = new FileInputSynapse();

Next, the FileInputSynapse must be told which columns to use. The file shown in Listing 1 uses the first two columns as the inputs. The following lines of code set up the first two columns as the input to the neural network.

    // The first two columns contain the input values
    // (JOONE column indices are 1-based)
    inputStream.setFirstCol(1);
    inputStream.setLastCol(2);

Next, we must provide the name of the input file. This name will come directly from the user interface; an edit control was provided to collect it. The following lines of code set the filename for the FileInputSynapse.

    // This is the file that contains the input data
    // (inputFile is the edit control holding the user-supplied name)
    inputStream.setFileName(inputFile.getText());

As previously mentioned, a synapse is just a conduit for data to travel between neuron layers. The FileInputSynapse is the conduit through which data enters the neural network. To facilitate this, we must add the FileInputSynapse to the input layer of the neural network. This is done by the following line.
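The line itself is missing from the text; assuming the input layer is the `Layer` object named `input`, it would read:

```java
// Attach the file input synapse to the input layer of the network
input.addInputSynapse(inputStream);
```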


Now that the neural network is set up, we must create a trainer and a monitor. The trainer trains the neural network, and the monitor runs the network through a set number of training iterations. For each iteration, data is presented to the neural network and the results are observed. The difference between the actual output and the expected output is the error, and the neural network's weights, which are stored in the synapse connections between the neuron layers, are adjusted based on this error. As training progresses, the error level drops. The following lines of code set up the trainer and attach it to the monitor.

    trainer = new TeachingSynapse();
    trainer.setMonitor(monitor);

You will recall that the input file provided in Listing 1 contains three columns. So far, we have only used the first two columns, which specify the input to the neural network. The third column contains the expected output when the neural network is presented with the numbers in the first column. We must provide the trainer access to this column so that the error can be determined. The error is the difference between the actual output of the neural network and this expected output. The following lines of code create another FileInputSynapse and prepare it to read from the same input file as before.

    // Setting of the file containing the desired responses,
    // provided by a FileInputSynapse
    samples = new FileInputSynapse();
    samples.setFileName(inputFile.getText());

This time, we would like to point the FileInputSynapse at the third column. The following lines of code do this and then set the trainer to use this FileInputSynapse.

    // The output values are on the third column of the file
    samples.setFirstCol(3);
    samples.setLastCol(3);

    // Set the trainer to use this synapse as the desired output
    trainer.setDesired(samples);

Finally, the trainer is connected to the output layer of the neural network. This will cause the trainer to receive the output of the neural network.

    // Connects the Teacher to the last layer of the net
    output.addOutputSynapse(trainer);

We are now ready to begin the background threads for all of the layers, as well as the trainers.
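A sketch of the missing snippet, assuming the layer objects `input`, `hidden`, and `output` from earlier in the article:

```java
// Start the background thread for each layer, and for the trainer
input.start();
hidden.start();
output.start();
trainer.start();
```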


Finally, we set some parameters for the training. We specify that there are four rows in the input file, that we would like to train for 20,000 cycles, and that we are learning. If you set the learning parameter to false, the neural network would simply process the input and not learn. We will cover input processing in the next section.
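A reconstruction of the missing snippet, assuming JOONE's Monitor setters (note that `setTotCicles` is the spelling used by the library):

```java
monitor.setPatterns(4);      // four rows in the input file
monitor.setTotCicles(20000); // train for 20,000 cycles
monitor.setLearning(true);   // learn, rather than just process input
```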


We are now ready to begin the training process. Calling the Go method of the monitor will start the training process in the background.
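Assuming JOONE's Monitor API, the call is simply:

```java
// Begin training in the background
monitor.Go();
```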


The neural network will now be trained for 20,000 cycles. When training finishes, the error level should have fallen to a reasonably low value; an error level below 10% is acceptable.

Running the Neural Network

Now that the neural network has been trained, we can test it by presenting the input patterns to the neural network and observing the results. The method used to run the neural network must first prepare it to process data, because the neural network is currently in training mode. To begin, we remove the trainer from the output layer and replace it with a FileOutputSynapse so that we can record the output from the neural network. The following lines of code do this.

    // Replace the trainer with a file output synapse
    output.removeOutputSynapse(trainer);
    FileOutputSynapse results = new FileOutputSynapse();
    // (resultFile is assumed to be the edit control holding the output filename)
    results.setFileName(resultFile.getText());
    output.addOutputSynapse(results);

Now we must reset the input stream. We will use the same file input stream that we used during training. This will feed the same inputs that were used during training to the neural network.
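The snippet itself is missing; assuming JOONE's `resetInput` method on the file synapses, it would read:

```java
// Rewind both file synapses to the beginning of the input file
inputStream.resetInput();
samples.resetInput();
```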


Next, we must restart all of the threads that correspond to the neural network.
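A sketch, again assuming the layer objects `input`, `hidden`, and `output`:

```java
// Restart the layer threads for the recognition run
input.start();
hidden.start();
output.start();
```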


Now that the threads have been restarted, we must set some basic configuration information for the recognition. The following lines of code do this.
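A reconstruction of the missing configuration, assuming JOONE's Monitor setters:

```java
monitor.setPatterns(4);     // process all four input patterns
monitor.setTotCicles(1);    // a single pass through the data
monitor.setLearning(false); // recognition only; do not adjust weights
```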


First, the number of input patterns is set to four, because we want the neural network to process each of the four input patterns that we originally used to train it. Finally, learning is turned off, so the neural network simply processes the input without adjusting its weights. With this completed, we can call the "Go" method of the monitor.
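As before, assuming JOONE's Monitor API:

```java
// Run the network once over the input patterns
monitor.Go();
```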


When the run completes, you will see that the output file produced is similar to Listing 2.

Listing 2: The output from the neural network


You can see that the first line of the listing is a number reasonably close to zero. This is good, because the first line of the input training file, as seen in Listing 1, was supposed to produce zero. Similarly, the second line is reasonably close to one, which is also correct because the second line of the training file was supposed to produce one.


This article was originally published on November 21, 2002
