
Processing Image Pixels, Applying Image Convolution in Java, Part 2

  • April 4, 2006
  • By Richard G. Baldwin

Java Programming, Notes # 414


Preface

Part of a series

This lesson is one in a series designed to teach you how to use Java to create special effects with images by directly manipulating the pixels in the images.  The first lesson in the series was entitled Processing Image Pixels using Java, Getting Started.  The previous lesson was Part 1 of this two-part lesson.

This is Part 2 of the two-part lesson.  You are strongly encouraged to review the first part of this lesson entitled Processing Image Pixels, Applying Image Convolution in Java, Part 1 before continuing with this lesson.

The primary objective of this lesson is to teach you how to integrate much of what you have already learned about Digital Signal Processing (DSP) and Image Convolution into Java programs that can be used to experiment with, and to understand the effects of a wide variety of image-convolution operations.

Part 1 of this lesson showed you how to design copying filters, smoothing filters, sharpening filters, 3D embossing filters, and edge detection filters, and how to apply those filters to images.  The results of numerous experiments using filters of the types listed above were presented.  This part of the lesson explains the code required to perform the experiments that were presented in Part 1.

You will need a driver program

The lesson entitled Processing Image Pixels Using Java: Controlling Contrast and Brightness provided and explained a class named ImgMod02a that makes it easy to:

  • Manipulate and modify the pixels that belong to an image.
  • Display the processed image along with the original image.

ImgMod02a serves as a driver that controls the execution of a second program that actually processes the pixels.  It displays the original and processed images in the standard format shown in Figure 1.


Figure 1

Get the class and the interface

The image-processing programs that I will explain in this lesson run under the control of ImgMod02a.  In order to compile and run the programs that I will provide in this lesson, you will need to go to the lessons entitled Processing Image Pixels Using Java: Controlling Contrast and Brightness and Processing Image Pixels using Java, Getting Started to get copies of the class named ImgMod02a and the interface named ImgIntfc02.

Viewing tip

You may find it useful to open another copy of this lesson in a separate browser window.  That will make it easier for you to scroll back and forth among the different figures and listings while you are reading about them.

Supplementary material

I recommend that you also study the other lessons in my extensive collection of online Java tutorials.  You will find those lessons published at Gamelan.com.  However, as of the date of this writing, Gamelan doesn't maintain a consolidated index of my Java tutorial lessons, and sometimes they are difficult to locate there.  You will find a consolidated index at www.DickBaldwin.com.

I particularly recommend that you study the lessons referred to in the References section of this lesson.

Background Information

A three-dimensional array of pixel data as type int

The driver program named ImgMod02a:

  • Extracts the pixels from an image file.
  • Converts the pixel data to type int.
  • Stores the pixel data in a three-dimensional array of type int that is well suited for processing.
  • Passes the three-dimensional array object's reference to a method in an object instantiated from an image-processing class.
  • Receives a reference to a three-dimensional array object containing processed pixel data from the image-processing method.
  • Displays the original image and the processed image in a stacked display as shown in Figure 1.
  • Makes it possible for the user to provide new input data to the image-processing method, invoking the image-processing method repeatedly in order to create new displays showing the newly-processed image along with the original image.

The manner in which that is accomplished was explained in earlier lessons.

A grid of colored pixels

Each three-dimensional array object represents one image consisting of a grid of colored pixels.  The pixels in the grid are arranged in rows and columns when they are rendered.  One of the dimensions of the array represents rows.  A second dimension represents columns.  The third dimension represents the color (and transparency) of the pixels.
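The layout described above can be sketched as a small Java example.  The plane ordering shown here (alpha in plane 0, red in plane 1, green in plane 2, blue in plane 3) is inferred from the alpha-copying code in Listing 11 and the plane numbers passed to scaleColorPlane later in this lesson; the helper class and method names are mine, not part of the lesson's code.

```java
// Sketch of the 3D pixel layout used throughout this series.
// Assumed plane ordering: 0 = alpha, 1 = red, 2 = green, 3 = blue.
public class PixelArrayDemo {

  // Build a rows-by-cols image in which every pixel is opaque red.
  public static int[][][] makeOpaqueRed(int rows, int cols) {
    int[][][] pix = new int[rows][cols][4];
    for (int row = 0; row < rows; row++) {
      for (int col = 0; col < cols; col++) {
        pix[row][col][0] = 255; // alpha: fully opaque
        pix[row][col][1] = 255; // red at full intensity
        pix[row][col][2] = 0;   // green
        pix[row][col][3] = 0;   // blue
      }
    }
    return pix;
  }

  public static void main(String[] args) {
    int[][][] img = makeOpaqueRed(2, 3);
    System.out.println("rows=" + img.length + " cols=" + img[0].length
        + " planes=" + img[0][0].length);
  }
}
```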

Convolution in one dimension

The earlier lesson entitled Convolution and Frequency Filtering in Java taught you about performing convolution in one dimension.  In that lesson, I showed you how to apply a convolution filter to a sampled time series in one dimension.  As you may recall, the mathematical process in one dimension involves the following steps:

  • Register the n-point convolution filter with the first n samples in the time series.
  • Compute an output value, which is the sum of the products of the convolution filter coefficient values and the corresponding time series values.
  • Optionally divide the sum of products output value by the number of filter coefficients.
  • Move the convolution filter one step forward, registering it with the next n samples in the time series and compute the next output value as a sum of products.
  • Repeat this process until all samples in the time series have been processed.
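The steps above can be sketched in a few lines of Java.  This is a minimal illustration of the sliding sum of products only; the actual convolve method in Listing 30 adds the normalization and scaling discussed later, and the class name here is mine.

```java
// Minimal sketch of one-dimensional convolution: register the filter
// with the first n samples, compute a sum of products, step forward,
// and repeat until the signal is exhausted.
public class Convolve1DSketch {

  public static double[] convolve(double[] signal, double[] filter) {
    // Output length matches the scheme in Listing 1:
    // outputLen = signalLen - filterLen.
    int outputLen = signal.length - filter.length;
    double[] output = new double[outputLen];
    for (int i = 0; i < outputLen; i++) {
      double sum = 0;
      for (int j = 0; j < filter.length; j++) {
        sum += signal[i + j] * filter[j];
      }
      // Optionally divide sum by filter.length here.
      output[i] = sum;
    }
    return output;
  }

  public static void main(String[] args) {
    double[] signal = {10, 10, 75, 10, 10, 10};
    double[] filter = {1}; // the single-impulse copy filter of Listing 2
    System.out.println(
        java.util.Arrays.toString(convolve(signal, filter)));
  }
}
```

With the single-impulse filter, the output simply reproduces the input, which is the behavior of the copy filter enabled in Listing 2.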

Convolution in two dimensions

Convolution in two dimensions involves essentially the same steps, except that in this case we are dealing with three separate 3D sampled surfaces (red, green, and blue) and a 3D convolution-filter surface instead of a simple sampled time series.

(There is a red surface, a green surface, and a blue surface, each of which must be processed.  Each surface has width and height corresponding to the first two dimensions of the 3D surface.  In addition, each sampled value that represents the surface can be different.  This constitutes the third dimension of the surface.  There is also an alpha or transparency surface that could be processed, but the programs in this lesson don't process the alpha surface.  Similarly, the convolution filter surface has three dimensions corresponding to width, height, and the values of the coefficients in the operator.  Don't confuse the dimensions of the array object containing the surface or the convolution filter with the dimensions of the surface or the convolution filter itself.)

Steps in the processing

Basically, the steps involved in processing one of the three surfaces to produce one output surface consist of:

  • Register the 2D aspect (width and height) of the convolution filter with the first 2D area centered on the first row of samples on the input surface.
  • Compute a point for the output surface, by computing the sum of the products of the convolution filter values and the corresponding input surface values.
  • Optionally divide the sum of products output value by the number of filter coefficients.
  • Move the convolution filter one step forward along the row, registering it with the next 2D area on the surface and compute the next point on the output surface as a sum of products.  When that row has been completely processed, move the convolution filter to the beginning of the next row, registering with the corresponding 2D area on the input surface and compute the next point for the output surface.
  • Repeat this process until all samples in the surface have been processed.

Repeat once for each color surface

Repeat the above set of steps three times, once for each of the three color surfaces.

Watch out for the edges

Special care must be taken to avoid having the edges of the convolution filter extend outside the boundaries of the input surface.
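The per-surface steps, including a simple way to keep the filter inside the boundaries, can be sketched as follows.  This is only an illustration: the loop limits here just stop the filter short of the edges and leave the border output points at zero, whereas ImgMod32's actual treatment of the edges, and its normalization, are more elaborate (see Listing 31 and the references).

```java
// Hedged sketch of 2D convolution for one color surface. The loop
// limits guarantee that the filter never extends outside the input
// surface; unreached border points simply remain zero.
public class Convolve2DSketch {

  public static double[][] convolveSurface(double[][] in, double[][] filt) {
    int rows = in.length, cols = in[0].length;
    int fRows = filt.length, fCols = filt[0].length;
    double[][] out = new double[rows][cols];
    for (int row = 0; row <= rows - fRows; row++) {
      for (int col = 0; col <= cols - fCols; col++) {
        // Sum of products between the filter and the 2D area of the
        // input surface currently registered with it.
        double sum = 0;
        for (int fr = 0; fr < fRows; fr++) {
          for (int fc = 0; fc < fCols; fc++) {
            sum += in[row + fr][col + fc] * filt[fr][fc];
          }
        }
        out[row][col] = sum;
      }
    }
    return out;
  }

  public static void main(String[] args) {
    double[][] surface = {{5, 6}, {7, 8}};
    double[][] copyFilter = {{1}}; // 1x1 identity filter
    double[][] out = convolveSurface(surface, copyFilter);
    System.out.println(out[0][0] + " " + out[1][1]);
  }
}
```

To process a full color image, this routine would be run three times, once per color surface, as described above.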

Testing

All of the code in this lesson was tested using J2SE 5.0 and WinXP.

Discussion and Sample Code

There are a rather large number of classes involved in producing the experimental results described in the first part of this lesson, which was entitled Processing Image Pixels, Applying Image Convolution in Java, Part 1.

Some of those classes have already been discussed in detail in earlier lessons.  For retrieval of the source code for those classes, I will simply refer you to the earlier lessons referred to in the References section of this lesson.

(As an alternative, you can probably just go to Google and enter the name of the class along with the keywords Baldwin and java and find the lessons online.)

Some of the classes are updated versions of classes discussed in earlier lessons.  In those cases, I will provide some, but not very much discussion of the new version of the class.  For the most part, I will simply refer you to the lesson containing the earlier version for the discussion.

Some of the classes are new to this lesson.  I will discuss and explain those classes in detail.

The class named Graph08

I will begin with the class named Graph08.  A complete listing of this class is provided in Listing 28 near the end of this lesson.

This is an updated version of the earlier plotting class named Graph03.  The update allows the user to plot up to eight functions instead of only five, as is the case with Graph03.

(Figure 2 shows a sample of the plotting format produced by the class named Graph08.)


Figure 2

The use of this class requires a corresponding interface named GraphIntfc08.  This interface, which is provided in Listing 29, is an update to the earlier interface named GraphIntfc01.

Graph03 and GraphIntfc01 were explained in the earlier lesson entitled Convolution and Matched Filtering in Java.  Because of the similarity between Graph08 and Graph03, I will simply refer you to that explanation, and won't repeat the explanation here.

The Dsp041 class

This class is new to this lesson.  A detailed description of the class was provided in Processing Image Pixels, Applying Image Convolution in Java, Part 1 of this lesson in the section entitled Preview.

This class must be run under control of the class named Graph08.  To run this class as a program, enter the following command at the command line prompt:

java Graph08 Dsp041

I will explain this class in fragments.  The source code for the class is provided in its entirety in Listing 30 near the end of the lesson.

This class illustrates the application of a convolution filter to signals having a known waveform, and displays the results in the format shown in Figure 2.

The class definition

The class definition for the class named Dsp041 begins in Listing 1.

class Dsp041 implements GraphIntfc08{
  //Establish length for various arrays
  int filterLen = 200;
  int signalLen = 400;
  int outputLen = signalLen - filterLen;
  //Ignore right half of signal, which is all zeros, when
  // computing the spectrum.
  int signalSpectrumPts = signalLen/2;
  int filterSpectrumPts = outputLen;
  int outputSpectrumPts = outputLen;
 

  //Create arrays to store different types of data.
  double[] signal = new double[signalLen];
  double[] filter = new double[filterLen];
  double[] output = new double[outputLen];
  double[] spectrumA = new double[signalSpectrumPts];
  double[] spectrumB = new double[filterSpectrumPts];
  double[] spectrumC = new double[outputSpectrumPts];

Listing 1

Note that the class implements the interface named GraphIntfc08.  This is a requirement for any class that is to be run under control of the class named Graph08.

The code in Listing 1 declares variables and creates array objects.  The code is straightforward and shouldn't require further explanation.

The constructor for the class named Dsp041

The constructor begins in Listing 2.

  public Dsp041(){//constructor
   
    //This is a single impulse filter that simply copies
    // the input to the output.
    filter[0] = 1;

/*
    //This is a high-pass filter with an output that is
    // proportional to the slope of the signal.  In
    // essence,the output approximates the first derivative
    // of the signal.
    filter[0] = -1.0;
    filter[1] = 1.0;

    //This is a high-pass filter with an output that is
    // proportional to the rate of change of the slope of
    // the signal. In essence, the output approximates the
    // second derivative of the signal.
    filter[0] = -0.5;
    filter[1] = 1.0;
    filter[2] = -0.5;

    //This is a relatively soft high-pass filter, which
    // produces a little blip in the output each time the
    // slope of the signal changes.  The size of the blip
    // is roughly proportional to the rate of change of the
    // slope of the signal.
    filter[0] = -0.2;
    filter[1] = 1.0;
    filter[2] = -0.2;

    //This is a low-pass smoothing filter.  It approximates
    // a four-point running average or integration of the
    // signal.
    filter[0] = 0.250;
    filter[1] = 0.250;
    filter[2] = 0.250;
    filter[3] = 0.250;
*/

Listing 2

Enable and disable code

An object of the class named Dsp041 applies a one-dimensional convolution filter to a signal with a known waveform and displays the results in the format shown in Figure 2.  By enabling and disabling code using comments, you can create and save a convolution filter having a given set of coefficient values.  Then you can recompile the class and rerun the program to see the effect of the filter on the signal.

Several predefined filter waveforms are provided in Listing 2.  Obviously, you can modify the code in Listing 2 to create a filter of your own design.

Create a signal time series

Listing 3 creates a signal time series containing four distinct waveforms.  These waveforms consist of all positive values riding on a positive non-zero baseline:

  • An impulse.
  • A rectangular pulse.
  • A triangular pulse with a large slope.
  • A triangular pulse with a smaller slope.

    //First create a baseline in the signal time series.
    //Modify the following value and recompile the class
    // to change the baseline.
    double baseline = 10.0;
    for(int cnt = 0;cnt < signalLen;cnt++){
      signal[cnt] = baseline;
    }//end for loop
   
    //Now add the pulses to the signal time series.
   
    //First add an impulse.
    signal[20] = 75;
   
    //Add a rectangular pulse.
    signal[30] = 75;
    signal[31] = 75;
    signal[32] = 75;
    signal[33] = 75;
    signal[34] = 75;
    signal[35] = 75;
    signal[36] = 75;
    signal[37] = 75;
    signal[38] = 75;
    signal[39] = 75;
   
    //Add a triangular pulse with a large slope.
    signal[50] = 10;
    signal[51] = 30;
    signal[52] = 50;
    signal[53] = 70;
    signal[54] = 90;
    signal[55] = 70;
    signal[56] = 50;
    signal[57] = 30;
    signal[58] = 10;
   
    //Add a triangular pulse with a smaller slope.
    signal[70] = 10;
    signal[71] = 20;
    signal[72] = 30;
    signal[73] = 40;
    signal[74] = 50;
    signal[75] = 60;
    signal[76] = 70;
    signal[77] = 80;
    signal[78] = 90;
    signal[79] = 80;
    signal[80] = 70;
    signal[81] = 60;
    signal[82] = 50;
    signal[83] = 40;
    signal[84] = 30;
    signal[85] = 20;
    signal[86] = 10;

Listing 3

The signal created by the code in Listing 3 is shown in the first graph at the top of Figure 2.

Obviously, you could replace the waveforms created by Listing 3 with signal waveforms of your own design if you elect to do so.  Just remember that if you are using this program to investigate the application of convolution to image color data, all color values in an image are positive.

Apply the convolution filter to the signal

Listing 4 invokes the method named convolve to apply the convolution filter to the signal.

    convolve(signal,filter,output);

Listing 4

You will find a complete listing of the method named convolve in Listing 30.

The code in the method named convolve emulates a one-dimensional version of the 2D image convolution scheme used in ImgMod032 with respect to normalization and scaling.  That normalization scheme was explained in an earlier lesson entitled Processing Image Pixels, Understanding Image Convolution in Java, and is also explained in detail in the comments in Listing 30.  While the normalization process is rather long and tedious, it is also straightforward and shouldn't require an explanation beyond the comments in Listing 30.

Aside from the normalization and scaling code, the actual convolution process implemented in the method named convolve has been explained in earlier lessons referred to in the References section.  Therefore, it shouldn't be necessary to provide a detailed explanation of the method named convolve.

Compute Discrete Fourier Transform of signal

Listing 5 invokes the method named dft to compute and save the Discrete Fourier Transform (DFT) of the signal expressed in decibels.

    //Ignore right half of signal which is all zeros.
    dft(signal,signalSpectrumPts,spectrumA);

Listing 5

The computation of the DFT has been explained in several earlier lessons referred to in the References section.  Also, there are extensive comments provided with the dft method in Listing 30.  Therefore, I won't repeat that explanation here.

The results of the DFT computation on the signal are shown in the fourth graph in Figure 2.
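For readers who want a concrete picture of what the dft method does, here is a sketch of a real-input DFT whose amplitude output is expressed in decibels.  The class name, the frequency scaling (folding frequency at k = numPts), and the 0 dB reference are my assumptions for illustration; the lesson's actual dft method is documented in Listing 30.

```java
// Hedged sketch: compute the first numPts points of the amplitude
// spectrum of a real time series and express them in decibels.
public class DftSketch {

  public static void dft(double[] data, int numPts, double[] spectrum) {
    int n = data.length;
    for (int k = 0; k < numPts; k++) {
      // Assumed scaling: k = numPts corresponds to the folding frequency.
      double freq = (double) k / (2.0 * numPts);
      double real = 0, imag = 0;
      for (int m = 0; m < n; m++) {
        real += data[m] * Math.cos(2 * Math.PI * freq * m);
        imag += data[m] * Math.sin(2 * Math.PI * freq * m);
      }
      double amp = Math.sqrt(real * real + imag * imag);
      // Small offset avoids taking the log of zero.
      spectrum[k] = 20 * Math.log10(amp + 1e-12);
    }
  }

  public static void main(String[] args) {
    double[] spectrum = new double[2];
    dft(new double[]{1, 1, 1, 1}, 2, spectrum);
    System.out.println("DC level in dB: " + spectrum[0]);
  }
}
```

A constant (DC) signal concentrates all of its energy at zero frequency, so the first spectral point equals 20*log10 of the sum of the samples.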

Compute the amplitude frequency response of the filter

Listing 6 computes and saves the DFT of the convolution filter expressed in db.

    dft(filter,filterSpectrumPts,spectrumB);

Listing 6

Note that the convolution filter is embedded in a long time series having zero values.  This causes the output of the DFT to be finely sampled and produces a smooth curve for the frequency response of the convolution filter.

The results of the DFT computation on the convolution filter are shown in the fifth graph in Figure 2.

Compute the spectrum of the filtered output

Listing 7 computes and saves the DFT of the filtered output expressed in decibels.

    dft(output,outputSpectrumPts,spectrumC);
  }//end constructor

Listing 7

The results of the DFT computation on the filtered output are shown in the sixth graph in Figure 2.

Listing 7 also signals the end of the constructor.

Plot the results

All of the time series and frequency domain functions have now been produced and saved.  They may be retrieved and plotted by invoking the method named getNmbr and the methods named f1 through f6 shown in Listing 30.  These methods are invoked by the object of the class named Graph08 for the purpose of producing the graphic output in the format shown in Figure 2.

The purpose of these methods is simply to scale and return the data to be plotted.  They are straightforward and shouldn't require an explanation beyond the comments provided in Listing 30.

The ImgMod33 class and the ImgMod33a class

These two classes are new to this lesson.  A detailed description of the two classes was provided in the Preview section of Part 1 of this lesson.

It is recommended that you read the material in that section before attempting to understand the program code in this section.

Program listings

A complete listing of the class named ImgMod33 is provided in Listing 31.  A complete listing of the class named ImgMod33a is provided in Listing 32.

The two classes are very similar

The two classes are the same except that ImgMod33a uses the class named ImgMod32a to perform the 2D convolution whereas ImgMod33 uses the class named ImgMod32 to perform the 2D convolution.  As a result, I will explain ImgMod33, but will not explain ImgMod33a.  However, I will explain the differences between ImgMod32 and ImgMod32a later in the lesson.

A general purpose 2D image convolution capability

Each of these classes provides a general purpose 2D image convolution and color filtering capability in Java, wherein the convolution filter is provided to the program by way of a text file.

Running the program

Both classes are designed to be driven by the class named ImgMod02a.

(The class named ImgMod02a was explained in the earlier lesson entitled Processing Image Pixels Using Java: Controlling Contrast and Brightness.)

The image file to be processed by convolution is specified on the command line.  Convolution filters are provided as text files.

Enter one of the following at the command line to run one of these programs (where ImageFileName is the name of a .gif or .jpg file, including the extension):

java ImgMod02a ImgMod33 ImageFileName
java ImgMod02a ImgMod33a ImageFileName

Specify the filter file

Then enter the name of a file containing a 2D convolution filter in the TextField that appears in the interactive control panel shown in Figure 3.


Figure 3

Click the Replot button on the Frame shown in Figure 1 to cause the convolution filter to be applied to the image and the filtered result to be displayed.

(See Figure 1 for an example of the Frame containing the original image, the processed image, and the Replot button.  See comments at the beginning of the method named getFilter in the class named ImgMod33 for a description and an example of the required format for the file containing the 2D convolution filter.)

Wave number data

Each time you click the Replot button, the following additional information is displayed in two separate color contour maps in the format shown in Figure 4:

  • The convolution filter.
  • The wave number response of the convolution filter.

Figure 4

(Note that the two contour maps do not appear side-by-side as shown in Figure 4.  Rather, they appear on top of one another.  You must move the one on the top to see the one on the bottom.)

The class definition

The class definition for the class named ImgMod33 begins in Listing 8.  Note that the class extends Frame and implements ImgIntfc02.

class ImgMod33 extends Frame implements ImgIntfc02{
                                       
  TextField fileNameField = new TextField("");
  Panel rgbPanel = new Panel();
  TextField redField = new TextField("1.0");
  TextField greenField = new TextField("1.0");
  TextField blueField = new TextField("1.0");
  Label instructions = new Label(
          "Enter Filter File Name and scale factors for " +
          "Red, Green, and Blue and click Replot");

Listing 8

The class extends Frame so that an object of the class serves as the interactive control panel shown in Figure 3.  The class implements the interface named ImgIntfc02 to make it possible for the class to run under the control of the class named ImgMod02a and to display the modified image in the format shown in Figure 1.

The code in Listing 8 simply declares and initializes some instance variables.

The constructor

The constructor for the class named ImgMod33 is shown in its entirety in Listing 9.

  ImgMod33(){//constructor
    setLayout(new GridLayout(4,1));
    add(new Label("Filter File Name"));
    add(fileNameField);
   
    //Populate the rgbPanel
    rgbPanel.add(new Label("Red"));
    rgbPanel.add(redField);
    rgbPanel.add(new Label("Green"));
    rgbPanel.add(greenField);
    rgbPanel.add(new Label("Blue"));
    rgbPanel.add(blueField);

    add(rgbPanel);
    add(instructions);
    setTitle("Copyright 2005, R.G.Baldwin");
    setBounds(400,0,460,125);
    setVisible(true);
  }//end constructor

Listing 9

The code in the constructor constructs the interactive control panel shown in Figure 3.

The method named processImg

The processImg method begins in Listing 10.

  public int[][][] processImg(int[][][] threeDPix,
                              int imgRows,
                              int imgCols){

    //Create an empty output array of the same size as the
    // incoming array.
    int[][][] output = new int[imgRows][imgCols][4];

Listing 10

The processImg method must be defined by all classes that implement ImgIntfc02.  The method is called at the beginning of the run and each time thereafter that the user clicks the Replot button on the Frame shown in Figure 1.

The processImg method gets a 2D convolution filter from a text file, applies it to the incoming 3D array of pixel data and returns a filtered 3D array of pixel data.

The code in Listing 10 creates an output array object in which to return the filtered image data.

Make a working copy

Listing 11 makes a working copy of the 3D pixel array to avoid making permanent changes to the original image data.

    int[][][] working3D = new int[imgRows][imgCols][4];
    for(int row = 0;row < imgRows;row++){
      for(int col = 0;col < imgCols;col++){
        working3D[row][col][0] = threeDPix[row][col][0];
        working3D[row][col][1] = threeDPix[row][col][1];
        working3D[row][col][2] = threeDPix[row][col][2];
        working3D[row][col][3] = threeDPix[row][col][3];
        //Copy alpha values directly to the output. They
        // are not processed when the image is filtered
        // by the convolution filter.
        output[row][col][0] = threeDPix[row][col][0];
      }//end inner loop
    }//end outer loop

Listing 11

Get the convolution filter from the file

Listing 12 gets the convolution filter from the specified text file containing the filter.

    //Get the file name containing the filter from the
    // textfield.
    String fileName = fileNameField.getText();
    if(fileName.equals("")){
      //The file name is an empty string. Skip the
      // convolution process and pass the input image
      // directly to the output.
      output = working3D;
    }else{
      //Get a 2D array that is populated with the contents
      // of the file containing the 2D filter.
      double[][] filter = getFilter(fileName);

Listing 12

Listing 12 gets the name of the file containing the convolution filter from the interactive control panel shown in Figure 3.  If the TextField contains an empty string (as is the case the first time the processImg method is called) the convolution process is skipped and the input image is passed directly to the output.

If the user later enters a valid file name into the TextField and clicks the Replot button, causing the processImg method to be called again, Listing 12 invokes the method named getFilter to read the convolution filter from the file and to deposit it into the array object referred to by filter.

The getFilter method

The getFilter method is shown in its entirety in Listing 31.  Although the method is rather long, it is straightforward and shouldn't require an explanation beyond the comments provided in the listing.  The comments also describe the required format for the text file containing the filter coefficients.

Plot impulse response and wave number response

Listing 13 plots the impulse response and wave number response of the 2D convolution filter as a pair of colored contour maps.

      //Plot the impulse response and the wave-number
      // response of the convolution filter.  These items
      // are not computed and plotted when the program
      // starts running.  Rather, they are computed and
      // plotted each time the user clicks the Replot
      // button after entering the name of a file
      // containing a convolution filter into the
      // TextField.
     
      //Begin by placing the impulse response in the
      // center of a large flat surface with an elevation
      // of zero.  This is done to improve the resolution
      // of the Fourier Transform, which will be computed
      // later.
      int numFilterRows = filter.length;
      int numFilterCols = filter[0].length;
      int rows = 0;
      int cols = 0;
      //Make the size of the surface ten pixels larger than
      // the convolution filter with a minimum size of
      // 32x32 pixels.
      if(numFilterRows < 22){
        rows = 32;
      }else{
        rows = numFilterRows + 10;
      }//end else
      if(numFilterCols < 22){
        cols = 32;
      }else{
        cols = numFilterCols + 10;
      }//end else
     
      //Create the surface, which will be initialized to
      // all zero values.
      double[][] filterSurface = new double[rows][cols];
      //Place the convolution filter in the center of the
      // surface.
      for(int row = 0;row < numFilterRows;row++){
        for(int col = 0;col < numFilterCols;col++){
          filterSurface[row + (rows - numFilterRows)/2]
                       [col + (cols - numFilterCols)/2] =
                                          filter[row][col];
        }//end inner loop
      }//end outer loop
 
      //Display the filter and the surface on which it
      // resides as a 3D plot in a color contour format.
      new ImgMod29(filterSurface,4,true,1);
 
      //Get and display the 2D Fourier Transform of the
      // convolution filter.
     
      //Prepare arrays to receive the results of the
      // Fourier transform.
      double[][] real = new double[rows][cols];
      double[][] imag = new double[rows][cols];
      double[][] amp = new double[rows][cols];
      //Perform the 2D Fourier transform.
      ImgMod30.xform2D(filterSurface,real,imag,amp);
      //Ignore the real and imaginary results.  Prepare the
      // amplitude spectrum for more-effective plotting by
      // shifting the origin to the center in wave-number
      // space.
      double[][] shiftedAmplitudeSpect =
                                 ImgMod30.shiftOrigin(amp);
                                  
      //Get and display the minimum and maximum wave number
      // values.  This is useful because, due to the way
      // that the wave number plots are normalized, it is
      // not possible to infer the flatness or lack thereof
      // of the wave number surface simply by viewing the
      // plot.  The colors that describe the elevations
      // always range from black at the minimum to white at
      // the maximum, with different colors in between,
      // regardless of the difference between the minimum
      // and the maximum.
      double maxValue = -Double.MAX_VALUE;
      double minValue = Double.MAX_VALUE;
      for(int row = 0;row < rows;row++){
        for(int col = 0;col < cols;col++){
          if(amp[row][col] > maxValue){
            maxValue = amp[row][col];
          }//end if
          if(amp[row][col] < minValue){
            minValue = amp[row][col];
          }//end if
        }//end inner loop
      }//end outer loop
     
      System.out.println("minValue: " + minValue);
      System.out.println("maxValue: " + maxValue);
      System.out.println("ratio: " + maxValue/minValue);
                                  
      //Generate and display the wave-number response
      // graph by plotting the 3D surface on the computer
      // screen.
      new ImgMod29(shiftedAmplitudeSpect,4,true,1);

Listing 13

I'm not going to tell you that the code in Listing 13 is straightforward.  It isn't.  However, I am going to tell you that everything that results from the code in Listing 13 has been explained in earlier lessons referred to in the References section of this lesson and I'm not going to repeat those explanations here.  If you don't understand the code in Listing 13, you simply need to go back and study those earlier lessons.

Perform the convolution

Finally, we have reached the main objective of the class named ImgMod33.  Listing 14 invokes the static convolve method of the ImgMod32 class to convolve the image with the 2D convolution filter.

      output = ImgMod32.convolve(working3D,filter);
    }//end else

Listing 14

The convolve method of the ImgMod32 class was explained in detail in the earlier lesson entitled Processing Image Pixels, Understanding Image Convolution in Java.  Therefore, I won't repeat that explanation here.

The important point is that this class named ImgMod33 serves as a driver, reading 2D convolution filters from text files, applying the filters to images, and causing the raw and processed images to be displayed under control of the class named ImgMod02a in the format shown in Figure 1.

Listing 14 also signals the end of the else clause that began in Listing 12.

Changes for ImgMod33a

Listing 14 also shows the location of the difference between the classes named ImgMod33 and ImgMod33a.  Whereas the class named ImgMod33 invokes the static convolve method belonging to the class named ImgMod32, the class named ImgMod33a invokes the convolve method belonging to the class named ImgMod32a.

These two convolve methods differ in the way that they normalize the convolution output to produce the required image color format consisting of unsigned eight-bit values.  I explained the normalization scheme implemented by ImgMod32 in the earlier lesson entitled Processing Image Pixels, Understanding Image Convolution in Java.  I will explain the normalization scheme implemented by the class named ImgMod32a later in this lesson.

Apply color filtering

Following the completion of the convolution operation, Listing 15 scales the color values in each color plane if the multiplicative factor in the corresponding TextField in the interactive control panel has a value other than 1.0.

    //Note that getText returns a String, so the comparison
    // must be against the String "1.0" rather than the
    // double value 1.0.
    if(!redField.getText().equals("1.0")){
      double scale = Double.parseDouble(
                                       redField.getText());
      scaleColorPlane(output,1,scale);
    }//end if on redField
   
    if(!greenField.getText().equals("1.0")){
      double scale = Double.parseDouble(
                                     greenField.getText());
      scaleColorPlane(output,2,scale);
    }//end if on greenField
   
    if(!blueField.getText().equals("1.0")){
      double scale = Double.parseDouble(
                                      blueField.getText());
      scaleColorPlane(output,3,scale);
    }//end if on blueField

Listing 15

The scaling for each color plane is actually performed by invoking the method named scaleColorPlane.

The scaleColorPlane method

The scaleColorPlane method is shown in its entirety in Listing 31.  The code in the method is straightforward and shouldn't require an explanation beyond the comments in the listing.
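To give you a rough idea of what such a method does, here is a hypothetical sketch.  This is not the actual code from Listing 31; the method name and the plane-index convention (0 = alpha, 1 = red, 2 = green, 3 = blue) follow the article, but everything else is an assumption:

```java
// Hypothetical sketch of a scaleColorPlane-style method.  It multiplies
// every value in one color plane of a 3D pixel array by a constant.
// Plane indices follow the convention 0=alpha, 1=red, 2=green, 3=blue.
public class ScaleColorPlaneSketch {
  static void scaleColorPlane(double[][][] pixels, int plane,
                              double scale) {
    for (int row = 0; row < pixels.length; row++) {
      for (int col = 0; col < pixels[0].length; col++) {
        pixels[row][col][plane] *= scale;
      }
    }
  }
}
```

Note that the sketch operates on data of type double; a version operating on int pixel data would be structured the same way.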

Return the processed image

Listing 16 returns a reference to the array containing the image, which has undergone convolution filtering, color filtering, or a combination of the two.

    return output;

  }//end processImg method

Listing 16

Listing 16 also signals the end of the processImg method, and the end of the ImgMod33 class insofar as this explanation is concerned.  Once again, however, the class contains a couple of additional methods, which do not need to be explained in detail in this lesson.

The class named ImgMod32a

The class named ImgMod32a is shown in its entirety in Listing 33.  This class is similar to ImgMod32 except that it uses a different normalization scheme when converting convolution results back to eight-bit unsigned values.  The normalization scheme causes the mean and the RMS of the output to match the mean and the RMS of the input.  Then it sets negative values to 0 and sets values greater than 255 to 255.

The class named ImgMod32 was explained in detail in the earlier lesson entitled Processing Image Pixels, Understanding Image Convolution in Java.  Much of the code in ImgMod32a is identical to the code in ImgMod32.  Therefore, I won't repeat the explanation for that code.  Rather, I will explain the code that differs between the two classes in the following sections.

The convolve method

All of the important changes were made to the static method named convolve.  I will confine my explanation to that method.  The static convolve method applies an incoming 2D convolution filter to each color plane in an incoming 3D array of pixel data and returns a filtered 3D array of pixel data.

The convolution filter is applied separately to each color plane.

The alpha plane is not modified.

Normalization

The output is normalized so as to guarantee that the output color values fall within the range from 0 to 255.  This is accomplished by causing the mean and the RMS of the color values in each output color plane to match the mean and the RMS of the color values in the corresponding input color plane.  Then, all negative color values are set to a value of 0 and all color values greater than 255 are set to 255.
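The steps just described can be sketched in one dimension.  This is a minimal illustration of the idea, not the ImgMod32a code itself; it assumes a single array of color values rather than a full color plane:

```java
// One-dimensional sketch of the normalization idea: force the output's
// mean and RMS (measured about the mean) to match the input's, then
// clip the result into the 0-255 range of unsigned eight-bit color.
public class NormalizeSketch {
  static int[] normalize(double[] input, double[] filtered) {
    double inMean = mean(input);
    double outMean = mean(filtered);
    double inRms = rmsAboutMean(input, inMean);
    double outRms = rmsAboutMean(filtered, outMean);
    int[] result = new int[filtered.length];
    for (int i = 0; i < filtered.length; i++) {
      // Zero the mean, match the RMS, then restore the input mean.
      double v = (filtered[i] - outMean) * (inRms / outRms) + inMean;
      // Clip negative values to 0 and values above 255 to 255.
      result[i] = (int) Math.round(Math.max(0.0, Math.min(255.0, v)));
    }
    return result;
  }

  static double mean(double[] a) {
    double sum = 0;
    for (double v : a) sum += v;
    return sum / a.length;
  }

  static double rmsAboutMean(double[] a, double m) {
    double sumSq = 0;
    for (double v : a) sumSq += (v - m) * (v - m);
    return Math.sqrt(sumSq / a.length);
  }
}
```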

Computations in type double

The convolution filter is passed to the method as a 2D array of type double.  All convolution and normalization arithmetic is performed as type double.  The normalized results are converted to type int before returning them to the calling method.

This method does not modify the contents of the incoming array of pixel data. 

A dead zone around the perimeter of the image

An unfiltered dead zone, half the filter dimension wide in each direction, is left around the perimeter of the filtered image to avoid any attempt to perform convolution using data from outside the bounds of the image.

The code for the convolve method

The convolve method begins in Listing 17.

  public static int[][][] convolve(
                    int[][][] threeDPix,double[][] filter){
    //Get the dimensions of the image and filter arrays.
    int numImgRows = threeDPix.length;
    int numImgCols = threeDPix[0].length;
    int numFilRows = filter.length;
    int numFilCols = filter[0].length;

    //Display the dimensions of the image and filter
    // arrays.
    System.out.println("numImgRows = " + numImgRows);
    System.out.println("numImgCols = " + numImgCols);
    System.out.println("numFilRows = " + numFilRows);
    System.out.println("numFilCols = " + numFilCols);

    //Make a working copy of the incoming 3D pixel array to
    // avoid making permanent changes to the original image
    // data. Convert the pixel data to type double in the
    // process.  Will convert back to type int when
    // returning from this method.
    double[][][] work3D = intToDouble(threeDPix);

Listing 17

The code in Listing 17 is straightforward and shouldn't require further explanation.

Display the mean values

Listing 18 invokes the getMean method to get and to display the mean value for each incoming color plane.

    //Display the mean value for each color plane.
    System.out.println(
                   "Input red mean: " + getMean(work3D,1));
    System.out.println(
                 "Input green mean: " + getMean(work3D,2));
    System.out.println(
                  "Input blue mean: " + getMean(work3D,3));

Listing 18

The getMean method is shown in its entirety in Listing 33.  The code in the method is straightforward and shouldn't require further explanation.
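For readers who don't have Listing 33 at hand, a hypothetical sketch of such a method follows.  Only the name and the plane-index convention come from the article; the body is an assumption:

```java
// Hypothetical sketch of a getMean-style method: computes the mean of
// all color values in one plane (1=red, 2=green, 3=blue) of a 3D
// pixel array.
public class GetMeanSketch {
  static double getMean(double[][][] pixels, int plane) {
    double sum = 0;
    int rows = pixels.length;
    int cols = pixels[0].length;
    for (int row = 0; row < rows; row++) {
      for (int col = 0; col < cols; col++) {
        sum += pixels[row][col][plane];
      }
    }
    return sum / (rows * cols);
  }
}
```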

Remove the mean value

Listing 19 invokes the method named removeMean to get and save the mean value for each color plane, and to remove the mean from every color value in each color plane.

    double redMean = removeMean(work3D,1);
    double greenMean = removeMean(work3D,2);
    double blueMean = removeMean(work3D,3);

Listing 19

You can view the removeMean method in Listing 33.  The code in the method is straightforward.
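A hypothetical sketch of the idea (again, not the actual Listing 33 code): compute the mean of the plane, subtract it from every value, and return the mean so it can be restored later.

```java
// Hypothetical sketch of a removeMean-style method: computes the mean
// of one color plane, subtracts it from every value in that plane, and
// returns the mean so the caller can restore it after filtering.
public class RemoveMeanSketch {
  static double removeMean(double[][][] pixels, int plane) {
    double sum = 0;
    int rows = pixels.length;
    int cols = pixels[0].length;
    for (int row = 0; row < rows; row++) {
      for (int col = 0; col < cols; col++) {
        sum += pixels[row][col][plane];
      }
    }
    double mean = sum / (rows * cols);
    for (int row = 0; row < rows; row++) {
      for (int col = 0; col < cols; col++) {
        pixels[row][col][plane] -= mean;
      }
    }
    return mean;
  }
}
```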

Get, save, and display the RMS value

The root mean square (RMS) value of a set of color values is the square root of the average of the squares of all the color values.

Listing 20 invokes the getRms method to get and save the RMS value for the color values belonging to each color plane.  These values will be used later to cause the RMS value for each color plane in the filtered image to match the RMS values for each color plane in the original image.

    //Get and save the input RMS value for later
    // restoration.
    double inputRedRms = getRms(work3D,1);
    double inputGreenRms = getRms(work3D,2);
    double inputBlueRms = getRms(work3D,3);
    
    //Display the input RMS value
    System.out.println("Input red RMS: " + inputRedRms);
    System.out.println(
                      "Input green RMS: " + inputGreenRms);
    System.out.println("Input blue RMS: " + inputBlueRms);

Listing 20

Once you know the definition of the RMS value, the code in the getRms method is straightforward.  You can view the method in its entirety in Listing 33.

Listing 20 also displays the RMS values for each color plane in the original image.
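As a hypothetical sketch of that definition in code (not the actual Listing 33 method):

```java
// Hypothetical sketch of a getRms-style method: the square root of the
// average of the squares of all color values in one plane.  When the
// mean has already been removed from the plane, the result measures
// the width of the color distribution.
public class GetRmsSketch {
  static double getRms(double[][][] pixels, int plane) {
    double sumSq = 0;
    int rows = pixels.length;
    int cols = pixels[0].length;
    for (int row = 0; row < rows; row++) {
      for (int col = 0; col < cols; col++) {
        double v = pixels[row][col][plane];
        sumSq += v * v;
      }
    }
    return Math.sqrt(sumSq / (rows * cols));
  }
}
```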

Create and condition the output array object

I will let the comments in Listing 21 speak for themselves.

    //Create an empty output array of the same size as the
    // incoming array of pixels.
    double[][][] output = 
                     new double[numImgRows][numImgCols][4];
    
    //Copy the alpha values directly to the output array.
    // They will not be processed during the convolution
    // process.
    for(int row = 0;row < numImgRows;row++){
      for(int col = 0;col < numImgCols;col++){
        output[row][col][0] = work3D[row][col][0];
      }//end inner loop
    }//end outer loop

Listing 21

Perform the 2D convolution

The code in Listing 22 performs the actual 2D convolution.

//Because of the length of the following statements, and
// the width of this publication format, this format
// sacrifices indentation style for clarity. Otherwise,it
// would be necessary to break the statements into so many
// short lines that it would be very difficult to read
// them.

//Use nested for loops to perform a 2D convolution of each
// color plane with the 2D convolution filter.

for(int yReg = numFilRows-1;yReg < numImgRows;yReg++){
  for(int xReg = numFilCols-1;xReg < numImgCols;xReg++){
    for(int filRow = 0;filRow < numFilRows;filRow++){
      for(int filCol = 0;filCol < numFilCols;filCol++){
        
        output[yReg-numFilRows/2][xReg-numFilCols/2][1] += 
                      work3D[yReg-filRow][xReg-filCol][1] *
                                    filter[filRow][filCol];

        output[yReg-numFilRows/2][xReg-numFilCols/2][2] += 
                      work3D[yReg-filRow][xReg-filCol][2] *
                                    filter[filRow][filCol];

        output[yReg-numFilRows/2][xReg-numFilCols/2][3] += 
                      work3D[yReg-filRow][xReg-filCol][3] *
                                    filter[filRow][filCol];

      }//End loop on filCol
    }//End loop on filRow

    //Divide the result at each point in the output by the
    // number of filter coefficients.  Note that in some
    // cases, this is not helpful.  For example, it is not
    // helpful when a large number of the filter
    // coefficients have a value of zero.
    output[yReg-numFilRows/2][xReg-numFilCols/2][1] = 
           output[yReg-numFilRows/2][xReg-numFilCols/2][1]/
                                   (numFilRows*numFilCols);
    output[yReg-numFilRows/2][xReg-numFilCols/2][2] = 
           output[yReg-numFilRows/2][xReg-numFilCols/2][2]/
                                   (numFilRows*numFilCols);
    output[yReg-numFilRows/2][xReg-numFilCols/2][3] = 
           output[yReg-numFilRows/2][xReg-numFilCols/2][3]/
                                   (numFilRows*numFilCols);

  }//End loop on xReg
}//End loop on yReg

Listing 22

This is the same code that I explained in the earlier lesson entitled Processing Image Pixels, Understanding Image Convolution in Java.  Therefore, I won't repeat that explanation here.

Restoring the mean and RMS values

In an earlier lesson entitled Processing Image Pixels Using Java: Controlling Contrast and Brightness I explained how to modify the width of the color distribution of each color plane in an image.  The RMS value is a measure of the width of the color distribution if the RMS value is computed after the mean has been removed from the color values.

Given a set of color values with a zero mean, we can modify the width of the color distribution simply by multiplying every color value in the set by the same constant.  If the constant is greater than 1.0, the resulting distribution will be wider.  If the constant is less than 1.0, the resulting distribution will be narrower.
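A quick numerical check of that claim: scaling a zero-mean set of values by a constant c scales its RMS value, and hence the width of the distribution, by exactly c.  The sample values here are made up for illustration.

```java
// Demonstration: multiplying zero-mean values by a constant multiplies
// the RMS value (the width of the distribution) by the same constant.
public class SpreadDemo {
  static double rms(double[] a) {
    double sumSq = 0;
    for (double v : a) sumSq += v * v;
    return Math.sqrt(sumSq / a.length);
  }

  public static void main(String[] args) {
    double[] values = {-2, -1, 0, 1, 2};  // zero-mean set of values
    double c = 3.0;
    double[] wider = new double[values.length];
    for (int i = 0; i < values.length; i++) {
      wider[i] = c * values[i];
    }
    // The second RMS value is exactly c times the first.
    System.out.println(rms(values));
    System.out.println(rms(wider));
  }
}
```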

Our objective here is to adjust the RMS value, and hence the width of the color distribution on each color plane, to match the RMS value for that color plane in the unfiltered image.

Make sure the mean is zero

Although the mean was removed prior to convolution, it is possible that arithmetic inaccuracies during the convolution process could result in a slightly non-zero mean in the filtered image.  The code in Listing 23 makes certain that the mean value is zero before adjusting the RMS value.

    removeMean(output,1);
    removeMean(output,2);
    removeMean(output,3);

Listing 23

Adjust the RMS value

The code in Listing 24 makes the adjustment to the RMS value for each color plane as described above.

    //Get and save the RMS value of the output for each
    // color plane.
    double outputRedRms = getRms(output,1);
    double outputGreenRms = getRms(output,2);
    double outputBlueRms = getRms(output,3);
    
    //Scale the output to cause the RMS value of the output
    // to match the RMS value of the input
    scaleColorPlane(output,1,inputRedRms/outputRedRms);
    scaleColorPlane(output,2,inputGreenRms/outputGreenRms);
    scaleColorPlane(output,3,inputBlueRms/outputBlueRms);

    //Display the adjusted RMS values.  Should match the
    // input RMS values.
    System.out.println(
                    "Output red RMS: " + getRms(output,1));
    System.out.println(
                  "Output green RMS: " + getRms(output,2));
    System.out.println(
                   "Output blue RMS: " + getRms(output,3));

Listing 24

Listing 24 also displays the final RMS value for each color plane in the filtered image.

Restore the mean value

The mean value of a set of color values in a color plane can be changed simply by adding the same constant to every color value belonging to the color plane.  Listing 25 restores the mean value for each color plane of the filtered output to the mean value of the corresponding color plane in the unfiltered image.

    addConstantToColor(output,1,redMean);
    addConstantToColor(output,2,greenMean);
    addConstantToColor(output,3,blueMean);

    System.out.println(
                  "Output red mean: " + getMean(output,1));
    System.out.println(
                "Output green mean: " + getMean(output,2));
    System.out.println(
                 "Output blue mean: " + getMean(output,3));

Listing 25

Listing 25 also displays the final mean value of each color plane in the filtered image.
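A hypothetical sketch of the mean-restoring operation (the actual method appears in Listing 33):

```java
// Hypothetical sketch of an addConstantToColor-style method: adds the
// same constant to every value in one color plane, which shifts the
// mean of that plane by the constant without changing its width.
public class AddConstantSketch {
  static void addConstantToColor(double[][][] pixels, int plane,
                                 double constant) {
    for (int row = 0; row < pixels.length; row++) {
      for (int col = 0; col < pixels[0].length; col++) {
        pixels[row][col][plane] += constant;
      }
    }
  }
}
```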

Guarantee that all color values are within the proper range

Listing 26 guarantees that all color values fall within the required range of 0 to 255 inclusive by setting negative values to zero and setting all values greater than 255 to 255.

    //Clip all negative color values at zero and all color
    // values that are greater than 255 at 255.
    clipToZero(output,1);
    clipToZero(output,2);
    clipToZero(output,3);
    
    clipTo255(output,1);
    clipTo255(output,2);
    clipTo255(output,3);

Listing 26

The methods named clipToZero and clipTo255 are straightforward.  They can be viewed in their entirety in Listing 33.
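Here is a hypothetical sketch of what such clipping methods look like; only the names and the plane-index convention come from the article:

```java
// Hypothetical sketches of clipToZero- and clipTo255-style methods,
// which together force every value in a color plane into the 0-255
// range required by unsigned eight-bit color.
public class ClipSketch {
  static void clipToZero(double[][][] pixels, int plane) {
    for (int row = 0; row < pixels.length; row++) {
      for (int col = 0; col < pixels[0].length; col++) {
        if (pixels[row][col][plane] < 0) pixels[row][col][plane] = 0;
      }
    }
  }

  static void clipTo255(double[][][] pixels, int plane) {
    for (int row = 0; row < pixels.length; row++) {
      for (int col = 0; col < pixels[0].length; col++) {
        if (pixels[row][col][plane] > 255) pixels[row][col][plane] = 255;
      }
    }
  }
}
```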

Return the filtered image

Listing 27 returns a reference to the array containing the filtered pixels, converting the color values from type double to type int in the process.

    return doubleToInt(output);

  }//end convolve method

Listing 27

Listing 27 also signals the end of the static convolve method belonging to the class named ImgMod32a.

Run the Programs

I encourage you to copy, compile, and run the programs that are provided in the section named Complete Program Listings.

Experiment with different filters and images

Experiment with the programs, creating new convolution filters and applying them to your own images.  Try to make certain that you understand the results of your experiments.  Modify the programs and observe the results of your modifications.

The required classes

The required classes and the instructions for running the programs are provided in the comments at the beginning of the programs.  The source code for the classes that are not provided in this lesson can be found in the earlier lessons referred to in the References section.

Perhaps the easiest way to find the source code for those classes is to go to Google and search for the following keywords where className is the name of the class of interest:

java baldwin "class className"

Have fun and learn

Above all, have fun and use these programs to learn as much as you can about image convolution.

Summary

This is Part 2 of a two-part lesson.  In the two parts of this lesson, I have provided you with a general purpose image convolution program.  In addition, I have walked you through several experiments intended to help you understand why image convolution does what it does.

I also showed you how to design and implement the following types of convolution filters:

  • A simple copy filter
  • Smoothing or softening filters
  • Sharpening filters
  • Embossing filters that produce a 3D-like effect
  • Edge detection filters

What's Next?

Future lessons will show you how to write image-processing programs that implement many common special effects as well as a few that aren't so common.  This will include programs to do the following:

  • Deal with the effects of noise in an image.
  • Morph one image into another image.
  • Rotate an image.
  • Change the size of an image.
  • Create a kaleidoscope of an image.
  • Other special effects that I may dream up or discover while doing the background research for the lessons in this series.

References

In preparation for understanding the material in this lesson, I recommend that you study the material in the following previously-published lessons:

  • 100   Periodic Motion and Sinusoids
  • 104   Sampled Time Series
  • 108   Averaging Time Series
  • 1478 Fun with Java, How and Why Spectral Analysis Works
  • 1482 Spectrum Analysis using Java, Sampling Frequency, Folding Frequency, and the FFT Algorithm
  • 1483 Spectrum Analysis using Java, Frequency Resolution versus Data Length
  • 1484 Spectrum Analysis using Java, Complex Spectrum and Phase Angle
  • 1485 Spectrum Analysis using Java, Forward and Inverse Transforms, Filtering in the Frequency Domain
  • 1487 Convolution and Frequency Filtering in Java
  • 1488 Convolution and Matched Filtering in Java
  • 1489 Plotting 3D Surfaces using Java
  • 1490 2D Fourier Transforms using Java
  • 1491 2D Fourier Transforms using Java, Part 2
  • 1492 Plotting Large Quantities of Data using Java
  • 400 Processing Image Pixels using Java, Getting Started
  • 402 Processing Image Pixels using Java, Creating a Spotlight
  • 404 Processing Image Pixels Using Java: Controlling Contrast and Brightness
  • 406 Processing Image Pixels, Color Intensity, Color Filtering, and Color Inversion
  • 408 Processing Image Pixels, Performing Convolution on Images
  • 410 Processing Image Pixels, Understanding Image Convolution in Java
  • 412 Processing Image Pixels, Applying Image Convolution in Java, Part 1






