
Face Detection with Android APIs


April 18, 2012

Through two main APIs, Android provides a simple way for you to identify the faces of people in a bitmap image, with each face containing all the basic location information. This tutorial focuses on utilizing these APIs to accomplish the face detection task, which can be extended for many other interesting applications. As we work through these APIs, we will develop a simple working project. The entire source package is available for download as a reference.

One thing to note: face detection is a computer technology that determines the locations and sizes of human faces in arbitrary images. Do not confuse it with face recognition. A facial recognition system is a computer application for automatically identifying or verifying a person from a digital image, for example by comparing selected facial features from the image against a facial database. Simply put, face detection finds people's faces in images, while face recognition tries to determine who they are.

How To Install Android Face Detection APIs

As mentioned before, there are two main APIs introduced in this tutorial:

  • android.media.FaceDetector — finds the faces in a bitmap
  • android.media.FaceDetector.Face — holds the result data (midpoint, eyes distance, pose, and confidence) for each detected face

There is no installation necessary, since both classes come with the base Android APIs rather than from optional packages.

Constructing An Android Activity For Face Detection

You can construct a generic Android activity. We extend the base class ImageView to MyImageView, which serves as our main view for displaying the image as well as the face feature markers. At the moment, the bitmap containing the faces must be in the RGB_565 format for the APIs to work correctly. A detected face is only returned if its confidence measure is above the threshold defined in android.media.FaceDetector.Face.CONFIDENCE_THRESHOLD.

The most important method is setFace(). It instantiates the FaceDetector object and calls findFaces(). The results are then stored in faces, and the face midpoints are passed on to MyImageView for display.

import android.app.Activity;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.PointF;
import android.media.FaceDetector;
import android.os.Bundle;
import android.util.Log;
import android.view.ViewGroup.LayoutParams;

public class TutorialOnFaceDetect1 extends Activity {
    private MyImageView mIV;
    private Bitmap mFaceBitmap;
    private int mFaceWidth = 200;
    private int mFaceHeight = 200;
    private static final int MAX_FACES = 1;
    private static String TAG = "TutorialOnFaceDetect";

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        mIV = new MyImageView(this);
        setContentView(mIV, new LayoutParams(LayoutParams.WRAP_CONTENT, LayoutParams.WRAP_CONTENT));

        // load the photo
        Bitmap b = BitmapFactory.decodeResource(getResources(), R.drawable.face3);
        mFaceBitmap = b.copy(Bitmap.Config.RGB_565, true);
        b.recycle();

        mFaceWidth = mFaceBitmap.getWidth();
        mFaceHeight = mFaceBitmap.getHeight();
        mIV.setImageBitmap(mFaceBitmap);

        // perform face detection and set the feature points
        setFace();

        mIV.invalidate();
    }

    public void setFace() {
        FaceDetector fd;
        FaceDetector.Face[] faces = new FaceDetector.Face[MAX_FACES];
        PointF midpoint = new PointF();
        int[] fpx = null;
        int[] fpy = null;
        int count = 0;

        try {
            fd = new FaceDetector(mFaceWidth, mFaceHeight, MAX_FACES);
            count = fd.findFaces(mFaceBitmap, faces);
        } catch (Exception e) {
            Log.e(TAG, "setFace(): " + e.toString());
            return;
        }

        // check if we detect any faces
        if (count > 0) {
            fpx = new int[count];
            fpy = new int[count];

            for (int i = 0; i < count; i++) {
                try {
                    faces[i].getMidPoint(midpoint);

                    fpx[i] = (int) midpoint.x;
                    fpy[i] = (int) midpoint.y;
                } catch (Exception e) {
                    Log.e(TAG, "setFace(): face " + i + ": " + e.toString());
                }
            }
        }

        mIV.setDisplayPoints(fpx, fpy, count, 0);
    }
}
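As noted earlier, findFaces() only returns faces whose confidence clears FaceDetector.Face.CONFIDENCE_THRESHOLD (0.4). The idea of such a filter can be sketched in plain Java; the ConfidenceFilter class below is an illustration, not the API's internal code:

```java
public class ConfidenceFilter {
    // Keep only detections whose confidence clears the threshold;
    // the real FaceDetector applies this filtering internally.
    public static int countConfident(float[] confidences, float threshold) {
        int kept = 0;
        for (float c : confidences) {
            if (c > threshold) kept++;
        }
        return kept;
    }
}
```

With confidences {0.5, 0.3, 0.9} and a 0.4 threshold, two of the three candidates survive, which mirrors why a weak candidate face never shows up in the faces array.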

In the following code we added setDisplayPoints() to our MyImageView to render markers at the detected face features. Figure 1 shows a marker centered on the midpoint of the detected face.

// set up detected face features for display
public void setDisplayPoints(int [] xx, int [] yy, int total, int style) {
mDisplayStyle = style;
mPX = null;
mPY = null;

if (xx != null && yy != null && total > 0) {
mPX = new int[total];
mPY = new int[total];

for (int i = 0; i < total; i++) {
mPX[i] = xx[i];
mPY[i] = yy[i];
}
}
}


Figure 1: Single Face Detected in Android

Android Face Detection: Detecting Multiple Faces

You can specify the maximum number of faces to be detected when you construct the FaceDetector. For example, you can modify the following variable for this purpose. The API documentation does not say whether an upper limit exists, so you can try to detect as many faces as you need.

 private static final int MAX_FACES = 10;

Then you can use the count returned from findFaces() to obtain all the results from the array. Figure 2 is one example showing multiple markers centered on the respective midpoints of the detected faces.
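One caveat: findFaces() fills only the first count entries of the faces array, so slots past count remain null. Reading the results safely can be sketched in plain Java (FaceResults is a hypothetical helper name, and Object stands in for FaceDetector.Face so the sketch needs no Android classes):

```java
public class FaceResults {
    // Read at most `count` results from an array sized MAX_FACES;
    // entries past `count` were never filled and remain null.
    public static int collectResults(Object[] faces, int count) {
        int collected = 0;
        for (int i = 0; i < count && i < faces.length; i++) {
            if (faces[i] != null) collected++;  // e.g. faces[i].getMidPoint(...)
        }
        return collected;
    }
}
```

Looping to faces.length instead of count would dereference null entries whenever fewer than MAX_FACES faces are found.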


Figure 2: Multiple Faces Detected in Android

Android Face Detection: Approximating Eye Center Locations

The Android face detector returns other information as well, which we can use to fine-tune the results a little. For example, it also reports eyesDistance, pose, and confidence for each face. We can use eyesDistance to estimate the eye center locations.

This time we also call setFace() from a background thread inside doLengthyCalc(), because face detection can take long enough to trigger the "Application Not Responding" error when dealing with large images or images containing many faces.
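The background-thread pattern can be sketched in plain Java before looking at the Android version below; BackgroundSketch and its fake three-face result are illustrations, not part of the tutorial project. In the Activity, a Handler message replaces the join() used here:

```java
public class BackgroundSketch {
    // Run the lengthy detection off the calling thread, then hand
    // the result back once the worker finishes.
    public static int runOffMainThread() {
        final int[] result = new int[1];
        Thread t = new Thread(new Runnable() {
            public void run() {
                result[0] = 3; // stand-in for setFace() finding 3 faces
            }
        });
        t.start();
        try {
            t.join(); // the Activity instead posts a Handler message
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return result[0];
    }
}
```

The Handler is needed in the real Activity because only the UI thread may touch views; the worker therefore sends a message, and handleMessage() calls mIV.invalidate() on the UI thread.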

Figure 3 is one example showing multiple markers centered on the respective eyes of the detected faces.

import android.app.Activity;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.PointF;
import android.media.FaceDetector;
import android.os.Bundle;
import android.os.Handler;
import android.os.Message;
import android.util.Log;
import android.view.ViewGroup.LayoutParams;

public class TutorialOnFaceDetect extends Activity {
    private MyImageView mIV;
    private Bitmap mFaceBitmap;
    private int mFaceWidth = 200;
    private int mFaceHeight = 200;
    private static final int MAX_FACES = 10;
    private static String TAG = "TutorialOnFaceDetect";
    private static boolean DEBUG = false;

    protected static final int GUIUPDATE_SETFACE = 999;
    protected Handler mHandler = new Handler() {
        @Override
        public void handleMessage(Message msg) {
            mIV.invalidate();

            super.handleMessage(msg);
        }
    };

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        mIV = new MyImageView(this);
        setContentView(mIV, new LayoutParams(LayoutParams.WRAP_CONTENT, LayoutParams.WRAP_CONTENT));

        // load the photo
        Bitmap b = BitmapFactory.decodeResource(getResources(), R.drawable.face3);
        mFaceBitmap = b.copy(Bitmap.Config.RGB_565, true);
        b.recycle();

        mFaceWidth = mFaceBitmap.getWidth();
        mFaceHeight = mFaceBitmap.getHeight();
        mIV.setImageBitmap(mFaceBitmap);
        mIV.invalidate();

        // perform face detection in setFace() in a background thread
        doLengthyCalc();
    }

    public void setFace() {
        FaceDetector fd;
        FaceDetector.Face[] faces = new FaceDetector.Face[MAX_FACES];
        PointF eyescenter = new PointF();
        float eyesdist = 0.0f;
        int[] fpx = null;
        int[] fpy = null;
        int count = 0;

        try {
            fd = new FaceDetector(mFaceWidth, mFaceHeight, MAX_FACES);
            count = fd.findFaces(mFaceBitmap, faces);
        } catch (Exception e) {
            Log.e(TAG, "setFace(): " + e.toString());
            return;
        }

        // check if we detect any faces
        if (count > 0) {
            fpx = new int[count * 2];
            fpy = new int[count * 2];

            for (int i = 0; i < count; i++) {
                try {
                    faces[i].getMidPoint(eyescenter);
                    eyesdist = faces[i].eyesDistance();

                    // set up left eye location
                    fpx[2 * i] = (int) (eyescenter.x - eyesdist / 2);
                    fpy[2 * i] = (int) eyescenter.y;

                    // set up right eye location
                    fpx[2 * i + 1] = (int) (eyescenter.x + eyesdist / 2);
                    fpy[2 * i + 1] = (int) eyescenter.y;

                    if (DEBUG) {
                        Log.e(TAG, "setFace(): face " + i + ": confidence = " + faces[i].confidence()
                                + ", eyes distance = " + faces[i].eyesDistance()
                                + ", pose = (" + faces[i].pose(FaceDetector.Face.EULER_X) + ","
                                + faces[i].pose(FaceDetector.Face.EULER_Y) + ","
                                + faces[i].pose(FaceDetector.Face.EULER_Z) + ")"
                                + ", eyes midpoint = (" + eyescenter.x + "," + eyescenter.y + ")");
                    }
                } catch (Exception e) {
                    Log.e(TAG, "setFace(): face " + i + ": " + e.toString());
                }
            }
        }

        mIV.setDisplayPoints(fpx, fpy, count * 2, 1);
    }

    private void doLengthyCalc() {
        Thread t = new Thread() {
            Message m = new Message();

            public void run() {
                try {
                    setFace();
                    m.what = TutorialOnFaceDetect.GUIUPDATE_SETFACE;
                    TutorialOnFaceDetect.this.mHandler.sendMessage(m);
                } catch (Exception e) {
                    Log.e(TAG, "doLengthyCalc(): " + e.toString());
                }
            }
        };

        t.start();
    }
}
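The eye-location arithmetic in setFace() can be isolated into a small plain-Java helper: the left and right eye centers sit half of eyesDistance() to either side of the face midpoint, at the same vertical position. EyeEstimator is a hypothetical name for illustration, not part of the tutorial project:

```java
public class EyeEstimator {
    // Returns {leftX, leftY, rightX, rightY}, truncated to ints exactly
    // as setFace() stores them into the fpx/fpy arrays.
    public static int[] estimateEyes(float midX, float midY, float eyesDist) {
        return new int[] {
            (int) (midX - eyesDist / 2), (int) midY,  // left eye
            (int) (midX + eyesDist / 2), (int) midY   // right eye
        };
    }
}
```

For example, a face midpoint at (100, 50) with an eyes distance of 40 places the eye centers at (80, 50) and (120, 50).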


Figure 3: Eyes Detected in Android

Android Face Detection: Color vs. Grayscale

Generally speaking, face detection works mostly by searching for high-contrast areas that resemble facial features, so results from grayscale images are usually not far off those from color images. Some researchers are still working to improve detection accuracy in color images, but in practice other factors, such as lighting and occlusion, have a much bigger impact on accuracy.

We ran some sample grayscale and color images through the Android APIs and got similar results, so the APIs appear to rely little on color information. One example is shown below in Figure 4.
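This observation is consistent with detectors that operate largely on luminance. A standard RGB-to-grayscale conversion using the ITU-R BT.601 luma weights looks like the sketch below; GrayUtil is an illustrative helper, not code the Android detector exposes:

```java
public class GrayUtil {
    // ITU-R BT.601 luma weights, commonly used for RGB -> grayscale.
    // A pure gray pixel (r == g == b) maps to itself, which is why a
    // grayscale image carries essentially the same contrast structure.
    public static int toGray(int r, int g, int b) {
        return (int) Math.round(0.299 * r + 0.587 * g + 0.114 * b);
    }
}
```

For instance, pure white (255, 255, 255) stays 255 and pure red (255, 0, 0) maps to gray level 76, preserving the contrast edges the detector looks for.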


Figure 4: Grayscale Face Detected in Android

Conclusion

In this tutorial, we introduced the simple face detector in the Android APIs and worked through a real example. The entire software package is available for download; you can import it into Eclipse by selecting "Create project from existing source." If you are interested in exploring Android face detection further, here are some helpful considerations:

  • Many applications can make good use of face detection. For example, it can be used to remove the red-eye defect, count the number of people, correct camera focus, align face features, or build face databases.
  • There are many publicly available face databases that you can use for your own implementations.
  • In terms of real-time applications (e.g. live camera stream), the face detection performance from Android APIs could be less than satisfactory. Consider looking into OpenCV for Android.

Android Face Detection Code Download

About the Author

Chunyen Liu has been a software professional for many years. Some of his applications were among winners at programming contests administered by Sun, ACM, and IBM. He has co-authored software patents, written 20+ articles, reviewed books, and also created numerous hobby apps at Androidlet and The J Maker. He holds advanced degrees in Computer Science with knowledge from 20+ graduate-level courses. On the non-technical side, he is a tournament-ranked table tennis player, certified umpire, and certified coach of USA Table Tennis.
