Journal of Applied Science and Engineering

Published by Tamkang University Press


Wen-Bing Horng1, Cheng-Ping Lee1 and Chun-Wen Chen1

1Department of Computer Science and Information Engineering, Tamkang University, Tamsui, Taipei, Taiwan 251, R.O.C.


Received: July 19, 2001
Accepted: August 10, 2001
Publication Date: September 1, 2001


This paper proposes an age group classification system for gray-scale facial images. Four age groups are used in the classification: babies, young adults, middle-aged adults, and old adults. The system operates in three phases: feature location, feature extraction, and age classification. Based on the symmetry of human faces and the variation of gray levels, the positions of the eyes, nose, and mouth can be located by applying the Sobel edge operator and region labeling. Two geometric features and three wrinkle features are then extracted from each facial image. Finally, two back-propagation neural networks perform the classification. The first employs the geometric features to determine whether the facial image is of a baby; if not, the second uses the wrinkle features to classify the image into one of the three adult groups. The proposed system was evaluated with 230 facial images on a Pentium II 350 processor with 128 MB RAM, with half of the images used for training and the other half for testing. Classifying an image takes 0.235 seconds on average. The identification rate reaches 90.52% on the training images and 81.58% on the test images, which is roughly comparable to human subjective judgment.
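The feature-location phase begins with Sobel edge detection. As a minimal sketch of that first step (the paper's system additionally applies region labeling to the resulting edge map, which is not shown here), the following computes the Sobel gradient magnitude of a gray-scale image using only NumPy; the kernels and the synthetic test image are illustrative, not taken from the paper:

```python
import numpy as np

def sobel_edges(img):
    """Return the Sobel gradient magnitude of a 2-D gray-scale image."""
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)  # horizontal-gradient kernel
    ky = kx.T                                  # vertical-gradient kernel
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    # Slide the 3x3 kernels over the image (valid region only).
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    return np.hypot(gx, gy)  # per-pixel gradient magnitude

# Synthetic image with a vertical step edge between columns 3 and 4.
img = np.zeros((8, 8))
img[:, 4:] = 255.0
mag = sobel_edges(img)  # large responses only where patches span the step
```

Thresholding `mag` and grouping the connected high-response pixels (the region-labeling step) would then yield candidate regions for the eyes, nose, and mouth.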

Keywords: Age Classification, Facial Feature Extraction, Neural Network
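The two-network classification stage described in the abstract forms a simple cascade: the first network screens for babies using geometric features, and only non-babies are passed to the second network for three-way adult classification. A sketch of that decision flow, with hypothetical callables standing in for the two trained back-propagation networks (the toy stand-ins below are illustrative, not the paper's models):

```python
def classify_age_group(geom_feats, wrinkle_feats, is_baby, adult_group):
    """Two-stage cascade: baby screening, then adult classification.

    is_baby     -- stands in for the first back-propagation network;
                   maps geometric features to True/False.
    adult_group -- stands in for the second network; maps wrinkle
                   features to "young", "middle-aged", or "old".
    """
    if is_baby(geom_feats):
        return "baby"
    return adult_group(wrinkle_feats)

# Toy stand-ins for the two trained networks (thresholds are made up).
baby_detector = lambda g: g[0] > 0.5            # e.g. a geometric ratio
adult_classifier = lambda w: "young" if sum(w) < 1.0 else "old"

label = classify_age_group([0.8], [0.2], baby_detector, adult_classifier)
```

The cascade means the wrinkle features never influence a baby decision, matching the description that the second network is consulted only when the first rejects the baby hypothesis.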
