Title
Facial Expression Analysis using Computational Techniques.
Publisher
Ain Shams University, Faculty of Computer & Information Sciences.
Author
Tantawi, Manal Mohsen Mohamed
Publication date
2008.
Number of pages
89 p.
Abstract

Automatic analysis (recognition) of facial expressions has rapidly become an area of intense interest in the research field of computer vision. This is due to its essential role in the study of behavioral science and in the development of human-computer interfaces (HCI). The process of facial expression recognition usually consists of three stages: preprocessing, where the face and its components are extracted from the input images; feature extraction, where features that appropriately represent the emotions expressed on the face are computed; and finally classification, where the extracted features are assigned to expression classes.
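For illustration only, the sketch below outlines these three stages in Python. The helper functions, the fixed normalization, and the simple linear scoring are placeholders assumed for this example; they are not the analyzer developed in this thesis.

```python
# Minimal sketch of the three-stage pipeline: preprocessing -> feature
# extraction -> classification. All names here are illustrative placeholders.
import numpy as np

def preprocess(image: np.ndarray) -> np.ndarray:
    """Stage 1: isolate the face region and normalize pixel intensities."""
    face = image  # a real system would detect and crop the face here
    return (face - face.mean()) / (face.std() + 1e-8)

def extract_features(face: np.ndarray, projection: np.ndarray) -> np.ndarray:
    """Stage 2: project the flattened face onto a learned low-dimensional basis
    (e.g. PCA eigenvectors stored as the columns of `projection`)."""
    return face.ravel() @ projection

def classify(features: np.ndarray, weights: np.ndarray, labels: list) -> str:
    """Stage 3: score each expression class and return the most likely one.
    A trained neural network would replace this linear scoring."""
    scores = features @ weights
    return labels[int(np.argmax(scores))]
```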
This work contributes to classifying facial expressions in terms of six prototypic basic expressions (anger, disgust, fear, happiness, sadness and surprise) from static grayscale images obtained from the JAFFE database. The proposed facial expression analyzer has been developed in four different approaches, according to how features are extracted and which artificial neural network model is used for classification.
In the feature extraction stage, features are extracted either globally (G) from the full face or modularly (M), where the eye and mouth regions are cropped according to three different schemes: 1) both eyes & mouth; 2) left eye, right eye & mouth; 3) one eye & mouth. In both cases, a Principal Component Analysis (PCA) neural network model is used. For the classification stage, Multi-Layer Perceptron (MLP), standard Radial Basis Function (RBF), and modular RBF (six modules, each specialized in recognizing one of the basic expressions) neural network models are employed. The four developed approaches are therefore (GPCA\MLP), (MPCA\MLP), (MPCA\RBF) and (MPCA\MRBF).
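As an illustration of the modular idea, the following sketch crops left-eye, right-eye and mouth regions (scheme 2), reduces each module with PCA, and trains a single MLP on the concatenated features. scikit-learn's batch PCA and MLPClassifier are only stand-ins for the PCA neural network and MLP models described in the thesis, and the image sizes, region coordinates and random data are illustrative assumptions.

```python
# Hedged sketch of an MPCA\MLP-style pipeline (left eye, right eye & mouth scheme).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
EXPRESSIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]

# Toy stand-in for grayscale face images (sizes chosen only for this example).
n_samples, h, w = 120, 64, 64
images = rng.random((n_samples, h, w))
labels = rng.integers(0, len(EXPRESSIONS), size=n_samples)

def crop_modules(img):
    """Return the three modules of scheme 2: left eye, right eye and mouth.
    The coordinates below are illustrative, not the thesis settings."""
    left_eye  = img[12:28,  8:30]
    right_eye = img[12:28, 34:56]
    mouth     = img[40:58, 16:48]
    return [m.ravel() for m in (left_eye, right_eye, mouth)]

# Fit one PCA per module and concatenate the reduced features.
modules = [np.array([crop_modules(img)[i] for img in images]) for i in range(3)]
pcas = [PCA(n_components=10).fit(m) for m in modules]
features = np.hstack([p.transform(m) for p, m in zip(pcas, modules)])

# A single MLP classifies the concatenated module features into six classes.
clf = MLPClassifier(hidden_layer_sizes=(30,), max_iter=2000, random_state=0)
clf.fit(features, labels)
print(EXPRESSIONS[clf.predict(features[:1])[0]])
```

The modular RBF (MRBF) variant would instead train six expression-specific modules and combine their outputs, rather than a single classifier as sketched here.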
Experiments showed that GPCA\MLP achieved an average recognition accuracy of 71.6%, while MPCA\MLP achieved an average recognition accuracy of 73% with the left eye, right eye & mouth modular scheme. MPCA\RBF increased the average recognition accuracy to 75% with both the both-eyes & mouth and the left eye, right eye & mouth schemes. Finally, the MPCA\MRBF approach with the left eye, right eye & mouth scheme achieved recognition accuracies ranging from 60% ('disgust' expression) up to 100% ('happiness' expression), with an average accuracy of 82%, which is the best result reported in the literature using the same database and testing paradigm. This result emphasizes the potential of the modular approach to enhance recognition accuracy, reduce long training times, and remain adaptive.