Title
Multimodal Human verification with performance evaluation /
Author
Ali, Hanaa Shaker Abdel-Baset.
Preparation Committee
Researcher / Hanaa Shaker Abdel-Baset Ali
Supervisor / Mahmoud I. Abdalla
Subject
Performance - Evaluation. Electronics.
Publication Date
2011.
Number of Pages
ix, 113 p. :
Language
English
Degree
Doctorate
Specialization
Electrical and Electronic Engineering
Approval Date
1/1/2011
Place of Approval
Zagazig University - Faculty of Engineering - Electronics and Communication
Index
Only 14 pages are available for public view (out of 130).

Abstract

Identity recognition systems are an important part of our everyday life. Information-system and computer-network security, such as user authentication and access control for databases, is an important potential application area for biometrics. Biometric systems based on face images and/or speech signals have been shown to be quite effective. However, their performance degrades easily in the presence of a mismatch between training and testing conditions.
A system that uses more than one biometric at the same time is known as a multimodal system. It typically consists of several modality experts and a decision stage. Multimodal systems can be more robust and give higher recognition accuracy. One factor important to the accuracy of a multimodal system is the choice of technique deployed for data fusion. Another important issue is variation in the biometric data. Such variations are reflected in the corresponding biometric scores and can thereby influence the overall effectiveness of multimodal biometric recognition.
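
As an illustration of score-level fusion, the following is a minimal Python sketch that combines the outputs of a face expert and a speech expert with a weighted sum after min-max normalization. The normalization step, the weight w_face, and the function names are assumptions made for illustration; the exact fusion rule used in the thesis is not stated in this excerpt.

    import numpy as np

    def min_max_normalize(scores):
        # Map raw matcher scores to the [0, 1] range before fusion.
        scores = np.asarray(scores, dtype=float)
        return (scores - scores.min()) / (scores.max() - scores.min())

    def weighted_sum_fusion(face_scores, speech_scores, w_face=0.5):
        # Combine per-identity scores from the two modality experts and return
        # the fused scores plus the index of the best-matching identity.
        f = min_max_normalize(face_scores)
        s = min_max_normalize(speech_scores)
        fused = w_face * f + (1.0 - w_face) * s
        return fused, int(np.argmax(fused))

    # Example: three enrolled identities; the claimant matches identity 1 best overall.
    fused, best = weighted_sum_fusion([0.2, 0.9, 0.4], [10.0, 35.0, 20.0])
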
In this thesis, a score fusion personal identification method using both face and speech is introduced to improve on the recognition rate of single-biometric identification. For speaker recognition, an effective and robust method is proposed for extracting speech features that is capable of operating in noisy environments. Based on the time-frequency multi-resolution property of the wavelet transform, the input speech signal is decomposed into various frequency channels. To capture the characteristics of the signal, Mel-Frequency Cepstral Coefficients (MFCCs) of the wavelet channels are calculated. Hidden Markov Models (HMMs) are used in the recognition stage, as they model the speaker's features better than Dynamic Time Warping (DTW). Comparison of the proposed approach with the conventional MFCC feature extraction method shows that the proposed method not only effectively reduces the influence of noise, but also improves recognition accuracy.
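
The following is a minimal sketch of the described feature extraction chain, assuming the PyWavelets and librosa libraries; the wavelet family (db4), decomposition level (3), and number of MFCCs (13) are illustrative choices not specified in this excerpt, and frame averaging is used only to keep the sketch short (in practice, per-frame feature sequences would be fed to the HMMs).

    import numpy as np
    import pywt
    import librosa

    def wavelet_mfcc_features(signal, sr, wavelet="db4", level=3, n_mfcc=13):
        # Decompose the speech signal into wavelet channels; wavedec returns
        # [approximation, detail_level, ..., detail_1].
        coeffs = pywt.wavedec(np.asarray(signal, dtype=float), wavelet, level=level)
        features = []
        for channel in coeffs:
            # MFCCs capture the spectral envelope of each sub-band channel.
            # Passing the original sampling rate is a simplification: the
            # wavelet channels are downsampled relative to the input signal.
            mfcc = librosa.feature.mfcc(y=channel, sr=sr, n_mfcc=n_mfcc)
            features.append(mfcc.mean(axis=1))  # average over frames per channel
        return np.concatenate(features)         # one feature vector per utterance
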
For face recognition, a wavelet-only scheme is used in the feature extraction stage and a nearest-neighbour classifier is used in the recognition stage. In searching for the most successful subbands, it is found that the highest recognition accuracy is obtained using the approximations at level 3, followed by the horizontal details at level 3, while the vertical and diagonal details give poor performance. Z-score normalization is performed on the selected wavelet subband coefficients by subtracting the mean and dividing by the standard deviation.
Histogram Equalization (HE) and Adaptive Histogram Equalization (AHE) are applied in.
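
A minimal sketch of the face feature extraction and matching steps described above, assuming PyWavelets and a Euclidean nearest-neighbour rule; the wavelet family (haar) is an assumption, since the excerpt only states that the level-3 approximation coefficients are used and z-score normalized.

    import numpy as np
    import pywt

    def face_feature_vector(image, wavelet="haar", level=3):
        # Level-3 2-D wavelet decomposition of the face image; coeffs[0] is the
        # approximation sub-band (cA3), reported as the most successful sub-band.
        coeffs = pywt.wavedec2(np.asarray(image, dtype=float), wavelet, level=level)
        vec = coeffs[0].ravel()
        # Z-score normalization: subtract the mean, divide by the standard deviation.
        return (vec - vec.mean()) / vec.std()

    def nearest_neighbour(probe, gallery_features, gallery_labels):
        # Assign the probe to the enrolled identity with the closest feature vector.
        dists = [np.linalg.norm(probe - g) for g in gallery_features]
        return gallery_labels[int(np.argmin(dists))]
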