Archives and Documentation Center
Digital Archives

Automatic analysis of head and facial gestures in video streams


dc.contributor Graduate Program in Electrical and Electronic Engineering.
dc.contributor.advisor Sankur, Bülent.
dc.contributor.author Akakın, Hatice Çınar.
dc.date.accessioned 2023-03-16T10:17:25Z
dc.date.available 2023-03-16T10:17:25Z
dc.date.issued 2010.
dc.identifier.other EE 2010 A33 PhD
dc.identifier.uri http://digitalarchive.boun.edu.tr/handle/123456789/12758
dc.description.abstract Automatic analysis of head gestures and facial expressions is a challenging research area with significant applications in intelligent human-computer interfaces. An important task is the automatic classification of non-verbal messages composed of facial signals, where both facial expressions and head rotations are observed. This is challenging because there is no definite grammar or codebook for mapping non-verbal facial signals onto a corresponding mental state. Furthermore, non-verbal facial signals and the emotions they convey depend on personality, social context, mood, and the context in which they are displayed or observed. This thesis addresses three tasks required for an effective vision-based automatic face and head gesture (FHG) analyzer. First, we develop a fully automatic, robust, and accurate 17-point facial landmark localizer based on the local appearance and structural information of the landmarks. Second, we develop a multistep facial landmark tracker that handles simultaneous head rotations and facial expressions. Third, we analyze the mental states underlying facial behavior using time series of the extracted features. We consider two data representations: facial landmark trajectories and the spatiotemporal evolution of the face image during an emotional expression. Novel and distinct sets of features are extracted from these face representations for automatic facial expression recognition: landmark coordinate time series, facial geometric features, and appearance patches on expressive regions of the face. We comparatively evaluate feature sequence classifiers (Hidden Markov Models and Hidden Conditional Random Fields) and feature subspace methods (Independent Component Analysis, Non-negative Matrix Factorization, and the Discrete Cosine Transform) applied to the spatiotemporal data with a modified nearest-neighbor classifier.
The proposed algorithms improve on state-of-the-art performance for both posed and spontaneous databases.
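The subspace branch of the pipeline described above (Discrete Cosine Transform features on spatiotemporal face data, matched with a nearest-neighbor classifier) can be sketched roughly as follows. This is a minimal illustrative sketch, not the thesis implementation: the function names, the plain Euclidean nearest neighbor, the coefficient cutoff `k`, and the random stand-in data are all assumptions for demonstration only.

```python
# Illustrative sketch (not the thesis code): classify a spatiotemporal
# face volume by low-frequency 3D DCT features + nearest-neighbor matching.
import numpy as np
from scipy.fft import dctn

def dct_features(volume, k=4):
    """Keep the k x k x k low-frequency DCT coefficients of a
    (frames, height, width) grayscale face volume as a feature vector."""
    coeffs = dctn(volume, norm="ortho")
    return coeffs[:k, :k, :k].ravel()

def nearest_neighbor(query, gallery_feats, labels):
    """Return the label of the gallery sample closest in Euclidean distance
    (the thesis uses a modified nearest-neighbor rule; this is the plain one)."""
    d = np.linalg.norm(gallery_feats - query, axis=1)
    return labels[int(np.argmin(d))]

# Random stand-in "expression volumes" in place of real face sequences.
rng = np.random.default_rng(0)
gallery = [rng.standard_normal((10, 32, 32)) for _ in range(6)]
labels = ["happy", "happy", "sad", "sad", "surprise", "surprise"]
feats = np.stack([dct_features(v) for v in gallery])

# A slightly noisy copy of gallery[2] should match its own label.
query = dct_features(gallery[2] + 0.01 * rng.standard_normal((10, 32, 32)))
print(nearest_neighbor(query, feats, labels))  # expect "sad"
```

Truncating to the low-frequency DCT corner acts as both dimensionality reduction and smoothing, which is why such subspace features pair naturally with a simple distance-based classifier.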
dc.format.extent 30cm.
dc.publisher Thesis (Ph.D.) - Bogazici University. Institute for Graduate Studies in Science and Engineering, 2010.
dc.subject.lcsh Facial expression.
dc.subject.lcsh Face perception.
dc.title Automatic analysis of head and facial gestures in video streams
dc.format.pages xxvi, 163 leaves;

