Abstract:
Next-generation search engines will enable query formulations other than text, relying on visual information encoded as images and shapes. Content-based retrieval research aims to develop search engines that allow users to query by similarity of content. This thesis deals with two fundamental problems in content-based 3D object retrieval: (1) How to describe a 3D shape to obtain a reliable representation for the subsequent task of similarity search? (2) How to supervise the search process to learn inter-shape similarities for more effective and semantically meaningful retrieval?

Concerning the first problem, we develop a novel 3D shape description scheme based on the probability density of multivariate local surface features. We first compute local characterizations of points on the 3D surface and then summarize the resulting local shape information into a global shape descriptor. This local-to-global conversion circumvents the correspondence problem between two shapes and proves to be robust and effective. Experiments conducted on several 3D object databases show that density-based descriptors are fast to compute and highly effective for 3D similarity search.

Concerning the second problem, we propose a similarity learning scheme that incorporates supervision into the querying process. Our approach combines multiple similarity scores by optimizing a convex, regularized version of the empirical ranking risk criterion. This score fusion approach to similarity learning is applicable to a variety of search engine problems over arbitrary data modalities. In this work, we demonstrate its effectiveness in 3D object retrieval.
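As an informal illustration of the density-based idea only (not the thesis implementation; the function names, the two-dimensional synthetic "local features", and the use of SciPy's Gaussian kernel density estimator are assumptions of this sketch), the snippet below shows how per-point local feature vectors can be summarized into a fixed-length global descriptor by evaluating their estimated density at a shared set of target points, so that two shapes can be compared without any point-to-point correspondence:

```python
import numpy as np
from scipy.stats import gaussian_kde

def density_descriptor(local_features, targets):
    """Summarize per-point local surface features (n_points x d) into a
    fixed-length global descriptor: estimate the density of the features
    and evaluate it at a common set of target points."""
    kde = gaussian_kde(local_features.T)   # KDE over the d-dimensional feature space
    desc = kde(targets.T)                  # density values at the shared targets
    return desc / desc.sum()               # normalize to a discrete probability vector

# Hypothetical usage with synthetic 2-D "local features" for two shapes;
# descriptors of equal length are compared with an L1 dissimilarity.
rng = np.random.default_rng(0)
shape_a = rng.normal(size=(500, 2))
shape_b = rng.normal(loc=0.3, size=(500, 2))
targets = np.stack(np.meshgrid(np.linspace(-3, 3, 8),
                               np.linspace(-3, 3, 8)), axis=-1).reshape(-1, 2)
d_a = density_descriptor(shape_a, targets)
d_b = density_descriptor(shape_b, targets)
dissimilarity = np.abs(d_a - d_b).sum()
```

Comparing the resulting descriptor vectors with a standard distance (here L1) stands in for the similarity search step; the thesis itself works with richer local surface features and dissimilarity measures than this toy example uses.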