Abstract:
All autonomous robots need to gather information about their surroundings. Once information about the environment is accurately extracted, a great deal of further research becomes possible, ranging from human-computer interaction to high-level action planning. However, experience indicates that achieving accurate perception is a challenging problem. One of the problems is the erroneous perception of unrelated features of the environment as landmarks. Due to the imperfect nature of sensors, methods should be developed to compensate for the uncertainties introduced by low-level perception algorithms. From this point of view, the self-localization literature may be seen as an effort to resolve the uncertainties introduced by lower-level perception algorithms. This thesis proposes an algorithm that operates between the low-level visual detection algorithms and the higher-level modules of a robot. The algorithm aims to detect and remove erroneous perception information generated by misplaced landmarks. Defining landmarks and their correct locations might sound very environment-dependent at first; however, this is not the case, owing to the generic definitions of a landmark and of a correct location. By using the meta pose instead of a specific pose as the state space, the method becomes easily portable with only a few alterations to the parameters of the underlying probabilistic framework. Experiments were performed in two different environments to demonstrate the general nature of the proposed algorithm.