Abstract:
Smartphones equipped with a rich set of sensors are being explored as alternative platforms for human activity recognition in the ubiquitous computing domain. However, several challenges must be tackled before such systems can be widely adopted. In this thesis, we focus in particular on the challenges arising from differences in user behavior and in hardware. To investigate the impact of these factors on recognition accuracy, we collected data from 20 users performing five basic locomotion activities, using the accelerometer, gyroscope, and magnetometer. Using this dataset, we analyze whether activity recognition can be performed independently of device, device model, user, device orientation, and device position. We first show that, using raw acceleration, above 96% recognition accuracy can be obtained in the device- and model-dependency tests, while the success rates for the orientation- and user-dependency tests remain at 87% and 90%, respectively. To address these issues, we first compute linear acceleration and then, using sensor fusion, convert the acceleration readings from the phone's coordinate frame to the earth coordinate frame. These methods remove the orientation effects and increase both user-independent and orientation-independent activity recognition accuracy, to 98% and 95%, respectively. Finally, we analyze the impact of phone position on activity recognition using three different methods, namely a generalized classifier, position-specific classifiers, and a joint classifier, and show that position-specific classification is not necessary: a generalized classifier performs very similarly. However, analyzing the confusion matrices, we observe that stationary activities (sitting and standing) reduce performance, and merging them into a single stationary class boosts recognition rates up to 98%.
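The phone-to-earth coordinate transformation mentioned above can be sketched as follows. This is a minimal illustration only, not the thesis's exact sensor-fusion pipeline: it assumes gravity and magnetic-field estimates are already available and builds a rotation matrix in the style of Android's `SensorManager.getRotationMatrix`; the function name and the use of NumPy are illustrative choices.

```python
import numpy as np

def earth_frame_acceleration(lin_acc, gravity, magnetic):
    """Rotate one linear-acceleration sample from the phone's body frame
    into an earth-referenced (east, north, up) frame, using gravity and
    magnetic-field readings to construct the rotation matrix.

    All three inputs are 3-vectors in the phone's coordinate system
    (Android convention: a phone lying flat, face up, reports gravity
    as roughly (0, 0, 9.81))."""
    up = gravity / np.linalg.norm(gravity)   # unit vector toward the sky, in phone coords
    east = np.cross(magnetic, gravity)       # B x g points east (as in getRotationMatrix)
    east /= np.linalg.norm(east)
    north = np.cross(up, east)               # completes the right-handed frame
    R = np.stack([east, north, up])          # rows: earth axes expressed in phone coords
    return R @ lin_acc                       # phone frame -> earth frame
```

Because the resulting (east, north, up) readings no longer depend on how the phone was held, features computed from them are insensitive to device orientation, which is the effect exploited in the orientation-independence experiments.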