
Computer vision-based human action recognition via keypoint tracking


dc.contributor Graduate Program in Computer Engineering.
dc.contributor.advisor Akarun, Lale.
dc.contributor.author Kara, Yunus Emre.
dc.date.accessioned 2023-03-16T10:00:22Z
dc.date.available 2023-03-16T10:00:22Z
dc.date.issued 2011.
dc.identifier.other CMPE 2011 K37
dc.identifier.uri http://digitalarchive.boun.edu.tr/handle/123456789/12179
dc.description.abstract Computer vision-based human action recognition is a highly active research area with many applications, including security, surveillance, assisted living, and entertainment. In this thesis, a new system for computer vision-based recognition of human actions is presented. The proposed system takes videos as input. The approach is invariant to the location of the action, the zoom level, the appearance of the person, partial occlusions (including self-occlusions), and some viewpoint changes, and it is robust against variations in temporal length. Keypoints are tracked through time, and the trajectories of the tracked keypoints are used to interpret the human action in the video. A group of features for describing a trajectory is proposed, and trajectories are clustered using these features. The clustered trajectories are then used to describe an image sequence: each image sequence descriptor is the normalized histogram of its trajectory clusters. At the final stage, the proposed system uses the image sequence descriptors in a supervised learning approach. An application based on the proposed method has been developed and applied to various datasets. A new multimodal dataset, called WeCare, focused on elderly care systems is introduced. The main objective of the dataset is to detect human falls; to this end, other actions that can be confused with falling are also included. The proposed approach is evaluated on two public datasets, the KTH Human Action Dataset and the URADL Dataset, and performs comparably to methods in the literature, achieving 87.25 per cent accuracy on KTH and 88 per cent on URADL. On the WeCare dataset it achieves 98.75 per cent accuracy.
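The descriptor step the abstract outlines, representing an image sequence by the normalized histogram of its trajectories' cluster labels, can be sketched as below. This is a minimal illustration, not code from the thesis; the function name and the example cluster assignments are hypothetical.

```python
import numpy as np

def histogram_descriptor(cluster_ids, n_clusters):
    # Hypothetical helper: build the normalized histogram of
    # trajectory-cluster assignments for one image sequence.
    hist = np.bincount(np.asarray(cluster_ids), minlength=n_clusters).astype(float)
    return hist / hist.sum()

# Example: a video whose six tracked trajectories were assigned
# to clusters 0, 2, 2, 1, 2, 0 out of 4 clusters.
desc = histogram_descriptor([0, 2, 2, 1, 2, 0], n_clusters=4)
# desc sums to 1 and has one bin per cluster; such vectors would
# then feed a supervised classifier, as the abstract describes.
```

In a full pipeline, the cluster assignments would come from clustering the proposed trajectory features across the training set, and the resulting fixed-length descriptors make videos of different lengths directly comparable.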
dc.format.extent 30cm.
dc.publisher Thesis (M.S.) - Bogazici University. Institute for Graduate Studies in Science and Engineering, 2011.
dc.relation Includes appendices.
dc.subject.lcsh Human-computer interaction.
dc.subject.lcsh Computer vision.
dc.subject.lcsh Human-machine systems.
dc.title Computer vision-based human action recognition via keypoint tracking
dc.format.pages xv, 96 leaves ;

