Capstone Project

Group 2021-07 Status completed
Title 3D Human Knee Flexion Angle Estimation Using Deep Convolutional Neural Networks
Supervisor Thomas Fevens (CSSE), Hassan Rivaz (ECE), Paul Martineau (McGill)
Description An Anterior Cruciate Ligament (ACL) injury can impose a serious burden, especially on athletes participating in relatively risky sports, which creates a growing incentive to design injury-prevention programs. For this purpose, analysis of the drop jump landing test, for example, can be a useful tool for recognizing those who are more likely to sustain knee injuries. Multiple research efforts have employed existing technologies such as the Microsoft Kinect sensor and Motion Capture (MoCap) to investigate the connection between lower limb angle ranges during jump tests and the associated injury risk. Although these technologies provide sufficient capabilities to researchers and clinicians, they require a certain level of expertise to operate, and their special hardware requirements and setup procedures make them limiting. Thanks to recent advances in Deep Learning, numerous powerful 3D estimation algorithms have been developed over the last few years. Access to relatively reliable and accurate 3D body keypoint information can lead to successful detection and prevention of injury. Combining temporal convolutions over video sequences with deep Convolutional Neural Networks (CNNs) offers a substantial opportunity to tackle the challenging task of accurate 3D human pose estimation. Using the Microsoft Kinect sensor as ground truth, the students will analyze the performance of CNN-based 3D human pose estimation in everyday settings. Qualitative and quantitative results to date are convincing enough to motivate further improvements, especially in the task of lower extremity kinematics estimation. In addition to the performance comparison between Kinect and the CNN, the students will also incorporate the temporal information of the video sequence, using RNNs, LSTMs and Transformers to improve the results.
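To make the lower-limb kinematics concrete: given 3D keypoints for the hip, knee, and ankle (from Kinect or a CNN estimator), the knee flexion angle is the angle at the knee between the thigh and shank segments, reported relative to a fully extended leg. A minimal numpy sketch, with the function name and the 0-degrees-at-full-extension convention chosen here for illustration:

```python
import numpy as np

def knee_flexion_angle(hip, knee, ankle):
    """Knee flexion angle in degrees from three 3D keypoints.

    Convention (assumed here): 0 deg = fully extended leg,
    larger values = deeper flexion.
    """
    # Segment vectors pointing away from the knee joint.
    thigh = np.asarray(hip, dtype=float) - np.asarray(knee, dtype=float)
    shank = np.asarray(ankle, dtype=float) - np.asarray(knee, dtype=float)
    # Inner angle at the knee, clipped against floating-point drift.
    cos_a = np.dot(thigh, shank) / (np.linalg.norm(thigh) * np.linalg.norm(shank))
    inner = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
    # A straight leg has an inner angle of 180 deg, i.e. zero flexion.
    return 180.0 - inner
```

With a straight leg (hip, knee, ankle collinear) this returns 0 degrees, and a right-angle bend returns 90 degrees; applying it frame by frame to a drop jump sequence yields the flexion-angle trace that the risk analysis is based on.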
Finally, the students will compare their results to Facebook's VideoPose3D, which also uses sequence information. Reference: Improving Accuracy and Runtime of Skeletal Tracking of Lower Limbs for Athletic Jump Mechanics Assessment, A. Portafaix and T. Fevens, IEEE EMBC 2021.
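For the quantitative Kinect-vs-CNN comparison, a standard measure in the 3D pose estimation literature (used, for example, to evaluate VideoPose3D) is the mean per-joint position error (MPJPE): the average Euclidean distance between predicted and ground-truth joint positions. A minimal numpy sketch, assuming both skeletons are in the same coordinate frame and stored as (frames, joints, 3) arrays:

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean per-joint position error, in the units of the inputs.

    pred, gt: arrays of shape (frames, joints, 3), assumed to be
    aligned in the same coordinate frame.
    """
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    # Per-joint Euclidean distance, then average over joints and frames.
    return np.linalg.norm(pred - gt, axis=-1).mean()
```

Restricting the joint axis to the lower-limb keypoints before averaging gives the lower-extremity error that is most relevant to the flexion-angle task.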
Requirements Deep learning; Python programming; Android SDK (for video over a smartphone; not essential)
Tools The students will be working with a laptop and Microsoft Kinect. In addition, they will use publicly available datasets.
Number of Students 6
Students Mehrooz Ahmed, Joseph Jordan, Linfeng Zhao, Nathalie Olcese, Niyirera Olivier Ntagahira, Zirui Qiu
Comments:
Links: