Category
Modelling / Simulation
Document Type
Paper
Abstract
This study outlines a technique to repurpose widely available high-resolution three-dimensional (3D) motion capture data for training a machine learning model to estimate ground reaction forces from two-dimensional (2D) pose estimation keypoints. Keypoints describe anatomically relevant landmarks in 2D image coordinates. These landmarks can be calculated from 3D motion capture data and projected onto different image planes, synthesising a near-infinite number of 2D camera views. This highly efficient method of synthesising 2D camera views can be used to enlarge sparse 2D video databases of sporting movements. We demonstrate the feasibility of this approach using a sidestepping dataset and evaluate the optimal camera number and location required to estimate 3D ground reaction forces. The method presented, and the additional insights gained from this approach, can be used to optimise corporeal data capture by sports practitioners.
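
Because the approach hinges on projecting 3D motion capture landmarks onto arbitrary image planes, a minimal sketch of that projection step may help. The Python example below is not the authors' implementation: it assumes a simple pinhole camera with a look-at orientation, and the landmark array, camera radius, focal length and principal point are illustrative placeholders, not values from the study.

    import numpy as np

    def look_at_rotation(cam_pos, target):
        """Rotation matrix whose rows are the camera axes (z-up world assumed)."""
        forward = target - cam_pos
        forward = forward / np.linalg.norm(forward)
        up = np.array([0.0, 0.0, 1.0])
        right = np.cross(forward, up)
        right = right / np.linalg.norm(right)
        cam_up = np.cross(right, forward)
        # Camera axes: x right, y down, z forward (common image convention)
        return np.stack([right, -cam_up, forward])

    def project_landmarks(points_3d, cam_pos, target, focal=1000.0, cx=960.0, cy=540.0):
        """Pinhole projection of Nx3 world-space landmarks to Nx2 pixel coordinates."""
        R = look_at_rotation(cam_pos, target)
        cam_coords = (points_3d - cam_pos) @ R.T    # world frame -> camera frame
        z = cam_coords[:, 2:3]                      # depth along the optical axis
        uv = focal * cam_coords[:, :2] / z          # perspective divide
        return uv + np.array([cx, cy])              # shift to the image centre

    # Synthesise many virtual camera views on a circle around the athlete
    landmarks = np.random.rand(20, 3) * np.array([0.5, 0.5, 1.8])   # placeholder pose
    centre = landmarks.mean(axis=0)
    for azimuth in np.linspace(0, 2 * np.pi, 36, endpoint=False):
        cam_pos = centre + 5.0 * np.array([np.cos(azimuth), np.sin(azimuth), 0.2])
        keypoints_2d = project_landmarks(landmarks, cam_pos, centre)

Each azimuth on the circle yields one synthetic 2D keypoint view of the same trial, which is how a single 3D recording can populate many virtual camera positions for training.
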
Recommended Citation
Mundt, Marion; Goldacre, Molly; and Alderson, Jacqueline (2022) "SYNTHESISING 2D VIDEOS FROM 3D DATA: ENLARGING SPARSE 2D VIDEO DATASETS FOR MACHINE LEARNING APPLICATIONS," ISBS Proceedings Archive: Vol. 40: Iss. 1, Article 121.
Available at: https://commons.nmu.edu/isbs/vol40/iss1/121