--------------------------
DESCRIPTION
--------------------------
1. Dataset description
The Multi-view Leap2 Hand Pose Dataset (ML2HP Dataset) is a new dataset for hand pose recognition, captured with a multi-view recording setup of two Leap Motion Controller 2 devices. It covers a diverse range of hand poses, recorded from different angles to ensure comprehensive coverage. The dataset includes real images together with precise, automatically extracted hand properties such as landmark coordinates, velocities, orientations, and finger widths. It has been carefully designed and curated to remain balanced in terms of subjects, hand poses, and use of the right or left hand, ensuring fairness and parity. The content comprises 714,000 instances of 17 different hand poses (each instance consisting of a real image and 247 associated hand properties), recorded from 21 subjects. The multi-view setup mitigates hand occlusion, enabling the continuous tracking and pose estimation required in real human-computer interaction applications such as virtual reality gaming. Overall, this dataset contributes to advancing the field of multimodal hand pose recognition by providing researchers with a valuable resource for developing advanced human-computer interfaces based on machine learning algorithms.
2. Dataset content
The ML2HP Dataset is organized into a hierarchical file structure to facilitate easy access and retrieval of specific instances for analysis. At the top level, there is a folder for each subject, identified by a zero-padded integer (e.g., “001”, “002”). Within each subject folder, there are subfolders for the hand used, named “Right_Hand” and “Left_hand”. Inside each hand folder, further subfolders are categorized by hand pose class, named after the specific hand pose (e.g., “OpenPalm”, “ClosedFist”, etc.). Each pose class folder contains two additional subfolders corresponding to the recording devices: “Horizontal” for the camera placed horizontally and “Vertical” for the camera placed vertically. In addition, the top level contains a “subjects_info.csv” file with the age and gender of each subject.
- The subjects_info.csv file lists the age and gender associated with each subject identifier.
- Each hand_properties.csv file contains the 247 hand properties (such as landmark coordinates, velocities, orientations, and finger widths) for a specific subject, hand, pose, and device.
- Each .bmp file contains an image from a specific subject, hand, pose, and device.
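As a sketch of how this layout can be navigated programmatically (the folder names follow the description above; the helper function name and the root path are hypothetical, not part of the dataset):

```python
from pathlib import Path

# Hierarchy: <root>/<subject>/<hand>/<pose>/<device>/
# Folder names ("Right_Hand", "Left_hand", "Horizontal", "Vertical") are
# taken from the description above; the root path is a placeholder.
HANDS = ("Right_Hand", "Left_hand")
DEVICES = ("Horizontal", "Vertical")

def instance_dir(root: str, subject: int, hand: str, pose: str, device: str) -> Path:
    """Build the folder holding the .bmp images and hand_properties.csv
    for one (subject, hand, pose, device) combination."""
    return Path(root) / f"{subject:03d}" / hand / pose / device

# Example: all images for subject 1, right hand, "OpenPalm" pose,
# horizontal camera (the glob is empty unless the dataset is present).
folder = instance_dir("ML2HP", 1, "Right_Hand", "OpenPalm", "Horizontal")
images = sorted(folder.glob("*.bmp"))
```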
--------------------------
METHODOLOGY
--------------------------
1. Methodology and data acquisition protocol
Before recording the data, each participant received detailed information about the data collection protocol and voluntarily provided informed consent, including their agreement to have the data published, by signing a consent form prior to their inclusion in the research study. During data collection, participants were instructed to perform various hand poses while facing one camera, facing the other, or oriented diagonally between the two. Additionally, participants were prompted to move their hands through the entire view range of the cameras, ensuring comprehensive coverage of hand poses from different angles, perspectives, and distances. This approach enabled the capture of a diverse range of hand configuration instances.
Participants were asked to perform each hand pose repeatedly until enough instances were recorded for each class and hand (right and left). The protocol involved first recording instances of right-hand poses from all classes, then repeating the process for left-hand poses. This systematic approach ensured comprehensive coverage of both right- and left-hand configurations across all classes, facilitating a balanced and representative dataset.
After data collection, a curation process was applied to synchronize and balance the available data.
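One common way to balance such data is to downsample every (subject, hand, pose) group to the size of the smallest group. The sketch below illustrates that idea only; the actual curation procedure used for ML2HP is not detailed here, and the function name and key layout are hypothetical:

```python
import random

def balance_groups(groups, seed=0):
    """Downsample each group to the size of the smallest one.

    `groups` maps a (subject, hand, pose) key to a list of instance paths.
    Illustrative only: the dataset's actual curation step is not specified
    beyond "synchronize and balance the available data".
    """
    rng = random.Random(seed)  # fixed seed so the subsampling is reproducible
    n_min = min(len(v) for v in groups.values())
    return {k: rng.sample(v, n_min) for k, v in groups.items()}

balanced = balance_groups({
    ("001", "Right_Hand", "OpenPalm"): ["a.bmp", "b.bmp", "c.bmp"],
    ("001", "Left_hand", "OpenPalm"): ["d.bmp", "e.bmp"],
})
# Every group now holds the same number of instances (two, here).
```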