Multimodal Dataset of Lightness and Fragility

The aim of this work is to collect multimodal recordings of short dance performances containing full-body movements that display different expressive qualities. In total, we collected 150 segments from 13 participants. The data consist of 3D accelerometer readings from multiple IMUs, two video channels, audio of respiration, and physiological (EMG) signals. The data (video, audio, IMU, EMG) can be freely used for research purposes.

Task

We recorded a dataset of short dance performances containing full-body movements of two expressive qualities: Lightness and Fragility. The description of these two expressive qualities can be found in this paper. First, the dancers were given the definitions of the two expressive qualities. Next, they were asked to perform an improvised choreography containing movements that, in their opinion, convincingly expressed the given quality. The dancers had different dance backgrounds (classical dance, pop, contemporary dance) and different levels of professional experience. They performed five repetitions of each expressive quality; each trial lasted 1 minute.

Setup Hardware and Data

We recorded data from the following devices:

  • 5 IMU sensors (x-OSC) placed on the dancer’s limbs; the data is captured at 50 frames per second; each frame consists of 9 values: the (x, y, z) components of the accelerometer, gyroscope, and magnetometer;
  • 2 video cameras (1280x720, at 50 fps);
  • 2 EMG sensors (MYO) placed on the dancer’s forearms; the data is captured at 50 frames per second; each frame consists of 8 signals;
  • 1 wireless microphone (mono, 48 kHz) placed close to the dancer’s nose and mouth, recording the sound of breathing.
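As an illustration, the 9-channel IMU frames described above could be loaded as follows. This is a minimal sketch: the CSV layout, file naming, and channel ordering are assumptions for illustration, not the dataset's actual on-disk format.

```python
import csv

IMU_RATE_HZ = 50  # capture rate stated above: 50 frames per second

# Assumed channel order: (x, y, z) of accelerometer, gyroscope, magnetometer.
CHANNELS = ["acc_x", "acc_y", "acc_z",
            "gyr_x", "gyr_y", "gyr_z",
            "mag_x", "mag_y", "mag_z"]

def load_imu(path):
    """Load one IMU sensor's recording as a list of 9-value frames."""
    frames = []
    with open(path, newline="") as f:
        for row in csv.reader(f):
            values = [float(v) for v in row]
            if len(values) != len(CHANNELS):
                raise ValueError("expected 9 channels per frame")
            frames.append(dict(zip(CHANNELS, values)))
    return frames

def frame_time(frame_idx):
    """Elapsed time in seconds of a given frame at 50 fps."""
    return frame_idx / IMU_RATE_HZ
```

With five sensors recorded in parallel at the same rate, frame indices line up across sensors once the streams are synchronized.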

Data was recorded and synchronized using the EyesWeb XMI platform. Synchronization of the data streams was obtained using SMPTE timecodes, a standard widely used in multimedia content production.
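An SMPTE timecode labels each frame as HH:MM:SS:FF. At the 50 fps rate used here, a minimal non-drop-frame conversion to an absolute frame index, and hence to the offset between two streams, might look like this (a sketch of the general technique, not the EyesWeb implementation):

```python
FPS = 50  # video and sensor capture rate in this dataset

def smpte_to_frames(timecode, fps=FPS):
    """Convert a non-drop-frame SMPTE timecode 'HH:MM:SS:FF'
    to an absolute frame count since 00:00:00:00."""
    hh, mm, ss, ff = (int(part) for part in timecode.split(":"))
    if ff >= fps:
        raise ValueError("frame field exceeds frame rate")
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

def offset_seconds(tc_a, tc_b, fps=FPS):
    """Offset in seconds between two streams stamped tc_a and tc_b."""
    return (smpte_to_frames(tc_b, fps) - smpte_to_frames(tc_a, fps)) / fps
```

Because every stream carries the same timecode, aligning two recordings reduces to shifting one by this offset.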

Segmentation

The recorded video streams were evaluated by experts in dance and in expressive movement analysis. For every trial, they identified segments of about 10 s each, corresponding to a uniform, coherent sequence of movements. For each dancer and each expressive quality, between 5 and 6 segments were chosen.

To download the whole dataset, please click here (warning: 4.6 GB file download).

 

Quality     No. segments   Mean duration   Total duration
Lightness   77             10.2 s          13 min 6 s
Fragility   73             10.4 s          12 min 41 s
Total       150            10.3 s          25 min 46 s




Example video segments for each dancer (Dancers 1–6 and 8–14), one per intended quality (Lightness and Fragility), are embedded on the dataset web page.
More about the data formats used can be found here.

Ranking

Five raters watched the 150 segments resulting from the segmentation. For each video segment, they rated the global level of Fragility and of Lightness they perceived, using two independent 5-point Likert scales (from 0 to 4).
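With two independent 0–4 scales per segment and five raters, per-segment scores can be aggregated by averaging across raters. The sketch below illustrates this; the rater and segment names and all score values are made up for illustration and are not the actual rating results.

```python
from statistics import mean

# Hypothetical ratings: rater -> segment -> (lightness, fragility),
# each on a 0-4 Likert scale. Illustrative values only.
ratings = {
    "rater1": {"seg01": (4, 1), "seg02": (0, 3)},
    "rater2": {"seg01": (3, 0), "seg02": (1, 4)},
}

def mean_scores(ratings, segment):
    """Mean Lightness and Fragility scores across raters for one segment."""
    lightness = mean(r[segment][0] for r in ratings.values())
    fragility = mean(r[segment][1] for r in ratings.values())
    return lightness, fragility
```

Because the two scales are independent, a segment can in principle score high (or low) on both qualities at once.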
If you need the rating results, please contact the authors by email (the address on this page requires JavaScript to view).

If you have used our dataset in your research please cite our work:
Niewiadomski, R., Mancini, M., Piana, S., Alborno, P., Volpe, G., Camurri, A., Low-Intrusive Recognition of Expressive Movement Qualities, Proceedings of the 19th ACM International Conference on Multimodal Interaction (ICMI 2017), Glasgow, UK, November 13–17, ACM, New York, NY, USA, pages 230-237, 2017. doi: 10.1145/3136755.3136757

The full paper can be found here.

Thank you!
