Multimodal Dataset of Lightness and Fragility

The aim of this work is to collect multimodal data of short dance performances containing full-body movements that display different expressive qualities. In total, we collected 150 segments from 13 participants. The data consist of 3D accelerometer data from multiple IMUs, two video channels, audio of respiration, and physiological (EMG) signals. The data (video, audio, IMU, EMG) can be freely used for research purposes.

Task

We recorded a dataset of short dance performances containing full-body movements of two expressive qualities: Lightness and Fragility. The description of these two expressive qualities can be found in the paper cited below. At the beginning, the dancers were given the definitions of the expressive qualities. Next, they were asked to perform an improvised choreography containing movements that, in the opinion of the dancer, convincingly expressed the requested quality. The dancers had different dance backgrounds (classic dance, pop, contemporary dance) and different levels of professional experience. They performed five repetitions of each expressive quality; each trial lasted one minute.

Setup Hardware and Data

We recorded data from the following devices:

  • 5 IMU sensors (x-OSC) placed on the dancer’s body limbs; the data is captured at 50 frames per second; each frame consists of 9 values: (x, y, z) of accelerometer, gyroscope, and magnetometer;
  • 2 video cameras (1280x720, at 50fps);
  • 2 EMG sensors placed on the dancer’s forearms (MYO); the data is captured at 50 frames per second; each frame consists of 8 signals;
  • 1 wireless microphone (Mono, 48 kHz) placed close to the dancer’s nose and mouth, recording the sound of breathing.

Data were recorded and synchronized using the EyesWeb XMI platform. Synchronization of the data streams is obtained by using SMPTE time-codes, a standard widely used in multimedia content production.
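As an illustration of how such SMPTE-synchronized streams can be aligned, the sketch below converts a non-drop-frame SMPTE time-code into a frame index at the 50 fps recording rate and computes the offset between two streams. It is only a sketch under assumed conventions (time-code string format, function names); it is not part of the released tools or data format.

    # Minimal sketch (assumed non-drop-frame SMPTE at 50 fps): convert a
    # time-code string "HH:MM:SS:FF" into an absolute frame index so that
    # IMU/EMG frames (50 Hz) and video frames (50 fps) can be aligned.
    FPS = 50

    def smpte_to_frame(timecode: str, fps: int = FPS) -> int:
        hh, mm, ss, ff = (int(x) for x in timecode.split(":"))
        return (hh * 3600 + mm * 60 + ss) * fps + ff

    def frame_offset(start_a: str, start_b: str) -> int:
        # Number of frames by which stream B starts after stream A.
        return smpte_to_frame(start_b) - smpte_to_frame(start_a)

    # Example: if the video starts at 10:15:30:00 and the IMU log at 10:15:30:12,
    # the IMU stream lags the video by 12 frames (0.24 s at 50 fps).
    print(frame_offset("10:15:30:00", "10:15:30:12"))  # -> 12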

Segmentation

The recorded video streams were evaluated by dance experts and experts in expressive movement analysis. For every trial, they identified segments of about 10 s, each corresponding to a uniform, coherent sequence of movements. For each dancer and each expressive quality, between 5 and 6 segments were chosen.

To download the whole dataset, please click here (warning: 4.6 GB file download).

 

Quality     No. segments   Mean duration   Total duration
Lightness   77             10.2 s          13 min 6 s
Fragility   73             10.4 s          12 min 41 s
Total       150            10.3 s          25 min 46 s




Example video segments for each dancer (Dancers 1-6 and 8-14), with the intended qualities Lightness and Fragility, are available on the dataset page.

More about the data formats used can be found here.

Ranking

Five raters watched the 150 segments resulting from the segmentation. For each video segment, they were asked to rate the global level of Fragility and Lightness they perceived, using two independent 5-point Likert scales (from 0 to 4).
If you need the rating results, please contact the authors by email.
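As an example of how such ratings might be aggregated (a sketch only: the arrays below are placeholders, and neither the layout nor the variable names correspond to the released rating data), the snippet computes the mean perceived level of each quality per segment and selects the segments perceived as more Light than Fragile.

    import numpy as np

    # Hypothetical rating arrays: ratings[r, s] is rater r's 0-4 score for segment s.
    # Shapes follow the description above: 5 raters and 150 segments.
    rng = np.random.default_rng(0)
    lightness = rng.integers(0, 5, size=(5, 150))   # placeholder data
    fragility = rng.integers(0, 5, size=(5, 150))   # placeholder data

    mean_lightness = lightness.mean(axis=0)   # per-segment mean on the 0-4 scale
    mean_fragility = fragility.mean(axis=0)

    # Segments perceived, on average, as more Light than Fragile.
    light_segments = np.where(mean_lightness > mean_fragility)[0]
    print(len(light_segments), "segments rated more Light than Fragile")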

If you have used our dataset in your research, please cite the following works:

Niewiadomski, R., Mancini, M., Piana, S., Alborno, P., Volpe, G., Camurri, A.,
Low-Intrusive Recognition of Expressive Movement Qualities,
Proceedings of the 19th ACM International Conference on Multimodal Interaction (ICMI 2017), Glasgow, UK, November 13–17, ACM, New York, NY, USA, pages 230-237, 2017, doi: 10.1145/3136755.3136757

Full paper available here.


and

Vaessen, M.J., Abassi, E., Mancini, M., Camurri, A., de Gelder, B.,
Computational Feature Analysis of Body Movements Reveals Hierarchical Brain Organization,
Cerebral Cortex, Oxford University Press, vol. 28, no. 10, bhy228, ISSN: 1047-3211, Oct. 2018.
Full paper available here.

Thank you!

Expressive Vocabulary

This dataset of expressive qualities is the result of a collaboration with Virgilio Sieni, director of the Venice Biennale Danza (2013-2016) and founder of the Accademia sull'arte del gesto. We worked with the choreographer to define a set of expressive qualities that corresponds to his expressive vocabulary. The dataset documents the choreographic language of Virgilio Sieni: some of his most identifiable and isolable gestural figures and movement sequences (his “words”) were chosen to be compared across different interpretations in order to investigate movement qualities. For the creation of this dataset, Virgilio Sieni participated in a number of multimodal recording sessions during the first and second year of the DANCE project. We collected a dataset of movement fragments for a number of qualities, such as Lightness, Fragility, Suspension, Transmission, and Figure.

The motion capture data can be freely used for research purposes.

Setup Hardware and Data

We recorded data from the following devices:
  • Qualisys motion capture system sampling dancers' movement at 100 Hz,
  • video recording system (1280x720, 50fps) (unavailable for download).
Data was recorded and synchronized using the EyesWeb XMI platform.
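Since the motion capture data are sampled at 100 Hz while the video runs at 50 fps, some analyses need the streams at a common rate. A minimal sketch is given below; it assumes that the clocks are already aligned by the EyesWeb synchronization and that the marker array layout is (frames, markers, xyz), which is an assumption for illustration rather than the released file format.

    import numpy as np

    def resample_to_video_rate(markers: np.ndarray,
                               mocap_hz: int = 100,
                               video_fps: int = 50) -> np.ndarray:
        # Downsample a (T, n_markers, 3) trajectory from mocap_hz to video_fps.
        # Plain decimation is enough when mocap_hz is an integer multiple of
        # video_fps and the two clocks are already synchronized.
        step = mocap_hz // video_fps
        return markers[::step]

    # Example with 10 s of synthetic data: 1000 mocap frames -> 500 video-rate frames.
    mocap = np.zeros((1000, 20, 3))
    print(resample_to_video_rate(mocap).shape)  # (500, 20, 3)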

Expressive Qualities

Lightness


Definition: A necessary condition for a Light movement is Fluidity. Further, a fluid movement should exhibit at least one of the following characteristics: (i) a low amount of downward vertical acceleration following gravity (in particular at the forearms and knees); (ii) every downward acceleration is counterbalanced by an opposite, “harmonic” upward movement (simultaneous or subsequent); (iii) vertical downward accelerations are turned into a finalization on the horizontal plane.

Lightness_example.zip
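Clause (i) of the definition suggests a simple kinematic indicator. The sketch below is purely illustrative and is not the feature set used in the project papers; it estimates the fraction of frames in which a limb sensor shows noticeable downward vertical acceleration, and the gravity-compensated vertical axis is an assumption about preprocessing rather than a property of the raw IMU data.

    import numpy as np

    def downward_acceleration_ratio(acc_vertical: np.ndarray,
                                    threshold: float = 0.5) -> float:
        # Fraction of frames with noticeable downward vertical acceleration.
        # acc_vertical is assumed to be gravity-compensated vertical acceleration
        # (m/s^2) of a forearm or knee sensor, negative values pointing downward.
        # Lower ratios are compatible with clause (i) of the Lightness definition.
        return float(np.mean(acc_vertical < -threshold))

    # Example on synthetic data: a gentle vertical oscillation spends no time
    # accelerating sharply downward.
    t = np.linspace(0.0, 10.0, 500)          # 10 s at 50 Hz
    acc = 0.3 * np.sin(2 * np.pi * 0.5 * t)  # small vertical oscillation
    print(downward_acceleration_ratio(acc))  # -> 0.0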



Fragility


Definition: In order to describe Fragility, we first introduce its two components: the Upper body crack and the Leg release. The Upper body crack is a single, synchronized discontinuity of the upper body movement, followed by a re-planning of the movement. The Leg release is a sudden movement of the hips (and knees) toward the floor due to a sudden, synchronized release of the knees. Both components correspond to abrupt variations of the acceleration of body movements. While the Upper body crack concerns the upper body, the Leg release concerns the lower part of the body. Fragility is defined as frequent, a-rhythmic interruptions and consequent re-planning of movement. Thus, it can be seen as a non-periodic, a-rhythmic sequence of Upper body cracks and Leg releases.
Fragility emerges, for example, when moving at the boundary between balance and fall, resulting in a series of short movements with frequent interruptions and restarts. The resulting movements are unpredictable.

Fragility_example.zip
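Since both components above correspond to abrupt variations of acceleration, one rough illustration (a sketch only, not the recognition method used in the project papers) is to count frames where the jerk, i.e. the frame-to-frame change of the acceleration magnitude, exceeds a threshold; the threshold value and the sensor layout are assumptions.

    import numpy as np

    def count_discontinuities(acc: np.ndarray,
                              fps: int = 50,
                              jerk_threshold: float = 30.0) -> int:
        # Count abrupt changes of acceleration magnitude: a crude proxy for
        # Upper body cracks / Leg releases, for illustration only.
        # acc: (T, 3) accelerometer samples of one sensor, in m/s^2, at fps Hz.
        magnitude = np.linalg.norm(acc, axis=1)
        jerk = np.abs(np.diff(magnitude)) * fps      # finite-difference jerk, m/s^3
        return int(np.sum(jerk > jerk_threshold))

    # Example: a smooth signal with one injected spike yields two threshold
    # crossings (the rising and the falling edge of the spike).
    acc = np.zeros((500, 3))
    acc[:, 2] = np.sin(np.linspace(0.0, 10.0, 500))
    acc[250, 2] += 5.0                               # abrupt spike
    print(count_discontinuities(acc))                # -> 2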



Suspension


Definition: Suspension refers to a non-directional holding of energy on one of the body planes. The body, or some parts of it, may, for example, wave or ripple. Suspension movements are often highly predictable and repetitive.

Suspension_example.zip
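Because Suspension movements are described as highly predictable and repetitive, one purely illustrative indicator (an assumption made for this sketch, not a feature defined by the project) is the height of the strongest autocorrelation peak of a movement signal beyond a minimum lag.

    import numpy as np

    def periodicity_strength(signal: np.ndarray, min_lag: int = 10) -> float:
        # Height of the largest normalized autocorrelation peak beyond min_lag.
        # Values near 1 indicate a strongly repetitive, predictable signal.
        x = signal - signal.mean()
        ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # non-negative lags
        ac = ac / ac[0]                                    # normalize by lag-0 energy
        return float(ac[min_lag:].max())

    # Example: a slow waving motion (0.8 Hz sine over 10 s at 50 Hz) is close
    # to perfectly periodic, so the indicator is high.
    t = np.linspace(0.0, 10.0, 500)
    wave = np.sin(2 * np.pi * 0.8 * t)
    print(round(periodicity_strength(wave), 2))  # high value (close to 0.9)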



Transmission


Definition: Transmission refers to the circulation of the body's kinetic energy between different body planes. It may include resonances between different body parts, or changes of the body part that is leading the movement.

Transmission_example.zip
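One purely illustrative way to look at the change of the leading body part mentioned above (an assumption made for this sketch, not a measure defined by the project) is to track which body part carries the most kinetic energy at each frame and count how often that part changes.

    import numpy as np

    def leader_changes(energy: np.ndarray) -> int:
        # energy: (T, n_parts) per-frame kinetic-energy estimates of each body part.
        # A change of the arg-max column is counted as a change of the leading part.
        leader = np.argmax(energy, axis=1)
        return int(np.sum(np.diff(leader) != 0))

    # Example: energy moving from part 0 to part 1 and then to part 2
    # gives two leader changes.
    energy = np.zeros((300, 3))
    energy[:100, 0] = 1.0
    energy[100:200, 1] = 1.0
    energy[200:, 2] = 1.0
    print(leader_changes(energy))  # -> 2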



Figure


Definition: Articulation of alignments and unisons, consolidation of balance.

Figure_example.zip


More about this work can be found in the paper:

Camurri, A., Volpe, G., Piana, S., Mancini, M., Niewiadomski, R., Ferrari, N., Canepa, C., The Dancer in the Eye: Towards a Multi-Layered Computational Framework of Qualities in Movement, 3rd International Symposium on Movement and Computing, MOCO 2016, 5-6 July 2016, Thessaloniki, Greece, 2016. doi: 10.1145/2948910.2948927.

Full paper can be found here.

Video fragments with SMPTE code

We recorded video fragments with an SMPTE code in the audio track, for synchronization with the other sensor channels.
Recording dates: 2017-01-13, 2017-01-14, 2017-01-18, 2017-01-25, 2017-02-01 (download links for each date are available on the dataset page).

If you have used our dataset in your research, please cite the following works:

Niewiadomski, R., Mancini, M., Cera, A., Piana, S., Canepa, C., Camurri, A.
Does embodied training improve the recognition of mid-level expressive movement qualities sonification?
Journal on Multimodal User Interfaces, ISSN: 1783-8738, Dec. 2018, doi: 10.1007/s12193-018-0284-0.
Full paper available here.

and:

Vaessen, M.J., Abassi, E., Mancini, M., Camurri, A., de Gelder, B.,
Computational Feature Analysis of Body Movements Reveals Hierarchical Brain Organization,
Cerebral Cortex, Oxford University Press, vol. 28, no. 10, bhy228, ISSN: 1047-3211, Oct. 2018.
Full paper available here.

Multimodal dataset of full-body movement qualities: Lightness and Fragility

Multimodal synchronized recordings of short dance improvisations, i.e., full-body movements aiming to display different expressive qualities. In total, we collected 150 segments from 13 participants. The data consist of 3D accelerometer data, video, respiration audio (microphone), and simple physiological signals. This dataset can be freely used for research purposes; the only request is to cite the following papers, which describe and use this dataset:

Niewiadomski, R., Mancini, M., Cera, A., Piana, S., Canepa, C., Camurri, A.
Does embodied training improve the recognition of mid-level expressive movement qualities sonification?
Journal on Multimodal User Interfaces, ISSN: 1783-8738, Dec. 2018, doi: 10.1007/s12193-018-0284-0.
Full paper available here.

and:

Vaessen, M.J., Abassi, E., Mancini, M., Camurri, A., de Gelder, B.,
Computational Feature Analysis of Body Movements Reveals Hierarchical Brain Organization,
Cerebral Cortex, Oxford University Press, vol. 28, no. 10, bhy228, ISSN: 1047-3211, Oct. 2018.
Full paper available here.

The dataset was recorded by the University of Genoa at the Casa Paganini-InfoMus research centre of DIBRIS, within the EU ICT Project DANCE no. 645553 (dance.dibris.unige.it).

Task

We recorded a dataset of short dance performances containing full-body movements of two expressive qualities: Lightness and Fragility (see the above-mentioned papers).

Protocol: first, the dancers were given the definitions of the two expressive qualities. Next, they were asked to perform an improvised choreography containing movements that, in the opinion of the dancer, convincingly expressed the requested quality. The dancers had different dance backgrounds (classic dance, pop, contemporary dance) and an amateur level of dance experience. They performed five repetitions of each expressive quality; each trial lasted one minute. From each one-minute trial, we extracted, with the help of dance experts, the segments in which the quality was best expressed.

Setup Hardware and Data

We recorded synchronized data from the following devices:

  • 5 IMU sensors (x-OSC) placed on the dancer’s body limbs; the data is captured at 50 frames per second; each frame consists of 9 values: (x, y, z) of accelerometer, gyroscope, and magnetometer;
  • 2 video cameras (1280x720, at 50fps);
  • 2 EMG sensors placed on the dancer’s forearms (MYO); the data is captured at 50 frames per second; each frame consists of 8 signals;
  • 1 wireless microphone (Mono, 48 kHz) placed close to the dancer’s nose and mouth, to record the audio of respiration.

Data were recorded and synchronized using the EyesWeb XMI platform. Synchronization of data streams is obtained by SMPTE time-codes.
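Because the respiration audio runs at 48 kHz while the video and IMU/EMG streams run at 50 fps, aligned analysis windows can be derived directly from the shared SMPTE-anchored start. The snippet below is only a sketch under that assumption; the function name and layout are illustrative, not part of the released tools.

    # Minimal sketch: map a video/IMU frame index (50 fps) to the corresponding
    # span of respiration-audio samples (48 kHz), assuming all streams share the
    # same SMPTE-anchored start time.
    VIDEO_FPS = 50
    AUDIO_HZ = 48_000
    SAMPLES_PER_FRAME = AUDIO_HZ // VIDEO_FPS  # 960 audio samples per video frame

    def audio_span_for_frame(frame_index: int) -> tuple:
        start = frame_index * SAMPLES_PER_FRAME
        return start, start + SAMPLES_PER_FRAME

    print(audio_span_for_frame(125))  # -> (120000, 120960), i.e. 2.5 s into the take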

The Data Segments

The recorded video streams were evaluated by dance experts and experts in expressive movement analysis. For every trial, they identified segments of about 10 s, each corresponding to a uniform, coherent sequence of movements expressing the requested quality. For each dancer and each expressive quality, between 5 and 6 segments were chosen.

Quality     No. segments   Mean duration   Total duration
Lightness   77             10.2 s          13 min 6 s
Fragility   73             10.4 s          12 min 41 s
Total       150            10.3 s          25 min 46 s



Below we present some examples of the videos and data. The whole dataset can be viewed and downloaded here.

For each dancer (Dancers 1-6 and 8-14), an example video and the corresponding data for both qualities (Lightness and Fragility) can be downloaded from the dataset page.

More about the data formats used can be found here.

Ranking

Five raters watched the 150 segments resulting from the segmentation. For each video segment, they were asked to rate the global level of Fragility and Lightness they perceived, using two independent 5-point Likert scales (from 0 to 4).

If you need more information about the experts' evaluation, please download the Excel file.


Video fragments with SMPTE code

We recorded video fragments with an SMPTE code in the audio track, for synchronization with the other sensor channels:

Date of recording   Total video files   Lightness quality videos   Fragility quality videos
2017-01-13          20                  10                         10
2017-01-14          21                  11                         10
2017-01-18          36                  21                         15
2017-01-25          51                  18                         20
2017-02-01          24                  12                         12

Download links and descriptions for each recording date are available on the dataset page.





CHI 2018 - Workshop

This page will host the CHI 2018 Workshop.
