CHI 2018 - Workshop

This page will host the CHI 2018 Workshop.

DANCE Platform version 2

On this page you can find the instructions necessary to download, install and run the DANCE software platform Version 2. The platform is based on EyesWeb XMI (http://www.infomus.org/eyesweb_eng.php), allowing users to perform synchronized recording, playback, and analysis of multimodal data streams. Details about the platform architecture and data stream formats are provided in Deliverable 4.2.

1. Download and install EyesWeb XMI

The updated version of EyesWeb XMI can be downloaded here: ftp://ftp.infomus.org/Evaluate/EyesWeb/XMI/Version_5.7.x/EyesWeb_XMI_setup_5.7.0.0.exe

Instructions to download and install EyesWeb XMI can be found on the DANCE software platform version 1 page.

 

2. Download the DANCE example tools and patches

The DANCE example tools and patches are programs, written to be loaded and executed in EyesWeb XMI, that allow the user to record, play back, and analyze multimodal data (video, audio, motion capture, sensors). To run the tools, download the corresponding installers, launch them, and execute the tools as normal Windows applications. To run the patches, download them and load them into the EyesWeb application (see step 1 on how to download and install EyesWeb). The current version of the DANCE example tools and patches includes applications allowing you to perform different tasks:

  1. to record multimodal data
  2. to playback multimodal data
  3. to analyze multimodal data

a) Tools and patches for recording and playing back multimodal data

Video recorder tool (download installer) [Warning: a DeckLink frame grabber is required to run this tool]

This module, depicted below, is able to record the video streams of multiple video sources. Each video source is stored in a separate video+audio file (e.g., AVI, MP4). To guarantee synchronization with the other data streams, the audio channels of the generated files contain the SMPTE time signal encoded as audio. During playback, the SMPTE timecode is decoded from the audio to extract timing information and play the video stream in sync.

DANCE platform2 videorecorder

Kinect recorder tool (download installer) [Warning: a Kinect 2 is needed to run the tool]

The Kinect recorder tool is depicted below:

DANCE platform2 kinectrecorder

The tool shows the current framerate in the "Data Status" field (15.01 frames per second in the example), the name used for this trial (trial_000, progressive numbers are automatically assigned to each trial), and the value of the reference clock (HHHH:MM:SS.mmm; 0000:00:49.160 in the above picture).
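For reference, the HHHH:MM:SS.mmm clock string shown above can be converted to and from milliseconds with a few lines of code. The following Python sketch only illustrates the format described on this page; the function names are not part of the DANCE platform.

    # Minimal sketch, assuming the clock string always follows HHHH:MM:SS.mmm.
    def clock_to_ms(clock: str) -> int:
        hours, minutes, rest = clock.split(":")
        seconds, millis = rest.split(".")
        return (((int(hours) * 60 + int(minutes)) * 60 + int(seconds)) * 1000
                + int(millis))

    def ms_to_clock(ms: int) -> str:
        hours, ms = divmod(ms, 3_600_000)
        minutes, ms = divmod(ms, 60_000)
        seconds, millis = divmod(ms, 1_000)
        return f"{hours:04d}:{minutes:02d}:{seconds:02d}.{millis:03d}"

    print(clock_to_ms("0000:00:49.160"))  # 49160
    print(ms_to_clock(49160))             # 0000:00:49.160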

The option panel, displayed below, has been redesigned in the new version. It allows the user to choose which Kinect streams to record and the recording mode (standalone or slave), and to assign an ID to the recording module. The ID is reported in the master interface, as described in the previous section.

DANCE platform2 kinectrecorder options

 

Audio recorder tool (download installer) [Warning: an ASIO compatible sound card is needed to run the tool]

The audio recorder tool is depicted below:

DANCE platform2 audiorecorder

It has been improved from the previous version by adding the possibility of recording several stereo audio tracks at the same time. The recorder is currently limited, mainly for performance reasons, to 8 stereo tracks, that is, 16 mono channels. The recorder interface shows the input audio signals as well as the number of recorded audio buffers per second.
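To illustrate the relationship between the 8 stereo tracks and the 16 mono channels, the short NumPy sketch below splits an interleaved multichannel buffer into stereo pairs. The buffer layout is an assumption made for illustration only; it is not the EyesWeb internal format.

    import numpy as np

    # Assumption: one audio buffer arrives as (n_frames, 16) samples, with mono
    # channels 0-1 forming stereo track 0, channels 2-3 forming track 1, and so on.
    def split_into_stereo_tracks(buffer: np.ndarray) -> list:
        n_frames, n_channels = buffer.shape
        assert n_channels == 16, "8 stereo tracks = 16 mono channels"
        return [buffer[:, 2 * t : 2 * t + 2] for t in range(n_channels // 2)]

    buffer = np.zeros((1024, 16), dtype=np.float32)  # one dummy audio buffer
    tracks = split_into_stereo_tracks(buffer)
    print(len(tracks), tracks[0].shape)              # 8 (1024, 2)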

The options panel allows the experimenter to set the audio recorder ID and the input audio tracks to be recorded.

DANCE platform2 audiorecorder options

IMU recorder tool (download installer) [Warning: one or more X-OSC sensors are needed to run the tool]

The IMU recorder tool is depicted below:

DANCE platform2 imurecorder

The graph shows the values selected by the user (acceleration, gyroscope, or compass) for each of the four IMUs. In the lower left you may read the current framerate of each of the four sensors (49.95 samples per second). Below the graph you may see both the trial name and the reference clock.

The recording tool saves the data in txt files, in a format that is easy to read with external software (e.g., Matlab) and can, of course, be read by EyesWeb itself for playback or analysis purposes. The options panel allows you to configure the working mode of the recorder.
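Because the recorded files are plain numeric text, they can be loaded with a couple of lines of Python as well as with Matlab. The file name and column layout below (timestamp followed by x, y, z) are assumptions for illustration; check the actual files for the exact layout and separator.

    import numpy as np

    # Assumed layout: one row per sample, comma-separated columns t, x, y, z.
    data = np.loadtxt("trial_000_imu01_acceleration.txt", delimiter=",")
    timestamps, acc_xyz = data[:, 0], data[:, 1:4]
    print(acc_xyz.shape[0], "samples at roughly 50 Hz")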

 DANCE platform2 imurecorder options

 

Myo recorder tool (download installer) [Warning: 2 Myo sensors are needed to run the tool]

This tool was added during the second year of DANCE. It records data coming from 2 Myo sensors at about 40 fps. The data consists of muscle contraction (a single value between 0 and 1), acceleration, gyroscope, and rotation (yaw, pitch, roll).
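A single Myo sample, as described above, can be represented with a small data structure such as the Python sketch below; the field names are illustrative and are not the recorder's actual column names.

    from dataclasses import dataclass
    from typing import Tuple

    @dataclass
    class MyoSample:
        # One sample from one Myo sensor, received at roughly 40 fps.
        contraction: float                      # muscle contraction, in [0, 1]
        acceleration: Tuple[float, float, float]
        gyroscope: Tuple[float, float, float]
        rotation: Tuple[float, float, float]    # yaw, pitch, roll

    sample = MyoSample(0.42, (0.0, 0.0, 9.8), (0.1, -0.2, 0.0), (10.0, 5.0, -3.0))
    print(sample.contraction)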

DANCE platform2 myorecorder

 

Playback and quality extraction tools (download installer)

We provide 2 tools along with the DANCE Platform version 2:

  • a data playback tool
  • a quality of movement extraction tool

The installer contains sample data consisting of 2 trials recorded in DANCE during January and February 2017. Both trials consist of multimodal data coming from Inertial Measurement Units (accelerometers), electromyographic (EMG) sensors, video cameras, and a microphone. Each trial focuses on a single movement quality in the set of movement qualities identified by the UNIGE partner in collaboration with the choreographer V. Sieni. See Deliverable 2.2 for more details.

The 2 trials are:

  • Trial 84: a dancer performing "fragile" movements (i.e., a sequence of non-rhythmical upper body cracks or leg releases; movements on the boundary between balance and fall, time cuts followed by a movement re-planning; the resulting movement is non-predictable, interrupted, uncertain)
  • Trial 246: a dancer performing "light" movements (i.e., fluid movement together with a lack of vertical acceleration, mainly toward the floor, in particular of forearms and knees; for each vertical downward movement there is an opposite harmonic counterbalancing upward movement, simultaneous or consequent; there can be a conversion of gravity into movement on the horizontal plane using rotations and a spread of gravity on the horizontal dimension)

For a detailed description of the above and other movement qualities please refer to Deliverable 2.2.

Playback

The tool window is depicted below:

DANCE platform2 playbacktool

The trial name is reported on top of the window. In the upper part of the window, the video streams are played back. In the lower part of the window, the EMG data (on the left) and inertial data (on the right) are displayed in a time window of 100 samples. At the bottom, there are buttons to move between the trials and to start/stop the playback. The tool is automatically installed in the EyesWeb XMI folder, for example in C:\Program Files (x86)\EyesWeb XMI 5.7.0.0\DANCE_Playback_Tool.

The installer installs 2 example trials in the subfolders "2017-01-18" and "2017-02-01". If more trials are found, they are automatically added to the playback tool. Once started, the tool also sends the inertial sensor data through OSC messages. This data can be received by other software modules (for example, the movement features extraction tool, see below and Deliverable 3.2) to be analyzed.
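Any OSC-capable software can receive this stream. As a rough sketch (not part of the platform), the following Python example uses the python-osc package to print incoming messages; the address pattern "/imu/*" and port 8000 are placeholders, and the actual address and port used by the playback tool are not reproduced here.

    from pythonosc import dispatcher, osc_server

    def on_message(address, *values):
        # Called for every incoming OSC message matching the pattern below.
        print(address, values)

    disp = dispatcher.Dispatcher()
    disp.map("/imu/*", on_message)  # placeholder address pattern
    server = osc_server.BlockingOSCUDPServer(("0.0.0.0", 8000), disp)  # placeholder port
    server.serve_forever()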

Quality extraction

In the DANCE project we aim to advance the state of the art in the automated analysis of expressive movement. We consider movement as a communication channel allowing humans to express and perceive implicit high-level messages, such as emotional states, social bonds, and so on. That is, we are not interested in physical space occupation or movement direction per se, or in “functional” physical movements: our interest is in the implications at the expressive level. For example, the direction of a hand movement (to the left or to the right) may be irrelevant, whereas the level of fluidity or impulsiveness of that movement might be relevant.

We propose a Multi-Layered Computational Framework of Qualities consisting of several layers, ranging from physical signals to high-level qualities, which addresses several aspects of movement analysis at different spatial and temporal scales. The features computed at the lower layers contribute to the computation of the features at the higher layers, which are usually related to more abstract concepts. In more detail, within the framework, movement quality analysis is organized on four levels:

  1. points: physical data that can be detected by exploiting sensors in real-time (for example, position/orientation of the body planes)
  2. frames: physical and sensorial data, not subject to interpretation, detected uniquely starting from instantaneous physical data on the shortest time span needed for their definition and depending on the characteristics of the human sensorial channels
  3. qualities: perceptual features, interpretable and predictable, with a given error and correction, starting from different constellations of physical and sensory data from level 2, on the time span needed for their perception (for example, lightness is perceived as a delay on the horizontal plane or as a balance between vertical upward/downward pushes)
  4. affects: perceptual and contextual features, interpretable and predictable, with a given error and correction, starting from a narration of different qualities, on a large time span needed for their introjection (for example, tension created by a pressing sequence of movement cracks/releases or by a sudden break of lightness).

DANCE platform2 framework

The first layer is devoted to capturing and preprocessing data (e.g., computing kinematic features such as velocity and trajectories) from sensor systems (e.g., video, motion capture, audio, or wearable sensors). The second layer computes low-level motion features at a small time scale (i.e., observable frame-by-frame), such as kinetic energy or smoothness, from such data. The third one segments the flow of movements into a series of single units (or gestures) and computes a set of mid-level features, i.e., complex features that are usually extracted on groups of joints or the whole body and require significantly longer temporal intervals to be observed (i.e., between 0.5 s and 5 s). Finally, the fourth layer represents even more abstract concepts such as the emotional states of the displayer, social attitudes, and the user's engagement in a full-body interaction.

Along with the DANCE Platform version 2 we provide a quality of movement extraction tool capable of analyzing pre-recorded multimodal data and extracting the following qualities:

  • Suspension: a non-directional retention of energy on one of the body planes. The body, or some parts of it, may be waving or rippling; the movement is highly predictable. We detect it by checking whether the maximum of the energy is retained over a period of time on one body plane.

  • Fragility: a sequence of non-rhythmical upper body cracks or leg releases. Movements on the boundary between balance and fall, time cuts followed by a movement re-planning. The resulting movement is non-predictable, interrupted, and uncertain. We detect irregular sequences of upper body cracks or leg releases in a one-second time window.
  • Lightness: this quality is related to Laban's Weight quality (for details, see: Rudolf Laban and Frederick C. Lawrence. 1947. Effort. Macdonald & Evans). It is computed by extracting the vertical component of the Energy normalized to the overall amount of Energy in the movement.

  • Kinetic Energy of a moving (part of the) body: KE = 1/2 * mass * velocity^2

  • Smoothness: defined as the inverse of the third derivative of position (i.e., the inverse of jerk)
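As a rough numerical illustration of the last two items (an illustration only, not the EyesWeb implementation), kinetic energy and a jerk-based smoothness index can be estimated from sampled 3D joint positions as follows; the mass and sampling rate are assumed values.

    import numpy as np

    def kinetic_energy(positions: np.ndarray, mass: float, fs: float) -> np.ndarray:
        # positions: (n_samples, 3) trajectory of one joint; fs: sampling rate in Hz.
        velocity = np.gradient(positions, 1.0 / fs, axis=0)
        speed = np.linalg.norm(velocity, axis=1)
        return 0.5 * mass * speed ** 2              # KE = 1/2 * m * v^2

    def smoothness(positions: np.ndarray, fs: float) -> float:
        # Inverse of the mean magnitude of jerk (third derivative of position).
        jerk = np.gradient(np.gradient(np.gradient(positions, 1.0 / fs, axis=0),
                                       1.0 / fs, axis=0), 1.0 / fs, axis=0)
        return 1.0 / (np.mean(np.linalg.norm(jerk, axis=1)) + 1e-9)

    traj = np.cumsum(np.random.randn(500, 3) * 0.01, axis=0)  # dummy 3D trajectory
    print(kinetic_energy(traj, mass=4.0, fs=100.0).mean(), smoothness(traj, fs=100.0))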

 

DANCE platform2 qualitiesextractiontool

 

The DANCE Project

“Closing the eyes is needed to see the immeasurable spaces of a dream, to find images that open up to feelings coming from the least explored places of the brain, to experiment with forms of the visible derived by different sensorial configurations”
(J.Svankmajer, Touching and Imagining)

DANCE maps these thoughts into scientific and artistic research for investigating how affective and relational qualities of body movement can be expressed, represented, and analyzed by the auditory channel. DANCE aims at understanding the meaning of “closing the eyes”, the perception of expressiveness and entrainment in dance, the participation in the emotion conveyed by a sequence of movements in space, and the understanding of the non-verbal language of bodies that communicate, imagining and questioning concrete ways to listen to a choreography and to feel a ballet.

DANCE dimensions, strongly integrated and interacting, are the following:

  1. Inclusion and rehabilitation. Interactive sonification and musicalization of choreutic movement (processing of sound spaces and ways of using affective music content in relation to the quality of gestures) investigates forms of sensory substitution: to see through listening. Testing new perceptual experiences of body movement in the dark implies the possibility of rapprochement and sharing (of spaces and emotions) between visually impaired and sighted people.
  2. Scientific and technological research. Neuroscience studies on brain plasticity in sensory substitution at University of Maastricht will be combined with studies at Casa Paganini-InfoMus on technologies capable of seeing emotional qualities and social indicators of movement, interpreting them as techniques for searching data in digital archives (e.g. sound, music, visual archives), and with research on interactive sonification at KTH.
  3. Artistic research and production. The choreography itself creates the music: dance can be conceived as musical composition (or re-composition, interpretation), changing its traditional dimension into an eminently or exclusively listening experience: Gesture, as an aesthetic object, is experienced as a sounding object.

 

DANCE Mocap 2
An excerpt from a synchronized multimodal recording session (motion capture and physiology) to create a dataset of  movement data to be used to validate algorithms for the real-time analysis of movement qualities. Dancer Roberta Messa. April-May 2015

DANCE Platform version 1

On this page you can find the instructions necessary to download, install and run the DANCE software platform. The platform is based on EyesWeb XMI (http://www.infomus.org/eyesweb_eng.php), allowing users to perform synchronized recording, playback, and analysis of multimodal data streams. Details about the platform architecture and data stream formats are provided in Deliverable 4.1.

The platform supports several research activities of the DANCE Project:

  • creation of a multimodal repository of recordings of movement qualities;
  • fine-grain synchronization of multimodal data
  • segmentation of the recordings in fragments, according to the chosen qualities
  • playback and testing of the repository
  • extraction of the movement features and qualities
  • real-time interaction sonification
  • design and development process of scientific experiments of DANCE
  • design and development of the prototypes of applications
  • design and development of artistic projects exploiting the results of the DANCE project (e.g., artistic performances)

To use the current version of the platform you have to:

  1. download and install EyesWeb XMI
  2. download the DANCE example tools and patches (a patch is a program, written and stored in the EyesWeb XMI format; see the description below)
  3. download the sample data needed by the example patches
  4. run EyesWeb XMI, load one or more patches and execute them

1. Download and install EyesWeb XMI

EyesWeb XMI is a modular system that allows both expert users (e.g., researchers in computer engineering) and non-expert users (e.g., artists) to create multimodal installations in a visual way. EyesWeb provides software modules, called blocks, that can be assembled intuitively (i.e., by operating only with the mouse) to create tools and programs, called patches, that exploit the system's resources such as multimodal files, webcams, sound cards, multiple displays, and so on.

The latest version of EyesWeb is 5.6.1.0 (released in November 2016). You can download it from the following link: ftp://ftp.infomus.org/Evaluate/EyesWeb/XMI/Version_5.6.x/EyesWeb_XMI_setup_5.6.1.0.exe

Once you have finished installing EyesWeb, proceed to step 2 in order to be able to run the DANCE platform patches.

2. Download the DANCE example tools and patches

The DANCE example tools and patches are programs, written to be executed by EyesWeb, that allow the user to record, play back, and analyze multimodal data (video, audio, motion capture, sensors). To run the tools, download the corresponding installers, launch them, and execute the tools as normal Windows applications. To run the patches, download them and load them into the EyesWeb application (see step 1 on how to download and install EyesWeb). The current version of the DANCE example tools and patches includes applications allowing you to perform different tasks:

  1. to record and playback multimodal data
  2. to analyze multimodal data

a) Tools and patches for recording and playing back multimodal data

Kinect recorder tool (download installer)

The Kinect recorder tool is depicted below:

DANCE platform videorecorder

The tool shows the current framerate (50 frames per second in the shown example), the name used for this trial (trial_000, progressive numbers are automatically assigned to each trial), and the value of the reference clock (HHHH:MM:SS.mmm; 0000:00:07.440 in the above picture).

The recording tool records avi files. The video is encoded in MPEG-4 format, the resolution is 1280x720 and the framerate is 50 fps. Audio is encoded in AAC format at 48000 Hz. Two channels are recorded: the left channel contains audio from the system’s audio input device (e.g., a microphone), whereas the right channel is the reference clock encoded in SMPTE audio format.

Multiple instances of the video recorder tool can be started and can work standalone, or synchronized with the other recorders. The options panel allows you to configure the working mode of the recorder.

Audio recorder tool (download installer)

The audio recorder tool is depicted below:

DANCE platform audiorecorder

The recorder creates a stereo file in WAVE format, and the SMPTE is added as the right channel. Audio is sampled at 48000 Hz. The user interface is very similar to the video recorder tool. The main difference is the visualization part. In this tool the audio waveform is shown instead of the video stream. The options panel allows you to configure the working mode of the recorder.
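As a rough sketch of how such a file can be inspected outside EyesWeb, the following Python example separates the microphone signal on the left channel from the SMPTE clock signal on the right channel; the soundfile package and the file name are assumptions, and decoding of the SMPTE/LTC signal itself is not shown.

    import soundfile as sf

    # Assumption: the recorder wrote a 2-channel, 48 kHz WAVE file as described above.
    data, samplerate = sf.read("trial_000_audio.wav")  # data shape: (n_frames, 2)
    mic_signal = data[:, 0]    # left channel: audio from the input device
    smpte_signal = data[:, 1]  # right channel: reference clock encoded as SMPTE
    print(samplerate, mic_signal.shape, smpte_signal.shape)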

IMU recorder tool (download installer)

The IMU recorder tool is depicted below:

DANCE platform imurecorder

The recorder expects input values (sent as OSC packets, see Deliverable D4.1 for details about this format) from 4 Inertial Measurement Units (IMUs). The graph in the figure shows the values selected by the user (one among accelerometer, gyroscope, or compass) for each of the 4 IMUs. In the lower left part of the recorder interface you can read the current streaming framerate of each sensor (49.95 fps in this example). Below the graph you can read both the trial name and the reference clock.

The data is saved by the recording tool in CSV format. The CSV format is commonly used to provide portability towards external software (e.g., Matlab) and, of course, can be read by EyesWeb itself for playback or analysis purposes. The options panel allows you to configure the working mode of the recorder.
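For illustration, an external program could feed this recorder with OSC packets along the lines of the Python sketch below (python-osc package); the host, port, address pattern, and value layout are placeholders, and the actual ones expected by the tool are specified in Deliverable D4.1.

    from pythonosc.udp_client import SimpleUDPClient

    client = SimpleUDPClient("127.0.0.1", 9000)  # placeholder host and port
    # Placeholder message: accelerometer reading of IMU number 1 as three floats.
    client.send_message("/imu/1/acceleration", [0.02, -0.15, 9.81])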

Recorder tools options panel

All the recording tools share the same options panel depicted below:

DANCE platform recorderoptions

The control type section controls the synchronization mode. If set to master/standalone mode, the tool works with its own clock, without synchronizing it to other devices. In slave mode, the tool receives the clock time from an external device (the master). The clock time can be received either via network (OSC protocol) or via audio (SMPTE sync).

Playback patch

Once you have recorded some audio, video, and IMU data, you can play it back using the playback patch.

Download the patch and copy the downloaded file to the parent folder of the recorded data. If you did not record any data you can download some sample data from this website.

For example, if the recorded/downloaded data is located in:
C:\Users\my_username\recordings
then the patch path will be:
C:\Users\my_username\recordings\DANCE_platform_reader.zip

Start EyesWeb and load the reader patch. You will see the following screen:

DANCE platform playerloaded

1) Run the patch by pressing the play button on the EyesWeb toolbar
DANCE platform eyeswebtoolbar

2) Use the slider “Trial Selector” to select a specific trial to be played back
3) To PLAY press the green "Play" button (or press the "s" key)
4) To STOP the playback press the red "Stop" button (or press the "t" key)
5) To PAUSE the playback press the "Pause" button (or press the "p" key)

IMPORTANT: When switching from one recording to another, you must first stop the currently playing segment and then start the new one.

During the playback of a file, video, 3D mocap data and IMU signals will be displayed. An example:

DANCE platform playerstarted

Patches for analyzing multimodal data

Now that you have recorded or downloaded some multimodal data and can successfully play it back, you can proceed by performing some analysis on it.

In the DANCE project we aim to advance the state of the art in the automated analysis of expressive movement. We consider movement as a communication channel allowing humans to express and perceive implicit high-level messages, such as emotional states, social bonds, and so on. That is, we are not interested in physical space occupation or movement direction per se, or in “functional” physical movements: our interest is in the implications at the expressive level. For example, the direction of a hand movement (to the left or to the right) may be irrelevant, whereas the level of fluidity or impulsiveness of that movement might be relevant. As an example, let us consider the movement “knocking at a door”. We do not want to analyze the functional action of “knocking at a door”, but the intention that lies behind it (e.g., the lover that knocks at the door of his beloved). To study it, we focus on the sets of non-verbal expressive features that are described in detail in Deliverable 5.1.

The following expressive features can be extracted on multimodal data using the patches you can download below:

  • Energy of movement. Kinetic Energy of a moving (part of the) body: Energy = 1/2 * mass * velocity^2
  • Slowness. This feature indicates whether the movement is performed slowly or not.
  • Smoothness. This feature is based on Energy and Slowness. If movement exhibits high (respectively, low) slowness and no (respectively, many) energy peaks are detected then smoothness is high (respectively, low).
  • Weight. This feature is related to Laban's Weight quality (for details, see: Rudolf Laban and Frederick C. Lawrence. 1947. Effort. Macdonald & Evans). It is computed by extracting the vertical component of the Energy normalized to the overall amount of Energy in the movement.
  • Suddenness. It is computed using alpha-stable distributions. An alpha-stable fit is performed on the peaks of acceleration. A movement is sudden when the product between alpha and gamma is high (see Deliverable 2.1 for details; a rough sketch is given after this list). The algorithm takes as input the 3D joint accelerations on the time window on which the suddenness has to be computed, and then fits them to the alpha-stable distribution. The output gets close to 1 (i.e., very sudden movements) when there are abrupt increases of the joint's velocity in the input signal, and vice versa.
  • Impulsivity. An impulsive movement can be performed by a part of the body or by the whole body and is characterized by the following properties: (P1) it is sudden, that is, it presents a high variation of speed (either from low to high or from high to low); (P2) it is executed with no preparation.
  • Fluidity. A fluid movement can be performed by a part of the body or by the whole body and is characterized by the following properties: (P1) the movement of each involved joint of the (part of the) body is smooth, following the standard definitions in the biomechanics literature; (P2) the energy of movement (energy of muscles) is free to propagate along the kinematic chains of (parts of) the body (e.g., from head to trunk, from shoulders to arms) according to a coordinated wave-like propagation. That is, there is an efficient propagation of movement along the kinematic chains, with a minimization of the dissipation of energy. Fluidity is computed as the distance between the evolution in time of the Humanoid Mass-Spring model (i.e., a model of the human body conceived as a set of masses connected by springs, see Deliverable 2.1 for details) and the actual movement of a user.
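The following Python sketch illustrates the suddenness idea described above (an alpha-stable fit on acceleration peaks); it is an illustration under assumed parameters, not the DANCE implementation, and it maps SciPy's scale parameter to gamma without the normalization used by the actual tool.

    import numpy as np
    from scipy.signal import find_peaks
    from scipy.stats import levy_stable

    def suddenness_index(acc_magnitude: np.ndarray) -> float:
        # acc_magnitude: |acceleration| of a joint over the analysis time window.
        peaks, _ = find_peaks(acc_magnitude)
        if len(peaks) < 4:                       # too few peaks for a meaningful fit
            return 0.0
        # Fit an alpha-stable distribution to the peak values (the fit can be slow).
        alpha, beta, loc, scale = levy_stable.fit(acc_magnitude[peaks])
        return alpha * scale                     # high product -> sudden movement

    acc = np.abs(np.random.randn(500))           # dummy acceleration magnitudes
    print(suddenness_index(acc))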

Besides the above expressive features, we are interested in extracting analysis primitives: they are unary, binary, or n-ary operators that summarize with one or more values the temporal development of low-level features in an analysis time unit (a movement unit or a time window). The simplest unary analysis primitives are statistical moments (e.g., average, standard deviation, skewness, and kurtosis). Further examples of more complex unary operators include shape (e.g., slope, peaks, valleys), entropy, recurrence, and various time-frequency transforms. Models for prediction (e.g., HMM) can also be applied. Binary and n-ary operators can be applied, e.g., for measuring relationships between low-level features computed on the movement of different body parts. For example, synchronization can be used to assess coordination between the hands. Causality can provide information on whether the movement of a joint leads or follows the movement of another joint.
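For instance, the simplest unary primitives mentioned above (the four statistical moments) can be computed over a time window of a low-level feature as in the short sketch below; the window length is an arbitrary example.

    import numpy as np
    from scipy.stats import skew, kurtosis

    def unary_moments(feature_window: np.ndarray) -> dict:
        # feature_window: values of one low-level feature over an analysis time unit.
        return {
            "mean": float(np.mean(feature_window)),
            "std": float(np.std(feature_window)),
            "skewness": float(skew(feature_window)),
            "kurtosis": float(kurtosis(feature_window)),
        }

    energy_window = np.random.rand(250)  # dummy: 5 s of energy values at 50 Hz
    print(unary_moments(energy_window))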

Analysis primitives (event synchronization and spike synchronization, see the tables below) can also be extracted on multimodal data using the patches you can download below.

We split the patches for analyzing multimodal data into 2 groups:

  1. patches computing features from IMUs
  2. patches computing features from motion capture data

Patches computing features and analysis primitives from IMUs

The links reported below summarize the patches for computing features and analysis primitives from IMUs. To use and test the patches:

  1. download the playback patch
  2. download the zip file and extract the IMU patches
  3. download and extract the sample IMU data and videos of sample dance fragments used by the patches to demonstrate examples of computation of features; the data has to be extracted in the same folder of the patches
  4. run EyesWeb and load the patches indicated in the table for each feature, and run the patches in the indicated order
Patches computing features from IMUs (you need the sample IMU data to run these patches). For each feature/analysis primitive, execute the patch files in EyesWeb in the indicated order:

  • Energy: 1. DANCE_platform_reader.eywx; 2. DANCE_library_energy_imu.eywx
  • Slowness: 1. DANCE_platform_reader.eywx; 2. DANCE_library_slowness_imu.eywx
  • Smoothness: 1. DANCE_platform_reader.eywx; 2. DANCE_library_smoothness_imu.eywx
  • Weight: 1. DANCE_platform_reader.eywx; 2. DANCE_library_weight_imu.eywx
  • Suddenness / Impulsivity: 1. DANCE_platform_reader.eywx; 2. DANCE_library_impulsivity-suddenness_imu.eywx
  • Event Sync: 1. DANCE_platform_reader.eywx; 2. DANCE_library_event_synchronization.eywx
  • SPIKE Sync: 1. DANCE_platform_reader.eywx; 2. DANCE_library_spike_synchronization.eywx

Patches computing features and analysis primitives from motion capture data

The links reported below summarize the main patches for computing features and analysis primitives from motion capture data. To use and test the patches:

  1. download the playback patch
  2. download the zip file and extract the motion capture patches
  3. download and extract the sample motion capture data and videos of sample dance fragments used by the patches to demonstrate examples of computation of features; the data has to be extracted in the same folder as the patches
  4. run EyesWeb, load the patches indicated in the table for each feature, and run the patches in the indicated order
Patches computing features from motion capture data (you need the sample motion capture data to run these patches). For each feature/analysis primitive, execute the patch files in EyesWeb in the indicated order:

  • Impulsivity / Suddenness: 1. DANCE_platform_reader.eywx; 2. DANCE_library_impulsivity-suddenness_mocap.eywx
  • Fluidity: 1. DANCE_platform_reader.eywx; 2. DANCE_fluidity_mocap.eywx
  • Event Sync: 1. DANCE_platform_reader.eywx; 2. DANCE_library_event_synchronization.eywx
  • SPIKE Sync: 1. DANCE_platform_reader.eywx; 2. DANCE_library_spike_synchronization.eywx

 

3. Download the IMU and motion capture sample data

As reported in the above paragraphs, you have to download and extract some sample data in order to run the DANCE example patches. Without the sample data the example patches will not start, or will start but will not provide any output. The sample data is contained in a zip file and is a collection of 2 trials consisting of data recorded by a motion capture system, a video camera, and 4 IMU sensors placed on the dancer's limbs (wrists and ankles). The zip archive contains the following folders and files (a small sketch for grouping the files by trial is given after the list):

  • folder: imu
    • file: 2015-12-15_t010_imu01_acceleration.txt (CSV format, 3D accelerometer data captured by IMU number 1 during trial number 010 on 2015-12-15)
    • file: 2015-12-15_t010_imu01_gyro.txt (CSV format, 3D gyroscope data captured by IMU number 1 during trial number 010 on 2015-12-15)
    • file: 2015-12-15_t010_imu01_magnetic.txt (CSV format, 3D compass data captured by IMU number 1 during trial number 010 on 2015-12-15)
    • .... (the same files are present for IMUs number 2, 3, and 4 and for trial number 011)
  • folder: mocap
    • file: 2015-12-15_t010_mocap.qam (3D model of the dancer captured during trial number 010 on 2015-12-15)
    • file: 2015-12-15_t010_mocap.tsv (3D motion captured data of the dancer captured during trial number 010 on 2015-12-15)
    • file: 2015-12-15_t011_mocap.qam (3D model of the dancer captured during trial number 011 on 2015-12-15)
    • file: 2015-12-15_t011_mocap.tsv (3D motion captured data of the dancer captured during trial number 011 on 2015-12-15)
  • folder: video
    • file: 2015-12-15_t010_video01.avi (video and audio file of the dancer captured during trial number 010 on 2015-12-15)
    • file: 2015-12-15_t010_video01_timings.txt (timings of the video frames contained in the corresponding video file 2015-12-15_t010_video01.avi)
    • file: 2015-12-15_t011_video01.avi (video and audio file of the dancer captured during trial number 011 on 2015-12-15)
    • file: 2015-12-15_t011_video01_timings.txt (timings of the video frames contained in the corresponding video file 2015-12-15_t011_video01.avi)
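Since all file names share the date and trial prefix described above, the multimodal files belonging to the same trial can be grouped with a short script; the extraction folder name is illustrative.

    from pathlib import Path
    from collections import defaultdict

    root = Path("sample_data")                    # illustrative extraction folder
    trials = defaultdict(list)
    for f in root.rglob("2015-12-15_t*"):
        prefix = "_".join(f.name.split("_")[:2])  # e.g. "2015-12-15_t010"
        trials[prefix].append(f.name)

    for trial, files in sorted(trials.items()):
        print(trial, len(files), "files")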

4. Run EyesWeb XMI, load one or more patches and execute them

Once you have downloaded and installed EyesWeb, and downloaded some example patches plus the needed sample data, you are ready to run the patches:

  1. run EyesWeb, by clicking on the corresponding shortcut in the start menu:
    DANCE platform eyeswebshortcut
  2. load an example patch: select the file->open menu item; browse for the patch file; select "open":
    DANCE platform eyeswebopenfile
  3. click on the "play" button:
    DANCE platform eyeswebtoolbar
  4. depending on the example patch you are executing, different display windows will appear

Consortium

 

Università degli Studi di Genova (UNIGE)

logo unige

logo cp

Established in 1984 at Università degli Studi di Genova (UNIGE), the Casa Paganini - InfoMus lab carries on scientific research and design, development, and experimentation of multimodal interfaces and multimedia systems. Main research addresses computational models of nonverbal multimodal expressive and social communication, with special focus on human movement and gesture (e.g., dance, full-body movements), audio (e.g., music), expressive and tangible multimodal interfaces. Applications domains include cultural heritage, edutainment, rehabilitation, entertainment, creative industry (music, dance), user-centric media, and mobile distributed multimedia systems. The lab is the main developer of the freely available EyesWeb XMI platform for synchronised recording, real-time interactive processing, and rendering of multimodal data.

The Università degli Studi di Genova (UNIGE) is the coordinator of the project. It will lead the project by ensuring an effective coordination and collaboration among all the partners, by monitoring the progress, and by participating in the R&D activities. It is mainly in charge of real-time analysis and sonification of non-verbal full-body movement, emotion detection, use-case design, development of the DANCE platform, and integration of its components.


University of Maastricht (UM)

logo maastricht

The Faculty of Psychology and Neuroscience of the University of Maastricht is home to the Department of Cognitive Neuroscience and the Maastricht Brain Imaging Centre, comprising 9 research groups that carry out research on topics such as auditory perception, emotion and cognition, columnar-level fMRI, physiological (f)MRI, visual perception and learning, developmental neuroscience, and fMRI methods. Prof. Beatrice de Gelder is the head of the Emotion and Cognition group, which investigates the cognitive and affective neuroscience of intersensory perception. The research focuses on how emotion and cognition interact in humans and on the interaction between different sensory systems, primarily between seeing and hearing.

UM will mainly investigate superior auditory localisation and movement perception abilities in the blind, going beyond the state of the art in the neuroscientific investigation of structural and functional changes in cortical and subcortical networks and of how they relate to superior auditory localisation in the congenitally blind. UM will actively collaborate with the other partners on real-time analysis and sonification of non-verbal full-body movement, emotion detection, use-case design, development of the DANCE platform, and integration of its components.


Kungliga Tekniska Hoegskolan (KTH)

 logo kth

KTH accounts for one-third of Sweden’s technical research and engineering education capacity at university level. Education and research cover a broad spectrum, from natural sciences to all the branches of engineering as well as architecture, industrial engineering and management, urban planning, work science and environmental engineering. In addition to the research carried out by the KTH Schools, a large number of both national and local Competence Centres are located at KTH. Various research foundations also finance a number of research programmes. The project will be carried out by researchers at the Dept. of Media Technology and Interaction Design (MID), School of Computer Science and Communication (CSC), one of the 10 schools which constitute KTH.

MID has its roots in computer and behavioural science, graphic technology and media technology, and today has faculty representing, e.g., anthropology and psychology, computer science and media technology, sound and music computing, interaction design, film and literature studies, and media and communication science.

KTH will mainly conduct research on effective mappings between physical gestures and sound parameters to support feedback and understanding. KTH will actively collaborate with the other partners on real-time analysis and sonification of non-verbal full-body movement, emotion detection, use-case design, development of the DANCE platform, and integration of its components.
