Designs for the motor execution and imagery experiments were based on previous work [56] and relied on a delayed grasping task following visual presentation of the target objects.
More specifically, in each trial, a picture of the target object was presented for 2 seconds; then, after a 4-second pause, an auditory cue prompted the actual task: participants had to preshape the hand as if they were grasping the target object to use it (for the execution group) or imagine the preshaping movement without moving their hand (for the imagery group). A 10-second interval separated two consecutive trials.
Twenty different target objects were used in this study (see Table 6 for a list), and, in each experiment, each movement was repeated 5 times, for a total of 100 trials organized in 5 fMRI runs. Each run lasted 5 minutes 44 seconds, including 12 seconds of rest at its beginning and end to provide a measure of baseline fMRI activity. The experimental paradigm for the execution and imagery experiments was coded in Presentation (Neurobehavioral Systems, Berkeley, CA, USA) and delivered with an MR-compatible monitor at a resolution of 1,200 × 800 pixels and a mirror mounted on the MR coil.
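For illustration, the nominal timing of one execution/imagery run can be reconstructed from these figures, under the assumption (made here only for this sketch) that the 20 trials evenly fill the 320 seconds between the two 12-second rest blocks, with the auditory cue and the executed or imagined movement falling within each 16-second trial slot. A minimal sketch in Python:

    # Nominal run timing (assumption: trials evenly spaced between rest blocks)
    N_TRIALS = 20                      # one trial per target object within a run
    RUN_SECONDS = 5 * 60 + 44          # 5 minutes 44 seconds
    REST_SECONDS = 12                  # rest at the beginning and at the end

    trial_seconds = (RUN_SECONDS - 2 * REST_SECONDS) / N_TRIALS            # 16.0 s
    picture_onsets = [REST_SECONDS + i * trial_seconds for i in range(N_TRIALS)]
    cue_onsets = [onset + 2 + 4 for onset in picture_onsets]               # 2 s picture + 4 s pause

    print(trial_seconds)      # 16.0
    print(cue_onsets[:3])     # [18.0, 34.0, 50.0]
    assert 5 * N_TRIALS == 100   # 20 objects x 5 repetitions across the 5 runs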
During the observation experiment, participants watched short videos of preshaping movements towards an object from the same set adopted in the other experiments. In each trial, the video was followed by a task requiring a judgment about the target of the preshaping gesture. To create the videos, we used vectors of joint angles (according to a 24-DoF hand model) corresponding to the common starting posture and to the 20 final object-specific postures, recorded in a previous study [56]. Intermediate hand configurations (i.e., posture vectors) were obtained by linear interpolation of each joint angle between the initial and final hand postures. The resulting 30 vectors of joint angles were plotted as 3D renderings using Mathematica 8.0 (Wolfram Research, Inc., Champaign, IL, USA), saved as PNG images (size: 800 × 600 pixels), and converted to 1-second-long videos at a frame rate of 60 Hz. Five sets of 20 videos were created, showing the hand rendering as seen from 5 different viewpoints, obtained by changing the azimuth and elevation values.
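The interpolation step amounts to a frame-wise linear blend of the two posture vectors. A minimal sketch, assuming the joint angles are held in NumPy arrays; the placeholder postures below stand in for the recorded values from [56]:

    import numpy as np

    N_DOF = 24      # joints in the hand model
    N_FRAMES = 30   # posture vectors per video, from starting to final posture

    def interpolate_postures(start, end, n_frames=N_FRAMES):
        """Linearly interpolate each joint angle between two hand postures."""
        start, end = np.asarray(start, float), np.asarray(end, float)
        t = np.linspace(0.0, 1.0, n_frames)[:, None]   # one weight per frame
        return (1.0 - t) * start + t * end             # shape: (n_frames, N_DOF)

    rng = np.random.default_rng(0)
    start_posture = rng.uniform(0.0, 90.0, N_DOF)      # placeholder joint angles
    final_posture = rng.uniform(0.0, 90.0, N_DOF)
    frames = interpolate_postures(start_posture, final_posture)
    print(frames.shape)   # (30, 24)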
During the fMRI experiment, participants performed 5 runs, each comprising 20 trials. In each trial, the video was presented (1 second), followed by a black fixation cross at the center of the screen (7 seconds). Then, the judgment task (2-alternative forced choice) was presented: participants were shown black-and-white pictures of 2 objects (size: 250 × 250 pixels), namely the target of the previously shown preshaping gesture and a randomly chosen alternative, and were asked to press the left or right key on an MR-compatible keyboard to select the actual target of the preshaping movement. After the task, the same black fixation cross was shown for 6 seconds. Each run comprised the presentation of the full set of 20 videos (20 objects), always from the same viewpoint; the 5 different viewpoints were presented in separate runs. Each run started and ended with 10 seconds of rest and lasted 5 minutes 40 seconds in total. The experimental paradigm was delivered with an MR-compatible monitor at a resolution of 1,200 × 800 pixels and a mirror mounted on the MR coil, using the E-Prime 2 software package (Psychology Software Tools, Pittsburgh, PA, USA). Owing to hardware failure, behavioral responses from 2 participants could not be recorded. For all experiments, participants performed a familiarization run outside the MR scanner to ensure that they correctly understood the procedures.
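A sketch of how the observation runs could be assembled under the constraints above; each run uses a single viewpoint and pairs every target with a randomly chosen alternative object, while the within-run trial order is randomized here purely for illustration, since the presentation order is not specified:

    import random

    N_OBJECTS = 20
    N_VIEWPOINTS = 5   # one viewpoint per run, different viewpoints in different runs

    def build_observation_runs(seed=0):
        """Each run shows all 20 videos from a single viewpoint; every trial pairs
        the true target with a randomly chosen alternative object for the
        2-alternative forced choice."""
        rng = random.Random(seed)
        runs = []
        for viewpoint in range(N_VIEWPOINTS):
            trials = []
            for target in range(N_OBJECTS):
                foil = rng.choice([obj for obj in range(N_OBJECTS) if obj != target])
                trials.append({"viewpoint": viewpoint, "target": target, "foil": foil})
            rng.shuffle(trials)   # presentation order within a run (assumed random)
            runs.append(trials)
        return runs

    runs = build_observation_runs()
    print(len(runs), len(runs[0]))   # 5 runs, 20 trials each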
Averta G., Barontini F., Catrambone V., Haddadin S., Handjaras G., Held J.P., Hu T., Jakubowitz E., Kanzler C.M., Kühn J., Lambercy O., Leo A., Obermeier A., Ricciardi E., Schwarz A., Valenza G., Bicchi A., & Bianchi M. (2021). U-Limb: A multi-modal, multi-center database on arm motion control in healthy and post-stroke conditions. GigaScience, 10(6), giab043.