Thirty observers (12 female; ages 21–64 years, mean age 40 years) rated the dominance of 12 neutral faces overlaid on 12 different dynamic backgrounds, using a 5-point rating scale ranging from very submissive to very dominant. Six of these backgrounds had previously been classified as high in dominance (strong backgrounds: mean dominance rating 0.85±0.07 on a scale from 0 to 1, N = 35), while the other six had been classified as low in dominance (weak backgrounds: mean dominance rating 0.22±0.06, N = 35).
The 12 neutral faces were selected from the Dominance data set of the validated Princeton faces database. Their neutrality had been verified in a previous study against a homogeneous dark background. Neutral faces were used as targets because their evaluation is affected by emotional scene content to a greater extent than that of faces with exaggerated expressions, probably because of their ambiguous nature. In addition, using neutral faces avoids issues of stimulus-background congruency.
The 12 dynamic backgrounds were different natural textures from the Dyntex database (AVI movies with a resolution of 600×480 pixels, a duration of 10 s, and a frame rate of 25 fps), representing everyday background scenes such as moving water, fluttering vegetation, and a waving flag. In an earlier study, six of these textures (with identifiers 54ab110, 64adl10, 648dc10, 649ha10, 6484d10, and 6485110 in the Dyntex database) were classified as high in dominance (strong), and six others (with identifiers 54ac110, 571b110, 645ab10, 6486b10, 6482210, and 6485310) were classified as low in dominance (weak; Fig. 1B).
In contrast to previous studies on the effects of affective backgrounds on facial evaluation, the backgrounds used in this study are dynamic and have no evident semantic affective connotation. Each face was overlaid on each dynamic background, resulting in a total of 144 different stimuli (12 faces × 12 backgrounds).
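The full factorial combination of faces and backgrounds can be sketched as follows; the face and background identifiers here are placeholders for illustration, not the actual file names used in the experiment:

```python
# Sketch of the full factorial stimulus design: each of the 12 faces
# is paired with each of the 12 dynamic backgrounds (placeholder names).
from itertools import product

faces = [f"face_{i:02d}" for i in range(1, 13)]        # 12 neutral faces
backgrounds = [f"bg_{i:02d}" for i in range(1, 13)]    # 12 dynamic backgrounds

stimuli = list(product(faces, backgrounds))            # 12 x 12 = 144 stimuli
print(len(stimuli))
```

Enumerating the design this way makes the 144-stimulus count explicit and gives a list that can be shuffled for randomized presentation order.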
Dell Precision 490 PCs were used to present the stimuli to the observers in random order and to register their responses. The computers were equipped with Dell 19" monitors with a screen resolution of 1280×1024 pixels and a screen refresh rate of 60 Hz. MediaLab v2012 (www.empirisoft.com) was used to present the stimuli and collect the answers.

The stimuli were presented for a maximum of 10 s on a light grey background, flanked by a rating scale. If a participant responded within 10 s of stimulus onset, the current face disappeared and the next face was shown. If a participant needed more than 10 s to respond, the stimulus disappeared from the screen, but the rating scale remained visible until the participant had responded. Participants were instructed to base their answers solely on their first, overall impression of each face and to ignore the background. Observers used a standard mouse pointer to indicate their responses.

Statistical analyses were performed with IBM SPSS 20.0 for Windows. Because the experiments used an ordinal scale of measurement, and no interval-scale assumption was made, non-parametric tests were used.
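A minimal sketch of the kind of non-parametric comparison such a design calls for (not the original SPSS procedure) is a Wilcoxon signed-rank test on each observer's mean rating for strong versus weak backgrounds; the ratings below are synthetic example data, not the study's results:

```python
# Illustrative non-parametric analysis with synthetic data: a Wilcoxon
# signed-rank test on paired per-observer mean dominance ratings
# (strong vs. weak backgrounds). Values are invented for the sketch.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
n_observers = 30

# Hypothetical per-observer means on the 5-point scale
# (1 = very submissive, 5 = very dominant).
strong = np.clip(rng.normal(3.6, 0.4, n_observers), 1, 5)
weak = np.clip(rng.normal(2.8, 0.4, n_observers), 1, 5)

result = wilcoxon(strong, weak)  # paired, rank-based test
print(f"W = {result.statistic:.1f}, p = {result.pvalue:.4f}")
```

The Wilcoxon test operates on the ranks of the paired differences, so it is appropriate for ordinal rating data where interval-scale assumptions are avoided.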