Test Design
Assessing the potential for learning analogy problem-solving: does EF play a role?
Analogical reasoning is related to both everyday and scholastic learning and is a robust predictor of g. Children's ability to reason by analogy is therefore often measured in school contexts to gain insight into their cognitive and intellectual functioning. This ability is commonly assessed with conventional, static instruments. Researchers and practitioners have criticised static tests for providing only an overview of what individuals have learned in the past; following Vygotsky's notion of the zone of proximal development, such tests are consequently assumed not to tap into the potential for learning. This seminar focuses on children's potential for reasoning by analogy as measured by a dynamic test, which has a test-training-test design. In so doing, the potential relationship between dynamic test outcomes and executive functioning will be explored.
Developing a test to assess the ability of Zurich’s police cadets to discriminate, learn and recognize voices
The goal of this pilot study is to develop a test through which people with extraordinary voice recognition and discrimination skills can be identified for forensic purposes. Since interest in this field emerged, three studies have been published with the goal of finding people with potential super-recognition skills in voice processing. One of them used a discrimination test and two used recognition tests, but none combines the two test scenarios, and their test designs cannot be directly compared to a casework scenario in forensic phonetics. The pilot study at hand attempts to bridge this gap and analyses whether the skills of voice discrimination and voice recognition correlate. The study is guided by a practical, forensic application, which further complicates the process of creating a viable test. The participants in the pilot come from different classes of police cadets, which means the test can be repeated and adjusted over time.
Commonly used face cognition tests yield low reliability and inconsistent performance: Implications for test design, analysis, and interpretation of individual differences data
Unfamiliar face processing (face cognition) ability varies considerably in the general population. However, the means of its assessment are not standardised, and the laboratory tests selected vary between studies. It is also unclear whether 1) the most commonly employed tests are reliable, 2) participants show a degree of consistency in their performance, and 3) face cognition tests broadly measure one underlying ability, akin to general intelligence. In this study, we asked participants to perform eight tests frequently employed in the individual differences literature. We examined the reliability of these tests, the relationships between them, and the consistency in participants' performance, and used data-driven approaches to determine the factors underpinning performance. Overall, our findings suggest that the reliability of these tests is poor to moderate, the correlations between them are weak, the consistency in participant performance across tasks is low, and performance can be broadly split into two factors: telling faces together and telling faces apart. We recommend that future studies adjust analyses to account for stimuli (face images) and participants as random factors, routinely assess reliability, and examine newly developed tests of face cognition in the context of convergent validity with other commonly used measures of face cognition ability.
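The recommendation to treat both stimuli and participants as random factors corresponds to fitting a mixed-effects model with crossed random intercepts. The sketch below illustrates one way to do this in Python with statsmodels on synthetic data; it is not the authors' analysis, and all column names (accuracy, condition, participant, stimulus) and effect sizes are illustrative assumptions.

```python
# A minimal sketch, assuming a fully crossed design in which every
# participant responds to every face image. Column names and effect
# sizes are hypothetical, not taken from the study.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_participants, n_stimuli = 40, 30

# One row per participant-stimulus pairing.
df = pd.DataFrame(
    [(p, s) for p in range(n_participants) for s in range(n_stimuli)],
    columns=["participant", "stimulus"],
)
df["condition"] = rng.integers(0, 2, len(df))  # e.g. two task variants
df["accuracy"] = (
    0.70
    + 0.05 * df["condition"]
    + rng.normal(0, 0.05, n_participants)[df["participant"]]  # participant effect
    + rng.normal(0, 0.05, n_stimuli)[df["stimulus"]]          # stimulus effect
    + rng.normal(0, 0.10, len(df))                            # residual noise
)

# statsmodels fits crossed random effects by treating the whole dataset
# as a single group and declaring each factor as a variance component.
df["all"] = 1
vc = {"participant": "0 + C(participant)", "stimulus": "0 + C(stimulus)"}
model = smf.mixedlm("accuracy ~ condition", df, groups="all", vc_formula=vc)
print(model.fit().summary())
```

The fitted summary reports a variance component for participants and one for stimuli alongside the fixed effect of condition, so variability attributable to particular face images is no longer conflated with differences between participants.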