Abstract
Subjective comparisons of aspects of experience provide reliable and powerful numerical data, which can offer a means to characterize structures of consciousness. However, an exhaustive set of pairwise comparative judgements among N stimuli requires N(N−1)/2 trials, which is costly for in-lab, face-to-face data collection from a single participant. An online experimental platform makes it easy to recruit many participants and to randomly distribute a small proportion of all possible pairs to each; random assignment, however, is not efficient at obtaining data uniformly across all stimulus pairs. Here, we introduce a new method that minimizes the variance in trial counts across stimulus pairs by integrating PsychoPy with GitHub Gist, which records the frequency with which each pair has been presented. We provide JavaScript code that can be incorporated into customized code chunks in PsychoPy. The program can be run on Pavlovia for online participants, and we demonstrate the effectiveness of our method.
• The frequency with which each stimulus pair has been shown is stored on GitHub Gist.
• When a new participant starts on Pavlovia, our method reads the frequencies, selects the least presented stimulus pairs for the participant, and updates the frequencies.
• The frequencies are dynamically balanced for efficient data collection.
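The balancing step described above can be sketched as follows. This is a minimal illustration, not the paper's actual code: it assumes the Gist stores a JSON object mapping pair keys (e.g. "1-2") to presentation counts, and the function name `selectLeastShownPairs` and all variable names are hypothetical.

```javascript
// Hypothetical sketch of the pair-balancing logic: given the presentation
// counts fetched from the Gist, pick the least-shown pairs for the new
// participant and increment their counts before writing them back.

function selectLeastShownPairs(counts, numPairs) {
  // Sort pair keys by how often each pair has been presented, ascending.
  const sorted = Object.keys(counts).sort((a, b) => counts[a] - counts[b]);
  // Assign the least-presented pairs to this participant.
  const chosen = sorted.slice(0, numPairs);
  // Update the counts so later participants are steered to other pairs.
  for (const key of chosen) {
    counts[key] += 1;
  }
  return chosen;
}

// Example: three stimulus pairs with unequal presentation counts.
const counts = { "1-2": 3, "1-3": 0, "2-3": 1 };
const chosen = selectLeastShownPairs(counts, 2);
console.log(chosen); // the two least-shown pairs: ["1-3", "2-3"]
console.log(counts); // updated: { "1-2": 3, "1-3": 1, "2-3": 2 }
```

In the actual workflow, `counts` would be fetched from and written back to the GitHub Gist via its API, so that the presented-pair frequencies are shared across all concurrent Pavlovia participants.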