Establishing the test-retest reliability of perception and attention measures is important for exploring individual differences


Dr Kait Clark, lead of the Applied Cognition and Neuroscience theme of the Psychological Sciences Research Group (PSRG), has published a new open-access paper in the Journal of Vision: “Test-retest reliability for common tasks in vision science.” The paper is co-authored by UWE PhD student Kayley Birch-Hurst, collaborators Dr Craig Hedge and Dr Charlotte Pennington at Aston University, and UWE Psychology alumni Austin Petrie and Josh Lee.

The authors argue that considering the test-retest reliability of a perception or attention task is crucial if researchers wish to use the task to assess the impact of individual differences (e.g., traits, experience) on performance. The issue is rooted in the historical development of vision science tasks, which were often designed to minimise differences between participants in order to understand a cognitive mechanism more generally. With increased interest in the influence of individual differences on perception and attention, researchers are now using the same tasks, but these tasks may not produce sufficient between-participant variability to tell us anything meaningful about individual differences.

Test-retest reliability is the degree to which a participant’s performance is similar from one time completing a task to another. When there is little difference between individuals on a task, test-retest reliability tends to be low; i.e., if participants’ measures of accuracy or response time are all quite similar to each other, the degree to which one individual’s performance predicts their performance on a second test is going to be small. Therefore, a task with low test-retest reliability will not produce a consistent index of performance for any given individual (i.e., where they fall on a spectrum from “poor” to “excellent”) and cannot be used to assess individual differences in performance.
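This intuition can be expressed in variance terms. In its standard formulation, the intraclass correlation coefficient used in the paper is, roughly, the proportion of total variance attributable to stable differences between participants:

$$\mathrm{ICC} \approx \frac{\sigma^2_{\text{between}}}{\sigma^2_{\text{between}} + \sigma^2_{\text{error}}}$$

so when between-participant variance is small relative to measurement noise, the ICC is necessarily low.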

To assess test-retest reliability, Dr Clark and her team tested 160 undergraduate psychology participants on four commonly used tasks in vision science. The tasks measured a range of perceptual and attentional faculties such as sustained attention, motion perception, and peripheral processing, and each participant was tested twice, 1-3 weeks apart. The results demonstrate a range of reliabilities (as measured by the intraclass correlation coefficient, or ICC), indicating that some tasks (and some measures within these tasks) are more suitable for the exploration of individual differences than others. As expected, higher ICCs were associated with higher between-participant variability. The authors also reviewed a wide range of vision science tasks with known reliabilities and summarised these statistics in a useful reference table for future researchers. Finally, the paper provides detailed guidelines and recommendations for how to assess test-retest reliability appropriately.
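For readers who want to estimate an ICC on their own test-retest data, here is a minimal illustrative sketch in Python using the pingouin library. This is not the authors' analysis code, and the column names and data values are hypothetical:

```python
# Illustrative only (not the authors' analysis code): estimating test-retest
# reliability with an intraclass correlation coefficient via pingouin.
# The column names and data values below are hypothetical.
import pandas as pd
import pingouin as pg

# Long-format data: one row per participant per testing session
df = pd.DataFrame({
    "participant": [1, 1, 2, 2, 3, 3, 4, 4],
    "session":     [1, 2, 1, 2, 1, 2, 1, 2],
    "score":       [512, 498, 430, 441, 605, 580, 467, 490],  # e.g., mean RT (ms)
})

# pingouin returns a table of ICC variants; ICC(2,1) ("ICC2") is a common
# choice for test-retest designs where sessions are treated as random.
icc = pg.intraclass_corr(data=df, targets="participant",
                         raters="session", ratings="score")
print(icc[["Type", "ICC", "CI95%"]])
```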

Research Experience as an Undergrad: My summer internship and placement


By Josh Lee

I’m a second-year psychology student at UWE, and throughout my first year I found myself developing a keen interest in psychological research. The more I engaged with my degree, the more interested I became, and I started actively seeking opportunities to gain research experience towards the end of first year. I was interested in learning more about the research process, and I also knew how valuable experience can be for postgraduate applications.

In May of this year I went on an animal behaviour research trip to the island of Lundy. This was shortly after applying for my first research role, a paid summer internship with Drs Kait Clark and Charlotte Pennington. I learnt a lot on Lundy and made friends with the other student researchers. Towards the end, we realised we were on the same wavelength: three fellow Lundy attendees and I had been invited to interview for the same position. The interviews were scheduled for the week after our return from Lundy, and we were now friends competing against each other. All we could do was wish each other luck in the interview and hope for the best.

The interview was competitive, and we were all given a short programming task to attempt in advance. Maybe there was something in the sea air, but when an email came through from Kait offering the job, all four of our names were on it. Taking the extracurricular opportunity to learn and conduct psychological research on Lundy perhaps gave us an edge in the interview, and we now had the chance to contribute to a legitimate paper together.

The main aim of the project was to develop a set of visual and social cognition tasks for the purposes of establishing test-retest reliability, building on a recent study by Hedge, Powell, and Sumner (2018). Our first task was to complete a comprehensive review of the visual cognition literature. Although I had experience of examining research papers to get references for essays, this was much more in-depth and specific. The process of comparing the different papers took a while to get used to, but it has been eye-opening to review papers with a view towards designing our own study rather than evaluating a proposition for an essay. It highlights issues within and between papers that I would not have considered otherwise, and I feel like it has helped me develop a more complete approach to evaluating research papers in general. We were given lots of freedom to conduct the review and research – this was hugely beneficial as it left a lot of room for creative ideas and individual contribution.

We chose measures for which the test-retest reliability had not already been established so that our research could have the most impact. Each of us then chose one measure and worked on writing the Python code to implement it, with parameters aligned with previous studies. We are using PsychoPy, open-source software, to program our measures. I have limited coding knowledge (but enough to pass the interview stage!), so using Python has been a learning experience. Although it has been frustrating at times, help has always been available, and through a combination of initiative, trial and error, and advice, the measures have shaped up nicely. I developed a motion coherence task, and piloting it on my friends has been interesting – explaining what the task is for and the wider context requires a thorough knowledge of it, and I am genuinely passionate about it. I never thought I’d be excited about a spreadsheet.
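To give a flavour of what this looks like, below is a minimal sketch of a single motion coherence trial in PsychoPy; the parameter values (dot count, speed, coherence level) are placeholders for illustration rather than the settings used in our actual task:

```python
# Minimal sketch of one motion-coherence trial in PsychoPy.
# Parameter values are placeholders, not the settings used in the project.
from psychopy import visual, core, event

win = visual.Window(size=(800, 600), color="black", units="pix")

# Random-dot kinematogram: 'coherence' is the proportion of dots moving
# in the signal direction (0 = pure noise, 1 = fully coherent motion).
dots = visual.DotStim(
    win, nDots=100, coherence=0.5, dir=0,      # signal direction: rightward
    fieldShape="circle", fieldSize=400,
    dotSize=4, speed=2, dotLife=20,
    signalDots="same", noiseDots="direction",
)

clock = core.Clock()
while clock.getTime() < 1.0:                   # display the motion for 1 s
    dots.draw()
    win.flip()

keys = event.waitKeys(keyList=["left", "right"])  # direction judgement
print("Response:", keys[0])
win.close()
core.quit()
```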

During our summer internship we also had the opportunity to meet Dr Craig Hedge, whose recent paper inspired our current work. We got to hear about his research first-hand and to discuss our project and how it relates to his paper. It was interesting and insightful to talk about his work and how our test-retest reliability project came about.

Now we’ve finished the development stage of the project, and with all the tasks up and running, it’s time for data collection. I’m continuing to work on this project as my work-based learning placement for my Developing Self and Society (DSAS) module. Time slots are available on UWE’s participant pool for students to book onto, and we have all been running sessions for up to four participants at once. This involves briefing, setting up the experiments on the computers, giving instructions, addressing issues that arise, and ensuring that the conditions are the same for every session. It’s fun to discuss the study when debriefing the participants, raising awareness of what is being investigated and helping them understand why they did the tasks involved. The integration of my internship with one of my second-year modules shows how beneficial an opportunity like this can be. On its own, it is good experience, but linking it with my regular studies and incorporating that experience into university work has made it invaluable.

It’s been great working closely with Kait and Charlotte in addition to Austin, Triin, and Kieran. Chatting with staff as well as students in a different year to me has given me insight into the university and the course itself. I have learnt a lot already and will continue to do so. The project will also help me with my own research project and my degree in general. I’m excited to see what the rest of it brings.
