A need for science to ‘slow down’? Experiences from the British Neuroscience Association Festival of Neuroscience

By Alice Stephenson (PhD Student)

BNA 2019 Festival of Neuroscience, Dublin

In April, I was fortunate to attend the annual and prestigious British Neuroscience Association (BNA) Festival of Neuroscience. The conference provided a unique opportunity to engage with contemporary interdisciplinary neuroscience research from across the UK and internationally. Spread over four days, the event hosted workshops, symposia, keynote lectures and poster presentations. The programme covered 11 neuroscience themes; as an experimental psychologist, I was particularly interested in attention, motivation and behaviour; sensory and motor systems; and neuroendocrinology and autonomic systems.

Not only was this a chance to embrace cutting-edge interdisciplinary neuroscience research, it was also a chance to develop skills that will help me become a meticulous researcher. It was clear that an overall goal of the conference was to encourage high standards of scientific rigour by embracing the open science movement.

“Fast science… too much rubbish out there we have to sift through.”

Professor Uta Frith

The open science movement encompasses a range of practices that promote transparency and accessibility of knowledge, including data sharing and open access publishing. The Open Science Framework is one tool that enables users to create research projects and encourages sharing hypotheses, data, and publications. These practices encourage openness, integrity, and reproducibility in research, something particularly important in the field of psychology. 

An especially striking claim was made by Ioannidis (2005): “most published research findings are false.” Ioannidis argued that there is a methodological crisis in science, particularly apparent in psychology, but also in cognitive neuroscience, clinical medicine, and many other fields (Cooper, 2018). If an effect is real, any researcher should be able to obtain it using the same procedures with adequate statistical power. However, many scientific studies are difficult to replicate. Open science practices have been suggested as a way to enable accurate replications and facilitate the dissemination of scientific knowledge, improving scientific quality and integrity.
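To put “adequate statistical power” in concrete terms, here is a minimal sketch (in Python, using the statsmodels library) that works out how many participants per group a simple two-group comparison would need to detect a medium-sized effect with 80% power. The effect size, power and alpha values are illustrative assumptions, not figures taken from Ioannidis or Cooper.

```python
# Minimal sketch: how many participants per group are needed to detect a
# medium effect (Cohen's d = 0.5) with 80% power at alpha = .05?
# The numbers are illustrative assumptions, not values from the cited papers.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, power=0.8, alpha=0.05)
print(f"Participants needed per group: {n_per_group:.0f}")  # roughly 64
```

Run an underpowered version of the same study and a true effect can easily fail to replicate, which is part of what makes replication attempts so hard to interpret.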

The BNA has a clear positive stance on open science practices, and I was lucky enough to be a part of this. Professor Uta Frith, a world-renowned developmental neuropsychologist, gave a plenary lecture about the three R’s: reproducibility, replicability, and reliability. It was arguably one of the most important and influential lectures of the conference.

Professor Frith summed up the scientific crisis in two words, “Fast Science.” Essentially, science is progressing too fast, leading to lower quality research. Could this be due to an increase in people, labs, and journals? Speeded communication via social media? Pressure and career incentives for increased output? Sheer volume of pre-prints available to download? Professor Frith argued that there is “too much rubbish one has to sift through.”

A potential solution to this is a ‘Slow Science’ movement: the notion of “resisting quantity and choosing quality.” Professor Frith argued that the system needs to change. We often hear about the pitfalls of the peer review process, yet Professor Frith provided us with some novel ideas. She argued for a limited number of publications per year. This would encourage researchers to spend quality time on one piece of research, improving scientific rigour; excess work could be directed to other outlets. Only one grant at a time should be allowed. She also discussed the need for continuous training programmes.

A lack of statistical expertise in the research community?

Professor Frith argued that there is a clear lack of statistical knowledge in the research community. With increasing computational advancements, it is becoming easier and easier to plug data into a function and accept the outcome. Yet we must understand how these algorithms work so that we can spot errors and notice illogical results.

This is something that spoke to me. I love working with EEG data. Analysing time-series data allows us to capture cognitive processes during dynamic and fast-changing situations. However, working with such rich and temporally complex data is technically challenging. The EEG signal is very small at the surface of the scalp, and the signal-to-noise ratio is poor. Artefacts, both non-physiological (e.g. computer hum) and physiological (e.g. eye movements), contaminate the recording, meaning that the EEG picks up not only neural activity but also other electrical signals we are not interested in. We therefore apply mathematical algorithms to clean the data and improve the signal-to-noise ratio. Once the data are cleaned, we also apply algorithms to transform them from the time domain (in which they are recorded) to the frequency domain. The number of EEG analysis techniques has grown hugely, partly thanks to increased computational power, and there is now a whole host of computational approaches, including machine learning, that can be applied to EEG data.
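To make this slightly more concrete, here is a minimal sketch (using NumPy and SciPy on a simulated single-channel signal) of the kind of steps described above: band-pass filtering to suppress drift and broadband noise, followed by a transform from the time domain to the frequency domain with Welch’s method. The sampling rate, filter band and noise model are illustrative assumptions rather than my actual analysis pipeline.

```python
# Minimal sketch of the processing steps described above, on simulated data:
# band-pass filter a noisy single-channel "EEG" signal, then estimate its
# power spectrum with Welch's method. All parameters are illustrative.
import numpy as np
from scipy.signal import butter, filtfilt, welch

fs = 250                                    # sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)                # 10 seconds of data
alpha = np.sin(2 * np.pi * 10 * t)          # 10 Hz "alpha" oscillation
drift = 0.5 * np.sin(2 * np.pi * 0.3 * t)   # slow drift (artefact-like)
noise = 0.8 * np.random.randn(t.size)       # broadband noise
raw = alpha + drift + noise

# Band-pass filter (1-40 Hz) to suppress slow drift and high-frequency noise
b, a = butter(4, [1, 40], btype='bandpass', fs=fs)
clean = filtfilt(b, a, raw)

# Transform from the time domain to the frequency domain
freqs, psd = welch(clean, fs=fs, nperseg=fs * 2)
print(f"Peak frequency: {freqs[np.argmax(psd)]:.1f} Hz")  # should sit near 10 Hz
```

Even in this toy example, every choice (filter order, cut-offs, window length) changes the output, which is exactly why understanding what the algorithms do matters.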

Each time algorithms are applied to the EEG data, the EEG data change. How can an EEG researcher trust the output? How can an EEG researcher sensibly interpret the data, and make informed conclusions? Having an underlying understanding of what the mathematical algorithms are doing to the data is no doubt paramount. 

Professor Frith is right: there is a need for continuous training, as data analysis is moving at an exhaustingly fast pace.

Pre-registration posters – an opportunity to get feedback on your method and planned statistical analyses

I also managed to contribute to the open science movement during the conference. On the second-to-last day, I presented a poster on my research looking at the temporal neural dynamics of switching between a visual perceptual and visuomotor task. This was not an ordinary poster presentation; this was a pre-registration poster presentation. I presented planned work to be carried out, with a clear introduction, hypotheses and method. I also included a plan of the statistical analyses. There were no data, graphs, or conclusions.

The poster session was an excellent opportunity for feedback from the neuroscience community on my method and statistical analyses. This is arguably the most useful time for feedback – before the research is carried out. It was particularly beneficial for me, coming from a very small EEG community where seeking out particular expertise is vital. A post-doctoral researcher, who had investigated something similar during her PhD, provided me with honest and informative feedback on my experimental design. In addition, I uploaded my poster to the Open Science Framework, and the abstract was published in the open access journal Brain and Neuroscience Advances. I also received a Preregistered badge for my work. These badges act as an incentive to encourage researchers to engage with open science practices. Check out cos.io/badges for more information.

So, what next?

Practical tools and significant support are coming together to allow open science to blossom. It is now our responsibility to be part of this. I’ve created an Open Science Framework account and plan to start there, detailing my aims, methods and data to improve transparency in my research. I’m making the most of the last year of my PhD by attending data analysis workshops, and I would like to pre-register my research in the near future. How do I contribute to the slow science movement? I can start by slowing down (perhaps saying no to additional projects?!), improving my statistical knowledge, and embracing open science practices.

Not only was the conference an incredible insight into multidisciplinary neuroscience research (I did not realise you could put a mouse in an MRI scanner – anaesthetised, of course, as it would never keep its head still!), it also had an influential and motivating atmosphere. Thank you, British Neuroscience Association. Now, who else wants to join me in advocating open science, becoming a rigorous researcher, and improving scientific practices?!

References

Cooper, M. M. (2018). The replication crisis and chemistry education research. Journal of Chemical Education, 95, 1–2.

Ioannidis, J. P. (2005). Why most published research findings are false. PLoS Medicine, 2(8), e124.

Shining the light on implicit bias: Do we really know what we believe?

By Charlotte R. Pennington

In everyday life, you will be asked to report your attitudes and opinions towards a whole host of different things.  When buying a TV, you may be asked retrospectively to provide ratings of the product, or even the person who sold it to you. The reporting of attitudes has become so sought after by companies that specific websites have been developed providing people with an open forum to post their opinions and evaluations of accommodation, restaurants, and services, and even receive arbitrary points and badges for their reviews (e.g., Trip Advisor).  Given the plethora of surveys and questionnaires utilised on a daily basis, you may therefore think that measuring attitudes is relatively easy. Simply ask someone what they think and they will respond with an honest answer. However, psychology has shed light on the limitations posed by self-report tools, such as questionnaires and surveys, which are so readily used by companies and organisations alike.  

Gauging attitudes: A problem of measurement or construct?

Explicit attitudes refer to consciously accessible thoughts and feelings towards people, objects or concepts. They are introspectively accessible, meaning that you can reach inside your mind and report your feelings and thoughts. However, there are many issues when it comes to measuring people’s attitudes accurately. Reflect on the following example: a builder may measure the height of a window frame to fit double-glazed windows. Each time he or she does this, the window measurements remain the same. Windows can be measured, and the builder has the best tools to yield the correct measurements. Unfortunately, the same is not true when it comes to measuring attitudes; they are mental constructs and not tangible things. They are slippery and shape-shift depending on context. This presents numerous issues for researchers trying to measure them.

Studies have consistently shown how people’s attitudes can be altered by systematic factors, such as how the questions are framed and even the order in which they are presented. For example, a recent study demonstrated how the number of scale points in a questionnaire affects the extent to which gender stereotypes of brilliance are expressed. Specifically, female course instructors were more likely to receive a top rating on a 6-point scale relative to a 10-point scale, whereas this difference did not emerge for male instructors. The authors reason that this effect occurs because of the cultural meaning assigned to the number ‘10’ – perfection. As such, a top score on a 6-point scale does not carry such strong performance expectations. To me, this is a landmark study demonstrating how the features of tools that are frequently used to judge merit can powerfully affect people’s responses. Who knew that something which appears meaningless could shape our answers in a way that tells a completely different story?

Another issue plaguing questionnaires is that psychologists – or whoever uses them – need to trust that the questionnaire taps into exactly what we want to measure. When asking people about socially sensitive topics, such as prejudice or discriminatory behaviour, this is rarely the case. Consider how you would answer the following questions when asked by a researcher, someone you barely know: “Do you treat people from other races the same as you treat people from your own race? Do you willingly give to charity or to those who need it the most?” Think hypothetically about your answers for a minute. Now, reflect on your previous behaviour and try to gauge whether the answers you gave provide an accurate representation of how you really act. What you might uncover about yourself here is called the ‘willing and able’ problem; people may not be willing to report their honest attitudes, and, when put on the spot, may not be able to accurately reflect on and report what they truly feel. Answers to questions are usually influenced by self-presentational motives – that is, people’s desire to look good in someone else’s eyes.

A more intriguing possibility is that we might not know what we actually believe. To a lay audience with no psychological training, this may sound surprising. How can we hold attitudes that we are unaware of? Psychology holds the answer. The past three decades of psychological research have revealed the frailties of introspection (our access to the inner workings of our own minds), and how little control we possess over our own thoughts. This has led researchers to coin the term ‘implicit attitudes’: introspectively unidentified traces of past experience that mediate favourable or unfavourable feeling towards social objects. The general argument is that individuals harbour attitudes that they are not aware of, and these can manifest as judgements or actions.

How do we measure attitudes that people aren’t aware of?

The development of implicit measures has afforded remarkable insight into the human mind and opened up a new research field termed implicit social cognition. This may leave you wondering: how do we measure such attitudes, and how do they develop in the first place?

Whereas explicit attitudes are measured by asking people directly about their thoughts and feelings (e.g., through questionnaires), implicit attitudes are assessed indirectly through tasks that typically measure response times towards various stimuli and compare systematic variations in people’s performance. One of the most well-known tasks of this kind is the Implicit Association Test (IAT), which tests how quick (or slow) people are at pairing different social categories with various attributes. The race IAT, for example, requires test-takers to categorise pictures of White and Black faces with positive and negative terms as quickly as possible. The underlying theory is that people will be quicker to pair concepts with attributes that are strongly associated in memory, compared to those weakly associated. In order to understand this better, think about learning a new language for the first time; you will always be quicker to think about words from your own language compared to those from a newly learned language because of the automaticity of your native tongue. Going back to the race IAT, research has consistently shown that White people are quicker to associate pictures of White faces with positive terms and Black faces with negative terms. This is referred to as implicit bias.
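For readers curious about how those response-time differences become a “bias score”, here is a simplified sketch in the spirit of the commonly used D-score: the difference in mean response time between the two pairing blocks, scaled by the pooled standard deviation. The numbers are made up, and several steps of the full published scoring algorithm are deliberately omitted.

```python
# Simplified sketch of an IAT-style bias score: the difference in mean
# response time between the two pairing blocks, scaled by the pooled
# standard deviation (in the spirit of the D-score; the data are invented
# and several steps of the published algorithm are omitted).
import numpy as np

# Response times in milliseconds for one hypothetical participant
compatible = np.array([620, 580, 640, 610, 595, 630])    # e.g. White + positive pairings
incompatible = np.array([780, 750, 810, 760, 790, 820])  # e.g. Black + positive pairings

pooled_sd = np.std(np.concatenate([compatible, incompatible]), ddof=1)
d_score = (incompatible.mean() - compatible.mean()) / pooled_sd

print(f"D-score: {d_score:.2f}")  # positive values mean faster 'compatible' pairings
```

The key idea is simply that the score reflects a relative difference in speed between the two pairings, not a direct read-out of what someone consciously believes.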

Social psychologists theorise that implicit bias, such as that demonstrated by White people taking the race IAT, is learned through experience. This occurs either directly, through encounters with a particular social group, or indirectly, through exposure to information about this social group. In Western cultures, White people are inundated with cultural messages and stereotypes that portray Black people as uneducated, relatively poor and more likely to be in trouble with the law. Consequently, implicit bias may form through exposure to this cultural milieu. Do you know what your own IAT result shows? Anybody can take these tests through the Project Implicit website. Your result may surprise you, but it’s important to remember that it might not reflect your personal beliefs but rather learned associations imbued through exposure to your cultural or social environment. Research has revealed remarkable findings through the use of the IAT. For example, a recent longitudinal study shows that implicit attitudes towards race, skin tone and sexual orientation have trended towards neutrality over the last 12 years (i.e., people’s implicit bias towards these social categories seems to be decreasing). However, implicit attitudes towards age and disability have remained stable, and bias relating to body weight has increased. Moreover, implicit attitudes appear to hold predictive validity; studies have shown that people’s preference for White people on the race IAT predicts intention to vote for a White relative to a Black presidential candidate. Now that’s a cool finding!

However, implicit measures have also received their fair share of criticism. Research indicates a weak relationship between explicit and implicit attitudes, suggesting that they may reflect separate attitude representations. An alternative theory, however, is that explicit and implicit measures allow people to edit their responses to varying degrees. In 2016, as a PhD student, I wrote my first commentary reflecting on what exactly implicit measures assess. In addition, although the IAT has shown some predictive validity (e.g., voting behaviour), other research indicates that for more socially sensitive attitudes, the IAT does not predict subsequent discriminatory behaviour. Although the IAT was heralded as providing new insights into human cognition and behaviour, some researchers believe the test has been oversold. Nevertheless, I argue that implicit attitudes may fail to predict real-world behaviour because of the same issue that plagues self-report measures – social desirability. That is, people may think negatively about a certain out-group member, but that doesn’t necessarily mean they will act upon this. The same may be true of the weak correlations between explicit and implicit attitude measures; people distort their attitudes on self-report questionnaires, whereas implicit measures aren’t susceptible to these self-presentational motives. Should we expect correlations between these two measures when one is tapping into controllable beliefs and the other is uncovering introspectively unidentified traces of past experience?

In order to answer these questions, I was awarded funding through the Vice Chancellor’s Early Career Research Awards (VC ECR Award) at UWE Bristol to investigate other implicit socio-cognitive mechanisms that may predict implicit bias. The blue-sky thinking behind this research is to develop new tools that can capture implicit behavioural manifestations of bias. At this stage, we are too early in our research endeavour to reveal any findings; however, other influential and impactful avenues have already stemmed from this research.

At the same time as I have been conducting my research, Ellie Bliss (Adult Nurse Lecturer) and Alisha Airey (BME Project Officer) have been running staff workshops at UWE Bristol, reflecting upon how implicit (unconscious) bias can play out in the higher education classroom. I am now involved in supporting these workshops, providing research-led guidance on how we access implicit bias and answering the many questions that staff have about this rather ambiguous construct. One interesting discussion centres on whether implicit biases can still be viewed as unconscious when we increasingly acknowledge them through teaching and training. The majority of attendees come away from the workshop with new reflections on how teaching practice is orientated towards Western culture, and with classroom strategies they can implement to prevent implicit bias playing out. However, a handful of attendees are surprised by, and doubtful of, the concept of implicit bias and the tools that purportedly measure it. They have difficulty accepting that they may hold certain biases. But the truth is, we all do.

Where is implicit social cognition headed?

In this blog post I hope I have demonstrated that we are shining the light on what implicit bias really is and on the nature of our unconscious attitudes. Such research has paved the way for training workshops which teach people to acknowledge their deep-rooted attitudes and reflect upon how these may shape their thinking and behaviour towards other people. But what’s next for this research arena? There are still lots of unanswered questions and controversies surrounding implicit bias, which makes it an exciting topic to study. Do implicit measures really provide a window into the unconscious mind? Is implicit bias stable when measured at different time points? Can implicit bias be changed, and if so, are such changes short- or long-term? Are attitudes towards some social groups easier to change than others? Can we, as a field, develop other (implicit) behavioural measures that predict implicit attitudes more accurately than self-reports? Such investigations represent the future of implicit social cognition and I, for one, am extremely excited to see what’s to come.

Research Experience as an Undergrad: My summer internship and placement

By Josh Lee

I’m a second-year psychology student at UWE, and throughout my first year I found myself developing a keen interest in psychological research. The more I engaged with my degree, the more interested I became, and I started actively seeking opportunities to gain research experience towards the end of first year. I was interested in learning more about the research process, and I also knew how valuable experience can be for postgraduate applications.

In May of this year I went on an animal behaviour research trip to the island of Lundy. This was shortly after applying for my first research role, a paid summer internship with Drs Kait Clark and Charlotte Pennington. I learnt a lot on Lundy and made friends with the other student researchers. Towards the end we realised we were on the same wavelength…three fellow Lundy attendees and I had been invited to interview for the same position. The interviews were scheduled for the week after our return from Lundy, and we were now friends competing against each other. All we could do was wish each other luck in the interview and hope for the best.

The interview was competitive, and we were all given a short programming task to attempt in advance. Maybe there was something in the sea air, but when an email came through from Kait offering the job, all four of our names were on it. Taking the extracurricular opportunity to learn about and conduct psychological research on Lundy perhaps gave us an edge in the interview, and we now had the chance to contribute to a legitimate paper together.

The main aim of the project was to develop a set of visual and social cognition tasks for the purposes of establishing test-retest reliability, building on a recent study by Hedge, Powell, & Sumner (2018). Our first task was to complete a comprehensive review of visual cognition literature. Although I had experience of examining research papers to get references for essays, this was much more in depth and specific. The process of comparing the different papers took a while to get used to, but it has been eye-opening to review papers with a view towards designing our own study rather than evaluating a proposition for an essay. It highlights different issues within and between papers that I would not have considered otherwise, and I feel like it has helped me develop a more complete approach to evaluating research papers in general. We were given lots of freedom to conduct the review and research – this was hugely beneficial as it left a lot of potential for creative ideas and individual contribution.

We chose measures for which the test-retest reliability had not already been established so our research could have the most impact. Each of us then chose one measure and worked through writing the Python code to implement parameters in alignment with previous studies. We are using PsychoPy, open-source software, to program our measures. I have limited coding knowledge (but enough to pass the interview stage!), so using Python has been a learning experience. Although frustrating at times, help has always been available, and through a combination of initiative, trial and error, and advice, the measures have shaped up nicely. I developed a motion coherence task, and piloting it on my friends has been interesting – explaining what the task is for and the wider context requires a thorough knowledge of it, and I am genuinely passionate about it. I never thought I’d be excited about a spreadsheet.
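For a flavour of what this looks like in PsychoPy, below is a heavily simplified sketch of a single random-dot motion coherence trial. The dot parameters, timing and response keys are illustrative assumptions, not the settings from our actual task.

```python
from psychopy import visual, core, event

# Heavily simplified sketch of a single random-dot motion coherence trial.
# All parameter values (dot counts, speed, timing, keys) are illustrative
# assumptions rather than the settings used in our study.
win = visual.Window(size=(800, 600), color='black', units='pix', fullscr=False)

dots = visual.DotStim(
    win,
    nDots=100,            # number of dots on screen at any time
    coherence=0.3,        # proportion of dots moving in the signal direction
    dir=0,                # signal direction in degrees (0 = rightward)
    speed=2,              # pixels moved per frame (illustrative)
    fieldSize=400,        # diameter of the circular dot field, in pixels
    fieldShape='circle',
    dotLife=5,            # frames before a dot is redrawn at a new position
    dotSize=4,            # dot diameter in pixels
)

clock = core.Clock()
response = None
while clock.getTime() < 2.0 and response is None:    # 2-second stimulus window
    dots.draw()
    win.flip()
    keys = event.getKeys(keyList=['left', 'right'])  # left/right motion judgement
    if keys:
        response = keys[0]

print('Response:', response, 'RT (s):', round(clock.getTime(), 3))
win.close()
core.quit()
```

Varying the coherence value across trials is what makes the task sensitive to individual differences in motion perception, which is exactly the kind of measure whose test-retest reliability we want to establish.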

During our summer internship we also had an opportunity to meet with Dr Craig Hedge, whose recent paper has inspired our current work. We got to hear about his research first hand and discuss our project and how it related to his paper. It was interesting and insightful to talk about his work and how our test-retest reliability project came about.

Now we’ve finished the development stage of the project, and with all the tasks up and running, it’s time for data collection. I’m continuing to work on this project as my work-based learning placement for my Developing Self and Society (DSAS) module. Time slots are available on UWE’s participant pool for students to book in, and so we have all been running sessions for up to four participants at once. This involves briefing, setting up the experiments on the computers, giving instructions, addressing issues that arise, and ensuring that the conditions are the same for every session. It’s fun to discuss the study when debriefing the participants, to raise awareness of what is being investigated and help them understand why they did the tasks involved. The integration of my internship with one of my second-year modules shows how beneficial an opportunity like this can be. In isolation, it is good experience on its own, but linking it with my regular studies and incorporating my experience into university work has made it invaluable.

It’s been great working closely with Kait and Charlotte in addition to Austin, Triin, and Kieran. Chatting with staff as well as students in a different year to me has given me insight into the university and the course itself. I have learnt a lot already and will continue to do so. The project will also help me with my own research project and my degree in general. I’m excited to see what the rest of it brings.