PSRG Videos: About us

Posted on

Thanks to Matt at Housecat Productions, we have videos about PSRG and each of our themes (footage recorded pre-pandemic). Check out each of the videos and the work that we do. Feel free to get in touch in the comments below or by email.

An overview of our whole research group:

About our Ageing Well theme:

About our Applied Cognition and Neuroscience theme:

About our Optimising Performance and Engagement theme:

About our Promoting Psychological Health theme:

Thanks for watching. And thanks Matt.

Dry January: A reflective account of a participating alcohol researcher

Posted on

By Charlotte Pennington

This blog post provides a reflective account of my own experiences of participating in Dry January – an alcohol abstinence challenge initiated by Alcohol Change UK that encourages people to reduce their levels of alcohol consumption. As an experimental social psychologist, my research investigates the influences of contextual and social factors on alcohol consumption and related behaviours. Working alongside collaborators at Edge Hill and Aston University, our work to date has suggested that frequent alcohol consumption is associated with our motivations for and expected outcomes of drinking, as well as heightened attention towards alcohol-related cues. It was never my intention to blog about my experiences of Dry January; it was simply a personal endeavour that I aimed to complete. However, as the days went on, I noticed a lot of parallels with my own research, and I became more cognisant of wider issues embedded in the UK's drinking culture. I hope that my reflections below will speak to other people's experiences of taking part, as well as highlighting cultural issues in our relationship with alcohol and how we might best support those who wish to reduce their consumption.

What is Dry January?

“Dry January” is a one-month challenge in which people give up alcohol for the month of January. Typically completed with the aid of a phone app, the overarching goal is to refrain from drinking alcoholic beverages for 31 days, with badges awarded for day streaks, drinking in moderation, reducing alcohol intake, and total dry days. At the end of each day, the participant completes an online calendar, recording whether they ‘stayed dry’, ‘drank’, or ‘drank as planned’. Given that the challenge only began in 2013, there has been little research on its benefits and potential drawbacks. However, the research that does exist suggests that Dry January can have a range of positive health-related and psychological benefits, ranging from improved sleep to weight loss and enhanced self-control (see de Visser et al., 2016). Further, whilst some have proposed that Dry January may lead to a ‘rebound effect’ (i.e., binge February), the majority of available research suggests that a period of abstinence can encourage longer-term reductions in drinking (Bray et al., 2010; de Visser et al., 2016). This is because, once a person has made a commitment to engage in behaviour change, they are more likely to maintain these changes in the future (de Visser et al., 2017).

Reflections on Dry January

Learning about the reasons I drink

According to the Alcohol Use Disorders Identification Test (AUDIT), I am categorised as a ‘low risk drinker’, with an overall score of 7 out of 40 (a score of 8 or above indicates higher risk). Personally, I would classify myself as an ‘occasional social drinker’ who rarely binge drinks but instead has ‘one or two every now and again’. Over the Christmas period, I found myself overindulging in unhealthy foods and drinking more alcohol, and I therefore decided to take part in Dry January to regulate my behaviour and explore the benefits. Early in the New Year, I found it relatively easy to abstain from alcohol, simply because I felt I’d had my fill over the festive period (my AUDIT score would have been temporarily higher!). As the days turned into weeks, however, I found myself thinking about reaching for an alcoholic drink a lot more. I then stopped to think about when and why I wanted an alcoholic drink, and realised that I tend to drink to alleviate stress or to relax in social situations. In the alcohol literature, these reasons are known as ‘drinking motives’ (see Kuntsche et al., 2006): the valued outcomes that people associate with drinking alcohol. The examples I give reflect ‘coping’ motives (i.e., to deal with negative emotions) and ‘social’ motives (i.e., to enhance interactions), but there are also enhancement motives (i.e., to heighten mood) and conformity motives (i.e., to avoid social pressure or a need to fit in). Interestingly, research has shown that these drinking motives are a unique predictor of alcohol consumption and related behaviours (Kuntsche et al., 2014; Merrill & Read, 2010).

Another insight I had was that my tendency to drink alcohol in low quantities but rather frequently may lead me to underestimate my true alcohol consumption. The limitations of self-report measures of alcohol consumption could be discussed at length, but the main point here is that such drinking may be easily forgotten and unreliably reported. In addition, it may stop a person from viewing their consumption as a ‘problem’ and so be an obstacle to behaviour change. Taking part in Dry January made me realise how I might reach for a drink without really counting it or thinking that I need to cut down.

Alcohol cues are everywhere

Seeing an advertisement for alcohol in a drinking establishment (e.g., pubs and bars) comes as no surprise, particularly in UK culture. However, abstaining from alcohol made me evaluate the quantity of alcohol advertisements we see in our everyday lives and the appropriateness of their placement. During a conference visit in January, I stayed at a well-known hotel chain and found promotional offers for alcohol in the reception lobby, as well as in leaflets in my room. Throughout Dry January I became more and more aware of the number of alcohol adverts aired on television and of the decorative signs in shops that glamorise drinking. I found this heightened awareness very interesting, and it led me to think more about something we, as researchers, call ‘cue reactivity’. Research shows that alcohol-related cues capture and hold the attention of those who drink alcohol and appear to increase subjective cravings for alcohol (see Field et al., 2009). Moreover, such attentional processing seems to be heightened in heavy drinkers and even abstinent alcoholics (Field et al., 2013). These advertisements challenged my self-control by heightening my craving for alcohol, which led me to think about the implications that such advertising has for those with problematic alcohol use or alcohol-related disorders. In the UK, alcohol adverts are regulated so that they do not condone or encourage irresponsible or immoderate drinking. However, unlike cigarette advertising, which is banned on television and heavily regulated in supermarkets (e.g., cigarettes hidden behind a screen), the regulation of alcohol advertising appears much more relaxed. Compare a packet of cigarettes with a bottle of alcohol, for example: cigarette packaging includes large health warnings covering 65% of the front and back, a brand name in standard font, and drab colours. The packaging of alcohol, on the other hand, carries limited (or no) health warnings, bright colours and attractive images, and the beverages themselves come in many different colours and flavours. It therefore seems that more work needs to be done to regulate the branding and advertisement of alcohol to make these products less attractive and ‘wanted’. Health warnings and nutritional labels would raise awareness of the health implications of consumption and help people to make informed decisions about drinking.

Challenging conversations around drinking

There were a few occasions during Dry January where my choice not to drink during social occasions was questioned by others, and people tried to influence this decision. Statements such as “just have one and then don’t drink tomorrow”, or “have a beer now and then a glass of water”, were voiced, perhaps with the aim of testing my self-control. This made me think about the wider conversations we have about drinking and the societal norms associated with alcohol; we wouldn’t ask someone why they are drinking, so why is it okay to ask someone why they’re not? I found that the most effective way of dealing with this was to have open conversations about the benefits of Dry January and to engage in discussion with people about wider problems regarding the UK’s binge drinking culture (see Pincock, 2003). It was interesting to outline the many different reasons why people may choose to moderate their alcohol consumption, or not to drink, spanning choice (e.g., not feeling compelled to drink in social circles), finance (e.g., deciding to drive to a venue rather than drinking, to save money), and health (weight loss, better sleep, concentration, and addiction). Relatedly, another challenging experience of taking part in Dry January concerned situations in which others expected me to pay for a round of drinks, or split the bill, when they had been drinking alcohol and I hadn’t. In some establishments, the cost of an alcoholic beverage is up to four times the cost of a soft drink, so the bill might be quite surprising! Again, this may have been overcome with a simple conversation, but the stronger message here is that we need to be more aware of how we treat people who are not drinking and think more carefully about how we can support them.

So how did I get on?

Out of the 31 days in January, I managed 26 days dry; of the remaining 5 days, 1 was a ‘drank as planned’ day, whilst on the other 4 I drank in moderation. Using the phone app was extremely helpful for monitoring and managing my behaviour; over the course of my alcohol-free days, my best streak was 14 days, and I saved substantial money and calories. Some people have said “so you didn’t complete Dry Jan?!”, and again I think this rhetoric is problematic. Although I didn’t complete 31 whole days, the challenge allowed me to regulate my consumption and cut down significantly. It also helped me to think about helpful strategies to moderate my drinking, such as adding an extra “dry” day to my calendar after an unplanned drinking session, and not giving up on the challenge if I had drunk. The most positive experience of Dry January for me, however, has been reflecting on the conversations we have about alcohol, and thinking about ways in which we can support people who choose to reduce their intake or abstain altogether. It has also opened my eyes to cultural and societal factors that influence alcohol consumption (e.g., alcohol advertising), which may act as an obstacle to reducing intake. For me, Dry January has been a fundamentally interesting reflective experience, both as a participant and as an alcohol researcher. It has raised my awareness of the benefits and barriers that people face when making the choice to cut down or abstain from drinking, and of how we might best support them.


Bray, R. M., Brown, J. M., Pemberton, M. R., Williams, J., Jones, S. B., & Vandermaas-Peeler, R. (2010). Alcohol use after forced abstinence in basic training among United States Navy and Air Force trainees. Journal of Studies on Alcohol and Drugs, 71, 15-22.

de Visser, R. O., Robinson, E., & Bond, R. (2016). Voluntary temporary abstinence from alcohol during “Dry January” and subsequent alcohol use. Health Psychology, 35, 281–289. 

de Visser, R. O., Robinson, E., Smith, T., Cass, G., & Walmsley, M. (2017). The growth of ‘Dry January’: promoting participation and the benefits of participation. European Journal of Public Health, 27, 929-931.

Field, M., Mogg, K., Mann, B., Bennett, G. A., & Bradley, B. P. (2013). Attentional biases in abstinent alcoholics and their association with craving. Psychology of Addictive Behaviors, 27, 71–80. 

Field, M., Munafò, M. R., & Franken, I. H. (2009). A meta-analytic investigation of the relationship between attentional bias and subjective craving in substance abuse. Psychological Bulletin, 135, 589.

Kuntsche, E., Knibbe, R., Gmel, G., & Engels, R. (2006). Who drinks and why? A review of socio-demographic, personality, and contextual issues behind the drinking motives in young people. Addictive Behaviors, 31, 1844-1857.

Kuntsche, E., Gabhainn, S. N., Roberts, C., Windlin, B., Vieno, A., Bendtsen, P., … & Aasvee, K. (2014). Drinking motives and links to alcohol use in 13 European countries. Journal of Studies on Alcohol and Drugs, 75, 428-437.

Merrill, J. E., & Read, J. P. (2010). Motivational pathways to unique types of alcohol consequences. Psychology of Addictive Behaviors, 24, 705.

Pincock, S. (2003). Binge drinking on rise in UK and elsewhere. The Lancet, 362, 1126.

A need for science to ‘slow down’? Experiences from the British Neuroscience Association Festival of Neuroscience

Posted on

By Alice Stephenson (PhD Student)

BNA 2019 Festival of Neuroscience, Dublin

In April, I was fortunate to attend the annual and prestigious British Neuroscience Association (BNA) Festival of Neuroscience. The conference provided a unique opportunity to engage with contemporary interdisciplinary neuroscience research from across the UK and internationally. Spread over four days, the event hosted workshops, symposia, keynote lectures and poster presentations. The programme covered 11 neuroscience themes; as an experimental psychologist, I was particularly interested in those relating to attention, motivation and behaviour; sensory and motor systems; and neuroendocrinology and autonomic systems.

Not only was this a chance to embrace cutting-edge interdisciplinary neuroscience research, this was a chance to develop skills enabling me to become a meticulous researcher. It was clear that an overall goal of the conference was to encourage high standards of scientific rigour by embracing the open science movement.

“Fast science… too much rubbish out there we have to sift through.”

Professor Uta Frith

The open science movement encompasses a range of practices that promote transparency and accessibility of knowledge, including data sharing and open access publishing. The Open Science Framework is one tool that enables users to create research projects and encourages sharing hypotheses, data, and publications. These practices encourage openness, integrity, and reproducibility in research, something particularly important in the field of psychology. 

An especially striking claim was made by Ioannidis (2005): “most published research findings are false.” Ioannidis argued that there is a methodological crisis in science, particularly apparent in psychology, but also in cognitive neuroscience, clinical medicine, and many other fields (Cooper, 2018). If an effect is real, any researcher should be able to obtain it using the same procedures with adequate statistical power. However, many scientific studies are difficult to replicate. Open science practices have been suggested to enable accurate replications and facilitate the dissemination of scientific knowledge, improving scientific quality and integrity.

The BNA has a clear positive stance on open science practices, and I was lucky enough to be a part of this. Professor Uta Frith, a world-renowned developmental neuropsychologist, gave a plenary lecture about the three R’s: reproducibility, replicability, and reliability, which was arguably one of the most important and influential lectures over the course of the conference. 

Professor Frith summed up the scientific crisis in two words, “Fast Science.” Essentially, science is progressing too fast, leading to lower quality research. Could this be due to an increase in people, labs, and journals? Speeded communication via social media? Pressure and career incentives for increased output? Sheer volume of pre-prints available to download? Professor Frith argued that there is “too much rubbish one has to sift through.”

A potential solution to this is a ‘Slow Science’ movement: the notion of “resisting quantity and choosing quality.” Professor Frith argued the need for the system to change. Often we hear about the pitfalls of the peer review process, yet Professor Frith offered some novel ideas. She argued for a limited number of publications per researcher per year; this would encourage researchers to spend quality time on one piece of research, improving scientific rigour, and excess work would be appropriate for other outlets. Only one grant at a time should be allowed. She also discussed the need for continuous training programmes.

A lack of statistical expertise in the research community?

Professor Frith argued that there is a clear lack of statistical knowledge in the research community. With increasing computational advancements, it is becoming easier and easier to plug data into a function and accept the outcome. Yet we must understand how these algorithms work so that we can spot errors and notice illogical results.

This is something that spoke to me. I love working with EEG data. Analysing time-series data allows us to capture cognitive processes during dynamic and fast-changing situations. However, working with such rich and temporally complex data is technically challenging. The EEG signal is very small at the surface of the scalp, and the signal-to-noise ratio is poor. Artefacts, both non-physiological (e.g. computer hum) and physiological (e.g. eye movements), contaminate the recording, meaning that the EEG picks up not only neural activity but also other electrical signals we are not interested in. We therefore apply mathematical algorithms to clean the data and improve the signal-to-noise ratio. Once the data are cleaned, we also apply algorithms to transform them from the time domain (in which they are recorded) to the frequency domain. The number of EEG analysis techniques has risen hugely, partly thanks to computational power, and there is now a whole host of computational techniques, including machine learning, that can be applied to EEG data.
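The time-to-frequency transformation described above can be sketched in a few lines of Python. This is a minimal illustration using NumPy, not the author's actual analysis pipeline; the sampling rate and the simulated 10 Hz "alpha" oscillation are assumptions chosen for the example:

```python
import numpy as np

fs = 250                      # sampling rate in Hz (assumed)
t = np.arange(0, 2, 1 / fs)   # 2 seconds of samples

# Simulated EEG-like signal: a 10 Hz oscillation buried in random noise
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

# Fourier transform: time domain -> frequency domain
spectrum = np.abs(np.fft.rfft(signal)) / t.size
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

# The dominant frequency bin recovers the 10 Hz oscillation
peak = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC component
print(f"Dominant frequency: {peak:.1f} Hz")
```

Even in this toy case the oscillation is invisible in the raw time series but obvious in the spectrum, which is why understanding what the transform does to the data matters when interpreting the output.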

Each time algorithms are applied to the EEG data, the EEG data change. How can an EEG researcher trust the output? How can an EEG researcher sensibly interpret the data, and make informed conclusions? Having an underlying understanding of what the mathematical algorithms are doing to the data is no doubt paramount. 

Professor Frith is right, there is a need for continuous training as data analysis is moving at an exhaustingly fast pace. 

Pre-registration posters – an opportunity to get feedback on your method and planned statistical analyses

I also managed to contribute to the open science movement during the conference. On the second-to-last day, I presented a poster on my research looking at the temporal neural dynamics of switching between a visual perceptual and visuomotor task. This was not an ordinary poster presentation; this was a pre-registration poster presentation. I presented planned work to be carried out, with a clear introduction, hypotheses and method. I also included a plan of the statistical analyses. There were no data, graphs, or conclusions.

The poster session was an excellent opportunity for feedback from the neuroscience community on my method and statistical analyses. This is arguably the most useful time for feedback – before the research is carried out. It was particularly beneficial for me, coming from a very small EEG community in which seeking outside expertise is vital. A post-doctoral researcher, who had investigated something similar during her PhD, provided me with honest and informative feedback on my experimental design. In addition, I uploaded my poster to the Open Science Framework, and the abstract was published in the open access journal Brain and Neuroscience Advances. I also received a preregistered badge for my work; these badges act as an incentive to encourage researchers to engage with open science practices.

So, what next?

Practical tools and significant support are coming together to allow open science to blossom. It is now our responsibility to be part of this. I’ve created an Open Science Framework account and plan to start there, detailing my aims, methods and data, to improve transparency in research. I’m making the most of my last year of my PhD to attend data analysis workshops. I would like to pre-register my research in the near future. How do I contribute to the slow science movement? I can start by slowing down (perhaps saying no to additional projects?!), improving my statistical knowledge, and embracing open science practices.   

Not only was the conference an incredible insight into multidisciplinary neuroscience research (I did not realise you could put a mouse in an MRI scanner, anaesthetised of course, as it would never keep its head still!), it was an influential and motivating atmosphere. Thank you, British Neuroscience Association. Now, who else wants to join me in advocating open science, becoming a rigorous researcher, and improving scientific practices?!


Cooper, M. M. (2018). The Replication Crisis and Chemistry Education Research. Journal of Chemical Education, 95, 1– 2.

Ioannidis, J. P. (2005). Why most published research findings are false. PLoS Medicine, 2(8), e124.

Shining the light on implicit bias: Do we really know what we believe?

Posted on

By Charlotte R. Pennington

In everyday life, you will be asked to report your attitudes and opinions towards a whole host of different things.  When buying a TV, you may be asked retrospectively to provide ratings of the product, or even the person who sold it to you. The reporting of attitudes has become so sought after by companies that specific websites have been developed providing people with an open forum to post their opinions and evaluations of accommodation, restaurants, and services, and even receive arbitrary points and badges for their reviews (e.g., Trip Advisor).  Given the plethora of surveys and questionnaires utilised on a daily basis, you may therefore think that measuring attitudes is relatively easy. Simply ask someone what they think and they will respond with an honest answer. However, psychology has shed light on the limitations posed by self-report tools, such as questionnaires and surveys, which are so readily used by companies and organisations alike.  

Gauging attitudes: A problem of measurement or construct?

Explicit attitudes refer to consciously accessible thoughts and feelings towards people, objects or concepts. They are introspectively accessible, meaning that you can reach inside your mind and report your feelings and thoughts. However, there are many issues when it comes to measuring people’s attitudes accurately. Reflect on the following example; a builder may measure the height of a window frame to fit double-glazed windows. Each time he/she does this, the window measurements remain the same. Windows can be measured, and the builder has the best tools to yield the correct measurements. Unfortunately, the same is not true when it comes to measuring attitudes; they are mental constructs and not tangible things. They are slippery and shape-shift depending on context. This presents numerous issues for researchers trying to measure them.

Studies have shown consistently how people’s attitudes can be altered by systematic factors, such as how questions are framed and even the order in which they are presented. For example, a recent study demonstrates how the number of scale points in a questionnaire affects the extent to which gender stereotypes of brilliance are expressed. Specifically, female course instructors were more likely to receive a top rating on a 6-point scale relative to a 10-point scale, whereas this difference did not emerge for male instructors. The authors reason that this effect occurs because of the cultural meaning assigned to the number ‘10’: perfection. As such, a top score on a 6-point scale does not carry such strong performance expectations. To me, this is a landmark study demonstrating how the features of tools that are frequently used to judge merit can powerfully affect people’s responses. Who knew that something which appears meaningless can shape our answers in a way that tells a completely different story?

Another issue plaguing questionnaires is that psychologists – or whoever uses them – need to trust that the questionnaire taps into exactly what we want to measure. When asking people about socially sensitive topics, such as prejudice or discriminatory behaviour, this is rarely the case. Consider how you would answer the following questions when asked by a researcher, someone you barely know: “Do you treat people from other races the same as you treat people from your own race? Do you willingly give to charity or to those who need it the most?” Think hypothetically about your answers for a minute. Now reflect on your previous behaviour and try to gauge whether the answers you gave provide an accurate representation of how you really act. What you might uncover about yourself here is called the ‘willing and able’ problem: people may not be willing to report their honest attitudes and, when put on the spot, may not be able to accurately reflect on and report what they truly feel. Answers to questions are usually influenced by self-presentational motives – that is, people’s desire to look good in someone else’s eyes.

A more interesting possibility is that we might not know what we actually believe. To a lay audience with no psychological training, this may sound surprising: how can we hold attitudes that we are unaware of? Psychology holds the answer. The past three decades of psychological research have revealed the frailties of introspection (examining the inner workings of our own minds), and how little control we possess over our own thoughts. This has led researchers to coin the term ‘implicit attitudes’: introspectively unidentified traces of past experience that mediate favourable or unfavourable feeling towards social objects. The general argument is that individuals harbour attitudes that they are not aware of, and these can manifest as judgements or actions.

How do we measure attitudes that people aren’t aware of?

The development of implicit measures has afforded remarkable insight into the human mind and opened up a new research field termed implicit social cognition. This may leave you wondering: how do we measure such attitudes, and how do they develop in the first place?

Whereas explicit attitudes are measured by asking people directly about their thoughts and feelings (e.g., through questionnaires), implicit attitudes are assessed indirectly through tasks that typically measure response times towards various stimuli and compare systematic variations in people’s performance. One of the most well-known tasks of this kind is the Implicit Association Test (IAT), which tests how quick (or slow) people are at pairing different social categories with various attributes. The race IAT, for example, requires test-takers to categorise pictures of White and Black faces with positive and negative terms as quickly as possible. The underlying theory is that people will be quicker to pair concepts with attributes that are strongly associated in memory, compared to those weakly associated. In order to understand this better, think about learning a new language for the first time; you will always be quicker to think about words from your own language compared to those from a newly learned language because of the automaticity of your native tongue. Going back to the race IAT, research has consistently shown that White people are quicker to associate pictures of White faces with positive terms and Black faces with negative terms. This is referred to as implicit bias.
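The logic of comparing response times across pairing blocks can be illustrated with a short sketch. The reaction times below are hypothetical, and the scoring is a deliberate simplification of published IAT scoring algorithms (e.g., the improved D-score of Greenwald and colleagues includes trial-level filtering and error penalties not shown here):

```python
# Hypothetical reaction times (in ms) from two IAT-style blocks:
# "congruent" pairings (strongly associated in memory) are answered
# faster than "incongruent" pairings (weakly associated).
congruent_rts = [612, 587, 655, 601, 640]
incongruent_rts = [801, 768, 829, 790, 812]

def mean(xs):
    return sum(xs) / len(xs)

def pooled_sd(a, b):
    # Sample standard deviation of all trials pooled together
    combined = a + b
    m = mean(combined)
    return (sum((x - m) ** 2 for x in combined) / (len(combined) - 1)) ** 0.5

# D-score: latency difference scaled by pooled variability, so scores
# are comparable across people who respond faster or slower overall
d = (mean(incongruent_rts) - mean(congruent_rts)) / pooled_sd(
    congruent_rts, incongruent_rts
)
print(f"D = {d:.2f}")  # a positive D indicates a stronger congruent association
```

Scaling by variability, rather than reporting the raw millisecond difference, is what lets researchers compare the strength of learned associations across individuals.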

Social psychologists theorise that implicit biases, such as those demonstrated by White people taking the race IAT, are learned through experience. This occurs either directly, through encounters with a particular social group, or indirectly, through exposure to information about that social group. In Western cultures, White people are inundated with cultural messages and stereotypes that portray Black people as uneducated, relatively poor, and more likely to be in trouble with the law. Consequently, implicit bias may form through exposure to this cultural milieu. Do you know what your own IAT result shows? Anybody can take these tests through the Project Implicit website. Your result may surprise you, but it’s important to remember that it might not reflect your personal beliefs but rather learned associations imbued through exposure to your cultural or social environment. Research has revealed remarkable findings through the use of the IAT. For example, a recent longitudinal study shows that implicit attitudes towards race, skin tone and sexual orientation have trended towards neutrality over the last 12 years (i.e., people’s implicit bias towards these social categories seems to be decreasing). However, attitudes towards age and disability have remained stable, whilst implicit bias relating to body weight has increased. Moreover, implicit attitudes appear to hold predictive validity; studies have shown that people’s preference for White people on the race IAT predicts intention to vote for a White relative to a Black presidential candidate. Now that’s a cool finding!

However, implicit measures have also received their fair share of criticism. Research indicates a weak relationship between explicit and implicit attitudes, suggesting that they may reflect separate attitude representations. An alternative theory, however, is that explicit and implicit measures allow people to edit their responses to varying degrees. In 2016, as a PhD student, I wrote my first commentary reflecting on what exactly implicit measures assess. In addition, although the IAT has shown some predictive validity (e.g., voting behaviour), other research indicates that for more socially sensitive attitudes, the IAT does not predict resulting discriminatory behaviour. Although the IAT was heralded as providing new insights into human cognition and behaviour, some researchers believe the test has been oversold. Nevertheless, I argue that implicit attitudes may fail to predict real-world behaviour because of the same issue that plagues self-report measures: social desirability. That is, people may think negatively about a certain out-group member, but that doesn’t necessarily mean they will act upon this. The same may be true for the weak correlations between explicit and implicit attitude measures; people distort their attitudes on self-report questionnaires, whereas implicit measures aren’t susceptible to these self-presentational motives. Should we expect correlations between two measures when one is tapping into controllable beliefs and the other is uncovering introspectively unidentified traces of past experience?

To answer these questions, I was awarded funding through the Vice Chancellor’s Early Career Research Awards (VC ECR Award) at UWE Bristol to investigate other implicit socio-cognitive mechanisms that may predict implicit bias. The blue-sky thinking behind this research is to develop measures that can capture implicit behavioural manifestations of bias. At this stage, we are too early in our research endeavour to reveal any findings; however, other influential and impactful avenues have already stemmed from this research.

At the same time as I have been conducting my research, Ellie Bliss (Adult Nurse Lecturer) and Alisha Airey (BME Project Officer) have been running staff workshops at UWE Bristol, reflecting upon how implicit (unconscious) bias can play out in the higher education classroom. I am now involved in supporting these workshops, providing research-led guidance on how we assess implicit bias and answering the many questions that staff have about this rather ambiguous construct. One interesting discussion centres on whether implicit biases can still be viewed as unconscious when we are increasingly acknowledging them through teaching and training. The majority of attendees come away from the workshop with new reflections on how teaching practice is orientated towards Western culture, and with classroom strategies to implement to prevent implicit bias from playing out. However, a handful of attendees are surprised by, and doubtful of, the concept of implicit bias and the tools that purportedly measure it. They have difficulty accepting that they may hold certain biases. But the truth is, we all do.

Where is implicit social cognition headed?

In this blog post I hope I have demonstrated that we are shining a light on what implicit bias really is and on the nature of our unconscious attitudes. Such research has paved the way for training workshops that teach people to acknowledge their deep-rooted attitudes and reflect upon how these may impact their thinking and behaviour towards other people. But what’s next for this research arena? There are still many unanswered questions and controversies surrounding implicit bias, which makes it an exciting topic to study. Do implicit measures really provide a window into the unconscious mind? Is implicit bias relatively stable when measured at different time points? Can implicit bias be changed, and if so, are such changes short- or long-term? Are attitudes towards some social groups easier to change than others? Can we, as a field, develop other (implicit) behavioural measures that predict implicit attitudes more accurately than self-reports do? Such investigations represent the future of implicit social cognition and I, for one, am extremely excited to see what’s to come.

Research Experience as an Undergrad: My summer internship and placement

Posted on

By Josh Lee

I’m a second-year psychology student at UWE, and throughout my first year I found myself developing a keen interest in psychological research. The more I engaged with my degree, the more interested I became, and towards the end of first year I started actively seeking opportunities to gain research experience. I was interested in learning more about the research process, and I also knew how valuable such experience can be for postgraduate applications.

In May of this year I went on an animal behaviour research trip to the island of Lundy. This was shortly after applying for my first research role, a paid summer internship with Drs Kait Clark and Charlotte Pennington. I learnt a lot on Lundy and made friends with the other student researchers. Towards the end we realised we were on the same wavelength…three fellow Lundy attendees and I had been invited to interview for the same position. The interviews were scheduled for the week after our return from Lundy, and we were now friends competing against each other. All we could do was wish each other luck in the interview and hope for the best.

The interview was competitive, and we were all given a short programming task to attempt in advance. Maybe there was something in the sea air, but when an email came through from Kait offering the job, all four of our names were on it. Taking the extracurricular opportunity to learn and conduct psychological research on Lundy perhaps gave us an edge in the interview, and we now had the chance to contribute to a legitimate paper together.

The main aim of the project was to develop a set of visual and social cognition tasks for the purposes of establishing test-retest reliability, building on a recent study by Hedge, Powell, & Sumner (2018). Our first task was to complete a comprehensive review of visual cognition literature. Although I had experience of examining research papers to get references for essays, this was much more in depth and specific. The process of comparing the different papers took a while to get used to, but it has been eye-opening to review papers with a view towards designing our own study rather than evaluating a proposition for an essay. It highlights different issues within and between papers that I would not have considered otherwise, and I feel like it has helped me develop a more complete approach to evaluating research papers in general. We were given lots of freedom to conduct the review and research – this was hugely beneficial as it left a lot of potential for creative ideas and individual contribution.
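For readers unfamiliar with the term: establishing test-retest reliability essentially means checking whether participants' scores from one session line up with their scores from a second session. One common index is a simple correlation between the two sessions' scores. A minimal sketch of that calculation is below (Pearson's r; in practice intraclass correlations are often preferred for reliability, and the function name here is illustrative rather than anything from our project code):

```python
def pearson_r(session1, session2):
    """Pearson correlation between paired scores from two sessions.
    Values near 1 indicate that individual differences were stable
    across sessions; values near 0 indicate poor reliability."""
    n = len(session1)
    m1 = sum(session1) / n
    m2 = sum(session2) / n
    cov = sum((a - m1) * (b - m2) for a, b in zip(session1, session2))
    sd1 = sum((a - m1) ** 2 for a in session1) ** 0.5
    sd2 = sum((b - m2) ** 2 for b in session2) ** 0.5
    return cov / (sd1 * sd2)
```

A task can produce robust group-level effects yet still correlate poorly with itself across sessions, which is exactly the distinction Hedge and colleagues highlight and the reason measuring reliability directly matters.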

We chose measures for which the test-retest reliability had not already been established so our research could have the most impact. Each of us then chose one measure and worked through writing the Python code to implement parameters in alignment with previous studies. We are using PsychoPy, open-source software, to program our measures. I have limited coding knowledge (but enough to pass the interview stage!) so using Python has been a learning experience. Although frustrating at times, help has always been available and through a combination of initiative, trial and error, and advice, the measures shaped up nicely. I developed a motion coherence task, and piloting it on my friends has been interesting – explaining what the task is for and the wider context requires a thorough knowledge of it, and I am genuinely passionate about it. I never thought I’d be excited about a spreadsheet.
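In PsychoPy the coherence manipulation is largely handled by the stimulus class itself (its dot stimulus exposes a coherence parameter), but the underlying logic is straightforward: some fraction of the dots move in a shared "signal" direction while the rest move in random "noise" directions, and the participant judges the signal direction. A hedged sketch of that core idea is below; the function name is illustrative, and real implementations differ in how they handle noise dots, dot lifetimes, and re-plotting:

```python
import random

def dot_directions(n_dots, coherence, signal_dir_deg):
    """Assign a motion direction (degrees) to each dot on a frame:
    a `coherence` fraction move in the signal direction, and the
    remainder move in uniformly random directions."""
    n_signal = round(n_dots * coherence)
    dirs = [signal_dir_deg] * n_signal
    dirs += [random.uniform(0, 360) for _ in range(n_dots - n_signal)]
    random.shuffle(dirs)  # signal and noise dots are interleaved
    return dirs
```

Lowering the coherence fraction makes the global motion direction harder to judge, which is what lets the task estimate a perceptual threshold.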

During our summer internship we also had an opportunity to meet with Dr Craig Hedge, whose recent paper has inspired our current work. We got to hear about his research first hand and discuss our project and how it related to his paper. It was interesting and insightful to talk about his work and how our test-retest reliability project came about.

Now we’ve finished the development stage of the project, and with all the tasks up and running, it’s time for data collection. I’m continuing to work on this project as my work-based learning placement for my Developing Self and Society (DSAS) module. Time slots are available on UWE’s participant pool for students to book in, and so we have all been running sessions for up to four participants at once. This involves briefing, setting up the experiments on the computers, giving instructions, addressing issues that arise, and ensuring that the conditions are the same for every session. It’s fun to discuss the study when debriefing the participants, to raise awareness of what is being investigated and help them understand why they did the tasks involved. The integration of my internship with one of my second-year modules shows how beneficial an opportunity like this can be. In isolation, it is good experience on its own, but linking it with my regular studies and incorporating my experience into university work has made it invaluable.

It’s been great working closely with Kait and Charlotte in addition to Austin, Triin, and Kieran. Chatting with staff as well as students in a different year to me has given me insight into the university and the course itself. I have learnt a lot already and will continue to do so. The project will also help me with my own research project and my degree in general. I’m excited to see what the rest of it brings.