COVID-19 opportunities to shift artificial intelligence towards serving the common good

Contact tracing apps are just one measure governments have been using in their attempts to contain the current coronavirus outbreak. Such apps have also raised concerns about privacy and data security. COVID-19 – and the current calls for AI-led healthcare solutions – highlight the pressing need to consider the wider ethical issues raised by artificial intelligence (AI). This blog discusses some of the issues explored in a recent report and brief on the moral dilemmas linked to AI, written by the Science Communication Unit for the European Parliament.

AI means, broadly, machines that mimic human cognitive functions, such as learning, problem-solving, speech recognition and visual perception. One of the key benefits touted for AI is to reduce healthcare inefficiencies; indeed, AI is already widespread in healthcare settings in developed economies, and its use is set to increase. 

There are clear benefits for strained healthcare systems. In some fundamental areas of medicine, such as medical image diagnostics, machine learning has been shown to match or even surpass human ability to detect illnesses. New technologies, such as health monitoring devices, may free up medical staff time for more direct interactions with patients, and so potentially increase the overall quality of care. Intelligent robots may also work as companions or carers, remind you to take your medications, help you with your mobility or cleaning tasks, or help you stay in contact with your family, friends and healthcare providers via video link.

AI technologies have been an important tool in tracking and tracing contacts during the COVID-19 outbreak in countries such as South Korea. There are clear benefits to such life-saving AI, but widespread use of a contact-tracing app also raises ethical questions. South Korea has tried to flatten its curve using intense scraping of personal data, and other countries have been using digital surveillance and AI-supported drones to monitor the population in attempts to stem the spread. The curtailing of individual privacy may be a price we have to pay, but it is a tricky ethical balance to strike – for example, the National Human Rights Commission of Korea has expressed its concern about excessive disclosure of private information of COVID-19 patients.  

The case of the missing AI laws

As adoption of AI continues to grow apace – in healthcare, as well as in other sectors such as transportation, energy, defence, services, entertainment, finance and cybersecurity – legislation has lagged behind, and a significant gap remains between the pace of AI development and the pace of AI lawmaking. The World Economic Forum calls for much-needed ‘governance architectures’ to build public trust in AI and to ensure that the technology can be used for health crises such as COVID-19 in future. Several laws and regulations already deal with aspects relevant to AI (such as the EU’s GDPR on data, or several countries’ laws on autonomous vehicles), but no country yet has specific laws on ethical and responsible AI. Several countries are discussing restrictions on the use of lethal autonomous weapons systems (LAWS).[1] However, governments in general have been reluctant to create restrictive laws.

A new report commissioned by the European Parliament will feed into the work of its Scientific Foresight Unit, STOA. The report, written by the Science Communication Unit, was led by Alan Winfield, Professor of Robot Ethics.

Broad ethical questions

Reviewing the scientific literature and existing frameworks around the world, we found there are diverse, complex ethical concerns arising from the development of artificial intelligence.

In relation to healthcare, for diseases like COVID-19 that are spread via social contact, care robots could provide necessary, protective, socially distanced support for vulnerable people. However, if this technology becomes more pervasive, it could be used in more routine settings as well. Questions then arise over whether a care robot or a companion robot can really substitute for human interaction – particularly pertinent in the long-term caring of vulnerable and often lonely people, who derive basic companionship from caregivers.

As with many areas of AI technology, the privacy and dignity of users need to be carefully considered when designing healthcare service and companion robots. Robots do not have the capacity for ethical reflection or a moral basis for decision-making, and so humans must hold ultimate control over any decision-making in healthcare and other contexts.

Other applications raise further concerns, ranging from large-scale and well-known issues such as job losses from automation, to more personal, moral quandaries such as how AI will affect our sense of trust, our ability to judge what is real, and our personal relationships.

Perhaps unexpectedly, we also found that AI has a significant energy cost and furthers social inequalities – and that, crucially, these aspects are not being covered by existing frameworks.

Our Policy Options Brief highlights four key gaps in current frameworks, which do not yet cover:

  • ensuring benefits from AI are shared fairly;
  • ensuring workers are not exploited;
  • reducing energy demands in the context of environmental and climate change;
  • and reducing the risk of AI-assisted financial crime.

It is also clear that, while AI has global applications and potential benefits, there are enormous disparities in access and benefits between global regions. It is incumbent upon today’s policy- and law-makers to ensure that AI does not widen global inequalities further. Progressive steps could include data-sharing and collaborative approaches (such as India’s promise to share its AI solutions with other developing economies), and efforts to make teaching around computational approaches a fundamental part of education, available to all.

Is AI developed for the common good?

Calls have been issued for AI experts and contributors worldwide to help find further solutions to the COVID-19 crisis – for example, the AI-ROBOTICS vs COVID-19 initiative of the European AI Alliance is compiling a ‘solutions repository’. At the time of writing, 248 organisations and individuals were offering COVID-related solutions via AI development. These include a deep-learning hand-washing coach, which gives you immediate feedback on how to wash your hands more effectively.

Other solutions include gathering and screening knowledge; software enabling a robot to disinfect areas, or to screen people’s body temperature; robots that deliver objects to people in quarantine; automated detection of early breathing difficulties; and FAQ chatbots or even psychological support chatbots.

Government calls for AI-supported COVID-19 solutions are producing an interesting ethical interface between sectors that have previously kept each other at arm’s length. In the hyper-competitive world of AI companies, co-operation (or even information sharing) towards a common goal is uncharted territory. These developments crystallise one of the ethical questions at the core of AI debates – should AI be developed and used for private or public ends? In this time of COVID-19, increased attention from governments (and increased media attention on some of the privacy-related costs of AI) provides an opportunity to open up and move forward this debate. Moreover, the IEEE argues that the sense of ‘emerging solidarity’ and ‘common global destiny’ accompanying the COVID-19 crisis is a perfect lever to make the sustainability and wellbeing changes required.

One barrier to debate is the difficulty of understanding some of the most advanced AI technologies, which is why good science communication is crucial. It is vitally important that the public are able to formulate and voice informed opinions on potentially society-changing developments. Governments need better information too – and up-to-date, independent and evidence-based forms of technology assessment. Organisations such as the Science, Technology Assessment and Analytics team in the US Government Accountability Office and the European Foresight platform are trying to enable governments and lawmakers to understand such technologies deeply while they can still be shaped.

If we are to enjoy the benefits of AI, good governance frameworks are urgently needed to balance the ethical considerations and manage the risks. It remains to be seen whether the COVID-19-prompted developments in AI will herald a new era of public-private cooperation for the common good, but if there was ever a time to amplify this conversation, it is now.

Ruth Larbey, Science Communication Unit, UWE Bristol.


[1] Belgium has already passed legislation to prevent the use or development of LAWS. 


How to write a research synthesis report (or how I conquered my batteries mountain!)

The words on the screen are drifting in and out of focus… lithium-ion and sodium-ion, redox flow and redox couples… and, errrrrm, what does ‘roundtrip efficiency’ mean?

It’s April 2018.  I have just returned to work after a sleepless year on maternity leave and been tasked with writing a report on battery technologies and their environmental impacts.

It’s an honour to write about such an important topic – batteries are critical to renewable energy systems and e-mobility – and I am excited about the job ahead.

However, faced with this seemingly insurmountable, not to mention impenetrable, pile of scientific papers upon which to base the report, it’s also easy to feel a little daunted.

I pull myself together. I know that I can do this because I’ve been here before, having successfully delivered reports on a diverse set of topics, from green finance to fish farming – as baffling as some of these topics may have seemed at first.

And sure enough, six months later, Towards the Battery of the Future (as the finished report is now titled) is being handed out to warm approval at high-level international conferences and EU meetings, deemed worthy of attention by top-tier policymakers and captains of industry.

With a glow of satisfaction, I pat myself on the back for having mastered a topic that, initially, I knew very little about. I’m also chuffed to have played a role in sharing the science with wider society.

Research syntheses

Towards the Battery of the Future is one of a number of reports I have worked on for Science for Environment Policy over the past 8 years. It is an example of a research synthesis – a publication which weaves together research, often from multiple disciplines, to support or influence policy.

In Science for Environment Policy’s case, we distill research to help policymakers protect and enhance our environment.

I can tell you from my time on these reports that producing a research synthesis is a tricky business. I am just starting work on a new report which explores the wonders of pollinators, and it feels a good time to reflect upon how best to go about a research synthesis.

An increasing body of scholarly work is assessing the role and impact of research syntheses, and various techniques for creating them [1]. This has yielded some interesting principles and frameworks, which provide valuable food for thought and guidelines for action.

This blog post is my nuts-and-bolts contribution to the discussion: below are a handful of pointers, drawn from personal experience. These helped me take the batteries report, and those before it, on the journey from a mystifying blur of pixels to a bona fide publication, and one which may just make the world a better place.

1. Talk to real people

A chat with a well-selected expert can clarify more about a topic than days of scouring through research papers (and certainly more than could ever be gleaned from Wikipedia).

Work on the batteries report really got going after some enlightening conversations with the commissioning policy officer in Brussels and my trusty scientific advisor in Germany. Both helped define what we really need to focus on.

Where does the weight of evidence sit? What are the big debates and unknowns? And, seriously, what does roundtrip efficiency actually mean?

Thanks to these chats, the words on my screen start to snap into focus, and, armed with a list of useful keywords, I feel ready to take on the research databases and build this report.

(And, it turns out, roundtrip efficiency is really a very simple concept. Need to know: you don't want your batteries to leak too much energy when recharging).
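For the numerically minded, here is a minimal sketch of the idea (the figures are invented for illustration, not taken from the report): roundtrip efficiency is simply the energy you get back out of a battery divided by the energy you put in over a full charge-discharge cycle.

```python
# Illustrative sketch only (hypothetical figures, not from the report):
# roundtrip efficiency = energy recovered on discharge / energy used to charge.
energy_in_kwh = 100.0   # hypothetical energy drawn from the grid to charge
energy_out_kwh = 90.0   # hypothetical energy recovered on discharge

roundtrip_efficiency = energy_out_kwh / energy_in_kwh
print(f"Roundtrip efficiency: {roundtrip_efficiency:.0%}")  # prints "Roundtrip efficiency: 90%"
```

In other words, the closer that percentage is to 100%, the less energy the battery "leaks" each time it is recharged.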

2. And talk to lots of different types of people

I lost count of how many people contributed to and reviewed the batteries report. These helpful souls not only offered useful details, but also brought balance through their diverse backgrounds, from transport to chemicals.

And it’s not just scientists and policymakers who can help. Businesses, consultants and community groups, for example, are all a treasure trove of information and perspective.

I have been transported from my desk in a grey suburb of Bristol to tropical forests of Central America and windswept fish farms of the Baltic Sea, courtesy of telephone conversations with astonishingly obliging contributors.

Starting each report as a tabula rasa, I do often feel a little ignorant during these chats. I've not quite forgiven the guy who actually shouted at me for asking the wrong questions (owing to my ignorance on the particular topic of the report at the time), but I did come out of that conversation much more knowledgeable than when I went in.

A caveat: the more people involved in a report, the longer it takes – and the risk of missing publication in time for key policy events increases, diminishing the report’s potential impact. In practice, synthesis writers are often faced with the challenge of finding the best way to produce robust content within short timeframes (see also: limited budgets).

3. Your reference manager is your best friend

I’ve seen many a writer get in a twist attempting to manually manage the reams of references that make up a report. Problems often arise as a report continually shifts in form throughout its development; citations get lost, bibliographies get muddled.

I’ve adopted Mendeley to overcome these issues and to do all the awkward formatting for me. It’s not perfect, and I’m always keen to know how others deal with their references, but it sure makes life a lot easier.

4. Keep on truckin’

It is the research that goes into developing a report, and not the actual writing, that drains the most time and energy. A day spent filtering and reading papers can amount to just two or three short paragraphs of text. Producing a research synthesis report is, at times, frustratingly arduous.

However, as Towards the Battery of the Future gradually morphed into a rounded product, I was reminded of why I went into science communication in the first place: it’s the perfect excuse to learn new things. The process of translating between the languages of science and the ‘lay person’ is also something I find undeniably satisfying.

Indeed, as I submit the final draft, I’m wishing I could make my own efficient roundtrip – to go back and do it all again.

Michelle Kilfoyle, Science Writer, Science for Environment Policy

[1] Some recent examples:

The Royal Society & the Academy of Medical Sciences (2018) Evidence synthesis for policy: a statement of principles. https://royalsociety.org/~/media/policy/projects/evidence-synthesis/evidence-synthesis-statement-principles.pdf

Wyborn et al. (2018) Understanding the Impacts of Research Synthesis. Environmental Science & Policy, 86: 72–84. DOI: 10.1016/j.envsci.2018.04.013

New and notable – selected publications from the Science Communication Unit

The last six months have been a busy time for the Unit. We are now fully in the swing of the 2016/17 teaching programme for our MSc Science Communication and PgCert Practical Science Communication students, and we’ve been working on a number of exciting research projects. As if that wasn’t enough to keep us busy, we’ve also produced a number of new publications.

We wanted to share some of these recent publications to provide an insight into the work that we are involved in as the Science Communication Unit.

Science for Environment Policy

Science for Environment Policy is a free news and information service published by Directorate-General Environment, European Commission. It is designed to help the busy policymaker keep up-to-date with the latest environmental research findings needed to design, implement and regulate effective policies. In addition to a weekly news alert we publish a number of longer reports on specific topics of interest to the environmental policy sector.

Recent reports focus on:

Ship recycling: The ship-recycling industry — which dismantles old and decommissioned ships, enabling the re-use of valuable materials — is a major supplier of steel and an important part of the economy in many countries, such as Bangladesh, India, Pakistan and Turkey. However, mounting evidence of negative impacts undermines the industry’s contribution to sustainable development. This Thematic Issue presents a selection of recent research on the environmental and human impacts of shipbreaking.

Environmental compliance assurance and combatting environmental crime: How does the law protect the environment? The responsibility for the legal protection of the environment rests largely with public authorities such as the police, local authorities or specialised regulatory agencies. However, more recently, attention has been focused on the enforcement of environmental law — how it should most effectively be implemented, how best to ensure compliance, and how best to deal with breaches of environmental law where they occur. This Thematic Issue presents recent research into the value of emerging networks of enforcement bodies, the need to exploit new technologies and strategies, the use of appropriate sanctions and the added value of a compliance assurance conceptual framework.

Synthetic biology and biodiversity: Synthetic biology is an emerging field and industry, with a growing number of applications in the pharmaceutical, chemical, agricultural and energy sectors. While it may offer solutions to some of the greatest challenges facing the environment, such as climate change and scarcity of clean water, the introduction of novel, synthetic organisms may also pose a high risk for natural ecosystems. This Future Brief outlines the benefits, risks and techniques of these new technologies, and examines some of the ethical and safety issues.

Socioeconomic status and noise and air pollution: Lower socioeconomic status is generally associated with poorer health, and both air and noise pollution are among the wide range of factors influencing human health. But do these health inequalities arise because of increased exposure to pollution, increased sensitivity to exposure, increased vulnerabilities, or some combination? This In-depth Report presents evidence on whether people in deprived areas are more affected by air and noise pollution — and suffer greater consequences — than wealthier populations.

Educational outreach

We’ve published several research papers exploring the role and impact of science outreach. Education outreach usually aims to work with children to influence their attitudes or knowledge about STEM – but there are only so many scientists and engineers to go around. So what if instead we influenced the influencers? In this publication, Laura Fogg-Rogers describes her ‘Children as Engineers’ project, which paired student engineers with pre-service (student) teachers.

Fogg-Rogers, L. A., Edmonds, J. and Lewis, F. (2016) Paired peer learning through engineering education outreach. European Journal of Engineering Education. ISSN 0304-3797 Available from: http://eprints.uwe.ac.uk/29111

Teachers have been shown in numerous research studies to be critical for shaping children’s attitudes to STEM subjects, and yet only 5% of primary school teachers have a STEM higher qualification. Improving teachers’ science teaching self-efficacy – their perception of their own ability to do this job – is therefore critical if we want to influence young minds in science.

The student engineers and teachers worked together to perform outreach projects in primary schools and the project proved very successful. The engineers improved their public engagement skills, and the teachers showed significant improvements to their science teaching self-efficacy and subject knowledge confidence. The project has now been extended with a £50,000 funding grant from HEFCE and will be run again in 2017.

And finally, Dr Emma Weitkamp considers how university outreach activities can be designed to encourage young people to think about the relationships between science and society. In this example, Emma worked with Professor Dawn Arnold to devise an outreach project on plant genetics and to consider how this type of project could meet the needs of teachers, researchers and science communicators, all seeking (slightly) different aims.

Emma Weitkamp and Dawn Arnold, ‘A Cross Disciplinary Embodiment: Exploring the Impacts of Embedding Science Communication Principles in a Collaborative Learning Space’, in Science and Technology Education and Communication: Seeking Synergy, Maarten C. A. van der Sanden and Marc J. de Vries (Eds.), Delft University of Technology, The Netherlands.

We hope that you find our work interesting and insightful. Keep an eye on this blog – next week we will highlight our publications around robots, robot ethics, ‘fun’ in science communication and theatre.

Details of all our publications to date can be found on the Science Communication Unit webpages.