COVID-19 opportunities to shift artificial intelligence towards serving the common good

Contact tracing apps are just one measure governments have been using in their attempts to contain the current coronavirus outbreak. Such apps have also raised concerns about privacy and data security. COVID-19 – and the current calls for AI-led healthcare solutions – highlight the pressing need to consider wider ethical issues raised by artificial intelligence (AI). This blog discusses some of the ethical issues raised in a recent report and brief on moral issues and dilemmas linked to AI, written by the Science Communication Unit for the European Parliament.

AI means, broadly, machines that mimic human cognitive functions, such as learning, problem-solving, speech recognition and visual perception. One of the key benefits touted for AI is its potential to reduce healthcare inefficiencies; indeed, AI is already widespread in healthcare settings in developed economies, and its use is set to increase.

There are clear benefits for strained healthcare systems. In some fundamental areas of medicine, such as medical image diagnostics, machine learning has been shown to match or even surpass human ability to detect illnesses. New technologies, such as health monitoring devices, may free up medical staff time for more direct interactions with patients, and so potentially increase the overall quality of care. Intelligent robots may also work as companions or carers: reminding patients to take their medications, helping with mobility or cleaning tasks, or helping them stay in contact with family, friends and healthcare providers via video link.
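
To make this concrete, the sketch below shows the kind of transfer-learning approach commonly used in medical image diagnostics: a network pre-trained on general images is reused, and only a small classification head is retrained for the diagnostic task. It is purely illustrative – the two-class labelling, the training settings and the synthetic tensors (standing in for real, ethically sourced scans) are all assumptions.

    # Illustrative sketch only: synthetic data stands in for labelled scans.
    import torch
    import torch.nn as nn
    from torchvision import models

    # Pre-trained backbone reused as a fixed feature extractor.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    for param in model.parameters():
        param.requires_grad = False  # freeze the backbone
    model.fc = nn.Linear(model.fc.in_features, 2)  # new head: healthy / abnormal (assumed labels)

    optimiser = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    # Synthetic batch standing in for pre-processed 224x224 RGB scans.
    images = torch.randn(8, 3, 224, 224)
    labels = torch.randint(0, 2, (8,))

    model.train()
    for epoch in range(3):  # a real pipeline would iterate over a DataLoader
        optimiser.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimiser.step()
        print(f"epoch {epoch}: loss {loss.item():.3f}")

In practice, systems like this are validated against clinician judgement on held-out data long before any deployment.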

AI technologies have been an important tool in tracking and tracing contacts during the COVID-19 outbreak in countries such as South Korea. There are clear benefits to such life-saving AI, but the widespread use of a contact-tracing app also raises ethical questions. South Korea has tried to flatten its curve through intensive scraping of personal data, and other countries have been using digital surveillance and AI-supported drones to monitor their populations in attempts to stem the spread. The curtailing of individual privacy may be a price we have to pay, but it is a tricky ethical balance to strike – for example, the National Human Rights Commission of Korea has expressed its concern about excessive disclosure of COVID-19 patients’ private information.
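
One reason the ethics can differ so much between countries is the system design itself. The sketch below illustrates a decentralised, privacy-preserving alternative (loosely inspired by designs such as DP-3T; it is a simplification, not any country’s actual protocol): phones broadcast short-lived pseudonymous tokens, and exposure matching happens on the device rather than in a central database.

    # Simplified illustration of decentralised contact tracing; not a real protocol.
    import hashlib
    import hmac
    import secrets

    def daily_key() -> bytes:
        """Fresh random key generated on-device each day."""
        return secrets.token_bytes(16)

    def ephemeral_id(key: bytes, interval: int) -> bytes:
        """Short-lived broadcast token, unlinkable without the daily key."""
        return hmac.new(key, interval.to_bytes(4, "big"), hashlib.sha256).digest()[:16]

    # Alice's phone broadcasts tokens; Bob's phone records what it hears nearby.
    alice_key = daily_key()
    heard_by_bob = {ephemeral_id(alice_key, i) for i in range(96)}  # 15-minute slots

    # If Alice tests positive, she uploads only her daily key; Bob's phone
    # re-derives her tokens locally and checks for an intersection.
    rederived = {ephemeral_id(alice_key, i) for i in range(96)}
    print("exposure detected:", bool(heard_by_bob & rederived))

Here the design choice is the ethical choice: under such a scheme, no central authority ever holds a register of who met whom.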

The case of the missing AI laws

As adoption of AI continues to grow apace – in healthcare, as well as in other sectors such as transportation, energy, defence, services, entertainment, finance and cybersecurity – legislation has lagged behind, and the gap between the pace of AI development and the pace of AI lawmaking remains significant. The World Economic Forum has called for much-needed ‘governance architectures’ to build public trust in AI and ensure that the technology can be used for health crises such as COVID-19 in the future. Several laws and regulations deal with aspects relevant to AI (such as the EU’s GDPR on data, or various national laws on autonomous vehicles), but no country yet has specific laws on ethical and responsible AI. Several countries are discussing restrictions on the use of lethal autonomous weapons systems (LAWS).[1] In general, however, governments have been reluctant to create restrictive laws.

A new report commissioned by the European Parliament will feed into the work of its Scientific Foresight Unit, STOA. The report, written by the Science Communication Unit, was led by Alan Winfield, Professor of Robot Ethics.

Broad ethical questions

Reviewing the scientific literature and existing frameworks around the world, we found there are diverse, complex ethical concerns arising from the development of artificial intelligence.

In relation to healthcare, for diseases like COVID-19 that are spread via social contact, care robots could provide necessary, protective, socially distanced support for vulnerable people. However, if this technology becomes more pervasive, it could be used in more routine settings as well. Questions then arise over whether a care robot or a companion robot can really substitute for human interaction – particularly pertinent in the long-term care of vulnerable and often lonely people, who derive basic companionship from caregivers.

As with many areas of AI technology, the privacy and dignity of users need to be carefully considered when designing healthcare service and companion robots. Robots do not have the capacity for ethical reflection or a moral basis for decision-making, and so humans must hold ultimate control over any decision-making in healthcare and other contexts.
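
In software terms, that principle is often realised as a ‘human-in-the-loop’ gate: the system may recommend an action, but only a person can authorise it. Below is a minimal, purely hypothetical sketch – the clinical scenario and all names are invented.

    # Hypothetical human-in-the-loop gate: the AI proposes, a human decides.
    from dataclasses import dataclass

    @dataclass
    class Recommendation:
        action: str
        confidence: float

    def model_recommendation() -> Recommendation:
        # Stand-in for a real model's output.
        return Recommendation(action="adjust medication dose", confidence=0.87)

    def execute(action: str) -> None:
        print(f"Executing: {action}")

    rec = model_recommendation()
    print(f"AI suggests: {rec.action} (confidence {rec.confidence:.0%})")
    if input("Approve? [y/N] ").strip().lower() == "y":
        execute(rec.action)
    else:
        print("Action withheld; deferred to the human clinician.")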

Other applications raise further concerns, ranging from large-scale and well-known issues such as job losses from automation, to more personal, moral quandaries such as how AI will affect our sense of trust, our ability to judge what is real, and our personal relationships.

Perhaps unexpectedly, we also found that AI has a significant energy cost and furthers social inequalities – and that, crucially, these aspects are not being covered by existing frameworks.
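
The energy cost is easy to underestimate, so a back-of-envelope calculation helps. Every figure below is an assumption chosen purely for illustration – real training runs vary enormously by model, hardware and data centre.

    # Back-of-envelope training-energy estimate; all figures are assumptions.
    gpus = 64               # hypothetical accelerator count
    power_per_gpu_kw = 0.3  # ~300 W draw per device (assumed)
    hours = 24 * 14         # a two-week training run (assumed)
    pue = 1.5               # data-centre overhead factor (assumed)

    energy_kwh = gpus * power_per_gpu_kw * hours * pue
    print(f"Estimated energy: {energy_kwh:,.0f} kWh")  # ~9,700 kWh

    grid_kg_co2_per_kwh = 0.52  # rough global average emissions factor (assumed)
    print(f"Rough emissions: {energy_kwh * grid_kg_co2_per_kwh / 1000:,.1f} tonnes CO2e")

Even on these modest assumptions, a single training run consumes roughly what two or three typical households use in a year – and large models are retrained many times.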

Our Policy Options Brief highlights four key gaps in current frameworks, which don’t currently cover:

  • ensuring benefits from AI are shared fairly;
  • ensuring workers are not exploited;
  • reducing energy demands in the context of environmental and climate change; and
  • reducing the risk of AI-assisted financial crime.

It is also clear that, while AI has global applications and potential benefits, there are enormous disparities in access and benefits between global regions. It is incumbent upon today’s policy- and law-makers to ensure that AI does not widen global inequalities further. Progressive steps could include data-sharing and collaborative approaches (such as India’s promise to share its AI solutions with other developing economies), and efforts to make teaching around computational approaches a fundamental part of education, available to all.

Is AI developed for the common good?

Calls have been issued for AI experts and developers worldwide to help find further solutions to the COVID-19 crisis – for example, the AI-ROBOTICS vs COVID-19 initiative of the European AI Alliance is compiling a ‘solutions repository’. At the time of writing, 248 organisations and individuals were offering COVID-related solutions via AI development. These include a deep-learning hand-washing coach, which gives immediate feedback on your hand-washing technique.
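
As a purely hypothetical sketch of how such a coach might be structured, a frame classifier (stubbed below with random labels) tags each moment of video with a WHO hand-washing step, and the coach reports whatever was missed.

    # Hypothetical structure of a hand-washing coach; the classifier is a stub.
    import random

    WHO_STEPS = ["palms", "backs of hands", "between fingers",
                 "thumbs", "fingertips", "wrists"]

    def classify_frame(frame_index: int) -> str:
        # Stand-in for a deep-learning frame classifier.
        return random.choice(WHO_STEPS)

    observed = {classify_frame(i) for i in range(600)}  # ~20 s of 30 fps video
    missed = [step for step in WHO_STEPS if step not in observed]
    print("Well done!" if not missed else f"Don't forget: {', '.join(missed)}")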

Other solutions include gathering and screening knowledge; software enabling a robot to disinfect areas, or to screen people’s body temperature; robots that deliver objects to people in quarantine; automated detection of early breathing difficulties; and FAQ chatbots or even psychological support chatbots.

Government calls for AI-supported COVID-19 solutions are producing an interesting ethical interface between sectors that have previously kept each other at arm’s length. In the hyper-competitive world of AI companies, co-operation (or even information sharing) towards a common goal is uncharted territory. These developments crystallise one of the ethical questions at the core of AI debates – should AI be developed and used for private or public ends? In this time of COVID-19, increased attention from governments (and the increased media attention on some of the privacy-related costs of AI) provides an opportunity to open up and move this debate forward. Moreover, the IEEE urges that the sense of ‘emerging solidarity’ and ‘common global destiny’ accompanying the COVID-19 crisis is a perfect lever to make the sustainability and wellbeing changes required.

One barrier to debate is the difficulty of understanding some of the most advanced AI technologies, which is why good science communication is crucial. It is vitally important that the public are able to formulate and voice informed opinions on potentially society-changing developments. Governments need better information too – up-to-date, independent and evidence-based forms of technology assessment. Organisations such as the Science, Technology Assessment and Analytics team in the US Government Accountability Office and the European Foresight platform are trying to enable governments and lawmakers to understand such technologies deeply while they can still be shaped.

If we are to enjoy the benefits of AI, good governance frameworks are urgently needed to balance the ethical considerations and manage the risks. It remains to be seen whether the COVID-19-prompted developments in AI will herald a new era of public-private cooperation for the common good, but if there was ever a time to amplify this conversation, it is now.

Ruth Larbey, Science Communication Unit, UWE Bristol.


[1] Belgium has already passed legislation to prevent the use or development of LAWS. 

