Mutual Shaping in Swarm Robotics: User Studies in Fire and Rescue, Storage Organization, and Bridge Inspection


Fire engine at the Bristol Robotics Lab after a focus group session with local firefighters

I remember this story as if it were yesterday. It was the summer of 2013. I was lying on my bed, listening to music. All of a sudden, I heard someone screaming louder than my music and banging on the door of my flat. I quickly took off my headphones, dashed to the door, opened it, and found my neighbour shaking, her face as pale as chalk. Without a word, she grabbed my arm and pulled me towards her flat. Then I panicked. Smoke was coming out of the door! As we ran into the flat, my neighbour quickly explained that the heater above the wooden bathroom door had caught fire while she was giving a bath to the old lady she was caring for. The old lady needed rescuing. Luckily, the two of us managed to get her out of the bathroom before the fire spread further. The fire brigade arrived in a matter of minutes. The bathroom was destroyed, but no one was injured.

After that experience, I knew I wanted to do something useful for firefighters, because I had seen how extremely dangerous it is for them to enter a smoke-filled building to put out a fire. Did I become a firefighter? Not quite. I decided to do a PhD in swarm robotics at the Bristol Robotics Lab to design useful technology for fire brigades. Swarm robotics is the study of hundreds or thousands of robots that collaborate to solve tasks without any leader, just like swarms of ants, bees or fish, or even the cells in our bodies. Imagine if firefighters could release a swarm of robots at the entrance of a burning building to map the hazards, the source of the fire and any casualties, so that firefighters don't waste time searching (one of the most dangerous parts of their profession). Swarm robotics could also be applied in other settings. What if warehouses had a swarm of robots automatically organising the stock, so that employees only had to ask the swarm for the products they want? Or what if a swarm of robots could spread all over a bridge to monitor cracks? In my opinion, robot swarms are almost ready to leave the lab and enter the real world. We just need to know what type of robot swarms potential users need. So, along with co-authors Emma Milner, Julian Hird, Georgios Tzoumas, Paul Vardanega, Mahesh Sooriyabandara, Manuel Giuliani, Alan Winfield and Sabine Hauert, we ran three studies in which we spoke with 37 professionals from fire and rescue, storage organisation and bridge inspection. The results have recently been published in the open access journal Frontiers in Robotics and AI (you can read the paper here).
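To give a flavour of what "without any leader" means in practice, below is a minimal Python sketch of a decentralised exploration swarm. Each robot follows the same simple local rules – wander, sense the cell it is in, and merge its map with any neighbour it bumps into – and knowledge of the hazards emerges without any central controller. The grid world, rules and numbers here are my own toy assumptions for illustration, not the algorithms from the paper.

```python
import random

GRID = 20                                  # toy arena size (cells)
HAZARDS = {(3, 4), (12, 7), (15, 15)}      # unknown to the robots at the start

class Robot:
    def __init__(self):
        self.x, self.y = 0, 0              # every robot starts at the entrance
        self.known = {}                    # cells this robot knows about

    def step(self):
        # Rule 1: wander -- no central planner assigns positions.
        dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        self.x = min(max(self.x + dx, 0), GRID - 1)
        self.y = min(max(self.y + dy, 0), GRID - 1)
        # Rule 2: sense the current cell and remember what was found.
        self.known[(self.x, self.y)] = (self.x, self.y) in HAZARDS

    def share(self, other):
        # Rule 3: merge knowledge only when two robots are adjacent.
        if abs(self.x - other.x) + abs(self.y - other.y) <= 1:
            merged = {**self.known, **other.known}
            self.known, other.known = dict(merged), dict(merged)

swarm = [Robot() for _ in range(50)]
for _ in range(500):                       # let the swarm explore
    for robot in swarm:
        robot.step()
    for i, a in enumerate(swarm):
        for b in swarm[i + 1:]:
            a.share(b)

found = {cell for r in swarm for cell, hazard in r.known.items() if hazard}
print("Hazards mapped by the swarm:", found)
```

No robot in the sketch is special: remove any one of them and the swarm still builds its map. That redundancy is exactly what makes swarms attractive for dangerous environments like burning buildings.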

Mutual shaping: a bidirectional relationship between the users and the technology developers

For the three studies, we followed the framework of mutual shaping. The long-term aim is to create a bidirectional relationship between users and technology developers, so that societal choices can be incorporated at all stages of the research and development process – as opposed to more traditional methods, where users are asked what they care about only once the technology has already been designed. In our studies, we first had a discussion with participants to find out about their jobs, their challenges and their needs, without any introduction to swarm robotics. After listening to them explain the art of their profession, we introduced them to swarm robotics and gave them examples of where robot swarms could be useful to them. Finally, we had another discussion about how useful those examples were, and challenged them to think of any other scenarios where robot swarms could assist them.

We came away with very helpful take-home messages. The first was that participants were open to the idea of using robot swarms in their jobs. That was somewhat surprising, as we were expecting them to focus more on the downsides of the technology, given how robot swarms are frequently portrayed in science fiction. The second had to do with the particular tasks that participants felt robot swarms could, and could not, do. This was a valuable insight because it revealed their priorities, and hence the next steps for swarm robotics research. For example, firefighters said they would benefit greatly from robot swarms that could gather information for them very quickly. By contrast, they would not want robot swarms extinguishing fires, because of the tremendous number of variables involved in fire extinguishing. That is exactly the art of their profession – they know how to extinguish fires. In the storage organisation study, a participant from a charity shop said that they would not want robots valuing the items they receive, but that robot swarms could be useful for organising the stock more efficiently. Bridge inspectors would rather assess whether there is damage themselves, given the information about the bridge that a robot swarm sends them.

Finally, most participants brought up concerns to tackle if we want to successfully deploy swarms in the real world. These mainly had to do with transparency, accountability, safety, reliability and usability. Some of the challenges for swarm robotics collectively identified in the studies are the following:

  • How can we really understand what’s happening within a robot swarm?
  • How can we make safe robot swarms for users?
  • How can we manufacture robot swarms to be used out of the box without expert training or difficult maintenance?


Bar chart of answers to one of the questions put to fire brigades

Personally, what struck me most in my study was that almost three-quarters of the participants from fire brigades said they would like to be included in the research and development process from the very beginning. So engaging with them through mutual shaping was a good choice, because it opened up exactly the kind of relationship they want to have. And that's really inspiring! I hope our research opens up exciting paths to explore in the future – paths that will take swarm robotics a step closer to making robot swarms useful for society.

Daniel Carrillo-Zapata, PhD in swarm robotics and self-organisation, Bristol Robotics Laboratory

COVID-19 opportunities to shift artificial intelligence towards serving the common good


Contact tracing apps are just one measure governments have been using in their attempts to contain the current coronavirus outbreak. Such apps have also raised concerns about privacy and data security. COVID-19 – and the current calls for AI-led healthcare solutions – highlights the pressing need to consider wider ethical issues raised by artificial intelligence (AI). This blog post discusses some of the ethical issues raised in a recent report and brief on moral issues and dilemmas linked to AI, written by the Science Communication Unit for the European Parliament.

AI means, broadly, machines that mimic human cognitive functions, such as learning, problem-solving, speech recognition and visual perception. One of the key benefits touted for AI is its potential to reduce healthcare inefficiencies; indeed, AI is already widespread in healthcare settings in developed economies, and its use is set to increase.

There are clear benefits for strained healthcare systems. In some fundamental areas of medicine, such as medical image diagnostics, machine learning has been shown to match or even surpass human ability to detect illnesses. New technologies, such as health monitoring devices, may free up medical staff time for more direct interactions with patients, and so potentially increase the overall quality of care. Intelligent robots may also work as companions or carers: reminding people to take their medications, helping with mobility or cleaning tasks, or helping them stay in contact with family, friends and healthcare providers via video link.

AI technologies have been an important tool for tracking and tracing contacts during the COVID-19 outbreak in countries such as South Korea. There are clear benefits to such life-saving AI, but the widespread use of a contact-tracing app also raises ethical questions. South Korea has tried to flatten its curve using intensive collection of personal data, and other countries have been using digital surveillance and AI-supported drones to monitor the population in attempts to stem the spread. The curtailing of individual privacy may be a price we have to pay, but it is a tricky ethical balance to strike – for example, the National Human Rights Commission of Korea has expressed its concern about excessive disclosure of the private information of COVID-19 patients.
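To make the privacy trade-off concrete, here is a simplified sketch of how a decentralised, Bluetooth-style contact-tracing scheme (in the spirit of protocols such as DP-3T) can detect exposure without building a central database of who met whom. The function names and parameters are illustrative assumptions, not any real app's API.

```python
import os
import hashlib

def daily_key() -> bytes:
    # Each phone draws a fresh random key per day; it stays on the device
    # unless the user tests positive and consents to publish it.
    return os.urandom(32)

def rolling_ids(key: bytes, slots: int = 96) -> list:
    # Short-lived identifiers derived from the daily key and broadcast over
    # Bluetooth; observers cannot link them to each other or to a person.
    return [hashlib.sha256(key + slot.to_bytes(2, "big")).digest()[:16]
            for slot in range(slots)]

# Phone A broadcasts; phone B stores only the ephemeral IDs it overhears.
key_a = daily_key()
heard_by_b = set(rolling_ids(key_a)[:3])   # B was near A for three time slots

# If A tests positive, A publishes only its daily keys. B re-derives the
# IDs locally and checks for a match -- no location or identity is shared.
published_keys = [key_a]
exposed = any(rid in heard_by_b
              for key in published_keys
              for rid in rolling_ids(key))
print("Possible exposure" if exposed else "No recorded contact")
```

Centralised designs, by contrast, upload the contact data itself to a server, which is where much of the concern about surveillance and excessive disclosure arises.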

The case of the missing AI laws

As the adoption of AI continues to grow apace – in healthcare, as well as in other sectors such as transportation, energy, defence, services, entertainment, finance and cybersecurity – legislation has lagged behind, and there remains a significant gap between the pace of AI development and the pace of AI lawmaking. The World Economic Forum calls for much-needed 'governance architectures' to build public trust in AI and to ensure that the technology can be used for health crises such as COVID-19 in future. Several laws and regulations deal with aspects relevant to AI (such as the EU's GDPR on data, or various national laws on autonomous vehicles), but no country yet has specific laws on ethical and responsible AI. Several countries are discussing restrictions on the use of lethal autonomous weapons systems (LAWS).[1] However, governments in general have been reluctant to create restrictive laws.

A new report commissioned by the European Parliament will feed into the work of its Scientific Foresight Unit, STOA. The report, written by the Science Communication Unit, was led by Alan Winfield, Professor of Robot Ethics.

Broad ethical questions

Reviewing the scientific literature and existing frameworks around the world, we found there are diverse, complex ethical concerns arising from the development of artificial intelligence.

In relation to healthcare, for diseases like COVID-19 that spread via social contact, care robots could provide necessary, protective, socially distanced support for vulnerable people. However, if this technology becomes more pervasive, it could be used in more routine settings as well. Questions then arise over whether a care robot or a companion robot can really substitute for human interaction – particularly pertinent in the long-term care of vulnerable and often lonely people, who derive basic companionship from caregivers.

As with many areas of AI technology, the privacy and dignity of users need to be carefully considered when designing healthcare service and companion robots. Robots do not have the capacity for ethical reflection or a moral basis for decision-making, and so humans must hold ultimate control over any decision-making in healthcare and other contexts.

Other applications raise further concerns, ranging from large-scale and well-known issues such as job losses from automation, to more personal, moral quandaries such as how AI will affect our sense of trust, our ability to judge what is real, and our personal relationships.

Perhaps unexpectedly, we also found that AI has a significant energy cost and furthers social inequalities – and that, crucially, these aspects are not being covered by existing frameworks.

Our Policy Options Brief highlights four key gaps in current frameworks, which don’t currently cover:

  • ensuring benefits from AI are shared fairly;
  • ensuring workers are not exploited;
  • reducing energy demands in the context of environmental and climate change;
  • and reducing the risk of AI-assisted financial crime.

It is also clear that, while AI has global applications and potential benefits, there are enormous disparities in access and benefits between global regions. It is incumbent upon today’s policy- and law-makers to ensure that AI does not widen global inequalities further. Progressive steps could include data-sharing and collaborative approaches (such as India’s promise to share its AI solutions with other developing economies), and efforts to make teaching around computational approaches a fundamental part of education, available to all.

Is AI developed for the common good?

Calls have been issued for AI experts and developers worldwide to help find further solutions to the COVID-19 crisis – for example, the AI-ROBOTICS vs COVID-19 initiative of the European AI Alliance is compiling a 'solutions repository'. At the time of writing, 248 organisations and individuals were offering COVID-related AI solutions. These include a deep-learning hand-washing coach, which gives immediate feedback on hand-washing technique.

Other solutions include gathering and screening knowledge; software enabling a robot to disinfect areas, or to screen people’s body temperature; robots that deliver objects to people in quarantine; automated detection of early breathing difficulties; and FAQ chatbots or even psychological support chatbots.

Government calls for AI-supported COVID-19 solutions are producing an interesting ethical interface between sectors that have previously kept each other at arm's length. In the hyper-competitive world of AI companies, co-operation (or even information sharing) towards a common goal is uncharted territory. These developments crystallise one of the ethical questions at the core of AI debates: should AI be developed and used for private or public ends? In this time of COVID-19, increased attention from governments (and increased media attention on some of the privacy-related costs of AI) provides an opportunity to open up and move this debate forward. Moreover, the IEEE urges that the sense of 'emerging solidarity' and 'common global destiny' accompanying the COVID-19 crisis is a perfect lever for making the sustainability and wellbeing changes required.

One barrier to debate is the difficulty of understanding some of the most advanced AI technologies, which is why good science communication is crucial. It is vitally important that the public are able to formulate and voice informed opinions on potentially society-changing developments. Governments need better information too – up-to-date, independent and evidence-based forms of technology assessment. Organisations such as the Science, Technology Assessment and Analytics team in the US Government Accountability Office, or the European Foresight platform, are trying to enable governments and lawmakers to understand such technologies deeply while they can still be shaped.

In order to enjoy the benefits of AI, good governance frameworks are urgently needed to balance the ethical considerations and manage the risks. It is yet to be seen if the COVID-19-prompted developments in AI will herald a new era of public-private cooperation for the common good, but if there was ever a time to amplify this conversation, it is now.

Ruth Larbey, Science Communication Unit, UWE Bristol.


[1] Belgium has already passed legislation to prevent the use or development of LAWS. 


An Ethical Roboticist: the journey so far


What do robots have to do with ethics? And how do you end up with the job of “roboethicist”? Prof. Alan Winfield, Director of the Science Communication Unit at UWE Bristol, explains his recent professional journey.

It was in November 2009 that I was invited, together with Noel Sharkey, to present to the EPSRC Societal Impact Panel on robot ethics. That was, I think, my first serious foray into robot ethics. An outcome of that meeting was being asked to co-organise a joint AHRC/EPSRC workshop on robot ethics, which culminated in the publication of the Principles of Robotics in 2011 on the EPSRC website, with a write-up in New Scientist.

Shortly after that I was invited to join a UK robot ethics working group, which then became part of the British Standards Institution technical committee working toward a standard on robot ethics. That standard was published earlier this month as BS 8611:2016 Guide to the ethical design and application of robots and robotic systems. Sadly the standard itself is behind a paywall, but the BSI press release gives a nice write-up. I think this is probably the world's first standard for robot ethics, and I'm very happy to have contributed to it.

Somehow during all of this I got described as a roboethicist; a job description I’m very happy with.

In parallel with this work and advocacy on robot ethics, I started to work on ethical robots – the other side of the roboethics coin. But, as I wrote in PC Pro last year, it took a little persuasion from a long-term collaborator, Michael Fisher, to convince me that ethical robots were even possible. Since then we have experimentally demonstrated a minimally ethical robot; work that was covered in New Scientist, on the BBC Radio 4 Today programme and, last year, in a Nature news article. I was especially pleased to be invited to present this work at the World Economic Forum in Davos in January. Below is the YouTube video of my five-minute IdeasLab talk, and a write-up.


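For readers wondering what "minimally ethical" looks like in practice, here is a toy sketch in the spirit of the consequence-engine idea behind that work: before acting, the robot simulates each action it could take in an internal model of the world, predicts the outcome for a nearby human, and prefers actions that keep the human out of harm's way, even at some risk to itself. The one-dimensional corridor and the scoring are my own illustration, not our experimental code.

```python
# Toy "consequence engine": the robot simulates every candidate action in
# an internal model, predicts the outcome for a nearby human, and prefers
# actions that keep the human safe -- human safety outranks its own.

HOLE = 3                                  # hazard on a corridor of cells 0..5

def simulate(position: int, action: str) -> int:
    """Internal model: where does an agent end up after one action?"""
    return position + {"left": -1, "stay": 0, "right": 1}[action]

def choose_action(robot_pos: int, human_pos: int, human_action: str) -> str:
    def outcome(action: str) -> tuple:
        next_robot = simulate(robot_pos, action)
        next_human = simulate(human_pos, human_action)
        if next_human == next_robot:      # robot physically blocks the path
            next_human = human_pos
        human_harmed = next_human == HOLE
        robot_harmed = next_robot == HOLE
        return (human_harmed, robot_harmed)   # False sorts before True
    return min(["left", "stay", "right"], key=outcome)

# A human at cell 4 is walking left, straight towards the hole at cell 3.
# The robot at cell 2 steps right to block them, even at risk to itself.
print(choose_action(robot_pos=2, human_pos=4, human_action="left"))  # right
```

Ranking outcomes with human harm first and the robot's own harm second is a crude encoding of Asimov's first and third laws of robotics, which is essentially the behaviour our experiments tested.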
To bring the story right up to date, the IEEE has launched an international initiative on Ethical Considerations in the Design of Autonomous Systems, and I am honoured to be co-chairing the General Principles committee, as well as sitting on the How to Imbue Ethics into AI committee. The significance of this is that the IEEE effort will cover all intelligent technologies, including robots and AI. I've become very concerned that AI is moving ahead very fast – much faster than robotics – and the need for ethical standards, and ultimately regulation, is even more urgent than in robotics.


It’s very good also to see that the UK government is taking these developments seriously. I was invited to a Government Office of Science round table in January on AI, and just last week submitted text to the parliamentary Science and Technology committee inquiry on Robotics and AI.

You can find out more about Alan’s research and engagement on his own blog.

The story behind the cameras: filming robots


In 2013, just as I was finishing my Masters in Science Communication at UWE Bristol, I was asked to help film the euRathlon robotics competition in Germany. euRathlon was inspired by the situation officials faced after the nuclear accident in Fukushima in 2011. The competition challenges robotics engineers to solve the problems of dealing with an emergency scenario, pushing innovation and creativity in the robotics domain. The project is led by Prof. Alan Winfield from UWE alongside seven other partner institutions. The 2015 euRathlon competition in Piombino, Italy, combined land, sea and air challenges for the robots to overcome. Our 2015 film team was made up of three of us – Josh Hayes Davidson on graphics, Charlie Nevett on camera and myself, Tim Moxon, as producer and sound engineer – and we took with us all the lessons I had learned in 2013.

Filming robots, particularly complex robots designed to respond to emergency scenarios, is a daunting task. Making sure that we didn't get too technical was always going to be a problem, and we had the additional issue that English was not the first language of most of the people being interviewed, which really added to the challenge. With care and plenty of re-shoots, we managed to get around both problems by sticking to the golden rules: take it slow and keep it simple. This made sure that we never lost sight of what we were trying to do. Our focus was always to bring 21st-century robotics into the public eye.

The first two days of the competition presented the individual land, sea and air trials. On site, we first created two "meet the teams" films, in which we interviewed all 16 teams and got to know them. Luckily they were all super friendly and very cooperative, which meant we got all the teams interviewed in two days. After that the real work began. The land trials were easy enough to film, and we could get a good storyline of shots as the robots were almost always visible. The underwater robots, however, required a bit more imagination. In the end, a GoPro on a piece of driftwood got us the shots we needed.


The aerial robots posed some issues too, as getting long-distance shots was not always easy. Fortunately Josh and Charlie were more than up to the task.

Days 3 and 4 focused on combining two domains: land and sea, or air and land, and so on. Day 3 went well, with fantastic interviews with judges and teams helping to give the videos real depth. Again the underwater footage proved a bit challenging, but we managed with the help of footage the teams had taken on board their robots. Day 4 didn't go as well: the second half of the competition had to be cancelled due to strong winds. Wind had been an issue throughout the competition – all of our equipment required regular cleaning to keep the dust out, and we had to deal with constant wind when recording sound.

That day, however, you could barely stand in the open for all the dust and sand being kicked up by the wind, and getting good sound for interviews was nearly impossible. We could only hope that the weather would improve for the Grand Challenge.

The final days were the Grand Challenge – as much a challenge for us filming as for those competing. The timescale was tightening, as we had only two days to film, cut and polish the remaining two videos. With increasing pressure to produce high-quality products, we pulled out all the stops. Fortunately, all the teams rose to the occasion and provided us with some spectacular on-board footage, as well as some nice underwater diver footage. The Grand Challenge turned out to be a great success, with all the teams at least competing, even if they didn't all finish the challenge.

Tim Moxon completed the UWE Bristol Masters in Science Communication in 2013.

For more information about euRathlon, please visit the project website.
