Robot and human handshake
Shutterstock / Willyam Bradberry

Written by Lieve Van Woensel with Sarah McCormack.

A study on the ‘Ethics of Cyber Physical Systems’ has recently been published by the Science and Technology Options Assessment (STOA) Panel. This blog post was prepared using information from a technical briefing paper written for the study by Professor Michael Henshaw (Loughborough University, United Kingdom) and Joost van Barneveld, MSc (Technopolis Group, The Netherlands). The study examined seven key areas where cyber-physical systems (CPS) will have a significant impact. CPS are technical systems in which networked computers and robots interact with the physical world, and they are found in a wide range of services and applications. As their numbers keep growing with the continuous development of these technologies, we have to examine what impact they will have on citizens’ safety and security.

What impact will they have on our safety?

It is clear that we will encounter CPS more and more in the coming years, from driverless cars on the streets, to drones flying over crop fields, to smart homes reducing our energy bills. Currently, certain aspects of CPS are vulnerable to hackers and criminals. If hackers manage to infiltrate these systems, safety and security issues can follow, as attackers can access the personal data that CPS gather about individuals. Medical CPS devices, such as insulin pumps, could also be compromised; if hacked, they could be used to deliver a fatal dose to the patient. Thankfully, protection for CPS will come in several forms, one being quantum cryptography. This promises to better protect individuals through nearly unbreakable codes and, once applied to CPS, will increase the security of these systems.

Can we trust them?

As we start to live our lives alongside these autonomous machines, we need to examine the ethical questions raised. Are we safe in close contact with machines that have no moral or ethical codes to follow? Can we ensure that the lack of ethical behaviour does not lead to breaches in security (e.g. drones being used to collect personal data) or safety (e.g. military robots mistaking civilians for opponents)? Who will get to decide what moral code they should follow? Should it be up to governments to legislate on a particular ethical code that these systems ought to follow? Such questions will need to be addressed as these systems become more commonplace, while we endeavour to minimise risks for citizens.

Can security and safety benefit from CPS?

There are several key benefits to the implementation of CPS for both our safety and security. Drones can be used by governments to monitor borders, providing information to the relevant authorities to help them minimise the number of illegal immigrants entering a country. Driverless cars are not subject to road rage, nor do they become tired or aggressive; they are also better at calculating manoeuvres than human drivers, and are thus safer. CPS will also aid relief workers: they can access dangerous sites in disaster relief, not only helping to save victims but also keeping the workers safe.

What next?

CPS are here to stay, and the development of these technologies is expected to bring many benefits for security and safety. Yet there are still key vulnerabilities within these systems, and ethical questions that cannot be ignored. After all, the positive and negative effects may balance out, leaving the world neither necessarily safer nor more dangerous than before the introduction of CPS. It is evident that the development of these systems will require changes in legislation to take into account the risks that they pose, and to ensure that citizens remain safe and secure in a world shared with CPS.

For more information about CPS, check out this STOA video.

[youtube=https://www.youtube.com/watch?v=c5gu8xmmum4&w=640&h=389]