Autonomous systems are being systematically developed, commercialized, and deployed to make life-altering decisions on behalf of individuals and businesses, and to enhance the ability of human workers to perform tasks that would otherwise be dangerous, costly, or impossible to undertake.
We are already well along the path.
This shift promises to create unprecedented new opportunities and challenges alike. Autonomous systems could be employed to address the needs of millions of people in countless industries and communities worldwide.
Yet they also raise critical questions that go to the heart of who we are as human beings, such as:
What would happen if a self-driving car hit a pedestrian? Would we even know when a self-driving car had killed someone, made a mistake that resulted in injury, or caused an accident?
Or what would happen if a predator drone targeted the wrong person or crowd? Would we trust the technology enough to let it make decisions about whom to kill, or when (not) to kill them?
Society is only just beginning to wrestle with the challenges posed by several disruptive developments in science and engineering, such as superintelligence, bioengineering, and the next internet revolution.
Now, imagine the impact on our communities and society if autonomous systems were deployed to streamline the movement of people and goods by another factor of ten.
It is consequently critical to identify a way forward for society to manage the risks of deploying autonomous systems, preventing both the misuse of such systems and the perils of automation in general. At the same time, we can agree that the potential benefits of autonomous projects such as self-driving vehicles, and of automation in general, are clearly within reach.
But we cannot let that potential pass us by.
Autonomous systems such as robotic process automation, self-driving cars, and automated data manipulation are not prone to human error or conscious bias. However, their interactions with humans, and the data they use to make decisions, carry a risk of unintended bias.
In the past, and to some extent still today, people have acted based on prejudice, fear, bias, and a plethora of other factors that have little to do with objective reasoning and decision making.
While the immediate concern is the potential misuse of automation, it is important to understand that it also offers enormous potential benefits in terms of saving lives, enhancing productivity and facilitating connectivity and social mobility.
However, simply abandoning well-established norms of behavior in favor of a technological utopia could prove to be extremely problematic.
As with any powerful new technology, I believe proceeding with care and caution is paramount.
As more autonomous systems become integrated into our society, the potential for abuse and vulnerability undoubtedly continues to grow as well. Systems automation will inevitably lead to a world where an algorithm can carry out the risk assessment of a terrorist threat while human operators sit behind their desks, intervening only when, for example, a self-driving vehicle malfunctions and strikes a crowd.
This inevitably risks blurring the line between machine and human, but it also creates an opportunity to develop machines that can learn to perform in dangerous situations on their own and without prejudice.
The responsibility of the human-machine partnership lies in making sure that the technologies behind automation work in accordance with society’s core values. Doing so requires authorities and research institutions to set the agenda and parameters for automation.
These parameters should be shaped by the values, morals and ethics of society at large, a key factor in ensuring that automation serves the public interest rather than posing risks to it. Unfortunately, we know that such values are not naturally communicated nor are they explicitly expressed in the text of laws and public policies.
Still, it is possible to create codified sets of values to drive automated decision making in society.
I believe that if society comes to agree that these values are reflective of human nature, then the whole purpose of automated decision making is served.
But where do these “values” come from, you ask?
This is an area where autonomous systems have a major role to play. When these values are grounded in real-world data and, most importantly, are subsequently shared and approved by society, they can drive automated decision making.
For example, a system that aims to be trusted by its users and to protect human life cannot knowingly choose an objective that causes death, nor tolerate malfunctions that put lives at risk.
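To make the idea of codified values concrete, here is a minimal, purely illustrative sketch in Python: a hypothetical decision layer that screens candidate actions against a hard constraint (never knowingly risk human life) before optimizing for utility. All names, scores, and the structure of the "values" are invented for illustration, not drawn from any real system.

```python
# Toy sketch: codified values as hard constraints on automated decisions.
# Every name and number here is hypothetical.

def violates_core_values(action):
    """Reject any action whose predicted outcome conflicts with a
    codified core value (here: never knowingly harm a human)."""
    return action.get("predicted_harm_to_humans", 0) > 0

def choose_action(candidates):
    """Pick the highest-utility action among those that pass the value
    screen; refuse to act (defer to a human) if none pass."""
    permitted = [a for a in candidates if not violates_core_values(a)]
    if not permitted:
        return None  # no acceptable action: hand control to a human operator
    return max(permitted, key=lambda a: a["utility"])

candidates = [
    {"name": "swerve", "utility": 0.9, "predicted_harm_to_humans": 1},
    {"name": "brake",  "utility": 0.6, "predicted_harm_to_humans": 0},
]
print(choose_action(candidates)["name"])  # prints "brake"
```

The point of the sketch is the ordering: the value check runs before any utility comparison, so a higher-scoring but value-violating option ("swerve") can never be selected, and when every option violates the codified values the system abstains rather than acts.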
Autonomous systems represent a leap forward not only in complexity, bringing an enormous opportunity to create new jobs and industries, but also in feasibility, capability, and practicality. At the same time, allowing a technological advancement that could save millions of lives to become ‘the thing’ that ends it all should in itself raise eyebrows and set alarm bells ringing.
Perhaps not yet. Perhaps not for many years.
However, in considering this issue, neither our reliance on technological advancements nor the ethical implications of such developments should be taken lightly. I believe the stakes are too high to accept shortcuts.
But as this discussion continues and autonomous systems in their most advanced and futuristic forms (such as supercomputers) become mainstream, we need to think about how to mitigate the potential negative aspects rather than passively accepting them as a ‘coming of age’ scenario.
Otherwise, what was ‘the thing’ of the future now becomes the present.
Frankly, we are already at the point where technology may soon be brought to autonomously alter life, as expected with neural implants and their potential to automate learning in humans.