Alexandra Tselios

Misguided outrage at Sophia the AI highlights our hypocrisy towards robotics

When Sophia was granted Saudi Arabian citizenship, the world lost the plot. However, both our fear and outrage hinge on one flimsy fallacy. All we need is education.

When Sophia the humanoid robot was made a citizen of Saudi Arabia there were two visceral reactions. One was of disgust, pointing at the hypocrisy of the act in a country where women’s rights are non-existent; the other was fear.

It is one thing for analysts around the world to comment on the implications of the citizenship, but it becomes far more alarming when the ripple effect of a PR stunt results in the alleged suicides of women. The director of the Institute of Gulf Affairs has alleged: “…women (in Saudi Arabia) have since committed suicide because they couldn’t leave the house, and Sophia is running around. Saudi law doesn’t allow non-Muslims to get citizenship. Did Sophia convert to Islam? What is the religion of this Sophia and why isn’t she wearing hijab? If she applied for citizenship as a human she wouldn’t get it.”

Much of the outrage rightly included the stark reality this gesture reminds us of: that Saudi law values technology more than women. Yet I must admit, I had to force myself to feign horror at said hypocrisy.

Let me explain. It is not that the reality is not horrific, but the act itself shines yet another uncomfortable light on a part of the world that many of us in Australia are outraged by, and feel powerless about. Yet, with equal energy, we get fearful about the very existence of Sophia and what her place in the world represents.

That is the hypocrisy I want to tackle.

It is the outrage at what she represents to those of us around the world that I find hypocritical. I believe that Sophia’s citizenship is little more than a token gesture and an attempt to draw worldwide attention to the conference; and it worked. How often do you see the Saudi Arabia Future Investment Initiative splashed in images across non-tech or business-related mastheads?

When it comes to outrage, the citizenship of Sophia does not hold as much weight for me as Saudi women’s lack of rights to get a job, travel without a male guardian or undergo elective medical procedures.

The truth is, robot citizenship does open up ethical and legal questions. For me, though, this is an abstract discussion that avoids a much broader issue facing society today: understanding at a palatable level what Sophia represents in everyday life, as well as the discrepancies in what we fear and why. When I sat with a group for dinner a few months back, there were many robust discussions around Artificial Intelligence, all linked with the same, particularly loud concern:

‘Will robots be the end of mankind?’

I always find the question, perhaps in unfair judgment, lacking an understanding of the role robotics already plays and will continue to play. There are certain points of contention. The subject of this piece already famously said she would destroy humans, and then, during a debate with a more aggressive robot, quipped that he had a ‘cockroach in his circuit’ in reaction to his lack of warmth.

Discussing the concept of whether or not Sophia could be considered sentient is always met with an uncomfortable shuffle. Yet, the lack of comparisons drawn across other types of AI is short-sighted, and signals a lack of understanding around one thing: the technology within Sophia is already being used across a range of technology offerings that we use daily.

The Future of Life Institute, for example, discusses the research goals that are based on growing prosperity through automation, and making our legal systems equal. The most exciting impact that AI has for humanity in my view is the ability to equalise resources worldwide.

This is both from a legal framework perspective and also that of automating resources.

Whether it comes encapsulated in a human form (Sophia) or a chatbot (AI lawyers that, in 2017, actively represent clients) is irrelevant.

Sophia is gaining worldwide attention, but less so than ROSS. ROSS is described by its creator as a ‘supercharged lawyer with artificial intelligence’, while a chatbot lawyer in New York City has helped appeal over four million dollars in parking fines over the past two years.

So, what exactly should we be concerned about?

It’s not ‘robots taking over’ per se, but rather governments and bodies that are too slow to understand the impact on society, and a lack of regulation governing how AI is used and by whom.

Embracing the way AI can change the world requires us to navigate a slightly uncomfortable period in our history, but one that can bring about the shifts in our social, financial and legal frameworks that have previously eluded us. Elon Musk has discussed his concerns, even saying that AI could trigger World War 3 – but it would be premature to read his comments as a fear of AI itself. As he put it: “AI is the rare case where I think we need to be proactive in regulation instead of reactive. Because I think by the time we are reactive in AI regulation, it’ll be too late”.

The reality is, Tesla uses AI to make its cars smarter – including its self-driving chips – and SpaceX uses software that generates customised flight code, enabling high-speed onboard convex optimisation. Many seem to forget the conversations are not about restricting the technology, but about managing it in a systematic way that both encourages innovation and minimises destructive outcomes.

Much of what we do online is already underpinned by self-learning AI, and the concerns are less about AI and its possibilities than about a lack of clear frameworks for robots to exist within our day-to-day lives. Partnership on AI, a not-for-profit venture between parties such as Facebook, Google and Amazon, is one example of collaboration with the intention of forming best practices in the age of AI.

What the debates Sophia has been part of – right up to the UN discussions – show me is that we are moving towards such technology meeting needs of humanity far more nuanced than simply a self-learning algorithm in our iPhones. Take, for example, Apple’s Neural Engine, which integrates machine learning algorithms and facial recognition technology into our devices.

While it is evident that robo-advisors, AI-embedded customer service operators and robots that lay bricks at three times the rate of human bricklayers will be part of the reason many employees across markets will be replaced by Artificial Intelligence agents by 2025, it serves no one, and no industry, to focus on the fear rather than the preparation that is required.

AI isn’t coming, it’s here.

Dominance in AI across industries is not merely an uptick in investment; it’s a reality that we need to unpack if we want to compete. The real question that should be plaguing organisations is not whether or not to invest in AI now, but how to do it in a way that is commercially advantageous and drives outcomes that add to the bottom line.

Sophia being given a token citizenship has certainly rattled social media’s cages of inequality and reignited a debate we quite frankly should never let taper off, and yes, there is much work required of industries and governments to collaborate when it comes to establishing disciplines; but should we be outraged and fearful of such a PR stunt?

No.

Not if we are realistic about the fact that an AI reality is already here; we must simply find a universal way to manage its impact.

 

Alexandra Tselios

Founder and CEO of The Big Smoke, Alexandra oversees the leading digital content platform in both Australia and the USA. As a social and technology commentator, she is interviewed most days of the week on radio and appears on ABC's The Drum and ABC News24. Alexandra is also a Director of NFP think tank, Plus61J, which explores the political and social ties between Australia and Israel; and sits on the board of Estate-Planning FinTech start-up NowSorted.
