Ethics of Artificial Intelligence (AI)


A serious question facing every citizen today is how to organise, regulate and govern the emergence of artificial intelligence (AI).

In a recent widely publicised interview, Sundar Pichai, Chief Executive Officer (CEO) of both Google and Alphabet, is quoted as saying:

“International alignment will be critical to making global standards work. To get there, we need agreement on core values. Companies such as ours cannot just build promising new technology and let market forces decide how it will be used. It is equally incumbent on us to make sure that technology is harnessed for good and available to everyone.”

This is a useful statement, because Mr Pichai identifies both the central role of human values and the connected role of distributed intelligence, or multiple communities of interest, necessary to address AI-related risk.

To date, much ethical inquiry into AI has been driven from within an industry unprepared to deal with one of the most powerful technologies ever developed.

It is an industry certainly ill-equipped to deal with the level of risk that AI poses to our newly integrating digital world.

It is also an industry that bears much responsibility for this technology and its potentially dangerous impacts.

The implication is that no specialist group or single academic expert has any hope of addressing such risks without, as Mr Pichai implies, a coherent grasp of human values and a broad community of citizens involved.

People might name it benignly 'The Fourth Industrial Revolution', when in fact AI is much more than a complex development of a single technology, computing.

If AI continues to spread and grow exponentially, as it already is, then in the not-too-distant future it may become so powerful that a single person using it could threaten the entire world.

What about AI-powered wars between governments fixated on acquiring new resources, or power and control over entire populations?

Just imagine a mad scientist, or more likely a military-industrial laboratory, using AI to splice a mutant gene that could kill every human being on the planet, and that no other AI can stop in time.

Or perhaps a malicious AI capability that destroys the global financial system, or a new class of regulation that makes us all subservient to AI overlords?

Your health records, your private data, your life-support systems, your life as you know it or expect it to become, may each, or all, be suddenly denied or destroyed as a direct consequence of poorly designed AI systems.

The power of AI technology gives god-like powers to a young industry and some pretty nasty characters outside of it.

People with money and power who, it seems, are already willing to do untold harm: harm that ordinary people would not contemplate, yet which today is everywhere in evidence.

I call them the death cults, for that is what they are. People prepared to murder, kill and maim anyone who gets in their way. The rule of law is a smokescreen to such people, a mere fig leaf to hide their crimes behind.

No, at this juncture in history, in the animal development of a violent and aggressive species, we should be very careful about who controls AI, or we will add to the risks of an already catastrophically unbalanced civilization.

For citizens working flat-out to survive, facing threats from a raft of global catastrophic risks (GCRs), such speculations may appear fanciful or premature.

Yet this is exactly the time to review the broad issues the Google CEO identifies. Otherwise, it will simply be too late to unwind what has been left to chance.

Remember, there are many scenarios in which AI is more dangerous than anything human beings have experienced before.

As the late, great scientist Prof. Stephen Hawking warned,

     'The development of full artificial intelligence could spell the end of the human race.'

It's a revolution that truly can change everything, at the flick of a paranoid switch.

So, how does communication ethics support this inquiry into AI, and can it assist world citizens in confronting these profound questions?

Well, a central point is that there are no stakeholders in this debate. There are only citizens with rights to life.

Yet governments routinely diminish these rights in the spurious name of efficiency or balance.

If citizen or human rights are not universally applied, there are no ethics in AI or anywhere else.

Simply because of this: if ethics is defined as the 'examination of values' that each citizen carries throughout their everyday life, then to consider AI ethics is first to consider the right to hold values and to be part of ethical inquiry.

Who should be excluded? Again, an old model of self-entitlement suggests the powerful and the specialists should be left to decide.

Yet, look at what a mess the world is in. Look at the size of the problems emerging alongside AI.

Can we afford any longer to believe the old lie, that some are better than others? Surely, human and civil rights accord each citizen the basic protections which all should be afforded?

In the case of AI ethics, which is and will become entwined within every dimension of our world, all citizens will have an interest in such matters.

Specialists love to tell us that ethics is a complex subject, but actually it's the reverse. It's life that is complex.

Ethics is the simple human experience of examining life, at the always subjective level.

Every human being is subjective; it is only by agreement that we are able to discern something common.

Everything may be disputed in this sense. My values do not agree with yours.

Why? For a thousand thousand reasons. The critical issue is the quality of agreement over our different values, which today is necessarily brokered over increasingly short time-frames.

This is where communication ethics begins to support citizens to assess the potential impacts of AI.

It does so alongside all the other GCRs, with the members of multiple communities of citizens brought together to make decisions in legally constituted forums.

They can communicate both in person and using the amazing powers of AI-enhanced digital communication.

Yes, the paradox exists, that we can use AI to resolve the threats of AI.