Could Philosophy help us use AI wisely?

May 23




Helen Whitten


A wave is about to crash down on us in the form of AI programmes and robotics.  There is no question that this will happen.  The question is: can we use this new and amazing technology wisely?  And a second question: could philosophy help us to do so?

There is a major difference between being clever and being wise.  A clever breakthrough or clever innovation may not, in the long run, prove a wise decision for humanity.  We are seeing how smartphones and social media are interrupting the natural development of children’s social lives, and we are only beginning to wonder what to do about it.  That wave has already crashed, and it is difficult to reverse now.

We have witnessed recently, with the Horizon Post Office scandal, the terrible destruction that can occur when IT technology is allowed to run rampant without the humans controlling it pausing to reflect on the consequences.  What makes a group of adults become ‘wilfully blind’, in Margaret Heffernan’s excellent phrase, to what is happening before their eyes?  To knowingly do injustice and harm to others?

Technology can deskill us.  We give way to what we assume is something cleverer than we are.  We can see this in our own lives when we submit to a SatNav’s directions even though we know a route like the back of our hand and have a better way to go.  Garage mechanics become deskilled as so many parts of a car’s engine are now monitored by computer rather than by hand; if the computer fails, they are lost.  Medics, similarly, are being led by scans and computer diagnostics where they used to make the diagnosis themselves.  And yes, this can sometimes be more accurate, but where a scanning or X-ray machine is not available, or there is a long delay before screening, that medic needs to retain the confidence to make the diagnosis without technology.  I have personally heard of two near-deaths caused by a burst appendix because a doctor was waiting for an X-ray machine, where in a previous era it would have been up to the medic to make the diagnosis for surgery.  We have to watch this tendency.

There are all kinds of clever individuals creating AI technology, and there are many brilliant applications for it, but how can we encourage the developers to stop and reflect on the wisdom of their inventions?  Or the ethics?  Does there not need to be a wise figure in the room to nudge them in the direction that benefits humanity rather than one that merely makes their business wealthy and successful?  Could, in fact, that wise figure even be AI-generated?

I was reading some Plato the other morning, as you do, and it occurred to me how far the governments of countries and the governance of business environments have diverged from his ideas of how to govern.  He predicted that Athenian leaders would gain power by telling voters what they wanted to hear rather than defining a strategy for the future and a set of principles and values by which to direct the state’s course.  His solution was to ensure that politicians worked for the good of the state and adhered to their principles through contemplation of, and reflection on, the good, rather than through the need for a vote.  He was an elitist, yes, and believed that leaders should be well-educated generalists, but that they should have studied mathematics and philosophy if they were to govern well.  Philosophy, after all, is the love of wisdom.

I think his concept was that the ideas, strategy, principles and direction should be created by the leaders, while the public servants should be those who excelled at administration and project management.  He saw the ethics of the state as essential: the driver and shaper of individual action.  When there is a lack of principle in government, therefore, this can lead to a lack of trust and a rottenness within society.  As the title of Bob Garratt’s book suggests, The Fish Rots from the Head.

So, back to my point about whether AI could actually drive the ethics: this would surely be all about how any AI innovation is programmed.  Fill it full of rubbish and we will get rubbish.  Fill it full of evil and we will get evil.  Fill it full of Plato and the wise words of other philosophers, and perhaps we could receive wisdom?

OK, this may sound far-fetched, but when I look back over the technological innovations I have witnessed in my lifetime, I observe that since my family rented our first television in around 1955 we have watched some amazing programmes.  And yet now, with all the streaming channels available, there is a preponderance of mindless game shows and a vast amount of violence depicted on screen.  Couldn’t we do better?  Couldn’t someone in the boardroom encourage scriptwriters to stop and reflect on whether they could produce more uplifting dramas?

People are turning off 24/7 global news because they cannot stand so much negativity being brought into their sitting rooms.  Human beings need hope, and it is clear that our younger generations are desperate to be fed some optimism.  And it isn’t as if there is no hope to be had: even with all the problems we face in the world today, including climate change, there is nothing to say we lack the ability to find solutions to them.  The key is to believe that we can do so; otherwise people give up.

Plato believed that the health of a society depended on minds being fed with education and inspiration to instil a sense of moral value and a wish to contribute to the community.  He felt that children should not be exposed to negative images, or to literature that glorified lying, violence or lack of self-control, but should be provided with examples and role models of justice and self-discipline.  I think he would be pretty horrified by Naked Dating or the scenarios exposed during this year’s Eurovision Song Contest!

My point isn’t that governing politicians or boards of directors should give way to AI technology.  It is that perhaps there could be an AI programme that stood in the corner of the room, so to speak, to ask leaders to stop and reflect on whether an action was wise or ethical, whether it would benefit humanity for the greater good or only benefit one political party or one tech company.  If an AI robot were programmed with the wisdom of the ages, could it not make decision-makers stop for a moment and contemplate in stillness the potential consequences of any future actions?  This should happen anyway, of course, but we only have to look at Horizon and other recent examples of political, medical and business malpractice to realise that it is not happening.

Could an AI robot be the Oracle in the Room, not to force a decision but to make those responsible for it consider the wise and altruistic option rather than the merely selfish one?
