Artificial intelligence – a dark future?

By Pat Chapman-Pincher


Stephen Hawking has always been one of my heroes, not just because he has a brilliant mind, but for the way in which he has never let adversity get in the way of what he wanted to achieve. He has also managed to maintain a sense of humour through all the problems he has faced. However, I was surprised to read that in his view “the development of full artificial intelligence could spell the end of the human race”.

Elon Musk, who also warned rather darkly that “we should be very careful about artificial intelligence”, joins him in this view. He goes on to say, “If I had to guess at what our biggest existential threat is, it’s probably that.”

It’s interesting that scientists and engineers, who have spent their careers encouraging scientific and technological progress, should suddenly want to slam on the brakes.


Artificial Intelligence (AI) poses undoubted risks, though not yet and possibly not soon. The danger of creating an AI with the capability to dominate humanity is no longer the stuff of science fiction. Nick Bostrom, in his excellent book “Superintelligence”, published last year, makes a strong case for the effective management of AI and for the need to create AIs that have good intent towards humanity. He is more optimistic than I am about humanity’s ability to manage that development.

The World Economic Forum has recently published a blog asking, “What risks does artificial intelligence pose?” It references an open letter from top AI researchers in industry and academia (including Hawking and Musk), which argues that AI research and application are moving forward, probably at a faster pace than most people realise, and that their impact on society will only increase. The letter therefore calls for research to ensure that AI systems behave in ways that are beneficial to humans.


The fear is that if AI falls into the wrong hands, it will be used to create not prosperity for the many, but prosperity for the few and a dystopian society for everyone else.

Now, for many of us, this seems both distant and alarmist, and it is very tempting to dismiss the debate as something that can be left to the scientists.
But I believe that business has a real role to play here. Governments, not all with benign intentions, will fund some AI research. But some of the research and almost all of the application will be funded by industry, and there needs to be real thinking about what we are trying to achieve as we push the boundaries of AI and robotics ever further.

It seems to me that this is an area where the boards of technology companies should be thinking hard about what they are doing and whether it benefits humanity. This is far from what many boards do (in an earlier blog I looked at the questions I think boards should be asking to manage this technology threat), and it will require them to think beyond the incremental business strategy that so many companies have.

We can be sure AI will happen; we cannot be sure it will not fall into the wrong hands. If we look at the example of nuclear weapons, which present a lesser danger than a malign AI, we have to recognise our global failure to stop them falling into the hands of a number of undesirable people.
