Who’s to blame when AI systems go wrong?

Pat Chapman-Pincher


We are on the cusp of extraordinary times. Businesses are automating functions from online bookings to telling you which lift to take for the quickest ride to your destination.

We are all too familiar with computer system failures: the call centre operator saying, “I’m sorry, the system is down”, or the supermarket checkout that insists there is an “unexpected item in bagging area”.

How often have you found yourself shouting at a machine, or wishing that it did what you wanted it to do as opposed to what you told it to do?

We’ve been taught why machines go wrong: it’s the fault of the programmer – garbage in, garbage out, we say. We blame the computer, but we know it’s not really its fault. (The exception to this, in my view, is the copying machine because, as anyone who has tried to operate one with a deadline looming knows, they have evil personalities!)

But what happens as systems become increasingly intelligent? When, perhaps, they are more intelligent than we are? There may eventually be a time when the world is run by intelligent machines that communicate with each other and do a far better job of organising the world than we do. Experts estimate that computers will surpass human brainpower in 2023.


Already, the driverless cars being trialled all over the world are communicating with each other. So if they are involved in an accident, whose fault is it? The car, the firm that made the car, or the firm that wrote the software?

For a long time, however, we are going to have to live with human beings who are assisted, augmented, and directed by computers, which means that if there is an accident it will not be clear-cut who holds the liability. Is it the human or the AI that has ultimate responsibility? In the short term it is clearly the human, but as the machine becomes more intelligent than the human, what then?

How are we going to manage the process of making those decisions? There is some thinking going on in the insurance industry, particularly around driverless cars, but nothing like enough thinking is being done on a topic that is suddenly becoming very real. As with all thinking about the future, we tend to do it too little, too late. The march of technology is inexorable and will not wait for peripheral industries to catch up.

We tend to assume that responsibility always lies with a human, but we already have a precedent for non-human entities being held responsible for civil and criminal misdemeanours: the law holds that companies can have legal personality, can commit crimes, and can be tried and sentenced for them.

So we already have precedents for non-human agents being held responsible for damage to humans; perhaps we now need to develop the thinking around how we might hold machines responsible for their own actions.

It sounds futuristic, but as always with the future, it will be upon us before we are ready for it. Which futuristic developments are you looking at, and what can we learn from them now?
