Now honey, go apologize to the robot

AI and machine learning (ML) are finding some practical applications, such as detecting fraud (or ignoring cases of non-fraud) and detecting how you’re feeling when you call customer service (angry), as well as some less practical ones, such as adding a smile to any headshot.

A common thread among many of the current practical applications is that they involve no direct communication between human and machine. Adoption of more interactive applications that put user and machine in conversation, such as cleaning, filling quick-serve restaurant orders, and retrieving information through voice recognition, may be slowed by human rejection of machine error.

Machine learning makes predictions, and sometimes they’re wrong

ML, and the actions or suggestions it produces, is based on statistical inference, not formal logic. From Wikipedia:

Machine learning is the subfield of computer science that gives computers the ability to learn without being explicitly programmed. Evolved from the study of pattern recognition and computational learning theory in artificial intelligence, machine learning explores the study and construction of algorithms that can learn from and make predictions on data – such algorithms overcome following strictly static program instructions by making data driven predictions or decisions, through building a model from sample inputs.

In some sense, it learns much the way a human does. Google showed a machine 10 million images, each containing some part of a cat, whether an entire cat or just an ear. The machine achieved 74.8% accuracy when identifying cats, which is impressive, but it also means it was incorrect nearly one time in four.

Again, there was no formal logic, just 10 million cat pictures that the machine analyzed to infer what is and is not a cat. In contrast, formal logic would provide explicit, mutually exclusive rules for identifying what is a cat.
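To make that contrast concrete, here is a minimal sketch, a toy illustration only, not Google’s system: two hypothetical numeric features stand in for image measurements, and scikit-learn’s LogisticRegression infers a decision boundary from labelled examples instead of following a hand-written rule.

```python
# Sketch: formal logic vs. statistical inference on toy "cat" data.
# The features (ear_pointiness, whisker_score) are hypothetical stand-ins
# for whatever a real image model would actually extract.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
cats = rng.normal(loc=[0.8, 0.7], scale=0.2, size=(500, 2))
not_cats = rng.normal(loc=[0.3, 0.2], scale=0.2, size=(500, 2))
X = np.vstack([cats, not_cats])
y = np.array([1] * 500 + [0] * 500)  # 1 = cat, 0 = not cat

# Formal logic: an explicit, hand-written rule.
def rule_based_is_cat(ear_pointiness, whisker_score):
    return ear_pointiness > 0.5 and whisker_score > 0.5

# Statistical inference: the model estimates a boundary from the examples.
model = LogisticRegression().fit(X, y)

# Because the two groups overlap, no boundary is perfect; the learned
# model simply minimizes error on the data it has seen.
rule_preds = np.array([rule_based_is_cat(*row) for row in X])
print(f"Hand-coded rule accuracy: {(rule_preds == y).mean():.1%}")
print(f"Learned model accuracy:  {model.score(X, y):.1%}")  # high, not 100%
```

The last line is the point of the sketch: a learned boundary can beat a hand-written rule, yet its accuracy is still statistical, high but not perfect.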

Because predictions are made by statistical inference, errors are inevitable. Novelty, in particular, is not something ML handles well: a model can only choose among the categories it has already seen.
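A hedged sketch of that novelty problem, again with toy data rather than a real system: a classifier trained only on “cat” and “dog” examples has no way to answer “neither,” so it forces any unfamiliar input into one of the classes it knows, often with high confidence.

```python
# Sketch: a model trained on two classes must label a novel input as one
# of them; it cannot say "I have never seen anything like this."
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
cats = rng.normal(loc=[0.8, 0.7], scale=0.1, size=(200, 2))
dogs = rng.normal(loc=[0.2, 0.3], scale=0.1, size=(200, 2))
X = np.vstack([cats, dogs])
y = np.array(["cat"] * 200 + ["dog"] * 200)

model = LogisticRegression().fit(X, y)

# A point far from anything seen in training (a "toaster," say).
novel = np.array([[5.0, -4.0]])
print(model.predict(novel))        # still answers "cat" or "dog"
print(model.predict_proba(novel))  # typically near-certain either way
```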

Humans can be sympathetic, just not with algorithms

Human error is often written off as an inevitability: bad days, turbulent relationships, and ‘brain farts’. Sure, there’s limited tolerance for it, but in general, the deeper the emotional connection with a person, the more that’s forgiven.

The same acceptance isn’t applied to algorithms. In his 2011 book on behavioural economics, Thinking, Fast and Slow, Daniel Kahneman outlines how the reaction to an incorrect heart attack diagnosis differs based on who (or what) made it.

The prejudice against algorithms is magnified when the decisions are consequential. Meehl remarked, “I do not quite know how to alleviate the horror some clinicians seem to experience when they envisage a treatable case being denied treatment because a ‘blind, mechanical’ equation misclassifies him.” For most people, the cause of a mistake matters. The story of a child dying because an algorithm made a mistake is more poignant than the story of the same tragedy occurring as a result of human error, and the difference in emotional intensity is readily translated into a moral preference.

Posed a different way: if an algorithm were better at detecting a disease than a human, wouldn’t we obviously want to use the machine’s judgement? Meehl and Kahneman suggest that line of thinking is more rational than people actually are.

Sure, the algorithm may be better at detecting disease, but can it be trained to garner empathy through apologies, and are humans ready to accept them?

Forgiveness for mistakes is driven by human connection

When we look at the drivers of malpractice suits, the primary one is dislike for the doctor. Basically, a doctor can make a mistake and be forgiven if the patient (or the patient’s loved ones) feels positively toward them. From the Journal of the American Medical Association:

Risk seems not to be predicted by patient characteristics, illness complexity, or even physicians’ technical skills. Instead, risk appears related to patients’ dissatisfaction with their physicians’ ability to establish rapport, provide access, administer care and treatment consistent with expectations, and communicate effectively.

This means that without human connection, AI may deliver fewer deaths, but perhaps not fewer lawsuits.

Treating machines like people will unlock their benefits

Let’s step back from the stakes of life and death and return to a futuristic quick-serve restaurant where a machine has filled an order incorrectly.

Without accepting the (projected to be fewer) mistakes made by machines, we may never realize their benefits. If error rates drop but complaints rise, the promise of better service and lower costs may never materialize.

If these benefits are desirable (and worth their costs), we may soon be teaching our children to forgive their robot brethren, because they, too, make mistakes.

What do you think? Leave us a comment
