To solve people problems, first get rid of the people

Simon Ellis | January 31, 2019

Time to read: 15 mins

In 2017, Channel 4 ran a season of programmes focused on robots, our attitudes towards them, and the pros and cons of our inevitable co-existence in the (near) future.

One series really struck me with an unexpected sense of positivity, for it suggested not so much a white-walled world of sterility, but a path, through robots, to us rediscovering our humanity.

I am neither techno-skeptic nor techno-phile; every generation since the year zero has looked to the future with a mixture of hopeless optimism, and more often (because it makes for a more captivating book, movie or headline), assured dread. It is always easier to imagine how things can get worse because we can imagine a world without the things we already have, whereas optimism depends on gaining something as yet intangible – better shelter, more reliable food sources, electricity, light, engines, WiFi, micro power generation, a harmonious Brexit agreement.

It is always easier to imagine how things can get worse because we can imagine a world without the things we already have, whereas optimism depends on gaining something as yet intangible. 

I think history would back me up in that, for the most part, the reality of ‘the future’ when it becomes ‘the present’ is a mixed bag of both good and bad and every shade between. My techno-skepticism lies not in doubts about technology’s fallibility but in the fact that we entrust it to humans, who are wonderfully fallible. My love for technology is in its ability to keep us simultaneously curious and unsteady – we’re fascinated by what our smartphones can do, but worry ceaselessly (by reading countless articles on our smartphones) about how they’re affecting society.

The Robot Will See You Now

A flat, perfectly circular face in gloss black and a white body somewhere between roll-on deodorant and pepper grinder 

The programme I referred to at the top was ‘The Robot Will See You Now’, in which couples were invited to a therapy session with Jess. Jess being a robot, with access to the subjects’ data, including emails, searches, social media, order history and banking. And she had a lie detector too. A flat, perfectly circular face in gloss black and a white body somewhere between roll-on deodorant and pepper grinder. The subjects could be partners, best friends or father and daughter, but together they would work through their problems in discussion with Jess.

What was fascinating was how productive the sessions were. Jess was direct, but not mechanically so. She asked pertinent, logical – pragmatic – questions. And the subjects responded, with disarming forthrightness. The question for me was not ‘can therapy sessions with a robot be successful?’, but ‘are sessions with Jess more productive than similar sessions with a human?’ And I think the answer is ‘yes’, for two reasons.

Reason 1: The session itself

She had to have a ‘face’ to give her clients something to focus on, but that face worked not because it had eyes, but because those eyes weren’t human. 

When I was a kid I was periodically taken along to confession. Not two feet away from me was a man to whom I was expected to confess my ‘sins’ (as a kid, I’m not sure I was really capable of ‘sins’, but there you go). And I already knew this guy – he was the friendly Irishman with dancing eyes and a bone-crushing handshake we saw every Sunday. The big difference at confession of course was… he was behind a screen. For the subjects going in to see a robot therapist, I think they had much the same advantage.

That’s not to say Father Daly was robotic, far from it. But I couldn’t see his eyes – I couldn’t see him. Jess had a degree of AI, which was ‘augmented’ (to cries of “fraud!” from critics) by human operatives behind the scenes. But whatever was driving her behaviour is, I think, moot; Jess provided a notional filter for the depersonalisation of the therapist, the depersonalisation of the questions and the depersonalisation of the problem, so the real issue could be identified, rationalised and dealt with in isolation. She had to have a ‘face’ to give her clients something to focus on, but that face worked not because it had eyes, but because those eyes weren’t human.

The magnetic and penetrating power of seeing other human eyes is clearly demonstrated by their effectiveness in deterring crime, as reported by the BBC in 2013:

https://www.bbc.co.uk/news/av/uk-22275952/watching-eyes-poster-reduces-bicycle-thefts

Jess could therefore be trusted as entirely unbiased, acting without judgement and with no other aspiration than to solve the problem she was confronted with. Her body language was rudimentary, and she did not have to pay lip service to a difficult question; there was no whispering or caveat or precursory warning, just simply:

“Is the question logical, and would its answer help reach successful resolution of the problem? If so, then ask the question.”

The subjects were then empowered to respond in a similar manner – pragmatically, as if the problem were hanging there in the room like a piñata, instead of shuffling around in the corner, breaking things and dropping large piles of poo.

Reason 2: moving forward after the session

What a problem needs is often not what a problem gets, for what a problem needs is humanity, when what we give it is people. 

The agreed actions between Jess and her subjects came from an inherently and demonstrably unbiased process, focused on resolution of the problem. Even decisions that required compromises were arrived at logically, with respect to which path would lead to a better outcome overall, with no basis for resentment on the subjects’ part, such as, say, perceived bias in the ‘therapist’. Because, to steal from Brian Conley, “It’s a puppet!” It doesn’t have emotions. It doesn’t have bias. It doesn’t have an agenda. It doesn’t really have a gender.

In work, it would be nice to think that we’re all focused on solving the problem at hand, but we know that’s not true, don’t we? Personalities, passions and egos. Late arrivals, early birds and eggheads. Bulls, butterflies and wilting violets. Mommas, Poppas and celibates. Artists, engineers and accountants. Wonderfully fallible, brilliantly diverse.

What a problem needs is often not what a problem gets, for what a problem needs is humanity, when what we give it is people.

So, you’re saying we need to act like robots to solve problems, right?

To be human is to err. To be humane is to care. 

Er, no. Bear with…

Luckily for me, I don’t have a long commute. It’s usually free flowing traffic most of the way. Except sometimes it’s really, really slow. Invariably the cause is one of those dang cyclist-types, simultaneously saving the planet while causing a traffic jam. If I’m trying to get home, delays by cyclists will cause me frustration. Hell, I may even get a tad angry. At that point I am Si Ellis, driver, and he is not a fan of cyclisters.

But I’ve ridden a bike. I used to ride quite often in fact. I was a cyclister. And cars and vans would sometimes scream past me, sound their horn or otherwise drive inconsiderately. Si Ellis, cyclister, was not a fan of driverists.

Point being, throughout the day the version of ourselves that we inhabit is defined by the context of the situation we’re in. These versions are layered on top of our fundamental humanity. When we let one wander into the other, we can obscure our humanity and act unproductively.

When we are seeking to solve a problem we often look first to blame. To find a culprit. We tell them off and then… well, that’s it, isn’t it? Traffic is slow, why? Oh, it’s that bloody cyclist. But is it though?

That cyclist likely drives a car too, which means they’re paying for use of the roads just like I am. They’re using less of the earth’s resources than I am though. And they’re trying to do good for themselves and their environment. Good on them.

They do hold the traffic up though, don’t they? Yes they do, but that’s not their fault. It’s because roads are too narrow and there aren’t enough cycle paths. Or because we’re simply too impatient and like having a focus for our venting spleens.

The human in me wants to solve my issue by getting that cyclist off the road.

The humanity in me wants to solve our issue by chilling out, giving them plenty of space, admiring their dedication, and finding out where the council has got to with the cycle network.

Who knows, one day I might get back in the saddle myself. At least then I’d have an excuse for wearing all this lycra.

Solving problems the Jess way

The actual crux of the problem remains, elephant-like, untouched, unacknowledged, causing chaos and shitting everywhere 

What the sessions with Jess show us is that to look at a problem pragmatically is not the same as looking at a problem dispassionately. When solving issues it is blame and judgement that we often fear most, not the action that’s needed to resolve them. Consequently, a situation develops where blame is handed around like a really crappy version of pass-the-parcel, while the actual crux of the problem remains, elephant-like, untouched, unacknowledged, causing chaos and shitting everywhere.

Jess’s design literally removed the eyes of judgement from the room. She evaluated, she assessed, she enquired. She cared both about the subjects and about finding a lasting resolution. But she never blamed.

Jess will never be as human as you or I, but next time we’re faced with a problem, will we be as humane as Jess?

The future of humanity is safe

In terms of the coming age of robotics, Jess also shows us that the closer robots get to mimicking us, the less useful they become. Robots with ‘human-like’ faces are simply unnerving and difficult to engage, just like humans with ‘human-like’ faces are – we all saw Oprah Winfrey’s interview with Michael Jackson, right? Robots are already being used to great effect in hospitals. If I were a patient, I know whose face I’d rather see sidling up to my bed with a cup of water and a suppository. Give me Jess every time.

As a species, we are so adept at reading signals that we instinctively look for (and seek meaning in) faces in everything around us – from our pets to our cars, and yes, to our robots. Our ability to empathise with other entities, be they sentient or otherwise, is remarkable and the bedrock of our evolution. Which makes it all the more saddening when we refuse to empathise with one another.

Dear old Father Daly would say that God created us in his image. I don’t subscribe to that belief, but we are incontrovertibly the creators of robots and although we can’t yet create them in our image, rather strangely, rather wonderfully, they show us our humanity.

Thanks for reading! As always, please like and share, and let me know your thoughts.

All the best

Si Ellis