
Season 1, Episode 2 | 18min

The Ethics of Chatbots

5 May 2019 | Chatbots

Like many new technologies, the chatbot industry is in full growth mode. But what about the ethical considerations – what information should they gather?

Should they declare themselves as chatbots or can they pretend to be human? How manipulative should they be allowed to get in the name of profits?

We discuss some top-level thoughts around the ethics of chatbots. 

Episode Conversation

Topics that were discussed:

  • Should we bother with ethics in a free, open market in the digital space?
  • Microsoft’s Tay chatbot disaster
  • Privacy guidelines and information
  • Implications of monetising “choose-your-own-adventure” games or apps into chatbots
  • Exploitation of vulnerable audiences

Episode Transcript


[00:00:01.410] – Jam
Welcome to the Conversologist podcast, where we talk about the art and science of conversations in the digital space. We know that technology can be a powerful enabler in the customer journey from marketing to customer service. But communication and emotional connection still need to be at the core. I’m your host, Jam Mayer, and I invite you to converse with me.

[00:00:25.890] – Jam
Today with me is my colleague, senior content Conversologist, Rew Shearer: radio copywriter, chatbot conversation designer and sometime blogger.

[00:00:37.170] – Jam
Welcome, Rew.

[00:00:38.330] – Rew
Thank you very much, Jam. It’s good to be here.

[00:00:40.860] – Jam
Well, this week’s episode, we are going to talk about the ethics of chatbots.

[00:00:48.090] – Rew
Aha. Right.

[00:00:48.930] – Rew
Sounds a bit weird. But it’s kind of a Wild West out there right now.

[00:00:56.130] – Rew
Well, chatbots are basically an unregulated technology, and this isn’t the first time we’ve seen this kind of situation. If you look at the pattern, and this is certainly true of social media, ethics aren’t high on the pioneers’ priority list.

[00:01:12.150] – Jam
Got it. So they’re all about the money?

[00:01:14.400] – Rew
Yeah, totally. When an industry or technology is new, it’s a bit like a gold rush. A lot of the speculators are all about making money and cashing in. And, you know, they really can’t be bothered with the ethics or preserving audience integrity for the future.

[00:01:28.590] – Jam
Well, isn’t that just a free market at work?

[00:01:31.780] – Rew
Yeah, yeah. But it shouldn’t be that simple. You’ve got a great example of this with social media.

Ethics in Social Media

[00:01:37.230] – Jam
OK, how so?

[00:01:38.160] – Rew
Well, the ethics put in place originally for some of the big social channels are pretty arbitrary, when you think about it. They’ll censor a nipple, but allow the collection and sharing of 20,000 data points per user. I mean…

[00:01:51.630] – Jam
Whoa whoa whoa. Nipple? Did you just say that? Are we allowed to do that in a podcast?

[00:01:55.480] – Rew
I…I…This is theatre of the mind and everybody suddenly got a picture of a nipple in their head. Sorry about that.

[00:02:00.840] – Jam
Right. Ok, carry on. Carry on.

[00:02:03.390] – Rew
They’ll curate news feeds that reinforce prejudices, shape politics, but they’ll deny responsibility for their part in social change. It’s only really when users and even governments have started to apply pressure and push back that we’re seeing some real responsibility being taken by the social channels.

[00:02:22.320] – Jam
Right. But chatbots are a lot different, right?

[00:02:26.760] – Rew
Yes and no. Chatbots are potentially worse. They’re often set-and-forget. Meaning, you know, you build it, and especially if there’s AI or machine learning involved, we just hope that it comes out OK. But there’s no guarantee.

Chatbot Gone Wrong - Microsoft Tay

[00:02:41.250] – Rew
Now, researching this podcast, I came across a blog that used Microsoft’s Tay chatbot disaster – the debacle! – as an example of what can go wrong.

[00:02:51.030] – Jam
I heard about that. I don’t know a lot of the details, but Tay was the Twitter bot that went from “Hello World” – really, really simple – to “Feminists should burn in hell!” in 24 hours, was it?

[00:03:05.430] – Rew
It was pretty much like that. And there was even worse stuff, yeah. Tay got broken by users, I have to add. But of course, I mean, Tay was taken down pretty quickly. Still, it was an important lesson that at this point in time a chatbot is at best only as ethical as its creator, and it has limitations. But at its worst, it’s very vulnerable to the worst of society.

Commercial Implications & Privacy

[00:03:31.450] – Rew
But let’s just backtrack a little to the ethics around commercial considerations of a chatbot.

[00:03:37.770] – Rew
Well, we really only see rules for chatbots based around what’s in place already for like, email or other types of marketing, namely opt-in, opt-out regulations.

[00:03:48.210] – Rew
You have to subscribe in the first place, and you can always unsubscribe.
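The opt-in/opt-out baseline Rew describes boils down to a consent record the bot checks before every outbound message. A minimal sketch, assuming a simple in-memory store; all names are hypothetical:

```python
# A minimal sketch of the opt-in/opt-out baseline discussed above:
# the bot only messages users who have explicitly subscribed, and any
# user can unsubscribe at any time. All names are hypothetical.

subscribers: set[str] = set()

def opt_in(user_id: str) -> None:
    """Record the user's explicit consent to be messaged."""
    subscribers.add(user_id)

def opt_out(user_id: str) -> None:
    """Honour an unsubscribe at any time."""
    subscribers.discard(user_id)  # discard: no error if never subscribed

def may_message(user_id: str) -> bool:
    """Only contact users who currently hold a subscription."""
    return user_id in subscribers
```

A real chatbot would persist this store and honour platform-level unsubscribe events, but the check itself stays this simple.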

[00:03:53.280] – Rew
Any rules or consumer protections beyond that tend to be pretty limited and pretty arbitrary.

[00:03:58.680] – Jam
Depends on where…which country.

[00:04:00.600] – Rew
Exactly. You know, Europe has the GDPR.

[00:04:03.210] – Rew
But other countries really have very little in place to protect users.

[00:04:07.530] – Jam
Sure. But it’s just a chatbot, right? What’s the worst it can do?

[00:04:12.240] – Rew
Well, I’m glad you asked, Jam. Chatbots can ask a lot of information from users, and since they tend to do it through conversation and present themselves as pretty much human, they can earn a lot more trust than just, say, a web form or something. The fact is, people can be persuaded to give up a lot of very private information through a chatbot. But what happens with that information? Where are the privacy guidelines?

[00:04:38.190] – Rew
Was it even legitimate to ask those questions in the first place?

[00:04:42.570] – Jam
OK, but the last thing you want is a Terms of Use document to read and agree to before the chatbot even starts.

[00:04:49.380] – Rew
True, true, true.

[00:04:50.820] – Jam
I mean, no one even reads it. Do you actually read those long things?

[00:04:54.180] – Rew
That’s one of the greatest lies of our generation.

[00:04:56.220] – Jam
Right. Okay. So one of the benefits of a chatbot is that it’s an easy and instant way to get personal with a customer. I mean, that’s the whole point.

[00:05:05.280] – Rew
Yep. So it stands to reason that there should be some overarching code of conduct for chatbots that everybody should actually try to adhere to.

[00:05:13.440] – Jam
Good luck with that.

Is the chatbot a real person or not?

[00:05:15.510] – Rew
Well, see… not only is there no guiding code for chatbots, there’s actually not even a requirement for a chatbot to declare it IS a chatbot.

[00:05:24.010] – Jam
Well, you can in some form or something, right? Anyway, I’ve talked about that in one of my blogs. I mean, there’s a temptation to make the chatbot seem like a real person. The trouble is, it can really damage user trust when they finally find out: oh, my God, it’s a machine. It’s a bot.
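For what it’s worth, the disclosure itself costs almost nothing to build: it can be a single line in the bot’s opening message. A minimal sketch, with all names hypothetical (real platforms have their own greeting mechanisms):

```python
# A minimal sketch of the up-front disclosure discussed above: the very
# first message tells the user they are talking to a bot. The bot and
# company names here are hypothetical.

def greeting(bot_name: str, company: str) -> str:
    """Build an opening message that declares the bot as a bot."""
    return (
        f"Hi! I'm {bot_name}, {company}'s automated assistant. "
        "I'm a bot, not a person. How can I help?"
    )

print(greeting("Ava", "Example Co"))
```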

[00:05:42.520] – Rew
Yeah, exactly. And when you look at the possibilities of conversational AI and chatbots that learn as they go, there is a lot of potential for a really well-made chatbot to fool people into thinking it is a real person.

[00:05:55.210] – Jam
To be fair, there aren’t a lot of chat… well, then again, it is increasing by the day. I’m not sure if any could pass the Turing test. Maybe. I don’t know.

[00:06:03.640] – Rew
Yeah, but I mean, yeah, you’re right. You’d be surprised. Just from talking to people who use chatbots, I found a lot of people have been sucked in. And they weren’t sure if they were dealing with a bot or with a human.

[00:06:15.760] – Jam
Right. Yeah.

[00:06:16.570] – Rew
I think quite justifiably, you can feel a bit cheated when you’ve been chatting in online messages and, you know, slowly it starts to dawn on you that you’re actually just talking to a computer.

[00:06:27.430] – Rew
Nobody likes being made to look stupid. And of course, deliberately misleading people into thinking a chatbot is real – now, there’s a problem. Again, there’s no real code of chatbot ethics out there at this point. But you can see, as a tactic, it could easily be used to manipulate people out of their money.

[00:06:44.800] – Jam
Well, here’s the thing. Some people would say, well, I’m smart enough to know whether it’s a machine talking to me or not. Right?

[00:06:52.090] – Rew
Yes and no. I wouldn’t be so sure about that. Not if it’s well-made. Let’s give a hypothetical, hypothetical example. Tripping over my tongue there. Say you run a dating website that benefits from subscriptions, OK?

[00:07:07.000] – Rew
So you could potentially use a chatbot as a potential match. Send a few flirty messages, even engage in some apparently heartfelt, deep and “meaningfuls” with the promise of more if the user subscribes.

[00:07:22.090] – Jam
Have you been into these? Have you experienced this already? There’s a lot of spam out there.

[00:07:28.190] – Rew
You know what I mean. I mean, there are people who don’t even pick up that… that their followers in Instagram are bots, you know. So it’s actually remarkably easy to fool somebody who wants to be fooled.

[00:07:39.970] – Jam
Oh, wait, hang on. This just reminds me. The F8 conference just finished, and Facebook’s Mark… Mr. Zuckerberg has announced the dating app. I know he mentioned this a year back. I don’t know, a year or two years ago or whatever. Anyway, it doesn’t matter. You just reminded me of that. So I hope, Mr. Zuckerberg, Facebook, with your dating app, it isn’t something like that.

[00:08:08.830] – Jam
Oh, my God.

[00:08:09.730] – Rew
Just watch out. Mr. Right might actually turn out to be Mr. Bot, you know, and you’ve got your subscriber.

[00:08:18.550] – Rew
You’ve got your subscriber, and then, you know, the so-called one, Mr. Bot, Mr. Right, suddenly loses interest and just wanders off. But you’ve got yourself a paid-up, subscribed member.

[00:08:29.800] – Jam
Well, again, going back to that, sorry, but they are planning to put… I mean, on Messenger, they are planning to put some, I don’t know, another revenue stream. So that could get interesting. Anyway, go ahead. Carry on.

Interactive Novels, Monetisation and a Vulnerable Audience

[00:08:44.050] – Rew
Hence, the need for ethics around it. Look, I’ll give you another example. This is a real one, OK? This is a thing. Interactive novels.

[00:08:52.300] – Jam
Oooh, I love reading.

[00:08:54.640] – Rew
Uh huh.

[00:08:56.410] – Jam
You mean those interactive novels. Uhm.

[00:09:00.130] – Rew
Like the Choose Your Own Adventure books. Yeah.

[00:09:02.320] – Jam
Oh right.

[00:09:02.710] – Rew
You get to choose your button for what your character says, and that dictates the flow that follows.

[00:09:08.800] – Jam
That’s an app.

[00:09:09.670] – Rew
Yeah, but they’re built on chatbot technology.

[00:09:11.710] – Jam
It’s a chat. No, it’s not. It’s an app.

[00:09:14.290] – Rew
Right. So.

[00:09:17.550] – Jam
Still, let’s use the example. Keep going.

[00:09:19.450] – Rew
OK, OK, right, right. So first up, you get to design and name your character, right? Very important, because all of a sudden you’re invested in this. The character is someone, something, that you’ve sort of helped create.

[00:09:31.890] – Jam
My avatar.

[00:09:33.090] – Rew
Your avatar. Yes. That’s… yeah. OK, then as the story progresses, you get to choose what your character says, how he or she responds to situations. Are you acquiescent? Are you diplomatic? Are you confrontational? And of course, that affects the story.

[00:09:50.720] – Jam
Yup! Well, that’s basically partly gamification. Just a little bit.

[00:09:57.560] – Rew
Yeah, well, yeah. But the problem is, in these episodic stories, you can freely choose your response to inconsequential stuff. But when it comes to a critical decision, something that will have a serious effect on the flow of the story and the characters, oops, there’s a paywall.

[00:10:16.740] – Rew
Well. And in most cases, the moral, compassionate, constructive or yes, most enticing choice comes at a cost. Maybe between three and five bucks at a click. And there are many such points in each story.

[00:10:31.260] – Rew
So I mean, like, for example, a friend in the story is depressed. She’s alone at home. Do you leave her to it or do you invite her over for pizza? What else could happen if you do? And it’s just the two of you together. If you don’t invite…

[00:10:45.720] – Jam
Yup yup pizza.

[00:10:49.080] – Rew
But to invite her over, you have to pay the chatbot or pay the app. The only free option is to leave her to it. Just leave her alone for the night. Or maybe she has a secret to tell you. But to hear the secret once again, you have to pay.
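The mechanic Rew is describing, branching choices where only the least compassionate path is free, can be sketched in a few lines. This is a minimal illustration, not any real app’s implementation; all names and prices are hypothetical:

```python
# A minimal sketch of the branching, paywalled story mechanic described
# above. All node names, labels and prices are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Choice:
    label: str
    next_node: str
    price: float = 0.0  # 0.0 means the choice is free

@dataclass
class StoryNode:
    text: str
    choices: list[Choice] = field(default_factory=list)

# The "depressed friend" scene from the episode, modelled as one node:
# the compassionate option sits behind a paywall, the free one does not.
scene = StoryNode(
    text="Your friend is home alone and feeling low. What do you do?",
    choices=[
        Choice("Leave her to it", next_node="alone_ending"),       # free
        Choice("Invite her over for pizza", "pizza_scene", 3.99),  # paid
    ],
)

def available_choices(node: StoryNode, balance: float) -> list[Choice]:
    """Return only the choices the player can currently afford."""
    return [c for c in node.choices if c.price <= balance]
```

A player with no credit sees only the free option, so the story flow itself funnels them toward the least compassionate choice, which is exactly the ethical problem being discussed.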

[00:11:04.890] – Jam
OK, I guess I get it. I really get it. But honestly, I don’t get it. I mean, it’s not just gamification. That’s how games work. You’ve got levels. And, well, game developers, hello, get into the discussion here. We really need you to help me out here. I mean, that’s how it works. I play games all the time. I don’t mind. And I’ve tried these choose-your-own-adventure type of things.

[00:11:31.110] – Jam
And look, if I don’t want to pay, then I get the crappy decision and stuff, you know what I mean?

[00:11:38.460] – Jam
If I have to fight a dragon or I have to level up, as you say, you know, like my avatar, my dragon, I need a better sword. I need better guns or bullets and ammo for me to actually win the fight. So it’s part of games.

[00:11:54.810] – Rew
OK, yeah. I think what concerns me here is, first off, the moral predicament it puts people in. These are often choices of right versus wrong, where the right thing is the one you have to pay for and the wrong thing is the free choice.

[00:12:10.350] – Rew
What also concerns me is that with some of these interactive app novels, you’re not targeting adults here. These stories are targeted at adolescents, young teens, and they’re dealing with themes like relationships, sex, bullying, break-ups. You’re brought to a point where a friend in the story is clearly in trouble and needs your support. But giving that support means going through a paywall. Now, this isn’t like paying a one-off fee to read the story and be part of it and make the decisions.

[00:12:43.440] – Rew
It’s a kind of psychological coercion to a very vulnerable age group.

[00:12:49.650] – Jam
OK, but in their defense, if the target audience, you know, is adolescents, they’re probably asking their parents for their credit card. Well, then again, maybe, maybe not. That puts the power in the adult’s hands. It’s no different from Candy Crush or any number of paywall games.

[00:13:05.640] – Rew
So you don’t think it’s a problem?

[00:13:08.340] – Jam
Not really. Because it’s an app. It’s not really a chatbot, though I get it, you know, it could be a chatbot. Oh, I hope we’re not starting something, with someone actually up there listening and thinking, oh, this is actually a good idea.

[00:13:25.260] – Rew
Well, you know, the technology and the functionality are reasonably similar, in that you are presented with choices and you’re taken down a different flow depending on the choice you make. And, you know, I think the problem to me is that the publishers are pretty much free to manipulate this very vulnerable audience as much as they want. What sort of impact could this have, whether it’s a chatbot or whether it’s the app, especially when you’re marketing it, say, directly to young teens?

[00:13:55.290] – Jam
Maybe not at all.

[00:13:56.430] – Rew
Maybe not at all. Sure. But something like this to me is ethically dangerous.

[00:14:02.820] – Jam
Right. So games, app, chatbots, question mark, end of story. What else?

Impact of Health Advice with Chatbots

[00:14:10.600] – Rew
Ok. Medical and health advice or diagnosis.

[00:14:14.790] – Rew
Ethically, right or wrong: advice to parents or kids about what to do in certain situations. How do we know that advice is actually the best advice? It could be life coaching. Any numpty can come along and write a chatbot. How do you know the advice you’re giving is actually sound and you’re guiding people down the right path?

[00:14:33.010] – Jam
Then again, there are life coaches out there. Hey, life coaches out there! I have… the good ones. I’ve met really good ones. But there are some life coaches that, just because… Yeah, I know that’s not the topic tonight.

[00:14:45.580] – Rew
I do get the point. Yes. And exactly the same with spiritual or psychic healing, obviously.

[00:14:52.087] – Jam
What’s wrong with that?

[00:14:53.380] – Rew
Again, how do you know whether it’s actually quality stuff that’s going to actually benefit people, or whether it’s just raking money out of people?

[00:15:02.320] – Jam
Or spiritual. You know, it could be religion.

[00:15:05.850] – Jam
And let’s not go there.

[00:15:07.210] – Rew
Exactly. And exploiting vulnerable audiences is always an area for concern.

[00:15:13.000] – Jam
Right. So chatbots are kind of… well, yes, they are dark social, because it’s private.

[00:15:19.730] – Jam
Like, again, the F8 conference is still very fresh in my mind: the future is private, away from the public eye. It’s personal, one-on-one. So, yes, it is very hard to monitor or regulate. I mean, not like a website, for example, or a social business page that can actually be reviewed.

[00:15:38.110] – Rew
Right. It’s the Wild West. Look, I mean, you and I worked on a chatbot for social services last year, MyRivr. I think we did pretty well to make sure that this chatbot was culturally and gender-sensitive in its responses. We were very cautious around how it addressed issues like potential suicides. But even that was pretty arbitrary. It was really only our own moral compasses that guided the ethics of that particular chatbot in MyRivr.

[00:16:06.070] – Jam
Yeah, and it’s still a simple solution. It’s still not the best. We don’t know. And that was something that, you know, was discussed during my latest presentation, or guest lecture, in one of the universities here.

Is there a solution?

[00:16:06.070] – Jam
So anyway, so what’s the solution then, Rew? I mean, how do we even start to fix this?

[00:16:25.660] – Rew
Look, one thing I’m in favor of personally is industry self-regulation. I think when regulatory bodies get involved, there’s way too much butt-covering or politics.

[00:16:38.730] – Jam
Just… anyway, sorry, go ahead.

[00:16:42.300] – Rew
And not enough consideration of what’s actually best for both the user and the industry. You’ll often get, you know, rules that lean too far to protect the user or go too far to protect the industry. It’s really important not to penalise one over the other. Also, regulators and their policies can be influenced by the big players and the big dollars, and set rules that benefit one party over another.

I know I’m talking generics here, but say the chatbot industry were actually able to create an international, independent body of ethics, funded by the industry. If you were compliant, you’d get their tick or stamp of approval, and users would have a central site to find out all the rules and guidelines. That would be a great start.

[00:17:27.340] – Jam
So that’s your project then? Next week? Looks like you’re going to be very, very busy. Anyway. So we want to keep on talking. But, hey, that’s it for this week’s episode.


[00:17:41.380] – Jam
So leave a voice message if you’re on Anchor. I’d love it if you’re on Anchor, because I love this app, by the way. Thank you, Anchor. Leave a comment if you’re catching this podcast elsewhere or on the social channels; we’d love to continue the discussion. And I have a feeling we’ll be back to this topic before long.

[00:17:58.870] – Rew
Yup, you can bet on it.

[00:18:01.780] – Jam
OK, well, till the next episode. Thanks for listening.
