The idea in a nutshell: People like to be rude to their Virtual Assistant (e.g. Siri) and hear something funny in response. Is that a product? Could I build, market and sell a rude-virtual-assistant bolt-on or plug-in which manages that capacity for developers and other people who are building their own bot? What's the moral aspect, when I would be making the world a worse place through this kind of product? Is the end result of all this a bot shop in which you can build your own bot?

A lot of people like being rude to their virtual assistant

Two great things about computers are that they are not that smart and they have to be reasonable. One of the great things about being a human is being able to step outside the shackles of socially acceptable behaviour and, in the privacy of your own house, say rude things to your computerised Virtual Assistant.

Some people don’t like being told rude things by their Virtual Assistant. In ‘some people’, I am including this individual, who is, unfortunately and ironically given the circumstances, clearly a tool. A huge number more people are not idiots and do like messing around with a friendly bot who won’t take offence at what you say.

Given the starting point that:

a) People are going to make bots to help their customers, and

b) Some of those customers are going to be rude to the bot,

there appears to be an opportunity.

If this rudeness were taken care of, there would be more time for people to focus on building a bot which helps them do the business of – erm – their business. Below (from SMOSH): I hate Apple, but even I find this response funny. The second one is a genuinely funny joke, partly because it’s so wrong. How they got it out of the Apple factory, I will never know.

The moral aspect: The dark side: Virtual Assistants vs Call Center Staff

Before building this kind of bot Manhattan Project, the moral aspects of the investment must be considered. Possibly the closest thing we have at the moment to a Virtual Assistant is a call center worker. They’re remote, faceless, dutiful and, I’m sorry to say, often abused by their callers.

The fact that we are the customer and they are the employee seems to set up a power dynamic which is so clear it encourages the worst behaviour from some. People being rude to call center staff is so common it’s almost accepted as a part of life. If someone told you at a BBQ that they’d lost their temper at a Telstra call center rep., would you judge them harshly?

There appear to be a number of additional factors which tie together to accentuate the rudeness with which these people are treated. The fact that we don’t see them face to face seems to dehumanise them. All of these attributes apply equally to bots.

Computers are slaves – treat them how you want

I’m not necessarily saying it’s OK to be rude to computers – be that in the form of a bot or not. I don’t think the act itself is damaging to the computer. My concern is for the permissions that practice might grant the person dealing out the insults, and the habits they might fall into.

If what is being created here is a facility for someone to take their anger out on another without doing harm, then is this a machine which lets people ‘vent’? If so, is venting a good thing or a bad thing?

A lot of the evidence suggests that venting anger is not a good thing at all. It gives people the chance and the excuse to rehearse their feelings and keeps them stuck in an emotional place (for at least as long as it takes to vent) from which they cannot move on.

The economic cost of being rude to a bot

Additionally, the standard response for some computers / bots / AI programs, when they detect a decline in the user’s emotional state (a drop in sentiment), is to escalate the issue the person is facing to a human being. Encouraging tirades could therefore carry a real cost: every bot-provoked outburst becomes an expensive human support call.
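The escalation pattern described above can be sketched roughly as follows. This is a toy illustration only: the word list, scoring and threshold are my own assumptions, not any real product’s sentiment API.

```python
# Toy sketch of "escalate to a human when sentiment drops".
# The lexicon and threshold are illustrative assumptions, not a real API.

RUDE_WORDS = {"stupid", "useless", "hate", "idiot"}

def sentiment_score(message: str) -> int:
    """Crude per-message score: -1 for each rude word found, else 0."""
    words = (w.strip(".,!?") for w in message.lower().split())
    return -sum(1 for w in words if w in RUDE_WORDS)

def should_escalate(conversation: list[str], threshold: int = -2) -> bool:
    """Hand off to a human once cumulative sentiment falls to the threshold."""
    total = sum(sentiment_score(m) for m in conversation)
    return total <= threshold

# Two angry messages are enough to trigger the (costly) human handoff:
angry = ["You are useless", "I hate this stupid thing"]
calm = ["Hello", "Can you help me with my bill?"]
```

A real system would use a trained sentiment model rather than a word list, but the economics are the same either way: the ruder the conversation, the sooner a human gets paid to pick it up.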

On the other hand, if it’s just for fun…

At this point, I run out of hard data points. Siri was ‘trained’ to be witty and to say entertaining things as part of a humorous Easter egg to make people smile. If our goal through bots is to humanise people’s interactions with a brand or company, then humour is a good way to do it. The balance, for me, is between pretending to be rude to a bot for a bit of fun, to see if it will outsmart you, and genuinely losing your temper with it, which, as we’ve seen, does no one any good.

So, we are building a bot

In any event, we are building a bot which will respond humorously to attacks from people. Let’s launch it and see if the world is a better place or not. I.e. let’s just ignore all the moral concerns I have so painstakingly laid out above, because they’re inconvenient.

Summing Up – The Bot Shop

Getting funny responses to questions you ask of your bot is just one thing people are wont to do. There are other ‘aspects’ of a bot’s behaviour which will be standardised too. Customer speech / text parsing will be one. A structure for FAQs is another (every business has those). I wonder if, at some point, there will be an online shop from which you can piece together 80% of the bot you want, like you build a player or machine in a computer game. Then you can deploy your bot to the internet with a much faster time to market. If I were making that, I’d call it a bot shop and the user interface would be – you guessed it – a bot.
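To make the “piece together 80% of a bot” idea concrete, here is one hypothetical shape such a shop’s components could take: off-the-shelf handlers (rudeness responses, FAQs) chained together, with a human handoff as the fallback. Every class and string here is an illustrative assumption, not a description of any existing product.

```python
# Hypothetical "bot shop" composition model: a bot assembled from
# pluggable, off-the-shelf handler components. Names are illustrative only.

class RudenessHandler:
    """Standardised witty comebacks for rude messages."""
    RUDE = {"stupid", "useless"}

    def respond(self, message: str):
        if any(word in message.lower() for word in self.RUDE):
            return "I've been called worse. By better."
        return None  # not rude; let the next handler try

class FaqHandler:
    """Standardised FAQ lookup (every business has FAQs)."""
    def __init__(self, faqs: dict):
        self.faqs = faqs

    def respond(self, message: str):
        return self.faqs.get(message.lower().strip("?! "))

class Bot:
    """Tries each bought-in component in order; falls back to a human."""
    def __init__(self, handlers):
        self.handlers = handlers

    def reply(self, message: str) -> str:
        for handler in self.handlers:
            answer = handler.respond(message)
            if answer:
                return answer
        return "Let me get a human for you."

bot = Bot([
    RudenessHandler(),
    FaqHandler({"what are your hours": "9 to 5, weekdays."}),
])
```

The point of the design is that the rudeness component, like the FAQ component, is just another module you buy off the shelf: the business only writes the last 20% that is specific to them.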