The idea in a nutshell: Bot frameworks are in their very early days. There is little documentation available, which means you’re going to need a good developer who’s not afraid to get it wrong. Multiple iterations got friends at WhatPhone to a good bot in 2 months. Here’s what we learned so you can do it in half that time. Gartner say bots are overhyped. That might be true. But getting to an OK bot is quick, and great bots will be here soon.

Don’t believe the hype

Gartner’s hype curve does a lot of useful things. First, it raises the concept (in a funny way, to me at least) that we’ve seen all of this before. Every new technology goes through hype and bots are just another one in a long list. Secondly, it estimates when the plateau will be reached which can be reassuring. Finally, it helps you get a grip of yourself in setting expectations with those who matter up the chain. Promise low and deliver more is a good mantra to have.

Gartner’s chart suggests chat bots, and most of the things I have talked about recently, are near the peak of their hype. What comes next is the trough of disillusionment. My experience of being involved in the build of two bots so far suggests there is some truth and some misleading elements to this graph. (I still think it’s worthwhile though.)



Practical problems and what to do about them

Some of these are still holding us up on the WhatPhone bot. With others, we have taken one step but have not fully tested what appears, at this stage, to be the next most logical step. Here’s the most useful commentary I can offer at the moment. I hope it helps you.

Here are the problems, and what to do about each:
Start with a small scope and very clear user journeys.

– You have nothing to lose by being cautious. It took us twice as long to build the bot as I thought it would. Your developers are going to be figuring this out; it’s a first go for them. You’ve never figured out the logic before. Do yourself a favour: start with an MVP and build from there.
There is a real lack of official documentation. Your developer is going to have to figure things out for themselves, with no user manual.

– Online research: remember, though, that there is a lot of marketing fluff out there which makes it sound like your bot can do a lot of things. It’s the practical technical documentation which is lacking.

– For example, in a podcast, Facebook described some reasonably advanced capability: after 14 minutes, they’re talking about pronouns (‘it’, ‘that’ etc.) and the role pronouns play in real-life conversation. In a conversation, I could say to you, ‘I am going to see my buddy Jarrod for a beer at the pub,’ and then later, ‘Dave is going to be there too.’ Making sure the bot knows where ‘there’ is – i.e. that ‘there’ relates to ‘the pub’ – can be tricky. In that podcast, Facebook reassure us that this is all taken care of in their SDK and the bot framework. Good luck finding the paperwork on that!

– Trial and error is the only way we have found to work through the limitations we’re bumping into. One example: in Microsoft’s LUIS tool, there is a limit of 20 intents! They didn’t tell us that. It just stopped working at 20!
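Given an undocumented cap like that, a cheap safeguard is to count your intents yourself before you add another one. This is a minimal sketch, assuming the 20-intent limit we hit; `check_intent_budget` and the plain-list format are illustrative helpers, not part of LUIS’s API.

```python
MAX_INTENTS = 20  # the undocumented cap we ran into; adjust for your tool

def check_intent_budget(intents):
    """Fail loudly before the tool silently stops accepting new intents."""
    if len(intents) > MAX_INTENTS:
        raise ValueError(
            f"{len(intents)} intents defined; the tool stopped working "
            f"for us past {MAX_INTENTS}. Merge intents or split the app."
        )
    return MAX_INTENTS - len(intents)  # remaining headroom
```

Running this in your build script turns a silent failure into a visible one, which is about the best you can do without documentation.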

The number of unusual ways people say things


– Get user testing going early (as usual). Put your bot in front of real users as early as possible. Use people who have not been on the project team. Give them a scope briefing so they know what they can and cannot ask the bot yet, but then let them type or say what they would say, without interruption. Avoid the temptation to correct them! Manage the exception queue.

– Waiting until the end of the week for feedback was too long. We ran feedback earlier and earlier in the week as we got better at training intents and responses. Once you’ve figured out the process to train intents and responses, it’s basically a spreadsheet. After that, user testing can happen almost every day: on day 1 of the sprint you rough up the content, then from day 2 you can have customer feedback every day and hone the accuracy of the answers.
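The “it’s basically a spreadsheet” workflow can be sketched in a few lines. This is a toy illustration, assuming columns named intent, utterance and response; the column names, the sample rows and `load_training_rows` are my own, not any framework’s format.

```python
import csv
import io

# Toy spreadsheet content; in practice this would be the exported CSV
# your team edits between user-testing rounds.
SHEET = """\
intent,utterance,response
plan_price,how much is the basic plan,The basic plan is $10 a month.
plan_data,how much data do I get,You get 1GB on the basic plan.
"""

def load_training_rows(text):
    """Turn the spreadsheet into utterance->intent and intent->response maps."""
    rows = list(csv.DictReader(io.StringIO(text)))
    examples = {r["utterance"]: r["intent"] for r in rows}
    responses = {r["intent"]: r["response"] for r in rows}
    return examples, responses

examples, responses = load_training_rows(SHEET)
```

Because the content lives in one file, anyone on the team can rough up day-1 answers and refine them daily from tester feedback without touching code.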

More – have a ‘more’ button and maybe a link to the real website content.

– People don’t want to read long blocks of text. When your bot replies to a question, make it brief. At the same time, there might be legal requirements about what needs to be shown, depending on your industry. Giving the user a single line, two at most, of information, plus a ‘more’ option, seems like a good, usable compromise. There are some limits to the rich-text capabilities available to you at this stage, but it appears this will be possible.

–  Having a link to the full webpage also seems like a useful step.
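The brief-reply-plus-‘more’ pattern looks roughly like this. A minimal sketch, assuming a generic message payload; the character limit, field names and button payloads are illustrative, and real channels each have their own button schema.

```python
BRIEF_LIMIT = 140  # assumed: keep the first reply to one or two lines

def brief_reply(full_text, source_url):
    """Return a short reply, adding 'More' and a website link when truncated."""
    if len(full_text) <= BRIEF_LIMIT:
        return {"text": full_text, "buttons": []}
    # Cut at a word boundary so the truncation reads cleanly.
    cut = full_text[:BRIEF_LIMIT].rsplit(" ", 1)[0]
    return {
        "text": cut + "…",
        "buttons": [
            {"title": "More", "payload": "SHOW_MORE"},
            {"title": "Full page", "url": source_url},
        ],
    }
```

If your industry requires full disclosures, the ‘Full page’ link is where they live; the chat surface stays readable.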

Images – where possible, showing images or tables seems preferable, for the user, to long strings of text.
Error messages

– When the user gets it wrong, don’t just have the bot say ‘I didn’t understand.’ Use what you know to show them some relevant options. In the example below, Whatbot knows the user is talking about a price – there’s a ‘$’ sign – but the value they’ve entered ($130) is beyond anything she was expecting. So she responds with options to frame the user into a more helpful response.
In the example on the right, Whatbot is trying to help the visitor establish a budget; in the end, these details are used to populate a solution finder tool. The user enters $130 – more than the highest price point on the website. This is good feedback and can be used to generate a new rule for this sort of exception: we need to automate a reply which says ‘the most expensive plan on this website is…’ In the meantime, prompting with standard responses helps the user help themselves.
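That exception rule can be sketched as a small handler. This is a minimal illustration, assuming a $100 ceiling and made-up reply wording; neither is WhatPhone’s actual value or copy, and `handle_budget` is a hypothetical helper.

```python
import re

MAX_PLAN_PRICE = 100  # assumed highest price point on the website

def handle_budget(message):
    """Recognise a dollar amount and reframe the user when it's out of range."""
    match = re.search(r"\$\s*(\d+)", message)
    if not match:
        # No price recognised: prompt with a concrete example.
        return "What's your monthly budget? For example, $30."
    price = int(match.group(1))
    if price > MAX_PLAN_PRICE:
        # Out of range: state the ceiling and offer framed options.
        return (f"The most expensive plan on this website is "
                f"${MAX_PLAN_PRICE}. Would you like to see plans "
                f"under $50, or between $50 and ${MAX_PLAN_PRICE}?")
    return f"Great, looking for plans around ${price}."
```

The point is the shape, not the wording: detect what you can (‘$’), and when the value falls outside expectations, answer with options rather than ‘I didn’t understand.’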
Framing:

– Your bot needs to start the conversation in a friendly, engaging way, saying something that will help the user understand its capabilities and limitations.
– Framing feedback: have a slide which explains what the bot can answer and what it cannot. We found that unless we did that, people felt they could ask it anything.
User testing:

– One thing we found applies to all interaction with other areas of the business; it’s standard, not just related to bots, but worth mentioning again here. People love the idea of a bot. Anyone you talk to will want to be a tester for the bot. They will love to talk about AI and its implications. This is all great until the day of the testing, when 80% of our testers dropped out due to other commitments. Get people in the same building as you (we were asking them to travel which, to be fair, looking back, was stupid). Consider adding beer to the mix.
The very human need to anthropomorphise:

– Sorry for the long word. I am 80% bot, 20% Spock, and rarely feel the need to feel. However, I am compelled to give the bots we are working on a name. I think of the bot as a savant three-year-old: we’re training a stupid genius. When it knows the answer, even to a complicated question, the bot nails it. When you ask it something simple it hasn’t been trained on, it falls flat.

Lessons learned from building bots

One more (and this might be the most important) thing: have a Tamagotchi expectation.

Below – a Tamagotchi – one of the many things I will never understand.


The bot is always going to have to be supported and watched. It will happily plod along, getting things right and wrong. It appears, at this stage, that it will need to be mothered, and the WhatPhone guys are going to have to keep it on a short leash for a long time.

Summing up – I think Gartner is wrong

It’s a bold claim, but I actually think Gartner is wrong. They’re right about the hype and the expectations gap. They’re wrong about how long it is going to take to get over the problems and train worthwhile bots.

Full conversation is a while away, and that shouldn’t be the goal for your bot anyway. There is just so much to think about in the things people can say. Even Cortana and Siri can be rubbish, and millions have been sunk into them.

The WhatPhone guys have produced a bot with real customer value in 2 months

I believe what I can see, and the developers my friends engaged on WhatPhone have produced a bot of genuine value in 2 months, at 10 hours a week. That’s $1,000 to $2,000 USD worth of development investment.

It deals with customers, serving two purposes.

  1. First, it engages them in a humorous way which aligns with WhatPhone’s brand and marketing strategy. (The bot is funny and talks in simple terms about a small domain – phone plans.)
  2. Second, it answers product questions to help people select a plan. My point is, it’s useful. It is limited. It cannot conduct a full conversation. There is a tonne of stuff it does not know. And version 2 will be better. And so on.

We will publish libraries of the WhatPhone bot algorithm flowcharts. It seems clear to me that a lot of these bots are going to have very similar elements. As we standardise, produce libraries and attack bot production at scale, a lot of the work will be done before we even start. The final 30% of product and process specific elements will be defined and delivered in well understood ways.

It seems to me we’re less than a year from good bots which serve worthwhile customer needs. OK bots are here already. In two to five years, I think they’ll have taken over the world.

I am at the stage where I will soon be able to post results, not theories. I’ll cover that in another post.