Checking In On Chat Bots
Four months ago, I blogged about our portfolio company Kik’s chat bot platform.
A week later, Facebook launched its chat bot platform for its Messenger product.
If you believed the hype around chat bots, you would have expected every mobile developer to quit developing for iOS and Android and start developing for these new chat bot platforms.
But that has not happened.
I would be hard pressed to name a super popular chat bot on Messenger, Kik, Slack, or Telegram.
It is not for a lack of trying. There are over 300 chat bots listed on botlist right now, many from well-known companies. And over 20,000 chat bots have been built on the Kik platform since it launched.
So what is going on?
Kik CEO Ted Livingston addressed some of that yesterday with a post describing what has not worked and what has.
His big takeaways are that AI-driven chat bots have underwhelmed and that conversational UIs are not what users are looking for.
He suggests that developers should look at bots as a low friction way to get new users to try out and use their service instantly:
When you look closely at WeChat, the chat app that has completely taken over China, you see that its success as an ecosystem of services comes down to the same things: low-friction access to apps; sharing-related discovery (as well as QR codes); a common interface; and messaging as the front door to a world of digital experiences. In fact, there’s no major conversation-based service in WeChat. Instead, there’s just a whole lot of instant interactions.
I think chat bots will find their place in the mobile user's daily habits. I have encouraged several entrepreneurs who have pitched me on new projects to consider starting with chat bots instead of mobile apps. And we have seen at least one of our portfolio companies move from a native mobile app to a chat bot as their primary go-to-market strategy.
New user behaviors take time to develop and sometimes require a breakthrough app to get things started. That’s where we are with chat bots. The hype phase is over and we are now into the figuring it out phase. That’s usually when interesting stuff starts to happen.
They haven't crossed the Rubicon to magical yet. They are still firmly planted on the side of chore. It's like Abraham Lincoln said: the only thing worse than dealing with a stupid customer service representative is dealing with a stupid chat bot.
Well, as I've noted, until someone solves the Natural Language problem … pretty much all tech will continue to be "stupid". It's not a trivial problem. From the invention of Babbage's Analytical Engine through the Turing Machine to W3C, Stanford, Google, IBM Watson et al … NO ONE has (yet) been able to solve it.

* https://www.technologyrevie…
Yes and no. No one will solve the natural language problem in its entirety. People like David Semeria's Veesp0 have done a great job of addressing this within context for the purposes of the feedback and opinion market.
Thanks, Arnold. I don't question that David and his team are solving well for their specific use case.

The challenge is whether there can be a UNIVERSAL standard and framework for language that transcends sectors and cultures.

Google's Word2Vec (and, by association, Stanford's GloVe) have been shown to be unrepresentative, according to MS Research. There are also all sorts of issues with skip-grams, bag-of-words, Dirichlet distributions, Markovian affect attribution etc.

Someone will be well on the way to solving the natural language problem a lot better than existing and previous approaches, and do so by 2025 (if not 2020). Their system will make the Chomsky & Minsky (Grammar Reasoning & Symbolics) vs Google's Norvig (Statistical & Probabilistic Reasoning) arguments for how to get the machines to understand our language … obsolete.
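For readers who haven't worked with Word2Vec/GloVe-style embeddings: these approaches map each word to a vector of numbers, and "correlating words" then reduces to measuring the angle between vectors (cosine similarity). A minimal sketch in node; the three-dimensional vectors below are invented purely for illustration (real embeddings have hundreds of dimensions and are learned from corpora, not hand-written):

```javascript
// Cosine similarity between two vectors:
// cos(a, b) = (a . b) / (|a| * |b|)
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Toy 3-d "embeddings" -- hypothetical values, not real Word2Vec output.
const vectors = {
  king:  [0.9, 0.8, 0.1],
  queen: [0.9, 0.7, 0.2],
  pizza: [0.1, 0.2, 0.9],
};

// Semantically related words should score closer to 1.
console.log(cosineSimilarity(vectors.king, vectors.queen)); // near 1
console.log(cosineSimilarity(vectors.king, vectors.pizza)); // much lower
```

This is exactly the kind of purely geometric "understanding" the comment is criticizing: the numbers capture co-occurrence statistics, not meaning.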
Word2Vec/Doc2Vec is already a model that the field is moving away from.
By "the natural language problem in its entirety", what exactly do you mean?
Agreed. Until then, and within the Q & A keyword parameters, Alexa functions well; I think for the mo we will adapt quite well.
See, those are the operative words: we will adapt quite well. What we want, though, is for the machines to adapt well to us so they can serve us better. They'd adapt well to us IF they were intelligent and understood what we're saying. But they don't, so they're stupid.
But if you try sometimes you just might findYou just might findYou get what you need (for now)
What about natural language meeting the machines halfway as an interim improvement?
The Chief Scientist of Netbase recently blogged that their syntax parser is more accurate than Google's:

* https://www.linkedin.com/pu…

Pretty much all the big techcos are trying to meet the machines halfway. The machine's intelligence is measured in mathematical language, e.g. reduction of statistical errors. Google's been feeding its AI romance novels to get it to be more conversationally intelligent:

* http://www.theverge.com/201…
"pretty much all tech will continue to be 'stupid'."

No. "Tech" does a lot of really powerful, say, smart, things: e.g., GPS. Permitting on-line banking and shopping. Weather prediction with decent accuracy, say, a few days in advance. Forming images of human internals from NMR projection data. Forming images of subsurface layers from audio reflection data. Automatically landing an airplane. Saying how to operate an oil refinery for maximum earnings. How to mix animal feed for specified nutrition at least cost. Analyzing DNA data. Or, as I coded up once just for fun, taking a space frame and analyzing its stiffness. A still harder problem is to find a space frame of specified stiffness that can be built at least cost.

But these applications were all essentially just cases of traditional engineering, that is, using available means to solve well-specified problems. We have a long way yet to go with that paradigm, but don't expect success with much else for a long time.
Have you heard this interview with David Krakauer, President and William H. Miller Professor of Complex Systems, Santa Fe Institute?

* https://www.samharris.org/p…

In it, he explains the difference between a "complementary cognitive cultural artifact", like the abacus, the map, the compass and algebra, and "competing cognitive cultural artifacts" such as self-driving cars, which will not only mean we lose the ability to self-navigate, we also lose physical motor skills (changing gears, popping the clutch etc.).

Anyway, whilst listening to him and his position arguing against this Aeon article, which itself argues that the brain ≠ computer …

* https://aeon.co/essays/your…

It's clear that part of the reason we haven't been able to solve the Nat Lang problem is to do with DEFINITIONS OF INTELLIGENCE AND INFORMATION PROCESSING.

Krakauer used the standard Rational Scientists' definition of information processing: "the mathematical reduction of uncertainty", and since the brain does that and so do computers, they get analogized as being the same. The "mathematical reduction of uncertainty" acts as a proxy for intelligence — hence why+how AI researchers are so obsessed with reducing those loss functions in frameworks such as Generative Adversarial Networks (@ShanaC:disqus).

At 8:40, he introduces the M3 "Mayhem" Model (people confusing Maths, Mathematical Model and Metaphors). However, I'd make the case the M3 model is incomplete and inadequately defined. There's a missing 4th M to complete the base, imo.

There are 2 distinct, complementary and intertwined strands to information and the intelligence of that information:

(1.) Language of Maths <=> Mathematical Model <=> Logical intelligence
(2.) Language of Humans (Metaphors, Moods) <=> Meanings <=> Linguistic intelligence.

Where we've been going astray is that we've made assumptions that maths, models and metaphors are the same species and of the same nature — when they're not.
nope, but I will read
Tech does a lot of mathematically efficient things. Your examples illustrate that Markov chains, Deep Learning image recognition, sound/speech recognition, Operational Research and genetic algorithms are powerful mathematical tools.

However, language understanding is much more complex than those specified use cases because, well … human languages are subjective and emotional and contextual whilst Maths as a language isn't — and so, by extension, neither are the machines.

@…:disqus wrote one of the smartest and wittiest hidden truths, code and wordplays ever: "No it said there are 10 types of people. Those that understand binary and those that don't."

So, fine, computer intelligence is based on ASCII, which spans BIN, DEC, HEX, OCT. The question is: "What would a mathematical base for subjectivity and emotionality look like and how would it function?"

Then and only then would we be able to say computers are "human-like": when they have comparably similar bases to us. That gets us closer, from comparing apples with coconuts to apples with peaches.
On the problem of natural language understanding, I can't get impressed with the potential of the quotes you gave.

Here first I will make some broad points. Then I will move on in three parts.

Broad Assertion: I will just assert up front that IMHO (A) a human brain is a computer and (B) we can write software for electronic computers, such as we have now, to do essentially the same. The reason we do not have such software is that we don't yet know how the heck to write it. The rest of this post will be about this goal (B).

Broad Critique of AI Work: To say more, from all I've seen in the work so far in artificial intelligence (AI), for the goal of (B) that work is:

(i) Not very important. That is, the work makes no serious attempt even to pay attention to what IMHO is clearly the crucial core of what is important.

(ii) Trivial. That is, the work has some progress only in some small things, e.g., building on L. Breiman's work on classification and regression trees, random forests, etc. That is, in spite of how useful that work has been in parts of processing medical data, for the goal (B) the work is trivial.

(iii) Irrelevant. For example, we have seen that it is possible, e.g., building on Breiman's work, to program a computer to look at maybe 10 million Internet JPGs and then, for additional JPGs, guess that the image has a kitty cat. Human infants learn that lesson much better from much less input data. Indeed, from my backyard, I have to conclude that so do all small birds, chipmunks, squirrels, and ground hogs.

Next, for the goal of (B) above, I will try to say a little more on (1) what is wrong with the quotes you gave where I am not impressed with the potential, (2) a wild guess of mine on what to do, and (3) an explanation for the role of "risk" that you mentioned.

1. Natural Language.

To me the fundamental problem with natural language understanding is that the work is missing the most important part, which is the thinking. Or, in one step more detail, natural language speaking is some output from the thinking, and natural language understanding is input to the thinking. Without the thinking, the natural language, in simple intuitive terms, has no reason or purpose and nothing to connect with.

For some clarity on the scope of what I am calling thinking, you have mentioned emotions: IMHO, emotions are covered here just as a special case of human thinking. That is, for how human thinking works, my guess is that the thinking covers emotions, intuition, guesses, wild guesses, extrapolations, testing hypotheses (usually done just intuitively), common sense, creative guesses, scientific hypothesis testing, mathematical deductions, etc.

For more on this fundamental problem, by an analogy: each car has wheels, but working just on how to make wheels won't say how to make a car. Instead, for a car, you also need an engine, etc., and those parts are quite different from just wheels. So, for progress, we have to get well past just the wheels.

Drawing from this analogy, humans who speak in natural language make sounds, and tape recorders can make and play back sounds, but working just on how to make tape recorders won't say how to generate or understand natural language. Instead, to repeat, natural language speaking is some output from the thinking, and natural language understanding is input to the thinking. Back to the analogy, natural language without the thinking is like the wheels without the engine — no good progress to either natural language understanding or to a car.

2. Guess.

IMHO, in my wild guess about goal (B), first we have to think of input and output (I/O), that is, touch, sound, sight, etc.

Next, IMHO, in my wild guess, the core of thinking, for humans and likely also for kitty cats and much more, is what we might call concepts.

At least in practical terms, we can't make much progress with concepts without thinking about the I/O. That is, natural intelligence over time, from conception on, builds its concepts, and that building depends heavily on the I/O.

Yes, for the I/O, there are some low-level issues likely close to parts of common electronic engineering signal processing. Here I assume that approaches can be found and, indeed, likely mostly already exist.

So, some of the early concepts are hot, cold, hunger, sleep, loneliness, happiness, fear, security, pain. Then, sure, two more concepts are position and hard floor. Then soon one can learn that being in a position on a hard floor can lead to pain. Now we are adding associations to the concepts of pain, position, and hard floor.

Right: I am suggesting that natural intelligence starts forming concepts at conception. Then during development, more concepts are added and more associations about existing concepts are formed.

Next, sure, for goal (B), for both the phylogeny and the ontogeny, the development should be in small steps starting with next to nothing.

Next, I have not given a complete definition, description, or specification for the core ideas of concepts and associations. First, the ideas I have in mind are likely not clear in an ordinary dictionary. Second, the ideas need to be further developed, e.g., essentially as, or not long before, the corresponding software is written.

Of course, a key part of thinking is exploiting the associations to get some conclusions in terms of the concepts, e.g., to add concepts and associations.
Just how to program that — i.e., make it precise enough to program and be effective — is part of the challenge.

Here, for, say, at least 99 44/100% of the work, I see little important role for mathematics or even much in classic computer science algorithms. Instead, my guess is that what is needed is just good insight, intuition, and cleverness.

I do believe, however, that the core software logic will have to be quite clear — e.g., no genetic programming.

Or, we may not have a good description of how our solution to goal (B) works much before the programming, but I do believe that for the work quality to be high enough for good success we will need a good description about the time we do the programming. For success, we can't get away from a good description.

It is easy for us to guess that at least among mammals, and likely some birds, and maybe more, there are some core fundamental ideas for how the concepts, associations, and thinking for goal (B) go. In particular, the development from a mouse to a human is a difference in degree, not kind. The main reason for the difference between mice and humans is that the larger brain of humans also was effective enough at finding food to provide the food energy needed for such a larger brain.

In likely an important sense, the work will have to be a variety of a self-referencing, bootstrap effort where in the end we like the software because we observe that it works.

Still, there is no open door to sloppiness or moon-beam approaches.
Instead, the core of the work will need to be quite careful about the concepts, associations, and the thinking that uses those.

That is, the self-referencing, bootstrap part means that we won't have very good descriptions of the concepts, associations, and thinking until we have software that works well, and the descriptions will be from the software.

Eventually, if some such thing does appear to work well, then we should have an opportunity for some science, maybe with some mathematics, that explains why it works and, maybe, how to get it to work still better — maybe.

Again, we suspect that there are some core ideas that apply across at least all the mammals. We can guess that, really, with so many examples from the mammals and more, we really should be able to find the core ideas!

Summary Remark. Really, the core of natural intelligence is just the collection of concepts and their associations, with both obtained via I/O and experience, and, then, the thinking that uses those concepts and associations as just described. IMHO, wild guess: clean up that summary remark, add detail, program it, make it stable, refine it, and then we will have achieved goal (B) and be DONE.

Natural language? Sure, it's a representation adapted for the purposes of communication of the concepts and associations and some of the thinking.

Is it possible to have goal (B) without a language? Yes. But it is not possible to have goal (B) without thinking. With the thinking, we can communicate about it or not, but without the thinking there is not much reason for the communications.

Again, significant progress on natural language understanding needs to be built on good progress on goal (B).

3. Risk.

There can be several good reasons to reduce something like risk. E.g., once I could not find my little boy, unaltered kitty cat. Eventually out the kitchen window I noticed some movement high in a tree on the border of the backyard. Yup, it was my kitty cat, at least 30 feet off the ground.

Risky?
Perhaps, but likely not to him — he was an unaltered, little boy kitty cat with genes from a few million years of ancestors who were good at climbing while being careful enough to reduce risk. The ability to be 30 feet up a tree, safely, for the first time in his life, was one of a huge list of just astounding things that little guy's thinking, his intelligence, provided.

Exercise: Explain how genes might play a role in the concept formation in 2. above!

Risk also plays a role in mathematics.

What is Mathematics? If we are clear on points, lines, planes, and angles and have some clever ideas, or just look them up, then we can state and prove the Pythagorean theorem. That example is also how the rest of mathematics works — assumptions, clever ideas, proofs, theorems, results. So, if in practice we do have the assumptions, then we can bet the farm that we also have the results.

When we have the assumptions, the results are rock solid; the results, and, really, the theorems and proofs, are the most solid information we have in our civilization and at the top of the list of what is most powerful.

However, commonly in applications of such theorems, there are some errors, e.g., in the input data. So, we can get interested in approximations and reducing errors. From some fundamental work in the mathematics of probability, one such source of error is risk, so that for a better approximation we can want to reduce the risk. That's a short description of the reason behind some of the interest in the mathematics of risk.

But, IMHO, such applied-math risk reduction is not key to how my kitty cat did so well on his first effort climbing 30 feet up a tree and, really, is not key to how thinking works.

E.g., for the associations I mentioned, one could leap to guess that the math of independence, expectation, variance, covariance, correlation, conditional independence, conditional probability, conditional expectation, least squares, etc.
could play a central role.

I don't believe that guess: My guess is that such approaches to the associations are too simplistic. Why? For one, it is a wild shot in the dark to guess that we really have the assumptions for any relevant theorems; correction, in practice for goal (B) we will essentially never have even a good approximation to the assumptions and, instead, will have less than even a wild shot in the dark. For another, the conclusions from the theorems are too simplistic. E.g., a mouse using such thinking would quickly get tricked by a smart kitty cat, e.g., maybe with just the classic trick of bait, scaring the mouse out of hiding so that it could be caught, just waiting very patiently until the mouse starts to move, etc.

Summary

Those are my thoughts. It was fun to type in the thoughts today, but I've had them for years. I don't intend to do more with those thoughts for a long time, or ever. Instead, I have other interests.

Here is another broad idea: If you believe what I wrote here, then we see that we have at least two hardware approaches to the goal of (B) — (I) biology and (II) current micro-electronics. So, for goal (B), the basic physics of the universe admits at least those two approaches.

Guess: What is special about the physics of this universe? It admits those two approaches to goal (B).

Exercise: What other approaches are admitted?

Exercise: Is there anything still more powerful than goal (B)? For this exercise, my guess is no. That is, in kind, we have all there can be in this universe. The rest is in degree.
Customer service is usually stupid because the reps aren't empowered and usually aren't embedded well enough into other parts of the company.
While I know bots exist, I googled how to get bots into Facebook Messenger and still could not figure it out.
I looked into it after I did my Zork port for Kik. As it turned out, there were already several people who had ported Zork to a Facebook chat bot. It seems like most people are developing the bots in node, from what I found. Here's a starting point: https://developers.facebook…
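For anyone else stuck on where to start: stripped of the HTTP plumbing, a Messenger- or Kik-style bot is just a function from incoming text to reply text, which the platform's webhook calls for you. A minimal, framework-free sketch of that "brain" in node (the commands and replies here are invented for illustration; the real webhook payload shape should be taken from the platform's developer docs, not from this sketch):

```javascript
// A trivially simple bot "brain": map incoming text to a reply.
// Platforms like Messenger and Kik deliver the user's message to a
// webhook as JSON; this function is the part you actually write.
function buildReply(incomingText) {
  const text = incomingText.trim().toLowerCase();
  if (text === 'hi' || text === 'hello') {
    return 'Hello! Type "help" to see what I can do.';
  }
  if (text === 'help') {
    return 'I can echo anything you say. Try typing something!';
  }
  // Default: echo the message back.
  return `You said: "${incomingText}"`;
}

console.log(buildReply('hi'));
console.log(buildReply('chat bots are hard'));
```

Keeping the reply logic a pure function like this also makes it easy to test without any network calls, then wire it to the platform of your choice.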
Ted and the Kik team have taken a realistic approach to chatbots and AI. They know where NLP tech *really* is and have embraced the constraints to provide good experiences via a button-based interaction model.

Facebook and Slack rely more on users discovering, understanding, and typing structured commands.

Kik is as realistic as Apple is in this realm. Apple is doing visual micro-apps in a messaging context while Kik is doing chatbots, but with buttons that replace the keyboard. Typing is minimal in both cases, and that is probably the way to go to get users accustomed to this new model for now.

More about what Kik is doing, in my opinion (and a few team members at Kik have agreed with me): https://chatbotsmagazine.co…

Details about what Apple is doing and how it is similar to what WeChat did: https://medium.com/in-beta/…
"His big takeaways are that AI-driven chat bots have underwhelmed and that conversational UIs are not what users are looking for."
The button-based vs conversational argument is at the superficial tip of the iceberg.

The iceberg underneath — the one that sinks all AI that aspires to be "human-like" / "human-centric" — is that NONE of the big techcos or research institutions (from Stanford to OpenAI to W3C) know how to get the machines to understand our language and its meanings.

For 60+ years they've thrown all sorts of maths & science at making bigger, faster hammers to crack the nut (aka correlate words as vectors).
Unfortunately for 99.99999999999999999999999999999999999999999999% of engineers, including the ones at Stanford and Google, they don’t know their Da Vinci.
After those from your slides. Do you have them posted somewhere?
I have a teenager. I don’t even understand what he is saying half the time.
Haha, yes, but your teenager being unintelligible to you isn't the same as the machines being unintelligible to us. Your teenager can't affect $ trillions.
@twaintwain:disqus Well said. You might find these folks interesting, too. They’re an alum of accelerator I went through and the founder’s research at UTEP is exactly in the area you’re talking about… https://theconversation.com…
Thanks for the link, Joe. Not even their 4th dimensionality of AR is sufficient. Their blog notes: "We have to understand people – how we move, talk, gesture and what it means when you put everything together. While people interact, we analyze how they behave, and look for different reactions to controlled characters' personality changes, gestures, speech tones and rhythms, and even small things like breathing, blinking and gaze movement."

There's a big missing piece of the puzzle for our existing models of intelligence: a piece that can't be parameterized by Descartes, who has parameterized everything in Machine Learning from image recognition to natural language processing.
Part of why SV’s smartest (including Google & Stanford’s best) can’t solve the Natural Language understanding problem is that whilst they’re great mathematicians and engineers and disciples of Descartes+Bayes, they’re not also GREAT ARTISTS and they don’t understand the art of language.
With this comment, I couldn't agree more. It goes to one of the questions about creativity, which intersects deeply with our (so far) crude attempts to develop AI. It's not an area I study, but from reading and following I gather we are still a long way off.
There is a move away from Descartes. See adversarial models.
Adversarial models are part of Generative Learning. They're a mathematical construct to deal with the central issue in unsupervised learning, whereby choosing an appropriate objective function ℓ(θ,P) that appropriately measures the quality of approximation between two or more systems is hard.

However, whilst it's more accurate than simple linear correlation between two plane layers of the neural network … there's still fundamental structure missing that's specific to getting the machines to language understanding.
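For readers who haven't met the construct being discussed: in the standard formulation of a Generative Adversarial Network (Goodfellow et al., 2014), a generator G and a discriminator D are trained against each other on a minimax objective:

```latex
\min_G \max_D \; V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big]
+ \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
```

The discriminator D is trained to tell real samples from generated ones, while the generator G is trained to fool it. Neither objective is a simple linear correlation between two layers, which is the distinction the comment above is drawing.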
So the UTEP team is also doing an iteration of Descartes. Descartes also led Noam Chomsky and Google's Natural Language Processing (e.g. Word2Vec) down the wrong path, by the way.

Once an algorithm assigns binary states in a Cartesian / Bayesian tree, such as female = 0, male = 1 or doctor = male, nurse = female … its language path is fixed and PRE-BIASED, and no amount of probability or statistics or increasing the quantity of "Big Data" can fix that!
In the spirit of chickens-crossing-the-road jokes…

Why did the orphaned bot cross the chasm?

………wait for it………

Because she needed greater adoption.
What did the therapist chat bot say to its patient?……………………………How does that make you 1’s and 0’s?
No it said there are 10 types of people. Those that understand binary and those that don’t.
Text chat bots are just prepping us for voice chat bots. Like Uber is prepping us for self-driving cars, wearables are prepping us for an exciting IoT world, and Facebook is prepping us for the end of the world.
"New user behaviors take time to develop and sometimes require a breakthrough app to get things started. That's where we are with chat bots. The hype phase is over and we are now into the figuring it out phase."

Behavior modification at mass user scale to create markets to niche, instead of niching markets that exist? Niche in this case being the function or position of an organism or population within an ecological community?
When they were announced, I tried several and discovered that they were harder to use than the web. They were a solution in search of a problem. The coolest thing I've seen was CNN during the Rio games allowing you to take a bot tour of Rio. They only did it one time.
Until we have a good approximation of HAL9000, their best first use case might be as an answering machine on steroids.
I recently started a new job that uses alex (https://www.meetalex.com/) as the recommended tool to figure out what healthcare might best suit each individual's situation.

While it's not quite a chat bot, I found the interaction very fun, easy to use, and most importantly, useful. I think that a guide that helps you walk through some preselected options is still going to dominate for the next year or two.

The only reason I ever call in or speak to an operator online is when the circumstances of my situation are complicated, and chat bots can't handle that quite yet. It's the same reason why I only use voice for simple commands and choose to type out more complicated queries.

I would love to use chat bots for everything, but I don't think it's quite the time yet.
Agree with your point about the hype phase being over. By the way, I still see so much shitty UX on web platforms that there is plenty to fix there. Being away from the Internet and cell, and jumping back into it occasionally, really shows you which platforms are intuitive and which ones are horrible.
I have always seen chatbots as a/the way to script out – and prototype – the potential logic and interactions of A/I apps.

On this, when Fred posted 4 months ago, I immediately thought of the classic game Zork as an example of something that simulated what A/I might look like, and also essentially was a chat bot. I felt like it was important to have developed at least one chat bot myself by hand, as some of my clients are looking into chatbots, so I built a Zork port for Kik in node (chat to "zorkgame" on Kik if you'd like to try it).

I am on the "business side" and not much of a developer, but was surprised that it really only took me a couple hours to put a chat bot together in node using the Kik API. What I took away is that these are an accessible way to develop prototypes and MVPs, and hence, as Fred says in his second-to-last paragraph, a plausible and attractive path for new projects.
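As a concrete illustration of why a Zork-style game maps so naturally onto a chat bot: both are just a loop of text in, state update, text out. A minimal sketch in node; the tiny two-room "game" below is invented for illustration and is nothing like the actual Zork port:

```javascript
// A two-room text-adventure engine: the same request/response shape
// a chat bot platform delivers, minus the HTTP plumbing.
const rooms = {
  field: { description: 'You are in an open field.', exits: { north: 'house' } },
  house: { description: 'You are in a small white house.', exits: { south: 'field' } },
};

// Per-player game state (a real bot would keep one per chat session).
function createGame() {
  return { room: 'field' };
}

// Takes the player's message, updates state, returns the bot's reply.
function handleCommand(state, input) {
  const cmd = input.trim().toLowerCase();
  if (cmd === 'look') {
    return rooms[state.room].description;
  }
  const match = cmd.match(/^go (\w+)$/);
  if (match) {
    const next = rooms[state.room].exits[match[1]];
    if (next) {
      state.room = next;
      return rooms[next].description;
    }
    return "You can't go that way.";
  }
  return 'I only understand "look" and "go <direction>".';
}

const game = createGame();
console.log(handleCommand(game, 'look'));
console.log(handleCommand(game, 'go north'));
```

Wiring `handleCommand` to a messaging platform's webhook is essentially all a port like this requires, which is why it makes such a quick prototype.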
One Valley investor suggested I code my system as a chatbot. I said it would take a lot more steps for the user and a lot more code for my engineers if it was coded as a chatbot => NEEDLESS FRICTION & CASH BURN.

Chatbots are fine if all your algorithm is doing is making a call to an API like Yelp's and extracting structured data that already exists, e.g. Restaurant X got 3 stars, is located at latlon and has a $$$$ price. It's not so efficient or effective for the Nat Lang problem.

So … it's a great thing I didn't expend any time or resources coding chatbots — despite the Valley super-hype over the last 6 months. LOL.
For. Real. And, corollary: don’t try to make chat bots that solve for what search can do better if you invest properly.
Bot discovery is still a challenge. Even at botlist.co, finding what adds value is a challenge. The bots also don't decrease friction. Given that I don't have Messenger open on my phone all the time, it's easier to open the app.
So you’re not chucking in on chat bots just yet?
I wonder how this could work as a block bot? Albert Wenger’s notion of the right to be represented by a bot, but autonomously directed and controlled on a personal sidechain.
We also have to understand that the smartphone is not so much a phone anymore. We were trained to use phones as audio devices, but since the iPhone in 2007, we've been retrained to use the smartphone as a computer and swipe/type device, including in many places where audio is bad/noisy/unclear or it's just plain tough to talk. For a ChatBot to work its best, we need to be in a quieter space, like with HAL in 2001. Retraining is needed, but the value has to be great for people to change their behaviors; maybe 10X great.
Sorry, when we're talking chatbots, aren't we talking about a technology that is text-based? By "for a ChatBot to work its best, we need to be in a more quiet space", are you thinking of a chatbot that uses VR? I think most chatbots I've seen are text-based, but obviously they could be the scripting behind a VR-based interface.
Rob U, you are most likely correct. I made the assumption that to be “magical”, chatbots had to be about verbal chat (e.g. Siri, Echo), not text… my bad
It's all good. I kind of wonder why we separate them anyway. The interface (text vs. voice) is different, but the logic and AI behind them need not be. It's still asking questions and making requests of "a machine" and expecting a response.
Yes, as a vertical we have been noodling on what we think users want and have some cool stuff planned for next year. It has paid not to rush so far… but this is still an important space.
That’s a thoughtful article from Ted; I agree with the gist of his argument. A bot is a chatbot like a smartphone is a phone. Chat need not be part of the experience at all.

We’ve brought our iOS app to messengers while retaining basically the same experience, with little-to-no chat involved. We’ve experienced all of the benefits that Ted talks about.

That said, there’s strong user demand for interacting with a computer via a conversation. Lots of anecdotal stories suggest this. The problem is that the technology today is far from being able to handle this in anything but a highly constrained way.
What portfolio company is Fred referring to?
Is it really not for a lack of trying, though? Even though there are an incredible number of bots, most of them seem to be put together quickly without much testing or thought. They feel like demonstrations, not serious projects.

Even the larger companies seem to have been willing to put their name on something that was probably a 20% time project for a few of their developers. There has been little reason not to, for them, simply because the quality bar has been so low and they have wanted to test out the space or capitalize on the hype with an easy marketing win.

The problem isn’t only the technology (although existing tools do impede UX); the problem is that these are being put out there without care or proper consideration as to what a good user experience is. There are bad bots. And there will eventually be good bots. It’s still too early to draw a conclusion on the space other than “if you don’t pay careful attention to user needs and put in the effort to address them, you will fail.”
The big potential that I see is in healthcare and providing immediate support. It doesn’t have to be advanced; simply providing information on a timely basis could be very helpful.
Exactly. Still waiting for the breakthrough bot. The Angry Birds or Pokémon Go of bots.
WeChat is a Chinese phenomenon. I think it is because it works. No, I mean that. We have a Chinese datacenter in Shanghai. The vast majority of traffic is through WeChat. You just can’t serve China from outside the country, or even from Hong Kong. Your performance and uptime will suck. It pained me to put a datacenter in China, but we had to.

Funny story: I sent a 6’6″ 350lb red-haired man with a Grizzly Adams beard to set it up. He had never been to Asia. I told him: “Look, people are going to come up to you and get right in your face and take pictures. Just walk away; they’ve never seen something like you.” Also, after that long flight, force yourself to take a 30-minute walk for your legs.

He texted me 5 minutes into his walk to say two Chinese girls came up to him and said “You a BIG American, you must like beer!” and then both proceeded to rub his stomach.

As Americans we have to realize some things are just different.
Chat bots need to solve real problems with less friction. Why would I chat with a bot to find out the weather when I can either ask Google or tap once on a weather app of my choosing? The key mistake here is that developers are operating on the assumption that people want or need chat bots. They don’t. They don’t until that chat bot creates value for them by either removing friction from their lives or doing something they can’t do any other way.

Food for thought… Where can chat bots replace human interaction to create value?

Perhaps a chat bot to help with medication adherence. It saves the physician and pharmacist time because they aren’t fielding as many patient phone calls, while the elderly patient who doesn’t even have a smartphone or doesn’t use apps gets a friendly chat bot that reminds him to take his meds, asks about any side effects he might be experiencing, can answer basic dosage questions, and can even alert the physician if the bot thinks the patient is not adhering to the prescription. Value for the patient. Value for the medical professional.

The current push from developers to build chat bot products reminds me of the evolution of video chat. AT&T learned the hard way that while the idea of seeing someone while talking to them from long distance seemed like it would change the world, people just didn’t want it. It didn’t solve a real need people had, and it created NEW friction. Now I have to consider my appearance before calling grandma or joining the work conference call from home. It required hardware and was costly. It wasn’t until video chat became low-cost software accessible from any computer (Skype) that it really gained traction. The tech finally found its product/market fit, and ease of use and pricing were key.

In my opinion, chat bots don’t have to be super smart to create value and solve a problem, BUT developers need to make sure the limits of AI technology don’t destroy the user experience.
Consider what chat bots can do really well today and build around that. In fact, I believe layering chat experiences onto existing workflows is where the most value is today, not building entire services around chat. Excited to see how it plays out over the next year!
We’re missing a necessary step on the path to adoption: what we really need now is good, mobile-first, text-based communication with companies. In other words, the same way I shoot off a message to my partner on WhatsApp, I should be able to message Delta with a travel-related need. We’ve tried to jump straight into AI bots without that intermediary step that will (1) mirror a behavior we’ve already established (text-based communication) and (2) create a new variant of that behavior for customer service purposes.

One of the only excellent examples of this that I’ve encountered is Hotel Tonight’s stupendous concierge service. I encourage you to try it out the next time you use them and then imagine what interacting with every other company this way would look like, instead of the endless hold music and cheat-codes to reach a human representative.

We’re also missing a master platform like WeChat, but I don’t think that’s necessarily as big of an impediment as the lack of focus on a specific, consumer-friendly (non-nerd) issue.
Interesting. If it could be so great (which I agree it could be) and it mirrors a market behavioral need (which it does), how dramatic a technological leap is the platform?
I’ve had a similar experience with WP Engine’s customer support since they got serious about live chat. I always get a quick and useful response from them. Once you’ve had customer support that good, it’s hard to go back!

Of course, they’ve got real people doing the support. If someone could reproduce that level of customer support as an automated system, they’d be a guaranteed billionaire.
This may sound extreme, but if I’m faced with a human and they upset me, I can get to speak to their superior. Call this an emergency override. I imagine chat-bots will not be built with a “contact my supervisor” function, because enterprises are short-sighted. For this reason:

a) I would not risk my company on an unproven technology.
b) A chatbot is not a company; it is a feature or product, so why take this risk?
c) I will avoid chatbots wherever possible, and where that’s impossible, take my luddite business elsewhere.

PS:
1) I still prefer to pay for food at a human checkout, because I get to smile at a checkout person.
2) So long as 1) holds, I prefer and will deliberately take my business to places offering human contact.
3) The difference is not tiny. It’s the difference between a pub and a drinks dispenser!!!
I take your point and feel much the same. But if a simplified, intermediate, OS-level chatbot language-set/syntax interface could quickly surface a complete palette of the most-used functions buried within the siloed collection of installed apps, couldn’t that be a welcome reduction of interface friction?
The big point I’m hearing from Kik and FB Messenger is that AI / NLP is hard, and v1 of bots has underwhelmed so far. My (early stage) company is building a bot and a native Android app, and right now we see FB Messenger and other messaging platforms as a distribution channel for user acquisition and a way of delivering a simple user experience instantly. We are laser focused on a very specific use case (creating and agreeing to simple contracts for the sale of goods, services, and personal loans). The process of filling in the details of a contract is very structured, and negotiating the details is inherently “conversational,” so we feel like messaging platforms are well suited for the experience.
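That kind of structured flow is often built as a slot-filling loop: the bot walks a fixed list of fields and only asks for what is still missing. Here is a minimal sketch of that pattern; all field names, prompts, and the function itself are hypothetical illustrations, not this commenter’s actual product or any platform’s API.

```python
# Hypothetical slot-filling loop for a structured "fill in a contract" chat.
# A real bot would send `prompt` over a messaging platform and wait for the
# user's reply; here the replies are simulated with a plain dict.

CONTRACT_SLOTS = [
    ("buyer", "Who is the buyer?"),
    ("seller", "Who is the seller?"),
    ("item", "What is being sold?"),
    ("price", "What is the agreed price?"),
]

def fill_contract(answers):
    """Walk the slots in order, returning (contract_so_far, next_question).

    next_question is None once every slot has an answer.
    """
    contract = {}
    for slot, prompt in CONTRACT_SLOTS:
        reply = answers.get(slot)
        if reply is None:
            return contract, prompt  # stop and ask for the missing field
        contract[slot] = reply
    return contract, None  # all slots filled; contract is complete

# The user has answered the first two questions so far:
partial = {"buyer": "Alice", "seller": "Bob"}
contract, next_question = fill_contract(partial)
# next_question is "What is being sold?"
```

The appeal for messaging platforms is that each turn of the conversation maps to exactly one slot, so the bot never needs open-ended language understanding, only an answer to the one question it just asked.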
No comment, on purpose. OxO <- I look like that
I agree with Ted: bots are better without conversation. I’ve been using the Kik Yahoo! News bot since Ted showed me some bots, and I like it. It notifies me when there’s something new. I think it’s early days, but there’s lots of potential.
You mean like this? http://arstechnica.co.uk/te…
“His big takeaways are that AI driven chat bots have underwhelmed and that conversational UIs are not what users are looking for.”

“AI … underwhelmed”? Who expected something else?

Ah, I can’t help being reminded of “These are not the droids you are looking for.”

So, the fall of AI failures is nearly here. Then another AI winter. Then, in another 20 years, after a new crop of naive, gullible, exploitable users, an AI spring of hope. Then an AI summer of hype. Then maybe, in another 20 years, for the first time, an AI fall that is not all just failure!

I know; I know; it’s such a bummer, a downer, to be forced to guess that our desire to make the big bucks from something just totally silly is too much of a long shot and that, instead, we might actually, really, have to bite the reality bullet and build something with utility, as in useful; that is, not just some software tricks looking for a problem, but software that is part of solving a problem that is real and existed before the software was written!

Ah, reality is such a bummer!

There’s an old recipe for rabbit stew that starts out, “First catch a rabbit.” Or in software: first get a real problem that needs to be solved.

If we have bad coffee, we can pour in more cream. Maybe if we have bad software we could pour in some AI! Adding the cream just has bad coffee ruin good cream. Ah, my analogy failed! Adding AI doesn’t ruin good AI, since so far there isn’t any!
If human to human conversation is not perfect, how will Bots ever be? However, as they become more powerful and common, people will adapt to them as much as they will adapt to people. There was a time when information from a library was more trusted and widely used than information from Google. We have now been trained to trust our technical applications. And, as we (the humans) become more “trained” to work with bots (primarily because we eventually will not realize it is one), the more prevalent these will become.
Gave this some thought yesterday and here are some thoughts on making conversational interfaces and bots work: https://medium.com/@kleaf/w…
This is pretty much in line with this article from April: http://dangrover.com/blog/2…

When reducing friction is one of the main purposes of chatbots, typing every single letter becomes an unnecessary hindrance. Instant interactions with preset options are far more desirable, with conversational UI available if needed for specific purposes.
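The “preset options over free typing” idea usually shows up as quick-reply buttons: the bot sends a short menu of tap targets instead of parsing arbitrary text. A generic sketch of such a message, under the assumption of a made-up payload shape (real platforms each define their own, but the idea is the same):

```python
# Hypothetical quick-reply message builder. The dict shape below is a
# generic illustration, not any specific messaging platform's API.

def quick_reply_message(text, options):
    """Build a message whose primary interaction is tapping a preset
    option; free-text entry remains a fallback, not the main path."""
    return {
        "text": text,
        "quick_replies": [
            {"title": option, "payload": option.upper()}
            for option in options
        ],
    }

msg = quick_reply_message(
    "What would you like to do?",
    ["Track order", "Browse deals", "Talk to a human"],
)
```

Because the user taps a button, the bot receives a known `payload` back and never has to guess at intent, which is exactly the “instant interaction” Grover describes in WeChat official accounts.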
There is no established user behavior for chat bots. No one wants to talk to a call center or any of those systems. Google is the best conversational user interface: you type, you get a result; if it’s not what you’re looking for, you adjust your query and repeat until you find the answer.

I believe “bots” are more likely to appear as assistants / recommendation engines, collecting and processing a lot of data in the background and providing information and recommendations at the right time.

My 2 cents 🙂
Developing products through the lens of a Conversational UI/Bot is a fascinating, fruitful exercise. In many ways, it forces teams to get into a collaborative kind of UX practice that might otherwise seem foreign to them. So, even if you aren’t planning on creating a bot, it’s worth reimagining your product in that way, if only to breathe new life into the way you think about your audience.