Last week I started getting lots of stories about Kendrick Lamar and SZA in my Google Now news feed on my phone. I thought to myself “why all of a sudden does Google think I’m interested in Kendrick Lamar and SZA?”
Then I recalled sending a text message to my son about the new Kendrick/SZA song from the Black Panther film and thought “Google saw that text message and added Kendrick to my interests.” I don’t know if that is in fact the case, but the fact that I thought it is really all that I am talking about right now.
That whole “why did I get this recommendation” line of thinking is what the machine learning industry calls Explainability. It’s a very human emotion and I bet that all of us have it, maybe as often as multiple times a day now.
I like this bit I saw on a blog post on the topic today:
Explainability is about trust. It’s important to know why our self-driving car decided to slam on the brakes, or maybe in the future why the IRS auto-audit bots decide it’s your turn. Good or bad decision, it’s important to have visibility into how they were made, so that we can bring the human expectation more in line with how the algorithm actually behaves.
What I want on my phone, on my computer, in Alexa, and everywhere that machine learning touches me, is a “why” button I can push (or speak) to know why I got that recommendation. I want to know what source data was used to make the recommendation, and I’d also like to know what algorithms were used to produce confidence in it.
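A “why” button presupposes that a recommendation carries its provenance with it: the source data that fired and the algorithm that scored it. Here is a minimal sketch of what that could look like; every field name and signal below is made up for illustration and reflects no real vendor’s API:

```python
from dataclasses import dataclass, field

@dataclass
class Recommendation:
    item: str
    score: float
    signals: list = field(default_factory=list)  # source data that triggered this
    model: str = "unknown"                       # algorithm that produced the score

    def why(self) -> str:
        # The "why" button: render the stored provenance as a human-readable answer.
        reasons = "; ".join(self.signals) or "no recorded signals"
        return (f"Recommended '{self.item}' (score {self.score:.2f}) "
                f"by {self.model} because: {reasons}")

rec = Recommendation(
    item="Kendrick Lamar & SZA - All The Stars",
    score=0.92,
    signals=["text message mentioned 'Kendrick/SZA'",
             "played the Black Panther soundtrack"],
    model="collaborative filtering",
)
print(rec.why())
```

The design choice worth noting: explanation here is cheap only because provenance is recorded at recommendation time, not reconstructed after the fact.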
This is coming. I have no doubt about it. And the companies that offer it to us will build the trust that will be critical to remaining relevant in the age of machine learning.
You want it to explain the magic, to take away the feeling of hocus pocus and the sense that there is a sentient being out there with full transparency into your life, behaviours, tastes and desires. The “why” button will serve as a pacifier to cure that anxiety, nothing more. I don’t think I want the “why” button. I’m resolved and resigned to crazy powerful AI heading our way. And my position is: bring it on.
They might offer this, but it won’t be as easy as a “why” button. That would open a can of worms; what they need is something plausible on “page three” that lets you click six times and get what you want if you feel you need it. The number of people who care about this is small, I feel (although the ones who do are vocal, for sure), and the chance of a bad outcome from the resulting negative attention is too great if they offer a simple, easy-to-click “why” button. The point being that having everything out there can create problems and isn’t something everyday users actually care about.
From the blog post, this more or less proves part of my point: “or maybe in the future why the IRS auto-audit bots decide it’s your turn.” Never. The IRS discriminant function will never be known. There is a great deal of anecdotal evidence and speculation regarding the factors, but never anything concrete. Knowing how the IRS decides would obviously allow someone to game the system and avoid an audit. So why exactly would they reveal it? Ditto for college applications.
Is it likely that this is coincidence? I remember, before everything we did online, texted, or spoke in front of a phone was tracked, that this would happen to me all the time… I’d be thinking of a song and it would suddenly start playing on the radio. I’m a savant! But psychologists call this confirmation bias: we only remember the times these coincidences occur, and not the myriad times they do not. BTW, I like your why button idea nonetheless.
A why button sounds like a button asking “what are the creepy ways company X has spied on me?” All thoughts aside, any bias by the algorithm toward suggesting Kendrick Lamar and SZA is a good bias. I count Kendrick and Top Dawg Ent. among the most influential rappers/groups of this century. https://www.youtube.com/wat…
One challenge is that often no one can explain it. Take a lot of the computer vision stuff, or predictions about the next move in chess or Go, or even predictions on who will win a sports event or what a stock price will be. Many times the pattern detected is so subtle, and depends on so many variables and contingencies, that the answer is beyond human comprehension. There is a trade-off between explainability and accuracy.
I echo this thought. A big part of ML is finding patterns that humans can’t see. Your example is fairly simple: input, a mention of <artist>; output, <artist>. AV, finance, and security all mine many inputs, and explaining attribution is very hard with highly dimensional data.
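To illustrate the easy end of the trade-off being discussed here: with a simple linear scoring model, attribution falls out directly, because each feature’s contribution is just its weight times its value. The features and weights below are invented for illustration; the point is that this built-in “why” is exactly what disappears with deep, highly dimensional models:

```python
# Invented features and weights for a toy linear recommendation score.
weights = {"mentioned_artist": 2.5, "played_genre": 1.2, "friend_listened": 0.4}
user = {"mentioned_artist": 1.0, "played_genre": 0.0, "friend_listened": 1.0}

# For a linear model, attribution is trivial: contribution = weight * value.
contributions = {f: weights[f] * user[f] for f in weights}
score = sum(contributions.values())

# Rank features by how much each drove the score.
for feature, c in sorted(contributions.items(), key=lambda kv: -kv[1]):
    print(f"{feature}: {c:+.2f}")
print(f"score = {score:.2f}")
```

With thousands of interacting features and nonlinear layers, no decomposition this clean exists, which is the attribution problem the comment describes.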
The intel community (NSA) is way out in front of this predictive learning. They can take the combination of what type of toothpaste you use, whether you have an electric toothbrush, and your garbage, and predict whether you are a terrorist or a spy. It is at the root of why the US has not had a second 9-11. Prayers. JLM www.themusingsofthebigredca…
Hmmm. False negatives and false positives. I suspect that many patterns are castles in the clouds. I also suspect that the more emotional the response to an output, the more confirmation bias in interpreting the inputs. Do we feel safe because we are, because we want to be, or because there is no benefit in fear? I have never been killed; am I therefore invincible? An absence of evidence, etc. But I am just a big red *engine* – what do I know?
Agreeing more with you than you do with yourself. Remember, our most sophisticated enemies are drowning us with disinformation in order to break down our algorithms and to send us down rabbit holes. They are also willing to sacrifice some of their own people to lull us to sleep while they fashion the killer blow. Evidence must be paired with analysis and, sometimes, analysis is just a hunch. This is why humint will always be necessary alongside sigint and other forms of intelligence. JLM www.themusingsofthebigredca…
I only can hope that the NSA is more successful with their combinatorial algorithms than Amazon…
ghosts in the machine.
I, Robot’s Dr. Lanning?
i am a nerd jason. :] https://www.youtube.com/wat…
Thought you meant the Alan Parsons Project!
See the 2nd step after START? Predicting a category. The tradeoff already happens there. https://uploads.disquscdn.c…
Gee, until I zoomed in I thought that was a highly simplified diagram of the steps I’d go through to get ready to have a girl over for dinner!!! :-)!!!
The explainability of why Google thinks Fred is into Kendrick Lamar. The AI is dumb as a box of hammers, literally. https://uploads.disquscdn.c…
But a lot of people don’t perceive it as dumb, so “it works” and the vicious circle is fed.
No one in the Valley cares about the “WHY?” No one in blockchain etc. cares about it either. Look at the way the systems and code are set up. Everything has been geared only towards convenience and efficiency for a long time. https://www.youtube.com/wat…
I agree completely. This is going to be snide, to make my point: most marketers know nothing about technology (if they did, they wouldn’t have gone into marketing). And they are not used to digging in and actually doing things (why else would ad agencies exist?). And you don’t have to be precise. You admit half works, half doesn’t, but you don’t know which. So they want automagical solutions, and are willing to believe in them.
Because data-driven branding is really hard, which is why there’s tons of back and forth around marketing and advertising tech investment, much of it driven by ML.
Yes, I agree. It can just be really annoying. You can’t make this up: yesterday my wife used my computer (hers was out of batteries) to search for bras. Now I am flooded with banners and such for women’s undergarments. I asked her, did you use my computer? Yes, for some new web company that advertises on TV about nice-fitting ones.
Well, she expressed intent to buy bras, and it’s very difficult to separate her out from your cookie data. My advice: go look at some men’s clothing, and the ads will disappear soon after.
You mean figuratively
The difference between statistics and machine learning, one reports the probability of a type 1 error, the other doesn’t.
The trade-off is not just on accuracy, but also trust. Many of the variables driving the machine learning often include data from online and offline third-party data brokers. Think of what the user is going to feel if the explanation says “This ad is shown to you because you bought baby diapers from Target last week, you have been buying baby food from Safeway, and your annual income is > $200k.” Getting recommendations from Amazon or Netflix based on buying behavior on their sites is one thing. But using third-party data to personalize is quite another. The latter does not help build trust, and I would argue it is not very effective either, as third-party data tracked without consent lacks context.
Knowing the Why is good for satisfying our curiosity, and hopefully we learn something, but I doubt these companies will be so forthcoming with the real Why without giving away hints about their proprietary algorithms. I think there are two levels of Why. 1/ The Simple Why is what the algorithm company wants to disclose to satisfy the user, e.g. the car sensed an object. 2/ The Real Why answers a second Why about the first answer, and that is the real inner working, which shall remain mostly proprietary, e.g. exactly what distance and sensitivity levels caused it, and what other parameters entered into the algorithm.
I give reason #3, which is that they know giving the why will expose the imperfections in their algorithms and allow people to game them. Not saying I don’t want to know why. I have stopped using Google and Chrome because I know that somehow if I search for “bathroom remodeling” I get ads in my work email box. I am positive. As to self-driving cars: as somebody who owns two, I can tell you those algorithms are very imperfect. They might be nice as you go down the highway, even in traffic. But say, like me, you hit a big bridge where they are cleaning the drains and you are in the wrong lane; then you straddle the white line because there is an oncoming tractor and cars are over the double yellow; then you again take a wrong lane for construction. That is when it freaks out and says you need a break!! I think about where I would like to break my foot off on the person who designed that “feature.”
Truth is that for feeding ads you only need the coarsest criteria. In how we shop and consume, we are simply not that complex.
I agree. In the case of digital advertising, using the term “machine learning” sounds like an overstatement to me. The bot that serves ads to me is absolutely dumb.
When it comes to ads, it’s almost never really about optimizing for the target and almost always about optimizing for the shooter… if you apply real machine learning, you probably find out “almost nobody” wants this offer… but then what are you able to charge for?
True. I complain a lot about unwanted ads thrown at people. The most common counter-reply I get is “it works,” but even so I continue to think about better and more elegant ways to deliver messages to consumers. Make it consensual, for real or simulated. Ask them what they want; don’t pretend to know what they want through coarse and simplistic analyses. Focus on the people who didn’t react, and ask why.
You removed your post about wine….. :-) I think you have pointed out that you sell high-end wine by the story. Look at my two favorite places in Napa: http://www.pragerport.com/ http://www.atlasofwines.com… I knew both owners well once. (Stan Anderson is unfortunately no longer with us.) I was lucky in that Mitsubishi Corp, who I worked for, would buy literally a container of different ones to ship back for Christmas, and I got to place the orders. (I also love Cakebread, BV, Stags Leap and Silver Oak.) How lucky was I as a 20-something… we’d like to buy 250 cases; yup, the NYK driver will pick up, and he’ll have a bank check from Mitsubishi for the full amount, FOB, not your problem once it is off your dock. Game?
Yes I did. Thanks for sharing that story, Phil.
I was commenting on your wine post and had to go out in a rush this afternoon. My wife’s aunt had her 100th birthday today and we had to pick up the cake!Made me think about how do I react to wine advertising or marketing. I am hooked to my favorite brands and varieties. The offer in Chile is broad so there is quite a lot of differentiation and positioning through marketing. Labeling is more sophisticated and artistic than before probably targeting younger and aspirational audiences. Very detailed marketing.
My relationship to wine is somewhat unique, as I’m ingrained in the artisanal, natural producer community, where there is almost no marketing outside of social, and the wines are brokered only by small shops because the productions are tiny by definition.
Tweeted by MIT’s former Director of AI Lab. https://uploads.disquscdn.c…
I can tell you what the problem is: the fire truck is “not parked correctly.” The algorithms HATE non-standard behavior. I’ll give another one. On my way home I go past a big hospital and fire station. I had an ambulance give me the baaaap, baaap, baaap at a red light, with sirens and lights on. That, for us, means look, then run the f’ing light and get over into the wrong lane once the opposing traffic has stopped to let them go, because they have somebody having a serious issue in back. I had all kinds of crap going off. The car tried to stop, it flashed heads-up displays, it gave me warnings. However, at least where I live, when you get that noise the correct behavior is to do what I did.
NYT, 2012: “Big Data proponents point to the Internet for examples of triumphant data businesses, notably Google. But many of the Big Data techniques of math modeling, predictive algorithms and artificial intelligence software were first widely applied on Wall Street. At the M.I.T. conference, a panel was asked to cite examples of big failures in Big Data. No one could really think of any. Soon after, though, Roberto Rigobon could barely contain himself as he took to the stage. Mr. Rigobon, a professor at M.I.T.’s Sloan School of Management, said that the financial crisis certainly humbled the data hounds. ‘Hedge funds failed all over the world,’ he said. The problem is that a math model, like a metaphor, is a simplification. This type of modeling came out of the sciences, where the behavior of particles in a fluid, for example, is predictable according to the laws of physics. In so many Big Data applications, a math model attaches a crisp number to human behavior, interests and preferences.” And then everyone wonders how the 2008 collapse happened, and the 2016–18 democracy/“fake news” mess ditto.
You are correct, and the financial crisis is a great example. You know I believe machine learning can help us; that is why I am a big believer in wearable robotics where the human ultimately controls. Full disclosure: I am an owner in http://www.wearablerobotics.com. But when exceptions happen… watch out! And humans are full of exceptions. I might search for something to see what I have, not because I am going to buy another.
This is the type of wearable innovation that makes real differences.* https://www.youtube.com/wat…
Yup, and not just for the disabled. http://www.wearablerobotics… They help people who load boxes for FedEx, and they help the backs of people in one of the hardest jobs on the back: nursing.
You only got alerts, right? Can’t you disengage fast during an emergency?
I don’t run with it engaged; I just occasionally experiment. But even with it off, the wheel shakes, the brakes judder, and a red light flashes in the windshield heads-up display. But I can move the car and make what the car “thinks” is a horrible decision. Believe me, where I live, the driver of that ambulance has a clear expectation (as do the police and other drivers) that I better run that damn light. I don’t know how to describe the noise other than how I wrote it, but it means get out of my way right now.
I think that eventually, when a critical mass is achieved, autonomous or semi-autonomous vehicles will collaborate, forming a network of surrounding nodes, combining the data of all their sensors and negotiating special cases such as the emergency vehicle you describe.
Crawl, walk, run. I think people think we are going to skip the first two. If you are on an uncrowded highway in clear conditions, no issue. But you have to have at least a finger on the wheel. I have no doubt this person had one on the wheel and a phone.
True. There is a lot of hype, inflated expectations, and market forces, aside from the complexity of the technical problems to solve.
When we lived in SW Ont, the speed limit for the 401 (which runs from Detroit to Montreal, across the top of Metro Toronto) was 100 km/h. If you did 100 in the right-hand lane when there were 4+ lanes, you would have caused an accident. I once came over a hill heading west near Cambridge ON, on a lovely Sunday afternoon, no one on the road, only to see an OPP truck on the shoulder. The officer’s arm came out with the palm down, then made an up-and-down motion… “take it easy, Hoss.” I looked down and saw I was doing 140. How does AI handle that? Sports has already figured out AI. You use high-end, tightly constrained stats to back up your soft judgements. If you think the Cavs should play all-around small (tell LeBron to be the 5; sit Thompson and IT), you will find the stat that shows the Cavs’ strongest differentials (on D & O) are with the Korver, JR, DWade, Love & LeBron lineup. Techies have the cart before the horse.
Love this simple concept of a “why” button. On everything. My toaster. The thermostat. The Spectrum cable box. But of course, especially, as you point out, in situations where we are being communicated to. The next time I get served an advertisement for hiking boots that I NEVER researched and only talked about to a friend a few days before, in person… the “why” button could say: “Because we heard you mention hiking boots while the Facebook app was live on your phone 3 days ago and thought you might be interested.”
the IRS tax algorithm will be paying you net. – The ‘WTF’ button comes first. ‘Internet We’.
I think about it differently. I don’t think we will be getting the why. I do think that the UX of this type of thing is primitive at best, and that all those who are doing it now with little foresight, from FB to Twitter, will simply erode their brands further. Amazon kinda invented the recommendation engine as a customer sales tool, and they are still pretty damn good at it. And I don’t hate them (as yet).
Duck Duck Go as the shield…something else as the scout….love it!
It’s not. Maybe something generic like “influenced by…”, but there will not be a true “explain” option… because the truth is *we* don’t even understand the “why” behind our own thoughts… and so the closer we get to true A.I., the less we’ll actually be able to know everything that went into the recommendation. Related: check out the AlphaGo documentary on Netflix. It’s exceptional, and one of the small tidbits in there is that even the programmers really don’t know why the computer makes the decisions it makes… the best they can do is get some probabilities and guesses as to why it did what it did…
Love this and agree intensely. It’s not going to be easy, and it may be a strong limiter on the power we can realize from more advanced algorithms. We need to set a floor for acceptable levels of explainability – e.g., does the general public need to understand it, or a third-party expert in a given field? In any case, this will be a healthy gating factor against any (however fantastic) AI-spun-out-of-control Armageddon.
This reminds me of a game I used to play when I signed up for marketing lists and similar things. I would give a different first name and make a note in my phone of what I signed up for (e.g., website, contest, etc.) and when. When new mail would arrive at my house for the fictitious person (or email) sent by various third parties, I could easily determine where my data entered the system and how long ago. I also had a similar reflection recently while listening to Pandora — I was getting a lot of advertisements for hair loss treatment, without having searched for the same. I am guessing Pandora is accessing pictures on my phone and determining I am a good candidate (darn).
I like the concept…..but I can’t really think of a precedent for such a generous ‘why’ button on…well anything. Either you understand what happens through lengthy study and research, or ‘it just works’ – either way you are not given much by the manufacturer / creator. Any precedents?
Oligarchical collectivism with a much smaller Telescreen than George Orwell even imagined…
I have yet to see AI do discovery as well as humans, but first let me explain what discovery means to me. I listen to a lot of jazz (primarily hard bop – think early-60s stuff) and jam rock (Phish, the Dead – don’t judge). But I like, and am open to, pretty much any genre out there, bar none. Right now I am rocking a traditional Persian classical music channel. Every music discovery service I’ve used keeps me in my lane, so to speak, with discovery and personalization. The same is true with movies or Netflix/HBO-style TV. And we all know about the echo chamber of political posts on social media. Our host and a few of his friends have an ongoing Twitter convo about music, usually with activity and suggestions, in the AM. “Listening in” to that thread has given me much richer new music ideas than anything AI ever has. I am sure an AI expert will explain that there are ways to do recommendations that consider and include altogether (or what appears to me altogether) unrelated recommendations. Moreover, recommendations, whether for a song or a job candidate, always mean more from a human I know and trust. Maybe that will change for me someday, but not yet. A recommendation feels like a gift from another soul, a sharing of our common humanity. I can’t get that from a machine. If a friend says “see that movie,” I think “I should see that.” If a machine tells me “see that movie,” I treat it as an ad and try to block and hide it. I can readily find the stuff that I know and like. It’s the stuff I don’t know, don’t yet like, and/or may make me uncomfortable that I want to find. And right now I find relying on a diverse set of humans to do that for me “manually” works best.
I’d like to know more about that Twitter convo!
I’ll let you discover it! It’s easy to find – not really one Twitter thread so much as a regular share among friends in real (i.e., offline) life, I believe.
The tool may only start the process of building trust – there already seem to be a number of people who just think they’re being spied on: https://gimletmedia.com/epi…
I think this is one of the myths of AI, in particular deep learning: that the model cannot be explained. A good deep learning solution should allow you to drill into the underpinnings of the model and the examples the model is basing its conclusions on. This is very true with things like recommendations, where the end user really wants to see what the recommendation is based on, and even to say “ignore this” or “pay less attention to this,” etc. It’s definitely a focus of what we are building at indico data.
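One concrete form of this kind of drill-down is example-based explanation: justify a prediction by surfacing the training examples the input most resembles. A toy sketch of that idea follows; the data is made up, and this is a generic illustration of the technique, not any company’s actual product:

```python
import math

# Tiny made-up "training set": feature vectors with labels.
train = [
    ([1.0, 0.1], "hip-hop fan"),
    ([0.9, 0.2], "hip-hop fan"),
    ([0.1, 1.0], "jazz fan"),
]

def explain(x, k=2):
    """Return the k nearest training examples as the 'evidence' for input x."""
    return sorted(train, key=lambda ex: math.dist(ex[0], x))[:k]

# A new user who looks like the hip-hop listeners: the explanation is
# simply the concrete examples the model's conclusion rests on.
evidence = explain([0.95, 0.15])
for features, label in evidence:
    print(f"similar to {features} -> {label}")
```

The appeal of this approach is that the explanation is stated in terms of real data points a user can inspect and veto (“ignore this”), rather than opaque model internals.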
It can definitely be designed in. A B2B example, but Demandbase’s Account Selection solution uses ML to identify accounts that are in market for your goods & services (based on a profile of what you’re selling and the types of companies who typically buy it, against 100TBs of data) and provides the rationale and examples of the data used in the selection process.
This is only sort of a myth. Say you’re doing NLP on pathology reports – this issue gets a lot more complicated.
Startup idea for the AVC community: a blockchain that stores your data and logs whenever someone accesses it. This should solve that problem, IMHO.
One of the challenges is that when techniques such as deep learning or neural networks are used, even the people who built the machines don’t necessarily know for sure why the machine has chosen a certain path (algorithm), so disclosure may be limited to what data sources were used. Of course, that’s a subset of cases, but when machines increasingly make increasingly complex decisions based on essentially unknowable logic, we have a broader issue to contemplate.
Is explainability reasonable when the product creators no longer understand how their products arrive at their results?
It seems that Fred is implying Duck Duck Go will soon launch a search version of “curated content” called “curated searching.” Kudos to Fred for floating this idea! It’s a compelling idea that would require customers to trust Duck Duck Go. Because I am Jewish, I would like to know that a “mashgiach” https://en.wikipedia.org/wi… was overseeing the business. Also, I think users would want Duck Duck Go to allow them to export their data so that they would not be locked in to Duck Duck Go. I imagine $10 per year and $25 per year non-freemium versions of such a service. This blog post is a welcome relief from Fred’s recent litany of crypto-shilling, complaints about consolidation in business generally and in the venture capital business in particular, and posting of fluff (filler) to meet Fred’s self-imposed requirement of a daily blog posting. One worthwhile blog posting a week on AVC.com would be better than daily postings that are often not worth reading.
The much-loved GDPR has a much-debated requirement for some form of explainability in the outcome of processing personal data. There is much debate on the exact meaning of the provisions, but they will bring challenges for many data uses.
You might value it at an individual level, but most won’t. But share that data across multiple contexts, apps, etc., and it has great network effects. No one is approaching it this way. It’s still a siloed, winner-takes-all model.
Crucially, if the recommendation algo says “we showed you this because we think you are interested in X”, I need to be able to say “stop”. Maybe I’m trying to kick an addiction (booze, social media), and I don’t want my habits reinforced.Giving the algorithm feedback allows the remembering self to choose which parts of ourselves we want reinforced.
The line at which personalization begins to feel like obtrusiveness is murky. I would not want to opt into data sharing ahead of time, because that would only make me cognizant and would probably lead me to withhold information that, had I not, would have created a level of personalization I’d enjoy without being obtruded upon. Yet, once obtrusiveness has occurred, I’d want to stop it. And that obtrusiveness line is a moving target for each user. I’d only want to be Explained after the line was crossed. All of this is to say that this is understandably difficult at a macro level.
Fred, what you are asking for can work for some simple cases, but generally, now and even more so in the future, what you are asking for essentially cannot exist. As a compromise, maybe you could be given a list of the input data that was used for the recommendation you got, but usually that would not be very informative and would still omit the processing of that data. E.g., for the processing, do you want to be told about clustering, nearest neighbors, principal components, or techniques much more novel and advanced?
Great topic again, Fred. But this line made me think of my own creepy instances of being served ads at times when I would not like my content being distributed and sold to the highest bidder: “What I want… is a ‘why’ button I can push (or speak) to know why I got that recommendation.” Nope. Not for me. Personally, creepy AI that reads your text messages and serves you ads based upon the content is where I draw the line on privacy. I am hoping that decentralized systems, networks, etc. will assist with managing the “creepy ad” factor.
CONTRIBUTORS: No explainability required; just opt out and stop giving free information to any medium that doesn’t pay you for it. Use Duckduckgo.com for your searches. Just like you promote to control your brand, control your content. Captain Obvious! #UNEQUIVOCALLYUNAPOLOGETICALLYINDEPENDENT
Kind of reminds me a little of Rap Genius and the broader mission to annotate all text on the web. Explainability of AI (EoAI) will be a thing that AI solves for, once it better understands human sentiment and translation.
Fred, I entirely agree with your “why” button. Before founding Kyndi, I saw this as a significant hurdle to the adoption of machine learning in the enterprise, so I made this a requirement when we designed our algorithms. Today we build Explainable AI products for government, financial services, and healthcare. While there have been academic debates about the need for explainability, we’ve found that when customers judge a business based on the outcome of a decision (e.g., credit decision), that is a prime use case for Explainable AI.
Yes! We all should want a why button or “explain these results.”
“This is coming. I have no doubt about it. And the companies that offer it to us will build the trust that will be critical to remaining relevant in the age of machine learning.” Maybe. My guess is that the company we’d most want this from, Google, is also the least likely to provide it. Conversely, the ones most likely to provide it are the ones that currently have the least amount of information.
I notice Facebook doing this to some extent. They have a link, “Why am I seeing this ad?”, and when you click it, they tell you the targeting mechanism(s). I’m not sure if it’s full disclosure, but it’s at least something. It would be nice to know where the data originated, though. Also, if Google is scanning your text messages (where you have the highest presumption of privacy) and later showing ads on another platform based on that content, that is creepy.
Our ML does Underwriting for top 5 banks and top insurers, reinsurers and national security. Explainability and the visualization of it is essential for us, not only due to the curiosity factor, but also due to audit, regulatory, equal lending, non-discrimination, and due process needs. It affects the techniques we can use, but we are pretty glad that we have these safeguards built in.
the “Why” button should not tell you why, but simply “what you’re most likely to believe is why”. win-win.
I’m not so interested in why the algorithm made its selection as I am in adjusting the algorithm when it makes (in my opinion) a “bad” choice – sends me an ad that I’m not really interested in seeing, for example. The only way those calculations can be improved is feedback.
Their recommendations bother me to no end.Outside of my job I am a standup comedian. I had this joke I was working on about going to Wyoming. Next thing I know I am seeing ads on my Facebook feed about taking a vacation in Wyoming.I never wrote it down. I just recorded it on my iPhone.This feels so intrusive. I can’t stand it.
A “Why?” button would also be amazing at the bar
No matter what we do in the future, Trust and the concept of Trust will loom large. Networks operate on that basis.
Explainability is critical in autonomous weapons as well as industries under regulation which need to explain part of their decisions. There is a genre of research which focuses on explainable algorithms. This is an interesting entry point to the topic https://www.darpa.mil/progr…
I’ve been thinking about this in terms of information dissemination (more specifically news and related information), rather than algorithmic comprehensibility. I’ve come to think about it more as ‘transparency’, akin to showing one’s work. IMO, what is needed is that information (especially conclusions) be presented in an interrogatable way which also places the fallibility of the information front and center. While I’m currently of the view this is what is needed, I see little economic incentive to provide information in this way. I’ve not yet worked out a business model which can aggregate and monetize the value of a more well informed and cogent public. For the same reasons, I am skeptical that “explainability” will be viewed as anything more than a nuisance for service providers.
I get “targeted” emails and “programmatic” ads all the time that try to sell me my own book.Context is tough in automated user interactions, and usually connected to “monoculture” marketing agendas that see every interaction as a push cross-sell/upsell opportunity.
One of Regina Barzilay’s students was actually working on this problem: NLP in medical data. Explainability is an active research area in a number of CS departments across the country. So if you can wait until 2222, we’ll probably get some answers to this question.
Just adding a link to a relevant article by David Weinberger: https://medium.com/berkman-…
Agreed. As more decisions are automated, Explainability will rise in importance. This topic is important at Go Moment in our hotel staff-facing dashboard for Ivy, the world’s largest travel chatbot.