The Ethics Of Algorithms

Last week, USV hosted one of our regular portfolio summits. We do this on almost a weekly basis now for some subset of our portfolio (engineering leaders, human resources leaders, marketing leaders, etc.). It is all part of our USV Network offering for our portfolio companies.

The summit last week was for the data analysts in our portfolio. These summits have produced some of the most interesting insights for us over the years. And last week was no different. There was a recurring discussion of the ethics of machine learning and recommendation engines.

So we decided to make that the topic of the week on usv.com. The discussion is here. Check it out.

#Web/Tech

Comments (Archived):

  1. William Mougayar

    Isaac Asimov nailed it with the 3 Laws of Robotics. The same applies to algorithms, AI and machine learning:

    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
    3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

    http://en.wikipedia.org/wik
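
    Viewed as code, the three laws amount to a strict priority ordering over candidate actions. A minimal sketch (all names here are hypothetical, not from any real robotics API):

    ```python
    # Hypothetical sketch: Asimov's three laws as a strict priority filter.
    # An action is permitted only if no higher-priority law rejects it.
    from dataclasses import dataclass

    @dataclass
    class Action:
        harms_human: bool      # Law 1: injures a human, or allows harm by inaction
        disobeys_order: bool   # Law 2: violates an order given by a human
        destroys_robot: bool   # Law 3: fails to protect the robot's own existence

    def permitted(a: Action) -> bool:
        if a.harms_human:            # the First Law always wins
            return False
        if a.disobeys_order:         # the Second Law yields only to the First
            return False
        return not a.destroys_robot  # the Third Law has the lowest priority

    # LIAD's question below is exactly where this breaks: if every available
    # action harms a human, nothing is permitted and the laws give no way
    # to choose among the harmful options.
    print(permitted(Action(False, False, False)))  # True
    print(permitted(Action(True, False, False)))   # False
    ```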

    1. LIAD

      – what if a human is guaranteed to be harmed
      – how does the machine decide which one
      https://www.usv.com/post/up…

      1. Matt A. Myers

        Then they must protect us from ourselves by taking over control/power! Good thing we have Will Smith!

    2. kevando

      I was going to say, that post reminded me of I, Robot. Maybe it was on avc, but I remember reading something similar regarding self-driving cars: having to make a decision in an accident and choosing to kill a person in order to save 3. I’m glad to hear this is top of mind for USV companies.

    3. Twain Twain

      What about animals in the First Law? Is “may not” sufficiently strong, or should it be “should not”?

      Is any AI currently able to classify and recognise (visually, aurally, by smell and by touch) the different forms of human beings? For example, suppose there’s a natural disaster. The robot is sent in to rescue people. It sees an injured young man and a heavily pregnant woman. Would it recognise that the lives of 3 human beings are involved? How would it calculate which human being to save first?

    1. fredwilson

      There we go. Now we’ve got something to talk about

    2. John Revay

      Hmm, I am most worried about people hacking into cars’ firmware and taking control… and crashing said vehicle.

      1. Matt A. Myers

        People or government?

        1. John Revay

          actually terrorists

          1. iggyfanlo

            More worried about terrorists hacking into drones

          2. Matt A. Myers

            Drones are pretty cheap – why would they have to hack into them? I suppose they could hack into 1,000,000 of them and program them to crash into people?

    3. pointsnfigures

      King Solomon Cab Company

  2. pointsnfigures

    When I was getting my MBA at Chicago Booth, I was exposed to classical economics, Keynes/Tobin, and the burgeoning field of behavioral economics. I found behavioral economics fascinating. Then I read the book Nudge. Being a floor trader, I saw irrational behavior and experienced some of the precepts in Nudge. I was intrigued.

    As a person who prizes the rights of the individual over the state, I was very uncomfortable reading Nudge. My fear is that algorithms could cause nudging all over the place, with people making decisions that they normally wouldn’t. Someone writing the algo is determining the ethics, not the individual.

    At first, it starts out innocently. Placing healthy food in certain parts of the store versus junk food sounds like a good idea. But then it could get really dangerous. Take medical records and insurance companies, for example. What if there were an algo that told you to make this choice, you didn’t, and your insurance company denied coverage because of the algo data?

    The result is that I think classical economic theory works best. If algos put the power in the hands of individuals, so they make the decision whether to opt in and can opt out at any time, they can provide incredible feedback and make your life better. But with the way Cass Sunstein and Dick Thaler articulate nudge, or the way Keynes/Tobin works, I find it very dangerous to a free society.

  3. Richard

    This is not so much an ethical issue as a philosophical one and/or a computational one (modeling extreme events). The work by the faculty at UC Irvine’s Department of Logic and Philosophy of Science, School of Social Sciences, examines the former.

  4. Matt A. Myers

    Let us know the stats / impact on engagement of linking to the USV conversation, sometime next week please? 🙂

  5. David Mitchell

    Thanks, Fred! Really cool to see how these discussions are helping portfolio companies. I’m curious – within the USV Network offerings, what have been the biggest benefits the network brings to your portfolio companies? I.e., are there specific tools/services/educational seminars you provide exposure to through the Network that have accelerated companies more than anything else? For example, strategy workshops, design workshops etc.

    1. fredwilson

      we do all of that and more. the strategy is to allow the portfolio to help each other. we facilitate. but they do the actual work. it’s been a great success.

      1. David Mitchell

        Thanks! Very cool. On the network site you mention external resources – and presumably external facilitators. Can you share any that have been home run successes?

        1. fredwilson

          Hmm. Let me go read that copy closely. We rarely use outsiders

          1. David Mitchell

            I may have made the wrong assumption too 🙂 Appreciate the post today – love the window into those conversations. Cheers

  6. Richard

    Unethical practices are not so much about dealing with valid models, such as recommendation engines. Most unethical behavior comes from scientists not including data outliers and creating spurious models to validate their theory or viewpoint. This happens in both the pure and social sciences. By obscuring data, or taking only the data points that reinforce a particular theory, scientists are indulging in unethical behavior. Wishing the data said something is in large part the ethical problem, both in business and academia.

  7. LaVonne Reimer

    Kudos for making this the focus of portfolio discussions. I think of this as black-box systems vs. open systems. The difference between the two is the degree of transparency and accountability in data collection practices. The first question is whether this output can only be delivered in a black box (for efficiency or to assure data fidelity). What data is going in? How is it being collected? Who gets to say whether the data are correct or not? What’s the output of the algorithm that is munging on said data? What benefit is being withheld if those affected by the algorithm are precluded from gaining insights from it?

    The FTC put out a decent report last May on data brokers, calling for transparency and accountability in data collection. Decent because it was about time there was more exposure of the degree of data trafficking among brokers; less than satisfying in that there is such a range in broker outputs. We should worry much more about data collection and algorithms in credit decisions than in recommendations for movies or books. But when data from a behavioral marketing behemoth finds its way into an algorithm predicting creditworthiness, it gets complicated.

    The application of stakeholder analysis that is at the heart of ethical decision-making plays out differently in these various cases. Since I’m in the credit decision space, in some ways taking an ethics-driven approach is easier: it’s pretty clear who has a stake in the data. In our case, it meant creating a much more open system relative to data collection and management. We opted for a very interactive learning-system approach. Lots of food for thought in this. I look forward to following the conversations for each topic, and this one too.

  8. Bruno

    I think a central point of the decades-old Trolley Problem (http://en.wikipedia.org/wik…) is that certain moral dilemmas may be irresolvable. It’s unclear whether algorithms can provide irrefrangible decisions or simply utilitarian solutions.

    Interesting coverage from the NYT on robotic morality: http://www.nytimes.com/2015
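
    A purely utilitarian algorithm “resolves” the dilemma only by fiat: it always returns an answer, but says nothing about whether the choice was morally permissible. A toy sketch (hypothetical names, not anyone’s real system):

    ```python
    # Toy utilitarian chooser for a trolley-style dilemma: pick the branch
    # with the fewest expected casualties. It always produces *an* answer,
    # which is the point: a decision procedure is not a moral resolution.
    def choose_track(casualties_by_track: dict[str, int]) -> str:
        return min(casualties_by_track, key=casualties_by_track.get)

    print(choose_track({"stay": 3, "divert": 1}))  # -> "divert"
    print(choose_track({"stay": 1, "divert": 1}))  # a tie is broken arbitrarily
    ```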

    1. chiara

      Good morning

  9. Twain Twain

    OMG!!! I just attended the Deep Learning Summit in SF, and the leading AI researchers believe that the concerns about killer robots are overblown and that we can now delegate AESTHETICS to the machines as well as ETHICS. Yahoo and EyeEm stood up on stage and showed how their algorithms can now decide for us if something is “aesthetic”.

    Meanwhile, I’m on a mission to show that if our systems are to be ethical, intelligent, capable of understanding Natural Language and able to make sense… we have to overcome the limitations of probability as a tool for modeling the human mind, language, behaviors, moral-ethics etc.

    Sure, developers can A/B test (a form of Cartesian deductive reasoning which constrains the user to an either/or scenario) and measure a whole bunch of quant KPIs. However, for the machines to grasp ethics, we CANNOT leave it to some type of stop-loss function. The reason is that that’s what was used in the global financial systems, and see where that got us (pls read slide 5: the NYT 2008 article on $ trillions of value lost because the machines HAD NO QUALITATIVE PARAMETERS FOR WHAT WAS CONSIDERED ETHICAL LIMITS).

    As I’ve seen consumer startups migrate towards Machine Learning, my concerns have increased about all the fallacies in Machine Learning being multiplied. And there’s a mistaken assumption that the “data tells us everything”. It does NOT. It only tells us the 50% that refers to the quantitative part. The other 50% is qualitative and needs a whole other new tool that’s not probability.

    The finance industry led the way in Machine Learning, and the same “overfitting to the data model” mistakes that were made there are now being made in the consumer space. It’s partly why I am ON A MISSION to build a system that can balance out the limitations of probability, which is a blunt instrument incapable of setting ethical and linguistic parameters for the machines.
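
    For what that “quant KPI” measurement looks like in practice, here is a minimal A/B test sketch (a two-proportion z-test on click-through, with made-up numbers): it can say which variant got more clicks, and nothing about whether either variant was ethical.

    ```python
    # Minimal A/B test sketch: two-proportion z-test on click-through rates.
    # All numbers are made up; the output is purely quantitative.
    from math import sqrt
    from statistics import NormalDist

    clicks_a, users_a = 120, 2000   # variant A
    clicks_b, users_b = 150, 2000   # variant B

    p_a, p_b = clicks_a / users_a, clicks_b / users_b
    p = (clicks_a + clicks_b) / (users_a + users_b)       # pooled rate
    se = sqrt(p * (1 - p) * (1 / users_a + 1 / users_b))  # standard error
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))          # two-sided test

    print(f"z = {z:.2f}, p-value = {p_value:.3f}")
    ```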

    1. ShanaC

      I heard some of the same things about aesthetics – the thing is, the eyes get bored, and you can always introduce new ideas to the eyes.

      1. Twain Twain

        Aesthetics appeals to the Golden Ratio geeks amongst the Machine Intelligence community.

    2. Dave W Baldwin

      Great batch of slides. Just want to clarify that I agree with you. Some of my comments lately might come off as “there is no issue and it’s overblown”. My bigger concern is the ground we’ll lose over the next 36 months as some things that are overblown and not defined become excuses for those that knowingly set forth programs that are harmful.

      For instance, some big hedge fund groups utilized ML and it worked for quite a while, but then didn’t. On the note of the subprime mess, we have to stress that it isn’t a matter of an AI with crooked wishes working independently, with the group that raked in the money later saying, “Gee, whoops, we didn’t know that was going on…”. They knew!

      The bigger challenge is forming a council, in whichever form. It will require a miracle for something along that line to involve dialogue which is straightforward and not twisted by lobbying groups and those that want to make big claims to be given investment dollars.

      Last – you are on the money regarding NL and its limitations. That is the key to being able to work toward ethics in ML. You need to write a guest post.

      1. Twain Twain

        Thanks, David. There was less of a need to think through whether the algorithms could qualify (language, ethics, relationships etc.) before. However, as we delegate more and more of our decision-making to them – or even enable them to support our decision-making – it becomes vital to think it all through. Intelligently, democratically and diversely.

        Probability and the machines facilitated the Industrial Revolution because suddenly there were tools that could help us calculate physical power, processing speed and functional utilities. We’re now on the fractal edges of an Intelligence Revolution, so we need to ask whether probability and the machines’ physical power, processing speeds and functional utilities adequately reflect our natural intelligence, with all its subjective variations and diversities, especially wrt language and ethics.

        @wmoug:disqus – as someone with experience of this, how does someone get to write a guest post?

        1. Dave W Baldwin

          You’re right. Going back to the last decade (and the one before), it seemed those of us who stood for the fashioning of a code of ethics were lost in space as far as the rest of the world was concerned.

          I will probably drop a quick note regarding the guest post; it probably comes down to writing one, sending it to him, and seeing if he likes it.

          Thanks for putting up with how I term things; it is a matter of putting it into language the bigger population can understand. We’re going to be in enough trouble over the near future with the almost daily headlines that include “Rise of the Robot” and so forth.

    3. Joe Cardillo

      Nicely said re: qualitative vs. quantitative. I was telling my partner yesterday… figuring out all the bits of data around a problem is not the same as figuring out how someone feels about all of them. Which means we have to really hustle with our efforts, because, like it or not, our technologies are being designed more and more to listen to everything (e.g. http://www.bbc.com/news/tec… )

      1. Twain Twain

        All experiences are entangled. To date, developers have approached it as “Let’s measure the behavioral functions – user clicked a button, user stayed on this page for N time, user commented on an audiovisual, etc.” and then inferred emotional interest from those functional behavior metrics.

        Whether it’s Samsung, Amazon Echo, Google Now, Cortana etcetera, it’s all about inferring emotional abstractions from functional behavior. That’s not necessarily the whole moving picture of our emotional experiences, though.
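
        The inference step being described usually reduces to something like this minimal sketch (the weights and signal names are hypothetical): a weighted score over behavioral events that then gets read as “emotional interest”.

        ```python
        # Minimal sketch: inferring "emotional interest" from behavioral signals.
        # Weights and signal names are hypothetical; the output is a proxy
        # score, not a measurement of how anyone actually felt.
        WEIGHTS = {"clicked": 1.0, "dwell_seconds": 0.01, "commented": 2.0}

        def inferred_interest(events: dict[str, float]) -> float:
            return sum(WEIGHTS.get(name, 0.0) * value
                       for name, value in events.items())

        # The same score is consistent with delight, confusion or anger.
        print(inferred_interest({"clicked": 1, "dwell_seconds": 300, "commented": 1}))
        ```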

        1. Joe Cardillo

          Right, that’s a good point – and the inferred emotional interest is a big jump that needs a more careful look than most are giving it right now.

          That’s one of the interesting things to think about re: current data collection / behavioral ad models. If I’ve previously said something a few times online about preferring cremation when a pet dies, what if I feel differently when it actually happens? A couple of possibilities: I may have already been conditioned by what I’m presented with to see cremation as the only option, or I may have an uncanny valley reaction to it. There are all sorts of Brave New World and 1984 implications.

  10. Twain Twain

    “Moral and ethical QA for algorithms is a worthy and complex undertaking” – USV.com

    This is an understatement. It’s more than complex. It means re-writing layers of code, akin to developing a BlockChain for language, which doesn’t even exist in W3C standards yet. It requires us to re-write all the data, especially language, that we’ve recorded since human records began.

    The reason is that all the leading AI researchers, from Geoff Hinton, the father of Deep Learning, to Andrew Ng of Baidu, agree that we’d need a new language understanding model, and we’re nowhere near there yet. It’s through natural language rather than mathematical language that humans learn ethics and morals.

    Worthy and complex indeed.

    1. Matt Zagaja

      Why would algorithms exist in W3C standards?

      1. Twain Twain

        Because it’s best practice for developers to ensure the algorithms pass various validation checks, including W3C validators:
        * http://www.w3.org

        Validation means search engines and spiders, including Google’s, can read the markup language.

        There is now even this Working Group:
        * W3C and Automotive Industry Start New Web Standards Work for Connected Cars
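
        For concreteness, a minimal sketch of that kind of check: POSTing markup to the public W3C Nu HTML Checker (the endpoint and the JSON response shape here are assumptions about the public service, not something from the comment):

        ```python
        # Minimal sketch: validate markup via the W3C Nu HTML Checker.
        # Assumes the public endpoint https://validator.w3.org/nu/?out=json
        # and a response shaped like {"messages": [{"type": ..., "message": ...}]}.
        import requests

        html = "<!DOCTYPE html><html lang=en><head><title>t</title></head><body><p>Hi</body></html>"

        resp = requests.post(
            "https://validator.w3.org/nu/",
            params={"out": "json"},
            headers={"Content-Type": "text/html; charset=utf-8"},
            data=html.encode("utf-8"),
        )
        for msg in resp.json().get("messages", []):
            # "error" entries fail validation; "info" entries are warnings
            print(f"{msg.get('type')}: {msg.get('message')}")
        ```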

  11. Simone

    I read this book during the winter hols and meant to recommend it – Who Owns the Future? by Jaron Lanier. Also, I started reading about neuroscience some months ago, and my understanding is that AI is, for now, a rather distant ‘risk’ (given it is early days for science to figure out consciousness).

  12. Twain Twain

    @fredwilson:disqus The issue of whether the machines can do ethics, aesthetics and language is part of a deeper drive by Data Scientists towards enabling the machines to QUALIFY. Whether they can quantify has, to an extent, been proven with Big Data. The need for qualification arises because the brands are asking for more insights and not just bigger information clusters.

    It remains debatable whether the machines can qualify better than we can – in the absence of coherent qualitative frameworks. The machines can certainly quantify, because the Data Scientists have been training all sorts of mathematical functions into them for decades if not centuries (starting with Bernoulli numbers circa Charles Babbage and Ada Lovelace, and certainly with Turing and Von Neumann’s parameters for computer thinking).

    Quality, though, isn’t the same as quantity, just as correlation isn’t equivalent to causation and stochasticity isn’t the same as subjective bias. For quantity, correlation and stochasticity, probability and logic are reasonable tools to use. For qualifying the data (whether in terms of ethics / aesthetics / linguistic appeal etc.), a system invention as vital as Newton formulating gravity, Galileo building his telescope or Einstein developing the Theory of Relativity would need to happen.

    The image shows one of Google I/O’s June 2014 sessions. Qualification is certainly on the tech horizon, from UX through to the backend and how the algorithms make recommendations. Along with blockchain technologies, the most interesting space in tech is Machine Intelligence and how we enable / don’t enable it to reflect human intelligence.

    The constraint assumptions had been that we’re just rational, logical and probabilistic, so the algorithms were also rational, logical and probabilistic. Now that we’ve decided humans are also ethical, linguistic, social, contextual, sensory and perceptual, those constraint assumptions will need to change. Otherwise, the algorithms will be autistic and have inadequate frameworks to support our intelligent decision-making.

    Worthy and complex to do indeed!

  13. fredwilson

    I prefer to think of it as: we added a second party

  14. Kirsten Lambertsen

    You’re back 🙂

  15. Donna Brewington White

    You did it!