When The AI Comes To Your Annual Shareholders Meeting

I was looking at the top twenty shareholders of some public companies last week and saw quite a few “quant funds” on those lists.

With the news that BlackRock is going to move much of its asset management business to models and machines, I think we will see more of this in the coming years.

It’s annual meeting season for public companies and all of this made me think about when the AI shows up to your annual shareholder meeting.

Or when the AI gets your proxy and needs to vote on directors, executive compensation, and the choice of auditors.

Governance is an important part of being a shareholder.

When the shareholders are all machines, how does governance work?

#stocks

Comments (Archived):

  1. William Mougayar

    Q: When the shareholders are all machines, how does governance work?

    Answer: Blockchain-based governance.

    Not only is the AI coming to your meeting, but the blockchain will also come, as it will enable, validate and record transactions and decisions.
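    A minimal sketch of what “record governance decisions on a chain” could mean, in Python. All names and fields here are hypothetical, and a real system would run on an actual blockchain with digital signatures rather than an in-memory list:

    ```python
    import hashlib
    import json
    import time

    def make_block(prev_hash, action):
        """Bundle one governance action with the hash of the previous block."""
        block = {"prev_hash": prev_hash, "timestamp": time.time(), "action": action}
        payload = json.dumps(block, sort_keys=True).encode()
        block["hash"] = hashlib.sha256(payload).hexdigest()
        return block

    def verify_chain(chain):
        """Any party can re-derive every hash to confirm no vote was altered."""
        for i, block in enumerate(chain):
            payload = {k: v for k, v in block.items() if k != "hash"}
            digest = hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()
            if digest != block["hash"]:
                return False
            if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
                return False
        return True

    # Record two proxy votes, then audit the chain.
    chain = [make_block("0" * 64, {"event": "annual meeting opened"})]
    chain.append(make_block(chain[-1]["hash"],
                            {"holder": "Fund A", "ballot": "directors", "vote": "FOR"}))
    chain.append(make_block(chain[-1]["hash"],
                            {"holder": "Fund B", "ballot": "auditors", "vote": "AGAINST"}))
    print(verify_chain(chain))  # True; becomes False if any recorded vote is edited
    ```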

    1. fredwilson

      True believer. I love it

      1. William Mougayar

        Hypothetically speaking, everything is possible… until it can be proven and tested.

        1. awaldstein

          Spoken like an engineer and analyst, my friend. Certainly everything is possible, even in cultural changes, but you start and end with behavior when it comes to markets. Science is a servant of market vision, not the process of discovering it.

          1. Twain Twain

            THIS: “Science is the servant of market vision.”

            I’m a Humanist. We put consideration towards Humans before AI — in design, in intelligence and in governance+control.

            Science has been great for lots of things (the eradication of smallpox, the Industrial and Digital Revolutions, and space exploration being a few examples). And I’m a scientist by education and career experience.

            However, I’m also aware it’s imperative that human culture (art, language, morals+ethics, values) be treated as just as important as Scientific Empiricism (the ability to measure and repeat experiment results) and baked in from the start of any system.

            Now, the facts are that black-box AI algorithms are opaque and neither traceable nor repeatable. That makes them RISKS to the good functioning of the financial system.

            So human governance and values implementation is needed.

          2. awaldstein

            Always been true. So yes agree.

          3. William Mougayar

            I don’t think we disagree. Market acceptance (or non-acceptance) can also prove/disprove a desired change.

          4. Twain Twain

            We agree that investing in Science and Innovation brings great benefits, and neither of us is a Luddite or a naysayer.

            The difference is simply that the way my brain works is that I can envision totally out-of-this-world-doesn’t-exist-yet tech (like hover screens that are completely wireless, which appear in my dreams when I sleep; seriously) and, at the same time, all the technical and capital-investment barriers that need to be overcome AND how it affects human culture and evolution.

            @fredwilson’s note on “an inspiration for the liberal arts to catch up to the computer scientists and mathematicians” the other day speaks to the inherent foundational problems with systems like Blockchain, Ethereum and AI.

            It is that for millennia BEFORE the Enlightenment ushered in by Leibniz and Descartes, art+culture+language+science CO-EXISTED IN HARMONY, to the extent that Great Civilizations were formed and thrived and laws were formulated that enabled this.

            Firstly, Descartes is objectionable because he removed the heART out of our models for human intelligence just 100 years after Da Vinci’s death. If Da Vinci had been alive, Descartes’ ideas would NEVER have been allowed to take root and to eradicate human heART from our systems in the way they have.

            Then Leibniz brought us the idea of Causal Determinism (0 = nothing exists; 1 = something exists), and then along came Bayes’ probability (the chance of something existing is some % of event occurrence).

            It’s worth noting that neither 0 nor 1 nor % can explain WHY something happens — why a person votes one way or the other, buys one product or another, invests in one thing or another, etc. They can quantify and correlate but not qualify causation.

            0, 1 and % also can’t explain the value of human life or the value derived from education, employment and inclusion.

            So, fine, the Scientific Rationalists’ argument would be that being able to measure and model efficiencies has accelerated innovation and enabled the Industrial Revolution.

            However, Scientific Rationalism is reaching its limitations. It’s really clear that Google (the ultimate Scientific Rationalists of the modern age) can’t train the machines to understand human language, culture and values.

            So we should return to Da Vinci’s model for our systems, where intelligence is about heART and Science — not just some quant logic of the machines (whether that’s in Blockchain / Ethereum / AI).

            At a philosophical level, having all the shareholders be machines is wrong.

            At a practical level, having all the shareholders be machines is also wrong. Unlike humans, they can’t explain WHY they made the decisions they did. They can’t rationalize and reason through the literal and lateral factors that inform good decision-making and risk management.

            Now, can we fix the problems for human representation and our intelligence that Descartes’, Leibniz’s and Bayes’ models cause? Sure. We have to hope that the next iteration / invention of systems has Liberal Arts and Science baked in from the start, in coherency — just as it was in Da Vinci’s Renaissance times.

      2. Twain Twain

        What will happen when the shareholders are all machines… Unabomber’s manifesto:

        “But we are suggesting neither that the human race would voluntarily turn power over to the machines nor that the machines would willfully seize power. What we do suggest is that the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines’ decisions. … Eventually a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently. At that stage the machines will be in effective control. People won’t be able to just turn the machines off, because they will be so dependent on them that turning them off would amount to suicide.”

        * http://www.nytimes.com/2008…

        What would happen to the brains of investors if the machines replace them all? They’d lose all their investment expertise, and those areas of their brains would atrophy.

        In any war with “Skynet,” investors would be the least valuable because they won’t have the know-how to beat the machines. So “Skynet” will win and cripple the financial system and take us back to before the Enlightenment and economics, whilst it advances towards a world without humans.

        1. Salt Shaker

          I wouldn’t worry too much. If life imitates art, we’ll first encounter a Cyberdyne Systems Series 800 Terminator made of metallic endoskeleton over living tissue, followed by a run for Governor, and then the hosting of a really shitty reality TV series developed by the POTUS. Sigh and fade to black.

        2. Girish Mehta

          I am sure the investors will be happy to be… “re-accommodated”.

        1. William Mougayar

          I saw it and deleted it earlier.

          1. LE

            What I saw was an upvote to Fred’s reply to you (as noted by the screen grab), and that is still there. Can mods remove upvotes? (It’s still there right now, even on page reload.)

          2. William Mougayar

            Not that I know. Hmm. That’s a new loophole. So a spammer’s vote stays after they are squished out. Will need to ask Disqus. Thanks

      3. Twain Twain

        The other day, another proof point emerged that I invent systems so far ahead of everyone else’s it’s outrageous. https://uploads.disquscdn.c…

        I do hope AI, Blockchain and Ethereum are tools for enabling Representative Human Governance (the branding “Limited Governance” doesn’t work). Not in their current forms, though. Not when 50% of the data is missing and the data that exists is biased and can’t enable the machines to understand our language, culture and values.

        1. sigmaalgebra

          I know that talking about grammar and writing is nearly off topic in fora, but, still, you might check with, say, The Chicago Manual of Style on capitalization. E.g., we write “Pythagorean theorem” due to the name of the person, similarly for “Newton’s second law,” but write “central limit theorem”, “law of large numbers”, “probability theory”, “calculus”, “stochastic processes,” “deterministic optimal control”, etc. without any capitalization.

          1. Twain Twain

            Yes, cheers. Applied Computational Science wasn’t invented by a person named Applied. :*), lol.

    2. Jake Baker

      One challenge/confusion I have with blockchain-based optimism is that I don’t see many people mapping “the answer” to many of the basic structural questions on a simpler scale — everything lives at the very high level of the techno-optimism.

      Blockchain and related developments are GREAT and exciting (I’m still investigating, as a hobbyist, how to get a piece of commercial real estate owned by a dedicated token/blockchain/ethereum as a test of how to think about blockchain-based asset ownership), but with a whiteboard in front of me, I’d say many conversations should start simpler, in terms of mapping how existing procedures work — how organizational coordination already works. Laws and contracts are not in any sense optimal, but they are still the result of targeted development towards intentional goals.

      Blockchain allows for trust and decentralization and other benefits, but imagine the hypothetical of 10 trusted friends tackling some of the same “topics/ideas” that people moot as being GREAT for solving with blockchain, and attempting to use just a centralized ledger or spreadsheet. The policies and procedures can still get horribly complex.

      Matt Levine (of Bloomberg View) has written a lot about this as it relates to public equity/shareholder governance, legal precedent and contract law. He takes a slightly snarky tone, but I find his questions very useful coming from someone quite experienced with complex contractual structures, derivatives and the like (he’s a former Wachtell M&A attorney and a Goldman Sachs derivatives salesperson). See for example:

      - https://www.bloomberg.com/v…
      - https://www.bloomberg.com/v…
      - https://www.bloomberg.com/v…
      - https://www.bloomberg.com/v…
      - https://www.bloomberg.com/v…
      - https://www.bloomberg.com/v…
      - https://www.bloomberg.com/v…

      All the same – really looking forward to the Token Summit! Thanks to both William and Fred for their thought leadership in this space!

      1. Twain Twain

        On techno-optimism: when inventing anything, it helps to have an excess of hope… combined with clear-eyed pragmatism — because the nuts and bolts of the “basic structural questions on a simpler scale” are not as straightforward for Blockchain/Ethereum/AI as they seem.

        “AI researchers are examining the tradeoffs involved in creating fair machine learning systems. Scientists at the Department of Computer and Information Science at the University of Pennsylvania conducted research on how to encode classical legal and philosophical definitions of fairness into machine learning (see 1, 2, 3). They found that it is extremely difficult to create systems that fully satisfy multiple definitions of fairness at once. There’s a trade-off between measures of fairness, and the relative importance of each measure must be balanced.

        Replace the word ‘fairness’ with ‘justice,’ ‘liberty’ or ‘non-partisanship,’ and we start to see the challenge. Technologists may be unconsciously codifying existing biases by using data-sets that demonstrate those biases, or could be creating new biases through simple ignorance of their potential existence. Technologists should consciously remove these biases and encode laws, policies, and virtues (shortened for our purposes to ‘values’) into machine learning systems. These values can be mathematized, but there will always be tradeoffs among different values that will have real-world impacts on people and society.”

        https://medium.com/artifici…
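        As a toy illustration of the trade-off the quoted research describes (entirely made-up data and a hypothetical credit-style score): when base rates differ between two groups, a classifier generally cannot equalize approval rates and true-positive rates at the same time:

        ```python
        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical applicants in two groups whose base rates of repayment differ.
        n = 100_000
        group = rng.integers(0, 2, n)
        repays = rng.random(n) < np.where(group == 0, 0.3, 0.6)
        score = repays + rng.normal(0, 0.8, n)   # noisy score, same noise in each group

        # One shared threshold: equal true-positive rates, unequal approval rates.
        approve = score > 0.5
        for g in (0, 1):
            m = group == g
            print(f"group {g}: approval {approve[m].mean():.2f}, "
                  f"TPR {approve[m & repays].mean():.2f}")

        # Per-group thresholds forcing equal approval rates (demographic parity)
        # re-open the true-positive-rate gap: the two fairness measures conflict.
        t = [np.quantile(score[group == g], 1 - 0.45) for g in (0, 1)]
        approve = score > np.where(group == 0, t[0], t[1])
        for g in (0, 1):
            m = group == g
            print(f"group {g}: approval {approve[m].mean():.2f}, "
                  f"TPR {approve[m & repays].mean():.2f}")
        ```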

      2. William Mougayar

        Great comment. I’m saving it. Thank you. Are you coming to the Token Summit? (pls make sure we meet)

        1. Jake Baker

          Indeed I am — glad to see a location got locked down (and sorry I wasn’t able to help)! Will do, and thanks again!

    3. Twain Twain

      Validating and recording transactions and decisions that are purely about logic, mechanics, probability & statistics and game theory (which is what machine shareholders would be communicating to each other) means the eradication of human language, culture and values.

      The last time the machines were left in charge of financial decision-making and risk management, they caused a $22 TRILLION loss to the US economy alone and 8+ million job losses.

      I’m in favor of HUMANS, not machines, being the shareholders and in governance, with support from the blockchain simply for transaction-auditing purposes rather than end-to-end decision-making.

    4. ShanaC

      How does that work?

      1. William Mougayar

        Blockchain can validate agreements between people, peer to peer, without central checks. You chain your governance actions and ensuing transactions to blockchain software.
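        A contract-style sketch of what “chaining governance actions” might look like, assuming a toy quorum rule (in practice this would be an on-chain smart contract, not Python):

        ```python
        from dataclasses import dataclass, field

        @dataclass
        class GovernanceProposal:
            """Toy contract-style proposal: the action fires only at quorum."""
            description: str
            quorum: int                                  # 'for' votes required
            votes: dict = field(default_factory=dict)    # voter -> True/False
            executed: bool = False

            def vote(self, voter: str, in_favor: bool):
                self.votes[voter] = in_favor             # one vote per voter
                if not self.executed and sum(self.votes.values()) >= self.quorum:
                    self.executed = True
                    print(f"Executed: {self.description}")

        p = GovernanceProposal("Ratify the choice of auditors", quorum=2)
        p.vote("holder-1", True)
        p.vote("holder-2", False)
        p.vote("holder-3", True)    # second 'for' vote reaches quorum and executes
        ```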

  2. Jon Michael Miles

    I’ve always thought the best active managers were the CEOs of the companies themselves.

  3. Michael Weiksner

    As the article points out, here’s the effect of AI at BlackRock:

    - 3 tech guys take the jobs of 34 fund managers.
    - Fund returns are increased and fees are lowered: investors win.
    - Larry Fink, CEO, personally makes more than previously.

    20% of the US economy is going to financial agents. If you control the wealth, why would you want to pay fees to untrustworthy, fallible, self-interested human agents when you can have perfect robots work for (almost) free?

    1. Jake Baker

      Because the downside of the machine screwing something up in a “flash crash” style scenario, or the example of the Ethereum “theft,” is amazingly high. Fiduciary laws and the policies and procedures inside asset managers exist for a reason, and the vast majority of individuals are still FAR better served by a low-fee index ETF or equivalent than by risking anything to do with AI or the like.

      From my personal experience inside a very large asset manager, my read on BlackRock’s statements so far is that this is mostly a blend of great marketing/PR and a shift towards more quantitative strategies (e.g. smart beta) that at some level are probably still run on spreadsheets. It doesn’t mean they aren’t working to actually apply “AI” or machine learning or other fancy forms of statistics and regression that approach what some of the quant hedge funds do (see for example Renaissance), but it’s still very early days, and most announcements are likely to be more PR than substance.

  4. Mark Mc Laughlin

    Fred. Interesting to see what L&G are doing with governance and climate change: https://www.ft.com/content/… So, politically, AI governance could change a number of political issues on both sides of the Atlantic.

  5. LIAD

    Interesting whether we’ll work up or down the ‘ethical stack’ when figuring out AI questions like this.

    On a micro scale, AI diagnosing an illness and prescribing treatment, or making life/death decisions with self-driving cars, is far more unnerving than it acting according to accepted governance principles.

    Will we figure out the easier stuff and push the harder stuff off down the line until we’ve got momentum and accepted conventions? Much like they do in tough geopolitical negotiations.

    1. Jake Baker

      Asking genuinely: Can you point to any examples where the harder stuff was successfully figured out first in law, medicine, politics, science, technology? It seems to me there are occasional “hard” breakthroughs that enable advances, but that many applications take advantage of early opportunity first and then expand from there (e.g. web browsers happen before iPhones) or something like that…

      1. Twain Twain

        The example in Law of hard stuff being figured out first was “Don’t kill other humans.” Independent of religion (be it Christianity, Islam, Hinduism etc.), our species had that higher-level intelligence long before it was codified and written down in various religious laws. It’s an emotional and physiological thing. The sight of someone dying makes people puke and feel terrible — unless they’re a machine or a sociopath.

        1. Frederico Mesquita

          That is not a consensus. It assumes human life as a fundamental principle, which is not endorsed by all philosophical perspectives, countries’ constitutions or people, as support for the death penalty shows. Any area where ethics is involved depends on people and how their shared values evolved. These values differ from person to person and, even more important, from community/country to community/country. Which poses the question: if the systems are decentralized and AI is everywhere, how do you accommodate these differences?

          1. Twain Twain

            This is a great question: “If the systems are decentralized and AI is everywhere, how do you accommodate these differences?”

            At the moment, AI can’t accommodate cultural differences.

            One of the problems with Scientific Rationalism is that it assumes the mindsets of a few dozen Western philosophers and economists (Aristotle, Descartes, Hobbes et al.) map and fit over to 7+ billion people, of whom 3.8 billion are online and 1.9 billion communicate in English (in some cases as their second language).

            That’s another reason I’d say the machines don’t have the tools to deal with ethics. Aristotle et al. certainly didn’t provide for the cultural and linguistic differences in the way people think, communicate and value things.

    2. Twain Twain

      In the case of AI, it’s becoming clear that the hard stuff (getting the machines to understand our language) is not going to be solved by all the easy stuff that went before (beating us at chess, Jeopardy, Go). The basis of morals and ethics and governance is in language. That’s the hard part. Meanwhile, the machines (be it Blockchain / Ethereum / AI) can only do the easy parts of logic.

  6. static

    “When the shareholders are all machines, how does governance work?”

    I think this is not too far removed from: when the shareholders are low-cost index funds, how does governance work? However, an AI-driven fund could be an improvement over the index fund, if its objectives are sufficiently long-term.

    1. Jake Baker

      I think AI as a term distracts from what a lot of the current application of AI or machine learning is at a “simple level” — very complex forms of statistics finding patterns and correlations. In this context, you’d feed a ton of data into the program and see whether a vote for A vs. B correlates with higher returns based on certain target parameters. To some degree this is what activist hedge funds do today (in addition to all sorts of other investors and money managers and traders). A toy version of that first step is sketched after this comment.

      It always confuses me why people say “AI applied to ETFs” as their first line of thinking, as opposed to thinking “AI utilized by an activist hedge fund” (e.g. a BILLIONAIRE guy very motivated and driven to utilize any possible advantage to make more money). Carl Icahn can move SO MUCH faster than a massive mutual fund or ETF asset-management company.

      I’ve worked for an $800B+ asset-management company. There are so many layers of policies and procedures and compliance and committees to do anything new. You can’t just change existing funds. (BlackRock is a bit different in that it still has in essence a “Founder” leading it instead of a “corporate manager” and may be able to move faster as a result.)

      Also – an index fund’s only goal is to replicate an index. There are other ETFs designed to use various factors to outperform a benchmark while still charging low fees.
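      A minimal sketch of that first step, on invented data: measure whether ballot items that passed were followed by better returns, and sanity-check the gap with a permutation test. Everything here (the 500 ballot items, the planted 2% effect) is hypothetical:

      ```python
      import numpy as np

      rng = np.random.default_rng(1)

      # Hypothetical history: 500 past ballot items of one type, coded
      # 1 = FOR won / 0 = AGAINST won, plus each company's excess return
      # over the following 12 months (a small effect is planted here).
      vote_for = rng.integers(0, 2, 500)
      excess_ret = 0.02 * vote_for + rng.normal(0, 0.15, 500)

      def gap(votes):
          """Mean subsequent return, FOR-won items minus AGAINST-won items."""
          return excess_ret[votes == 1].mean() - excess_ret[votes == 0].mean()

      observed = gap(vote_for)

      # Crude significance check: how often does a random shuffle of the
      # vote labels produce a gap at least this large?
      perms = np.array([abs(gap(rng.permutation(vote_for))) for _ in range(2000)])
      p_value = (perms >= abs(observed)).mean()
      print(f"mean-return gap: {observed:.3f}, permutation p-value: {p_value:.3f}")
      ```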

  7. Marissa_NYx

    Interesting if AI would be programmed to comply with, or ride over, corporations law, directors’ duties and disclosures. Flick the switch to “good ethics,” or set it at “highly gear the organization and decision-making to achieve maximum financial leverage.” I was reading up on the Wells Fargo scandal earlier today, curious whether an AI-driven culture would have engineered the organization to the same effect. Maybe the answer is: as long as AI remains focused on running governance’s back-office processes rather than strategy and content, we should be OK. But we all know culture is determined as much by how we do things as by what we do…

    1. Jake Baker

      Humans consciously self-optimize to one degree or another. Programs unconsciously follow rules. Machine learning/AI opaquely finds patterns and correlations, using complex methods to reach certain targets or end-state goals.

      All of these can result in unexpected outcomes or can be abused. See for example the Ethereum theft. When you know in advance all of the rules of an incredibly complex and opaque system, you can often find ways to produce personal gain independent of the stated goals of the system.

      I ::STRONGLY:: recommend this analysis by Matt Levine on JPMorgan’s power traders being fined by FERC for what seemingly was “within the rules” as a result of these kinds of unintended consequences: http://dealbreaker.com/2013…

      I’d paraphrase the apocryphal Winston Churchill quote and say that bureaucracy is the worst form of human organization, except for all of the others that have been tried…

  8. Maria

    The public-company governance process is broken today. Shareholders today do not really vote; instead, they have ISS do the proxy voting for them. They are heavily influenced by portfolio managers, who in turn have no interest in long-term and sustainable company results. Having a mechanism, maybe through blockchain, to shift the power back to individual investors would be a very positive change.

  9. Vendita Auto

    A Musk tweet

  10. Tom Labus

    And will AI get manipulated? Politics always creeps in!!!

    1. Twain Twain

      Will AI get manipulated?

      VentureBeat, 02 April 2017: “Ian Goodfellow, inventor of generative adversarial networks (GANs), showed that neural networks can be deliberately tricked with adversarial examples. By mathematically manipulating an image in a way that is undetectable to the human eye, sophisticated attackers can trick neural networks into grossly misclassifying objects. The dangers such adversarial attacks pose to AI systems are alarming, especially since adversarial images and original images seem identical to us. Self-driving cars could be hijacked with seemingly innocuous signage and secure systems could be compromised by data that initially appears normal.”

      Is it a panda or a gibbon? Is it a safe stock or a bad stock? https://uploads.disquscdn.c
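      A miniature of the quoted attack, assuming a made-up linear “safe stock vs. bad stock” classifier: the fast gradient sign method nudges every input feature by a tiny amount in the worst-case direction and flips the label, even though the inputs look almost unchanged:

      ```python
      import numpy as np

      # Toy linear classifier: "safe" if w.x + b > 0 (weights are made up).
      w = np.array([0.9, -0.5, 0.4, 0.7])
      b = -0.1
      x = np.array([0.3, 0.2, 0.5, 0.1])          # a stock's feature vector

      def label(v):
          return "safe" if w @ v + b > 0 else "bad"

      # Fast gradient sign method: for a linear model, the gradient of the
      # score with respect to x is just w, so stepping each feature by
      # eps * sign(-w) pushes the score down as fast as possible.
      eps = 0.15
      x_adv = x - eps * np.sign(w)

      print(label(x), "->", label(x_adv))                    # safe -> bad
      print("max feature change:", np.abs(x_adv - x).max())  # 0.15, barely visible
      ```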

  11. JamesHRH

    Really, really good question. Maybe AI will make the decisions for CEOs too?

    1. Twain Twain

      Right now, it can’t even pour a cup of good coffee — much less create value and consider (think with care) how to keep people in employment.

      1. Matt A. Myers

        I’ll just hire this Twain Twain AI here for now; it seems to have a good grasp of a lot of areas.

        1. Twain Twain

          :*), thanks Matt. I’m just trying to ensure Human Survival, and that more humans stay in charge and get to decide our future.

      2. JamesHRH

        I meant to type that in Sarcasm font. Sorry!

    2. Twain Twain

      Yup, I spotted the snark :*).

      A washing machine is also AI, by the way. It knows how much resource to add (water and detergent) and it keeps looping its to-dos in cycles…

      Are we saying washing machines can also, potentially, replace CEOs and investors?

      [We could have endless fun with the folks who want to replace Humans with machines and let the machines overstep their limited abilities.]

    3. ShanaC

      That could happen

  12. Dan Ramsden

    If the investment cycle can be roughly split into three parts – (1) entry (value perception), (2) holding (value formation) and (3) exit (value realization) – then one can more or less see how the machines have made inroads into (1) and (3), as exemplified by high-frequency trading and other quantitative methods… maybe because these are the pieces most readily suited to data and formula. If/when item (2) – of which governance is a part – comes into play, then there may no longer be a market, because everyone will always arrive at the same algorithmic answer across the entire spectrum.

    1. sigmaalgebra

      > because then everyone will always arrive at the same algorithmic answer across the entire spectrum.Not really: The estimates may become more accurate, but there is no end to the opportunity to do something different and better, for a long time likely significantly better.

  13. Daniel Dowd

    Relevant, from Jack Clark’s AI newsletter a couple weeks ago:

    Tech Tales

    [2025: The newsroom of a financial service, New York.]

    “Our net income was 6.7 billion dollars, up three percent compared to the same quarter a year ago, up two percent when we take into account foreign currency effects. Our capital expenditures were 45 billion during the quarter, a 350 percent jump on last year. We expect to sustain or increase capex spending at this current level-” The stock starts to move. Hundreds of emails proliferate across trading terminals across the world:

    350?!?

    R THEY MAD?!

    URGENT – RATING CHG ON GLOBONET CAPEX?

    W/ THIS MEAN 4 INDUSTRIAL BOND MKT?

    The spiel continues and the stock starts to spiral down, eventually finding a low level where it is buffeted by high-frequency trading algorithms, short sellers, and long bulls trying to nudge it back to where it came from.

    By the time the Q&A section of the earnings call has come round, people are fuming. Scared. Worried. Why the spending increase? Why wasn’t this telegraphed earlier? They ask the question in thirty different ways and the answers are relatively similar. “To support key strategic initiatives.” “To invest in the future, today.”

    Finally, one of the big analysts for the big mutual funds lumbers onto the line. “I want to speak to the CFO,” they say.

    “You are speaking to the CFO.”

    “The human one, not the language model.”

    “I should be able to answer any questions you have.”

    “Listen,” the analyst says via a separate private phone line, “we own 17 percent of the company. We can drop you through the floor.”

    “One moment,” says the language model. “Seeking availability.”

    Almost an hour passes before the voice of the CFO comes on the line. But no one can be sure if the voice is human or not. The capex is for a series of larger supercomputer and power station investments, the CFO says. “We’ll do better in the future.”

    “Why wasn’t this telegraphed ahead of the call?” the analysts ask, again.

    “I’m sorry. We’ll do better in the future,” the CFO says.

    In a midtown bar in New York, hours after market close, a few traders swap stories about the company and mention that they haven’t seen an executive in the flesh “in years.”

  14. Andrew Lee

    Fred, I don’t think computers should govern humans!

    1. Twain Twain

      THIS. And it’s a total UNTRUTH that rational, autistic logic (which is what the machines do better than us) makes for better decisions.

      Brian Uzzi, Northwestern: “When traders are low in emotional states, they’re very cool-headed, they tend to make bad decisions. They’re too slow in taking advantage of an opportunity in the market, and they tend to hold on to bad trades too long. Exactly what you don’t want to do. We also found that when they were in a very high emotional state, they did the same thing. When they were at an intermediate level of emotion, somewhere between being cool-headed and being highly emotional, they made their best trades.”

      * https://insight.kellogg.nor…

      That entire philosophy of Descartes’, that intelligence is about the removal of emotions, is BUNK and positively dangerous for the continued evolution of Humankind.

      As is Bayes’ theorem, which gets used to model us as if we behave like random dice / the weather / Brownian molecules, when none of those things even remotely resembles the considered intelligence (thinking with care) of humans, our culture and our language.

  15. Salt Shaker

    Not everything in life needs to be, or should be, automated. (Ever wonder, if AI is so great, why it’s called “artificial”?) Artificial cheese is Cheez Whiz; artificial leather is pleather. You buying any of that? Perhaps a more apt name is “Processed Intelligence.” Sounds a bit degrading, no? Anyone old enough to remember the Automat… and how’d that work out? Call me old-fashioned, but there’s still a lot of value in human touch and interaction.

    1. LE

      Sure, Horn & Hardart. And how’d that work out? We still have vending machines. H&H really was a supersized vending machine that let you impulse buy. I think the automat was replaced by the buffet concept.

  16. jason wright

    do machines have legal personality?

  17. pmakku

    It will be an interesting coopetition of AIs. If the strategic decisions/projections FOR the company are made by an AI and the shareholder is also an AI, I would love to see the decision-making hierarchy…

  18. creative group

    CONTRIBUTORS:

    Life imitating art. Two Tom Cruise flicks:

    Minority Report http://m.imdb.com/title/tt0…

    Edge of Tomorrow http://m.imdb.com/title/tt1

  19. Vitor Conceicao

    Maybe the super-voting share structures we are seeing are already a defence against this…

  20. Hiyito Patada

    This may seem crass, but I only care if an investment is performing. Solid fundamentals, and appreciating share price are good things. Dividends? Even better. But if there is no return, then once it hits my trailing stop I will sell and invest somewhere else. If it’s time to take profits, sell. So I usually care little about a proxy vote. Show me the goods, and I’ll vote with my money. No emotion is smart investing. Money talks, bullshit walks.

  21. sigmaalgebra

    Here Fred is making a mistake very common in the current public hype discussions of AI:

    If you have lots and lots of data, then maybe you can do some version of curve fitting to estimate the effects of some one variable in the data, e.g., for small changes in that variable. The results of doing that work can look really smart and, really, also be powerful and valuable.

    Alas, without lots of data, for applying such AI in practice, that curve fitting flops. The main, simple, intuitive reason for the flop is that without the huge amounts of data, the real world has lots of variables that must be considered where there is little or no data to determine the effects of such variables just from curve fitting.

    But, still, natural intelligence does quite well. E.g., now in the spring weather, my kitty cat is doing just fine on my back porch. The porch deck is maybe 10 feet above the ground, but my kitty cat doesn’t fall off. She knows not to get into a situation where she might fall off. She knows this even though my deck is new and she has had very little experience out there. My little girl kitty cat is a bit old and plump, but some years ago I saw something moving maybe 40 feet high in one of the trees in my back yard: yup, it was my young, athletic little boy kitty cat! He’d climbed up there! That was his first and last time; he was safe and came down on his own, all without any curve fitting from big data on the possibilities of being 40 feet up in a tree.

    So, what is natural intelligence doing? Well, for humans, natural intelligence has some powerful reasoning based on causality. The understanding of causality can come from just a little data and be applied quite broadly. E.g., one of the crown jewels of such reasoning is Newton’s second law of motion (force = mass times acceleration) and the law of gravity. Those two, and calculus, which in the story Newton figured out from a falling apple, work great to send a spacecraft to Pluto and have it take nice pictures. Yup, Pluto is a lot of cold rocks!

    AFAIK, so far AI has made essentially no progress identifying and then using causality. Right, Prolog logic programming does not change that situation, if only due to lack of input data.

    Then, for board meetings, some AI would have to consider a lot of data it never saw before and never used in curve fitting with big data. Here what humans do is causality, or guesses based on causality, etc. E.g., when the boy arrives to pick up the dear daughter of 12 for her first date, her father looks at the boy and in less than a millisecond does some estimating of his daughter’s safety based on some data and causality.

    At a board meeting, the new CEO will be examined by the board much like that father examined the boy about to take his dear daughter of 12 on her first date. AI from curve fitting on big data can’t do that.

    Natural intelligence, in humans and kitty cats, does a lot of such working with causality.

    Moreover, what really can be done with data, including small, medium, large, and big data, can depend on a lot that is important, something like causality, that is not in the data but comes, say, just from common sense from outside the data. E.g., near noon at Google, the arrivals of search queries form a Poisson process, and from that we can say a lot about the arrival rate and the times between arrivals, do sizing of the server farm and communications network, detect denial of service (DOS) attacks, etc.

    How do we know it’s a Poisson process? One way is the super cute axiomatic definition of a Poisson process as an arrival process with stationary, independent increments (from E. Cinlar, long at Princeton). Another way is the renewal theorem (from W. Feller, long at Princeton, in his second volume), which says that under very mild assumptions that clearly hold with high accuracy in this case in practice, with lots of people sending search queries, whether what each person sends is a Poisson process or not, the merged arrivals at Google must be very accurately a Poisson process. (A small numerical illustration follows after this comment.) Can’t get such detail just from curve fitting on big data, but with some nice applied math we can have such detail without even looking at the Google data. The math derivations were super nice, and no way can we expect anything like current AI to do any such things ever. Cinlar’s argument is darned clever; Feller’s is long and complicated.

    Human intelligence can do lots of such things; so far AI can’t. Net, don’t hold your breath thinking that AI will replace human intelligence in managing, guiding, funding, and evaluating companies.
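    A quick numerical illustration of that superposition point (all parameters invented): merge many sparse, decidedly non-Poisson per-user query streams, and the combined arrivals look Poisson, e.g., the interarrival coefficient of variation comes out near 1, as exponential gaps require:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # 2,000 users, each sending queries with uniform (non-exponential) gaps.
    arrivals = []
    for _ in range(2000):
        t = rng.uniform(0, 50)                  # random start time
        while t < 1000.0:
            arrivals.append(t)
            t += rng.uniform(20, 80)            # sparse per-user gaps, mean 50
    gaps = np.diff(np.sort(arrivals))

    # Exponential interarrival times have coefficient of variation exactly 1.
    cv = gaps.std() / gaps.mean()
    print(f"merged-stream interarrival cv: {cv:.3f} (Poisson gives 1.0)")
    ```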

  22. sigmaalgebra

    “Quant funds”?

    Okay. So maybe they have programmed some triggers or used some time series analysis as in, say, the now-old David R. Brillinger, Time Series Analysis: Data Analysis and Theory, Expanded Edition, ISBN 0-8162-1150-7, Holden-Day, San Francisco, 1981.

    Brillinger was the go-to guy for time series in the 2006 NAS report on temperature reconstruction for the past 2000 years. IIRC there are claims that the J. Simons people programmed some triggers.

    Brillinger has been a big name in time series for a long time, e.g., IIRC, he was a J. Tukey student at Princeton. Right, that’s Tukey as in exploratory data analysis (data science, anyone?), stepwise regression, Tukey’s lemma (another statement of the axiom of choice), convergence and uniformity in topology, a publication citation index (PageRank, anyone?), and, sure, the Cooley-Tukey fast Fourier transform.

    Time series analysis, e.g., interpolation, extrapolation, smoothing, goes way back, e.g., to N. Wiener. And there is filtering of time series, e.g., Wiener filtering. And a lot has been done on non-linear filtering, but some of the math is a bit much.

    I like the fast Fourier transform: at one time it got me a nice, new high-end Camaro! Driving home from a Navy lab, I used to enjoy pulling up a long hill at full throttle to see the shift from 2nd to 3rd at 100 MPH (Turbo-400 with a 2.56 rear end). Or, if I wanted to work more, I’d go to a seafood bar in Silver Spring, MD and pig out on broiled flounder, french fries, and coleslaw with one or two small glasses of beer.

    Once at that bar, reading Blackman and Tukey, The Measurement of Power Spectra (right, J. Tukey again), the next guy at the bar, apparently seeing the book, asked “You work for the Navy?”. Gee, was I being recruited by the Ruskies? I didn’t answer and went back to reading.

    Soon, from the book, I knocked out some software to illustrate how to do power spectral estimation and illustrate the trade-off between accuracy and resolution (roughly the kind of thing sketched after this comment). That work, which I did on my own, got my company sole-source on a nice Navy development contract.

    Still, there is no good reason to insult such good pure/applied math work by calling it AI!
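    For the curious, here is roughly what that accuracy-versus-resolution trade-off looks like in a few lines of Blackman-Tukey-style estimation (test signal and parameters invented): window the sample autocovariance out to lag M and Fourier transform it; small M gives a smooth but blurry spectrum, large M resolves close tones at the cost of a noisier estimate:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Test signal: two close sinusoids buried in noise.
    n = 4096
    t = np.arange(n)
    x = np.sin(0.20 * t) + np.sin(0.23 * t) + rng.normal(0, 1, n)
    x -= x.mean()

    def blackman_tukey(x, M, nfft=4096):
        """Spectral estimate: FFT of the windowed sample autocovariance."""
        n = len(x)
        acov = (np.correlate(x, x, mode="full") / n)[n - 1 - M: n + M]  # lags -M..M
        acov *= np.hanning(2 * M + 1)        # lag window controls the trade-off
        return np.abs(np.fft.rfft(acov, nfft))

    # Small M averages heavily: low variance, but the two tones blur together.
    # Large M resolves both tones, at the price of a noisier estimate.
    for M in (32, 512):
        spec = blackman_tukey(x, M)
        peak = np.argmax(spec)
        print(f"M={M:4d}: strongest peak at {peak / 4096:.4f} cycles/sample")
    ```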

  23. pointsnfigures

    Most proxies in the US are never even voted. What happens if the AI is just programmed to follow “management recommendations”?

  24. ivanhoff

    Just because a stock is acquired via a systematic approach, it doesn’t mean that a machine owns it. A.I. can make a recommendation, but it cannot own or vote, for now.