The AI NexusLab
NYC is an emerging hub for AI research and AI startups. That is because of the large number of mathematicians, scientists, and programmers trained in AI who work on Wall Street; because of leading institutions like NYU’s Courant Institute that do cutting-edge science in the field; and because of a number of programs aimed at AI startups here in NYC.
A few weeks ago I was at the Future Labs AI Summit to hear about AI from Yann LeCun, Gary Marcus, and many others. Below is a short highlight video of the summit.
The Future Labs at NYU Tandon are now accepting applications for the second cohort of the AI NexusLab, looking for AI companies to support.
Applications for the next AI NexusLab cohort close Wednesday, May 3rd, and the program concludes with the next Future Labs AI Summit in November.
If you are an AI startup, or if you know any early-stage artificial intelligence startups that you think could benefit from our program, please apply at www.nexuslab.ai/
Accepted companies receive:
• $100K in funding
• An NYU student fellow for the duration of the program
• Mentorship from leading AI faculty and industry experts
• Access to papers and academic research
• Access to data sets
• Opportunities to run pilots with partners (the last cohort’s partners included Daimler, Tough Mudder, Quantopian, and others)
• More than $400K worth of support and services
• The opportunity to present at the next Future Labs AI Summit (past speakers included Yann LeCun, Gary Marcus, and others)
The Future Labs are also hosting office hours this Friday, April 28th from 1:00pm-5:30pm for teams who have questions about the program at the Data Future Lab – 137 Varick Street, 2nd Floor.
Comments (Archived):
AI sounds like super sexy stuff, and probably because I don’t yet understand what it is, but your ‘tree branch’ post helped. Thx.
Somewhat OT, but not entirely, as I took the class in large part because of its utility to AI and ML work: this afternoon I have the final exam for the functional programming class (in OCaml) I’m doing through the Harvard Extension School. If you think of it about mid-afternoon today, send good vibes Brooklyn way, where I’ll be taking my proctored exam at the Brooklyn Public Library.

As I said on Twitter yesterday, I think 4 months in that the light is finally coming on for me on the lambda calculus, curried functions and partial application – https://twitter.com/brookly… Lazy programming, infinite lists, lexical scoping, the environment model, and functors — we’ll see.

Re AI, I also think AI research and, especially, practical use, benefits from a diversity of industries and applications — a classic strength of NYC.
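The curried functions and partial application mentioned above can be sketched outside OCaml too; here is a minimal illustration in Python (the function names here are just for illustration):

```python
from functools import partial

# A curried function: applying it to one argument returns another
# function, the way every multi-argument function behaves in OCaml.
def add(x):
    return lambda y: x + y

add3 = add(3)          # partial application: fix x = 3
print(add3(4))         # 7

# functools.partial gives the same effect for ordinary functions.
def mul(x, y):
    return x * y

double = partial(mul, 2)
print(double(21))      # 42
```

The point is that "a function of two arguments" and "a function returning a function of one argument" are interchangeable views, which is what makes partial application natural.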
Good luck!
Crossing fingers, touching wood and sending you “Go ace it!” vibes.
OCaml: Gee, yet another programming language — in the tradition of Lisp, APL, Prolog, C — with a totally goofy, delicate syntax. I know; I know; it’s functional!

IIRC John Backus, said to be the inventor of Fortran, tried for years to make functional programming powerful in practice but flopped. I see some places to improve programming languages, but functional and object-oriented are not those ways!

So some of what I do in writing code is use whatever language seems easiest for my work and there write in ways that would be a compiled version of the language I would like to have. The language easiest for my work now is Microsoft’s Visual Basic .NET — syntax and semantics easy to read, write, understand, teach, and learn, with really good access to Microsoft’s .NET Framework; plenty of elementary data types; plenty of means for constructing aggregate data structures (types); good enough exceptional-condition handling; good enough control structures (if-then-else, do-while, for-all, select-when, etc.); good enough access to TCP/IP, SQL Server, and direct-access disk I/O; good tools for time and date manipulation; good Unicode support; cute Unicode string support; object instance de/serialization (darned useful in my server farm); okay but not really good scope-of-names support; managed memory that seems to work plenty well enough for my work; with ASP.NET and IIS, good for building Web pages/sites; compiles blindingly fast; good error messages; surprisingly small EXE files; loads fast; really nice, simple, very cute integration of the compiler with IIS and ASP.NET; works fine with just a good text editor, without all the overhead and complexity of Visual Studio; syntax simple enough that I can write simple little editor macros for important operations on the source code; etc.

Those are some of my reasons for liking VB .NET — functional? I don’t care — too much botheration.
Yeah, it’s just what the class is taught in.
SITA can do an admirable job in functional programming. A great scholar, Backus, first showed me the limitations of the number machine through his writings. The functional abilities of SITA are either-way traversing, heterogeneous computation, elimination of data considerations, and evasion of infinite loops. One variable change in structure creates new information.
VC is going back to deep tech, which means that your blog is going to be tough to maintain for broader audiences: a higher barrier to entry for topics, with higher variability of support (cryptocurrency naysayers, for example, although I can’t think of one off the top of my head…).

And Y Combinator appears to have created a totally sustainable class of premium startup creation. Paul Graham is a genius.
Agreed. On the deep tech front – and the broader topic of “you now need to know more tech than how to scaffold up an app in Rails, a little jQuery, and Mailchimp” – I’ve been enjoying our host’s partner Albert’s “Uncertainty Wednesday” series on probability and other related topics – http://continuations.com/ta…
Here’s the consumer-friendly AI I built this w/e: https://uploads.disquscdn.c…

My view of AI is that it needs more than mathematicians, scientists and programmers. I’m all three, and grateful to my parents and teachers for getting me hands-on with STEAM from the earliest age. This is why I can do what I do today. Abstract thinking on DeepTech and actually BUILDING systems inventions. https://uploads.disquscdn.c…

AI is stuck. And if you’re based in the Bay Area as I am and go to the events I do, you’d hear Google Research folks admit they “don’t know how to solve it (NLU)” and share research on all the permutations of Deep Learning, xNN, Tree-LSTM, DeepMind’s WaveNet, thousands of machines, and the huge data sets and algorithms and some of the best maths + CS PhDs they’ve thrown at the problem.

To solve Natural Language Understanding (NLU) — not just the current Natural Language Processing (NLP), which uses probabilistic and statistical methods — needs the “integration of integrations,” combining knowhow of maths, the sciences (information theory, quantum physics, neuroscience, biochemistry, CS), the liberal arts (economics, philosophy, psychology, anthropology, history, geography), languages (English, Latin-based, Chinese and programming languages) and the arts (design, architecture, music, poetry).

Solving NLU is the biggest multi-disciplinary, multi-cultural engineering opportunity of our times.
Bigger than getting to the Moon/Mars, building the Large Hadron Collider or the Three Gorges Dam in China.

Naturally, it’s possible to build Web and mobile apps on top of existing stacks … in a weekend and have it launched as a startup within a month. It took less than 10 hours to integrate IBM Watson Speech-to-Text, Text-to-Speech, NLP, Google Maps API and more into a consumer app that (potentially) billions of people could use to do “DISCOVERY BY VOICE.” I even coined the term ‘Sharing “voice crumbs” with friends’ [(C) Twain 2017.] https://uploads.disquscdn.c…

Over the next couple of days I’m coding and publishing some Alexa skills. It’s never been easier to make apps on top of AWS, IBM Watson, MS Cortana, Google Home, Apple.

Nevertheless, I committed to “Do and make something meaningful with my life” after my Dad died. So I applied Twain brain to the signal:noise in data and the meaning-of-language problem. Fast forward to today, and I’m just about the only person in the world who isn’t stuck in the black box of AI. LOL.

I really want to win MIT’s $200,000 Disobedience Award, because if I’d listened to any number of AI and business folks … I’d be as stuck as everyone else in AI and I wouldn’t have been able to work out how to build BEYOND THE BOX.

* https://www.media.mit.edu/d…
https://uploads.disquscdn.c…

Then I’d have $200,000 to pay for AWS and a team. That, and super-smart angels and technical mentors, is the help I need.
How do we help you get the award?
Thanks, Sebastian, :*). I think MIT accepts open nominations.

* https://www.media.mit.edu/d…

On the MIT website: “This disobedience is not limited to specific disciplines: examples include … the freedom to innovate.”
@twaintwain:disqus I always read your comments with a lot of attention, and today I have to react to your comment. Initially I found you to come across as pretty pretentious, but I eventually realised you actually know what you are talking about. Where on the internet can someone who also went beyond the box follow and support your work?
Thanks for your honesty, Julian. Inventors need a certain bravura/chutzpah because we’re N-times lonelier than founders — although most folks may not be aware of this. Founders can ask their team and investors. Inventors have to work out a lot of things by themselves. It’s the nature of the process, and there aren’t many Professors of AI I can ask (because they’re all building boxes).

After searching for a few years, I finally met three other developers who think beyond the box. They’re all into quantum physics, the arts, and examining the philosophical variances in how different cultures think and communicate. So the sooner I win $200,000+, the sooner we get AI and systems that better represent human language, culture and values (moral, ethical, economic).
Suggestion: When discussing natural language understanding (NLU), don’t mention “quantum physics” (QP) without a clear and solid explanation of just why QP has anything important to do with solving the problem of NLU.

Next, maybe there are some opportunities for significant, hopefully useful, even valuable progress in NLU, but IMHO the problem as a whole is essentially inseparable from making quite a lot of progress in real AI. Why? The U in NLU means some intelligence to receive the intelligence. Or, by analogy, language is like TCP/IP, HTTP, and HTML, and the understanding is like a Web browser. That is, language is just communications, but for any “understanding,” what is needed at the other end of the communications channel is some actual intelligence.

Will some mathematics, applied math, probability theory, statistics, etc. be involved in constructing the intelligence? Tough to say. By analogy, that’s like asking, if we are going to build a new airplane, will we need a Torx screwdriver? Well, if we see a need for one, sure. Otherwise, we don’t know. Can we solve the NLU problem with probability theory? Not by itself. Will probability theory play a role? Like a Torx, if we can find a role for it, sure.

Is solving the NLU problem really important? Well, if it is just one computer talking to another, likely no, since computers can use, say, just simple JavaScript JSON. For humans communicating with computers, we have some pretty good means now. How about the problem of real AI, that is, computers with real intelligence — is that a biggie problem? Maybe, but I see no hope for any progress for decades.

Again by analogy, the work I’ve seen so far, e.g., “deep learning,” looks to me not like the Wright Brothers’ Kitty Hawk flyer but like some feathers glued to some poor guy’s arms — not even a single step toward something that would fly.
Quantum physics is involved, and I’d worked this out when I was a teenager. Three facts about teen Twain:

(1.) I worked in the chemical industry. In chemistry, we learn Schrödinger’s wave functions, and that gets us into the whole realm of subject+object entanglement.

(2.) In the maths side of my degree, I got 99% in Prob+Stats and summa cum laude in Linear Algebra, which are the basis of all ML. So, nowadays, when a PhD from SAS Institute shows how he does covariances of Poisson distributions using 4 time-series variables and the model then falls apart on the 5th time-series variable set, I have 0 problems following along. Ditto when Google Research’s PhDs show their lovely sigmoids and softmax algos.

(3.) In the business side of my degree, I wrote a paper that started, “Perceptual processes of information retrieval, understanding and interpretation differ between people. Observers believe they see clearly and objectively. This essay will outline the main perceptual problems of stereotyping, implicit personality theories, perceptual defense and attribution practices which are intrinsic in everyone’s perception.” So, nowadays, I can read any number of neuroscientists’ and psychologists’ papers on “attention gates” and emotion scales and get it.

Language is a subject+object entanglement problem. It’s not a random, stochastic-process problem => probability and statistics are unsuitable tools. And this is why, despite throwing all the Prob+Stats tools ever created at it, Google et al. have been incapable of cracking the NLU problem.

Ok, back to coding my Alexa skill with Node.js.
There are lots of meanings of entanglement, including, say, the roots of two trees, two people in a co-dependency relationship, what some vines do to trees, etc. You have two meanings of entanglement, and they are very different, so different I see no meaningful similarity at all.

> a random, stochastic process problem => probability and statistics are unsuitable tools.

Let’s consider that: Go observe a number. Call that the value of real random variable X. That approach works great! We can use classic measure theory (the grown-up version of freshman calculus), as in, say, the classic P. Halmos, Measure Theory, to add a super nice foundation to that view of a random variable.

So, for the set of real numbers R, we can have random variable X having its values in R. For positive integer n, we can also have X having its values in the n-dimensional vector space R^n of linear algebra. Okay. To be more careful, we should specify a topology for R^n, and the usual one is fine.

Given a non-empty set I — call it an index set — a stochastic process is the set of all X_i (X subscript i, borrowing notation from D. Knuth’s TeX) with i in I. A common special case is when I = R, the set of real numbers; then typically i in R is regarded as time.

So, both random variables and stochastic processes are so general that it is tough to say that they don’t apply or are limiting. To say that they do apply, we just need an example. Sure, if you browse the literature in probability, stochastic processes, and mathematical statistics, then you will find from darned little down to just nothing that gives even tiny hints about how to solve natural language understanding or real AI. We didn’t expect anything else, right?

In my garage, I have a nice collection of tools, especially from Sears Craftsman (from back when), including socket wrenches in both English and metric, and much more. Over the years, I’ve gotten a lot of use out of those.
But nowhere do those tools, or anything I got when I bought them, tell me how to use them to work on a bicycle, lawn mower, house plumbing, screwing down warped boards in the back porch, etc. Two of the tools are super nice: one is a certain, crucial size of deep socket, and the other is a breaker bar about 2.5 feet long. While I got no instructions with either of those two, they are just terrific for the lug nuts on the wheels of my Chevy S-10 Blazer, especially since I am using some solid, especially strong lug nuts — the originals from Chevy had a sheet-metal covering which quickly got mangled.

Again, math, applied math, probability, stochastic processes, and statistics are useful if you can find a use for them. Same for eggs in a kitchen — omelets, batter for French toast, deviled eggs, batter for pancakes, emulsifier for Caesar salad dressing, the main ingredient in soufflés. And when they have been in the refrigerator too long and start to get black marks on the outsides of the shells, maybe uniquely useful for some purposes having to do with some total nut-job, wack-o politicians or their MSM newsies!

Find a use for some math? Look at a real problem. See what assumptions you can make. E.g., we can quickly argue that the number of search requests arriving in the next 10 minutes at Google will have a Poisson distribution and that the arrivals over that time will form a Poisson process (e.g., see the nice axiomatic derivation in E. Cinlar, Introduction to Stochastic Processes — warning: he has a lot of implicit monotone class arguments, and at one point in his axiomatic derivation I have a slight tweak that is a little better). Okay, example: with that assumption we can do some nice work on sizing the server farm.

So, you start with a real problem, take the assumptions that seem reasonable in that problem, let them be hypotheses in theorems, and see if the conclusions of the theorems give you some good answers for your real problem.
I was teaching that, with a little cartoon, to B-school students long ago. In my Ph.D. dissertation, I made sure to follow this little recipe. No one told me to do that, but I wanted to be able to defend my work against anyone, even people as nasty and ignorant as my worst high school math teacher.

Of course, AI is willing to take the theorem conclusions, forget about the assumptions, and start typing in software. About the best we can say is that they have some heuristics, and that is a generous remark. Here they lose a lot of quality control: to have any idea whether their work is correct, they have to test on real data. E.g., for self-driving cars, they hope that the real data contains enough of the special cases.

But what they don’t have is a clean, that is, comprehensive, formulation of the problem of a self-driving car. They didn’t take that formulation, find the relevant, justified assumptions, use the assumptions as hypotheses in solid theorems, and take the conclusions of the theorems to show that their design will be safe always. Sure, maybe their response would be that their design worked fine in 10 million miles of driving — uh, use big data — and that’s good enough, maybe even safer than humans. Hmm….

Why not have a formulation? Getting one is too difficult. But I’m sure that both NASA and John Glenn did essentially just such a formulation before John climbed on top of all that liquid oxygen and kerosene. And just such a formulation was prior to and crucial for GPS. Similarly for knowing that some new bridge design will be successful for, say, 200 years.

So, what is the variety of real problems, reasonable assumptions for each of those, math theorems where those assumptions cover the hypotheses of the theorems, and conclusions of the theorems? Of course, enormous, and larger if we state and prove some new theorems. Will such theorems apply to AI?
Well, if you have a specific AI-like problem, have some reasonable assumptions (e.g., in some cases, a Poisson process), the assumptions cover some theorem hypotheses, and the theorem conclusions seem useful for the problem, then, maybe.

There’s a good quality-control advantage here: if the assumptions really are reasonable, the proofs of the theorems rock solid, and the corresponding software correct, then likely that technical part of the work is ready to do its part in your business.

There is also a power advantage: some of the theorems are amazing — true, but amazing. Proceeding with theorems and proofs can at times show some things that really are true but would be essentially impossible to guess, or to believe even if guessed. Actually, there are some such points with the Poisson process. The martingale convergence theorem is another example — without a careful proof, no one would ever believe the conclusion. And there are some associated martingale inequalities, some of the strongest in mathematics. Martingales are the easy way to prove the strong law of large numbers. E.g., every stochastic process is the sum of a martingale and a predictable process — all of Wall Street should be drooling (but they need to look a little more carefully).

A Hilbert space is complete, so a Cauchy convergent sequence really converges. Well, the vector space of all real random variables X with E[X^2] finite, with inner product (X,Y) = E[XY], is a Hilbert space. Think about that for a little while, and you just will NOT believe that it could be true — but it is. Then any sequence that converges in that space has a subsequence that converges almost surely. Also tough to believe. Typically in practice, the whole sequence converges almost surely.

There’s lots more that’s true, sometimes powerful and valuable, and impossible to believe or discover without mathematical proof. Math proof is the crown jewel, really, the all-powerful commander, of information technology.
Of course, Silicon Valley has yet to discover this, which is an opportunity!

For the paradigm or technique, that’s about all there is to it. Of course, that’s about like saying that the 88 keys on a piano are all there is to music. But, in either event, the truth is not indescribable, inscrutable, or black magic.
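The server-farm sizing example in the comment above can be made concrete. Assuming Poisson arrivals at a known mean rate (the rate and tail target below are made-up numbers for illustration), a short script finds the smallest capacity whose overflow probability is below a target:

```python
import math

def capacity_for(lam, tail=1e-3):
    """Smallest c such that P(N > c) < tail, for N ~ Poisson(lam)."""
    p = math.exp(-lam)          # P(N = 0)
    cum, c = p, 0
    while 1.0 - cum >= tail:
        c += 1
        p *= lam / c            # P(N = c) from P(N = c - 1)
        cum += p
    return c

# If about 100 requests arrive per interval, this is how many concurrent
# requests to provision for to keep the overflow chance under 0.1%:
print(capacity_for(100.0))
```

The recurrence P(N = c) = P(N = c - 1) · λ/c avoids computing factorials directly, which keeps the script numerically stable even for large λ.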
Oh … where I’m taking maths is way beyond this …

> Let’s consider that: Go observe a number. Call that the value of real random variable X. That works great! We can use classic measure theory (grown up version of freshman calculus) as in, say, the classic P. Halmos, Measure Theory, to add a super nice foundation to that view of a random variable. So, for the set of real numbers R, we can have random variable X having its values in R. For positive integer n, we can also have X having its values in the n-dimensional vector space of linear algebra R^n. Okay.

Let’s suppose we ask 100 people from different cultures to give us a real number between 0 and 10. We want to understand WHY the Chinese folks among those 100 people have a bias towards the number 8 and an aversion to the number 4. Now, we can do a prediction model for the likelihood they’ll buy 4 of something or 8 of something very, very differently from if we just did the n-dimensional vector space of linear algebra R^n with some type of sampling ratio (1/5 of the 100 people tested are Chinese, and 80% of those chose 8 whilst 2% of them chose 4). LOL.

Let’s make the distinction between two different processes that are happening to the same variable:

(1.) A real number upon which an operator can be applied to produce a functional objective.
(2.) A real number upon which an opinion is applied to produce a subjective feeling.

Now, we know maths is tooled for the first. The question is how to tool maths for the second. Ok, THAT is entanglement.
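The conditioning-on-culture point above can at least be shown with a toy tally; the counts below are made up to loosely match the rates quoted in the comment (1/5 of the sample Chinese, about 80% of them choosing 8):

```python
from collections import Counter, defaultdict

# Made-up sample of (group, number chosen): 20 "chinese" + 80 "other".
sample = ([("chinese", 8)] * 16 + [("chinese", 6)] * 3 + [("chinese", 4)] * 1
          + [("other", n) for n in range(1, 11)] * 8)

tallies = defaultdict(Counter)
for group, n in sample:
    tallies[group][n] += 1

def p(number, group):
    # Empirical conditional probability P(choice = number | group).
    counts = tallies[group]
    return counts[number] / sum(counts.values())

print(p(8, "chinese"))   # 0.8 — the bias a pooled model would wash out
print(p(8, "other"))     # 0.1 under a uniform spread
```

This only estimates the conditional frequencies, of course; it says nothing about the WHY, which is the commenter’s actual question.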
You started with:

> Now, we can do a prediction model for the likelihood they’ll buy 4 of something or 8 of something

That’s okay, except likelihood is a bit misused; better would be just probability. Likelihood is usually the value of a probability density function. For a random variable on the whole numbers from 1 to 10, there is no density.

Then for most of the rest, that is, the

> if we just did the n-dimensional vector space of linear algebra R^n with some type of sampling ratio (1/5 of 100 people tested are Chinese and 80% of those chose 8 whilst 2% of them chose 4).

sorry, but this is gibberish, word hash: you have described no role at all for good, old, simple workhorse R^n.

For your

> (1.) A real number upon which an operator can be applied to produce a functional objective.

with operator and functional objective you are back to gibberish, word hash, again. In math, an operator is usually a function where the domain of the function is a set of functions. E.g., we can have a linear operator on a Banach space of continuous functions (use the max or L-infinity norm so that convergence is uniform, and use the classic result that the uniform limit of continuous functions is continuous — it’s in W. Rudin, Principles of Mathematical Analysis, and was on one of my Ph.D. qualifying exams. Since I’d carefully studied Rudin, I got that question and, really, did the best in the class on that qualifying exam. Same for two more of the exams, the one in linear algebra and the one in computing. For a norm on that space, assume that the domain of the continuous functions is a closed interval of finite length on the real line.)

For the Banach space of real-valued continuous functions on a closed interval of finite length in the real line, the integral of calculus is a linear operator. Some of the results about operators on a Banach space are astounding — see, e.g., W. Rudin, Real and Complex Analysis. For operators, sure, see, say, Nelson Dunford and Jacob T. Schwartz, Linear Operators, Part I: General Theory, ISBN 0-470-22605-6, Interscience, New York. Schwartz was long at Courant.

When we have, say, a function f: R -> R and a real-valued random variable X and take f(X), we still regard f as a function and not an operator. For “functional objective,” I have no idea what that is.

For your

> Now, we know Maths is tooled for the first.

Not at all. From the theorems of high school plane geometry, trigonometry, solid geometry, the fundamental theorem of calculus, that the uniform limit of continuous functions is continuous, that E[E[Y|X]] = E[Y], and many thousands more, in the writings of J. von Neumann, his assistant at the Institute for Advanced Study in Princeton P. Halmos, J. Doob (dissertation advisor for Halmos), W. Rudin, H. Royden, D. Bertsekas, R. Rockafellar, and dozens more I have studied, there is no hint of your statement.

That in what we are observing there might be a role for “a subjective feeling” is just fine. E.g., maybe Mary is looking for a movie for the four grade-school children of her sister. For some Disney movie, on a scale of 0 to 100, with 0 the worst possible and 100 the best possible, she rates that movie 92. Fine. Now we have a random variable, call it X, so that X = 92. We can guess that at times Netflix does some related things. We have not learned much about Mary, but we have started to learn some about what Mary thinks about the grade-school children of her sister. Just fine.

For your

> Ok, THAT is entanglement.

maybe so, but it’s nothing like entanglement in tree roots, a couple in a co-dependent relationship, swamps, mosquitoes, and malaria, or quantum mechanics.
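The claim above that the integral of calculus is a linear operator is easy to check numerically; a midpoint Riemann-sum sketch on [0, 1] (the functions and coefficients below are chosen arbitrarily for illustration):

```python
def integrate(f, n=10000):
    # Midpoint Riemann sum of f over [0, 1].
    h = 1.0 / n
    return sum(f((i + 0.5) * h) for i in range(n)) * h

f = lambda x: x * x
g = lambda x: 3.0 * x + 1.0
a, b = 2.0, -1.5

# Linearity: T(a*f + b*g) = a*T(f) + b*T(g)
lhs = integrate(lambda x: a * f(x) + b * g(x))
rhs = a * integrate(f) + b * integrate(g)
print(abs(lhs - rhs) < 1e-9)   # True
```

A numeric check is no proof, but it makes the operator language concrete: the "input" to T is a whole function, and T respects sums and scalar multiples.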
Did you take note of the age groups used when testing Chinese samples? The WeChat generation is very different. Probability — quantum mechanics is as elegant as it gets, according to our granular reality!!
> elegant

Well, with quantum mechanics, apparently as far as we know in physics, and with high irony considering how much physics likes the Dirac delta function, there are no discrete distributions. Or, in math language, all the distributions are absolutely continuous with respect to Lebesgue measure. Sure, maybe there is something discrete at the Planck scale!
Thanks re “Planck scale!” I would lose the plot if I drill down.
There is no negation, but ultra-low-level power and thresholds are mandatory. Maybe one day we will have single-electron gates. Converting that into an understanding level is also possible.
It is the full work of language understanding. The character chemistry is defined by grammatical structure. It is an either-way traversing phenomenon: from grammatical (structural) information a word is generated, and vice versa.

Let us go from structural information to character chemistry. Structural information has four stages. Structural information is either connected to a character or to a space; once it is connected to a space, in that stage there is a continuous space connection. The 32 positions are divided into 4 stages. A character may receive many pieces of structural information, but a piece of structural information has one connection in that position. Once input comes in binary, at least one divergent piece of information will be there; because of this, all other structural information is switched off.

What is a stage? Example: “unbecoming”. “Un” is one stage, “becom” the 2nd stage, “ing” the 3rd stage. The structural information moves through the 4 stages: “un” and 12 spaces; “becom” and 9 spaces; “ing” and 2 spaces. The last stage is one place for ambiguity. It has infinite filtration competency. If there is interest, I can explain the total project.
A deviation from present drudgery is a dire necessity for AI systems.
NLU will make quantum mechanics easy. The reverse, I am not confident about.
Get an Alexa skill published and you can get Amazon to give you credits for AWS… thanks to my Math Mania skill, AWS fees for all my side projects and hacks are now covered for about the next year!
Thanks for this great tip. Their current offer: publish 3 skills within 30 days = Amazon Echo. Maybe AWS credits happen when 1,000+ people use your Alexa skill?
I don’t think there’s a min. number of users before you can get the credit… it just has to be ‘published,’ is my understanding.

That being said, my skill has been up since the very early days of the platform, and though I’ve never really advertised it other than a few social media mentions, it’s had over 10k users enable/try it (so maybe I met a min. that I didn’t even know about)… Advantage of being early to an emerging system, I guess (because the skill itself is kinda lame, but it was a good learning exp. for me)…

I just built http://tokentogether.com for the Token app @fredwilson:disqus mentioned a week or so ago, for many of the same reasons (early to system, want to understand the dev. process/pain/tech, fun). I guess I jump off the edge faster than the average person… 😉
Re “…(so maybe I met a min. that I didn’t even know about)…”

Nice when that happens :-). Very impressive!
May 2016, at IoT World, we tried to teach Alexa how to dynamically TALK a visually-impaired person / young child across the road towards their destination by mashing up GE Predix traffic flows, the Google Maps API and other technologies. At the time, Alexa Skills’ flow structure and developer tools were … poor.

Also, obviously, Alexa needs to be hooked up to cameras (traffic and on the person) and GPS, and the voice slot + intent extractions have to be a lot more dynamic than they currently are. The machine learning would need to be on the edge device, because cloud latency could be the difference between being hit by an oncoming car or not.

Even yesterday at the AWS Alexa training workshop in SF, they showed us how to do the typical “Alexa, give me a fact.” Fine, so I’ll code 3 vanilla skills to win the Echo. But it would be really cool to give visually-impaired folks technologies to help them cross roads safely, using Alexa, right?

Their Developer Advocates yesterday said my ideas are frontier stuff, because it would really be a test of ASR and NLP when it’s not only voices Alexa hears but also traffic sounds, crowds etc. Plus the camera + GPS integration thing.
Very cool! Some of the newer stuff they’ve opened up will help in this general direction, but yes, your ideas are currently ahead of most of the actual tech…

In my actual day job (as CTO of Veritonic) I’m working on predicting the emotions and feelings that sound evokes… one of the things I use is audio fingerprinting, which I suspect could also help in filtering out noise from specific sounds/actions you would want to support… but I don’t think anything in the Alexa world is actually at that level of control/detail (yet)…
IBM Watson has Tone Analyzer for emotion prediction that’s based on the OCEAN model (openness to experience, conscientiousness, extraversion, agreeableness, and neuroticism):

* https://tone-analyzer-demo….

I was saying to the IBM Watson team that for the WatsArt hack we did, it’s possible to parse emotions from the voice files AND, in V3, to add token functionality to it. So, say your friends and you spot an artist / work of art you’re interested in borrowing for a period of time on a revolving-credit-type basis. Well, technically, it’s possible to do a smart contract and pin it onto a Google Map alongside the video + voice files of the parties and assets involved. That’s the way I see tech.
Kevin! You got quoted on TechCrunch about Alexa payments to developers! https://uploads.disquscdn.c…
heh – thanks – yeah, there have been a few articles put out about it today…fun stuff!
AWS fees for all my side projects and hacks are now covered for about the next year! What is the value of that?
I currently run two EC2 instances, a couple of EBS volumes, a couple of low-usage Lambdas, and a little bit of S3… the monthly AWS bill is a little under $100. The promotion/program they just accepted me into gives me $100 of AWS credit a month (for up to 12 months) because I have an Alexa skill in production (which mostly uses the Lambdas and one little bit of one of the EC2 instances). BTW – all that AWS stuff currently runs/hosts/powers everything you see listed on http://digdownlabs.com, as well as a handful of things that either run behind the scenes or just aren't live/in production yet (I have a couple of mobile games in dev that I just can't seem to finish out yet). Insane what you can do on the cheap these days!
Thanks. You are so honest and self-deprecating with your 'deadpool' list! See the typo below, btw… https://uploads.disquscdn.c…
Hey thanks – spelling will be the death of me! It's easy to be open and honest about the Dig Down Labs stuff, because it's all just passion/learning/for-fun stuff… sometimes I make a little cash with something, but financially I just try to operate it so that I don't lose a lot of money learning new things, playing with new ideas, and trying stuff out… I also just realized I haven't added Token Together to this list yet… so gotta get on that now I guess too 🙂
Ok, Amazon is officially eating everyone's lunch. Instead of an Echo, I'm asking my AWS friends for this to hack with: https://www.youtube.com/wat…
I saw a few people talking about this the other day…haven’t seen anything in the wild yet though. So keep us posted if you get one and build something we can play with! 🙂
I’m waiting for the Echo Ego release.
Haha! You want to copyright that, Jason.The success of the iPhone and any number of devices and apps speaks to our inner Sigmund Freud.
Are any pro sport teams using AI? Like Moneyball 2
Funny you should say that, Tom… Tomorrow the SF 49ers' senior analytics manager is sharing how they apply data + AI to these things:
* Ticket sales, member services, marketing, CRM, and their loyalty program
* Game-day operations: food & beverage, retail sales, stadium operations
On the topic of AI, I read a super interesting post by Kevin Kelly challenging the near-consensus view that AI is on a path to attain super-human levels of intelligence, take our jobs, and potentially threaten our very existence. https://backchannel.com/the… . I’m not sure I buy all of his arguments, but some of them are hard to dispute, and all of them are thought provoking.
Gee, the video mentioned "unsupervised learning" and "pattern recognition". Good grief. A paper I published on anomaly detection appears to be an example of those two, long before I ever heard of any such things. Yes, looking at my paper now, I'm ashamed to admit that the list of key words included "artificial intelligence" and "machine learning". At one point in the paper I included: "Even without considering our i.i.d. assumption at all, we can just regard our techniques as heuristics in artificial intelligence instead of results from applied probability and continue with real applications." I regard the paper as some useful derivations in applied probability. Good grief: it appears that I published a paper in "unsupervised learning" and "pattern recognition" without knowing it! Actually, there are some aspects of my paper a bit far from AI/ML: for the anomaly detectors I consider, I actually have a way to adjust the false alarm rate, know that rate in advance, and get that rate exactly in practice. Also, my work is both multi-dimensional and distribution-free and, thus, a nice step forward in statistical hypothesis testing and also in monitoring of server farms and networks. Also, I did the work to improve on our earlier work in AI – my work totally blows the doors off that earlier work. I didn't just mumble about heuristics; instead, I got a solid result on the false alarm rate.
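To illustrate the flavor of a distribution-free detector with an exactly known false-alarm rate (a classic order-statistic rule used here for illustration, not the paper's actual method): with n exchangeable historical observations from a continuous distribution, flagging a new point whenever it exceeds the k-th largest historical value gives a false-alarm rate of exactly k/(n+1), whatever the distribution:

```python
import random

random.seed(1)
history = [random.gauss(0.0, 1.0) for _ in range(199)]  # n = 199 past observations
k = 10
threshold = sorted(history, reverse=True)[k - 1]        # k-th largest past value

def is_anomaly(x):
    """Flag x as anomalous iff it exceeds the k-th largest historical value."""
    return x > threshold

# Under the null (the new point exchangeable with history, continuous
# distribution), ranks are uniform, so the false-alarm rate is exactly
# k / (n + 1) = 10 / 200 = 5% -- known in advance, for ANY distribution.
false_alarm_rate = k / (len(history) + 1)
```

The Gaussian history is only a stand-in; the rate calculation never uses it.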
Worth information
Watching the video on YouTube, at the end at https://www.youtube.com/wat… is another video, this one on "reinforcement learning", by someone all fired up about that. Sorry, guy: you definitely should put down your AI tinfoil hat and get at least first-cut informed on discrete-time stochastic dynamic programming – although apparently you don't know it, those are actually the problems you are talking about, and that subject is not nearly new. Look for work by authors R. Bellman, D. Bertsekas, S. Shreve, R. Rockafellar, R. Wets, et al. Modesty keeps me from saying you should look at my Ph.D. dissertation in stochastic optimal control. Your learning is an iterative, approximate, heuristic solution technique, essentially forward in time, although you might want to add a backward-in-time version. You should look at other long well-known solution techniques, e.g., Rockafellar's clever scenario aggregation (where you might want to know some measure theory, e.g., as in the classic text with that title by P. Halmos, or just read from W. Rudin, H. Royden, J. Neveu, etc.), and compare them with yours. Uh, in part you are working with scenarios. You stand to learn some things! Uh, you will also want to know what a sigma algebra is! Early exercise: prove that there are no countably infinite sigma algebras. Post your answer here, and I will grade it. Uh, IIRC, early in high school we were told that when writing a paper, first hit the library and get caught up on prior work.
That is still good advice, even for your supposedly highly advanced, erudite, never-seen-before, highly original, innovative, powerful, and valuable – what was it you called it – "reinforcement learning". Uh, your "reinforcement" is essentially a crude, merely heuristic version of the well-known optimal value function. Suggestion: state and prove some useful theorems on when something like your reinforcement heuristic converges to, or approximates, the actual optimal value function. There's another important case of "learning" – before trying to do some research, writing a paper, or giving a talk, hit the library and get caught up! Apparently so far what you have is a C-minus, sloppy, heuristic approach to some simple versions of some classic problems where a lot really is, and long has been, known.
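To make the "optimal value function" concrete: for a tiny Markov decision problem (all numbers invented for illustration), value iteration simply iterates the Bellman operator to its fixed point, which is the optimal value function the reinforcement heuristics are groping toward:

```python
import numpy as np

# Toy 2-state, 2-action Markov decision problem; the numbers are invented.
# P[a][s, s2] = transition probability under action a; R[a][s] = one-step reward.
P = [np.array([[0.9, 0.1], [0.2, 0.8]]),
     np.array([[0.5, 0.5], [0.4, 0.6]])]
R = [np.array([1.0, 0.0]), np.array([0.0, 2.0])]
gamma = 0.9  # discount factor

def value_iteration(P, R, gamma, tol=1e-10):
    """Iterate the Bellman operator to its fixed point, the optimal value function."""
    V = np.zeros(len(R[0]))
    while True:
        # Bellman update: best action's reward plus discounted expected future value.
        V_new = np.max([R[a] + gamma * P[a] @ V for a in range(len(R))], axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new

V_star = value_iteration(P, R, gamma)
```

Because the Bellman operator is a contraction for gamma < 1, convergence here is a theorem, not a hope – which is the kind of guarantee the comment is asking the heuristics to supply.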
Reinforcement learning is based on the American psychologist B. F. Skinner's 1930s experiments on mice and operant conditioning with two states: 0 = no food, 1 = food. Let's not insult the intelligence of humans by applying this type of reinforcement learning.
Well, the AI/ML people are determined to use anthropomorphic, psychological, neurological, biological, etc. terminology for the intuitive heuristics, etc. in their computer software; this practice is contemptible, essentially the same as the old publicity that IBM's computers were "giant electronic human brains". So, yes, in this case the terminology borrowed was from psychology, likely good ol' BF Baby. My brother's Master's was in psychology. His conclusion was that the field knew some interesting, scientific things about rats but so far not much that was both really scientific and interesting about people, and he got his Ph.D. in political science, where he didn't have to be embarrassed about not being fully scientific. So, the guy in the video, in his version of the contemptible practice, borrowed BF's "reinforcement learning". Yes, along the lines my brother noticed and you wrote, what BF Baby said about rats is an insult to humans. But really that is not a biggie knock on the video: there, again, he just did the contemptible usual and borrowed anthro … bio terminology. But he's not really doing anything at all anthro … bio or intelligent: instead, although apparently he doesn't know it, he's attacking a problem that was noticed long ago and has been heavily studied since. E.g., one role for pure research is just to notice and formulate nearly all the main problems in advance and study them to see what, if anything, can be concluded. E.g., the Ford and Fulkerson work on network flows may have been an example; but since then we have seen some cases in reality of what they were studying and have been able to use what they learned. And in the video the guy's claim that his little intuitive heuristic is roughly, first-cut, simple-mindedly a little like some of what some humans do in some circumstances – maybe more like what the squirrels and possum on my back porch do, due to the bread heel scraps I toss out – is not all wrong.
E.g., last night when I turned on the light on the back porch to let in my kitty cat, she was at the door, ready to come in; on the far other side of the porch was the possum; and between the two, but closer to the door and my kitty cat, were some scraps of bread the possum was after. But the possum didn't want to get close to my cat. And my cat was nearly ignoring the possum. For more, an ambitious boy of 14 with a pretty girl of 13 has, each time he sees her, some choices about what to do. Well, he tries some of the choices. Maybe she orders him to leave and slams the door; maybe bites, slaps, and kicks him; maybe scowls and pushes him away; maybe she smiles a little but averts her face; or maybe she … jumps up and kisses him. So, each day he gets some reaction, some "reinforcement", to help him learn whether he is on the right track to his ambitious goal. So, he has a multi-stage (one stage for each day he comes to see her) decision-making problem under uncertainty and is trying to learn. Or we could say that he has a directed graph where each arc corresponds to an action he takes, runs from one day to the next, and changes the state of the relationship from the state on one day to the state on the next – if she slams the door, maybe say that the next state is the terminal one. If we make this problem quite a bit more precise and use a Markov assumption (past and future of the process conditionally independent given the present) to make finding solutions much easier and the need for data much smaller, then we are essentially in the old field of discrete-time stochastic dynamic programming – although we don't really have to take the stochastic case and could just assume the deterministic one. For a well-posed problem of realistic size – to heck with all the anthro … bio mumbling – finding the best solution is easily a grand supercomputing application; really, easily well beyond what you could have even if you took over all of AWS. And it is easy to formulate such a problem:
Just set up a business financial-planning spreadsheet with one column for each month and one row for each variable. Fill in all the cells as usual in spreadsheet usage, that is, each cell has an expression in terms of cells in earlier columns, except have some special cells. First, have some cells that are just random numbers, independent, uniformly distributed on [0,1] (as one hopes to get from the usual software random number generators). Second, have some cells for decisions to be made. Third, in the bottom right-hand corner, have what the business is worth. Then your mission, should you decide to accept it, is to say how to make the decisions to maximize the expected value of the cell in the bottom right-hand corner. Easy to think of. Any of us can quickly learn the basic, but quite amazing, mathematical solution to that problem and quickly write corresponding software. And the software can be written to be astoundingly parallel, e.g., take over all of AWS. Then on just problems of realistic size, AWS will draw enough electric power to strain Bonneville and dim the lights in the US Northwest for months without ending. Yes, beyond the basic solution technique, there are lots of ways to make it all faster, in some circumstances many factors of 10 faster. I had some such ways in my dissertation; else my software would still be running – literally, since my estimate without my "ways" was 64 years! As it was, my software ran in a few hundred seconds on a computer so slow that now we would call it a toy, maybe something for free in a Cracker Jack box. The problem I was attacking was not from a spreadsheet but was quite realistic – at one point it could have saved FedEx many millions of dollars a year. So, in effect, the guy in the video is trying to find a simple heuristic for a poorly formulated version of that spreadsheet etc. problem.
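A hedged sketch of that basic solution – backward induction on a discretized state. The horizon, grid, toy cash dynamics, and shock model are all invented for illustration; a real spreadsheet problem has many state variables, which is exactly why realistic sizes blow up:

```python
import numpy as np

# Toy finite-horizon problem in the spirit of the spreadsheet: T monthly stages,
# state = cash level discretized onto a grid, decision d in {0, 1}, uniform shock u.
T = 3
GRID = np.linspace(50.0, 200.0, 151)   # discretized cash levels
U = np.linspace(0.05, 0.95, 10)        # quadrature points for the uniform shock

def step(cash, d, u):
    # d = 0: safe 2% growth; d = 1: risky, outcome depends on the shock u.
    return cash * (1.02 if d == 0 else 0.98 + 0.08 * u)

V = GRID.copy()                        # terminal value = the corner cell = cash itself
for _ in range(T):                     # Bellman backward recursion, one pass per month
    V_new = np.empty_like(GRID)
    for i, cash in enumerate(GRID):
        # expected continuation value of each decision (interpolate V on the grid)
        ev = [np.mean(np.interp([step(cash, d, u) for u in U], GRID, V)) for d in (0, 1)]
        V_new[i] = max(ev)             # take the better decision at this state
    V = V_new
```

With one state variable this is instant; with the dozens of state rows in a real planning spreadsheet, the grid is exponential in the number of states – the "take over all of AWS" part.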
What he is doing is maybe a little close to what the 14-year-old boy is trying with the 13-year-old girl – getting some level of reinforcement each day and, based on that, adjusting what he does the next day. For this, we don't really need to bring in ol' BF and his rats.
Heart touching. Saying where we are.