Funding Friday: Wines Of The French Alps
I woke up this morning to find this tweet in my feed:
Projects worth funding: Wink Lorch’s Kickstarter project: Wines of the French Alps https://t.co/zfp6uytR5u @kickstarter #wine
— awaldstein (@awaldstein) April 28, 2017
And I immediately knew what I was gonna do. Back the project and feature it on Funding Friday.
I love Arnold, wines from the French Alps, and Kickstarter. Nothing more to say.
Watch the video and back the project.
Thanks Fred. To me this is the magic of Kickstarter: letting these types of projects find a way to happen.
Yup. And the rewards are fantastic.
Nice one to back. Thanks to Arnold, Chambers Street Wines and a few trips to Paris, the Savoie is one of my favorite wine regions; famous for its Mondeuse and especially its artisanal producers, churning out naturally made wines, like this Pinot. Vive la Savoie! https://uploads.disquscdn.c…
Love Savoie. The reds are great but I am completely smitten by Gringet, a white. So few hectares planted (less than 10, I believe), so refreshing, so delicious, and made by Belluard, truly one of my heroes. And as Wink has brought out, there is a quality that comes from being Alpine (defined at the intersection of altitude and latitude) that shapes agriculture and likewise the wines themselves to some degree. Glad to have added to your obsessions, my friend!
I’ll check it out at Chambers. I’ve switched all my wine buying to them. Screw the LCBO.
Try all the whites: Gringet, Altesse and Jacquère. Next is to hook you on Ograde from Skerk in Carso/Kras, one of the most beautiful, elegant and interesting natural wines, from a friend of mine. Hard to find but worth it.
Also try the ones from Jean-Yves Peron, who we met in Paris in March at Saturne (a top restaurant with natural wines to boot). He is from Haute-Savoie…no sulfur. I’m sure your friend Wink would know him. https://www.chambersstwines…
I’ll make a note. I may need to be in Milan and Lisbon in July, and if so may come back through Paris. Wink is not allowed to have visitors so she can finish her book! Though my dream trip this year is to somehow get to Tbilisi in the Republic of Georgia. Talking to the importer, as I will obviously need someone to take care of me there.
I wish the BIAS documentary team had chosen Kickstarter instead of IndieGoGo for their project on how women are discriminated against, consciously and subconsciously, in technology. Here’s Mahzarin Banaji, Professor of Social Ethics at Harvard:* https://vimeo.com/214583018…
There needs to be female Jobs, Zuckerbergs, etc. And also female Fred Wilsons. https://uploads.disquscdn.c…

The frameworks of women also need to inform AI design. When we look at Generative Adversarial Networks, Reinforcement Learning, or even simple things like 5-star rating systems, too much of it is about taking some thesis and bias from the mindsets of a handful of men and generalizing it over to billions of people. That handful of guys doesn’t represent the language and meanings of billions.

That’s why it’s not surprising NLU hasn’t been solved — quite apart from probability & statistics and syntactic trees being inadequate tools, and inventor(s) needing to craft better tools for folks.
> probability & statistics … being inadequate tools

There you go again. Gee, I have a big collection of socket wrenches, and they don’t fix all possible problems with my cars, house, Weed Eater trimmer, or lawn mower, so they must be “inadequate tools”.
For context, there is currently a thread on LinkedIn with NLP practitioners, posted by the CTO of Klangoo, which opens with:

“Statistical/Data-Driven NLP? (and the likes of Frege, Russell, Quine, Carnap, etc. were wasting time?) I guess the functional word ‘were’, which statistically occurs in all contexts equally likely, is not important. So the countries that ‘were refused membership’ and the countries that ‘refused membership’ are the same. For fun, try other crucial functional words (quantifiers, negation, relative pronouns, …) that are very crucial in understanding the overall meaning but nevertheless will be completely ignored by an approach that essentially tries to find statistically significant patterns in data.”

I commented like so: “At Deep Learning Summit 2015, I asked Greg Corrado of Google Brain (they’re responsible for Sentence2Vec and Word2Vec): ‘Can your system do combinatorial and conditional probability?’

He tilted his head and said, ‘What do you mean? Can you give me an example of that?’

I replied, ‘Well, in Latin languages, there’s the subjunctive tense. Those are sort of examples of combinatorial and conditional probabilities — and yet… not quite. Sentence2Vec has examples like “She gave him a pen in the garden,” “He got a pen from her in the garden,” “In the garden, he got a pen.” Now, what happens with something like “He was happy to receive a pen from her in the garden as it started raining”? There’s an emotion-based expression that happens during continuous time as well as deterministic and closed-loop events.’

Greg replied, ‘No. We can’t do that type of combinatorial and conditional probability.’”

I also provided this link:* https://aeon.co/essays/it-s…
That meaning of “functional” is from some grammar or natural language theory and not at all from math. And, as usual in grammar, linguistics, and language theory, all the terminology is somewhat less solid than cotton candy.

For

> will be completely ignored by an approach that essentially tries to find statistically significant patterns in data.

so THAT’S some of what the heck you are talking/complaining about when claiming that probability/statistics are useless for NLP/U. Good grief.

Of COURSE that description of statistics, or statistics in that description, won’t be very useful for natural language processing or understanding. Of COURSE not. No one who knew much about prob/stat would ever think such a use of stats would work, or think that that description of stats was at all close to what stats really is and can be.

By analogy, you have been talking to NLP/U people — maybe they were English majors — who took a non-calculus high school or freshman college course in stats and concluded that stats is useless for NLP/U, like someone who took a set of socket wrenches, tried to repair their car, pinched a finger, and concluded that metal tools are useless for repairing cars. Sorry about their finger.

The quote you gave as a description of stats is just incompetent, sick-o, worse than useless, harmful, and nothing like what stats really is or can be.

We’ve been discussing the Poisson process: just where in that discussion did we have anything like that quote? Right, nowhere. Power spectral estimation of a second order stationary stochastic process? Again, nope.

Some of statistics: for the real numbers R and some positive integer n, given a random variable X in R^n and a function f: R^n --> R, f(X) is a statistic. Right, it’s darned general stuff. And something that general is always useless for NLP/U?
Where can we place some bets, short that stock?

Again, yet again, once again, over again, one more time, this time just for you, as for many prior times: for applications, hopefully powerful and valuable ones, pure mathematics, applied mathematics, probability, applied probability, stochastic processes, mathematical statistics, applied statistics, optimization, etc. can be regarded as tools. They are useful if and only if someone can find a way to make them useful.

Of COURSE math, etc. do not have ready-made solutions for NLP/U — to expect otherwise is absurd. Of COURSE math, etc. will likely be useful, maybe even crucial, somewhere in NLP/U — to expect otherwise would be absurd.

Thankfully for US national security, the US DoD didn’t draw any such uninformed, misinformed, sick-o, nonsense, brain-dead conclusions about math, physical science, or engineering.

How many more times? They are TOOLS, not ready-made solutions, and useful if and only if someone can find a way to make them useful — I just repeated that; tough to be more clear.

If a bunch of English majors, computer science majors, history majors, social scientists, self-taught computer programmers, etc. have yet to find good uses for math, then I’m not surprised, but that says nothing about the math and little new about such people.

Or, there is a flim-flam from the computer science community: applications of math usually need computing to do the data manipulations specified by the math. So, presto, bingo, the computer people conclude that they are also the mathematicians. Then they flounder around.

For an analogy, to have a nice building, you need a good architect. Then maybe you also need some concrete finishers. But the concrete finishers are not the architects or a replacement for the architects. And the computer people are not the math guys.

Quite broadly across the STEM fields, math is super important, and people without good applied math Ph.D.
degrees and a lot of experience in applications flounder terribly with applications of math. So, no surprise that NLP/U people will be floundering terribly with math. Sorry ’bout that.

Want a good high level overview of the potential of math, probability, stochastic processes, and statistics for applications? Okay: talk to, say, D. Bertsekas at MIT, M. Avellaneda at Courant, I. Karatzas at Columbia, E. Cinlar at Princeton, S. Shreve at CMU, K. Chung at Stanford, D. Brillinger at Berkeley, or R. Rockafellar at U. Washington. None of these people are in computer science or, really, narrowly in statistics, but they are all world-class applied mathematicians.

That remark you quoted about the nature and utility of statistics is from some ignorant, inebriated, wack-o nut job — I’m trying to be nice here — from some funny farm. Set that stuff aside. There’s lots of garbage out there, and you found some of the worst of it.

There is an old remark: a good education is expensive but much cheaper than the alternative. It appears that you went to some good schools; now that you are out of school, stay with the good stuff and flush the garbage.

You got misled. I’ve tried to give you some tutorials to get you on track before. You are able to get back on track. I suggest you do so.
Here’s the profile of the guy I quoted. He has a PhD in CS and worked at IBM previously:* https://www.crunchbase.com/…

That article on statistics and its flaws was written by David Colquhoun, Professor of Pharmacology at University College London and a Fellow of the Royal Society. He’s the author of Lectures on Biostatistics (1971) and blogs at DC’s Improbable Science.

They’re not English majors. They’re scientists.
> They’re not English majors. They’re scientists.That’s usually not enough to have much promise of being good at new applications of statistics. I believe you know that.
Ah, I just spent this Saturday in an AI class where an AI instructor (with a Stanford PhD) said he was only doing “hand-wavy maths” because “we don’t really need to know how the maths works” and that it’s already baked into tools like scikit-learn, Keras and TensorFlow (which are the latest applications of statistics for Deep Learning and data science), and all we need to know is how to use the tools. https://uploads.disquscdn.c…

@fredwilson:disqus — naturally, being conscientious, I posted examples of Gaussian and Bernoulli equations to supplement the instructor’s slide of code.

Knowing the maths means I know where the shortcuts in DL are made, and that’s why the machines can’t do NLU.* https://uploads.disquscdn.c…* https://uploads.disquscdn.c…
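For reference, the two equations the comment mentions posting are presumably the standard Gaussian density and Bernoulli mass function:

```latex
\text{Gaussian: } \quad f(x \mid \mu, \sigma^2)
  = \frac{1}{\sigma\sqrt{2\pi}}\,\exp\!\left(-\frac{(x-\mu)^2}{2\sigma^2}\right),
\qquad
\text{Bernoulli: } \quad P(X = k \mid p) = p^k (1-p)^{1-k}, \quad k \in \{0, 1\}.
```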
Looking at your first picture, the one with the bubbles (that is, follow the link to the full image and save it as a file; thankfully it’s the lossless file type PNG instead of JPG, so I can magnify the thing and actually read the text), I see a lot of:

regression
classification
clustering
dimensionality reduction

So, basically we are back to

Leo Breiman, Jerome H. Friedman, Richard A. Olshen, Charles J. Stone, Classification and Regression Trees, ISBN 0-534-98054-6, Wadsworth & Brooks/Cole, Pacific Grove, California, 1984.

and much of the rest of classic linear multivariate statistics (stacks of texts going back decades), e.g., discriminant analysis, analysis of variance, and log-linear models for categorical data analysis, seems to be missing.

Compare log-linear with fitting sigmoids? Whatever; a lot more that can be done is missing.

But I can believe that in practice, following that scikit diagram will often do good curve fitting. With enough good (e.g., a simple random sample) data, not just for fitting but also for testing the fit, one might conclude that one has an effective predictive model, at least for the context where the data was gathered. Sadly, in practice, that context may not be well specified, in which case predictions outside that context can be quite poor.

Partitioning the data into two sets, fitting to one set, and testing the fit on the other is very old advice but will have to be the main approach to quality control for scikit.

Does scikit provide a significant tool for real AI? Not really: much of the problem is the issue of context. Have much to do with NLP/U? I doubt it.

Beyond that, the scikit approach requires far too much data for real AI — e.g., kitty cats, puppy dogs, and toddlers all learn much better from much less data.

Again, can statistics be useful in (A) NLP/U, (B) real AI, or (C) just getting some results valuable in some real situations? Sure, just as soon as someone can use existing or original derivations in statistics to get such results. For case (C), there are many old examples.
For cases (A)-(B), those are challenging.
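The partition-and-test advice above (fit to one set, test the fit on the other) can be sketched without scikit at all; here is a minimal numpy sketch, with all data invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Invented toy data: y depends linearly on three variables, plus noise
X = rng.normal(size=(200, 3))
beta = np.array([1.0, -2.0, 0.5])
y = X @ beta + 0.1 * rng.normal(size=200)

# Partition the data into two sets: fit to one set, test the fit on the other
idx = rng.permutation(200)
train, test = idx[:100], idx[100:]

# Ordinary least squares fit on the training half only
beta_hat, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)

# Judge the fit only on the held-out half
rmse = np.sqrt(np.mean((y[test] - X[test] @ beta_hat) ** 2))
print(beta_hat, rmse)  # coefficients near [1, -2, 0.5]; RMSE near the 0.1 noise level
```

Of course, as the comment notes, a held-out test only validates the model in the context the data came from.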
(A)-(B) = forget about it with scikit-learn, Keras and Google TensorFlow.

(C) = any standard operational research type of problem, e.g. optimal number of widgets to produce, diet plans, which plane should land first for fuel efficiency, electricity grids.

So now do you see how the practitioners of AI in the heart of Silicon Valley don’t even have the right tools to start to do NLU?

You can look at TensorFlow’s maths library here:* https://www.tensorflow.org/…

They have the Monte Carlo too: https://www.tensorflow.org/…

If any of this can do NLU and “real AI”, the Pope isn’t Catholic.
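As a concrete instance of the (C) bucket, the classic diet plan is a few lines of linear programming. A minimal sketch with scipy’s `linprog`; the foods, costs, and nutrient numbers are invented purely for illustration:

```python
import numpy as np
from scipy.optimize import linprog

# Toy diet problem: minimize food cost subject to nutrient minimums.
# Two invented foods (bread, milk) with invented cost per unit:
cost = [2.0, 3.5]

# Rows: calories and protein supplied per unit of each food
nutrients = np.array([[90.0, 120.0],
                      [4.0, 8.0]])
minimums = np.array([300.0, 20.0])  # invented calorie and protein requirements

# linprog takes "A_ub x <= b_ub", so negate the ">=" nutrient constraints
res = linprog(c=cost, A_ub=-nutrients, b_ub=-minimums,
              bounds=[(0, None), (0, None)], method="highs")
print(res.x, res.fun)  # optimum: ~[0, 2.5] units at cost ~8.75
```

The same mold fits the other (C) examples (landing order, grid dispatch), just with bigger constraint matrices.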
Last night, actually less than 6 hours ago, watching a movie, The Music Man, gorgeous Shirley Jones, I was working out a proof of a result I saw in one of my grad school courses: in a Hilbert space, each non-empty, closed, convex set has a unique element of minimum norm. Right, use convexity and the parallelogram equality; just derive the parallelogram equality yourself in two lines or look it up in, say, https://en.wikipedia.org/wi…

Get Cauchy convergence and then, sure, with Hilbert space completeness, convergence in the norm of the Hilbert space, that is, the norm from the inner product. (Close eyes, see the picture, ask how come the points in the convex set with norm converging to the greatest lower bound of the norms have to get close to each other, that is, have essentially Cauchy convergence; and with the picture, sure, see how to exploit convexity and the parallelogram law — right, in a fully general Hilbert space can’t use finite dimensionality or a finite basis. Fun, easy stuff to do during a movie and on the way to sleep.)

And about my post mentioning scikit and

Leo Breiman, Jerome H. Friedman, Richard A. Olshen, Charles J. Stone, Classification and Regression Trees, ISBN 0-534-98054-6, Wadsworth & Brooks/Cole, Pacific Grove, California, 1984.

it dawned on me that the little scikit flow chart you included actually didn’t mention one of the main ideas in Breiman’s book: not just regression but regression trees.

It may be that the deep learning outlined in Fred’s post at

http://avc.com/2017/04/vide…

“Video Of The Week: Why Toddlers Are Smarter Than Computers”

is close to regression trees.

So, scikit left out a lot of classic multivariate statistics and also omitted Breiman’s regression trees.

So, right:

> So now do you see how the practitioners of AI in the heart of Silicon Valley don’t even have the right tools to start to do NLU?

Or, borrowing from a poem:

They dance ’round and ’round and suppose while the secret sits in the middle and knows.
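For reference, the minimum-norm argument sketched at the top of this comment, written out with the parallelogram law (standard material, stated here for convenience):

```latex
% Parallelogram law, from expanding inner products:
\|x+y\|^2 + \|x-y\|^2 = 2\|x\|^2 + 2\|y\|^2 .

% Let d = \inf\{\|x\| : x \in C\} for the closed convex set C, and take
% x_n \in C with \|x_n\| \to d.  Convexity gives (x_m + x_n)/2 \in C, so
% \|x_m + x_n\|^2 \ge 4d^2, and the parallelogram law yields
\|x_m - x_n\|^2 = 2\|x_m\|^2 + 2\|x_n\|^2 - \|x_m + x_n\|^2
               \le 2\|x_m\|^2 + 2\|x_n\|^2 - 4d^2 \longrightarrow 0 .

% So (x_n) is Cauchy; completeness gives x_n \to x, closedness gives
% x \in C, and continuity of the norm gives \|x\| = d.  Uniqueness: if
% \|x\| = \|y\| = d, the same inequality with x, y forces \|x - y\| = 0.
```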
Or, they have a research problem and are floundering around with it. Or, they are discovering some of the challenge of doing good research.

The education for research is the Ph.D. The criteria are, say, “an original contribution to knowledge worthy of publication,” where the criteria for publication are “new, correct, and significant.”

The record of STEM field Ph.D. programs is both clear and bloody: even with the help of courses and faculty advisors, only a tiny fraction of the population is able to do Ph.D. research in STEM fields.

For the research needed for NLP/U, ML, real AI, and the future of computing, a Ph.D. from a computer science department is likely not the right stuff; the students just don’t get the right material in the right courses, and one biggie reason for that is that nearly none of the faculty has such a background. (Nearly none of the faculty, including some chaired professors of computer science in some of the best computer science departments at some of the world’s best research universities trying to write mathematics, even knows how to write mathematics, which is maybe most frequently learned proving theorems in a college junior level course in abstract algebra.) Maybe the faculty knows about writing compilers, linkers, programming the locking in relational database managers, showing that heap sort runs in time O(n ln(n)), finding the fastest possible string search algorithm (maybe doing some version of dynamic programming), analyzing memory garbage collection algorithms, etc.

Their struggle is the one I explained by analogy: for putting up an office building, you need an architect and likely some concrete finishers.
Well, the concrete finishers work more directly with the construction of the building than the architects but, still, are not the architects, or the building structural engineers, or the mechanical, electrical, or HVAC engineers; and neither are the steel workers, brick layers, electricians, tile layers, dry wall hangers, painters, etc.

Well, the computer scientists are not the research applied mathematicians. A few people have crossed over from one of those fields to the other, e.g., Jack Schwartz, as in Nelson Dunford and Jacob T. Schwartz, Linear Operators, long at Courant, did get into some computer science (e.g., the set theory language SETL); and his wife was long in compilers and scientific subroutine libraries at IBM’s Watson lab, on the same floor I was.

Research applied math is apparently not an easy field; that is, only a small fraction of people who try for such a Ph.D. are successful at it. And it’s bloody: lots of people who were terrific students K-12 and in college suddenly have the first failure of their lives and get seriously damaged for life.

E.g., again, as buried in D. Knuth’s The TeXbook:

The traditional way is to put off all creative aspects until the last part of graduate school. For seventeen or more years, a student is taught examsmanship, then suddenly after passing enough exams in graduate school he’s told to do something original. Or the joke goes, all that is needed is to pick a significant problem, say that von Neumann looked at but didn’t solve, and solve it!

Sure, for NLP/U, ML, real AI, computer scientists doing fast, catch-up reading of undergraduate level multivariate statistics commonly taught to students in sociology (my wife), psychology and political science (my brother), and economics (econometrics), which is essentially what they are doing, are in a position to do original work in mathematical statistics.

Gee, in the diagram you showed for scikit, I saw SGD or some such.
So I had to look up that TLA (three letter acronym). Right: it unwinds as stochastic gradient descent. I can guess what the heck that is right away just from the name; it’s one of the first things every student of unconstrained optimization thinks of! Unless the way downhill is awfully easy to find, SGD stands to be slower than molasses in January. Gee, we’re not even talking dimensionality reduction and then BFGS (Broyden, Fletcher, Goldfarb, Shanno) quasi-Newton, conjugate gradients, etc.! Lesson: scikit is still talking work in some applied math at the undergrad level of some decades ago!

With all the AI hype, it finally dawned on me that, by using the hype technique, some of the work I did, a published paper and my Ph.D. dissertation, could be called ML and/or AI, really especially good cases of those two! How ’bout that! But then my work is much better than the AI/ML hype standards!

The published paper is wildly, radically new and different, e.g., has absolutely nothing to do with the methods in scikit or deep learning. Moreover, as I remarked in the paper, if you just throw away the math, then you can consider the work AI! But if you include the math, then you get exact control over the false alarm rate (before even collecting data, and with any amount of data, not necessarily big data), and such a guarantee is, and has to be, rarer than hen’s teeth in the AI/ML heuristics.

For my dissertation, given some initial data, then in real time, right along, it continues to learn and looks not just smart but brilliant, prescient, etc. So, right, it’s in stochastic optimal control, or best decision making over time under uncertainty. Yes, the AI people have tried to do planning, but their work was essentially just more heuristics. My work gave, guaranteed, the best possible decisions from any means of processing the data available when the decisions were made, and there’s a theorem in the work that shows this.
So, my work was not just planning but very much best possible planning.

It finally dawned on me last night that some of what a kitty cat, puppy dog, or toddler does is, or may be, something like this: over several learning situations all relevant to the same lesson (e.g., don’t fall down, get burned, or get punished), just looking at some variables that seem to be more prominent than the usual background values, keeping some, and throwing the rest away.

Right, one could pick some reasonable assumptions, state and prove some relevant, helpful theorems, and call it mathematical statistics for real AI; see how new and different mathematical statistics can be? Hint: a first cut guess is that, at least at first, the work is closer to just cross tabulation, and maybe analysis of variance, than to regression, scikit, or anything else in multivariate statistics.

That is, keep the prominent variables that occur in all or nearly all the cases and throw the rest away. Then the variables kept are the candidate causes. So, this is dirt simple, no arithmetic or regression analysis needed, in much of practice maybe fantastically effective, heuristic dimensionality reduction and, thus, causal identification.

Causality is a fantastically effective approach to data compression or dimensionality reduction and, in my guesses, e.g., in earlier posts here, crucial for real AI.

Simple. Dirt simple.

So, case 1: fall down on the kitchen floor in the morning, when Dad is still in the house, and Mom is in the bedroom, and a dozen more such variables.

Case 2: fall down on the bare floor of the play room when Dad is at work, just ate lunch, the kitty cat is hungry, and Mom is out back, and many more variables.

Case 3: don’t fall down on the carpet when also ….

Analysis: time of day, where Dad is, where Mom is, what the kitty cat is doing, etc. are irrelevant.
Instead, fall down on smooth floors but not on carpet.

Conclusion: the cause of falling down is a smooth floor.

Application: when trying to stand, prefer carpet and try to avoid smooth floors.

That’s how to learn from tiny data instead of big data and, moreover, how to identify a cause and make use of that cause in new cases of wide variety. “Look Ma, no scikit required!”

Remember, you first heard this idea here!

Right, in regression, we try to say that the variable with the largest regression coefficient is the most important. Well, just some simple examples can show that there can be cases of, call it, joint importance of more than one variable that really dominate the situation.

Then notice that if the independent variables are all mutually orthogonal, which is rare in practice except in the usual case from principal components, then, sure, we get a generalization of the Pythagorean theorem that does say, in some significant terms, that the size of a coefficient is the importance of the corresponding variable.

But expecting that this regression stuff is related to how a toddler learns that it’s easier to stand on carpet than on a smooth floor is silly talk, smoking funny stuff, and nonsense; no toddler or kitty cat would be fooled!

E.g., Newton had an apple fall out of a tree. He’d seen lots of things falling and had to conclude that apples and trees were just tiny special cases, and otherwise irrelevant, of a general cause of things falling: the law of gravity. Chalk up one of the crown jewels of all of civilization. “Look, Ma, real, very general causality with no big data or scikit required!”

Actually, there are whole books written on how to find causality.

Or, nearly everyone who got lung cancer had smoked a lot for years. Moreover, that’s about all they had in common. Presto, bingo: the leading candidate cause of their lung cancer was smoking. Any questions?
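The smooth-floor analysis described above is literally a set intersection; here is a minimal sketch in Python, with the case data invented to match the example:

```python
# Toddler-style candidate-cause selection: keep the variables present in
# every "fall down" case and absent from the "no fall" cases; throw the rest away.
fall_cases = [
    {"smooth_floor", "morning", "dad_home", "mom_in_bedroom"},   # case 1
    {"smooth_floor", "afternoon", "dad_at_work", "cat_hungry"},  # case 2
]
no_fall_cases = [
    {"carpet", "afternoon", "dad_at_work"},                      # case 3
]

common = set.intersection(*fall_cases)            # prominent in all falls
candidates = common - set.union(*no_fall_cases)   # and never present otherwise
print(candidates)  # {'smooth_floor'}
```

No arithmetic, no regression, and only three tiny cases of data.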
Toddlers, kitty cats, and puppy dogs can all do that analysis.

IMHO, doing well with candidates for causality will be one of the keys to real AI. What I explained here might be a good start.

For your

> (C) = any standard operational research type of problem, e.g. optimal number of widgets to produce, diet plans, which plane should land first for fuel efficiency, electricity grids.

with (C) I was talking statistics, but here you are talking essentially the optimization parts of operations research, and while there are connections between the two, really they are significantly different.

Yes, all this, borrowing from a letter I got once from P. Halmos, “warms my heart”: the chances that any entrepreneur in Silicon Valley will be able to duplicate or equal my original, powerful, valuable applied math for my startup are slim to none!

Here from me as an example is some of how to do research in applied math, or at least what doing such research can look like.

E.g., it’s not all just nose to the grindstone, ear to the ground, shoulder to the wheel, dotting i’s, crossing t’s, looking for praise for pretty handwriting from a third grade teacher type work! Indeed, the work is new, which means that at first you are the only person in the world who knows and understands the work, which means that no one else either knows or understands the work!

E.g., I did the intuitive part of my Ph.D. dissertation research with my eyes closed on an airplane flight from Boston to Memphis before I went for my Ph.D. Often the easiest way to draw and have good pictures is with eyes closed.

E.g., in high school algebra, it is taught implicitly that the work is to keep doing permitted algebraic manipulations until you get the desired result; better is to close your eyes, think of what is going on in larger terms, get an outline of how the algebra should go, and THEN start writing.

I did the clean, original applied math in six weeks, independently, in my first summer in my Ph.D. program.
I wrote the final copy of the work, and fast, illustrative software, without any faculty direction. I did give a seminar on my work, and some of my faculty advisors had their other Ph.D. students make further applications of my work in their dissertations. My work was not trivial: it was approved for the university by a committee with a majority from outside my department, with a Chair from outside my department, the Chair an expert in some of my work, a Member of the US National Academy of Engineering, and a student of A. Tucker at Princeton.

As a graduate student, in a course I saw a problem not solved in the course and asked for a reading course to try at least to address the problem, say, with an expository paper, or solve the problem if I could. In two weeks I had a really nice solution. Part of the solution was to show that for the set of real numbers R, any positive integer n, and any closed set C in R^n (with the usual topology), there exists a function f: R^n --> R so that f(x) = 0 for x in C, f(x) > 0 otherwise, and f infinitely differentiable.

E.g., the level set of an infinitely differentiable function can be any closed set, e.g., a sample path of Brownian motion, a Cantor set of positive measure, or the Mandelbrot set, that is, really bizarre closed sets. Curious that such a smooth function can have such a bizarre level set. So, using that result, I solved my problem, having to do with the Kuhn-Tucker conditions in optimization. Then when I went to publish, I discovered that I’d also solved a problem stated but not solved in the famous paper in mathematical economics by Arrow, Hurwicz, and Uzawa. Poor Uzawa; IIRC he has yet to win his Nobel prize.

So, that’s some of how I did original research in applied math. Being able to do such research made my Ph.D. research fast, fun, and easy.

I saw many others struggle terribly trying to do such research. I’m able to do such research.
Just why is not so easy to say, but the above may be helpful to others.

Do I want to be a college professor? Nope. Never have, don’t now, never will. Instead I want to apply the applied math to business, the money making kind.

I ran into some big, very well hidden chuckholes on the road of life, no fault of mine. So, I got delayed. But I can still see real problems, derive applied math for solutions, and write the corresponding software, and in this I have nearly no competition.

No doubt some of the computer science Ph.D. students are plenty bright. But next to no one with a Ph.D. in computer science will have even the prerequisites to do such applied math research, and there’s no chance they will reinvent the prerequisites (so far, no one has been that smart).

As I have outlined, I don’t believe that doing well with NLP/U is only a problem in applied math; instead, more is needed that initially may be just heuristic but later be good applied math. But some good, appropriate applied math may be a useful tool now.

If you want to use some applied math or statistics to help make progress on NLP/U, then maybe you should do some appropriate research to find the tools you need.

The usual minimal background for much of applied math research is: linear algebra in the elementary version, the intermediate version (e.g., Hoffman and Kunze; for the most beautiful version, P. Halmos), and the advanced version (e.g., R. Horn and/or R. Bellman and others) with lots of applications and the associated numerical analysis; mathematical analysis, e.g., W. Rudin, Principles of Mathematical Analysis, and the first (real) half of W. Rudin, Real and Complex Analysis, along with H. Royden, Real Analysis, although there are some good, more recent texts; and what is sometimes called graduate probability from the relevant texts of Loeve, Neveu, Breiman, Chung, Karatzas, Bertsekas, Shreve, E.
Wong, and some others, maybe more recent texts from Varadhan (NYU), Shiryaev (maybe in the US, back in Russia, working for a hedge fund, in a university), etc.

E.g., for a good book on statistics, see

R. S. Liptser and A. N. Shiryayev, Statistics of Random Processes I: General Theory, ISBN 0-387-90226-0, Springer-Verlag, New York, 1977.

Nice book. The parts I read I greatly respected; at the time I didn’t get to read all of it. For this, you want a good background in topology, functional analysis, etc. This is some statistics not much like what is in, say, scikit!

While reading graduate probability, pay good attention to the Radon-Nikodym theorem and martingale theory.

There is now in paperback the gorgeous

Robert M. Blumenthal and Ronald K. Getoor, Markov Processes and Potential Theory, Dover Publications, ISBN-13: 978-0-486-46263-9, Mineola, New York, 1969.

if you will, the mathematical foundations of exotic options, but much more. The parts I enjoyed were the tools of general usefulness in work in stochastic processes, e.g., getting good with the strong Markov property, currents of sigma algebras, etc. The book is just awash in results not easy to prove.

This book, as old as it is, illustrates a huge point: what’s long been on the shelves of the research libraries makes essentially everything else, especially everything computer science people touch on, look like just trivial, toy, baby talk. The material is there, on the shelves of the libraries; maybe you can find some uses for some of it. Maybe you can add to it and make use of what you add.

Get some material in stochastic processes.

Get some material in optimization.
So, cover linear programming well — there’s much more there than see in the simple treatments; in the end, linear programming is one heck of a powerful tool in both pure/applied math research and applications; some of the results are astounding with the easiest derivations using the theorems of linear programming, e.g., Lemke’s proof of Nash’s game theory result. E.g., linear programming is usually the easy way to prove nearly all the many theorems of the alternative important in lots of separation arguments.Then do some work in non-linear optimization — linear integer programming, min cost network flows, multi-objective programming (notice that there isn’t much there), quadratic programming, dynamic programming, convex programming, non-linear duality theory, Lagrangian techniques, unconstrained programming, the Kuhn-Tucker conditions, etc. Then for the usual junior level undergraduate mathematical statistics, mostly just derive that for yourself. For more, pay attention to the Halmos-Savage work on sufficient statistics from the Radon-Nikodym theorem and then see, say,Robert J. Serfling, Approximation Theorems of Mathematical Statistics ISBN 0-471-02403-1.Much more can be relevant or even crucial — tough to know. Your choice where to allocate your limited resources. Then do the research and make progress on NLP/U.
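For anyone who hasn't seen a theorem of the alternative in action, here is a toy numeric sketch of one of the best known, Farkas' lemma, in plain Python; the particular system and certificate below are made-up illustration, not from the comment above. For A x = b with x >= 0, either a feasible x exists, or there is a certificate y with A^T y >= 0 and b^T y < 0, never both:

```python
# One equation: x1 + x2 = -1 with x1, x2 >= 0 -- clearly infeasible,
# since a sum of nonnegative numbers cannot be negative.
A = [[1.0, 1.0]]
b = [-1.0]

# Farkas certificate of infeasibility: y = (1,)
y = [1.0]

# A^T y: dot each column of A with y
At_y = [sum(A[i][j] * y[i] for i in range(len(y))) for j in range(2)]
# b^T y
b_y = sum(b[i] * y[i] for i in range(len(y)))

# Certificate conditions: A^T y >= 0 and b^T y < 0,
# which together rule out any x >= 0 with A x = b.
assert all(v >= 0.0 for v in At_y) and b_y < 0.0
print("certificate y =", y, "A^T y =", At_y, "b^T y =", b_y)
```

The point of the separation argument is exactly this shape: exhibiting y is a short, checkable proof that the primal system has no solution, and linear programming duality is what guarantees such a y always exists when it is infeasible.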
From the depths of my ignorance on the subject, I am still dreaming of finely and carefully crowdsource-tuned syntactic trees for specific purposes. Maybe this is a coder's distortion.
There are many times I've wished I were 6'6" with huge amounts of alcohol dehydrogenase and good water distribution so I could "drink like a fish and not get drunk," as the saying goes.
A quality of these wines (and of more natural wines in general) is that they have lower alcohol by volume. 10.5% is not uncommon for the whites here. You might enjoy this old post on the impact of global warming on wine and alcohol content. http://arnoldwaldstein.com/…
Ok, make that 8'8", lol. I go a nice shade of sun-dried tomato on 4.5%. By 10.5%, I'm the color of beetroot or damson.
(Alpine) Quality, not (mountain) quantity is what matters…
Ah, Wink, I'm all for quality over quantity. With chocolate, I need cocoa solids between 60% and 70% (and preferably Hachez). With cheese, I'm a happy bunny with St. Agur. It's just that even 4.5% alcohol can make me turn the color of a sun-dried tomato. And I LOVE grapes of all types! https://uploads.disquscdn.c…
Due diligence requires a complete onsite investigation, and plenty of tasting too!!
There’s a reward for that… a personalised trip next year, after the book comes out!
Just left France and we're now in Italy (Bologna). Need to check out that region of France next trip. The only thing we generally buy when traveling is wine. We're very strategic: usually we can pack eight bottles as mementos in our luggage, wrapped in dirty underwear to insulate them from bumpy travel and to ensure that nobody steals them. No one would dare unwrap those bottles 🙂
And the Jura as well.
Went to a Jura dinner in 1998 at Bistro du Sommelier and had never heard of the wines. They were pretty good, and I have a bottle of Vin Jaune in the cellar.
I am a passionate fan of Poulsard and Trousseau. The chard from Domaine de la Tournelle is one of the most pure expressions of the grape I've ever tasted. And that Domaine is really wonderful.
An honour to be on a tech site or a VC site or whatever this is… I am a mere wine writer, but I prefer the control and rewards of self-publishing to the conventional route and crowd-funding is one thing that makes that possible. Thanks, Fred and Arnold.
Hi Wink. You ask a great question: "whatever this is?" It's a community of curious people who like tech and a lot more. I get to start the discussions every morning, which is so much fun.
Wink, are you familiar with Amazon Alexa? I just checked Alexa Skills and there are no apps that provide folks with information on Savoie wines or the wines of the French Alps. https://uploads.disquscdn.c… Now, it so happens I'm coding 3 skills for Alexa this week (to win an Amazon Echo), and I did my 'Da Vinci quotes' app yesterday. If you'd like, I'll code you a "Wink Lorch's Wines of the French Alps" app. All you'd need to do is provide me with a list of 25-50 facts about wines (e.g., how to tell a great one from the rest) and producers in that region. It would be a way to market your book to another audience (the ones seeking audio sound bites).
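For what it's worth, the skeleton of such a skill is tiny. Here's a sketch in plain Python (no Alexa SDK), with a few facts pulled from this thread as hypothetical stand-ins for the list Wink would supply; the response dictionary follows the standard Alexa JSON response shape:

```python
import random

# Hypothetical stand-ins for the 25-50 facts Wink would supply,
# taken loosely from comments in this thread.
WINE_FACTS = [
    "Savoie is famous for its Mondeuse reds.",
    "Gringet is a rare Savoie white, planted on fewer than 10 hectares.",
    "Savoie whites often come in around 10.5 percent alcohol.",
]

def handler(event, context=None):
    """Lambda-style entry point: return one random fact as an Alexa response."""
    fact = random.choice(WINE_FACTS)
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": fact},
            "shouldEndSession": True,
        },
    }

# Local smoke test: print what Alexa would speak.
print(handler({})["response"]["outputSpeech"]["text"])
```

A real skill would also branch on the incoming intent in `event`, but the fact lookup above is the whole of the content side.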
Thanks so much for the offer, but I don’t think it’s the right time for me to do this – I need to get this book done first… And the next 6 months will be full on. I’m just waiting till the Kickstarter is done before I retreat into a cave… Also, for now, distribution of Savoie wines is very small too (and that’s anywhere outside of the region). Only the really trendy wine stores in big cities have some.
No worries. Best of luck with your Kickstarter! I think I'll code one on chocolate instead :*).
Or if you’re more a beer kinda guy (like me), here’s one I just backed yesterday as well -> https://www.kickstarter.com…
The web and community funding have made the world of the artisanal possible. One of the wonders of our times.
+infinity! BTW, the people behind this have also built what looks like a *really* interesting marketplace (brewmarketplace.com). I'm really looking forward to diving into it (once the campaign delivers my stuff, hopefully sometime in the fall) 🙂
I’ll check it out.
Yes. It is a wonder, and wonderful. I just started beekeeping; I always wanted to but never did. Even the Amish that built my hives have a site: http://foresthillbeesupply….
Did it a lifetime ago (can share a pic) and really enjoyed it. Was a certified Bee Inspector in the Okanagan Valley (hard to believe) and ran bees in the high alpine areas there. Bears were a huge problem. Was the local guy you drove down the road to find when you had a swarm that needed to be hived.
Awesome. Got my first nucs
Mediachain – have you exited?
Her cheese? Maybe! Her wine? Likely not. Why?

(1) Early in my career, with my wife, I investigated wines. Did a pretty good job. Found a lot of darned good wines, nearly all from France and a few more from Italy, some too expensive and some others quite reasonably priced. Those wines all had very long histories, e.g., Chambertin back to Napoleon, the Bordeaux 1855 survey, famous Montrachet, Pommard long in some literature, lots of good information in good books, etc., plus help from the appellation d'origine contrôlée rules, well known pairings with food (I only consume wine with food), etc. So, sorry, my wine investigations ended years ago with a terrific list, with a lot more good wine opportunities than I can use, and for such investigations I've moved on to other subjects!

(2) If she wants me to get interested in her wines, then she has one heck of a lot of competition, maybe for her wines but definitely also for information about her wines.

Maybe she has some new wines? In such cases, automatically, no thanks! Why? The wines on my list haven't changed in 100 years and likely won't change in another 100 years, which means that my investment of investigation time was well placed and remains well protected!

Also, in her video, she didn't even begin to touch on what I would want to hear first about her wines: (a) geographic location, (b) grape varieties, (c) red, white, or rose, (d) flat or sparkling, (e) dry or sweet, (f) to be aged (malolactic fermentation and all such things) or drunk young, (g) flat or acid, (h) appellation d'origine contrôlée status, etc.

Several of the likely crucial words she spoke I have likely never heard before and from her couldn't hear clearly; she should speak up, speak clearly, and put the spellings in subtitles in her video. Uh, the French tend to mumble! E.g., in the French-German split, the French got too many vowels and the Germans too many consonants!

Her video should include some maps of her area; e.g., start with a map of the Alps, zoom in to the French Alps, zoom in to her area, then zoom in to the fields with her vineyards or some such. Show these maps while she is talking about the wines. If she has the video production resources, then some place in her video, beginning or end, have her standing, maybe walking slowly, in one of her vineyards, say, late enough in the season to show some bunches of grapes, with some of the scenery of the French Alps, and for that play some music, maybe something French?

I know; I know; there's lots of romance and excitement in investigating new wines. As in the ad for tuna fish, "Sorry, Charlie." I have a bottle or two of wine on my kitchen counter; they've been there for years; maybe now they are just decorations. They were free from, IIRC, some place in Ohio; they are white wines, which means that by now they likely taste like skunk stink glands or some such. Net, I'm no longer much interested in the romance and excitement of investigating new wines. Been there, wasted some time, money, and calories, found plenty of wines I like, done that, not doing it again!

But for cheese, sure, I'm open to more cheese possibilities! She did say a little about her cheese; to me she was better at selling her cheese than her wines!

Added in summary: Here in New York State, and also before in Ohio and DC, the local shops have been just awash in some huge variety of wines. So, there have been wines from France, Spain, Italy, Germany, Georgia (the one in Europe), Israel, Hungary, and more. And in the US there have been wines from California, Ohio, Michigan, the Finger Lakes of NYS, etc. Maybe there are even wines from Long Island. And, sure, there are wines from Maryland and Virginia. Then we can add in Australia.

So, no doubt there are thousands of wine labels on bottles with wines I've never tasted, and there's no way I could ever taste even a small fraction of all of those. Of the many wines I have tasted, there is a pattern: all the wines that tasted good have some indications from their grapes, type, country, etc. that suggest they might be good. E.g., wine from the Nebbiolo grape, as Barolo, from Italy has a chance! Wine from the Chardonnay grape, as flat, dry white wine from near Macon in France, has a really good chance.

Then ALL of the rest I didn't like. So, everything from the US I didn't like. The one from Israel tasted like plastic from some garden hose. The Chardonnays from California were not dry (low sugar), crisp (high acid), and clean (nice, delicate flavors) like the ones from Macon, but sweet (lots of sugar), flat (low acid), with just messy, overwhelming flavors like some fruit cocktail; not good in the traditional pairings, e.g., fish. Anyone want to drink sugar syrup from fruit cocktail with some nice French fish dish, e.g., Coquilles Saint-Jacques Parisienne, which I got relatively good at making?

Since the California wine growers have access to the U.C. Davis wine program, they have plenty of expertise. What they don't have is the same objectives I do, that is, the same taste in wine I do, or the same tastes as the Medoc, St. Emilion, Côte-d'Or, Macon, Rhone Valley, Chianti, etc.

So, try some new wine? The only chances are from France or Italy. And there I'm not interested in trying anything not already well described in some now old books I have on wine. German wines? I can believe that some of them are fantastically good, but they are usually white, sweet, and expensive, and I don't know how to pair them with food. From Germany, I'd rather get, say, Kirschwasser and use it in desserts, especially Schwarzwälder Kirschtorte, which I got pretty good at making!

Somewhere in the house I have a relatively good Corton. Maybe the first month my startup gets $1 million in revenue I'll have a nice dinner and open it! Ah, maybe I'll have a good Porterhouse steak with some decently good steak sauce I have learned how to make, some green beans, and some French bread!

Revenue? I'm getting my desk cleared of the random, exogenous interruptions! Today I solidly swatted one of them: put the details with a summary in a file folder, with notes on my computer, in a file drawer, and closed it. Earlier this week I filed a court case; the court date is 6/6/2017. Maybe I'll get that old interruption cleared up. And I have a few more.

One big interruption has for now solved itself, for no good reason: some months ago, my development computer was rebooting several times a day. The Ethernet port failed, but I got an adapter card that gave me a good Ethernet port again. The main memory failed the MEMTEST86 test, so I got some new memory from Kingston. But, for no particular reason, the computer now reboots less than once a week, and that's good enough for getting fully ready to go live. From the many reboots there is some corruption on the boot drive, but I can use NTBACKUP to restore a clean, old version, add the Ethernet adapter as a new device, use NTBACKUP to back up that copy, and continue. I have some really good notes and scripts for NTBACKUP usage! I'll do the restore, likely in June after the court date.
Enjoy your musings about "life," including CF opportunities. A fellow commonsewer? Thanks for your time and thoughts. Steven Merrill.
Oy, that’s way below normal temperatures.
Magical, and perfect for hot chocolate and marshmallows with 1/2 thimble of Glenfiddich whisky.