Do An Hour Of Code
It is CS Education Week, which happens at this time every year to celebrate and energize the growing K-12 CS Education movement.
The highlight of CS Education Week is the Hour Of Code, in which students, teachers, parents, and community members all do an hour of code during the school week.
I would like to encourage everyone in the technology business to find a school this week (maybe it is your child's school, maybe it is the school building in your neighborhood, or maybe it is a school where a friend teaches) and volunteer to lead an Hour Of Code.
It is really quite easy to do this.
Here is a guide on how to help a local school.
Here are some activities you can do for your hour of code.
Here are some ways to volunteer.
I’m on my way now to a school in the Bronx.
Comments (Archived):
Love to see this. I started coding a few years ago and it has greatly helped my logic and reasoning in other disciplines as well. Thanks for promoting this, Fred.
As a CS educator, I would recommend this tool: http://jupyter.org/ Might be a good Friday project donation, Fred! :) (I'm not connected with them in any way.) It's the gentlest Python environment I've seen and can do just about anything. Great for intro CS programming!
Per my reply to Fred, there are so many modern languages. What are students learning, and is there any "standard" curriculum around coding in K-12?
There has been A TON of work done around curriculum, standards, and frameworks in the last few years, much of it funded by the incredible Jan Cuny at NSF, who leads CS Ed for that org. Much of the research has been driven by groups like the CSforAll Consortium (which exists in part due to our host), CSTA, and Code.org, along with many of the universities at the forefront of K-12 CS Ed, such as UTexas, Berkeley, Duke, Brown, Trinity, UCLA, U Oregon, etc. There are also crosswalks to Common Core as well. See https://k12cs.org/ and https://www.csteachers.org/…
In my limited exposure/experience, most schools seem to teach one of the following:

1. Java
2. JavaScript
3. Python

Most self-taught/interested beginners seem to focus on:

1. JavaScript
2. Python
3. Ruby

(all have really great intro info and tutorials all over the web)

But it also depends on your approach to learning… if you are focused on adding to a platform (like iOS for iPhone) you'll be directed towards a specific language or two (like Objective-C or Swift).

Personally, I think it's easier for people to pick a project and an environment, define and focus the problem… then pick a language. A language, after all, is just the 'how'… the *real* learning is in the what, when, and of course why (if you know all that, you can always just look up the how, regardless of language).
Woohoo! Language wars!! :) The path we take our CS majors on (with my personal commentary) is:

1. Python: It's just the best thing to start with for just about anybody.
2. C++: The only real application language left.
3. C: The original and still the best systems language.
4. Haskell/Scala/etc.: AI du jour.
5. Java: Android; otherwise don't bother unless you're in D.C.

Things like HTML, CSS, PHP, and JavaScript are considered "popcorn" languages. You're just expected to pick them up as needed.
CSS is an art. I gave up on keeping up with it long ago because unless you are working in it every day, you can’t keep track of what works each time a browser changes. I now hire CSS people as needed for projects.
CSS problems? Simple solution, the one I used: the pillar of good engineering, the KISS principle. Keep it simple, stupid! So, in my project, for the Web pages and HTML, I used only trivially simple, standard, old CSS. And to keep down the number of separate files to send to users, I put the CSS code directly in the HTML code.

"If the wine is sour, then throw it out." If a tool is too complicated to document well, be reliable, or use, then don't use it. Of course now, we save sour wine, keep the vinegar, and use it in salad dressing! For high-end CSS, there's no such benefit!

Also, for more with KISS, I have minimal use of JavaScript: Microsoft's ASP.NET wrote a little of it for me, maybe for cursor positioning, but I have yet to write even a single line of it.

My Web site users don't need to have JavaScript enabled and can do just fine with a Web browser up to date as of about 10 years ago! "Look, Ma, no Web browser compatibility problems, from smartphones to desktops, over many years!"

Gee, big advantage in doing my own project instead of trying to get the bugs out of the spaghetti code of someone else's sick-o project!
Yes, my intern loved learning Python and it then gave her confidence in her ability to code and strength of spirit to continue learning other languages.
Are there signs "Don't tease the girls!"? If not, then I will! "Girls"? Right, politically INcorrect version of "women"!!

For the teasing, what is it about you girls and "confidence", "spirit", and "strength"? Or, to borrow from Yoda, "Always emotions, women." Or "Always difficult to understand, women!"

Uh, for a simple view, computers are very obedient and nearly always will do just what they are told to do! That's a big help in writing software!

Some men used to want such from girls, rarely got it, but those days are over!

/teasing
Not to get into a religious war with you, but almost *everyone* I know currently vastly prefers GoLang for systems stuff over C these days… and for #4, Lisp is another great one to throw in there (because it forces you to think differently than most other languages).

BTW, you could argue that Python is also a 'popcorn' language (and I wouldn't consider HTML or CSS actual languages, but that just might be the code snob in me).
Golang, Rust, etc.: wannabes! ;) Lisp is AI from yesterday :) It's so… pre-Haskell. Python is definitely a popcorn language (my term for very high level languages that typically don't require you to do memory management), but you need to call one out for novices and I think it's the best!
Python is simultaneously both a "popcorn" language (sort of) and not one, because it has been around for many years (20+) and has evolved a lot in that time (both the language and the libraries, standard and third-party). It was also being used for serious work from early on, despite its ease of use. And while it is somewhat easier to pick up the basics of Python than of many other languages, it is a deep language with some subtle features and interactions between them.

Also, since it's a recent topic on this blog due to their acquisition by Zeta, I'll mention that Disqus is (or was; not sure about the latest status) one of the biggest production sites in the world that uses/used Python (and Django, IIRC). YouTube and Google use Python a lot too, as do tons of other companies and organizations, big and small.

https://www.python.org/abou…

And a relevant plug: I conduct online Python training via the Net.

https://jugad2.blogspot.in/…

Anyone interested in Python training can contact me via my web site: vasudevram.github.io/contac…

Python is also huge for data analytics and data science these days.
Since I don't like nasty things done to vulnerable children, I have to respond. In particular, pushing C/C++ to children is nasty.

First, HTML and CSS are, of course, computer languages (e.g., they have some Backus-Naur syntax) but are not really computer programming languages because, no doubt, they are not Turing equivalent. HTML and CSS are just data markup languages.

E.g., I argued with a guy for months that there was no way to translate D. Knuth's TeX to HTML, because TeX is a real programming language: it can read and write files, allocate and free storage, has loops and if-then-else, has variable names with both character and numeric data types, etc., and HTML and CSS (last time I looked, although I have not looked at HTML5, etc.) have none of those.

C and C++ are awful: we should still use them where we have to and otherwise banish them to the dustbin of history.

Why? C was designed to be a minimal language on an old DEC computer with 8 KB of main memory. So, C syntax is sparse and "idiosyncratic", as in the fundamental, now classic

Brian W. Kernighan and Dennis M. Ritchie, The C Programming Language, Second Edition, ISBN 0-13-110362-8, Prentice-Hall, Englewood Cliffs, New Jersey, 1988.

In its original definition, C++ is just a pre-processor to C. Bell Labs liked pre-processors, e.g., Ratfor (rational Fortran), and C++ was another example. C++ tried to make software objects a religion and did, with enormous harm, for decades.

There are some fundamental problems with C/C++: (1) memory management, (2) exceptional condition handling, (3) nearly missing scope-of-names functionality (you need scope of names for some source code modularity and more), (4) crude and clumsy memory management, (5) absurdly clumsy use of pointers, and (6) especially (1)-(4) in the context of multi-threading. For more, the string and array handling, in a word, suck. I still have to look up and think through dereferencing again.
Bummer.

For exceptional condition handling, scope of names, memory management, data types, strings, arrays, data structures, and much more, PL/I, well before C, was much better. E.g., PL/I has pointers, and I used them heavily, but I never had to worry about dereferencing!

If you like object-oriented programming style, then that is easy enough to do in PL/I. Indeed, the key to object polymorphism is just entry variables, as in PL/I and even Fortran. E.g., in Fortran, it was long common to write a numerical integration routine and pass to the routine the name of the function to be integrated. So, the integration routine was essentially polymorphic. E.g., in .NET, the polymorphic functionality is via interfaces, which are essentially the same as the Fortran idea.

It has long been recognized that in moderate to large software projects, at least the C++ problems (1)-(4) become debilitating, e.g., like digging the Panama Canal with workers on their knees using teaspoons: ordinary teaspoons with C and tricky, hard-to-use teaspoons with C++.

C was a big step down from PL/I and Algol and was done just to fit in the old DEC 8 KB. Sad stuff.

We need to move on.

It's long been the case that we didn't have to use the C/C++ teaspoon tool even for operating systems.
E.g., Multics, 1969, a milestone in computing, with security rings, gate segments (still in the Intel architecture), hierarchical file systems, and authentication, capabilities, and access control lists (all now pillars of computer security, from SQL to Web site certificates), and Primos, 1975, were written in PL/I.

For a long time IBM's operating system code was written in their PL/X version of PL/I, far ahead of C/C++.

I've had occasion to write some C code: for some small, focused, narrow projects, it's okay; like mixing up some sour cream and chives, use a teaspoon.

E.g., my current project on Windows uses the .NET facility platform invoke to call (as a small part of some of the numerical parts of some of the work) some now classic, rock solid, quite important, open source C code; okay, I'll confess, LINPACK. Sure: commonly the computing in a huge range of applied math boils down in part to some use of LINPACK! Right, I took the official, open source, Fortran version of LINPACK, used the open source Bell Labs program f2c (translate Fortran to C), compiled f2c on Windows, used that to translate LINPACK from Fortran to C, compiled the resulting LINPACK C code on Windows, and called the result with platform invoke.

E.g., once at IBM's Watson lab, I took the C-callable TCP/IP API and wrote C code callable from PL/I to let PL/I use TCP/IP.

E.g., once I wrote a really nice, cute, sweetheart matrix package for C and, with that, wrote a quite general linear system solver in C.

Once I wrote in C a relatively capable program to sort files. For large file sorting it's still what I use.

So, I've used some C.

For C++, I had to conclude that not even

Bjarne Stroustrup, The C++ Programming Language, Second Edition, ISBN 0-201-53992-6, Addison-Wesley, Reading, Massachusetts, 1991.

clearly understood it, and I didn't want to paw through the output of the C++ pre-processor, look at the generated C code, and, thus, make more clear just what the definition of C++ really was.
Since Stroustrup was not able to do a good job describing his language, I didn't want to bother doing it for him.

Besides, being just a pre-processor to C, fundamentally C++ is stuck-o with not being able to do anything a C programmer couldn't do. Since fundamentally C is too limiting, so is C++.

I just looked at my collection of references on overviews of C++ and its version of objects: the references are now 5-15 years old, voluminous, and devastating. Net, C/C++ remain important for various reasons for some special purposes, but for significant projects we need better tools, and now for nearly all significant projects we have much better tools.

Computing has moved on; not surprising, since C was a big, cheap-o step backwards from PL/I and Algol. Gee, PL/I compiled and ran fine on a 64 KB non-virtual-memory small 360 computer; now squeezing down to a tool written for 8 KB is absurd.

Okay, okay, okay, okay, you can use C/C++ for writing code for routers for software defined networks and for some cases of embedded code more generally. But, heck, apparently that code will soon be in application-specific integrated circuits (ASICs) via field programmable gate arrays (FPGAs) or some such anyway.

If NASA, DARPA, the Department of Energy, Boeing, etc. want better languages for embedded code, let them continue their work, long in progress now.

To heck with teaspoons as programming tools.

For better or worse, a big part of computing is Windows, and on Windows a big part of software development stands on the .NET Framework and the common language runtime (CLR), and the main path to .NET and the CLR is Microsoft's C#.

Well, C# borrows some of the deliberately idiosyncratic syntax of C, syntax I find difficult to teach, learn, read, write, and document and, thus, clumsy and error prone. Right, e.g., I don't like the comment syntax borrowed from the source code stream syntax (to get around the 80-character limit of old punched cards, literally) of PL/I.
Yes, I have some nice editor macros that help a lot with that syntax, but I still don't like it.

But we don't have to put up with the masochistic C syntax. Instead we can use the essentially equivalent but much sweeter and, really, classic flavor of syntactic sugar of the .NET version of Visual Basic (VB.NET). That's what my project is using, now with 100,000 lines of typing of VB.NET.

Equivalent? IIRC there's a translator to and/or from C# and VB.NET.

So, the difference is mostly just syntactic sugar. Sure, IIRC C# has lambda expressions; if I need those, I'll write some C# code and call it from VB.NET. It's all the same CLR, guys! In particular, it was easy enough for Microsoft to document the .NET Framework all in the same Web pages for all the Microsoft CLR languages. Gee, they even have a compiled Python, IronPython!

The Microsoft Windows CLR and .NET Framework are milestones in computing and, no doubt, were not easy to do. E.g., IBM struggled for years on just the same important, even crucial, broad goals and achieved at most only a tiny fraction of what Microsoft did with the CLR and .NET.

Net, if you want to introduce young people to programming, go really slow on C, much slower on C++, and get on with the .NET Framework, the CLR languages, and C# and/or VB.NET.

And we don't have to push C/C++ to children and, thus, be nasty. That's not as nasty as pushing love of borderless globalism, rushing to flood countries built on the glories of Western Civilization with people from brutal, pre-Enlightenment, pre-Medieval, brain-dead cultures, or fears of human-caused climate change, but it's still nasty.
I learned C/C++ as a child. Like skiing or riding a bike or any skill, that's the best time to do it. Give 'em a little credit; they're at least as smart as we were at that age.
Sure, they CAN learn it, but SHOULD they learn it? Let's move on.

Learning C/C++ is not the point. I was responding to your

2. C++: The only real application language left.
3. C: The original and still the best systems language

As I argued, on your 2., C++ sucks. E.g., for decades we all suffered from the "memory leaks" from the bad memory management. On Windows, the CLR languages, e.g., C# and VB.NET, are decades ahead.

On your 3., that was false already in Multics of 1969.

My point is that C/C++ for significant software development are like digging with teaspoons: possible, yes; good tools, no.
C++ doesn't suck, you just don't like it. :) It has two things that are absolutely necessary for developing any REAL application (i.e., one that actually requires performance): 1) the object model has every possible feature, and 2) it compiles to a binary. These two things also make it hard, which is why you likely think it sucks :)
> C++ doesn't suck, you just don't like it. :) It has two things that are absolutely necessary for developing any REAL application (i.e. one that actually requires performance). 1) The object model has every possible feature and 2) it compiles to a binary. These two things also make it hard, which is why you likely think it sucks :)

No:

(1) It's about C/C++, not me.

You seem to be another one who believes that their first programming language is the best for everything and should also be the first programming language of everyone else.

This is not about me, but I can show from my career that your primary emphasis on C/C++ is very strongly misplaced: I had learned well and heavily used at least four programming languages and two cases of machine language and assembler before I learned C.

I have had a good use for C a few times and used it then. There's some C code in one place in the code for my startup. I've never needed C++ but will use it when and if I need to. Same for Python, Java, JavaScript, R, Mathematica, etc. R, SPSS, and SAS are mostly beneath me, not up to my work in applied statistics; I do much better just writing my own code. Excel? It has some good utility I don't care much about. For me, for doing computations, Excel sucks: if I want a graph, then I get the data ready outside of Excel and then with Excel import the data and do the graph.

(2) My business interests are for my information technology startup based on some of my applied math. I do my own software development using the best tools I can find for the work. I drive my own car, but I'm not a chauffeur, and similarly for cooking my own food, mowing my own grass, and writing my own software.

(3) Why C/C++ suck is fully clear and has been for decades for anyone with much experience in computing.

I explained well.

For more, there are big, huge, gigantic reasons for Microsoft's common language runtime (CLR), associated programming languages, and .NET Framework.
Compared with these, C/C++ are wildly out of date, simplistic, and inferior.

(4) "Performance": far too much of the design of C/C++ forbids them ever being very good for performance on computers much like past or current ones. E.g., PL/I would beat them just as it beat, say, Fortran 66.

(5) "Binary": your compile to "binary" is not quite correct. We compile to an object module with a symbol table for external references. The linkage editor reads that, resolves the external references, and writes a relocatable EXE file. A loader reads the EXE, uses the relocation dictionary, and starts execution.

In some cases, it's possible to have the EXE ready for running without a loader or operating system, say, "on the bare metal". The famous x86 program MEMTEST86 no doubt does this.

I addressed this issue of "embedded" code, etc.

That some language processing might involve some intermediate language, a large runtime library, high-end memory management, exceptional condition handling, threading, lots of APIs, just-in-time compiling, late binding, reflection, etc. also brings lots of crucial functionality.

(6) For your

> It has two things that are absolutely necessary for developing any REAL application (i.e. one that actually requires performance). 1) The object model has every possible feature and 2) it compiles to a binary.

Nonsense.

For "performance," I addressed that in my (4) above.

In more detail, right away it's fully clear that C, and especially much use of objects in C++, are poor for performance.

Just the first considerations of programming language design, compilation, and program execution for common operations, data types, data structures, and algorithms show this situation in overwhelmingly strong terms.
People knew that in very, very clear terms back to PL/I and know it today with the Microsoft CLR languages and much more.

For "object model," that's from somewhat useful occasionally down to a big waste and a silly quasi-religion.

But, again, there are some uses: e.g., I used objects and polymorphism in my post to "For Hour of Code for K-12" in

http://avc.com/2017/12/do-a…

And my current startup makes use of object instance de-serialization and conversion to/from byte arrays for sending/receiving, much as in remote procedure call via TCP/IP.

For decades computing has been awash in "real applications" that had nothing to do with C/C++ or object-oriented software.

Quite broadly, running on the bare metal is seen as just crippling. E.g., instead we have APIs to micro-services with threads in processes in code from managed languages with just-in-time compiling and linking, in containers, in processes, on operating systems, in hardware-assisted virtual machines, on automatically managed, dynamically load-balanced, fault-tolerant clusters of hardware. So, what's this bare metal stuff?

You're talking solid, wooden-wheeled oxcarts with bear grease on the axles. The main good part was, even back in those days, the young women could be real attention getters!

There is a problem here: the problem is your excessive respect for C/C++ and your ignoring the huge steps in software development tools in the decades since the beginnings of C/C++.
I know probably two dozen computer languages well enough to be productive with them. This is due to the facts that I've been designing and building computer systems for all sorts of applications, from bare metal to your IT-level environment, for nearly 40 years, not to mention my Ph.D. in Computer Science from a top 10 institution. I'll keep my own counsel about languages and choose the best one for whatever job I'm doing. Since it's obvious you're just argumentative, conversation over.
(1) On "performance" and "real applications," you made some outrageous remarks about C/C++, and I calmly responded.

(2) Your response was about me, that I was inadequate, that I did not understand C/C++ because they were difficult. I've done some difficult things; computing has never been even close to any of them! Practical computing, including C/C++, remains a dirt-simple subject.

(3) I responded about your claims about C/C++, and you are now hostile to me personally. Ah, if you can't attack the content, then attack the person!

With your background in computing, you should be able to understand that C/C++ have serious problems for both "performance" and your "real applications".

Trying to respond to you rationally and tutorially, I wrote you 3945 words in 24,190 bytes. Since that was too long, I just cut it down and omitted what I thought had to be obvious.

About C, I'll interject: in C you can write

i = ++i+++++j++

Gee, that should be illegal.

But apparently what I omitted is not obvious. So, I'll make it obvious:

C++ is slow because (1) it makes too much use of pointers and (2) it does too much with subroutines (or call them functions).

For pointers (1): at execution time, the pointers form chains of indirection; to get to the problem data it is necessary to follow those pointer chains, and that work is slow and, thus, helps keep C/C++ from being competitive in performance with, say, Fortran 66 and PL/I F-level.

The main reason for the overuse of pointers is that the language and its compilers offer too little functionality to the programmers.

(2) For the subroutines: they are essentially always 'external', that is, resolved by the linkage editor.
Then calling and returning from an external routine has overhead.

The reason C/C++ have performance problems with subroutines is mostly that the C language offers so little functionality that there is way too much use of external subroutines for far too many little operations that, really, should be handled directly, and, thus, with much higher performance, by the compiler.

For some of the overhead:

(A) It's common, nearly standard, for a subroutine at the beginning of its execution to save the values of lots of processor registers and, just before returning, restore them. That's time-consuming overhead. If the work were compiled without an external subroutine, the compiler would be handling the registers much more efficiently.

(B) The argument passing, from the calling code into the subroutine and out of the subroutine back to the calling code, has overhead that takes execution time.

Of course, there are several popular approaches involving a 'stack' or pointers for the standard functionality of call by value or call by reference (sure, Algol has call by name, but that's special and rare otherwise), but in all cases the argument passing is overhead.

Compiled, in-line code, as is much more often the case for PL/I F-level and Fortran 66, avoids that overhead and is much faster.

For a subroutine that doesn't do much, e.g., a lot of the standard C subroutines and more that programmers are forced to write due to how little the C language offers, the external subroutine overhead can take several times as much processor time as the actual productive work the subroutine is doing. The overhead can be so bad, and commonly is, that even if the work of the subroutine were done in zero time, C/C++ would still be slower than PL/I F-level and Fortran 66 code that actually did the work.

(C) Likely the poor subroutine needs some storage, if only for saving the registers. So, it has to allocate some storage and, then, just before returning, free it.
Sure, you can use a global for the program 'push down stack', but this way there are monsters in debugging from the programmer not knowing how much stack space is needed and, then, encountering the common disaster of "stack overflow". That's standard in C/C++ and now even famous on the Internet, but unheard of in PL/I F-level and Fortran 66. Why? Because they make no such use of such a stack.

(D) A really severe special case of a crippling performance problem, especially bad where high performance is wanted, is the fact that C/C++ have nothing at all like the arrays of PL/I or Fortran 66.

So, when you pass an array to a subroutine, PL/I and Fortran do fine but C/C++ get all twisted out of shape. In Fortran, you get to pass the array and, say, M and N, and then in the subroutine have, say,

SUBROUTINE CALC( X, M, N, Y, B )

and then declare, say,

REAL X(M,N), Y(N,M), B(N)

That's terrific and, as short and simple as it is, is in performance well on the way to blowing the doors off anything that can be done in C/C++.

Why? Because in the subroutine, the C programmer needs to take the array bounds and run his own code for the standard array addressing, column major or row major as desired. For a one-dimensional array, that's easier, but for two or more dimensions, you need the extents of all but one of the subscripts in the addressing.

The big bummer is that a Fortran compiler can do this addressing (again, the big issue is when you have more than one subscript) with compiled code making good use of registers, and the poor writer of the C compiler has no such opportunity.

Here, then, in performance, in that subroutine the Fortran code and compiler will totally blow the doors off the C/C++ code and C compiler. Why?
Because the Fortran compiler has the M and N and can compile the array addressing arithmetic with good optimization and good use of registers, while the C programmer has to receive the M and N and put them into his own code for the array addressing arithmetic; then the poor C compiler writer has to treat that array addressing like any other C code and can't make nearly as good use of optimization and the registers.

In high performance computing, arrays are important: first cut, right at the top of the list of just what is important. E.g., the first step toward a supercomputer is vector instructions that do inner products of arrays. Why? Because applied math is awash in inner products.

Then, also of crucial importance, especially for "real applications," is that the PL/I and Fortran 66 compilers fully understand what the heck an array is and can do some run-time array bounds checking. The poor C compiler has no such opportunity. Bummer. No doubt the main bug in using arrays, and a huge fraction of all software bugs, is violating array, etc. bounds. Fortran and PL/I can provide the programmer a lot of help; C can't, and, as in K&R, C doesn't.

For arrays, there is a solid, historic lesson here: character strings are nearly always important, but, sure, Fortran 66 didn't have any. So, sure, it was standard for a Fortran programmer to use arrays to write a nice collection of subroutines for the usual functionality wanted for strings. He was proud of his work.

But his string code was all external subroutines, and each little operation on a string required an external subroutine call with its overhead. Bummer.

And, as the compiler compiled the calling code, all it saw was just the many calls to the string subroutines and, thus, had no opportunity to do any compiler optimization.
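The array-addressing point can be sketched in a few lines of C. This is a hypothetical illustration (the function name `get` is invented for this sketch): where Fortran 66 lets the compiler generate the subscript arithmetic from a declaration like REAL X(M,N), the C programmer must flatten the matrix and write the index calculation by hand.

```c
#include <assert.h>
#include <stddef.h>

/* Hand-written row-major addressing for an m-by-n matrix stored in a
   flat block. In Fortran 66, the compiler sees the extents in the
   declaration and can compile and optimize this arithmetic itself; in
   C, it is just ordinary user code. */
static double get(const double *x, size_t n, size_t i, size_t j) {
    return x[i * n + j];   /* row i, column j, n columns per row */
}

int main(void) {
    double x[2 * 3] = { 1, 2, 3,    /* 2 rows, 3 columns, row-major */
                        4, 5, 6 };
    assert(get(x, 3, 0, 0) == 1);
    assert(get(x, 3, 1, 2) == 6);
    return 0;
}
```

Note, too, that nothing in this sketch checks that i and j are in bounds; the compiler has no way to know the extents, which is the bounds-checking gap described above.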
Sure, at times people tried writing compilers that would compile across external subroutine boundaries, but that, sure, weakened the meaning and functionality of an external subroutine when those were important.

So, the IBM designers of PL/I, led by George Radin, a bright, hard-working guy, quickly saw that having good functionality for strings as a data type supported by the compiler would let PL/I be much faster, maybe some factors of 10, in string operations compared with the Fortran 66 collections of external string subroutines. Well, C/C++ are as bad at strings as those collections of string subroutines in Fortran, and PL/I F-level would beat the C/C++ string handling just the same or worse.

Then for arrays, as for all the storage aggregates, especially the PL/I version of structures, the astoundingly beautifully functional and efficient PL/I structures, PL/I has a table the execution logic calls a dope vector. That table describes the extents, etc. of the array or whatever aggregate. So, when an array is passed to a subroutine, the subroutine gets the dope vector. And when the compiler is compiling the subroutine, it knows just what data is in the dope vector and can do good compiler optimization. So, on performance on arrays, PL/I was competitive with Fortran 66, much faster on strings, and beat C/C++ on both arrays and strings.

There's more: the design of the PL/I exceptional condition handling is a wonder to behold. It integrates beautifully with both the static descendancy of scope of names and the dynamic descendancy of execution. It integrates beautifully with storage management and multi-tasking.
So, it's all far beyond what is in Fortran 66 or C/C++ or, since detailed compiler support is crucial, what Fortran 66 or C/C++ programmers can do for themselves.

And PL/I was in solid shape by 1969 and was used for Multics then and Primos a few years later.

To move on specifically to the objects in C++: for performance, Fortran 66 (again for arrays, in case a C++ programmer writes an array class) and especially PL/I totally blow the doors off C/C++.

Oh, I should mention: of course the C++ objects are awash in pointers to non-sequential storage, and any object execution has to chase pointers and suffer poor locality of reference.

But the structures of PL/I are tightly in sequential storage with, thus, excellent locality of reference, are nearly as useful as objects (they even have a version of inheritance with the attribute LIKE), and have beautifully fast addressing with essentially no pointer chasing at all. In performance, PL/I structures totally blow the doors off C++ objects.

To be more clear, with PL/I structures you can have arrays of structures of arrays of structures of arrays, etc. And all the addressing is JUST normal multi-dimensional array addressing. SUPER nice stuff! "Look, Ma, no pointers!"

Sure, maybe C/C++ can be faster than Java, JavaScript, PowerShell, Rexx, the original Python, Lisp, etc. But against PL/I F-level and Fortran 66, C is slower, and much use of objects in C++ is much slower.

I assumed that you knew all that stuff. Well, you do now.
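The dope vector idea above can be sketched in C, with the caveat that this is purely an illustration: C has no built-in descriptor like this, and the names here (dope_t, at) are invented for the sketch, not taken from PL/I or any real library. The point is that once a small descriptor travels with the array, the callee knows the extents and can bounds-check.

```c
#include <assert.h>
#include <stddef.h>

/* A hypothetical "dope vector": a descriptor carrying the storage base
   and the extents of a two-subscript aggregate, in the spirit of what
   a PL/I compiler maintains automatically. */
typedef struct {
    double *base;   /* start of the sequential storage */
    size_t  rows;   /* extent of the first subscript   */
    size_t  cols;   /* extent of the second subscript  */
} dope_t;

/* Bounds-checked element access: sets *ok to 0 and returns 0.0 on an
   out-of-range subscript instead of reading wild storage. */
static double at(const dope_t *d, size_t i, size_t j, int *ok) {
    if (i >= d->rows || j >= d->cols) { *ok = 0; return 0.0; }
    *ok = 1;
    return d->base[i * d->cols + j];
}

int main(void) {
    double storage[2 * 2] = { 1, 2,
                              3, 4 };
    dope_t d = { storage, 2, 2 };
    int ok;
    assert(at(&d, 1, 1, &ok) == 4 && ok);
    (void)at(&d, 2, 0, &ok);   /* out of bounds: caught, not a wild read */
    assert(!ok);
    return 0;
}
```

Because the descriptor is ordinary data the compiler can see, a compiler that owned this mechanism (as PL/I's did) could optimize the addressing and the checks, which hand-rolled C code like this cannot get for free.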
I have a college intern working for me right now and she started with Python. She says she loves how you can code something that works so quickly — it gave her a lot of confidence. Now she is learning some other languages.
how you can code something that works so quicklyIs she going to be writing production code for you? Or are you just saying she is fooling around with it?The reason I ask (I have done this pro-sumerly since the mid 80’s) is that writing the code is only one small part of creating something usable. It’s like knowing how to fly a plane and/or give a needle injection and not being an actual pilot, physician or nurse. [1] Just writing code doesn’t mean the code will be secure and there aren’t big gaping errors that will cause some large future problem. And as you know this is something that happens to the biggest and the brightest as it is. There is a ton of things to know just security wise and it changes all the time. And is a full time job almost just to keep on top of that.Honestly I get a kick out of hearing that people are learning to code on such an expedited basis. This reminds me of martial arts studios giving out black belts in a year or two vs. how long it used to take to make that achievement.[1] In other words how do you handle problem situations? What do you do in emergencies? Hard to replace actual experience over many years with a course no matter how bright someone is or who the teacher or teaching method is.
She is not writing code for me. I am training her and one other intern (who happens to be my son) the following:- Social media marketing management – Graphic design and Illustration (just my son)- Photoediting – Website planning and interface wireframing and design- Website content management on various CMSs (mainly Drupal and CMSMS but also SquareSpace and Duda since some of my smaller clients use these last two platforms).
My intern took Python in college.
In the seven pillars of wisdom in designing software, the first pillar is the KISS principle of engineering — keep it simple, stupid.The other six? I’m not sure!
See my list above…
With the disclaimer that I am a strategic advisor to them, I’d also recommend Codesters if Python is of interest. See http://www.codesters.com/hoc — Python based curriculum focused on being classroom ready for middle school.
I have no relationship with codesters but will chime in that I very much like what they’ve put together.
Fred — I watched Banking on Bitcoin last night. Great movie. I was surprised to see you in it. I was like “Is that Fred Wilson?”Can you talk about the implications of the Bitcoin Licensing Act and how it’s affected the ecosystem now? Thanks.
What “language” do they program in? I have followed these links and am still a bit lost (except for the “donate” part.) As a parent, you would be surprised how difficult it is to even determine what a modern school system is teaching – let alone identify gaps. I’ll keep plugging – but some examples would be really helpful.
There’s no single language, curriculum, process, toolset etc. either in NY or across the country. At the high school level, the gorilla in the room is the college board with APCS-A (Java) and APCS-Principals. APCS-P is something of a “course framework” where both the toolset and rigor vary tremendously.The tools range from drag and drop languages like Scratch to scripting type languages such as Python but you’ll also find Processing (Java), straight Java, Racket (Scheme) and more.
Exactly what I had feared. I had hoped that one of the first things a non-profit would do is create a standard for the “hello world” app.In my opinion, you cannot tell someone “go code for an hour” without some clear direction of exactly what to do. Especially in the school systems, which are woefully behind in technology adoption. If you make them figure out the language and the platform and the use cases – you will get very slow adoption.
A number of the “Hour of Code” lessons are very structured and directive, similar to what you describe. If anything they’ve been criticized for being too prescriptive.
That’s good to hear. In my opinion, the “movement” should standardize on a single on-boarding process. It sounds like you guys just developed your own. It doesn’t matter WHAT language is used, as long as it is widely accepted and teaches basic principles.
CONTRIBUTORS:Those over 50 remember the ms-dos commands. Looping, etc. We never ventured past ms-dos and entering the commands. Enjoyed reading Computer World. (1978)https://uploads.disquscdn.c…
Heck no: Just last week I had to write some code with redirection = ‘< N >> test11.out 2>&1’Right, straight out of old high end DOS stuff to automate running an old DOS command line program. Ran the program about 500,000 times so wanted it automated!Command lines remain great as an easy, general purpose API for automation via interpretive scripts, e.g., old Rexx. Actually, command lines remain, e.g., with the Windows icons and the command lines there. Mostly there remain just programs, EXE files, started with essentially just command lines.
I have always been bothered by use of ‘code’ vs. ‘programming’. ‘Programming’ is what I was raised with. [1] So I decided to look into this and found this article which is interesting on the topic and confirmed validated my feelings.https://www.huffingtonpost….For a long time, โprogrammingโ was the word most commonly associated with entering instructions into a machine to execute commands. There really was no debate. Programming was the formal act of writing code, and the term also encompassed the greater nuances of computer science.Believe what you read only if it confirms your point of view:In recent years (read: post-Hour of Code, circa 2013) the term โcodingโ has resurfaced as a much more playful and non-intimidating description of programming for beginners. I still refer to it as ‘programming’. And no coding was definitely in use prior to 2013 obviously.[1] To me it’s like ‘snowboarding’ vs. ‘skiing’. Two different sports but the one that my brain thinks is more legit is the one that I was raised with…. https://uploads.disquscdn.c…
Imagine if this happened in every elementary, middle and high school once a week, all year round? THAT would be impressive. It should be required.
The obstacle is not the teachers, school, and parents — they are, by and large, on board. It’s rather 1) lack of adequate networks and equipment, 2) lack of PD for teachers (a problem our host is working on), and 3) finding time in the day given conflated problem of myriad state standards AND that in most states CS is still not a mandate itself. Ultimately it’s a choice of how we expend public resources and if this is a priority or not.There is also SO, SO, SO much more the technology community could be doing in terms of financial resources and volunteer time. Our host is the model — the tech community really could step up and do so much more.
It should be requiredI just don’t agree with that level of enthusiasm for the benefits of this ‘once a week, all year round.’What subjects would you remove and what would happen to the teachers for those subjects as a result of less demand?That ‘hour per week’ has to come from somewhere. Right?And I don’t think this is something that close to everyone should be doing either.As an example I think in teaching we have to get away from this idea that ‘everyone should learn a foreign language because a small percentage of those who learn will end up using it in some valuable way’. There are to many things that can be learned and not everyone is going to benefit from the same thing one size fits all. And kids tend to gravitate toward what they like if exposed to it they don’t need to be force fed for something at least like this.I do agree that we have to get back to teaching more of the trades and de-stigmatize that for those who would not benefit from college.
Unfortunately I don’t know enough about coding to lead a class yet… but I’ll continue my JavaScript lessons on CodeAcademy for a hour today and one day I’m sure I’ll get there ๐
CodeBrooklyn, the campaign co-founded by Brooklyn Borough President Eric L. Adams and myself, whose mission is to “be an advocate for Brooklyn school communities โ teachers, students, parents, administrators, and their surrounding neighborhoods โ in their school specific roadmaps to introduce, expand, and fortify computer science education”, could use more volunteers to help with Hour of Code and similar CS Ed week activities in Brooklyn public schools. You can volunteer here: http://bit.ly/volunteer-cse… (or email me at rob-at-ttmadvisors-dot-com). Please also spread the word. Thanks
Some photos of one of the early events in Brooklyn this week: https://twitter.com/HG45k32…
For Hour of Code for K-12, okay, I’ll make a contribution here. If I were to give a lecture to the students, it would have at least this material. As a student, I always wanted the content in well done text, not just a lecture, certainly not just foils or audio. So, the text here is about the best I’d give the students, anyway.So, there are four steps.Step 1:Checkout from the libraryDonald E. Knuth, The Art of Computer Programming, Volume 3, Sorting and Searching.and read, if only quickly, about (1) the heap data structure (just a clever use of an array to make a cute binary tree), (2) heap sort, (3) the Gleason bound that shows that with meager assumptions heap sort is the fastest possible sort routine based on comparing pairs of keys.Step 2:See the code (in Microsoft’s Visual Basic .NET but with classic syntax easy enough to read) below for using the heap data structure for a priority queue. This code is real: The real problem is, look at 10 million or so numbers and end up with the 20 or so largest, and do that efficiently in both processor time and main memory space.’ Function obj_heap_insert” Object heap insert to use the heap’ algorithm to maintain a ‘priority’ queue’.” So, suppose for positive integer n we’ have x(i) for i = 1, 2, …, n and’ for positive integer m <= n we want’ the m largest of the x(i). So, we’ allocate an array y(j), j = 1, 2,’ …, m.” Suppose we regard y as an ascending’ heap, e.g., so that y(1) <= y(j), j =’ 2, …, k <= m where the number of’ locations of y in use so far is’ integer k where 0 <= k <= m.” Then for i = 1, 2, …, n, we’ consider inserting x(i) into the’ heap.” If k < m, then we set k = k + 1 and’ set y(k) = x(i) and ‘sift’ to’ ‘promote’ the value in y(k) until we’ have a heap again.” If k = m, then we compare x(i) and’ y(1). If x(i) <= y(1), then we are’ done with x(i). 
Else, x(i) > y(1)’ and y(1) is not among the m largest’ and, to ‘remove’ value y(1) we set’ y(1) = x(i) and ‘sift’ the value in’ y(1) to create a heap again.” After i = n, y contains the m largest’ of x(i), i = 1, 2, …, n.” The advantage of this routine is’ speed: When k = m, the effort to’ insert x(i) is proportional to’ log(m). When k < m, the effort to’ insert x(i) may be proportional just’ to k. The worst case would be when’ the array x was in ascending order’ since then x(i) would have to be’ inserted for each i. Similarly, in’ the case the array x is in descending’ order, after m inserts, no more’ inserts will be done. For the order’ of array x ‘random’, once a’ relatively large fraction of the m’ largest are in the heap, additional’ inserts become relatively rare.” This routine is to be ‘polymorphic’,’ that is, to work for essentially any’ user defined class my_class1 where” Dim m, n As Int32′ Dim x( n ) As my_class1′ Dim y( m ) As my_class1” This routine can build an ascending’ heap, as discussed above, or a’ descending heap. The only difference’ is how the comparisons are made in’ the class used for the interface.” The interface IComparer is used. If’ the usual class Comparer is used for’ this interface, then this function’ builds an ascending heap, that is,’ where upon return” y( 1 ) <= y( j ),” for j = 2, 3, …, m. 
Function obj_heap_insert( _ x As Object, _ y() As Object, _ ByRef k As Int32, _ m As Int32, _ compare As IComparer) As Int32 Dim routine_name As String = “obj_heap_insert” Dim error_code As Int32 Dim return_code As Int32 = 0′ Dim result_code As Int32 Dim message_tag As Int32 Dim i_father As Int32 Dim i_child0 As Int32 Dim i_child1 As Int32 Dim i_child2 As Int32 Dim i_2 As Int32 = 2 Try error_code = 1001 message_tag = 1001 If console_msg_level >= console_routine_messages2 Then _ Console.WriteLine( routine_name & ” ” & message_tag & _ “: Started …” ) error_code = 1002 If m <= 0 Then return_code = 1001 Goto out End If If k < 0 Then return_code = 1002 Goto out End If If k > m Then return_code = 1003 Goto out End If If y.GetUpperBound(0) < m Then return_code = 1004 Goto out End If error_code = 1003 If k < m Then i_child0 = k + 1 k = i_child0 error_code = 1004 Do ‘ Sift value of x into correct position. If i_child0 < 2 Then Exit Do i_father = i_child0 i_2 error_code = 1005 If compare.Compare( x, y( i_father ) ) >= 0 Then Exit Do y( i_child0 ) = y( i_father ) i_child0 = i_father Loop ‘ Sift value of x into correct position. 
error_code = 1006 y( i_child0 ) = x Goto out End If error_code = 1007 If compare.Compare( x, y( 1 ) ) <= 0 Then Goto out i_father = 1 error_code = 1008 Do ‘ Delete value of y( 1 ) and sift x to correct position ‘ starting at location y( 1 ) If i_father > m i_2 Then Exit Do i_child1 = i_father + i_father If i_child1 < m Then i_child2 = i_child1 + 1 error_code = 1009 If compare.Compare( y( i_child1 ), y( i_child2 ) ) < 0 Then i_child0 = i_child1 Else i_child0 = i_child2 End If Else i_child0 = i_child1 End If error_code = 1010 If compare.Compare( x, y( i_child0 ) ) <= 0 Then Exit Do y( i_father ) = y( i_child0 ) i_father = i_child0 Loop ‘ Delete value of y( 1 ) and sift x to correct position ‘ starting at location y( 1 ) y( i_father ) = x Catch Console.WriteLine( ” ” ) message_tag = 1002 Console.WriteLine( routine_name & ” ” & message_tag & _ “: Error condition raised. ” & vbCrLf & _ “: error_code = ” & error_code & vbCrLf & _ “: err.number = ” & err.number & vbCrLf & _ “: err.source = ” & err.source & vbCrLf & _ “: err.description = ” & err.description ) message_tag = 1003 Console.WriteLine( routine_name & ” ” & message_tag & _ “: Raising error condition with error_code = ” & error_code ) err.raise(error_code, routine_name) End Try out: message_tag = 1004 If console_msg_level >= console_routine_messages2 Then _ Console.WriteLine( routine_name & ” ” & message_tag & _ “: Returning.” ) Return return_code End Function ‘ Function obj_heap_insertRight, the error handling is crude. A fix, a system-wide log server, is on the way!Could slightly modify this code to find an approach good under some common circumstances to finding the unique lines in a large file, maybe a Web site log file. This was an interview question I once got from one of the more technical people at A16Z venture firm.Of course, could also useRonald Fagin, Jurg Nievergelt, Nicholas Pippenger, H. 
Raymond Strong, “Extendible hashing — a fast access method for dynamic files”, ACM Transactions on Database Systems, ISSN 0362-5915, Volume 4, Issue 3, September 1979, Pages: 315 – 344.That is useful material; once we used it in an artificial intelligence project at IBM’s Watson lab.Of course, the conservative approach is just to sort the file, which, for each duplicated line, puts all the duplicates next to each other, and then read the sorted file. Of course, for a file with n lines, this sorting approach will take time, best case, average case, and worst case, proportional to n log(n).Under some common cases, the priority queue or hashing could be much faster.Step 3:This problem is also real.Suppose we are given an array A with its components sorted into ascending order. In Knuth as above, read about the binary search algorithm for finding components in the array. Or, the algorithm is essentially the same as looking up a word in a dictionary by cutting the dictionary in half, seeing what half the word is in, cutting that in half, etc., until find the page that should have the word if it is in the dictionary. Now you just learned the binary search algorithm and, thus, don’t have to look up the binary search algorithm or even bother doing homework with it!But, suppose we are also given array B and for each component of B want to look it up in array A. One way, then, is just to use binary search on array A once for each component of array B. But can we think of a way that, in total for all the lookups of the components of B, might, in some cases, on the sizes and contents of the arrays A and B, be faster?Cook up such an algorithm.As an exercise program and test it.I did that, and my project is using the code! So, this exercise is real!Then, you will see some possible variations in the algorithm. 
Do some algorithm applied math computational time complexity analysis as is often done in Knuth above to find in some appropriate and realistic sense the fastest possible such algorithm, maybe depending on the sizes and contents of the arrays A and B.Done well, that work might be publishable in a good peer reviewed journal of original research. At some of the world’s best research universities, the main requirement for a Ph.D. is “an original contribution to knowledge worthy of publication”. Even if don’t get a Ph.D. out of it, might get some high end contacts in the research computer science community and a high end college admission and scholarship.Step 4:For artificial intelligence and machine learning in computer science now, a central issue is the polar decomposition in matrix theory.Below is a simple proof of the more general complex case but only for the non-singular case. An exercise is to invent and/or find a proof for the singular case. Note: Consider a perturbation argument.Then, sure, watch the MIT lecture on quantum mechanics athttps://www.youtube.com/wat…and try to generalize this result to Hermitian (self-adjoint) and unitary (distance preserving) operators on Hilbert space as in quantum mechanics.If you need help, e.g., some simple explanations of quantum mechanics and/or Hilbert space, then ask your teachers and/or ask them to ask some professors, etc. Or just call up some math or physics profs at, say, Courant or Columbia.Polar decomposition?Okay.Since apparently the standard HTML elements sup and sub don’t work at Disqus, here I borrow syntax from D. 
Knuth’s mathematical word processing software TeX.Theorem (polar decomposition): For positive integer n and n x n complex non-singular matrix A, there exist n x n Hermitian H and n x n unitary U so thatA = HUThat is, somewhat intuitively, for a positive integer n and the set of complex numbers C, function A: C^n –> C^n and, there, the work of A is just, first, to rotate and/or reflect keeping distances constant and, then, stretch or shrink along mutually perpendicular axes. So a circle gets rotated and/or reflected and then stretched or shrunk into an ellipse. So, that is all a square matrix, that is, a linear transformation on C^n, can do. Similarly for the real numbers R and R^n.Proof:Let A* denote the transpose of the complex conjugate of A.Let K = AA*. Then K is Hermitian, that is,K* = (AA*)* = (A*)*(A*)= AA* = K.Then there exist unitary Q and diagonal D = [d_ij] with d_ii > 0 such thatK = Q*DQLetH = Q*D^(1/2)QThen H is Hermitian, i.e.,H* = (Q*D^(1/2)Q)*= Q*D^(1/2)Q) = HThenH^(-1) = (Q*D^(1/2)Q)^(-1)= Q*D^(-1/2)Qand(HH)^(-1) = H^(-2) = Q*D^(-1)Q= K^(-1)LetU = H^(-1)AWe show that U is unitary, i.e., that U*U = I.U*U = (H^(-1)A)*(H^(-1)A)= A*(H^(-1))* (H^(-1)A)= A*(H^(-1))(H^(-1)A)= A*(H^(-2))A= A*(K^(-1))A= A*(AA*)^(-1)A= A*((A*)^(-1))(A^(-1))A= II= Iso that U is unitary.Then we have A = HU for Hermitian H and unitary U. Done.So, in this proof, all we did is do a little matrix algebra using the standard properties of Hermitian and unitary matrices. That is, we didn’t have to bring in anything but just routine matrix algebra. For such an important result, interesting.The result also holds when A is singular, and can construct a proof similar to the above but with some more details.
@twaintwain I saw this article and thought of you: โWhy Women Should Lead our A.I. Futureโ https://medium.com/intuitio…
@mlbar:disqus — Thanks, Carlos Perez of Intuition Machine follows me on LinkedIn where I’m very vocal about how male-only frameworks and definitions have resulted in AI’s biased data, inability to understand Natural Language, our cultures, our values.I’m now part of the Consciousness Prior AI project led by one of the three “Godfathers of Deep Learning,” Yoshua Benugio of Montreal. One of its key objectives is to find another way of tooling the machines to do Natural Language Understanding.