“Ontogeny recapitulates phylogeny” in mathematical learning?

It seems that most sophisticated numerical packages for differential equations and numerical linear algebra are “bloated” with jargon… names and names and names, and tedium.

In an area of computing where the mathematical algorithm (not the compiler) is the chief determinant of performance, this “multiple re-branding” is ultimately confusing for students of mathematics and for new industrial learners. The re-branding is justifiable, even necessary, given the packages’ different fields of application, design philosophies, and implementation languages, and it gives due credit to the supporting government laboratory (Sandia, Livermore, Argonne) or commercial corporation. But in doing so, the packages hide the fact that the same mathematical concepts are used over and over again.
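To make the aliasing concrete, here is a minimal sketch in Python, with NumPy and SciPy as stand-ins (neither is named above): the single mathematical act of solving Ax = b appears under three different names and calling conventions.

```python
import numpy as np
from scipy.sparse.linalg import cg, gmres

# One mathematical problem: solve A x = b for a symmetric positive-definite A.
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50.0 * np.eye(50)   # SPD by construction
b = rng.standard_normal(50)

x_lu = np.linalg.solve(A, b)      # "solve": direct LU factorization (LAPACK)
x_cg, _ = cg(A, b)                # "cg": Krylov method, conjugate gradients
x_gm, _ = gmres(A, b)             # "gmres": Krylov method, generalized minimal residual

for name, x in [("solve", x_lu), ("cg", x_cg), ("gmres", x_gm)]:
    print(f"{name:>5}: residual = {np.linalg.norm(A @ x - b):.2e}")
```

Three names, three interfaces, one piece of mathematics.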

And to make meaningful use of any such package, though each is promised to be “robust, powerful and time-saving”, a newcomer must first re-learn an essentially aliased vocabulary just to punch in the correct sequence of keystrokes to get an answer (while avoiding the bugs that come from an incomplete understanding of the variegated design philosophies). No wonder people finally give up and return to MATLAB, or just reinvent the simplest wheel they need for their own applications. Only those whose income is tied to large-scale operations have an incentive to devour a huge user guide and invent exercises for themselves to train their mammalian brains to sufficient mastery of the new trick.
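And the simplest wheel is often genuinely simple. As a hedged illustration (plain NumPy, the textbook algorithm, not any particular package’s implementation), here is the conjugate-gradient loop a practitioner might re-derive rather than learn a framework for:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-8, max_iter=1000):
    """Textbook conjugate gradients for a symmetric positive-definite A."""
    x = np.zeros_like(b)
    r = b - A @ x                    # residual
    p = r.copy()                     # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)        # step length along p
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p    # conjugate update of the direction
        rs = rs_new
    return x
```

Fifteen readable lines, and not a single package-specific noun to memorize.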

So, what is a good tool? The bottom line is that it should yield control in the areas where it is not good. There is a huge amount of abstraction leakage out there, and people have to learn the “framework” correctly before they can do anything basic, which is contrary to how mathematical knowledge is actually learned (assuming you did not start your education with epsilons and deltas). It is through counter-examples that people feel the need to re-invent the framework, and prudent human wisdom usually prefers not to reinvent the whole language, but to make a minimal notational change that conveys a simple idea. That is why Einstein was so proud of his summation convention.
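The summation convention is a perfect specimen of such a minimal notational change: drop the sum sign and let a repeated index do the work.

```latex
% Explicit form:
y_i = \sum_{j=1}^{n} A_{ij}\, x_j
% Einstein's convention: a repeated index (here j) implies summation:
y_i = A_{ij}\, x_j
```

One symbol removed, no new language invented, and every matrix-vector identity becomes shorter to write and to read.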

The single language that everybody shares is the mathematical language itself, refined over centuries if not millennia. Now that programming languages are maturing to reproduce the features of Lisp and, eventually, a mathematical facet of human language (with wonderful LLVM facilities that give you a C-language speed boost over interpreted MATLAB), we should devote our efforts to writing programs literately, to avoid re-learning.
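As a hedged sketch of that LLVM point (assuming Numba, an LLVM-based JIT compiler for numerical Python, as one concrete vehicle; the post does not name it), the code below reads almost exactly like the quadrature formula it implements, yet compiles down to machine code:

```python
import numpy as np
from numba import njit  # LLVM-based just-in-time compiler

@njit
def trapezoid(f, h):
    """Composite trapezoidal rule: h * (f_0/2 + f_1 + ... + f_{n-1}/2)."""
    total = 0.5 * (f[0] + f[-1])
    for k in range(1, len(f) - 1):
        total += f[k]
    return h * total

x = np.linspace(0.0, np.pi, 10_001)
# Approximately 2.0, the exact value of the integral of sin over [0, pi].
print(trapezoid(np.sin(x), x[1] - x[0]))
```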

How can a user learn a language for doing mathematics computationally in a way that recapitulates the chronological order in which he learned the mathematical ideas? And how can a mathematical programmer design an OOP-based body of knowledge (with both nouns and verbs) to best facilitate this?
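Purely as a hypothetical sketch (every name below is invented for illustration, not drawn from any real library), such a design might order its nouns by the learner’s chronology and attach the verbs each stage introduces:

```python
# Nouns ordered as they are learned: arithmetic, then algebra, then calculus.
class Quantity:
    """A number: the first noun a learner meets."""
    def __init__(self, value):
        self.value = value

    def add(self, other):                  # arithmetic's verb
        return Quantity(self.value + other.value)

class Function:
    """A rule x -> f(x): the noun of algebra and calculus."""
    def __init__(self, rule):
        self.rule = rule

    def evaluate(self, x):                 # algebra's verb
        return self.rule(x)

    def derivative(self, x, h=1e-6):       # calculus's verb, via central differences
        return (self.rule(x + h) - self.rule(x - h)) / (2.0 * h)

f = Function(lambda x: x ** 2)
print(f.evaluate(3.0), f.derivative(3.0))  # 9.0 and approximately 6.0
```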

Note: this classical theory by Ernst Haeckel is largely discredited in the biological community: http://en.wikipedia.org/wiki/Recapitulation_theory

P.S. Lisp is succinct for the author, not for the reader without training (and even for the trained reader, if the program is poorly documented or commented). Procedural programs are easier to read sequentially, but are by nature leaky in abstraction. The epitomes of functional and procedural programming each suffer a different kind of cognitive overload: dense syntactic processing packed into a few lines of code (plus a PhD in parsing parentheses) versus a high short-term-memory load spread over a thousand lines (where the first line does not make sense until you have finished hundreds more). LOL, but whoever solves this conundrum strikes gold. Ultimately, things are converging back to a subset of plain English / mathematics, made executable. Aren’t they?
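The contrast is easy to stage. Here is a hedged Python miniature of the two overloads, the same computation written both ways:

```python
from functools import reduce

data = [1, 2, 3, 4, 5]

# Functional: one dense line; all the structure lives in the nesting,
# and the reader must parse it in a single gulp.
sum_sq = reduce(lambda acc, x: acc + x * x, data, 0)

# Procedural: trivially readable line by line, but the accumulating
# state ("total") must be carried in the reader's short-term memory.
total = 0
for x in data:
    total += x * x

assert sum_sq == total == 55
```

At five lines the procedural cost is negligible; at five thousand, it is the whole problem.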

Hope it will be GPL, MIT, or BSD, but not Intel, Microsoft, or even MathWorks.