Volatility vs Persistence

From volatile to persistent, brainstormed while programming…

Most volatile

(parallel stuff changing the same state?)

[In RAM]
Lexically defined variable

Dynamic / global variables

Persistent / static variables

Constants declared in the program code

[In Hard Drive]

A temporary file

A user file with read/write access

A high-clearance file with read/write access

A read-only file

[In Physical Terms]

A file written on an unchangeable read-only medium, like a CD-ROM

A file written on a durable medium for archival purposes.

Data carved into a traditional medium like a stone tablet or a diamond.

[In Transcendent Terms]

A self-reproducing life form / automaton that spreads the information faster than it can be destroyed.

Irreversible physical changes to an object or a collection of objects, with sufficient perpetual side-effects to deduce the original act of change. (Big bang?)

The physical laws that do not change during the lifetime of the universe.

Timeless sayings. Re-discoverable by any soul.

The WORD of GOD.

Most persistent.

To hide or to leak

Leaky abstractions, especially in recursive programming, drive the user to read code that refers to code that refers to code (and repeat) just to locate an issue in some low-level function… In many use cases, a user should not have to read and understand a library function in its entirety just to use it – the whole point is to off-load that work. The ideal case would be a function that works in almost the plain-English sense of its name. People are working towards context detection, so that a program knows the context in which the user is applying the function and calls the right implementation without throwing an error.
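
One hedged sketch of this "call the right implementation for the context" idea, using Python's standard `functools.singledispatch` (the shapes and the `area` function here are my own toy examples, not from any library):

```python
from functools import singledispatch
import math

@singledispatch
def area(shape):
    # Fallback: leak a readable error instead of a cryptic low-level one.
    raise TypeError(f"don't know how to compute the area of {shape!r}")

@area.register
def _(shape: tuple):
    # Context: a (width, height) pair is treated as a rectangle.
    w, h = shape
    return w * h

@area.register
def _(shape: float):
    # Context: a bare number is treated as the radius of a circle.
    return math.pi * shape ** 2

print(area((3, 4)))  # rectangle: 12
print(area(2.0))     # circle of radius 2
```

The dispatch is only on the first argument's type, which is a crude proxy for "context", but it shows the shape of the idea: one English-sense name, several implementations, and a deliberate, informative leak when no context matches.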

However, on the other end…

Hiding all the implementation details under the hood also removes the possibility of understanding the tool. If the tool is well designed, well and good; but if not, and the tool works against or hurts its wielder, then there is no easy way to change it. People are then forced to work for the tool (fixing the computer, maintaining virus checks, etc.), instead of having the tool work for them.

Two examples:

  1. When you see a blue screen in Windows, you are given a code and told to ask your system administrator.
  2. Using Mathematica, you don’t learn the math at all; you just call the function and it should just work – but when it returns an elliptic integral of the n-th kind, what can you do…?
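
(For the record, there are in fact only three standard kinds; the incomplete elliptic integrals of the first and second kind are

```latex
F(\varphi, k) = \int_0^{\varphi} \frac{d\theta}{\sqrt{1 - k^2 \sin^2\theta}},
\qquad
E(\varphi, k) = \int_0^{\varphi} \sqrt{1 - k^2 \sin^2\theta}\, d\theta ,
```

which is exactly the kind of answer a CAS will hand back to a user who has never met one.)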

To hide or to leak, that’s the question. And how to hide if it should be hidden and how to leak if it should be leaked. This calls for wisdom.

Goto and \Eqno

To intersection{mathematicians,programmers}:

   do equation numbering and references feel like goto statements?

Dijkstra’s tirade against goto:

http://www.u.arizona.edu/~rubinson/copyright_violations/Go_To_Considered_Harmful.html

Knuth’s measured defense of goto within structured programming:

“Structured programming with go to statements”

Anyway, rampant goto statements are abhorrent to both of them, and to everybody else. The agreement is that goto is a necessary evil in low-enough-level code written for optimization, but jumping between lines is by nature unnatural for human readers.

But fellow mathematicians, don’t you feel that reading somebody’s paper with over two dozen equation numbers is likewise a torture? They feel like goto statements, actually: your linear flow of reading is disrupted again and again by the need to refer to something that refers to something else (and loop), or worse, to another unpublished article, or (gasp) a volume over 500 pages.

This line of thought was triggered by engineering students’ criticisms of a certain MATLAB course’s “explanatory document” written by a math professor.

Seriously, that’s no surprise to me, because I knew this professor, and he is not unique. Mathematical presentation is hugely influenced by Knuth’s TeX typesetting system… and the inherent idea of cross-referencing isn’t that different from goto, from a reader’s point of view.

Perhaps, just as structure is found important in a program, some conceptual organization is necessary to write readable mathematical prose – both are forms of prose anyway, written in esoteric languages that the untrained cannot comprehend.

Why make life hard, for others and for ourselves alike? How will you understand your own paper ten years from now? Can you summarize your idea in a few sentences that make common sense?

Publish or perish, what a vicious cycle.

“Ontogeny recapitulates phylogeny” in mathematical learning?

It seems most sophisticated numerical packages for differential equations and numerical linear algebra are “bloated” with jargon… names and names and names, and, tedium.

In an area of computing where the mathematical algorithm (not the compiler) is the utmost determinant of performance, this “multiple re-branding” is ultimately confusing for students of mathematics and for new industrial learners. The re-branding is absolutely justifiable and necessary given the packages’ different fields of application, design philosophies, and programming languages, and it gives credit to the supporting government lab (Sandia, Livermore, Argonne) or commercial corporation; but in doing so, it hides away the same mathematical concepts used over and over again.

And to make meaningful use of all such packages, though they are promised to be “robust, powerful and time-saving”, any newcomer needs to first re-learn an essentially aliased set of vocabulary just to punch the correct sequence of keystrokes to get an answer (while avoiding bugs due to an incomplete understanding of the variegated design philosophies)! No wonder people finally give up and return to MATLAB, or just reinvent the simplest wheel they need for their own applications. Only those whose income is tied to large-scale operations have an incentive to devour a huge user guide and invent exercises for themselves to train their mammalian brains to sufficient mastery of the new trick.

So, what’s a good tool? The bottom line is that it should yield control in the areas where it is not good. There is a huge amount of abstraction leaking out there, and people need to learn a “framework” correctly just to do something basic – contrary to how mathematical knowledge is actually learned (supposing you did not start your education from epsilons and deltas). It is through counter-examples that people feel the need to re-invent the framework, and prudent human wisdom usually prefers not to reinvent the whole language, but to make a minimal notational change to convey a simple idea. That’s why Einstein was so proud of his summation convention.
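
The summation convention is itself a minimal notational change that numerical tools have adopted directly. A small sketch with NumPy’s `einsum` (assuming NumPy is available):

```python
import numpy as np

A = np.arange(6).reshape(2, 3)
B = np.arange(12).reshape(3, 4)

# Matrix product in Einstein notation: C_ik = A_ij B_jk,
# with the repeated index j summed implicitly.
C = np.einsum('ij,jk->ik', A, B)
assert np.allclose(C, A @ B)  # same as the explicit matrix product

# The trace is just "repeat the index and sum": tr(M) = M_ii.
M = np.arange(9).reshape(3, 3)
assert np.einsum('ii->', M) == np.trace(M)
```

One line of index notation replaces a family of differently-named routines (matmul, trace, tensordot, …), which is precisely the point being made about aliased vocabulary.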

The single language that everybody shares is the mathematical language itself, refined over centuries if not millennia. Now that programming languages are maturing to reproduce the features of Lisp, and eventually a mathematical facet of human language (with wonderful LLVM facilities that give you a C-language boost over interpreted MATLAB), we should devote our efforts to writing programs literately, to avoid re-learning.

How can a user learn a language for doing math computationally in a way that recapitulates the chronological order in which he learned the math ideas? And how can a mathematical programmer design an OOP-based body of knowledge (with both nouns and verbs) to best facilitate this?

Note: this classical theory by Ernst Haeckel is largely discredited in the biological community: http://en.wikipedia.org/wiki/Recapitulation_theory

P.S. Lisp is succinct to the author but not to the reader without training (and even to those with training, if the program is poorly documented or commented). Procedural programs are easier to read sequentially but by nature leaky in abstraction. The epitomes of functional and procedural programming languages each suffer from a different kind of cognitive overload: dense syntactic processing in a few lines of code (plus a PhD in parsing parentheses) vs. a high short-term-memory requirement in a thousand-line program (where the first line doesn’t make sense until you have finished hundreds more). LOL, but whoever solves this conundrum hits gold. Ultimately, things are converging back to a subset of plain English / mathematics, but made executable. Aren’t they?

Hope it will be GPL or MIT or BSD, but not Intel, Microsoft or even MathWorks.

Watchlist for 2014

It’s been almost a year since I blogged at UBC Blogs. Just want to note down things to watch in the coming year.

Projects to watch:

  • Julia (http://julialang.org/) and iJulia (http://nbviewer.ipython.org/url/jdj.mit.edu/~stevenj/IJulia%20Preview.ipynb)
  • Chebfun (http://www2.maths.ox.ac.uk/chebfun/)
  • XPPAUT on iOS and Android (not yet done; read the development plan to learn design ideas for an interactive PDE/ODE software) (http://www.math.pitt.edu/~bard/xpp/gsoc_ideas.html)
  • Quicklisp, a nascent package system for Common Lisp (finally… at http://www.quicklisp.org/)
  • Kivy, a cross-platform (iOS & Android) GUI framework in Python
  • Autodiff.org, a collection of tools for automatic differentiation (AD) in different languages; should focus on the Fortran implementations
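
As a taste of what those AD tools automate, here is a minimal forward-mode sketch using dual numbers – a toy illustration of the technique, not the API of any package listed above:

```python
class Dual:
    """A dual number a + b*eps with eps**2 == 0; the eps coefficient
    carries the derivative through ordinary arithmetic."""
    def __init__(self, value, deriv=0.0):
        self.value, self.deriv = value, deriv

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps
        return Dual(self.value * other.value,
                    self.value * other.deriv + self.deriv * other.value)
    __rmul__ = __mul__

def derivative(f, x):
    """Evaluate f'(x) by seeding the dual part with 1."""
    return f(Dual(x, 1.0)).deriv

# d/dx (x*x + 3*x) = 2x + 3, so at x = 2 the derivative is 7.
print(derivative(lambda x: x * x + 3 * x, 2.0))  # 7.0
```

Real AD systems add the full operator set, reverse mode, and source transformation, but the core idea fits in a page – which is rather the point of this watchlist entry.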

Worthy mature projects to learn more about:

  • noweb (http://www.cs.tufts.edu/~nr/noweb/), and its LyX environment.

Liberally Deliberate

Academic learning is not a liberal process,
     it neither just comes to you, nor is it natural.
It is brought about only by deliberate training,
     it is both self-initiated, and essentially artificial.

Respect

If you want to be respected,

     be someone who is worthy of respect.

          When you have become so,
you will not ask for respect anymore.
     Not because people do respect you,
but because your former need is no more.