Understanding Mathematical Notation as Code (github.com/jam3)
72 points by mattdesl on June 30, 2015 | 38 comments


This is a nice little document, but you have to understand that mathematical notation is syntactically closer to spoken language than to code, and it's only expected to be semantically rigorous after context is taken into account.


Ken Iverson's Turing Award essay was Notation as a Tool of Thought [1]. Both of his languages, APL and J [2], were based on the idea of mapping mathematical notation onto a programming language. Iverson's Math for the Layman presents that link explicitly while being an enjoyable read. [3]

[1]: http://www.jsoftware.com/papers/tot.htm

[2]: http://www.jsoftware.com

[3]: http://www.cs.trinity.edu/About/The_Courses/cs301/math-for-t...


I've always thought (ever since university) that math notation should take inspiration from code. Instead of writing Xi, they should really write X[i]; because Xi is ambiguous. I find that dealing with all the ambiguity in syntax can be rather frustrating.


> "Instead of writing Xi, they should really write X[i]; because Xi is ambiguous. I find that dealing with all the ambiguity in syntax can be rather frustrating."

This is a feature of math, not a bug. The potential for ambiguity is the cost that one pays to have a flexible and extensible notation which can be adapted to concepts yet undiscovered. This is generally not the case with code [1]. When you are exploring new ideas you want the ability to redefine your notation to match the nature and structure of the abstractions you are examining.

There is a finite number of symbols in the set of all human languages, and thus far we know of no reason that there should be a finite set of concepts in mathematics. Enforcing a one-to-one mapping from a given sequence of symbols to a given concept forces you to either limit the space of concepts you can consider or to eventually deal with impractically large sequences of symbols for relatively simple concepts.

[1] Yes, Lisp and DSLs are a thing, but you still have to define what a given sequence of symbols means. In an interpreted language, the interpreter computes the meaning using inputs and any necessary state. In math, the meaning is necessarily dependent on context as well.


> deal with impractically large sequences of symbols for relatively simple concepts.

I don't think they would ever have to be impractically large.

Right now they are just impractically weird.


> math notation should take inspiration from code

I believe that's the pseudocode format? I grew up in physics before the web took off and am now teaching myself bioinformatics, transcribing problem statements and pseudocode into working python.

There are several hundred years of pencil-and-paper math where these forms developed natively, unconstrained by $MY_KEYBOARD, which offers a far more limited range of expression. I am quite grateful for LaTeX, but I also understand the utility of suppressing that degree of expressiveness in code, where getting the point down quickly with a few more keys is a reasonable trade for the power of the machine that $MY_KEYBOARD interacts with.

Personally, I find IPython's web notebook to be an amazing environment.


> Instead of writing Xi, they should really write X[i]; because Xi is ambiguous.

How so, and in which cases would it be ambiguous to write X_i, but not X[i]?


X_i is a valid variable name. Although it seems in math you can only use single-letter variable names, which is kinda ridiculous (coming from a computer science background).


Using "ab" would lead to ambiguity, since it could mean "a*b". Also, there is no reason to use two-letter variables when you can use any letter from the Latin or Greek alphabet, with any decoration like ä or â, or even create new symbols like ∫, ℝ or ℕ when nothing matches the concept you want to express.

Unlike programming languages, math notation evolved without being constrained by monospace ASCII characters and text lines.


Draw a circle/oval/rectangle/cloud/whatever around your names and suddenly there is zero ambiguity. The only constraint was EGO.

PS: For typesetting, just capitalize the first letter. Radius ^ 2 * Pi = Area is common and unambiguous.


You can use multiple letter variable names. Nobody is stopping you from doing that.


> Instead of writing Xi, they should really write X[i]

In math, x_i is the i-th element of a set/list, and X[i] is a ring of polynomials.


No, both are whatever you want them to be. http://mathworld.wolfram.com/SquareBracket.html gives some options for square brackets. https://en.m.wikipedia.org/wiki/Bracket_(mathematics) has quite a few more (do not forget to check out the 'See also' section)

And x_i could for example mean 'x in base i' (http://mathforum.org/library/drmath/view/57226.html)

Mathematicians invent notation to suit them in whatever problem they are working on. That notation doesn't have to be globally consistent.


That's true. These particular conventions just happen to be ubiquitous. We can agree to call 2 a unit; to most readers, though, it won't be one.


Nitpick: under standard definitions, 2 is a unit over the field of rationals (or reals, or many other systems).

EDIT: for a less trivial example, 2 is a unit in the integers mod 9, but 3 isn't.
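To make that claim concrete (a sketch, not from the linked document — the function name is mine): u is a unit mod n exactly when some v satisfies u·v ≡ 1 (mod n), equivalently gcd(u, n) = 1.

```javascript
// Brute-force check: u is a unit mod n iff some v in 1..n-1
// gives (u * v) % n === 1.
function isUnitMod(u, n) {
  for (let v = 1; v < n; v++) {
    if ((u * v) % n === 1) return true;
  }
  return false;
}

console.log(isUnitMod(2, 9)); // true  (2 * 5 = 10 ≡ 1 mod 9)
console.log(isUnitMod(3, 9)); // false (3v mod 9 is always 0, 3 or 6)
```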


That seems like a rather selfish and narrow thing to say.

I've always thought (ever since university) that code should take inspiration from math notation.

Instead of writing:

    for (int i = 0; i < limit; i++) { acc += 1; }
just write:

    acc = ∑_{i=0}^{limit-1} 1;
I think maybe one should do some more math before telling a world of mathematicians what to do. Besides, it's far easier to create computer languages than to change math notation the world over.
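For what it's worth, the equivalence being joked about is real (a sketch; the function names are mine, not from the linked document):

```javascript
// The loop from the comment: adds 1 to acc exactly `limit` times,
// i.e. the summation Σ_{i=0}^{limit-1} 1, which equals limit.
function sumLoop(limit) {
  let acc = 0;
  for (let i = 0; i < limit; i++) {
    acc += 1;
  }
  return acc;
}

// The same summation in a more "math-shaped" style: build the list of
// terms, then fold them with +.
function sumSigma(limit) {
  return Array.from({ length: limit }, () => 1)
    .reduce((acc, term) => acc + term, 0);
}

console.log(sumLoop(5));  // 5
console.log(sumSigma(5)); // 5
```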


Mathematical notation is usually contextual, due to the broad range of concepts it has to cover.


The part about using = for definition doesn't seem to reflect the way that works in mathematics.

The mathematical meaning of x = 2kj isn't var x = 2 * k * j but var x = function(k, j) { return 2 * k * j; };
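A sketch of that distinction (variable names are illustrative, not from the document): a one-time assignment freezes the current values, while a function preserves the relationship.

```javascript
// Snapshot semantics: x is computed once from the current k and j.
let k = 3, j = 4;
let xAssigned = 2 * k * j; // 24, and stays 24

// Relationship semantics: x is recomputed from its arguments each call,
// closer to how a mathematician reads "x = 2kj".
const x = (k, j) => 2 * k * j;

k = 5;
console.log(xAssigned); // 24 — did not track the change to k
console.log(x(k, j));   // 40 — reflects the new k
```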


I'm working on improving this part; feel free to add more discussion here: https://github.com/Jam3/math-as-code/issues/17


I see it more as an "Assert(x==2kj)".


I don't like being negative, but this seems really patronising and unhelpful. Just because someone is a 'self-taught game/graphics programmer' doesn't mean that describing maths concepts to them as code is at all intuitive, especially for things relating to geometric concepts like vectors, for which a simple diagram would speak volumes.


I agree, sadly. I'm not sure anyone who didn't already understand vector products is going to be helped by staring at "var rx = ay * bz - az * by; var ry = az * bx - ax * bz; var rz = ax * by - ay * bx" (as opposed to, say, [0]). Sure, it's precise and unambiguous, but it's missing the context that makes it possible for a human to actually understand what it is as opposed to just calculating it (which the JavaScript interpreter is capable of doing fine on its own).

To say nothing of "var determinant = require('gl-mat2/determinant'); determinant(matrix)"! I guess the point is to go read the source code of the library, but I can't imagine many worse ways of learning linear algebra than reading the source code of optimized linear algebra routines.

[0] https://upload.wikimedia.org/wikipedia/commons/6/6e/Cross_pr...
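For comparison, here is the same formula with the geometric reading attached (a sketch using plain [x, y, z] arrays — an assumption, not the representation the gl-* libraries use):

```javascript
// Cross product a × b: the result is perpendicular to both a and b,
// with length |a||b|sin(θ) — the area of the parallelogram they span.
function cross(a, b) {
  return [
    a[1] * b[2] - a[2] * b[1], // x component: signed area in the yz-plane
    a[2] * b[0] - a[0] * b[2], // y component: signed area in the zx-plane
    a[0] * b[1] - a[1] * b[0], // z component: signed area in the xy-plane
  ];
}

console.log(cross([1, 0, 0], [0, 1, 0])); // [0, 0, 1] — x̂ × ŷ = ẑ
```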


The point is not to understand linear algebra in a few lines of code. The point is to present the language of mathematics in another light; and hopefully demystify intimidating-looking equations that often appear in literature surrounding games, graphics and other fields of programming.

The audience is hobbyists and self-taught developers with no formal background in mathematical notation. This audience might have no problem with a for loop, but the summation symbol is (literally) just Greek to them.
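That translation really is mechanical. A sketch (function names are mine): the two "Greek" symbols a programmer meets most often, Σ (sum) and Π (product), are both just loops over an index with different accumulators.

```javascript
// Σ_{i=from}^{to} f(i): accumulate with + starting from 0.
function sigma(from, to, f) {
  let acc = 0;
  for (let i = from; i <= to; i++) acc += f(i);
  return acc;
}

// Π_{i=from}^{to} f(i): accumulate with * starting from 1.
function pi(from, to, f) {
  let acc = 1;
  for (let i = from; i <= to; i++) acc *= f(i);
  return acc;
}

console.log(sigma(1, 4, i => i)); // 1 + 2 + 3 + 4 = 10
console.log(pi(1, 4, i => i));    // 1 * 2 * 3 * 4 = 24
```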


I've found it helpful - not as someone trying to learn a new concept from scratch but as a coder who has struggled with maths in the past and needs a refresher.


As someone from the target audience of this document I find it really helpful and descriptive. I was looking for such a resource for ages.

Also keep in mind that the whole thing was created maybe yesterday, so it's OK if it's not perfect yet.


Not sure if I agree with the statement that "=" is used for definitions. Its true meaning is to simply state that two things are equal. I can see where the confusion comes from, since mathematicians have a tendency to just state "Let x = 2kj" instead of the more explicit "Let us assume that the statement 'x = 2kj' is true". It's important to note the distinction, though, since introducing a new variable by simply stating its properties is more powerful than just defining it to be equal to something. For instance, it's equally valid to define a symbol x by simply stating "Let x∈A". This is used quite a lot, since anything you can then prove about "x" must automatically be true for all elements of A.


I like the idea of using the compound symbol ":=" for "is defined to be". So, you can say something like "For x∈A, y := 2x". When I take math notes, it helps a lot using this shorthand. I'm able to remove some ambiguity between what is and is not a definition.
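In typeset notes the same distinction can be made explicit (a LaTeX sketch; `\coloneqq` comes from the mathtools package):

```latex
% Requires \usepackage{mathtools} in the preamble.
% Definition: introduces y in terms of an existing x.
\[ y \coloneqq 2x, \qquad x \in A \]
% Assertion: claims an equality between things already defined.
\[ y = 2x \]
```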


I definitely agree with the use of ":=", but "=" simply doesn't mean the same thing. For example, the similar statement "For x∈A, y = 2x" could even be interpreted as using y to define x.


I don't understand -- you say "=" and ":=" are not the same thing, but give an example where they act the same?

Practically speaking, using ":=" removes any doubt when quickly skimming over old notes. It is nice to be able to easily separate what is defined and what is asserted in each of dozens of statements.


I'm working on improving that section; would be great if you could chime in here:

https://github.com/Jam3/math-as-code/issues/17


Surprised there's no mention of the Curry-Howard correspondence between proofs and programs.


Mention? The title led me to believe that was the topic.


Nice, it would make a great book: "Math for Coders: From Code to Math Notation." Seriously ;)


This is amazing. It's always annoying having to Google arcane mathematical notation in order to translate something into code.


Math is pure (i.e. immutable), while code is mutable. Damn barbarians trying to desecrate math!


Math notation is just obfuscated code in a language that doesn't compile or run.


It's not really code, more like pseudocode, because it leaves the details of the implementation to the reader.

Might be what you were getting at with "doesn't compile or run".


This is neat. I hope to see more!



