This is a nice little document, but you have to understand that mathematical notation is syntactically closer to spoken language than to code, and it's only expected to be semantically rigorous after context is taken into account.
Ken Iverson's Turing Award lecture was Notation as a Tool of Thought [1]. Both of his languages, APL and J [2], were based on the idea of mapping mathematical notation onto a programming language. Iverson's Math for the Layman presents that link explicitly, and is an enjoyable read besides. [3]
I've always thought (ever since university) that math notation should take inspiration from code. Instead of writing Xi, they should really write X[i]; because Xi is ambiguous. I find that dealing with all the ambiguity in syntax can be rather frustrating.
> "Instead of writing Xi, they should really write X[i]; because Xi is ambiguous. I find that dealing with all the ambiguity in syntax can be rather frustrating."
This is a feature of math, not a bug. The potential for ambiguity is the cost that one pays to have a flexible and extensible notation which can be adapted to concepts yet undiscovered. This is generally not the case with code [1]. When you are exploring new ideas you want the ability to redefine your notation to match the nature and structure of the abstractions you are examining.
There is a finite number of symbols in the set of all human languages, and thus far we know of no reason that there should be a finite set of concepts in mathematics. Enforcing a one-to-one mapping from a given sequence of symbols to a given concept forces you to either limit the space of concepts you can consider or to eventually deal with impractically large sequences of symbols for relatively simple concepts.
[1] Yes, Lisp and DSLs are a thing, but you still have to define what a given sequence of symbols means. In an interpreted language, the interpreter computes the meaning using inputs and any necessary state. In math, the meaning is necessarily dependent on context as well.
I believe that's the pseudocode format? I grew up in physics before the web took off and am now teaching myself bioinformatics, transcribing problem statements and pseudocode into working Python.
There's several hundred years of pencil and paper math where these forms developed natively, unconstrained by $MY_KEYBOARD, which offers a far more limited range of expression. I am quite grateful for LaTeX, but I also understand the utility of suppressing that degree of expressiveness in code, where getting the point down quickly with a few more keys is a reasonable trade for the power of the machine that $MY_KEYBOARD interacts with.
Personally, I find IPython's web notebook to be an amazing environment.
X_i is a valid variable name. Although it seems in math you can only use single-letter variable names, which is kinda ridiculous (coming from a computer science background).
Using "ab" would lead to ambiguity, since it could mean "a*b". Also, there is no reason to use two-letter variables when you can use any letter from the Latin or Greek alphabet, with accents like ä or â, or even create new symbols like ∫, ℝ or ℕ when nothing matches the concept you want to express.
Unlike programming languages, math notation evolved without being constrained by monospace ASCII characters and text lines.
That seems like a rather selfish and narrow thing to say.
I've always thought (ever since university) that code should take inspiration from math notation.
Instead of writing:
for (int i = 0; i < limit; i++) { acc += 1; }
just write:
acc = ∑_{i=0}^{limit-1} 1;
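Joking aside, the loop and the summation do correspond mechanically; a minimal sketch in Python of the same accumulation (the names `limit` and `acc` are taken from the snippet above):

```python
limit = 10

# The C-style loop "for (int i = 0; i < limit; i++) { acc += 1; }"
# adds 1 once per iteration, i.e. acc = sum of 1 for i = 0 .. limit-1,
# which is just limit.
acc = sum(1 for i in range(limit))

assert acc == limit
```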
I think maybe one should do some more math before telling a world of mathematicians what to do. Besides, it's far easier to create computer languages than to change math notation the world over.
I don't like being negative, but this seems really patronising and unhelpful. Just because someone is a 'self-taught game/graphics programmer' doesn't mean that describing maths concepts to them as code is at all intuitive, especially for things relating to geometric concepts like vectors, for which a simple diagram would speak volumes.
I agree, sadly. I'm not sure anyone who didn't already understand vector products is going to be helped by staring at "var rx = ay * bz - az * by; var ry = az * bx - ax * bz; var rz = ax * by - ay * bx" (as opposed to, say, [0]). Sure, it's precise and unambiguous, but it's missing the context that makes it possible for a human to actually understand what it is as opposed to just calculating it (which the javascript interpreter is capable of doing fine on its own).
To say nothing of "var determinant = require('gl-mat2/determinant'); determinant(matrix)"! I guess the point is to go read the source code of the library, but I can't imagine many worse ways of learning linear algebra than reading the source code of optimized linear algebra routines.
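For reference, here is the quoted cross-product formula with the geometric context spelled out in comments — a hedged sketch in Python, not the library's actual code (variable names mirror the quoted JavaScript):

```python
def cross(a, b):
    """Cross product of two 3-vectors.

    The result is perpendicular to both a and b, and its length is
    |a||b|sin(theta) — the area of the parallelogram the two vectors span.
    """
    ax, ay, az = a
    bx, by, bz = b
    return (ay * bz - az * by,
            az * bx - ax * bz,
            ax * by - ay * bx)

# x cross y = z, as the right-hand rule predicts:
assert cross((1, 0, 0), (0, 1, 0)) == (0, 0, 1)
```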
The point is not to understand linear algebra in a few lines of code. The point is to present the language of mathematics in another light; and hopefully demystify intimidating-looking equations that often appear in literature surrounding games, graphics and other fields of programming.
The audience is hobbyists and self-taught developers with no formal background in mathematical notation. This audience might have no problem with a for loop, but the summation symbol is (literally) just Greek to them.
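For example, a summation that can look opaque in Greek reads naturally as a loop; a sketch in Python (the function name is mine, not the article's):

```python
# Sum of i squared as i runs from 1 to n — what a sigma with bounds
# i=1 below and n above, applied to i^2, is actually asking for.
def sum_of_squares(n):
    total = 0                   # the accumulator the sigma implies
    for i in range(1, n + 1):   # i = 1, 2, ..., n (inclusive)
        total += i ** 2
    return total

assert sum_of_squares(4) == 1 + 4 + 9 + 16 == 30
```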
I've found it helpful - not as someone trying to learn a new concept from scratch but as a coder who has struggled with maths in the past and needs a refresher.
Not sure if I agree with the statement that "=" is used for definitions. Its true meaning is simply to state that two things are equal. I can see where the confusion comes from, since mathematicians have a tendency to just state "Let x = 2kj" instead of the more explicit "Let us assume that the statement 'x = 2kj' is true". It's important to note the distinction, though, since introducing a new variable by simply stating its properties is more powerful than just defining it to be equal to something. For instance, it's equally valid to define a symbol x by simply stating "Let x∈A". This is used quite a lot, since anything you can then prove about "x" must automatically be true for all elements of A.
I like the idea of using the compound symbol ":=" for "is defined to be". So, you can say something like "For x∈A, y := 2x". When I take math notes, it helps a lot using this shorthand. I'm able to remove some ambiguity between what is and is not a definition.
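In LaTeX notes, the distinction might be typeset like this (assuming the mathtools package for \coloneqq):

```latex
% ":=" introduces y as a definition; "=" merely asserts an equality.
\[
  y \coloneqq 2x \qquad \text{($y$ is defined to be $2x$)}
\]
\[
  y = 2x \qquad \text{($y$ and $2x$ are asserted to be equal)}
\]
```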
I definitely agree with the use of ":=", but "=" simply doesn't mean the same thing. For example, the similar statement "For x∈A, y = 2x" could even be interpreted as using y to define x.
I don't understand -- you say "=" and ":=" are not the same thing, but give an example where they act the same?
Practically speaking, using ":=" removes any doubt when quickly skimming over old notes. It is nice to be able to easily separate what is defined and what is asserted in each of dozens of statements.