I find APL very difficult to read. Incidentally, I am told (by stack overflow) that the APL expression "A B C" can have at least four different meanings depending on context[1]. I suspect there's a connection here.
Yes, it's either an array (if A, B and C are arrays); a function derived via the dyadic operator B, with operands A and C being either arrays or functions; a dyadic call of the function B (with A and C as its array arguments); the sequential monadic application of functions A and B to array C; or a derived function, the tacit fork (A, B and C are functions). Did I miss anything?
Yes, it can also be a fork where A is an array while B and C are functions; a tacit atop where either B is a monadic operator and A is its array or function operand, or A is a function and C is a monadic operator with B as its array or function operand. Finally, it can be a single derived function where B and C are monadic operators while A is B's array or function operand.
Do APL programmers think this is a good thing? It sounds a lot like how I feel about currying in languages that have it (meaning it's terrible, because code can't be reasoned about locally, only with a ton of surrounding context, the entire program in the worst case).
It gets me thinking about the “high context / low context” distinction in natural languages. High context languages are ones where the meaning of a symbol depends on the context in which it’s embedded.
It’s a continuum, so English is typically considered low context, but it does have some examples. “Free as in freedom versus free as in beer” is one that immediately comes to mind.
A high context language would be one like Chinese where, for example, the character 过 can be a grammatical marker for experiential aspect, a preposition equivalent to “over”, “across” or “through” depending on context, a verb with more English equivalents than I care to try and enumerate, an affix similar to “super-”, etc.
When I was first starting to learn Chinese it seemed like this would be hopelessly confusing. But it turns out that human brains are incredibly well adapted to this sort of disambiguation task. So now that I’ve got some time using the language behind me it’s so automatic that I’m not really even aware of it anymore, except to sit here racking my brain for examples like this for the purpose of relating an anecdote.
I would bet that it’s a similar story for APL: initially seems weird if you aren’t used to it, but not actually a problem in practice.
It makes parsing tricky. But for the programmer it’s rarely an issue, since definitions are typically physically close. Some variants like BQN avoid this ambiguity by imposing a naming scheme (function names upper case, array names lower case, or similar).
I am not good enough with APL to be certain, but I think you can generally avoid most of these sorts of ambiguities, and the terseness of APL helps a great deal because the required context is never far away; you generally don't even have to scroll. I have been following this thread to see what the more experienced have to say, and decided to force the issue.
Huh? Currying doesn't require any nonlocal reasoning. It's just the convention of preferring functions of type a -> (b -> c) to functions of type (a, b) -> c. (Most programming languages use the latter.)
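To make the distinction concrete, here's a minimal Haskell sketch of the two conventions (the names `addC`/`addT` are just illustrative; `curry` and `uncurry` from the Prelude convert between the two forms):

```haskell
-- Curried: one argument at a time, type Int -> (Int -> Int)
addC :: Int -> Int -> Int
addC x y = x + y

-- Tupled: a single pair argument, type (Int, Int) -> Int
addT :: (Int, Int) -> Int
addT (x, y) = x + y

main :: IO ()
main = do
  print (addC 1 2)             -- 3
  print (addT (1, 2))          -- 3
  print (uncurry addC (1, 2))  -- 3: curried function used in tupled style
  print (curry addT 1 2)       -- 3: and the reverse
</imports>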
Of course it requires non-local reasoning. You either get a function back or a value back depending on whether you've passed all the arguments. With normal function calling in C-family languages you know that a function body is called when you write `foo(1, 2, 3)`, or else you get a compilation error or something. In a currying language you just get a new function back.
Functions are just a different kind of value. Needing to know the type of the values you're using when you use them isn't "nonlocal reasoning".
And it's not like curried function application involves type-driven parsing or anything. (f x y) is just parsed and compiled as two function calls ((f x) y), regardless of the type of anything involved, just as (x * y * z) is parsed as ((x * y) * z) in mainstream languages. (Except for C, because C actually does have type-driven parsing for the asterisk.)
Another way to look at it: languages like Haskell only have functions with one argument, and function application is just written "f x" instead of "f(x)". Everything follows from there. Not a huge difference.
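Concretely, in Haskell application is always left-associative, so partial application falls out for free. A minimal sketch (names are illustrative):

```haskell
add :: Int -> Int -> Int
add x y = x + y

-- Supplying only one argument just yields another function.
inc :: Int -> Int
inc = add 1

main :: IO ()
main = do
  print (add 1 2)  -- parsed as ((add 1) 2), prints 3
  print (inc 41)   -- 42
```

No type information was needed to parse `add 1 2`; it's always two applications.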
In an ML-like syntax where there aren’t any delimiters to surround function arguments, I agree it can get a little ambiguous because you need to know the full function signature to tell whether an application is partial.
But there are also languages like F# that tame this a bit with things like the forward application operator |> that, in my opinion, largely solve the readability problem.
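Haskell has the same idea in `&` from `Data.Function`, which applies a value to a function left-to-right like F#'s `|>`. A small sketch of the pipeline style:

```haskell
import Data.Function ((&))

-- Reads top-down: take 1..10, keep the evens, double them, sum.
main :: IO ()
main = print ([1 .. 10] & filter even & map (* 2) & sum)  -- 60
```

Each stage's output type is visible at the point it's consumed, which is what makes these pipelines easy to read locally.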
And there are languages like Clojure that don’t curry functions by default and instead provide a partial application syntax that makes what’s happening a bit more obvious.
And 'A B C' as an array isn't valid (ISO) APL but an extension; the 'array syntax' only covers numbers, and the parser is supposed to treat it as a single token.
[1] https://stackoverflow.com/a/75694187