This article is a good example of why I don’t use uncommon programming languages for actual projects. I watched an Elm-based project steadily slip behind schedule while the team insisted that Elm and FP were actually going to save us a lot of time… eventually some day.
These uncommon FP frameworks and languages can be good tools in the right, experienced hands. They can also be fun for side projects and as learning exercises. But every time I’ve watched programmers try to use uncommon functional languages for real projects they end up like this article:
> More accurately, learning Functional Programming concepts used in Haskell in 3 months after having thrown out 30,000 lines of code on a project that was now monumentally behind schedule was the hardest thing I had to do in my career.
When you’ve reached the point of being severely behind schedule, throwing out mountains of code along the way, and struggling mightily just to get basic things accomplished: It’s time to stop. Don’t double down on a new language that you also have to learn from scratch. Pick something tried and true and get the work done. Revisit the functional language at a later time for an unimportant project or a side project, not something with a deadline.
If the programming language or ideology has become more important than shipping the project, we’ve lost the point.
Also the basic pitch of functional programming is "we have better abstractions, you'll write better code". The basic pitch of something like Java is "everyone knows how to get things done in Java" and of Python "there is a right way to get the task done, and we will tell you what that way is".
There is a fundamental mindset here: the functional languages aren't task-first. Not in the languages themselves, mind you; it's in the community. And the tasks themselves can be so simple once the abstractions are under control that people don't write tutorials about the task.
It is visible in this article - the Haskell community is trying to convince this guy that he needs to know a wall of jargon from a relatively obscure branch of maths in order to program effectively. They may or may not be correct, but it is going to be a while before his attention makes it to the task at hand.
> It is visible in this article - the Haskell community is trying to convince this guy that he needs to know a wall of jargon from a relatively obscure branch of maths in order to program effectively. They may or may not be correct, but it is going to be a while before his attention makes it to the task at hand.
This seems to be the core problem in the communities, like you said: a tendency to believe that time spent learning the language or learning concepts or fighting with libraries or struggling with documentation doesn’t actually “count” towards time spent getting the job done.
I think this plays into our tendency to view learning and education as investments rather than costs. That may be true on a personal level when using your own time, but using your employer’s time to experiment with difficult new concepts and languages when you have a deadline approaching (or long since past) is not cool.
> using your employer’s time to experiment with difficult new concepts and languages when you have a deadline approaching (or long since past) is not cool.
As kindly as I can put it: I’d argue that within this sentiment are the foundations of everything wrong with the ecosystem of professional software engineering.
Let’s start with, if you don’t let people learn on the job, you have to have at least 20% annual turnover to be only 5 years behind the knowledge curve.
At 10% turnover, you’re 10 years behind. Managers think people shouldn’t be learning on company time, HR tells the board lower turnover is a goal.
The inevitable next step is pausing all productive work to undertake a “Digital Transformation™” to try to replace 20 years of old tech. But the firm won’t know how.
> Let’s start with, if you don’t let people learn on the job…
Learning in the job is great, within reason.
Abandoning all of your existing experience and trying to write new projects in a completely unfamiliar language with zero prior experience is not reasonable, though.
Learning isn’t a binary yes/no feature of a job. There is ample room for learning without allowing reckless decisions like trying to use Elm for server-side code when even the Elm authors are hostile to such a use case.
>> using your employer’s time to experiment with difficult new concepts and languages when you have a deadline approaching (or long since past) is not cool.
> As kindly as I can put it: I’d argue that within this sentiment are the foundations of everything wrong with the ecosystem of professional software engineering.
Almost all Software Engineering, as practiced, is not a form of engineering in any way. Much of the time, Software Engineering is really a bunch of commodity workers who are still learning, assembling commodity components that don't really work, under the oversight of a more senior developer who keeps everything from falling apart.
You might be interested to read Hillel Wayne's crossover project, where he interviewed a number of people who had worked in traditional engineering roles as well as in software development.
It’s pretty clear that these numbers are made up. Software engineering is an incredibly multi-dimensional field, and it is not a given that learning on the job is beneficial to anyone, let alone employers. And you assume so many things that will not always be true, like the idea that everything in the field will always be changing.
You may feel differently, which is OK, but don’t bring made-up numbers into it.
> using your employer’s time to experiment with difficult new concepts and languages when you have a deadline approaching (or long since past) is not cool.
It's in fact very cool. I get to take those skills with me when I leave in a few years :)
> using your employer’s time to experiment with difficult new concepts and languages when you have a deadline approaching (or long since past) is not cool.
To defend the counterpoint, artificial scarcity is abundant in the industry. Deadlines always approach, projects are already late. If employees don’t learn and improve on the job, employers will be happy to replace them.
> This seems to be the core problem in the communities, like you said: a tendency to believe that time spent learning the language or learning concepts or fighting with libraries or struggling with documentation doesn’t actually “count” towards time spent getting the job done.
Well... if you learn them on this project, they don't count for the next project. So in that sense, if it levels up your group, then you only pay the price once, but you reap the benefit for a long time.
For purposes of any one project, though... the time counts as overhead. It needs to pay off, or it's a waste.
>using your employer’s time to experiment with difficult new concepts and languages when you have a deadline approaching (or long since past) is not cool.
Boss makes a dollar, I make a dime, that's why I learn zygohistomorphic prepromorphisms on company time.
Every community does that though. Try to do something in Java and there will be hordes of people who tell you you need to learn a wall of jargon from Spring or Hibernate or JAX-whateveritis. Try to do something in Python and there will be people telling you you need to learn a bunch of Django stuff.
A lot of people put the cart before the horse. The whole point of using a monad or whatever is to let you write clear, straight-through code with plain functions and values; if you can do that without using a monad, then that's even better. But I don't think that's an FP problem per se so much as a hammer-and-nail mentality or, worse, a mentality where, if someone has spent a lot of time and effort learning something, they want to prove it was worthwhile by forcing everyone else to put in the same effort.
I'll say this though: functional techniques really can save a lot of time. I'm not sure I'd recommend Scala for a new project if you don't have Scala experts already, but I apply ideas I got from my experience using it to projects all the time.
The question then is whether it was functional programming that actually made the difference, or the fact that the project was run by people with enough perseverance and skill to learn Scala in the first place.
I've found that people who go out of their way to learn things like that would be more productive regardless of the language itself.
>The question then is whether it was functional programming that actually made the difference, or the fact that the project was run by people with enough perseverance and skill to learn Scala in the first place.
Oof, it may indeed be the second, but what do they get out of the deal?
A lot of FP programmers refuse to go back, simply because it feels nicer.
It's a hell of a lot easier to "persevere" on a project when your language doesn't make you want to dig your eyes out with a spoon.
Well, the ideas I liked, stuff like deferring side effects and avoiding state, were functional ideas. You see a lot of functional concepts and constructs bleeding out into mainstream languages (just look at the last few C# releases, for instance, which look like a list of features borrowed from Scala), which I think will be their legacy more than widespread adoption.
One reason I asked is that, while I don't really come from an FP background (as a student, I started with C/Obj-C), after working in the field it was always appalling how many variables and how much global state people would put in their code, and how many side effects were present... it's just something that seemed so obvious even in "oop-land", but maybe I'm just an outlier...
I think that's been recognized for global state for a long time, but I think that mutating local or instance variables/collections wasn't really thought of the same way as much when I started doing this like 8 years ago.
Functional programming has the same problem as "AI" - as soon as something's adopted by the mainstream, it's "not really functional programming, just common sense". Ten years ago lambdas and map/reduce/filter were "functional programming"; now every language has them. Fifteen years ago having interfaces rather than just classes was "functional programming". Five years ago pattern matching was "functional programming"...
I argued recently that the core of functional programming is composition rather than specific language features. This goes many ways: function composition (using currying, higher-order functions, etc.) and type composition (using algebraic types). Functional polymorphism using HKTs and typeclasses is also compositional.
Pattern matching, for example, is not itself 'functional programming', and never has been. It's a feature common in functional languages because it complements algebraic types.
OO/imperative languages (C# and Rust) getting pattern matching is useful but doesn't make them functional languages. C# and Rust are compositional in the sense that types may implement interfaces (or traits), but with varying degrees of power. However, C# can't have HKTs until some work on the CLR is done; Rust is much closer.
Programmers (particularly web programmers) these days don't really consider the computational cost of their actions, so pure functions with immutable data types are now possible, despite the unnecessary allocation. Almost all popular OO architectures now are some variant of 'functional core, imperative shell'. There is definitely a 'functional shift'.
It makes me a little sad tbh, to see OO languages embrace FP principles. I just like writing tiny functions and types, gluing them together somehow (not how Haskell does it*), and building up to a bigger system. You can follow that approach in F# or OCaml, but it's not really possible in an OO language, regardless of how 'functional' it now is.
* Haskell is cool, but trying to explain to someone the difference between `.`, `$`, `<|>`, `|>`, `<$>`, `>>=` and more is quite painful.
>Haskell is cool, but trying to explain to someone the difference between `.`, `$`, `<|>`, `|>`, `<$>`, `>>=` and more is quite painful.
For those who aren't familiar, I'll explain:
First of all, these are all infix operators, meaning they take two parameters: one before the symbol, and one after. You already know many infix operators: +, -, %, etc. I'll be surrounding them in parentheses, as that's idiomatic when they're not being used in the infix position.
(.) is compose: run one function, then feed its result into the other.
($) is just a tool for avoiding parentheses. It means "wrap everything after this in a set of parens".
(<|>) is alternative. Try one computation that can fail. If it doesn't work, try the other.
(|>) is either snoc (the opposite of cons) or pipe--as in bash--depending on what you have imported.
(<$>) is the general form of map, called fmap in Haskell (since map is just for lists). Given a function and a value inside a container, return the function applied to the value, inside the container.
(>>=) ah, bind. One half of the interface to the famously difficult monad. It's really not that hard, conceptually: run a computation, then use the result of that to run another computation. You might say "that sounds like compose!" and you'd be right. The difference is that a "computation" (or "action", or whatever your local monad tutorial calls it) is a function in a context. That context can be "it might not exist", which is called Maybe, or "there are a lot of values in order", which is called List, or "it can do side effectful IO", which is called, well, IO. If you want to compose those kinds of computations, you need to also "compose" their contexts as well. The implementation of that composition varies from context to context, but the interface is the same: (>>=), or bind.
Of course, the concept is the easy part. This is the one operator in your list that can be a little difficult to gain an intuition for.
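To make those concrete, here's a small, hedged sketch (plain GHC Haskell; the helper names are invented for illustration, and I've skipped (|>) since its meaning depends on what you import):

    import Control.Applicative ((<|>))
    import Text.Read (readMaybe)

    -- (.) composes: add one, then negate the result
    incThenNegate :: Int -> Int
    incThenNegate = negate . (+ 1)

    -- ($) avoids parentheses: these two definitions are identical
    withParens, withDollar :: Int
    withParens = negate (sum [1, 2, 3])
    withDollar = negate $ sum [1, 2, 3]

    -- (<|>) tries the first computation, falling back to the second
    firstParse :: Maybe Int
    firstParse = readMaybe "oops" <|> readMaybe "42"   -- Just 42

    -- (<$>) maps a function over a value inside a container
    bumped :: Maybe Int
    bumped = (+ 1) <$> Just 41                         -- Just 42

    -- (>>=) sequences: parse, then feed the result into another
    -- Maybe-producing computation
    halveEven :: String -> Maybe Int
    halveEven s = readMaybe s >>= \n ->
      if even n then Just (n `div` 2) else Nothing     -- halveEven "84" == Just 42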
It's both. You can defer side effects and avoid state in most other languages, but it's not enforced or encouraged, and might even be tedious. The difference with functional languages is that they encourage or enforce the deferral of side effects and the avoidance of state.
That’s certainly a big part. The other one is less obvious, but it shines through this article as well. Once you’ve learned a functional language, or any language that is well designed (for me that typically means incorporating functional and possibly logic and relational paradigms), you’ll be more confident and productive than before and will have a hard time justifying the use of an inferior tool for any reason other than adherence to the lowest common denominator.
I’m sure this can be observed with any profession. A good cook can work with a cheap knife and mediocre ingredients, but give them a nice set of professional kitchen tools and fresh, tasty vegetables and they will happily cook you a meal that makes your evening.
A good language allows you to express thoughts that are difficult to express in another language. But this presumes that you have those thoughts in the first place. If you don't, the language isn't going to magically give you the ability to have them.
So learning functional languages is both learning the language and improving your cognitive capabilities so that you can have more of those thoughts.
I wouldn't say they used an uncommon language. They built some kind of tool to run on top of Elm which wasn't in line with the goals of the Elm project (unclear how). Seems like something that should've been investigated before committing.
It is inevitable that someone will use a language in the way the developers didn't intend, but this is the only time I've heard of the developers specifically setting out to break their language for some users.
I think it's more that you shouldn't use a tool that you're incompetent at. It has nothing to do with how common or not it is. Just know what you're doing or don't do it.
> I think it's more that you shouldn't use a tool that you're incompetent at. It has nothing to do with how common or not it is.
These two points are closely related, though.
Common tools will always have more available programmers, more documentation, more tutorials, more help, more libraries, more maturity.
We had a server written in a functional language at a company I worked at. It was fine, but when the two people who wrote it left the company it became a huge pain point to even hire someone to work on it. Consultants knew this and demanded exorbitant fees for basic work on the project.
Eventually we just rewrote it from scratch in a common language and saved a huge amount of time and money compared to trying to build teams and schedules around this obscure language.
I'd tell you "duh", and Go actually has (or will soon have) the pieces for the majority of the daily useful functional programming concepts, namely first-class and polymorphic functions.
As far as I know, the Future concept for async/await started with Twisted's Deferred, which is a framework for Python. It's not really from functional land.
Futures were first proposed in 1976, in a book called "The Impact of Applicative Programming on Multiprocessing". [0]
Applicative programming is an older term for functional programming[1], but note this isn't pure functional programming like in Haskell; it's functional programming like in Scheme and Javascript.
> Futures were first proposed in 1976, in a book called "The Impact of Applicative Programming on Multiprocessing".
According to Wikipedia, that 1976 book proposed Promises; Futures (which are similar but not identical) were proposed in a 1977 paper. All of the closely related concepts of promises, futures, logic variables, and dataflow variables were used in functional languages first, and long before their use in, or the existence of, Twisted.
> Really I think the mistake there was not forking Elm.
Forking a framework and trying to maintain a new, separate open source project is a huge burden. There is no way that would have fixed their problems of being behind schedule, but it definitely would have permanently worsened their maintenance overhead.
They needed to scrap the alternate language/framework plans and return to something safe and proven as soon as it became obvious that they were too far off track. Continuing to double down on commitments to unpopular frameworks (or worse, creating your own niche fork to maintain) would only worsen the problem.
Rather than forking Elm, there's PureScript, which is unlike Elm in all the restrictive ways. It seems to have regular releases and plenty of packages as well.
Or port over to bucklescript-tea (a port of the Elm architecture to the OCaml-to-JS compiler BuckleScript) using the Philip2 migration tool, which was announced almost a year before OP was published: https://medium.com/darklang/philip2-an-elm-to-reasonml-compi...
I agree. Assuming this article is about the author’s experiences in early 2019, Elm has only had one release since then. They wouldn’t have missed out on much at all by shipping with a forked compiler but would have gained a lot of time to migrate.
The fork wouldn’t necessarily need to be public either.
I mostly agree. However, you could use the same argument to dismiss any programming language the moment some team fails while using it. No programming language has a zero-fail track record, because success depends on context and team much more than on programming language.
FP is absolutely way more time-efficient for most kinds of projects, if you know what you’re doing. Switching without having sufficient background first is probably not a good idea though.
> FP is absolutely way more time-efficient for most kinds of projects
I absolutely do not agree.
FP binds your particular choice of implementation to the code architecture. FP actually makes refactoring in the small trivial and refactoring in the large ferociously difficult.
A good example would be a program that runs fine suddenly now needs a "timeout" on an operation. FP implementations now need to thread the notion of time from somewhere near the top of the implementation the whole way down the chain to the function that needs "time".
This is painful.
An imperative programmer throws in a global time variable, possibly a local timeout variable and gets on with life.
Now, if I'm trying to manage a high-complexity codebase, I will probably eat the FP penalty. Having complete determinism, decoupling, and visibility in, say, a network stack makes debugging possible that will be very difficult with sorta-state smeared across a bunch of variables at various level of hierarchy.
However, I accept and acknowledge that I am making a tradeoff.
> An imperative programmer throws in a global time variable, possibly a local timeout variable and gets on with life.
Which actually destroys the ability to refactor in the large. You have no idea what might use that global time variable from where or why. Choosing to use FP is choosing to ban yourself from taking on that kind of tech debt (although we should note that most FP languages have an "escape hatch" if you need it - even in Haskell you can always unsafePerformIO). There are definitely times where you want to do that, but I don't think it's right to frame that as the non-functional language making things easier to refactor in the large - rather the non-functional language makes it easier to not refactor in the large because you don't fully decouple things in the large in the first place.
> FP actually makes refactoring in the small trivial and refactoring in the large ferociously difficult
This is not at all my experience; in fact, I'd say I encountered the opposite. Large-scale refactorings in C++ and Python were nightmarish and are a breeze in Julia and Elixir.
>A good example would be a program that runs fine suddenly now needs a "timeout" on an operation. FP implementations now need to thread the notion of time from somewhere near the top of the implementation the whole way down the chain to the function that needs "time".
A sufficiently complicated FP codebase almost certainly has some form of effects management in place (monad transformers or algebraic effects, for instance), so adding a timeout is as simple as adding the effect to the type signature at the top level and then letting the compiler tell you all the places you need to wire it up. I've done this in many codebases. It's actually a dream, as the type system won't allow you to make a mistake.
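For what it's worth, here's a minimal sketch of the mtl-style version of that approach (MonadTime is an invented class name, and Data.Time.Clock assumes the time package): add the capability as a constraint at the top, and GHC flags every function in the chain that needs wiring up.

    import Data.Time.Clock (UTCTime, getCurrentTime)

    -- The new "effect": anything that can ask what time it is.
    class Monad m => MonadTime m where
      currentTime :: m UTCTime

    instance MonadTime IO where
      currentTime = getCurrentTime

    -- Was: handle :: Monad m => String -> m String
    -- Adding the MonadTime constraint here makes the compiler point
    -- at every caller that must now thread the capability through.
    handle :: MonadTime m => String -> m String
    handle req = do
      now <- currentTime
      pure (req ++ " handled at " ++ show now)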
> A good example would be a program that runs fine suddenly now needs a "timeout" on an operation. FP implementations now need to thread the notion of time from somewhere near the top of the implementation the whole way down the chain to the function that needs "time".
I mean, 'FP' is so broad a term that a lot of languages fall under that, almost all of which can introduce the concept of a timeout quite easily. E.g. OCaml's Lwt, or Scala's ZIO are two that quickly come to mind.
20 years experience with OO code. Over time, OO always leads to entanglement due to inheritance based solutions that seemed like a good idea at the time. Eventually, these become the basis for many other parts of the system.
You combine the potential for side effects in the code with a tree that can't easily be changed because of how many other things it will negatively affect and you end up with a code base that is harder and harder to change the larger it gets.
In a FP approach, this is a class of problems that you can't easily replicate. Yes there's a learning curve to operating in a FP style, but once you're able to move quickly with it the long term rewards are a byproduct.
The OO solution to avoiding this problem is to build with microservices instead, which forces different parts of the system to be isolated and minimizes the negative effects from entanglement...but microservices come with their own maintenance and speed of development headaches as well.
35 years of OO experience here. Maintaining very large C++ code bases. I never have OO entanglement problems due to inheritance based solutions. Why? Because I only use inheritance for interfaces. Nothing else.
I know what I’m doing in many languages, and I’m uniformly more productive with the FP ones. The only time I’ll reach for a non-FP language is if I’m forced to for work or if I’m working on an embedded system with no allocator.
I would love to know that there is a panacea that automatically improves productivity. Unfortunately, I haven't seen one in my career, as yet.
I am certainly more happy in many functional languages. And I can see happiness contributing to some. But I can't count that as data, without RCTs or something similar.
I'm not going to go so far as to claim that computer science isn't science. But I do fear that there are more claims than there are checks in our industry.
>I would love to know that there is a panacea that automatically improves productivity. Unfortunately, I haven't seen one in my career, as yet.
I suspect you've seen many of them. They're just the ones that are so normal you don't notice anymore. For instance, no one does pure waterfall-style project management anymore. "goto" programming has completely disappeared. Etc.
I wouldn’t say it “automatically improves productivity” - I suspect it might be limited to more intelligent and/or mathematically inclined programmers. That said, I think for many programmers not using it is a huge and pointless missed opportunity. This is one of those visibly high-return arbitrages that it pains me to see go unexploited.
"mathematically inclined programmers."
This kind of programmer is literally the worst I've met in 25 years of career. They underperform very badly compared to others.
In fact I worked mainly with Java and .NET C# in the past two decades... Now I'm more into Node and JavaScript... I worked mainly in finance, banks, and insurance. This is where I met all those underperforming math-oriented people. Most of the time they were not able to find a job in their field and went into programming as a way to earn money, not out of passion like ALL the best engineers I met.
That's the problem with "uncommon": how do you obtain that background and how do you learn what you're doing? (of course, not just you - you, the entire team, and future hires)
The same way you learn anything else that is difficult: time and practice, and start with fundamentals. You can't put it on any work-related critical path until you have it down solid.
I mean in the context of an existing company / team of nontrivial size. Most cases I know of, the uncommon language was there from the days of it being a personal or 1-person project. For other types of tech (e.g. infrastructure) sometimes the new/different capabilities are such a defining feature that the lack of familiarity is not as scary. But in programming languages, you can fundamentally do pretty much anything in existing and well known ones.
Have fun writing a web app in assembly language ;). Yes it's been done, but not often, and for good reason. Haskell is sort of a special case: it's best suited for writing compilers, but has been pressed into just about everything else because people are into it for its own sake.
Yes if you're running a C++ shop and someone leaves behind a small program in Haskell, you're probably better off reimplementing it than becoming or hiring a Haskell guru just to maintain that thing. That's a question of whether to learn Haskell, not how to learn it. In the case of the linked article, the author was the one who decided to get the company involved with FP, and gave his thoughts on whether it was worth it.
I can certainly support the idea that someone juggling all the plates required to keep a company running has no time to embark on a deep and nerdy self-education project in something as abstruse as FP. Better to keep it at the level of a side interest or hobby until you're comfortable with it, before even thinking of doing anything important with it.
most "failed" FP projects I've seen lack the former - often due to managers or engineers who come onto the project after it first hits prod
then comes the rewrite to add a notch in said managers' and engineers' belts
then the original FPers leave and now there are systems in prod with few to no people who can work on them. that rewrite suddenly got a lot more business-critical!
the original FPers weren't as PM-savvy, so the rewrite is successful despite having less functionality and still taking a year or more. doesn't matter - the savvier managers and engineers know how to set goals & milestones that they know they'll hit and can say that they hit when evaluating the project's success
This is just a summary of my years of experience as an FP professional. I've seen this happen 3+ times across 3+ companies.
---
Haskell is by far the most worth-it skill I've cultivated. It wasn't easy but I really do do a 10x job on my personal projects in large part due to it. It's just not a good language to use in a corporate setting. It's better to get paid big bucks to be less productive. Save the technical brainpower for things you yourself find valuable :)
(and I sneak Haskell in all my jobs anyways - scripting, for instance - so I still develop my Haskell knowledge on company time & dime, despite the language being all-but-banned by the higher-ups)
My previous employer would teach new employees to use a functional language from scratch and people would be productive in 2-3 weeks. I don’t think it’s actually that difficult in reality.
> FP is absolutely way more time-efficient for most kinds of projects, if you know what you’re doing.
That may or may not be true, but most programmers don't, and aren't really all that interested in learning. Most know what they are doing in a handful of Algol-derived, mostly-imperative, OO-ish languages and aren't comfortable going far afield from that.
It's not hard at all. It's routinely taught to beginners.
However, it's not a silver bullet. I found that basic software engineering principles are way more important than the language. I've seen extremely messy OCaml code and super clean C code. What is important is how the code is organized at high level. Whether you use a for loop or a fold, an error monad or an exception mechanism matters less.
I'm also wary of functional programming gurus that tend to over-abstract things and use all the language latest features, making code very hard to read.
Also, when developing in a niche language, you tend to miss important tools and need to rely on unstable third-party libraries.
I used to be quite enthusiastic about FP, but I think I'd stick with more mainstream languages unless there's a good reason not to.
Programming is about clearly communicating theory.
Mariners use "port" and "starboard" for vehicle relative directions not because it's "cool" but because it's clear and unambiguous. Problem decomposition works the same way: some problems decompose better with FP (I would argue databases do) some with OOP, some with EF etc. The more you know the clearer code you can write and the faster you can extract the theory from other people's code.
Fun fact: the "port" side used to frequently be called "larboard" until the British Royal Navy realized it was too easily confused with "starboard" and ordered sailors to stop using the term. After that it gradually fell out of favor.
"Port", "starboard" are unambiguously defined relative to the vessel. "Left" and "right" require specifying or guessing a point of reference.
For instance if you're facing port and I ask you to take two steps forward, you could either take two steps to port or two steps towards the bow. If I tell you to take two steps to port then the direction is unambiguous.
No it isn't because the term was invented specifically to avoid this confusion -- it means "left from the perspective of the performer facing the audience".
It's the difference between "forward for you" and "forward for the boat". Without words like "port", you might do the wrong thing when I say "take one step forward". Did I mean for you to go further in the direction you're already facing, or did I want you to step towards the prow?
We use similar approaches with terrestrial navigation too.
At the intersection of Main and Oak, go two blocks east. The building will be on the north side of the street.
Those instructions are the same no matter what direction the person is facing or direction they came from.
However, on a ship, the absolute positions of NSEW (relative to the Earth) aren't as useful as the ship may be heading any direction (and may be going forward or reverse). So a different absolute coordinate system is use - fore, aft, port, starboard.
It's not as unambiguous because, to the guy with his back to starboard, forward could just as easily mean port! It may seem clear if someone's shouting, and pointing, at the bow, but if I'm recounting the action of a tense combat scene and say he "slips on a banana peel and falls 'forward'", you can't know for sure what I mean- does he fall straight towards port, or does he a break a leg, twisting, tumbling toward the bow? c:
The alternatives are left and right respectively. Port and starboard are clearer because left and right are usually relative to the speaker or listener, who may be facing each other and are often moving around.
As a former US Navy Sailor, port and starboard are pretty clear. Along with Forward and Aft. These are things relative to the ship and points in which everyone is aware of on a ship. If I say starboard, that is right of the center of the ship. Port is left of center line of the ship.
a) They refer to left and right, not front and back (which I believe would be "forward" and "aft").
b) With the possible exception of "forward", if you hear one of these words you always know it's specifying a direction relative to the direction the boat is pointing, which is not true for "left" and "right".
Often just fore instead of forward, or a particular pronunciation like for’d for “the front of the boat as opposed to in front of the speaker”.
Lots of other jargon too for other directions or places (leeward, windward, amidships, athwartships, etc) but that is all much less common than the four listed above.
Windward and leeward are still commonly used in my experience.
Also, there's a fun additional distinction for forward and aft, which is that they tend to refer to locations within the vessel. Ahead and astern are used to refer to locations outside of the vessel.
After about 8 years of sailing fairly regularly with various crews, I still get many of the terms mixed up. For instance, stanchions and shrouds are terms that are useful, but they never seem to come to mind when I need to communicate them.
Right! Why use port when left is right? Nonmariners left starboard right at port, left port there as well. Maybe modern mariners should port left back to their lexicon before they're left behind.
> I'm also wary of functional programming gurus that tend to over-abstract things and use all the language latest features, making code very hard to read.
Wanted to echo this point.
I'm a huge FP proponent, and I do believe that liberal use of FP paradigms can result in code that's in many ways measurably more expressive and less complex than code that makes liberal use of imperative looping, in-place mutation, inheritance, etc. When I'm coding on my own for side projects, I always strive to use the most elegant and expressive FP constructs I can think of.
However, at the end of the day, most of the coding we do is part of a social activity. And in a social setting, familiarity matters.
Writing slightly less expressive, more complex code in a way that most readers will already be familiar with can yield more productivity in aggregate than writing code that's super expressive and simple but requires everyone reading it to first learn the paradigms/patterns that enabled it, especially in a fast-growing startup where people are constantly onboarding into the codebase.
Optimizing too much for familiarity can also become detrimental though, in that it results in stagnation and closes the doors to paradigms/patterns that can have an outsized impact on overall productivity compared to their cost in learning/education.
The real challenge is striking a good balance between familiarity and expressiveness in the choice of paradigms/patterns to use.
> What is important is how the code is organized at high level.
This was often forgotten about OOP as well, as Alan Kay himself reminds us...
"The key in making great and growable systems is much more to design how its modules communicate rather than what their internal properties and behaviors should be."
> However, it's not a silver bullet. I found that basic software engineering principles are way more important than the language. I've seen extremely messy OCaml code and super clean C code. What is important is how the code is organized at high level. Whether you use a for loop or a fold, an error monad or an exception mechanism matters less.
Agreed, developers are great at finding new ways to write questionable code no matter what languages or programming paradigms claim about their respective abilities to reduce code errors and smells. I know because I'm one of them.
Agree. I can program in pure languages like Haskell, Idris, etc. but still prefer to program "in anger" in OO languages. Having easy access to mutable state is just so damn convenient. As long as you are careful how you do it. Mutable state is a very sharp tool, so be careful when you use it.
For one, languages like Go prevent over-engineering or bad practice. I’ve never read unintelligible Go code. Some might abuse interface{}, but that’s the worst you’ll see. Not bad compared to some other languages.
Most people are taught from day 1 to code in some combo of imperative and OOP styles.
The issue is not that FP is hard to learn. The issue is that most people start off from a much lower baseline knowledge-wise than they will when learning a new imperative and/or OOP language, framework, etc.
I come from the OOP school and have done 10s-100s(?) of C/C++ projects, which I like, btw. But when I discovered FP it was an eye opener, the breadth of things that became possible/tractable is great, even my OOP programs became much better because of it.
I think that the OOP curriculum is actually a regression, and would recommend anyone who is just starting out to try out FP first.
As someone who only dabbles in code, I’m very interested in starting FP first, but I find that the beginner tutorials mostly assume previous programming knowledge/skills, and the communities aren’t really geared towards folks who don’t have backgrounds from other areas.
It is certainly possible I’m not looking in the right place for the right thing, but it should probably be an area of improvement for those communities in the future IMO.
It appears to exclusively work with the usual FP understanding of computation as reduction (simplification of an expression), avoiding all named mutable state. This is (in my opinion) the most important and fundamental difference between functional and imperative, and is something you miss if you try to learn FP in, say, JavaScript.
It also doesn't seem to assume any prior experience with programming. The last chapter even has a section specifically directed at students who aren't computer scientists or software developers.
The 1st year computer science programming class at Waterloo is based on the 1st edition of this book. (Not sure why not the 2nd edition--the Waterloo website says so. [1])
The authors of this book are some of the major folks behind the Racket programming language: Matthias Felleisen, Matthew Flatt, Robert Findler, Shriram Krishnamurthi.
You might recognize Shriram Krishnamurthi's name in the credits to PG's guide to Bel too [1]
I didn't really use this book much directly - when I took that class in 2012, I mostly learned via the lectures, the lecture slides, and doing the assignments. The book is probably sufficient, and you can probably talk to other people about it via the Racket subreddit, IRC, etc.
Hey! I am writing an ebook about high-level programming concepts that starts off with a FP view point and builds on top of it. If that sounds interesting, I'd love to share (I am eventually going to sell it but I am fine giving away drafts)
Yeah I agree with you in that there's definitely a steep curve for newcomers. Some work on that is needed, but it's coming along, take @blagovest's comment as an example.
As with anything in life, just don't give up. Everything sucks till it's finished :D
Exactly. I came from a math background and only ever used Wolfram Mathematica for programming, and naturally used functional programming. Later on I tried to learn imperative programming and it boggled my mind. The idea of changing a state over and over seemed so janky to me.
This is an oft-repeated canard. The fact is that "OOP/Imperative" happens to align with how the human brain works, and how to a large extent reality works. Some aspects of FP also align with reality and mental processes (e.g. pure functions, comprehensions), but others do not (e.g. closures, monads, lambdas).
I think FP advocates (I'd count myself as one, sometimes) like to gloss over this aspect and instead blame the student for being too stupid, or for having been indoctrinated in the dark side too much.
FP I believe would be better received without the smugness surrounding "you're too dumb or too much of a dinosaur to understand". Be more like Dave Farley.
I disagree; I don't think imperative aligns with the human brain. Humans thought through problems for millennia before computers existed. For a 1700s mathematician, solving a problem in grueling, iterative steps that loop and change a state would be seen as a boneheaded caveman method. Pure functions were the norm of elegance in the olden days of famous polymaths. I didn't grow up with access to any computer, but I did a lot of mathematics, so going from that to imperative thinking was super jarring and not natural at all.
It's just the Haskell family of functional programming languages that is hard to pick up. There are plenty of functional language families that are quite easy to understand and much more pragmatic:
* ML - includes SML, OCaml, F#, Scala, and Rust
* Lisp - includes Common Lisp, Racket, Scheme, Clojure
* Actor Model - includes Erlang, Elixir, and Pony, as well as other languages that have actor model systems at the library level
You'll find endless sources of opinions on why the Haskell family is so hard to use well. My personal opinion is that most languages create abstractions with concrete types of problems in mind. Haskell created abstractions with other types of abstractions in mind. If you ask the question "what is a monad used for?", the average Haskell user isn't going to respond in any form about making side effects safer, because that's just one thing that they do...they're going to respond with other abstractions. And after 45 minutes of explanations of what they are, they still haven't yet gotten to the explanation of what you can do with it. And then when you finally understand what you can actually do with it, you have to confront the fact that they made an incredibly easy thing hard, just in case you might want to use it a different way.
Scala and OCaml suffer from the same reputation. SML didn't really make it out of academia. F# is easier, but still has a bad rap from C#ers. Rust is making incredible efforts to be accessible, but the learning curve is still steep.
One could say Haskell itself is also an offspring of the ML family.
In that family, the article author correctly identified that Elm is probably among the most accessible. Choosing a specific application domain enabled the language creators to cut a lot of complexity, and to use a simple state machine as a runtime.
> If you ask the question "what is a monad used for?"
You'll get the same kind of answers as you'd get when asking Java programmers what this "class" concept is about. The functor-applicative-monad stack is at the heart of Haskell's flavor of functional programming. You can write small programs without it, of course, but it's going to be the same experience as writing Java with a single "main" class.
A few nits: I would not put Scala in the ML category (I'm not sure about Rust). Haskell is, in fact, an ML. Racket is a Scheme. I would consider the actor model to be more OO than FP.
Scala may be hybrid OO/FP, but Martin Odersky has been absolutely clear that Scala's primary functional programming design was inspired by ML. At various times in the past, he has referred to Scala as OCaml with a different object model. I believe I read a post on Reddit quite a few years back from Martin Odersky about how he originally intended to have only structural typing, à la SML, but when the decision was made to implement Scala on top of the JVM, nominal typing made its way in, with an object model that was closer to Java than OCaml, for interop purposes.
Rust's primary influence was ML. The compiler was originally written in OCaml. In fact, Graydon Hoare has commented about how he preferred OCaml's module-based polymorphism, but the Haskell advocates would never shut up about type classes so he eventually relented on that one single idea and implemented traits. But he held on to OCaml's ideas for almost everything else, at least until the point where he resigned his BDFL position.
Haskell has some ML influence, but it received that influence via Miranda. Miranda deliberately diverged from ML in its operator emphasis and execution (lazy vs eager). Both might have been considered at arm's length from the ML family right up until the point (1990ish) when Haskell decided to eliminate all impure IO and adopt monadic IO. There is almost no resemblance anymore, and the functional programming community is very cleanly divided: you're either in the ML camp or you're in the Haskell camp, unless you're one of the few Lisp weirdos sitting in the corner singing hippy songs.
The actor model does have a lot of roots in functional programming (as well as logic programming), but you're right that it is also related to modern OO programming insofar as most modern OO languages inherited a ton from Smalltalk, which was a similar message-passing model. However, unlike Smalltalk, the actor-oriented languages rely on immutable state and pure state transition functions. That makes them functional languages at least in some respects.
Lisp is not a pure functional language. So it won’t teach you FP in its pure form. I already knew how to program in Lisp/Scheme/Clojure before learning Haskell. But wow what a difference it was.
Agreed. I would bet on a language like Elixir or F# being simpler to learn and grow for a complex system than a class-oriented-imperative-oop (Java, Ruby, Python, etc) language any day.
F# is such a sleeper language IMO. Its compositional tools are quite beautiful. I don't think it'll ever get wide adoption though, which is quite sad. It's just such a sensible ML language. The abstractions chosen (computation expressions, for example) were the best choice for F#, rather than copies of other FP languages. They can't implement any form of HKT because of limitations in the CLR, so they had to use alternatives.
Elixir is also really great IMO. The pipeline composition is a really nice model. If adoption grows more, the tooling should step up, because it's lacking a little. The language server can't do things like rename a function, and there isn't a complete TreeSitter parser either. I also have this fear with the actor model that I'm inadvertently leaving some process dangling somewhere, which in my experience is not unjustified.
I would dip my toes into FP occasionally (but very, very briefly) for years. I bought a book on Scheme in 1995, to give you a sense of how long I wandered in the wilderness.
It wasn't until I discovered Erlang in 2012 (thanks, Seven Languages in Seven Weeks) that I finally found the motivation, aided in no small part by the fact that it's a very simple language and it's designed for server programming, where I've always been happiest.
I still haven't graduated into category theory or type theory. I still don't know the difference between a monad and a monoid. But functional programming really speaks to me, because I have an old, tired brain and I need pure functions wherever I can employ them to keep things straight.
The pure-functions concept was a breakthrough for me too. As a design pattern they are wonderful; there is so much less mental context to keep track of.
Now whenever I see mutated variables and class attributes, or random side effects besides reading/writing to a database, it kinda makes me cringe and think "oh, here we go" as I head into a rabbit hole just to understand what the code is doing. 9 times out of 10 the code does what it's supposed to, but the mutations and side effects make understanding and extending it so much harder.
A monoid is just a collection of things that can be associatively "added", together with an identity element for that operation. Think addition with integers (identity 0), or append with lists (identity the empty list).
Members of the dreaded monad can be sequenced, or composed, while taking into account their context. For instance: if I want to get a value from stdin, then use that value safely; or make a network request and then use the result safely; or run a function that can fail, and use it or short-circuit as needed.
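Both fit in a few lines of Haskell (a hedged illustration; safeDiv is made up):

    -- Monoid: associative combining with an identity element.
    combined :: String
    combined = "foo" <> "bar" <> "baz"   -- strings combine by appending

    -- Monad: sequencing with context. Here the context is "may
    -- fail" (Maybe), and the chain short-circuits on Nothing.
    safeDiv :: Int -> Int -> Maybe Int
    safeDiv _ 0 = Nothing
    safeDiv x y = Just (x `div` y)

    chained :: Maybe Int
    chained = safeDiv 100 5 >>= safeDiv 60 >>= safeDiv 12
    -- Just 20, then Just 3, then Just 4; a division by zero
    -- anywhere would make the whole chain Nothing.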
My two cents is that anything is hard to learn without an application you can test your knowledge against. It’s always hard to learn language X until you go ahead and build something in it.
Functional programming is hard because in a lot of cases, especially ones beginners encounter, the imperative solution is simpler. Purity and types are things I think you only truly appreciate when you're writing large or complicated programs. I wasn't able to really grasp FP (beyond using things like map() in Python) until I took a compilers course that used OCaml; the ability to pass around and destructure those very complicated immutable trees made it a very natural fit for the domain.
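That tree-destructuring experience is easy to miniaturize (a hedged sketch in Haskell rather than the course's OCaml; the Expr type is invented):

    -- A tiny expression AST. Pattern matching takes the tree apart;
    -- no node is ever mutated.
    data Expr = Lit Int | Add Expr Expr | Mul Expr Expr

    eval :: Expr -> Int
    eval (Lit n)   = n
    eval (Add a b) = eval a + eval b
    eval (Mul a b) = eval a * eval b

    -- eval (Add (Lit 2) (Mul (Lit 3) (Lit 4))) == 14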
> My two cents is that anything is hard to learn without an application you can test your knowledge against. It’s always hard to learn language X until you go ahead and build something in it.
If it's something really new and different, like Haskell is to an old school imperative programmer, I think the opposite approach is best. Treat it like a topic in math, start from zero, work out small problems to exercise the basic concepts, then start putting them together.
Immutable data structures are another new and shocking thing, but less complicated than fancy typed FP is. Start with seeing how you can "update" the first element of a linked list by making a new first element and linking it to the existing tail. Then see how AVL or red-black trees let you do something similar with tree nodes in log(N) time, so you can use those instead of hash tables without a monstrous slowdown. That's probably all you need, but the next thing after is probably Chris Okasaki's book Purely Functional Data Structures. It is pretty readable once you've seen some basics.
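That first exercise is a one-liner (a minimal sketch; setHead is a made-up name):

    -- "Updating" the head allocates one new cell; the tail is shared,
    -- and the original list is untouched.
    setHead :: a -> [a] -> [a]
    setHead _ []       = []        -- nothing to replace
    setHead x (_:rest) = x : rest

    -- setHead 9 [1,2,3] == [9,2,3]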
It was two courses actually. The first was taught out of https://www.cs.cornell.edu/courses/cs3110/2019sp/textbook/ and the professor's personal notes on the history of programming language design, and the second was taught primarily out of Andrew Appel's Tiger book, with some content from the Dragon book.
I think Haskell is kind of hard to learn for these main reasons: what you know from imperative programming mostly doesn't transfer, you kind of need to know a pretty big set of library functions in order to be productive, and for practical programs you often need to know a few tricks that aren't obvious that allow you to mix pure code and mutable state (how to exploit laziness, how and when to use the ST monad, what should be in the IO monad, etc..).
Additional roadblocks are that the syntax is strange if you're not familiar with ML-derived languages, the type system is fairly complex and you need to understand quite a bit of it to make progress, laziness can cause performance problems (lost parallelism, excessive memory use) if you're not careful, and the tooling isn't always as user friendly as it could be.
That said, I'm glad I learned Haskell, and though I don't know everything about the language, these days I'd feel pretty comfortable using it for anything I'd use any other decently-performing garbage collected high-level language for.
I would add, Haskell is bad at naming stuff, especially function arguments and generic type parameters. It has a tradition of point-free style which deliberately avoids names, and when names are required, they're typically meaningless single-letters. This makes it hard to build intuition and to follow code.
For example, the main Haskell graph library (fgl) has two type parameters, named 'a' and 'b'. Instead of meaningless letters, why not 'NodeLabel' and 'EdgeLabel'? Now it's obvious what they mean!
Yeah, that example seems like not-very-good naming. The standard library uses a lot of terse names, but that's often because in a lot of cases the types could be almost anything. Might as well just call them a, b, and c.
There's another general principle that the length of the names of variables should scale with the size of the context in which they're meaningful. So, a 10,000 line program might use a four-or-five syllable name, whereas a one-line function that takes two arbitrary arguments can just call them "x" and "y".
I do agree that points-free style and terse code can be problematic. Haskell lets you use very high levels of abstraction, but if the next person to look at your code doesn't understand those abstractions it'll tend to look like gibberish. I think my own preference tends to be not to go out of my way to make the program extra terse unless there's some compelling reason, like avoiding a lot of repetition.
I think plain old low-level C code is (sometimes) easy to read just because it has a lot of visual cues like "for" loops that your brain can pattern match on to tell you what the general control flow is. In higher level code, you often end up with fewer visual cues. Which is good in the sense that it eliminates redundancy, but bad in the sense that it can take longer to understand what unfamiliar code actually does.
Something that seems to happen in Haskell more than other languages I've used is that sometimes you can kind of forget how the lower layers of abstraction actually work, and it sort of just does what you want as if by magic. For example, the Lens library lets you update data structures using a syntax similar to imperative languages, but behind the scenes it's actually constructing a new data structure from the root on down to the "edited" leaf, and (unlike in imperative languages generally) if something goes wrong you can just throw up your hands and error out, and the old data structure is still there. It's like the convenience of imperative languages, but with transactions practically for free. But the type signatures that the Lens library uses are fantastically complicated. Though I sort of understand what the Lens library does, I really have no idea how it works internally. And I've decided that that's actually fine.
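For the curious, a hedged sketch of what that looks like with the lens package (the record fields here are invented):

    {-# LANGUAGE TemplateHaskell #-}
    import Control.Lens

    data Address = Address { _city :: String } deriving Show
    data User    = User { _name :: String, _address :: Address } deriving Show
    makeLenses ''Address
    makeLenses ''User

    -- Reads like an in-place update, but actually rebuilds the
    -- structure from the root down to the changed leaf; the old
    -- User value is still intact afterwards.
    move :: User -> User
    move u = u & address . city .~ "Berlin"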
Functional Programing is hard to learn because it literally changes the neural pathways in your brain. Once you learn it, there is no going back : your grey matter will be changed forever.
I used it a lot in the past, and less nowadays (Rust), but I occasionally have to teach my coworkers basic FP principles so they can use and write good Python code. Simple descriptions make the learning more palatable and don't scare people (“you could use a Monoid here” -> “Try using a class like this, with an `empty` and `combine` function, here”).
It usually takes about 6 months of daily practice to learn basic FP skills and 1-2 years to go from beginner to “intermediate” level. Occasionally, you might encounter a FP grandmaster who will melt your brain in less than 2 minutes of conversation, some things never change.
So true! It's a tough disease to catch but totally incurable.
Once you have the bug, you start to pick-up code smell in anything that isn't functional. You see code and ask, "Why are we touching that?, Why are we holding onto this?" Being able to think in a functional style encourages you to throw away as much code as possible and hang-on to as few crufty elements of state as are minimally required.
It also makes you very anti-OOP and hesitant to define "classes," since these violate the first principles of functional programming.
I have a good 30 years of C/C++ under my belt and have been learning Haskell for the last 4 years. You can write a buggy version of a program in C after spending an afternoon learning it. You can write a bug-free version in Haskell, but you'll be spending a few weeks worrying about monads.
I think C just looks easier because you can learn enough to be dangerous without much effort, getting to a level where you can write reasonably safe C will take a lot longer than getting to that same level in Haskell.
After trying to join in on the monad jokes for forever, I opened up the Wikipedia page on Monads (in the functional programming context, not the page in raw category theory) and it actually kinda made sense.
The problem that made monads make sense for me was when I had to chain (err, val) tuples for six or seven functions that only took val, and handling the (err, _) bit was awkward. Someone showed how to rewrite it using monads to handle that without the boilerplate, and voila.
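A rough Haskell sketch of that situation, using Either in place of the (err, val) tuples (the step functions here are made up):

step1, step2, step3 :: Int -> Either String Int
step1 x = if x < 0 then Left "negative input" else Right (x + 1)
step2 x = if x > 100 then Left "too big" else Right (x * 2)
step3 x = Right (x - 3)

-- without the monad: nested case analysis after every step.
-- with it, (>>=) short-circuits on the first Left and the plumbing vanishes:
run :: Int -> Either String Int
run x = step1 x >>= step2 >>= step3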
I strongly suggest starting with a language like Elm to get into FP, since you start to use `map` and `andThen` quite often, but you also get sick of writing `a |> Result.andThen fn1 |> Result.andThen fn2`. This can help a programmer realize why it might be better to have a concise syntax for this, like:
myFun : String -> Result String Int
myFun a =
  do b <- fn1 a
     c <- fn2 b
     return c
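(In Haskell, that do block is just sugar for the explicit chain myFun a = fn1 a >>= \b -> fn2 b >>= \c -> return c, which has the same shape as the Result.andThen pipeline, with the plumbing moved into the >>= operator.)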
That said, I think the other problem with Haskell is the definition of the bind operator: it is not obvious to a beginner which concrete function is actually being called for each monadic operation. Idris2, for instance, lets you specify the bind operator in its do notation[0].
I think it'd become Just Another Monad Tutorial. It's something that clicks after a bit of FP programming without monads. Simon Peyton Jones wrote a good paper about how IO worked prior to monads, which is nice - context helps with these things. I remember struggling to understand OO until I got my first junior dev role, so I do think it's just practice practice practice to get through these things.
I like the way that "Learn You a Haskell" puts it:
"If we have a fancy value and a function that takes a normal value but returns a fancy value, how do we feed that fancy value into the function?"
That's basically all a monad is. If you know what map() does (in Haskell terms, a type you can map over is a "functor"), then you know what a monad is, as it's just a fancy map().
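Putting the two standard signatures side by side makes the "fancy map" claim concrete; the only real difference is where the wrapper type sits:

fmap  :: Functor f => (a -> b) -> f a -> f b
(>>=) :: Monad m => m a -> (a -> m b) -> m b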
I had a chance to speak with John Backus when I worked at IBM where he invented functional programming with the goal of removing some of the spaghetti and incomprehensibility of imperative and procedural code. It certainly was not an immediate hit!
Despite being an IBM Fellow (the company's highest rank with complete freedom to work on whatever you want), John was having trouble getting any traction for his ideas. I certainly didn't grok it at the time. I couldn't see the utility over the procedural PL/AS and imperative assembler we were already using to create the mainframe's higher-level language compilers.
I've since become a big believer in the functional style, sadly after John's passing. It's certainly not the solution for everything. Even the lambda calculus requires that you feed it a starting series of "magical" integers to work. But functional is a useful way of thinking about programming, especially for library functions.
I would say the key difference between the functional style and imperative/procedural is not the presence of recursion but the lack of state. A functional function cannot have any internal state store, nor rely on anyone else having one. In other words, its result must be fully determined by its parameters. This is a super-critical concept in debugging because it helps bulletproof your function.
Having said that, no real working program can be fully bulletproofed with the functional style because we need to hold onto state in real programs. (Is the user logged in? etc.) We cannot pass these values in every single time and have a practical program.
I think merging these concepts of functional when you can and state when you must is the easiest approach. Certainly there are many functions in every program which are functional in style in that they do not contain or rely on any state, and those are good jumping-off points for starting to understand the functional style.
No, a program without effects is useless. Side effects, i.e. some action done on the side while evaluating the program are not required. There are other ways to treat effects in programs, particularly as values.
Effects, yes. Side effects are a particular form of effect. A pure functional language can compose an effectful value that is interpreted by a runtime, but no effect happens on the side as the program is evaluated.
> In computer science, an operation, function or expression is said to have a side effect if it modifies some state variable value(s) outside its local environment, that is to say has an observable effect besides returning a value (the intended effect) to the invoker of the operation.
GP is correct. There are zero side-effects inside a pure program. All the side-effects happen outside it, in a "runtime" (a better name IMO is "imperative shell") that exists totally separate from the program. The main() function of a pure program executes completely and terminates before any side effect has a chance to happen.
I put quotes around runtime because it's not a runtime like an interpreter in dynamic languages, or a library that you can call from inside your pure program. This runtime just calls your main() function, which returns a complex value (it's a chain of lambdas, to spoil it) that is interpreted by this runtime or "imperative shell".
I understand this sounds unintuitive and might seem very confusing, but this is what makes useful pure programs possible. These couple of presentations show how this type of boundary between imperative and functional code works with a simpler type of program [1]. The only difference is that Haskell's "imperative shell" is lower-level than the presenter's. It only deals with IO, etc., whereas the presenter's imperative shell also has some domain code.
Yes! JS and TS allow this kind of functional style which I try to adhere to. There must be state, of course, but as few functions as possible should rely on it and certainly no function should "reach" into anywhere else to get a value. Those things have to be passed-in.
The change from AngularJS to Lit (or React) is an example of this kind of functional refactoring. AngularJS had two-way databinding (state!) and it attempted to pass that state upward when things changed. This made for horrible spaghetti and impractical large apps.
Lit and React are only top-down. Yes, each component has state but only at the top. It gets passed-in as a parameter to other things, but they can't change it in return. This is much more modular and debuggable.
The biggest downside with doing it in something like JS is you don't have the same under-the-hood optimizations. In functional languages it's not actually going to allocate a whole new array each time an immutable list is appended to, but that's exactly what will happen in JavaScript. But yes I agree, these concepts are showing up everywhere, even if you don't do "FP."
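To make the under-the-hood difference concrete, here's a minimal Haskell sketch of structural sharing (using prepend, which is the cheap direction for linked lists):

xs :: [Int]
xs = [2, 3, 4]

-- O(1): ys is one new cons cell pointing at xs; nothing is copied.
-- the JS equivalent ([1, ...xs]) allocates and fills a brand-new array.
ys :: [Int]
ys = 1 : xs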
At the end of the day, software always exceeds the ability of the hardware to run it. That's the cyclical nature of our business. Just for myself, I'd rather write to the easiest software methodology and wait a bit for compilers/transpilers/interpreters, etc to be fast enough that it doesn't matter what I write.
In other words, one can't really optimize for the top level (ease of use) and the bottom level (speed) at the same time.
Turing demonstrated that there are only two differences between any computers that have ever existed: how fast they are, and how difficult they are to program.
> In functional languages it's not actually going to allocate a whole new array each time an immutable list is appended to, but that's exactly what will happen in JavaScript.
Efficiency depends on how your js runtime is optimized. Elm works fine, for instance.
Yup, React is all functional these days, and pretty widespread. My team, for example, is "discovering" the benefits of functional approaches, and it's nice that it's a gentle slope.
Use of state is still widespread. `useState` is a function in the javascript-specification sense, but not in the FP sense, since its only purpose is a side effect, not the return value. I don't think react can do anything useful if you use only FP constructs.
It would be possible have a purely functional React if 1) the current state of the component were injected via an argument, and 2) instead of a setState call we just returned the state changes we needed.
With that our code would be 100% pure, and all the side-effect part would live outside our JSX files, similar to how Haskell does.
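A hypothetical sketch of that contract in Haskell terms (the Msg type and update function are invented, and this is really the Elm-architecture shape rather than anything React actually exposes):

data Msg = Increment | Decrement

-- current state arrives as an argument (point 1), and instead of calling
-- setState we simply return the new state (point 2); no side effects anywhere
update :: Msg -> Int -> Int
update Increment n = n + 1
update Decrement n = n - 1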
I think learning functional programming is harder with a statically-typed language as there's so much more to learn which revolves solely around the type system. I would recommend anyone new to FP to try Clojure first. No mon[a|oi]ds necessary. I also think the transition from procedural languages is easier than from OO languages. I was lucky not to be exposed to Java or C++ in the early days of my programming career, opting for Perl instead. When I transitioned to Ruby I also encountered Clojure at the same time and could appreciate the functional/lisp elements in the design of Ruby.
Mon[a|oi]ds are everywhere, but in some languages you just don't know about it. Which of course lets you not have to learn those concepts knowingly, but they're still there and you still could be a better developer for at least knowing they're there and they're behind some of the repeating patterns you see (string concatenation / promises / etc.)
I believe "not worrying about them at first" is actually a good path to learn these patterns. Using them first, some time later having a thought "oh hey, the API of how JSON decoders are written is kinda similar to how I'm generating random values, that's cool" and only later on learning that the it is in fact monads/monoids/functors/whatever and that they have laws and you can exploit that (eg. [repeated squaring for anything that's a semigroup](https://blog.jle.im/entry/shuffling-things-up.html)) and whatnot.
I actually think that FP programming and FP type systems are almost completely separable, and most of the FP jargon comes from FP type systems. Elixir is another example that makes FP programming very straightforward.
That being said, I think once you get the basics of FP programming, learning about FP type systems is worth it. Especially FP effect systems (e.g. the IO monad).
I had only one FP experience, with Scala. I actually liked that it was statically typed. I have nothing to compare it to though.
But higher-order functions and type parameters are still a bit mind-bending for me. I didn't have to write much code for those, but when I did it was really hard. I'm sure I'd need at least a couple of days of reading and playing around to somewhat grasp how to implement them again.
Well, as alluded to, mostly because it requires you to start over and relearn basic things. But the compiler devs for your language specifically setting out to break your program is an unusual hurdle. Based on that alone I would never touch Elm.
People building their apps on a discouraged "leaked" implementation detail (JS native/kernel modules) got cut off from using it.
The reason for disallowing that implementation detail wasn't "hey, somebody is using it, let's teach them a lesson" but, as far as I've heard, improvements to dead code elimination.
You can (and people, me included, do) use Elm in production peacefully. Huge apps, nontrivial JavaScript interop needed. You can do all that without depending on JS native modules.
I feel like whether you use a (discouraged) implementation detail of the language is a good indicator of whether you'll have a bad time later on when that implementation detail changes ¯\_(ツ)_/¯
As someone who is huge on Functional Programming, I'm not sure you're reaping the benefits you're talking about. Your dedication is impressive, but it's nowhere close to being required to build any kind of product.
Sure, I agree that learning FP will make you a better developer, but spending all these learning resources in order to build a product is not very convenient.
I've been shipping code with mediocre languages all my life. Even now, I routinely pick node.js over Haskell or Rust just because the complexity of the solution I need to build is not high enough to justify writing amazing bug-free code. Sure, you may get more bugs and less help from the compiler, but there is more material online and I can easily find a cheap developer to throw at the project.
I've been developing for 15 years. After 5 years of C++, PHP, JS, I decided to jump into Haskell for my side projects. I can't say the learning curve was as hard as you made it out to be: there are plenty of great resources for learning Haskell and you don't need to go through Elm or Purescript (which weren't even a thing when I learnt Haskell). Actually the differences between the languages may make things more confusing.
In the last couple of years I stopped using Haskell completely, simply because Rust (a language designed to build things, unlike Haskell which is more of a language research project) is functional enough, pleasant enough to use and is developing a nice community.
The most useful FP concepts trickled down in other languages.
It's not functional programming, it's functional languages that go batshit crazy with the amount of symbols they use, which would make looking at hieroglyphs a refreshing pastime.
In a LISP, the parentheses are structural syntax. Where other languages use curly braces, whitespace, square brackets, and usually a combination thereof, LISP simply uses parentheses.
In Lisp, parentheses are the syntax for nested lists. Lisp programs are then written on top of that, with a syntax that is structural on top of lists. Most other languages don't use a primitive data structure for encoding programs (other than text).
We're talking about typed FP, so only SML in your list really counts. So let's see: functors, polymorphism, higher-kinded types (does SML have those?), Hindley-Milner type inference, etc. Then for Haskell (the main topic of the linked article), bring in a bunch of unfamiliar algebra such as the notorious monoid in the category of endofunctors. It is actually worth understanding that. I liked this article (prerequisite: some exposure to Haskell):
There's not really an official definition of FP. There are some proposed ones that involve types and some that don't involve them. Mainly though, this is a thread about the linked article, which is about the tribulations that the author had learning Haskell. Most of those tribulations were with the type system and I think that matches most people's experience. You can't transplant it to Lisp.
That’s the way we ought to be working. If you don’t write the DSL, you’ll have to macro-expand the DSL in your head and write a bunch of boilerplate which everyone will be forced to try to reread and maintain forever.
You can write a DSL using words rather than symbols, and IME that makes programming a lot easier - you give up very little density and in return you can discuss your code aloud, search for it, ...
Functional programming seems to be extremely easy if you have never learned imperative programming first. I have seen beginners grasp FP much faster than OOP and write production ready code only after a few weeks/months of learning whereas beginners need on average multiple years to learn "production ready" design pattern style OOP.
On the other hand I have observed some of the best OOP developers really struggle with FP. It's not that they find FP hard to learn, they find it really hard to unlearn OOP and the thinking that the way things are done in OOP is the holy grail of good software design.
For example, only recently there was this blog post trending on HN (Am I stuck in a local maximum - https://blog.ploeh.dk/2021/08/09/am-i-stuck-in-a-local-maxim...), which was triggered by a "blue tick" OOP programmer (tastapod on Twitter - inventor of BDD) making false statements about FP because he seemingly struggled learning it and wasn't able to work out how to program without mutations. He came to the conclusion that all functional programmers actually use mutations by default and immutable data structures are not common in FP at all. It was a completely unfounded assertion and clearly one made from frustration by someone who was so hard wired into OOP programming that they couldn't adapt to the FP way of thinking. It was a prime example of an "old dog" (citing the original article) finding FP harder than the new guys.
Yes, this is very true. It took me a lot of time to 'unlearn' bad habits about state and side effects before functional programming really clicked. The interesting bit is that it made my other programs better as well, because I still find myself avoiding mutable state and impure functions.
I'd personally say it's less about unlearning statefulness and more about learning what the alternative tools are, and not being bullheaded and kicking and screaming about not having the tools one is used to, e.g. map and reduce vs for and while.
Once you learn how to use them, their utility and benefit (they explicitly limit the scope of the changes in the loop and reduce mental overhead) become clear. But a lot of people never get past "why can't I have a for loop" and don't get there.
Map and reduce immediately clicked, the state thing had me for the longest time ('how do you generate output, how do you get a real world effect from a function'), those seemed to me to be far more magical in the FP world than in the step-by-step alteration of the environment that I was used to from imperative programming.
It also seemed to map less well onto the real world than imperative programming, where instructions about piecemeal alterations seem to be the way of the world.
It really clicked for me when I started to think in terms of transformations, where each function performs a transformation of the input on the way to some output. This allows you to be 'pure' most of the way and to limit input and output to the top layers of the program, where they should be (should in my opinion).
There's a little bit of psychological esotericism too. FP is very minimal; OOP gives people some new mystery rope to hang themselves with: verbose syntax, procedures to follow. It's probably a better psychological impedance match than 'a -> 'a -> 'b -> ('a -> 'b), where people have to hold very evanescent ideas floating in the air without as many ways to play with them.
It's "monad is just a monoid in the category of endofunctors": if you don't understand something, here is an explanation in terms of other things you understand even less.
Imagine you have a shell pipeline, something like cat data.csv | process-1 | process-2 | ... | process-n > output, that already works for data.csv. But now you have data2.csv which has some difference (e.g., some values are null, while the original data.csv had no null values).
Monads are an approach to making the existing pipeline work (with minimal changes) while still being able to handle both data.csv and data2.csv. The minimal changes follow a strict rule, giving something like cat data.csv | wrap ] process-1 ] process-2 ] ... ] process-n > output (no longer a valid shell command).
In other words, only two kinds of changes are allowed:
- You can bring in a wrap function, that modifies the entries of the given csv data.
- You can bring in a new kind of pipe ']' instead of '|'
The idea being, the wrap function takes in original data stream, and for each "unit" (a line in the csv file, called a value) produces a new kind of data-unit (called monadic-value). Then your new pipe ']' has some additional functionality that is aware of the new kind of data-unit and is able to, e.g., process the null values, while leaving the non-null values unchanged.
Note, you didn't have to modify any of the process-1 through process-n commands.
BTW, the null value handling monad is called the 'maybe monad' (and of course there are other kinds of monads).
If you make the existing pipeline work in this way, you essentially created a monad to solve your problem (monad here is the new mechanism consisting of the new value, and the two new changes, the wrap function, and the new pipe).
edit: There may be a need to also modify the '>' mechanism. But I think that is not essential to the idea of a monad, since you could replace ">" with "] process-n+1 >" (i.e., you created a new outermost function 'process-n+1' that simply converts the monadic-values back to regular values).
edit 2: If instead of handling null-values, the purpose is to "create side-effects" e.g., at every pipe invocation, dump all/some contents of the data into a log file, then the kind of monad you end up creating would be something like an "I/O monad".
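For what it's worth, here's a rough Haskell rendering of the analogy (all names invented): plain '|' becomes ordinary application, wrap becomes Just, and the new pipe ']' becomes the bind operator (>>=):

process1, process2 :: Int -> Maybe Int
process1 x = if x == 0 then Nothing else Just (100 `div` x)
process2 x = Just (x + 1)

-- Just x is the "wrap" step; each (>>=) passes Nothing through untouched
-- while feeding real values to the next stage
pipeline :: Int -> Maybe Int
pipeline x = Just x >>= process1 >>= process2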
Try this instead: simply put, monads are used to provide an easier-to-use API to some black-box abstraction. Example monads can include a "List" or a "Class".
Why does Haskell etc. need this? Because it's hard for them to make an easier-to-use API to access the internals of some abstraction due to the strict type system, unless they use the monad pattern. In comparison, in untyped FP everything is transparent, while OOP allows you to create your own API within the abstraction itself.
Funnily enough, from this thread you can see all sorts of wrong ideas about Monads that beginner Haskellers have.
It doesn't make sense to an imperative programmer because "sequence computations" is like water to a fish. The idea that computation isn't always sequenced doesn't occur to someone who hasn't encountered functional programming.
Not sure what you mean by that. Threaded and event-driven systems don’t necessarily have a predictable sequence. Same with data flow through any non-trivial web application using background processing.
I’ve worked on systems that run through a chain of background workers. Each job had a complete list of operations (one per worker) to perform. When each worker finished, it posted the job back to the general queue with the new state and one less operation to perform.
All programs are eventually sequenced. You can’t work on data that doesn’t exist yet.
I’m pretty sure I don’t lack the ability to understand what you’re talking about. I am sure I don’t know what the words you are using mean.
As an educator, I get to see many young programmers learn about functional programming usually around their sophomore year of college. They've never used threaded, event-driven, dataflow, or similar systems. All they know how to write are single-threaded Java programs, and their perception is that programs are "a list of statements that tell a computer what to do, in order".
They are especially uneasy about the concept of lazy evaluation. It goes against everything they know about programming - that you write a line of code, it's executed, and you move on to the next line of code. With lazy evaluation (as in Haskell) it's an uphill battle getting them comfortable with the idea of writing a line of code that will be executed at some unspecified point in the future. For many students, this can be a mind-bending realization.
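A tiny sketch of the kind of example that tends to trigger that realization:

-- 'naturals' is an infinite list, yet the program terminates: laziness means
-- only the five elements actually demanded by 'take' are ever computed
naturals :: [Integer]
naturals = [0 ..]

main :: IO ()
main = print (take 5 naturals)  -- [0,1,2,3,4]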
Let me just explain to you what functional programming is at a high level and I'll get a little bit into the monad. Maybe that will help you understand.
If you can compress all your javascript into one line of code, or as close to it as possible, then you are absolutely doing functional programming.
That is essentially what functional programming is: how to program so everything goes on one line. You can think of it as expression-based programming, or how to compress your entire program into a single expression!
Now, when you see multiline functional code, what's actually happening is that the programmer is giving parts of his expression a name and placing it on another line so that the code is more readable or the programmer could be generalizing logic in the expression for reuse in other places. Example:
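result = f(g(h(x)))
// the same thing, with the intermediate parts named for readability:
a = h(x)
b = g(a)
result = f(b)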
That's it! Turns out that doing this type of organization is EXACTLY the same as doing procedural programming with one extra property! Keeping everything immutable! So if you program in javascript and you keep everything immutable you are doing the exact same thing as compressing all your code onto a single line!
Now, that being said, there's a lot of this going on in functional programming: long chains of functions glued together, like ( f | a | b | c ). That is literally the same thing as operator overloading; you just define the operator to be:
f | y = (x) => f(y(x))
and you use it as such:
( f | a | b | c )(x)
like bash kinda.
This type of thing is called function composition!
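In Haskell this operator already exists as (.), defined essentially as:

(f . g) x = f (g x)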
A monad is just a special type of composition. Not only do I want to compose all the functions, but at each step of the composition I want to do an extra thing! So let's say I want to log the output.
So I define
f | y = (x) => {
  result = y(x)
  print(result)
  return f(result)
}
Then when I compose:
( f | a | b | c )(x)
It will print out each intermediary value along the pipeline!
That is essentially one type of monad. A monad is a way to compose functions such that they do an extra thing! And this intuition probably takes you 85% of the way there on how to use monads in haskell. Monads in haskell just have some extra rules but the intuition is 100% the same thing.
Now you will note that I cheated for f | y. I wrote the code on multiple lines! That is exactly what "sequential" code is!
It is the fundamental property of reality that is at odds with functional programming. Haskell is trying to get rid of code that requires you to write things on multiple lines! It is trying to abstract all of that away with a bunch of crazy abstractions so all your code can fit beautifully onto a single line! It is in fact impossible to write the multi-line code I wrote above in Haskell. What Haskell does is present the IO monad as an API, so you print things through composition and you never have to write "sequential" lines of code.
Turns out when you do single line coding a whole class of errors disappears and your code is also far more modular. It's hard to convince you of the benefits with just words. If you want to know more, you have to walk the path, I can only show you the way.
One more thing. When code is written this way, the compiler can do many more tricks with it. Because state is immutable, the compiler can execute code in a different order and still achieve the result you intended. You don't often have to think about this when programming in Haskell, but it does allow for certain tricks.
Mainly because we are monkeys to whom a reasonably accurate reductive analysis can be applied if we label the axes "asshole" and "idiot".
It was a good blog post by a reasonably intelligent monkey who seemed to score low on the "idiot" axis. But some other monkey pegged the "asshole" axis by preventing us from highlighting the text so we could right-click and search to answer the primary question, "What is it?"
Namely, what is functional programming? Even after besting the right-click hurdle, no one easy and succinct answer is available.
The meta-analysis of this blog post therefore circles back around to perform a second-phase adjustment of the author's "idiot" score once we realize that he has done exactly what he is complaining about; he is a bad teacher because he did not answer the "What is it?" question.
He seems like a reasonably nice person, so I think we have to leave his "asshole" score alone in the second phase.
The collective effort, however, scores high on both the "idiot" and "asshole" axes, and this is the core of the "bad teacher" problem.
I spent most of my long university career angry about this problem. Why is it that monkeys can't teach?
Partly, it is because they are arrogant assholes who don't want to weaken their position by making it easy for others to access their expertise.
And partly it is because they are monkeys who think more highly of themselves than is justified, and therefore they cover their ignorance with jargon, hand-waving, and obfuscation.
Other than extinction, I'm not sure how to solve the problem, but I'm sure that the first step is by starting every lesson by answering the "What is it?" question succinctly and concretely, and, more importantly, realizing that if you can't do that, you should shut the fuck up and go away.
To your point about others not teaching well to preserve their status: I'm sure it exists, but it's not that prevalent. I think people who have known something for a long time just forget how to empathize with not understanding what they've known for so long.
For this reason alone I think this is why fellow students tend to be much better at tutoring/helping each other instead of their professors.
Probably because every single FP tutorial is very far away from the real tasks that the average software developer deals with on an everyday basis.
They describe pure functions and categories of endofunctors, while I have tasks like "invoke this stateful external API if that stateful external API returns specific values".
Personally I think it is a matter of what teacher one happens to bump into.
I was quite lucky to have such a set of teachers for logic programming (Tarski's World and Prolog) and functional programming (Lisp, Miranda, Caml Light).
It felt no different from other programming classes.
In fact, it had higher success grades than thermodynamics, electromagnetism physics or the most feared of all assignments, data structures and algorithms.
So "damned hard" depends very much on the learning path.
Because you're already used to a very different approach to expressing the solution to a problem. It is like somebody who already knows a Western language trying to suddenly learn Mandarin.
Hah, I was pondering how to write a longer comment expressing this same thing so I'll piggyback on yours.
Paradigms are modes of thinking. You can't just pick up a new paradigm on the fly when you've spent your entire life in another one. Some individuals are exceptions, but most of us aren't so lucky as to have such a natural aptitude for changing our minds on the fly.
In order to be introduced to a new paradigm you have a couple options:
1. Sink or swim. In Haskell, this is the monad tutorial. Why the hell do people start their instruction here? Did they start here? If they did, did they succeed from that point, or did they have to find another path and simply forget that this was a really stupid way to start?
2. Baby steps. "See Spot. See Spot run. Run Spot, run!" Learning Italian (previously Spanish), this is literally the level I'm at (actually, a bit better, but still highly constrained by my limited vocabulary). In Haskell, this is:
double :: Int -> Int
double x = 2 * x
quadruple :: Int -> Int
quadruple x = double (double x)
Simple functions tested in the REPL. Then you teach them about function composition (drawing on their knowledge from mathematics, where it's the same idea and not merely an analogous idea) and make a point-free version of quadruple. Then you show how functions can be passed around so that you can do:
square_function f = f . f
quadruple = square_function double
Maybe give that first function a better name, my coffee hasn't kicked in yet. My point, though, is that functional programming in Haskell does not rely on monads when teaching the topic. There are a million things to teach before you even reach that point, and only once the student has a foundation in Haskell's syntax, base semantics, and type system do they need to be introduced to monads. At which point it'll make a lot more sense because they'll be able to grok what monads add to the language.
By analogy (hah!), we don't start C language learners with implementing a generic swap or sort function. That would be way beyond their initial capability, relying on too many ideas that they have no foundation for (that said, it's a shorter path to that in C than to monads in Haskell).
So why do people think that learning a totally novel (to them) paradigm like functional programming, especially in the uber-FP language Haskell, can be done by starting at the deep end without studying its fundamentals?
I equate it to a skier learning to snowboard. A skier points their toes downhill to go down the mountain; a snowboarder points their toes perpendicular. Once you are used to pointing your toes downhill, it's very hard for the mind to switch context to perpendicular, because your mind is telling you that's how you stop.
I think it's important to note that these issues arise from typed functional programming languages. Untyped functional programming languages, aka dynamic FP languages, avoid most of these problems entirely by having robust and useful metaprogramming features, whereas in languages like Haskell, most experienced Haskellers would tell you to avoid Template Haskell like the plague.
Just to give you a sense of the issue in typed FP, and I will take Haskell as the prime example: the average Haskeller will not be able to tell you what a Monad is beyond the standard definition in category theory. I tried it once, and the whole Slack channel exploded as everyone tried to tell me "I will just get it".
This is the result of my study.
https://medium.com/glassblade/pragmatic-monad-understanding-...
IMO the main problem is that typed FP langs don't try to connect their abstractions to the mainstream languages, when these abstractions clearly have some counterpart or an intuitive close cousin in typed OOP. Worse, these abstractions usually are gaping holes in the language design, but the typed FP community thinks it's a feature.
Monads allow you to compose functions with incompatible in/out types. That's all it is. And (as a nice bonus) they let you add extra code in between the two functions you are joining. It opens up a huge number of cool things you can do.
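A small sketch of that reading, using Kleisli composition; parse and validate are made-up names, but (>=>) is the real operator from Control.Monad:

import Control.Monad ((>=>))

parse :: String -> Maybe Int
parse s = case reads s of
  [(n, "")] -> Just n
  _         -> Nothing

validate :: Int -> Maybe Int
validate n = if n > 0 then Just n else Nothing

-- parse ends in Maybe Int, validate starts at Int; (>=>) bridges the mismatch
parseThenValidate :: String -> Maybe Int
parseThenValidate = parse >=> validate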
Functional programming is hard because some proponents of it spend more time on theory and on proving why you should use FP than on simply doing things with it. That's why most things shipped with FP are created with languages like OCaml, Scala, JavaScript, Clojure, etc... You know, functional but more pragmatic and multi-paradigm.
I understand recursion just fine. I can look at a problem and think "this problem space is correctly represented by a tree/graph" and can meaningfully translate that into a program that shoves the data into a tree/graph-like data structure, and my natural inclination when computing on a tree/graph-like data structure is to use recursion.
On the other hand, I understand that not all problems lend themselves to recursive solutions. I can look at a problem and think "this problem space is correctly represented by a matrix", and the reality is that linear algebra problems are often iterative, not recursive, in nature. It is certainly possible to do the simplex algorithm recursively with immutable data, but it's just so much more straightforward to do it iteratively on a mutable matrix.
The other thing is that most of my day job is just hooking one API up to another API to achieve my business's desired outcome. I don't need "interesting" algorithmic solutions for 95% of what I do. I just... extract data from one API and plug the data into another. You don't need recursion for that, and the "list of steps to do in order and (possibly) loop over that" paradigm with whichever seasonings you wish to add to it is just fine. (OOP is the flavor of the ~~day~~century at my company)
edit: I forgot the bit that ties it all together. I am absolutely no good at functional programming. I've tried to sit down and learn Haskell or Lisp a dozen times and always failed. I've bought books. I'm still no good. It just takes me ten times as much code that's completely unreadable to do something that would be simple and straightforward in C++.
I also think that most people who learn a functional language try to take what they know in an imperative language and translate it on the fly to a functional paradigm.
This seems like a lot more work than just learning how to think functionally.
The intro to programming course for the math department at my University was taught in Haskell. And it was really interesting to see how people who had never programmed before on the whole did better than those who had a bit of programming experience.
Those that had programmed before kept trying to make Haskell behave how they thought a programming language 'should' behave, while those who had never programmed just looked at Haskell and went "I guess this is what programming is" and just rolled with it.
The only problem was that once they moved on to the intermediate course, taught in Java, they were completely lost again.
Introduction to Programming at CMU is done in SML. They make the same observations as you. Those that have prior exposure to programming struggle. Those students without experience find it easy. They are a bit baffled in the next course, which is done in an imperative language.
15-150 (during my time), but it wasn’t the intro to programming - the course is titled “introduction to functional programming”. The intro to programming is Python (unless it changed).
The trajectory for students is Python to C to SML (15-110 to 15-122 to 15-150). Some people take the C and SML courses in the same semester.
I was really happy to have been introduced to many different paradigms over the course of a few years. But to both your points, once we moved on to Java I had a really bad time.
Alternatively, learning a new thing is easier if you have a grounding somewhere. Maybe the ideal would be a guide like "this is how functional programming differs from imperative programming".
> I think the main difficulty is getting along with recursion
meh, recursion is easy. Recursion is mainstream. It _began_ as a way to fake a loop without mutation, but I struggle now to remember a mainstream language without it.
The hardest part is all the new patterns you have to grok, it's like learning programming again from a blank slate, because it kinda is. It doesn't get any easier when Haskell decides it needs 7 new syntaxes and someone rushes to compose them all in one line.
Monads aren't really related to purity. Haskell uses one instance of a monad, the IO monad, to encapsulate IO. But there are other ways of doing it. Most monads have nothing to do with side effects.
You see things like functors/monads all the time even in regular imperative languages. For example in .NET we have Linq, Nullable types, etc... They aren't as well defined as in Haskell but still an incredibly useful pattern.
You also have higher ordered functions and whatnot in imperative languages now. In fact, in JavaScript and .NET they are used all over the place.
I don’t believe .NET’s `System.Nullable<T>` is a monad. A type that fulfils the definition of a monadic value can encapsulate any other type, but `Nullable<T>` cannot be used with reference types or delegates, only a strict subset of value types; it cannot even nest, since `Nullable<T>` is itself a value type but you cannot use `Nullable<T2>` as the T in `Nullable<T1>`.
While .NET allows for composition with delegates, it's not very smart, and it results in unnecessary erasure of type information; for example, in Linq, if you take a projection of a fixed-size list, then that length information is lost if you add another step after the projection. (Linq internally has some ugly and inconsistent implementation hacks; for example, IList<T> length information may be preserved but IReadOnlyCollection<T> length information will not be.)
C# has nullable reference types now. But I agree it's not perfectly a monad in implementation due to engineering decisions. But the pattern generally follows that of the Option monad in Haskell, even if it falls short. If you know the Option monad you'll find it really easy to pick up .NET's nullable types.
That's my point, you find these patterns all over in common programming languages and even though they are hard to learn it's worth it.
It's going the other way that's the problem. I understand .net nullable, including some of the weird edge cases and trivia. None of that seems to provide any illumination into the working of more general monads.
Weird, when I was learning Haskell I felt like the Option type looked very familiar. Then when I learned about fmap and bind it was immediately obvious why these would be useful. You mean I can apply a function that has no knowledge of nullable to a nullable type? Cool! Or I can compose general operations on nullables? That is neat.
In fact I think this was the easiest way for me to learn monads - to learn them in the context of things I already understood.
All essential applications of monads in Haskell are there to work around the limitations of purity. The I/O monad, ST monad, State monad, etc.
Yes, I know List and Maybe are also technically monads in Haskell. But you don't need to think of them as monads or even understand monads to do anything useful with them. Applying monad operators to lists and maybes is only occasionally useful.
In most impure functional languages (LISP, ML) you rarely model any data types as monads explicitly. As soon as you have side-effects, they lose their appeal.
This is just not true, you throw out some of the most useful monads :)
Also ST and State have no side effects. The only monad in Haskell that has side effects is IO (unless you were to sneak in an unsafePerformIO but that is an entirely different discussion). So they aren't there to work around the limitations of purity, because they are pure.
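A small illustration of that point, using only the standard ST machinery from base: the mutation inside runST can't leak, so sumST is observably pure:

import Control.Monad.ST
import Data.STRef

sumST :: [Int] -> Int
sumST xs = runST $ do
  ref <- newSTRef 0                        -- a genuinely mutable cell...
  mapM_ (\x -> modifySTRef' ref (+ x)) xs  -- ...updated in place...
  readSTRef ref                            -- ...but invisible from outside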
> This is just not true, you throw out some of the most useful monads :)
Namely?
> Also ST and State have no side effects.
I didn't say they do. I said they are useful only because Haskell doesn't allow side-effects. In ML or Lisp you'd just update a local variable instead.
Lists, Option, Either, Reader, Logging (non-effectful), STM etc... There is even a probability monad that I have found quite useful.
ST & State are far more useful than updating a local variable. If you are going to choose the simplest use case for them ever and then say that is the only thing they can do it is not a productive discussion. The state monad is a lot like a context object that allows state to flow through function composition (for functions that have no knowledge of such state). For example you could compose a bunch of functions and have the results of each function call stored without changing any of the functions to know about the operation. It's a method of abstraction and encapsulation.
It's not just monads that are useful either, it is the full hierarchy of Monoids, functors, applicatives, monads, etc...
recursion is always synonymous, to me, with unbounded memory usage. Understanding the memory consumption of a for loop updating a local variable looks easy.
But a recursive function ? forget it...
They can be made to work very much the same with the compiler. The main difference is tail recursive or not. Basically whether the answer is being accumulated along the way or requires returns from the call stack to accumulate the result.
Otherwise there’s no law of computer science making the two very different. Some language compilers implement recursion naively, that’s mostly it.
The call stack itself is literally just a stack provided by the OS with restrictions meant to protect the other programs in the os.
Yes. Fundamentally, the problem is cultural, not mathematical. A modern language like Racket with proper recursion optimizations will compile and optimize a recursive solution to be as fast as (or nearly as fast as, and sometimes even faster than) a mutable imperative one.
The problem is that the idea of recursion as slow or resource hungry has become a kind of circular just-so story among imperative programmers and compiler developers who don't want to write recursion optimizations (or don't even know how).
The argument goes that recursion is slow, so don't use recursion, and if you raise the idea of optimizing recursion, the compiler dev dismisses it, because why would you write recursion support, recursion is slow.
Now it's fair to say in an earlier era, this was somewhat justifiable, but the logic has been passed down far longer than it's actually been true of the technology available. I remember reading "recursion bad" in C books in the 90s, and it's still so pervasive that even though TCO has been in the JS standard since ES6 years ago, most browsers still can't be bothered to support it.
yeah but how do i know for sure whether the compiler is going to optimize the memory or not?
Some compilers are smarter than others at rewriting recursion in a tail-recursive way, and others aren't, and it's probably not possible all the time anyway..
It seems like a very convoluted way to do a simple thing (sometimes).
In Scala, as I recall, you could annotate a method as tail recursive so it wouldn't build if it weren't. A serious functional language pretty much has to have an answer for this since you're supposed to use recursion instead of iteration.
A tail recursive function can always be optimised into code which does not consume memory on recursion, and it’s a pretty basic fact to learn about a given language/implementation.
my question was more about cases where the tail-recursive version isn't possible, or even worse, when the code isn't written in a tail-recursive way but you expect the compiler to rewrite it to the tail-recursive version.
the fact that FP works over a model of computation (aka an abstraction over the underlying hardware limits) makes it a bit harder to visualize its behavior in the bounded real-world environment ( imho ).
The more i progress the more i want to develop with "visible wires", and reduce the amount of abstraction to its most basic components..
I suspect you think about recursion in terms of a call stack, yes? So when you have tail recursion, you're still pushing a frame onto the stack by default, unless the compiler recognizes that it can optimize that way. Yes?
The traditional FP model of computation doesn't have a call stack. It proceeds by reduction: replace one subexpression with another, in the program text itself, and repeat until you can't do it anymore.
So if I've got a fibonacci function in two clauses, `fib 0 a b = b` and `fib n a b = fib (n - 1) b (a + b)`, then a reduction may look like this:
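Taking fib 4 0 1 as a starting point, each step rewrites the whole expression:

fib 4 0 1
=> fib 3 1 1
=> fib 2 1 2
=> fib 1 2 3
=> fib 0 3 5
=> 5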
Notice how the tail-recursion naturally falls out of the substitution steps performed by the interpreter. There are no stack frames; the whole "rest of the program" (which is what a call stack is) is just the program itself.
This isn't just an analogy; this is the way computation works in an FP. To an FP system, the call stack is the invisible, behind-the-scenes optimization.
Functional languages will typically guarantee you that tail recursion is constant space. The exception is languages running on an FP-adverse runtime such as the JVM, and they tend to provide a special keyword and rewrite into a loop.
Also, mutual recursion is a bit more general than loops :)
Mutual single recursion is still a loop; you just pick where you want to cut the cyclic dependencies to return to the top. In the worst case, you can put a `switch` in the loop and track what state you're in, although that's gross (you have to keep the union of all variables that might be used in each state).
Multiple recursion, like in `fib n = fib (n - 1) + fib (n - 2)`, is more general than loops... in a world without higher-order functions. You can make such a function tail-recursive by doing the continuation-passing transform, which basically just makes the stack explicit. (You'd then want to "defunctionalize the continuation" [0] to clean up.)
(I've actually done this in Java! Writing the multiply-recursive solution is sometimes a lot easier to do (and verify); transforming it into an iterative solution mechanically exposes a lot more hidden details, but you still have your recursive solution you can test and compare against.)
it's pretty rare to have the size of an object in memory be proportional to the number of times one of its methods is called. It's usually proportional to the amount of "data" it stores.
In the case of recursion, the "time" of computation may have a direct consequence, in itself, on the occupied memory (in the case of non-tail optimized calls)
I think FP is hard because (a) lack of libraries/bindings (b) impedance mismatch with underlying host ecosystem. In every personal project where I have used an FP language (Elm, Elixir, Haskell) I have run into these problems. Then I have to put my project aside to get into the weeds to get some bindings working. Often the bindings will be abandonware, or work just well enough for you to make progress until you run into severe enough problems. This has happened for me using OpenCV, ZBar, async I/O, threading, lower-level kernel APIs, databases, just to name a few.
Often I think, I could have solved the problem so much faster if I had just stuck to Python or Javascript, or in the language the library was written in. In the end, you end up swimming upstream. This includes the effort of handing it over to someone else.
Elm tries hard to provide an impedance-mismatch-free environment for UI development, until you reach a point where you want to do something just slightly different; then it becomes painful. Sure, you can integrate Elm with JS using ports, or even React with Elm, but creating a polished UX is a different level of effort.
Yeah, it seems weird that the author latched onto some things as part of their definition of "functional programming" which aren't really required. I still find SICP to be an impressive self-contained foundation for functional programming, and "Functor" and "Monad" aren't mentioned as named concepts.
Is there a better name for the domain the author is talking about? "Type-driven functional programming"?
> Yeah, it seems weird that the author latched onto some things as part of their definition of "functional programming" which aren't really required.
They are required when taking the original meaning of functional programming though (nowadays often called "pure functional programming" to differentiate it).
Well, it varies from source to source. E.g. if we look at John Hughes' working definition from his 1984 paper 'Why Functional Programming Matters', then functional programming is programming with only pure functions with no side effects, assignment, or mutation. No mention is made of 'monad' or 'functor'.
The rise of effect handling systems like monads in functional programming languages since the oughties was driven by the real-world need to, well, print 'Hello World' to the screen. And different practitioners have different opinions on how far to take effect management. Haskell is just one extreme, but there are several FP languages on the spectrum.
Fair enough, my bad - that definition is what I use as well. I think I got a bit hung up when OP said "aren't really required", because for the definition that's true, but for real-world usage it is required in pretty much every pure FP language that I know.
Disagree. I think it’s a lot easier to learn than something like C++ or Java, which a lot of people start with. People just tend to forget how much they struggled with these very complicated languages when they were getting started.
Most Lisps are multiparadigm, covering at least the functional and procedural paradigms. Going to Common Lisp you can add OO to the mix. It is a severely constrained notion of FP that manages to exclude the first functional programming language from its definition.
Lisp is very old; it started out competing with languages lacking even reentrant function calls. Lisp had a lot of good ideas and a few poor ones from that era. As compilers improve, it becomes feasible for languages to only provide pure functions, lazy evaluation, and immutable data structures even on the same old imperative CPU cores.
Does that make Lisp somehow not a functional programming language? That other languages have been developed to provide features that most lisps do not does not remove them from the group of functional programming languages. It just means they're part of a different branch of the same paradigm.
It’s pretty easy to write Lisp “functions” that are not actually functions because they invisibly depend on each others’ side effects. The goal of FP is to prevent that.
Common Lisp programs can mutate variables, function bindings, and slots of various objects such as cons cells, arrays, structures and CLOS objects. Programs can execute sequences of forms as if they were statements, for the sake of their side effects, and can conditionally and repeatedly execute such statements. There is a form called tagbody which can contain labeled statements that can freely branch to each other using go.
Common Lisp doesn't require implementations to provide tail call support, so functional programs that express iteration using recursion may be severely limited in the inputs that they can handle; they would typically be ported to Common Lisp by a rewrite using iteration.
It depends on how broadly you define 'functional'. Back in the day just having closures and anonymous functions was probably enough to qualify. But nowadays Javascript and any other number of scripting languages have those features too. I can't think of any respect in which Common Lisp encourages or requires a functional style to a greater extent than, say, Javascript.
It's also worth mentioning that the Common Lisp standard does not require implementations to implement tail call elimination. The pervasive use of a functional style in CL would therefore give rise to programs with performance characteristics that could, at least in principle, vary greatly between implementations. (I'm aware that most popular CL implementations do optimize away tail calls in practice.)
Not all Lisps even allow dynamic scoping. Not to mention that beginners are commonly taught Scheme, which is an example of a Lisp that doesn't allow it.
Honestly, a much less gimmicky and more concise guide can be found here [0]. The focus is on Scala, but it does away with long-winded storytelling-style explanations and petulant comics. It gets to the point, gives a couple of examples, and tells you exactly why each tenet of functional programming is useful in one sentence.
His daughter’s question “Why do we use functions?” is something I found myself asking during my math undergrad, and no one could give me a straight answer.
To answer it to my own satisfaction, I eventually arrived at a form of predicate logic so I could directly experience how cumbersome it was to try to express everything that way. I liken it to trying to speak in a language that lacks a definite article: doable, but way more verbose.
I always thought of functions as just boxes. We like to put similar things in boxes because that lets us remove the complexity of having to manage many similar things at once. When you can just say "this group of balls here needs to be put in the closet" you don't need to think or know that one is a tennis ball, one is a basketball and one is a softball. A function to me is just a way to wrap up a complicated idea or task into a box. It's what we already naturally do with everything in our lives.
Indeed, but functions are not the same thing as functional programming. (We reuse so many words in this field!)
What you're describing are regular functions. If those functions hold a state, they're not functional; they're procedural. For example, a function that holds onto a counter and increments it by some value that is passed in cannot be functional because the counter exists in a hidden state, unknown to anyone else and unpredictable until runtime. This is what Backus was trying to fix.
A functional version of that same function would need two parameters, one for the amount to increment by and a second one for the counter's current value. Often, a function can be rewritten in the functional style and thereby eliminate state (at least from that function).
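A sketch of that rewrite, with invented names:

-- procedural version (in spirit): counter += amount, where counter is
-- hidden state living inside the function or its enclosing object.

-- functional version: the current counter value comes in as a parameter,
-- and the "new" counter is just the return value
increment :: Int -> Int -> Int
increment amount counter = counter + amount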
So at some point up in the hierarchy, you're keeping track of that counter. So there will be a parent of the incrementing function which is itself not a function, because it keeps state. You could say your top-level program is not a function because it keeps state.
Not necessarily: when your program is (tail-)recursive, you can carry the "current" value along without ever really mutating it. Haskell has the State monad, which doesn't require any impurity.
> when your program is (tail-)recursive you can carry the "current" value along without ever really mutating it
Right, but what I'm curious about is how one would go about making an app like, say, iTunes or the App Store in that way... it would probably go a long way toward helping people understand these concepts.
You can't. No useful program is stateless. Only examples and proofs-of-concept can be purely functional.
The lambda calculus is a way to define math in the functional style. Guess what? It needs state passed in to work. The names and orders of the integers must be passed in and held in a "state" in order to do anything useful. Since functional programming is centered around the concepts from the lambda calculus, this implies that all functional programs must have some "state" somewhere in order to start or do useful work.
If your program interacts with the outside world, then yes, it needs to be impure "along the edges" at least. But it can still keep track of state in a pure, functional way. Here's a simple example:
    main :: IO ()
    main = loop (0 :: Int) where
      loop n = do
        putStrLn $ "The counter is currently " <> show n <> ". Increase it by how much?"
        increase <- readLn
        loop $ n + increase
Here, the counter is updated purely, rather than by mutation.
Indeed, any useful program must have state and therefore cannot be entirely functional.
Only components or individual functions can be functional in style. The goal is to have as many of those as practical, so that you touch state rarely and, ideally, only in a few places, through methods that set the state rather than letting individual components "reach in" and set it themselves.
This is how I answer my kids' questions. "Why? Why? Why?" Buckle up, kid, we're going down the rabbit hole. They usually get bored before I do and have learned to back off when I slip into presentation tone.
Imho, OCaml is a way better language than Haskell when you are learning FP coming from other paradigms: it doesn't require you to learn mathematical constructs such as monads or monoids when you are beginning, and it allows you to use imperative or OO constructs (though the language discourages them)...
When you are used to FP, though, more abstraction can be nicer
I don’t consider myself an FP programmer but when I use modern C# for code that’s not terribly performance-critical, I use these FP concepts (purity, composition, currying, immutability, etc) when they’re a good choice. For suitable problems, FP can be awesome.
However, I often write performance-critical CPU-bound code, which is often impractical in FP. To write fast code for modern processors, programmers have to think about RAM layout and L1D cache misses. With immutable data structures, FP code isn't exceptionally fast, because optimal performance requires reusing the L1D-cached lines of virtual address space. It's also better to avoid any heap allocations on hot paths. Both are much easier to achieve with imperative code, with its mutable variables, native stack, and for loops.
Teaching someone to abstract out a name for something they want to do takes lots of practice, and then being pushed to abstract can lead to decision paralysis around when to do it. Having folks practice this together and feel out the nuances helps a lot. Control flow is a tricky early concept in any new programming setting, but so is learning to compose. For some students, tackling control flow and composition early by explicitly composing abstracted funcs feels great. However, I've often seen my students want to get granular with control flow for a long while before they abstract or utilize many abstractions, probably because their mental model of what they want to do is stepping through each detail of an example they have in mind.

Another component for andragogical learning is to just sell the WHYs of functional patterns: why care, why remember, why it's helping, why they should just give it 5, why it's ok to forget, et cetera. Jumping straight into synthesizing, evaluating and analyzing code for a potential production setting is difficult with a loose understanding. You need good scaffolding in your backlog, lots of time, patience, context sharing, and resourceful experts to turn to, to check your understanding and promote good tooling/solutions in the ecosystem.

I saw this when my team switched to Elixir. I saw this when my team switched to K8s. Lots of delays I'm happy we worked through. Pairing a lot helped us through it.
Learning FP takes real investment both in time spent learning and practicing the concepts and in time spent slowly misusing them in real projects until you develop your taste for where they're appropriate. This investment is regularly underestimated.
Using FP brings real advantages in terms of taste and simplicity, meaning that "advanced" concepts are not nearly as prevalent as someone who just learned them might hope. The rule of 3 is helpful and under-applied. Programmers new to FP get eager to use cool tech instead of leveraging improved taste.
FP can be utilized in many languages, but in ones that don't guide your hand toward it the way your Haskell, your OCaml, or your Elm do, it's easy to have it "mix" with other styles. It is not the case that combining FP and non-FP styles immediately makes sense or works. It is the case that the strengths can be combined if done thoughtfully.
All of these points generalize, though. As with any programming work, taste is important. It takes a while to develop and often needs to be developed within the context of a team. Tasteless FP is an awful, awful waste of time, energy, money.
Someone who likes to throw all the jargon at you is a hobbyist, a proselytizer, or a fan. Not terrible, but not necessarily someone who can yet manage all of the necessary tradeoffs and balances. We take that sense of taste somewhat for granted in "mainstream" programming styles.
Regarding the specific "jargon" the author talked about: I find that recent introductory Haskell books actually do a surprisingly good job covering all the concepts the author mentioned, and they don't suffer from the problems of online learning material.
The two books I read and can recommend are Programming in Haskell by Graham Hutton and Get Programming with Haskell by Will Kurt.
Another source that helped me a lot is the video series Category Theory by Bartosz Milewski [0]. It requires basically no background in category theory and defines all the concepts along the way. Bartosz also uses Haskell code as examples. You only need to know the syntax up to type classes to follow along.
As for the awful experience people get from learning Haskell, I think it depends a lot on people's expectations.
Most people only learn one language in school or early in their career, and then migrate their experience from one language to another. You don't learn the concept of a "stack" twice; you only "learn" how to express it with a different syntax. So people can "learn" new languages with significantly less effort than it took to learn their first.
And when they find Haskell or other ML-family functional programming languages, they can't find the corresponding concepts in the languages they know. This time, people actually need to learn new concepts, just as they did with their first language.
The experience is very similar to learning algorithms or OOP design patterns. And I don't think it's harder to learn Haskell up to the level mentioned by the author than it is to, e.g., read the Gang of Four book.
Some people, when confronted with a problem, think "I know, I'll use regular expressions." Now they have two problems.
That isn't even a criticism of regular expressions nor is this a criticism of functional programming. It's more about that fashionista need to use something rather than solve something.
I have found that my best use of functional programming occurs within hybrid systems.
When you can use either technology on a per-function or per-class basis, it really helps with adoption. Languages like C# 8+ almost feel like cheating when it comes to mixing ideologies. Some developers think this sucks, but I view these complaints the way I'd view complaints that someone might try to use a hammer to install a screw.
The general theme in my mind is to use the imperative code to construct the walls of the functional matrix wherein I author the actual business logic as pure functions.
FP for me is usually in the form of SQL, LINQ or switch expressions. I personally have never wanted to go all the way with FP. I see how you can do it in theory, but I question the practical engineering value of actually pursuing this. The most important & scary parts of our product are within this "hosted" functional scope.
The author's struggles with Haskell resonate. I suspect it's just a very hard language to learn. I've given up attempting to learn it at least twice now.
I took Martin Odersky's Scala MOOC sometime around 2012-2014 and it was easy. It just made sense and the IDE experience was nice. I would recommend it to anyone who uses C# or Java and wants to learn FP. Although after 8 years I don't know if it's still as good, or if something better has come along. Either way, the course is still available: https://www.coursera.org/learn/scala-functional-programming
Next I worked through SICP and that was the best CS book I've read, although significantly more difficult than the Scala course.
Note that the course got revamped this year. It is now based on Scala 3 and new content was added. Some of the new topics are: enums, extension methods, and givens.
1> The syntax is strange, and I'm not sure it needs to be: a list of parameters, then the return type, with none of the usual commas and parens. It's like someone heard a critique of Lisp and went too far the other way. It's almost as bad as Forth.
2> lazy evaluation as a feature? ok, I guess. Is there somewhere else that over-eager evaluation happens? (like in Metamine, where you can do a super fancy equals, and if any of the terms change, the whole thing is re-evaluated, like cells in a spreadsheet) That would balance it out
3> Everything is an expression - this is the worst part of C ever, and it's being called a feature
4> Invariants as good - this is the worst part of python, and it's being called a feature
5> procedural code is banned completely - yeah, this likely breaks a lot of brains, but I'm willing to let it slide if it works in the end
6> definitions of functions being conditional, that one is also a brain breaker... in the normal world, you define a function once, and forever... the 3 different styles of spreading out/pattern matching the inputs is a bit of a hill to climb
7> having to worry about tail recursion instead of regular recursion; it's the metaphorical equivalent of manual memory management. It adds mental complexity that the programmer really shouldn't have to deal with.
That's as far as I've gone along this phase change... I'm pretty good at keeping opposing views both straight in my head, so maybe I'll be one of the few this doesn't break? Who knows.
Aside/Tangent - Perhaps the difficulties being encountered are why John McCarthy wanted to keep S expressions hidden under the hood in his unrealized programming language.
I don't want to diminish your first impressions, however I think you might have misunderstood one or two points in that video. Perhaps I can clarify:
Regarding 3/"everything is an expression": C explicitly does not have this property; it has statements and expressions. The typical example to illustrate this point is the lack of a true "if expression" in C. [1] Given the following C code:
    int x;
    if (a > 5) {
        x = 1;
    } else {
        x = 2;
    }
The value x is only assigned once, so it would be nice to express this using const. We can achieve this using the ternary operator, but that only works if the if statement is simple enough.
In Rust on the other hand, ifs are true expressions and "return" values, allowing you to write the statement above like this:
(Syntax might be slightly incorrect as I'm writing this without a compiler at hand)
let x = if a < 5 { return 1 } else { return 2 };
From what I'm observing the (functional programming) idea of "everything is an expression" is well-received and also included in recent imperative languages such as Rust.
Regarding 4: Do you mean immutability instead of invariants? I'm not quite sure what you are referring to in the context of Python.
Regarding 7: This was mentioned somewhere else as well, but in a functional programming language you don't have to worry about the compiler recognizing and optimizing tail recursion. The only exceptions are pathological (extremely and obviously inefficient) function definitions, for example functions that try to sum an infinite list of integers and never terminate, or functions that naively reverse a long list.
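For example, this accumulator-style loop runs in constant stack space (a sketch, not from the thread; the bang pattern just keeps the accumulator strict):

    {-# LANGUAGE BangPatterns #-}

    -- The recursive call is in tail position, so GHC turns it into
    -- a jump rather than growing the stack.
    sumTo :: Int -> Int
    sumTo n = go 0 1
      where
        go !acc i
          | i > n     = acc
          | otherwise = go (acc + i) (i + 1)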
[1] Terminology: "if" in C is a statement, and the ternary operator (a?b:c) is an expression similar to "if", but there is no true "if" expression.
Well, this is embarrassing. I'm using a small Rust example to make a point about C oddities, and then a Rust Core team member points out a syntax error in my example!
In my defense, I'd say the return keyword - combined with the different semicolon rules - is one of the more confusing and subtle syntactic differences when learning Rust from a C/C++ background. After a few weeks of programming C++ I always use return more often than necessary.
Oh absolutely! If you haven’t worked with expression-based languages before, it’s a big mental shift. In my experience it’s harder to come back to one that isn’t after you’ve done things this way for a while. There’s an adjustment period for sure.
Maybe one of you functional gurus can answer this. One of the problems I have with functional programming is how you handle multiple data structures for a single thing. For example, say you have a drawing program with just one shape: a square. To make it fast when the user clicks somewhere, you'd have a data structure like a k-d tree that can find your shape quickly. Now you also have another structure, a list of the shapes, because you don't want to put them in the k-d tree, and you update both structures when shapes change. You'd also want other structures for, say, undo/redo. I can't see any way to do this in functional programming. The same applies to database-style programming.
That’s not that hard in an FP language. I routinely write code with multiple structures referring to the same thing. My usual solution is to have an identifier for the thing that indexes into an array or map, and then the other structures contain that ID instead of the object itself. It’s basically a pointer like I’d use in any other language. The details and choices for how to represent the IDs and structures is usually application specific, but that’s true in any language: how you do something should be the choice that best fits your problem. There isn’t anything about functional programming that makes it impossible or particularly difficult.
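A rough Haskell sketch of that ID-indirection idea (all the names here are hypothetical, just to show the shape of it):

    import qualified Data.Map.Strict as Map

    newtype ShapeId = ShapeId Int deriving (Eq, Ord, Show)

    data Shape = Square { posX, posY, side :: Double } deriving Show

    -- Single source of truth: ID -> shape.
    type Shapes = Map.Map ShapeId Shape

    data Doc = Doc
      { shapes  :: Shapes
      , zOrder  :: [ShapeId]  -- draw order holds IDs, not shapes
      , undoLog :: [Shapes]   -- snapshots; cheap thanks to sharing
      }

    -- Updating a shape touches only the map; zOrder (and any other
    -- index, like a k-d tree of IDs) stays valid because it refers
    -- to IDs rather than to the old values.
    moveShape :: ShapeId -> Double -> Double -> Doc -> Doc
    moveShape sid dx dy doc = doc
      { shapes  = Map.adjust move sid (shapes doc)
      , undoLog = shapes doc : undoLog doc
      }
      where
        move s = s { posX = posX s + dx, posY = posY s + dy }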
I love functional programming and I think every CS major should learn the key concepts of it: higher-order functions, immutability, purity, and how they can lead to better designs and more reliable software.
However, I believe that even after the learning curve has been scaled, the task of writing code in functional languages is objectively more difficult in at least one way: Coding a function in the expression-oriented syntax of functional languages has a higher cognitive load than coding that same computation as a sequence of statements. You simply have to hold more things in your head.
> Coding a function in the expression-oriented syntax of functional languages has a higher cognitive load than coding that same computation as a sequence of statements.
I don't know which FP languages you've been using, but the most common modern ones have basically all adopted the pipe operator, which makes the representation of data transformation as a "sequence of statements" painfully obvious.
Thanks for bringing that up. The shift to using the pipe operator has been an intriguing development in the community, and I've given some thought to what it implies about the difficulty of programming in functional syntax.
The pipe operator helps if you have a clean sequence of steps and each function returns only the needed input for the next step. The code becomes just as unwieldy if an argument isn't in the right position, so you have to splice in a lambda, or you have to split cases on the result, in which case you have to do something monadic if you want to avoid deeper nesting. All of which looks like unnecessary friction when I know perfectly well in my head what the computation needs to do after each step.
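For concreteness, a made-up sketch of both cases with Haskell's `&` pipe from Data.Function:

    import Data.Function ((&))

    -- Smooth: each stage's output feeds the next stage's only argument.
    normalize :: String -> [String]
    normalize s = s & words & filter (not . null)

    -- Friction: the piped value is the *first* argument of `zip`,
    -- not the last, so you splice in a lambda (or reach for `flip`).
    numberWords :: String -> [(String, Int)]
    numberWords s = s & words & (\ws -> zip ws [1 ..])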
Also, when you have the pipe operator in your toolbox you now have a decision between two styles of coding that are basically inside-out from each other.
> The code becomes just as unwieldy if an argument isn't in the right position, so you have to splice in a lambda, or you have to split cases on the result
I disagree. This is pretty clear even though it does exactly what you claim is terrible, and it is far, far clearer than the corresponding imperative code:
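(The commenter's code sample didn't survive the formatting here; the following is an illustrative stand-in in the same spirit, not their original.)

    import Data.Function ((&))
    import Data.Char (isAlpha, toLower)

    -- A pipeline that splices in a lambda and splits cases on the
    -- result, yet still reads top-to-bottom as one transformation.
    topWord :: String -> String
    topWord s =
      s & map toLower
        & filter (\c -> isAlpha c || c == ' ')
        & words
        & (\ws -> case ws of
                    []      -> "<empty>"
                    (w : _) -> w)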
> Coding a function in the expression-oriented syntax of functional languages has a higher cognitive load than coding that same computation as a sequence of statements. You simply have to hold more things in your head.
Interesting. That is exactly the opposite of the usual sales pitch for FP - that it simplifies things because you have to keep track of less.
These concepts aren't hard to understand and are actually incredibly useful. At some point though things get so abstract it's really difficult to understand what's going on. I've seen Haskell libraries that do things I can't explain for reasons I don't understand and the extremely detailed README with diagrams and arrows and everything somehow made it even more complex.
> At some point though things get so abstract it's really difficult to understand what's going on.
A lot of that comes down to extremely advanced use of Haskell's type system. Advanced type systems are valuable, and they're easier to add to FP languages, but I wouldn't consider them to be a fundamentally FP concept. (We see very expressive type systems being included in imperative languages like Rust and Swift these days.)
I think it's useful to keep in mind that Haskell is used a lot for research. You don't see Java used as a research sandbox in the same way, so you're not likely to run into these kinds of libraries in Java. (Though, as an aside, I consider anything that heavily uses reflection in Java to be nearly as inscrutable.) It's actually impressive just how many of these research-level libraries are practically useful in some way, even if the bar to understanding them is higher.
I don't think it's a universal truth that libraries making advanced use of type systems must be difficult to understand. Taking the `lens` library as an example (and I think you were alluding to it, anyway), there's a fairly long-running thread of research around what a "lens" even is, and how they fit into a broader domain of "optics". The `lens` library was created based on one particular representation of lenses, and the community tried to find ways to unify the various kinds of optics in a way that made sense to them. I think the more recent understanding of "profunctor optics" might give us a more accessible route to understanding optics in general, separate from the whole hierarchy of distinct optics you can derive from it.
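To make "what a lens even is" concrete, here's a minimal hand-rolled sketch of the van Laarhoven encoding that `lens` builds on (heavily simplified; the real library generalizes these types a lot):

    {-# LANGUAGE RankNTypes #-}

    import Data.Functor.Const (Const (..))
    import Data.Functor.Identity (Identity (..))

    -- A (simple) lens is just a function polymorphic over a Functor.
    type Lens' s a = forall f. Functor f => (a -> f a) -> s -> f s

    -- A lens onto the first element of a pair.
    _1' :: Lens' (a, b) a
    _1' f (a, b) = fmap (\a' -> (a', b)) (f a)

    -- Choosing the functor recovers getting and setting:
    view' :: Lens' s a -> s -> a
    view' l = getConst . l Const

    set' :: Lens' s a -> a -> s -> s
    set' l a = runIdentity . l (const (Identity a))

    -- view' _1' (1, "x")   == 1
    -- set'  _1' 9 (1, "x") == (9, "x")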
The saying about Haskell is that it has the steepest unlearning curve. It probably helps to have seen some abstract algebra since many of its ideas come from there. The online book learnyouahaskell.com is pretty readable though.
As that guy was a big executive whose time was interchangeable with money, he might have had better luck treating FP as a topic in math that he was having trouble with, and hiring someone for one-on-one tutoring, either in person or online. It might have gotten him through the various stumbling blocks quicker.
Functional programming is not hard to learn. Try learning Elixir if you don't believe me.
My observations on why people think functional programming is hard:
1. They find it hard because they're not just learning functional programming; instead, they're learning functional programming, advanced static type systems, type classes, and monadic programming all at once. Languages like Haskell fit into this category. It seems the author took this path (Elm, PureScript, Haskell).
2. The ecosystem is simply not there. Say I just learned Python: there are many meaningful projects I can build to continuously improve my skill. Say I just learned SML: there are not many exciting things to be done.
3. The ecosystem favors power users. Clojure the language is not hard to learn, and the ecosystem is not bad at all, in both quality and quantity. But:
3.1 I agree that the idea of "libraries over frameworks" seen in the Clojure community is better, but when there's less constraint, the responsibility falls on users' shoulders. Working with libraries feels like assembling my own PC. This is also why many people find Vue easier to learn than React, and IntelliJ IDEA easier than Emacs.
3.2 The ideas are novel. They may not be harder to learn, but they take time. A Rails programmer knows what to expect in a web framework, be it Django or Spring Boot, so when they learn one they really only learn the last 20% of it. But in Clojure I find myself constantly learning things like Fulcro and Pathom; the ideas are exciting, but they're not ready for the masses. Just imagine how hard it would have been to teach "Algebraic Effects" to React users in 2013. In fact, the React ecosystem has been influenced by ClojureScript/Om, but the React community did push hard on the education part. At that time I felt like I came across the buzzwords "functional", "reactive", "immutable", "unidirectional" in the community every single day; it became such a cliche that people started absorbing them.
Otherwise, functional programming without these problems is easy to learn. Rails programmers can be productive with Phoenix in a matter of days; many of them didn't even bother to learn Elixir or functional programming first, they just picked them up along the way.
> The Technological Debt, i.e. the cost and burden of a large codebase that’s complex and difficult to maintain, is also higher in Javascript. One of the reasons for this is that the language is fragile.
I found this rather biased. Is it true for those who have tried FP / Elixir? I have never used it, and I don't think my company will ever consider changing the codebase at all. However, Java / Python on the backend and TypeScript (React) on the front end seem to scale so easily for us here.
Shouldn't this article have been titled "Why Is Learning Haskell So Damned Hard?"
Don't get me wrong. I like Haskell. But yeah, I learned Haskell because I used to sit at a desk across from Bryan O'Sullivan (who later co-wrote "Real World Haskell"). And even with Bryan around to answer questions, it still required stretching of the brainpan. But at least I had a guide.
In my experience when learning functional programming with Haskell there are two kinds of complexity:
1. Complexity caused by the change of paradigm: This specifically hits programmers with a lot of experience in OOP languages. To get into the FP mindset your brain needs a bit of time to get rewired: you need to switch from thinking about mutating objects to thinking in terms of mapping streams of data. Given that many OOP languages are adopting functional ideas like map and fold/reduce, a lot of developers nowadays already have some experience thinking the FP way, and this will get better with time.
2. Complexity caused by the tools: There's a reason the author in the above post started out with Elm instead of Haskell :-) Writing a few recursive functions in Haskell is still pretty simple. Where it gets very complex is when you want to build real-world applications. To build a simple web app you need to make tens of decisions about tools and libraries. Here are a couple of questions you'll need to answer when building a Haskell app:
- What GHC and what language extensions do I need? How do I install it?
- What package manager do I use? Cabal, nix, stack?
- What web server library?
- What database library do I need? Do I need an ORM? Are there even ORMs in Haskell, given that there are no objects?
- What HTML template library to use?
- How do I compose it all together? What is a monad stack?
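On that last question, for the curious: a tiny sketch of what a monad stack can look like, in the common mtl style (the names here are made up):

    import Control.Monad.Reader (ReaderT, asks, runReaderT)
    import Control.Monad.State (StateT, get, put, runStateT)
    import Control.Monad.IO.Class (liftIO)

    data Config = Config { appName :: String }

    -- Each transformer layer adds one capability: read-only config,
    -- counter state, and IO at the bottom.
    type App = ReaderT Config (StateT Int IO)

    tick :: App ()
    tick = do
      name <- asks appName
      n    <- get
      liftIO (putStrLn (name ++ ": tick " ++ show n))
      put (n + 1)

    main :: IO ()
    main = do
      _ <- runStateT (runReaderT (tick >> tick) (Config "demo")) 0
      pure ()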
When you have only very little experience in Haskell, it's really hard not to get stuck here. The quality of the documentation of most Haskell tools doesn't help much either.
I believe that the value Haskell can bring is significant, and that by fixing the tooling situation we can get a lot more people to adopt Haskell in the future.
With IHP we're trying to fix the tooling situation and build a Haskell-based web framework that is as easy to use as Rails or Laravel :) The idea is to combine the benefits of purely functional programming with the RAD approach of Rails, Laravel and Django. It's now already the most active Haskell web framework, and many people are starting their Haskell journey with IHP.
We have live reloading in dev mode, a JSX-inspired template language and many code generators to quickly get started with shipping real world apps.
One aspect I don't see mentioned very often is that the FP model of computation is completely different from the model of the underlying hardware, which makes it very difficult to reason about things (and often involves putting blind hope in the compiler).
I don't really agree; the point is to provide high-level abstractions that hide a bunch of complexity. But even in a high-level non-functional language, there's not a lot of friction between the model of the language and the hardware. An array of ints in Java is basically 1:1 with the underlying contiguous, mutable block of memory.
> the point is to provide high-level abstractions that hide a bunch of complexity
And these abstractions are not hiding the model of the machine? For example the machine has a small set of fixed-size registers, but your language probably has an unlimited amount of arbitrarily sized, lexically-scoped local variables.
> I believe that Functional Programming is far far better than Imperative Programming. I know this because I was willing to suffer for 3 straight months learning something I had failed at multiple times in the past just so I didn’t have to write in JavaScript.
Or, one could just swallow one's pride, learn TypeScript, and have the benefit of working in a sane language that transpiles effortlessly into Javascript.
(Not a bad word about Javascript wizards or its inventor; given the constraints, the results are nothing short of amazing. I just wish we didn't have to suffer an untyped language for so long before someone came up with the obvious solution that is TypeScript.)
Imperative programming is intuitively literal in a way that formal functional programming in something like Haskell isn't, because you have to understand some nontrivial abstract mathematical formalism to build nontrivial programs.
I think a lot of programmers, including me, have a gift for the former kind of thinking but not the latter. So we tend to struggle with abstractions like monads, because we're looking for some literal ground truth about them that doesn't really exist.
Learning Haskell to the point where I could use it for practical work was one of the best things I ever did for my programming brain. I'd recommend it to anybody.
You can write the closed form in FP just as well as in any other language.
Most FP languages will let you write a recursive form that won't blow the stack. In C, you'd need to use the closed form or a loop if you value your stack.
    fib(0) -> 0;
    fib(1) -> 1;
    fib(N) when is_integer(N), N >= 2 ->
        fib(N, 2, fib(0) + fib(1), fib(1)).

    fib(N, N, Current, _Last) -> Current;
    fib(N, I, Current, Last) ->
        fib(N, I + 1, Current + Last, Current).
But if you write the same thing in C, it'll still blow your stack (of course, you can transform it into a loop), and you'll need to use bigints somehow; these numbers get big fast. It does use a lot of CPU though; it takes not quite 10 seconds to run fib(1000000) on my Pentium G2020. The closed form is certainly faster, although I didn't benchmark it :D
Then you can check for N = 0 to return Current, and do N - 1 in the iteration. One less value to pass in each recursion should make it faster, and it's less to think about.
For completeness, what was your solution? I promise not to judge it if you don't want me to :) but it seems like it would be a valuable discussion for others who might be on the fence.
My solution (from another conversation on this very page) is:
    fib 0 a b = b
    fib n a b = fib (n - 1) b (a + b)
It might (or might not) look simple, but this solution arises from an understanding of computation as reduction, which is very much not how imperative languages go about things. So it's really not shameful or even problematic to have arrived at a different solution first.
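To see the "reduction" view concretely, here's how fib 3 unfolds, assuming the seeds are a = 1 and b = 0 (so that fib 0 1 0 = 0 = fib(0)):

    -- fib 3 1 0
    --   = fib 2 0 1    -- (n-1), b, (a+b)
    --   = fib 1 1 1
    --   = fib 0 1 2
    --   = 2            -- fib 0 a b = b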
Implementing it with a hylomorphism is actually quite clean; however, understanding the plumbing to get there is not. The principles are generally quite simple, but there's no non-mathematical wording to describe them.
I fail to learn Haskell about once every 5 years. Sometimes I think of functional languages not as languages for expressing concepts, but more like a musical notation that denotes a set of abstractions mapped to an underlying ontology, made from things you just train yourself on with practice; you then use the functional language as a lookup reference until you can, in effect, "sight read," or compose those underlying elements to the page. The algebra and category theory are the true instrument, and the language is the composition notation tool. With OO and imperative programming, by contrast, the imperative or OO language itself is the instrument that you can just pick up and bang away on, and music will come out. To extend the metaphor, OO is an instrument on which you can play a canon like "Row Your Boat" without knowing how to write one. It's lovely that we have these tools for specifying and composing functions, but perhaps the gap is in effectively expressing and teaching the underlying ontology. Maybe, as a writer, my next attempt should be to write that explanation as I go.
The most trouble I had with Haskell was pronouncing statements after reading them, which prevented me from composing the ideas in my head, because the syntax doesn't give you a lot of hints. It's like trying to parse calculus without names for the letters of the Greek alphabet. Most people are proficient in their own written languages, and they are capable of mapping ideas to abstractions, but if there is no way to express those ideas lexically in their language, most are going to stumble.
The incomplete and error-prone explanations by novices that the author refers to also create an artificial conceptual barrier, but mostly because there is always the hint and implication (or perhaps just personal anxiety) that you are not writing real functional programs, and that your naivety is accumulating a hidden sunk cost that you will eventually be exposed for and made ashamed of because you aren't a category theorist. That follows from the mostly aesthetic idea that functional code is about competing to advance knowledge in a science, rather than just making stuff you enjoy or people like. The closest thing I found to a kind of function-punk, where you could just bang away on it because it was fun, was the excellent Learn You a Haskell for Great Good, but it focused on the notation more than the underlying ontology, e.g. the instrument. What I think I might need is more of a category-punk that provides the equivalent of composition with a few concepts, and then adds the notation to those.
Not sure there's another attempt in me, but anyway: a provocative article about something important. I think FP will ultimately advance the way people think about the world, if only in another generation or two.
Good post from somebody with an overflowing teacup. Haskell's not magic or special; it's Just Another Programming Language, really. Reading their monad tutorial (https://gist.github.com/cscalfani/b63552922a8deb2656ecd5ec8a...) it doesn't sound like they actually understand monads; they don't know about algebraic laws or the join operation.
Christ, that's a long-winded article!
Also I'm getting tired of "I learned to program on a stone tablet with a chisel" type stories. Wow, you wrote assembly, swoon!