
Haskell's actual problem isn't the lack of a comprehensive standard library, but rather the presence of core language features that actively hinder large-scale modular programming. Type classes, type families, orphan instances and flexible instances all conspire to make it as difficult as possible to determine whether two modules can be safely linked. Making things worse, whenever two alternatives are available for achieving roughly the same thing (say, type families and functional dependencies), the Haskell community consistently picks the worse one (in this case, type families, because, you know, why not punch a big hole in parametricity and free theorems?).

Thanks to GHC's extensions, Haskell has become a ridiculously powerful language in exactly the same way C++ has: by sacrificing elegance. The principled approach would've been to admit that, while type classes are good for a few use cases, (say, overloading numeric literals, string literals and sequences), they have unacceptable limitations as a large-scale program structuring construct. And instead use an ML-style module system for that purpose. But it's already too late to do that.
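To make the modularity worry concrete, here is a hedged single-file sketch (the `Describe` class and its instances are invented for illustration): with `FlexibleInstances` and overlap, which instance fires at a call site depends on which instances are in scope, so merely importing another module can silently change behaviour.

```haskell
{-# LANGUAGE FlexibleInstances #-}

class Describe a where
  describe :: a -> String

instance Describe Char where
  describe _ = "a char"

instance Describe a => Describe [a] where
  describe _ = "some list"

-- An overlapping instance defined elsewhere (imagine another module)
-- silently changes which code runs for the same call site:
instance {-# OVERLAPPING #-} Describe String where
  describe _ = "specifically a String"

main :: IO ()
main = putStrLn (describe "hello")  -- picks the String instance
```

Nothing at the call site indicates that the `[a]` instance was overridden; the resolution is decided globally, not per module.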



How are type families worse than fundeps? That's a pretty ridiculous assertion; the things you can do with fundeps are strictly fewer than the things you can do with type families.

> The principled approach

You're dead wrong. The principled approach here is dependent types and full-featured type-level functions. Fundeps are a hack that lets you implement a small subset of such functions (while type families get us a bit closer to the ideal).

> they have unacceptable limitations as a large-scale program structuring construct.

Such as?

> And instead use an ML-style module system for that purpose.

How about we just use C macros for parametricity?

ML-style modules have their uses, but they aren't nearly as elegant as a clean type-level solution.


> How are type families worse than fundeps? That's a pretty ridiculous assertion; the things you can do with fundeps are strictly fewer than the things you can do with type families.

It's not about how much you can do (otherwise, just use a dynamic language, you can do everything, even shoot yourself in the foot!), it's about whether the result makes sense, and how much effort it takes to make sense of it.

> You're dead wrong. The principled approach here is dependent types and full-featured type-level functions. Fundeps are a hack that lets you implement a small subset of such functions (while type families get us a bit closer to the ideal).

You wanna play the dependent type theory card? Type families as provided in Haskell are incompatible with univalence.

    data YesNo = Yes | No          -- set-isomorphic to Bool
    type family Foo a
    type instance Foo Bool = Int
    type instance Foo YesNo = String
Please kindly provide the isomorphism between `Int` and `String`.

Case analysis only makes sense when performed on the cases of an inductive type, which the kind of all types is not.

> Such as?

The insistence on globally unique instances?

> How about we just use C macros for parametricity?

What does this even mean?

> ML-style modules have their uses, but they aren't nearly as elegant as a clean type-level solution.

See here for how modular type classes, as proposed for ML, would actually prevent the issues caused by Haskell-style type classes: http://blog.ezyang.com/2014/09/open-type-families-are-not-mo...


> You wanna play the dependent type theory card? Type families as provided in Haskell are incompatible with univalence.

Hi. As someone who knows type theory, knows homotopy type theory, and also knows Haskell well, I would pose the following question to you: what purpose on god's green earth would be served by introducing univalence directly to Haskell?

(Oh, and furthermore, you realize that fundeps have precisely the same issues in this setting?)

Contrariwise, don't you find it _useful_ that we can have two monoids, say And and Or, which have different `mappend` behaviour?

Now, can you imagine having that feature and _also_ respecting the idea that set-isomorphic things should be indistinguishable? How?
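For reference, GHC's own standard library already expresses the two Bool monoids this way: `Data.Monoid` provides the `All` and `Any` newtypes, same carrier, different `mappend`. A minimal sketch:

```haskell
import Data.Monoid (All (..), Any (..))

-- Same carrier (Bool), two different monoid structures,
-- distinguished at the type level by newtype wrappers:
conj, disj :: Bool
conj = getAll (All True <> All False)  -- mappend is (&&)
disj = getAny (Any True <> Any False)  -- mappend is (||)

main :: IO ()
main = print (conj, disj)  -- (False,True)
```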


> what purpose on god's green earth would be served by introducing univalence directly to Haskell?

Generally, when I want to reason about tricky data structures, what I do is:

(0) Define a set-isomorphic auxiliary type that's easier to analyze, and whose operations are easier to implement, but have worse asymptotic performance.

(1) Prove that transporting the operations on the auxiliary type along the isomorphism yield the operations on the original tricky type.

I need univalence for this argument to hold water.

> (Oh, and furthermore, you realize that fundeps have precisely the same issues in this setting?)

Type classes are already Haskell's controlled mechanism for adding ad-hoc polymorphism “without hurting parametricity too much”. I consider it healthier to reuse and extend this mechanism (which is what functional dependencies do) rather than add a second one for exactly the same purpose (type families).

> Contrariwise, don't you find it _useful_ that we can have two monoids, say And and Or, which have different `mappend` behaviour?

Sure. In ML, I'd just make two structures having the MONOID signature. Haskellers have this wrong idea that the monoid is just the type - it's not! A monoid is a type plus two operations. Same carrier, different operations - different monoids.

> Now, can you imagine having that feature and _also_ respecting the idea that set-isomorphic things should be indistinguishable? How?

Yes. Acknowledging that an algebraic structure is more than its carrier set.


> I need univalence for this argument to hold water.

No, you don't. Univalence is the axiom that transporting operations across such equivalences _always_ works. If you're doing equational reasoning directly it doesn't arise.

Furthermore, all you need to do is to establish that the _type operations_ regarding one type respect the equivalence to the other type as an additional step.

As you say "a monoid is a type plus two operations" -- so fine, we can treat the monoid And as the type bool and the dictionary of operations on it, and all this still works out.


> No, you don't. Univalence is the axiom that transporting operations across such equivalences _always_ works.

Sure, but the strategy I outlined is risky (as in “may lead to getting stuck and having to undo work”) in a language where this isn't guaranteed to work.

> As you say "a monoid is a type plus two operations" -- so fine, we can treat the monoid And as the type bool and the dictionary of operations on it, and all this still works out.

Yup, but Haskell doesn't let you define types parameterized by entire algebraic structures. It only lets you define types parameterized by the carriers of algebraic structures.


> otherwise, just use a dynamic language, you can do everything, even shoot yourself in the foot!

Type classes allow huge flexibility while maintaining type safety, to a much greater degree than fundeps allow.

> it's about whether the result makes sense

Which they do. Perhaps you have some examples of when type families confused you or caused you to make an error?

> Type families as provided in Haskell are incompatible with univalence.

TFs aren't dependent types. However, they are on the right track. Fundeps are farther away from the right idea. Could you explain to me what's wrong with your example? I'm not up to date on HoTT, but it seems like there's nothing in principle wrong with pattern matching on elements of *. That seems like an important feature of type-level functions.

> The insistence on globally unique instances?

Why is this a problem? It makes sense from a theoretical perspective (we don't associate multiple ordering properties with the things we call "the integers"), and it's very easy to use newtype wrappers to create new instances if needed.
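A sketch of the newtype route (the `Desc` wrapper is invented here): a fresh type gets its own `Ord` instance without disturbing the instance on the underlying type.

```haskell
-- A newtype carrying an alternative (reversed) ordering.
newtype Desc a = Desc a deriving (Eq, Show)

instance Ord a => Ord (Desc a) where
  compare (Desc x) (Desc y) = compare y x

main :: IO ()
main = print (maximum [Desc 3, Desc 1, Desc 2 :: Desc Int])  -- Desc 1
```

Under the reversed ordering, the `Desc`-greatest element is the numerically least one.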

> What does this even mean?

ML modules are flexible, but backwards from a theoretical perspective. Parametricity is something that should be embedded in the type system, not the module system.

> See here

Interesting example. However, I doubt that the syntactic cost of using such a system is less than the syntactic cost of enforcing global instance uniqueness and using newtype wrappers.


> Type classes allow huge flexibility while maintaining type safety, to a much greater degree than fundeps allow.

Um, aren't functional dependencies an add-on to multiparameter type classes? I don't see where the opposition is.

> Which they do. Perhaps you have some examples of when type families confused you or made you perform an error?

I already gave an example above. I defined two type instances that violate the principle of not doing evil: https://ncatlab.org/nlab/show/principle+of+equivalence

> TFs aren't dependent types. However, they are on the right track.

Dependent types are a good idea. The way Haskell attempts to approximate them is not. Parametricity is too good to give up. With the minor exception of reference cells (`IORef`, `STRef`, etc.), if two types are isomorphic, applying the same type constructor to them should yield isomorphic types.

You know what type families actually resemble? What C++ calls “traits”: ad-hoc specialized template classes containing type members.

> Fundeps are farther away from the right idea.

Functional dependencies are a consistent extension to type classes, which don't introduce a second source of ad-hoc polymorphism, unlike type families.

> Why is this a problem? It makes sense from a theoretical perspective (we don't associate multiple ordering properties with the things we call "the integers"),

What if I want to order them as Gray-coded numbers? In any case, the integers are far from the only type that can be given an order structure, and many types don't have a clear “bestest” order structure to be preferred over other possible ones.

> and it's very easy to use newtype wrappers to create new instances if needed.

Creating `newtype` wrappers is easy at the type level, but using them is super cumbersome at the term level.
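To illustrate the term-level plumbing: with base's `Down` newtype (its `getDown` field is exported from `Data.Ord` in recent base versions), every element must be wrapped on the way in and unwrapped on the way out.

```haskell
import Data.List (sort)
import Data.Ord (Down (..))

-- Sorting descending via the Down Ord instance means mapping the
-- wrapper over the whole list in both directions:
descending :: Ord a => [a] -> [a]
descending = map getDown . sort . map Down

main :: IO ()
main = print (descending [3, 1, 2 :: Int])  -- [3,2,1]
```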

> ML modules are flexible, but backwards from a theoretical perspective.

ML modules are plain System F-omega: http://www.mpi-sws.org/~rossberg/1ml/ . Where's the backwardness?

> Parametricity is something that should be embedded in the type system, not the module system.

It's type families, as done in Haskell, that violate parametricity! Standard ML has parametric polymorphism, uncompromised by questionable type system extensions.

> Interesting example. However, I doubt that the syntactic cost of using such a system is less than the syntactic cost of enforcing global instance uniqueness and using newtype wrappers.

I can't imagine it being more cumbersome than wrapping lots of terms in newtype wrappers just to satisfy the type class instance resolution system.


>Um, aren't functional dependencies an add-on to multiparameter type classes?

You're right, I meant "type families".

> I defined two type instances that violate the principle of not doing evil:

We're not doing abstract category theory; we're writing computer programs (well, I am). Have you ever run into a problem with type families in that capacity?

>if two types are isomorphic, applying the same type constructor to them should yield isomorphic types.

Agreed, but there's a difference between type functions and type constructors. TFs are (a limited form of) type functions. Value-level constructors admit lots of nice properties that value-level functions do not, and I see no reason to be uncomfortable with this being reflected at the type level.

> What if I want to order them as Gray-coded numbers

Use a newtype wrapper. Even if a language allowed ad-hoc instances, I would consider it messy practice to apply some weird non-intuitive ordering like this without specifically making a new type for it.

> Creating `newtype` wrappers is easy at the type level, but using them is super cumbersome at the term level.

And using ML-style modules is easy at the term level, but cumbersome at the type level.

It's a tradeoff, and I suspect that newtypes are usually the cleaner/easier solution.

> ML modules are plain System F-omega

I hadn't seen the 1ML project. That's pretty cool.

> It's type families, as done in Haskell, that violate parametricity!

How so? I really don't understand your argument here, if you just take TFs to be a limited form of type function.


> We're not doing abstract category theory; we're writing computer programs (well, I am). Have you ever run into a problem with type families in that capacity?

I like being able to reason about my programs. For that to be a smooth process, the language has to be mathematically civilized.

> Agreed, but there's a difference between type functions and type constructors. TFs are (a limited form of) type functions.

By “type families”, I meant both data families and type families. Case-analyzing types is the problem, see below.

> And using ML-style modules is easy at the term level, but cumbersome at the type level.

Actually, ML-style modules are more convenient at the type level too! If I want to make a type constructor parameterized by 15 type arguments, rather than a normal type constructor in the core language, I make an ML-style functor parameterized by a structure containing 15 abstract type members.

> How so? I really don't understand your argument here, if you just take TFs to be a limited form of type function.

“In programming language theory, parametricity is an abstract uniformity property enjoyed by parametrically polymorphic functions, which captures the intuition that all instances of a polymorphic function act the same way.”

https://en.wikipedia.org/wiki/Parametricity


> I make a ML-style functor parameterized by a structure containing 15 abstract type members.

You can do this in Haskell with DataKinds (you just pass around a type of the correct kind which contains all the parameters). Admittedly, it is quite clunky at the moment. I did this to pass around CPU configuration objects for hardware synthesis a la Clash, as CPU designs are often parametrized over quite a few Nats.
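A minimal sketch of that pattern (the `RegFile` type and its parameters are invented for illustration): type-level `Nat`s carry the configuration, and `natVal` recovers them at the term level.

```haskell
{-# LANGUAGE DataKinds, KindSignatures, ScopedTypeVariables #-}
import Data.Proxy (Proxy (..))
import GHC.TypeLits (KnownNat, Nat, natVal)

-- A register file parameterized by width and depth as type-level Nats.
newtype RegFile (width :: Nat) (depth :: Nat) = RegFile [Integer]

describe :: forall w d. (KnownNat w, KnownNat d) => RegFile w d -> String
describe _ =
  show (natVal (Proxy :: Proxy w)) ++ " bits x " ++
  show (natVal (Proxy :: Proxy d)) ++ " registers"

main :: IO ()
main = putStrLn (describe (RegFile [] :: RegFile 32 16))
```

Two differently configured `RegFile`s are distinct types, so they cannot be mixed up accidentally, which is the clunky-but-workable analogue of applying a functor to different structures.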

> parametricity is an abstract uniformity property enjoyed by parametrically polymorphic functions

Whenever one introduces a typeclass constraint to a function, one can only assume that the function exhibits uniform behavior up to the differences introduced by different instances of the typeclass. There is no particular reason to assume that (+) has the same behavior for Int and Word, except insofar as we have some traditional understanding of how addition should work and which laws it should respect. The same is true for type families. It is not a problem that they introduce non-uniform behavior; we can only ask that they respect some specified rules with respect to their argument and result types.

Case-analyzing types in type families is no worse than writing a typeclass instance for a concrete type. Would you say that the fact that "instance Ord Word" and "instance Ord Int" are non-isomorphic is a problem? After all, the types themselves are isomorphic!


> Whenever one introduces a typeclass constraint to a function, one can only assume that the function exhibits uniform behavior up to the differences introduced by different instances of the typeclass.

Of course.

> Would you say that the fact that "instance Ord Word" and "instance Ord Int" are non-isomorphic is a problem? After all, the types themselves are isomorphic!

It's already bad enough, but at least the existence of non-uniform behavior is evident in a type signature containing type class constraints. OTOH, type families are sneaky, because they don't look any different from normal type constructors or synonyms.
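A small sketch of that sneakiness (`Ctor` and `Fam` are invented names): the two applications below are syntactically indistinguishable, yet one is parametric and the other is not.

```haskell
{-# LANGUAGE TypeFamilies #-}

newtype Ctor a = Ctor a        -- ordinary type constructor: parametric

type family Fam a              -- type family: may case-analyze `a`
type instance Fam Bool = Int
type instance Fam Int  = String

-- `Ctor a -> Ctor a` enjoys free theorems; `Fam a -> Fam a` has none,
-- and nothing in the signature syntax warns the reader which is which.
viaCtor :: Ctor a -> Ctor a
viaCtor = id

main :: IO ()
main = case viaCtor (Ctor True) of Ctor b -> print b  -- True
```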


>OTOH, type families are sneaky, because they don't look any different from normal type constructors or synonyms.

That is fair.

I think we're on the same page at this point. You have made me realize that ML-style modules are useful in ways I did not realize before, so thanks for that.

Question: How would you feel if the tradition was to do something like

    insert :: Ord a f => a -> Set f a -> Set f a

That is, "f" is some type that indicates a particular ordering among "a"s. Then, "Set"s are parametrized over both "f" and "a", and one cannot accidentally mix up Sets that use a different Ord instance.

Here's a quick example:

https://gist.github.com/wyager/a021f7e5d9f23643bc90a9866b5c0...


Seems a lot more cumbersome than the direct ML solution:

    signature ORD =
    sig
      type t
      val <= : t * t -> bool
    end
    
    functor RedBlackSet (E : ORD) :> SET where type elem = E.t =
    struct
      type elem = E.t
      
      datatype set
        = Empty
        | Red of set * elem * set
        | Black of set * elem * set
      
      (* ... *)
    end
    
    structure IntOrd = struct type t = int val op<= = Int.<= end
    
    structure Foo = RedBlackSet (IntOrd)
    structure Bar = RedBlackSet (Backwards (IntOrd))
    
    (* Foo.set and Bar.set are different abstract types! *)


That is more elegant! But do you think they're functionally more or less equivalent?


Assuming you don't mind plumbing value-level proxies all over the place, it's indeed functionally equivalent.


> Parametricity is too good to give up. With the minor exception of reference cells (`IORef`, `STRef`, etc.), if two types are isomorphic, applying the same type constructor to them should yield isomorphic types.

You know that's not what parametricity means, right? Like, at all?

Here's a challenge.

`foo :: forall a. a -> a`

Now, by parametricity that should have only one inhabitant (up to iso). Use your claimed break in parametricity from type families and provide me two distinct inhabitants.


> Now, by parametricity that should have only one inhabitant (up to iso).

I can count at least three: `undefined`, `const undefined` and `id`.

> Use your claimed break in parametricity from type families and provide me two distinct inhabitants.

Does this count? Here Oleg constructs an inhabitant of False using just some means to case-analyze types (GADTs or type families): http://okmij.org/ftp/Haskell/impredicativity-bites.html


I should have specified "modulo bottom", because I somehow didn't cotton on that I was talking to someone more interested in pedantry than actual discussion.

That said, constructing an inhabitant of False a _different_ way (when we can already write "someFalse = someFalse") is not particularly interesting, and again doesn't speak to parametricity in any direct way.


Sounds like the problem there is that the Haskell typechecker assumes injectivity, not that it supports case-analysis.


The injectivity assumption isn't unjustified - type constructors are injective.


> The injectivity assumption isn't unjustified

Strongly disagree.

(+2) 3 == (+1) 4 implies neither (+2) == (+1) nor 3 == 4. So Fst goes out the window.

(even 5) == (even 7) does not imply 5 == 7.

>type constructors are injective.

But type functions aren't, and that's what we want.

I agree that it's misleading to have type functions look like type constructors syntactically.
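A sketch of the distinction (the family `F` is invented): an open type family need not be injective, so `F a ~ Bool` determines nothing about `a`, whereas a constructor equation like `Maybe a ~ Maybe Int` lets GHC conclude `a ~ Int`.

```haskell
{-# LANGUAGE TypeFamilies #-}

type family F a
type instance F Int  = Bool
type instance F Char = Bool   -- two arguments map to the same result

-- Constructors are injective, so GHC solves `Maybe a ~ Maybe Int`
-- to `a ~ Int` when inferring types here, without annotation:
unwrap :: Maybe Int -> Int
unwrap (Just n) = n
unwrap Nothing  = 0

-- By contrast, a signature like `g :: F a -> a` would be ambiguous:
-- given only a Bool, `a` cannot be recovered.

main :: IO ()
main = print (unwrap (Just 41))  -- 41
```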


The type constructor that's being assumed injective in the section “Deriving `absurd` with type families” is `R`.


There can be more than one problem, including having a small standard library.


The lack of a standard library can be fixed relatively easily: write libraries! OTOH, the existence of anti-modular language features that are extensively used in several major libraries is a more serious problem, because:

(0) It means that libraries in general won't play nicely with each other, unless they're explicitly designed to do so.

(1) It can't be fixed without throwing away code.


This whole thread is exactly about how "write libraries!" (if done outside the standard library) doesn't work (see my top post).

I do agree that lack of modularity features certainly doesn't help though.



