Source: http://lambda-diode.com/programming/monads-are-a-class-of-hard-drugs

Monads are a class of hard drugs

I've been doing functional programming (FP) for about twelve years now, first with Scheme and then with Ocaml, professionally for nearly three years. I'm a big fan of FP and a big fan of static typing. So I have no prejudice against FP languages, and I have tried many of them over the years, including, of course, Haskell.

When I try a language, I first want to try a non-trivial program written in it. Ocaml has Unison, MLDonkey and the defunct Efuns & MMM.

Haskell has Xmonad, Yi and Darcs. I did not have enough patience to compile either Xmonad or Yi. They had many dependencies, required the latest version of GHC, which itself took many hours to compile, and I couldn't figure out how to use Cabal. As for Darcs, which is usually available pre-compiled on most systems, it is extremely slow (checking out the source of Darcs itself takes ages), extremely memory hungry, and, word has it, fundamentally flawed. But these are all practical issues.

Haskell programs tend to be slow, save for shootout-optimized ones. You can talk all you want about how functional purity allows memoization (BTW, my PhD thesis was on memoizing automata) but, in practice, GHC, as a compiler, is very slow. Haskell programs, unless written very carefully, do get quite slow.

The usual answer is that languages are not slow, programs are; that there are ways to make your program really fast; and that, actually, since the language is purely functional, the compiler can do lots of very smart tricks to produce code indistinguishable from hand-coded imperative assembly.

The problem is that those very smart tricks do not always apply. Further, as they are complex, it is difficult for the programmer to know whether they will apply to his particular program. So he has to know a lot about the compiler internals to write programs that play well with the compiler. But, unlike the mandatory optimization of tail recursion in most FP languages, those smart tricks are not part of the language specification. So he can't really rely on them.

In other words, with Haskell/GHC, the programmer doesn't have a simple mental model of the compiler with which he can write efficient programs. So either he takes the academic approach and says "I don't care about efficiency", or he delves into the depths of the compiler and actively follows the latest papers on FP optimization.

But maybe those smart tricks apply sufficiently often in practice that most programmers do not have to care about them?

I would like to say that if this were the case, GHC and Darcs wouldn't be dog slow, but I have another argument.

That's the part where we use the M-word. Monads. While they have been turned into a tool of intellectual terrorism, the original paper on monads is actually quite easy to understand and fun to read. Monads are a good and natural solution for expressing imperative constructs in purely functional languages.

The essence of monads is to use abstract types to enclose a mutable state, providing only a set of carefully-crafted combinators for using it in such a way that you can't duplicate the state but have to use it in a linear, non-branching fashion; that way, the implementation of the monad can use side-effects. Monads can be used to access mutable arrays, in which case the side-effects would be writes to the array, or system services such as I/O, in which case the side-effects would be some system calls. Monads are a typing trick to ensure proper linear usage of some updateable resource in a functional language. A very neat idea, indeed.
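
To make that "typing trick" concrete, here is a minimal sketch of a hand-rolled state-threading monad in Haskell. The names are invented for illustration and this is not any particular library's definition; a real monad like ST or IO can replace the explicit state-passing with genuine side effects precisely because the combinators never let the state escape or fork.

```haskell
-- A hand-rolled "counter" monad (names invented for this sketch).
-- The Int state is hidden inside the Counter type; client code can
-- only reach it through returnC, bindC and tick, which thread it
-- linearly from one step to the next.

newtype Counter a = Counter (Int -> (a, Int))

returnC :: a -> Counter a
returnC x = Counter (\s -> (x, s))

bindC :: Counter a -> (a -> Counter b) -> Counter b
bindC (Counter m) f = Counter (\s ->
  let (x, s')    = m s
      Counter m' = f x
  in m' s')

-- The one "side-effecting" primitive: read the counter and bump it.
tick :: Counter Int
tick = Counter (\s -> (s, s + 1))

runCounter :: Counter a -> Int -> (a, Int)
runCounter (Counter m) s0 = m s0

-- Client code never sees the Int, so it cannot duplicate the state or
-- reuse an old copy; each step receives the state from the previous one.
example :: Counter (Int, Int)
example = tick `bindC` \a -> tick `bindC` \b -> returnC (a, b)

main :: IO ()
main = print (runCounter example 0)   -- prints ((0,1),2)
```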

So monads hide the tricky machinery of managing mutable state under the carpet of the type system. This is good because you don't get to see greasy mechanical parts while casually programming. But mutable state covers everything that is not pure computation, and that includes debug statements, input and output, system calls, network access, getting the time of day or even throwing an exception. So the monad you are using has to provide all these services. Of course, not every monad has everything you need, so you have the IO monad, the Maybe monad, the STM monad, and so on. Whenever you have a C binding, you get a monad. And this is where things get ugly.
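
For instance (a hypothetical pair of functions, for illustration only), failure and I/O live in different monads, and each service stamps its own type constructor on the functions that use it:

```haskell
-- Failure lives in Maybe; real side effects live in IO. Neither can
-- be used where the other is expected without explicit plumbing.

lookupPort :: String -> Maybe Int       -- may fail, but is pure
lookupPort "http"  = Just 80
lookupPort "https" = Just 443
lookupPort _       = Nothing

readConfig :: FilePath -> IO String     -- touches the file system
readConfig path = readFile path
```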

For one thing, when you use a monad, you get hooked on it: the type constructors of the monad start to appear in the signatures of your functions. Obviously, you just have to add a layer of indirection by redefining those constructors and combinators in another module or type class. But then you've just substituted your own brand of heroin for your dealer's brand. All your code still has to use your brand. You can change your dealer, but still, if someone imports your code, they will have to "supply" your code with the brand you've selected. Where did the type inference go? Why are we declaring types everywhere as if we were in freaking Java or C?
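
A hypothetical illustration of that signature creep and of the "layer of indirection" (the file-counter functions and the App wrapper are made up for this sketch):

```haskell
-- One function needs IO, so every caller's signature catches it too.

readCounter :: FilePath -> IO Int
readCounter path = fmap read (readFile path)

bumpCounter :: FilePath -> IO Int        -- IO again, not plain Int
bumpCounter path = do
  n <- readCounter path
  writeFile path (show (n + 1))
  return (n + 1)

-- The layer of indirection: wrap the monad in a type of your own.
-- App now appears in your signatures instead of IO, but anyone who
-- imports this code still has to run theirs inside App.
newtype App a = App { runApp :: IO a }
```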

And, for another thing, what happens if I need more than one monad? That's where you get the fun concept of "Monad combinators" which allow you to define the "product" of two or more monads and "bundle" your brand of heroin with your brand of crack into a nice "crackoin" product. You're still doing drugs, but at least you can do any combination of them!
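
What Haskell actually offers for this is monad transformers; here is a sketch of the idea, assuming the transformers package that ships with GHC (the App and step names are invented), combining an Int of mutable state with I/O in one stack:

```haskell
import Control.Monad.Trans.State (StateT, get, put, evalStateT)
import Control.Monad.Trans.Class (lift)

-- A "product" of two monads in the usual Haskell style: a transformer
-- stack layering state on top of IO.
type App a = StateT Int IO a

step :: String -> App ()
step msg = do
  n <- get
  lift (putStrLn (show n ++ ": " ++ msg))   -- IO actions must be lifted in
  put (n + 1)

main :: IO ()
main = evalStateT (step "first" >> step "second") 0
```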

So, the situation is this. You write a software component C1 doing imperative tasks in Haskell. You need monads M1, M2 and M3. You thus define your monad bundle M1M2M3, which you use in all your code.

Someone else writes a software component C2 and uses a monad bundle M4M5.

Now what if I want to write a component using C1 and C2? I just define my functions in the monad bundle M1M2M3M4M5, silly! In the end, I might very well have more monad definitions than code.
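
A sketch of the bundling problem itself, again with the transformers package (all component names, monads and functions here are hypothetical): two components each fix their own stack, and a client of both must declare a third, bigger one and lift into it.

```haskell
import Control.Monad.Trans.Class  (lift)
import Control.Monad.Trans.Reader (ReaderT, ask, runReaderT)
import Control.Monad.Trans.State  (StateT, modify, runStateT)

-- Hypothetical component C1: fixes its own stack, a log over IO.
type C1 a = StateT [String] IO a

logLine :: String -> C1 ()
logLine s = modify (s :)

-- Hypothetical component C2: fixes a different stack, a config reader over IO.
type C2 a = ReaderT Int IO a

askLimit :: C2 Int
askLimit = ask

-- A client of both has to invent yet another stack, and each component's
-- actions only fit into it after explicit lifting (or re-running).
type Client a = ReaderT Int (StateT [String] IO) a

useBoth :: Client ()
useBoth = do
  limit <- ask                               -- the Client's own reader layer
  lift (logLine ("limit = " ++ show limit))  -- a C1 action, lifted one layer
  -- askLimit lives in ReaderT Int IO and does not fit here at all without
  -- further plumbing; in practice you end up re-declaring it for Client.

main :: IO ()
main = do
  ((), logged) <- runStateT (runReaderT useBoth 42) []
  mapM_ putStrLn (reverse logged)
```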

I do not like mandatory type annotations. Type inference is a great technology, and it is very well worth the price of restricted or nonexistent overloading. Modules and functors are also great, and those, understandably, require explicit annotations. In Ocaml, you can often start writing whatever you want with little to no type declarations (and the declarations you do use are usually for defining records or sum types); when your program grows, you start isolating parts inside modules; later, those modules get their own files and might turn into a functor or two.

In Haskell, you have to start in the correct monad if you plan on doing any I/O (and you do). Ocaml's "implicit" monad is the same monad the overwhelming majority of programs live in: the imperative monad with exceptions. This includes C programs and your operating system. Types are implicitly lifted to this monad.

This means that you do not have to worry about monads and that you just write your code, getting all the benefits of an FP language with static typing and type inference. If you want to add a printf there, or a network connection here, you can. So your functions are impure, but programmers are not priests of type theology. They want to do dirty things, like connecting to sockets or mutating array entries while waiting for external world events.
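
The contrast can be seen from the Haskell side with a tiny made-up example: adding a single debug print to a pure function changes its type, and the change propagates to every caller, whereas the Ocaml printf mentioned above would change nothing.

```haskell
-- A pure function: no monad in sight, and type inference would have
-- found this signature on its own.
area :: Double -> Double
area r = pi * r * r

-- The same function with one debug print added: the result is now
-- stuck in IO, and every caller has to follow it there.
areaTraced :: Double -> IO Double
areaTraced r = do
  putStrLn ("area called with r = " ++ show r)
  return (pi * r * r)

main :: IO ()
main = areaTraced 2.0 >>= print
```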

Programmers might one day accept doing all these actions under the hood of a combination of monads, but only if monadic types do not creep too much into their code and, of course, if the resulting programs are sufficiently fast. This is not the case today with Haskell, and this is why I will stick to ML.

  • Berke Durak 2008-06-06