
Conal Elliott

Inspirations & experiments, mainly about denotative/functional programming in Haskell


Can functional programming be liberated from the von Neumann paradigm?

5th January 2010, 10:32 am

This post describes an idea I've had in mind for years and have chatted about with friends, but haven't written down before.

Functional programmers used to worry about how to solve “the I/O problem”. Functional programming (FP) eschews any notion of side-effect or order of execution, and instead has an order-independent notion of evaluation. In contrast, input & output (I/O) seems to be an inherently effectful, ordered notion.

For a long time, I've been dissatisfied with the current Haskell solution to "the I/O problem". Of course it's very clever and effective. With "monadic IO" we can program both functionally and imperatively, and the imperative semantics doesn't compromise the functional semantics.

I look at monadic IO not as any solution to functional I/O, but rather as a (clever) way not to bring I/O into the functional world. There's a ton of confusion buzzing around monadic IO, so let me try to restate more carefully: "monadic IO" is a way to generate imperative computations (not just I/O) without having to say anything at all about the semantic content of those computations. Rather than retooling our mental habits completely away from the imperative model of computation, monadic IO is a way to hang onto these old mental habits, and use them in a hybrid functional/imperative programming style, in which the imperative doesn't contaminate the functional, semantically.
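The point that monadic IO generates imperative computations without running them can be made concrete in a small sketch (the names here are illustrative, not from the post): IO actions are ordinary Haskell values, and pure code can build and combine them without any effect occurring.

```haskell
-- Building this list performs no output; the actions are descriptions
-- of imperative computations, generated purely.
greetings :: [IO ()]
greetings = map putStrLn ["hello", "world"]

-- Pure code can combine, duplicate, or discard actions freely:
twice :: IO () -> IO ()
twice act = act >> act

-- Only when the runtime executes an action does anything happen, e.g.
--   main = sequence_ greetings
```

Nothing here says what an IO action *means*, only how to assemble them, which is exactly the "way not to bring I/O into the functional world" described above.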

John Backus offered us a gift and a blessing in his 1977 Turing Award lecture, _Can Programming Be Liberated from the von Neumann Style? A functional style and its algebra of programs_. He told us how the sequential/imperative model enslaves us in weak thoughts, inhibiting manageable scaling (sections 1 through 5.1). In the abstract, Backus says

Conventional programming languages are growing ever more enormous, but not stronger. Inherent defects at the most basic level cause them to be both fat and weak: their primitive word-at-a-time style of programming inherited from their common ancestor, the von Neumann computer; their close coupling of semantics to state transitions; their division of programming into a world of expressions and a world of statements; their inability to effectively use powerful combining forms for building new programs from existing ones; and their lack of useful mathematical properties for reasoning about programs.

As well, Backus showed one way to throw off the shackles of this mental and expressive slavery — how to liberate our notions of programming by leaving the von Neumann mental model behind.

Haskell is often described as a "purely functional programming language", and in a technical sense it is. (In good company, too; see _The C language is purely functional_.) However, as commonly practiced, imperative computation still plays a significant role in most Haskell programs. Although monadic IO is wily enough to keep a good chunk of purity in our model, Haskell programmers still use the imperative model as well and therefore have to cope with the same fundamental difficulties that Backus told us about. In a sense, we're essentially still programming in Fortran, with its mixture of statements and expressions (or, semantically, of commands and values). Don't get me wrong; Haskell is a much better Fortran than the original. (If you have an urge to inform me how monadic IO works, re-read what I'm really saying. Honestly, I know.)

(Sadly, the very definition of “program” in Haskell requires that there be a main of type IO ()! Haskell programs thus cannot be purely functional (except technically), and so cannot be composed except very weakly, as Backus describes. For an alternative model, see _Tangible Functional Programming: a modern marriage of usability and composability_.)

In a sense, I see us as in a worse position than before. Since the adoption of monadic IO, it's been less immediately apparent that we're still enslaved, so there's been much less activity in searching for the promised land that the prophet JB foretold. Our current home is not painful enough to goad us onward, as continuation-based and stream-based I/O were. (See A History of Haskell, section 7.1.) Nor does it fully liberate us.

Since I/O is an intrinsically sequential, effectful notion, and we need some way of doing input & output, perhaps the best we can do is what we've done with monadic I/O: support both imperative and functional programming, and protect the functional from the imperative.

Or can we do better?

In my recent post, _Why program with continuous time?_, I had suggested that the abstraction of continuous time is useful even if "in the end all streams are discrete". I advised designing APIs based on the principle that "programming is more about the middle than the end, i.e., more about composition than about output." As an example, I pointed out that we don't represent numbers as strings even if "in the end" numbers are output as strings, suggesting that the key reason to choose an abstraction other than strings is composability (here, arithmetic). The post gives a few other examples of this principle of designing an API for composition, not for input or output.
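The numbers-versus-strings example can be sketched in a few lines (hypothetical names): we compose in the numeric abstraction, and the string representation appears exactly once, at the output boundary.

```haskell
-- Compose in the domain abstraction: arithmetic composes.
area :: Double -> Double
area r = pi * r * r

total :: Double
total = area 1.0 + area 2.0   -- no strings anywhere in the middle

-- Convert only at the boundary, on the way out:
render :: Double -> String
render = show
```

Doing the same computation on string representations would require parsing and re-serializing at every step, which is precisely the loss of composability the principle warns against.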

Roly Perera noted that

… you never really need to reach "the end". (It really is all about composition.)

To extend an example in the previous post, after numbers, then strings, and then pixels, are phosphors the end? (Sorry for the obsolete technology.) After phosphors come photons. After photons comes retina & optic nerve. Then cognitive processing (and emotional and who-knows-what-else). Then, via motor control, to mouse, keyboard, joystick etc. Then machine operating system, and software again. Round & round. More importantly, our interactions with other wetware organisms and with our planet and cosmos, and so on.

Roly added:

What looks like imperative output can be just what you observe at the boundary between two subsystems.

Which is exactly how I look at imperative input/output.

Representation shifts often happen at programming interfaces. Major interfaces in a whole system sometimes require a major representation shift, namely marshalling to and unmarshalling from serialized, memory-external representations. For instance, conversion from strings to numbers on input to a computer or brain and conversion of numbers to strings on the way out. Or replace strings with other rendered representations. Another example: conversion from continuous time & space to discrete time & space (“sampling”) on input to a digital machine, and conversion from discrete time & space to continuous time & space (“reconstruction”) on the way from digital machine (computer, CD/DVD player) to a brain.
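The sampling/reconstruction boundary described above can be given a hedged sketch: model a continuous-time signal as a function of time (the FRP view), with sampling and one possible reconstruction as explicit conversions. `Signal`, `sample`, and the zero-order-hold `reconstruct` below are illustrative assumptions, not definitions from the post.

```haskell
type Time = Double
type Signal a = Time -> a

-- "Sampling" shifts from continuous to discrete at a machine boundary:
sample :: Double -> Signal a -> [a]
sample dt s = map (s . (* dt)) [0 ..]   -- infinite stream of samples

-- "Reconstruction" goes back, here by zero-order hold (one of many
-- possible reconstructions). Partial: t must fall within the samples.
reconstruct :: Double -> [a] -> Signal a
reconstruct dt xs t = xs !! max 0 (floor (t / dt))
```

The representation shift lives entirely in `sample` and `reconstruct`; everything on either side composes in its own natural abstraction.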

This perspective is what's behind my confidence that we will someday really learn to look at whole systems in a consistently functional/denotational style (simple & precise semantics). The imperative interfaces in today's OSs and databases are troubling at first, and indeed I often hear people (even on #haskell) using these examples as demonstrations that the imperative programming model is inescapable. These examples don't trouble me, however, because I see that we've already dealt with others of their kind. Underneath the implementation of our current functional abstractions (numbers, strings, trees, functions, etc.), there are imperative mechanisms, such as memory allocation & deallocation, stack frame modification, and thunk overwriting (to implement laziness). Every one of these mechanisms can be used as evidence against the functional paradigm by someone who hasn't yet realized how the mechanism can be shifted out of the role of a programming-model notion and into the role of implementation of a programming-model notion. The new notion is more composable by virtue of being semantically simpler and mathematically more well-behaved.

Stack and register munging and jump/GOTO are implementations of the semantically simpler notion of function application. (I mean "function" in the sense of math and of pure functional programming.) Moreover, stack & register munging is the input/output (IO) part of the implementation of function application. Information goes out of one context and into another. My vision of "solving the I/O problem" for functional programming is nothing like Haskell's semantically imperative (non-)solution (called "IO" in Haskell), but rather to continue shifting semantically imperative mechanisms out of the programming model and into the implementation of (semantically simple) function application.

How do we get from here (still stuck in the quagmire of imperative programming models) to there (simple semantics and consistently excellent composability)? First, stop rehearsing the thinking that keeps us stuck. When we keep telling ourselves and each other that imperative notions are inevitable, accompanying such statements with hand-waving arguments, we are weaving self-fulfilling prophecies. A person convinced of (and perhaps ego-invested in) the nonexistence of new possibilities is a person actively resisting their opportunity to discover possibilities — too stuck in the obvious to discover new truths.

The significant problems of our time cannot be solved by the same level of thinking that created them. (Albert Einstein)

They are ill discoverers that think there is no land, when they can see nothing but sea. (Francis Bacon)

Hand-waving is an important factor in staying stuck in impossibility thinking, since rigorous argument uncovers unconscious limiting assumptions. I'm just as happy if you are working on showing impossibility as discovering the possible, as long as you are rigorous and get rigorous peer review. Otherwise, we all fall into the trap of self-deceit:

Nothing is easier than self-deceit. For what each man wishes, that he also believes to be true. (Demosthenes)

If you wish to see the truth, then hold no opinion for or against. (Osho, Rajneesh Chandra Mohan Jain)

The discovery of this reality is hindered rather than helped by belief, whether one believes in God or believes in atheism. We must make here a clear distinction between belief and faith, because, in general practice, belief has come to mean a state of mind which is almost the opposite of faith. Belief, as I use the word here, is the insistence that the truth is what one would "lief" or wish it to be. The believer will open his mind to the truth on the condition that it fits in with his preconceived ideas and wishes. Faith, on the other hand, is an unreserved opening of the mind to the truth, whatever it may turn out to be. Faith has no preconceptions; it is a plunge into the unknown. Belief clings, but faith lets go. In this sense of the word, faith is the essential virtue of science, and likewise of any religion that is not self-deception. (Alan Watts)

In summary, I'm suggesting we take a fresh look at I/O and functional programming. Instead of putting I/O into the functional programming model (as in continuation- and stream-based I/O), or adopting a hybrid functional/imperative model (as in monadic I/O), let's explore how to move I/O entirely out of our programming model and into the implementation of a denotationally simple model for whole systems.


Tags: vision

31 Comments

  1. #### sclv:

Your post just inspired me to take a wary eye to the code I'm working on at this moment. It invokes a few core, pure computation functions. However, the ugly bits involve grabbing information from a number of different places and handling failure cleanly — while there's a little IO involved, the sequencing is, at base, brought on by use of the Maybe monad. I'd venture a guess that 90% of my imperative-ish code is that way for this reason. In JavaScript it would look fairly close to ANF. If we had a syntax that didn't look like sequencing, and which was commutative, but allowed us the convenience of binding subexpressions (i.e. didn't lock us in, as applicative does, to a pointfree(ish) style), then I think we'd really be cooking. I don't know if this would lead to fewer keystrokes, but it would make it more evident where we were actually doing imperative things, and where imperative notation was just making it appear so.
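sclv's wish for a notation that doesn't look like sequencing can be illustrated with Haskell's applicative style, which expresses the same Maybe plumbing with no apparent order. This is a sketch; `lookupBoth` is a hypothetical example, not sclv's code.

```haskell
-- In do-notation the Maybe plumbing looks imperative:
lookupBoth :: [(String, Int)] -> Maybe Int
lookupBoth env = do
  x <- lookup "x" env
  y <- lookup "y" env
  return (x + y)

-- Applicatively, the same program has no apparent order; the two
-- lookups are independent and could be evaluated either way:
lookupBoth' :: [(String, Int)] -> Maybe Int
lookupBoth' env = (+) <$> lookup "x" env <*> lookup "y" env
```

The applicative form makes the commutativity visible, at the cost of the binding convenience sclv asks for; a notation with both properties is exactly the gap being described.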

5 January 2010, 11:38 am

  2. #### Heinrich Apfelmus:

I completely agree. Designing programming languages is about designing models, however removed from the "implementation model" (:= just "the" implementation). It's the job of compilers to implement them.

There is a small point worth noting, however: sometimes you do want the "implementation model", for instance when writing a compiler. Even in Haskell, there should be some "way of speech" to describe von Neumann machines in case you want to control the latter precisely. But this is quite irrelevant to your main point of insisting on a good model when you don't want to do that, which is most often. Not to mention that nothing prevents another model from describing von Neumann machines equally precisely, just very differently.

5 January 2010, 11:39 am

  3. #### Sam Martin:

More than a few functional programs boil down to interpreters or EDSLs of some sort: a type or types for an AST and functions for manipulating and "evaluating" the type. I expect a large number of IO-y things can be modelled and executed in this way, with IO only occurring during evaluation. Graphics and music spring to mind. I guess this is the kind of thing you are leaning towards?

If so, I guess the question you are asking is whether there is a model that satisfies all real-world interaction? My intuitive guess is that for the model to have the most utility, it needs to match the tasks you are attempting as closely as possible, so there's a risk that no one model will (efficiently) rule them all.

Or are you thinking of something else?

5 January 2010, 12:25 pm

  4. #### Marc Hamann:

To make sure I understand you, I'll share a couple of approaches I've used to make programs more functional, even in imperative languages like Java.

One approach is to push all the nasty IO and statefulness to the “top” layer of the program, and make all sub-components functional (i.e. deterministic on inputs). All non-determinism, error-handling, etc. is done at the top level “where you can keep an eye on it”. In my mental model, this is somewhat like what Haskell monads do for you: they implicitly “push the mess up to the top level”. (The difference is that monads also hide the messiness, so it is harder to keep an eye on it.)

The other approach, which, if I understand correctly, is the one you are advocating, is to totally encapsulate the messiness within the sub-components; i.e., the "bottom level" components are totally responsible for handling errors and non-determinism so that they don't "escape" out into the wider application. In this model, "from the top" it looks like the program is functional. This approach is exactly like what happens in a compiler for a pure functional language implemented on an imperative platform.

Is that the idea?

For what it is worth, my experience is that both of these approaches can work “in the small”, but that the former works better for large apps. The reason for this is that intelligent error-handling and error-reporting is often dependent on contextual information not available to the local component. Also it is a lot harder to anticipate all the possible environmental problems that can occur when you are building a small component. It is usually easier to do that in the context where you “glue the components together”.
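The first approach Marc describes, pure sub-components with all the IO at the top layer, might be sketched like this (an assumed shape for illustration, not his actual code):

```haskell
-- Pure core: deterministic on its inputs, trivially testable.
summarize :: String -> String
summarize contents = "lines: " ++ show (length (lines contents))

-- Imperative shell: the only place IO (and failure handling) appears,
-- "where you can keep an eye on it":
--   main = readFile "input.txt" >>= putStrLn . summarize
```

The contextual error-handling Marc mentions then lives entirely in the shell, which can wrap `readFile` in whatever recovery policy the whole application needs, without the core ever knowing.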

5 January 2010, 12:27 pm

  5. #### dubhrosa:

The link to the Backus paper is broken; it should be just http://www.stanford.edu/class/cs242/readings/backus.pdf

Interesting article.

5 January 2010, 1:23 pm

  6. #### conal:

dubhrosa — thanks! now fixed.

5 January 2010, 1:38 pm

  7. #### Robin Green:

New readers of this blog may not realise from this post that the new way of looking at functional I/O Conal is advocating here is Functional Reactive Programming, which Conal has been developing, writing papers about and blogging about for some time now.

5 January 2010, 2:51 pm

  8. #### Conal Elliott » Blog Archive » Is Haskell a purely functional language?:

[…] About « Can functional programming be liberated from the von Neumann paradigm? […]

5 January 2010, 3:09 pm

  9. #### Roly Perera:

Indeed, it's thoroughly perverse that Haskell programs have to communicate by pretending to be C programs! As Ian Holyer put it, functional programs are currently treated as unwelcome guests in an imperative world. It's a situation we can invert. I'd like to be able to treat every change to a computed value as in some sense an "output" (and perhaps every change to a non-computed value, i.e. a "constant", as an input). Differences are the things that make a difference (= have information content). What looks like imperative state change is something like being inside a pure functional computation and being able to observe intermediate states. Put like that, it's abstract and hand-wavey, but I think we're saying compatible things, and I also think there's a very simple, very practical approach to programming lurking nearby.

Put in another abstract, hand-wavey kind of way (it's all I've got time for right now, so feel free to ignore), IO shouldn't be a "language feature", but a model of interactive computation. It's about programs being able to observe and influence each other as they execute, which essentially just means consuming values that other programs produce, and producing values for other programs to consume.

5 January 2010, 3:56 pm

  10. #### Koray Can:

You are exactly right that monadic i/o is just pretending that i/o is not really i/o. This way we can pretend that very sequestered, reliable i/o (e.g. console or disk i/o that a compiler may do) is “like” a function in order to keep equational reasoning.

A lambda calculus program does describe a computable function. However, LC is not a model of computation, for we cannot calculate an upper bound on the resource consumption of its reduction steps without resorting to another model of computation, such as TMs (according to Yuri Gurevich). But models of computation don't end there, either.

In order to talk about distributed algorithms the first thing to do is to present an even more powerful model that accounts for concerns like non-determinism, storage/channel capacity, network topology, volatile vs non-volatile memory, loss, corruption and re-ordering of messages, scheduler behavior, hardware clocks, etc. A lot of these are very relevant to application programmers.

C+POSIX or Java+JVM express very few, but some, of these things (and badly), yet LC describes even less and is still the basis of a high-level language? And once you start doing real work, running monadic i/o left and right that may throw errors, things don't look so "elegant" any more.

Unfortunately, a lot of applications can't have their nasty but comparatively tiny i/o sections sequestered from the large but beautifully deterministic computations that lend themselves to equational reasoning. In fact, there are apps that have multiple outstanding i/o requests at any given moment during their weeks-long executions, and it's not easy to find large pure functions in them.

So, I disagree with you when you say "move i/o entirely out of our programming model". On the contrary, I want as much of the above as possible modeled better.

5 January 2010, 11:05 pm

  11. #### Tom Lokhorst:

A very inspiring post. The first step is admitting we have a problem.

Just to note, the type of main isn't required to be IO (), but rather of type forall t. IO t.

“The value of the program is the value of the identifier main in module Main, which must be a computation of type IO t for some type t.”

(From: http://www.haskell.org/onlinereport/modules.html)

So in theory programs are somewhat composable in that they're computations that result in a value (other than ()), but in practice I don't think this gives you much, nor have I ever seen this used in any actual Haskell program.

6 January 2010, 2:51 am

  12. #### Damien Guichard:

There are two separate questions here.

First, can we imagine a world where any interaction can be done the compositional way? Yes, we can. Great effort is spent doing so; there is a strong categorical background; there is this new arrow calculus. I am confident that it is within reach by the great people involved.

Second, can we imagine a world where any functional programmer can pretend to live in a stateless world? No, we can't. The imperative world is all around and produces tons of non-trivial software designed with the dysfunctional approach. Software that can't be reconciled with the compositional way. Thus the monadic style is here to stay. It will last until the day the compositional way is so successful that no new mainstream framework uses the old style. That's decades at least.

6 January 2010, 10:05 am

  13. #### Jonathan Turner:

Just curious: how do you see alternate solutions like Clean's uniqueness types?

6 January 2010, 12:06 pm

  14. #### Daniel Vainsencher:

Two thoughts. Having a comfy world that is highly compatible with a language's model is both powerful and dangerous. This is exemplified by Smalltalks such as Squeak. There, the impedance mismatch caused by I/O is different and less important than in functional programming, but it still exists. However, a new program can go far before needing to do input/output, because a default form of persistence is provided for arbitrary program states/object graphs. For many programs it is trivial to ship and share them bundled with their data, to be supplemented only by user interactions. This has interesting effects.

Imagine starting up GHCi and always having available data structures representing natively all the source (parse trees?) of GHC and libraries, many images, sounds… wonderful. The ability to concentrate on manipulating information in the form most natural for the programming model (in this case objects), before having to deal with “the real world”, makes it amazingly easy to play with ideas. Addictive.

That's the second thought: probably half the reason Smalltalk is less popular than its successors is that it (community, libraries) places less focus on connecting to the rest of the world. This is changing with the web, but…

For denotational programming, in Haskell or otherwise, I'd love to see what it would do to my computer if it transformed it in its own image, but be careful what you wish for on the way there.

7 January 2010, 1:01 am

  15. #### conal:

Second, can we imagine a world where any functional programmer can pretend to live in a stateless world ? No we can't. […]

Please see my comments above about self-fulfilling prophecies, hand-waving arguments, and self-deceit.

As I said, I don't care whether you're playing "Yes, we can" or "No, we can't", so long as you're rigorous. Rigorous proofs of possibility are usually (not always) constructive: demonstrate an example. That demonstration is what I'm working toward and inviting others along, as in this post and most of my other work. Rigorous/correct impossibility proofs are usually much more profound (e.g., uncountability of the reals). The required rigor will reveal your assumptions and maybe point you to the possibility you set out to disprove. And if it turns out that you're correct about impossibility, your rigorous demonstration will be helpful to others. Much more so than the popular practice I've been calling "proof by lack of imagination".

7 January 2010, 9:52 am

  16. #### conal:

So in theory programs are somewhat composable in that they're computations that result in a value (other than ()), but in practice I don't think this gives you much, nor have I ever seen this used in any actual Haskell program.

Hi Tom. Thanks for the correction about the type of main (though the forall version is probably not quite what you mean).

The main hindrance I see in composing Haskell programs is exactly the same as with the original Unix model, which is that the programs inseparably combine functionality and interface (I/O). I give an alternative in _Tangible Functional Programming: a modern marriage of usability and composability_. The trick is to keep functionality & interface together for usability and separable for composability.

7 January 2010, 10:40 am

  17. #### franco:

Do you think it is the imperative nature of I/O at the lowest level that traps us in the paradigm? At the OS level, are these imperative fundamentals the only ways we can perform I/O? I understand that every Turing-complete system can evaluate/emulate any other Turing-complete system. If we keep the usable fundamentals of I/O imperative, will that imperativeness trickle up to the highest-level expressions?

If the technically (but awkwardly) functional IO had its imperative essence removed at a sufficiently low level (say, the kernel), could we break out of the paradigm? Or would it always be a clever hack like monads? Certain things, like determining the current time with no inputs, are not computable. A machine needs input: either from someone initially setting a time and having a piece of quartz determine the next moment, or from a camera recording the positions of the heavenly bodies. Both of those are non-deterministic inputs for performing something as trivial as getting the current time.

Do our realities interact with a Turing-complete computer analogously to a stack augmenting an FSA (to yield a more powerful pushdown automaton)? Do computers really express time/space as continuously sequential as we experience it, analogous to an FSA not expressing arbitrary state like a pushdown automaton?

Maybe we need denotative/functional models for things like clocks/time, networking I/O, random values, and device I/O. These things can provide values that can be used in Turing-complete computations, but themselves need not be made up of Turing-complete systems.

If we are to bust out of the von Neumann paradigm, we need to denote systems beyond Turing completeness.

9 January 2010, 10:12 am

  18. #### Rajesh Krishnan:

Before everyone loses track of the question Jonathan Turner asked about Clean, I would also like to know why uniqueness types in the Clean language (or another version of them) won't be the kind of replacement for monads that Conal is seeking.

(Forgive me if the question doesn't make too much sense, as I am new to both.)

10 January 2010, 9:34 pm

  19. #### Raoul Duke:

@clean, uniqueness types.

please read the DDC thesis, which explains how the uniqueness-typing ASCII gets quickly out of hand in real-world programs, if I recall correctly.

11 January 2010, 5:34 pm

  20. #### Neel Krishnaswami:

Hi Rajesh,

Uniqueness types are very nice, and their cousin linear types are of fundamental importance to proof theory and semantics. However, they don't directly supply an answer to Conal's question: namely, does this library have good equational reasoning principles?

A good example of this arises when you try to program randomized algorithms in a functional style. The RNG will need a seed, and so the typical thing to do is to thread the seed through the program, and use either a monadic API or explicit state-passing (with uniqueness types) to manage this work, so that whenever we make a call to the RNG, we have the state available.

However, this is a lousy API. The reason is that when we're writing our program, we don't care about the order in which we make calls to the RNG, since the whole point is that it gives us a source of unpredictable values. However, the fact that we're passing a state around means that we cannot reorder operations without changing the visible behavior of the program. So there's a gap between how we want to think about the program and what we can actually do.

A better API is a monadic one, where the type constructor P(A) is interpreted as a probability distribution over values of A. Then we give no operation to "call the RNG", and instead supply an operation to create distributions (for example, "choose p a b" might mean a occurs with probability p and b occurs with probability 1-p). Now the return of the monad, "return v", is the point distribution that is v with probability 1, and the bind operation corresponds to conditionalization.

Now, this gives very good equational reasoning principles — in fact, we can apply probability theory very directly to reasoning about our programs. The implementation is actually still the same as the usual state-passing implementation, which shows that the API exposed is what's important to determining the reasoning principles we get. (If you want the details of this idea, there's a nice paper by Avi Pfeffer and Norman Ramsey, "Stochastic Lambda Calculus and Monads of Probability Distributions", at http://www.cs.tufts.edu/~nr/pubs/pmonad.pdf.)
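The API Neel describes can be sketched with the standard weighted-list representation of distributions, a simplification of the Ramsey & Pfeffer construction; the names below are illustrative, not from their paper.

```haskell
-- P a: a finite distribution as a list of (value, probability) pairs.
newtype P a = P { runP :: [(a, Double)] }

instance Functor P where
  fmap f (P xs) = P [ (f x, p) | (x, p) <- xs ]

instance Applicative P where
  pure x = P [(x, 1)]                                  -- point distribution
  P fs <*> P xs = P [ (f x, p * q) | (f, p) <- fs, (x, q) <- xs ]

instance Monad P where
  P xs >>= k = P [ (y, p * q) | (x, p) <- xs, (y, q) <- runP (k x) ]

-- "choose p a b": a with probability p, b with probability (1 - p).
choose :: Double -> a -> a -> P a
choose p a b = P [(a, p), (b, 1 - p)]

-- Probability that a predicate holds of the outcome.
prob :: (a -> Bool) -> P a -> Double
prob f (P xs) = sum [ p | (x, p) <- xs, f x ]
```

Because there is no threaded RNG state, reordering independent `choose` calls cannot change a program's meaning, which is exactly the reasoning principle the state-passing API loses.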

14 January 2010, 2:22 am

  21. #### Linktipps Februar 2010 :: Blackflash:

[…]

14 January 2010, 9:17 am

  22. #### A non von Neumann Style of Programming « Code and Bugs:

[…] more information, one can read Conal's article here. In fact, it was his blog which directed me to Backus's work. A single article upon which I […]

24 February 2010, 11:10 am

  23. #### Kumar:

Would you consider Unix pipes an example of pushing what we treat as "IO" at the program level down into an interoperation mechanism in the implementation of a "shell programming language"? For example, would "ls | grep txt" be an example of the non-IO thing you want to shoot for?

6 April 2010, 7:05 pm

  24. #### conal:

@Kumar: I see Unix pipe examples like ls | grep txt as hinting in the direction I'm talking about, but not quite going there. For instance, while grep is a pure function, it takes its two inputs in very different ways, one of which is non-compositional (except through inconsistent hacks). Also, ls takes its input in a very non-functional way. It accesses various bits of mutable state (the current working directory and current file-system contents), as well as having an implicit reference to a particular file system. And then there's the lack of static typing in Unix chains. Please see my "modern marriage" talk for discussion of these points. (Near the start, in discussing the Unix dream and how its choice of text I/O prevented it from achieving that dream.)
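Kumar's pipe example, read the fully compositional way, might look like this sketch: both stages become typed pure functions over values, with the file listing supplied as an input rather than read from ambient mutable state (`grepFor` is a hypothetical stand-in, not a real shell tool):

```haskell
import Data.List (isInfixOf)

-- grepFor plays the role of "grep txt": a pure, typed filter.
grepFor :: String -> [String] -> [String]
grepFor pat = filter (pat `isInfixOf`)

-- "ls | grep txt" becomes ordinary function composition over a value;
-- the listing is an argument, not a read of the current file system:
--   pipeline listing = grepFor "txt" listing
```

Unlike `ls`, the listing here has no implicit dependence on a working directory or file-system state, which is the difference the reply above points at.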

7 April 2010, 7:38 am

  25. #### Kumar:

Just watched your talk. TV looks interesting indeed. The need for introspection of fusions (which one of the audience pointed out) struck me too as quite crucial to a tinkering experience. Also, the construction of a widget to visualize a value seemed to be a substantial programming activity in itself; in other words, TV seems to lead to a "caste" of developers who focus on building visualizers. For example, building a widget for "list of x", where the x can be numbers, images, functions of lists of ys, or (to artificially complicate it) "Parser y" monads. It'll certainly take a brilliant designer to capture that kind of generality, which seems so easy with text, in a sensory model (visual, audio, whatever), but it sounds like a fun thing to work on :) The operation of fusion that you show in your talk probably also has nicer interface possibilities on touch devices like the you-know-what. They just beg for a powerful way to program them in a visual+tactile way … which is a big unfulfilled gap.

Btw, it was mildly ironic that in your talk you had to resort to textual notation to describe the constructs behind Eros :) That is somewhat like asking for the visual equivalent of the understanding that lists, Maybe, and IO are in some sense “the same”.

The download link at http://www.haskell.org/haskellwiki/Eros is broken.

8 April 2010, 8:22 am

26. #### Kumar:

… and, since I mentioned several monads, I dont think there is anything fundamentally wrong with them as long as the unsafePerformIO kinds of things are completely ruled out. In fact, itll be great if TV encompasses monads.

8 April 2010, 8:41 am

27. #### conal:

Kumar:

Did you somehow get the impression that I have a bias against monads? If so, how?

Many of my best friends are monads (e.g. (partially applied) pairings, functions, Maybe, Future, backtracking, …). These friends are all denotative. They have precise & tractable meanings I can work with. In contrast, there are non-denotative types, like IO. And while IO happens to be a Monad, its also a Functor, Applicative, Monoid (when applied to a Monoid), and even a Num (when applied to a Num)! These classes dont hinder or help denotative-ness.

My goal here is to remind people of the quest of denotative programming and hopefully inspire some curiosity and creative exploration of new denotative alternatives to Haskells IO.

20 May 2010, 9:47 pm

28. #### Dobes:

FRP seems to struggle with interactive sequential procedures… for example:

    printLine "What is your name?"
    name ← readLine
    printLine ("Hello, " ++ name ++ "! How old are you?")
    age ← readLine
    printLine "Well, you look much younger..."

Is FRP unsuitable for this kind of application?

12 June 2014, 9:42 pm

29. #### conal:

@Dobes: Interesting question and perhaps worth making more precise. As stated, it's sort of an apples & oranges comparison, since FRP (as originally formulated) is denotationally based and Haskell's IO is not. One way to investigate is to cast something like your IO example in terms of a (precise) denotation, and then see whether/how to capture that denotation in terms of FRP's denotation, which is functions of (continuous) time. I expect that doing so would flush out some hidden assumptions and reveal some alternatives.

17 June 2014, 5:01 pm

30. #### Dobes:

@conal

I admit I was hoping youd covered this base already in your work or knew someone who did :-).

Im not sure I have a good enough sense of what you mean by a “precise denotation” to come up with one.

I can think of some of my own approaches to the problem, and I may try some experiments to see what works out in practice, but Im not sure if they are going to be as “good” as the procedural version.

For example, if the input is given as a list of lines of text and the output is given as a list of lines, then one could have a function from input lines to output lines. Like:

    prog []          = ["What is your name?"]
    prog [name]      = (prog []) ++ ["Hello, " ++ name ++ "!  How old are you?"]
    prog [name, age] = (prog [name]) ++ ["Well, you look much younger..."]

Presumably I can devise a function from time to a list of input lines, and pass that into this function to get the output lines.

I know this approach has major flaws, but is this an example of what you mean by casting something in terms of denotation?
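
For what it's worth, the equations above can be made into a small runnable sketch by adding a catch-all case (an addition, so that inputs longer than two lines are still handled):

```haskell
-- The sketch above, made total: a pure function from the input lines
-- received so far to all output lines issued so far.
prog :: [String] -> [String]
prog []         = ["What is your name?"]
prog [name]     = prog [] ++ ["Hello, " ++ name ++ "!  How old are you?"]
prog (name : _) = prog [name] ++ ["Well, you look much younger..."]
```

Driving it with a function from time to accumulated input lines would then yield, at each time, the accumulated output lines.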

18 June 2014, 12:54 pm

31. #### Jcandy:

I was thinking that the problem that necessitates the IO monad is destruction. If you remove destruction, then a lot more models are possible. Here is one. A program is just a monoid of all the output it has issued, parameterized by a dictionary of the records the user has input.

    main :: MMap k v -> MMap kk vv

Dictionary lookups on a key block until that key arrives in the dictionary.

    lookup :: MMap k v -> k -> v

Sometimes you want to wait on all keys at the same time, and produce a result. This form accomplishes this:

    mapp :: MMap k v -> (k -> v -> MMap kk vv) -> MMap kk vv

The only caveat is that if mapp ever detects that the results of the function for two key-value pairs disagree at a key, it must signal an error.

All destruction actions are pushed to the top level of the model. The option to delete input records can be provided. If a record is deleted, all computations that depended on its existence can be automatically recomputed.
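
As a toy approximation of this interface (a sketch only, using ordinary finite maps, so the block-until-the-key-arrives behavior is not modeled), `mapp` can union the per-entry results and signal an error on any disagreement:

```haskell
import qualified Data.Map as M

-- Approximating MMap with an ordinary finite map; lookups here are
-- not blocking, unlike in the model described above.
type MMap k v = M.Map k v

-- Apply f to every entry and union the resulting maps, erroring out
-- if two entries' results disagree at some key (the caveat above).
mapp :: (Ord kk, Eq vv) => MMap k v -> (k -> v -> MMap kk vv) -> MMap kk vv
mapp m f = M.unionsWith conflict [ f k v | (k, v) <- M.toList m ]
  where
    conflict x y
      | x == y    = x
      | otherwise = error "mapp: results disagree at a key"
```

A real implementation would need the blocking dictionary semantics, but this captures the merge-with-conflict-detection part of the model.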

20 August 2014, 11:08 am
