For quite some time there have been
very compelling arguments for non-strict functional programming in the style made popular by Haskell; here are some
more recent ones. In a nutshell: non-strict semantics is a very powerful "glue" to aid the
modular decomposition of programs. And, as software gets more complex, modularity matters more than ever. (Types are arguably even more important, but they are orthogonal to the strict versus non-strict debate.)
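To make the "glue" point concrete, here is a classic illustration (essentially the Newton-Raphson example from Hughes's "Why Functional Programming Matters", not an example taken from the arguments linked above): non-strictness lets us write a potentially infinite producer of approximations and, separately, a consumer that decides how many of them to demand.

```haskell
-- Newton-Raphson approximations of sqrt n, as an infinite list (the producer).
approximations :: Double -> [Double]
approximations n = iterate (\x -> (x + n / x) / 2) 1

-- The termination criterion lives in an independent consumer, which demands
-- only as many approximations as it needs.
within :: Double -> [Double] -> Double
within eps (x:y:ys)
  | abs (x - y) <= eps = y
  | otherwise          = within eps (y:ys)
within _ _ = error "within: needs at least two approximations"

squareRoot :: Double -> Double
squareRoot n = within 1e-9 (approximations n)
```

Neither definition knows how much of the list the other produces or consumes; non-strictness is what lets us compose them without losing termination.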
The most serious critique of non-strict semantics is that the time and space behavior of its implementation (lazy evaluation) is difficult to reason about. In other words: non-strict functional programming allows reasoning about
denotational properties but hinders reasoning about
operational ones. And
how something is computed is often as important as
what is computed: for example, all sorting algorithms (when seen as pure functions) have the
same denotation, yet their operational differences in time and space usage are extremely relevant.
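One standard illustration of how much the "how" can differ under lazy evaluation (my example here, not one taken from the discussion above): taking the head of an insertion-sorted list costs only linear time, because just enough of the output is computed to expose the minimum, even though denotationally this is indistinguishable from sorting the whole list and then taking the head.

```haskell
-- Insertion sort, as a pure function, denotes the same sorted list as any
-- other sort; operationally, however, demanding only the head of its result
-- costs O(n) comparisons rather than O(n^2), because laziness computes just
-- enough of the output to expose the minimum.
isort :: Ord a => [a] -> [a]
isort = foldr insert []
  where
    insert x []     = [x]
    insert x (y:ys)
      | x <= y      = x : y : ys
      | otherwise   = y : insert x ys

smallest :: Ord a => [a] -> a
smallest = head . isort   -- linear time under call-by-need
```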
However, I don't think the situation is hopeless; two things are required to improve this state of affairs: 1) a language-level operational model of lazy evaluation; and 2) a compositional method for assigning costs derived from the execution model to language fragments.
There has been a good solution for the first part for some time now, namely, Launchbury's
natural semantics for lazy evaluation. The key idea is to consider an abstract model of the heap as a mapping from variables to expressions (thunks); the evaluation of
let x=e1 in e2 extends the heap with a mapping from x to e1 and proceeds to evaluate e2. To model call-by-need (i.e. graph reduction with in-place update) rather than call-by-name, we also need to update a thunk with its result when evaluating a variable. (I'm glossing over many technical points for simplicity; go and read the paper for the details.)
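For intuition, here is a toy interpreter sketch along these lines (my simplification, not the formal rules of Launchbury's paper): the heap maps variables to thunks, let allocates a thunk, and evaluating a variable forces its thunk and overwrites it with the result. Like the description above, it glosses over fresh-name generation and blackholing.

```haskell
import qualified Data.Map as M
import Control.Monad.State

-- A Launchbury-style core language: applications are restricted to
-- variables, so every intermediate value is named by a let.
data Expr = Var String
          | Lam String Expr
          | App Expr String
          | Let String Expr Expr
  deriving Show

-- The abstract heap: a mapping from variables to (possibly unevaluated)
-- expressions, i.e. thunks.
type Heap = M.Map String Expr

type Eval a = State Heap a

-- Big-step evaluation to weak head normal form (here: a lambda);
-- e.g. evalState (eval program) M.empty runs a closed program on an empty heap.
eval :: Expr -> Eval Expr
eval (Lam x e)     = return (Lam x e)
eval (Let x e1 e2) = do
  modify (M.insert x e1)          -- allocate a thunk for e1
  eval e2
eval (Var x) = do
  heap <- get
  case M.lookup x heap of
    Nothing -> error ("unbound variable: " ++ x)
    Just e  -> do
      v <- eval e
      modify (M.insert x v)       -- call-by-need: update the thunk with its value
      return v
eval (App e y) = do
  f <- eval e
  case f of
    Lam x body -> eval (rename x y body)
    _          -> error "application of a non-function"

-- Since arguments are variables, beta-reduction is just renaming the bound
-- variable (ignoring name capture for this sketch).
rename :: String -> String -> Expr -> Expr
rename x y (Var z)   = Var (if z == x then y else z)
rename x y (Lam z e)
  | z == x           = Lam z e
  | otherwise        = Lam z (rename x y e)
rename x y (App e z) = App (rename x y e) (if z == x then y else z)
rename x y (Let z e1 e2)
  | z == x           = Let z e1 e2
  | otherwise        = Let z (rename x y e1) (rename x y e2)
```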
I have been working with Kevin Hammond, Steffen Jost, Mário Florido
and Hugo Simões on a new approach to
the second problem, based on amortised static analysis. The use of amortisation for reasoning about lazy evaluation goes back to Okasaki's "Purely Functional Data Structures"; unlike the latter, our approach is type-based and fully automatic. Glossing over many technical details to focus on the intuitions, the key ideas are:
- for concreteness, we consider the number of allocations in a semantics similar to Launchbury's as our measure of cost;
- we annotate thunk costs in types, e.g. in a context x : T^q(A), the variable x is bound to a thunk that costs q to evaluate to whnf and whose value has type A;
- we also annotate the overall evaluation cost in type judgments, e.g. Gamma |-^p e : B means that evaluating e to whnf costs p (and that e admits type B);
- the type rule for variables requires paying the full cost annotated in the type (i.e. as if evaluation were call-by-name);
- we recover the cost of call-by-need by a structural type rule that allows prepaying part or all of the cost of a thunk (amortisation decouples the accounting of the cost from the dynamic moment when evaluation actually occurs); see the small example after this list.
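To see why prepaying matters, consider a small, hypothetical Haskell fragment (not an example from our paper). The thunk bound below is demanded twice, but under call-by-need its cost is incurred only once, which is exactly what paying for it once, at allocation time, captures:

```haskell
example :: [Int] -> Int
example xs =
  let ys = map (+ 1) xs       -- allocates one thunk; forcing the whole list
                              -- costs allocations proportional to length xs
  in  sum ys + length ys      -- ys is demanded twice, but under call-by-need
                              -- the mapped list is built only once
```

Charging the full thunk cost at every occurrence of ys (the call-by-name accounting of the variable rule) would count the map twice; prepaying it once at the let recovers the call-by-need cost.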
We have written a paper describing the analysis that will be presented at
the ICFP 2012 conference next week; the full soundness proof is available in a draft
extended version of the paper. Moreover, the analysis is fully automatic (it generates a linear program that can be fed to an LP solver to determine the type annotations) and we have built a
prototype implementation which we have used to accurately determine the costs of simple examples (but you most likely need to read the paper to understand the output).
Of course, the usual computability limitations apply, so this analysis cannot predict the cost of arbitrary programs. In particular, it is limited to costs that are constant or linear functions of the number of data constructors.