Languages as models of computation

Beware of the Turing tar-pit in which everything is possible but nothing of interest is easy.

Alan Perlis, Epigrams on Programming

Many different models of computation have been formulated, for different purposes. The best-known are perhaps the Turing machine and the lambda calculus, but others include the pi calculus for concurrent computation and typed, higher-order variants of the lambda calculus such as System F.
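
To make the idea concrete, here is a minimal sketch (purely illustrative, not from any library) of the untyped lambda calculus embedded in Haskell: terms are a data type, and a small environment-based evaluator gives them meaning.

```haskell
-- A tiny model of computation: untyped lambda calculus terms as a data type.
import qualified Data.Map as M

data Term = Var String       -- variable reference
          | Lam String Term  -- abstraction: \x -> body
          | App Term Term    -- application: f a
  deriving Show

-- A value is a closure: a lambda together with the environment it was built in.
data Value = Closure String Term Env
  deriving Show

type Env = M.Map String Value

eval :: Env -> Term -> Value
eval env (Var x)   = maybe (error ("unbound variable: " ++ x)) id (M.lookup x env)
eval env (Lam x b) = Closure x b env
eval env (App f a) =
  case eval env f of
    -- Using closures sidesteps capture-avoiding substitution; thanks to
    -- Haskell's own laziness, the argument is only evaluated if it is used.
    Closure x body cenv -> eval (M.insert x (eval env a) cenv) body

main :: IO ()
main = print (eval M.empty (App (Lam "x" (Var "x")) (Lam "y" (Var "y"))))
-- Closure "y" (Var "y") (fromList [])
```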

It is rather self-evident, but not commonly remarked upon, that programming languages themselves form models of computation (albeit often ill-specified ones). Although this in itself merely constitutes a change of terminology, and doesn't provide any new information, it becomes natural from this viewpoint to ask more mathematical questions about the model associated with a language, and ignore other (not necessarily unimportant) issues like syntax, speed, deployment model, etc.

Consider, for example, the difference in model between Haskell and C. In Haskell, memory management is entirely hidden; in C, it is entirely exposed. Haskell is lazy and pure; C is eager and side-effecting. Haskell has a strong type system with mathematical foundations; C's types are weak and based on the underlying machine. Yet even C and Haskell have similarities: in both, one can speak of "expressions" that are "evaluated" to produce "values". In a logic language like Prolog, such concepts are drastically changed or even abandoned entirely: an expression does not evaluate to a single value; it unifies with other expressions under some number (possibly zero, possibly infinite) of different bindings.
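
A small sketch of the Haskell side of that contrast: because evaluation is lazy, a conceptually infinite list is unproblematic, since only the demanded prefix is ever computed. The corresponding C program would need an explicit bound or loop.

```haskell
-- Laziness in action: 'naturals' is conceptually infinite, but only the
-- demanded prefix is ever computed, because evaluation is driven by need.
naturals :: [Integer]
naturals = [0 ..]

firstSquares :: Int -> [Integer]
firstSquares n = take n (map (^ 2) naturals)

main :: IO ()
main = print (firstSquares 5)   -- [0,1,4,9,16]
```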

As a footnote, advocates of strong typing sometimes contend that the important, fundamental differences between languages can be traced to their type systems. I would argue more broadly that they can be traced to the models of computation the languages embody.

Monads and model-changing

Haskell is a pure, lazy functional language, but it has within it a tool for changing your model of computation: monads. Monads can, for example, implement nondeterminism (the list monad, which behaves much like the "amb" operator), continuation-passing style (and hence call/cc), and computation over shared state. In fact, the IO monad is used to represent (an approximation of) the actual computer's model of computation; Haskell, being pure, lacks side effects, so (roughly speaking) you must use a monad to change your model if you want to do IO.
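
For instance, here is a sketch of nondeterminism in the list monad: each `<-` picks one candidate from a list of possibilities, and the monad explores every combination, with failing branches simply contributing zero results.

```haskell
-- Nondeterministic search in the list monad: each '<-' "chooses" a value,
-- and the monad explores all combinations of choices.
pythagoreanTriples :: Int -> [(Int, Int, Int)]
pythagoreanTriples n = do
  a <- [1 .. n]        -- choose a
  b <- [a .. n]        -- choose b >= a
  c <- [b .. n]        -- choose c >= b
  if a * a + b * b == c * c
    then return (a, b, c)   -- success: contribute one result
    else []                 -- failure: contribute no results

main :: IO ()
main = print (pythagoreanTriples 20)
-- [(3,4,5),(5,12,13),(6,8,10),(8,15,17),(9,12,15),(12,16,20)]
```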

Monads are, perhaps surprisingly, more an emergent feature than a built-in one. Haskell does have syntactic sugar for dealing with them (do notation), and the IO monad has special significance, but otherwise a monad is just another type class (a term I will not discuss here, but it is basically what it sounds like: a class of types). Other methods for representing computation, like Arrows, can be and have been created in Haskell, which brings me to my second point: monads are not magic. They let you do many interesting things, but not everything is easily expressible as a monad; I doubt, for example, that anything approximating a logic language like Prolog could be succinctly expressed via a monad. Nevertheless, they are a very useful tool.
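
To show how ordinary the machinery is, here is a minimal sketch of a user-defined monad: a hypothetical Logger that carries a list of log messages alongside a result. Nothing here is built into the compiler beyond the do-notation sugar; the names Logger, record, and addAndLog are my own inventions for the example.

```haskell
-- A user-defined monad, to show that Monad is just another type class:
-- 'Logger' pairs a result with a list of log messages, and '>>=' threads
-- the result forward while concatenating the logs.
newtype Logger a = Logger { runLogger :: (a, [String]) }

instance Functor Logger where
  fmap f (Logger (x, w)) = Logger (f x, w)

instance Applicative Logger where
  pure x = Logger (x, [])
  Logger (f, w1) <*> Logger (x, w2) = Logger (f x, w1 ++ w2)

instance Monad Logger where
  Logger (x, w) >>= k =
    let Logger (y, w') = k x
    in  Logger (y, w ++ w')

-- A helper for emitting a single log line.
record :: String -> Logger ()
record msg = Logger ((), [msg])

-- Do-notation works for any Monad instance, not just the built-in ones.
addAndLog :: Int -> Int -> Logger Int
addAndLog a b = do
  record ("adding " ++ show a ++ " and " ++ show b)
  return (a + b)

main :: IO ()
main = print (runLogger (addAndLog 2 3))   -- (5,["adding 2 and 3"])
```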

As the best-known use of monads in Haskell is to represent side-effecting computation, a trick unnecessary in impure languages, it is perhaps unsurprising that (to my knowledge) no other widely used language has monads or any similar "model-changing" construct. That is unfortunate, but it is also one reason Haskell is one of my favorite languages. A single model of computation may be all you need to write a program, but having access to multiple models can make the task easier and the code more beautiful.
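
As a closing sketch of what having multiple models buys you, here is the same task, numbering the items of a list, written twice: once threading a counter by hand in the plain functional model, and once in the State monad's model. This version assumes the commonly used mtl (or transformers) package for Control.Monad.State.

```haskell
-- The same task in two models: explicit state threading versus the State monad.
-- Assumes the mtl (or transformers) package for Control.Monad.State.
import Control.Monad.State

-- Plain functional model: the counter is passed in and handed back explicitly.
numberExplicit :: Int -> [a] -> ([(Int, a)], Int)
numberExplicit n []       = ([], n)
numberExplicit n (x : xs) =
  let (rest, n') = numberExplicit (n + 1) xs
  in  ((n, x) : rest, n')

-- State-monad model: the counter lives in the monad, and the traversal reads
-- like an imperative loop over the list.
numberStateful :: [a] -> State Int [(Int, a)]
numberStateful = mapM step
  where
    step x = do
      n <- get
      put (n + 1)
      return (n, x)

main :: IO ()
main = do
  print (fst (numberExplicit 0 "abc"))          -- [(0,'a'),(1,'b'),(2,'c')]
  print (evalState (numberStateful "abc") 0)    -- [(0,'a'),(1,'b'),(2,'c')]
```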