On dynamic and static types
I recently had a conversation online with Stefan Karpinski that motivated me to write up some thoughts I’ve been mulling over for a while on the nature of static and dynamic typing. Our point of departure was the statement “dynamically typed languages are unityped,” which Stefan objected to. The following is not intended as a refutation; it is merely an exploration of some ideas brought up during the conversation. To the extent that I have a thesis, it is that type theory (the exploration of static type systems and their properties) is a useful conceptual tool — even in dynamic languages!
1 Unityped isn’t an insult
Asserting that “dynamically typed languages are unityped” does not reject the legitimacy of dynamic types. It merely describes the static semantics of a dynamic language. In particular, “dynamic typing is a special case of static typing” does not justify the conclusion that “dynamic typing is a bad idea.” Monoids are a special case of semigroups, but neither is better than the other. The space of type system designs is large, and dynamic typing occupies only one small corner of it, but perhaps it is a useful corner.
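To make the claim concrete: the “one type” of a dynamic language can itself be written down in a static language. Here is a sketch in TypeScript (a static type system layered over a dynamic language); the tag names and the tiny `add` function are my own illustration, not any real runtime’s representation:

```typescript
// A sketch of the single static type that every expression in a
// dynamically typed language inhabits: one big tagged union.
// The tags here are illustrative, not taken from any real runtime.
type Value =
  | { tag: "num"; value: number }
  | { tag: "str"; value: string }
  | { tag: "bool"; value: boolean }
  | { tag: "nil" }
  | { tag: "fun"; value: (arg: Value) => Value };

// A dynamically typed "add" is, statically, a function from Value to
// Value that inspects tags at runtime. A dynamic type error is just
// the failing branch of a case analysis on the tag.
function add(x: Value, y: Value): Value {
  if (x.tag === "num" && y.tag === "num") {
    return { tag: "num", value: x.value + y.value };
  }
  throw new TypeError(`cannot add ${x.tag} and ${y.tag}`);
}
```

On this reading, every expression of the dynamic language has the single static type `Value`; “unityped” is a description, not a value judgment.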
2 Tags and sum types
Stefan mentioned in passing that “static languages have an awkward means of simulating what dynamic languages do and call it a ‘tag’”. Certainly embedding a dynamic language in Haskell by explicitly injecting into and projecting out of a big sum type would be awkward. But this misses the point: values in a dynamic language behave exactly like elements of a sum type. Rather than having to understand two concepts separately, I need only understand one.
More generally, whenever we can embed/compile one language feature into another, it deepens our understanding of both. Practically, it’s awkward to do arithmetic in the lambda calculus via Church-encoding. Practically, it’s awkward to write a large program in Core Haskell rather than full Haskell. But the translations between these pairs improve our understanding of each.
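As a concrete (if toy) instance of such a translation, here is Church-encoded arithmetic sketched in TypeScript; the encoding is standard, but the names are mine:

```typescript
// A Church numeral n means "apply f, n times, to a starting value".
type Church = <A>(f: (a: A) => A) => (x: A) => A;

const zero: Church = f => x => x;
const succ = (n: Church): Church => f => x => f(n(f)(x));
// Addition: apply f, m times, after applying it n times.
const plus = (m: Church) => (n: Church): Church => f => x => m(f)(n(f)(x));

// Convert back to an ordinary number to observe the result.
const toNumber = (n: Church): number => n((k: number) => k + 1)(0);
```

No one would compute this way in practice, but the translation shows that arithmetic needs nothing beyond functions, which tells us something about both arithmetic and the lambda calculus.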
3 Static types as a conceptual tool
Stefan seemed to believe that most programmers realize the depth of the differences between static and dynamic type systems. I am not so sure. I myself did not understand them fully until I studied type theory. I thought static typing (and by extension type theory) was a bug-catching convenience afforded at the expense of limiting what could be expressed in your language. I would have agreed with Stefan when he said:
Both static and dynamic types are descriptions of the structure and behavior of data. That dynamic languages associate structure and behavior to values rather than expressions is not all that significant.
Indeed, both static and dynamic types classify data. But thinking of static types as a “bug-catching convenience,” or as merely “descriptions of the structure and behavior of data,” ignores their most powerful uses: as a guide to designing languages, and an aid to reasoning.
The patterns discovered by type theory research are useful in the design of all languages. Sum types give us pattern matching. Session types based on linear logic provide a powerfully expressive basis for structuring concurrent programs. Parameterizing code over a Haskell-style typeclass or an ML-style module is a powerful mechanism for modularity and code reuse. Et cetera. Any of these ideas could be (and some have been) used in a dynamically typed setting.
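To illustrate the last of these: parameterizing code over a typeclass amounts to passing a dictionary of operations, which can be sketched in almost any language. Here it is in TypeScript, with a `Monoid` interface of my own devising standing in for the typeclass:

```typescript
// A "typeclass" rendered as a record of operations: a Monoid over T.
interface Monoid<T> {
  empty: T;
  append: (a: T, b: T) => T;
}

// Code parameterized over the dictionary works for every instance.
function mconcat<T>(m: Monoid<T>, xs: T[]): T {
  return xs.reduce(m.append, m.empty);
}

// Two instances: numbers under addition, strings under concatenation.
const sumMonoid: Monoid<number> = { empty: 0, append: (a, b) => a + b };
const strMonoid: Monoid<string> = { empty: "", append: (a, b) => a + b };
```

Haskell resolves the dictionary implicitly and ML packages it as a module, but the underlying design pattern, and its benefit for code reuse, is the same.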
Since these “design patterns” can occur in both static and dynamic languages, they can also be discovered without doing type theory. Call/cc, for example, was discovered before its connection to classical logic. But I believe good type theory yields uniquely reasonable abstractions. For every call/cc, there are a hundred ad-hoc, bug-prone, leaky abstractions created by doing “what seemed obvious” to solve the problem, then iterating until a big hairy mudball develops. The logical interpretation of call/cc assures us we have found something that is not merely ad hoc. In other words, I agree with Harper’s principle of computational trinitarianism.
Type theory is hardly the be-all, end-all of program design, but type theoretic thinking is immensely useful. I often find bugs by thinking “if I had types to express the property I want, what would they look like, and would this code typecheck?”. Static types also serve as a design record of your language: they tell you what is really going on, even in a dynamic language (“what does your one static type look like? what branches does it have, and how does each behave?”).
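A small example of this kind of thinking, sketched in TypeScript (the types and names are my own illustration): asking “what type expresses the property I want?” can turn a possible runtime crash into a compile-time obligation:

```typescript
// The property we want: "this list is non-empty, so taking its
// head cannot fail". TypeScript can state it as a tuple type with
// at least one element.
type NonEmpty<T> = [T, ...T[]];

// head is total on NonEmpty<T>: no runtime check, no possible crash.
function head<T>(xs: NonEmpty<T>): T {
  return xs[0];
}

// The one place non-emptiness must actually be checked: the boundary
// where an ordinary array enters. Callers handle the empty case here,
// exactly once, instead of at every use of head.
function fromArray<T>(xs: T[]): NonEmpty<T> | null {
  return xs.length > 0 ? (xs as NonEmpty<T>) : null;
}
```

Even in a dynamic language, where no checker enforces this, mentally running the same analysis (“where would `NonEmpty` fail to typecheck?”) points directly at the code paths that can crash on an empty list.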
And this is what I think downplaying the difference between static and dynamic types can obscure: the use of type theory as an aid to design and reasoning.