> It's absurd to suggest a rival to static typing in quantitative science.

No, it's not. Embedding units so deeply into the type information (I'm ignoring the static vs. dynamic language debate) means that all the different kinds of units and their conversions need to be known at compile time.
The issue is the degree of enforcement of type rules (i.e., type safety) at compile time vs. runtime. The point is not whether a framework (for computational science or otherwise) should be strongly or weakly typed - the point is how statically the type rules should be enforced. For instance, the statement
S = V*T
must ultimately (at runtime) succeed only if, e.g.,
- S is of type length
- V is of type velocity
- T is of type time
- the units of S, V and T are compatible (e.g., m, m/s, and s, respectively).
That's hopefully beyond dispute ;-)
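What such a runtime check amounts to can be sketched in a few lines (a hypothetical Quantity class tracking dimension exponents; illustrative only, not any particular library):

```python
# Minimal sketch of runtime dimension checking (hypothetical).
# Dimensions are tracked as exponents of (length, time);
# multiplication adds the exponents.

class Quantity:
    def __init__(self, value, length=0, time=0):
        self.value = value
        self.dim = (length, time)  # exponents of (m, s)

    def __mul__(self, other):
        return Quantity(self.value * other.value,
                        length=self.dim[0] + other.dim[0],
                        time=self.dim[1] + other.dim[1])

    def is_length(self):
        return self.dim == (1, 0)

V = Quantity(3.0, length=1, time=-1)   # 3 m/s  -> dim (1, -1)
T = Quantity(2.0, time=1)              # 2 s    -> dim (0, 1)
S = V * T                              # dims add to (1, 0): a length
```

A statically typed framework performs exactly this bookkeeping at compile time instead, so that an assignment of V*T to anything but a length never compiles.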
The question is whether the above statement should cause a compile-time error if S, V and T
1. are not of the correct types *and* units,
2. or whether only the types, and not the units, are enforced at compile time,
3. and what happens if the unit of T is not known until runtime and turns out to be ms instead of s? Will the result S then incorporate a factor of 1000, even though it does not appear in the statement?
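Case 3 can be made concrete with a small sketch (hypothetical names and scale table; this is one possible runtime resolution, not a statement about any existing framework): the unit of T arrives as data, and the conversion factor is applied during normalization, never appearing in the user's `S = V*T`.

```python
# Sketch of case 3: T's unit is known only at runtime.
# Each unit carries a scale factor to the SI base unit (seconds).

UNIT_SCALE = {"s": 1.0, "ms": 1e-3}  # factor to seconds

def distance(v_m_per_s, t_value, t_unit):
    # Normalize T to seconds before multiplying; the factor 1000
    # (or rather 1/1000) is applied here, implicitly.
    return v_m_per_s * t_value * UNIT_SCALE[t_unit]

print(distance(3.0, 2.0, "s"))      # 2 s    at 3 m/s -> 6 m
print(distance(3.0, 2000.0, "ms"))  # 2000 ms at 3 m/s -> 6 m as well
```

The answer to the question is then "yes": S silently incorporates the conversion factor, which is exactly the behavior a purely compile-time scheme cannot express when the unit is not known until runtime.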
And the point I was *actually* trying to raise in my original posting is this: in the case of computational science it seems pretty clear that one benefits from static type safety (as long as case 3 above can be handled), because velocity*time is always and will always be of type length. So what is it that makes dynamic typing apparently so desirable in other domains? Is it just the need for compile-time tolerance of inexactness in typing? Is it that for other, less "eternal" domains than science, we cannot afford the effort to enforce type safety with the same rigor? Or is there some other factor that I do not see?