I've also worked with OCaml code that didn't annotate its top-level functions, and you lose a lot of the advantages of static typing. Definitely worse.
Rust got it right.
In Haskell-land: at the global scope, yes, that's considered good practice, especially if the function is exported from a module. When you just want a local helper function for some tail-recursive fun, it's a bit of extra ceremony for little benefit.
(... but for Rust specifically local functions are not really a big thing, so... In Scala it can be a bit annoying, but the ol' subtyping inference undecidability thing rears its ugly head there, so there's that...)
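A minimal Haskell sketch of that trade-off (sumTo and go are made-up names): the exported function gets a signature, while the little local helper is left to inference.

sumTo :: Int -> Int
sumTo n = go 0 1
  where
    -- inferred as go :: Int -> Int -> Int; writing that out is pure ceremony
    go acc i
      | i > n     = acc
      | otherwise = go (acc + i) (i + 1)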
Type annotations serve two purposes:

1. leveraging the type checker to verify (aspects of) the correctness of your function, and
2. communicating intent to humans
I've found in my own explorations with Haskell that it's useful to write functions with annotations, verify that they work, and then remove the annotations to see what the inferred type would be. Since the code already compiled with the annotation, the inferred type will be either identical to or more general than the previously declared type. And since it is good practice to have a declared type, I generally replace the old declared type with the inferred one (though sometimes at that point I also change the name).
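A small sketch of that workflow (total is a made-up example):

-- start with the concrete type I had in mind
total :: [Int] -> Int
total = foldr (+) 0

-- once it compiles, ask GHCi for the inferred type of the bare body:
--   ghci> :t foldr (+) 0
--   foldr (+) 0 :: (Foldable t, Num b) => t b -> b
-- and, if the generality is wanted, promote it to the declared type:
total' :: (Foldable t, Num b) => t b -> b
total' = foldr (+) 0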
For example, suppose that you have a bug in the body of a function, but did not provide a type annotation for it. The function might still compile but not with the type you want. The compiler will only notice something is amiss when you try to call the function and it turns out that the function's inferred type doesn't fit the call site.
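A concrete Haskell sketch of that scenario (greet is hypothetical):

-- meant to take an Int, but a bug in the body makes inference
-- quietly assign it the type [Char] -> [Char]
greet n = "user #" ++ n

main :: IO ()
main = putStrLn (greet 42)
-- the error is reported here at the call site (roughly "No instance
-- for (Num [Char]) arising from the literal '42'"), far from the bug
-- in greet's body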
Basically, global type inference in the absence of type annotations means that changes in one part of the file can affect inferred types very far away. In practice it's best to use type annotations to limit inference to small sections, so that type errors are reported close to what caused them.
The problems are:
1. Type errors become much less localised and much harder to understand. Instead of "the function you're editing is supposed to return a string but it doesn't" you get "this other function three stack frames away that indirectly calls the function you're editing can't pass some value into println".
2. The inferred types are as generic as possible, which also means they are as hard to understand as possible.
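For instance (a hypothetical Haskell example), a function its author thinks of as working on [Double] comes back from inference with a Foldable container and extra class constraints:

avg xs = sum xs / fromIntegral (length xs)

-- intended:  avg :: [Double] -> Double
-- inferred:  avg :: (Foldable t, Fractional a) => t a -> a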
It's infuriating when a type error can "jump" across global functions just because you weren't explicit about what types those functions should have had, even if those types are very abstract. So early adopters learned to sprinkle in type annotations at certain points, until they discovered that the top level was a good place. In OCaml this pain is somewhat lessened when you use module interface files (.mli), but without them... it's pain.
switch (expr.kind) {
  case "number":
  case "string":
  case "var":
    ... // [Var]
  case "call":
    ... // [App]
  case "function":
    throw new Error("...[Abs]");
  case "let":
    ... // [Let]
}
Looks like most of the hard work's done, and it probably wouldn't be too tricky to get [Abs] in there too!*

* https://wikimedia.org/api/rest_v1/media/math/render/svg/10d6...
{ kind: "function", arg: A, returnType: { kind: "function", arg: B, returnType: C}}
Or is that case simply not covered by this bare-bones example? I can't parse the code well enough in my head to tell whether it would work, but I think it would.

EDIT: I just noticed the working demo at the bottom includes an example of a multi-argument function, so that answers whether it works.
Consider:

function f<T>(x: T) { return x; }
function g(x: number) { return { a: f(x), b: f(x.toString()) }; }
How would you encode the two uses of f in SAT? If that's easy, how about length and f in:
function append<T>(xs: list<T>, ys: list<T>) {
return match xs {
Nil() -> ys,
Cons(hd, tl) -> Cons(hd, append(tl, ys)),
};
}
function flatten<T>(xs: list<list<T>>) {
return match xs {
Nil() -> Nil(),
Cons(hd, tl) -> append(hd, flatten(tl)),
};
}
function map<T, U>(f: (T) => U, xs: list<T>) {
return match xs {
Nil() -> Nil(),
Cons(hd, tl) -> Cons(f(hd), map(f, tl)),
};
}
function sum(xs: list<number>) {
return match xs {
Nil() -> 0,
Cons(hd, tl) -> hd + sum(tl),
};
}
function length<T>(xs: list<T>) { return sum(map((_) -> 1, xs)); }
function f<T>(xs: list<list<T>>) {
return length(flatten(xs)) === sum(map(length, xs));
}
HM-style inference handles polymorphism and type application without a complicated SAT encoding. (Of course, the tricky part of the definition above is that the size of types can theoretically be exponential in the size of the program, but that doesn't happen for programs with human-understandable types.)
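The classic illustration of that blow-up (a made-up Haskell snippet):

dup x = (x, x)            -- dup :: a -> (a, a)
d2 x = dup (dup x)        -- d2  :: a -> ((a, a), (a, a))
d3 x = dup (dup (dup x))  -- d3  :: a -> (((a, a), (a, a)), ((a, a), (a, a)))
-- n nested calls give a type of size ~2^n from a program of size ~n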
The intuition is that you never need to backtrack, so boolean formulae (i.e., SAT) offer no help in expressing the type inference problem. That is, if you think of HM as generating a set of constraints, then what HM type inference is doing is producing a conjunction of equality constraints, which you then solve using the unification algorithm.
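Here is a minimal Haskell sketch of that "conjunction of equalities plus unification" view; the Type representation and every name are made up for illustration, not taken from the linked demo:

data Type = TVar String | TCon String | TFun Type Type
  deriving (Eq, Show)

type Subst = [(String, Type)]

-- apply a substitution to a type
apply :: Subst -> Type -> Type
apply s (TVar v)   = maybe (TVar v) id (lookup v s)
apply s (TFun a r) = TFun (apply s a) (apply s r)
apply _ t          = t

-- does variable v occur in type t? (guards against infinite types)
occurs :: String -> Type -> Bool
occurs v (TVar w)   = v == w
occurs v (TFun a r) = occurs v a || occurs v r
occurs _ _          = False

-- solve a conjunction of equality constraints; note there is no
-- backtracking: each step either simplifies the problem or fails
unify :: [(Type, Type)] -> Maybe Subst
unify [] = Just []
unify ((a, b) : rest) = case (a, b) of
  (TVar v, t)               -> bind v t
  (t, TVar v)               -> bind v t
  (TCon x, TCon y) | x == y -> unify rest
  (TFun a1 r1, TFun a2 r2)  -> unify ((a1, a2) : (r1, r2) : rest)
  _                         -> Nothing  -- a genuine type error
  where
    bind v t
      | t == TVar v = unify rest        -- trivial equation, drop it
      | occurs v t  = Nothing           -- would build an infinite type
      | otherwise   =
          let rest' = [ (apply [(v, t)] l, apply [(v, t)] r) | (l, r) <- rest ]
          in fmap (\s -> (v, apply s t) : s) (unify rest')

-- e.g. the single constraint  a -> a  ~  Int -> b:
-- unify [(TFun (TVar "a") (TVar "a"), TFun (TCon "Int") (TVar "b"))]
--   == Just [("a",TCon "Int"),("b",TCon "Int")]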
https://www.haskellforall.com/2022/06/the-appeal-of-bidirect...
It seems to me that type checking that relies on global constraint solving is usually a bad idea. Annotated function types result in less confusion about what a function does.
Anyway, nice demo. Bidirectional type checking is already quite powerful!