I started out following Crafting Interpreters, but gradually branched off from it until there was almost nothing left in common.
Tech stack: Rust, Cranelift (JIT compilation), LALRPOP (parser).
Original title: "A small programming language where everything is a value" (edited based on comments)
Use immutable pass-by-reference. Make a copy only if mutability is requested in the thread. This makes concurrent reads lock-free and also cuts down on memory allocations.
This is essentially what Herd does. It's pass-by-value only at the semantic level, but the same reference-counting optimizations still apply.
In fact, Herd's approach is a bit more powerful than this because (in theory) it can remove the copy entirely if the caller doesn't use the old value any more after creating the thread. In practice, my optimizations aren't perfect and the language won't always detect this.
The big downside is that we have to use atomic reference counts for _everything_. From memory this was about a 5-15% performance hit versus non-atomic counters, though the number might be higher if other bottlenecks were removed.
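To make that tradeoff concrete, here's a minimal Rust sketch of the pattern (shared immutable data, plus a private copy when a thread wants to mutate). This is just an illustration of the idea, not Herd's runtime:

use std::sync::Arc;
use std::thread;

fn main() {
    // Shared immutable data: reader threads only bump an atomic refcount.
    let data = Arc::new(vec![1u64, 2, 3, 4]);

    let reader = {
        let data = Arc::clone(&data); // atomic increment, no lock
        thread::spawn(move || data.iter().sum::<u64>())
    };

    // A thread that wants mutability takes its own copy first.
    let mut private = (*data).clone();
    private.push(5);

    println!("sum = {}, private len = {}", reader.join().unwrap(), private.len());
}

Every Arc::clone and drop there is an atomic operation; a single-threaded Rc would get away with plain increments, which is roughly where that 5-15% comes from.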
Nothing "real", just the synthetic benchmarks in the ./benchmarks dir.
Unnecessary copies are definitely a risk, and there are certain code patterns that are much harder for my interpreter to detect and remove. E.g. the nbodies benchmark does lots of modifications to dicts stored in arrays, and it's also the only benchmark that loses to Python.
The other big performance limitation with my implementation is just the cost of atomic reference counting, and that's the main tradeoff versus deep cloning to pass between threads. There would definitely be room to improve this with better reference counting optimizations though.
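For anyone wondering what the copy-on-write check looks like, Rust's Arc::make_mut is a reasonable stand-in for the concept (a sketch of the idea, not Herd's internals): the payload is cloned only while it's actually shared.

use std::sync::Arc;

fn main() {
    let a = Arc::new(vec![1, 2, 3]);
    let mut b = Arc::clone(&a); // shared, refcount == 2

    Arc::make_mut(&mut b).push(4); // still shared, so this clones the Vec once
    Arc::make_mut(&mut b).push(5); // now uniquely owned, mutates in place

    assert_eq!(*a, vec![1, 2, 3]); // the original is untouched
    assert_eq!(*b, vec![1, 2, 3, 4, 5]);
}

Patterns like nbodies (mutating dicts stored inside arrays) are presumably where this breaks down: the inner value tends to look shared, so more writes end up paying for a clone.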
In C, you can simply mutate the underlying characters. So changing the fifth character in a string is as easy as:
str[4] = 0;
Whereas using the immutable syntax you might have to do something like:
str = str.substr(0, 4) + "\0" + str.substr(5);
(Ah, no, your example elsewhere in the thread suggests that the referred-to structures get implicitly copied all over the place.)
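Just to put the two styles side by side in one place, here's the same "replace the fifth character" in Rust (plain library strings, nothing to do with either language discussed here):

fn main() {
    // Mutable, C-style: overwrite the byte in place.
    let mut bytes = String::from("hello world").into_bytes();
    bytes[4] = b'_';
    let mutated = String::from_utf8(bytes).unwrap();

    // Immutable-style: rebuild the string around the changed character.
    let s = String::from("hello world");
    let rebuilt = format!("{}{}{}", &s[..4], "_", &s[5..]);

    assert_eq!(mutated, rebuilt); // both give "hell_ world"
}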
Optimization specifically for function calls, or... ?
Because if you're doing all this copy-on-write anyway, the indirection seems to be a needless cost in other contexts.
Things of course become a lot more fun with concurrency.
Now if you want a language where all the data thingies are immutable values, effects are somewhat tamed, types aren't too fancy, etc., try looking at Milner's classic Standard ML (late 1970s, effectively frozen in 1997). It has all you dream of and more.
In any case keep having fun and don't get too bogged down in syntax.
You can apply meaning to a particular shape of that tree so that it can be executed, but then you've basically just added another layer before you parse your AST into something executable.
> Standard ML [...] It has all you dream of and more
The main thing here that's missing in Standard ML (and most other functional languages) is the "mutable" part of "mutable value semantics" - i.e., the ability to modify variables in-place (even nested parts of complex structures) without affecting copies. This is different from "shadowing" a binding with a different value, since it works in loops etc.
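Rust gets at some of this with plain owned values, using an explicit clone where a mutable-value-semantics language would copy implicitly; a tiny sketch of the difference:

#[derive(Clone)]
struct Body { position: Vec<f64> }

fn main() {
    let original = Body { position: vec![0.0, 0.0] };
    let mut copy = original.clone();

    // In-place updates of a nested part, inside a loop, which
    // shadowing a binding with a new value can't express.
    for step in 1..=3 {
        copy.position[0] += step as f64;
    }

    assert_eq!(original.position[0], 0.0); // copies never affect each other
    assert_eq!(copy.position[0], 6.0);
}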
I've only read the first couple of paragraphs so far, but the idea reminds me of a shareware language I tinkered with years ago in my youth, though I never wrote anything of substance: Euphoria (though nowadays it looks like there's an OpenEuphoria). It had only two fundamental types: (1) the atom, a possibly floating-point number, and (2) the sequence, a list of zero or more atoms and sequences. Strings in particular are just sequences of codepoint atoms.
It had a notion of "types", which were functions that returned a boolean 1 only if given a valid value for the type being defined. I presume it used byte packing and copy-on-write or whatever for its speed boasts.
I've got a hobby language that combines this with compile-time code execution to get static typing - or I should say that's the plan; it's really just a tokenizer and half of a parser at the moment - I should get back to it.
The cool side effect of this is that properly validating dynamic values at runtime is just as ergonomic as casting: you just call the type function on the value.
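A rough sketch of that idea in Rust (a hypothetical Value type, just to illustrate "a type is a predicate you can also call at runtime"):

// Hypothetical dynamic value in the Euphoria style: atoms and sequences.
enum Value {
    Atom(f64),
    Sequence(Vec<Value>),
}

// A "type" is just a function that says whether a value belongs to it...
fn is_string(v: &Value) -> bool {
    match v {
        Value::Sequence(items) => items.iter().all(|i| matches!(i, Value::Atom(_))),
        _ => false,
    }
}

fn main() {
    // ...so runtime validation is just an ordinary call.
    let s = Value::Sequence(vec![Value::Atom(104.0), Value::Atom(105.0)]);
    assert!(is_string(&s));
    assert!(!is_string(&Value::Atom(1.0)));
}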
x = [1, [2]];
var y = x;
set y.[0] = 3; // clones the outer array, keeps a reference to inner array
set y.[1].[0] = 4; // clones the inner array here. Outer array is now exclusive so it doesn't need another clone.
var z = x;
set z.[1].[0] = 4; // clones both arrays at once
So basically everything is var?
There are two ways to define a variable binding:
x = 1; // declares x as immutable
var y = 2; // declares y as mutable
The "default" behaviour (if no keyword is used) is to define a new immutable variable.I intentionally didn't add this, mostly because I wanted to explore how far you can get without it (and keep the language simple). Having a "real" pointer as a first class type wouldn't work though, since it would break a lot of the assumptions I use for optimizations.
I did think about two different versions of this but didn't end up adding either:
- Something like `inout` parameters in Swift, which aren't first-class pointers. This is really just an alternate syntax for returning multiple values.
- A "ref" type, which is essentially a mutable container for an arbitrary value. Passing the container around would share a reference to the same mutable value. This still wouldn't allow modifying values "outside" of the container though. (Rough sketch of both below.)
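Rough Rust sketch of both ideas (nothing to do with what the Herd syntax would look like): an inout-style parameter desugars to an extra return value, and a "ref" is a shared mutable cell.

use std::cell::RefCell;
use std::rc::Rc;

// "inout"-style, desugared: the updated value is just another return value
// that the caller writes back.
fn push_and_len(mut items: Vec<i64>, x: i64) -> (Vec<i64>, usize) {
    items.push(x);
    let len = items.len();
    (items, len)
}

fn main() {
    let mut items = vec![1, 2];
    let (updated, len) = push_and_len(items, 3);
    items = updated; // what an inout parameter would do implicitly
    assert_eq!((items.len(), len), (3, 3));

    // "ref"-style: everyone holding the container sees the same mutable value,
    // but values outside the container stay unaffected.
    let shared = Rc::new(RefCell::new(0));
    let alias = Rc::clone(&shared);
    *alias.borrow_mut() += 1;
    assert_eq!(*shared.borrow(), 1);
}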
I've worked on systems where we spent more time reasoning about shared state than writing actual logic. The typical answer is "just make everything immutable" but then you lose convenient imperative syntax. This sits in an interesting middle ground.
Curious about performance in practice. Copy-on-write is great until you hit a hot path that triggers lots of copies. Have you benchmarked any real workloads?