The first few unit tests compile and link; just a few bit-representation errors left in the code. Weeded out some trivial coding errors, like: you shift by 8 bits or you multiply by 256, but you don't multiply by 8. The compiler doesn't handle the new FFI interface yet, so I am implementing that. A bit stuck on which interface boundary I should shift from handling primitive values to a series-of-bits representation. Before, or after, I compile down to lambda terms? Silly thing, actually: doing it before means less code but is more difficult to debug.
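An illustrative sketch (not the compiler's actual code, and `pack_byte` is a hypothetical name) of the class of bug meant above: shifting by 8 bits is the same as multiplying by 256, whereas multiplying by 8 only shifts by 3 bits.

```python
def pack_byte(high, low):
    # correct: make room for a full 8-bit value in the low byte
    return (high << 8) | low        # same as high * 256 + low

def pack_byte_wrong(high, low):
    # the buggy variant: multiplying by 8 only shifts by 3 bits,
    # so the high part overlaps the low byte
    return high * 8 + low
```

With `high=1, low=2` the correct version gives `258`, the buggy one `10`.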
Should put the conversion of primitive data to a series of integers in a separate module. Make sure I implement the reverse conversions too, and I am set.
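A hypothetical sketch of what such a module could look like: primitive values split into a series of small integers (here bytes), with the reverse conversion alongside. The names, the byte granularity, and the little-endian order are all assumptions for illustration, not the compiler's actual design.

```python
def int_to_bytes(n, width=8):
    # split a non-negative integer into `width` bytes,
    # least significant byte first
    return [(n >> (8 * i)) & 0xFF for i in range(width)]

def bytes_to_int(bs):
    # the reverse conversion: fold the bytes back into an integer
    n = 0
    for i, b in enumerate(bs):
        n |= b << (8 * i)
    return n
```

Having the reverse conversion in the same module makes round-trip tests trivial: `bytes_to_int(int_to_bytes(n)) == n`.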
I read a bit more on the G-machine, had forgotten half of it, and was a bit doubtful I wasn't actually implementing the same machinery. I am not; it is close, but different. I was right. Where the G-machine reduces term expressions on the graph representation of a combinator term, by traversing the graph and storing back links, I reduce directly on a stack representation, which should be faster.
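A minimal sketch of what reducing directly on a stack can look like, assuming SKI combinators and tuple-encoded applications; this is an illustration of the general technique, not the compiler's actual machinery. The spine of the term is unwound onto an argument stack, combinator rules fire against the stack top, and any leftover arguments are rewound at the end.

```python
def reduce_term(term):
    # terms are ('app', f, x) nodes or atoms; 'S', 'K', 'I' are combinators
    stack = []
    while True:
        # unwind the spine: push arguments until we hit a head atom
        while isinstance(term, tuple):
            stack.append(term[2])
            term = term[1]
        # apply combinator rules directly against the argument stack
        if term == 'I' and len(stack) >= 1:
            term = stack.pop()
        elif term == 'K' and len(stack) >= 2:
            term = stack.pop()
            stack.pop()                      # discard the second argument
        elif term == 'S' and len(stack) >= 3:
            f, g, x = stack.pop(), stack.pop(), stack.pop()
            term = ('app', ('app', f, x), ('app', g, x))
        else:
            break                            # no rule applies: stop
    # rewind remaining arguments back onto the term
    while stack:
        term = ('app', term, stack.pop())
    return term
```

For example, `S K K x` reduces to `x`, i.e. `S K K` behaves as the identity.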
Been thinking a bit further on the Dot model. I actually don't use it internally, but just refer to it as a mental model, and compile straight from 'combinatorized' lambda terms to code. It's hard to maintain the right invariants that way; I should rethink whether, at some point, it is right to just add an intermediate combinator layer. Unless I run into big debugging problems, I guess this is for later.