NOTHING HOLDS BY INDUCTION...
Friday, July 30, 2010
Log 073010
Well, you need a sense of humor writing a compiler, together with a lot of tenacity -man, five years- and a character which doesn't get upset by anything. Now the longs are handled correctly, but they result in pretty big negative numbers sometimes, and it fails on converting those to text. Dare I say: Aaaaaaaaargh?
It had an easy fix in one of the dynamic libraries, stage two now passes most of the trivial unit tests. It failed on the last one, the io tests. Weird, because it does io, probably an old binding? First, R&R, then I'll see again.
Of course, it does help when a file actually exists you're echoing characters from. It passed the simple runtime unit tests.
See what gives on linking with the system file. It passed one test, the compiler takes several minutes compiling 1.2KLoC. It's slow, though most of it is spent printing diagnostic output, a lot of diagnostic output. Guess what I have now is more a proof of concept than anything else.
Guess I am set for a bootstrap.
Ah well. Built a bootstrap build chain for a Hi release without ocaml dependencies. It failed to build the third file in the util directory. Hmm, it's the only file which includes more than one other file? (Back to R&R with C&C, for the moment. Debug later. It undumps both files which are imported, but doesn't import the second file... Clueless...)
Wrote a unit test, it passed it? Tracing on the original 'feature' almost filled my hard disk since it tracks and prints all combinator rewrites with tracing turned on. Piping all output to tail first, and waiting a few hours till the process is done to see why it bogs. (Figured it out while I slept and the log confirms it, just closing a file which doesn't exist while going over all possible locations of object files. Couldn't build a quick fix in C since the system file binds to fclose directly. So, recompiling...)
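For what it's worth, a guard of the kind that would have sidestepped it, had the system file not bound to fclose directly. A sketch with my own names, not the actual runtime code:

#include <stdio.h>

/* Hypothetical wrapper: only close handles that were actually opened.
   Probing all possible object file locations means fopen frequently
   returns NULL, and closing that NULL handle is what blew up. */
static int fclose_checked(FILE *f) {
    if (f == NULL) return 0;   /* nothing was opened, nothing to close */
    return fclose(f);
}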
Thursday, July 29, 2010
Log 072910
The output of stage one and stage two are exactly the same C files ... except for truncated longs in stage two. That's a messy bug.
It's a corner case somewhere I didn't feel like debugging today. Took some R&R. (Looked at it briefly before going to bed, turned out to be trivial, fixed it. Recompiling...)
Wednesday, July 28, 2010
FFI and Assembly
Looked at where Factor is heading with FFI. What I have now is that I construct a libffi type from a type descriptor passed to the FFI combinator each time a native function is called. I should keep the FFI combinator -its clean approach to FFI outweighs the performance benefits of specializing native calls to assembly directly- but I should get rid of libffi once, or at least for specialized runtimes, and generate the call directly from my own FFI type descriptor.
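Roughly the per-call cost in question, sketched in C with my own names and assuming a trivial long → long native function: a cif is prepared from the type descriptor and thrown away again, which is exactly the part that could be memoized per descriptor or replaced by generated assembly.

#include <ffi.h>

/* Sketch: build a libffi call interface from scratch for every call,
   much as the FFI combinator effectively does now. Caching the prepared
   cif per Hi type descriptor would remove most of this cost. */
static long call_long_to_long(void (*fn)(void), long arg) {
    ffi_cif cif;
    ffi_type *argtypes[1] = { &ffi_type_slong };
    void *argvals[1] = { &arg };
    long result = 0;

    if (ffi_prep_cif(&cif, FFI_DEFAULT_ABI, 1,
                     &ffi_type_slong, argtypes) == FFI_OK)
        ffi_call(&cif, fn, &result, argvals);
    return result;
}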
I should take a good look at what assembly I am generating at the moment and how I can clean it up such that my own optimizer and machine back-ends can be tossed in easily.
Spawn, Yield and WaitFor
Read this and some stuff of the emerging languages conference 2010. It seems everyone agrees that the way to handle parallelism is by viewing an algorithm as a DAG and to be able to spawn separate threads, defer evaluation in a thread while it waits for IO, and being able to join several threads together in different manners. The number of primitives are just not that big.
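To make the primitive count concrete, a minimal fork/join rendering in C with pthreads -spawn is pthread_create, waitfor is pthread_join; the toy task and the names are mine, not anything from the material referenced above:

#include <pthread.h>
#include <stdio.h>

/* Toy DAG: spawn two independent computations, then join on both. */
static void *square(void *p) {
    long n = (long)p;
    return (void *)(n * n);
}

int main(void) {
    pthread_t a, b;
    void *ra, *rb;

    pthread_create(&a, NULL, square, (void *)3L);  /* spawn */
    pthread_create(&b, NULL, square, (void *)4L);  /* spawn */
    pthread_join(a, &ra);                          /* waitfor */
    pthread_join(b, &rb);                          /* waitfor */
    printf("%ld\n", (long)ra + (long)rb);          /* joined result: 25 */
    return 0;
}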
The above model doesn't take into account persistency of communicating processes (unless a reference to the process is part of the return result). It's still fork and join.
Log 072810
Can't believe some of the mistakes I made; gave up on it for several hours since I had another bug problem, an inconvenient number of flies in the house. It had a problem comparing chars, fixed that. It has a problem converting longs to text, I think, which stops it from translating the system file to lambda terms. Wrote a unit test for that, see what's up. (There was no select test which handles "system.long" types as native values in stage one, patched that.)
Edibilities removed, 95% of bugs are dead. Recompiling...
Tuesday, July 27, 2010
Log 072710
It linked one small file and generated C for the first time. Tracing support works. There's still a strange bug with passing the compiler arguments, I wonder whether I'd need to do a strcpy on the argument list of main? Fixed the latter bug, I patched the overflow region now for good in the sources.
Replaced the generic list text comparison with a specialized version. (For a comparison of text, the generic version calls the text compare, which calls the list compare, which for each char calls the char compare, which in turn calls the int compare. That might have been overdoing it.)
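Schematically, the difference looks like this in C, with made-up names; the real code is combinators, not C:

#include <string.h>

/* Generic route: the list compare calls an element compare per cell,
   which for text bottoms out in a char compare and then an int compare. */
static int list_compare(const char *s, const char *t,
                        int (*cmp)(char, char)) {
    while (*s && *t) {
        int c = cmp(*s++, *t++);
        if (c != 0) return c;
    }
    return (*s != 0) - (*t != 0);
}

/* Specialized route: compare the two texts directly. */
static int text_compare(const char *s, const char *t) {
    return strcmp(s, t);
}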
Recompiling....
Monday, July 26, 2010
Log 072610
It serializes and unserializes the system file without runtime errors. There may be other errors I am unaware of, but for the moment it seems to work. Next debug step: Linking.
Tried to look at the document generator, didn't feel much like working on it. There's still a strange bug in the compiler that if I try to invoke it from a certain directory depth, it gives a free/runtime error. No idea.
It didn't link because of a trivial unserialization bug. Fixed that, recompiling. Also fixed a bug in the trace introduction. Another long wait.
Sunday, July 25, 2010
Saturday, July 24, 2010
Log 072410
Looked briefly at a recompile. It unserializes small files, not the system file. It doesn't print the unserialized AST after unserialization. Fixed the latter, recompiling.
Typing Doodles
-under construction, this is my scratch pad-
Let's see if I can get this higher-ranked typing stuff right. The basis of my semantic/type checker is a sequent calculus reasoner which just tries to solve whatever is thrown at it. What is thrown at the prover are the type assumptions and obligations derived from a Hi AST; since Hi definitions are all typed, there is no (real) need for generalization of derived types.
The basis of reasoning is unification, the basis of higher-order logic reasoning is instantiation/specialization and skolemization. A nice manner of abbreviating propositional (higher-order) reasoning is with a sequent calculus.
Sequent Calculi for Typing
Let's first look at sequent calculi. In logics, a sequent is standardly defined as a conjunction of antecedents implying a disjunction of consequents. Since, in the context of typing for programming languages, we're normally not interested in proving a disjunction of consequents, a sequent in my prover is a conjunction of antecedents implying a conjunction of consequents.
So, formally, where I mean
γ0 ∧ γ1 ∧ … ∧ γn ⇒ δ0 ∧ δ1 ∧ … ∧ δm
I am tempted to write,
γ0, γ1, … γn ⊢ δ0, δ1, … δm
but, since that is non-standard, will just write the conjunctions explicitly,
γ0∧ γ1∧ … ∧ γn ⊢ δ0∧ δ1∧ … ∧ δm
and abbreviate lots of shiny small stuff -look mum, it's math- with shiny big stuff such as
γ ∧ Γ ⊢ δ ∧ Δ.
To give it a calculational flavor, my prover just discharges consequents one-by-one head-first.
Now we have trivialities like
-------[ Done ]
Γ ⊢
Γ ⊢ δ0    Γ ∧ δ0 ⊢ Δ
--------------------------------- [ Conj ]
Γ ⊢ δ0 ∧ Δ
and
Γ ⊢ Δ    δ0 ∈ Γ
------------------ [ Elim ]
Γ ⊢ δ0 ∧ Δ
The Done rule states that an empty consequent is trivially true. The Conj rule states that a proven consequent may be used as an antecedent. The Elim rule states that a consequent is trivially true if it also appears as an antecedent.
A typing algorithm terminates if it deterministically derives a finite tree proof. A set of typing rules is deterministic if there's only one, possibly prioritized, manner of applying them.
The actual rationale for these sequent calculi is that a) they trivialize proofs of equivalences of infinite/recursive types, more about that later, b) they `linearize' proofs in the sense that you can implement it as a function on two series of type terms, c) the result of proving is a collection of bound schematic variables which can be used later to decorate the AST.
The prover is driven by the unification process. For that we actually have a large number of different equalities such as '= kind,' '= type,' '= spec,' and '= skol.' The kind and type equalities are abbreviated. (This distinction is maybe not necessary, but convenient. I only want to unify on types where the scheme is removed, and I don't want types with their schemes to be substituted into other simple types. Moreover, I want to be able to specialize the same occurrence of a type scheme in, say, a list to multiple instantiations to make many-valued lists possible.)
Kind Checking
During kind checking, a set of rules, formulated below, is used such that a series of postulated equations is discharged, and after proving the antecedent will consist of a series of kind equalities in which each kind schematic variable receives a value.
Below, I'll write ?κi for a kind-schematic variable, and Κi for a whole kind term. Kind terms are the usual mix of stars * and arrows →. Kind checking is a trivial exercise with the following rules.
Γ ⊢ Δ
----------------[ Eq ]
Γ ⊢ Κ=Κ ∧ Δ
Γ ∧ ?κ0=Κ ⊢ Δ[?κ0/Κ]    ?κ0∉Κ
------------------------------------- [ Elim ]
Γ ⊢ ?κ0=Κ ∧ Δ
Γ ⊢ ?κ0=Κ ∧ Δ
-----------------[ Swap ]
Γ ⊢ Κ=?κ0 ∧ Δ
Γ ⊢ Κ0=Κ2 ∧ Κ1=Κ3 ∧ Δ
--------------------------------[ Arrow ]
Γ ⊢ Κ0→Κ1 = Κ2→Κ3 ∧ Δ
The Eq rule removes simple equalities from the consequent. The Elim rule removes a schematic variable from the consequent, checks that it does not give rise to an infinite expansion, moves the equality to the antecedent, and eliminates the variable from the consequent by substitution. The Swap rule just swaps a schematic variable to the left-hand side. The Arrow rule structurally decomposes a proof obligation where two terms are complex into two new proof obligations.
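As a small worked example of my own, a constraint like ?κ0→* = *→?κ1 discharges head-first as follows, after which the antecedent maps both schematic variables to *:
Γ ⊢ ?κ0→* = *→?κ1
Γ ⊢ ?κ0=* ∧ *=?κ1 [ Arrow ]
Γ ∧ ?κ0=* ⊢ *=?κ1 [ Elim ]
Γ ∧ ?κ0=* ⊢ ?κ1=* [ Swap ]
Γ ∧ ?κ0=* ∧ ?κ1=* ⊢ [ Elim ]
done [ Done ]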
The rules above are a bit contrived. First, they are much more easily defined directly in a type logic and then given an operational meaning. Second, the careful reader will notice that a proof sometimes only exists when a series of consequents is written in a certain order.
What the rules do give are a direct representation of the Hi kind checker, where a series of kind equalities are offered to a prover, the prover discharges each proof obligation, and a list of simple equalities -a mapping from schematic variables to terms- is returned.
Kind equalities are derived from terms in a bottom-up fashion, care is taken that they are linearized in the right order during that pass.
Type Checking
Now, the real fun starts. Type checking higher-ranked types is a game where unification is done by equational reasoning and instantiating/specializing, skolemizing and -possibly- generalizing type terms by employing type constants, type constructors, type schematic variables, and type skolem constants in a type sequent calculus.
Let's assume higher-ranked type terms with type constants and type constructors (written as lowercase Latin c), function abstractions (written with arrows →), universally quantified types (∀), existentially quantified types (∃), and type predicates denoting interfaces (written as capitals). Without too much fuss, I'll go straight to an example and just leave the formal grammar for what it is. The type "∀τ ⇒ ::I τ ⇒ τ → list (∃σ ⇒ ::I σ ⇒ σ)" describes a function which takes an element of type τ which implements interface I and returns a list of some unknown type σ which also implements interface I.
For the calculus, we want to unify, so some extra notation is necessary. Type variables are written in (Greek) lowercase, type skolem constants are preceded with an exclamation mark (!), type schematic variables are preceded with a question mark (?).
(Note: For a normal ranked system (Damas-Milner) where quantifiers/type-schemes only appear outermost in type terms, it is easy to derive and implement a straightforward system by just specializing, or skolemizing, types at the defining and applying occurrences in a syntax tree. The prover just needs to deal with simple type equalities and complex type equalities of form τ = skol(T) or τ = spec(T). Generalization is an elaborate procedure which also needs to capture postulated predicates in the antecedent. The current prover works that way.)
The game for a new higher-ranked Hi type system is to arrive at a set of typing rules such that: a) it is unnecessary to generalize, and b) skolemization and specialization are defined such that higher-ranked types can be dealt with trivially.
Let's kick off with some simple stuff; T and S are for possibly quantified type terms, τ and σ for simple type terms.
Γ ⊢ Δ
---------------- [ Eq ]
Γ ⊢ T=T ∧ Δ
Γ ∧ ?τ=σ ⊢ Δ[?τ/σ]
----------------------- [ Elim ]
Γ ⊢ ?τ=σ ∧ Δ
Γ ∧ ?τ=σ ⊢ Δ[?τ/σ]
----------------------- [ Swap ]
Γ ⊢ σ=?τ ∧ Δ
The above three rules discharge identities (Eq) and eliminate schematic variables (Elim and Swap).
To type check higher-ranked types, one normally needs rules for specialization, often called instantiation, skolemization, and -possibly- generalization. In a specialized term, universal quantifiers are eliminated and the bound variables are replaced with type schematic variables, or unknowns. Similarly, skolemization replaces universally bound variables with Skolem constants. Generalization is the reverse of specialization: type schematic variables (the non-monomorphic ones) are replaced with universal quantifiers.
The role of specialization is, as in higher-order logic proofs, to defer reasoning: instead of supplying an actual witness, a witness is constructed ad hoc by unifying on what can be inferred from the given proof obligation.
Let's see if I can just get away with specializing terms in small steps, and make specialization and skolemization not separate functions but part of the calculus.
Γ ⊢?σ = spec (T[τ/?τ]) ∧ Δ
---------------------------------- [ Spec-∀ ]
Γ ⊢ ?σ = spec (∀τ ⇒ T) ∧ Δ
Γ ⊢?σ = skol (T[τ/!τ]) ∧ Δ
---------------------------------- [ Skol-∀ ]
Γ ⊢ ?σ = skol (∀τ ⇒ T) ∧ Δ
The above two rules, Spec-∀ and Skol-∀, specialize and skolemize universally bound type terms by replacing a bound variable with schematic variables and skolem constants respectively. In the rules, types remain tagged as specialized or skolemized, such that proving is driven solely by unification.
A minor drawback is that trivial rules need to be written twice, such as the ones below.
Γ ⊢?σ = spec (T) ∧ Δ ∧ ::I ?τ
---------------------------------- [ Pred-Spec-? ]
Γ ⊢ ?σ = spec (::I ?τ ⇒ T) ∧ Δ
Γ ⊢?σ = skol (T) ∧ Δ ∧ ::I ?τ
---------------------------------- [ Pred-Skol-? ]
Γ ⊢ ?σ = skol (::I ?τ ⇒ T) ∧ Δ
The two rules above state that type predicates (interfaces) to be proven on schematized variables are proven after the schematized variables are fully unified.
Γ∧ ::I !τ ⊢?σ = spec (T) ∧ Δ
---------------------------------- [ Pred-Spec-! ]
Γ ⊢ ?σ = spec (::I !τ ⇒ T) ∧ Δ
Γ∧ ::I !τ ⊢?σ = skol (T) ∧ Δ
---------------------------------- [ Pred-Skol-! ]
Γ ⊢ ?σ = skol (::I !τ ⇒ T) ∧ Δ
Similarly, the two rules above state that type predicates on skolemized variables may be assumed.
Γ ⊢ ?σ = ?σ0→?σ1 ∧ ?σ0 = spec T0 ∧ ?σ1 = spec T1 ∧ Δ
------------------------------------------------- [ Arrow ]
Γ ⊢ ?σ = spec T0→T1 ∧ Δ
Γ ⊢ ?σ = c ?σ0…?σi ∧ ?σ0 = spec T0 ∧ … ∧ ?σi = spec Ti ∧ Δ
----------------------------------------------- [ Constructor ]
Γ ⊢ ?σ = spec (c (T0…Ti)) ∧ Δ
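For illustration, a small derivation of my own: specializing ∀τ ⇒ ::I τ ⇒ τ → τ unwinds under the rules above as
Γ ⊢ ?σ = spec (∀τ ⇒ ::I τ ⇒ τ → τ) ∧ Δ
Γ ⊢ ?σ = spec (::I ?τ ⇒ ?τ → ?τ) ∧ Δ [ Spec-∀ ]
Γ ⊢ ?σ = spec (?τ → ?τ) ∧ Δ ∧ ::I ?τ [ Pred-Spec-? ]
Γ ⊢ ?σ = ?σ0→?σ1 ∧ ?σ0 = spec ?τ ∧ ?σ1 = spec ?τ ∧ Δ ∧ ::I ?τ [ Arrow ]
where the remaining spec equalities on simple type terms should reduce to plain equalities -that base case isn't written down yet- and the obligation ::I ?τ is checked once ?τ is fully unified.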
To be continued later...
Friday, July 23, 2010
Arity... Feature?
The last bug I had was another arity mishap. Suppose you have a definition f = (h = g f; λx → h x), then that is equivalent, in my setting, to f = (λh x → h x)(g f). Since arguments are reduced to normal form, it will reduce f ad nauseam since, believe it or not, it has arity zero in my setting.
Is it a bug, or a feature? Since it has an easy fix, I am inclined to call it a feature.
Log 072310
I tried to pick up a DVD storage box, was hindered by some festivities, got my groceries though, fired up my compiler in a silent park, and, ....
Wait, this is a language blog, right?
It serializes.
Native and External
Compile times are long. Writing a Swig module gave some insight into what I should have done. I should change the FFI extension and replace it with a keyword. -Though the keyword would just introduce syntactic sugar for what I have now, not sure.- At the same time, I should make it possible to build dynamic libraries from Hi modules.
It's for later. I am still an annoying serialization bug and an unserialization step away from a bootstrap. I scanned some files to see if I could gather where the text/AST replacement stems from, clueless. If all fails, I'm forced to try to unserialize all intermediate ASTs to catch the bug, or find the bug in the type checker...
Thursday, July 22, 2010
Wednesday, July 21, 2010
Swig
Looked at language support as implemented in Swig. It looks doable, around 1-2KLoc for a language extension in pretty readable code. Guess I'll take the ocaml examples and just let the compiler do its business.
(Need to take care of the amount of eggs in the basket though. Bootstrap, documentation generation, Swig, new typing, optimizations. The last two will need to wait until the bootstrap.)
Log 072110
Don't feel too much like debugging anymore. It doesn't make sense to assume a logical error in the compiler anymore (the combinators compiled look entirely standard). So, I must have missed something in the code. Recompiling again with an extra check.
There seems to be a lot of room for optimization of the code, given the few examples I worked through. I eta-expand a lot of expressions which can be dumbed down. Should look at whether I still run the partial evaluator, might have switched it off, should see how often and where it is run. Cool, I hope I am gonna take about 50% off of the code size with some optimizations.
(Looks like I am serializing a piece of text where it expects an AST. It's the permissiveness of stage 1. So, either fix that once or just accept it and debug stuff by hand. Found the bugs, need to print, again.)
(I removed some bugs, but the AST is very probably garbled. I.e., it finds texts where it expects AST nodes and cannot serialize those. I never notice, since it occurs only at positions -like a type node- where I expect texts anyway. Not sure.)
More Crayons
Combinator representation of the last post. Green = exception handler. Red = file. Purple = arg#0 of the record, the position. Yellow = arg#1 of the record, the type name. Blue = arg#2 of the record, the type abstracted.
No bugs yet.
Crayons
No bugs yet.
Swig... or XML?
I read up on Swig since I am waiting for another great opportunity to fix a bug and want bindings to OpenGL, OpenAL, etc. Seems language extensions to Swig are written in C++. Not sure that is a show stopper. I don't need it that much, all libraries can be called directly through libffi. I just want something which translates a C header file to a Hi file, i.e., just some renaming of types to their basic types, enums to ints, and most other things to pointers would be ok. Not too sure how much of a moving target Swig is. I think it can also output an XML-ized version of a C/C++ header file, maybe it would be better to build a translator for that, and just leave Swig for what it is?
[ Consistently dropping more subjects from sentences. Wonder if that is how Chinese evolved? Maybe, it is just an older, more evolved, language? Or the opposite, and I am just dumbing down ;) ]
Rss Swap
Added Don Syme, removed the Parasail blog. Basically, since I don't like texts written like this. Come on: Italics is for emphasis only, bold is for structuring. Makes my eyes cry just to read it.
Tuesday, July 20, 2010
To Each His Own
I just watched an old F# presentation by Luca Bolognese. A few things were striking. The ML-ish nature of F#, the integration with the DOTNET platform, the use of |> (just a reverse apply operator), their modules which have a direct OO representation, their concurrency model, and a small comment.
F# looks gorgeous. The integration is very tight, I went another direction, but -true- it stings. Having said that, Hi interfaces with C exactly since everyone and their dog is either targeting the CLI or JVM, so, my own choice. The IDE and compiler integration is very interesting though.
As far as the ML-ish nature of F# goes, hell, couldn't they have dropped that awful syntax by now? Haskell, Clean, and a plethora of other languages just, well, do it better.
The pipe operator '|>', a reverse apply, is a nice trick and idiom Luca used a lot. I thought of it myself too, didn't see it used a lot like that before, but don't think it'll go very far. Whereas I usually write a series of explicit assignments/lets, he doesn't, but he needs to 'jump' between lets and pipes a lot. Better to just stick with a simple assignment.
My interfaces just work on data types, which is a lot weaker than modules. Thing is, since OO objects are best understood as reinstantiatable modules thereby solving a state problem, and I don't have explicit state, guess I don't need them. The integration with OO is cool though.
Their concurrency model is elaborate. It defers evaluation by wrapping functions into asynchronous values correspondingly typed and, I guess, defers evaluation until a block is run to completion. I hope, not sure on this, that this also has to do with the somewhat impure nature of their language. Need to figure that one out, but I made a decision. Hi will be pure, except for side effects. I am not a great believer in pure code, but I don't want to work on the language that much more, and it just has some benefits I don't want to throw away.
Small things. It dawned on me that foreach is just more readable to most programmers than map. So, I'll add one silly thing to the list library, an 'each' function, which is -again- just map with its arguments reversed.
[ I think F# 2.0 added units of measure, which is a good thing in calculation. See what I'll need to do to support that. Are they just 'tags' for literal types? Hmm, feature creep. Dropped the idea, it is still cool but I am not sure what you pay. ]
Log 072010
Recompiling again, and flushing the output of the compiler to a file to see whether I can track where that serialization bug comes from. Somehow, I hope it automagically disappears, but I am afraid it won't.
Odd bug. It lexes, parses, identifies, type checks, pretty prints, everything. But, suddenly, in the last minute, I have an operational error in the exact last module I wrote?
(I added yet an option more to the compiler to be able to unserialize and pretty print an AST. The bug persisted after the recompile. Going through the log to see in what stage it was introduced.)
Monday, July 19, 2010
Log 071910
It recompiled. It serialized a small file from the runtime tests but failed on the system file (which has roughly all constructs), it failed to unserialize the small test. Going to reread the serialization code, again...
(Looked at it, again. The first parameter of a type abstraction is serialized twice, no idea why. Looks like I got another bug in the translation, again. Not amused.)
Sunday, July 18, 2010
Two Performance Tests
Just two performance tests. I thought it would be nice just to show off with some source code. No difficult type-class/interface stuff here, just performance tests I made.
// test ackermann 3 n
import "system.hi"
using system
def ack: int -> int -> int =
[ 0, n -> n + 1
| m, 0 -> ack (m - 1) 1
| m, n -> ack (m - 1) (ack m (n - 1)) ]
def main: unit =
c = argc nop;
n = text_to_int (argv (c-1));
_ = print (ack 3 n);
nop
// invert nucleotides from a file in Fasta format
import "system.hi"
using system
def complement: char -> char =
[ 'A' -> 'T' | 'a' -> 'T'
| 'C' -> 'G' | 'c' -> 'G'
| 'G' -> 'C' | 'g' -> 'C'
| 'T' -> 'A' | 't' -> 'A'
| 'U' -> 'A' | 'u' -> 'A'
| 'M' -> 'K' | 'm' -> 'K'
| 'R' -> 'Y' | 'r' -> 'Y'
| 'W' -> 'W' | 'w' -> 'W'
| 'S' -> 'S' | 's' -> 'S'
| 'Y' -> 'R' | 'y' -> 'R'
| 'K' -> 'M' | 'k' -> 'M'
| 'V' -> 'B' | 'v' -> 'B'
| 'H' -> 'D' | 'h' -> 'D'
| 'D' -> 'H' | 'd' -> 'H'
| 'B' -> 'V' | 'b' -> 'V'
| 'N' -> 'N' | 'n' -> 'N'
| c -> c ]
def complement_code: int -> int =
[ n ->
c = complement (char_ascii_char n);
char_ascii_code c ]
def ascii_n : int = char_ascii_code '\n'
def ascii_s : int = char_ascii_code ';'
def ascii_l : int = char_ascii_code '<'
def copy_line: file -> file -> unit =
[ i, o ->
n = fgetc i;
if n < 0 then nop
else if n == ascii_n then
_ = fputc o n;
process_lines i o
else
_ = fputc o n;
copy_line i o ]
def reverse_line: file -> file -> unit =
[ i, o ->
n = fgetc i;
if n < 0 then nop
else if n == ascii_n then
_ = fputc o n;
process_lines i o
else
n = complement_code n;
_ = fputc o n;
reverse_line i o ]
def process_lines: file -> file -> unit =
[ i, o ->
n = flookc i;
if n < 0 then nop
else if n == ascii_s then
copy_line i o
else if n == ascii_l then
copy_line i o
else reverse_line i o ]
def main: unit =
i = get_stdin nop;
o = get_stdout nop;
process_lines i o
With full optimization, the first program, ackermann 3 8, runs in somewhat more than eight seconds on a 2.26GHz Macbook Pro (not sure when the 2.26 kicks in). Anyway, that is slow, it's Python speed. I blogged on it before, most of the time is spent on resolving dynamic symbols, calling C, resolving overloading, and subsequently, garbage collecting the heap from all the thunks inserted to do all that work. I think, with some optimization, I could get it down to 1-3 seconds, which would be fine by me. (Most work in my compiler is just plain graph rewriting, which is the one thing not tested by most performance tests.)
The second, reversal of nucleotides, I didn't really test yet on real input data. Just some short files made by me to check if it works. No performance comparisons yet. (I tested the "it can handle infinite recursion" feature by piping an infinite stream of data, "ACTG" strings, to it, it performs ok. See if I can get my hands on some real test-data.)
Doodling
In search of a new syntax-driven type system for Hi... Now it can't be that difficult, can it?
Log 071810
It compiled again. The escape bug is mostly gone (it still appears in the output sometimes, I need to recompile the pretty-printer), but it recognizes fine. See what's up with the rest. I might skip all semantic checks -Hi source code doesn't need type information and compiles fine without it- just to get to a bootstrap.
It halts again after the semantic analysis and before emitting/serializing the AST. Not sure what's up with that. Checked it on a small file, the heap grows to 1 GB, pretty sure it just goes into some infinite loop. (Traced it, confirmed. It's an arity bug, it derived somewhere that something of arity 2 has arity 1, so it infinitely expands something which should have given a closure constant.)
It's a semantic bug in the compiler which should have been caught by the type checker.
(I patched the runtime bug. At least, I hope so. Recompiling again. The good news is that the type checker needed an overhaul anyway, there don't seem to be any big bugs, so self-compilation should be feasible. All in all, getting there.)
Saturday, July 17, 2010
Log 071710
Another nice date.
I needed to increment the spillover region (the free space needed in between combinator rewrites) to 32kB, which was minor. With, I guess, an average of five cells, each eight bytes, you get forty bytes for each average allocated construct, which means eight hundred or so constructs can now be allocated. Not sure I am just fixing a heap growth bug.
Still stuck with some kind of escape bug, which I am clueless where it comes from. I, again, patched some C code by hand.
Patched, it does the full semantic analysis of the system file. Some stuff is really fast but the prover is way too slow to be usable. Even when kind checking, unifying small constraints like v0 -> # = * -> v1, it proceeds with about one rule each 300ms or something. It fails somewhere after the type checker.
Type checking is faster than kind checking??? I guess I pass too many constraints during kindification?
It fails during the AST emitting phase with a corrupted heap...
I extended the tests on escaped pieces of texts. I now print an AST node before serializing to track where it goes wrong. For the rest, just waiting for slow compiles. I should use my printer again.
(God, I guess I figured out the escape bug. It doesn't make sense to patch a compiler, and forget to recompile relevant individual modules before linking...)
Another long wait, implemented a new performance test from the computer languages shootout.
Friday, July 16, 2010
Escapes
Fixed a bug to introduce a new one. The lexer fails on '\t =>,' because, well, that is a tab, right? Somewhere, there is an escape/unescape statement too much...
Source code looks ok. Problem with these kinds of bugs is that they are kind-of viral, you never know if you're fixing a bug of a previous compiler.
Which it was, fixed the same bug in stage 1. Rerunning the unit tests. I need unit tests for handling escapes.
Log 071610
Med is failing, did some light work on prepping the math library while waiting for a recompile. Introduced most of IEEE 754 by binding to math.h. Read the (un-)serialization code on the AST, fixed a missing line. Working on the documentation generator again.
Rebuilding the compiler(s). I have five laptops, should have put the memory in a different one. Or buy a desktop, once?
Thursday, July 15, 2010
Stuff to Do
With a long wait, I am more blogging than anything else. Let's see:
- Bootstrap v0.1 (Not even wrong)
- Release?
- Overhaul:
- Support for longs, floats, doubles, etc.
- Check the utilities directory. Move from trivially size-balanced maps to red-black trees?
- Check the parser. Sometimes it reports the wrong location.
- Simplify the type-checker. Throw out generalization, toss in higher-ranked types.
- Clean the lambda module.
- Create an optimizer for combinators.
- Have a good look at intermediate assembly. More type-info, generate i386 code, track pointers vs integers?
- Clean the runtime C files. Add memoization for FFI.
- Build a comprehensive set of bindings to C.
- Work on the document generator.
- Release? (It may be right)
Log 071510
Took a few days off to encounter the weirdest of all bugs. I go down a directory, try the help option, it borgs; go one up, everything is fine. Still a memory error, I guess.
Got mostly rid of the stuttering. It identifies the system file, not very happy about the speed, which probably means my size-hardly-balanced maps are not very fast, trying to get it to compile the system file. Great, it catches small semantic bugs.
Runs like a charm at the moment, instead of debugging the output, I am debugging the input.
[ It crashed on outputting the compiled system file, which makes sense, since its own system file has bugs. Regenerating the bootstrap. ]
Tuesday, July 13, 2010
Monday, July 12, 2010
Log 071210
Didn't do a lot, just a recompile so I can test each individual stage better. On first glance: Two errors in the lexical analysis it seems. It doesn't seem to match on a double quote, it doesn't seem to handle the exception thrown right, not sure.
Went for a bike-ride, got my cigarettes at 1 a.m.; it's a 10 mile drive or something. Anyway, it refreshed the head. I found and patched the quotation bug in the generated C by hand, no idea where it came from, it shouldn't have accepted stage 1 with that bug. It lexifies and parses small files, it lexifies and parses the system file. But, I think I see stuttering behavior in the garbage collector. I.e., it probably stutters against the gc-limit; so, I need another simple strategy for doubling the heap.
Except for the stuttering, it is fast. Looks like it parses faster than stage 0, the ocaml parser.
[Not sure about the latter. Fixed the GC-grow trigger to 75% of the heap, it passes the heap unit tests, but the top fails because posix_memalign gives up? Silliness...]
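A sketch of the kind of grow heuristic meant here, with my own names rather than the actual runtime code:

#include <stddef.h>

/* Hypothetical grow trigger: if more than 75% of the heap is still in
   use after a collection, double it, instead of collecting over and
   over again just under the limit (the stuttering described above). */
typedef struct { size_t size; size_t used; } heap_t;

static void maybe_grow(heap_t *h) {
    if (h->used > h->size / 4 * 3) {
        h->size *= 2;
        /* reallocate or remap the heap to the new size here */
    }
}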
Highs and Lows
Sometimes I do difficult stuff, sometimes I don't. Got a bit bored with it so I fixed some stuff to make it somewhat easier to debug. Thinking about typing and combinators again.
One of the things I want is that a list of type "list (?t => ::I t => t)," a list made out of arbitrary elements implementing interface I is accepted by "!t => list t". I should try some type derivations to see if that works, assuming naive semantics for a HM style checker.
Thing is, I don't care too much if all most general types for functions are automatically derivable; it is a property lost in most higher-ranked systems anyway. Though, it is a nice property to have, since it hints that your type system is complete. I should see how far I can trivialize the type checker by not having it generalize local definitions, and just check the instantiation constraints for applications. Just to see if that makes it possible to get away with higher-ranked types trivially. [1]
Not sure yet, didn't write anything down, didn't look at code, but my gut feeling is that I just need a more fine-grained type checker, where types are not instantiated in one rewrite, but in a series of rewrites where deemed necessary during the unification process.
Next thing, I have been looking at the code for the document generator. It is overly complex, and I can think of several manners to reformulate it, none of them very good. It boils down to a choice between functional style, combinatorial style, even monadic style, and whether, in the end, you want to treat the generation of data as the generation of a stream or a tree. Best would be if, on the specification side, you would be oblivious to the underlying representation, but it always seems to boil up to the surface.
[1] That might actually be a neat trick. Generalization is a cumbersome computationally intensive procedure, having no generalization may just do it.
Sunday, July 11, 2010
Log 071110
It compiled. It gives a usage prompt if ran without arguments, everything after that, it segfaults, so, see what's up and back to unit tests I guess.
Cool, the spill-over region of the heap just was too small. The top compiles cleanly, next test, lexical analysis.
Saturday, July 10, 2010
Log 071010
Stage 0 compiles stage 1 to ML, stage 1 compiles stage 2 to C. Stage 1 and stage 2 share largely the same codebase, but it seems stage 1 is slightly more permissive than stage 0. There was a bug in the AST serialization code of stage 2 which was finally caught by C. Fixed it, should print and verify that code by hand. Fixed the bug and fired up the compile of stage 2 again.
This, on the other hand, I don't like. Now, I expected some bugs, cool. But the bug was a missing underscore, leaving two identifiers. The way the compiler works, one identifier referred to a type, one to a method. That means two bugs.
1. I have flat namespaces and allow all kinds of names everywhere, there is no disambiguation at the syntactic level of constructors or functions. (Like Ocaml and Haskell where constructors start with an uppercase letter.) I deal with that with one separate pass after identification which just checks all identifiers to see if they are bound legally. It missed this case.
2. The missing underscore left two identifiers which, treated as names, should have given a typing error.
I better make sure I am not working with an old compiler in which I turned checks off for speed.
(Great bug, concerning 2. It probably behaved correctly, a type name maps to its definition, a method to its type. The silliness...)
Friday, July 9, 2010
Hmmm....???
I looked at what optimizations to do in what order, it seems relatively straightforward:
- Inlining of simple functions. Inlining will just mean fewer thunks/stack frames are created on the heap, fewer calls are made, and less garbage collection is needed. Should give better than linear speedup.
- Generational garbage collection. A lot of 'calls' are done once, just keeping them around for too long means hogging the memory.
- Memoization of libffi calls. For each call, the dynamic library symbol is looked up, an FFI cif is created from an FFI Hi type descriptor. Both are constant lookups/creations in the end and can be memoized.
Think that's about it. I'll never get to GHC or Ocaml speed without native ints and stack allocations, but it should perform fine.
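Roughly what I mean by that last item, in an OCaml sketch rather than the actual C runtime; resolve_symbol and make_cif are made-up stand-ins for the expensive primitives:

    (* Sketch: memoize the expensive parts of a libffi call, the dynamic
       symbol lookup and the construction of the call interface (cif) from
       a type descriptor. Both depend only on their keys, so cache them.
       The abstract types and the two 'expensive' functions are placeholders. *)
    type symbol                      (* a resolved dynamic-library symbol *)
    type cif                         (* a libffi call interface descriptor *)
    type hi_type = string            (* stand-in for an FFI Hi type descriptor *)

    let resolve_symbol : string -> string -> symbol = fun _lib _name -> assert false
    let make_cif : hi_type -> cif = fun _ -> assert false

    let symbol_cache : (string * string, symbol) Hashtbl.t = Hashtbl.create 64
    let cif_cache : (hi_type, cif) Hashtbl.t = Hashtbl.create 64

    (* memoized versions: resolve/create once, reuse afterwards *)
    let memo_symbol lib name =
      match Hashtbl.find_opt symbol_cache (lib, name) with
      | Some s -> s
      | None -> let s = resolve_symbol lib name in Hashtbl.add symbol_cache (lib, name) s; s

    let memo_cif ty =
      match Hashtbl.find_opt cif_cache ty with
      | Some c -> c
      | None -> let c = make_cif ty in Hashtbl.add cif_cache ty c; c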
Log 070910
My last compile of stage2 ended with a stack overflow while printing a generated pseudo-assembly routine. That usually means it couldn't handle some hefty concatenation of strings. Removed a print statement, fired up the compile again. Good news is it probably will run to completion this time within 10 hrs.
Just for Fun
Looked at the new language Rust. I couldn't really make sense of it yet - why this language, what do they find missing in other languages? Thought it would be nice to compile a test of Ackermann too.
Oops, almost hogged my disk since tracing was turned on while running Ackermann on input 4 1.
Ran Ackermann 3 8, it ain't as fast as I hoped, around 14 secs. Of course, in comparison to other languages, the runtime is oblivious to the actual types of integers, all primitive arithmetic calls are dynamic and through libffi -no memoization yet,- addition is overloaded so a method is selected dynamically, I don't have generational garbage collection, I don't inline simple functions, and I couldn't get it to compile cleanly through GCC with all optimizations turned on [1]. Basically, I am not measuring integer arithmetic, but the speed of garbage collecting, selecting methods, and making dynamic calls [2].
Not unhappy though. Approximately every optimization I could do would take off 25-50% of running time. So, I expect this benchmark to end somewhere around Lua speed, the bottleneck then being the lack of native integer types and libffi.
[1] GCC solved, -fno-strict-aliasing did it. Thanks to Cedric Roux of GCC-help, down to 8 secs.
[2] Profiling with -O3 now gives 35% garbage collection, 35% libffi, 30% calculation.
Thursday, July 8, 2010
Need Holidays
Despite psychoses, no job and all, I have still been trying to pull 18hr work days, 7 days a week, for the last three years. That is, if you call coding work.
Need a vacation.
Log 070810
Four hundred and somewhat Euros and several tens of billions of transistors later, my MacBook Pro gloats happily with 8 GB of memory. Fired up the compile of stage 2, again.
I really should get rid of some redundant code; some code I just wrote to experiment with an idea, and see if it would go somewhere, some of it didn't, some of it works, but I just don't feel right about it. (Yeah, this is no nice academic development model, but, after having read through a number of compilers -like Haskell, Clean, Helium, ML,- I can't say they do much better. It's just toss in an idea, see what sticks everywhere.)
A Blueprint for Combinator Libraries
As I stated before, the main point of the Hi language should be combinatorial thinking. Whether that is a viable approach, no-one knows. You'll need a pointy hat to program in it, I suppose, but the same holds true for Matlab, so I guess there's a market.
If you write stuff down you start answering your own questions, and the design of combinator libraries shouldn't be too hard. Below, six questions I just patched up which need to be answered if you want to design a library in that manner:
- What is the state being threaded?
- What are the basic combinators you need?
- What are the arguments needed by these combinators?
- What are the return values?
- How are combinators supposed to be combined?
- How can you make it perform?
For a parser library, the answers would be: 1. The collection of tokens unprocessed. 2. Parsing a word. 3. Text to be identified. 4. On success, an arbitrary (AST) value, or just failure. 5. Sequence and choice. 6. The collection of tokens could be an indexed piece of memory.
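To make that concrete for myself, a minimal OCaml sketch along those lines; the names and types are made up here and have nothing to do with the actual Hi library:

    (* state = the unprocessed tokens, basic combinator = parsing one word,
       result = a value plus the remaining tokens, or failure;
       composition = sequence and choice *)
    type 'a parser = string list -> ('a * string list) option

    let word (w : string) : string parser = fun tokens ->
      match tokens with
      | t :: rest when t = w -> Some (t, rest)
      | _ -> None

    (* sequence: run p, feed its result to f, run the parser f returns *)
    let ( >>= ) (p : 'a parser) (f : 'a -> 'b parser) : 'b parser = fun tokens ->
      match p tokens with
      | Some (x, rest) -> f x rest
      | None -> None

    (* choice: try p, fall back to q on failure *)
    let ( <|> ) (p : 'a parser) (q : 'a parser) : 'a parser = fun tokens ->
      match p tokens with
      | Some _ as r -> r
      | None -> q tokens

    (* example: "let" followed by either "x" or "y" *)
    let example = word "let" >>= fun _ -> (word "x" <|> word "y")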
Not sure, I guess it sticks.
On Compositionality.... and State
I am thinking on how to rewrite code, maybe not all but some of it. Thing is, in a functional language you can write code in combinatorial style, the best example of that being parser combinators, followed by monads and point-free style. I originally intended my language to be able to exploit that to the fullest. I.e., types help and hand you good guarantees to get your combinator libraries right, and the syntax (blocks for abstractions) helps you define them.
I went in another direction at some point. Mostly, because I just found it easier to patch up some functional code.
Parser combinators are the best example of a good combinator library. A good collection of them does two 'things' at the same time: They pass on some internal state from combinator to combinator, the tokens processed. And, they pass on and combine return values; sometimes, with some extra arguments, they also take some input which may be threaded too.
Monads do exactly the same thing, but in some sense always in the same fashion. They just allow you to write a sequence of instructions with respect to a state, since the combining operators are restricted to return and bind. The question is: Is that really the kind of composition you are after?
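The shape of that, stripped to the bone, as an OCaml sketch; just return and bind threading one piece of state:

    (* a computation producing an 'a while threading a state of type 's *)
    type ('s, 'a) state = 's -> 'a * 's

    let return (x : 'a) : ('s, 'a) state = fun s -> (x, s)

    let bind (m : ('s, 'a) state) (f : 'a -> ('s, 'b) state) : ('s, 'b) state =
      fun s -> let (x, s') = m s in f x s'

    (* example: a counter as state; tick returns the old value and bumps it *)
    let tick : (int, int) state = fun n -> (n, n + 1)

    let two_ticks = bind tick (fun a -> bind tick (fun b -> return (a, b)))
    (* two_ticks 0 evaluates to ((0, 1), 2) *)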
Point-free style, as it stands, is not that much associated with combinatorial thinking. It is more a manner of using higher-order functions to program with the least amount of variables.
If you follow the combinatorial style, then the question is: How would you write combinators for document generation, code generation, dealing with game states and events, etc? And, second, can it perform? Now I think you can make it perform by just making the right abstractions and making the application, or composition, of combinators as efficient as possible. But how to get it right in the first place, is just a puzzle for me at the moment.
So, this is what I am after: Reduce everything to composition of code.
Wednesday, July 7, 2010
Log 070710
The linker has been busy for ten hours now. It just cache-thrashes; I hope I can bootstrap soon, sick of that.
I am writing on the document generator, and thought a bit more about how to write it right, combinator style, which was my original approach to this language. I'll refactor after the bootstrap somewhat, some things I just didn't do right.
My laptop doesn't even seem to want to spend time on it anymore, it just hogs memory and sits there, without any reasonable progress or actual processor activity. I could code more, which is nonsense, or just upgrade from 2 to 8 GB. Which, actually, makes the most sense. The C compiler is good, no sense in going back for just an upgrade and a wait, it's just getting rid of ML...
(A small analysis. I can see the process jumping between 1 and 2 GB each half hour or so. As the AST is translated to lambda terms, to combinator terms, to code -where it halts,- they grow in size. Since I still only have simple transducers, the growth in size is linear. It stalls on the renaming of variables in the pseudo assembly code. I wrote the collection of free variables as a simple bottom-up procedure collecting and merging lists of them, which, in the end, is just damned expensive.)
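For my own reference, the difference on a toy lambda calculus instead of the pseudo assembly; the list-merging version is what I have, the set-based one is roughly what it should become:

    type term = Var of string | Lam of string * term | App of term * term

    (* list-based, bottom-up: every union re-scans the accumulated list,
       so big terms pay over and over for the same merges *)
    let rec free_list t =
      match t with
      | Var x -> [x]
      | Lam (x, b) -> List.filter (fun y -> y <> x) (free_list b)
      | App (f, a) ->
          let fs = free_list f in
          fs @ List.filter (fun y -> not (List.mem y fs)) (free_list a)

    (* the same traversal with sets; unions are cheap, no duplicate scans *)
    module S = Set.Make (String)

    let rec free_set t =
      match t with
      | Var x -> S.singleton x
      | Lam (x, b) -> S.remove x (free_set b)
      | App (f, a) -> S.union (free_set f) (free_set a)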
Tuesday, July 6, 2010
Log 070610
Still writing on the document generation. It's getting there. Fired up the compile for stage2 again after some simple tests got accepted. It's now just more convenient to see if I can get it to lex or parse a piece of code.
Grmbl, got stuck on a strange type error in the documentation lexer??? The silly thing is, the code of the Hi lexer, which is almost exactly the same but way more difficult, gets accepted?
Uncle Bob
Robert C. "Uncle Bob" Martin gave a talk at RailsConf 2010 on how we can buy 10^25 times more computing power per device in 2010 in comparison to 1970. Now, the figure is debatable, but I agree it's at least twenty orders of magnitude.
He went into testing for a short while, and I gave it some thought. Should testing be a language feature? I am not sure. Do you need a testing set-up/bed for a program, or should you be able to define tests from within the language? Given the amount of tests I am writing now, the latter doesn't seem too bad to me. So, a test construct in the language is certainly an option, even if I think you can't really do without building explicit test beds outside of programs.
Monday, July 5, 2010
Changed the Blog Theme
I changed the blog theme. The last blog was typeset in an Arial font and used happy orange colors, it is now Georgia/Times New Roman and a brownish palette.
Log 070510
As always, stuff is taking too much time. The compiler caught a number of changes I needed to make to stage2 for the new runtime while I was writing the code for the documentation generator. (I am lazy in that respect. Sometimes, I just let myself be driven by the compiler.) Made the changes, it is now prodding along in the linking stage for the last seven hours or so.
Built a small separate compiler for testing the documentation generator.
(Oops, I broke off the compile by accident. It was almost at the code-emitting phase, so I guess I am still cool. The long time above is due to the fact that the bootstrap compiler is an AST-based interpreter in OCaml with small-step semantics handling a lot of character data. That just isn't fast, and wastes a lot of memory.)
Corner case: Interface definitions which don't have instances are compiled incorrectly.
Sunday, July 4, 2010
Need Types
I'll need to rethink my types to get everything right; I'll need impredicative typing. I have most of it in order but didn't implement the higher-rank existential types yet. There's a delightful paper by Simon Peyton Jones on the subject, Practical type inference for arbitrary-rank types, 2007, although it doesn't have a separate prover (which is actually a pro, discharging constraints to a prover is computationally intensive), and doesn't deal with existential types or the mix of all of that with type classes.
There's actually a silly little problem I am wondering about. What's the semantics of a list of type (?t => list t), or rather, list (?t => t). Is it: A, a list of values of an unknown type t, or B: a list of values each of unknown types. I need B, of course.
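In OCaml-ish terms, the difference looks like this; a sketch with a GADT standing in for the existential, nothing to do with the actual Hi encoding:

    (* Pack hides a type together with one operation on it *)
    type packed = Pack : 'a * ('a -> string) -> packed

    (* B: every element hides its own type *)
    let each_its_own : packed list =
      [ Pack (1, string_of_int); Pack ("one", fun s -> s) ]

    (* A: one hidden type shared by the whole list *)
    type packed_list = PackList : 'a list * ('a -> string) -> packed_list

    let one_for_all : packed_list = PackList ([1; 2; 3], string_of_int)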
Meanwhile, I am also writing on the documentation generator, which is supposed to be part of the compiler. I got it partly right, then wrong, now right, in the last -cough- couple of years.
Log 070410
The unserialization code works, as far as I tested it on some primitive types and lists of those. I went for a full compile for a change, which is a day-long wait.
How many mistakes can you make in reading silly stuff, or implementing a straightforward API for that? The basic primitives for a reader are look, skip, and possibly, is_end; everything else is easily derivable from that. When interfacing with C what do I find: get, unget (limited to one char), fget, but no funget, and a bunch of other silly stuff. Now, I should have been more careful implementing the unserialization code, but man...
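What I mean by derivable, in a small OCaml sketch; a reader over a plain string just for illustration, not the Hi interface:

    type reader = { text : string; mutable pos : int }

    (* the primitives: look, skip, is_end *)
    let is_end r = r.pos >= String.length r.text
    let look r = r.text.[r.pos]           (* only valid when not at the end *)
    let skip r = r.pos <- r.pos + 1

    (* derived: get is just look followed by skip *)
    let get r = let c = look r in skip r; c

    (* derived: read characters while a predicate holds *)
    let take_while p r =
      let b = Buffer.create 16 in
      while not (is_end r) && p (look r) do Buffer.add_char b (get r) done;
      Buffer.contents b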
Saturday, July 3, 2010
A Piece of History
Sigh, it's a long time since I've been working on this compiler. I started about five and a half years ago, after spending too much time thinking about SAT solving and P?=NP, and decided I needed to think about something simple for a change - I just puzzle 24 hrs a day, can't stop it. The development has been slow, due to much reading, intervening work, and, after some medication, a series of psychoses, but ah well, it's getting there.
Games
And now, for something completely else. Hi was originally intended by me as a high-level language for writing games, though I think I am ending up with something which is more suitable for scientific calculations. Now, simple functional languages aren't even that good at game code, due to the fact that unions of types are usually used for situations where, in an OO language, you'd use simple inheritance, which gives a much simpler approach. I.e., in a functional language, in a game where you'd want to manipulate a list of different character types, you'd need to know a priori what kind of characters go into that list, whereas in an OO language, you'd define a list on an abstract interface for characters and derive different characters implementing that interface.
You can work around that problem a bit if you implement type classes, existential types, and simple coercions, which is what I did in Hi. (This actually means you're looking at a very complex solver, which cannot solve all puzzles presented. But I don't care, as long as it typechecks the constraints it is given for the code it is presented with. I rely on the programmer to guide the typechecker through it, and on the fact that most programs just are not that complex.)
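Roughly, the workaround amounts to this; an OCaml sketch where a record of functions stands in for the abstract character interface, while Hi goes through type classes, existentials, and coercions instead:

    (* the interface the game cares about *)
    type character = { name : string; update : float -> unit; draw : unit -> unit }

    (* two unrelated concrete representations... *)
    type knight = { mutable hp : int }
    type dragon = { mutable fire : float }

    (* ...both coerced to the same interface *)
    let knight_character (k : knight) : character =
      { name = "knight";
        update = (fun dt -> k.hp <- k.hp - int_of_float dt);
        draw = (fun () -> print_endline "a knight") }

    let dragon_character (d : dragon) : character =
      { name = "dragon";
        update = (fun dt -> d.fire <- d.fire +. dt);
        draw = (fun () -> print_endline "a dragon") }

    (* the heterogeneous list you want in a game *)
    let scene : character list =
      [ knight_character { hp = 100 }; dragon_character { fire = 0.0 } ]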
Since I am looking at looong compiles, I started patching up simple skeletal animation. (I don't know a lot about this; I ended up coding something simple with a small number of control points, where simulations are made a la a physics engine. I minimize the square root of the averaged squared distance to where I want those control points to be, subject to a number of constraints.)
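Written out as a plain function, the objective looks like this; an OCaml sketch over 2D control points, with the constraints left out:

    (* square root of the mean squared distance between where the control
       points are and where I want them to be *)
    let objective (points : (float * float) list) (targets : (float * float) list) =
      let sq_dist (x1, y1) (x2, y2) = (x1 -. x2) ** 2.0 +. (y1 -. y2) ** 2.0 in
      let total = List.fold_left2 (fun acc p t -> acc +. sq_dist p t) 0.0 points targets in
      sqrt (total /. float_of_int (List.length points))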
Friday, July 2, 2010
Log 070210
Typos, lots of typos, and looooong compiles. It's not grabbing a coffee anymore; more like, make coffee, do laundry, create offspring, wave them off to high school, and find out it still didn't finish. Good news is there don't seem to be logical errors in the compiler. I spotted one missed type error; usually that means I didn't pass a type constraint to the solver.
Wrote the AST serialization and unserialization code.
On Bootstrapping
The one big error I made in this project is that I should have stuck with ML as long as possible. Instead of writing an ML to C compiler, I went for an ML-based AST interpreter of Hi, which meant I could start writing in Hi as soon as possible - an advantage, since I could debug various parts of the language - but it also means I am in deep now.
The other mistake was probably having started in ML (OCaml) instead of C in the first place. Not sure on this one.
Log 070210
Writing the unit tests for the serialization code. Since this is code which has to be compiled by the OCaml-interpreted Hi-to-C compiler, stage 1, I am again looking at not-even-snail-speed compiles. Got rid of a few bugs... (Guess I should print again. For some reason, staring at a screen for too long makes one word-blind.)
Woke up. Serialization works, now for the reverse, and the serialization code for the AST. I serialize text representations of integers since these are the closest primitives I had in the language. At some point, I should just serialize/write the bit representation directly. But that's, again, for later.
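In a small OCaml sketch, that amounts to nothing more than this; decimal text with a separator going out, the same text coming back in:

    (* write an integer as decimal text followed by a separator *)
    let write_int (out : Buffer.t) (n : int) =
      Buffer.add_string out (string_of_int n);
      Buffer.add_char out ' '

    (* read it back with look/skip style primitives over a string *)
    let read_int (s : string) (pos : int ref) : int =
      let start = !pos in
      while !pos < String.length s && s.[!pos] <> ' ' do incr pos done;
      let n = int_of_string (String.sub s start (!pos - start)) in
      if !pos < String.length s then incr pos;  (* skip the separator *)
      n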
My mind is getting better; there were some thoughts again when I went to bed. One performance thought: I followed my stack representation faithfully, which means that when I push the first thunk in a series, inheriting code follows that thunk, and expansion with extra arguments (through a C call and a copy of the original thunk) is done afterwards, if necessary - where I guessed 'necessary' wouldn't happen very often. It doesn't make sense; I should factor that out and just generate the right code for the first thunk at some point - I'll leave it for now.
Thursday, July 1, 2010
Serialization, readers, writers, message queues and file descriptors
I looked at my serialization code, decided it was wrong, and started rewriting it. So, first I rewrote it with abstract readers and writers. Great; to do that the right way I would need multiple-parameter type classes: a reader r is some abstract construct reading things of type t. Solved that problem, in my head at least, it ain't difficult in my setting. Decided the multiple-parameter type classes are for later and restricted the IO type to word. But then again, I already interface with C, and actually, file descriptors are then a better option than implementing heavyweight reader and writer interfaces. Looked at whether file descriptors could be used as message queues for IPC, and behold: supported in Linux, not POSIX, so not portable? What now?
Oh god, mmap for IPC?