**-under construction, this is my scratch pad-**

Let's see if I can get this higher-ranked typing stuff right. The basis of my semantic/type checker is a sequent calculus reasoner which just tries to solve whatever is thrown at it. What is thrown at the prover are the type assumptions and obligations derived from a Hi AST. Since Hi definitions are all typed, there is no (real) need for generalization of derived types.

The basis of reasoning is unification, the basis of higher-order logic reasoning is instantiation/specialization and skolemization. A nice manner of abbreviating propositional (higher-order) reasoning is with a sequent calculus.

**Sequent Calculi for Typing**

Let's first look at sequent calculi. In logic, a sequent is standardly defined as a conjunction of antecedents implying a disjunction of consequents. Since, in the context of typing for programming languages, we're normally not interested in proving a disjunction of consequents, a sequent in my prover is a conjunction of antecedents implying a conjunction of consequents.

So, formally, where I mean

γ0 ∧ γ1 ∧ … ∧ γn ⇒ δ0 ∧ δ1 ∧ … ∧ δm

I am tempted to write,

γ0, γ1, … γn ⊢ δ0, δ1, … δm

but, since that is non-standard, will just write the conjunctions explicitly,

γ0 ∧ γ1 ∧ … ∧ γn ⊢ δ0 ∧ δ1 ∧ … ∧ δm

and abbreviate lots of shiny small stuff -look mum, it's math- with shiny big stuff such as

γ ∧ Γ ⊢ δ ∧ Δ.

To give it a calculational flavor, my prover just discharges consequents one-by-one head-first.

Now we have trivialities like

-------[ Done ]

Γ ⊢

Γ ⊢ δ0        Γ ∧ δ0 ⊢ Δ

--------------------------------- [ Conj ]

Γ ⊢ δ0 ∧ Δ

and

Γ ⊢ Δ        δ0 ∈ Γ

------------------ [ Elim ]

Γ ⊢ δ0 ∧ Δ

The Done rule states that an empty series of consequents is trivially true. The Conj rule states that a proven consequent can be used as an antecedent. The Elim rule states that a consequent is trivially true if it also appears as an antecedent.
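As a sketch, the three structural rules above can be read as a head-first loop over the consequents. The names `prove` and `discharge` below are mine, not from the actual Hi implementation; this is a minimal illustration, not the real prover.

```python
# A minimal sketch of a prover that discharges consequents one-by-one,
# head-first, implementing the Done, Conj, and Elim structural rules.

def prove(antecedents, consequents, discharge):
    """Discharge each consequent in order; proven goals join the antecedents.

    `discharge` tries to prove a single goal given the current antecedents
    and returns True on success.
    """
    gamma = list(antecedents)
    for delta in consequents:       # Done: an empty consequent list succeeds
        if delta in gamma:          # Elim: the goal is already assumed
            continue
        if not discharge(gamma, delta):
            return False
        gamma.append(delta)         # Conj: a proven goal becomes an assumption
    return True

# With a discharge function that proves nothing by itself, only goals
# already in the antecedent succeed (via Elim).
assert prove(["a", "b"], ["b"], lambda g, d: False)
assert not prove(["a"], ["b"], lambda g, d: False)
```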

A typing algorithm terminates if it deterministically derives a finite tree proof. A set of typing rules is deterministic if there's only one, possibly prioritized, manner of applying them.

The actual rationale for these sequent calculi is that a) they trivialize proofs of equivalences of infinite/recursive types, more about that later, b) they 'linearize' proofs in the sense that you can implement the prover as a function on two series of type terms, c) the result of proving is a collection of bound schematic variables which can be used later to decorate the AST.

The prover is driven by the unification process. For that we actually have a large number of different equalities such as '= kind,' '= type,' '= spec,' and '= skol.' The kind and type equalities are abbreviated. (This distinction is maybe not necessary, but convenient. I only want to unify on types where the scheme is removed, and I don't want types with their schemes to be substituted into other simple types. Moreover, I want to be able to specialize the same occurrence of a type scheme in, say, a list to multiple instantiations to make many-valued lists possible.)

**Kind Checking**

During kind checking, a set of rules, formulated below, is used such that a series of postulated equations is discharged, and the antecedent after proving will consist of a series of kind equalities in which each kind schematic variable receives a value.

Below, I'll write ?κi for a kind-schematic variable, and Κi for a whole kind term. Kind terms are the usual mix of stars * and arrows →. Kind checking is a trivial exercise with the following rules.

Γ ⊢ Δ

----------------[ Eq ]

Γ ⊢ Κ=Κ ∧ Δ

Γ ∧ ?κ0=Κ ⊢ Δ[?κ0/Κ]        ?κ0 ∉ Κ

------------------------------------- [ Elim ]

Γ ⊢ ?κ0=Κ ∧ Δ

Γ ⊢ ?κ0=Κ ∧ Δ

-----------------[ Swap ]

Γ ⊢ Κ=?κ0 ∧ Δ

Γ ⊢ Κ0 = Κ2 ∧ Κ1 = Κ3 ∧ Δ

--------------------------------[ Arrow ]

Γ ⊢ Κ0→Κ1 = Κ2→Κ3 ∧ Δ

The Eq rule removes simple equalities from the consequent. The Elim rule removes a schematic variable equality from the consequent, checks that it does not give rise to an infinite expansion (the occurs check), moves the equality to the antecedent, and eliminates the variable from the remaining consequents by substitution. The Swap rule just swaps a schematic variable to the left-hand side. The Arrow rule structurally decomposes a proof obligation where both terms are complex into two new proof obligations.
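Read operationally, the four rules are just a first-order unification loop over kind equations. Below is a minimal sketch under an assumed encoding of my own: kinds are the string `"*"`, schematic variables are strings starting with `"?"`, and arrows are tuples `("->", k0, k1)`; none of these names come from the actual Hi sources.

```python
# Sketch of the kind rules Eq, Elim, Swap, and Arrow as a unification loop.

def occurs(v, k):
    # The side condition ?k0 ∉ K of the Elim rule.
    return k == v or (isinstance(k, tuple) and any(occurs(v, x) for x in k[1:]))

def subst(v, k, term):
    # Replace schematic variable v with kind k throughout term.
    if term == v:
        return k
    if isinstance(term, tuple):
        return ("->", subst(v, k, term[1]), subst(v, k, term[2]))
    return term

def solve(goals):
    theta = {}                      # the antecedent: schematic -> kind
    goals = list(goals)
    while goals:
        lhs, rhs = goals.pop(0)
        if lhs == rhs:              # Eq: discharge a simple identity
            continue
        if isinstance(rhs, str) and rhs.startswith("?"):
            lhs, rhs = rhs, lhs     # Swap: schematic to the left-hand side
        if isinstance(lhs, str) and lhs.startswith("?"):
            if occurs(lhs, rhs):    # Elim's occurs check
                raise TypeError("infinite kind")
            theta[lhs] = rhs        # Elim: move to the antecedent, substitute
            goals = [(subst(lhs, rhs, a), subst(lhs, rhs, b)) for a, b in goals]
            theta = {v: subst(lhs, rhs, k) for v, k in theta.items()}
            continue
        if isinstance(lhs, tuple) and isinstance(rhs, tuple):
            goals = [(lhs[1], rhs[1]), (lhs[2], rhs[2])] + goals  # Arrow
            continue
        raise TypeError("kinds do not unify")
    return theta

# Example: ?k0 = * -> ?k1 and ?k1 = * yields ?k0 = * -> *.
assert solve([("?k0", ("->", "*", "?k1")), ("?k1", "*")])["?k0"] == ("->", "*", "*")
```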

The rules above are a bit contrived. First, they would be more easily defined directly in a type logic and then given an operational meaning. Second, the careful reader will notice that a proof sometimes only exists when the series of consequents is written in a certain order.

What the rules do give is a direct representation of the Hi kind checker, where a series of kind equalities is offered to a prover, the prover discharges each proof obligation, and a list of simple equalities -a mapping from schematic variables to terms- is returned.

Kind equalities are derived from terms in a bottom-up fashion; care is taken that they are linearized in the right order during that pass.
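The bottom-up derivation can be sketched as follows; the representation (constant names, `("app", f, a)` tuples) and the function name `kind_equations` are hypothetical choices of mine, assumed only for illustration.

```python
from itertools import count

# Sketch of deriving kind equations bottom-up: every subterm gets a fresh
# schematic kind, a constant is equated with its declared kind, and an
# application f a yields the equation kf = ka -> k.

def kind_equations(term, declared):
    """term: a constant name or an application ("app", f, a).
    Returns (root schematic kind, equations in bottom-up order)."""
    eqs, fresh = [], count()
    def go(t):
        k = "?k%d" % next(fresh)
        if isinstance(t, str):
            eqs.append((k, declared[t]))        # constant: its declared kind
        else:
            _, f, a = t
            kf, ka = go(f), go(a)
            eqs.append((kf, ("->", ka, k)))     # application: kf = ka -> k
        return k
    return go(term), eqs

# Example: "list int" with list : * -> * and int : * gives three equations,
# linearized so each schematic is constrained before it is used.
root, eqs = kind_equations(("app", "list", "int"),
                           {"list": ("->", "*", "*"), "int": "*"})
assert eqs == [("?k1", ("->", "*", "*")), ("?k2", "*"),
               ("?k1", ("->", "?k2", "?k0"))]
```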


**Type Checking**

Now, the real fun starts. Type checking higher-ranked types is a game where unification is done by equational reasoning and instantiating/specializing, skolemizing and -possibly- generalizing type terms by employing type constants, type constructors, type schematic variables, and type skolem constants in a type sequent calculus.

Let's assume higher-ranked type terms with type constants and type constructors (written as lowercase Latin c), function abstractions (written with arrows →), universally quantified types (∀), existentially quantified types (∃) and type predicates denoting interfaces (written as capitals). Without too much fuss, I'll go straight to an example, and just leave the formal grammar for what it is. The type "∀τ ⇒ ::I τ ⇒ τ → list (∃σ ⇒ ::I σ ⇒ σ)" describes a function which takes an element of type τ which implements interface I and returns a list of some unknown type σ which also implements interface I.
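To make the grammar concrete, here is one possible encoding of such terms as tagged tuples; the constructor helpers (`forall`, `exists`, `pred`, …) are hypothetical names of mine, not the Hi AST.

```python
# A sketch of higher-ranked type terms as tagged tuples.

def forall(v, t):  return ("forall", v, t)     # ∀v ⇒ t
def exists(v, t):  return ("exists", v, t)     # ∃v ⇒ t
def pred(i, v, t): return ("pred", i, v, t)    # ::I v ⇒ t
def arrow(a, b):   return ("->", a, b)         # a → b
def app(c, *ts):   return ("app", c) + ts      # constructor application

# The example from the text: ∀τ ⇒ ::I τ ⇒ τ → list (∃σ ⇒ ::I σ ⇒ σ)
example = forall("t",
    pred("I", "t",
        arrow("t", app("list", exists("s", pred("I", "s", "s"))))))

assert example[0] == "forall"
```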

For the calculus, we want to unify, so some extra notation is necessary. Type variables are written in (Greek) lowercase, type skolem constants are preceded with an exclamation mark (!), type schematic variables are preceded with a question mark (?).

(Note: For a normal ranked system (Damas-Milner) where quantifiers/type-schemes only appear outermost in type terms, it is easy to derive and implement a straightforward system by just specializing, or skolemizing, types at the defining and applying occurrences in a syntax tree. The prover just needs to deal with simple type equalities and complex type equalities of the form τ = skol(T) or τ = spec(T). Generalization is an elaborate procedure which also needs to capture postulated predicates in the antecedent. The current prover works that way.)

The game for a new higher-ranked Hi type system is to arrive at a set of typing rules such that: a) it is unnecessary to generalize, and b) skolemization and specialization are defined such that higher-ranked types can be dealt with trivially.

Let's kick off with some simple stuff. T and S stand for possibly quantified type terms, τ and σ for simple type terms.

Γ ⊢ Δ

---------------- [ Eq ]

Γ ⊢ T=T ∧ Δ

Γ ∧ ?τ=σ ⊢ Δ[?τ/σ]        ?τ ∉ σ

----------------------- [ Elim ]

Γ ⊢ ?τ=σ ∧ Δ

Γ ⊢ ?τ=σ ∧ Δ

----------------------- [ Swap ]

Γ ⊢ σ=?τ ∧ Δ

The above three rules discharge identities (Eq) and eliminate schematic variables (Elim and Swap).

To type check higher-ranked types, one normally needs rules for specialization, often called instantiation, skolemization, and -possibly- generalization. In a specialized term, universal quantifiers are eliminated and the bound variables are replaced with type schematic variables, or unknowns. Similarly, skolemization replaces universally bound variables with Skolem constants. Generalization is the reverse of specialization: type schematic variables (non-monomorphic) are replaced with universal quantifiers.

The role of specialization is, as in higher-order logic proofs, to defer reasoning and instead of supplying an actual witness, construct a witness ad-hoc by unifying on what you can infer from the given proof obligation.

Let's see if I can just get away with specializing terms in small steps. Let's make specialization and skolemization not separate functions but part of the calculus.

Γ ⊢ ?σ = spec (T[τ/?τ]) ∧ Δ

---------------------------------- [ Spec-∀ ]

Γ ⊢ ?σ = spec (∀τ ⇒ T) ∧ Δ

Γ ⊢ ?σ = skol (T[τ/!τ]) ∧ Δ

---------------------------------- [ Skol-∀ ]

Γ ⊢ ?σ = skol (∀τ ⇒ T) ∧ Δ

The above two rules, Spec-∀ and Skol-∀, specialize and skolemize universally bound type terms by replacing a bound variable with schematic variables and skolem constants respectively. In the rules, types remain tagged as specialized or skolemized, such that proving is driven solely by unification.
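One small step of Spec-∀/Skol-∀ can be sketched as below, reusing an assumed tagged-tuple encoding of type terms; the fresh-name convention (`?` and `!` prefixes on the bound variable's own name) is a simplification of mine, a real implementation would generate globally fresh names.

```python
# Sketch of the Spec-∀ and Skol-∀ steps: peel one quantifier off a term,
# replacing the bound variable with a schematic (?t) or skolem (!t) name.

def rename(term, old, new):
    # Replace every occurrence of the bound variable in the body.
    if term == old:
        return new
    if isinstance(term, tuple):
        return tuple(rename(t, old, new) for t in term)
    return term

def step(mode, term):
    """One small step of 'spec' or 'skol' on a ∀-quantified term.
    The result stays tagged so that proving is driven by unification."""
    tag, var, body = term
    assert tag == "forall"
    fresh = ("?" if mode == "spec" else "!") + var
    return (mode, rename(body, var, fresh))

# Specializing ∀t ⇒ t → t yields spec(?t → ?t); skolemizing yields skol(!t → !t).
assert step("spec", ("forall", "t", ("->", "t", "t"))) == ("spec", ("->", "?t", "?t"))
assert step("skol", ("forall", "t", ("->", "t", "t"))) == ("skol", ("->", "!t", "!t"))
```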

A minor drawback is that trivial rules need to be written twice, such as the ones below.

Γ ⊢ ?σ = spec (T) ∧ Δ ∧ ::I ?τ

---------------------------------- [ Pred-Spec-? ]

Γ ⊢ ?σ = spec (::I ?τ ⇒ T) ∧ Δ

Γ ⊢ ?σ = skol (T) ∧ Δ ∧ ::I ?τ

---------------------------------- [ Pred-Skol-? ]

Γ ⊢ ?σ = skol (::I ?τ ⇒ T) ∧ Δ

The two rules above state that type predicates (interfaces) on schematic variables are proven only after those variables are fully unified: the obligation is deferred to the end of the consequents.

Γ ∧ ::I !τ ⊢ ?σ = spec (T) ∧ Δ

---------------------------------- [ Pred-Spec-! ]

Γ ⊢ ?σ = spec (::I !τ ⇒ T) ∧ Δ

Γ ∧ ::I !τ ⊢ ?σ = skol (T) ∧ Δ

---------------------------------- [ Pred-Skol-! ]

Γ ⊢ ?σ = skol (::I !τ ⇒ T) ∧ Δ

Similarly, the two rules above state that type predicates on skolemized variables may be assumed.
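The asymmetry between the four Pred rules, deferring predicates on schematics versus assuming predicates on skolems, can be sketched as a single step function; the encoding and the name `step_pred` are assumptions of mine for illustration.

```python
# Sketch of the Pred-Spec/Pred-Skol rules: a predicate on a schematic
# variable is deferred (appended to the end of the consequents, to be
# proven once the variable is unified), while a predicate on a skolem
# constant is assumed (moved into the antecedent).

def step_pred(gamma, goal, delta):
    """goal = (mode, ("pred", iface, var, body)); delta = remaining goals.
    Returns the new (antecedents, consequents)."""
    mode, (_, iface, var, body) = goal
    if var.startswith("?"):          # Pred-Spec-? / Pred-Skol-?: defer
        return gamma, [(mode, body)] + delta + [("::", iface, var)]
    if var.startswith("!"):          # Pred-Spec-! / Pred-Skol-!: assume
        return gamma + [("::", iface, var)], [(mode, body)] + delta
    raise ValueError("predicate on a plain type variable")

# A predicate on ?t moves to the tail of the consequents; one on !t joins Γ.
g, d = step_pred([], ("spec", ("pred", "I", "?t", "T")), ["rest"])
assert d == [("spec", "T"), "rest", ("::", "I", "?t")]
g, d = step_pred([], ("skol", ("pred", "I", "!t", "T")), ["rest"])
assert g == [("::", "I", "!t")] and d == [("skol", "T"), "rest"]
```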

Γ ⊢ ?σ = ?σ0→?σ1 ∧ ?σ0 = spec T0 ∧ ?σ1 = spec T1 ∧ Δ

------------------------------------------------- [ Arrow ]

Γ ⊢ ?σ = spec T0→T1 ∧ Δ

Γ ⊢ ?σ = c (?σ0…?σi) ∧ ?σ0 = spec T0 ∧ … ∧ ?σi = spec Ti ∧ Δ

----------------------------------------------- [ Constructor ]

Γ ⊢ ?σ = spec (c (T0…Ti)) ∧ Δ

To be continued later...
