Sola's Type "System"

Any programming language needs a type system, and the one I'm writing, Sola, is no different. It's still very much in its infancy, but that doesn't mean I can't build out some basic types and their relations.
Right now, Sola has Ints, Bools, and Floats. There are {I, U}{8, 16, 32, 64, 128} for the Ints and F32/F64 for the Floats.
All of this is handled in a new pass: the resolver.

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum Type {
    Int(IntegerType),
    Bool,
    Float(FloatType),
}

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum IntegerType {
    I8,
    U8,
    I16,
    U16,
    I32,
    U32,
    I64,
    U64,
    I128,
    U128,
}

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum FloatType {
    F32,
    F64,
}

The job of the resolver is to assign unique IDs to each Variable and Function, and to resolve all types.

Assigning IDs and keeping track of the values is quite easy using [id_collections](https://docs.rs/id_collections/latest/id_collections/ "id_collections")::IdVec, written by my friend William. An IdVec functions like a regular Rust Vec, except that indexing is done by an id_type specific to that IdVec. You get those id_types by pushing to the IdVec. For example:

use id_collections::{id_type, IdVec};

#[id_type]
pub struct VariableId(u32);

let mut all_variables: IdVec<VariableId, Variable> = IdVec::new();
let id = all_variables.push(some_variable);
// Indexing takes a VariableId, never a bare usize.
let var = &all_variables[id];
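
To give a feel for how this fits together, here's a minimal sketch of what the resolver's state might look like. The struct and field names are my assumptions for illustration, not Sola's actual code:

use id_collections::IdVec;

// Hypothetical resolver state (names assumed).
struct Resolver {
    // Every variable and function gets its own ID on first encounter.
    variables: IdVec<VariableId, Variable>,
    functions: IdVec<FunctionId, Function>,
    // The resolved type of each variable, indexed by the same VariableId.
    variable_types: IdVec<VariableId, Type>,
}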

For type inference, I started out by spending quite some time writing a PartialOrd (partial_cmp) implementation for Type, with the intention of using it to see which types can be converted to which others. So u32 would be greater than u8, because every u8 fits in a u32.

That implementation had some problems, and I thought of a much simpler way to express the same thing: a can_convert_to(&self, other: &Type) -> bool function. This is much easier to write, because it only needs to check one direction:

impl IntegerType {
    fn can_convert_to(&self, other: &Self) -> bool {
        if self == other {
            return true;
        }
        use IntegerType::*;

        match (self, other) {
            // Signed types widen to larger signed types.
            (I8, I16) | (I8, I32) | (I8, I64) | (I8, I128) => true,
            (I16, I32) | (I16, I64) | (I16, I128) => true,
            (I32, I64) | (I32, I128) => true,
            (I64, I128) => true,
            // Unsigned types widen to larger unsigned types.
            (U8, U16) | (U8, U32) | (U8, U64) | (U8, U128) => true,
            (U16, U32) | (U16, U64) | (U16, U128) => true,
            (U32, U64) | (U32, U128) => true,
            (U64, U128) => true,
            // Unsigned types fit into any strictly larger signed type.
            (U8, I16) | (U8, I32) | (U8, I64) | (U8, I128) => true,
            (U16, I32) | (U16, I64) | (U16, I128) => true,
            (U32, I64) | (U32, I128) => true,
            (U64, I128) => true,
            _ => false,
        }
    }
}
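
The snippet above is the integer half; since the signature takes a &Type, I'd expect a Type-level method that delegates to it. A minimal sketch of that delegation, under my own assumptions:

impl Type {
    fn can_convert_to(&self, other: &Type) -> bool {
        match (self, other) {
            // Integer-to-integer conversions use the table above.
            (Type::Int(a), Type::Int(b)) => a.can_convert_to(b),
            // Everything else only converts to itself.
            _ => self == other,
        }
    }
}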

The actual type conversion is done by inserting a Cast expression into the AST. This Cast has an inner expression and a return_type.
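
As a rough idea of the shape, the AST node might look something like this (the exact Expr definition is my assumption):

pub enum Expr {
    // ... other expression variants ...
    Cast {
        inner: Box<Expr>,
        return_type: Type,
    },
}

This Cast expression can then be compiled by the compile pass to LLVM IR using build_int_cast_sign_flag, in a massive match on combinations of LLVM's types and mine: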

fn cast_value_to_type(
    &mut self,
    value: BasicValueEnum<'ctx>,
    return_type: &Type,
) -> BasicValueEnum<'ctx> {
    match value {
        BasicValueEnum::IntValue(int_val) => match return_type {
            // The sign flag (third argument) picks sign- vs. zero-extension.
            Type::Int(IntegerType::I8) => self.builder.build_int_cast_sign_flag(
                int_val, self.context.i8_type(), true, "tmp").into(),
            // LLVM integers are signless, so U8 also lowers to i8,
            // just with the sign flag turned off.
            Type::Int(IntegerType::U8) => self.builder.build_int_cast_sign_flag(
                int_val, self.context.i8_type(), false, "tmp").into(),
            ...
        },
        ...
    }
}
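
On the resolver side, inserting the cast can be a small helper. This is a sketch under my own naming assumptions (coerce is hypothetical, not Sola's actual code):

// Wrap an expression in a Cast if, and only if, the conversion is allowed.
fn coerce(expr: Expr, found: &Type, expected: &Type) -> Result<Expr, String> {
    if found == expected {
        Ok(expr)
    } else if found.can_convert_to(expected) {
        Ok(Expr::Cast {
            inner: Box::new(expr),
            return_type: expected.clone(),
        })
    } else {
        Err(format!("cannot convert {:?} to {:?}", found, expected))
    }
}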

While talking to friends about this system, one of them found a fatal flaw: u8 literals are impossible to express, because signed is preferred to unsigned for integer literals (if it were the other way around, i8 literals would be the inexpressible ones).
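
The flaw is visible directly in the conversion table: no conversion ever targets an unsigned type. Assuming, as an illustration, that a literal like 200 is first given a signed type such as I16, there is no way back to U8:

use IntegerType::*;

// Signed types can widen, but there is no signed -> unsigned edge,
// so a value that starts out signed can never become a u8.
assert!(I16.can_convert_to(&I32));
assert!(!I16.can_convert_to(&U8));
assert!(!I8.can_convert_to(&U8));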

The way to solve this problem is to use actual type inference, like Hindley-Milner. This will probably be my next endeavor, after fixing a few more little things like error reporting from the resolver.