Every week, one person is assigned to give a presentation of an hour or less on a topic of their choice.
Occasionally, we host external speakers.
Drinks and snacks will be served.
November 12, 2024
|
Jaylene Zhang
How ChatGPT works as a pedagogical tool
Slides
Abstract
ChatGPT has demonstrated proficiency in generating code for homework assignments, but its effectiveness in addressing conceptual questions remains uncertain. In this study, we will pose questions from a midterm exam to the model and assess its potential as an educational tool for students.
|
November 5, 2024
|
Daniel Zackon
Explicit Contexts for Substructural Languages
Abstract
A central challenge in mechanizing the meta-theory of
substructural languages is modeling contexts. In this talk, we will
describe CARVe, a general syntactic infrastructure for managing
substructural contexts explicitly, where elements are annotated with
tags from a resource algebra denoting their availability. We model
resource consumption by changing these tags, and so assumptions persist
as contexts are manipulated. This allows us to define relations between
substructural contexts via simultaneous substitutions without the need
to split them. Moreover, we establish a series of algebraic properties
about our context operations that are typically required to carry out
proofs in practice. The infrastructure is implemented in Beluga, and has
been used to mechanize a broad range of substructural systems.
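The tagging idea can be pictured with a minimal OCaml sketch, assuming a toy two-tag resource algebra rather than CARVe's actual interface: consuming a linear assumption rewrites its tag in place, so the context keeps its shape and never needs to be split.

```ocaml
(* Toy two-tag resource algebra; CARVe supports richer algebras. *)
type tag = Available | Used

type ctx = (string * tag) list

(* Consume a linear assumption by flipping its tag instead of deleting
   it, so the context's shape persists as it is manipulated. *)
let consume (x : string) (ctx : ctx) : ctx option =
  let rec go acc = function
    | [] -> None
    | (y, Available) :: rest when y = x ->
        Some (List.rev_append acc ((y, Used) :: rest))
    | entry :: rest -> go (entry :: acc) rest
  in
  go [] ctx

(* A linear derivation is complete only when every assumption is used. *)
let all_used (ctx : ctx) = List.for_all (fun (_, t) -> t = Used) ctx

let () =
  match consume "x" [ ("x", Available); ("y", Available) ] with
  | Some ctx' -> assert (not (all_used ctx'))
  | None -> assert false
```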
|
October 29, 2024
|
Chuta Sano
Manifestly Phased Communication via Shared Session Types
Paper and Slides
Abstract
Session types denote message protocols between concurrent processes, allowing a type-safe expression of inter-process communication. Although previous work demonstrates a well-defined notion of subtyping where processes have different perceptions of the protocol, these formulations were limited to linear session types, where each channel of communication has a unique provider and client. In this paper, we extend subtyping to shared session types, where channels can now have multiple clients instead of a single client. We demonstrate that this generalization can statically capture protocol requirements that span multiple phases of a client's interaction with a shared service provider, something not possible in prior proposals. Moreover, the phases are manifest in the type of the client.
|
October 22, 2024
|
Jerry Nyoike
How Students Progress through (functional) Programming Assignments
Slides
Abstract
Analyzing novice programmer behavior as they learn how to program is an active research area. Homework submission data can give insights into how students approach completing programming assignments. We take two views of how students solve programming assignment problems. First, a fine-grained analysis of the type errors students encounter, and of how their grades progress over time, provides insight into which concepts students struggle with; instructors can use this to provide additional clarification. Second, a high-level view of students' work patterns shows which question a student is working on at each submission, giving more information on how they move through the assignment.
|
October 8, 2024
|
Jason Hu
Foundations and Applications of Modal Type Theories (Part 2)
Abstract
Over the past few decades, type theories as mathematical foundations have been extensively studied and are well understood. Many proof assistants implement type theories and have found important applications providing critical security guarantees. In these applications, users often write meta-programs, programs that generate other programs, to implement proof-search heuristics and improve their efficiency. However, in contrast to our deep understanding of type theories, it remains unclear what foundation is suitable for supporting meta-programming in proof assistants. In this thesis, I investigate modal type theories, a specific approach to this problem. In modal type theories, modalities are a way to shallowly embed syntax into the systems, so users can write meta-programs that manipulate syntax through these modalities.
I explore two different styles of modal systems. In the first part, I investigate the Kripke-style systems, which faithfully model the familiar quasi-quoting style of meta-programming. I develop an explicit substitution calculus and scale it to dependent types, introducing Mint. I prove strong normalization of Mint, which implies its logical consistency, using an untyped domain model.
Nevertheless, the Kripke-style systems only support composition and execution of code, and they cannot easily support a general recursion principle on the structure of code. To support such a general recursion principle, I develop the layered style, where a system is divided into nested layers of sub-languages. The layered style scales quite naturally to dependent types, introducing DeLaM. DeLaM allows users to compose, execute and recurse on dependently typed code. I prove that DeLaM is weakly normalizing and its convertibility problem between types and terms is decidable. Hence, DeLaM provides a type-theoretic foundation to support type-safe meta-programming in proof assistants.
|
October 1, 2024
|
Jason Hu
Foundations and Applications of Modal Type Theories (Part 1)
Abstract
Over the past few decades, type theories as mathematical foundations have been extensively studied and are well understood. Many proof assistants implement type theories and have found important applications providing critical security guarantees. In these applications, users often write meta-programs, programs that generate other programs, to implement proof-search heuristics and improve their efficiency. However, in contrast to our deep understanding of type theories, it remains unclear what foundation is suitable for supporting meta-programming in proof assistants. In this thesis, I investigate modal type theories, a specific approach to this problem. In modal type theories, modalities are a way to shallowly embed syntax into the systems, so users can write meta-programs that manipulate syntax through these modalities.
I explore two different styles of modal systems. In the first part, I investigate the Kripke-style systems, which faithfully model the familiar quasi-quoting style of meta-programming. I develop an explicit substitution calculus and scale it to dependent types, introducing Mint. I prove strong normalization of Mint, which implies its logical consistency, using an untyped domain model.
Nevertheless, the Kripke-style systems only support composition and execution of code, and they cannot easily support a general recursion principle on the structure of code. To support such a general recursion principle, I develop the layered style, where a system is divided into nested layers of sub-languages. The layered style scales quite naturally to dependent types, introducing DeLaM. DeLaM allows users to compose, execute and recurse on dependently typed code. I prove that DeLaM is weakly normalizing and its convertibility problem between types and terms is decidable. Hence, DeLaM provides a type-theoretic foundation to support type-safe meta-programming in proof assistants.
|
September 17, 2024
|
Samuel Gélineau
Klister: type inference for type-driven macros
Slides
Abstract
Klister is a programming language at the intersection of Haskell and Racket, which I have implemented with David Christiansen and a few other collaborators. It features both type inference and type-driven macros. Supporting both of these features in a single programming language is harder than it sounds; I will demonstrate how standard approaches fall short, and which mold-breaking technique Klister utilises instead.
|
September 10, 2024
|
Reflective Metaprogramming for Session-Typed Processes
Chuta Sano
Abstract
We introduce FuSes, a functional programming language that supports
reflective metaprogramming over session-typed process calculus code. In
particular, a functional layer sits on top of a session-typed process
layer, and the functional layer uses the contextual box modality
extended with linear channel contexts to capture open process code.
FuSes not only supports code generation but also supports pattern
matching over open process code.
Furthermore, we introduce a set of run primitives that enable processes
to be executed and observed from within the functional layer. Using
these constructs, we implement well-known optimizations over concurrent
code such as batch optimizations and geo-load balancing in a type-safe
manner. Our technical contributions include a type system for FuSes, an
operational semantics, and a proof of its type safety.
|
August 27, 2024
|
Max Kopinsky
How does Haskell do Type Inference?
Slides
Abstract
We at COMEPLS are intimately familiar with the classical and
constraint-form Hindley-Milner type inference algorithms. But we are also
interested in advanced type system features, such as type-level functions,
indexed type constructors, and bounded polymorphism -- which HM cannot
directly support. Haskell, however, supports (lightly restricted) versions of
each of these, via type families, GADTs, and type classes respectively.
Haskell requires a single, unified inference system, capable of handling all of
its features and of being extended with new ones. We will present the work
OutsideIn(X): Modular Type Inference with Local Assumptions (Vytiniotis et al., 2011),
which describes the inference algorithm used in GHC today. The algorithm is
abstracted over an arbitrary constraint domain X and supports inference with
minimal type annotations in the presence of local assumptions. In particular,
we show why (the combination of) Haskell's features makes type inference
difficult, present the inference algorithm, and give an intuition for Vytiniotis et al.'s
constraint solver for X = type classes + GADTs + type families.
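The difficulty with local assumptions can be seen even in OCaml's GADTs (the talk's setting is Haskell, but the phenomenon is the same); the sketch below is illustrative. Matching on a GADT constructor introduces a local type equality, and without a polymorphic annotation the result type of eval cannot be inferred, which is exactly why OutsideIn asks for signatures instead of guessing.

```ocaml
(* A GADT of well-typed expressions: the index records the result type. *)
type _ expr =
  | Int  : int -> int expr
  | Bool : bool -> bool expr
  | Add  : int expr * int expr -> int expr

(* Each branch is checked under a local equality such as a = int.
   Deleting the "type a." annotation makes this fail to typecheck:
   the local assumptions alone do not determine a principal type. *)
let rec eval : type a. a expr -> a = function
  | Int n -> n
  | Bool b -> b
  | Add (l, r) -> eval l + eval r

let () = assert (eval (Add (Int 1, Int 2)) = 3)
```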
|
August 20, 2024
|
Normalization by Evaluation for Cocon, Part 1: Evaluation
Antoine Gaulin
Slides
Abstract
We present a version of Contextual LF with first-class contexts and context variables, adapted with explicit substitutions to allow normalization by evaluation. In this first part, we define an untyped domain that encodes the semantics of Contextual LF. Then, we give a normalization procedure that consists of evaluation of syntactic objects into the domain, and readback of domain objects into syntactic objects. We leave the discussion of the correctness proof for a follow-up talk.
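The evaluate-then-readback pipeline is easiest to see on a far smaller language. Here is a minimal OCaml sketch of NbE for the untyped lambda calculus (our illustration, much simpler than Contextual LF): evaluation sends syntax into a domain of closures and neutral values, and readback extracts normal forms by applying closures to fresh variables.

```ocaml
(* Syntax, with de Bruijn indices. *)
type tm = Var of int | Lam of tm | App of tm * tm

(* Untyped domain: closures, plus neutral values blocked on variables. *)
type value =
  | VLam of tm * env            (* a closure: code plus its environment *)
  | VNe of ne
and ne =
  | NVar of int                 (* a fresh variable, as a de Bruijn level *)
  | NApp of ne * value
and env = value list

(* Evaluation: syntactic objects into the domain. *)
let rec eval (env : env) : tm -> value = function
  | Var i -> List.nth env i
  | Lam b -> VLam (b, env)
  | App (f, a) -> apply (eval env f) (eval env a)
and apply f a =
  match f with
  | VLam (b, env) -> eval (a :: env) b
  | VNe n -> VNe (NApp (n, a))

(* Readback: domain objects into normal syntactic objects, at depth n. *)
let rec readback (n : int) : value -> tm = function
  | VLam (b, env) -> Lam (readback (n + 1) (eval (VNe (NVar n) :: env) b))
  | VNe ne -> readback_ne n ne
and readback_ne n = function
  | NVar k -> Var (n - k - 1)   (* convert the level back to an index *)
  | NApp (ne, v) -> App (readback_ne n ne, readback n v)

(* Normalization of a closed term: evaluate, then read back. *)
let normalize (t : tm) : tm = readback 0 (eval [] t)

let () =
  let id = Lam (Var 0) in
  assert (normalize (App (id, id)) = id)
```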
|
August 13, 2024
|
Normalization by Evaluation for Modal Dependent Type Theory
Jason Hu
Abstract
We present the Kripke-style modal type theory, Mint, which
combines dependent types and the necessity modality. It extends the
Kripke-style modal lambda-calculus by Pfenning and Davies to the full
Martin-Löf type theory. As such it encompasses dependently typed variants
of system K, T, K4, and S4. Further, Mint seamlessly supports a full universe
hierarchy, usual inductive types, and large eliminations. In this paper, we give
a modular sound and complete normalization-by-evaluation (NbE) proof for
Mint based on an untyped domain model, which applies to all four
aforementioned modal systems without modification. This NbE proof yields
a normalization algorithm for Mint, which can be directly implemented. To
further strengthen our results, our models and the NbE proof are fully
mechanized in Agda and we extract a Haskell implementation of our NbE
algorithm from it.
|
August 6, 2024
|
Syntax Repair as Language Intersection
Breandan Considine
Abstract
I will introduce a new technique for correcting syntax errors in arbitrary context-free languages. Our work stems from the observation that syntax errors with a small repair typically have very few unique small repairs, which can usually be enumerated up to a small edit distance and then quickly reranked. We place a heavy emphasis on precision: the enumerated set must contain every possible repair within a few edits and no invalid repairs. To do so, we construct a grammar representing the language intersection between a Levenshtein automaton and a context-free grammar, then decode it in order of likelihood.
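As a toy illustration of that precision specification (not the talk's automaton-grammar intersection), one can brute-force the edit-distance-1 ball over a small alphabet and keep exactly the members of the language, here balanced parentheses:

```ocaml
(* Toy alphabet and language: balanced parentheses with an atom 'x'. *)
let alphabet = [ '('; ')'; 'x' ]

(* Every string within one edit (delete/substitute/insert) of [s]. *)
let ball1 s =
  let n = String.length s in
  let del =
    List.init n (fun i -> String.sub s 0 i ^ String.sub s (i + 1) (n - i - 1)) in
  let sub =
    List.concat_map (fun c ->
        List.init n (fun i ->
            String.sub s 0 i ^ String.make 1 c ^ String.sub s (i + 1) (n - i - 1)))
      alphabet in
  let ins =
    List.concat_map (fun c ->
        List.init (n + 1) (fun i ->
            String.sub s 0 i ^ String.make 1 c ^ String.sub s i (n - i)))
      alphabet in
  del @ sub @ ins

(* Language membership: balanced parentheses. *)
let balanced s =
  let depth =
    String.fold_left
      (fun d c ->
        if d < 0 then d   (* stay stuck once unbalanced *)
        else match c with '(' -> d + 1 | ')' -> d - 1 | _ -> d)
      0 s
  in
  depth = 0

(* Every valid repair within one edit, and nothing invalid. *)
let repairs s = List.sort_uniq compare (List.filter balanced (ball1 s))

let () = assert (List.mem "(x)" (repairs "(x"))
```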
|
July 30, 2024
|
SAT-Diff: A Tree Differencing Framework Using Max-SAT Solving
Haolin Ye
Abstract
Computing differences between tree-structured data is a critical
but challenging problem in software analysis. In this talk, we propose
a novel tree diffing approach called SATDiff, which reformulates the structural
diffing problem into a MaxSAT problem. By encoding the necessary transformations
from the source tree to the target tree, SATDiff generates correct, minimal,
and type-safe low-level edit scripts with formal guarantees. We then synthesize
concise high-level edit scripts by effectively merging low-level edits in
the appropriate topological order. To alleviate runtime concerns, SATDiff allows
flexible integration with anytime solvers to provide high-quality solutions in
a timely fashion, albeit without minimality guarantees. This design choice
enables SATDiff to accommodate various user needs, offering both minimal
edit script solutions and timely, high-quality solutions. For the evaluation
of SATDiff, in addition to conciseness, we propose a new language model-based
metric to evaluate the naturalness and comprehensibility of edit scripts. This
metric uses GPT-4 and Claude-3.5 to evaluate edit scripts from a programmer's
perspective, ensuring alignment with human values by following certain guiding
principles, such as discouraging unnecessary repetitive edits. Our empirical
results demonstrate that SATDiff outperforms existing heuristic-based approaches
by a significant margin in terms of both conciseness and naturalness while
maintaining comparable runtime performance.
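As a rough picture of what a low-level edit script is (our illustrative types, not SATDiff's encoding), the sketch below applies relabel, insert, and delete edits at paths in a rose tree; SATDiff's MaxSAT query searches for a minimal script whose application turns the source tree into the target.

```ocaml
(* Rose trees and a toy language of low-level edits at root paths. *)
type tree = Node of string * tree list

type edit =
  | Relabel of int list * string          (* path to a node, new label *)
  | DeleteChild of int list * int         (* path to a parent, child index *)
  | InsertChild of int list * int * tree  (* path, position, new subtree *)

(* Apply a function to the node reached by following [path]. *)
let rec apply_at path f (Node (l, kids) as t) =
  match path with
  | [] -> f t
  | i :: rest ->
      Node (l, List.mapi (fun j k -> if j = i then apply_at rest f k else k) kids)

let apply t = function
  | Relabel (p, l') -> apply_at p (fun (Node (_, ks)) -> Node (l', ks)) t
  | DeleteChild (p, i) ->
      apply_at p
        (fun (Node (l, ks)) -> Node (l, List.filteri (fun j _ -> j <> i) ks)) t
  | InsertChild (p, i, sub) ->
      apply_at p
        (fun (Node (l, ks)) ->
          Node (l, List.filteri (fun j _ -> j < i) ks
                   @ sub :: List.filteri (fun j _ -> j >= i) ks)) t

let () =
  let src = Node ("add", [ Node ("1", []); Node ("2", []) ]) in
  let script = [ Relabel ([], "mul"); Relabel ([ 1 ], "3") ] in
  assert (List.fold_left apply src script
          = Node ("mul", [ Node ("1", []); Node ("3", []) ]))
```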
|
July 23, 2024
|
Combining Automatic Amortized Resource Analysis with Bayesian Learning
Jan Hoffmann (invited speaker)
Abstract
There are two approaches to automatically deriving symbolic
worst-case resource bounds for programs: static analysis of the source
code and data-driven analysis of cost measurements obtained by running
the program. Static resource analysis is usually sound but incomplete.
Data-driven analysis can always return a result, but its lack of
robustness often leads to unsound results.
In this talk, I present a hybrid resource bound analysis that tightly
integrates static analysis and data-driven analysis. The static
analysis part builds on automatic amortized resource analysis (AARA),
a type-based resource analysis method that infers resource bounds
using linear optimization. The data-driven part relies on Bayesian
modeling and inference techniques that improve upon previous
data-driven analysis methods by reporting an entire probability
distribution over likely resource cost bounds. This Hybrid AARA is
statistically sound under standard assumptions on the runtime cost
data.
In the first part of the talk, I present type-based AARA and
illustrate its strengths and weaknesses. I then discuss the
challenges with data-driven analyses for worst-case bounds. Finally, I
show how Bayesian learning can be integrated into type-based AARA. I
also summarize an experimental evaluation that indicates that Hybrid
AARA (i) effectively mitigates the incompleteness of purely static
resource analysis; and (ii) is more accurate and robust than purely
data-driven resource analysis.
|
July 16, 2024
|
Teaching the Art of Functional Programming using Automated Grading
Aliya Hameer (invited speaker)
Paper and Slides
Abstract
Online programming platforms have immense potential to improve students’ educational experience. They
make programming more accessible, as no installation is required; and automatic grading facilities provide
students with immediate feedback on their code, allowing them to fix bugs and address errors in their
understanding right away. However, these graders tend to focus heavily on the functional correctness of a
solution, neglecting other aspects of students’ code and thereby causing students to miss out on a significant
amount of valuable feedback.
In this talk, we recount our experience in using the Learn-OCaml online programming platform to
teach functional programming in a second-year university course on programming languages and paradigms.
Moreover, we explore how to leverage Learn-OCaml’s automated grading infrastructure to make it easy
to write more expressive graders that give students feedback on properties of their code beyond simple
input/output correctness, in order to effectively teach elements of functional programming style. In particular,
we describe our extensions to the Learn-OCaml platform that evaluate students on test quality and code style.
By providing these tools and a suite of our own homework problems and associated graders, we aim to
promote functional programming education, enhance students’ educational experience, and make teaching and
learning typed functional programming more accessible to instructors and students alike, in our community
and beyond.
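One of these extensions, grading test quality, can be pictured with a tiny mutation-testing sketch (illustrative OCaml, not the Learn-OCaml grading API): a student's test suite earns credit only if it distinguishes the reference solution from a deliberately seeded buggy mutant.

```ocaml
(* Reference solution and a deliberately seeded buggy mutant. *)
let reference x = abs x
let mutant x = x                     (* bug: forgets negative inputs *)

(* A student's test suite: (input, expected output) pairs. *)
let student_tests = [ (3, 3); (0, 0); (-4, 4) ]

(* A suite has good quality if some test exposes the mutant. *)
let kills_mutant tests buggy =
  List.exists (fun (input, expected) -> buggy input <> expected) tests

let () =
  (* The tests must agree with the reference implementation... *)
  assert (List.for_all (fun (i, e) -> reference i = e) student_tests);
  (* ...and detect the seeded bug to earn the test-quality points. *)
  if kills_mutant student_tests mutant
  then print_endline "test quality: mutant detected"
  else print_endline "test quality: add tests for negative inputs"
```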
|
July 9, 2024
|
A Gentle Introduction to Session Types
Chuta Sano
Slides
Abstract
Not many people in our research group work on session types. Let's change that. I will be going over an abridged (and admittedly biased) history of session types, from Honda's original formulation to a more modern presentation closer to linear logic. My hope is to get the audience's feet wet with session types so that more "up to date" research on session types can follow!
|
July 2, 2024
|
Efficient Evaluation of Lazy Programs, or Compilation of Call-by-Need
Clare Jang
Slides
Abstract
Lazy evaluation is a great tool for constructing composable programs:
it lets us use parser combinators without wrestling with recursion, it preserves
the optimal amortised time complexity of persistent data structures, and it opens the way
to co-data such as streams. The question is: can we actually enjoy the benefits of lazy
programming without harming real performance? In this talk, I will explain why this
is not trivial and cover two classic, highly influential attempts to solve the problem: the G-machine and the
Three-Instruction Machine (TIM). These two machines illustrate the key ideas behind evaluating and compiling
lazy programs and serve as guideposts to more recent and practical designs
(e.g. the Spineless Tagless G-machine and "Compiling without Continuations").
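The delicate property both machines must preserve is sharing: a thunk is evaluated at most once and its result is memoized. A minimal OCaml sketch of the semantics being compiled (using the built-in Lazy module, not a G-machine or TIM):

```ocaml
(* A thunk runs at most once; the second force returns the cached 42. *)
let expensive = lazy (print_endline "computed once"; 6 * 7)

let () =
  assert (Lazy.force expensive = 42);
  assert (Lazy.force expensive = 42)

(* Co-data is the classic payoff: an infinite stream whose elements
   are computed only on demand. *)
type 'a stream = Cons of 'a * 'a stream Lazy.t

let rec nats_from n = Cons (n, lazy (nats_from (n + 1)))

let rec take k (Cons (x, rest)) =
  if k = 0 then [] else x :: take (k - 1) (Lazy.force rest)

let () = assert (take 3 (nats_from 0) = [0; 1; 2])
```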
|
June 25, 2024
|
Pure Type Systems
Antoine Gaulin
Slides
Abstract
Pure type systems (PTSs) provide a unified framework for describing type systems for the λ-calculus.
This allows studying large classes of typed λ-calculi simultaneously.
In particular, one can specify properties of type systems, such as predicativity, and prove meta-theoretic results conditional on these properties.
While PTSs can describe a large variety of type systems, the λ-calculus is, on its own, impractical and difficult to use.
A key advantage of PTSs is that we can define extensions of the λ-calculus, say with pairs, definitions, etc., independently of the original type system.
With suitable extensions, we obtain a framework capable of describing practical programming languages, including Standard ML.
Moreover, it is possible to study the impact of the new features on arbitrary type systems.
In particular, one can define an extension and show that it preserves normalization, indicating that it is a safe extension.
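Concretely, a PTS is specified by sorts, axioms, and product-formation rules. Here is a minimal OCaml sketch of such specifications, instantiated to three corners of the lambda cube (a data-only illustration, not a typechecker):

```ocaml
(* A PTS is specified by sorts S, axioms A, and rules R. *)
type sort = Star | Box

type spec = {
  axioms : (sort * sort) list;         (* (s1, s2) means s1 : s2 *)
  rules  : (sort * sort * sort) list;  (* product rules (s1, s2, s3) *)
}

(* Simply typed lambda-calculus: only term-level functions. *)
let lambda_arrow =
  { axioms = [ (Star, Box) ]; rules = [ (Star, Star, Star) ] }

(* System F adds abstraction over types (polymorphism). *)
let system_f =
  { axioms = [ (Star, Box) ];
    rules  = [ (Star, Star, Star); (Box, Star, Star) ] }

(* The calculus of constructions enables all four product rules. *)
let coc =
  { axioms = [ (Star, Box) ];
    rules  = [ (Star, Star, Star); (Box, Star, Star);
               (Star, Box, Box); (Box, Box, Box) ] }

let allows spec rule = List.mem rule spec.rules

let () =
  assert (allows system_f (Box, Star, Star));          (* polymorphism *)
  assert (not (allows lambda_arrow (Box, Star, Star)))
```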
|
June 18, 2024
|
A Bidirectional Demand Semantics for Mechanized Cost Analysis of Lazy Programs
Maite Kramarz
Abstract
Lazy evaluation is a powerful tool that enables better
compositionality and potentially better performance in functional
programming, but it is challenging to analyze its computation cost.
Existing works either require manually annotating sharing, or rely on
separation logic to reason about heaps of mutable cells. In this talk,
we discuss a bidirectional demand semantics that allows for extrinsic
reasoning about the computation cost of lazy programs without relying on
special program logics. We demonstrate how this semantics can be applied
to both simple sorting algorithms and more complex data structures which
rely on laziness.
|
June 11, 2024
|
Yet Another Lecture on Normalization for Type Theory
Jason Hu
Abstract
In this talk, I begin with a less well-known paradox due to Gödel and Church to explain why even ordinary mathematics must resolve the halting problem if it carries a notion of computation. For a type theory, the halting problem is resolved by proving normalization, which asserts that all well-typed terms terminate. Continuing Jake's talk on definitional interpreters from last week, I give a normalization algorithm different from the one I showed previously.
|
June 4, 2024
|
Continuation-Passing Style and Defunctionalization
Jacob Errington
Abstract
Continuation-Passing Style (CPS) and defunctionalization (d17n) are two program transformations that, when used in conjunction, witness a duality between imperative and declarative programming. Whereas CPS makes explicit the fine-grained control flow of a program, d17n eliminates higher-order functions. By combining these two transformations, we obtain a general strategy for transforming programs expressed declaratively into imperative programs:
1. convert the given program to CPS by introducing a functional parameter to accumulate pending operations in a stack; and
2. defunctionalize this accumulator to realize it as a first-order stack data structure.
The resulting program takes on imperative qualities: its function calls are akin to gotos as those functions never return, and the parameters of those functions are sometimes akin to mutable variables. In this presentation I will quickly review the CPS transformation before introducing d17n in general. Then, I will explore a piece of functional programming folklore relating these concepts: I will demonstrate the application of the CPS+d17n transformation to a left-to-right, call-by-value interpreter for a lambda calculus, giving rise to a stack-based state transition system that is precisely the CEK machine of Felleisen and Friedman.
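As a worked miniature (our example, not the talk's CEK construction), here are the two steps applied to a tree-summing function in OCaml: CPS first reifies the pending work as functions, and defunctionalization then turns those functions into a first-order stack interpreted by apply.

```ocaml
type tree = Leaf of int | Node of tree * tree

(* Step 0: the direct-style original. *)
let rec sum = function
  | Leaf n -> n
  | Node (l, r) -> sum l + sum r

(* Step 1: CPS. The continuation accumulates the pending operations. *)
let rec sum_cps t (k : int -> int) =
  match t with
  | Leaf n -> k n
  | Node (l, r) -> sum_cps l (fun m -> sum_cps r (fun n -> k (m + n)))

(* Step 2: defunctionalize. Each lambda becomes a first-order frame,
   and [apply] interprets the resulting stack of frames. *)
type kont =
  | Done
  | SumRight of tree * kont   (* the right subtree still to be summed *)
  | AddLeft of int * kont     (* a left result waiting to be added *)

let rec sum_def t k =
  match t with
  | Leaf n -> apply k n
  | Node (l, r) -> sum_def l (SumRight (r, k))
and apply k n =
  match k with
  | Done -> n
  | SumRight (r, k) -> sum_def r (AddLeft (n, k))
  | AddLeft (m, k) -> apply k (m + n)

let () =
  let t = Node (Node (Leaf 1, Leaf 2), Leaf 3) in
  assert (sum t = 6 && sum_cps t (fun x -> x) = 6 && sum_def t Done = 6)
```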
|
May 28, 2024
|
Type Error Localization
Max Kopinsky
Abstract
Traditional implementations of strongly-typed functional programming languages often miss the root cause of type errors. As a consequence, type error messages are often misleading and confusing - particularly for students learning such a language. We describe Tyro, a type error localization tool which determines the optimal source of an error for ill-typed programs following fundamental ideas by Pavlinovic et al.: we first translate typing constraints into SMT (Satisfiability Modulo Theories) using an intermediate representation which is more readable than the actual SMT encoding; during this phase we apply a new encoding for polymorphic types. Second, we translate our intermediate representation into an actual SMT encoding and take advantage of recent advancements in off-the-shelf SMT solvers to effectively find optimal error sources for ill-typed programs. Our design maintains the separation of heuristic and search also present in prior, similar work. In addition, our design increases modularity, re-usability, and trust in the overall architecture by using an intermediate representation to facilitate the safe generation of the SMT encoding. We believe this design principle will apply to many other tools that leverage SMT solvers.
Our experimental evaluation reinforces that the SMT approach finds accurate error sources using both expert-labeled programs and an automated method for larger-scale analysis. Compared to prior work, Tyro lays the basis for large-scale evaluation of error localization techniques, which can be integrated into programming environments and enable us to understand the impact of precise error messages for students in practice.
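The underlying notion of an error source can be sketched in a few lines (illustrative OCaml with naive constraint solving, not Tyro's SMT encoding): each typing constraint carries its source location, and a candidate error source is a smallest set of constraints whose removal makes the rest satisfiable.

```ocaml
(* Typing constraints for [(fun x -> x + 1) true], each tagged with
   the source location that produced it. *)
type ty = TInt | TBool | TVar of string

type constr = { loc : string; lhs : ty; rhs : ty }

let constraints =
  [ { loc = "x + 1"; lhs = TVar "x"; rhs = TInt };
    { loc = "true";  lhs = TVar "x"; rhs = TBool } ]

(* Naive satisfiability check for variable/base-type equations. *)
let satisfiable (cs : constr list) : bool =
  let tbl = Hashtbl.create 8 in
  List.for_all
    (fun c ->
      match c.lhs, c.rhs with
      | TVar x, t | t, TVar x ->
          (match Hashtbl.find_opt tbl x with
           | None -> Hashtbl.add tbl x t; true
           | Some t' -> t = t')
      | t, t' -> t = t')
    cs

(* Brute-force the single constraints whose removal restores
   satisfiability; the real tool asks a MaxSMT solver for an optimum. *)
let error_sources =
  List.filter
    (fun c -> satisfiable (List.filter (fun c' -> c' <> c) constraints))
    constraints

let () =
  List.iter
    (fun c -> print_endline ("candidate error source: " ^ c.loc))
    error_sources
```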
|
May 23, 2024
|
Extending Scala for Safe Concurrent Programming
Philipp Haller (invited speaker)
Abstract
Concurrent programming is notorious for hazards such as data races and
deadlocks. At the same time, program executions are difficult to
reproduce due to various sources of nondeterminism, complicating the
debugging of concurrent programs. These challenges have motivated work
on ensuring safety statically using type systems. Type-based
prevention of data races is a major challenge in the context of
object-oriented programming languages due to pervasive mutation and
aliasing. In addition, extending the type systems of existing
languages is a challenge, preventing recent advances from practical use.
This talk presents LaCasa, a language and type system extending Scala
with uniqueness typing and heap isolation. Goals of the language
extension include (a) integrating with the full Scala language, (b)
minimizing the overhead of type annotations, and (c) enabling the
reuse of existing code without source code changes. We show how LaCasa
prevents data races in concurrent programs based on the actor model.
We then present a variant, Parallel LaCasa, which extends data-race
safety to the Async-Finish model. Finally, we discuss the relation to
|
May 21, 2024
|
Code generation for OCaml by LLMs
Jaylene Zhang
Abstract
TBD
|
May 14, 2024
|
"Un"-intuitionistic logic
Chuta Sano
Abstract
TBD
|
May 7, 2024
|
Polymorphism in Adjoint Foundation of Metaprogramming
Clare Jang
Abstract
TBD
|
April 30, 2024
|
Contextual Refinement Types
Antoine Gaulin
Abstract
TBD
|
April 23, 2024
|
Dependent Layered Modal Type Theory
Jason Hu
Abstract
TBD
|