Large Language Models (LLMs) are becoming a standard part of the programmer's toolbox, whether we like it or not. A lot of work can be done on integrating the new LLM tools with conventional programming language research, such as using program verification tools to check the correctness of generated programs, or using types to improve generated suggestions [1]. We are happy to supervise a range of projects and theses, such as those below, that explore this integration.
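To give a flavour of what "using types or verification to improve suggestions" might mean in practice, here is a minimal sketch in Python. The LLM call is stubbed out (`ask_llm` is a hypothetical placeholder, not a real API); the interesting part is the post-hoc check that discards candidates that do not parse or that fail a simple property test.

```python
# A minimal sketch, assuming a hypothetical `ask_llm` that returns candidate snippets.
import ast
from typing import Callable, Dict, List

def ask_llm(prompt: str, n: int = 3) -> List[str]:
    """Hypothetical stand-in for an LLM API returning n candidate snippets."""
    return ["def add(a, b): return a + b",
            "def add(a, b): return a - b",      # wrong on purpose
            "def add(a, b) return a + b"]       # syntax error on purpose

def well_formed(snippet: str) -> bool:
    """Cheap static check: does the candidate at least parse?"""
    try:
        ast.parse(snippet)
        return True
    except SyntaxError:
        return False

def passes_spec(snippet: str, spec: Callable[[Dict], bool]) -> bool:
    """Dynamic check: run the snippet and test a property of the result."""
    env: Dict = {}
    try:
        exec(snippet, env)
        return spec(env)
    except Exception:
        return False

candidates = ask_llm("Write a Python function add(a, b) returning their sum.")
good = [c for c in candidates
        if well_formed(c) and passes_spec(c, lambda e: e["add"](2, 3) == 5)]
print(good)   # only the first candidate survives both checks
```

A real project would replace the property test with a type checker or a verification tool, but the shape of the loop (generate, check, filter) stays the same.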
LLM Integration with Stateful Programming Systems. LLMs are good at generating code, but what if you are working in a system that is already running and has live state? A typical example is Smalltalk, but the same applies to a debugger or Web Browser Developer Tools. The project would explore how to use LLMs in stateful systems, for example by passing information about the runtime state as part of the LLM context, or by using the LLM not just to write code, but also to suggest other possible interactions with the system (e.g., editing the value of a variable in a debugger).
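A minimal sketch of the idea, again with the LLM call stubbed out as a hypothetical `ask_llm` and the "debugger frame" represented as a plain dictionary: the live state is serialised into the prompt, and the model replies with a structured action on that state rather than with code.

```python
# A minimal sketch, assuming a hypothetical `ask_llm` and a toy debugger frame.
import json
from typing import Any, Dict

def ask_llm(prompt: str) -> str:
    """Hypothetical LLM call; a real version would hit an API."""
    return json.dumps({"action": "set_variable", "name": "retries", "value": 3})

def describe_state(frame: Dict[str, Any]) -> str:
    """Serialise the live state so it can be included in the prompt."""
    return "\n".join(f"{k} = {v!r}" for k, v in frame.items())

def suggest_interaction(frame: Dict[str, Any], problem: str) -> Dict[str, Any]:
    prompt = (f"The program is paused. Current locals:\n{describe_state(frame)}\n"
              f"Problem: {problem}\n"
              "Reply with a JSON action such as "
              '{"action": "set_variable", "name": ..., "value": ...}.')
    return json.loads(ask_llm(prompt))

# Toy 'debugger frame' and the suggested edit applied back to it.
frame = {"retries": 0, "url": "https://example.com", "timeout": 1.5}
action = suggest_interaction(frame, "The request loop gives up immediately.")
if action["action"] == "set_variable":
    frame[action["name"]] = action["value"]
print(frame)
```

The project would look at what a richer vocabulary of such actions should be (inspect, evaluate, edit, resume) and how the host system validates them before applying them.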
Language and Theory for Composing LLM Prompts. To generate larger amounts of code, people often compose chains of LLM prompts (e.g., generate a plan and then suggest code for each step, or generate code, generate tests, and check that they match). This project would look at systems for such composition [2,3] from a programming language perspective. What would a better language for this task look like? Are there any properties of such (meta-)programs that we can study?
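As a starting point, prompt composition can be written as ordinary function composition over a shared context. The sketch below is not any particular system from [2,3]; it is a minimal illustration in Python with a hypothetical, offline `ask_llm` stub, showing the kind of (meta-)program whose properties (which context keys each step reads and writes, for instance) one might want to reason about or type-check.

```python
# A minimal sketch, assuming a hypothetical `ask_llm` returning canned text.
from functools import reduce
from typing import Callable, Dict

Step = Callable[[Dict[str, str]], Dict[str, str]]

def ask_llm(prompt: str) -> str:
    """Hypothetical LLM call; returns canned text so the sketch runs offline."""
    return f"<model output for: {prompt[:40]}...>"

def step(name: str, template: str) -> Step:
    """A step renders a prompt template against the context and stores the reply."""
    def run(ctx: Dict[str, str]) -> Dict[str, str]:
        return {**ctx, name: ask_llm(template.format(**ctx))}
    return run

def pipeline(*steps: Step) -> Step:
    """Compose steps left to right; this composition is the object of study."""
    return lambda ctx: reduce(lambda acc, s: s(acc), steps, ctx)

generate_and_test = pipeline(
    step("plan",  "Outline a plan for: {task}"),
    step("code",  "Write code following this plan: {plan}"),
    step("tests", "Write tests for this code: {code}"),
)
print(generate_and_test({"task": "parse a CSV file"}))
```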
References