Four CSE co-authored papers presented at PLDI 2021
Four papers co-authored by eight CSE researchers were presented at PLDI 2021, the ACM SIGPLAN Conference on Programming Language Design and Implementation and the premier forum for researchers, developers, practitioners, and students presenting work on the design and implementation of programming languages.
Learn more about the papers:
Gefei Zuo (University of Michigan), Jiacheng Ma (University of Michigan), Andrew Quinn (University of Michigan), Pramod Bhatotia (TU Munich), Pedro Fonseca (Purdue University), Baris Kasikci (University of Michigan)
Reproducing production failures is crucial for software reliability, but existing bug reproduction approaches are not efficient, effective, or accurate enough for production systems. As a result, these approaches cannot support many uses of failure reproduction, such as debugging, security forensics, and fuzzing. The paper proposes Execution Reconstruction (ER), a technique that strikes a better balance among efficiency, effectiveness, and accuracy for reproducing production failures. ER uses hardware-assisted control and data tracing to make a technique known as symbolic execution viable for reproducing failures. Its key novelty is the ability to identify data values that are inexpensive to monitor and useful for getting around symbolic execution’s scalability limitations. Unlike existing tools, ER reproduces fully replayable executions that can power a variety of debugging and reliability use cases. It does so while incurring an average of 0.3% runtime overhead.
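To make the core idea concrete, here is a deliberately tiny sketch (not the ER system itself; all names and the path model are illustrative) of why recording a few cheap concrete values tames symbolic exploration: each unconstrained input doubles the number of paths to search, so pinning even one or two recorded values prunes the space dramatically.

```python
from itertools import product

def explore(branch_conditions, recorded):
    """Enumerate input assignments consistent with the recorded values;
    return (paths explored, assignments that reach the failure).
    A toy stand-in for symbolic path exploration."""
    names = sorted({n for ns, _ in branch_conditions for n in ns})
    free = [n for n in names if n not in recorded]
    paths, failures = 0, []
    for bits in product([0, 1], repeat=len(free)):
        env = dict(recorded)
        env.update(zip(free, bits))
        paths += 1
        # The failure is reached only when every branch condition holds.
        if all(fn(*(env[n] for n in ns)) for ns, fn in branch_conditions):
            failures.append(env)
    return paths, failures

# Hypothetical failure: requires a == 1, b == 0, c == 1.
conds = [(("a",), lambda a: a == 1),
         (("b",), lambda b: b == 0),
         (("c",), lambda c: c == 1)]

unguided_paths, _ = explore(conds, {})                  # 2**3 = 8 paths
guided_paths, hits = explore(conds, {"a": 1, "b": 0})   # only 2 paths
print(unguided_paths, guided_paths)  # 8 2
```

ER's actual contribution is choosing *which* values to record so that the recording stays cheap at runtime while still collapsing the search this way.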
Cyrus Omar (University of Michigan), David Moon (University of Michigan), Andrew Blinn (University of Michigan), Ian Voysey (Carnegie Mellon University), Nick Collins (University of Chicago), Ravi Chugh (University of Chicago)
Text editing is powerful, but some types of expressions are more naturally represented and manipulated graphically. Examples include expressions that compute colors, music, animations, tabular data, plots, diagrams, and other domain-specific data structures. This paper introduces live literals, or livelits, which allow clients to fill holes of types like these by directly manipulating a user-defined graphical user interface (GUI) embedded persistently into code. Livelits are unique in that they are compositional: a livelit GUI can itself embed spliced expressions, which are typed, lexically scoped, and can in turn embed other livelits. Livelits are also uniquely live: a livelit can provide continuous feedback about the run-time implications of the user’s choices even when splices mention bound variables, because the system continuously gathers closures associated with the hole that the livelit is filling.
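The shape of the idea can be sketched in a few lines (this is an illustrative mock-up, not Hazel's livelit API): a livelit is a GUI-backed literal whose parameters are splices, ordinary expressions from the surrounding scope, so it can both expand back into code and render a live preview as those bindings change.

```python
class ColorLivelit:
    """A hypothetical color literal whose channels are spliced expressions."""

    def __init__(self, r_expr, g_expr, b_expr):
        self.splices = (r_expr, g_expr, b_expr)

    def expand(self):
        # Macro-style expansion of the literal back into ordinary code.
        r, g, b = self.splices
        return f"rgb({r}, {g}, {b})"

    def live_preview(self, env):
        # Evaluate each splice in the client's lexical environment,
        # mirroring how livelits use closures gathered at the hole.
        return tuple(eval(expr, {}, dict(env)) for expr in self.splices)

lit = ColorLivelit("base + 16", "base", "255 - base")
print(lit.expand())                      # rgb(base + 16, base, 255 - base)
print(lit.live_preview({"base": 100}))   # (116, 100, 155)
```

The splices here mention the bound variable `base`, which is exactly the case where liveness is hard: the preview only works because the evaluation environment travels with the hole.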
Shankara Pailoor (University of Texas at Austin), Yuepeng Wang (University of Pennsylvania), Xinyu Wang (University of Michigan), Isil Dillig (University of Texas at Austin)
Implementations of many data structures use several correlated fields to improve their performance; however, inconsistencies between these fields can be a source of serious program errors. To address this problem, the researchers propose a new technique for automatically synthesizing data structure refinements from integrity constraints. The researchers evaluated their method on 25 data structures from popular Java projects and showed that it can successfully refine 23 of them. They also compared the method against two state-of-the-art synthesis tools and performed an ablation study to justify their design choices.
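A minimal sketch of the correlated-fields problem (a hypothetical class, not one of the paper's Java benchmarks): `count` caches `len(items)` for performance, an integrity constraint ties the two fields together, and every mutator must re-establish it or the structure silently corrupts.

```python
class Bag:
    """A toy container with a cached size field."""

    def __init__(self):
        self.items = []
        self.count = 0  # correlated field: must always equal len(self.items)

    def _invariant(self):
        # The integrity constraint relating the two fields.
        return self.count == len(self.items)

    def add(self, x):
        self.items.append(x)
        self.count += 1          # keeps the fields in sync
        assert self._invariant()

    def remove(self, x):
        self.items.remove(x)
        self.count -= 1          # forgetting this line is the classic bug
        assert self._invariant()

b = Bag()
b.add("a"); b.add("b"); b.remove("a")
print(b.count)  # 1
```

The paper's technique automates the tedious part: given the constraint, it synthesizes the field updates that each mutator needs, rather than relying on the programmer to remember them.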
Qiaochu Chen (University of Texas at Austin), Aaron Lamoreaux (University of Texas at Austin), Xinyu Wang (University of Michigan), Greg Durrett (University of Texas at Austin), Osbert Bastani (University of Pennsylvania), Isil Dillig (University of Texas at Austin)
This paper proposes a new technique for extracting information from webpages based on program synthesis. Given a natural language query and a few labeled webpages, the method synthesizes a program that can be used to extract similar types of information from other, unlabeled webpages. The researchers implemented their ideas in a new tool called WebQA and evaluated it on 25 different tasks across multiple domains. The experiments showed that WebQA significantly outperforms existing tools such as state-of-the-art question answering models and wrapper induction systems.
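The flavor of this approach can be shown with a toy enumerate-and-check synthesizer (the candidate DSL here is just regular expressions, purely for illustration; WebQA's actual program space is richer): keep the candidate program consistent with the few labeled pages, then run it on unlabeled ones.

```python
import re

# Hypothetical candidate programs: each is a regex extractor.
CANDIDATES = [
    r"<b>(\w+)</b>",
    r"Price: \$(\d+)",
    r"Author: (\w+)",
]

def synthesize(examples):
    """examples: list of (page_html, expected_answer).
    Return the first candidate consistent with every labeled page."""
    for pat in CANDIDATES:
        if all((m := re.search(pat, page)) and m.group(1) == ans
               for page, ans in examples):
            return pat
    return None

labeled = [("<b>Foo</b> Price: $12", "12"),
           ("<b>Bar</b> Price: $7", "7")]
prog = synthesize(labeled)
print(prog)  # Price: \$(\d+)

# Apply the synthesized program to an unlabeled page.
print(re.search(prog, "<b>Baz</b> Price: $30").group(1))  # 30
```

A neural question-answering model would answer each page independently; the synthesized program, once found, generalizes mechanically to every structurally similar page, which is where the reported gains over QA models and wrapper induction come from.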