Presented annually to the author(s) of a paper presented at the POPL held 10 years prior to the award year. The award includes a prize of $1,000 to be split among the authors of the winning paper. The papers are judged by their influence over the past decade.
Selection Committee

The award given in year N is for the most influential paper presented at the conference held in year N-10. The selection committee consists of the following members:
The SIGPLAN Chair shall adjudicate conflicts of interest, appointing substitutes to the committee as necessary.
(for 2014): CakeML: A Verified Implementation of ML
This paper provided exciting evidence that the idea of machine-checked verified compilation can generalize beyond CompCert-style optimization verification to end-to-end correctness statements in the context of ML. A particularly novel feature of CakeML that distinguished it from contemporary efforts like CompCert was its support for bootstrapping, which facilitates verification of the compiler implementation along with the runtime system. Because all of this functionality is fully expressed and verified within HOL4, the correctness theorem naturally extends to the machine-code implementation generated by bootstrapping, essentially for free. This paper has spurred a long and productive line of follow-on work that has broadened our understanding of verified compilation, particularly through the many new techniques and methods it spearheaded.
Concurrency remains one of the most challenging aspects of modern-day programming, and verification techniques for ensuring the correctness of concurrent programs are of utmost importance. This paper makes significant theoretical advances in reasoning about concurrency by presenting an elegant meta-theory of concurrent reasoning principles. At the heart of the proposed theory lies a notion of composition, called “views”, which consist of abstract knowledge about the machine state and the thread’s rights to change it. The paper shows that the proposed Concurrent View Framework unifies most prior approaches to reasoning about concurrency, including both type systems and a variety of program logics. Due to its foundational and unifying nature, the paper has had significant influence and impact on publications on the topic since its appearance.
(for 2012): Multiple facets for dynamic information flow
(for 2010): From program verification to program synthesis
The paper greatly advanced our ability to synthesize programs from logical specifications. It was based on the insight that much of the work carried out by a program verifier can be repurposed: not just to check that code matches a specification, but also to synthesize code that does. The user specifies the input-output behavior as a logical formula, and also provides structural and resource constraints, which describe a template language for the space of possible programs. In later work, such domain-specific languages (DSLs) became the norm for restricting the search space to make synthesis tractable. The verifier then scaffolds the synthesized code onto this structure, adhering to the prescribed resources. The authors were able to synthesize a range of clever algorithms, and the paper served as inspiration for much of the massive effort on verification-based program synthesis over the past decade.
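The template-based setup can be illustrated with a toy enumerative sketch. This is not the paper's verifier-driven method (which uses constraint solving over program templates); it is a hedged, minimal stand-in in which a small expression language plays the role of the template DSL and exhaustive testing plays the role of the verifier:

```python
import itertools

# Toy template language: arithmetic expressions over inputs x, y.
# The "structural constraint" is the grammar; "resources" are the depth bound.
LEAVES = ["x", "y"]
OPS = ["+", "-", "*"]

def programs(depth):
    """Enumerate candidate expression strings up to the given depth."""
    if depth == 0:
        yield from LEAVES
        return
    yield from programs(depth - 1)
    for a, b in itertools.product(list(programs(depth - 1)), repeat=2):
        for op in OPS:
            yield f"({a} {op} {b})"

def synthesize(spec, depth=2):
    """Return the first enumerated program agreeing with spec on all tests."""
    tests = [(i, j) for i in range(-2, 3) for j in range(-2, 3)]
    for prog in programs(depth):
        if all(eval(prog, {"x": i, "y": j}) == spec(i, j) for i, j in tests):
            return prog
    return None

# Specification: the square of the difference of the inputs.
print(synthesize(lambda x, y: (x - y) ** 2))  # → ((x - y) * (x - y))
```

A real synthesizer replaces the brute-force search with solver queries derived from the verification conditions, which is what makes the approach scale beyond toy expressions.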
Shape analysis is the problem of precisely predicting, at compile time, the contents of mutable, pointer-based data structures. It holds the key to identifying, and hence eliminating, a wide variety of software defects, from null-dereference errors and use-after-free bugs to memory leaks and the subtle data races that complicate concurrency.
This paper showed how to use compositionality to scale shape analysis up to large code bases, by introducing bi-abduction: a new technique for analyzing each procedure in isolation, independent of its callees.
Bi-abduction allows an analysis to determine separation logic pre-conditions under which the procedure executes safely, and corresponding post-conditions that describe how the procedure mutates the heap when executed from a configuration satisfying the precondition. We can then use separation logic to compose the pre- and post-conditions for different functions, yielding the first precise shape analyses capable of running on millions of lines of code.
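As a toy illustration (not the paper's algorithm, which works over separation-logic formulas with proof rules), a symbolic heap can be modeled as a set of points-to facts, and the bi-abduction question A * ?antiframe ⊢ B * ?frame solved by set difference:

```python
# Toy model: a symbolic heap is a set of points-to facts like ("x", "v").
# For caller state A and callee precondition B, bi-abduction finds:
#   antiframe M: facts the caller is missing and must assume before the call;
#   frame F: facts the callee does not touch, preserved across the call;
# such that A ∪ M and B ∪ F describe the same heap (here: equal as sets,
# since A ∪ (B - A) = B ∪ (A - B) = A ∪ B).
def biabduce(A, B):
    A, B = set(A), set(B)
    antiframe = B - A
    frame = A - B
    return antiframe, frame

# Caller knows x and y are allocated; the callee requires x and z.
A = {("x", "points to 1"), ("y", "points to 2")}
B = {("x", "points to 1"), ("z", "points to 3")}
M, F = biabduce(A, B)
print("antiframe:", M)  # {("z", "points to 3")}: must hold before the call
print("frame:", F)      # {("y", "points to 2")}: framed around the call
```

The real analysis must additionally handle aliasing, inductive predicates such as linked lists, and entailment rather than equality, but the division of labor between the inferred antiframe (precondition) and frame (preserved state) is the same.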
This work directly led to the development of the Infer static analyzer, which has been used to find and fix tens of thousands of defects across multiple industrial code bases. Further, it rekindled interest in the notion of abduction, leading to new techniques for interactive exploration of static-analysis results and for compositional program synthesis.
(for 2008): Multiparty asynchronous session types
Session types are a type-based framework for codifying communication structures and verifying protocols in concurrent, message-passing programs. Previously, session types could only model binary (two-party) protocols. This paper generalizes the theory to the multiparty case with asynchronous communications, preventing deadlock and communication errors in more sophisticated communication protocols involving any number (two or more) of participants. The central idea was to introduce global types, which describe multiparty conversations from a global perspective and provide a means to check protocol compliance. This work has inspired numerous authors to build on its pioneering foundations in the session types community and has initiated many applications of multiparty session types in programming languages and tools. It has also influenced other areas of research, such as software contracts, runtime verification and hardware specifications.
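The projection of a global type onto each participant can be sketched in miniature. The following is an illustrative simplification, not the paper's formal system: it treats a global type as a straight-line sequence of messages (ignoring branching and recursion) and uses the conventional `!` (send) and `?` (receive) notation for local actions:

```python
# Toy global type: an ordered list of messages (sender, receiver, label).
# Projection extracts one participant's local view, dropping interactions
# that the participant is not involved in.
def project(global_type, participant):
    local = []
    for sender, receiver, label in global_type:
        if participant == sender:
            local.append(f"{receiver}!{label}")   # send label to receiver
        elif participant == receiver:
            local.append(f"{sender}?{label}")     # receive label from sender
    return local

# A small three-party protocol (hypothetical example, not from the paper).
G = [("Buyer", "Seller", "title"),
     ("Seller", "Buyer", "price"),
     ("Buyer", "Bank", "pay")]

for p in ("Buyer", "Seller", "Bank"):
    print(p, project(G, p))
```

Type-checking each participant's code against its projected local type is what lets the theory guarantee deadlock-freedom and communication safety for the protocol as a whole.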
This paper presents the CompCert compiler, a landmark effort in program verification. The paper was (and still is) groundbreaking in that it demonstrates the feasibility of using an interactive theorem prover – specifically, Coq – to both program and formally verify a realistic compiler – specifically, the backend of an optimizing compiler for a significant subset of C. The verification ensures that properties proved about C source code continue to hold of the assembly output of the compiler, thereby filling a major gap in the development of high-assurance software and facilitating a great deal of follow-on work on verified software toolchains. Moreover, CompCert made a convincing case that theorem-proving technology is mature enough to be applied to the full functional verification of realistic systems, and in so doing heralded a new age of “big verification”.
The “view-update problem” is a classic problem in databases: when operating on a partial view of a data structure, how should updates to the view be propagated to the original data structure? This POPL 2005 paper was instrumental in bringing the view-update problem to the attention of the programming languages community and demonstrating the broad relevance of the problem beyond databases. The immediate contributions of the paper were (1) to introduce a general mathematical space of well-behaved bidirectional transformations called “lenses,” and (2) to develop a specific instantiation of this framework in the form of a domain-specific language of combinators for tree transformations, which served as the basis for subsequent tools for editing XML and HTML. More broadly, the paper sparked a great deal of follow-on work in the area of BX (“bidirectional transformations”), leading to a fruitful collaboration between the worlds of databases, programming languages, and software engineering.
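The well-behavedness conditions on lenses can be made concrete with a minimal sketch. The `Lens` class and `field` helper below are hypothetical illustrations (the paper's combinators operate on trees, not Python dicts), but the two round-trip laws they satisfy are the ones the paper names:

```python
# Toy lens: a pair of functions between a source structure and a view.
# Well-behaved lenses satisfy:
#   GetPut: putting back what you got changes nothing;
#   PutGet: you get back exactly what you put.
class Lens:
    def __init__(self, get, put):
        self.get = get          # source -> view
        self.put = put          # (new view, old source) -> new source

# A lens focusing a dict on one key (hypothetical helper, not from the paper).
def field(key):
    return Lens(get=lambda s: s[key],
                put=lambda v, s: {**s, key: v})

name = field("name")
src = {"name": "ada", "age": 36}

assert name.put(name.get(src), src) == src          # GetPut
assert name.get(name.put("grace", src)) == "grace"  # PutGet
print(name.put("grace", src))  # → {'name': 'grace', 'age': 36}
```

The `put` direction taking the old source as an extra argument is the heart of the design: it lets an update to a partial view be merged back without discarding the source data the view omitted.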
(for 2004): Abstractions from proofs
Interpolation can be used to construct an inductive invariant for a transition system that is strong enough to prove a given property. Prior to Henzinger et al.’s paper, interpolation had only been applied to model checking of finite-state systems. In this paper, the authors demonstrated a fundamental generalization of Craig interpolation to program analysis by predicate abstraction, opening the door for interpolation to be applied to abstraction refinement of infinite-state systems. This work showed how interpolation offers a fundamental way to explain abstraction refinement in a logical framework and has led to many extensions to increase the power of abstraction in program analysis.
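The interpolation property itself is easy to state with a propositional toy example (a hedged illustration only; Henzinger et al. compute interpolants over first-order theories inside a predicate-abstraction refinement loop). Given an unsatisfiable conjunction A ∧ B, a Craig interpolant I is implied by A, is inconsistent with B, and mentions only the variables A and B share:

```python
from itertools import product

# A = p ∧ q (variables p, q); B = ¬q ∧ r (variables q, r); shared variable: q.
A = lambda p, q: p and q
B = lambda q, r: (not q) and r
I = lambda q: q   # candidate interpolant over the shared variable only

bools = (False, True)

# A ∧ B is unsatisfiable.
assert not any(A(p, q) and B(q, r) for p, q, r in product(bools, repeat=3))
# A entails I.
assert all(I(q) for p, q in product(bools, repeat=2) if A(p, q))
# I ∧ B is unsatisfiable.
assert not any(I(q) and B(q, r) for q, r in product(bools, repeat=2))
print("q is a Craig interpolant for (p ∧ q, ¬q ∧ r)")
```

In refinement, A encodes the executed prefix of a spurious counterexample path and B the suffix, so the interpolant yields a predicate over exactly the program state at that point, which is why it serves so naturally as a new abstraction predicate.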
Languages with automatic memory management offer well-known advantages in terms of software engineering, but the use of garbage collection seemed for a long time to place such languages off-limits for hard real-time and embedded systems. Indeed, prior to this highly influential paper, the existing approaches to real-time garbage collection suffered from a number of practical problems: a lack of hard bounds on pause times in the face of fragmentation or large data structures; high space overhead due to the use of copying collection to avoid fragmentation; and uneven mutator utilization due to the use of work-based scheduling algorithms. Bacon, Cheng & Rajan’s paper showed how to overcome all these problems in the design of a real real-time garbage collector for Java. The collector, which was implemented and tested in the Jikes RVM, employed an impressive variety of techniques, such as time-based scheduling and “mostly non-moving” collection, in order to provide guaranteed pause times, consistent mutator utilization, and low space overhead. In short, the paper marked a major advance toward the important goal of turning hard real-time garbage collection into a reality.
This paper showed a path to taking legacy C code and making it safer, by adding type safety guarantees to existing C programs. It uses a combination of static and dynamic techniques to allow programmers to conduct low-level operations while guaranteeing safety properties regarding pointers and minimizing overhead. For many C programs, CCured’s type inference is able to infer that most or all of the pointers are statically verifiable to be type safe. This paper demonstrates the beauty of both dynamic and static types, influencing the design of languages such as C#, and inspiring continuing research on static and dynamic analysis.
In applying the logic of bunched implications (BI) to mutable data structures, this paper laid the groundwork for the flowering of separation logic (heap models of separation logic are specific BI models), which allows reasoning about programs with dynamically allocated (and deallocated) memory. This paper introduced the ‘frame rule’, which supports local reasoning in separation logic, where specifications and proofs concentrate on just those memory cells manipulated by a program component rather than the entire global program state. Separation logic underlies continuing research on what is described as local reasoning, with diverse applications in domains such as program verification and automated parallelization.
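In its standard presentation, the frame rule extends a local specification {P} C {Q} with an arbitrary assertion R about untouched state, joined by the separating conjunction:

```latex
\frac{\{P\}\; C\; \{Q\}}
     {\{P \ast R\}\; C\; \{Q \ast R\}}
\qquad \text{(no variable modified by } C \text{ occurs free in } R\text{)}
```

Because P ∗ R asserts that P and R hold of disjoint heap regions, the rule lets a proof about a small footprint be reused unchanged in any larger context.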
“Anytime, Anywhere: Modal Logics for Mobile Ambients” by Luca Cardelli and Andrew D. Gordon helped spur a flowering of work in the area of process calculi that continues today. The paper focused on modal logics for reasoning about both temporal and spatial modalities for ambient behaviours, demonstrating techniques that also apply to other process calculi (even those without an explicit notion of location), thereby contributing to the excitement in an area that was growing at the time and continues to grow. The work has led to applications of concurrency theory in fields as diverse as security, safety-critical applications, query languages for semistructured data, and systems biology.
“JFlow: Practical Mostly-Static Information Flow Control” by Andrew C. Myers demonstrated the practicality of using static information flow analysis to protect privacy and preserve integrity by giving an efficient information flow type checker for an extension of the widely-used Java language. The work has had a significant impact both within and beyond the programming language community. In particular, subsequent work for other languages has largely followed the path laid out in this paper, and the compiler infrastructure developed for JFlow (now called Jif) is widely used as a research platform. Furthermore, using the JFlow work as a basis, several major research initiatives are investigating the challenges of building complex, real-world systems with confidentiality guarantees.
(for 1998): From System F to Typed Assembly Language
“From System F to Typed Assembly Language” by Greg Morrisett, David Walker, Karl Crary, and Neal Glew began a major development in the application of type system ideas to low level programming. The paper shows how to compile a high-level, statically typed language into TAL, a typed assembly language defined by the authors. The type system for the assembly language ensures that source-level abstractions like closures and polymorphic functions are enforced at the machine-code level while permitting aggressive, low-level optimizations such as register allocation and instruction scheduling. This infrastructure provides the basis for ensuring the safety of untrusted low-level code artifacts, regardless of their source. A large body of subsequent work has drawn on the ideas in this paper, including work on proof-carrying code and certifying compilers.
(for 1997): Proof-carrying Code
(for 1996): Points-to Analysis in Almost Linear Time
(for 1995): A Language with Distributed Scope
(for 1993): Imperative functional programming