SPLASH 2023
Sun 22 - Fri 27 October 2023 Cascais, Portugal
Thu 26 Oct 2023 16:18 - 16:36 at Room XII - compilation & optimization 2 Chair(s): Fabian Muehlboeck

Thanks to partial evaluation and meta-tracing, it became practical to build language implementations that reach state-of-the-art peak performance by implementing only an interpreter. Systems such as RPython and GraalVM provide components such as a garbage collector and just-in-time compiler in a language-agnostic manner, greatly reducing implementation effort.

However, meta-compilation-based language implementations still need to improve further to reach the low memory use and fast warmup behavior that custom-built systems provide. A key element in this endeavor is interpreter performance. Folklore tells us that bytecode interpreters are superior to abstract-syntax-tree (AST) interpreters both in terms of memory use and run-time performance.
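
To make the contrast concrete, here is a minimal, hypothetical sketch (not taken from the paper) of the two interpreter styles in plain Python: an AST interpreter evaluates nodes recursively and keeps intermediate values on the host call stack, while a bytecode interpreter walks a flat instruction array and manages an explicit operand stack. All class names and opcodes below are illustrative assumptions.

```python
# Illustrative sketch: the same addition, first as an AST interpreter node,
# then as a bytecode dispatch loop. Names and opcodes are hypothetical.

class IntLiteral:
    def __init__(self, value):
        self.value = value

    def execute(self, frame):
        return self.value


class AddNode:
    """AST interpreter: each node evaluates its children recursively;
    intermediate values live on the host call stack."""
    def __init__(self, left, right):
        self.left = left
        self.right = right

    def execute(self, frame):
        return self.left.execute(frame) + self.right.execute(frame)


# Bytecode interpreter: a flat array of opcodes plus an explicit
# (reified) operand stack managed by the interpreter itself.
PUSH_CONST, ADD, RETURN = 0, 1, 2

def interpret(bytecodes, constants):
    stack = []          # reified operand stack, allocated per activation
    pc = 0
    while True:
        op = bytecodes[pc]
        if op == PUSH_CONST:
            stack.append(constants[bytecodes[pc + 1]])
            pc += 2
        elif op == ADD:
            right = stack.pop()
            left = stack.pop()
            stack.append(left + right)
            pc += 1
        elif op == RETURN:
            return stack.pop()


# Both compute 3 + 4:
ast_result = AddNode(IntLiteral(3), IntLiteral(4)).execute(frame=None)
bc_result = interpret([PUSH_CONST, 0, PUSH_CONST, 1, ADD, RETURN], [3, 4])
assert ast_result == bc_result == 7
```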

This work assesses the trade-offs between AST and bytecode interpreters to verify whether common assumptions hold in the context of meta-compilation systems. We implemented four interpreters: an AST and a bytecode interpreter, each on top of RPython and GraalVM. We kept the differences between the interpreters as small as feasible to be able to evaluate interpreter performance, peak performance, warmup, memory use, and the impact of individual optimizations.

Our results show that both systems indeed reach performance close to Node.js/V8. Looking at interpreter-only performance, our AST interpreters are on par with, or even slightly faster than, their bytecode counterparts. After just-in-time compilation, the results are roughly on par. This means bytecode interpreters do not have their widely assumed performance advantage. However, we can confirm that bytecodes are more compact in memory than ASTs, which becomes relevant for larger applications. Yet, for smaller applications, we observed that bytecode interpreters allocate more memory, because boxing avoidance is less applicable and because the bytecode interpreter structure itself requires memory, e.g., for a reified operand stack.
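
The allocation difference can be illustrated with a small, hypothetical sketch (again not from the paper): when the operand stack is an array of object references, primitive intermediates must be boxed before being pushed, whereas a specialized AST execute path can return primitives directly through the host call stack. The Box class and allocation counter below are assumptions for illustration only.

```python
# Hypothetical sketch of why a reified operand stack can force boxing.

class Box:
    """Heap-allocated wrapper used for values on the reified operand stack."""
    __slots__ = ("value",)
    def __init__(self, value):
        self.value = value

allocations = 0

def box(value):
    global allocations
    allocations += 1
    return Box(value)

# Bytecode path: every intermediate lands on the operand stack as a Box.
def bytecode_add_mul(a, b, c):        # computes a + b * c
    stack = [box(a), box(b), box(c)]  # reified stack, one box per operand
    stack.append(box(stack.pop().value * stack.pop().value))
    stack.append(box(stack.pop().value + stack.pop().value))
    return stack.pop().value

# AST path with specialized execute methods: primitives flow as plain ints
# on the host call stack, so no boxes are needed.
def ast_add_mul(a, b, c):
    return a + b * c

assert bytecode_add_mul(1, 2, 3) == ast_add_mul(1, 2, 3) == 7
print("boxes allocated on bytecode path:", allocations)  # 5
```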

Our results show AST interpreters to be competitive on top of meta-compilation systems. Together with possible engineering benefits, they should thus not be discounted so easily in favor of bytecode interpreters.

Thu 26 Oct

Displayed time zone: Lisbon

16:00 - 17:30: compilation & optimization 2 (OOPSLA) at Room XII
Chair(s): Fabian Muehlboeck (Australian National University)

16:00 (18m, Talk, OOPSLA)
Graph IRs for Impure Higher-Order Languages: Making Aggressive Optimizations Affordable with Precise Effect Dependencies
Oliver Bračevac (Galois, Inc.), Guannan Wei (Purdue University), Songlin Jia (Purdue University), Supun Abeysinghe (Purdue University), Yuxuan Jiang (Purdue University), Yuyan Bao (Augusta University), Tiark Rompf (Purdue University)
DOI · Pre-print

16:18 (18m, Talk, OOPSLA)
AST vs. Bytecode: Interpreters in the Age of Meta-Compilation
Octave Larose (University of Kent), Sophie Kaleba (University of Kent), Humphrey Burchell (University of Kent), Stefan Marr (University of Kent)
DOI · Pre-print

16:36 (18m, Talk, OOPSLA)
Reusing Just-in-Time Compiled Code
Meetesh Kalpesh Mehta (IIT Bombay), Sebastián Krynski (Czech Technical University in Prague), Hugo Musso Gualandi (Czech Technical University in Prague), Manas Thakur (IIT Bombay), Jan Vitek (Northeastern University)
DOI

16:54 (18m, Talk, OOPSLA)
TASTyTruffle: Just-in-Time Specialization of Parametric Polymorphism
Matt D'Souza (University of Waterloo), James You (University of Waterloo), Ondřej Lhoták (University of Waterloo), Aleksandar Prokopec (Oracle Labs)
DOI

17:12 (18m, Talk, OOPSLA)
Beacons: An End-to-End Compiler Framework for Predicting and Utilizing Dynamic Loop Characteristics
Girish Mururu (Georgia Institute of Technology), Sharjeel Khan (Georgia Institute of Technology), Bodhisatwa Chatterjee (Georgia Institute of Technology), Chao Chen (Georgia Institute of Technology), Chris Porter (IBM T.J. Watson Research), Ada Gavrilovska (Georgia Institute of Technology), Santosh Pande (Georgia Institute of Technology)
DOI