Cached call-by-name: incremental evaluation of configurations
Domain-specific languages for configuration are peculiar in many ways. One such aspect is performance: what is the cost of evaluating a configuration? What should or could configuration languages optimize for?
A standard route from there is the traditional metamorphosis from a naive tree-walking interpreter to an optimizing bytecode compiler and virtual machine. One could add Just-in-Time compilation (JIT) into the mix.
In this talk, we will explore an alternative route. We start from a simple observation: most changes to an existing configuration codebase are small and localized. We will present our attempt at baking incrementality and evaluation caching into the specification of our configuration language itself, but in a non-intrusive way, such that the semantics doesn't depend on the specific caching and evaluation strategy that has been chosen.
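To make the idea concrete, here is a minimal sketch of cached call-by-name evaluation with dependency-based invalidation. This is an illustration only, not Nickel's actual mechanism or semantics: the tiny record language, the `Env`, `force`, `update`, and `evaluate` names, and the field names in the example are all assumptions made for the sketch. Fields are evaluated on demand, results are memoized, and each cached field records which other fields it read, so editing one field only re-evaluates its (transitive) dependents.

```python
# Sketch only (not Nickel's implementation): a tiny record-based configuration
# language, evaluated call-by-name with memoization and dependency tracking.

class Env:
    """A flat record of named fields, each stored as an unevaluated expression."""
    def __init__(self, fields):
        self.fields = dict(fields)   # name -> expression (AST)
        self.cache = {}              # name -> evaluated value
        self.deps = {}               # name -> set of field names it read
        self.stack = []              # fields currently being evaluated

    def force(self, name):
        """Call-by-name with caching: evaluate a field only when referenced."""
        if self.stack:               # record that the caller depends on `name`
            self.deps.setdefault(self.stack[-1], set()).add(name)
        if name in self.cache:
            return self.cache[name]
        self.stack.append(name)
        try:
            value = evaluate(self.fields[name], self)
        finally:
            self.stack.pop()
        self.cache[name] = value
        return value

    def update(self, name, expr):
        """Incremental edit: replace a field and invalidate only its dependents."""
        self.fields[name] = expr
        dirty, changed = {name}, True
        while changed:               # transitive closure over reverse dependencies
            changed = False
            for field, reads in self.deps.items():
                if field not in dirty and reads & dirty:
                    dirty.add(field)
                    changed = True
        for field in dirty:
            self.cache.pop(field, None)
            self.deps.pop(field, None)

def evaluate(expr, env):
    """Evaluate an expression: a literal, a field reference, or a product."""
    kind = expr[0]
    if kind == "lit":
        return expr[1]
    if kind == "ref":                # referencing a field forces its thunk
        return env.force(expr[1])
    if kind == "mul":
        return evaluate(expr[1], env) * evaluate(expr[2], env)
    raise ValueError(f"unknown expression: {expr}")

# Hypothetical example: editing `replicas` re-evaluates only `total_memory`;
# the cached value of `base_memory` is reused as-is.
cfg = Env({
    "replicas": ("lit", 3),
    "base_memory": ("lit", 512),
    "total_memory": ("mul", ("ref", "base_memory"), ("ref", "replicas")),
})
print(cfg.force("total_memory"))     # 1536, caches all three fields
cfg.update("replicas", ("lit", 5))   # invalidates replicas and total_memory only
print(cfg.force("total_memory"))     # 2560, base_memory comes from the cache
```

The key point the sketch tries to convey is that, for a pure configuration language, the observable result does not depend on whether and how such a cache is maintained, which is what lets the caching strategy stay out of the language specification.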
This is work in progress, both on the semantics and the implementation side, and is currently being experimented with in the implementation of the Nickel configuration language.
Tue 24 Oct (time zone: Lisbon)

14:00 - 15:30  CONFLANG session
  14:00  22m  Talk      The LIFE of CUE
  14:22  22m  Talk      Ansible Is Turing Complete
  14:45  22m  Talk      Cached call-by-name: incremental evaluation of configurations
  15:07  22m  Live Q&A  Configuration languages Q&A/Discussion