Incremental evaluation
Context
Domain-specific languages for configuration are peculiar in many ways. One of those aspects is performance: what is the cost of evaluating a configuration? What should or could configuration languages optimize for? It seems obvious that some considerations for general-purpose languages matter less for configuration. Typically, one doesn't expect configurations to perform a lot of floating-point number crunching, so numerical optimizations are probably not very rewarding.
On the other hand, large configurations might involve the evaluation of many sub-components. Here, parallelization might help, all the more because configuration languages are in general pure, or at least purer than general-purpose languages, making such parallelization easier, at least on paper.
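To make the "easier on paper" point concrete, here is a minimal sketch in Rust (the language Nickel is implemented in) of evaluating independent top-level fields of a record on separate threads. The `Expr` type and all helpers are made up for illustration; this is not Nickel's actual AST or evaluator. The key property is that purity means no locking or ordering is needed between fields:

```rust
use std::collections::HashMap;
use std::thread;

// A toy pure expression language. Because evaluation has no side
// effects, distinct fields of a record can be evaluated concurrently
// without any synchronization on shared state.
enum Expr {
    Num(i64),
    Add(Box<Expr>, Box<Expr>),
}

fn eval(e: &Expr) -> i64 {
    match e {
        Expr::Num(n) => *n,
        Expr::Add(a, b) => eval(a) + eval(b),
    }
}

// Evaluate each top-level field of a record on its own thread and
// collect the results. Purity makes this trivially safe.
fn eval_record_parallel(fields: Vec<(String, Expr)>) -> HashMap<String, i64> {
    let handles: Vec<_> = fields
        .into_iter()
        .map(|(name, expr)| thread::spawn(move || (name, eval(&expr))))
        .collect();
    handles.into_iter().map(|h| h.join().unwrap()).collect()
}

fn main() {
    let record = vec![
        ("port".to_string(), Expr::Num(8080)),
        (
            "replicas".to_string(),
            Expr::Add(Box::new(Expr::Num(2)), Box::new(Expr::Num(1))),
        ),
    ];
    let result = eval_record_parallel(record);
    println!("{:?}", result);
}
```

In a real evaluator, fields can depend on one another (through recursive records, for instance), so "at least on paper" is doing real work: the hard part is discovering which sub-computations are actually independent.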
But is performance even a concern for a configuration language? In our experience (outside of Nickel), performance does become an issue in large configurations.
We have experienced long evaluation times for large Dhall configurations (as reported by other users as well [1]). The Nix package manager, which uses the Nix expression language to describe packages and system configurations, is notoriously slow to evaluate large expressions. This slowness motivated the Tvix project, an alternative implementation of the Nix language focused on performance [2]. There are probably similar stories for other configuration languages [3].
Classical approach
A standard route from here is the traditional metamorphosis from a naive tree-walking interpreter to an optimizing bytecode compiler and virtual machine (or even a native-code one). One could add Just-in-Time compilation (JIT) into the mix. However, designing and implementing a full-fledged compiler (let alone a JIT) is no small feat. As mentioned before, configuration languages are special: classical optimizations might not work so well, and new ones might need to be invented.
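For readers unfamiliar with that metamorphosis, the following toy Rust sketch shows the two ends of it on the same arithmetic expression: a recursive tree-walker, and a one-pass compiler to a flat stack-machine bytecode run by a simple dispatch loop. The types and the instruction set are invented for illustration; Nickel's pipeline is of course richer than this.

```rust
// Toy expression and toy bytecode. A real compiler would also do
// optimization passes between these two stages.
enum Expr {
    Num(i64),
    Add(Box<Expr>, Box<Expr>),
}

enum Op {
    Push(i64),
    Add,
}

// Naive tree-walking interpreter: recursion mirrors the AST shape.
fn eval_tree(e: &Expr) -> i64 {
    match e {
        Expr::Num(n) => *n,
        Expr::Add(a, b) => eval_tree(a) + eval_tree(b),
    }
}

// One-pass compiler to a flat instruction stream (post-order traversal).
fn compile(e: &Expr, code: &mut Vec<Op>) {
    match e {
        Expr::Num(n) => code.push(Op::Push(*n)),
        Expr::Add(a, b) => {
            compile(a, code);
            compile(b, code);
            code.push(Op::Add);
        }
    }
}

// Bytecode virtual machine: a tight loop over instructions with an
// explicit value stack, replacing AST recursion entirely.
fn run(code: &[Op]) -> i64 {
    let mut stack = Vec::new();
    for op in code {
        match op {
            Op::Push(n) => stack.push(*n),
            Op::Add => {
                let b = stack.pop().unwrap();
                let a = stack.pop().unwrap();
                stack.push(a + b);
            }
        }
    }
    stack.pop().unwrap()
}

fn main() {
    // 1 + (2 + 3)
    let e = Expr::Add(
        Box::new(Expr::Num(1)),
        Box::new(Expr::Add(Box::new(Expr::Num(2)), Box::new(Expr::Num(3)))),
    );
    let mut code = Vec::new();
    compile(&e, &mut code);
    assert_eq!(eval_tree(&e), run(&code)); // both strategies agree
    println!("{}", run(&code)); // prints 6
}
```

The VM wins on constant factors (flat memory layout, no per-node dispatch through the AST), but as the paragraph above notes, getting substantial wins for a configuration language likely requires optimizations beyond this classical recipe.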
Proposal
We propose to explore an alternative route, starting from a simple observation: most changes to an existing configuration codebase are small and localized: changing a value, adding a new machine, and so on. What if we could use the previous evaluation to quickly recompute the new configuration? This is precisely the idea of incremental computation, although incremental computation is a more general framework, not specifically targeted at language evaluation. This issue tracks the research being done to incorporate incremental evaluation natively in the Nickel language, both at the formal level and in the implementation, allowing various caching strategies to be experimented with, and even to coexist, without changing the semantics of the language.
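The core mechanism can be sketched as a result cache over a dependency graph: after an edit, invalidate the changed input and its transitive dependents, and serve everything else from the cache. The sketch below, in Rust, is purely hypothetical; the `Incremental` type, its fields, and the `recomputed` log are invented for illustration and are not Nickel's design (which, as the issue says, should admit several caching strategies under one semantics).

```rust
use std::collections::HashMap;

// Hypothetical incremental evaluator over a flat dependency graph.
// Inputs are leaf values; derived values combine their dependencies
// with a pure function, so cached results stay valid until an input
// they (transitively) depend on changes.
struct Incremental {
    inputs: HashMap<String, i64>,
    // name -> (dependencies, pure combining function)
    derived: HashMap<String, (Vec<String>, fn(&[i64]) -> i64)>,
    cache: HashMap<String, i64>,
    recomputed: Vec<String>, // log of cache misses, for demonstration only
}

impl Incremental {
    fn get(&mut self, name: &str) -> i64 {
        if let Some(v) = self.cache.get(name) {
            return *v; // cache hit: no recomputation
        }
        let v = if let Some(v) = self.inputs.get(name) {
            *v
        } else {
            let (deps, f) = self.derived[name].clone();
            let args: Vec<i64> = deps.iter().map(|d| self.get(d)).collect();
            f(&args)
        };
        self.recomputed.push(name.to_string());
        self.cache.insert(name.to_string(), v);
        v
    }

    // A small, localized edit: change one input and invalidate only
    // what transitively depends on it.
    fn set_input(&mut self, name: &str, v: i64) {
        self.inputs.insert(name.to_string(), v);
        self.invalidate(name);
    }

    fn invalidate(&mut self, name: &str) {
        self.cache.remove(name);
        let dependents: Vec<String> = self
            .derived
            .iter()
            .filter(|(_, (deps, _))| deps.iter().any(|d| d == name))
            .map(|(k, _)| k.clone())
            .collect();
        for d in dependents {
            self.invalidate(&d);
        }
    }
}

fn main() {
    let mut inc = Incremental {
        inputs: HashMap::from([("base_port".into(), 8000), ("replicas".into(), 3)]),
        derived: HashMap::from([(
            "last_port".into(),
            (
                vec!["base_port".into(), "replicas".into()],
                (|a: &[i64]| a[0] + a[1]) as fn(&[i64]) -> i64,
            ),
        )]),
        cache: HashMap::new(),
        recomputed: Vec::new(),
    };
    assert_eq!(inc.get("last_port"), 8003); // first evaluation: computes everything
    inc.recomputed.clear();
    inc.set_input("replicas", 5); // small, localized change
    assert_eq!(inc.get("last_port"), 8005);
    // only the changed input and its dependent were recomputed;
    // base_port was served from the cache
    assert_eq!(
        inc.recomputed,
        vec!["replicas".to_string(), "last_port".to_string()]
    );
}
```

Doing this for a real lazy functional language is much harder than for this flat graph: dependencies are discovered during evaluation rather than declared up front, and functions and thunks must be accounted for, which is exactly the formal ground this issue proposes to investigate.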
Tasks
Footnotes
1. https://github.com/dhall-lang/dhall-haskell/issues/1890
2. https://tvl.fyi/blog/rewriting-nix
3. https://news.ycombinator.com/item?id=32109264