
Rust Parsing Benchmarks

This repo tries to assess Rust parsing performance.

| crate | parser type | action code | integration | input type | precedence climbing | parameterized rules | streaming input |
|-------|-------------|-------------|-------------|------------|---------------------|---------------------|-----------------|
| chumsky | combinators | in source | library | &str | ? | ? | ? |
| combine | combinators | in source | library | &str | ? | ? | ? |
| grmtools | CFG | in grammar | library | ? | ? | ? | ? |
| lalrpop | LR(1) | in grammar | build script | &str | No | Yes | No |
| logos | lexer | in source | proc macro | &str, &[u8] | ? | ? | ? |
| nom | combinators | in source | library | &str, &[u8], custom | No | Yes | Yes |
| peg | PEG | in grammar | proc macro (block) | &str, &[T], custom | Yes | Yes | No |
| pest | PEG | external | proc macro (file) | &str | Yes | No | No |
| winnow | combinators | in source | library | &str, &[T], custom | No | Yes | Yes |
| yap | combinators | in source | library | &str, &[T], custom | No | Yes | ? |
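
To make the "parser type" and "action code" columns concrete: for the "combinators in source" crates the grammar is ordinary Rust code built from small parser functions, while the grammar-based crates (grmtools, lalrpop, peg, pest) describe the grammar separately and generate or interpret a parser from it. The sketch below is purely illustrative and is not the benchmark code: it assumes nom 8's Parser-trait API, and the "WIDTHxHEIGHT" input format is made up for the example.

```rust
// Rough sketch of the "combinators in source" style (nom 8 shown here;
// winnow, chumsky, combine, and yap follow the same general idea).
// The "WIDTHxHEIGHT" format is invented purely for illustration.
use nom::{
    character::complete::{char, digit1},
    combinator::map_res,
    sequence::separated_pair,
    IResult, Parser,
};

/// Parse a dimension pair such as "800x600" into (800, 600).
fn dimensions(input: &str) -> IResult<&str, (u32, u32)> {
    separated_pair(
        map_res(digit1, str::parse::<u32>), // one or more ASCII digits -> u32
        char('x'),                          // literal separator
        map_res(digit1, str::parse::<u32>),
    )
    .parse(input)
}

fn main() {
    assert_eq!(dimensions("800x600"), Ok(("", (800, 600))));
}
```

The grammar-based crates express the same structure as a separate grammar (a `.lalrpop` or `.pest` file, or an inline `peg::parser!` block) and leave less of the control flow in hand-written Rust.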

Formerly, we compared:

  • pom: removed for lack of notoriety

Results

| Name | Overhead (release) | Build (debug) | Parse (release) | Version |
|------|--------------------|---------------|-----------------|---------|
| null | 0 KiB | 199ms | 4ms | - |
| grmtools | 2,526 KiB | 13s | 163ms | v0.13.8 |
| chumsky | 562 KiB | 6s | 331ms | v0.9.3 |
| combine | 184 KiB | 4s | 47ms | v3.8.1 |
| lalrpop | 1,496 KiB | 13s | 37ms | v0.22.0 |
| logos | 81 KiB | 5s | 17ms | v0.15.0 |
| nom | 98 KiB | 3s | 60ms | v8.0.0 |
| peg | 82 KiB | 2s | 21ms | v0.8.4 |
| pest | 130 KiB | 4s | 55ms | v2.7.15 |
| serde_json | 55 KiB | 3s | 14ms | v1.0.134 |
| winnow | 76 KiB | 2s | 28ms | v0.7.0 |
| yap | 56 KiB | 456ms | 31ms | v0.12.0 |

System: Linux 5.4.0-170-generic (x86_64), rustc 1.84.0 (9fc6b4312 2025-01-07) w/ -j 8

Note:

  • For more "Parse (release)" comparisons, see parser_benchmarks
  • Parsers have not been validated and might have differing levels of quality (#5)
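
The "Parse (release)" column is wall-clock time to parse the benchmark input in a release build. As a minimal illustration of that kind of measurement only (this is not the repo's harness, which is driven by bench.py; the file name and the choice of the serde_json baseline here are placeholders), a hand-rolled timing loop might look like:

```rust
use std::time::Instant;

fn main() {
    // Hypothetical input path; substitute whatever sample the benchmark uses.
    let input = std::fs::read_to_string("sample.json").expect("benchmark input");

    // Time a single parse; serde_json (the baseline row in the table) is used here.
    let start = Instant::now();
    let value: serde_json::Value = serde_json::from_str(&input).expect("valid JSON");
    let elapsed = start.elapsed();

    println!("parsed {} bytes in {:?}", input.len(), elapsed);
    // Keep the parsed value alive so the work is not optimized away.
    std::hint::black_box(value);
}
```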

Running the Benchmarks

$ ./bench.py
$ ./format.py
