v3 proposal #2982

Open · wants to merge 91 commits into main

@dead-claudia (Member) commented on Oct 13, 2024

I want your feedback. Please use it, abuse it, and tell me what issues you find. Bugs, ergonomic issues, weird slowdowns, I want to know it all. Feel free to drop a comment here on whatever's on your mind, so I can easily track it.

Also, @MithrilJS/collaborators please don't merge this yet. I want feedback first. I also need to create a docs PR before this can get merged.

Description

This is my v3 proposal. It's pretty detailed and represents a comprehensive overhaul of the API. It synthesizes much of my API research. And yes, all tests pass. 🙂

Preserved the commit history if you're curious how I got here. (Reviewers: you can ignore that. Focus on the file diff - you'll save yourself a lot of time.)

If you want to play around, you can find artifacts here: https://github.com/dead-claudia/mithril.js/releases/tag/v3.0.0-alpha.1

This is not published to npm, so you'll have to use a GitHub tag/commit link. If you just want to mess around in a code playground, here's a link for you.

Quick line-by-line summary

If you're short on time and just want a very high-level summary, here's a comprehensive list, grouped by type.

Highlights:

  • 9.12 KB to 7.23 KB, or a little over 20% smaller.
  • A base of ES2018 is assumed, including syntax.
  • IE compatibility is no more, and most IE hacks have been removed.
  • No more magic attributes to fuss with!

Additions:

  • New vnode: m.layout((dom) => ...), scheduled to be invoked after render
  • New vnode: m.remove((dom) => ...), scheduled to be invoked after the render in which it's removed
  • New vnode: m.retain(), retains the current vnode at that given position
  • New vnode: m.set(contextKeys, ...children), sets context keys you can get via the third parameter in components
  • New vnode: m.use([...deps], ...children), works similar to React's useEffect
  • New vnode: m.init(async (signal) => ...), callback return works just like event listeners (see the sketch after this list)
  • New utility: tracked = m.tracked() to manage delayed removal
  • New utility: m.throttler()
  • New utility: m.debouncer()
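
Since m.init doesn't get its own example later in this description, here's a rough sketch of how it might be used. The semantics are assumed from the summary above: the callback runs once for that position, signal is assumed to be an AbortSignal, and the return value is handled like an event listener's (a returned promise is awaited before redrawing). The /api/user/:id endpoint and the UserCard component are made up for illustration.

function UserCard(attrs) {
	let user

	return () => [
		// Assumed to run once when this position is first rendered.
		m.init(async (signal) => {
			user = await m.fetch(m.p("/api/user/:id", {id: attrs.id}), {
				signal, // standard fetch option; assumed to abort if the vnode is removed
				responseType: "json",
			})
		}),
		user ? m(".user-card", user.name) : m(".user-card", "Loading..."),
	]
}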

Changes:

  • Vnodes have been reduced in size (from 10 properties to 6) and gained much shorter property names.
  • key: attribute → m.keyed(list, (item) => [key, vnode])
  • m.buildPathname(template, query) → m.p(template, query)
  • m.buildQueryString(query) → m.query(query)
  • In event listeners, return false to prevent redraws.
  • Event listeners can return promises that are awaited before redrawing. Whatever the promise resolves to is taken as the return value.
  • If setting an attribute throws, the error is caught and logged unless removeOnThrow: false is passed to m.render.
  • If rendering a view throws, the view is removed and the error is caught and logged unless removeOnThrow: false is passed to m.render.
  • m.route(elem, defaultRoute, {...routes}) → m.route(prefix, view)
  • m.mount(...) now returns a redraw function that works like m.redraw, but only for that one root. redraw is also set in the context via the redraw key.
  • m.render(...) now accepts an options object for its third parameter. redraw sets the redraw function (and is made available in the context).
  • Components are now either (attrs, old, context) => vnode view functions, or closure components: an (attrs, null, context) => ... function that returns such a view function.
  • m.request(url, opts) → m.fetch(url, opts), which accepts all window.fetch options plus onprogress, extract, and responseType.

Removals:

  • All submodules other than mithril/stream - the main library no longer needs the DOM to load
  • The built-in router no longer does its own route dispatching
  • m.trust - use innerHTML instead
  • m.censor
  • Magic attributes
  • m.vnode(...) - use m.normalize if it's truly necessary
  • m.mount(root, null) - use m.render(root, null) instead
  • m.redraw() → either the redraw function returned from m.mount or the redraw context key
  • oninit
  • oncreate/onupdate - use m.layout(...) instead
  • onremove - use m.remove(...) instead
  • onbeforeupdate - use m.retain() instead
  • onbeforeremove - use m.tracked() utility to manage parents instead
  • m.parsePathname
  • m.parseQueryString
  • ev.redraw = false - return/resolve with false or reject/throw from event handlers

Miscellaneous:

  • 30% smaller vnode structure, saving you some memory.
  • Optimized mithril/stream to require a lot less allocation - I don't have precise numbers, but it should both be faster and require less memory.
  • Redid the benchmark program.

Benchmarks

Details about my setup and related raw data follow the table.

| Benchmark | v2.2.8 (ops/sec) | This PR (ops/sec) | Change |
| --- | --- | --- | --- |
| do nothing | 4223296 to 4976184 | 3235769 to 4751093 | -23.38% to -4.52% |
| route match | 3072621 to 3635928 | 429276 to 663092 | -86.03% to -81.76% |
| route non-match | 3164628 to 3665086 | 1899316 to 2676376 | -39.98% to -26.98% |
| path generate with string interpolations | 459434 to 627692 | 756099 to 1080099 | +64.57% to +72.07% |
| path generate with number interpolations | 531555 to 649943 | 811071 to 1100476 | +52.58% to +69.32% |
| path generate no interpolations | 2924231 to 3727735 | 1700104 to 2390492 | -41.86% to -35.87% |
| construct simpleTree | 33753 to 39569 | 43271 to 63881 | +28.20% to +61.44% |
| construct nestedTree | 3374704 to 4174918 | 3252891 to 4303122 | -3.61% to +3.07% |
| construct mutateStylesPropertiesTree | 1263 to 2200 | 2631 to 3019 | +108.314% to +37.23% |
| construct repeatedTree | 3487466 to 4169654 | 2716325 to 4118478 | -22.11% to -1.23% |
| construct shuffledKeyedTree | 6415 to 7500 | 5690 to 7179 | -11.30% to -4.28% |
| render simpleTree | 10345 to 10940 | 12069 to 13248 | +16.67% to +21.10% |
| render nestedTree | 9056 to 9741 | 10989 to 11983 | +21.34% to +23.02% |
| render mutateStylesPropertiesTree | 100 to 106 | 71 to 75 | -29.00% to -29.25% |
| render repeatedTree | 39568 to 52586 | 50636 to 59699 | +27.97% to +13.53% |
| render shuffledKeyedTree | 706 to 829 | 625 to 674 | -11.47% to -18.70% |
| add/remove simpleTree | 1192 to 1759 | 949 to 1447 | -20.39% to -17.74% |
| add/remove nestedTree | 935 to 1507 | 776 to 1111 | -17.01% to -26.28% |
| add/remove mutateStylesPropertiesTree | 104 to 114 | 65 to 74 | -37.50% to -35.09% |
| add/remove repeatedTree | 2495 to 4911 | 3276 to 6658 | +31.30% to +35.57% |
| add/remove shuffledKeyedTree | 480 to 723 | 514 to 640 | +7.08% to -11.48% |
| mount simpleTree | 1345 to 1538 | 806 to 1060 | -40.07% to -31.08% |
| mount all | 49 to 54 | 36 to 41 | -26.53% to -24.07% |
| redraw simpleTree | 10000 to 10862 | 11810 to 12845 | +18.10% to +18.26% |
| redraw all | 63 to 67 | 50 to 54 | -20.63% to -19.40% |

Browser version:

  • Brave: 1.71.114 Chromium: 130.0.6723.58 (Official Build) (64-bit)
  • Revision: 89c4031c685a4296315a6f421f46275f0b72dd55
  • OS: Windows 10 Version 22H2 (Build 19045.4894)
  • JavaScript: V8 13.0.245.16

This was run in a private window, with only the following two extensions:

  • Tampermonkey (ID: dhdgffkkebhmkfjojejmpbldmpobfkfo)
    • No userscripts installed
  • Rearrange Tabs (ID: ccnnhhnmpoffieppjjkhdakcoejcpbga)

Each run was loaded in a new private window and reloaded twice to ensure everything was fully loaded from cache.

Raw output for v2.2.8 benchmarks (ported from this PR's):

24 tests loaded
Timer resolution detected:
- min: 100 µs/tick (10,000 Hz)
- max: 100 µs/tick (10,000 Hz)
- median: 100 µs/tick (10,000 Hz)
- assumed: 100 µs/tick (10,000 Hz)
Frame interval detected:
- min: 16.3 ms/frame (61 Hz)
- max: 100 ms/frame (10 Hz)
- median: 16.7 ms/frame (60 Hz)
- assumed: 16.5 ms/frame (61 Hz)
Tests warmed up, starting benchmark
- min confidence level: 0.99
- min samples/test: 100
- min duration/test: 1 s
- max duration/test: 5 s
- min duration/pass: 11.5 ms
[*** null test ***]:
- mean: 0.84 ns/op (1,190,606,397 Hz)
- median: 213 ns/op (4,703,481 Hz)
- expected: 201 ns/op (4,976,184 Hz) to 237 ns/op (4,223,296 Hz)
- range: 189 ns/op (5,280,776 Hz) to 571 ns/op (1,752,328 Hz)
- CI: 190 ns/op (5,271,656 Hz) to 526 ns/op (1,902,741 Hz)
- MOE: 0.000037 ns/op
- N: 299
- pop: 13,812,015
[route match]:
- mean: 1.15 ns/op (869,198,294 Hz)
- median: 285 ns/op (3,504,141 Hz)
- expected: 275 ns/op (3,635,928 Hz) to 325 ns/op (3,072,621 Hz)
- range: 268 ns/op (3,729,224 Hz) to 781 ns/op (1,280,517 Hz)
- CI: 269 ns/op (3,722,213 Hz) to 737 ns/op (1,356,201 Hz)
- null-adjusted mean: 0.31 ns/op (3,219,810,081 Hz)
- null-adjusted median: 72.8 ns/op (13,742,280 Hz)
- null-adjusted expected: 38.3 ns/op (26,143,050 Hz) to 124 ns/op (8,032,267 Hz)
- null-adjusted range: -303 ns/op (-3,305,596 Hz) to 592 ns/op (1,690,422 Hz)
- null-adjusted CI: -257 ns/op (-3,892,563 Hz) to 548 ns/op (1,825,951 Hz)
- MOE: 0.000085 ns/op
- N: 300
- pop: 10,083,009
[route non-match]:
- mean: 1.10 ns/op (907,227,561 Hz)
- median: 283 ns/op (3,531,337 Hz)
- expected: 273 ns/op (3,665,086 Hz) to 316 ns/op (3,164,628 Hz)
- range: 268 ns/op (3,725,862 Hz) to 866 ns/op (1,154,224 Hz)
- CI: 268 ns/op (3,725,267 Hz) to 733 ns/op (1,364,426 Hz)
- null-adjusted mean: -0.048 ns/op (-20,735,626,058 Hz)
- null-adjusted median: -2.20 ns/op (-455,008,563 Hz)
- null-adjusted expected: -52.6 ns/op (-19,007,727 Hz) to 41.0 ns/op (24,414,101 Hz)
- null-adjusted range: -513 ns/op (-1,951,067 Hz) to 598 ns/op (1,671,596 Hz)
- null-adjusted CI: -469 ns/op (-2,132,576 Hz) to 464 ns/op (2,154,004 Hz)
- MOE: 0.000078 ns/op
- N: 300
- pop: 10,524,192
[path generate with string interpolations]:
- mean: 7.08 ns/op (141,187,845 Hz)
- median: 1.76 µs/op (566,811 Hz)
- expected: 1.59 µs/op (627,692 Hz) to 2.18 µs/op (459,434 Hz)
- range: 1.49 µs/op (671,379 Hz) to 4.40 µs/op (227,500 Hz)
- CI: 1.49 µs/op (669,449 Hz) to 4.09 µs/op (244,538 Hz)
- null-adjusted mean: 5.98 ns/op (167,210,004 Hz)
- null-adjusted median: 1.48 µs/op (675,184 Hz)
- null-adjusted expected: 1.28 µs/op (782,996 Hz) to 1.90 µs/op (525,280 Hz)
- null-adjusted range: 623 ns/op (1,604,909 Hz) to 4.13 µs/op (242,294 Hz)
- null-adjusted CI: 761 ns/op (1,314,307 Hz) to 3.82 µs/op (261,718 Hz)
- MOE: 0.0072 ns/op
- N: 300
- pop: 1,637,779
[path generate with number interpolations]:
- mean: 6.61 ns/op (151,322,006 Hz)
- median: 1.66 µs/op (600,910 Hz)
- expected: 1.54 µs/op (649,943 Hz) to 1.88 µs/op (531,555 Hz)
- range: 1.49 µs/op (672,328 Hz) to 4.11 µs/op (243,190 Hz)
- CI: 1.49 µs/op (671,402 Hz) to 4.04 µs/op (247,558 Hz)
- null-adjusted mean: -0.47 ns/op (-2,108,198,895 Hz)
- null-adjusted median: -100 ns/op (-9,988,775 Hz)
- null-adjusted expected: -638 ns/op (-1,567,407 Hz) to 288 ns/op (3,470,595 Hz)
- null-adjusted range: -2.91 µs/op (-343,851 Hz) to 2.62 µs/op (381,309 Hz)
- null-adjusted CI: -2.60 µs/op (-384,628 Hz) to 2.55 µs/op (392,820 Hz)
- MOE: 0.0064 ns/op
- N: 300
- pop: 1,755,530
[path generate no interpolations]:
- mean: 1.20 ns/op (833,600,663 Hz)
- median: 293 ns/op (3,418,728 Hz)
- expected: 268 ns/op (3,727,735 Hz) to 342 ns/op (2,924,231 Hz)
- range: 256 ns/op (3,904,483 Hz) to 1.03 µs/op (969,741 Hz)
- CI: 257 ns/op (3,885,642 Hz) to 704 ns/op (1,419,809 Hz)
- null-adjusted mean: -5.41 ns/op (-184,883,586 Hz)
- null-adjusted median: -1.37 µs/op (-729,056 Hz)
- null-adjusted expected: -1.61 µs/op (-619,958 Hz) to -1.20 µs/op (-835,683 Hz)
- null-adjusted range: -3.86 µs/op (-259,343 Hz) to -456 ns/op (-2,192,178 Hz)
- null-adjusted CI: -3.78 µs/op (-264,403 Hz) to -785 ns/op (-1,273,721 Hz)
- MOE: 0.000087 ns/op
- N: 300
- pop: 9,670,807
[construct `simpleTree`]:
- mean: 105 ns/op (9,502,298 Hz)
- median: 27.2 µs/op (36,758 Hz)
- expected: 25.3 µs/op (39,569 Hz) to 29.6 µs/op (33,753 Hz)
- range: 24.3 µs/op (41,121 Hz) to 69.2 µs/op (14,444 Hz)
- CI: 24.4 µs/op (41,034 Hz) to 65.5 µs/op (15,259 Hz)
- null-adjusted mean: 104 ns/op (9,611,864 Hz)
- null-adjusted median: 26.9 µs/op (37,157 Hz)
- null-adjusted expected: 24.9 µs/op (40,112 Hz) to 29.4 µs/op (34,062 Hz)
- null-adjusted range: 23.3 µs/op (42,942 Hz) to 69.0 µs/op (14,498 Hz)
- null-adjusted CI: 23.7 µs/op (42,256 Hz) to 65.3 µs/op (15,319 Hz)
- MOE: 6.54 ns/op
- N: 300
- pop: 110,269
[render `simpleTree`]:
- mean: 368 ns/op (2,715,944 Hz)
- median: 93.6 µs/op (10,689 Hz)
- expected: 91.4 µs/op (10,940 Hz) to 96.7 µs/op (10,345 Hz)
- range: 82.9 µs/op (12,069 Hz) to 341 µs/op (2,931 Hz)
- CI: 84.1 µs/op (11,897 Hz) to 213 µs/op (4,703 Hz)
- null-adjusted mean: 263 ns/op (3,802,882 Hz)
- null-adjusted median: 66.3 µs/op (15,072 Hz)
- null-adjusted expected: 61.8 µs/op (16,187 Hz) to 71.4 µs/op (14,007 Hz)
- null-adjusted range: 13.6 µs/op (73,387 Hz) to 317 µs/op (3,156 Hz)
- null-adjusted CI: 18.5 µs/op (53,992 Hz) to 188 µs/op (5,312 Hz)
- MOE: 120 ns/op
- N: 268
- pop: 31,580
[add/remove `simpleTree`]:
- mean: 8.06 µs/op (124,054 Hz)
- median: 672 µs/op (1,488 Hz)
- expected: 568 µs/op (1,759 Hz) to 839 µs/op (1,192 Hz)
- range: 437 µs/op (2,288 Hz) to 1.84 ms/op (543 Hz)
- CI: 437 µs/op (2,288 Hz) to 1.78 ms/op (562 Hz)
- null-adjusted mean: 7.69 µs/op (129,991 Hz)
- null-adjusted median: 579 µs/op (1,729 Hz)
- null-adjusted expected: 472 µs/op (2,120 Hz) to 748 µs/op (1,338 Hz)
- null-adjusted range: 95.9 µs/op (10,432 Hz) to 1.76 ms/op (568 Hz)
- null-adjusted CI: 224 µs/op (4,456 Hz) to 1.69 ms/op (590 Hz)
- MOE: 39.2 µs/op
- N: 100
- pop: 1,479
[construct `nestedTree`]:
- mean: 1.05 ns/op (951,632,307 Hz)
- median: 259 ns/op (3,857,763 Hz)
- expected: 240 ns/op (4,174,918 Hz) to 296 ns/op (3,374,704 Hz)
- range: 231 ns/op (4,331,810 Hz) to 714 ns/op (1,400,172 Hz)
- CI: 231 ns/op (4,324,852 Hz) to 639 ns/op (1,565,767 Hz)
- null-adjusted mean: -8.06 µs/op (-124,070 Hz)
- null-adjusted median: -672 µs/op (-1,489 Hz)
- null-adjusted expected: -839 µs/op (-1,192 Hz) to -568 µs/op (-1,760 Hz)
- null-adjusted range: -1.84 ms/op (-543 Hz) to -436 µs/op (-2,292 Hz)
- null-adjusted CI: -1.78 ms/op (-563 Hz) to -436 µs/op (-2,291 Hz)
- MOE: 0.000065 ns/op
- N: 300
- pop: 11,040,446
[render `nestedTree`]:
- mean: 418 ns/op (2,392,241 Hz)
- median: 106 µs/op (9,397 Hz)
- expected: 103 µs/op (9,741 Hz) to 110 µs/op (9,056 Hz)
- range: 90.6 µs/op (11,034 Hz) to 331 µs/op (3,017 Hz)
- CI: 91.4 µs/op (10,946 Hz) to 213 µs/op (4,702 Hz)
- null-adjusted mean: 417 ns/op (2,398,270 Hz)
- null-adjusted median: 106 µs/op (9,419 Hz)
- null-adjusted expected: 102 µs/op (9,770 Hz) to 110 µs/op (9,076 Hz)
- null-adjusted range: 89.9 µs/op (11,122 Hz) to 331 µs/op (3,019 Hz)
- null-adjusted CI: 90.7 µs/op (11,023 Hz) to 212 µs/op (4,707 Hz)
- MOE: 145 ns/op
- N: 269
- pop: 27,863
[add/remove `nestedTree`]:
- mean: 9.44 µs/op (105,889 Hz)
- median: 819 µs/op (1,221 Hz)
- expected: 664 µs/op (1,507 Hz) to 1.07 ms/op (935 Hz)
- range: 472 µs/op (2,119 Hz) to 2.02 ms/op (496 Hz)
- CI: 480 µs/op (2,084 Hz) to 2.01 ms/op (497 Hz)
- null-adjusted mean: 9.03 µs/op (110,793 Hz)
- null-adjusted median: 712 µs/op (1,404 Hz)
- null-adjusted expected: 553 µs/op (1,808 Hz) to 967 µs/op (1,034 Hz)
- null-adjusted range: 141 µs/op (7,114 Hz) to 1.93 ms/op (519 Hz)
- null-adjusted CI: 267 µs/op (3,742 Hz) to 1.92 ms/op (521 Hz)
- MOE: 55.6 µs/op
- N: 100
- pop: 1,273
[construct `mutateStylesPropertiesTree`]:
- mean: 6.52 µs/op (153,370 Hz)
- median: 526 µs/op (1,901 Hz)
- expected: 454 µs/op (2,200 Hz) to 792 µs/op (1,263 Hz)
- range: 390 µs/op (2,564 Hz) to 1.22 ms/op (820 Hz)
- CI: 393 µs/op (2,545 Hz) to 1.09 ms/op (919 Hz)
- null-adjusted mean: -2.92 µs/op (-342,031 Hz)
- null-adjusted median: -292 µs/op (-3,419 Hz)
- null-adjusted expected: -615 µs/op (-1,626 Hz) to 128 µs/op (7,818 Hz)
- null-adjusted range: -1.63 ms/op (-615 Hz) to 748 µs/op (1,337 Hz)
- null-adjusted CI: -1.62 ms/op (-618 Hz) to 609 µs/op (1,643 Hz)
- MOE: 17.2 µs/op
- N: 100
- pop: 1,818
[render `mutateStylesPropertiesTree`]:
- mean: 104 µs/op (9,570 Hz)
- median: 9.70 ms/op (103 Hz)
- expected: 9.45 ms/op (106 Hz) to 10.0 ms/op (100 Hz)
- range: 8.70 ms/op (115 Hz) to 20.0 ms/op (50 Hz)
- CI: 8.70 ms/op (115 Hz) to 17.3 ms/op (58 Hz)
- null-adjusted mean: 98.0 µs/op (10,207 Hz)
- null-adjusted median: 9.17 ms/op (109 Hz)
- null-adjusted expected: 8.66 ms/op (116 Hz) to 9.55 ms/op (105 Hz)
- null-adjusted range: 7.48 ms/op (134 Hz) to 19.6 ms/op (51 Hz)
- null-adjusted CI: 7.61 ms/op (131 Hz) to 17.0 ms/op (59 Hz)
- MOE: 8.86 ms/op
- N: 100
- pop: 180
[add/remove `mutateStylesPropertiesTree`]:
- mean: 101 µs/op (9,918 Hz)
- median: 9.05 ms/op (110 Hz)
- expected: 8.80 ms/op (114 Hz) to 9.61 ms/op (104 Hz)
- range: 8.05 ms/op (124 Hz) to 20.6 ms/op (49 Hz)
- CI: 8.14 ms/op (123 Hz) to 16.1 ms/op (62 Hz)
- null-adjusted mean: -3.67 µs/op (-272,480 Hz)
- null-adjusted median: -650 µs/op (-1,538 Hz)
- null-adjusted expected: -1.20 ms/op (-833 Hz) to 164 µs/op (6,105 Hz)
- null-adjusted range: -12.0 ms/op (-84 Hz) to 11.9 ms/op (84 Hz)
- null-adjusted CI: -9.21 ms/op (-109 Hz) to 7.37 ms/op (136 Hz)
- MOE: 8.01 ms/op
- N: 100
- pop: 178
[construct `repeatedTree`]:
- mean: 1.01 ns/op (987,742,824 Hz)
- median: 258 ns/op (3,879,988 Hz)
- expected: 240 ns/op (4,169,654 Hz) to 287 ns/op (3,487,466 Hz)
- range: 230 ns/op (4,347,327 Hz) to 746 ns/op (1,339,655 Hz)
- CI: 230 ns/op (4,344,696 Hz) to 635 ns/op (1,574,530 Hz)
- null-adjusted mean: -101 µs/op (-9,918 Hz)
- null-adjusted median: -9.05 ms/op (-111 Hz)
- null-adjusted expected: -9.61 ms/op (-104 Hz) to -8.80 ms/op (-114 Hz)
- null-adjusted range: -20.6 ms/op (-49 Hz) to -8.05 ms/op (-124 Hz)
- null-adjusted CI: -16.1 ms/op (-62 Hz) to -8.14 ms/op (-123 Hz)
- MOE: 0.000060 ns/op
- N: 301
- pop: 11,458,986
[render `repeatedTree`]:
- mean: 80.9 ns/op (12,361,935 Hz)
- median: 21.9 µs/op (45,630 Hz)
- expected: 19.0 µs/op (52,586 Hz) to 25.3 µs/op (39,568 Hz)
- range: 12.9 µs/op (77,672 Hz) to 81.5 µs/op (12,277 Hz)
- CI: 13.6 µs/op (73,358 Hz) to 53.5 µs/op (18,692 Hz)
- null-adjusted mean: 79.9 ns/op (12,518,610 Hz)
- null-adjusted median: 21.7 µs/op (46,173 Hz)
- null-adjusted expected: 18.7 µs/op (53,391 Hz) to 25.0 µs/op (39,947 Hz)
- null-adjusted range: 12.1 µs/op (82,453 Hz) to 81.2 µs/op (12,312 Hz)
- null-adjusted CI: 13.0 µs/op (76,943 Hz) to 53.3 µs/op (18,773 Hz)
- MOE: 3.64 ns/op
- N: 299
- pop: 143,725
[add/remove `repeatedTree`]:
- mean: 3.96 µs/op (252,761 Hz)
- median: 280 µs/op (3,570 Hz)
- expected: 204 µs/op (4,911 Hz) to 401 µs/op (2,495 Hz)
- range: 91.3 µs/op (10,948 Hz) to 1.23 ms/op (813 Hz)
- CI: 117 µs/op (8,514 Hz) to 1.22 ms/op (820 Hz)
- null-adjusted mean: 3.88 µs/op (258,037 Hz)
- null-adjusted median: 258 µs/op (3,873 Hz)
- null-adjusted expected: 178 µs/op (5,607 Hz) to 382 µs/op (2,619 Hz)
- null-adjusted range: 9.88 µs/op (101,173 Hz) to 1.22 ms/op (822 Hz)
- null-adjusted CI: 64.0 µs/op (15,634 Hz) to 1.21 ms/op (829 Hz)
- MOE: 11.1 µs/op
- N: 100
- pop: 2,964
[construct `shuffledKeyedTree`]:
- mean: 541 ns/op (1,849,643 Hz)
- median: 141 µs/op (7,080 Hz)
- expected: 133 µs/op (7,500 Hz) to 156 µs/op (6,415 Hz)
- range: 127 µs/op (7,845 Hz) to 509 µs/op (1,966 Hz)
- CI: 129 µs/op (7,759 Hz) to 350 µs/op (2,857 Hz)
- null-adjusted mean: -3.42 µs/op (-292,769 Hz)
- null-adjusted median: -139 µs/op (-7,200 Hz)
- null-adjusted expected: -267 µs/op (-3,738 Hz) to -47.7 µs/op (-20,953 Hz)
- null-adjusted range: -1.10 ms/op (-907 Hz) to 417 µs/op (2,396 Hz)
- null-adjusted CI: -1.09 ms/op (-916 Hz) to 233 µs/op (4,300 Hz)
- MOE: 403 ns/op
- N: 300
- pop: 21,552
[render `shuffledKeyedTree`]:
- mean: 28.0 µs/op (35,724 Hz)
- median: 1.24 ms/op (810 Hz)
- expected: 1.21 ms/op (829 Hz) to 1.42 ms/op (706 Hz)
- range: 1.10 ms/op (909 Hz) to 17.9 ms/op (56 Hz)
- CI: 1.11 ms/op (901 Hz) to 11.2 ms/op (89 Hz)
- null-adjusted mean: 27.5 µs/op (36,428 Hz)
- null-adjusted median: 1.09 ms/op (914 Hz)
- null-adjusted expected: 1.05 ms/op (952 Hz) to 1.28 ms/op (779 Hz)
- null-adjusted range: 591 µs/op (1,691 Hz) to 17.8 ms/op (56 Hz)
- null-adjusted CI: 760 µs/op (1,316 Hz) to 11.1 ms/op (90 Hz)
- MOE: 1.99 ms/op
- N: 100
- pop: 461
[add/remove `shuffledKeyedTree`]:
- mean: 18.5 µs/op (54,075 Hz)
- median: 1.69 ms/op (593 Hz)
- expected: 1.38 ms/op (723 Hz) to 2.08 ms/op (480 Hz)
- range: 1.20 ms/op (833 Hz) to 4.20 ms/op (238 Hz)
- CI: 1.22 ms/op (817 Hz) to 3.92 ms/op (255 Hz)
- null-adjusted mean: -9.50 µs/op (-105,272 Hz)
- null-adjusted median: 452 µs/op (2,213 Hz)
- null-adjusted expected: -33.0 µs/op (-30,261 Hz) to 877 µs/op (1,141 Hz)
- null-adjusted range: -16.7 ms/op (-60 Hz) to 3.10 ms/op (323 Hz)
- null-adjusted CI: -10.0 ms/op (-100 Hz) to 2.81 ms/op (355 Hz)
- MOE: 270 µs/op
- N: 100
- pop: 683
[mount simpleTree]:
- mean: 7.44 µs/op (134,438 Hz)
- median: 688 µs/op (1,453 Hz)
- expected: 650 µs/op (1,538 Hz) to 744 µs/op (1,345 Hz)
- range: 526 µs/op (1,901 Hz) to 1.80 ms/op (556 Hz)
- CI: 538 µs/op (1,859 Hz) to 1.50 ms/op (669 Hz)
- null-adjusted mean: -11.1 µs/op (-90,461 Hz)
- null-adjusted median: -0.999 ms/op (-1,001 Hz)
- null-adjusted expected: -1.43 ms/op (-698 Hz) to -640 µs/op (-1,563 Hz)
- null-adjusted range: -3.67 ms/op (-272 Hz) to 600 µs/op (1,667 Hz)
- null-adjusted CI: -3.39 ms/op (-295 Hz) to 271 µs/op (3,692 Hz)
- MOE: 27.6 µs/op
- N: 100
- pop: 1,602
[redraw simpleTree]:
- mean: 377 ns/op (2,655,693 Hz)
- median: 95.9 µs/op (10,431 Hz)
- expected: 92.1 µs/op (10,862 Hz) to 100 µs/op (10,000 Hz)
- range: 81.7 µs/op (12,241 Hz) to 344 µs/op (2,906 Hz)
- CI: 81.9 µs/op (12,213 Hz) to 189 µs/op (5,283 Hz)
- null-adjusted mean: -7.06 µs/op (-141,607 Hz)
- null-adjusted median: -592 µs/op (-1,688 Hz)
- null-adjusted expected: -652 µs/op (-1,534 Hz) to -550 µs/op (-1,818 Hz)
- null-adjusted range: -1.72 ms/op (-582 Hz) to -182 µs/op (-5,495 Hz)
- null-adjusted CI: -1.41 ms/op (-707 Hz) to -349 µs/op (-2,868 Hz)
- MOE: 110 ns/op
- N: 268
- pop: 30,864
[mount all]:
- mean: 205 µs/op (4,889 Hz)
- median: 19.1 ms/op (52 Hz)
- expected: 18.5 ms/op (54 Hz) to 20.5 ms/op (49 Hz)
- range: 16.1 ms/op (62 Hz) to 33.7 ms/op (30 Hz)
- CI: 16.1 ms/op (62 Hz) to 33.3 ms/op (30 Hz)
- null-adjusted mean: 204 µs/op (4,898 Hz)
- null-adjusted median: 19.0 ms/op (53 Hz)
- null-adjusted expected: 18.4 ms/op (54 Hz) to 20.4 ms/op (49 Hz)
- null-adjusted range: 15.8 ms/op (63 Hz) to 33.6 ms/op (30 Hz)
- null-adjusted CI: 16.0 ms/op (63 Hz) to 33.3 ms/op (30 Hz)
- MOE: 0.0 ns/op
- N: 100
- pop: 100
[redraw all]:
- mean: 162 µs/op (6,166 Hz)
- median: 15.4 ms/op (65 Hz)
- expected: 14.9 ms/op (67 Hz) to 16.0 ms/op (63 Hz)
- range: 12.2 ms/op (82 Hz) to 34.5 ms/op (29 Hz)
- CI: 12.3 ms/op (82 Hz) to 32.1 ms/op (31 Hz)
- null-adjusted mean: -42.4 µs/op (-23,607 Hz)
- null-adjusted median: -3.70 ms/op (-270 Hz)
- null-adjusted expected: -5.59 ms/op (-179 Hz) to -2.48 ms/op (-404 Hz)
- null-adjusted range: -21.5 ms/op (-47 Hz) to 18.4 ms/op (54 Hz)
- null-adjusted CI: -21.1 ms/op (-47 Hz) to 16.0 ms/op (63 Hz)
- MOE: 0.0 ns/op
- N: 100
- pop: 100
Benchmark run completed in 138 s

Raw output for benchmarks for this PR:

24 tests loaded
Timer resolution detected:
- min: 100 µs/tick (10,000 Hz)
- max: 800 µs/tick (1,250 Hz)
- median: 100 µs/tick (10,000 Hz)
- assumed: 100 µs/tick (10,000 Hz)
Frame interval detected:
- min: 16.3 ms/frame (61 Hz)
- max: 17.1 ms/frame (58 Hz)
- median: 16.7 ms/frame (60 Hz)
- assumed: 16.5 ms/frame (61 Hz)
Tests warmed up, starting benchmark
- min confidence level: 0.99
- min samples/test: 100
- min duration/test: 1 s
- max duration/test: 5 s
- min duration/pass: 11.5 ms
[*** null test ***]:
- mean: 0.96 ns/op (1,036,869,138 Hz)
- median: 234 ns/op (4,266,552 Hz)
- expected: 210 ns/op (4,751,093 Hz) to 309 ns/op (3,235,769 Hz)
- range: 192 ns/op (5,197,328 Hz) to 561 ns/op (1,783,017 Hz)
- CI: 194 ns/op (5,147,690 Hz) to 533 ns/op (1,876,842 Hz)
- MOE: 0.000047 ns/op
- N: 299
- pop: 12,027,682
[route match]:
- mean: 7.10 ns/op (140,878,252 Hz)
- median: 1.73 µs/op (578,468 Hz)
- expected: 1.51 µs/op (663,092 Hz) to 2.33 µs/op (429,276 Hz)
- range: 1.35 µs/op (740,517 Hz) to 4.03 µs/op (248,103 Hz)
- CI: 1.36 µs/op (733,651 Hz) to 3.86 µs/op (259,104 Hz)
- null-adjusted mean: 6.13 ns/op (163,028,792 Hz)
- null-adjusted median: 1.49 µs/op (669,199 Hz)
- null-adjusted expected: 1.20 µs/op (834,000 Hz) to 2.12 µs/op (471,916 Hz)
- null-adjusted range: 790 ns/op (1,266,528 Hz) to 3.84 µs/op (260,541 Hz)
- null-adjusted CI: 830 ns/op (1,204,476 Hz) to 3.67 µs/op (272,837 Hz)
- MOE: 0.0069 ns/op
- N: 300
- pop: 1,638,148
[route non-match]:
- mean: 1.71 ns/op (586,088,879 Hz)
- median: 417 ns/op (2,396,635 Hz)
- expected: 374 ns/op (2,676,376 Hz) to 527 ns/op (1,899,316 Hz)
- range: 329 ns/op (3,035,431 Hz) to 1.31 µs/op (763,190 Hz)
- CI: 330 ns/op (3,030,373 Hz) to 944 ns/op (1,058,961 Hz)
- null-adjusted mean: -5.39 ns/op (-185,456,438 Hz)
- null-adjusted median: -1.31 µs/op (-762,513 Hz)
- null-adjusted expected: -1.96 µs/op (-511,284 Hz) to -982 ns/op (-1,018,764 Hz)
- null-adjusted range: -3.70 µs/op (-270,187 Hz) to -40.1 ns/op (-24,926,993 Hz)
- null-adjusted CI: -3.53 µs/op (-283,330 Hz) to -419 ns/op (-2,388,206 Hz)
- MOE: 0.00020 ns/op
- N: 300
- pop: 6,798,631
[path generate with string interpolations]:
- mean: 4.18 ns/op (239,195,347 Hz)
- median: 1.03 µs/op (972,359 Hz)
- expected: 926 ns/op (1,080,099 Hz) to 1.32 µs/op (756,099 Hz)
- range: 822 ns/op (1,216,034 Hz) to 2.29 µs/op (437,328 Hz)
- CI: 824 ns/op (1,213,513 Hz) to 2.26 µs/op (441,890 Hz)
- null-adjusted mean: 2.47 ns/op (404,128,990 Hz)
- null-adjusted median: 611 ns/op (1,636,194 Hz)
- null-adjusted expected: 399 ns/op (2,504,160 Hz) to 949 ns/op (1,053,809 Hz)
- null-adjusted range: -488 ns/op (-2,049,411 Hz) to 1.96 µs/op (510,941 Hz)
- null-adjusted CI: -120 ns/op (-8,314,765 Hz) to 1.93 µs/op (517,327 Hz)
- MOE: 0.0018 ns/op
- N: 300
- pop: 2,774,998
[path generate with number interpolations]:
- mean: 4.00 ns/op (250,300,909 Hz)
- median: 0.997 µs/op (1,002,940 Hz)
- expected: 909 ns/op (1,100,476 Hz) to 1.23 µs/op (811,071 Hz)
- range: 825 ns/op (1,212,328 Hz) to 2.30 µs/op (435,259 Hz)
- CI: 826 ns/op (1,210,479 Hz) to 2.26 µs/op (442,294 Hz)
- null-adjusted mean: -0.19 ns/op (-5,391,065,491 Hz)
- null-adjusted median: -31.4 ns/op (-31,890,076 Hz)
- null-adjusted expected: -414 ns/op (-2,416,155 Hz) to 307 ns/op (3,256,296 Hz)
- null-adjusted range: -1.46 µs/op (-684,109 Hz) to 1.48 µs/op (677,902 Hz)
- null-adjusted CI: -1.44 µs/op (-695,949 Hz) to 1.44 µs/op (695,951 Hz)
- MOE: 0.0017 ns/op
- N: 300
- pop: 2,903,684
[path generate no interpolations]:
- mean: 1.93 ns/op (517,156,737 Hz)
- median: 463 ns/op (2,161,679 Hz)
- expected: 418 ns/op (2,390,492 Hz) to 588 ns/op (1,700,104 Hz)
- range: 394 ns/op (2,536,121 Hz) to 1.44 µs/op (694,741 Hz)
- CI: 396 ns/op (2,526,772 Hz) to 1.09 µs/op (920,812 Hz)
- null-adjusted mean: -2.06 ns/op (-485,073,915 Hz)
- null-adjusted median: -534 ns/op (-1,871,029 Hz)
- null-adjusted expected: -815 ns/op (-1,227,575 Hz) to -320 ns/op (-3,120,143 Hz)
- null-adjusted range: -1.90 µs/op (-525,436 Hz) to 615 ns/op (1,627,273 Hz)
- null-adjusted CI: -1.87 µs/op (-536,143 Hz) to 260 ns/op (3,847,954 Hz)
- MOE: 0.00028 ns/op
- N: 300
- pop: 5,999,183
[construct `simpleTree`]:
- mean: 71.9 ns/op (13,912,098 Hz)
- median: 18.1 µs/op (55,344 Hz)
- expected: 15.7 µs/op (63,881 Hz) to 23.1 µs/op (43,271 Hz)
- range: 14.0 µs/op (71,207 Hz) to 39.1 µs/op (25,603 Hz)
- CI: 14.1 µs/op (70,924 Hz) to 37.9 µs/op (26,395 Hz)
- null-adjusted mean: 69.9 ns/op (14,296,695 Hz)
- null-adjusted median: 17.6 µs/op (56,798 Hz)
- null-adjusted expected: 15.1 µs/op (66,375 Hz) to 22.7 µs/op (44,069 Hz)
- null-adjusted range: 12.6 µs/op (79,339 Hz) to 38.7 µs/op (25,865 Hz)
- null-adjusted CI: 13.0 µs/op (76,843 Hz) to 37.5 µs/op (26,673 Hz)
- MOE: 2.15 ns/op
- N: 300
- pop: 161,411
[render `simpleTree`]:
- mean: 325 ns/op (3,077,123 Hz)
- median: 78.9 µs/op (12,672 Hz)
- expected: 75.5 µs/op (13,248 Hz) to 82.9 µs/op (12,069 Hz)
- range: 62.4 µs/op (16,034 Hz) to 223 µs/op (4,483 Hz)
- CI: 65.6 µs/op (15,244 Hz) to 152 µs/op (6,577 Hz)
- null-adjusted mean: 253 ns/op (3,951,023 Hz)
- null-adjusted median: 60.8 µs/op (16,436 Hz)
- null-adjusted expected: 52.4 µs/op (19,093 Hz) to 67.2 µs/op (14,880 Hz)
- null-adjusted range: 23.3 µs/op (42,903 Hz) to 209 µs/op (4,784 Hz)
- null-adjusted CI: 27.7 µs/op (36,083 Hz) to 138 µs/op (7,249 Hz)
- MOE: 69.2 ns/op
- N: 260
- pop: 35,744
[add/remove `simpleTree`]:
- mean: 9.60 µs/op (104,208 Hz)
- median: 827 µs/op (1,209 Hz)
- expected: 691 µs/op (1,447 Hz) to 1.05 ms/op (949 Hz)
- range: 562 µs/op (1,780 Hz) to 2.70 ms/op (370 Hz)
- CI: 583 µs/op (1,716 Hz) to 2.56 ms/op (391 Hz)
- null-adjusted mean: 9.27 µs/op (107,861 Hz)
- null-adjusted median: 748 µs/op (1,336 Hz)
- null-adjusted expected: 608 µs/op (1,644 Hz) to 978 µs/op (1,022 Hz)
- null-adjusted range: 339 µs/op (2,951 Hz) to 2.64 ms/op (379 Hz)
- null-adjusted CI: 431 µs/op (2,322 Hz) to 2.49 ms/op (401 Hz)
- MOE: 72.5 µs/op
- N: 100
- pop: 1,249
[construct `nestedTree`]:
- mean: 1.08 ns/op (929,828,782 Hz)
- median: 259 ns/op (3,868,052 Hz)
- expected: 232 ns/op (4,303,122 Hz) to 307 ns/op (3,252,891 Hz)
- range: 218 ns/op (4,591,983 Hz) to 770 ns/op (1,298,276 Hz)
- CI: 222 ns/op (4,502,262 Hz) to 611 ns/op (1,637,140 Hz)
- null-adjusted mean: -9.60 µs/op (-104,220 Hz)
- null-adjusted median: -827 µs/op (-1,209 Hz)
- null-adjusted expected: -1.05 ms/op (-949 Hz) to -691 µs/op (-1,448 Hz)
- null-adjusted range: -2.70 ms/op (-370 Hz) to -561 µs/op (-1,782 Hz)
- null-adjusted CI: -2.56 ms/op (-391 Hz) to -582 µs/op (-1,718 Hz)
- MOE: 0.000064 ns/op
- N: 300
- pop: 10,786,399
[render `nestedTree`]:
- mean: 365 ns/op (2,742,193 Hz)
- median: 86.7 µs/op (11,531 Hz)
- expected: 83.5 µs/op (11,983 Hz) to 91.0 µs/op (10,989 Hz)
- range: 69.5 µs/op (14,397 Hz) to 361 µs/op (2,773 Hz)
- CI: 74.8 µs/op (13,362 Hz) to 182 µs/op (5,490 Hz)
- null-adjusted mean: 364 ns/op (2,750,304 Hz)
- null-adjusted median: 86.5 µs/op (11,566 Hz)
- null-adjusted expected: 83.1 µs/op (12,027 Hz) to 90.8 µs/op (11,017 Hz)
- null-adjusted range: 68.7 µs/op (14,558 Hz) to 360 µs/op (2,775 Hz)
- null-adjusted CI: 74.2 µs/op (13,472 Hz) to 182 µs/op (5,497 Hz)
- MOE: 98.3 ns/op
- N: 258
- pop: 31,862
[add/remove `nestedTree`]:
- mean: 11.5 µs/op (87,281 Hz)
- median: 1.05 ms/op (948 Hz)
- expected: 900 µs/op (1,111 Hz) to 1.29 ms/op (776 Hz)
- range: 600 µs/op (1,667 Hz) to 3.13 ms/op (320 Hz)
- CI: 626 µs/op (1,599 Hz) to 2.42 ms/op (413 Hz)
- null-adjusted mean: 11.1 µs/op (90,150 Hz)
- null-adjusted median: 968 µs/op (1,033 Hz)
- null-adjusted expected: 809 µs/op (1,236 Hz) to 1.21 ms/op (830 Hz)
- null-adjusted range: 239 µs/op (4,177 Hz) to 3.06 ms/op (327 Hz)
- null-adjusted CI: 443 µs/op (2,255 Hz) to 2.35 ms/op (426 Hz)
- MOE: 86.7 µs/op
- N: 100
- pop: 1,055
[construct `mutateStylesPropertiesTree`]:
- mean: 4.03 µs/op (248,164 Hz)
- median: 343 µs/op (2,918 Hz)
- expected: 331 µs/op (3,019 Hz) to 380 µs/op (2,631 Hz)
- range: 322 µs/op (3,103 Hz) to 843 µs/op (1,186 Hz)
- CI: 322 µs/op (3,103 Hz) to 824 µs/op (1,213 Hz)
- null-adjusted mean: -7.43 µs/op (-134,632 Hz)
- null-adjusted median: -712 µs/op (-1,404 Hz)
- null-adjusted expected: -958 µs/op (-1,044 Hz) to -520 µs/op (-1,923 Hz)
- null-adjusted range: -2.80 ms/op (-357 Hz) to 243 µs/op (4,118 Hz)
- null-adjusted CI: -2.10 ms/op (-476 Hz) to 199 µs/op (5,030 Hz)
- MOE: 6.28 µs/op
- N: 100
- pop: 2,925
[render `mutateStylesPropertiesTree`]:
- mean: 147 µs/op (6,782 Hz)
- median: 13.7 ms/op (73 Hz)
- expected: 13.3 ms/op (75 Hz) to 14.1 ms/op (71 Hz)
- range: 12.5 ms/op (80 Hz) to 25.9 ms/op (39 Hz)
- CI: 12.5 ms/op (80 Hz) to 24.8 ms/op (40 Hz)
- null-adjusted mean: 143 µs/op (6,973 Hz)
- null-adjusted median: 13.4 ms/op (75 Hz)
- null-adjusted expected: 12.9 ms/op (77 Hz) to 13.8 ms/op (73 Hz)
- null-adjusted range: 11.7 ms/op (86 Hz) to 25.6 ms/op (39 Hz)
- null-adjusted CI: 11.7 ms/op (85 Hz) to 24.4 ms/op (41 Hz)
- MOE: 0.0 ns/op
- N: 100
- pop: 100
[add/remove `mutateStylesPropertiesTree`]:
- mean: 54.2 µs/op (18,441 Hz)
- median: 14.0 ms/op (71 Hz)
- expected: 13.5 ms/op (74 Hz) to 15.3 ms/op (65 Hz)
- range: 12.2 ms/op (82 Hz) to 26.8 ms/op (37 Hz)
- CI: 12.3 ms/op (81 Hz) to 24.2 ms/op (41 Hz)
- null-adjusted mean: -93.2 µs/op (-10,727 Hz)
- null-adjusted median: 300 µs/op (3,333 Hz)
- null-adjusted expected: -653 µs/op (-1,532 Hz) to 2.03 ms/op (493 Hz)
- null-adjusted range: -13.7 ms/op (-73 Hz) to 14.3 ms/op (70 Hz)
- null-adjusted CI: -12.5 ms/op (-80 Hz) to 11.7 ms/op (86 Hz)
- MOE: 0.0 ns/op
- N: 283
- pop: 283
[construct `repeatedTree`]:
- mean: 1.15 ns/op (868,163,793 Hz)
- median: 265 ns/op (3,767,593 Hz)
- expected: 243 ns/op (4,118,478 Hz) to 368 ns/op (2,716,325 Hz)
- range: 220 ns/op (4,542,414 Hz) to 1.05 µs/op (955,172 Hz)
- CI: 221 ns/op (4,524,253 Hz) to 643 ns/op (1,555,076 Hz)
- null-adjusted mean: -54.2 µs/op (-18,441 Hz)
- null-adjusted median: -14.0 ms/op (-71 Hz)
- null-adjusted expected: -15.3 ms/op (-65 Hz) to -13.5 ms/op (-74 Hz)
- null-adjusted range: -26.8 ms/op (-37 Hz) to -12.2 ms/op (-82 Hz)
- null-adjusted CI: -24.2 ms/op (-41 Hz) to -12.3 ms/op (-81 Hz)
- MOE: 0.000076 ns/op
- N: 301
- pop: 10,070,700
[render `repeatedTree`]:
- mean: 62.9 ns/op (15,904,264 Hz)
- median: 17.9 µs/op (55,939 Hz)
- expected: 16.8 µs/op (59,699 Hz) to 19.7 µs/op (50,636 Hz)
- range: 13.5 µs/op (74,052 Hz) to 37.1 µs/op (26,983 Hz)
- CI: 13.8 µs/op (72,709 Hz) to 31.8 µs/op (31,439 Hz)
- null-adjusted mean: 61.7 ns/op (16,201,058 Hz)
- null-adjusted median: 17.6 µs/op (56,782 Hz)
- null-adjusted expected: 16.4 µs/op (61,041 Hz) to 19.5 µs/op (51,267 Hz)
- null-adjusted range: 12.5 µs/op (80,275 Hz) to 36.8 µs/op (27,144 Hz)
- null-adjusted CI: 13.1 µs/op (76,275 Hz) to 31.6 µs/op (31,659 Hz)
- MOE: 1.42 ns/op
- N: 300
- pop: 184,525
[add/remove `repeatedTree`]:
- mean: 2.71 µs/op (368,532 Hz)
- median: 241 µs/op (4,150 Hz)
- expected: 150 µs/op (6,658 Hz) to 305 µs/op (3,276 Hz)
- range: 84.1 µs/op (11,897 Hz) to 1.40 ms/op (714 Hz)
- CI: 90.2 µs/op (11,083 Hz) to 1.12 ms/op (895 Hz)
- null-adjusted mean: 2.65 µs/op (377,274 Hz)
- null-adjusted median: 223 µs/op (4,483 Hz)
- null-adjusted expected: 130 µs/op (7,666 Hz) to 289 µs/op (3,466 Hz)
- null-adjusted range: 47.0 µs/op (21,278 Hz) to 1.39 ms/op (721 Hz)
- null-adjusted CI: 58.4 µs/op (17,117 Hz) to 1.10 ms/op (907 Hz)
- MOE: 5.49 µs/op
- N: 100
- pop: 4,311
[construct `shuffledKeyedTree`]:
- mean: 596 ns/op (1,676,749 Hz)
- median: 151 µs/op (6,638 Hz)
- expected: 139 µs/op (7,179 Hz) to 176 µs/op (5,690 Hz)
- range: 129 µs/op (7,759 Hz) to 352 µs/op (2,845 Hz)
- CI: 129 µs/op (7,759 Hz) to 347 µs/op (2,881 Hz)
- null-adjusted mean: -2.12 µs/op (-472,349 Hz)
- null-adjusted median: -90.3 µs/op (-11,072 Hz)
- null-adjusted expected: -166 µs/op (-6,025 Hz) to 25.6 µs/op (39,112 Hz)
- null-adjusted range: -1.27 ms/op (-787 Hz) to 267 µs/op (3,739 Hz)
- null-adjusted CI: -988 µs/op (-1,012 Hz) to 257 µs/op (3,894 Hz)
- MOE: 465 ns/op
- N: 300
- pop: 19,536
[render `shuffledKeyedTree`]:
- mean: 17.7 µs/op (56,597 Hz)
- median: 1.54 ms/op (650 Hz)
- expected: 1.48 ms/op (674 Hz) to 1.60 ms/op (625 Hz)
- range: 1.36 ms/op (738 Hz) to 7.95 ms/op (126 Hz)
- CI: 1.36 ms/op (733 Hz) to 6.40 ms/op (156 Hz)
- null-adjusted mean: 17.1 µs/op (58,575 Hz)
- null-adjusted median: 1.39 ms/op (721 Hz)
- null-adjusted expected: 1.31 ms/op (765 Hz) to 1.46 ms/op (685 Hz)
- null-adjusted range: 1.00 ms/op (996 Hz) to 7.82 ms/op (128 Hz)
- null-adjusted CI: 1.02 ms/op (984 Hz) to 6.27 ms/op (159 Hz)
- MOE: 439 µs/op
- N: 100
- pop: 701
[add/remove `shuffledKeyedTree`]:
- mean: 19.5 µs/op (51,184 Hz)
- median: 1.62 ms/op (617 Hz)
- expected: 1.56 ms/op (640 Hz) to 1.95 ms/op (514 Hz)
- range: 1.43 ms/op (698 Hz) to 4.83 ms/op (207 Hz)
- CI: 1.43 ms/op (698 Hz) to 4.25 ms/op (235 Hz)
- null-adjusted mean: 1.87 µs/op (535,135 Hz)
- null-adjusted median: 84.4 µs/op (11,852 Hz)
- null-adjusted expected: -37.5 µs/op (-26,667 Hz) to 463 µs/op (2,159 Hz)
- null-adjusted range: -6.52 ms/op (-153 Hz) to 3.48 ms/op (288 Hz)
- null-adjusted CI: -4.97 ms/op (-201 Hz) to 2.89 ms/op (346 Hz)
- MOE: 318 µs/op
- N: 100
- pop: 640
[mount simpleTree]:
- mean: 12.1 µs/op (82,877 Hz)
- median: 1.08 ms/op (923 Hz)
- expected: 943 µs/op (1,060 Hz) to 1.24 ms/op (806 Hz)
- range: 763 µs/op (1,311 Hz) to 3.18 ms/op (315 Hz)
- CI: 772 µs/op (1,295 Hz) to 2.90 ms/op (345 Hz)
- null-adjusted mean: -7.47 µs/op (-133,845 Hz)
- null-adjusted median: -538 µs/op (-1,859 Hz)
- null-adjusted expected: -1.00 ms/op (-997 Hz) to -322 µs/op (-3,101 Hz)
- null-adjusted range: -4.07 ms/op (-246 Hz) to 1.74 ms/op (574 Hz)
- null-adjusted CI: -3.48 ms/op (-287 Hz) to 1.46 ms/op (683 Hz)
- MOE: 110 µs/op
- N: 100
- pop: 998
[redraw simpleTree]:
- mean: 328 ns/op (3,044,421 Hz)
- median: 80.6 µs/op (12,414 Hz)
- expected: 77.9 µs/op (12,845 Hz) to 84.7 µs/op (11,810 Hz)
- range: 65.5 µs/op (15,259 Hz) to 254 µs/op (3,932 Hz)
- CI: 68.4 µs/op (14,621 Hz) to 157 µs/op (6,372 Hz)
- null-adjusted mean: -11.7 µs/op (-85,197 Hz)
- null-adjusted median: -1.00 ms/op (-997 Hz)
- null-adjusted expected: -1.16 ms/op (-860 Hz) to -858 µs/op (-1,165 Hz)
- null-adjusted range: -3.11 ms/op (-322 Hz) to -508 µs/op (-1,968 Hz)
- null-adjusted CI: -2.83 ms/op (-354 Hz) to -615 µs/op (-1,626 Hz)
- MOE: 72.7 ns/op
- N: 261
- pop: 35,364
[mount all]:
- mean: 270 µs/op (3,706 Hz)
- median: 25.6 ms/op (39 Hz)
- expected: 24.5 ms/op (41 Hz) to 27.7 ms/op (36 Hz)
- range: 21.4 ms/op (47 Hz) to 39.2 ms/op (26 Hz)
- CI: 21.5 ms/op (47 Hz) to 38.3 ms/op (26 Hz)
- null-adjusted mean: 270 µs/op (3,711 Hz)
- null-adjusted median: 25.5 ms/op (39 Hz)
- null-adjusted expected: 24.4 ms/op (41 Hz) to 27.6 ms/op (36 Hz)
- null-adjusted range: 21.1 ms/op (47 Hz) to 39.1 ms/op (26 Hz)
- null-adjusted CI: 21.3 ms/op (47 Hz) to 38.2 ms/op (26 Hz)
- MOE: 0.0 ns/op
- N: 100
- pop: 100
[redraw all]:
- mean: 205 µs/op (4,873 Hz)
- median: 19.4 ms/op (52 Hz)
- expected: 18.6 ms/op (54 Hz) to 20.0 ms/op (50 Hz)
- range: 16.3 ms/op (61 Hz) to 36.6 ms/op (27 Hz)
- CI: 16.7 ms/op (60 Hz) to 34.3 ms/op (29 Hz)
- null-adjusted mean: -64.6 µs/op (-15,475 Hz)
- null-adjusted median: -6.20 ms/op (-161 Hz)
- null-adjusted expected: -9.10 ms/op (-110 Hz) to -4.45 ms/op (-225 Hz)
- null-adjusted range: -22.9 ms/op (-44 Hz) to 15.2 ms/op (66 Hz)
- null-adjusted CI: -21.6 ms/op (-46 Hz) to 12.8 ms/op (78 Hz)
- MOE: 0.0 ns/op
- N: 100
- pop: 100
Benchmark run completed in 145 s

Motivation and Context

This resolves a number of ergonomic issues around the API, crossing out many long-standing feature requests and eliminating a number of gotchas.

Related issues

  • Fixes https://github.com//issues/1937 by enabling it to be done in userland, mostly via `m.render(elem, vnode, {removeOnThrow: true})`
  • Resolves https://github.com//issues/2310 by dropping `m.trust`
  • Resolves https://github.com//issues/2505 by not resolving routes
  • Resolves https://github.com//issues/2531 by not resolving routes
  • Fixes https://github.com//issues/2555
  • Resolves https://github.com//issues/2592 by dropping `onbeforeremove`
  • Fixes https://github.com//issues/2621 by making each mount independent of other mount points
  • Fixes https://github.com//issues/2645
  • Fixes https://github.com//issues/2778
  • Fixes https://github.com//issues/2794 by dropping internal `ospec`
  • Fixes https://github.com//issues/2799
  • Resolves https://github.com//issues/2802 by dropping `m.request`

Related discussions

  • Resolves https://github.com//discussions/2754
  • Resolves https://github.com//discussions/2775 by not resolving routes
  • Implements https://github.com//discussions/2912
  • Implements https://github.com//discussions/2915 by throwing
  • Resolves https://github.com//discussions/2916 by dropping `m.request`
  • Implements https://github.com//discussions/2917 with some minor changes
  • Implements https://github.com//discussions/2918
  • Implements https://github.com//discussions/2919
  • Implements https://github.com//discussions/2920
  • Resolves https://github.com//discussions/2922 by dropping `m.request`
  • Resolves https://github.com//discussions/2924 by dropping `m.request`
  • Implements https://github.com//discussions/2925
  • Resolves https://github.com//discussions/2926 by dropping `m.request`
  • Resolves https://github.com//discussions/2929 by not doing it (doesn't fit with the model)
  • Resolves https://github.com//discussions/2931 by not resolving routes
  • Resolves https://github.com//discussions/2934 by dropping `m.request`
  • Resolves https://github.com//discussions/2935 by not resolving routes
  • Partially implements https://github.com//discussions/2936
  • Partially implements https://github.com//discussions/2937, intentionally skips rest
  • Resolves https://github.com//discussions/2941 by not resolving routes
  • Implements https://github.com//discussions/2942 by offering a configurable `route` context key
  • Implements https://github.com//discussions/2943
  • Implements https://github.com//discussions/2945
  • Resolves https://github.com//discussions/2946 by making the router a component instead (thus making it implicit)
  • Implements https://github.com//discussions/2948 by throwing
  • Resolves https://github.com//discussions/2950 by dropping `m.request`

Things this does *not* resolve

  • https://github.com//issues/2256 (This needs some further digging to lock down precisely what needs done)
  • https://github.com//issues/2315
  • https://github.com//issues/2359
  • https://github.com//issues/2612
  • https://github.com//issues/2623
  • https://github.com//issues/2643
  • https://github.com//issues/2809
  • https://github.com//issues/2886

New API

This is all in the style of https://mithril.js.org/api.html. I also include comparisons to v2. Collapsed so you can skip past it without scrolling for days.

New API overview

Vnodes

Vnodes have changed massively under the hood. Everything is now packed into a much smaller object (from 10 fields down to 6) when normalized. This means that in large apps, you can get away with way more components before you see perf issues. (Significantly reducing memory overhead is a partial goal of this change.)

Each type below also contains a quick explainer of the underlying normalized vnode object representing them. If a field isn't documented below, it's either not used or an implementation detail.

Note that vnode.m & m.TYPE_MASK uses the bitwise AND operator to select only the low 4 bits of the normalized vnode's vnode.m field. vnode.m is also guaranteed to be non-negative except for m.retain(), for which the whole field is always -1. All other bits (from offset 4 to offset 30) are an implementation detail unless stated otherwise. At the time of writing, 9 of the remaining 27 bits are in use.
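
To make the masking concrete, here's a minimal sketch of branching on a normalized vnode's type. It only assumes the type constants documented in the sections below (m.TYPE_ELEMENT, m.TYPE_COMPONENT, and so on); the helper itself is hypothetical, not part of the API.

function describeVnode(vnode) {
	// m.retain() is the one case where the whole field is -1.
	if (vnode.m === -1) return "retain"
	// The low 4 bits select the vnode type.
	switch (vnode.m & m.TYPE_MASK) {
		case m.TYPE_ELEMENT: return "element <" + vnode.t + ">"
		case m.TYPE_COMPONENT: return "component"
		case m.TYPE_TEXT: return "text " + JSON.stringify(vnode.a)
		case m.TYPE_FRAGMENT: return "fragment"
		default: return "other"
	}
}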

Note that once Mithril receives a vnode, the vnode could get modified.

onbeforeupdate → m.retain()

Return m.retain() if you want to retain the previous vnode. Also, components always receive the previous attributes via their second argument (null on first render).

Here's an overoptimized, overabstracted counter to show the difference:

This PR:

function CounterDisplay(attrs, old) {
	if (old && attrs.current === old.current) {
		return m.retain()
	}

	return m("div.counter", [
		m("button.counter-increment", {onclick: attrs.onincrement}, "🔼"),
		m("div.counter-value", attrs.current),
		m("button.counter-decrement", {onclick: attrs.ondecrement}, "🔽"),
	])
}

function Counter() {
	let count = 0
	return () => m(CounterDisplay, {
		current: count,
		onincrement() { count++ },
		ondecrement() { count-- },
	})
}

v2:

const CounterDisplay = {
	onbeforeupdate: (vnode, old) => vnode.attrs.current !== old.attrs.current,
	view: ({attrs}) => m("div.counter", [
		m("button.counter-increment", {onclick: attrs.onincrement}, "🔼"),
		m("div.counter-value", attrs.current),
		m("button.counter-decrement", {onclick: attrs.ondecrement}, "🔽"),
	]),
}

function Counter() {
	let count = 0
	return {
		view: () => m(CounterDisplay, {
			current: count,
			onincrement() { count++ },
			ondecrement() { count-- },
		}),
	}
}

Normalized vnode fields:

  • vnode.m is specially set to -1.
  • On render, this vnode becomes a clone of whatever vnode it's retaining, provided it's not a hole (boolean, null, or undefined). For instance, if the vnode it was retaining is an m("div"), it gets its vnode.d field set to the rendered DOM element.

oncreate/onupdate/onremove → m.layout/m.remove

DOM-related lifecycle methods have been moved to vnodes. Here's an example using the FullCalendar Bootstrap 4 plugin (an old version of this is currently used in the docs) to show the difference:

Both versions use the same setup as currently shown in the plugin's README on npm, and both wrap it the same way Mithril's docs currently wrap it.

This PR, option 1: nest the m.layout and m.remove

import { Calendar } from "@fullcalendar/core"
import bootstrapPlugin from "@fullcalendar/bootstrap"
import dayGridPlugin from "@fullcalendar/daygrid"

function FullCalendar(attrs) {
	let calendar

	return () => m("div", [
		m.layout((elem) => {
			if (calendar == null) {
				calendar = new Calendar(elem, {
					plugins: [
						bootstrapPlugin,
						dayGridPlugin,
					],
					themeSystem: "bootstrap",
					initialView: "dayGridMonth"
				})
				attrs.onready(calendar)
			}

			calendar.render()
		}),

		m.remove(() => calendar.destroy()),
	])
}

function Demo() {
	let calendar

	return () => [
		m("h1", "Calendar"),
		m(FullCalendar, {onready(c) { calendar = c }}),
		m("button", {onclick() { calendar.prev() }}, "Mithril.js Button -"),
		m("button", {onclick() { calendar.next() }}, "Mithril.js Button +"),
	]
}

m.mount(document.body, () => m(Demo))

This PR, option 2: use vnode.d to get the element

import { Calendar } from "@fullcalendar/core"
import bootstrapPlugin from "@fullcalendar/bootstrap"
import dayGridPlugin from "@fullcalendar/daygrid"

function FullCalendar(attrs) {
	let calendar
	let div

	return () => [
		div = m("div"),

		m.layout(() => {
			if (calendar == null) {
				calendar = new Calendar(div.d, {
					plugins: [
						bootstrapPlugin,
						dayGridPlugin,
					],
					themeSystem: "bootstrap",
					initialView: "dayGridMonth"
				})
				attrs.onready(calendar)
			}

			calendar.render()
		}),

		m.remove(() => calendar.destroy()),
	]
}

function Demo() {
	let calendar

	return () => [
		m("h1", "Calendar"),
		m(FullCalendar, {onready(c) { calendar = c }}),
		m("button", {onclick() { calendar.prev() }}, "Mithril.js Button -"),
		m("button", {onclick() { calendar.next() }}, "Mithril.js Button +"),
	]
}

m.mount(document.body, () => m(Demo))

v2:

import { Calendar } from "@fullcalendar/core"
import bootstrapPlugin from "@fullcalendar/bootstrap"
import dayGridPlugin from "@fullcalendar/daygrid"

function FullCalendar() {
	let calendar

	return {
		view: () => m("div"),
		oncreate(vnode) {
			calendar = new Calendar(vnode.dom, {
				plugins: [
					bootstrapPlugin,
					dayGridPlugin,
				],
				themeSystem: "bootstrap",
				initialView: "dayGridMonth"
			})
			vnode.attrs.onready(calendar)
			calendar.render()
		},
		onupdate: () => calendar.render(),
		onremove: () => calendar.destroy(),
	}
}

function Demo() {
	let calendar

	return {
		view: () => [
			m("h1", "Calendar"),
			m(FullCalendar, {onready(c) { calendar = c }}),
			m("button", {onclick() { calendar.prev() }}, "Mithril.js Button -"),
			m("button", {onclick() { calendar.next() }}, "Mithril.js Button +"),
		],
	}
}

m.mount(document.body, Demo)

Normalized vnode fields:

  • vnode.m & m.TYPE_MASK is m.TYPE_LAYOUT for m.layout(...), m.TYPE_REMOVE for m.remove(...).
  • vnode.a holds the callback for both types.

m(selector, attrs, children), JSX <selector {...attrs}>{children}</selector>

Same as before, but with a few important differences:

  • No magic lifecycle methods. Use relevant vnodes as needed. (For similar reasons, m.censor is removed.)
  • Event handlers await their callbacks before deciding whether to redraw, and they use return false to prevent redraw rather than to "capture" the event (use m.capture(ev) for that).
  • The element's identity is no longer the tag name, but the whole selector. If that changes, the element is replaced, even if the two selectors share a tag name.

This PR:

// Simple case
m("div.class#id", {title: "title"}, ["children"])

// Simple events
m("div", {
	onclick(e) {
		console.log(e)
	},
})

// Prevent redraw
m("div", {
	onclick(e) {
		return false
	},
})

// Redraw after doing something async
m("button.save", {
	async onclick(e) {
		m.capture(e)
		try {
			await m.fetch(m.p("/save/:id", {id: attrs.id}), {
				body: JSON.stringify({value: e.target.value}),
			})
		} catch (e) {
			error = e.message
		}
	},
}, "Save")

v2:

// Simple case
m("div.class#id", {title: "title"}, ["children"])

// Simple events
m("div", {
	onclick(e) {
		console.log(e)
	},
})

// Prevent redraw
m("div", {
	onclick(e) {
		e.redraw = false
	},
})

// Redraw after doing something async
m("button.save", {
	async onclick(e) {
		e.redraw = false
		e.preventDefault()
		e.stopPropagation()
		try {
			await m.request("/save/:id", {
				params: {id: attrs.id},
				body: JSON.stringify({value: e.target.value}),
			})
		} catch (e) {
			error = e.message
		}
		m.redraw()
	},
}, "Save")

Normalized vnode fields:

  • vnode.m & m.TYPE_MASK is m.TYPE_ELEMENT.
  • vnode.t holds the selector, "div.class#id" in this case.
  • vnode.a holds the attributes. Note that if both the selector and attributes have a class/className attribute, they're now normalized to class, not className.
  • vnode.c holds the (normalized) children.
  • vnode.d holds the DOM element, once rendered.

m(Component, attrs, children), JSX <Component {...attrs}>{children}</Component>

Usage is the same as before, but with one difference: no magic lifecycle methods. You'll need to use special attributes to expose inner DOM nodes, lifecycle, and state, and you'll need to use lifecycle vnodes outside the component for everything else. (For similar reasons, m.censor is removed.)

The component definition differs a lot, though, and that'll be covered later.

Note: be careful with attrs.children - that takes precedence over children arguments in both m(Component) and m(".selector") vnodes, and so you may have to extract it as done below.

This PR:

function Greeter({children, ...attrs}) {
	return m("div", attrs, ["Hello ", children])
}

// consume it
m(Greeter, {style: "color: red;"}, "world")

// renders to this HTML:
// <div style="color: red;">Hello world</div>

v2:

// define a component
const Greeter = {
	view(vnode) {
		return m("div", vnode.attrs, ["Hello ", vnode.children])
	},
}

// consume it
m(Greeter, {style: "color: red;"}, "world")

// renders to this HTML:
// <div style="color: red;">Hello world</div>

Normalized vnode fields:

  • vnode.m & m.TYPE_MASK is m.TYPE_COMPONENT.
  • vnode.t holds the component, Greeter in this case.
  • vnode.a holds the attributes. Any children passed in are merged in as {...attrs, children}, but they are not normalized.
  • vnode.c holds the instance vnode, once rendered.

Holes: null, undefined, false, and true

Holes work exactly the same as before, with all the same rules. And like before, they're normalized to null.

"text"

Text vnodes work exactly the same as before, with all the same rules. Anything neither a hole, a fragment array, nor a normalized vnode is stringified. Symbols can even be stringified like before.
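
For example, assuming the same stringification rules as v2:

m("p", 0, false, ["a", null, Symbol("b")])
// false and null are holes; 0, "a", and the symbol become text vnodes,
// so this renders roughly as <p>0aSymbol(b)</p>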

Normalized vnode fields:

  • vnode.m & m.TYPE_MASK is m.TYPE_TEXT.
  • vnode.a holds the (stringified) text.

[...children], m(m.Fragment, ...children), JSX <>{children}</>

Unkeyed fragments work the same as before, but with three differences:

  • If you truly want or need a pre-normalized vnode, you were probably using m.fragment(...). Use m.normalize([...]) or m(m.Fragment, ...) instead - there's no longer a dedicated normalized fragment vnode factory.
  • m.Fragment actually looks like a component, rather than looking like an element selector (and being faked as a component in TypeScript types).
  • Fragments can't have lifecycle methods, not even through m(m.Fragment, ...).

Skipping examples, since it's the same across both aside from the above.

Normalized vnode fields:

  • vnode.m & m.TYPE_MASK is m.TYPE_FRAGMENT.
  • vnode.c holds the (normalized) children.

m.keyed(list, view), m.keyed([...entries])

Keyed fragments have changed. Instead of returning an array of vnodes with key properties, you use m.keyed(...). In this new model, keys are more implicit, and while there are a few more brackets and parentheses, it's a bit easier to follow thanks to the added context of "this is a keyed list, not a normal one" given by the m.keyed call.

This PR:

const colors = ["red", "yellow", "blue", "gray"]
let counter = 0

function getColor() {
	const color = colors[counter]
	counter = (counter + 1) % colors.length
	return color
}

function Boxes() {
	const boxes = new Map()
	let nextKey = 0

	function add() {
		boxes.set(nextKey++, getColor())
	}

	function remove(key) {
		boxes.delete(key)
	}

	return () => [
		m("button", {onclick: add}, "Add box, click box to remove"),
		m(".container", m.keyed(boxes, ([key, box]) => [
			key,
			m(".box",
				{
					"data-color": box.color,
					onclick() { remove(key) },
				},
				m(".stretch")
			)
		])),
	]
}

v2:

const colors = ["red", "yellow", "blue", "gray"]
let counter = 0

function getColor() {
	const color = colors[counter]
	counter = (counter + 1) % colors.length
	return color
}

function Boxes() {
	const boxes = new Map()
	let nextKey = 0

	function add() {
		boxes.set(nextKey++, getColor())
	}

	function remove(key) {
		boxes.delete(key)
	}

	return {
		view: () => [
			m("button", {onclick: add}, "Add box, click box to remove"),
			m(".container", Array.from(boxes, ([key, box]) => (
				m(".box",
					{
						key,
						"data-color": box.color,
						onclick() { remove(key) },
					},
					m(".stretch")
				)
			))),
		],
	}
}

Normalized vnode fields:

  • vnode.m & m.TYPE_MASK is m.TYPE_KEYED.
  • vnode.c holds the key to (normalized) child map.

m.use([...deps], ...children)

For cases where you want to reset something based on state, this makes that way easier for you, and it's much more explicit than a single-item keyed list. (In fact, it was such a common question during v0.2 and v1 days that we had to eventually document it.) This new factory provides a better story, one that should also hopefully facilitate hooks-like use cases as well.

This PR:

m.mount(rootElem, function() {
	let match

	if (m.match(this.route, "/")) {
		return m(Home)
	}

	if (match = m.match(this.route, "/person/:id")) {
		return m(Layout, m.use([match.id], m(Person, {id: match.id})))
	}

	m.route.set("/")
})

v2:

m.route(rootElem, "/", {
	"/": Home,
	"/person/:id": {
		render: ({attrs}) => m(Layout, [m(Person, {id: attrs.id, key: attrs.id})]),
	},
	// ...
})

Normalized vnode fields:

  • vnode.m & m.TYPE_MASK is m.TYPE_USE.
  • vnode.a holds the collected dependency array (deps can be any iterable, not just an array).
  • vnode.c holds the (normalized) children.

m.set({key: value, ...}, ...children)

This type is entirely new, without a v2 equivalent. You've probably noticed a bunch of this.route and this.redraw() spattered everywhere. Well, this.redraw() is provided implicitly by the runtime, but the new m.route(...) directly uses m.set({route}, view()) internally.
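
Since there's no dedicated example elsewhere in this description, here's a minimal sketch of the idea, assuming the behavior described in the summary above. The theme key and the components are made up for illustration.

// Provide a context key to everything rendered below this vnode...
function App() {
	return m.set({theme: "dark"}, m(Toolbar))
}

// ...and read it back from any descendant component's third parameter.
function Toolbar(attrs, old, context) {
	return m("nav", {class: "toolbar toolbar-" + context.theme}, "...")
}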

Normalized vnode fields:

  • vnode.m & m.TYPE_MASK is m.TYPE_USE.
  • vnode.a holds the collected dependency array (deps can be any iterable, not just an array).
  • vnode.c holds the (normalized) children.

m.inline(view)

This type is entirely new, without a v2 equivalent. It's equivalent to the following component, but it's optimized internally (and is its own vnode type).

function Inline({view}) {
	// Call with the context as both `this` and the first argument. Doing it this way reduces how
	// many arrow functions you need, and it also provides an easier way to read context state.
	return view.call(this, this)
}

m.inline = (view) => m(Inline, {view})

It's used internally by the new m.route(prefix, view). See that helper for more details on how this plays out in practice.
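
As a quick usage sketch (the user context key here is hypothetical, set via m.set as described above):

m.set({user: {name: "Ada"}},
	m("h1", "Dashboard"),
	// Read a context key inline, without defining a separate component.
	m.inline((ctx) => m("p.greeting", "Hello, " + ctx.user.name))
)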

Normalized vnode fields:

  • vnode.m & m.TYPE_MASK is m.TYPE_INLINE.
  • vnode.a holds the view function.
  • vnode.c holds the instance vnode, once rendered.

m.mount(element, component) → m.mount(element, view)

This PR:

const state = {
	count: 0,
	inc() { state.count++ },
}

function Counter() {
	return m("div", {onclick: state.inc}, state.count)
}

m.mount(document.body, () => m(Counter))

v2:

const state = {
	count: 0,
	inc() { state.count++ },
}

const Counter = {
	view: () => m("div", {onclick: state.inc}, state.count)
}

m.mount(document.body, Counter)

m.redraw() → context.redraw()

This PR, option 1: use from a component

This way makes for easier local cleanup.

function Counter() {
	let count = 0
	const timer = setInterval(() => {
		count++
		this.redraw()
	}, 1000)
	return () => [
		m("div", count),
		m.remove(() => clearInterval(timer))
	]
}

m.mount(document.body, () => m(Counter))

This PR, option 2: use from a mount view callback

let count = 0

m.mount(document.body, function (isInit) {
	if (isInit) {
		setInterval(() => {
			count++
			this.redraw()
		}, 1000)
	}
	return m("div", count)
})

v2:

function Counter() {
	let count = 0
	const timer = setInterval(() => {
		count++
		m.redraw()
	}, 1000)
	return {
		onremove: () => clearInterval(timer),
		view: () => m("div", count),
	}
}

m.mount(document.body, Counter)

Routing: m.route(root, defaultRoute, {...routes})m.route(prefix, view)

It's more verbose, but there are unique benefits to doing it this way.

  • Adding and tweaking routes requires no more code movement than a magic routes object does. If you want to regain that style, you can still do it with a simple wrapper.
  • Abstracting routes is as simple as creating a function. The framework doesn't need any special functionality for that.
  • Allowing route matching anywhere means you can just as easily do routing only where you need to. You don't need to specially wrap everything just to make it work. (I've partly run into this trap with other routing frameworks.)
  • You don't need as much effort to glue together a component and a route that doesn't quite perfectly match what that view component expects. (I've been bit by this many times.)
  • If your default route varies based on context, this lets you do that.
  • Conditional routes are trivial. Just use if statements to guard your routes.

The main goal here is to get out of your way as much as possible: give you the helpers needed to easily solve routing yourself with only basic abstractions, instead of trying to magically solve it for you and ultimately failing because the abstraction didn't quite work the way you needed it to.

Note that prefixes are now required - there is no longer a default. They help for documentation, so you can just look at the call and know right away what the prefix is.

Also, m.match/route.match templates are cached just like hyperscript selectors, so generating new template strings too often may result in memory leaks.

This PR, option 1: define using static strings and/or regexps

m.mount(document.body, () => m.route("#!", ({route}) => {
	let match

	if (route.path === "/home") {
		return m(Home)
	}

	if (match = /^\/user\/([^/]+)$/.exec(route.path)) {
		return m(ShowUser, {id: decodeURIComponent(match[1])})
	}

	route.set("/home") // navigate to default route
}))

This PR, option 2: use route.match(...) (equivalent to m.match(route, path)) to match route templates

This method has been benchmarked to oblivion. Hundreds of routes aren't a problem. If anything, they're less of a problem now.

m.mount(document.body, () => m.route("#!", ({route}) => {
	let match
	if (route.match("/home")) return m(Home)
	if (match = route.match("/user/:id")) return m(ShowUser, {id: match.id})
	route.set("/home") // navigate to default route
}))

v2:

m.route.prefix = "#!" // Technically redundant as it's the v2 default
m.route(document.body, "/home", {
	"/home": Home,
	"/user/:id": ShowUser,
})

Note that while the above uses the first argument, the view is actually passed into an m.inline, so it receives all the context, not just the route key, and it also receives it as both this and the first parameter. So this is equivalent to the second option:

m.mount(document.body, () => m.route("#!", function () {
	let match

	if (this.route.match("/home")) {
		return m(Home)
	}

	if (match = this.route.match("/user/:id")) {
		return m(ShowUser, {id: match.id})
	}

	// And finally, set your default path here.
	this.route.set("/home")
}))

route.set(path)

route.set("/home")

Current route

// Returns the full current route, including query and hash
const fullCurrentRoute = route.current
// Returns the current path, including query and hash
const currentPath = route.path
// Returns the current query parameters in a `URLSearchParams` instance.
const currentQuery = route.query

m(m.route.Link, ...)m.link(href, opts?)

This PR:

// Simple static routes can just be given directly
m("a", m.link("/home"), "Go to home page")

// If you want parameters, use `m.p`.
m("a", m.link(m.p("/user/:id", {id: user.id})), user.name)

// If you want to replace the URL state on click, you can pass the same options you can pass to
// `route.set(url, opts)`.
m("a.wizard-exit", {ping: m.p("/event", {a: "wizard-exit", s: wizardState})}, [
	m.link("/things/list", {replace: true}),
	"Exit create wizard",
])

v2:

// Simple static routes can just be given directly
m(m.route.Link, {href: "/home"}, "Go to home page")

// If you want parameters, use `m.p`.
m(m.route.Link, {href: "/user/:id", params: {id: user.id}}, user.name)

// If you want to replace the URL state on click, you can pass the same options you can pass to
// `route.set(url, opts)`.
m(m.route.Link, {
	class: "wizard-exit",
	ping: m.buildPathname("/event", {a: "wizard-exit", s: wizardState}),
	href: "/things/list",
	replace: true,
}, "Exit create wizard")

What happened to route resolvers?

Instead of accepting components, every route handler is essentially what v2 called a render function. As for onmatch, its only two legitimate uses are auth and lazy route loading, and there are better ways to handle both:

  • For lazy route loading, there's a Comp = m.lazy(() => import("./path/to/Comp.js")) helper (see the sketch after this list).
  • For auth, lazily rendering a view is the wrong abstraction. It needs to check for auth from a cached token, prompt on immediate failure, and render the original view on successful auth in either case.
  • The website currently mentions data pre-loading as a use case. That shouldn't use onmatch. Instead, the view should be rendered right away, just with loading placeholders for the parts that aren't ready to show yet. Trying to force that into a lazy loading system would just cause blank-screen flickering and other issues that at best don't look pretty and at worst cause accessibility problems.
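
To make the lazy-loading point concrete, here's a sketch of the m.lazy helper wired into the new routing style from the earlier examples. The paths and routes are illustrative, and exactly how the imported module resolves to a component is up to the final m.lazy API.

// Illustrative lazily loaded component; the import path is hypothetical.
const Settings = m.lazy(() => import("./path/to/Settings.js"))

m.mount(document.body, () => m.route("#!", ({route}) => {
	if (route.match("/home")) return m(Home)
	if (route.match("/settings")) return m(Settings)
	route.set("/home")
}))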

m.buildPathname(template, params)m.p(template, params)

It's the same API, just with a shorter name as it's expected to be used a lot more. It also has an entirely new implementation that's almost twice as fast when parameters are involved, to match that same expected jump in usage.

m.fetch, m.link, and route.set expect raw strings and don't accept template parameters, unlike their v2 equivalents.

Here's another example of how it could end up used.

// Issues a fetch to `PUT /api/v1/users/1?name=test`
const result = await fetch(m.p("/api/v1/users/:id", {id: 1, name: "test"}), {
	method: "PUT",
})
console.log(result)

m.buildQueryString(object)m.query(object)

Same as before, just with a shorter name to complement m.p. It also reuses some of the optimizations created for m.p.

// "a=1&b=2"
const querystring = m.query({a: "1", b: "2"})

m.request(url, opts?)m.fetch(url, opts)

The new m.fetch accepts all the same arguments as window.fetch, plus a few other options:

  • opts.responseType: Selects the type of value to return (default: "json")
    • "json": Parse the result as JSON and return the parsed result
    • "formdata": Parse the result as multipart/form-data and return the parsed FormData object
    • "arraybuffer": Collect the result into an ArrayBuffer object and return it
    • "blob": Collect the result into a Blob object and return it
    • "text": Collect the result into a UTF-8 string and return it
    • "document": Parse the result as HTML/XML and return the parsed Document object
  • opts.extract: Instead of using responseType to determine how to extract it, you can provide a (response) => result function to extract the result. It's only called on 2xx responses and takes precedence over opts.responseType.
  • opts.onprogress: Pass a (current, total) => ... function to get called on download progress.

Unfortunately, upload progress isn't currently monitorable using fetch: whatwg/fetch#607. However, this wasn't possible in any previous version of Mithril without using config(xhr) { xhr.upload.onprogress = ... } directly in options, so this isn't much of a removal.

m.fetch returns a promise that resolves according to opts.responseType and opts.extract on 2xx and rejects with an error on anything else. Errors returned through rejections have the following properties (a short usage sketch follows the list):

  • message: The returned UTF-8 text, or the status text if empty. If an error was thrown during the m.fetch call, its message is duplicated here instead.
  • status: The status code, or 0 if it failed before receiving a response.
  • response: The response from the inner fetch call, or undefined if the inner fetch call itself failed to return a response.
  • cause: If an error was thrown during the m.fetch call, this is set to that error.
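
Here's a small sketch of that surface. The endpoint is illustrative; responseType: "json" is just the default spelled out, and the catch block shows the error shape described above.

try {
	// Fetch JSON (the default responseType) and log download progress as it arrives.
	const user = await m.fetch(m.p("/api/user/:id", {id: 1}), {
		responseType: "json",
		onprogress: (current, total) => console.log(`${current} of ${total} bytes`),
	})
	console.log(user)
} catch (e) {
	// Rejections carry `status`, `message`, `response`, and possibly `cause`.
	console.error(e.status, e.message)
}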

m.throttler()

This is a general-purpose bi-edge throttler, with a dynamically configurable limit. It's much better than your typical throttle(f, ms) because it lets you easily separate the trigger and reaction using a single shared, encapsulated state object. That same separation is also used to make the rate limit dynamically reconfigurable on hit.

Create as throttled = m.throttler(ms) and do if (await throttled()) return to rate-limit the code that follows. The result is one of three values, to allow you to identify edges:

  • Leading edge: undefined
  • Trailing edge: false, returned only if a second call was made
  • No edge: true

Call throttled.update(ms) to update the interval. This not only impacts future delays, but also any current one.

To dispose, like on component removal, call throttled.dispose().

If you don't specify a delay, it defaults to 500ms on creation, which works well enough for most needs. There is no default for throttled.update(...) - you must specify one explicitly.

Important note: due to the way this is implemented in basically all runtimes, the throttler's clock might not tick during sleep, so if you do await throttled() and immediately sleep in a low-power state for 5 minutes, you might have to wait another 10 minutes after resuming to a high-power state.

Example usage:

function Search() {
	const throttled = m.throttler()
	let results, error
	return () => [
		m.remove(throttled.dispose),
		m("input[type=search]", {
			oninput: async (ev) => {
				// Skip redraw if rate limited - it's pointless
				if (await throttled()) return false
				error = results = null
				this.redraw()
				try {
					results = await m.fetch(m.p("/search", {q: ev.target.value}))
				} catch (e) {
					error = e.message
				}
			},
		}),
		results && results.map((result) => m(SearchResult, {result})),
		!error || m(ErrorDisplay, {error}),
	]
}

Here's the v2 equivalent of the above code, to show how this (and async event listeners) help reduce how much you need to write.

function Search() {
	let timer, results, error

	async function doSearch(query) {
		error = results = null
		try {
			results = await m.request("/search", {params: {q: query}})
		} catch (e) {
			error = e.message
		}
		m.redraw()
	}

	return {
		onremove: () => clearTimeout(timer),
		view: () => [
			m("input[type=search]", {
				oninput: (ev) => {
					if (timer) {
						ev.redraw = false // Skip redraw while throttled
					} else {
						timer = setTimeout(() => { timer = null }, 500)
						doSearch(ev.target.value)
					}
				},
			}),
			results && results.map((result) => m(SearchResult, {result})),
			!error || m(ErrorDisplay, {error}),
		],
	}
}

m.debouncer()

A general-purpose bi-edge debouncer, with a dynamically configurable limit. It's much better than your typical debounce(f, ms) because it lets you easily separate the trigger and reaction using a single shared, encapsulated state object. That same separation is also used to make the rate limit dynamically reconfigurable on hit.

Create as debounced = m.debouncer(ms) and do if (await debounced()) return to rate-limit the code that follows. The result is one of three values, to allow you to identify edges:

  • Leading edge: undefined
  • Trailing edge: false, returned only if a second call was made
  • No edge: true

Call debounced.update(ms) to update the interval. This not only impacts future delays, but also any current one.

To dispose, like on component removal, call debounced.dispose().

If you don't specify a delay, it defaults to 500ms on creation, which works well enough for most needs. There is no default for debounced.update(...) - you must specify one explicitly.

Important note: due to the way this is implemented in basically all runtimes, the debouncer's clock might not tick during sleep, so if you do await debounced() and immediately sleep in a low-power state for 5 minutes, you might have to wait another 10 minutes after resuming to a high-power state.

Example usage:

function SaveButton() {
	const debounced = m.debouncer()
	let results, error
	return (attrs) => [
		m.remove(debounced.dispose),
		m("input[type=text].value", {
			async oninput(ev) {
				// Skip redraw if rate limited - it's pointless
				if ((await debounced()) !== false) return false
				try {
					results = await m.fetch(m.p("/save/:id", {id: attrs.id}), {
						body: JSON.stringify({value: ev.target.value}),
					})
				} catch (e) {
					error = e.message
				}
			},
		}),
		results && results.map((result) => m(SearchResult, {result})),
		!error || m(ErrorDisplay, {error}),
	]
}

Here's the v2 equivalent of the above code, to show how this (and async event listeners in general) help reduce how much you need to write.

function SaveButton() {
	let timer, results, error

	return {
		onremove: () => clearTimeout(timer),
		view: (vnode) => [
			m("input[type=text].value", {
				oninput(ev) {
					ev.redraw = false // Skip redraw while waiting
					clearTimeout(timer)
					timer = setTimeout(async () => {
						timer = null
						error = results = null
						try {
							results = await m.request("/save/:id", {params: {id: vnode.attrs.id}})
						} catch (e) {
							error = e.message
						}
						m.redraw()
					}, 500)
				},
			}),
			results && results.map((result) => m(SearchResult, {result})),
			!error || m(ErrorDisplay, {error}),
		],
	}
}

onbeforeremovem.tracked(redraw, initial?), m.trackedList(redraw, initial?)

In v2, for managing animations, you'd do something like this:

const Fader = {
	view: (vnode) => Boolean(vnode.attrs.show) && m("div", {
		onbeforeremove: (vnode) => new Promise((resolve) => {
			vnode.dom.addEventListener("transitionend", resolve, {once: true})
			vnode.dom.classList.add("fade-out")
		}),
	}, "Hi"),
}

function AnimatedTodoList() {
	const items = new Map()
	let nextItem = ""
	let id = 0

	return {
		view: () => m(".todo-list", [
			m(".todo-list-add", [
				m("input[type=text].next-item-value", {
					value: nextItem,
					oninput(ev) { nextItem = ev.target.value },
				}),
				m("button.next-item-add", {
					onclick() { items.set(id++, nextItem) },
				}, "Add"),
			]),
			m(".todo-list-items", Array.from(items, ([key, value]) => (
				m(".todo-item", {
					key,
					onbeforeremove: (vnode) => new Promise((resolve) => {
						vnode.dom.ontransitionend = (ev) => {
							resolve()
							return false
						}
						vnode.dom.classList.add("fade-out")
					}),
				}, [
					m(".item-value", value),
					m("button.remove", {
						onclick() {
							items.delete(key)
						},
					}, "Remove"),
				])
			))),
		]),
	}
}

In this PR, things are done a bit differently because it's much lower-level: you don't remove right away; instead, you first trigger the animation, and only then do you remove. Tracking the virtual list needed for this is complicated, but that's where m.tracked() and m.trackedList() come in - they track it for you. You just need to make sure to render whatever they say is live at any given moment.

Here's the equivalent for this PR:

function Fader() {
	const trackHit = m.tracked(this.redraw)

	return (attrs) => m.keyed(trackHit(attrs.show), (handle) => [
		handle.key,
		m("div", {
			class: handle.signal.aborted && "fade-out",
			ontransitionend: handle.release,
		}, "Hi"),
	])
}

function AnimatedTodoList() {
	const items = m.trackedList(this.redraw)
	let nextItem = ""
	let id = 0

	return () => m(".todo-list", [
		m(".todo-list-add", [
			m("input[type=text].next-item-value", {
				value: nextItem,
				oninput(ev) { nextItem = ev.target.value },
			}),
			m("button.next-item-add", {
				onclick() { items.set(id++, nextItem) },
			}, "Add"),
		]),
		m(".todo-list-items", m.keyed(items.live(), (handle) => [
			handle.key,
			m(".todo-item", {
				class: handle.signal.aborted && "fade-out",
				ontransitionend: handle.release,
			}, [
				m(".item-value", handle.value),
				m("button.remove", {onclick: handle.remove}, "Remove"),
			]),
		])),
	])
}

Separating this from the framework also helps bring better structure. For page transitions, you could do something like this:

function Page(attrs) {
	return m("div", {
		class: this.pageHandle.signal.aborted && "fade-out",
		ontransitionend: this.pageHandle.release,
	}, [
		attrs.children,
	])
}

const defaultRoute = "/home"
const routes = {
	"/home": () => m(Home),
	"/user/:id": ({id}) => m.m(ShowUser, {id}),
}

function App() {
	const trackHit = m.tracked(this.redraw)

	return () => m.route("#!", ({route}) => {
		let match
		const template = Object.keys(routes).find((t) => (match = route.match(t)))

		return m.keyed(trackHit(template), (handle) => Boolean(handle.value) && [
			handle.key,
			m.set({pageHandle: handle}, m(Page, routes[handle.value](match)))
		])
	})
}

m.mount(document.body, () => m(App))

This replacement also comes with a unique advantage: you get a pre-made signal that you can use to immediately abort stuff like fetches before the node is removed.

function ShowUser({id}) {
	let user, error

	queueMicrotask(async () => {
		try {
			const response = await fetch(m.p("/api/user/:id", {id}), {signal: this.pageHandle.signal})
			if (response.ok) {
				user = await response.json()
			} else {
				error = await response.text()
			}
		} catch (e) {
			error = e.message
		}
		this.redraw()
	})

	return () => [
		user != null && m(".user-info", [
			m(".user-name", user),
		]),
		error != null && m(".error", error),
		user == null && error == null && m(".loading"),
	]
}

m.tracked() API

  • tracked = m.tracked(redraw, initial?): Create a tracked value. Default initial value is undefined, but you should
  • handles = tracked(state): Track an incoming state value. Returns a list of handles

m.trackedList() API

  • tracked = m.trackedList(redraw, initial?): Create a tracked value.
  • handles = tracked(state): Track an incoming state value. Returns a list of handles

oninitm.init(callback)

In general, oninit was already redundant, even in v2. However, m.init has enough uses of its own that it's not entirely redundant.

To take the ShowUser example from before:

function ShowUser({id}) {
	let user, error

	queueMicrotask(async () => {
		try {
			const response = await fetch(m.p("/api/user/:id", {id}), {signal: this.pageHandle.signal})
			if (response.ok) {
				user = await response.json()
			} else {
				error = await response.text()
			}
		} catch (e) {
			error = e.message
		}
		this.redraw()
	})

	return () => [
		user != null && m(".user-info", [
			m(".user-name", user),
		]),
		error != null && m(".error", error),
		user == null && error == null && m(".loading"),
	]
}

Let's assume there wasn't a this.pageHandle in the context:

function ShowUser({id}) {
	const ctrl = new AbortController()
	let user, error

	queueMicrotask(async () => {
		try {
			const response = await fetch(m.p("/api/user/:id", {id}), {signal: ctrl.signal})
			if (response.ok) {
				user = await response.json()
			} else {
				error = await response.text()
			}
		} catch (e) {
			error = e.message
		}
		this.redraw()
	})

	return () => [
		m.remove(() => ctrl.abort()),
		user != null && m(".user-info", [
			m(".user-name", user),
		]),
		error != null && m(".error", error),
		user == null && error == null && m(".loading"),
	]
}

This could save a couple lines by using m.init.

function ShowUser({id}) {
	let user, error

	return () => [
		m.init(async (signal) => {
			try {
				const response = await fetch(m.p("/api/user/:id", {id}), {signal})
				if (response.ok) {
					user = await response.json()
				} else {
					error = await response.text()
				}
			} catch (e) {
				error = e.message
			}
		}),
		user != null && m(".user-info", [
			m(".user-name", user),
		]),
		error != null && m(".error", error),
		user == null && error == null && m(".loading"),
	]
}

The closest v2 equivalent would be this, similar to the above variant without m.init:

function ShowUser({id}) {
	const ctrl = new AbortController()
	let user, error

	queueMicrotask(async () => {
		try {
			const response = await fetch(m.p("/api/user/:id", {id}), {signal: ctrl.signal})
			if (response.ok) {
				user = await response.json()
			} else {
				error = await response.text()
			}
		} catch (e) {
			error = e.message
		}
		this.redraw()
	})

	return {
		onremove: () => ctrl.abort(),
		view: () => [
			user != null && m(".user-info", [
				m(".user-name", user),
			]),
			error != null && m(".error", error),
			user == null && error == null && m(".loading"),
		],
	}
}

It's a bit niche, but it can save code in certain situations. It also has the added benefit of scheduling the callback so it doesn't block frame rendering. (The earlier examples did this with queueMicrotask, but common idioms usually aren't aware of that concern.)

Note that the similar Comp = m.lazy(() => import("./path/to/Comp.js")) does not do the same; it instead aims to minimize load delay, letting the initial tree be built while the script is still being fetched - the performance considerations there are different.

How Has This Been Tested?

I've written a bunch of new tests, and every current test (to the extent I've kept them) passes. At the time of writing, there are about 219k tests, but all but about 8k of those are the near-exhaustive m.fetch tests.

Types of changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to change)

Checklist

  • My code follows the code style of this project.
  • I have added tests to cover my changes.
  • All new and existing tests passed.
  • My change requires a documentation update, and I've opened a pull request to update it already: I need to write that first, and that may be a bit.
  • I have read https://mithril.js.org/contributing.html.

@17dec
Copy link

17dec commented Oct 21, 2024

I was worried ES2018 might break Pale Moon, but I just tested the counter example and it seems to work. 👍

@panoply
Copy link

panoply commented Oct 21, 2024

I've been waiting for this proposal @dead-claudia. You've been hacking on it for some years now - cool to see it reach this level. Can't wait to get some time and dig into it.

@kfule
Copy link
Contributor

kfule commented Oct 21, 2024

Great job. I will take a look at it later.
Regarding performance, you might want to compare it to v2.0.4 as well, as v2.0.4 still seems to have a lot of users. Also, as mentioned in #2983, the bug fixes and improvements after v2.0.4 have caused a performance regression.

@StephanHoyer
Copy link
Member

StephanHoyer commented Oct 24, 2024

First, thanks again @dead-claudia for the work you put into that PR.

I really like the way you're heading with the move away from magic-props to special vnodes. This will greatly improve composability on the lifecycle front.

I still don't fully understand how some of the new special vnodes work, esp. m.retain, m.set, m.use, m.init, m.throttler and m.debouncer. Maybe you can give a few more examples of these.

What I'm skeptical about

  • drop of m.request: I really like the api and flexibility
  • m.q/m.p: why abbreviate that?
  • the new style of routing (but not sure here, because I think I do not fully grasp that)

Another thing that I'm worried about is the amount of breaking changes and the backwards compatibility. As you might have noticed, the mithril community is not in great shape. Most of the main contributors (community and code) have moved away to other projects, and a lot of the remaining ones have built massive apps that rely on a stable API, which is a big strength of mithril and allows for such big projects.
So a massive change in the API might not help them and might drive them further away from the community.
And for new users, I'm not sure the new (more academic) API is easier to understand than the old one, which was much more straightforward.

So maybe it would be a good idea to keep some of the old ergonomics and deprecate them for at least one major version, so people have time to switch to the newer way of doing stuff. And maybe it turns out that it does not live up to its expectations and people like the old API more.

I would at least keep m.redraw, m.route and m.request and the lifecycle-attrs. Sure this would mean a few kb more, but who cares if we have 6kb or 10kb. They also can partly live in a separate compat-package.

So hope this helps.

@StephanHoyer
Copy link
Member

One thing I forgot to ask:

What's the SSR story of v3? I would really like to discontinue mithril-node-render and see it integrated into the main framework. And the question that comes next: how is the support for hydration of SSR apps?

@kevinfiol
Copy link
Contributor

This is the best early Christmas present 🥲

@dead-claudia
Copy link
Member Author

Updated this PR with some changes and a much better explainer of the new API.

@dead-claudia
Copy link
Member Author

@StephanHoyer

First, thanks again @dead-claudia for the work you put into that PR.

Thanks! 🙏

I really like the way you're heading with the move away from magic-props to special vnodes. This will greatly improve composability on the lifecycle front.

I still don't fully understand how some of the new special vnodes work, esp. m.retain, m.set, m.use, m.init, m.throttler and m.debouncer. Maybe you can give a few more examples of these.

Updated the initial comment to explain these better, with detailed examples that took about a day or two to work out. In the process, I had to modify m.tracked() as I identified a usability issue in it.

What I'm skeptical about

  • drop of m.request: I really like the api and flexibility

Re-added that, just with a different name (to help people more easily notice there's a difference).

  • m.q/m.p: why abbreviate that?

Unabbreviated m.q into m.query since my initial reasoning (to align with m.p) was admittedly weak.

m.p is abbreviated as it's expected to be used a lot. Where v2's m.request, m.route.Link, and m.route.set accept a separate parameters object to interpolate into a route pattern, I've extracted all that into a separate m.p utility that just spits out an interpolated path. It's abbreviated because I anticipate it to be used a lot with fetching and links.

If you're wondering about why I optimized it so far, check the initial comment of src/std/path-query.js - that explains why I did it.

  • the new style of routing (but not sure here, because I think I do not fully grasp that)

Fixed with some explainers in the initial comment. It's pretty easy to squint and see the similarities.

Another thing that I'm worried about is the amount of breaking changes and the backwards compatibility. As you might have noticed, the mithril community is not in great shape. Most of the main contributors (community and code) have moved away to other projects, and a lot of the remaining ones have built massive apps that rely on a stable API, which is a big strength of mithril and allows for such big projects. So a massive change in the API might not help them and might drive them further away from the community.

To be fair, this isn't our first rodeo. And we did go through a similar transition from v0.2 to v1.

And for new users, I'm not sure the new (more academic) API is easier to understand than the old one, which was much more straightforward.

More academic APIs have proven to be exactly the right type of abstraction needed for many problems. While the API in my PR is more academic, I also tried to make it simpler.

This feedback did lead me to move from m.key vnodes to m.keyed fragment vnodes, though. So there were some fair criticisms.

Will call out that I'm pulling a page out of the lessons learned from all the ECMAScript design discussions, including max-min classes. There's three major lessons that stick out the most to me:

  1. The smaller the feature, the more likely it is to gain adoption.
  2. The simpler the feature, the less likely it is people will make mistakes with it.
  3. The narrower the feature, the less likely it will be misused.

Academic APIs can sometimes produce more gotchas than they get rid of, but pragmatic use of academic styles of APIs can often get you 90% of the way there with 10% of the effort. And it's that pragmatic use that was my ultimate goal.

So maybe it would be a good idea to keep some of the old ergonomics and deprecate them for at least one major version, so people have time to switch to the newer way of doing stuff. And maybe it turns out that it does not live up to its expectations and people like the old API more.

I would at least keep m.redraw, m.route and m.request and the lifecycle-attrs. Sure this would mean a few kb more, but who cares if we have 6kb or 10kb. They also can partly live in a separate compat-package.

Ironically, it's those that you named (mod m.request) that are the very things I wanted to switch up the most.

  • m.redraw()this.redraw() isn't much of a change in practice. Most people aren't dealing with multiple mount points, so this will almost be sed-able in practice.
  • m.route(...) is mostly a one-time thing. v2 routes necessarily can't be spread out very far, so users are only porting one set of things. I did make it a point to retain the same route patterns, modulo :rest... being replaced with *rest, and it's very possible to codemod most v2 m.route(elem, defaultRoute, {...routes}) to this PR's m.mount(elem, () => m.route(prefix, ({route}) => ...)).
  • The lifecycle attributes were causing massive composability issues. I almost raised concerns about this back before the v1 redesign was even released.

I did restore m.request in the form of m.fetch. And I locked down its error behavior a lot better.

So hope this helps.

Oh, this feedback was immensely helpful.


One thing I forgot to ask:

What's the SSR story of v3? I would really like to discontinue mithril-node-render and see it integrated into the main framework. And the question the comes next: How is the support for hydration of SSR-Apps?

There's none currently, but I'm open to adding such a story for that. I'd rather see it deferred until after this PR, though.

Labels
major · Type: Breaking Change · Type: Enhancement · Type: Meta/Feedback
Projects
Status: High priority