v3 proposal #2982
Conversation
Perf is somewhat improved in spots, but generally the same otherwise.
Cuts only about 2% of the bundle, but removes a pretty bad pain point in the code. Plus, inserting arbitrary HTML outside a container is usually a recipe for disaster in terms of styling.
It's simpler that way and I don't have to condition it.
It's redundant and has been for ages. A v3 increment gives me the ability to minimize surprise.
…t from the router
People can just use equality and regexps as needed.
Also, rename it `m` to make dev stack traces a little more readable, and remove a useless layer of indirection.
This makes that class fully monomorphic and also a bit safer.
That's not been there for years
The long comment at the top of it explains the motivation.
It's now been made redundant with `m.tracked`.
Within the hyperscript side, it's a significant (roughly 10%) perf boost.
Turns out this was way more perf-sensitive than I initially thought. Also included a long comment to explain everything, as it's not immediately obvious just looking at it that it'd be perf-sensitive.
I was worried ES2018 might break Pale Moon, but I just tested the counter example and it seems to work. 👍
I've been waiting for this proposal @dead-claudia. You've been hacking on it for some years now - cool to see it reach this level. Can't wait to get some time and dig into it.
Great job. I will take a look at it later.
First, thanks again @dead-claudia for the work you put into that PR. I really like the way you're heading with the move away from magic props to special vnodes. This will greatly improve composability on the lifecycle front. I still don't fully understand how some of the new special vnodes work, esp. …

What I'm skeptical about

Another thing that I'm worried about is the amount of breaking changes and the backwards compatibility. As you might have noticed, the mithril community is not in a great shape. Most of the main contributors (community and code) have moved away to other projects, and a lot of the remaining ones have built massive apps that rely on a stable API, which is a big strength of mithril and allows for such big projects. So maybe it would be a good idea to keep some of the old ergonomics and deprecate them for at least one major version, so people have time to switch to the newer way of doing stuff. And maybe it turns out that it does not live up to its expectations and people like the old API more. I would at least keep …

So hope this helps.

One thing I forgot to ask: what's the SSR story of v3? I would really like to discontinue mithril-node-render and see it integrated into the main framework. And the question that comes next: how is the support for hydration of SSR apps?
…e `m.use`

`m.use` is specially designed to be used a fair amount, and simulating it is relatively slow, so it's worth making it a built-in. `m.key` -> `m.keyed` provides a simpler interface, one where you don't have to type `key` out explicitly anymore. (Most frameworks require you to explicitly add a "key" attribute, while this approaches it more like a glorified `Map`.)
It's very common to want context and not care about attributes. What's *not* common is wanting the old attributes and not the new. So that's what this optimizes for.
This is the best early Christmas present 🥲
It mostly delegates to `fetch`. Also, kill `m.withProgress` as that only existed because there was no `m.request`.
Updated this PR with some changes and a much better explainer of the new API.
Thanks! 🙏
Updated the initial comment to explain these better, with detailed examples that took about a day or two to work out. In the process, I had to modify
Re-added that, just with a different name (to help people more easily notice there's a difference).
Unabbreviated
If you're wondering about why I optimized it so far, check the initial comment of
Fixed with some explainers in the initial comment. It's pretty easy to squint and see the similarities.
To be fair, this isn't our first rodeo. And we did go through a similar transition from v0.2 to v1.
More academic APIs have proven to be exactly the right type of abstraction needed for many problems. While the API in my PR is more academic, I also tried to make it simpler. This feedback did lead me to move from …

I'll call out that I'm pulling a page out of the lessons learned from all the ECMAScript design discussions, including max-min classes. There are three major lessons that stick out the most to me:
Academic APIs can sometimes produce more gotchas than they get rid of, but pragmatic use of academic styles of APIs can often get you 90% of the way there with 10% of the effort. And it's that pragmatic use that was my ultimate goal.
Ironically, it's those that you named (mod
I did restore
Oh, this feedback was immensely helpful.
There's none currently, but I'm open to adding such a story for that. I'd rather see it deferred until after this PR, though.
I want your feedback. Please use it, abuse it, and tell me what issues you find. Bugs, ergonomic issues, weird slowdowns, I want to know it all. Feel free to drop a comment here on whatever's on your mind, so I can easily track it.
Also, @MithrilJS/collaborators please don't merge this yet. I want feedback first. I also need to create a docs PR before this can get merged.
Description
This is my v3 proposal. It's pretty detailed and represents a comprehensive overhaul of the API. It synthesizes much of my API research. And yes, all tests pass. 🙂
Preserved the commit history if you're curious how I got here. (Reviewers: you can ignore that. Focus on the file diff - you'll save yourself a lot of time.)
If you want to play around, you can find artifacts here: https://github.com/dead-claudia/mithril.js/releases/tag/v3.0.0-alpha.1
This is not published to npm, so you'll have to use a GitHub tag/commit link. If you just want to mess around in a code playground, here's a link for you.
Quick line-by-line summary
If you're short on time and just want a very high-level summary, here's a comprehensive list, grouped by type.

Highlights:

Additions:
- `m.layout((dom) => ...)`, scheduled to be invoked after render
- `m.remove((dom) => ...)`, scheduled to be invoked after the render in which it's removed
- `m.retain()`, retains the current vnode at that given position
- `m.set(contextKeys, ...children)`, sets context keys you can get via the third parameter in components
- `m.use([...deps], ...children)`, works similar to React's `useEffect`
- `m.init(async (signal) => ...)`, callback return works just like event listeners
- `tracked = m.tracked()` to manage delayed removal
- `m.throttler()`
- `m.debouncer()`
Changes:
- `key:` attribute → `m.keyed(view, (item) => [key, vnode])`
- `m.buildPathname(template, query)` → `m.p(template, query)`
- `m.buildQueryString(query)` → `m.query(query)`
- Event listeners return/resolve with `false` to prevent redraws.
- Subtree behavior on thrown errors is controlled by the `removeOnThrow` option passed to `m.render`.
- `m.route(elem, defaultRoute, {...routes})` -> `m.route(prefix, view)`
- `m.mount(...)` now returns a `redraw` function that works like `m.redraw`, but only for that one root. `redraw` is also set in the context via the `redraw` key.
- `m.render(...)` now accepts an options object for its third parameter. `redraw` sets the redraw function (and is made available in the context).
- Components are `(attrs, old, context) => vnode` functions, or an `(attrs, null, context) => ...` function returning that
- `m.request(url, opts)` → `m.fetch(url, opts)`, accepts all `window.fetch` options, `onprogress`, `extract`, and `responseType`.

Removals:
- `mithril/stream` - the main library no longer needs the DOM to load
- `m.trust` - use `innerHTML` instead
- `m.censor`
- `m.vnode(...)` - use `m.normalize` if it's truly necessary
- `m.mount(root, null)` - use `m.render(root, null)` instead
- `m.redraw()` -> either the `redraw` function returned from `m.mount` or the `redraw` context key
- `oninit`
- `oncreate`/`onupdate` - use `m.layout(...)` instead
- `onremove` - use `m.remove(...)` instead
- `onbeforeupdate` - use `m.retain()` instead
- `onbeforeremove` - use the `m.tracked()` utility to manage parents instead
- `m.parsePathname`
- `m.parseQueryString`
- `ev.redraw = false` - return/resolve with `false` or reject/throw from event handlers

Miscellaneous:
- `mithril/stream` was reworked to require a lot less allocation - I don't have precise numbers, but it should both be faster and require less memory.

Benchmarks
Details about my setup and related raw data follow the table.
Benchmarked scenarios (for both v2.2.8 and this PR): simpleTree, nestedTree, mutateStylesPropertiesTree, repeatedTree, shuffledKeyedTree. The numeric results are in the raw output below.
Browser version:
This was run in a private window, with only the following two extensions:
Each run was loaded in a new private window and reloaded twice to ensure everything was fully loaded from cache.
Raw output for v2.2.8 benchmarks (ported from this PR's):
Raw output for benchmarks for this PR:
Motivation and Context
This resolves a number of ergonomic issues around the API, crossing out many long-standing feature requests and eliminating a number of gotchas.
Related issues
- Fixes https://github.com//issues/1937 by enabling it to be done in userland, mostly via `m.render(elem, vnode, {removeOnThrow: true})`
- Resolves https://github.com//issues/2310 by dropping `m.trust`
- Resolves https://github.com//issues/2505 by not resolving routes
- Resolves https://github.com//issues/2531 by not resolving routes
- Fixes https://github.com//issues/2555
- Resolves https://github.com//issues/2592 by dropping `onbeforeremove`
- Fixes https://github.com//issues/2621 by making each mount independent of other mount points
- Fixes https://github.com//issues/2645
- Fixes https://github.com//issues/2778
- Fixes https://github.com//issues/2794 by dropping internal `ospec`
- Fixes https://github.com//issues/2799
- Resolves https://github.com//issues/2802 by dropping `m.request`

Related discussions

- Resolves https://github.com//discussions/2754
- Resolves https://github.com//discussions/2775 by not resolving routes
- Implements https://github.com//discussions/2912
- Implements https://github.com//discussions/2915 by throwing
- Resolves https://github.com//discussions/2916 by dropping `m.request`
- Implements https://github.com//discussions/2917 with some minor changes
- Implements https://github.com//discussions/2918
- Implements https://github.com//discussions/2919
- Implements https://github.com//discussions/2920
- Resolves https://github.com//discussions/2922 by dropping `m.request`
- Resolves https://github.com//discussions/2924 by dropping `m.request`
- Implements https://github.com//discussions/2925
- Resolves https://github.com//discussions/2926 by dropping `m.request`
- Resolves https://github.com//discussions/2929 by not doing it (doesn't fit with the model)
- Resolves https://github.com//discussions/2931 by not resolving routes
- Resolves https://github.com//discussions/2934 by dropping `m.request`
- Resolves https://github.com//discussions/2935 by not resolving routes
- Partially implements https://github.com//discussions/2936
- Partially implements https://github.com//discussions/2937, intentionally skips rest
- Resolves https://github.com//discussions/2941 by not resolving routes
- Implements https://github.com//discussions/2942 by offering a configurable `route` context key
- Implements https://github.com//discussions/2943
- Implements https://github.com//discussions/2945
- Resolves https://github.com//discussions/2946 by making the router a component instead (thus making it implicit)
- Implements https://github.com//discussions/2948 by throwing
- Resolves https://github.com//discussions/2950 by dropping `m.request`

Things this does *not* resolve

- https://github.com//issues/2256 (This needs some further digging to lock down precisely what needs done)
- https://github.com//issues/2315
- https://github.com//issues/2359
- https://github.com//issues/2612
- https://github.com//issues/2623
- https://github.com//issues/2643
- https://github.com//issues/2809
- https://github.com//issues/2886

New API
This is all in the style of https://mithril.js.org/api.html. I also include comparisons to v2. It's collapsed so you can skip past it without scrolling for days.
New API overview
Vnodes
Vnodes have changed massively under the hood. Now, everything is packed into a much smaller object (down from 10 fields to 6) when normalized. This means that in large apps, you can get away with way more components before you see perf issues. (Significantly reducing memory overhead is a partial goal of this change.)
Each type below also contains a quick explainer of the underlying normalized vnode object representing them. If a field isn't documented below, it's either not used or an implementation detail.
`onbeforeupdate` → `m.retain()`

Return `m.retain()` if you want to retain the previous vnode. Also, components always receive the previous attributes via their second argument (`null` on first render).

Here's an overoptimized, overabstracted counter to show the difference:
This PR:
v2:
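A rough sketch (not the original counter example) of how `m.retain()` and the previous-attributes argument can combine, using the `(attrs, old, context) => vnode` component shape described later:

```js
// Hypothetical component: skip re-rendering when `count` hasn't changed.
const Counter = (attrs, old) => {
    if (old && old.count === attrs.count) return m.retain()
    return m("p", "Count: ", attrs.count)
}
```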
Normalized vnode fields:
- `vnode.m` is specially set to -1.
- Its remaining fields are filled in from the vnode it retains (whether they hold a value, `null`, or `undefined`). For instance, if the vnode it was retaining is an `m("div")`, it gets its `vnode.d` field set to the rendered DOM element.

`oncreate`/`onupdate`/`onremove` → `m.layout`/`m.remove`

DOM-related lifecycle methods have been moved to vnodes. Here's an example using the FullCalendar Bootstrap 4 plugin (an old version of this is currently used in the docs), to show the difference:
This PR, option 1: nest the `m.layout` and `m.remove`

This PR, option 2: use `vnode.d` to get the element

v2:
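The original code blocks aren't shown here; as a rough sketch of option 1, with a hypothetical `createCalendar`/`destroy` widget API standing in for FullCalendar, and assuming the `dom` argument refers to the element the vnode is nested in:

```js
// Closure component: create the widget after first render, tear it down on removal.
function CalendarHost() {
    let calendar
    return (attrs) => m("div.calendar", [
        m.layout((dom) => {
            // Runs after each render; create the widget once the element exists.
            if (!calendar) calendar = createCalendar(dom, attrs.options)
        }),
        m.remove(() => {
            // Runs after the render in which this vnode is removed.
            calendar.destroy()
        }),
    ])
}
```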
Normalized vnode fields:
- `vnode.m & m.TYPE_MASK` is `m.TYPE_LAYOUT` for `m.layout(...)`, `m.TYPE_REMOVE` for `m.remove(...)`.
- `vnode.a` holds the callback for both types.

`m(selector, attrs, children)`, JSX `<selector {...attrs}>{children}</selector>`
Same as before, but with a few important differences:
- There are no magic lifecycle attributes anymore. (For similar reasons, `m.censor` is removed.)
- Event listeners return `false` to prevent redraw rather than to "capture" the event (use `m.capture(ev)` for that).

This PR:
v2:
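A rough sketch (not the original example) of the listener behavior described above, with a hypothetical `log` attribute:

```js
// Returning `false` from a listener skips the automatic redraw for that event.
const LogButton = (attrs) =>
    m("button.log", {
        onclick() {
            attrs.log.push(Date.now()) // side effect only; nothing visible changes
            return false               // so skip the redraw
        },
    }, "Record timestamp")
```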
Normalized vnode fields:
- `vnode.m & m.TYPE_MASK` is `m.TYPE_ELEMENT`.
- `vnode.t` holds the selector, `"div.class#id"` in this case.
- `vnode.a` holds the attributes. Note that if both the selector and attributes have a `class`/`className` attribute, they're now normalized to `class`, not `className`.
- `vnode.c` holds the (normalized) children.
- `vnode.d` holds the DOM element, once rendered.

`m(Component, attrs, children)`, JSX `<Component {...attrs}>{children}</Component>`
Usage is the same as before, but with one difference: no magic lifecycle methods. You'll need to use special attributes to expose inner DOM nodes, lifecycle, and state, and you'll need to use lifecycle vnodes outside the component for everything else. (For similar reasons, `m.censor` is removed.)

The component definition differs a lot, though, and that'll be covered later.
This PR:
v2:
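A rough sketch (not the original example) of the two component shapes this PR describes, using the `Greeter` name from the field notes below and a hypothetical `Counter`:

```js
// Stateless: a plain (attrs, old, context) => vnode function.
const Greeter = (attrs) => m("h1", "Hello, ", attrs.name)

// Stateful: a function called once per instance that returns the view function,
// closing over its own state.
function Counter() {
    let count = 0
    return (attrs, old, context) => m("button", {
        onclick() { count++ }, // event listeners schedule a redraw by default
    }, `Count: ${count}`)
}
```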
Normalized vnode fields:
- `vnode.m & m.TYPE_MASK` is `m.TYPE_COMPONENT`.
- `vnode.t` holds the component, `Greeter` in this case.
- `vnode.a` holds the attributes. Any children passed in are merged in as `{...attrs, children}`, but they are not normalized.
- `vnode.c` holds the instance vnode, once rendered.

Holes: `null`, `undefined`, `false`, and `true`

Holes work exactly the same as before, with all the same rules. And like before, they're normalized to `null`.

`"text"`
Text vnodes work exactly the same as before, with all the same rules. Anything neither a hole, a fragment array, nor a normalized vnode is stringified. Symbols can even be stringified like before.
Normalized vnode fields:
- `vnode.m & m.TYPE_MASK` is `m.TYPE_TEXT`.
- `vnode.a` holds the (stringified) text.

`[...children]`, `m(m.Fragment, ...children)`, JSX `<>{children}</>`
Unkeyed fragments work the same as before, but with three differences:
- `m.fragment(...)` is gone. Use `m.normalize([...])` or `m(m.Fragment, ...)` instead - there's no longer a dedicated normalized fragment vnode factory.
- `m.Fragment` actually looks like a component, rather than looking like an element selector (and being faked as a component in TypeScript types).
- … `m(m.Fragment, ...)`.

Skipping examples, since it's the same across both aside from the above.
Normalized vnode fields:
- `vnode.m & m.TYPE_MASK` is `m.TYPE_FRAGMENT`.
- `vnode.c` holds the (normalized) children.

`m.keyed(list, view)`, `m.keyed([...entries])`
Keyed fragments have changed. Instead of returning an array of vnodes with `key` properties, you use `m.keyed(...)`. In this new model, keys are more implicit, and while there are a few more brackets and parentheses, it's a bit easier to follow due to the added context of "this is a keyed list, not a normal one" given by the `m.keyed` call.

This PR:
v2:
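A rough sketch (not the original example) of the new keyed form, with a hypothetical todo list:

```js
// Each entry maps to a [key, vnode] pair instead of carrying a `key:` attribute.
const TodoList = (attrs) =>
    m("ul", m.keyed(attrs.todos, (todo) => [todo.id, m("li", todo.title)]))
```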
Normalized vnode fields:
- `vnode.m & m.TYPE_MASK` is `m.TYPE_KEYED`.
- `vnode.c` holds the key-to-(normalized)-child map.

`m.use([...deps], ...children)`
For cases where you want to reset something based on state, this makes that way easier for you, and it's much more explicit than a single-item keyed list. (In fact, it was such a common question during v0.2 and v1 days that we had to eventually document it.) This new factory provides a better story, one that should also hopefully facilitate hooks-like use cases as well.
This PR:
v2:
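A rough sketch (not the original example) of `m.use`, with hypothetical child components:

```js
// Everything inside the `m.use` is torn down and recreated whenever
// `attrs.userId` changes - no single-item keyed list needed.
const UserPane = (attrs) =>
    m.use([attrs.userId], [
        m(UserDetails, {id: attrs.userId}),
        m(UserPosts, {id: attrs.userId}),
    ])
```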
Normalized vnode fields:
- `vnode.m & m.TYPE_MASK` is `m.TYPE_USE`.
- `vnode.a` holds the collected dependency array (`deps` can be any iterable, not just an array).
- `vnode.c` holds the (normalized) children.

`m.set({key: value, ...}, ...children)`
This type is entirely new, without a v2 equivalent. You've probably noticed a bunch of `this.route` and `this.redraw()` spattered everywhere. Well, `this.redraw()` is provided implicitly by the runtime, but the new `m.route(...)` directly uses `m.set({route}, view())` internally.

Normalized vnode fields:
- `vnode.m & m.TYPE_MASK` is `m.TYPE_USE`.
- `vnode.a` holds the collected dependency array (`deps` can be any iterable, not just an array).
- `vnode.c` holds the (normalized) children.

`m.inline(view)`
This type is entirely new, without a v2 equivalent. It's equivalent to the following component, but is optimized internally (and is its own vnode type).

It's used internally by the new `m.route(prefix, view)`. See that helper for more details on how this plays out in practice.
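The "following component" block isn't reproduced here; roughly, and only as a hedged sketch, `m.inline(view)` behaves like an anonymous component that just invokes its view with the context:

```js
// Sketch: an approximate userland equivalent of m.inline(view).
const Inline = (attrs, old, context) => attrs.view.call(context, context)
// m.inline(view) ≈ m(Inline, {view}), except it's its own (faster) vnode type.
```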
Normalized vnode fields:
- `vnode.m & m.TYPE_MASK` is `m.TYPE_INLINE`.
- `vnode.a` holds the view function.
- `vnode.c` holds the instance vnode, once rendered.

`m.mount(element, component)` → `m.mount(element, view)`
This PR:
v2:
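A rough sketch (not the original example); `App` is a hypothetical component:

```js
// `m.mount` now takes a view function and returns a redraw function scoped to this root.
const redraw = m.mount(document.getElementById("app"), () => m(App))

// Later, outside of event listeners (which redraw automatically):
redraw()
```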
`m.redraw()` → `context.redraw()`
This PR, option 1: use from a component
This PR, option 2: use from a mount view callback
v2:
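A rough sketch of option 1 (not the original example), with a hypothetical ticking clock component:

```js
function Ticker() {
    let seconds = 0
    let timer
    return (attrs, old, context) => [
        m.init(() => {
            // `context.redraw` only redraws the root this component is mounted under.
            timer = setInterval(() => { seconds++; context.redraw() }, 1000)
        }),
        m.remove(() => clearInterval(timer)),
        m("p", `Elapsed: ${seconds}s`),
    ]
}
```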
Routing: `m.route(root, defaultRoute, {...routes})` → `m.route(prefix, view)`

It's more verbose, but there are unique benefits to doing it this way. For one, you can use plain `if` statements to guard your routes.

The main goal here is to get out of your way as much as possible: give you the helpers needed to easily solve routing yourself with only basic abstractions, instead of trying to magically solve it for you and ultimately failing because the abstraction didn't quite work as you needed it to.
This PR, option 1: define using static strings and/or regexps

This PR, option 2: use `route.match(...)` (equivalent to `m.match(route, path)`) to match route templates

v2:
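A rough, heavily hedged sketch of option 1 - the `route.path` property name, the `route.match` return value, and the `Home`/`UserPage`/`NotFound` components are all assumptions made for illustration:

```js
m.mount(document.getElementById("app"), () =>
    m.route("#!", (route) => {
        // Plain equality checks and helpers guard the routes; no route resolution step.
        if (route.path === "/") return m(Home)
        const user = route.match("/user/:id") // assumed to return matched params or null
        if (user) return m(UserPage, {id: user.id})
        return m(NotFound)
    })
)
```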
Note that while the above uses the first argument, the `view` is actually passed into an `m.inline`, so it receives all the context, not just the `route` key, and it also receives it as both `this` and the first parameter. So this is equivalent to the second option:

`route.set(path)`
Current route
`m(m.route.Link, ...)` → `m.link(href, opts?)`
This PR:
v2:
What happened to route resolvers?

Instead of accepting components, everything is essentially a `render` function from v2. But for `onmatch`, the only two legitimate uses for it are auth and lazy route loading. There are better ways to handle both:

- Lazy route loading: use the new `Comp = m.lazy(() => import("./path/to/Comp.js"))` helper (sketched below).
- Auth: don't defer it through `onmatch`. Instead, the view for it should be rendered right away, just with loading placeholders for parts that aren't yet ready to show. Trying to force that into a lazy loading system would just cause blank screen flickering and other stuff that at best doesn't look pretty and at worst causes accessibility issues.
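A short sketch of the `m.lazy` helper mentioned above (the import path is illustrative):

```js
// The component is fetched on demand and then used like any other component.
const UserPage = m.lazy(() => import("./path/to/UserPage.js"))

m(UserPage, {id: 42})
```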
`m.buildPathname(template, params)` → `m.p(template, params)`
It's the same API, just with a shorter name as it's expected to be used a lot more. It also has an entirely new implementation that's almost twice as fast when parameters are involved, to match that same expected jump in usage.
Here's another example of how it could end up used.
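The original example isn't shown here; a rough stand-in, assuming `m.p` keeps `m.buildPathname`'s interpolation behavior (path parameters fill the template, leftovers become the query string):

```js
m.p("/api/user/:id/posts", {id: 42, page: 2})
// → "/api/user/42/posts?page=2"
```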
`m.buildQueryString(object)` → `m.query(object)`

Same as before, just with a shorter name to complement `m.p`. It also reuses some of the optimizations created for `m.p`.

`m.request(url, opts?)` → `m.fetch(url, opts)`
The new `m.fetch` accepts all the same arguments as `window.fetch`, plus a few other options:

- `opts.responseType`: Selects the type of value to return (default: `"json"`)
  - `"json"`: Parse the result as JSON and return the parsed result
  - `"formdata"`: Parse the result as `multipart/form-data` and return the parsed `FormData` object
  - `"arraybuffer"`: Collect the result into an `ArrayBuffer` object and return it
  - `"blob"`: Collect the result into a `Blob` object and return it
  - `"text"`: Collect the result into a UTF-8 string and return it
  - `"document"`: Parse the result as HTML/XML and return the parsed `Document` object
- `opts.extract`: Instead of using `responseType` to determine how to extract it, you can provide a `(response) => result` function to extract the result. Only called on 2xx and takes precedence over `opts.responseType`.
- `opts.onprogress`: Pass a `(current, total) => ...` function to get called on download progress.

`m.fetch` returns a promise that resolves according to `opts.responseType` and `opts.extract` on 2xx, and rejects with an error on anything else. Errors returned through rejections have the following properties:

- `message`: The returned UTF-8 text, or the status text if empty. If an error was thrown during the `m.fetch` call, its message is duplicated here instead.
- `status`: The status code, or 0 if it failed before receiving a response.
- `response`: The response from the inner `fetch` call, or `undefined` if the inner `fetch` call itself failed to return a response.
- `cause`: If an error was thrown during the `m.fetch` call, this is set to that error.
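A short sketch (not from the PR) tying the options and error properties together:

```js
// Fetch a user as JSON, reporting download progress; non-2xx responses reject.
try {
    const user = await m.fetch(m.p("/api/user/:id", {id: 1}), {
        responseType: "json", // the default
        onprogress: (current, total) => console.log(`${current}/${total} bytes`),
    })
    console.log(user.name)
} catch (e) {
    console.error(e.status, e.message, e.cause)
}
```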
`m.throttler()`

This is a general-purpose bi-edge throttler, with a dynamically configurable limit. It's much better than your typical `throttle(f, ms)` because it lets you easily separate the trigger and reaction using a single shared, encapsulated state object. That same separation is also used to make the rate limit dynamically reconfigurable on hit.

Create as `throttled = m.throttler(ms)` and do `if (await throttled()) return` to rate-limit the code that follows. The result is one of three values, to allow you to identify edges:

- `undefined`
- `false`, returned only if a second call was made
- `true`

Call `throttled.update(ms)` to update the interval. This not only impacts future delays, but also any current one.

To dispose, like on component removal, call `throttled.dispose()`.

If you don't specify a delay, it defaults to 500ms on creation, which works well enough for most needs. There is no default for `throttled.update(...)` - you must specify one explicitly.

Example usage:
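The original example isn't reproduced here; a rough sketch of the pattern, with a hypothetical scroll-tracking component:

```js
function ScrollSpy() {
    const throttled = m.throttler(200)
    let position = 0
    return () => m("div.spy", {
        async onscroll(ev) {
            if (await throttled()) return  // bail out when rate-limited, per the pattern above
            position = ev.target.scrollTop // otherwise record the position
        },
    }, [
        m.remove(() => throttled.dispose()), // clean up the throttler on removal
        `Scrolled to ${position}px`,
    ])
}
```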
Here's the v2 equivalent of the above code, to show how this (and async event listeners) help reduce how much you need to write.
`m.debouncer()`

A general-purpose bi-edge debouncer, with a dynamically configurable limit. It's much better than your typical `debounce(f, ms)` because it lets you easily separate the trigger and reaction using a single shared, encapsulated state object. That same separation is also used to make the rate limit dynamically reconfigurable on hit.

Create as `debounced = m.debouncer(ms)` and do `if (await debounced()) return` to rate-limit the code that follows. The result is one of three values, to allow you to identify edges:

- `undefined`
- `false`, returned only if a second call was made
- `true`

Call `debounced.update(ms)` to update the interval. This not only impacts future delays, but also any current one.

To dispose, like on component removal, call `debounced.dispose()`.

If you don't specify a delay, it defaults to 500ms on creation, which works well enough for most needs. There is no default for `debounced.update(...)` - you must specify one explicitly.

Example usage:
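The original example isn't reproduced here; a rough sketch with a hypothetical search input (`onResults` is an assumed attribute):

```js
function Search() {
    const debounced = m.debouncer(300)
    return (attrs) => m("input[type=search]", {
        async oninput(ev) {
            const query = ev.target.value
            if (await debounced()) return // bail out until input settles, per the pattern above
            attrs.onResults(await m.fetch(m.p("/api/search", {q: query})))
        },
    })
}
```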
Here's the v2 equivalent of the above code, to show how this (and async event listeners in general) help reduce how much you need to write.
`onbeforeremove` → `m.tracked(redraw, initial?)`, `m.trackedList(redraw, initial?)`
In v2, for managing animations, you'd do something like this:
In this PR, things are done a bit differently as it's much lower-level: you don't remove right away, but instead you first trigger the animation, and then you remove. Tracking the virtual list needed for this is complicated, but that's where `m.tracked()` and `m.trackedList()` come in - they track it for you. You just need to make sure to render what they say is live in the given moment.

Here's the equivalent for this PR:
Separating this from the framework also helps bring better structure. For page transitions, you could do something like this:
This replacement also comes with a unique advantage: you get a pre-made signal that you can use to immediately abort stuff like fetches before the node is removed.
`m.tracked()` API

- `tracked = m.tracked(redraw, initial?)`: Create a tracked value. Default initial value is `undefined`, but you should …
- `handles = tracked(state)`: Track an incoming state value. Returns a list of handles.

`m.trackedList()` API

- `tracked = m.trackedList(redraw, initial?)`: Create a tracked value.
- `handles = tracked(state)`: Track an incoming state value. Returns a list of handles.

`oninit` → `m.init(callback)`
In general, `oninit` is redundant, even in v2. However, `m.init` has enough use in it that it's not entirely redundant.

To take the `ShowUser` example from before:

Let's assume there wasn't a `this.pageHandle` in the context:

This could save a couple lines by using `m.init`.
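The example blocks above aren't reproduced; a rough stand-in (not the original `ShowUser`), assuming `m.fetch`, `m.p`, and the closure component shape from earlier:

```js
function ShowUser() {
    let user
    return (attrs) => [
        m.init(async (signal) => {
            // `signal` is aborted if this vnode is removed before the fetch settles.
            user = await m.fetch(m.p("/api/user/:id", {id: attrs.id}), {signal})
            // Resolving schedules a redraw, just like an async event listener.
        }),
        user ? m("p", "Hello, ", user.name) : m("p", "Loading..."),
    ]
}
```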
The closest v2 equivalent would be this, similar to the above variant without `m.init`:

It's a bit niche, but it can save code in certain situations. It also has the added benefit of scheduling the block so it doesn't block frame rendering. (This was done before with `queueMicrotask`, but common idioms aren't usually aware of that.)

How Has This Been Tested?
I've written a bunch of new tests, and every current test (to the extent I've kept them) passes. At the time of writing, there are about 219k tests, but all but about 8k of them are for the near-exhaustive `m.fetch` tests.

Types of changes

Checklist