Disallow re-entering wasm while off main stack #117
Conversation
LGTM.
crates/runtime/src/traphandlers.rs
Outdated
        caller,
    )
});
let callee_stack_cain = VMContext::try_from_opaque(callee)
Suggested change:
-let callee_stack_cain = VMContext::try_from_opaque(callee)
+let callee_stack_chain = VMContext::try_from_opaque(callee)
If I understand this code correctly, then there is a new linear cost to enter Wasm? I suppose the cost may be negligible for now. Nonetheless, I wonder whether we can make this computation lazy? Do we always require immediate access to the `StackChainCell`?
There is some non-constant cost now, but it's not occurring in this piece of code. It's happening when `invoke_wasm_and_catch_traps` calls `wasmtime_runtime::first_wasm_state_on_fiber_stack`. This is O(n), where n is the number of nested (!) calls to `f.call()`, where `f` is a host function wrapped in `Func`.
In other words, n is the number of times you do the following in a nested fashion (see the sketch after this list):
- You're inside a host call.
- You call another host function: that is, you don't call a Rust function directly, but invoke `f.call(...)` on a Rust function `f` that was turned into a `Func` using something like `Func::wrap`.
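Here is a minimal, self-contained sketch of that nesting pattern using wasmtime's public `Func` API (the closures and printed messages are illustrative only; this example is not taken from the PR):

```rust
use wasmtime::{Caller, Engine, Func, Store};

fn main() -> wasmtime::Result<()> {
    let engine = Engine::default();
    let mut store = Store::new(&engine, ());

    // Innermost host function: a plain Rust closure wrapped as a `Func`.
    let inner = Func::wrap(&mut store, || println!("inner host call"));

    // Outer host function: while it is itself executing as a host call, it
    // re-enters the runtime by invoking `inner`. Each such nested re-entry
    // adds one `CallThreadState` that
    // `wasmtime_runtime::first_wasm_state_on_fiber_stack` has to walk.
    let outer = Func::wrap(&mut store, move |mut caller: Caller<'_, ()>| {
        inner
            .call(&mut caller, &[], &mut [])
            .expect("nested call failed");
    });

    // Here n = 1: one `f.call(...)` performed from inside another host call.
    outer.call(&mut store, &[], &mut [])?;
    Ok(())
}
```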
This felt sufficiently convoluted that n should remain negligible in practice. We could turn the O(n) of `wasmtime_runtime::first_wasm_state_on_fiber_stack` into O(1) by storing things differently inside the `CallThreadState`, but that would slightly complicate the design.
I think it is worth documenting for our future selves.
Okay, I've extended the existing documentation in `invoke_wasm_and_catch_traps`.
Co-authored-by: Daniel Hillerström <[email protected]>
…aces (#119)

This PR finalizes the implementation of backtrace creation in the presence of wasmfx stack switching. The only piece missing after this PR is testing, which I'm adding in a separate PR.

As of #117, it is already the case that each `CallThreadState` executing wasm stores a pointer to the underlying `Store`'s `StackChainCell`. This PR changes the existing creation of backtraces in `backtrace.rs` as follows:

Previously, `trace_with_trap_state` would call `trace_through_wasm` for the faulting program position plus all the entries in the current thread's `CallThreadState` chain. Now, instead of calling `trace_through_wasm` directly, `trace_with_trap_state` calls a new function, `trace_through_continuations`. The latter takes an optional `StackChain` and iterates over all the stacks (= continuation stacks + main stack) therein, calling `trace_through_wasm` for each such stack. If no such `StackChain` is given, it just behaves as before.

Note that the invariant established in #117 ensures that only the most recent execution of wasm may actually be inside a continuation, while all previous `CallThreadState`s must be on the main stack. As a result, we only ever call `trace_through_wasm` such that the old invariants (which we had commented out at some point) hold again: its parameters `trampoline_sp` and `fp` describe a continuous (!) sequence of stack memory, either on the main stack or inside a `Fiber` stack. Further, we have `trampoline_sp` > `fp`, and following the frame pointer chain starting at `fp` eventually leads to a pointer past (= greater than or equal to) `trampoline_sp`.

---------

Co-authored-by: Daniel Hillerström <[email protected]>
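To make the iteration described in the #119 commit message above concrete, here is a hypothetical sketch; every type and signature below is invented for illustration and does not match the actual wasmfx sources:

```rust
// Invented stand-ins for the runtime-internal stack bookkeeping.
struct StackRange {
    trampoline_sp: usize,
    fp: usize,
}
struct StackChain {
    stacks: Vec<StackRange>, // continuation stacks + main stack
}

/// Trace each stack in the chain separately; with no chain, behave as
/// before and trace the single given range.
fn trace_through_continuations(
    chain: Option<&StackChain>,
    trampoline_sp: usize,
    fp: usize,
    mut trace_through_wasm: impl FnMut(usize, usize),
) {
    match chain {
        Some(chain) => {
            for stack in &chain.stacks {
                // Each entry is a continuous region with trampoline_sp > fp.
                trace_through_wasm(stack.trampoline_sp, stack.fp);
            }
        }
        None => trace_through_wasm(trampoline_sp, fp),
    }
}
```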
This PR addresses an issue discussed in #109:
Correctly handling the case where we re-enter wasm while already on a continuation stack is difficult. For the time being, we therefore disallow this. This PR adds the necessary logic to detect this.
Concretely, in `invoke_wasm_and_catch_traps`, we inspect the chain of nested wasm (+ host) invocations, represented by the linked list of `CallThreadState` objects maintained in `wasmtime_runtime::traphandlers`. To this end, for those `CallThreadState` objects that represent execution of wasm, we store a pointer to the corresponding `Store`'s `StackChainCell`.
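As a rough illustration of the resulting check, consider the following sketch; all names, types, and the error message are made up here and do not reflect the real `invoke_wasm_and_catch_traps`:

```rust
// Invented stand-ins for the runtime types mentioned above.
enum StackChain {
    MainStack,
    Continuation,
}

struct CallThreadState {
    prev: Option<&'static CallThreadState>,
    // Only set for states that represent execution of wasm.
    stack_chain: Option<*const StackChain>,
}

/// Walk the `CallThreadState` list to the most recent state executing wasm;
/// if its stack chain shows we are currently on a continuation stack (i.e.,
/// off the main stack), refuse to re-enter wasm.
fn check_wasm_reentry(mut state: Option<&CallThreadState>) -> Result<(), &'static str> {
    while let Some(s) = state {
        if let Some(chain) = s.stack_chain {
            // SAFETY (sketch only): assume the pointer is still valid.
            return match unsafe { &*chain } {
                StackChain::Continuation => {
                    Err("re-entering wasm while off the main stack is not allowed")
                }
                StackChain::MainStack => Ok(()),
            };
        }
        state = s.prev;
    }
    Ok(()) // No wasm on the stack yet, so entering is fine.
}
```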
Please note that the diff of the test file `typed_continuations.rs` looks unnecessarily scary: I moved the existing tests into a module `wasi`, and added modules `test_utils` and `host`.