
Disallow re-entering wasm while off main stack #117

Merged

Conversation


@frank-emrich frank-emrich commented Feb 27, 2024

This PR addresses an issue discussed in #109:
Correctly handling the case where we re-enter wasm while already on a continuation stack is difficult. For the time being, we therefore disallow it; this PR adds the necessary logic to detect such re-entry.

Concretely, in invoke_wasm_and_catch_traps, we inspect the chain of nested wasm (+ host) invocations, represented by the linked list of CallThreadState objects maintained in wasmtime_runtime::traphandlers. To this end, for those CallThreadState objects that represent execution of wasm, we store a pointer to the corresponding Store's StackChainCell.
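
The detection described above can be sketched roughly as follows. This is a minimal illustration with hypothetical, simplified types; wasmtime's real CallThreadState and StackChainCell carry much more state, and the names below stand in for the actual definitions.

```rust
/// Which stack a wasm execution is running on (stand-in for the
/// information reachable through the Store's StackChainCell).
#[derive(Clone, Copy, PartialEq)]
enum StackChain {
    MainStack,
    Continuation, // a fiber/continuation stack
}

/// One entry in the linked list of nested wasm/host invocations.
struct CallThreadState {
    /// `Some(..)` iff this state represents execution of wasm; in that
    /// case it records which stack chain the Store is currently on.
    stack_chain: Option<StackChain>,
    prev: Option<Box<CallThreadState>>,
}

/// Walk the list and return the most recent state that executes wasm.
/// This is O(n) in the number of intervening host-only states.
fn first_wasm_state(mut state: Option<&CallThreadState>) -> Option<&CallThreadState> {
    while let Some(s) = state {
        if s.stack_chain.is_some() {
            return Some(s);
        }
        state = s.prev.as_deref();
    }
    None
}

/// Entering wasm is allowed only if the most recent wasm execution
/// (if any) is still on the main stack.
fn may_enter_wasm(head: Option<&CallThreadState>) -> bool {
    match first_wasm_state(head) {
        Some(s) => s.stack_chain == Some(StackChain::MainStack),
        None => true,
    }
}
```

In this sketch, invoke_wasm_and_catch_traps would perform the may_enter_wasm check before running wasm and report an error otherwise.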

Please note that the diff of the test file typed_continuations.rs looks unnecessarily scary: I moved the existing tests into a module wasi, and added modules test_utils and host.

@dhil dhil self-requested a review February 27, 2024 15:32
@dhil dhil (Member) left a comment:

LGTM.

@dhil dhil (Member) commented on the diff:

            caller,
        )
    });
    let callee_stack_cain = VMContext::try_from_opaque(callee)

Suggested change:
  - let callee_stack_cain = VMContext::try_from_opaque(callee)
  + let callee_stack_chain = VMContext::try_from_opaque(callee)

@dhil dhil (Member) commented:

If I understand this code correctly, there is now a linear cost to entering Wasm? I suppose the cost may be negligible for now. Nonetheless, I wonder whether we could make this computation lazy. Do we always require immediate access to the StackChainCell?

@frank-emrich frank-emrich (Author) replied:

There is some non-constant cost now, but it's not occurring in this piece of code. It's happening when invoke_wasm_and_catch_traps calls wasmtime_runtime::first_wasm_state_on_fiber_stack. This is O(n), where n is the number of nested (!) calls to f.call(), where f is a host function wrapped in Func.

In other words, n is the number of times you do the following in a nested fashion:

  • You're inside a host call.
  • From there you call another host function, i.e., not a plain Rust function, but a Rust function that was turned into a Func f using something like Func::wrap, which you then invoke via f.call(...).

This felt sufficiently convoluted that n should remain negligible in practice. We could turn the O(n) of wasmtime_runtime::first_wasm_state_on_fiber_stack into O(1) by storing things differently inside the CallThreadState, but that would slightly complicate the design.
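
The O(1) alternative mentioned here could look roughly like the following. This is purely an illustration of the trade-off, using hypothetical simplified types rather than wasmtime's actual design: each newly pushed state caches, at push time, whether any enclosing wasm execution is already off the main stack.

```rust
/// Which stack a wasm execution runs on (simplified stand-in).
#[derive(Clone, Copy, PartialEq)]
enum Stack {
    Main,
    Fiber, // a continuation stack
}

struct CallThreadState {
    /// `Some(..)` iff this state executes wasm, recording its stack.
    wasm_stack: Option<Stack>,
    /// Cached at push time: does any enclosing state run wasm on a fiber?
    enclosing_on_fiber: bool,
    prev: Option<Box<CallThreadState>>,
}

impl CallThreadState {
    /// Push a new state onto the chain; the cached flag is inherited
    /// and updated in O(1), instead of being recomputed by traversal.
    fn push(prev: Option<Box<CallThreadState>>, wasm_stack: Option<Stack>) -> CallThreadState {
        let enclosing_on_fiber = prev.as_ref().map_or(false, |p| {
            p.enclosing_on_fiber || p.wasm_stack == Some(Stack::Fiber)
        });
        CallThreadState { wasm_stack, enclosing_on_fiber, prev }
    }

    /// O(1) re-entry check: no list traversal required.
    fn may_enter_wasm(&self) -> bool {
        !(self.enclosing_on_fiber || self.wasm_stack == Some(Stack::Fiber))
    }
}
```

The cost of this variant is exactly the extra bookkeeping inside CallThreadState that the comment above alludes to.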

@dhil dhil (Member) replied:

I think it is worth documenting for our future selves.

@frank-emrich frank-emrich (Author) replied:

Okay, I've extended the existing documentation in invoke_wasm_and_catch_traps.

crates/runtime/src/traphandlers.rs (review thread, outdated, resolved)
@frank-emrich frank-emrich merged commit ae380fd into wasmfx:main Feb 29, 2024
19 checks passed
frank-emrich added a commit that referenced this pull request Mar 6, 2024
…aces (#119)

This PR finalizes the implementation of backtrace creation in the
presence of wasmfx stack switching. The only piece missing after this PR
is testing, which I'm adding in a separate PR.

As of #117, it is already the case that each `CallThreadState` executing
wasm stores a pointer to the underlying `Store`'s `StackChainCell`.

This PR changes the existing creation of backtraces in `backtrace.rs` as
follows:
Previously, `trace_with_trap_state` would call
`trace_through_wasm` for the faulting program position plus all the
entries in the current thread's `CallThreadState` chain.
Now, instead of calling `trace_through_wasm` directly,
`trace_with_trap_state` calls a new function
`trace_through_continuations`. The latter takes an optional `StackChain`
and iterates all the stacks (= continuation stacks + main stack)
therein, calling `trace_through_wasm` for each such stack. If no such
`StackChain` is given, it just behaves as before. Note that the
invariant established in #117 ensures that only the most recent
execution of wasm may actually be inside a continuation, while all
previous `CallThreadState`s must be on the main stack.

As a result, we only ever call `trace_through_wasm` such that the old
invariants (which we had commented out at some point) hold again: Its
parameters `trampoline_sp` and `fp` describe a continuous (!) sequence
of stack memory, either on the main stack or inside a `Fiber` stack.
Further, we have `trampoline_sp` > `fp`, and following the frame pointer
chain starting at `fp` eventually leads to a pointer past (i.e., greater
than or equal to) `trampoline_sp`.
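
The traversal described above can be sketched as follows. These are hypothetical simplified types, not wasmtime's real API: the real `trace_through_wasm` walks the frame-pointer chain within each region, whereas the stand-in below only checks the ordering invariant and records the region.

```rust
/// One contiguous region of stack memory holding wasm frames.
struct StackSegment {
    trampoline_sp: usize,
    fp: usize,
}

/// Stand-in for the real frame walker: asserts the invariant that the
/// region is contiguous and well-ordered, then records its base frame.
fn trace_through_wasm(seg: &StackSegment, frames: &mut Vec<usize>) {
    assert!(seg.trampoline_sp > seg.fp);
    frames.push(seg.fp);
}

/// If a chain of stacks is given, trace every stack in it (continuation
/// stacks plus the main stack); otherwise behave as before and trace
/// the single contiguous region.
fn trace_through_continuations(
    chain: Option<&[StackSegment]>,
    fallback: &StackSegment,
    frames: &mut Vec<usize>,
) {
    match chain {
        Some(stacks) => {
            for seg in stacks {
                trace_through_wasm(seg, frames);
            }
        }
        None => trace_through_wasm(fallback, frames),
    }
}
```

The point of the split is visible here: each call to the inner walker only ever sees one contiguous stack region, so the old invariants hold per call.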

---------

Co-authored-by: Daniel Hillerström <[email protected]>