add multicore_lockout_victim_deinit. clear lockedout_flag for core1 during core1 reset #2223

Merged 2 commits on Feb 3, 2025
Changes from 1 commit
7 changes: 7 additions & 0 deletions src/rp2_common/pico_multicore/include/pico/multicore.h
@@ -449,6 +449,13 @@ static inline uint multicore_doorbell_irq_num(uint doorbell_num) {
*/
void multicore_lockout_victim_init(void);

+/*! \brief Stop the current core being able to be a "victim" of lockout (i.e. forced to pause in a known state by the other core)
+ * \ingroup multicore_lockout
+ *
+ * This code unhooks the intercore FIFO IRQ, and the FIFO may be used for any other purpose after this.
+ */
+void multicore_lockout_victim_deinit(void);

/*! \brief Determine if \ref multicore_lockout_victim_init() has been called on the specified core.
* \ingroup multicore_lockout
*
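As an editorial aside (not part of the diff), here is a minimal sketch of how the new call might be used on a core that no longer needs to act as a lockout victim; release_fifo_for_app_use is a hypothetical name.

#include "pico/multicore.h"

// Hypothetical helper for a core that previously called
// multicore_lockout_victim_init() and now wants the intercore FIFO back.
static void release_fifo_for_app_use(void) {
    // Unhook the lockout handler from this core's FIFO IRQ.
    multicore_lockout_victim_deinit();
    // Per the new docstring, the FIFO may now be used for any other purpose,
    // e.g. a plain blocking push to the other core.
    multicore_fifo_push_blocking(0x12345678u);
}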
26 changes: 20 additions & 6 deletions src/rp2_common/pico_multicore/multicore.c
@@ -23,12 +23,11 @@
#endif
#endif

-// note that these are not reset by core reset, however for now, I think people resetting cores
-// and then relying on multicore_lockout for that core without re-initializing, is probably
-// something we can live with breaking.
-//
-// whilst we could clear this in core 1 reset path, that doesn't necessarily catch all,
-// and means pulling in this array even if multicore_lockout is not used.
+// Note that there is no automatic way for us to clear these flags when a particular core is reset, and yet
+// we DO ideally want to clear them, as the `multicore_lockout_victim_init()` checks the flag for the current core
+// and is a no-op if set. We DO have a new `multicore_lockout_victim_deinit()` method, which can be called in a pinch after
+// the reset before calling `multicore_lockout_victim_init()` again, so that is good. We will reset the flag
+// for core1 in `multicore_reset_core1()` though as a convenience since most people will use that to reset core 1.
static bool lockout_victim_initialized[NUM_CORES];

void multicore_fifo_push_blocking(uint32_t data) {
@@ -118,6 +117,9 @@ void multicore_reset_core1(void) {
bool enabled = irq_is_enabled(irq_num);
irq_set_enabled(irq_num, false);

+    // Core 1 will be in an uninitialized state
+    lockout_victim_initialized[1] = false;

// Bring core 1 back out of reset. It will drain its own mailbox FIFO, then push
// a 0 to our mailbox to tell us it has done this.
*power_off_clr = PSM_FRCE_OFF_PROC1_BITS;
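As another editorial sketch (not part of the diff), the reset-and-relaunch pattern the new comment describes could look like the following; core1_entry and restart_core1 are hypothetical application functions.

#include "pico/multicore.h"
#include "pico/stdlib.h"

// Hypothetical core 1 entry point: re-register as a lockout victim after the
// core has been reset and relaunched.
static void core1_entry(void) {
    // Not a silent no-op here, because multicore_reset_core1() cleared core 1's flag.
    multicore_lockout_victim_init();
    while (true) {
        tight_loop_contents();
    }
}

// Hypothetical helper run on core 0.
static void restart_core1(void) {
    multicore_reset_core1();             // also clears lockout_victim_initialized[1]
    multicore_launch_core1(core1_entry);
}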
@@ -243,6 +245,18 @@ void multicore_lockout_victim_init(void) {
lockout_victim_initialized[core_num] = true;
}

+void multicore_lockout_victim_deinit(void) {
+    uint core_num = get_core_num();
+    if (lockout_victim_initialized[core_num]) {
+        // On platforms other than RP2040, these are actually the same IRQ number
+        // (each core only sees its own IRQ, always at the same IRQ number).
+        uint fifo_irq_this_core = SIO_FIFO_IRQ_NUM(core_num);
+        irq_remove_handler(fifo_irq_this_core, multicore_lockout_handler);
+        irq_set_enabled(fifo_irq_this_core, false);
+        lockout_victim_initialized[core_num] = false;
+    }
+}

static bool multicore_lockout_handshake(uint32_t magic, absolute_time_t until) {
uint irq_num = SIO_FIFO_IRQ_NUM(get_core_num());
bool enabled = irq_is_enabled(irq_num);
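Finally, an editorial sketch (not part of the diff) of the core 0 side, combining the existing query API with lockout so core 1 is only paused when it is actually registered as a victim; pause_core1_for_critical_work is a hypothetical name.

#include "pico/multicore.h"

// Hypothetical helper run on core 0.
static void pause_core1_for_critical_work(void) {
    // Only attempt lockout if core 1 has called multicore_lockout_victim_init()
    // and has not since been reset or deinitialized.
    if (multicore_lockout_victim_is_initialized(1)) {
        multicore_lockout_start_blocking();   // core 1 spins in a known state
        // ... work that must not race with core 1, e.g. a flash write ...
        multicore_lockout_end_blocking();
    }
}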