
Freertos mutex and task switching 2 cores version (IDFGH-14654) #15400

Closed · 3 tasks done
ifedotov1984 opened this issue Feb 15, 2025 · 4 comments

Labels: Resolution: Won't Do (This will not be worked on), Status: Done (Issue is done internally)
@ifedotov1984

Answers checklist.

  • I have read the documentation ESP-IDF Programming Guide and the issue is not addressed there.
  • I have updated my IDF branch (master or release) to the latest version and checked that the issue is present there.
  • I have searched the issue tracker for a similar issue and not found a similar issue.

General issue report

When 2 cores and mutexes are used, a situation arises where the highest-priority task on core 1 (which does not use mutexes) is not executed while a task on core 0 is waiting for a mutex.

Attached is a test project with this problem.

1) If you run it as is, the task high_1 executes only after high_0 has executed, although it is on another core and does not wait for the mutex.
2) If the priority of high_1 is raised above high_0's (set to 12 instead of 9), the program executes as expected.
3) If the task high_1 (priority 9) is moved to core 0, the program executes as expected.
4) If the task low_1 (priority 1) is moved to core 0, the program executes as expected.

[Image: log output from the test run]

mutex_test.zip

I found out that this happens because of FreeRTOS priority inheritance: https://www.freertos.org/Documentation/02-Kernel/02-Kernel-features/02-Queues-mutexes-and-semaphores/04-Mutexes.
On a single-core system it is clear why this happens.
But with 2 cores, the point of having multiple cores is defeated.
It turns out that while the low-priority task runs on core 1 (holding the mutex), the high-priority task on core 1 is not executed, because the high-priority task on core 0 is waiting for the mutex.
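
To make the scenario concrete, here is a minimal sketch of what the task setup described above could look like, reconstructed from the report; the stack sizes and exact priority values are illustrative assumptions, not taken from the attached zip:

// Hypothetical reconstruction of the setup described above.
// Stack sizes and exact priorities are assumptions for illustration.
#include "freertos/FreeRTOS.h"
#include "freertos/task.h"
#include "freertos/semphr.h"

static SemaphoreHandle_t mutex;

static void low_1(void *a);   // takes the mutex; pinned to core 1
static void high_1(void *a);  // never touches the mutex; pinned to core 1
static void high_0(void *a);  // blocks on the mutex; pinned to core 0

void app_main(void)
{
	mutex = xSemaphoreCreateMutex();
	// low_1 holds the mutex for long stretches on core 1.
	xTaskCreatePinnedToCore(low_1, "low_1", 4096, NULL, 1, NULL, 1);
	// high_1 is the highest-priority application task pinned to core 1.
	xTaskCreatePinnedToCore(high_1, "high_1", 4096, NULL, 9, NULL, 1);
	// high_0 blocks on the same mutex from core 0; while it waits,
	// low_1 inherits high_0's priority and can starve high_1 on core 1.
	xTaskCreatePinnedToCore(high_0, "high_0", 4096, NULL, 10, NULL, 0);
}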

@espressif-bot added the Status: Opened (Issue is new) label on Feb 15, 2025
@github-actions bot changed the title from "Freertos mutex and task switching 2 cores version" to "Freertos mutex and task switching 2 cores version (IDFGH-14654)" on Feb 15, 2025
@sudeep-mohanty (Collaborator)

Hi @ifedotov1984,
Thank you for your query. To me, the behavior of the kernel in the project you shared is as expected. As you rightly pointed out, task priority inheritance is at play here, but that does not block the execution of high_1.

As you can see from the screenshot you shared, both high_1 and high_0 start execution at a tick count of 59327. high_1 finishes execution and yields time on core 1 for 100 ticks. Hence, it is rescheduled when the tick count reaches 59427. In the meantime, high_0 executes as expected.

[Image: log output showing the tick counts discussed above]

I think the impression that high_1 is blocked by high_0 is incidental, caused by how the prints occur and by the execution time of high_0 and the periodicity of high_1 being the same, i.e., 100 ticks. More importantly, low_1 inherits high_0's priority of 10, which makes it the highest-priority task on core 1, thereby blocking high_1, which has a priority of 8.

Tweaking your code slightly reveals that high_1 can indeed run without being blocked by high_0. See the code and screenshot below:

static void low_1(void *a)
{
	while(1){
		if(mutex != NULL){
			xSemaphoreTake(mutex, portMAX_DELAY);
			ESP_LOGI("low_1","mutex_lock, tick=%d",(int)xTaskGetTickCount());
			volatile int i;
			for(i=0;i<10000000;i++);
			ESP_LOGI("low_1","mutex_unlock, tick=%d",(int)xTaskGetTickCount());
			ESP_LOGI("low_1","Current priority=%d",(int)uxTaskPriorityGet(NULL)); // <-- Could be executing with inherited priority of high_0
			xSemaphoreGive(mutex);
		}
		vTaskDelay(100);
	}
}

static void high_1(void *a)
{
	while(1){
		ESP_LOGW("high_1","tick, tick=%d",(int)xTaskGetTickCount());
		ESP_LOGW("high_1", "About to yield ..."); // <-- About to yield
		vTaskDelay(100);
	}
}

static void high_0(void *a)
{
	while(1){
		if(mutex != NULL){
			ESP_LOGE("high_0","try to take mutex_lock, tick=%d",(int)xTaskGetTickCount());
			xSemaphoreTake(mutex, portMAX_DELAY);
			ESP_LOGE("high_0","mutex_lock, tick=%d",(int)xTaskGetTickCount());
			vTaskDelay(150); // <-- Exec time of high_0 > periodicity of high_1
			ESP_LOGE("high_0","mutex_unlock, tick=%d",(int)xTaskGetTickCount());
			xSemaphoreGive(mutex);
		}
		vTaskDelay(50);
	}
}
[Image: log output of the tweaked code]

@ifedotov1984 (Author)

It is not an illusion.
Yes, both high_1 and high_0 start execution at a tick count of 59327.
But high_1 does not block on the mutex.
And before 59327, low_1 ran for about 800 ms, and there was not a single high_1 print (there must be one every 100 ms).

Also, I tested with DRAM logs, and it is the same.

It all happens because, when high_0 wants to take the mutex, low_1's priority becomes the same as high_0's.
And that is higher than the priority of high_1. This is correct FreeRTOS behavior if there is 1 core. But we have 2 cores.

@sudeep-mohanty (Collaborator) commented Feb 17, 2025

Hi @ifedotov1984,

> It all happens because, when high_0 wants to take the mutex, low_1's priority becomes the same as high_0's.

This explains why high_1 doesn't run, doesn't it? Since low_1 is the highest-priority ready task on core 1 once it inherits high_0's priority, it gets to run and high_1 is blocked. I'm not sure how having 2 cores helps here.
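
For reference, this matches observation 2 in the original report: if high_1 is created with a priority above anything low_1 can inherit, it stays runnable. A minimal sketch using the illustrative priority values from the earlier reconstruction (the exact numbers in the attached project may differ):

// Sketch only: priority values are illustrative assumptions.
// With high_1 above high_0, low_1's inherited priority (equal to
// high_0's) can no longer keep high_1 off core 1.
xTaskCreatePinnedToCore(low_1, "low_1", 4096, NULL, 1, NULL, 1);
xTaskCreatePinnedToCore(high_1, "high_1", 4096, NULL, 12, NULL, 1); // 12 > 10
xTaskCreatePinnedToCore(high_0, "high_0", 4096, NULL, 10, NULL, 0);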

@sudeep-mohanty (Collaborator)

I am closing this ticket since there is no change required in ESP-IDF. Please feel free to re-open this ticket or open a new one if more support is needed. Thank you.

@espressif-bot added the Status: Done (Issue is done internally) and Resolution: Won't Do (This will not be worked on) labels and removed the Status: Opened (Issue is new) label on Feb 25, 2025