
Does the resumeOnRestart flag work only in one direction? #47

Closed
dev-folks opened this issue Aug 13, 2024 · 11 comments · Fixed by #48
Labels: enhancement (New feature or request), question (Further information is requested)
@dev-folks commented Aug 13, 2024

Hello!
I saw this issue, and I am trying to achieve the exact opposite behavior: I set resumeOnRestart to false, but if I stop the server and restart it after the scheduled start time, the task still runs...

And within the job data, I also can't find any field that would tell me the task wasn't completed on time so I could discard it. For one-time tasks you can use the start time, but for cron tasks that's problematic.

This behavior is unexpected.

Desired behavior:
The task should run every half hour (at xx:00 and xx:30 each hour). If the server was turned off at xx:20 and turned back on at xx:40, the server should not run the missed task at xx:40.

@code-xhyun self-assigned this Aug 14, 2024
@code-xhyun added the "bug (Something isn't working)" label Aug 14, 2024
@code-xhyun (Contributor) commented Aug 14, 2024

@dev-folks
I understand. What you want is: when the resumeOnRestart option is set and the server is turned off and turned back on after nextRunAt has passed, the job should not run and should be treated as a normal pass. I agree this case should be guaranteed, and I'll work on solving this issue.

@code-xhyun added the "enhancement (New feature or request)" label and removed the "bug (Something isn't working)" label Aug 14, 2024
@code-xhyun (Contributor) commented Aug 14, 2024

@dev-folks
After considering this problem, I've concluded that no additional functionality is needed; this can be achieved independently of the resumeOnRestart option.

To get the scenario you want, you can simply use `pulse.every('*/30 * * * *')`. This guarantees an absolute 30-minute boundary, and the job won't run if the current time doesn't match it. (This is a matter of whether the repetition interval is relative or absolute.)

Implemented this way, when the task is supposed to run every 30 minutes (at xx:00 and xx:30 every hour), if the server was turned off at xx:20 and turned back on at xx:40, it will not run the missed task at xx:40 and will only update the nextRunAt time to xx:00.
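The "absolute interval" idea can be illustrated with a minimal sketch in plain JavaScript (this only illustrates the concept; it is not pulse's internals): the next run of a `*/30 * * * *` schedule always snaps to the next wall-clock boundary, regardless of when the process comes back up.

```javascript
// Minimal sketch (not pulse's implementation): compute the next absolute
// */30 * * * * boundary (xx:00 or xx:30 UTC) after an arbitrary "now".
function nextHalfHourBoundary(now) {
  const next = new Date(now.getTime());
  next.setUTCSeconds(0, 0);
  if (now.getUTCMinutes() < 30) {
    next.setUTCMinutes(30);          // next boundary within this hour
  } else {
    next.setUTCMinutes(0);           // roll over to the top of the next hour
    next.setUTCHours(next.getUTCHours() + 1);
  }
  return next;
}

// Server comes back at xx:40 -> next run is the top of the next hour,
// not 30 minutes after the restart.
const resumedAt = new Date('2024-08-14T12:40:00Z');
console.log(nextHalfHourBoundary(resumedAt).toISOString());
// -> 2024-08-14T13:00:00.000Z
```

So a restart at xx:40 yields a next run at the top of the next hour, which matches the behavior requested above.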

If the issue isn't resolved, please share the code you've written with me!

@code-xhyun added the "question (Further information is requested)" label and removed the "enhancement (New feature or request)" label Aug 14, 2024
@dev-folks (Author) commented Aug 14, 2024

@code-xhyun
Thanks for the quick reply!
But this is exactly the problem I came here with: the behavior you described isn't happening.

I'm using cron syntax. For example:

```javascript
await this.pulse.every('*/1 * * * *', EVENT_JOBS.START_CRON, { taskId: id });
```

However, if I turn off the server at 12:42:40
and then turn it back at 12:43:20,
the job will still run at 12:43:20 instead of 12:44:00.

@dev-folks (Author) commented Aug 14, 2024

@code-xhyun
This is just a test, but in this case, the flag resumeOnRestart works as expected.

[Screenshots: 2024-08-14 at 12:51:48 and 12:52:25]

It's clear that it's not working quite right, though: it doesn't recalculate the next run time from the cron expression; it just prevents the job from running at the wrong time.
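As a stop-gap before a fix, the check can live in application code: skip the handler body when the job fires too far past the slot it was scheduled for. The sketch below is hypothetical (`isTooLate`, `scheduledSlotFor`, and the tolerance value are my own names, not part of the pulse API); in a real handler you would compare the slot the job was scheduled for against the current time.

```javascript
// Hypothetical guard (not part of the pulse API): is the job firing at
// `now` more than `toleranceMs` past the slot it was scheduled for?
function isTooLate(scheduledAt, now, toleranceMs = 5000) {
  return now.getTime() - scheduledAt.getTime() > toleranceMs;
}

// In a handler this could look like (sketch only; scheduledSlotFor is a
// placeholder for your own bookkeeping, e.g. a slot stored in job data):
//
// pulse.define('template_cron_start', async (job) => {
//   if (isTooLate(scheduledSlotFor(job), new Date())) return; // skip, don't run
//   // ...actual work...
// });

// A job due at 17:04:00 that fires at 17:05:14 would be skipped:
console.log(isTooLate(new Date('2024-08-14T17:04:00Z'), new Date('2024-08-14T17:05:14Z')));
// -> true
```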

@code-xhyun (Contributor) commented Aug 14, 2024

@dev-folks
Modifying the resumeOnRestart behavior itself in that way could cause various side effects, so it can't be changed carelessly.
I also created and ran sample code identical to the example you provided, and it works normally for me. Try running this code once:

```javascript
// "@pulsecron/pulse": "^1.6.1"
import Pulse from '@pulsecron/pulse';

const mongoConnectionString = 'mongodb://localhost:27017/pulse';

const pulse = new Pulse({ db: { address: mongoConnectionString }, resumeOnRestart: true });

pulse.define('delete old users', async (job) => {
  console.log(job);
  console.log('Deleting old users...');
  return;
});

(async function () {
  await pulse.start();
  await pulse.every('*/1 * * * *', 'delete old users');
})();
```

@dev-folks (Author)

@code-xhyun
I'm sorry, but I can't achieve this behavior.

As far as I understand, the job (whether it's on time or overdue) is still passed to the run function, and there, even though the job gets the correct nextRunAt, it is still executed immediately.

Am I wrong?

[Screenshots: 2024-08-14 at 17:49:15 and 17:42:31]

@code-xhyun (Contributor)

@dev-folks
If you want to check the flow more accurately, add `DEBUG=pulse:**` to the run command, e.g. `DEBUG=pulse:** npm run start`.

@code-xhyun (Contributor)

@dev-folks
I think I'm now experiencing the same situation as you. I'll look into it more closely soon. I'm sorry for causing you trouble.

@code-xhyun added the "enhancement (New feature or request)" label Aug 14, 2024
@dev-folks (Author)

Thanks! It’s a really useful tool.

Here are my logs:
I stopped the server at 17:04:40.
Then I restarted the server after 17:05:12.

```
pulse:internal:_findAndLockNextJob found a job available to lock, creating a new job on Pulse with id [66bce0d3e75948cafd5399f3] +21ms
pulse:internal:processJobs job [template_cron_start] lock status: shouldLock = true +21ms
pulse:internal:processJobs [template_cron_start:66bce0d3e75948cafd5399f3] job locked while filling queue +0ms
pulse:internal:processJobs jobQueueFilling: template_cron_start isJobQueueFilling: true +0ms
pulse:internal:processJobs job [template_cron_start] lock status: shouldLock = true +0ms
pulse:internal:_findAndLockNextJob _findAndLockNextJob(template_cron_start, [Function]) +0ms
pulse:internal:processJobs [template_cron_start:66bce0d3e75948cafd5399f3] about to process job +3ms
pulse:internal:processJobs [template_cron_start:66bce0d3e75948cafd5399f3] nextRunAt is in the past, run the job immediately +0ms
pulse:internal:processJobs [template_cron_start:66bce0d3e75948cafd5399f3] processing job +0ms
pulse:job [template_cron_start:66bce0d3e75948cafd5399f3] setting lastRunAt to: 2024-08-14T17:05:14.337Z +0ms
pulse:job [template_cron_start:66bce0d3e75948cafd5399f3] computing next run via interval [*/1 * * * *] +0ms
pulse:job [template_cron_start:66bce0d3e75948cafd5399f3] nextRunAt set to [2024-08-14T17:06:00.000Z] +4ms
pulse:saveJob attempting to save a job into Pulse instance +0ms
pulse:saveJob [job 66bce0d3e75948cafd5399f3] set job props:
pulse:saveJob {
pulse:saveJob   name: 'template_cron_start',
pulse:saveJob   attempts: 0,
pulse:saveJob   backoff: null,
pulse:saveJob   data: { taskId: '66b4ecb08c5d6aed6adee320' },
pulse:saveJob   endDate: null,
pulse:saveJob   lastModifiedBy: undefined,
pulse:saveJob   priority: 0,
pulse:saveJob   repeatInterval: '*/1 * * * *',
pulse:saveJob   repeatTimezone: null,
pulse:saveJob   shouldSaveResult: false,
pulse:saveJob   skipDays: null,
pulse:saveJob   startDate: null,
pulse:saveJob   lockedAt: 2024-08-14T17:05:14.312Z,
pulse:saveJob   lastRunAt: 2024-08-14T17:05:14.337Z,
pulse:saveJob   runCount: 11,
pulse:saveJob   finishedCount: 10,
pulse:saveJob   lastFinishedAt: 2024-08-14T17:04:17.241Z,
pulse:saveJob   type: 'single',
pulse:saveJob   nextRunAt: 2024-08-14T17:06:00.000Z
pulse:saveJob } +0ms
pulse:saveJob current time stored as 2024-08-14T17:05:14.342Z +1ms
pulse:saveJob job already has _id, calling findOneAndUpdate() using _id as query +0ms
pulse:saveJob processDbResult() called with success, checking whether to process job immediately or not +1ms
pulse:job [template_cron_start:66bce0d3e75948cafd5399f3] starting job +7ms
pulse:job [template_cron_start:66bce0d3e75948cafd5399f3] process function being called +0ms
pulse:saveJob attempting to save a job into Pulse instance +3ms
pulse:saveJob [job 66bce0d3e75948cafd5399f3] set job props:
pulse:saveJob {
pulse:saveJob   name: 'template_cron_start',
pulse:saveJob   attempts: 0,
pulse:saveJob   backoff: null,
pulse:saveJob   data: { taskId: '66b4ecb08c5d6aed6adee320' },
pulse:saveJob   endDate: null,
pulse:saveJob   lastModifiedBy: undefined,
pulse:saveJob   priority: 0,
pulse:saveJob   repeatInterval: '*/1 * * * *',
pulse:saveJob   repeatTimezone: null,
pulse:saveJob   shouldSaveResult: false,
pulse:saveJob   skipDays: null,
pulse:saveJob   startDate: null,
pulse:saveJob   lockedAt: null,
pulse:saveJob   lastRunAt: 2024-08-14T17:05:14.337Z,
pulse:saveJob   runCount: 11,
pulse:saveJob   finishedCount: 11,
pulse:saveJob   lastFinishedAt: 2024-08-14T17:05:14.346Z,
pulse:saveJob   type: 'single',
pulse:saveJob   nextRunAt: 2024-08-14T17:06:00.000Z
pulse:saveJob } +0ms
pulse:saveJob current time stored as 2024-08-14T17:05:14.346Z +0ms
pulse:saveJob job already has _id, calling findOneAndUpdate() using _id as query +0ms
pulse:saveJob processDbResult() called with success, checking whether to process job immediately or not +4ms
pulse:job [template_cron_start:66bce0d3e75948cafd5399f3] was saved successfully to MongoDB +6ms
pulse:job [template_cron_start:66bce0d3e75948cafd5399f3] has succeeded +0ms
pulse:job [template_cron_start:66bce0d3e75948cafd5399f3] job finished at [2024-08-14T17:05:14.346Z] and was unlocked +0ms
```

@code-xhyun (Contributor) commented Aug 15, 2024

@dev-folks

🎉 This issue is resolved and included in version 1.6.2 🎉

The release is available on:

@dev-folks (Author)

@code-xhyun

It works great, thank you!
