fix(core): measure perf for async checks #4609
Conversation
```diff
-  q.then(() => resolve(ruleResult)).catch(error => reject(error));
+  q.then(() => {
```
I think it'd be preferable to put this before L289, no? If there's no other event-loop-blocking work pending then it won't make a difference, but if that `setTimeout` ends up being what causes other event-loop-blocking work to have a chance to run, I don't think it'd give us a better picture of rule performance to log independently of that other work.
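For context, a minimal, runnable sketch (not axe-core's actual code; all names here are made up for illustration) of how the placement of a timing call relative to a deferred callback changes what gets captured:

```js
// Generic illustration only; structure and names are assumptions, not axe-core internals.
function measurePlacementDemo() {
  const start = performance.now();

  // Placement A: log before deferring. This captures only the synchronous
  // portion of the work; anything the setTimeout allows to run afterwards
  // is not included in the reading.
  console.log('sync portion (ms):', performance.now() - start);

  setTimeout(() => {
    // Placement B: log inside the deferred callback. This reading also
    // includes whatever other event-loop work got a chance to run while
    // the callback was queued.
    console.log('sync portion + deferred gap (ms):', performance.now() - start);
  });
}

measurePlacementDemo();
```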
Fair enough. But your suggestion doesn't work. If I leave it in the function where it is now then it's called synchronously. If I put it in the defer with the `setTimeout` then it's called as soon as the async check starts to await. Either way it doesn't measure the time the async check took to complete. The only way to be sure the check finished is to put the timer end in the resolve. Even that's a little problematic, because while we're awaiting check A, axe will run other checks, so the measured time for A will include time spent in other checks too.
So I'm a little conflicted about how to best do this.
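To make that trade-off concrete, here's a hypothetical sketch (none of these names are axe-core's actual internals) of ending the measurement inside the promise resolution, which is the only point where the async check is known to have finished:

```js
// Hypothetical helper using the standard Performance API.
function runAsyncCheckWithTiming(checkId, runCheck) {
  return new Promise((resolve, reject) => {
    performance.mark(`check_${checkId}_start`);

    const q = Promise.resolve().then(runCheck);

    q.then(result => {
      // Completion is only certain here, in the resolution of the check's
      // promise. Caveat: while this check awaits, other checks can run on
      // the same event loop, so the measured duration may include their
      // work as well.
      performance.mark(`check_${checkId}_end`);
      performance.measure(
        `check_${checkId}`,
        `check_${checkId}_start`,
        `check_${checkId}_end`
      );
      resolve(result);
    }).catch(reject);
  });
}

// Example usage: an async check that resolves after ~50ms.
runAsyncCheckWithTiming('demo-check', () =>
  new Promise(res => setTimeout(() => res(true), 50))
).then(passed => console.log('check passed:', passed));
```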
Per discussion in standup, we decided we're fine with doing a version that will only be accurate in a `{ runOnly: 'single-rule' }` scenario. It'd be nice to leave a comment explaining the limitation before merging, though.
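For reference, a usage sketch of the scenario in which the measurement should be accurate; the `runOnly` and `performanceTimer` option names follow axe-core's documented run options, but double-check them against the version in use:

```js
// Run a single rule so the async-check timing isn't polluted by other
// checks running concurrently on the same event loop.
axe
  .run(document, {
    runOnly: { type: 'rule', values: ['color-contrast'] },
    performanceTimer: true // log timing marks/measures to the console
  })
  .then(results => {
    console.log(results.violations);
  });
```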
Performance of async checks isn't measured correctly. No tests, since we don't generally test perf timer stuff.