#102 - Add overhead value to TestTask #243
base: master
Changes from 2 commits
@@ -0,0 +1,11 @@
from: unittest
format: list
groupby: task.id
limit: 20000
select:
    - {value: action.start_time, name: task_min, aggregate: min}
    - {value: action.end_time, name: task_max, aggregate: max}
    - {value: result.start_time, name: group_min, aggregate: min}
    - {value: result.end_time, name: group_max, aggregate: max}
where:
    - in: {task.id: {$eval: task_id}}
Rather than having a separate query for this (or maybe in addition), I think we could refactor the existing … I'd be fine if you decide to file a follow-up issue to implement this and save it for later.
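To illustrate what the query above computes, here is a minimal Python sketch of the per-`task.id` aggregation, using invented rows and timestamps (the field names `action_start`, `result_start`, etc. are flattened stand-ins for the real `action.start_time`/`result.start_time` paths):

```python
# Hypothetical result rows for a single task.id; each row is one test
# result record. Timestamps are invented epoch seconds.
rows = [
    {"action_start": 100.0, "action_end": 200.0,
     "result_start": 110.0, "result_end": 150.0},
    {"action_start": 100.0, "action_end": 200.0,
     "result_start": 151.0, "result_end": 190.0},
]

# The select clause computes one aggregate per named column over the group:
# min/max of the task (action) window and min/max of the group (result) window.
data = {
    "task_min": min(r["action_start"] for r in rows),
    "task_max": max(r["action_end"] for r in rows),
    "group_min": min(r["result_start"] for r in rows),
    "group_max": max(r["result_end"] for r in rows),
}
print(data)  # {'task_min': 100.0, 'task_max': 200.0, 'group_min': 110.0, 'group_max': 190.0}
```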
@@ -2,13 +2,15 @@
import json
import os
from abc import ABC, abstractmethod
from argparse import Namespace
from dataclasses import dataclass, field
from enum import Enum
from inspect import signature
from statistics import median
from typing import Dict, List, Optional

import requests
from adr.query import run_query
from adr.util import memoized_property
from loguru import logger
from urllib3.response import HTTPResponse
@@ -358,6 +360,30 @@ def configuration(self):
    parts = config.split("-")
    return "-".join(parts[:-1] if parts[-1].isdigit() else parts)

@property
This should be memoized.
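The codebase already imports `memoized_property` from `adr.util` for this. As a self-contained sketch of the pattern, the stdlib `functools.cached_property` (a stand-in here, not what mozci actually uses) caches the value after the first access, so the query only runs once; the class and values below are invented for illustration:

```python
from functools import cached_property

class TestTask:
    def __init__(self):
        self.calls = 0

    @cached_property
    def overhead(self):
        # The expensive run_query call would happen here; with caching it
        # executes only on the first access.
        self.calls += 1
        return 42.0

t = TestTask()
print(t.overhead)  # 42.0
print(t.overhead)  # 42.0 (cached; the body did not run again)
print(t.calls)     # 1
```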
def overhead(self):

This is defined on a specific task, but we will have to define … For task-level scheduling, we can just consider the sum of the median durations of all scheduled tasks.

Right, I will implement them while I wait for review from ahal and ekyle.

@worldomonation I will require the group-level run times, but I plan to compute the aggregates after the data has been put into the database. Specifically, there is a BigQuery database with some of the data generated by …
"""Calculate the overhead of a task. | ||
|
||
The methodology is simple: each task (action) has a start/end time. | ||
Each group also has a start/end time. Take the earliest known group start | ||
and latest known group end time, ensure the two falls somewhere in between | ||
task start/end. | ||
|
||
This definition of overhead does not take into account inter-group overhead | ||
eg. restarting browser, teardown, etc. | ||
|
||
Returns: | ||
float: difference between task start/end and group start/end times. | ||
""" | ||
data = run_query("test_task_overhead", Namespace(task_id=self.id))["data"].pop() | ||
# Sanity check to ensure group start/end times are within task start/end. | ||
assert data["task_min"] < data["group_min"] | ||
assert data["task_max"] > data["group_max"] | ||
Do not crash on a problem: we should be emitting problems as "warnings" to the logs and also declaring the overhead a zero (our best estimate). Depending on your logging library, something like:
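The reviewer's snippet did not survive extraction; the pattern described above might look like the following sketch. It uses the stdlib `logging` module for self-containment (the project itself uses loguru), and the helper name `overhead_or_zero` is invented:

```python
import logging

logger = logging.getLogger(__name__)

def overhead_or_zero(data):
    """Return the task overhead, or zero (our best estimate) when the
    group window does not fall inside the task window."""
    if not (data["task_min"] < data["group_min"]
            and data["group_max"] < data["task_max"]):
        # Warn instead of raising, so callers keep going with a safe estimate.
        logger.warning("group start/end fall outside task start/end; using 0 overhead")
        return 0.0
    return (data["group_min"] - data["task_min"]) + (
        data["task_max"] - data["group_max"]
    )

print(overhead_or_zero(
    {"task_min": 0.0, "task_max": 100.0, "group_min": 10.0, "group_max": 90.0}
))  # 20.0
print(overhead_or_zero(
    {"task_min": 50.0, "task_max": 100.0, "group_min": 10.0, "group_max": 90.0}
))  # 0.0 (group starts before the task; warned and defaulted)
```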
Why shouldn't we crash on a problem? I agree crashing on a problem is bad if you're writing an ETL, but mozci is not an ETL, it's a library. Some consumers of the library will want to ignore errors and keep going (like an ETL). Other consumers of the library will want to fail loudly so they can be notified immediately when things go awry. The nice thing about crashing is that it lets the consumers decide how to handle the situation. Though assertions are trickier to catch than a proper …

@ahal Since you are the owner, I will leave the error mitigation patterns to you. In the meantime, my reasons are:
Yes, letting an exception escape the callee allows the caller to decide mitigation, but I am not advocating suppression of all exceptions. In this case, we have a reasonable mitigation strategy that I doubt any caller will ever write a handler for; plus it remains visible to anyone monitoring … Of course, I can always be wrong. I am sure I have promoted past …
    return (data["group_min"] - data["task_min"]) + (
        data["task_max"] - data["group_max"]
    )


# Don't perform type checking because of https://github.com/python/mypy/issues/5374.
@dataclass  # type: ignore
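To make the returned value concrete, a worked example of the overhead arithmetic with invented timestamps:

```python
# Hypothetical aggregates as returned by the test_task_overhead query
# (epoch seconds, invented for illustration).
data = {"task_min": 1000.0, "group_min": 1040.0,
        "group_max": 1570.0, "task_max": 1600.0}

# Overhead = setup time before the first group starts
#          + teardown time after the last group ends.
overhead = (data["group_min"] - data["task_min"]) + (
    data["task_max"] - data["group_max"]
)
print(overhead)  # 40.0 + 30.0 = 70.0
```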
Do we need such a high limit? If so, we might want to use destination: url, like https://github.com/mozilla/mozci/blob/737c4cc0810bd745d13c90c511d18afff6baee20/mozci/queries/push_revisions.query.

No we do not - this was just carried over from the working query. It should return only 1 row. 1000 ought to be enough, though I'd like @klahnakoski to weigh in.
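If the high limit really were needed, the reviewer's suggestion would amount to adding a destination clause to the query header, as in the push_revisions.query the comment links to. A sketch, assuming the same ActiveData query syntax used elsewhere in mozci:

```
from: unittest
format: list
limit: 20000
destination: url    # large results are written out and fetched via URL
groupby: task.id
```

Since the grouped query should return only a single row per task, simply lowering the limit is the simpler fix the author settled on.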