#102 - Add overhead value to TestTask #243
base: master
@@ -0,0 +1,11 @@
from: unittest
format: list
groupby: task.id
limit: 20000
select:
    - {value: action.start_time, name: task_min, aggregate: min}
    - {value: action.end_time, name: task_max, aggregate: max}
    - {value: result.start_time, name: group_min, aggregate: min}
    - {value: result.end_time, name: group_max, aggregate: max}
where:
    - in: {task.id: {$eval: task_id}}
Rather than having a separate query for this (or maybe in addition), I think we could refactor the existing query. I'd be fine if you decide to file a follow-up issue to implement this and save it for later.
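For illustration, the four aggregates this query selects are sufficient to compute per-task overhead. A minimal sketch, with field names taken from the query above and hypothetical epoch-second timestamps:

```python
# Hypothetical result row, shaped like the query's `select` clause:
# extreme task (action) and group (result) start/end times in epoch seconds.
row = {
    "task_min": 1000.0,   # earliest action start: the task began
    "group_min": 1030.0,  # first test group started 30s later (setup)
    "group_max": 1470.0,  # last test group finished
    "task_max": 1500.0,   # task ended 30s after that (teardown)
}

# Overhead = time before the first group plus time after the last group.
overhead = (row["group_min"] - row["task_min"]) + (row["task_max"] - row["group_max"])
print(overhead)  # 60.0
```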
@@ -2,13 +2,15 @@
import json
import os
from abc import ABC, abstractmethod
from argparse import Namespace
from dataclasses import dataclass, field
from enum import Enum
from inspect import signature
from statistics import median
from typing import Dict, List, Optional

import requests
from adr.query import run_query
from adr.util import memoized_property
from loguru import logger
from urllib3.response import HTTPResponse

@@ -358,6 +360,30 @@ def configuration(self):
        parts = config.split("-")
        return "-".join(parts[:-1] if parts[-1].isdigit() else parts)
    @property

This should be memoized.

    def overhead(self):
This is defined on a specific task, but we will have to define … For task-level scheduling, we can just consider the sum of the median durations of all scheduled tasks.

Right, I will implement them while I wait for review from ahal and ekyle.

@worldomonation I will require the group-level run times, but I plan to compute the aggregates after the data has been put into the database. Specifically, there is a BigQuery database with some of the data generated by …
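The task-level scheduling estimate mentioned above (the sum of the median durations of all scheduled tasks) can be sketched as follows; the task names and duration samples are hypothetical:

```python
from statistics import median

# Hypothetical historical durations per scheduled task, in seconds.
durations_by_task = {
    "test-linux64/opt-mochitest-1": [300, 320, 310],
    "test-linux64/opt-xpcshell": [120, 130, 125],
}

# Estimated cost of a schedule: sum of each task's median duration.
estimated_cost = sum(median(samples) for samples in durations_by_task.values())
print(estimated_cost)  # 310 + 125 = 435
```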
        """Calculate the overhead of a task.

        The methodology is simple: each task (action) has a start/end time.
        Each group also has a start/end time. Take the earliest known group
        start and the latest known group end time, and ensure the two fall
        somewhere between the task start/end.

        This definition of overhead does not take into account inter-group
        overhead, e.g. restarting the browser, teardown, etc.

        Returns:
            float: difference between task start/end and group start/end times.
        """
        data = run_query("test_task_overhead", Namespace(task_id=self.id))["data"].pop()
        # Sanity check to ensure group start/end times are within task start/end.
        if data["task_min"] > data["group_min"] or data["task_max"] < data["group_max"]:
            logger.warning(f"task {self.id} has inconsistent group duration.")

        return (data["group_min"] - data["task_min"]) + (
            data["task_max"] - data["group_max"]
        )


# Don't perform type checking because of https://github.com/python/mypy/issues/5374.
@dataclass  # type: ignore
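On the memoization point raised above: adr's memoized_property is already imported in this module, but the caching behaviour being asked for can be illustrated with the standard library's functools.cached_property. The class and query stub below are hypothetical stand-ins, not the real TestTask:

```python
from functools import cached_property

class TestTask:
    """Hypothetical stand-in for the real TestTask dataclass."""

    def __init__(self, task_id):
        self.id = task_id
        self.query_calls = 0  # counts how often the "expensive" query runs

    @cached_property
    def overhead(self):
        # Stand-in for run_query("test_task_overhead", ...), which is a
        # network round-trip in the real code and therefore worth caching.
        self.query_calls += 1
        data = {"task_min": 0.0, "group_min": 5.0, "group_max": 95.0, "task_max": 100.0}
        return (data["group_min"] - data["task_min"]) + (
            data["task_max"] - data["group_max"]
        )

task = TestTask("abc123")
print(task.overhead)     # 10.0 — query runs once
print(task.overhead)     # 10.0 — served from the cache
print(task.query_calls)  # 1
```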
@@ -496,6 +522,18 @@ def total_duration(self):
    def median_duration(self):
        return median(self.durations)

    @property
    def overheads(self):
        return [task.overhead for task in self.tasks]

    @property
    def total_overheads(self):

nit: shouldn't be pluralized

        return sum(self.overheads)

    @property
    def median_overhead(self):
        return median(self.overheads)

    @memoized_property
    def status(self):
        overall_status = None
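For completeness, a self-contained sketch of how these group-level aggregates behave; the container class and overhead values are hypothetical, and it uses the singular total_overhead per the naming nit above:

```python
from statistics import median

class TestGroupSummary:
    """Hypothetical container aggregating per-task overheads."""

    def __init__(self, overheads):
        self.overheads = overheads  # seconds of overhead per task

    @property
    def total_overhead(self):
        return sum(self.overheads)

    @property
    def median_overhead(self):
        return median(self.overheads)

summary = TestGroupSummary([30.0, 60.0, 90.0])
print(summary.total_overhead)   # 180.0
print(summary.median_overhead)  # 60.0
```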
Do we need such a high limit? If so, we might want to use destination: url like https://github.com/mozilla/mozci/blob/737c4cc0810bd745d13c90c511d18afff6baee20/mozci/queries/push_revisions.query.

No, we do not. This was just carried over from the working query. It should return only 1 row; 1000 ought to be enough, though I'd like @klahnakoski to weigh in.