
google-batch quota error does not trigger job failure #303

Open
rivershah opened this issue Jan 1, 2025 · 4 comments
Comments

@rivershah
Using the google-batch provider, I notice that some Batch errors do not propagate to dsub: it continues waiting to run jobs when it should be aborting.

$ dstat --provider google-batch --project <PROJECT_ID> --location <REGION> --jobs '<JOB_ID>' --users '<USER>' --status '*' --format json
[
  {
    "job-name": "<JOB_NAME>",
    "task-id": "<TASK_ID>",
    "last-update": "2025-01-01 13:07:03.664000",
    "status-message": "VM in Managed Instance Group meets error: Batch Error: code - CODE_GCE_QUOTA_EXCEEDED, description - error count is 4, latest message example: Instance '<INSTANCE_ID>' creation failed: Quota 'GPUS_PER_GPU_FAMILY' exceeded.  Limit: 0.0 in region <REGION>."
  }
]

The process that launched it has retries=0, yet it still reports no failure and patiently prints:

Waiting for job to complete...
Monitoring for failed tasks to retry...
*** This dsub process must continue running to retry failed tasks.
@wnojopra
Contributor

wnojopra commented Jan 7, 2025

Hi @rivershah ,

This appears to be working as intended. The idea is that a quota issue is resolvable (either by resources becoming available or by the user allocating more quota), after which the job continues. For example, imagine submitting 100 jobs when there is only quota for 50: once the first 50 finish, we'd want the next 50 to run.

Perhaps better documentation on this should be added.

@rivershah
Author

This risks starvation. What is a graceful way to trigger fast failure / timeout, please? For example, we submit jobs on large GPU machines, which can go without availability for days.
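Until dsub offers this natively, one workaround (a sketch, not part of dsub; `watch`/`should_cancel` are hypothetical helpers and the project/region/job values are placeholders) is a wrapper that polls `dstat` and cancels the job with `ddel` once a deadline passes:

```python
#!/usr/bin/env python3
"""Hypothetical watchdog for a dsub google-batch job: if the job is
still live after `deadline` seconds, cancel it with ddel. The dstat
and ddel flags mirror the invocation shown earlier in this issue."""
import json
import subprocess
import time


def should_cancel(dstat_json: str, elapsed: float, deadline: float) -> bool:
    """Cancel only if the job still has live tasks and the deadline passed."""
    tasks = json.loads(dstat_json or "[]")
    return bool(tasks) and elapsed >= deadline


def watch(project: str, region: str, job_id: str,
          deadline: float = 3600.0, interval: float = 60.0) -> None:
    start = time.monotonic()
    while True:
        # dstat prints an empty JSON list once no tasks match RUNNING.
        out = subprocess.run(
            ["dstat", "--provider", "google-batch", "--project", project,
             "--location", region, "--jobs", job_id,
             "--status", "RUNNING", "--format", "json"],
            capture_output=True, text=True, check=True).stdout
        if not json.loads(out or "[]"):
            return  # job finished (or failed) on its own
        if should_cancel(out, time.monotonic() - start, deadline):
            subprocess.run(
                ["ddel", "--provider", "google-batch", "--project", project,
                 "--location", region, "--jobs", job_id], check=True)
            return
        time.sleep(interval)
```

This doesn't distinguish a quota stall from a long-running task; it simply bounds total wall time, which is the same guarantee a `--timeout` flag would give.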

@wnojopra
Contributor

wnojopra commented Jan 8, 2025

Ideally, you could make use of dsub's --timeout flag. It's implemented for the google-cls-v2 provider, but unfortunately not yet for the google-batch provider. The good news is that the Batch API supports a timeout, so it should be a simple passthrough for dsub.
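For reference, the Batch v1 API's task-level timeout is the `maxRunDuration` field on `taskSpec`, so the passthrough would amount to setting that field from dsub's --timeout value. A minimal sketch of the relevant slice of the job request body (the helper function is hypothetical; the rest of the job body is elided):

```python
def batch_job_body_with_timeout(timeout_seconds: int) -> dict:
    """Build the slice of a Batch v1 job request carrying the task
    timeout. maxRunDuration is a real Batch API field; Batch marks
    the task FAILED once this duration elapses, which dsub could
    then surface as a job failure."""
    return {
        "taskGroups": [{
            "taskSpec": {
                # Durations are encoded as seconds with an "s" suffix.
                "maxRunDuration": f"{timeout_seconds}s",
            }
        }]
    }
```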

@rivershah
Author

Excellent. Requesting that this be implemented, please.
