GCS Blob Prefix and Blob Delimiter Configuration #6531
Did you try setting the prefix and delimiter at the same time? As the documentation describes, both blobPrefix and blobDelimiter should be set.
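For reference, a trigger block with both fields set might look like the sketch below. This is a minimal sketch, not a configuration from the issue: the target count and the credentialsFromEnv variable are illustrative assumptions, and the bucket name is taken from later in this thread.

```yaml
triggers:
  - type: gcp-storage
    metadata:
      bucketName: keda-test-sample   # bucket name mentioned later in this thread
      blobPrefix: test/              # count only objects under this prefix
      blobDelimiter: /               # treat "/" as a folder separator, keeping nested "folders" out of the count
      targetObjectCount: "1"         # assumed target: roughly one job per object
      credentialsFromEnv: GOOGLE_APPLICATION_CREDENTIALS_JSON  # assumed env var holding the service-account key
```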
@SpiritZhou, I tried to follow the above link. I set the prefix to "test/" and the delimiter to "/". Even after deleting the object, KEDA is still scaling the job.
Is there any other approach we can follow?
What do you mean by this? Is the job ending and KEDA creating a new one, or is the job just not being removed?
@JorTurFer, when a file is uploaded to the test path in the GCS bucket, the KEDA trigger activates and creates a new job on every polling interval, as expected. But when I remove the file from the test path, KEDA still keeps creating new jobs on every polling interval, which is not the expected behavior.
Could you share the KEDA operator logs?
@JorTurFer Added both the operator logs and the keda-config.yaml file.
Based on the logs, it looks like there is at least 1 item in the queue, and that's why KEDA is scaling to more jobs. Could you enable debug logging and send us the logs again with debug enabled? https://github.com/kedacore/charts/blob/main/keda/values.yaml#L380
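For context, the linked line in the Helm chart's values.yaml controls the operator log level. A minimal excerpt of what the override might look like, assuming a Helm-based install (verify the key path against the chart version you use):

```yaml
# Helm values excerpt for the keda chart (assumed key path: logging.operator.level)
logging:
  operator:
    level: debug   # allowed values typically include: debug, info, error
```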
@JorTurFer, attached the operator logs in debug mode. Please note that while pulling the debug logs, there was no file in the "test/" path, but there is a subfolder "processed" under "test/" in the GCS bucket keda-test-sample.
Based on the logs, there are 2 items in the queue: `gcp_storage_scaler Counted 2 items with a limit of 1000`. The behaviour of this scaler is covered by e2e tests, so I think this could be an edge case in some scenarios. I'm going to test this on my own side. Do you have any way to reproduce this? Maybe a small script using
Hi, below are the steps I followed to reproduce the issue:
Hi all,
I have created a ScaledJob spec with a GCS trigger. I have a GCS bucket in us-central1 that contains a "test" folder and a "test/processed" folder; "processed" is a subfolder of "test". I want to trigger the ScaledJob when a new file arrives in the "test/" folder but not in the "test/processed/" folder. I tried to use blob prefix and blob delimiter to restrict the count of objects, but it is not working as expected. Please recommend the right config YAML.
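As a starting point, here is a hedged sketch of a full ScaledJob using the gcp-storage trigger with both blobPrefix and blobDelimiter set. The resource name, image, command, target count, and credentialsFromEnv variable are illustrative assumptions, not taken from this issue. Note also that if the "processed" subfolder was created via the Cloud Console, a zero-byte placeholder object may exist under the prefix and might still show up in the count.

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledJob
metadata:
  name: gcs-scaledjob              # hypothetical name
spec:
  pollingInterval: 30              # seconds between checks of the bucket
  maxReplicaCount: 5
  jobTargetRef:
    template:
      spec:
        containers:
          - name: worker
            image: busybox         # placeholder image; replace with the real processing job
            command: ["sh", "-c", "echo processing file"]
        restartPolicy: Never
  triggers:
    - type: gcp-storage
      metadata:
        bucketName: keda-test-sample
        blobPrefix: test/          # count only objects under test/
        blobDelimiter: /           # intended to keep objects under test/processed/ out of the count
        targetObjectCount: "1"     # assumed target: roughly one job per object
        credentialsFromEnv: GOOGLE_APPLICATION_CREDENTIALS_JSON  # assumed credentials setup
```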