Replies: 1 comment
-
Hi @flashpixx, thanks for getting in touch! Sorry for the delay in replying. I think you can achieve what you're trying to do: Armada handles the queuing and scheduling of the Kubernetes pods that run your workloads. You would submit each workload to Armada as a job, and you would typically handle input/output data by mounting external shared storage in your pods (you can do this through Armada by including whatever volume mounts you need in your pod specs).
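For the storage part, a minimal job file might look like the sketch below. This assumes the YAML job-file format described in the Armada docs (`queue`, `jobSetId`, `jobs`, `podSpec`); the queue name, job set ID, image, and PVC name are hypothetical placeholders. The `podSpec` is a standard Kubernetes PodSpec, so volumes and volume mounts work exactly as in plain Kubernetes:

```yaml
# Sketch of an Armada job file (field names per the Armada docs;
# the queue, jobSetId, image, and claim name are placeholders).
queue: my-queue
jobSetId: my-workflow-1
jobs:
  - priority: 0
    namespace: default
    podSpec:
      restartPolicy: Never
      containers:
        - name: worker
          image: my-registry/my-workload:latest
          volumeMounts:
            - name: shared-data          # shared input/output storage
              mountPath: /data
          resources:
            requests:
              cpu: "1"
              memory: 1Gi
            limits:
              cpu: "1"
              memory: 1Gi
      volumes:
        - name: shared-data
          persistentVolumeClaim:
            claimName: my-shared-pvc     # pre-provisioned shared storage
```

You would then submit this with something like `armadactl submit job.yaml`.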
-
Hello,
I have the following scenario and am trying to figure out whether Armada is the right tool for my use case. My main focus is on performance, not on fair execution order. In general, a workflow could be:
4a. calculate statistics on each grid cell and pass the data to a Kafka stream => workflow is done
4b. aggregate the values on each cell, e.g. the mean
5b. generate an image of the aggregated values (pass the S3 URL of the file to a Kafka stream) => workflow is done
Each of the jobs can run as a single pod, but each pod uses a Spark cluster with horizontal autoscaling of the worker nodes. Each job needs some dynamic input data and generates some new data.
Can I do this with Armada, and if so, how? It would be helpful if you could sketch some ideas.