support projected service account volumes #975
Conversation
This is on par with how we did it in the 2019 implementation (https://github.yelpcorp.com/services/opa_policy_manager/blob/0ef9d7fc1f193ac217458272695cdda26c0d1e8b/opa_policy_manager/paasta.py#L60), so I'd say it's fine until it's not!
Seems generally fine to me - we tend to put most config in MASTER.yaml for Tron, but that might be a pain for this, since I think we'd have to plumb things quite a way through Tron for values read from there to be readable here.
That said, I can ship once we confirm that `service: X`-using configs won't be a problem for this (all my other comments are non-blocking or just me being curious).
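For context on the feature under review, here is a minimal sketch of what a projected service account token volume looks like when built with the official kubernetes Python client. This is illustrative only, not Tron's actual implementation; the volume name, audience, expiration, and mount path are all assumed values.

```python
# Hypothetical sketch using the official kubernetes Python client;
# names and values here are illustrative, not Tron's actual code.
from kubernetes.client import (
    V1ProjectedVolumeSource,
    V1ServiceAccountTokenProjection,
    V1Volume,
    V1VolumeMount,
    V1VolumeProjection,
)

# A projected volume whose only source is a service account token,
# scoped to a specific audience and auto-rotated by the kubelet.
token_volume = V1Volume(
    name="service-account-token",  # assumed name
    projected=V1ProjectedVolumeSource(
        sources=[
            V1VolumeProjection(
                service_account_token=V1ServiceAccountTokenProjection(
                    audience="example-audience",  # assumed audience
                    expiration_seconds=1800,
                    path="token",
                )
            )
        ]
    ),
)

# Mounted read-only into the task container at an assumed path.
token_mount = V1VolumeMount(
    name="service-account-token",
    mount_path="/var/run/secrets/tokens",
    read_only=True,
)
```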
tron/utils/authentication.py
Hmm, this will work a little differently from the rest of Tron's config - generally, we have paasta (through setup_tron_namespace) post the desired Tron config for every job, and then we store that config in both the Tron master's local storage and in DynamoDB.
The DynamoDB part is usually important because we essentially "freeze" all the config for a job once it's run, so that if one were to go back and retry a jobrun from X weeks ago, the retry would use the same configuration. If we did the same here, I think that means we could potentially have some breakage if the authorization.yamls have changed - so handling the auth stuff on the fly is probably fine?
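To illustrate the "freeze config per job run" behavior described above, here is a hedged sketch of the pattern. The store class and its method names are assumptions for illustration, not Tron's actual DynamoDB interface.

```python
# Hypothetical sketch of the "freeze config per job run" pattern;
# the store and its method names are assumptions, not Tron's code.
import copy


class FrozenConfigStore:
    """Keeps an immutable snapshot of a job's config per run."""

    def __init__(self):
        self._snapshots = {}  # (job_name, run_id) -> config dict

    def freeze(self, job_name, run_id, config):
        # Deep-copy so later edits to the live config cannot leak
        # into an already-recorded run.
        self._snapshots[(job_name, run_id)] = copy.deepcopy(config)

    def config_for_retry(self, job_name, run_id):
        # A retry reads the snapshot taken when the run first
        # started, not whatever the job's config says today.
        return self._snapshots[(job_name, run_id)]


store = FrozenConfigStore()
store.freeze("my_service.my_job", 42, {"command": "do_thing", "cpus": 1})
# Weeks later, retrying run 42 still sees the original config:
retry_config = store.config_for_retry("my_service.my_job", 42)
```

Under this model, anything read "on the fly" (like the authorization.yamls mentioned above) deliberately sits outside the frozen snapshot, so retries pick up current policy rather than weeks-old policy.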
Ah, interesting - I was indeed struggling to figure out where the job configurations were originally sourced from.
I agree it's probably fine, or arguably better, to have this part of the configuration not frozen, so that retries can take into account changes to authorization policies.
That said, if I just add support for SA volumes here and move the rest of the logic into paasta-tools (or rather, rely on what's getting put in place in Yelp/paasta#3888), I can avoid jumping through a bunch of hoops to load configs in here, so I should probably do that.
The equivalent of Yelp/paasta#3888.
I really don't like the way I'm resolving the service name, but digging through the call graph I don't see a good place where I could infer that without going insane, so I'm looking for suggestions there.
Change of plan: I'm actually going to integrate the whole logic into paasta-tools, and just support passing the projected volumes from the configurations here. This way I get rid of the "name of the service" problem, and can also use a single implementation for the "does this service need the token?" logic.
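To make the final plan concrete, here is a hedged sketch of what "just support passing the projected volumes from the configurations" could look like on the Tron side. The `projected_sa_volumes` config key and this helper are assumptions for illustration, not the merged API.

```python
# Hypothetical sketch: Tron consumes a pre-computed list of projected
# SA volumes from config emitted by paasta-tools, instead of deriving
# them itself. The "projected_sa_volumes" key is illustrative only.
from kubernetes.client import (
    V1ProjectedVolumeSource,
    V1ServiceAccountTokenProjection,
    V1Volume,
    V1VolumeProjection,
)


def volumes_from_config(action_config: dict) -> list:
    """Build V1Volume objects from an assumed config shape like
    {"projected_sa_volumes": [{"audience": "...", "expiration_seconds": 1800}]}.
    """
    volumes = []
    for spec in action_config.get("projected_sa_volumes", []):
        volumes.append(
            V1Volume(
                # Naive name derivation, purely for this sketch.
                name=f"projected-sa-{spec['audience'].replace('.', '-')}",
                projected=V1ProjectedVolumeSource(
                    sources=[
                        V1VolumeProjection(
                            service_account_token=V1ServiceAccountTokenProjection(
                                audience=spec["audience"],
                                expiration_seconds=spec.get("expiration_seconds", 1800),
                                path="token",
                            )
                        )
                    ]
                ),
            )
        )
    return volumes
```

Keeping the "does this service need the token?" decision in paasta-tools means Tron only sees the already-resolved volume list, which sidesteps the service-name resolution problem mentioned above.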