Ensure default Spark executor SA is created in Spark clusters #3921

Merged 4 commits on Jul 17, 2024
paasta_tools/setup_tron_namespace.py: 12 additions & 7 deletions
@@ -85,18 +85,23 @@ def ensure_service_accounts(job_configs: List[TronJobConfig]) -> None:
                     kube_client=kube_client,
                 )
                 # spark executors are special in that we want the SA to exist in two namespaces:
-                # the tron namespace - for the spark driver
-                # and the spark namespace - for the spark executor
+                # the tron namespace - for the spark driver (which will be created by the ensure_service_account() above)
+                # and the spark namespace - for the spark executor (which we'll create below)
                 if action.get_executor() == "spark":
                     # this kubeclient creation is lru_cache'd so it should be fine to call this for every spark action
                     spark_kube_client = KubeClient(
                         config_file=system_paasta_config.get_spark_kubeconfig()
                     )
-                    ensure_service_account(
-                        action.get_iam_role(),
-                        namespace=spark_tools.SPARK_EXECUTOR_NAMESPACE,
-                        kube_client=spark_kube_client,
-                    )
+                    # this should always be truthy, but let's be safe since this comes from SystemPaastaConfig
+                    if action.get_spark_executor_iam_role():
+                        # this will look quite similar to the above, but we're ensuring that a potentially different SA exists:
+                        # this one is for the actual spark executors to use. if an iam_role is set, we'll use that; otherwise
+                        # there's an executor-specific default role just like there is for the drivers :)
+                        ensure_service_account(
+                            action.get_spark_executor_iam_role(),
+                            namespace=spark_tools.SPARK_EXECUTOR_NAMESPACE,
+                            kube_client=spark_kube_client,
+                        )
 
 
 def main():
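
The "lru_cache'd" comment above refers to memoizing client construction so that calling it once per spark action stays cheap. A minimal sketch of that pattern, assuming a hypothetical get_spark_kube_client() wrapper around the KubeClient constructor shown in the diff (the real paasta_tools wiring may differ):

```python
from functools import lru_cache

from paasta_tools.kubernetes_tools import KubeClient  # import path assumed


@lru_cache(maxsize=1)
def get_spark_kube_client(kubeconfig: str) -> KubeClient:
    # Building a client means parsing kubeconfig and setting up connections,
    # so memoize it: every spark action in the loop reuses the same instance
    # for a given kubeconfig path instead of constructing a new client.
    return KubeClient(config_file=kubeconfig)
```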
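
Both calls in the hunk depend on ensure_service_account() being idempotent: it must be safe to run on every tron namespace sync, in both the tron and spark namespaces. The sketch below is not the paasta_tools implementation, just an illustration of the create-if-missing pattern using the official kubernetes Python client; the SA naming convention, the annotation key, and the raw CoreV1Api parameter (the real helper takes a KubeClient wrapper) are all assumptions:

```python
from kubernetes import client
from kubernetes.client.rest import ApiException


def ensure_service_account(
    iam_role: str, namespace: str, core_v1: client.CoreV1Api
) -> None:
    # Hypothetical naming convention: derive a DNS-1123-safe name from the
    # role, e.g. "arn:aws:iam::123456789012:role/foo" -> "foo".
    sa_name = iam_role.rsplit("/", 1)[-1].lower()

    try:
        # Already there? Then this sync is a no-op (idempotency).
        core_v1.read_namespaced_service_account(name=sa_name, namespace=namespace)
        return
    except ApiException as e:
        if e.status != 404:
            raise

    core_v1.create_namespaced_service_account(
        namespace=namespace,
        body=client.V1ServiceAccount(
            metadata=client.V1ObjectMeta(
                name=sa_name,
                # Annotation key assumed: this is what EKS IRSA uses to bind
                # a pod's service account to an IAM role.
                annotations={"eks.amazonaws.com/role-arn": iam_role},
            )
        ),
    )
```

With a helper like this, the driver SA (tron namespace, action.get_iam_role()) and the executor SA (spark namespace, action.get_spark_executor_iam_role()) can be ensured independently, even when the two roles differ.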