Unable to use Workload Identity Federation from Azure DevOps pipeline #1025
Comments
Databricks CLI depends on the Databricks Go SDK, which recently added support for OIDC; see this:
The configuration you need to provide, though, is
Please try changing your GitHub Actions setup to use these variables and see if it works.
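(The exact links and variable names in the comment above were not preserved. Purely as a hedged sketch of what such a GitHub Actions setup might look like: the workflow name, workspace URL, and secret names below are made up, and the assumption that the SDK picks up the runner's OIDC token when `id-token: write` is granted is illustrative, not confirmed by this thread.)

```yaml
# Hypothetical GitHub Actions workflow; all names and values are illustrative.
name: deploy-bundle
on: workflow_dispatch
permissions:
  id-token: write   # lets the runner mint an OIDC token for this job
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Deploy bundle
        run: databricks bundle deploy
        env:
          DATABRICKS_HOST: https://adb-0000000000000000.0.azuredatabricks.net  # hypothetical workspace URL
          ARM_CLIENT_ID: ${{ secrets.ARM_CLIENT_ID }}   # assumed: app registration with a federated credential
          ARM_TENANT_ID: ${{ secrets.ARM_TENANT_ID }}
```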
@andrewnester Thanks for your reply. I'm not using GitHub Actions though, but Azure DevOps Pipelines. It appears your solution applies specifically to GitHub Actions (see e.g. https://library.tf/providers/microsoft/azuredevops/latest/docs/guides/authenticating_service_principal_using_an_oidc_token). For Azure Pipelines, the above page mentions the variables
Ah, indeed, I see. In this case, the Go SDK we rely on for authentication does not yet support OIDC for Azure Pipelines. I'm moving this issue to the Go SDK repository as a feature request.
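(For context, a rough sketch of the pattern commonly used to hand a federated token to Terraform-style tooling from an Azure Pipeline. The service connection name is hypothetical, and whether the Databricks toolchain honours `ARM_OIDC_TOKEN` end-to-end is exactly what this issue is about, so treat this as illustrative only.)

```yaml
# Hypothetical Azure Pipelines steps; the connection name is made up.
steps:
  - task: AzureCLI@2
    inputs:
      azureSubscription: my-wif-service-connection  # Workload Identity Federation connection
      scriptType: bash
      addSpnToEnvironment: true   # for WIF connections this exposes $idToken in the script
      inlineScript: |
        # Map the federated token onto the Terraform-style ARM_ variables
        export ARM_USE_OIDC=true
        export ARM_OIDC_TOKEN="$idToken"
        export ARM_CLIENT_ID="$servicePrincipalId"
        export ARM_TENANT_ID="$tenantId"
        databricks bundle deploy
```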
Also, it seems to be related to this feature request: #495
@Pim-Mostert what is surprising is that CLI commands work for you. Could you try running this command with the --log-level TRACE flag and provide the output?
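(The exact command was not preserved above. Assuming the failing step was the bundle deployment, a pipeline step like this sketch would produce the requested trace logs:)

```yaml
# Hypothetical pipeline step; assumes the failing command was the deployment itself.
- script: databricks bundle deploy --log-level TRACE
  displayName: Deploy bundle with trace logging
```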
@andrewnester Sure:
Ah, I see. CLI auth works because it eventually configures itself to use
So to summarise:
Thank you!
It's not an issue for me right now, but I expect it will be in the near future (when my company disables the old service connection). I've opened a new issue: databricks/cli#1722. Please let me know if you need more information. Thanks!
Describe the issue
I want to deploy a Databricks Asset Bundle from an Azure Pipeline using the Databricks CLI. While authentication for the CLI itself seems to work, the actual deployment does not: it appears that the underlying Terraform provider is not able to authenticate.
The issue appears to arise from our DevOps service connection in particular. The service connection is configured for Workload Identity Federation; when I try an old service connection that authenticates using client credentials, the deployment succeeds.
I suspect the bug may be fixed by simply upgrading the version of Terraform that the Databricks CLI uses under the hood. Currently it uses Terraform 1.5.5. Newer versions of Terraform seem to support the Workload Identity Federation flow: see https://developer.hashicorp.com/terraform/language/settings/backends/azurerm, but note that the 1.5.x version of that same page makes no mention of Workload Identity Federation.
Relevant documentation:
Configuration
I have tried various combinations of the ARM_ environment variables above, but I couldn't find a working combination.
What did work was using a service principal service connection, in combination with:
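(The actual combination was truncated above. Purely as an illustration, the standard client-credentials triplet used by the Databricks Terraform provider and CLI looks like the sketch below; the secret variable names on the right are hypothetical.)

```yaml
# Hypothetical pipeline variables; the $(...) secret names are made up.
variables:
  ARM_CLIENT_ID: $(spClientId)          # service principal application (client) ID
  ARM_CLIENT_SECRET: $(spClientSecret)  # client secret from the app registration
  ARM_TENANT_ID: $(spTenantId)          # Entra ID tenant ID
```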
Steps to reproduce the behavior
Expected Behavior
The deployment of the asset bundle should succeed.
Actual Behavior
The following error appears in the pipeline's log:
Note that the listing of experiments works fine:
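(Neither log excerpt survived above. As a hedged illustration only, the contrast was roughly between a plain API-backed command and the Terraform-backed deployment; both commands below are assumptions, not quotes from the logs.)

```yaml
# Hypothetical steps; the actual commands and logs were not preserved above.
- script: databricks experiments list-experiments  # plain API call: succeeds with the same credentials
- script: databricks bundle deploy                 # fails once it reaches the Terraform stage
```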
OS and CLI version
Output by the Azure pipeline:
Databricks CLI: v0.227.0
OS: Ubuntu (Microsoft-hosted agent, latest version)
Is this a regression?
I don't know, I'm new to Databricks.
Debug Logs
See attachment.
debug_logs.txt