Commit: Added `databricks_entitlements` resource (#1583)
Showing 9 changed files with 1,044 additions and 3 deletions.
@@ -0,0 +1,88 @@
---
subcategory: "Security"
---
# databricks_entitlements Resource

This resource allows you to set entitlements on an existing [databricks_user](user.md), [databricks_group](group.md), or [databricks_service_principal](service_principal.md).

## Example Usage

Setting entitlements for a regular user:

```hcl
data "databricks_user" "me" {
  user_name = "me@example.com"
}
resource "databricks_entitlements" "me" {
  user_id                    = data.databricks_user.me.id
  allow_cluster_create       = true
  allow_instance_pool_create = true
}
```

Setting entitlements for a service principal:

```hcl
data "databricks_service_principal" "this" {
  application_id = "11111111-2222-3333-4444-555666777888"
}
resource "databricks_entitlements" "this" {
  service_principal_id      = data.databricks_service_principal.this.sp_id
  allow_cluster_create       = true
  allow_instance_pool_create = true
}
```

Setting entitlements for all users in a workspace, by referencing the special `users` [databricks_group](../data-sources/group.md):

```hcl
data "databricks_group" "users" {
  display_name = "users"
}
resource "databricks_entitlements" "workspace-users" {
  group_id                   = data.databricks_group.users.id
  allow_cluster_create       = true
  allow_instance_pool_create = true
}
```

## Argument Reference

The following arguments are available to specify the identity for which entitlements are enforced. You must specify exactly one of them, otherwise resource creation will fail.

* `user_id` - Canonical unique identifier for the user.
* `group_id` - Canonical unique identifier for the group.
* `service_principal_id` - Canonical unique identifier for the service principal.

The following entitlements are available:

* `allow_cluster_create` - (Optional) Allow the user to have [cluster](cluster.md) create privileges. Defaults to `false`. More fine-grained permissions can be assigned with [databricks_permissions](permissions.md#Cluster-usage) and the `cluster_id` argument. Users without `allow_cluster_create` set, but with [permission to use](permissions.md#Cluster-Policy-usage) a cluster policy, can still create clusters, but only within the boundaries of that specific policy.
* `allow_instance_pool_create` - (Optional) Allow the user to have [instance pool](instance_pool.md) create privileges. Defaults to `false`. More fine-grained permissions can be assigned with [databricks_permissions](permissions.md#Instance-Pool-usage) and the [instance_pool_id](permissions.md#instance_pool_id) argument.
* `databricks_sql_access` - (Optional) Allow the user, group, or service principal to access [Databricks SQL](https://databricks.com/product/databricks-sql), both in the user interface and through [databricks_sql_endpoint](sql_endpoint.md).

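Entitlements can be combined in a single resource. A minimal sketch, assuming the built-in `users` group and an illustrative resource name:

```hcl
data "databricks_group" "users" {
  display_name = "users"
}
resource "databricks_entitlements" "workspace_users_sql" {
  group_id              = data.databricks_group.users.id
  allow_cluster_create  = true
  databricks_sql_access = true
}
```
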
## Import

The resource can be imported using a synthetic identifier. Examples of valid synthetic identifiers are:

* `user/user_id` - user `user_id`.
* `group/group_id` - group `group_id`.
* `spn/spn_id` - service principal `spn_id`.

```bash
terraform import databricks_entitlements.me user/<user-id>
```

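With Terraform 1.5 or later, the same import can also be expressed declaratively; a sketch assuming the resource address `databricks_entitlements.me` from the example above:

```hcl
import {
  to = databricks_entitlements.me
  id = "user/<user-id>"
}
```
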
## Related Resources

The following resources are often used in the same context:

* [End to end workspace management](../guides/workspace-management.md) guide.
* [databricks_group](group.md) to manage [groups in Databricks Workspace](https://docs.databricks.com/administration-guide/users-groups/groups.html) or [Account Console](https://accounts.cloud.databricks.com/) (for AWS deployments).
* [databricks_group](../data-sources/group.md) data to retrieve information about [databricks_group](group.md) members, entitlements, and instance profiles.
* [databricks_group_instance_profile](group_instance_profile.md) to attach [databricks_instance_profile](instance_profile.md) (AWS) to [databricks_group](group.md).
* [databricks_group_member](group_member.md) to attach [users](user.md) and [groups](group.md) as group members.
* [databricks_instance_profile](instance_profile.md) to manage AWS EC2 instance profiles with which users can launch [databricks_cluster](cluster.md) and access data, like [databricks_mount](mount.md).
* [databricks_user](../data-sources/user.md) data to retrieve information about a [databricks_user](user.md).

@@ -0,0 +1,104 @@
package acceptance

import (
	"os"
	"testing"

	"github.com/databricks/terraform-provider-databricks/internal/acceptance"

	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource"
)

// TestAccEntitlementResource checks that entitlements can be set on a user and on a group.
func TestAccEntitlementResource(t *testing.T) {
	if _, ok := os.LookupEnv("CLOUD_ENV"); !ok {
		t.Skip("Acceptance tests skipped unless env 'CLOUD_ENV' is set")
	}
	t.Parallel()
	config := acceptance.EnvironmentTemplate(t, `
	resource "databricks_user" "first" {
		user_name = "tf-eerste+{var.RANDOM}@example.com"
		display_name = "Eerste {var.RANDOM}"
		allow_cluster_create = true
		allow_instance_pool_create = true
	}
	resource "databricks_group" "second" {
		display_name = "{var.RANDOM} group"
		allow_cluster_create = true
		allow_instance_pool_create = true
	}
	resource "databricks_entitlements" "first_entitlements" {
		user_id = databricks_user.first.id
		allow_cluster_create = true
		allow_instance_pool_create = true
	}
	resource "databricks_entitlements" "second_entitlements" {
		group_id = databricks_group.second.id
		allow_cluster_create = true
		allow_instance_pool_create = true
	}
	`)
	acceptance.AccTest(t, resource.TestCase{
		Steps: []resource.TestStep{
			{
				Config: config,
				Check: resource.ComposeTestCheckFunc(
					resource.TestCheckResourceAttr("databricks_entitlements.first_entitlements", "allow_cluster_create", "true"),
					resource.TestCheckResourceAttr("databricks_entitlements.first_entitlements", "allow_instance_pool_create", "true"),
					resource.TestCheckResourceAttr("databricks_entitlements.second_entitlements", "allow_cluster_create", "true"),
					resource.TestCheckResourceAttr("databricks_entitlements.second_entitlements", "allow_instance_pool_create", "true"),
				),
			},
			{
				// applying the same config again should not produce any changes
				Config: config,
			},
		},
	})
}

// TestAccServicePrincipalEntitlementsResourceOnAzure checks entitlements on a service principal (Azure).
func TestAccServicePrincipalEntitlementsResourceOnAzure(t *testing.T) {
	if cloud, ok := os.LookupEnv("CLOUD_ENV"); !ok || cloud != "azure" {
		t.Skip("Test is only for CLOUD_ENV=azure")
	}
	t.Parallel()
	acceptance.Test(t, []acceptance.Step{
		{
			Template: `resource "databricks_service_principal" "this" {
				application_id = "00000000-1234-5678-0000-000000000001"
				display_name = "SPN {var.RANDOM}"
				allow_cluster_create = true
				allow_instance_pool_create = true
			}
			resource "databricks_entitlements" "service_principal" {
				service_principal_id = databricks_service_principal.this.id
				allow_cluster_create = true
				allow_instance_pool_create = true
			}`,
		},
	})
}

// TestAccServicePrincipalEntitlementsResourceOnAws checks entitlements on a service principal (AWS).
func TestAccServicePrincipalEntitlementsResourceOnAws(t *testing.T) {
	if cloud, ok := os.LookupEnv("CLOUD_ENV"); !ok || cloud != "aws" {
		t.Skip("Test is only for CLOUD_ENV=aws")
	}
	t.Parallel()
	acceptance.Test(t, []acceptance.Step{
		{
			Template: `resource "databricks_service_principal" "this" {
				display_name = "SPN {var.RANDOM}"
				allow_cluster_create = true
				allow_instance_pool_create = true
			}
			resource "databricks_entitlements" "service_principal" {
				service_principal_id = databricks_service_principal.this.id
				allow_cluster_create = true
				allow_instance_pool_create = true
			}`,
		},
	})
}

@@ -0,0 +1,134 @@
package scim

import (
	"context"
	"fmt"
	"strings"

	"github.com/databricks/terraform-provider-databricks/common"
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

// ResourceEntitlements manages entitlements for users, groups, and service principals
func ResourceEntitlements() *schema.Resource {
	type entity struct {
		GroupId string `json:"group_id,omitempty" tf:"force_new"`
		UserId  string `json:"user_id,omitempty" tf:"force_new"`
		SpnId   string `json:"service_principal_id,omitempty" tf:"force_new"`
	}
	entitlementSchema := common.StructToSchema(entity{},
		func(m map[string]*schema.Schema) map[string]*schema.Schema {
			addEntitlementsToSchema(&m)
			// require at least one of the identity arguments to be set
			alof := []string{"group_id", "user_id", "service_principal_id"}
			for _, field := range alof {
				m[field].AtLeastOneOf = alof
			}
			return m
		})
	addEntitlementsToSchema(&entitlementSchema)
	return common.Resource{
		Create: func(ctx context.Context, d *schema.ResourceData, c *common.DatabricksClient) error {
			return patchEntitlements(ctx, d, c, "add")
		},
		Read: func(ctx context.Context, d *schema.ResourceData, c *common.DatabricksClient) error {
			split := strings.SplitN(d.Id(), "/", 2)
			if len(split) != 2 {
				return fmt.Errorf("ID must be two elements: %s", d.Id())
			}
			switch strings.ToLower(split[0]) {
			case "group":
				group, err := NewGroupsAPI(ctx, c).Read(split[1])
				if err != nil {
					return err
				}
				return group.Entitlements.readIntoData(d)
			case "user":
				user, err := NewUsersAPI(ctx, c).Read(split[1])
				if err != nil {
					return err
				}
				return user.Entitlements.readIntoData(d)
			case "spn":
				spn, err := NewServicePrincipalsAPI(ctx, c).Read(split[1])
				if err != nil {
					return err
				}
				return spn.Entitlements.readIntoData(d)
			}
			return nil
		},
		Update: func(ctx context.Context, d *schema.ResourceData, c *common.DatabricksClient) error {
			return enforceEntitlements(ctx, d, c)
		},
		Delete: func(ctx context.Context, d *schema.ResourceData, c *common.DatabricksClient) error {
			return patchEntitlements(ctx, d, c, "remove")
		},
		Schema: entitlementSchema,
	}.ToResource()
}

// patchEntitlements applies a single "add" or "remove" PATCH operation to the
// entitlements of the configured identity and sets the synthetic resource ID.
func patchEntitlements(ctx context.Context, d *schema.ResourceData, c *common.DatabricksClient, op string) error {
	groupId := d.Get("group_id").(string)
	userId := d.Get("user_id").(string)
	spnId := d.Get("service_principal_id").(string)
	request := PatchRequestComplexValue([]patchOperation{
		{
			op,
			"entitlements",
			readEntitlementsFromData(d),
		},
	})
	if groupId != "" {
		groupsAPI := NewGroupsAPI(ctx, c)
		err := groupsAPI.UpdateEntitlements(groupId, request)
		if err != nil {
			return err
		}
		d.SetId("group/" + groupId)
	}
	if userId != "" {
		usersAPI := NewUsersAPI(ctx, c)
		err := usersAPI.UpdateEntitlements(userId, request)
		if err != nil {
			return err
		}
		d.SetId("user/" + userId)
	}
	if spnId != "" {
		spnAPI := NewServicePrincipalsAPI(ctx, c)
		err := spnAPI.UpdateEntitlements(spnId, request)
		if err != nil {
			return err
		}
		d.SetId("spn/" + spnId)
	}
	return nil
}

// enforceEntitlements removes all entitlements from the identity and then adds
// back only those present in the configuration.
func enforceEntitlements(ctx context.Context, d *schema.ResourceData, c *common.DatabricksClient) error {
	split := strings.SplitN(d.Id(), "/", 2)
	if len(split) != 2 {
		return fmt.Errorf("ID must be two elements: %s", d.Id())
	}
	identity := strings.ToLower(split[0])
	id := strings.ToLower(split[1])
	request := PatchRequestComplexValue(
		[]patchOperation{
			{
				"remove", "entitlements", generateFullEntitlements(),
			},
			{
				"add", "entitlements", readEntitlementsFromData(d),
			},
		},
	)
	switch identity {
	case "group":
		return NewGroupsAPI(ctx, c).UpdateEntitlements(id, request)
	case "user":
		return NewUsersAPI(ctx, c).UpdateEntitlements(id, request)
	case "spn":
		return NewServicePrincipalsAPI(ctx, c).UpdateEntitlements(id, request)
	}
	return nil
}