To manage SQLA resources, you must have the `databricks_sql_access` entitlement on your `databricks_group` or `databricks_user`.
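A minimal sketch of granting this entitlement to a group (the group name is an assumption for illustration):

resource "databricks_group" "analysts" {
  display_name          = "Analysts" // assumed group name
  databricks_sql_access = true       // grants access to Databricks SQL
}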
-> Documentation for this resource is a work in progress.
A query may have one or more visualizations.
resource "databricks_directory" "shared_dir" {
path = "/Shared/Queries"
}
resource "databricks_sql_query" "q1" {
data_source_id = databricks_sql_endpoint.example.data_source_id
name = "My Query Name"
query = <<EOT
SELECT {{ p1 }} AS p1
WHERE 1=1
AND p2 in ({{ p2 }})
AND event_date > date '{{ p3 }}'
EOT
parent = "folders/${databricks_directory.shared_dir.object_id}"
run_as_role = "viewer"
parameter {
name = "p1"
title = "Title for p1"
text {
value = "default"
}
}
parameter {
name = "p2"
title = "Title for p2"
enum {
options = ["default", "foo", "bar"]
value = "default"
// passes to sql query as string `"foo", "bar"` if foo and bar are both selected in the front end
multiple {
prefix = "\""
suffix = "\""
separator = ","
}
}
}
parameter {
name = "p3"
title = "Title for p3"
date {
value = "2022-01-01"
}
}
tags = [
"t1",
"t2",
]
}
Example permission to share query with all users:
resource "databricks_permissions" "q1" {
sql_query_id = databricks_sql_query.q1.id
access_control {
group_name = data.databricks_group.users.display_name
permission_level = "CAN_RUN"
}
// You can only specify "CAN_EDIT" permissions if the query `run_as_role` equals `viewer`.
access_control {
group_name = data.databricks_group.team.display_name
permission_level = "CAN_EDIT"
}
}
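The `users` and `team` group lookups referenced above are not shown; a minimal sketch of those data sources (the `team` display name is an assumption):

data "databricks_group" "users" {
  display_name = "users"
}

data "databricks_group" "team" {
  display_name = "My Team" // assumed group name
}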
The following arguments are supported:

- `data_source_id` - Data source ID of a SQL warehouse.
- `query` - The text of the query to be run.
- `name` - The title of this query that appears in list views, widget headings, and on the query page.
- `parent` - The identifier of the workspace folder containing the object.
- `description` - General description that conveys additional information about this query such as usage notes.
- `run_as_role` - Run as role. Possible values are `viewer`, `owner`.
For the parameter definition, the following attributes are supported:

- `title` - The text displayed in a parameter picking widget.
- `name` - The literal parameter marker that appears between double curly braces in the query text.

Parameters can have several different types. The type is specified using one of the following configuration blocks: `text`, `number`, `enum`, `query`, `date`, `datetime`, `datetimesec`, `date_range`, `datetime_range`, `datetimesec_range`.
For the `text`, `number`, `date`, `datetime`, and `datetimesec` blocks:

- `value` - The default value for this parameter.
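The example above covers the `text`, `enum`, and `date` types. As a hedged sketch of a `number` parameter (the parameter name and default are assumptions; this fragment belongs inside a `databricks_sql_query` resource):

parameter {
  name  = "p4" // assumed parameter name
  title = "Title for p4"
  number {
    value = 42 // assumed default numeric value
  }
}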
In addition to all arguments above, the following attributes are exported:

- `id` - The unique ID of the SQL query.
You can import a `databricks_sql_query` resource with an ID like the following:

terraform import databricks_sql_query.this <query-id>
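On Terraform 1.5 or later, a config-driven import block is an equivalent sketch (replace the placeholder with a real query ID):

import {
  to = databricks_sql_query.this
  id = "<query-id>"
}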
In case you see `Error: cannot create sql query: Internal Server Error` during `terraform apply`, double-check that you are using the correct `data_source_id`.
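In particular, the value should come from the warehouse's exported `data_source_id` attribute, not its `id`. A minimal sketch (the warehouse name and size are assumptions):

resource "databricks_sql_endpoint" "example" {
  name             = "Example Warehouse" // assumed warehouse name
  cluster_size     = "Small"
  max_num_clusters = 1
}

// then, in databricks_sql_query:
//   data_source_id = databricks_sql_endpoint.example.data_source_id // not .id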
Operations on `databricks_sql_query` schedules are ⛔️ deprecated. You can create, update or delete a schedule for SQLA and other Databricks resources using the `databricks_job` resource.
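For example, a hedged sketch of scheduling the query above with a job (the job name, task key, and cron expression are assumptions):

resource "databricks_job" "q1_refresh" {
  name = "Refresh My Query" // assumed job name

  schedule {
    quartz_cron_expression = "0 0 5 * * ?" // assumed: run daily at 05:00
    timezone_id            = "UTC"
  }

  task {
    task_key = "run_q1"
    sql_task {
      warehouse_id = databricks_sql_endpoint.example.id
      query {
        query_id = databricks_sql_query.q1.id
      }
    }
  }
}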
The following resources are often used in the same context:
- End to end workspace management guide.
- databricks_sql_dashboard to manage Databricks SQL Dashboards.
- databricks_sql_endpoint to manage Databricks SQL Endpoints.
- databricks_sql_global_config to configure the security policy, databricks_instance_profile, and data access properties for all databricks_sql_endpoint resources in a workspace.
- databricks_sql_permissions to manage data object access control lists in Databricks workspaces for things like tables, views, databases, and more.
- databricks_job to schedule Databricks SQL queries (as well as dashboards and alerts) using Databricks Jobs.