Don't try setting the command_type unless enable_v3_retries==True
Add an example of using v3_retries
Update changelog

Signed-off-by: Jesse Whitehouse <[email protected]>
Jesse Whitehouse committed on Aug 16, 2023
1 parent d28a692 · commit e7d878b
Showing 3 changed files with 42 additions and 3 deletions.
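Of the three changed files, only the new example is reproduced below. The guard referenced in the commit title amounts to checking the flag before attaching a Thrift command type to the retry machinery; a rough sketch of that pattern, using hypothetical attribute names rather than the connector's actual internals:

# Sketch only: names are illustrative, not the connector's real API.
def set_retry_command_type(backend, method_name: str) -> None:
    # The legacy (pre-v3) retry object has no notion of a command type,
    # so only attach one when v3 retries are enabled.
    if backend._enable_v3_retries:
        backend.retry_policy.command_type = method_name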
There are no files selected for viewing
This file contains bidirectional Unicode text that may be interpreted or compiled differently than what appears below. To review, open the file in an editor that reveals hidden Unicode characters.
Learn more about bidirectional Unicode characters
This file contains bidirectional Unicode text that may be interpreted or compiled differently than what appears below. To review, open the file in an editor that reveals hidden Unicode characters.
Learn more about bidirectional Unicode characters
@@ -0,0 +1,34 @@
from databricks import sql
import os

# Users of connector versions >= 2.9.0 and <= 3.0.0 can use the v3 retry behaviour by setting _enable_v3_retries=True
# This flag will be deprecated in databricks-sql-connector~=3.0.0 as it will become the default.
#
# The new retry behaviour is defined in src/databricks/sql/auth/retry.py
#
# The new retry behaviour allows users to force the connector to automatically retry requests that fail with codes
# that are not retried by default (in most cases only codes 429 and 503 are retried by default). Additional HTTP
# codes to retry are specified as a list passed to `_retry_dangerous_codes`.
#
# Note that, as implied in the name, doing this is *dangerous* and should not be configured in all usages.
# With the default behaviour, ExecuteStatement Thrift commands are only retried for codes 429 and 503 because
# we can be certain at run-time that the statement never reached Databricks compute. These codes are returned by
# the SQL gateway / load balancer. So there is no risk that retrying the request would result in a doubled
# (or tripled etc) command execution. These codes are always accompanied by a Retry-After header, which we honour.
#
# However, if your use-case emits idempotent queries such as SELECT statements, it can be helpful to retry
# for 502 (Bad Gateway) codes etc. In these cases, there is a possibility that the initial command _did_ reach
# Databricks compute and retrying it could result in additional executions.

with sql.connect(server_hostname = os.getenv("DATABRICKS_SERVER_HOSTNAME"),
                 http_path = os.getenv("DATABRICKS_HTTP_PATH"),
                 access_token = os.getenv("DATABRICKS_TOKEN"),
                 _enable_v3_retries = True,
                 _retry_dangerous_codes=[502,400]) as connection:

    with connection.cursor() as cursor:
        cursor.execute("SELECT * FROM default.diamonds LIMIT 2")
        result = cursor.fetchall()

        for row in result:
            print(row)
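For comparison, omitting the underscore-prefixed options falls back to the default behaviour described in the comments above (in most cases only 429 and 503 are retried). A minimal sketch using the same environment variables:

from databricks import sql
import os

# Default behaviour: no v3 retry flag, no extra retry codes.
with sql.connect(server_hostname = os.getenv("DATABRICKS_SERVER_HOSTNAME"),
                 http_path = os.getenv("DATABRICKS_HTTP_PATH"),
                 access_token = os.getenv("DATABRICKS_TOKEN")) as connection:
    with connection.cursor() as cursor:
        cursor.execute("SELECT 1")
        print(cursor.fetchall())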