Commit

Bumped up the version to 4.0.0.b3 and also changed the structure to have pyarrow as optional

jprakash-db committed Nov 6, 2024
1 parent 3d1ef79 commit ee7f1e3
Showing 78 changed files with 2,403 additions and 1,111 deletions.
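(Not part of the diff itself: given the commit description, the restructuring presumably makes pyarrow installable as an optional extra. The extra name used below is an assumption for illustration and is not confirmed by the files shown here.)

```bash
# Assumed usage once pyarrow is optional; the "pyarrow" extra name is a guess
pip install databricks-sql-connector              # core connector, no pyarrow
pip install "databricks-sql-connector[pyarrow]"   # opt back in to Arrow-based fetches
```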
15 changes: 15 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,20 @@
# Release History

# 3.6.0 (2024-10-25)

- Support encryption headers in the cloud fetch request (https://github.com/databricks/databricks-sql-python/pull/460 by @jackyhu-db)

# 3.5.0 (2024-10-18)

- Create a non pyarrow flow to handle small results for the column set (databricks/databricks-sql-python#440 by @jprakash-db)
- Fix: On non-retryable error, ensure PySQL includes useful information in error (databricks/databricks-sql-python#447 by @shivam2680)

# 3.4.0 (2024-08-27)

- Unpin pandas to support v2.2.2 (databricks/databricks-sql-python#416 by @kfollesdal)
- Make OAuth as the default authenticator if no authentication setting is provided (databricks/databricks-sql-python#419 by @jackyhu-db)
- Fix (regression): use SSL options with HTTPS connection pool (databricks/databricks-sql-python#425 by @kravets-levko)

# 3.3.0 (2024-07-18)

- Don't retry requests that fail with HTTP code 401 (databricks/databricks-sql-python#408 by @Hodnebo)
8 changes: 4 additions & 4 deletions CONTRIBUTING.md
@@ -85,18 +85,18 @@ We use [Pytest](https://docs.pytest.org/en/7.1.x/) as our test runner. Invoke it
Unit tests do not require a Databricks account.

```bash
poetry run python -m pytest databricks_sql_connector_core/tests/unit
poetry run python -m pytest tests/unit
```
#### Only a specific test file

```bash
poetry run python -m pytest databricks_sql_connector_core/tests/unit/tests.py
poetry run python -m pytest tests/unit/tests.py
```

#### Only a specific method

```bash
poetry run python -m pytest databricks_sql_connector_core/tests/unit/tests.py::ClientTestSuite::test_closing_connection_closes_commands
poetry run python -m pytest tests/unit/tests.py::ClientTestSuite::test_closing_connection_closes_commands
```

#### e2e Tests
@@ -133,7 +133,7 @@ There are several e2e test suites available:
To execute the core test suite:

```bash
poetry run python -m pytest databricks_sql_connector_core/tests/e2e/driver_tests.py::PySQLCoreTestSuite
poetry run python -m pytest tests/e2e/driver_tests.py::PySQLCoreTestSuite
```

The `PySQLCoreTestSuite` namespace contains tests for all of the connector's basic features and behaviours. This is the default namespace where tests should be written unless they require specially configured clusters or take an especially long time to execute by design.
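For instance, individual tests within the core suite can be narrowed down with pytest's standard `-k` keyword filter (the keyword expression below is purely illustrative):

```bash
# Illustrative only: run a keyword-filtered subset of the core e2e suite
poetry run python -m pytest tests/e2e/driver_tests.py::PySQLCoreTestSuite -k "fetch"
```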
2 changes: 1 addition & 1 deletion README.md
@@ -3,7 +3,7 @@
[![PyPI](https://img.shields.io/pypi/v/databricks-sql-connector?style=flat-square)](https://pypi.org/project/databricks-sql-connector/)
[![Downloads](https://pepy.tech/badge/databricks-sql-connector)](https://pepy.tech/project/databricks-sql-connector)

The Databricks SQL Connector for Python allows you to develop Python applications that connect to Databricks clusters and SQL warehouses. It is a Thrift-based client with no dependencies on ODBC or JDBC. It conforms to the [Python DB API 2.0 specification](https://www.python.org/dev/peps/pep-0249/) and exposes a [SQLAlchemy](https://www.sqlalchemy.org/) dialect for use with tools like `pandas` and `alembic` which use SQLAlchemy to execute DDL. Use `pip install databricks-sql-connector[databricks-sqlalchemy]` to install with SQLAlchemy's dependencies. `pip install databricks-sql-connector[alembic]` will install alembic's dependencies.
The Databricks SQL Connector for Python allows you to develop Python applications that connect to Databricks clusters and SQL warehouses. It is a Thrift-based client with no dependencies on ODBC or JDBC. It conforms to the [Python DB API 2.0 specification](https://www.python.org/dev/peps/pep-0249/) and exposes a [SQLAlchemy](https://www.sqlalchemy.org/) dialect for use with tools like `pandas` and `alembic` which use SQLAlchemy to execute DDL. Use `pip install databricks-sql-connector[sqlalchemy]` to install with SQLAlchemy's dependencies. `pip install databricks-sql-connector[alembic]` will install alembic's dependencies.

This connector uses Arrow as the data-exchange format, and supports APIs to directly fetch Arrow tables. Arrow tables are wrapped in the `ArrowQueue` class to provide a natural API to get several rows at a time.
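(Not part of the README diff above: a minimal sketch of the Arrow fetch path it describes, assuming pyarrow is installed and using the same environment variables as the scripts in `examples/`.)

```python
from databricks import sql
import os

# A minimal sketch, not part of this commit: fetch query results as an Arrow table.
with sql.connect(
    server_hostname=os.getenv("DATABRICKS_SERVER_HOSTNAME"),
    http_path=os.getenv("DATABRICKS_HTTP_PATH"),
    access_token=os.getenv("DATABRICKS_TOKEN"),
) as connection:
    with connection.cursor() as cursor:
        cursor.execute("SELECT * FROM range(10)")
        table = cursor.fetchall_arrow()  # returns a pyarrow.Table
        print(table.num_rows)
```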

File renamed without changes.
24 changes: 0 additions & 24 deletions databricks_sql_connector/pyproject.toml

This file was deleted.

Empty file.
22 changes: 13 additions & 9 deletions examples/custom_cred_provider.py
@@ -4,23 +4,27 @@
from databricks.sdk.oauth import OAuthClient
import os

oauth_client = OAuthClient(host=os.getenv("DATABRICKS_SERVER_HOSTNAME"),
client_id=os.getenv("DATABRICKS_CLIENT_ID"),
client_secret=os.getenv("DATABRICKS_CLIENT_SECRET"),
redirect_url=os.getenv("APP_REDIRECT_URL"),
scopes=['all-apis', 'offline_access'])
oauth_client = OAuthClient(
host=os.getenv("DATABRICKS_SERVER_HOSTNAME"),
client_id=os.getenv("DATABRICKS_CLIENT_ID"),
client_secret=os.getenv("DATABRICKS_CLIENT_SECRET"),
redirect_url=os.getenv("APP_REDIRECT_URL"),
scopes=["all-apis", "offline_access"],
)

consent = oauth_client.initiate_consent()

creds = consent.launch_external_browser()

with sql.connect(server_hostname = os.getenv("DATABRICKS_SERVER_HOSTNAME"),
http_path = os.getenv("DATABRICKS_HTTP_PATH"),
credentials_provider=creds) as connection:
with sql.connect(
server_hostname=os.getenv("DATABRICKS_SERVER_HOSTNAME"),
http_path=os.getenv("DATABRICKS_HTTP_PATH"),
credentials_provider=creds,
) as connection:

for x in range(1, 5):
cursor = connection.cursor()
cursor.execute('SELECT 1+1')
cursor.execute("SELECT 1+1")
result = cursor.fetchall()
for row in result:
print(row)
26 changes: 14 additions & 12 deletions examples/insert_data.py
@@ -1,21 +1,23 @@
from databricks import sql
import os

with sql.connect(server_hostname = os.getenv("DATABRICKS_SERVER_HOSTNAME"),
http_path = os.getenv("DATABRICKS_HTTP_PATH"),
access_token = os.getenv("DATABRICKS_TOKEN")) as connection:
with sql.connect(
server_hostname=os.getenv("DATABRICKS_SERVER_HOSTNAME"),
http_path=os.getenv("DATABRICKS_HTTP_PATH"),
access_token=os.getenv("DATABRICKS_TOKEN"),
) as connection:

with connection.cursor() as cursor:
cursor.execute("CREATE TABLE IF NOT EXISTS squares (x int, x_squared int)")
with connection.cursor() as cursor:
cursor.execute("CREATE TABLE IF NOT EXISTS squares (x int, x_squared int)")

squares = [(i, i * i) for i in range(100)]
values = ",".join([f"({x}, {y})" for (x, y) in squares])
squares = [(i, i * i) for i in range(100)]
values = ",".join([f"({x}, {y})" for (x, y) in squares])

cursor.execute(f"INSERT INTO squares VALUES {values}")
cursor.execute(f"INSERT INTO squares VALUES {values}")

cursor.execute("SELECT * FROM squares LIMIT 10")
cursor.execute("SELECT * FROM squares LIMIT 10")

result = cursor.fetchall()
result = cursor.fetchall()

for row in result:
print(row)
for row in result:
print(row)
8 changes: 5 additions & 3 deletions examples/interactive_oauth.py
@@ -13,12 +13,14 @@
token across script executions.
"""

with sql.connect(server_hostname = os.getenv("DATABRICKS_SERVER_HOSTNAME"),
http_path = os.getenv("DATABRICKS_HTTP_PATH")) as connection:
with sql.connect(
server_hostname=os.getenv("DATABRICKS_SERVER_HOSTNAME"),
http_path=os.getenv("DATABRICKS_HTTP_PATH"),
) as connection:

for x in range(1, 100):
cursor = connection.cursor()
cursor.execute('SELECT 1+1')
cursor.execute("SELECT 1+1")
result = cursor.fetchall()
for row in result:
print(row)
12 changes: 7 additions & 5 deletions examples/m2m_oauth.py
@@ -22,17 +22,19 @@ def credential_provider():
# Service Principal UUID
client_id=os.getenv("DATABRICKS_CLIENT_ID"),
# Service Principal Secret
client_secret=os.getenv("DATABRICKS_CLIENT_SECRET"))
client_secret=os.getenv("DATABRICKS_CLIENT_SECRET"),
)
return oauth_service_principal(config)


with sql.connect(
server_hostname=server_hostname,
http_path=os.getenv("DATABRICKS_HTTP_PATH"),
credentials_provider=credential_provider) as connection:
server_hostname=server_hostname,
http_path=os.getenv("DATABRICKS_HTTP_PATH"),
credentials_provider=credential_provider,
) as connection:
for x in range(1, 100):
cursor = connection.cursor()
cursor.execute('SELECT 1+1')
cursor.execute("SELECT 1+1")
result = cursor.fetchall()
for row in result:
print(row)
47 changes: 27 additions & 20 deletions examples/persistent_oauth.py
@@ -17,37 +17,44 @@
from typing import Optional

from databricks import sql
from databricks.sql.experimental.oauth_persistence import OAuthPersistence, OAuthToken, DevOnlyFilePersistence
from databricks.sql.experimental.oauth_persistence import (
OAuthPersistence,
OAuthToken,
DevOnlyFilePersistence,
)


class SampleOAuthPersistence(OAuthPersistence):
def persist(self, hostname: str, oauth_token: OAuthToken):
"""To be implemented by the end user to persist in the preferred storage medium.
def persist(self, hostname: str, oauth_token: OAuthToken):
"""To be implemented by the end user to persist in the preferred storage medium.
OAuthToken has two properties:
1. OAuthToken.access_token
2. OAuthToken.refresh_token
OAuthToken has two properties:
1. OAuthToken.access_token
2. OAuthToken.refresh_token
Both should be persisted.
"""
pass
Both should be persisted.
"""
pass

def read(self, hostname: str) -> Optional[OAuthToken]:
"""To be implemented by the end user to fetch token from the preferred storage
def read(self, hostname: str) -> Optional[OAuthToken]:
"""To be implemented by the end user to fetch token from the preferred storage
Fetch the access_token and refresh_token for the given hostname.
Return OAuthToken(access_token, refresh_token)
"""
pass
Fetch the access_token and refresh_token for the given hostname.
Return OAuthToken(access_token, refresh_token)
"""
pass

with sql.connect(server_hostname = os.getenv("DATABRICKS_SERVER_HOSTNAME"),
http_path = os.getenv("DATABRICKS_HTTP_PATH"),
auth_type="databricks-oauth",
experimental_oauth_persistence=DevOnlyFilePersistence("./sample.json")) as connection:

with sql.connect(
server_hostname=os.getenv("DATABRICKS_SERVER_HOSTNAME"),
http_path=os.getenv("DATABRICKS_HTTP_PATH"),
auth_type="databricks-oauth",
experimental_oauth_persistence=DevOnlyFilePersistence("./sample.json"),
) as connection:

for x in range(1, 100):
cursor = connection.cursor()
cursor.execute('SELECT 1+1')
cursor.execute("SELECT 1+1")
result = cursor.fetchall()
for row in result:
print(row)
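(For reference, not part of this commit: a minimal file-backed implementation of the `OAuthPersistence` interface described in the docstrings above might look like the following sketch. The class name, file path, and JSON layout are illustrative choices.)

```python
import json
from typing import Optional

from databricks.sql.experimental.oauth_persistence import OAuthPersistence, OAuthToken


class FileOAuthPersistence(OAuthPersistence):
    """Illustrative only: stores tokens as plain JSON keyed by hostname."""

    def __init__(self, path: str):
        self._path = path

    def persist(self, hostname: str, oauth_token: OAuthToken):
        # Persist both properties named in the docstring above.
        data = self._read_all()
        data[hostname] = {
            "access_token": oauth_token.access_token,
            "refresh_token": oauth_token.refresh_token,
        }
        with open(self._path, "w") as f:
            json.dump(data, f)

    def read(self, hostname: str) -> Optional[OAuthToken]:
        # Return OAuthToken(access_token, refresh_token) for the hostname, if stored.
        entry = self._read_all().get(hostname)
        if entry is None:
            return None
        return OAuthToken(entry["access_token"], entry["refresh_token"])

    def _read_all(self) -> dict:
        try:
            with open(self._path) as f:
                return json.load(f)
        except FileNotFoundError:
            return {}
```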
69 changes: 37 additions & 32 deletions examples/query_cancel.py
@@ -5,47 +5,52 @@
The current operation of a cursor may be cancelled by calling its `.cancel()` method as shown in the example below.
"""

with sql.connect(server_hostname = os.getenv("DATABRICKS_SERVER_HOSTNAME"),
http_path = os.getenv("DATABRICKS_HTTP_PATH"),
access_token = os.getenv("DATABRICKS_TOKEN")) as connection:
with sql.connect(
server_hostname=os.getenv("DATABRICKS_SERVER_HOSTNAME"),
http_path=os.getenv("DATABRICKS_HTTP_PATH"),
access_token=os.getenv("DATABRICKS_TOKEN"),
) as connection:

with connection.cursor() as cursor:
def execute_really_long_query():
try:
cursor.execute("SELECT SUM(A.id - B.id) " +
"FROM range(1000000000) A CROSS JOIN range(100000000) B " +
"GROUP BY (A.id - B.id)")
except sql.exc.RequestError:
print("It looks like this query was cancelled.")
with connection.cursor() as cursor:

exec_thread = threading.Thread(target=execute_really_long_query)
def execute_really_long_query():
try:
cursor.execute(
"SELECT SUM(A.id - B.id) "
+ "FROM range(1000000000) A CROSS JOIN range(100000000) B "
+ "GROUP BY (A.id - B.id)"
)
except sql.exc.RequestError:
print("It looks like this query was cancelled.")

print("\n Beginning to execute long query")
exec_thread.start()
exec_thread = threading.Thread(target=execute_really_long_query)

# Make sure the query has started before cancelling
print("\n Waiting 15 seconds before canceling", end="", flush=True)
print("\n Beginning to execute long query")
exec_thread.start()

seconds_waited = 0
while seconds_waited < 15:
seconds_waited += 1
print(".", end="", flush=True)
time.sleep(1)
# Make sure the query has started before cancelling
print("\n Waiting 15 seconds before canceling", end="", flush=True)

print("\n Cancelling the cursor's operation. This can take a few seconds.")
cursor.cancel()
seconds_waited = 0
while seconds_waited < 15:
seconds_waited += 1
print(".", end="", flush=True)
time.sleep(1)

print("\n Now checking the cursor status:")
exec_thread.join(5)
print("\n Cancelling the cursor's operation. This can take a few seconds.")
cursor.cancel()

assert not exec_thread.is_alive()
print("\n The previous command was successfully canceled")
print("\n Now checking the cursor status:")
exec_thread.join(5)

print("\n Now reusing the cursor to run a separate query.")
assert not exec_thread.is_alive()
print("\n The previous command was successfully canceled")

# We can still execute a new command on the cursor
cursor.execute("SELECT * FROM range(3)")
print("\n Now reusing the cursor to run a separate query.")

print("\n Execution was successful. Results appear below:")
# We can still execute a new command on the cursor
cursor.execute("SELECT * FROM range(3)")

print(cursor.fetchall())
print("\n Execution was successful. Results appear below:")

print(cursor.fetchall())
18 changes: 10 additions & 8 deletions examples/query_execute.py
@@ -1,13 +1,15 @@
from databricks import sql
import os

with sql.connect(server_hostname = os.getenv("DATABRICKS_SERVER_HOSTNAME"),
http_path = os.getenv("DATABRICKS_HTTP_PATH"),
access_token = os.getenv("DATABRICKS_TOKEN")) as connection:
with sql.connect(
server_hostname=os.getenv("DATABRICKS_SERVER_HOSTNAME"),
http_path=os.getenv("DATABRICKS_HTTP_PATH"),
access_token=os.getenv("DATABRICKS_TOKEN"),
) as connection:

with connection.cursor() as cursor:
cursor.execute("SELECT * FROM default.diamonds LIMIT 2")
result = cursor.fetchall()
with connection.cursor() as cursor:
cursor.execute("SELECT * FROM default.diamonds LIMIT 2")
result = cursor.fetchall()

for row in result:
print(row)
for row in result:
print(row)