Unable to write SUPER type column to Redshift using redshift.copy #3095
Comments
Hi @kukushking, could you please try explicitly serializing the 'translation' column as JSON using json.dumps? This ensures the column is in the correct format.
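For reference, a minimal sketch of that workaround (the frame and column names are assumed from the issue):

```python
import json

# Convert nested Python objects to JSON strings before the COPY,
# so each value arrives as text Redshift can parse into SUPER.
df["translation"] = df["translation"].apply(json.dumps)
```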
@Rutuja2506 - that worked for me!
@kukushking, could you please reopen this issue? The original problem persists: serialize_to_json=True is not functioning correctly. To address this, please either fix it, or remove the option if it's not feasible to fix at this time. Thanks!
@misteliy @duarteocarmo what is the type of the column in your source data frame: object or string? One thing the code below ensures is that the column is a string that can be serialised to JSON and written into a SUPER type.
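The referenced snippet isn't preserved in this thread; a sketch of the kind of check it describes (the column name and helper are hypothetical):

```python
import json

def ensure_json_strings(df, column="translation"):  # hypothetical helper
    # Values bound for a SUPER column must be strings Redshift can parse as
    # JSON, not Python dicts/lists, so serialize any non-string values first.
    if not df[column].map(lambda v: isinstance(v, str)).all():
        df[column] = df[column].apply(json.dumps)
    df[column].apply(json.loads)  # raises if any value is not valid JSON
    return df
```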
@misteliy just to be clear, I have added a test case and clarified the docs.
Describe the bug
Description
Having issues writing data to a Redshift table containing a SUPER type column using awswrangler.redshift.copy. Even with serialize_to_json=True, the SUPER type column is not properly handled.

Environment
Table Schema
My function
Example data:
When I query the data in Redshift, the translation column comes back as a plain string rather than a SUPER (JSON) value.
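One way to confirm what actually landed in the column is Redshift's JSON_TYPEOF, which reports a SUPER value's dynamic type: a properly nested value reads as 'object', while a double-serialized one reads as 'string'. A sketch (connection and table names assumed):

```python
import awswrangler as wr

check = wr.redshift.read_sql_query(
    "SELECT translation, JSON_TYPEOF(translation) AS t FROM public.my_table LIMIT 1",
    con=con,  # an open Redshift connection (assumed)
)
print(check)
```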
How to Reproduce
See above.
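The original function and schema were not preserved in this thread; below is a hypothetical minimal call exercising the same path (all names, paths, and the connection are placeholders):

```python
import awswrangler as wr
import pandas as pd

df = pd.DataFrame(
    {
        "id": [1],
        # Nested data intended for the SUPER column.
        "translation": [{"en": "hello", "de": "hallo"}],
    }
)

con = wr.redshift.connect("my-glue-connection")  # placeholder connection name
try:
    wr.redshift.copy(
        df=df,
        path="s3://my-bucket/staging/",  # placeholder staging prefix
        con=con,
        table="my_table",
        schema="public",
        mode="append",
        serialize_to_json=True,  # the option under discussion
    )
finally:
    con.close()
```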
Expected behavior
The column in Redshift should be a SUPER (JSON) value.
Your project
No response
Screenshots
No response
OS
Mac
Python version
3.12.4
AWS SDK for pandas version
3.11.0
Additional context
No response