Describe the bug
The documentation for the read_parquet function suggests that the chunked argument makes the function memory friendly, because it returns an iterable of dataframes instead of a single dataframe. However, when tested with a 500 MB parquet file and chunked=1, the function consumes more than 7 GB of memory even before it returns the iterable object. That indicates the function is doing something underneath (possibly loading the whole file into memory) before it can hand back a streamable object.
If this is the expected behavior, then the function unfortunately cannot be considered memory friendly, since it still ends up consuming a lot of memory, and the documentation should state that explicitly so users know what to expect. If it is not the expected behavior, then it is possibly a bug.
Sharing our code below:
import pyarrow
import awswrangler as wr

def get_table_chunks_from_s3_file(app_configs: dict, sqs_values_dict: dict):
    bucket = sqs_values_dict["BucketName"]
    key = sqs_values_dict["ObjectKey"]
    boto3_session = app_configs["boto3_session"]
    file_path = "s3://" + bucket + "/" + key
    # Below function call takes up high memory before being able to return the dataframes object.
    dataframes = wr.s3.read_parquet(
        path=file_path, chunked=1, boto3_session=boto3_session
    )
    for dataframe in dataframes:
        yield pyarrow.table(dataframe)
Note that in the code above, I have added a comment marking the part that consumes the most memory.
How to Reproduce
Run the read_parquet function on a relatively large parquet file in S3 and check, with a memory profiler, how much memory it consumes before it returns the iterable object.
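To make this concrete, here is a minimal sketch of one way such a measurement could look. It is not taken from the original report: the S3 path is hypothetical, and it assumes the third-party memory_profiler package is installed.

import boto3
import awswrangler as wr
from memory_profiler import memory_usage

S3_PATH = "s3://your-bucket/large-file.parquet"  # hypothetical example path
session = boto3.Session()

def build_iterable():
    # Only builds the chunked iterable; no chunk is consumed yet.
    return wr.s3.read_parquet(path=S3_PATH, chunked=1, boto3_session=session)

# memory_usage samples the process RSS (in MiB) while build_iterable runs,
# so the peak shows how much memory is used before any dataframe is yielded.
peak_mib = max(memory_usage((build_iterable, (), {})))
print(f"Peak RSS while creating the iterable: {peak_mib:.0f} MiB")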
Expected behavior
The function (as the documentation suggests) should not consume so much memory just to return an iterable of dataframes.
Your project
No response
Screenshots
No response
OS
Ubuntu 22.04
Python version
3.10.12
AWS SDK for pandas version
3.9.1
Additional context
Support Case ID: 172918156100319
Hi @geetparekh, note that rows in parquet datasets are organized into row groups, and the rows within a row group must be read in one go. The assumption that chunked=1 will result in a lower memory footprint is therefore not correct.
import boto3
import awswrangler as wr
import pyarrow as pa

# Path to a public parquet dataset (440.6 MB file)
path = "s3://ursa-labs-taxi-data/2009/01/data.parquet"
session = boto3.Session()

def get_table_chunks_from_s3_file(path, chunked, session):
    dataframes = wr.s3.read_parquet(path=path, chunked=chunked, boto3_session=session)
    yield from dataframes

# this line runs immediately and returns a generator, as expected
g = get_table_chunks_from_s3_file(path, 1000000, session)

# this line consumes the first item from the generator and loads 1000000 records
next(g)
The parquet file used for the test has 14092413 rows in 216 row groups:
> parquet-tools inspect data.parquet
############ file meta data ############
created_by: parquet-cpp version 1.5.1-SNAPSHOT
num_columns: 18
num_rows: 14092413
num_row_groups: 216
format_version: 1.0
serialized_size: 324078
...
Reading with chunked=1000000, memory never peaked above 1 GB; given Parquet's efficient compression, an in-memory footprint larger than the 440.6 MB on-disk size is expected.
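As a side note (not part of the original reply), the row-group layout that sets the per-chunk memory floor can also be inspected with pyarrow. The sketch below assumes the same taxi file has been downloaded locally as data.parquet.

import pyarrow.parquet as pq

# Assumes the file above was downloaded locally as data.parquet.
pf = pq.ParquetFile("data.parquet")
meta = pf.metadata
print("row groups:", meta.num_row_groups)

# Each chunk has to materialize whole row groups (rows within a row group
# must be read in one go), so the largest row group is roughly a lower bound
# on per-chunk memory regardless of how small chunked is.
for i in range(meta.num_row_groups):
    rg = meta.row_group(i)
    print(f"row group {i}: {rg.num_rows} rows, {rg.total_byte_size} bytes uncompressed")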
I will continue doing some tests to reproduce your issue and keep you updated.