Should the `max_bytes_per_partition` config map to librdkafka's `fetch.message.max.bytes` config instead of `message.max.bytes`? `message.max.bytes` is a broker or topic config that seems more relevant to a Kafka producer.
| Property | C/P | Range | Default | Importance | Description |
| --- | --- | --- | --- | --- | --- |
| `message.max.bytes` | * | 1000 .. 1000000000 | 1000000 | medium | Maximum Kafka protocol request message size. Due to differing framing overhead between protocol versions the producer is unable to reliably enforce a strict max message limit at produce time and may exceed the maximum size by one message in protocol ProduceRequests; the broker will enforce the topic's `max.message.bytes` limit (see Apache Kafka documentation). *Type: integer* |
| `fetch.message.max.bytes` | C | 1 .. 1000000000 | 1048576 | medium | Initial maximum number of bytes per topic+partition to request when fetching messages from the broker. If the client encounters a message larger than this value it will gradually try to increase it until the entire message can be fetched. *Type: integer* |
| `fetch.max.bytes` | C | 0 .. 2147483135 | 52428800 | medium | Maximum amount of data the broker shall return for a Fetch request. Messages are fetched in batches by the consumer and if the first message batch in the first non-empty partition of the Fetch request is larger than this value, then the message batch will still be returned to ensure the consumer can make progress. The maximum message batch size accepted by the broker is defined via `message.max.bytes` (broker config) or `max.message.bytes` (broker topic config). `fetch.max.bytes` is automatically adjusted upwards to be at least `message.max.bytes` (consumer config). *Type: integer* |
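For illustration, here is a sketch of a consumer config hash (as accepted by rdkafka-ruby's `Rdkafka::Config.new`) that relies on this behavior; the broker address and group id are placeholders:

```ruby
# Hypothetical librdkafka consumer settings. Only the initial
# per-partition fetch size is raised; fetch.max.bytes is left unset
# so librdkafka can adjust it upwards to be >= message.max.bytes.
consumer_config = {
  "bootstrap.servers"       => "localhost:9092", # placeholder
  "group.id"                => "example-group",  # placeholder
  "fetch.message.max.bytes" => 2_097_152         # 2 MiB per partition
}
```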
This mapping was changed in part because of the error that occurs when setting `fetch.max.bytes` to a value < `message.max.bytes`. #163
```
`fetch.max.bytes` must be >= `message.max.bytes` (Rdkafka::Config::ClientCreationError)
```
This error happens because we are explicitly setting `fetch.max.bytes`, so the value is not being automatically adjusted by librdkafka. After changing the mapping of `max_bytes_per_partition` -> `fetch.message.max.bytes`, I was thinking of adding a `message_max_bytes` config along with validations within Racecar to prevent that error.
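A minimal sketch of what that validation could look like (the method name and the use of `ArgumentError` are assumptions, not Racecar's actual API):

```ruby
# Hypothetical pre-flight check Racecar could run before handing the
# settings to librdkafka. Failing early with a clear message avoids the
# Rdkafka::Config::ClientCreationError raised at client creation time.
def validate_fetch_sizes!(fetch_max_bytes, message_max_bytes)
  # Nothing to check if either value is left to librdkafka's defaults.
  return if fetch_max_bytes.nil? || message_max_bytes.nil?

  if fetch_max_bytes < message_max_bytes
    raise ArgumentError,
          "fetch.max.bytes (#{fetch_max_bytes}) must be >= " \
          "message.max.bytes (#{message_max_bytes})"
  end
end
```

With librdkafka's defaults (52428800 and 1000000) the check passes; it only fires when a caller sets `fetch.max.bytes` below `message.max.bytes`.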