
Kafka 3.8.0 #1027

Triggered via pull request August 23, 2024 07:15
Status: Success
Total duration: 11m 32s

Workflow: graalvm-latest.yml

on: pull_request
Matrix: build

Annotations

10 errors and 3 warnings
KafkaConfigurationSpec.test custom consumer deserializer: kafka/src/test/groovy/io/micronaut/configuration/kafka/KafkaConfigurationSpec.groovy#L80
Condition not satisfied:

    (consumer.delegate.deserializers.keyDeserializer as StringDeserializer).encoding == StandardCharsets.US_ASCII.name()

    .encoding -> US-ASCII (sun.nio.cs.US_ASCII)
    == -> false
    StandardCharsets -> class java.nio.charset.StandardCharsets
    .US_ASCII -> US-ASCII
    .name() -> US-ASCII (java.lang.String)
    keyDeserializer -> <org.apache.kafka.common.serialization.StringDeserializer@5ba64c76 encoding=US-ASCII>
    deserializers -> Deserializers{keyDeserializer=org.apache.kafka.common.serialization.StringDeserializer@5ba64c76, valueDeserializer=org.apache.kafka.common.serialization.StringDeserializer@68d2b518}
    consumer.delegate -> <org.apache.kafka.clients.consumer.internals.LegacyKafkaConsumer@426de59f metrics=org.apache.kafka.common.metrics.Metrics@61aa5dbf kafkaConsumerMetrics=org.apache.kafka.clients.consumer.internals.metrics.KafkaConsumerMetrics@483f19a3 log=org.apache.kafka.common.utils.LogContext$LocationAwareKafkaLogger@764a1e3d clientId=consumer-null-64 groupId=Optional.empty coordinator=null deserializers=Deserializers{keyDeserializer=org.apache.kafka.common.serialization.StringDeserializer@5ba64c76, valueDeserializer=org.apache.kafka.common.serialization.StringDeserializer@68d2b518} fetcher=org.apache.kafka.clients.consumer.internals.Fetcher@58105ba0 offsetFetcher=org.apache.kafka.clients.consumer.internals.OffsetFetcher@7e8d0d69 topicMetadataFetcher=org.apache.kafka.clients.consumer.internals.TopicMetadataFetcher@17a12b32 interceptors=org.apache.kafka.clients.consumer.internals.ConsumerInterceptors@6aa626ab isolationLevel=read_uncommitted time=org.apache.kafka.common.utils.SystemTime@3031bd04 client=org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient@2641182e subscriptions=SubscriptionState{type=NONE, subscribedPattern=null, subscription=, groupSubscription=, defaultResetStrategy=latest, assignment=[] (id=0)} metadata=org.apache.kafka.clients.consumer.internals.ConsumerMetadata@38a74f6f retryBackoffMs=100 retryBackoffMaxMs=1000 requestTimeoutMs=30000 defaultApiTimeoutMs=60000 closed=true assignors=[org.apache.kafka.clients.consumer.RangeAssignor@636fb5ae, org.apache.kafka.clients.consumer.CooperativeStickyAssignor@5e0bed5a] clientTelemetryReporter=Optional[org.apache.kafka.common.telemetry.internals.ClientTelemetryReporter@70250edd] currentThread=-1 refcount=0 cachedSubscriptionHasAllFetchPositions=false>
    consumer -> <org.apache.kafka.clients.consumer.KafkaConsumer@3361b54c delegate=org.apache.kafka.clients.consumer.internals.LegacyKafkaConsumer@426de59f>
KafkaConfigurationSpec.test custom producer serializer: kafka/src/test/groovy/io/micronaut/configuration/kafka/KafkaConfigurationSpec.groovy#L106
Condition not satisfied:

    (producer.keySerializer as StringSerializer).encoding == StandardCharsets.US_ASCII.name()

    .encoding -> US-ASCII (sun.nio.cs.US_ASCII)
    == -> false
    StandardCharsets -> class java.nio.charset.StandardCharsets
    .US_ASCII -> US-ASCII
    .name() -> US-ASCII (java.lang.String)
    keySerializer -> <org.apache.kafka.common.serialization.StringSerializer@1b54b2ef encoding=US-ASCII>
    producer -> <org.apache.kafka.clients.producer.KafkaProducer@37a080e0 log=org.apache.kafka.common.utils.LogContext$LocationAwareKafkaLogger@21880359 clientId=producer-66 metrics=org.apache.kafka.common.metrics.Metrics@70ba2b39 producerMetrics=org.apache.kafka.clients.producer.internals.KafkaProducerMetrics@7ab61b45 partitioner=null maxRequestSize=1048576 totalMemorySize=33554432 metadata=org.apache.kafka.clients.producer.internals.ProducerMetadata@49222373 accumulator=org.apache.kafka.clients.producer.internals.RecordAccumulator@285163c5 sender=org.apache.kafka.clients.producer.internals.Sender@9936615 ioThread=Thread[kafka-producer-network-thread | producer-66,5,] compression=org.apache.kafka.common.compress.NoCompression@43b53851 errors=org.apache.kafka.common.metrics.Sensor@1ee71a77 time=org.apache.kafka.common.utils.SystemTime@3031bd04 keySerializer=org.apache.kafka.common.serialization.StringSerializer@1b54b2ef valueSerializer=org.apache.kafka.common.serialization.StringSerializer@38b8a50b producerConfig=org.apache.kafka.clients.producer.ProducerConfig@ac47a93b maxBlockTimeMs=60000 partitionerIgnoreKeys=false interceptors=org.apache.kafka.clients.producer.internals.ProducerInterceptors@2a89cfc7 apiVersions=org.apache.kafka.clients.ApiVersions@40141d77 transactionManager=org.apache.kafka.clients.producer.internals.TransactionManager@7ee91c50 clientTelemetryReporter=Optional[org.apache.kafka.common.telemetry.internals.ClientTelemetryReporter@3847e7b]>
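Note on the two encoding failures above: both fail the same way. The left-hand side of each comparison now evaluates to US-ASCII (sun.nio.cs.US_ASCII), a java.nio.charset.Charset, while the specs still compare it against the String returned by StandardCharsets.US_ASCII.name(). A minimal Groovy sketch of the mismatch, assuming (as the power-assert output indicates) that Kafka 3.8.0 changed the encoding field of StringSerializer/StringDeserializer from String to Charset:

    import java.nio.charset.Charset
    import java.nio.charset.StandardCharsets

    // The failing comparison in miniature: a Charset never equals a String,
    // even when their textual names coincide.
    Charset encoding = StandardCharsets.US_ASCII        // what the field now appears to hold
    String expected  = StandardCharsets.US_ASCII.name() // what the specs compare against

    assert encoding != expected                  // Charset vs String: the comparison that now fails
    assert encoding.name() == expected           // name-to-name comparison still passes
    assert encoding == StandardCharsets.US_ASCII // Charset-to-Charset comparison also passes

Comparing name to name (or Charset to Charset) would make the specs tolerant of either field type.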
KafkaBatchErrorStrategySpec.test batch mode with 'retry' error strategy when there are serialization errors: kafka/src/test/groovy/io/micronaut/configuration/kafka/errors/KafkaBatchErrorStrategySpec.groovy#L105
Condition not satisfied:

    myConsumer.exceptions[0].message.startsWith('Error deserializing key/value')

    .startsWith('Error deserializing key/value') -> false
    .message -> Error deserializing VALUE for partition batch-mode-retry-deser-0 at offset 3. If needed, please seek past the record to continue consumption.
    exceptions[0] ->
        io.micronaut.configuration.kafka.exceptions.KafkaListenerException: Error deserializing VALUE for partition batch-mode-retry-deser-0 at offset 3. If needed, please seek past the record to continue consumption.
            at io.micronaut.configuration.kafka.processor.ConsumerState.wrapExceptionInKafkaListenerException(ConsumerState.java:461)
            at io.micronaut.configuration.kafka.processor.ConsumerState.handleException(ConsumerState.java:457)
            at io.micronaut.configuration.kafka.processor.ConsumerState.handleException(ConsumerState.java:452)
            at io.micronaut.configuration.kafka.processor.ConsumerStateBatch.resolveWithErrorStrategy(ConsumerStateBatch.java:153)
            at io.micronaut.configuration.kafka.processor.ConsumerStateBatch.pollRecords(ConsumerStateBatch.java:74)
            at io.micronaut.configuration.kafka.processor.ConsumerState.pollAndProcessRecords(ConsumerState.java:197)
            at io.micronaut.configuration.kafka.processor.ConsumerState.refreshAssignmentsPollAndProcessRecords(ConsumerState.java:164)
            at io.micronaut.configuration.kafka.processor.ConsumerState.threadPollLoop(ConsumerState.java:154)
            at io.micrometer.core.instrument.composite.CompositeTimer.record(CompositeTimer.java:141)
            at io.micrometer.core.instrument.Timer.lambda$wrap$0(Timer.java:193)
            at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
            at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
            at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
            at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
            at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
            at java.base/java.lang.Thread.run(Thread.java:842)
        Caused by: org.apache.kafka.common.errors.RecordDeserializationException: Error deserializing VALUE for partition batch-mode-retry-deser-0 at offset 3. If needed, please seek past the record to continue consumption.
            at org.apache.kafka.clients.consumer.internals.CompletedFetch.newRecordDeserializationException(CompletedFetch.java:346)
            at org.apache.kafka.clients.consumer.internals.CompletedFetch.parseRecord(CompletedFetch.java:330)
            at org.apache.kafka.clients.consumer.internals.CompletedFetch.fetchRecords(CompletedFetch.java:284)
            at org.apache.kafka.clients.consumer.internals.FetchCollector.fetchRecords(FetchCollector.java:168)
            at org.apache.kafka.clients.consumer.internals.FetchCollector.collectFetch(FetchCollector.java:134)
            at org.apache.kafka.clients.consumer.internals.Fetcher.collectFetch(Fetcher.java:145)
            at org.apache.kafka.clients.consumer.internals.LegacyKafkaConsumer.pollForFetches(LegacyKafkaConsumer.java:667)
            at org.apache.kafka.clients.consumer.internals.LegacyKafkaConsumer.poll(LegacyKafkaConsumer.java:618)
            at org.apache.kafka.clients.consumer.internals.LegacyKafkaConsumer.poll(LegacyKafkaConsumer.java:591)
            at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:874)
            at io.micronaut.configuration.kafka.processor.ConsumerStateBatch.pollRecords(ConsumerStateBatch.java:67)
            ... 11 more
        Caused by: org.apache.kafka.common.errors.SerializationException: Size of data received by IntegerDeserializer is not 4
            at org.apache.kafka.common.serialization.IntegerDeserializer.deserialize(IntegerDeserializer.java:48)
            at org.apache.kafka.common.serialization.IntegerDeserializer.deserialize(IntegerDeserializer.java:24)
            at org.apache.kafka.clients.consumer.internals.CompletedFetch.parseRecord(CompletedFetch.java:327)
            ... 20 more
    exceptions -> [io.micronaut.configuration.kafka.exceptions.KafkaListenerException: Error deserializing VALUE for partition batch-mode-retry-deser-0 at offset 3. If needed, please seek past the record to continue consumption., io.micronaut.configuration.kafka.exceptions.KafkaListenerException: Error deserializing VALUE for partition batch-mode-retry-deser-0 at offset 3. If needed, please seek past the record to continue consumption.]
    myConsumer -> <io.micronaut.configuration.kafka.errors.KafkaBatchErrorStrategySpec$RetryDeserConsumer@2c239953 count=0 received=[111/222, 333, 444/555] exceptions=[io.micronaut.configuration.kafka.exceptions.KafkaListenerException: Error deserializing VALUE for partition batch-mode-retry-deser-0 at offset 3. If needed, please seek past the record to continue consumption., io.micronaut.configuration.kafka.exceptions.KafkaListenerException: Error deserializing VALUE for partition batch-mode-retry-deser-0 at offset 3. If needed, please seek past the record to continue consumption.] successful=[] skipped=[] skippedOffsets=[]>
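Note on the failure above: the listener still received the records, but the assertion on the message prefix no longer holds. Kafka 3.8.0's RecordDeserializationException reports 'Error deserializing VALUE for partition … at offset …', whereas older clients said 'Error deserializing key/value for partition …'. A Groovy sketch of a prefix check that accepts both formats (the pattern is an illustrative assumption, not the project's actual fix):

    // The string literals mirror this log; KEY is included on the assumption
    // that key failures are reported symmetrically to VALUE failures.
    def oldStyle = 'Error deserializing key/value for partition batch-mode-retry-deser-0 at offset 3'
    def newStyle = 'Error deserializing VALUE for partition batch-mode-retry-deser-0 at offset 3. ' +
            'If needed, please seek past the record to continue consumption.'
    def tolerant = ~/Error deserializing (key\/value|KEY|VALUE) for partition .*/

    assert oldStyle ==~ tolerant
    assert newStyle ==~ tolerant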
KafkaConfigurationSpec.test custom consumer deserializer: kafka/src/test/groovy/io/micronaut/configuration/kafka/KafkaConfigurationSpec.groovy#L80
Condition not satisfied:

    (consumer.delegate.deserializers.keyDeserializer as StringDeserializer).encoding == StandardCharsets.US_ASCII.name()

    .encoding -> US-ASCII (sun.nio.cs.US_ASCII)
    == -> false
    StandardCharsets -> class java.nio.charset.StandardCharsets
    .US_ASCII -> US-ASCII
    .name() -> US-ASCII (java.lang.String)
    keyDeserializer -> <org.apache.kafka.common.serialization.StringDeserializer@6dd80c6 encoding=US-ASCII>
    deserializers -> Deserializers{keyDeserializer=org.apache.kafka.common.serialization.StringDeserializer@6dd80c6, valueDeserializer=org.apache.kafka.common.serialization.StringDeserializer@6aee016d}
    consumer.delegate -> <org.apache.kafka.clients.consumer.internals.LegacyKafkaConsumer@62ff64ba metrics=org.apache.kafka.common.metrics.Metrics@392aaeae kafkaConsumerMetrics=org.apache.kafka.clients.consumer.internals.metrics.KafkaConsumerMetrics@57e3c86e log=org.apache.kafka.common.utils.LogContext$LocationAwareKafkaLogger@1f1eed2b clientId=consumer-null-64 groupId=Optional.empty coordinator=null deserializers=Deserializers{keyDeserializer=org.apache.kafka.common.serialization.StringDeserializer@6dd80c6, valueDeserializer=org.apache.kafka.common.serialization.StringDeserializer@6aee016d} fetcher=org.apache.kafka.clients.consumer.internals.Fetcher@230714e6 offsetFetcher=org.apache.kafka.clients.consumer.internals.OffsetFetcher@4b98d218 topicMetadataFetcher=org.apache.kafka.clients.consumer.internals.TopicMetadataFetcher@21a9a01d interceptors=org.apache.kafka.clients.consumer.internals.ConsumerInterceptors@3b56282c isolationLevel=read_uncommitted time=org.apache.kafka.common.utils.SystemTime@20edab84 client=org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient@2de8b593 subscriptions=SubscriptionState{type=NONE, subscribedPattern=null, subscription=, groupSubscription=, defaultResetStrategy=latest, assignment=[] (id=0)} metadata=org.apache.kafka.clients.consumer.internals.ConsumerMetadata@412d517b retryBackoffMs=100 retryBackoffMaxMs=1000 requestTimeoutMs=30000 defaultApiTimeoutMs=60000 closed=true assignors=[org.apache.kafka.clients.consumer.RangeAssignor@6e605c34, org.apache.kafka.clients.consumer.CooperativeStickyAssignor@45ed6e0a] clientTelemetryReporter=Optional[org.apache.kafka.common.telemetry.internals.ClientTelemetryReporter@e27215b] currentThread=-1 refcount=0 cachedSubscriptionHasAllFetchPositions=false>
    consumer -> <org.apache.kafka.clients.consumer.KafkaConsumer@6ca48223 delegate=org.apache.kafka.clients.consumer.internals.LegacyKafkaConsumer@62ff64ba>
KafkaConfigurationSpec.test custom producer serializer: kafka/src/test/groovy/io/micronaut/configuration/kafka/KafkaConfigurationSpec.groovy#L106
Condition not satisfied:

    (producer.keySerializer as StringSerializer).encoding == StandardCharsets.US_ASCII.name()

    .encoding -> US-ASCII (sun.nio.cs.US_ASCII)
    == -> false
    StandardCharsets -> class java.nio.charset.StandardCharsets
    .US_ASCII -> US-ASCII
    .name() -> US-ASCII (java.lang.String)
    keySerializer -> <org.apache.kafka.common.serialization.StringSerializer@41339af6 encoding=US-ASCII>
    producer -> <org.apache.kafka.clients.producer.KafkaProducer@7998eb53 log=org.apache.kafka.common.utils.LogContext$LocationAwareKafkaLogger@5adc5b3a clientId=producer-66 metrics=org.apache.kafka.common.metrics.Metrics@7cb7004 producerMetrics=org.apache.kafka.clients.producer.internals.KafkaProducerMetrics@2e99f113 partitioner=null maxRequestSize=1048576 totalMemorySize=33554432 metadata=org.apache.kafka.clients.producer.internals.ProducerMetadata@2858d94d accumulator=org.apache.kafka.clients.producer.internals.RecordAccumulator@74cfd712 sender=org.apache.kafka.clients.producer.internals.Sender@15bfa1f1 ioThread=Thread[#522,kafka-producer-network-thread | producer-66,5,] compression=org.apache.kafka.common.compress.NoCompression@2fcfbcf3 errors=org.apache.kafka.common.metrics.Sensor@43d22b4 time=org.apache.kafka.common.utils.SystemTime@20edab84 keySerializer=org.apache.kafka.common.serialization.StringSerializer@41339af6 valueSerializer=org.apache.kafka.common.serialization.StringSerializer@4b4ea285 producerConfig=org.apache.kafka.clients.producer.ProducerConfig@71bbbb41 maxBlockTimeMs=60000 partitionerIgnoreKeys=false interceptors=org.apache.kafka.clients.producer.internals.ProducerInterceptors@23163234 apiVersions=org.apache.kafka.clients.ApiVersions@716c2a81 transactionManager=org.apache.kafka.clients.producer.internals.TransactionManager@2c55fa70 clientTelemetryReporter=Optional[org.apache.kafka.common.telemetry.internals.ClientTelemetryReporter@12312ff5]>
KafkaBatchErrorStrategySpec.test batch mode with 'retry' error strategy when there are serialization errors: kafka/src/test/groovy/io/micronaut/configuration/kafka/errors/KafkaBatchErrorStrategySpec.groovy#L105
Condition not satisfied:

    myConsumer.exceptions[0].message.startsWith('Error deserializing key/value')

    .startsWith('Error deserializing key/value') -> false
    .message -> Error deserializing VALUE for partition batch-mode-retry-deser-0 at offset 3. If needed, please seek past the record to continue consumption.
    exceptions[0] ->
        io.micronaut.configuration.kafka.exceptions.KafkaListenerException: Error deserializing VALUE for partition batch-mode-retry-deser-0 at offset 3. If needed, please seek past the record to continue consumption.
            at io.micronaut.configuration.kafka.processor.ConsumerState.wrapExceptionInKafkaListenerException(ConsumerState.java:461)
            at io.micronaut.configuration.kafka.processor.ConsumerState.handleException(ConsumerState.java:457)
            at io.micronaut.configuration.kafka.processor.ConsumerState.handleException(ConsumerState.java:452)
            at io.micronaut.configuration.kafka.processor.ConsumerStateBatch.resolveWithErrorStrategy(ConsumerStateBatch.java:153)
            at io.micronaut.configuration.kafka.processor.ConsumerStateBatch.pollRecords(ConsumerStateBatch.java:74)
            at io.micronaut.configuration.kafka.processor.ConsumerState.pollAndProcessRecords(ConsumerState.java:197)
            at io.micronaut.configuration.kafka.processor.ConsumerState.refreshAssignmentsPollAndProcessRecords(ConsumerState.java:164)
            at io.micronaut.configuration.kafka.processor.ConsumerState.threadPollLoop(ConsumerState.java:154)
            at io.micrometer.core.instrument.composite.CompositeTimer.record(CompositeTimer.java:141)
            at io.micrometer.core.instrument.Timer.lambda$wrap$0(Timer.java:193)
            at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:572)
            at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
            at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
            at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
            at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
            at java.base/java.lang.Thread.run(Thread.java:1583)
        Caused by: org.apache.kafka.common.errors.RecordDeserializationException: Error deserializing VALUE for partition batch-mode-retry-deser-0 at offset 3. If needed, please seek past the record to continue consumption.
            at org.apache.kafka.clients.consumer.internals.CompletedFetch.newRecordDeserializationException(CompletedFetch.java:346)
            at org.apache.kafka.clients.consumer.internals.CompletedFetch.parseRecord(CompletedFetch.java:330)
            at org.apache.kafka.clients.consumer.internals.CompletedFetch.fetchRecords(CompletedFetch.java:284)
            at org.apache.kafka.clients.consumer.internals.FetchCollector.fetchRecords(FetchCollector.java:168)
            at org.apache.kafka.clients.consumer.internals.FetchCollector.collectFetch(FetchCollector.java:134)
            at org.apache.kafka.clients.consumer.internals.Fetcher.collectFetch(Fetcher.java:145)
            at org.apache.kafka.clients.consumer.internals.LegacyKafkaConsumer.pollForFetches(LegacyKafkaConsumer.java:667)
            at org.apache.kafka.clients.consumer.internals.LegacyKafkaConsumer.poll(LegacyKafkaConsumer.java:618)
            at org.apache.kafka.clients.consumer.internals.LegacyKafkaConsumer.poll(LegacyKafkaConsumer.java:591)
            at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:874)
            at io.micronaut.configuration.kafka.processor.ConsumerStateBatch.pollRecords(ConsumerStateBatch.java:67)
            ... 11 more
        Caused by: org.apache.kafka.common.errors.SerializationException: Size of data received by IntegerDeserializer is not 4
            at org.apache.kafka.common.serialization.IntegerDeserializer.deserialize(IntegerDeserializer.java:48)
            at org.apache.kafka.common.serialization.IntegerDeserializer.deserialize(IntegerDeserializer.java:24)
            at org.apache.kafka.clients.consumer.internals.CompletedFetch.parseRecord(CompletedFetch.java:327)
            ... 20 more
    exceptions -> [io.micronaut.configuration.kafka.exceptions.KafkaListenerException: Error deserializing VALUE for partition batch-mode-retry-deser-0 at offset 3. If needed, please seek past the record to continue consumption., io.micronaut.configuration.kafka.exceptions.KafkaListenerException: Error deserializing VALUE for partition batch-mode-retry-deser-0 at offset 3. If needed, please seek past the record to continue consumption.]
    myConsumer -> <io.micronaut.configuration.kafka.errors.KafkaBatchErrorStrategySpec$RetryDeserConsumer@22080528 count=0 received=[111/222, 333, 444/555] exceptions=[io.micronaut.configuration.kafka.exceptions.KafkaListenerException: Error deserializing VALUE for partition batch-mode-retry-deser-0 at offset 3. If needed, please seek past the record to continue consumption., io.micronaut.configuration.kafka.exceptions.KafkaListenerException: Error deserializing VALUE for partition batch-mode-retry-deser-0 at offset 3. If needed, please seek past the record to continue consumption.] successful=[] skipped=[] skippedOffsets=[]>
build_matrix
The following actions use a deprecated Node.js version and will be forced to run on node20: gradle/gradle-build-action@v2. For more info: https://github.blog/changelog/2024-03-07-github-actions-all-actions-will-run-on-node20-instead-of-node16-by-default/
build (17, micronaut-tests:micronaut-tasks-sasl-plaintext:nativeTest)
The following actions use a deprecated Node.js version and will be forced to run on node20: gradle/gradle-build-action@v2, mikepenz/action-junit-report@v3. For more info: https://github.blog/changelog/2024-03-07-github-actions-all-actions-will-run-on-node20-instead-of-node16-by-default/
build (21, micronaut-tests:micronaut-tasks-sasl-plaintext:nativeTest)
The following actions use a deprecated Node.js version and will be forced to run on node20: gradle/gradle-build-action@v2, mikepenz/action-junit-report@v3. For more info: https://github.blog/changelog/2024-03-07-github-actions-all-actions-will-run-on-node20-instead-of-node16-by-default/
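
Note on the three warnings above: they all point at the same migration. A hypothetical excerpt of graalvm-latest.yml that bumps the flagged actions to majors targeting Node 20 (the version numbers below are assumptions; verify against each action's release notes before applying):

    steps:
      # gradle/gradle-build-action@v2 runs on the deprecated Node 16 runtime;
      # v3 is assumed here to be the Node 20-compatible major.
      - uses: gradle/gradle-build-action@v3
      # mikepenz/action-junit-report@v3 is flagged likewise; v4 assumed.
      - uses: mikepenz/action-junit-report@v4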