Hello team,

We are encountering an issue with corrupted messages, i.e., messages for which Kafka polling throws an exception because the message itself is corrupt (for example after a disk failure). The exception ends up in the StreamsUncaughtExceptionHandler instead of the DeserializationExceptionHandler.
As a result, we are unable to perform a CONTINUE action and can only choose between REPLACE_THREAD, SHUTDOWN_CLIENT, or SHUTDOWN_APPLICATION.
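For context, here is roughly what our handler looks like today (a minimal sketch; the streams instance and the logger are assumed from our application):

```java
import org.apache.kafka.streams.errors.StreamsUncaughtExceptionHandler.StreamThreadExceptionResponse;

// Minimal sketch: "streams" is our KafkaStreams instance. At this level the
// only possible responses are REPLACE_THREAD, SHUTDOWN_CLIENT, and
// SHUTDOWN_APPLICATION; there is no CONTINUE.
streams.setUncaughtExceptionHandler(exception -> {
    log.error("Stream thread hit a fatal error", exception);
    return StreamThreadExceptionResponse.REPLACE_THREAD;
});
```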
We tried adding a ConsumerInterceptor to filter out the corrupted record, but unfortunately the error is raised before the interceptor runs.
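For reference, the interceptor attempt looked roughly like this (a sketch; SkipCorruptInterceptor is our own placeholder name). It never gets a chance to filter anything, because poll() throws before onConsume is invoked:

```java
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerInterceptor;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class SkipCorruptInterceptor implements ConsumerInterceptor<byte[], byte[]> {
    @Override
    public ConsumerRecords<byte[], byte[]> onConsume(ConsumerRecords<byte[], byte[]> records) {
        // Only successfully fetched records reach this point; a corrupt batch
        // makes poll() throw a KafkaException before interceptors run, so
        // there is nothing to filter here.
        return records;
    }

    @Override
    public void onCommit(Map<TopicPartition, OffsetAndMetadata> offsets) { }

    @Override
    public void close() { }

    @Override
    public void configure(Map<String, ?> configs) { }
}
```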
Our current workaround is to connect to the broker as an admin and manually advance the consumer group offset past the corrupt record.
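Concretely, the manual workaround is roughly the following (a sketch using the AdminClient API; the group id, bootstrap server, and target offset are placeholders for our setup, and the Streams application must be stopped first, since offsets of an active group cannot be altered):

```java
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

Properties props = new Properties();
props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");

try (Admin admin = Admin.create(props)) {
    // Skip over the corrupt record at offset 20007698 on http-4.
    // The group id is the Streams application.id; the group must be inactive.
    admin.alterConsumerGroupOffsets(
            "our-streams-app",
            Map.of(new TopicPartition("http", 4),
                   new OffsetAndMetadata(20007699L)))
         .all()
         .get();
}
```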
Is there a way in Kafka Streams to handle this automatically (e.g., log and continue)? Or can we reclassify the error as a DeserializationException?
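For comparison, this is the behavior we would like to extend to corrupt fetches: the standard log-and-continue configuration, which today only applies to records that fail deserialization (sketch):

```java
import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.errors.LogAndContinueExceptionHandler;

Properties props = new Properties();
// This handler fires only for records that fail deserialization; a corrupt
// fetch never reaches it, which is exactly the gap we are asking about.
props.put(StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
          LogAndContinueExceptionHandler.class);
```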
Here is the related stack trace:
org.apache.kafka.streams.errors.StreamsException: org.apache.kafka.common.KafkaException: Encountered corrupt message when fetching offset 20007698 for topic-partition http-4
	at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:729)
	at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:645)
Caused by: org.apache.kafka.common.KafkaException: Encountered corrupt message when fetching offset 20007698 for topic-partition http-4
	at org.apache.kafka.clients.consumer.internals.FetchCollector.handleInitializeErrors(FetchCollector.java:365)
	at org.apache.kafka.clients.consumer.internals.FetchCollector.initialize(FetchCollector.java:230)
	at org.apache.kafka.clients.consumer.internals.FetchCollector.collectFetch(FetchCollector.java:110)
	at org.apache.kafka.clients.consumer.internals.Fetcher.collectFetch(Fetcher.java:145)
	at org.apache.kafka.clients.consumer.internals.LegacyKafkaConsumer.pollForFetches(LegacyKafkaConsumer.java:666)
	at org.apache.kafka.clients.consumer.internals.LegacyKafkaConsumer.poll(LegacyKafkaConsumer.java:617)
	at org.apache.kafka.clients.consumer.internals.LegacyKafkaConsumer.poll(LegacyKafkaConsumer.java:590)
	at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:874)
	at org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:1230)
	at org.apache.kafka.streams.processor.internals.StreamThread.pollPhase(StreamThread.java:1178)
	at org.apache.kafka.streams.processor.internals.StreamThread.runOnceWithoutProcessingThreads(StreamThread.java:909)
	at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:686)
	... 1 common frames omitted

2024-10-30 10:20:27.359 ERROR 1 --- [-StreamThread-1] org.apache.kafka.streams.KafkaStreams : stream-client [streams-app-f5e795ae-67e3-405f-82d8-99c62bb1b15c] Replacing thread in the streams uncaught exception handler
Thank you very much for your help.