Hi, I have run into this problem and I am not sure how to deal with it.
The appender keeps hitting these error logs all the time.
I checked the source, and it seems it is appending data whose size is larger than maxBufferSize.
How can a single append be larger than maxBufferSize (256 MB)?
Looking forward to your reply. Thank you.
Here is the config.
```xml
<appender name="FLUENCY_SYNC" class="ch.qos.logback.more.appenders.FluencyLogbackAppender">
<!-- Tag for Fluentd. Further information: http://docs.fluentd.org/articles/config-file -->
<!-- Microservice name -->
<tag>${applicationName}</tag>
<!-- [Optional] Label for Fluentd. Further information: http://docs.fluentd.org/articles/config-file -->
<!-- Host name/address and port number where Fluentd is placed -->
<remoteHost>${fluentdAddr}</remoteHost>
<port>24224</port>
<!-- [Optional] Multiple host names/addresses and port numbers where Fluentd is placed
<remoteServers>
<remoteServer>
<host>primary</host>
<port>24224</port>
</remoteServer>
<remoteServer>
<host>secondary</host>
<port>24224</port>
</remoteServer>
</remoteServers>
-->
<!-- [Optional] Additional fields(Pairs of key: value) -->
<!-- Environment -->
<additionalField>
<key>env</key>
<value>${profile}</value>
</additionalField>
<!-- [Optional] Configurations to customize Fluency's behavior: https://github.com/komamitsu/fluency#usage -->
<ackResponseMode>true</ackResponseMode>
<!-- <fileBackupDir>/tmp</fileBackupDir> -->
<bufferChunkInitialSize>2097152</bufferChunkInitialSize>
<bufferChunkRetentionSize>16777216</bufferChunkRetentionSize>
<maxBufferSize>268435456</maxBufferSize>
<bufferChunkRetentionTimeMillis>1000</bufferChunkRetentionTimeMillis>
<connectionTimeoutMilli>5000</connectionTimeoutMilli>
<readTimeoutMilli>5000</readTimeoutMilli>
<waitUntilBufferFlushed>30</waitUntilBufferFlushed>
<waitUntilFlusherTerminated>40</waitUntilFlusherTerminated>
<flushAttemptIntervalMillis>200</flushAttemptIntervalMillis>
<senderMaxRetryCount>12</senderMaxRetryCount>
<!-- [Optional] Enable/Disable use of EventTime to get sub second resolution of log event date-time -->
<useEventTime>true</useEventTime>
<sslEnabled>false</sslEnabled>
<!-- [Optional] Enable/Disable the use of the JVM heap for buffering -->
<jvmHeapBufferMode>false</jvmHeapBufferMode>
<!-- [Optional] If true, Map Marker is expanded instead of nesting in the marker name -->
<flattenMapMarker>false</flattenMapMarker>
<!-- [Optional] default "marker" -->
<markerPrefix></markerPrefix>
<!-- [Optional] Message encoder if you want to customize message -->
<encoder>
<pattern><![CDATA[%-5level %logger{50}#%line %message]]></pattern>
</encoder>
<!-- [Optional] Message field key name. Default: "message" -->
<messageFieldKeyName>msg</messageFieldKeyName>
</appender>
<appender name="FLUENCY" class="ch.qos.logback.classic.AsyncAppender">
<!-- Max queue size of logs waiting to be sent (when the queue reaches this size, further logs are discarded). -->
<queueSize>999</queueSize>
<!-- Never block when the queue becomes full. -->
<neverBlock>true</neverBlock>
<!-- Maximum queue flush time allowed during appender stop.
If the worker takes longer than this, it exits, discarding any remaining items in the queue.
Default: 1000 millis.
-->
<maxFlushTime>1000</maxFlushTime>
<appender-ref ref="FLUENCY_SYNC"/>
</appender>
```
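For reference, this is roughly how I understand the XML settings above map onto Fluency's own builder (just a sketch from my reading of the Fluency 2.x README, not taken from the appender source; the host is a placeholder and setter names may differ between Fluency versions):

```java
import org.komamitsu.fluency.Fluency;
import org.komamitsu.fluency.fluentd.FluencyBuilderForFluentd;

public class FluencyConfigSketch {
    static Fluency buildLikeTheAppenderConfig() {
        FluencyBuilderForFluentd builder = new FluencyBuilderForFluentd();
        builder.setAckResponseMode(true);                      // <ackResponseMode>true</ackResponseMode>
        builder.setBufferChunkInitialSize(2 * 1024 * 1024);    // 2 MiB  -> <bufferChunkInitialSize>
        builder.setBufferChunkRetentionSize(16 * 1024 * 1024); // 16 MiB -> <bufferChunkRetentionSize>
        builder.setMaxBufferSize(256 * 1024 * 1024L);          // 256 MiB total cap -> <maxBufferSize>
        builder.setSenderMaxRetryCount(12);                    // <senderMaxRetryCount>
        builder.setConnectionTimeoutMilli(5000);               // <connectionTimeoutMilli>
        builder.setSslEnabled(false);                          // <sslEnabled>
        builder.setJvmHeapBufferMode(false);                   // off-heap buffering -> <jvmHeapBufferMode>
        // builder.setFileBackupDir("/tmp");                   // commented out in the XML above
        // (the remaining XML options should have matching setters on the builder)
        return builder.build("fluentd-host", 24224);           // placeholder host
    }
}
```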
I am sure there is no single log record that big (more than 256 MB). I think the unsent data may be accumulating into one large chunk of buffered data, so it keeps failing. Is that possible?
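To illustrate what I mean, here is a rough standalone sketch (assuming Fluency's getBufferedDataSize()/getAllocatedBufferSize() accessors are available in the version the appender pulls in, and pointing at a port where nothing is listening so nothing can be flushed). Each record is tiny, but because nothing is ever sent, the buffered size keeps climbing until the 256 MiB cap is reached and emit() starts failing:

```java
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

import org.komamitsu.fluency.Fluency;
import org.komamitsu.fluency.fluentd.FluencyBuilderForFluentd;

public class BufferAccumulationSketch {
    public static void main(String[] args) throws Exception {
        FluencyBuilderForFluentd builder = new FluencyBuilderForFluentd();
        builder.setMaxBufferSize(256 * 1024 * 1024L);   // same 256 MiB cap as the appender config
        // No Fluentd is listening on this port, so flushes fail and data stays buffered.
        Fluency fluency = builder.build("127.0.0.1", 24225);

        Map<String, Object> event = new HashMap<>();
        event.put("msg", "x".repeat(10_000));           // ~10 KB per record, nowhere near 256 MiB

        try {
            for (long i = 0; ; i++) {
                fluency.emit("test.tag", event);
                if (i % 10_000 == 0) {
                    System.out.printf("buffered=%d bytes, allocated=%d bytes%n",
                            fluency.getBufferedDataSize(), fluency.getAllocatedBufferSize());
                }
            }
        } catch (IOException e) {
            // Once the unsent data accumulates past maxBufferSize, further emit() calls
            // fail with a buffer-full error, even though each record is small.
            System.out.println("emit() started failing after the buffer filled: " + e);
        } finally {
            fluency.close();
        }
    }
}
```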