update container-log-max-files-max-size #3774
Conversation
Welcome @elieser1101!
Hi @elieser1101. Thanks for your PR. I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/ok-to-test
Co-authored-by: Antonio Ojea <[email protected]>
/lgtm
/test pull-kind-verify
@BenTheElder now it keeps the whole run in two files. This is important to be able to debug problems on CI runs, especially rare flakes that do not happen frequently; this is the second time this week that I could not debug a failure because there were no logs from when the problem happened.
/lgtm
/hold
Wait for Ben
/assign @BenTheElder
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: aojea, elieser1101. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
-  kubelet_extra_args=" \"v\": \"${KIND_CLUSTER_LOG_LEVEL}\""
+  kubelet_extra_args=" \"v\": \"${KIND_CLUSTER_LOG_LEVEL}\"
+    \"container-log-max-files\": \"10\"
+    \"container-log-max-size\": \"100Mi\""
I'm concerned whenever we have to do CI-only configuration, @aojea, because it means we're no longer actually testing the defaults. Do we think this is because of the verbosity on the components?
Compared with an e2e GCE log the sizes are similar: https://gcsweb.k8s.io/gcs/kubernetes-ci-logs/pr-logs/pull/128474/pull-kubernetes-e2e-gce/1852746024750157824/artifacts/e2e-44626956c7-674b9-master/
The difference is that those components log to the filesystem (with the go-runner thingy); in kind everything logs to the pods and is rotated by the kubelet, IIRC.
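For anyone checking this locally, a quick way to look at the kubelet-rotated pod logs is to exec into the node container. A rough sketch, assuming the default node container name kind-control-plane (not something specified in this PR):

# Pod logs live under /var/log/pods inside the node container and are rotated by the kubelet.
docker exec kind-control-plane ls -lh /var/log/pods
# Inspect one container's directory to see the current file plus any rotated ones:
docker exec kind-control-plane sh -c 'ls -lh /var/log/pods/kube-system_kube-apiserver-*/kube-apiserver/'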
> The difference is that those components log to the filesystem (with the go-runner thingy); in kind everything logs to the pods and is rotated by the kubelet, IIRC.
This seems like a gap to revisit later: ideally we want to make sure component logs have robust retention, but maybe not random pods.
Also, we could just change the defaults in kind, kubeadm, or kubelet. Again, I think we should avoid tuning the cluster "for CI", but in the interest of capturing logs for debugging now:
/hold cancel
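For users who want the same retention without touching the CI script, one option is a per-cluster override in the kind config. A rough sketch, assuming the v1beta3 kubeadm API where kubeletExtraArgs is a map of flag names to values; this is not part of the PR:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
kubeadmConfigPatches:
- |
  kind: InitConfiguration
  nodeRegistration:
    kubeletExtraArgs:
      # Same values the CI script sets above.
      container-log-max-files: "10"
      container-log-max-size: "100Mi"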
As requested in #3772