ERROR MESSAGE:
Name: grafana-hpa
Namespace: monitoring
Labels: app.kubernetes.io/managed-by=Helm
Annotations: meta.helm.sh/release-name: grafana-prometheus
meta.helm.sh/release-namespace: monitoring
CreationTimestamp: Sun, 13 Oct 2024 09:53:05 -0400
Reference: Deployment/grafana-prometheus
Metrics: ( current / target )
resource cpu on pods (as a percentage of request): <unknown> / 70%
resource memory on pods (as a percentage of request): <unknown> / 70%
Min replicas: 1
Max replicas: 3
Deployment pods: 1 current / 0 desired
Conditions:
Type Status Reason Message
AbleToScale True SucceededGetScale the HPA controller was able to get the target's current scale
ScalingActive False FailedGetResourceMetric the HPA was unable to compute the replica count: failed to get cpu utilization: missing request for cpu in container grafana-sc-dashboard of Pod grafana-prometheus-5797c587bb-mntjx
Events:
Type Reason Age From Message
Warning FailedGetResourceMetric 24s (x536 over 135m) horizontal-pod-autoscaler failed to get cpu utilization: missing request for cpu in container grafana-sc-dashboard of Pod grafana-prometheus-5797c587bb-mntjx
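For resource-based metrics the HPA computes utilization across every container in the target pods, so a single container without a CPU request (here the grafana-sc-dashboard sidecar) makes the whole calculation fail. A quick way to confirm which containers are missing requests, using the pod name from the error above:

```sh
kubectl get pod grafana-prometheus-5797c587bb-mntjx -n monitoring \
  -o jsonpath='{range .spec.containers[*]}{.name}{"\t"}{.resources.requests}{"\n"}{end}'
```

Any container that prints an empty requests map needs a CPU request (and, for the memory metric, a memory request) before the HPA can compute utilization.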
I want to enable HPA for the grafana-prometheus dashboard deployment, but I am facing an issue setting the resources for the sidecar image. Can you please help us with which YAML file to set the resources in to achieve HPA? (A values sketch follows the deployment output below.)
Below is the deployment:
Name: grafana-prometheus
Namespace: monitoring
CreationTimestamp: Sun, 13 Oct 2024 09:53:05 -0400
Labels: app.kubernetes.io/instance=grafana-prometheus
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=grafana
app.kubernetes.io/version=10.0.1
helm.sh/chart=grafana-6.32.1
Annotations: deployment.kubernetes.io/revision: 1
meta.helm.sh/release-name: grafana-prometheus
meta.helm.sh/release-namespace: monitoring
Selector: app.kubernetes.io/instance=grafana-prometheus,app.kubernetes.io/name=grafana
Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app.kubernetes.io/instance=grafana-prometheus
app.kubernetes.io/name=grafana
Annotations: checksum/config: 4473a5ac39913444dae13e96e1f8b0b0f44630c2ad378715da7f8a5ff4b9f0a6
checksum/dashboards-json-config: 07fdf556f70c71c38a5636a3ee1f8623d98129cb85287837f8c2c48ac8ec915f
checksum/sc-dashboard-provider-config: db0c89f9c224b725d4433d8e294847179796e26c1ed8ae8a0215b17c3b7b760a
checksum/secret: 408b46f08f9d4469fdd438249a0e975163c04bbe2b30a3c657845280a0fe61b8
Service Account: grafana-prometheus
Init Containers:
init-chown-data:
Image: nexus.fnc.reg/cicada/7.0/grafana/busybox:1.36
Port:
Host Port:
Command:
chown
-R
472:472
/var/lib/grafana
Limits:
cpu: 1
memory: 1Gi
Requests:
cpu: 500m
memory: 512Mi
Environment:
Mounts:
/var/lib/grafana from storage (rw)
download-dashboards:
Image: nexus.fnc.reg/cicada/7.0/curlimages/curl:8.2.0
Port:
Host Port:
Command:
/bin/sh
Args:
-c
mkdir -p /var/lib/grafana/dashboards/default && /bin/sh -x /etc/grafana/download_dashboards.sh
Environment:
Mounts:
/etc/grafana/download_dashboards.sh from config (rw,path="download_dashboards.sh")
/var/lib/grafana from storage (rw)
Containers:
grafana-sc-dashboard:
Image: nexus.fnc.reg/cicada/7.0/kiwigrid/k8s-sidecar:1.25.0
Port:
Host Port:
Environment:
METHOD: WATCH
LABEL: grafana_dashboard
LABEL_VALUE: 1
FOLDER: /tmp/dashboards
RESOURCE: both
Mounts:
/tmp/dashboards from sc-dashboard-volume (rw)
grafana-sc-datasources:
Image: nexus.fnc.reg/cicada/7.0/kiwigrid/k8s-sidecar:1.25.0
Port:
Host Port:
Environment:
METHOD: WATCH
LABEL: grafana_datasource
LABEL_VALUE: 1
FOLDER: /etc/grafana/provisioning/datasources
RESOURCE: both
REQ_USERNAME: <set to the key 'admin-user' in secret 'grafana-prometheus'> Optional: false
REQ_PASSWORD: <set to the key 'admin-password' in secret 'grafana-prometheus'> Optional: false
REQ_URL: http://localhost:3000/api/admin/provisioning/datasources/reload
REQ_METHOD: POST
Mounts:
/etc/grafana/provisioning/datasources from sc-datasources-volume (rw)
grafana:
Image: nexus.fnc.reg/cicada/7.0/grafana/grafana:10.0.1
Ports: 80/TCP, 3000/TCP
Host Ports: 0/TCP, 0/TCP
Limits:
cpu: 1
memory: 1Gi
Requests:
cpu: 500m
memory: 512Mi
Liveness: http-get http://:3000/api/health delay=60s timeout=30s period=10s #success=1 #failure=10
Readiness: http-get http://:3000/api/health delay=0s timeout=1s period=10s #success=1 #failure=3
Environment:
GF_SECURITY_ADMIN_USER: <set to the key 'admin-user' in secret 'grafana-prometheus'> Optional: false
GF_SECURITY_ADMIN_PASSWORD: <set to the key 'admin-password' in secret 'grafana-prometheus'> Optional: false
GF_SECURITY_ALLOW_EMBEDDING: true
GF_PATHS_DATA: /var/lib/grafana/
GF_PATHS_LOGS: /var/log/grafana
GF_PATHS_PLUGINS: /var/lib/grafana/plugins
GF_PATHS_PROVISIONING: /etc/grafana/provisioning
Mounts:
/etc/grafana/grafana.ini from config (rw,path="grafana.ini")
/etc/grafana/provisioning/dashboards/dashboardproviders.yaml from config (rw,path="dashboardproviders.yaml")
/etc/grafana/provisioning/dashboards/sc-dashboardproviders.yaml from sc-dashboard-provider (rw,path="provider.yaml")
/etc/grafana/provisioning/datasources from sc-datasources-volume (rw)
/tmp/dashboards from sc-dashboard-volume (rw)
/var/lib/grafana from storage (rw)
/var/lib/grafana/dashboards/default/custom-dashboard.json from dashboards-default (rw,path="custom-dashboard.json")
/var/lib/grafana/dashboards/default/jvm-micrometer.json from dashboards-default (rw,path="jvm-micrometer.json")
/var/lib/grafana/dashboards/default/k8s-node-resources-alerts.json.json from dashboards-default (rw,path="k8s-node-resources-alerts.json.json")
Volumes:
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: grafana-prometheus
Optional: false
dashboards-default:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: grafana-prometheus-dashboards-default
Optional: false
storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: grafana-prometheus
ReadOnly: false
sc-dashboard-volume:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit:
sc-dashboard-provider:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: grafana-prometheus-config-dashboards
Optional: false
sc-datasources-volume:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit:
Conditions:
Type Status Reason
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets:
NewReplicaSet: grafana-prometheus-5797c587bb (1/1 replicas created)
Events:
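In the upstream grafana chart (helm.sh/chart=grafana-6.32.1 above), the sidecar containers take their resources from the shared sidecar.resources block in values.yaml, so that is the place to add requests. A minimal sketch, assuming that chart layout; the numbers below are placeholders to size for your environment:

```yaml
# values.yaml override for the grafana-prometheus release
# (sidecar.resources applies to both grafana-sc-dashboard and grafana-sc-datasources)
sidecar:
  resources:
    requests:
      cpu: 50m        # placeholder value
      memory: 100Mi   # placeholder value
    limits:
      cpu: 100m
      memory: 200Mi

# The grafana container already has requests via the chart's top-level
# `resources:` block (visible in the deployment above), so only the
# sidecars need this change.
```

After a `helm upgrade grafana-prometheus <chart> -n monitoring -f values.yaml`, the pod should be recreated with requests on every container, and the HPA's FailedGetResourceMetric condition should clear once metrics-server reports fresh usage.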