
[Bug] Label mapping configs do not work when set from Helm values.yaml #3722

Open
Jeinhaus opened this issue Oct 28, 2024 · 10 comments
Labels
bug Something isn't working needs-triage

Comments

@Jeinhaus

Kubecost Helm Chart Version

v2.4.1

Kubernetes Version

v1.30.4

Kubernetes Platform

EKS

Description

Defining and enabling kubecostProductConfigs.labelMappingConfigs does not work.
It seems that the generated configmap (app-configs) contains the correct information, but it is not reflected in the Kubecost UI. The UI still shows the default values for all label mappings.
We tried both our original label format and the same label with all special characters replaced by _, as it appears in the Prometheus metrics.
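
For reference, this is roughly how we checked the rendered configmap (the namespace here is an assumption; the configmap name is the one mentioned above):

kubectl -n kubecost get configmap app-configs -o yaml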

Trying the missing setting from #3264 did not work either.

Steps to reproduce

  1. Set kubecostProductConfigs.labelMappingConfigs to the following (a minimal helm command sketch follows the list):
     labelMappingConfigs:
       enabled: true
       team_label: platform/tenant
       environment_label: platform/environment-name
  2. Deploy the chart
  3. Open the settings page and check the "Labels" section
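
For reference, a minimal sketch of an equivalent deploy command (the release name, namespace, and chart reference kubecost/cost-analyzer are assumptions; we actually keep these settings in our values.yaml):

helm upgrade --install kubecost kubecost/cost-analyzer -n kubecost \
  --set kubecostProductConfigs.labelMappingConfigs.enabled=true \
  --set kubecostProductConfigs.labelMappingConfigs.team_label=platform/tenant \
  --set kubecostProductConfigs.labelMappingConfigs.environment_label=platform/environment-name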

Expected behavior

I would expect the labels defined in kubecostProductConfigs.labelMappingConfigs to be set in the UI.

Impact

Not too big, since we can set these in the UI manually.

Screenshots

No response

Logs

No response

Slack discussion

No response

Troubleshooting

  • I have read and followed the issue guidelines and this is a bug impacting only the Helm chart.
  • I have searched other issues in this repository and mine is not recorded.
@jessegoodier
Collaborator

Thanks for the detail. We will take a look.

@jessegoodier
Collaborator

I didn't notice this earlier, but you have a / in your label.
These are converted to _ in the metrics.
If you go to the UI and filter for label = platform, it should autocomplete the available labels, and that is what you can use in the helm config.

Does this make sense?
@KarolTegha FYI
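
For reference, a rough way to see the sanitized label names outside the UI (a minimal sketch; the Prometheus service name and port are assumptions based on a default chart install, and kube_pod_labels is assumed to be the metric carrying the pod labels):

kubectl -n kubecost port-forward svc/kubecost-prometheus-server 9090:80 &
# -g disables curl URL globbing so the literal match[] query parameter works
curl -sg 'http://localhost:9090/api/v1/series?match[]=kube_pod_labels' | head
# a pod label like "platform/tenant" should appear as "label_platform_tenant"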

@Jeinhaus
Author

Jeinhaus commented Oct 31, 2024

@jessegoodier thanks for looking into this.

As mentioned above, we already tried this:

We tried both our label format as well as the same label with all special characters replaced by _ as reflected in the Prometheus metrics.

Sadly, this did not work either :-/

Also, setting these labels with the / in the UI did actually work. So it's a bit confusing what these labels should actually look like.

@jessegoodier
Collaborator

jessegoodier commented Nov 1, 2024

Edit: I believe I found the issue. I will send an update when I have more detail.

@jessegoodier
Collaborator

Please see the edit above; I think we have a repro.

@Jeinhaus
Author

Jeinhaus commented Nov 8, 2024

Please see the edit above; I think we have a repro.

Any update on this? @jessegoodier

@jessegoodier
Collaborator

Yes, this is certainly a bug when using aggregator as a statefulset. I can give you a workaround now if it is urgent.
Otherwise we will get it fixed in the next patch or 2.5.

@Jeinhaus
Author

Jeinhaus commented Nov 14, 2024

Yes, this is certainly a bug when using aggregator as a statefulset. I can give you a workaround now if it is urgent. Otherwise we will get it fixed in the next patch or 2.5.

I would appreciate a workaround, even if it's not very urgent.
Is the bug related to #3264 (comment) as well? Would be great to have both working again.

@jessegoodier
Collaborator

I'm not sure I understand the #3264 relation.
For this issue, there is a configmap that is not being mounted to the aggregator statefulset.
We will get a patch out next week.

If you look at:
kubectl exec -n "$NAMESPACE" sts/kubecost-aggregator -c aggregator -- cat /var/configs/apiconfig.json

and edit that to your needs, you can then upload it with a hack like:
cat "$file" | kubectl exec -i -n "$NAMESPACE" "$kubecost_aggregator" -c aggregator -- sh -c "cat > /var/configs/apiconfig.json"

@Jeinhaus
Author

Thanks for the workaround and great that a patch is on the way already.
The #3264 relation was only mentioned because several of those settings in the values.yaml section seem to not work (correctly), so I thought this might be related. But that was just guesswork without any indication. Sorry.
