
Docs/rework comp matrix (#14)
puffitos authored Nov 22, 2024
1 parent f7aceaa commit bbfab41
Showing 2 changed files with 75 additions and 78 deletions.
81 changes: 42 additions & 39 deletions README.md
@@ -1,49 +1,54 @@
# CaaS Cluster Monitoring
# caas-cluster-monitoring

## Installation

Uninstall the original Rancher Monitoring while keeping its resources:
A fork of the official [rancher cluster monitoring](https://github.com/rancher/charts/tree/dev-v2.9/charts/rancher-monitoring)
with more up-to-date prometheus-operator CRDs, additional features and a maintained fork of rancher's [prometheus-auth](https://github.com/caas-team/prometheus-auth),
which enables multi-tenancy for the Prometheus metrics.

```bash
helm -n cattle-monitoring-system delete rancher-monitoring
kubectl -n cattle-monitoring-system delete secret alertmanager-rancher-monitoring-alertmanager
```
## Maintainers

Deleting rancher-monitoring-crd would also delete all corresponding Custom Resources, so we delete only the Helm release secrets and keep the CRDs in the cluster:
| Name | Email | Url |
| ---- | ------ | --- |
| eumel8 | <[email protected]> | <https://www.telekom.com> |
| puffitos | <[email protected]> | <https://www.telekom.com> |

```bash
kubectl -n cattle-monitoring-system get secrets -o name --no-headers | grep sh.helm.release.v1.rancher-monitoring-crd | xargs kubectl -n cattle-monitoring-system delete
```
## Source Code

Nevertheless, we need to upgrade the CRDs manually, because Helm has no logic for this:
* <https://github.com/caas-team/caas-cluster-monitoring>
* <https://github.com/prometheus-community/helm-charts>

```bash
cd charts
tar xvfz kube-prometheus-stack-51.0.3.tgz
cd kube-prometheus-stack/charts/crds
kubectl apply -f crds/ --server-side --force-conflicts
```
## Installation

To decouple the CRDs from this chart (you may have installed them from another chart or by other means), the feature is disabled:
If you're coming from an existing rancher-monitoring installation:

```yaml
kube-prometheus-stack:
  crds:
    enabled: false
```
* you must first update the prometheus-operator CRDs separately (see the sketch below), as this chart only includes the kube-prometheus-stack *without* the CRDs.
* you should additionally uninstall the rancher-monitoring chart before installing this one.
* do not delete the `rancher-monitoring-crd` chart, as doing so deletes all custom resources already created (alternatively, back them up first and recreate them afterwards).
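
A minimal sketch of these migration steps, reusing the commands from the previous version of this README (the archive version is illustrative; use the kube-prometheus-stack version vendored in `charts/`):

```bash
# Uninstall the old release but keep the CRDs in the cluster
helm -n cattle-monitoring-system delete rancher-monitoring
kubectl -n cattle-monitoring-system delete secret alertmanager-rancher-monitoring-alertmanager

# Unpack the vendored kube-prometheus-stack chart and apply its CRDs
cd charts
tar xvfz kube-prometheus-stack-58.4.0.tgz  # archive version is an assumption, match the compatibility matrix
kubectl apply -f kube-prometheus-stack/charts/crds/crds/ --server-side --force-conflicts
```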

Upgrade to the kube-prometheus-stack:
To install, run the following command:

```bash
helm -n cattle-monitoring-system upgrade -i rancher-monitoring .
```

available config parameters:
## Compatibility matrix

The following table shows the compatibility between the CaaS Cluster Monitoring chart and the CaaS Project Monitoring versions:

| CaaS Cluster Monitoring | Compatible CaaS Project Monitoring | kube-prometheus-stack version |
| ----------------------- | ---------------------------------- | ----------------------------- |
| < 0.0.6                 | < 1.0.0                            | 51.0.3                        |
| 0.0.6 < x < 1.0.0       | 1.0.0 <= y < 1.4.0                 | 58.4.0                        |

where `x` is the CaaS Cluster Monitoring version and `y` is the CaaS Project Monitoring version.
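
To find out which chart version is currently installed (and thus which row of the matrix applies), a quick check, assuming the release name and namespace used in this README:

```bash
# the CHART column shows the installed chart version
helm -n cattle-monitoring-system list
```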

## Configuration

The installation can be configured using the various parameters defined in the `values.yaml` file. The following tables list the configurable parameters of the CaaS Cluster Monitoring chart and their default values.
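
For example, a minimal override file using only parameters documented below (the file name `my-values.yaml` is illustrative):

```yaml
# my-values.yaml - sketch of a custom values file
caas:
  clusterCosts: false # this cluster has no kubecost installed
  dynatrace: false    # no dynatrace operator installed either

global:
  cattle:
    clusterName: "my-cluster"
```

It can then be passed to the install command shown above via `helm -n cattle-monitoring-system upgrade -i rancher-monitoring . -f my-values.yaml`.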

### caas

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| --------- | ---- | ------- | ----------- |
| `caas.clusterCosts` | bool | `true` | whether the cluster has kubecost installed |
| `caas.defaultEgress` | bool | `false` | whether the cluster needs defaultEgress installed |
| `caas.dynatrace` | bool | `true` | whether the cluster has a dynatrace operator installed |
@@ -59,7 +64,7 @@ available config parameters:
### global

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| --------- | ---- | ------- | ----------- |
| `global.cattle.clusterId` | string | `"local"` | |
| `global.cattle.clusterName` | string | `"local"` | |
| `global.cattle.systemDefaultRegistry` | string | `"mtr.devops.telekom.de"` | |
@@ -73,7 +78,7 @@ available config parameters:
### kube-prometheus-stack

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| --------- | ---- | ------- | ----------- |
| `kube-prometheus-stack.alertmanager.alertmanagerSpec.alertmanagerConfigNamespaceSelector` | object | `{}` | |
| `kube-prometheus-stack.alertmanager.alertmanagerSpec.alertmanagerConfigSelector.matchExpressions[0].key` | string | `"release"` | |
| `kube-prometheus-stack.alertmanager.alertmanagerSpec.alertmanagerConfigSelector.matchExpressions[0].operator` | string | `"In"` | |
@@ -462,7 +467,7 @@ available config parameters:
### rkeControllerManager

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| --------- | ---- | ------- | ----------- |
| `rkeControllerManager.clients.https.enabled` | bool | `true` | |
| `rkeControllerManager.clients.https.insecureSkipVerify` | bool | `true` | |
| `rkeControllerManager.clients.https.useServiceAccountCredentials` | bool | `true` | |
@@ -488,7 +493,7 @@ available config parameters:
### rkeEtcd

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| --------- | ---- | ------- | ----------- |
| `rkeEtcd.clients.https.authenticationMethod.authorization.enabled` | bool | `false` | |
| `rkeEtcd.clients.https.authenticationMethod.bearerTokenFile.enabled` | bool | `false` | |
| `rkeEtcd.clients.https.authenticationMethod.bearerTokenSecret.enabled` | bool | `false` | |
@@ -514,7 +519,7 @@ available config parameters:
### rkeIngressNginx

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| --------- | ---- | ------- | ----------- |
| `rkeIngressNginx.clients.nodeSelector."node-role.kubernetes.io/worker"` | string | `"true"` | |
| `rkeIngressNginx.clients.port` | int | `10015` | |
| `rkeIngressNginx.clients.tolerations[0].effect` | string | `"NoExecute"` | |
@@ -529,7 +534,7 @@ available config parameters:
### rkeProxy

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| --------- | ---- | ------- | ----------- |
| `rkeProxy.clients.port` | int | `10013` | |
| `rkeProxy.clients.tolerations[0].effect` | string | `"NoExecute"` | |
| `rkeProxy.clients.tolerations[0].operator` | string | `"Exists"` | |
@@ -546,7 +551,7 @@ available config parameters:
### rkeScheduler

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| --------- | ---- | ------- | ----------- |
| `rkeScheduler.clients.https.authenticationMethod.authorization.enabled` | bool | `false` | |
| `rkeScheduler.clients.https.authenticationMethod.bearerTokenFile.enabled` | bool | `false` | |
| `rkeScheduler.clients.https.authenticationMethod.bearerTokenSecret.enabled` | bool | `false` | |
@@ -575,7 +580,7 @@ available config parameters:
### hardenedKubelet

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| --------- | ---- | ------- | ----------- |
| `hardenedKubelet.clients.https.authenticationMethod.authorization.enabled` | bool | `false` | |
| `hardenedKubelet.clients.https.authenticationMethod.bearerTokenFile.enabled` | bool | `false` | |
| `hardenedKubelet.clients.https.authenticationMethod.bearerTokenSecret.enabled` | bool | `false` | |
@@ -619,6 +624,4 @@ available config parameters:
| `hardenedKubelet.serviceMonitor.endpoints[2].path` | string | `"/metrics/probes"` | |
| `hardenedKubelet.serviceMonitor.endpoints[2].port` | string | `"metrics"` | |
| `hardenedKubelet.serviceMonitor.endpoints[2].relabelings[0].sourceLabels[0]` | string | `"__metrics_path__"` | |
| `hardenedKubelet.serviceMonitor.endpoints[2].relabelings[0].targetLabel` | string | `"metrics_path"` | |

Autogenerated from chart metadata using [helm-docs v1.11.3](https://github.com/norwoodj/helm-docs/releases/v1.11.3)
| `hardenedKubelet.serviceMonitor.endpoints[2].relabelings[0].targetLabel` | string | `"metrics_path"` | |
72 changes: 33 additions & 39 deletions README.md.gotmpl
@@ -1,51 +1,47 @@
# CaaS Cluster Monitoring
{{ template "chart.header" . }}

## Installation
A fork of the official [rancher cluster monitoring](https://github.com/rancher/charts/tree/dev-v2.9/charts/rancher-monitoring)
with more up-to-date prometheus-operator CRDs, additional features and a maintained fork of rancher's [prometheus-auth](https://github.com/caas-team/prometheus-auth),
which enables multi-tenancy for the Prometheus metrics.

Uninstall the original Rancher Monitoring while keeping its resources:
{{ template "chart.maintainersSection" . }}

{{ template "chart.sourcesSection" . }}

```bash
helm -n cattle-monitoring-system delete rancher-monitoring
kubectl -n cattle-monitoring-system delete secret alertmanager-rancher-monitoring-alertmanager
```
## Installation

Deleting rancher-monitoring-crd would also delete all corresponding Custom Resources, so we delete only the Helm release secrets and keep the CRDs in the cluster:
If you're coming from an existing rancher-monitoring installation:

```bash
kubectl -n cattle-monitoring-system get secrets -o name --no-headers | grep sh.helm.release.v1.rancher-monitoring-crd | xargs kubectl -n cattle-monitoring-system delete
```
* you must first update the prometheus-operator CRDs separately, as this chart only includes the kube-prometheus-stack *without* the CRDs.
* you should additionally uninstall the rancher-monitoring chart before installing this one.
* do not delete the `rancher-monitoring-crd` chart, as doing so deletes all custom resources already created (alternatively, back them up first and recreate them afterwards).

Nevertheless, we need to upgrade the CRDs manually, because Helm has no logic for this:
To install, run the following command:

```bash
cd charts
tar xvfz kube-prometheus-stack-51.0.3.tgz
cd kube-prometheus-stack/charts/crds
kubectl apply -f crds/ --server-side --force-conflicts
helm -n cattle-monitoring-system upgrade -i rancher-monitoring .
```

To decouple the CRDs from this chart (you may have installed them from another chart or by other means), the feature is disabled:
## Compatibility matrix

```yaml
kube-prometheus-stack:
  crds:
    enabled: false
```
The following table shows the compatibility between the CaaS Cluster Monitoring chart and the CaaS Project Monitoring versions:

Upgrade to the kube-prometheus-stack:
| CaaS Cluster Monitoring | Compatible CaaS Project Monitoring | kube-prometheus-stack version |
| ----------------------- | ---------------------------------- | ----------------------------- |
| < 0.0.6                 | < 1.0.0                            | 51.0.3                        |
| 0.0.6 < x < 1.0.0       | 1.0.0 <= y < 1.4.0                 | 58.4.0                        |

```bash
helm -n cattle-monitoring-system upgrade -i rancher-monitoring .
```
where `x` is the CaaS Cluster Monitoring version and `y` is the CaaS Project Monitoring version.

available config parameters:

## Configuration

The installation can be configured using the various parameters defined in the `values.yaml` file. The following tables list the configurable parameters of the CaaS Cluster Monitoring chart and their default values.

### caas

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| --------- | ---- | ------- | ----------- |
{{- range .Values }}
{{- if (contains "caas" .Key) }}
| `{{ .Key }}` | {{ .Type }} | {{ if .Default }}{{ .Default }}{{ else }}{{ .AutoDefault }}{{ end }} | {{ if .Description }}{{ .Description }}{{ else }}{{ .AutoDescription }}{{ end }} |
@@ -55,7 +51,7 @@ available config parameters:
### global

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| --------- | ---- | ------- | ----------- |
{{- range .Values }}
{{- if (contains "global" .Key) }}
| `{{ .Key }}` | {{ .Type }} | {{ if .Default }}{{ .Default }}{{ else }}{{ .AutoDefault }}{{ end }} | {{ if .Description }}{{ .Description }}{{ else }}{{ .AutoDescription }}{{ end }} |
@@ -65,7 +61,7 @@ available config parameters:
### kube-prometheus-stack

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| --------- | ---- | ------- | ----------- |
{{- range .Values }}
{{- if (contains "kube-prometheus-stack" .Key) }}
| `{{ .Key }}` | {{ .Type }} | {{ if .Default }}{{ .Default }}{{ else }}{{ .AutoDefault }}{{ end }} | {{ if .Description }}{{ .Description }}{{ else }}{{ .AutoDescription }}{{ end }} |
@@ -75,7 +71,7 @@ available config parameters:
### rkeControllerManager

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| --------- | ---- | ------- | ----------- |
{{- range .Values }}
{{- if (contains "rkeControllerManager" .Key) }}
| `{{ .Key }}` | {{ .Type }} | {{ if .Default }}{{ .Default }}{{ else }}{{ .AutoDefault }}{{ end }} | {{ if .Description }}{{ .Description }}{{ else }}{{ .AutoDescription }}{{ end }} |
@@ -85,7 +81,7 @@ available config parameters:
### rkeEtcd

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| --------- | ---- | ------- | ----------- |
{{- range .Values }}
{{- if (contains "rkeEtcd" .Key) }}
| `{{ .Key }}` | {{ .Type }} | {{ if .Default }}{{ .Default }}{{ else }}{{ .AutoDefault }}{{ end }} | {{ if .Description }}{{ .Description }}{{ else }}{{ .AutoDescription }}{{ end }} |
@@ -95,7 +91,7 @@ available config parameters:
### rkeIngressNginx

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| --------- | ---- | ------- | ----------- |
{{- range .Values }}
{{- if (contains "rkeIngressNginx" .Key) }}
| `{{ .Key }}` | {{ .Type }} | {{ if .Default }}{{ .Default }}{{ else }}{{ .AutoDefault }}{{ end }} | {{ if .Description }}{{ .Description }}{{ else }}{{ .AutoDescription }}{{ end }} |
@@ -105,7 +101,7 @@ available config parameters:
### rkeProxy

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| --------- | ---- | ------- | ----------- |
{{- range .Values }}
{{- if (contains "rkeProxy" .Key) }}
| `{{ .Key }}` | {{ .Type }} | {{ if .Default }}{{ .Default }}{{ else }}{{ .AutoDefault }}{{ end }} | {{ if .Description }}{{ .Description }}{{ else }}{{ .AutoDescription }}{{ end }} |
@@ -115,7 +111,7 @@ available config parameters:
### rkeScheduler

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| --------- | ---- | ------- | ----------- |
{{- range .Values }}
{{- if (contains "rkeScheduler" .Key) }}
| `{{ .Key }}` | {{ .Type }} | {{ if .Default }}{{ .Default }}{{ else }}{{ .AutoDefault }}{{ end }} | {{ if .Description }}{{ .Description }}{{ else }}{{ .AutoDescription }}{{ end }} |
Expand All @@ -125,11 +121,9 @@ available config parameters:
### hardenedKubelet

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| --------- | ---- | ------- | ----------- |
{{- range .Values }}
{{- if (contains "hardenedKubelet" .Key) }}
| `{{ .Key }}` | {{ .Type }} | {{ if .Default }}{{ .Default }}{{ else }}{{ .AutoDefault }}{{ end }} | {{ if .Description }}{{ .Description }}{{ else }}{{ .AutoDescription }}{{ end }} |
{{- end }}
{{- end }}

Autogenerated from chart metadata using [helm-docs v1.11.3](https://github.com/norwoodj/helm-docs/releases/v1.11.3)
{{- end }}
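
Both files are kept in sync by helm-docs: `README.md` is rendered from `README.md.gotmpl` and the chart metadata. A sketch of regenerating the docs after changing the template or `values.yaml`, assuming helm-docs is installed locally:

```bash
# run from the chart root; README.md.gotmpl is helm-docs' default template file
helm-docs --template-files=README.md.gotmpl
```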
