
kikkers

Argo Events playground (Code Camp 2024)

MinIO Use Case

minio

Setup

Deploy MinIO instance

  • Expose the minio and minio-console services
  • Add the generated CA to destinationCA in the Routes
  • Define MINIO_ROOT_USER and MINIO_ROOT_PASSWORD in the storage-configuration config.env
  • tenant
  • Create a bucket named test and generate access and secret keys
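The re-encrypt Route could look roughly like this (a sketch; the name, target port, and certificate are placeholders, not taken from the repo):

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: minio-console        # assumed name
spec:
  to:
    kind: Service
    name: minio-console
  port:
    targetPort: https-console
  tls:
    termination: reencrypt
    # generated MinIO CA, so the router trusts the backend certificate
    destinationCACertificate: |
      -----BEGIN CERTIFICATE-----
      ...
      -----END CERTIFICATE-----
```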

Create native NATS EventBus
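
A minimal native NATS EventBus as a sketch (the replica count is an assumption):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: EventBus
metadata:
  name: default
spec:
  nats:
    native:
      replicas: 3
      auth: token
```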

Create RBAC needed to run workflows
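
The RBAC could be sketched as a Role/RoleBinding for the sensor's ServiceAccount (the Role and ServiceAccount names are assumptions):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: create-workflows       # assumed name
rules:
  - apiGroups: ["argoproj.io"]
    resources: ["workflows"]
    verbs: ["create", "get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: create-workflows
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: create-workflows
subjects:
  - kind: ServiceAccount
    name: operate-workflow-sa  # assumed ServiceAccount
```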

Create EventSource

  • eventsource.yaml
  • Watch for s3:ObjectCreated:Put in the test bucket, using the access and secret keys provided in the artifacts-minio secret, and create a sudoku event
  • Filter to prefix: "input" and suffix: ".txt"
  • Point to the Route and trust its certificate (the Route re-encrypts using the MinIO destination certificate)
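
A sketch of such an EventSource (the Route host is a placeholder):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: EventSource
metadata:
  name: minio
spec:
  minio:
    sudoku:
      bucket:
        name: test
      endpoint: minio-route.apps.example.com   # assumed Route host
      events:
        - s3:ObjectCreated:Put
      filter:
        prefix: "input"
        suffix: ".txt"
      insecure: false
      accessKey:
        name: artifacts-minio
        key: accesskey
      secretKey:
        name: artifacts-minio
        key: secretkey
```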

Create Sensor

  • sensor.yaml
  • Create a workflow that references a WorkflowTemplate as soon as an event arrives on the eventbus
  • Reference the sudoku event from the minio EventSource
  • The test-dep dependency provides metadata from the event as parameters to the created workflow
  • The Argo Sensor's k8s trigger creates an Argo Workflow resource
    • The created Workflow resource references the Argo WorkflowTemplate
  • Use event metadata to provide the path of the locally downloaded s3 file
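
A sketch of the Sensor, following the shape of the MinIO example in the Argo Events docs (template and parameter names are assumptions):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
  name: sudoku
spec:
  dependencies:
    - name: test-dep
      eventSourceName: minio
      eventName: sudoku
  triggers:
    - template:
        name: sudoku-workflow
        k8s:
          operation: create
          source:
            resource:
              apiVersion: argoproj.io/v1alpha1
              kind: Workflow
              metadata:
                generateName: sudoku-
              spec:
                workflowTemplateRef:
                  name: sudoku-wft
                arguments:
                  parameters:
                    - name: key
                      value: placeholder   # overwritten below
          parameters:
            - src:
                dependencyName: test-dep
                # object key of the uploaded file, e.g. input/sudoku.txt
                dataKey: notification.0.s3.object.key
              dest: spec.arguments.parameters.0.value
```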

Create workflowTemplate

  • sudoku-wft.yaml
  • s3 input (get files from MinIO using the access and secret keys provided in the artifacts-minio secret)
  • s3 output (put files to MinIO using the access and secret keys provided in the artifacts-minio secret)
  • Set archive to none: {} to keep plain files when putting them to s3 (no tar/gzip)
  • ghcr.io/luechtdiode/sudoku:0.0.2 Sudoku Solver Repo
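
The artifact wiring could be sketched like this (the endpoint, paths, and solver arguments are assumptions):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
  name: sudoku-wft
spec:
  entrypoint: solve
  arguments:
    parameters:
      - name: key              # object key, filled in by the Sensor
  templates:
    - name: solve
      inputs:
        artifacts:
          - name: puzzle
            path: /tmp/input/sudoku.txt
            s3:
              endpoint: minio.example.svc:443   # assumed endpoint
              bucket: test
              key: "{{workflow.parameters.key}}"
              accessKeySecret:
                name: artifacts-minio
                key: accesskey
              secretKeySecret:
                name: artifacts-minio
                key: secretkey
      outputs:
        artifacts:
          - name: solution
            path: /tmp/output
            archive:
              none: {}         # keep plain files, no tar/gzip
            s3:
              endpoint: minio.example.svc:443
              bucket: test
              key: output
              accessKeySecret:
                name: artifacts-minio
                key: accesskey
              secretKeySecret:
                name: artifacts-minio
                key: secretkey
      container:
        image: ghcr.io/luechtdiode/sudoku:0.0.2
```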

Make use of extended handling of event-metadata

Capture more context data from the MinIO sudoku event and try to identify the Sudoku file uploader for further notification purposes.

Sample event payload (see principalId: Sudoku Requester):

[{
  "eventVersion": "2.0",
  "eventSource": "minio:s3",
  "awsRegion": "",
  "eventTime": "2024-10-30T13:02:09.050Z",
  "eventName": "s3:ObjectCreated:Put",
  "userIdentity": {
    "principalId": "Sudoku Requester"
  },
  "requestParameters": {
    "principalId": "Sudoku Requester",
    "region": "",
    "sourceIPAddress": "redacted minio host ip"
  },
  "responseElements": {
    "x-amz-id-2": "912a0631cecf761f73c7401d71bc819e9fd4b007e66fba8f36cc12235413475e",
    "x-amz-request-id": "18033C9D81E3DDB9",
    "x-minio-deployment-id": "358cb810-c0cc-4d30-9e86-e5bfbdf8cd0f",
    "x-minio-origin-endpoint": "https://minio.redacted.svc.cluster.local"
  },
  "s3": {
    "s3SchemaVersion": "1.0",
    "configurationId": "Config",
    "bucket": {
      "name": "test",
      "ownerIdentity": {
        "principalId": "Sudoku Requester"
      },
      "arn": "arn:aws:s3:::test"
    },
    "object": {
      "key": "input/sudoku.txt",
      "size": 2591,
      "eTag": "5824476e697ce635d1eae18057131aab",
      "contentType": "text/plain",
      "userMetadata": {
        "content-type": "text/plain"
      },
      "sequencer": "18033C9D81FB2313"
    }
  },
  "source": {
    "host": "redacted minio host ip",
    "port": "",
    "userAgent": "MinIO (linux; amd64) minio-go/v7.0.70 MinIO Console/(dev)"
  }
}]

Use the username of the file uploader to define the output folder. See the separate workflow implementation.
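
Extracting the uploader could be sketched as an additional trigger parameter in the Sensor (the exact dataKey depends on how the EventSource wraps the payload; the path shown is an assumption based on the sample above):

```yaml
parameters:
  - src:
      dependencyName: test-dep
      # principalId of the uploader, e.g. "Sudoku Requester"
      dataKey: notification.0.userIdentity.principalId
    dest: spec.arguments.parameters.1.value
```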


Argo Rollouts

  • The application needs to be able to scale horizontally (more than one replica can run at the same time)

rollouts

  • Deploy using OLM, as the OpenShift Argo Rollouts plugin is needed for HAProxy traffic-splitting/routing integration in the Argo Rollouts controller deployment.
      trafficRouting:
        plugins:
          argoproj-labs/openshift:
            routes:
              - ...

Ignore weight in ArgoCD

  resource.customizations: |
    route.openshift.io/Route:
      ignoreDifferences: |
        jsonPointers:
        - /spec/to/weight
        jqPathExpressions:
        - '.spec.alternateBackends[]?.weight'

https://docs.openshift.com/gitops/1.14/argo_rollouts/routing-traffic-by-using-argo-rollouts.html

Setup

Rollout

  • Stable Service
  • Canary Service
  • Route
    • Define alternateBackends to point to the canary service, using a weight managed by Argo Rollouts
  • Rollout Resource
    • Defines the Pod spec
    • Differs from a Deployment in the strategy section
      • Split traffic between the canary service and the stable service using the alternateBackends definition in the Route
      • Use the OpenShift traffic routing plugin
      • Add criteria in steps to proceed with rollouts using an AnalysisTemplate
      • Pass the route name to reference it in the HAProxy metrics during analysis
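
The strategy section could be sketched like this (service and route names are assumptions; the Pod template is trimmed):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: rollouts-haproxy-demo
spec:
  replicas: 4
  strategy:
    canary:
      stableService: rollouts-demo-stable   # assumed service names
      canaryService: rollouts-demo-canary
      trafficRouting:
        plugins:
          argoproj-labs/openshift:
            routes:
              - rollouts-demo-route         # assumed Route name
      steps:
        - setWeight: 20
        - analysis:
            templates:
              - templateName: success-rate
            args:
              - name: route                 # route name for the HAProxy metrics query
                value: rollouts-demo-route
        - setWeight: 50
        - pause:
            duration: 30s
  selector:
    matchLabels:
      app: rollouts-demo
  template:
    metadata:
      labels:
        app: rollouts-demo
    spec:
      containers:
        - name: rollouts-demo
          image: argoproj/rollouts-demo:blue   # assumed image
```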

Analysis

  • AnalysisTemplate
    • Use the success rate of requests to decide whether to roll out or roll back
      • Run the analysis every 10s, 10 times
      • failureLimit: 3 # Fixme
      • A success rate above 90% is required to proceed
      • Use the haproxy_backend_http_responses_total metric provided by OpenShift Monitoring
      • Authenticate against the Thanos Querier API using a ServiceAccount
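
An AnalysisTemplate along these lines could implement the checks above (the query shape and label names are assumptions; the token arg reuses the secret created for the demo below):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: AnalysisTemplate
metadata:
  name: success-rate
spec:
  args:
    - name: route
    - name: token                 # "Bearer <token>" from the demo secret
      valueFrom:
        secretKeyRef:
          name: token
          key: token
  metrics:
    - name: success-rate
      interval: 10s
      count: 10
      failureLimit: 3
      # proceed only while more than 90% of responses succeed
      successCondition: result[0] >= 0.9
      provider:
        prometheus:
          address: https://thanos-querier.openshift-monitoring.svc.cluster.local:9091
          headers:
            - key: Authorization
              value: "{{args.token}}"
          query: |
            sum(rate(haproxy_backend_http_responses_total{route="{{args.route}}",code!="5xx"}[1m]))
            /
            sum(rate(haproxy_backend_http_responses_total{route="{{args.route}}"}[1m]))
```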

Provide openshift-monitoring metrics (Do not use this in production!)

  • ServiceMonitor
    • For demonstration purposes only. Do not add ServiceMonitors to the Red Hat provided openshift-monitoring stack
    • Make sure you have a 1s scrape interval
  • Thanos Querier SA
    • Provide a ServiceAccount that can access openshift-monitoring using a bearer token
  • Thanos Querier CRB
    • openshift-monitoring uses an OAuth proxy / openshift-delegate-urls to grant access to ServiceAccounts that can get namespaces
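
Sketches of the ServiceMonitor and the ClusterRoleBinding (names, labels, and the ClusterRole are assumptions):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: rollouts-demo            # assumed name
spec:
  endpoints:
    - port: metrics
      interval: 1s               # demo only; far too aggressive for production
  selector:
    matchLabels:
      app: rollouts-demo
---
# grant the reader SA a role that can get namespaces, which the
# openshift-monitoring oauth proxy delegates authorization to
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: thanos-querier-reader
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-monitoring-view  # assumed ClusterRole
subjects:
  - kind: ServiceAccount
    name: thanos-querier-reader
    namespace: argo-rollouts-playground-test
```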

Create a short-lived token for the demo

TOKEN=$(kubectl create token thanos-querier-reader --namespace argo-rollouts-playground-test --duration 6000m)
kubectl create secret generic token --from-literal=token="Bearer $TOKEN"

Trivial Demo

Create some requests

kubectl delete rollout rollouts-haproxy-demo
while true; do sleep 0.1 && curl https://rollouts-demo-route-argo-rollouts-playground-test.apps.baloise.dev -s -I | grep -i "HTTP/1.1" -A1; done

Metrics

Generate some 403 responses by mounting an empty volume over the nginx html directory:

        volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: nginx

kubectl argo rollouts dashboard


Start TUI

kubectl argo rollouts -n argo-rollouts-playground-test get rollout rollouts-haproxy-demo --watch


Proceed with rollout

kubectl argo rollouts -n argo-rollouts-playground-test promote rollouts-haproxy-demo
kubectl get analysisruns.argoproj.io
