
[social network] Workloads die out, jaeger can only capture a few traces with k8s & minikube deployment #291

Open
SkollRyu opened this issue Nov 6, 2023 · 0 comments

Comments

@SkollRyu
Contributor

SkollRyu commented Nov 6, 2023

Update: the actual cause is that the default sampling rate in the Helm chart differs from the one used in the Docker setup. I have already raised a PR for this.
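
For anyone hitting the same thing before the PR lands, a minimal sketch of the workaround: override the Jaeger client sampler when installing the chart so every request is recorded. The value key path, release name, and chart path below are assumptions on my side; check the chart's values.yaml for the real ones.

    # Sketch only: force 100% sampling via a values override.
    # A "const" sampler with param 1 records every trace; "probabilistic" with e.g. 0.1 keeps ~10%.
    cat > sampling-override.yaml <<'EOF'
    global:
      jaeger:
        sampler:
          type: const   # assumed key path -- verify against the chart's values.yaml
          param: 1
    EOF
    helm upgrade --install social-network ./socialnetwork -f sampling-override.yaml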

Hello All,

I've been following the steps to deploy the social network on k8s with minikube using the Helm chart. I have tried three ways of forwarding the workload to nginx, but Jaeger only captures a handful of traces (compared to Docker with the exact same command).
For example, for 428 requests: Docker records ~90 traces, while k8s records only ~10.

What I have tried (rough command sketches after this list):

  1. Port-forward: kubectl port-forward nginx-thrift-<id>-<id> 8080:8080
  2. NodePort: add type: NodePort to the values.yaml of nginx-thrift (I changed the baseTemplate as well to make sure it takes effect)
  3. LoadBalancer: set nginx-thrift to type: LoadBalancer and then run minikube tunnel
  4. Run wrk2 inside the same cluster: I created a pod based on the ubuntu-client YAML in openshift, ran the script, and the requests were sent successfully, but Jaeger still captures only a few traces
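
Rough shell sketch of the three forwarding paths above (service/pod names such as nginx-thrift are whatever kubectl get svc,pods reports in my deployment, so treat them as placeholders):

    # 1. Port-forward straight to the frontend
    kubectl port-forward svc/nginx-thrift 8080:8080

    # 2. NodePort: switch the service type, then let minikube print the URL
    kubectl patch svc nginx-thrift -p '{"spec": {"type": "NodePort"}}'
    minikube service nginx-thrift --url

    # 3. LoadBalancer: switch the type and keep a tunnel open in a second terminal
    kubectl patch svc nginx-thrift -p '{"spec": {"type": "LoadBalancer"}}'
    minikube tunnel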

I can access the application with all three forwarding methods above. But when I run ../wrk2/wrk -D exp -t 5 -c 5 -d 20 -L -s ./wrk2/scripts/social-network/compose-post.lua http://localhost:8080/wrk2-api/post/compose -R 20, Jaeger only shows a few traces.
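
To make the trace counts comparable between Docker and k8s, I count them through the Jaeger query service rather than eyeballing the UI. A rough sketch; the jaeger service name and the nginx-web-server service name are assumptions about my deployment, and /api/traces is the UI's internal endpoint rather than a stable API:

    # Count how many traces Jaeger has stored for the frontend service
    kubectl port-forward svc/jaeger 16686:16686 &
    curl -s 'http://localhost:16686/api/traces?service=nginx-web-server&limit=500' \
      | python3 -c 'import sys, json; print(len(json.load(sys.stdin)["data"]))'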

Yet the wrk2 output seems to show that all requests completed successfully:

Test Results @ http://localhost:8080/wrk2-api/post/compose 
  Thread Stats   Avg      Stdev     99%   +/- Stdev
    Latency    16.19ms   12.72ms  69.44ms   91.43%
    Req/Sec     4.10      8.21    27.00     84.98%
  Latency Distribution (HdrHistogram - Recorded Latency)
 50.000%   11.85ms
 75.000%   14.93ms
 90.000%   26.77ms
 99.000%   69.44ms
 99.900%  108.61ms
 99.990%  108.61ms
 99.999%  108.61ms
100.000%  108.61ms

  Detailed Percentile spectrum:
       Value   Percentile   TotalCount 1/(1-Percentile)

       6.627     0.000000            1         1.00
       9.455     0.100000           21         1.11
      10.263     0.200000           42         1.25
      10.631     0.300000           63         1.43
      11.127     0.400000           84         1.67
      11.847     0.500000          105         2.00
      12.135     0.550000          116         2.22
      12.471     0.600000          126         2.50
      13.311     0.650000          137         2.86
      14.023     0.700000          147         3.33
      14.927     0.750000          158         4.00
      16.319     0.775000          163         4.44
      17.823     0.800000          168         5.00
      18.591     0.825000          174         5.71
      21.023     0.850000          179         6.67
      24.159     0.875000          184         8.00
      24.991     0.887500          187         8.89
      26.767     0.900000          189        10.00
      27.375     0.912500          192        11.43
      37.407     0.925000          195        13.33
      41.631     0.937500          197        16.00
      42.399     0.943750          199        17.78
      42.687     0.950000          200        20.00
      43.775     0.956250          201        22.86
      48.351     0.962500          203        26.67
      51.359     0.968750          204        32.00
      51.839     0.971875          205        35.56
      51.839     0.975000          205        40.00
      58.847     0.978125          206        45.71
      62.399     0.981250          207        53.33
      62.399     0.984375          207        64.00
      69.439     0.985938          208        71.11
      69.439     0.987500          208        80.00
      69.439     0.989062          208        91.43
      76.671     0.990625          209       106.67
      76.671     0.992188          209       128.00
      76.671     0.992969          209       142.22
      76.671     0.993750          209       160.00
      76.671     0.994531          209       182.86
     108.607     0.995313          210       213.33
     108.607     1.000000          210          inf
#[Mean    =       16.187, StdDeviation   =       12.719]
#[Max     =      108.544, Total count    =          210]
#[Buckets =           27, SubBuckets     =         2048]
-----------------------------------------------------------------------
  428 requests in 20.00s, 89.86KB read
Requests/sec:     21.40  
Transfer/sec:      4.49KB

One way around this might be to set up k8s on a bare-metal machine, but I don't have the resources for that right now.
Does anyone know how to resolve this without setting up k8s on bare metal?

Thanks!

PS: A summary of my system (Pop!_OS, based on Ubuntu 22.04 LTS):

Architecture:            x86_64
  CPU op-mode(s):        32-bit, 64-bit
  Address sizes:         39 bits physical, 48 bits virtual
  Byte Order:            Little Endian
CPU(s):                  8
  On-line CPU(s) list:   0-7
Vendor ID:               GenuineIntel
  Model name:            Intel(R) Core(TM) i7-8565U CPU @ 1.80GHz
    CPU family:          6
    Model:               142
    Thread(s) per core:  2
    Core(s) per socket:  4
    Socket(s):           1
    Stepping:            12
    CPU max MHz:         4600.0000
    CPU min MHz:         400.0000
    BogoMIPS:            3999.93
