
[BUG] Ver. 5.11.0.9244-ls241 spams logs and sends CPU to 100% #237

Open
NightHawkATL opened this issue Oct 3, 2024 · 5 comments

@NightHawkATL

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

Radarr will randomly start using 100% CPU on the VM I have all of my Arrs installed on, and it causes issues; I have to go into Portainer and restart the container to get it to stop. At first it would fill up the swap, so I disabled swap two days ago, and then it started hitting 100% CPU yesterday and today. The VM has 4 vCPUs and 16 GB RAM. I am also running a separate 4K instance of Radarr, installed as part of the same stack, and it is not exhibiting the same behavior.
(two screenshots attached)

Expected Behavior

That it won't use 100% CPU for one container. I don't know what else to put here.

Steps To Reproduce

It is random, so roughly: set up a VM with Ubuntu 22.04, install Docker, Compose, and Portainer, then disable the swap space on the VM (see the sketch below). Let Radarr run for a day or two and it should peg the CPU at 100%.
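
For reference, a minimal sketch of the swap-disabling step on Ubuntu 22.04 (not from the thread; the fstab edit is one common way to make it persistent):

```bash
# Turn swap off for the current boot
sudo swapoff -a

# Keep it off across reboots by commenting out the swap entry in /etc/fstab
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab
```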

Environment

- OS: Ubuntu 22.04
- How docker service was installed: As part of a single compose stack containing all of the Arrs, on the same docker network.

CPU architecture

x86-64

Docker creation

# movie management      
  radarr:
    image: ghcr.io/linuxserver/radarr:latest
    container_name: radarr
    restart: unless-stopped
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/New_York
    volumes:
      - /portainer/Files/AppData/Config/radarr:/config
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone      
      - /home/{redacted-user}/stuff:/stuff
      - /home/{redacted-user}/backup:/backup
    ports:
      - 7878:7878
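
Not part of the reporter's config, but as a hedged stopgap sketch: the Compose Specification's service-level `cpus` / `mem_limit` options can cap how much of the VM a single runaway container consumes (this limits the symptom, not the cause; values below are illustrative):

```yaml
# Hypothetical additions to the radarr service above
  radarr:
    image: ghcr.io/linuxserver/radarr:latest
    container_name: radarr
    restart: unless-stopped
    cpus: "2.0"      # cap at roughly 2 of the VM's 4 vCPUs
    mem_limit: 4g    # cap memory so growth cannot exhaust the 16 GB host
    # ...rest of the service definition unchanged
```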

Container logs

```
[Warn] Microsoft.AspNetCore.Server.Kestrel: As of "10/03/2024 18:05:24 +00:00", the heartbeat has been running for "00:00:01.3019212" which is longer than "00:00:01". This could be caused by thread pool starvation.
[Warn] Microsoft.AspNetCore.Server.Kestrel: As of "10/03/2024 18:05:27 +00:00", the heartbeat has been running for "00:00:01.6742217" which is longer than "00:00:01". This could be caused by thread pool starvation.
[Warn] Microsoft.AspNetCore.Server.Kestrel: As of "10/03/2024 18:05:33 +00:00", the heartbeat has been running for "00:00:01.6027067" which is longer than "00:00:01". This could be caused by thread pool starvation.
[Warn] Microsoft.AspNetCore.Server.Kestrel: As of "10/03/2024 18:05:38 +00:00", the heartbeat has been running for "00:00:01.6800331" which is longer than "00:00:01". This could be caused by thread pool starvation.
[Warn] Microsoft.AspNetCore.Server.Kestrel: As of "10/03/2024 18:05:43 +00:00", the heartbeat has been running for "00:00:01.7568994" which is longer than "00:00:01". This could be caused by thread pool starvation.
[Warn] Microsoft.AspNetCore.Server.Kestrel: As of "10/03/2024 18:05:47 +00:00", the heartbeat has been running for "00:00:01.7845453" which is longer than "00:00:01". This could be caused by thread pool starvation.
[Warn] Microsoft.AspNetCore.Server.Kestrel: As of "10/03/2024 18:05:53 +00:00", the heartbeat has been running for "00:00:01.2124929" which is longer than "00:00:01". This could be caused by thread pool starvation.
```

github-actions bot commented Oct 3, 2024

Thanks for opening your first issue here! Be sure to follow the relevant issue templates, or risk having this issue marked as invalid.

@NightHawkATL
Author

?

@LinuxServer-CI
Contributor

This issue has been automatically marked as stale because it has not had recent activity. This might be due to missing feedback from OP. It will be closed if no further activity occurs. Thank you for your contributions.

@NightHawkATL
Author

(screenshot attached)
Still doing it as of today.

@j0nnymoe
Member

While yes, that shows the container, have you checked the process that's actually running, and your Radarr logs, to see what it's doing? If your other instance isn't doing it, that points to a configuration issue rather than a container issue.
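
A minimal sketch of how to act on that suggestion (assuming the `container_name` radarr from the compose snippet above; these commands are not from the thread):

```bash
# Per-process view of what is running inside the container (ps is executed on the host)
docker top radarr aux

# One-shot CPU/memory snapshot for the container
docker stats --no-stream radarr

# Recent container stdout/stderr (where the Kestrel heartbeat warnings above appear)
docker logs --tail 200 radarr

# Radarr's own log files are typically kept under /config/logs inside the container
docker exec radarr ls -lt /config/logs
```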
