
oss-db and elasticsearch containers don't stop automatically #86

Open
valeriocos opened this issue Aug 14, 2019 · 9 comments

@valeriocos
Member

valeriocos commented Aug 14, 2019

When stopping docker-compose, the containers scava-deployment_oss-db_1 and scava-deployment_elasticsearch_1 don't stop automatically and throw an error. These containers must be stopped manually.

Stopping scava-deployment_admin-webapp_1   ... done
Stopping scava-deployment_api-server_1     ... done
Stopping scava-deployment_dashb-importer_1 ... done
Stopping scava-deployment_kb-service_1     ... done
Stopping scava-deployment_prosoul_1        ... done
Stopping scava-deployment_auth-server_1    ... done
Stopping scava-deployment_oss-app_1        ... 
Stopping scava-deployment_kibiter_1        ... done
Stopping scava-deployment_oss-db_1         ... error
Stopping scava-deployment_elasticsearch_1  ... error
Stopping scava-deployment_kb-db_1          ... done

ERROR: for scava-deployment_oss-app_1  UnixHTTPConnectionPool(host='localhost', port=None): Read timed out. (read timeout=70)
ERROR: An HTTP request took too long to complete. Retry with --verbose to obtain debug information.
If you encounter this issue regularly because of slow network conditions, consider setting COMPOSE_HTTP_TIMEOUT to a higher value (current value: 60).
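
For reference, a minimal sketch of the manual clean-up, using the container names from the log above (docker kill only as a fallback if docker stop itself times out):

$ docker stop scava-deployment_oss-db_1 scava-deployment_elasticsearch_1
$ docker kill scava-deployment_oss-db_1 scava-deployment_elasticsearch_1   # only if stop times out again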

Any ideas, @MarcioMateus @md2manoppello @creat89?

EDITED:
I'm using commit a678687 to execute docker-compose.

@creat89
Contributor

creat89 commented Aug 14, 2019

I don't think I have seen that error before, but I'll try to check later on our server. @Danny2097, have you seen this error before?

@Danny2097
Member

@creat89 / @valeriocos, no, I have not seen this issue before. The only issue I saw was with the initial start-up of the ES container.

@valeriocos, what analysis tasks did you have running prior to this (if any)? Perhaps there is a rogue process preventing it from closing?

@valeriocos
Member Author

valeriocos commented Aug 14, 2019

None. I did a docker system prune -a and checked that no containers were running afterwards.
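
As a sketch, roughly the clean-up and the check:

$ docker system prune -a   # remove stopped containers, unused networks and images
$ docker ps -a             # confirm no containers are left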

This issue may be related to another one I'm going to open soon. When doing docker-compose up, the platform gets blocked on some kb-db actions (after the prosoul actions). Nothing (workers, projects, metrics, etc.) is visible on the cockpit UI except for http://localhost:5601/#/home (please see #85 (comment)). Have you seen something similar?

@valeriocos
Member Author

Any news on this issue, @creat89? Did you have time to check it on your server?

@tdegueul

tdegueul commented Aug 16, 2019

On my side, running dev commit 35e7d18, which pulls the image for crossminer/scava@fd85adc from Scava:

$ docker system prune -a --volumes
$ docker-compose -f docker-compose-build.yml build --no-cache --parallel
$ docker-compose -f docker-compose-build.yml up

Containers stop just fine:

Stopping scava-deployment_admin-webapp_1   ... done
Stopping scava-deployment_dashb-importer_1 ... done
Stopping scava-deployment_api-server_1     ... done
Stopping scava-deployment_kb-service_1     ... done
Stopping scava-deployment_prosoul_1        ... done
Stopping scava-deployment_auth-server_1    ... done
Stopping scava-deployment_kibiter_1        ... done
Stopping scava-deployment_oss-db_1         ... done
Stopping scava-deployment_elasticsearch_1  ... done
Stopping scava-deployment_kb-db_1          ... done

@valeriocos
Member Author

@tdegueul, I'm replicating your steps (the only differences are the options --volumes and --parallel).

@valeriocos
Member Author

Closing the issue. After following these steps I didn't get any errors. Thanks @tdegueul!

@valeriocos
Member Author

The issue happened again after following the steps at #89 (comment):

^CGracefully stopping... (press Ctrl+C again to force)
Stopping scava-deployment_admin-webapp_1  ... done
Stopping scava-deployment_kb-service_1    ... done
Stopping scava-deployment_api-server_1    ... done
Stopping scava-deployment_oss-app_1       ... 
Stopping scava-deployment_kibiter_1       ... done
Stopping scava-deployment_auth-server_1   ... done
Stopping scava-deployment_oss-db_1        ... error
Stopping scava-deployment_elasticsearch_1 ... error
Stopping scava-deployment_kb-db_1         ... done

ERROR: for scava-deployment_oss-app_1  UnixHTTPConnectionPool(host='localhost', port=None): Read timed out. (read timeout=70)
ERROR: An HTTP request took too long to complete. Retry with --verbose to obtain debug information.
If you encounter this issue regularly because of slow network conditions, consider setting COMPOSE_HTTP_TIMEOUT to a higher value (current value: 60).

@MarcioMateus
Contributor

I saw similar behaviour when using the Docker engine for Mac.

I usually "solve" the problem by restarting the Docker engine. When that doesn't solve the problem, it is time to run docker system prune -a --volumes.
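
On a Linux host the equivalent restart would be, as a sketch (on Mac it is done from the Docker menu instead):

$ sudo systemctl restart docker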

After a quick search, it seems to be related to the "stress" that the containers put on the Docker daemon. The proposed solutions are:

  • Reduce the number of Docker containers and/or the size of the Docker images (may be difficult, but we can try to optimise the images)
  • Increase the resources available to the Docker engine (number of CPUs, RAM, storage) (not always possible)
  • Change the value of the environment variables that define the timeouts (see the sketch after this list):
export DOCKER_CLIENT_TIMEOUT=120
export COMPOSE_HTTP_TIMEOUT=120
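
As a sketch of that last option, the timeouts can also be set inline for a single invocation, so nothing stays exported in the shell:

$ COMPOSE_HTTP_TIMEOUT=120 DOCKER_CLIENT_TIMEOUT=120 docker-compose down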
