Engage docker-compose configuration to bring up the backend and Celery tasks
To use this in a development setting, make sure you clone this repo with `--recurse-submodules`. That is:

```sh
git clone --recurse-submodules [email protected]:hackla-engage/engage-docker.git
```

If you forgot that step, you can just run `git submodule init` and then `git submodule update`.
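The two recovery commands above can also be combined; `git submodule update --init` is the standard git shorthand for init followed by update:

```sh
# Fetch the engage-backend and engage-celery submodules
# after a clone made without --recurse-submodules
git submodule update --init
```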
Why? Because this repo includes two submodules: `engage-backend` and `engage-celery`. You can do your development in this cloned repository, since in effect you are cloning those two repos as well. Make sure you create a new branch if you want to do development in either of them.
To develop on those submodules, go to GitHub and fork `engage-backend` or `engage-celery`. Then add your fork as a remote named `myfork` (or some name you'll remember).
For example:

- Fork `engage-backend` on GitHub
- Go to your local clone of this repo and enter the `engage-backend` subdirectory
- Get the clone address for your fork. Mine, for example, is `[email protected]:eselkin/engage-backend.git`
- Add the remote: `git remote add myfork [email protected]:eselkin/engage-backend.git`
- Make a new branch: `git branch somedevelopmentfeature`
- Check out that local branch: `git checkout somedevelopmentfeature`
- Make changes to that submodule
- Push the branch to your fork: `git push --set-upstream myfork somedevelopmentfeature`
- Make a pull request from your fork on GitHub.com
We provide a `docker-compose-dev.yml` and a `dev.env`. The docker-compose file uses this environment file for its configuration. Several attributes can be changed, but it is recommended that you not include changes to `dev.env` in a PR.
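The wiring between the two looks roughly like this (an illustrative sketch, not the actual file; the service name and build path are assumptions):

```yaml
# docker-compose-dev.yml (illustrative excerpt, not the real file)
services:
  backend:                    # hypothetical service name
    build: ./engage-backend   # hypothetical build path
    env_file:
      - dev.env               # variables such as BEAT_SANTAMONICA_SCRAPE come from here
```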
To run:
- Make sure you have docker and docker-compose on your system
- Make sure you have enough disk space (we've tried to keep the containers small, but they still take up space)
- Make sure you have cloned the repository correctly (see above)
- Make sure you don't have postgres running on port 5432 on your system (most people don't)
- Make sure you don't have redis running on port 6379 on your system (most people don't)
- Make sure you don't have rabbitmq running on your system (most people don't)
```sh
docker-compose -f docker-compose-dev.yml build  # Wait!
docker-compose -f docker-compose-dev.yml up     # Run
```
Using the configuration from `dev.env`, the scraper will begin scraping agendas after one minute and will populate a local Postgres database running in one of the containers. This is controlled by the environment variable `BEAT_SANTAMONICA_SCRAPE=*`. If you wish to increase the time between scrapes, change the `*` to another cron minute interval (e.g. `*/5` for every five minutes).
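For instance, the relevant line in `dev.env` would change like this (a sketch; only `BEAT_SANTAMONICA_SCRAPE` is taken from this document, and the surrounding comments are illustrative):

```sh
# dev.env (excerpt)
# cron minute field: "*" = scrape every minute, "*/5" = every five minutes
BEAT_SANTAMONICA_SCRAPE=*/5
```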
You can log into that Postgres container with the username and password from the `dev.env` file. In production, however, this would not be possible.
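As a sketch of what logging in could look like, assuming the containers are up; the service name `postgres` and the `<user-from-dev.env>` placeholder are hypothetical, so substitute the values from `docker-compose-dev.yml` and `dev.env`:

```sh
# Open a psql shell inside the running Postgres container
# ("postgres" is a placeholder service name; the -U user comes from dev.env)
docker-compose -f docker-compose-dev.yml exec postgres psql -U <user-from-dev.env>
```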
Since the directories are mounted as volumes in `docker-compose-dev.yml`, your changes will still be present if you issue `docker-compose -f docker-compose-dev.yml down` and then `up` again; you won't need to build again. If you have altered the `requirements.txt` files or the Pipfiles in the submodules, though, it's suggested that you build again.