Thank you for wanting to help us!
You can contribute to the project in many ways:
- Contribute to the questions database:
  - Add new questions
  - Propose modifications to existing questions
- Contribute to the application:
  - Notify us about bugs
  - Propose improvements (through Github Issues)
  - Help develop the application (through Github Issues and Pull Requests)
  - Help with translation
Every question (finalised or in draft) currently lives on a shared platform (Notion.so), which is regularly synchronised with the database.
You can see an export of the whole application database in the /data folder
(note: this is not the database itself, just an export).
You can propose new questions directly in Contribuer (TODO translate page and update link).
Questions are then validated on the shared platform.
Did you find a bug? Do you find the interface counter-intuitive, or do you have an idea to improve the design?
There are 2 options to share your feedback:
- via Contribuer (TODO translate page and update link).
- on Github Issues by creating a new issue (here).
The technical stack is detailed further down.
You can help with code review, documentation, adding tests, improving the design, adding new features...
If you want to add a new feature:
- Comment on an existing Issue to give your rationale, or create a new one if it does not exist; we will then discuss how to proceed in the Issue thread.
- Start coding, then create a new PR with a review request.
- Our Backend uses Python Django:
  - API with Django REST Framework
  - Admin console
  - PostgreSQL database
We use the backend to:
- validate data coming from the shared platform
- create new quizzes
- expose a stats endpoint
repo: quiz-anthropocene/public-frontend
- Our Frontend uses Vue.js
- Bootstrap 4
Currently, the data is read directly from the YAML files in the /data folder. An API is under construction.
- Our Backend is hosted on Scalingo
- Our Frontend is hosted on Netlify (free tier)
- CI/CD with Github Actions
- Cron jobs with Github Actions
See the quiz-anthropocene/public-frontend/data/architecture folder.
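The cron jobs mentioned above run as scheduled GitHub Actions workflows. A minimal hypothetical sketch (the workflow name, schedule, and step are invented for illustration; the real workflow files live under .github/workflows in the repository):

```yaml
# .github/workflows/nightly-sync.yml (hypothetical example)
name: nightly-sync
on:
  schedule:
    - cron: "0 4 * * *"  # every day at 04:00 UTC
jobs:
  sync:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: echo "scheduled command goes here"  # placeholder step
```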
- You need Python 3.9 & Pipenv already installed.
- Clone the code locally (you can also fork the project if you plan to make modifications and open PRs)
git clone [email protected]:quiz-anthropocene/know-your-planet.git
- Install Backend dependencies
cd backend
pipenv sync
- Duplicate the file backend/.env.example and rename it to backend/.env
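The variable names to fill in are listed in backend/.env.example. As a purely hypothetical sketch of what such a file can look like (these names and values are placeholders, not the project's real settings):

```
# hypothetical .env sketch -- see backend/.env.example for the real variables
DEBUG=True
SECRET_KEY=change-me
DATABASE_URL=postgres://quiz_anthropocene_team:password@localhost:5432/quiz_anthropocene
```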
- Install PostgreSQL
- Build the database
* If you haven't created a USER to log in to PostgreSQL, please do so before running the following commands. Alternatively, if you chose a superuser (postgres) password during the PostgreSQL installation, just add '-U postgres' to the commands.
// optional
dropdb quiz_anthropocene
psql -c "CREATE USER quiz_anthropocene_team WITH PASSWORD 'password'"
psql -c "CREATE DATABASE quiz_anthropocene OWNER quiz_anthropocene_team"
psql -c "GRANT ALL PRIVILEGES ON DATABASE quiz_anthropocene TO quiz_anthropocene_team"
psql -c "ALTER USER quiz_anthropocene_team CREATEROLE CREATEDB"
- Run the migrations
* Go to the Windows section if you have an issue with a Windows environment
pipenv run python manage.py migrate
- Load the database
pipenv run python manage.py init_db_from_yaml --with-sql-reset
- Install pre-commit git hook
pre-commit install
* Go to the Windows section if you have an issue with a Windows environment
cd backend
pipenv run python manage.py runserver
You can reach the backend at http://localhost:8000
You can reach the API documentation at url http://localhost:8000/api/docs/
First, create an admin user
cd backend
pipenv run python manage.py createsuperuser --username [email protected] --email [email protected]
Start the backend and go to url http://localhost:8000/django
Tests
pipenv run python manage.py test
Linting is handled with pre-commit.
First install gettext.
The translation files can be found under /locale.
Use tags {% translate "Word" %}.
Then update the .po files:
python manage.py makemessages --all --no-wrap --no-location
Pick the language code from the list of ISO 639-1 codes: https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes
python manage.py makemessages -l <LANGUAGE_CODE>
Add the language code in settings.py
Use Poedit to simplify your job.
It will update the .po files in /locale.
Then compile the .po files into .mo files:
python manage.py compilemessages
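Outside of Django, you can sanity-check how a compiled catalog is looked up with Python's stdlib gettext module. A minimal sketch: the "django" domain and locale folder follow Django's layout, and fallback=True simply returns the original string when no compiled catalog is found yet.

```python
import gettext

# Looks up locale/fr/LC_MESSAGES/django.mo; with fallback=True the original
# string is returned when the catalog is missing or not compiled yet.
t = gettext.translation("django", localedir="locale", languages=["fr"], fallback=True)
print(t.gettext("Word"))
```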
Note: for the backend, every command should be prefixed with pipenv run
Import the whole database
python manage.py init_db_from_yaml --with-sql-reset
Import questions into the database
// doesn't work since files in /data are "flat"
python manage.py loaddata ../data/questions.yaml
// works only if questions have been deleted previously in database
python manage.py loaddata ../data/questions.yaml --model=question --format=yaml-pretty-flat
Export questions from database to YAML files
// We use a slightly different format to simplify the files
python manage.py dumpdata api.question --output=../data/questions.yaml --format=yaml-pretty-flat
// but it's still possible to do a normal data dump
python manage.py dumpdata api.question --output=../data/questions.yaml
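As a rough illustration of the difference (field names here are hypothetical; see the real files in /data), the flat format lifts the fields out of the usual model/pk/fields nesting that dumpdata produces:

```yaml
# Standard dumpdata output (nested):
- model: api.question
  pk: 1
  fields:
    text: "Example question?"   # hypothetical field

# yaml-pretty-flat output (flattened, as stored in /data):
- model: api.question
  pk: 1
  text: "Example question?"
```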
Reinitialise statistics of a question
python manage.py reset_question_stats <question_id>
Reinitialise the whole database
python manage.py reset_db // django-extensions
python manage.py migrate
python manage.py init_db_from_yaml --with-sql-reset
Import a PGSQL dump
// if it's a .tar.gz, run first
tar -xvzf <dump_name>.tar.gz
pg_restore -d quiz_anthropocene --clean --no-owner --no-privileges <dump_name>.pgsql
// if there are permission issues
for tbl in `psql -qAt -c "select tablename from pg_tables where schemaname = 'public';" quiz_anthropocene` ; do psql -c "alter table \"$tbl\" owner to quiz_anthropocene_team" quiz_anthropocene ; done
for tbl in `psql -qAt -c "select sequence_name from information_schema.sequences where sequence_schema = 'public';" quiz_anthropocene` ; do psql -c "alter sequence \"$tbl\" owner to quiz_anthropocene_team" quiz_anthropocene ; done
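The two loops above simply run one ALTER ... OWNER TO statement per table and per sequence. A Python sketch of the same statement-building (table and sequence names below are hypothetical), which can be handy to inspect what will run before piping anything to psql:

```python
def ownership_statements(tables, sequences, owner="quiz_anthropocene_team"):
    """Build the ALTER ... OWNER TO statements that the shell loops run via psql."""
    stmts = [f'ALTER TABLE "{t}" OWNER TO {owner};' for t in tables]
    stmts += [f'ALTER SEQUENCE "{s}" OWNER TO {owner};' for s in sequences]
    return stmts

# Hypothetical names for illustration only:
for stmt in ownership_statements(["api_question"], ["api_question_id_seq"]):
    print(stmt)
```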
M2M queries
qz1 = Quiz.objects.first()
qz1.questions.all()          # questions linked to the quiz
qz1.quizquestion_set.all()   # the through-model rows
qz1.relationships.all()
qz1.from_quizs.all()
qz1.to_quizs.all()
q = Question.objects.first()
q.quizs.all()                # quizzes containing the question
q.quizquestion_set.all()
Generate the model graph
pip install pygraphviz
python manage.py graph_models -a -X ContentType,LogEntry,AbstractUser,User,AbstractBaseSession,Session,Group,Permission -o graph.png
Install dependencies
pipenv sync
Update dependencies
pipenv update
// Runs $ pipenv lock then $ pipenv sync
Update a specific package
// First edit the Pipfile
pipenv install
Update the Metabase instance on Heroku
Resize images (PNG)
- Install pngquant
- Run the software on a specific file:
pngquant -f --ext .png <filename>
- Run the software for all files in a folder:
pngquant -f --ext .png **/*.png
You might run into encoding issues on Windows, for example during the database import:
pipenv run python -X utf8 manage.py init_db_from_yaml --with-sql-reset
- Install pre-commit
pip install pre-commit
Error UnicodeDecodeError: 'charmap' codec can't decode byte
- Add -X utf8 to the python command, as in the import example above.
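The error comes from Python defaulting to the Windows code page (cp1252) when reading UTF-8 files; -X utf8 forces UTF-8 instead. A minimal reproduction with synthetic data (not project files):

```python
raw = "✝".encode("utf-8")  # b'\xe2\x9c\x9d' -- byte 0x9d has no cp1252 mapping
try:
    raw.decode("cp1252")   # what happens without -X utf8 on Windows
except UnicodeDecodeError as err:
    print(err)             # 'charmap' codec can't decode byte 0x9d ...
print(raw.decode("utf-8"))  # decoding as UTF-8 works
```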