CSE2000 Software Project
- Overview
- Features
- Getting started
- Usage
- Running the tests
- Coding style
- Deployment
- Contributing
- Issue Board
- Authors
- License
- Acknowledgements
- Screenshots
The following repository contains the source code from the 2019-2020 CSE2000 Software Project course at TU Delft.
In our project, we have developed software for the 'Stereotypes in Computer Science' research project, which is a collaboration between Leiden University, Delft University of Technology, NEMO Science Museum, and VHTO. The software will be used to study the stereotypes that children hold about computer scientists, and whether these stereotypes are affected by a virtual intervention with role models. The software also provides a way for people to learn about their own stereotypes.
For this purpose, the project has been split into two parts: a data collection application and a data dissemination application.
- Parents can complete a digital consent form for their children
- Children can take an IAT test
- The IAT test contains:
- binary questions
- multiple choice questions
- likert questions
- open questions
- information text
- video intervention
- Test answers are stored in a database
- Test results are processed automatically
- The results can be sent by email
- The admin can:
- see live statistics about the application
- see the name of participants that have taken the test in the last hour
- download the data in excel format
- select different quiz versions
- send or remove the quiz data if a test was not finished
These instructions will get you a copy of the project up and running on your local machine for development and testing purposes.
A step-by-step series of examples that show how to get a development environment running.
Install npm and Node.js
$ node --version
$ npm --version
Install React
$ sudo npm install create-react-app -g
Install Python:
$ python3 --version
Install Pip:
$ pip3 --version
Install PostgreSQL:
$ which psql
Install Redis. For a more detailed explanation, you can follow the official Redis installation guide.
$ redis-server --version
Install all dependencies:
$ pip3 install -r requirements.txt
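Optionally, the Python dependencies can be installed inside a virtual environment (standard Python tooling, not part of the original setup) to keep them isolated from the system packages:
$ python3 -m venv venv
$ source venv/bin/activate
$ pip3 install -r requirements.txt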
For more documentation, please consult the Dockerfiles present in each application.
A step-by-step series of examples that show how to run the applications locally.
To start the server execute:
$ gunicorn -k geventwebsocket.gunicorn.workers.GeventWebSocketWorker -w 1 -b 0.0.0.0:8000 server:app
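The server reads its configuration from config.py; which configuration class to use is selected through the APP_SETTINGS environment variable (the same variable used in the testing instructions below). The values in this sketch are assumptions: the DevelopmentConfig class name and the local database URL may differ in your setup.
$ export APP_SETTINGS=config.DevelopmentConfig
$ export DATABASE_URL=postgresql://localhost/stereotypes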
Detailed instructions on how to run the server can be found in the corresponding README.md file.
To run the application execute:
$ npm install
$ npm run start
Detailed instructions on how to run the client applications can be found in the corresponding README.md files in the client subfolders.
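Each client application reads its configuration from a local .env file; the .env_example files used in the Docker setup below can serve as a starting point. For example, from inside a client application directory:
$ cp .env_example .env
The variables inside presumably follow the usual REACT_APP_ naming convention of Create React App projects; check the .env_example files for the exact names.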
Make sure you have Docker and Docker Compose installed on your machine.
$ docker --version
Docker version 19.03.8
$ docker-compose --version
docker-compose version 1.25.0
$ cp client/client-consent-app/.env_example client/client-consent-app/.env
$ cp client/client/data-collection/.env_example client/client/data-collection/.env
$ cp client/client/data-dissemination/.env_example client/client/data-dissemination/.env
$ sudo chmod +x start.sh
$ sudo ./start.sh
OR
$ sudo docker-compose up
- The server is running on http://localhost:8000/
- The consent form is on http://localhost:3001/
- The client-data-collection is on http://localhost:3002/
- The client-data-dissemination is on http://localhost:3003/
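To follow the logs of the running containers or to stop and remove the whole stack, the standard Docker Compose commands can be used:
$ sudo docker-compose logs -f
$ sudo docker-compose down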
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
...
5e760610e30a stereotypescs_app "gunicorn -k geventw…" 2 minutes ago Up 2 minutes stoic_bell
...
$ docker exec -it <CONTAINER_ID> /bin/bash
In this example, <CONTAINER_ID> should be replaced by 5e760610e30a.
$ python3 -c 'from api.script import *; populate();'
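If the database schema has not been created yet, the migrations in the migrations folder can be applied first. This sketch assumes Flask-Migrate is used (which the migrations/versions directory suggests) and that FLASK_APP points at server.py:
$ export FLASK_APP=server.py
$ python3 -m flask db upgrade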
Comprehensive API documentation can be found at: https://app.swaggerhub.com/apis-docs/adondera/StereotypesCS/1.0.0
There you can find all the information needed to make requests to the server used for this project.
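As a quick sanity check you can send a request to a locally running server with curl. The route below is purely illustrative; the actual paths and payloads are documented in the Swagger specification:
$ curl -X POST http://localhost:8000/login -H "Content-Type: application/json" -d '{"username": "admin", "password": "admin"}'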
This section describes how the folders are organised and where to find the different parts of the project.
.
├── api
│ ├── endpoints
│ │ ├── auth.py
│ │ ├── consent.py
│ │ ├── dashboard.py
│ │ └── ...
│ ├── models
│ │ ├── category.py
│ │ ├── question.py
│ │ ├── participant.py
│ │ └── ...
│ ├── static
│ │ └── IATs
│ ├── tests
│ │ ├── test_files
│ │ ├── conftest.py
│ │ ├── test_requests.py
│ │ └── ...
│ ├── __init__.py
│ ├── README.md
│ └── script.py
├── client
│ ├── client-consent-app
│ │ ├── public
│ │ ├── src
│ │ ├── app.js
│ │ └── ...
│ ├── data-collection
│ │ ├── public
│ │ ├── selenium_tests
│ │ ├── src
│ │ ├── app.js
│ │ └── ...
│ └── data-dissemination
│ ├── public
│ ├── src
│ ├── app.js
│ └── ...
├── docs
│ ├── client_meetings
│ ├── sprint_retrospectives
│ ├── team_meetings
│ └── ...
├── migrations
│ ├── versions
│ └── ...
├── .gitignore
├── .gitlab-ci.yml
├── config.py
├── LICENSE
├── Procfile
├── pylintrc
├── README.md
├── requirements.txt
├── runtime.txt
└── server.py
File/Folder Name | Details |
---|---|
api | server-side code |
client | client-side code |
docs | documentation, meeting notes |
migrations | migration files for database schema |
config.py | server configurations |
requirements.txt | package dependencies |
runtime.txt | python version (for deployment) |
server.py | running the server application |
- Start the client collection application
- Log in with username admin and password admin
- Select a test version and press "Load Questions" and then "Start Session" to enter the test
- Register a child via the consent app.
- You should now see the child's name on the starting page.
- Press "Begin" to start the test
- Follow the instructions to complete the test
- If you want to skip to the end of the quiz, press the "q" key
- At the end you can enter additional notes in the open box
- Use the NEMO code to submit the test
- You should be redirected back to the start page
For the dissemination application, the quiz can be started directly, without having to log in or complete the consent form.
This section explains how to run the automated tests for the system.
Navigate to each application directory and run:
$ npm test
Make sure you have pytest installed:
$ pytest --version
Select the testing configuration:
$ export APP_SETTINGS=config.TestingConfig
To run all tests use:
$ pytest api
NOTE: For tests to pass you need to have both your PostgreSQL database and the Redis server running.
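On a typical Linux setup both services can be started with (the exact commands depend on your distribution):
$ sudo service postgresql start
$ redis-server --daemonize yes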
You can also run an individual test file with:
$ pytest api/tests/test_requests.py
To run tests with coverage use:
$ coverage run --source api --branch -m pytest api
$ coverage report
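An HTML report with per-line coverage can also be generated (written to the htmlcov directory by default):
$ coverage html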
This section explains how to run the static analysis tools for the system.
For the client-side we used eslint as a static analysis tool.
$ npx eslint .
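Many of the reported issues can be fixed automatically with the --fix flag:
$ npx eslint . --fix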
For the server-side we used pylint as a static analysis tool.
You can get pylint as a plugin and run it directly from the PyCharm IDE.
You can also install and run pylint from your terminal.
To install pylint use:
$ pip3 install pylint
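The plugins loaded in the command below are separate packages; assuming the PyPI names match the plugin module names, they can be installed with:
$ pip3 install pylint-flask pylint-flask-sqlalchemy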
To run pylint use:
$ pylint --load-plugins "pylint_flask_sqlalchemy, pylint_flask" api
You should get a pylint message that looks like this: "Your code has been rated at 8.32/10"
For deploying our applications, we have used Heroku. They can either be deployed manually from the GitLab CI, or you can deploy them from your own terminal using git.
For deploying any of the client applications you can execute the following command:
$ git subtree push --prefix client/name_of_folder name_of_the_remote master
In our case, the name of the folder is either client-consent-app, data-collection, or data-dissemination.
For deploying the Flask server you can directly run:
$ git push name_of_the_remote name_of_your_branch:master
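Before pushing, each Heroku app has to be registered as a git remote. Assuming the Heroku CLI is installed, this can be done with heroku git:remote; the remote name collection is only an example, and the app names match the deployments listed below:
$ heroku git:remote -a collection-nemo -r collection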
- The consent form can be found at https://frontend-nemo.herokuapp.com/
- The collection application can be found at https://collection-nemo.herokuapp.com/
- The dissemination application can be found at https://dissemination-nemo.herokuapp.com/
Any contributions you make are greatly appreciated.
- Fork the Project
- Create your Feature Branch (git checkout -b feature/AmazingFeature)
- Commit your Changes (git commit -m 'Add some AmazingFeature')
- Push to the Branch (git push origin feature/AmazingFeature)
- Open a Merge Request
- Product Backlog: contains user stories derived from the project requirements
- Every user story has a label for its importance according to the MoSCoW method
- Sprint backlog: contains tasks derived from user stories for a specific sprint
- The labels application::collection and application::dissemination refer to the two parts of our project (data collection and data dissemination)
- Each sprint will have a separate board
- A user story will be assigned to a sprint (milestone)
- The Sprint Backlog list will contain the tasks derived from a user story
- React - The web framework used
- Axios - Promise based HTTP client
- Material UI - A popular React UI framework
- React-Redux - A Predictable State Container for JS Apps
- React-Router - Declarative Routing for ReactJS
- Flask-Restful - REST API functionalities of Flask
- Flask-JWT-extended - JWT integration for authorisation and authentication
- Flask-SocketIO - Socket communication framework
- Flask-SQLAlchemy - SQL toolkit and object-relational mapper.
- Redis - In-memory data structure store
- Cloudinary - Cloud management service for media contents
- Sendgrid - Email delivery service
- PostgreSQL - Relational database system
Development team:
- Alexandru Manolache - Backend - amanolache
- Alin Dondera - Backend - adondera
- Andrei Geadau - Frontend - ageadau
- Dragos Vecerdea - Frontend - dvecerdea
- Ionut Constantinescu - Backend - iconstantinesc
Faculty supervisors:
- Daphne van Tetering - TA
- Myrthe Tielman - Coach
Client:
- Shirley de Wit
This project is licensed under the MIT License - see the LICENSE file for details
- VHTO
- NEMO Science Museum
- Leiden University
- Delft University of Technology