121 is an open source platform for Cash based Aid, built for the Humanitarian sector by the Netherlands Red Cross. Learn more about the platform: https://www.121.global/

See: status.121.global

Status of static analysis, formatting, code-style, functionality, integration, etc.: See also: Testing

The documentation of the 121 platform can be found on the Wiki of this repository on GitHub: https://github.com/global-121/121-platform/wiki
- Install Git: https://git-scm.com/download/
- Install Node.js: https://nodejs.org/en/download/
  - Install the version specified in the `.node-version`-file.
  - To prevent conflicts between projects or components using other versions of Node.js, it is recommended to use a 'version manager':
    - FNM (for Windows/macOS/Linux)
    - NVM - Node Version Manager (for macOS/Linux)
    - NVM for Windows (for Windows)
- Install Docker
  - On Linux, install Docker Engine + Compose plugin: https://docs.docker.com/engine/install/ubuntu/#install-using-the-repository
  - On macOS, install Docker Desktop: https://docs.docker.com/docker-for-mac/install/
  - On Windows, install Docker Desktop: https://docs.docker.com/docker-for-windows/install/
If there are issues running Docker on Windows, you might need to do the following:

- Install the WSL2 Linux kernel package; check step 4 on https://learn.microsoft.com/en-us/windows/wsl/install-manual
- Set WSL2 as the default version in PowerShell (check step 5 on the same page): `wsl --set-default-version 2`
With these tools in place, you can check out the code and start setting up:

    git clone https://github.com/global-121/121-platform.git

Navigate to the root folder of this repository:

    cd 121-platform

Then install the required version of Node.js and npm:

- If you use FNM, run `fnm use` (and follow the prompts)
- If you use NVM:
  - On macOS/Linux: `nvm install`
  - On Windows: `nvm install <version in .node-version-file>`

Now, make sure to run the following in the root folder to install the necessary pre-hooks:

    npm install
Switch to the services folder:

    cd services/

Copy the centralized `.env`-file:

    cp .env.example .env

Environment variables are explained in the comments of the `.env.example`-file; some already have a value that is safe to use for development, others need to be unique/specific for your environment. Some variables are for credentials or tokens to access third-party services.
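To quickly see which variables still need a value after copying, a small sketch like this can help (a hypothetical helper, not part of this repository):

```typescript
// Hypothetical helper (not part of the 121-platform): list variables in a
// .env-style text that are still empty and thus need a value for your environment.
export function emptyVars(envText: string): string[] {
  return envText
    .split('\n')
    .map((line) => line.trim())
    .filter((line) => /^[A-Z0-9_]+=\s*$/.test(line))
    .map((line) => line.split('=')[0]);
}
```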
To start all services, after setup, from the root of this repository, run:

    npm run start:services

To see the status/logs of all or a specific Docker-container(s), run (where `<container-name>` is optional; see container-names in `docker-compose.yml`):

    npm run logs:services <container-name>

To verify the successful installation and setup of the services, access their Swagger UI:

- 121-Service: http://localhost:3000/docs/
- Mock-Service: http://localhost:3001/docs/
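If you script against these endpoints (for example in CI), a small polling helper can wait until the services are up. This is a sketch, not part of the repository; the usage comment assumes Node 18+ for the global `fetch`:

```typescript
// Sketch (not part of the repo): poll until a probe succeeds, e.g. to wait for
// the 121-service's Swagger UI to come up after `npm run start:services`.
export async function waitFor(
  probe: () => Promise<boolean>,
  attempts = 30,
  delayMs = 1000,
): Promise<boolean> {
  for (let i = 0; i < attempts; i++) {
    if (await probe()) return true;
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  return false;
}

// Usage (assumes Node 18+ global fetch):
// await waitFor(() => fetch('http://localhost:3000/docs/').then((r) => r.ok, () => false));
```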
To install dependencies for the portal, run:

    npm run install:portal

Also, make sure to create an env-file for each interface. For example:

    cp interfaces/Portal/.env.example interfaces/Portal/.env

To start the portal, from the root of this repository, run:

    npm run start:portal

Or explore the specific options as defined in each interface's own `package.json` or `README.md`.

When started, the portal will be available via:

- Portal: http://localhost:8888
When you use VS Code, you can start multiple editor-windows at once; from the root of this repository, run:

    npm run code:all

To start an individual interface/service in VS Code, run (where `<package>` is one of `portal`, `portalicious`, `121-service`, `mock-service`):

    npm run code:<package>
When making changes to the data-model of the `121-service` (creating/editing any `*.entity.ts` files), you need to create a migration script to take these changes into effect. The process is:

- Make the changes in the `*.entity.ts` file.
- To generate a migration-script, run: `docker exec 121-service npm run migration:generate src/migration/<descriptive-name-for-migration-script>`. This will compare the data-model according to your code with the data-model according to your database, and generate any CREATE, ALTER, etc. SQL-statements that are needed to make the database align with the code again.
- Restart the 121-service through `docker restart 121-service`: this will always run any new migration-scripts (and thus update the data-model in the database), in this case the just-generated migration-script.
- If more changes are required, follow the above process as often as needed.
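For illustration, a generated migration-script ends up roughly like the sketch below. All class/table/column names here are hypothetical, and the TypeORM interfaces are stubbed inline to keep the snippet self-contained; real migration files import them from `typeorm`:

```typescript
// Minimal inline stand-ins for TypeORM's types, so this sketch is self-contained;
// real migration files import MigrationInterface/QueryRunner from 'typeorm'.
interface QueryRunner {
  query(sql: string, parameters?: unknown[]): Promise<unknown>;
}
interface MigrationInterface {
  up(queryRunner: QueryRunner): Promise<void>;
  down(queryRunner: QueryRunner): Promise<void>;
}

// Hypothetical example: adding a column. Note that any needed data/SQL is
// "hard coded" here, rather than imported (e.g. from seed JSON files).
export class AddPreferredNameExample1700000000000 implements MigrationInterface {
  public async up(queryRunner: QueryRunner): Promise<void> {
    await queryRunner.query(
      `ALTER TABLE "registration" ADD "preferredName" character varying`,
    );
  }

  public async down(queryRunner: QueryRunner): Promise<void> {
    await queryRunner.query(
      `ALTER TABLE "registration" DROP COLUMN "preferredName"`,
    );
  }
}
```

The `down()`-method is what `migration:revert` executes, so it should exactly undo `up()`.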
- Do NOT import any files from our code base into your migrations. For example, do NOT import seed JSON files to get data to insert into the database, since the migration may break if these seed JSON files ever change. Instead, "hard code" the needed data in your migration file.
- Do NOT change migration files anymore after they have been merged to main, like commenting out parts of them, since there is a high probability this will result in bugs or faulty data on production instances. Instead, create a new migration file. The exception is bug-fixing a migration file, for example if a file was imported that causes the migration to fail (see above).
- To run migrations locally, do: `docker exec -it 121-service npm run migration:run`
- If you want to revert one migration, you can run: `docker exec -it 121-service npm run migration:revert`
If you ever run into issues with migrations locally, the reset process is:

- Delete all tables in the `121-service` database schema.
- Restart the `121-service` container. This will now run all migration-scripts, starting with the `InitialMigration`-script, which creates all tables.
- (Run seed)

See also the TypeORM migration documentation for more info.
NOTE: if you're making many data-model changes at once, or are doing a lot of trial and error, there is an alternative option:

- In `services/121-service/src/ormconfig.ts`, set `synchronize` to `true` and restart the `121-service`.
- This will make sure that any changes you make to `*.entity.ts` files are automatically applied to your database tables, which allows for quicker development/testing.
- When you're done with all your changes, you will need to revert all changes temporarily to be able to create a migration-script. There are multiple ways to do this, for example by stashing all your changes, or working with a new branch, etc. Either way:
  - stash all your changes (`git stash`)
  - restart the 121-service and wait until the data-model changes are actually reverted again
  - set `synchronize` back to `false` and restart the 121-service
  - load your stashed changes again (`git stash pop`)
  - generate the migration-script (see above)
  - restart the 121-service (like above, to run the new migration-script)
To test the migrations you are creating, you can use this `.sh`-script (unix only):

    ./services/121-service/src/migration/test-migration.sh

Example usage:

    ./services/121-service/src/migration/test-migration.sh main feat.new-awesome-entity

This script performs the following steps:

- Checks out the old branch and stops the specified Docker containers.
- Starts the Docker containers to apply the migration and load some data.
- Waits for the service to be up and running, then resets the database with mock data.
- Checks out the new branch, applies any stashed changes, and restarts the Docker containers to run the migrations again.
All services use JSON Web Tokens (JWT) to handle authentication. The token should be passed with each request by the browser via an `access_token`-cookie. The JWT authentication middleware handles the validation and authentication of the token.
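The cookie-lookup step can be pictured with a small sketch (a hypothetical helper, not the platform's actual middleware):

```typescript
// Hypothetical helper (not the actual middleware): pull the JWT out of the
// `access_token`-cookie sent by the browser with each request.
export function getAccessToken(cookieHeader: string | undefined): string | null {
  if (!cookieHeader) return null;
  for (const part of cookieHeader.split(';')) {
    const [name, ...rest] = part.trim().split('=');
    if (name === 'access_token' && rest.length > 0) {
      return rest.join('='); // re-join in case the value itself contains '='
    }
  }
  return null;
}
```

The real middleware would then verify the extracted token's signature and expiry before trusting it.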
All the tokens and access keys for third-party APIs should be added to the `.env`-file and subsequently imported via environment variables within the TypeScript files.
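A common pattern for this is to fail fast when a required variable is missing; the sketch below is illustrative, and `EXAMPLE_API_TOKEN` is a hypothetical variable name:

```typescript
// Sketch: read a required value from the environment, failing fast when unset,
// so misconfiguration surfaces at startup instead of mid-request.
export function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage (hypothetical variable name):
// const apiToken = requireEnv('EXAMPLE_API_TOKEN');
```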
To help with some types of files/tasks, we've listed them here:

Workspace recommendations for VS Code: When you open the root-folder of this repository in VS Code, go to "Extensions" and use the filter "Recommended" (`@recommended`); a list should be shown and each extension can be installed individually.

Generic highlights:

- Cucumber (Gherkin) Full Support - To work with `.feature`-files for test-scenarios

Interfaces / front-end highlights:

- i18n Ally - To work with translations in the HTML-templates and component TS-files
If the Swagger-UI is not accessible after installing Docker and setting up the services, you can take the following steps to debug:

- Run `docker compose ps` to list running containers and their status
- Run `docker compose logs -f <container-name>` to check their logs/console output (or leave out the `<container-name>` to get ALL output)

If there are issues with Docker commands, it could be due to permissions. Prefix your commands with `sudo docker ...`.

If the errors are related to not being able to access/connect to the database, then reset/recreate the database by:

- Setting `dropSchema: true` in `src/ormconfig.ts` of the specific service.
- Restarting that service; this will reset/recreate its database(-schema).
When considering upgrading the (LTS) version of the Node.js runtime, take into account:
- The Node.js Release schedule: https://github.com/nodejs/release#release-schedule
- The (specific) version supported by Microsoft Azure App Services, in their Node.js Support Timeline: https://github.com/Azure/app-service-linux-docs/blob/master/Runtime_Support/node_support.md
- Angular's Actively supported versions: https://angular.io/guide/versions#actively-supported-versions
- Ionic Framework's supported Node.js versions: https://ionicframework.com/docs/intro/environment#Node--npm
When new Node.js dependencies have been added to a service since it was last built on your local machine, you can:

- Verify if everything is installed properly: `docker compose exec <container-name> npm ls`
- If that produces errors or reports missing dependencies, try to build the service from a clean slate with: `npm run install:services -- --no-cache <container-name>`, or similarly: `npm run start:services -- --force-recreate <container-name>`
- Scenarios of end-to-end/integration-tests for the whole platform are described in `/features`.
- Each component has its own individual tests:
  - Unit-tests and UI-tests for all interfaces; run with `npm test` in each `interfaces/*`-folder.
  - Unit-tests and API/integration-tests for all services; run with `npm test` in each `services/*`-folder. See `121-service/README.md` / Testing for details.
- Is it to test query-magic?
- Is it to test essential endpoints (FSP-integrations) and import/exports/etc.?
- Is it an often-used endpoint (with different parameters), e.g. PATCH /registration?
- Is there actual business-logic performed?
- Not necessary:
  - update single (program) properties?
- Examples:
  - import registrations -> change PA-status (with list of refIds) -> export included PAs
  - update PA attributes: all different content-types + possible values (including edge cases)

Note that these tests are still expensive (as they bootstrap the app + database).
There are a few reasons why we write unit-test cases:

- Unit tests are written to ensure the integrity of the code at the functional level. They help us identify mistakes and unnecessary code, and show when there is room for improvement to make the code more intuitive and efficient.
- We also write unit-test cases to clearly state what a method is supposed to do, so it is smoother for new joiners to be onboarded.
- They help us follow recommended DevOps practices for maintaining a code base while working within teams.

How are unit tests affected when we make changes to the code in the future?

- We should aim to write and update unit tests alongside the current development, so that our tests are up to date and reflect the changes made. This helps us stay on track.
- Unit tests in this case differ from manual or automated UI testing. While the UI may not exhibit any changes on the surface, the code itself might be declaring new variables or making new method calls upon modification; all of those need to be tested, and the new test-scenario or spec-file should be committed together with the feature change.
We are using `jasmine` for executing unit tests within `interfaces` and `jest` within `services`. However, while writing the unit-test cases, the writing style and testing paradigm do not differ, since `jest` is based on `jasmine`.
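To illustrate that shared style, here is a spec-shaped sketch; the `describe`/`it`/`expect` stand-ins are defined inline so the snippet runs on its own, whereas in the repository the real globals are provided by the test runner:

```typescript
// Inline stand-ins so this sketch runs without jest/jasmine installed;
// in the repository these globals are provided by the test runner.
let specsRun = 0;
function describe(name: string, body: () => void): void {
  body();
}
function it(name: string, body: () => void): void {
  body();
  specsRun++;
}
function expect(actual: unknown): { toBe(expected: unknown): void } {
  return {
    toBe(expected: unknown): void {
      if (actual !== expected) {
        throw new Error(`Expected ${String(expected)}, got ${String(actual)}`);
      }
    },
  };
}

// The spec itself reads identically under jasmine (interfaces) and jest (services):
describe('formatAmount', () => {
  const formatAmount = (amount: number, currency: string) => `${amount} ${currency}`;
  it('joins amount and currency', () => {
    expect(formatAmount(10, 'EUR')).toBe('10 EUR');
  });
});
```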
See the Guide: Writing tests
See notable changes and the currently released version on the Releases page.
This project uses the CalVer-format: `YY.MM-MICRO`.
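As a sketch, a version-name in this format could be validated like below (illustrative only, not part of the repository; the optional `v`-prefix is the one used for git-tags):

```typescript
// Sketch: match the YY.MM-MICRO CalVer-format (e.g. "24.3-1"), optionally
// with the "v"-prefix used for git-tags (e.g. "v24.3-1").
const CALVER_PATTERN = /^v?\d{2}\.\d{1,2}-\d+$/;

export function isValidVersion(name: string): boolean {
  return CALVER_PATTERN.test(name);
}
```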
This is how we create and publish a new release of the 121-platform. (See the glossary for definitions of some terms.)

- Define what code gets released. (Is the current state of the `main`-branch what we want? Or a specific commit/point-in-the-past?)
- Check the changes since the last release, by replacing `vX.X-X` with the latest release in this URL: `https://github.com/global-121/121-platform/compare/vX.X-X...main`
  Check any changes to:
  - `services/.env.example`:
    If there are, then make any configuration changes to the staging-service (or Mock-Service) in the Azure Portal, relating to new/changed/removed `ENV`-variables, changed default values, etc.
  - `interfaces/Portal/.env.example`:
    If there are, then make any configuration changes to the "staging"-environment settings on GitHub.
  - `interfaces/Portalicious/.env.example` (Optional):
    If there are, then make any configuration changes to the "staging"-environment settings on GitHub.
- Define the `version`-name for the upcoming release.
- "Draft a release" on GitHub:
  - For "Choose a tag": Insert the `version` to create a new tag.
  - For "Target": Choose the commit which you would like to release (defined in the first step).
  - Set the title of the release to `<version>`.
  - Use the "Generate release notes" button and double-check the contents.
    This will be the basis of the "Inform stakeholders"-message to be posted on Teams.
- Publish the release on GitHub (as 'latest', not 'pre-release').
  This will trigger the deployment-workflow, which can be monitored under GitHub Action runs.
- Check the deployed release on the staging-environment (this can take some time...).
- Now, and throughout the release process, it is wise to monitor the combined CPU usage of our App-Services.
- If all looks fine, proceed with deploying the release to all other production-instances.
  - Make any configuration changes (`ENV`-variables, etc.) on each App-Service just before deployment.
  - Make any configuration changes for the Portal(s) in each GitHub-environment-settings.
- Use the "Deploy `<client name>` All" deployment-workflows on GitHub Actions to deploy the `version`-tag to each production-instance.
  ⚠️ Note: Start with deployment of the "Demo"-instance. This will also deploy the Mock-Service to its production-environment.
This follows a similar process to a regular release + deployment, with some small changes.

- Check out the `<version>`-tag which contains the code that you want to hotfix.
- Create a new local hotfix-branch using that tag as the `HEAD` (e.g. `hotfix/<vX.X-X>`, with an increased final `MICRO`-number) and make the changes.
- Push this branch to the upstream/origin repository on GitHub.
- Create a new release + tag (see above), selecting the `hotfix/v*`-branch as target, and publish it.
- Use the deployment-workflows on GitHub Actions to deploy the newly created tag (not the branch), for each required instance.
- After the hotfix has been released to production, follow standard procedures to merge the hotfix-branch into the `main`-branch.

Note: Do not rebase/update the `hotfix/v*`-branch onto the `main`-branch until AFTER you have successfully deployed the hotfix to production.
The hotfix-branch is created from a "dangling" commit; this confuses the GitHub UI when you look at a PR between the newly created `hotfix`-branch and the `main`-branch. Any conflict warnings shown on GitHub are not relevant for the hotfix-deployment; they only need to be addressed to merge the hotfix into the `main`-branch afterwards.
If you deploy the 121-platform to a server for the first time, it is recommended to set up a separate Postgres database server. The connection to this database can be made by editing the `POSTGRES_*`-variables in `services/.env`.
See: (via GitHub Action(s); i.e. `deploy_test_*.yml`)

- PRs to the `main`-branch are automatically deployed to an individual preview-environment.
- When merged, a separate deployment is done to the test-environment; for that interface only.

See: (via GitHub Action(s); i.e. `deploy_staging_*.yml`)

- Created/published releases are automatically deployed to the staging-environment.
- A manual deploy can be done using the GitHub UI, using "Run workflow"/`workflow_dispatch` and selecting the preferred release-version `tag` (or `branch` for testing on the staging-environment).
See: (via GitHub Action(s); i.e. `deploy_test_service.yml`, `deploy_test_mock-service.yml`)

- When merged, a separate deployment is done to the test-environment.
- Make sure to update any environment-configuration in the Azure-portal as soon as possible, preferably before the merge & deploy.
- Create the necessary Azure resources.
- Configure the service configurations based on `.env.example`.
- Create the necessary build/deploy-workflow files.
- Merge these new files into the `main`-branch.
- Build/Deploy the platform via the GitHub Action(s) by selecting the target release-version `tag`:
  - Decide on what `version` to deploy.
  - Prepare the environment accordingly (setting all service-configuration in the Azure Portal).
  - A manual deploy can be done using the GitHub UI, using "Run workflow"/`workflow_dispatch` and selecting the preferred release-version `tag` (or `branch` for testing on the staging-environment).
| Term | Definition (we use) |
|---|---|
| version | A name specified in the CalVer-format: `YY.MM-MICRO` |
| tag | A specific commit or point-in-time on the git-timeline; named after a version, i.e. `v22.1.0` |
| release | A fixed 'state of the code-base', published on GitHub |
| deployment | An action performed to get (released) code running on an environment |
| environment | A machine that can run code (with specified settings); i.e. a service, or your local machine |
Released under the Apache 2.0 License. See LICENSE.