- Removed support for Python 3.8 and 3.9, plus other requirement updates (PR #337, PR #335). The new minimum Python version is 3.11.
- Bug fixes for docs
- Adds experimental image support (PR #314)
- Clarifies installation instructions for Linux and Windows operating systems (PR #299)
- Pin Pydantic to less than v2.0 (PR #277)
- Code updates for PyTorch Lightning v2.0.0 (PR #266, PR #272)
- Switch to pyproject.toml-based build and other requirement updates (PR #254, PR #255, PR #260, PR #262, PR #268)
- Adds a depth estimation module for predicting the distance between animals and the camera (PR #247). This model comes from one of the winning solutions in the Deep Chimpact: Depth Estimation for Wildlife Conservation machine learning challenge hosted by DrivenData.
- Do not cache videos if the `VIDEO_CACHE_DIR` environment variable is an empty string or zero (PR #245)
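For example (an illustrative snippet; the variable name comes from the entry above), caching can be turned off for a shell session like this:

```shell
# Setting VIDEO_CACHE_DIR to an empty string (or "0") disables
# zamba's video caching, per PR #245.
export VIDEO_CACHE_DIR=""
```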
- Fixes Lightning deprecation of DDPPlugin (PR #244)
- Adds a page to the docs summarizing the performance of the African species classification model on a holdout set (PR #235)
- Turn off showing local variables in Typer's exception and error handling (PR #237)
- Fixes bug where the column order was incorrect for training models when the provided labels are a subset of the model's default labels (PR #236)
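The general technique behind that fix can be sketched with pandas (a hypothetical illustration, not zamba's actual implementation): reindex the user-provided label columns to the model's default label order, filling absent species with zeros.

```python
import pandas as pd

# Hypothetical example: the model's full default label set, in its
# expected column order.
default_labels = ["antelope", "blank", "elephant", "monkey"]

# User-provided labels covering only a subset of species, in a
# different column order.
user_labels = pd.DataFrame(
    {"elephant": [1, 0], "blank": [0, 1]},
    index=["vid1.mp4", "vid2.mp4"],
)

# Align to the model's column order, filling missing species with 0.
aligned = user_labels.reindex(columns=default_labels, fill_value=0)
print(list(aligned.columns))  # -> ['antelope', 'blank', 'elephant', 'monkey']
```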
- The default `time_distributed` model (African species classification) has been retrained on over 250,000 videos. This 16x increase in training data significantly improves accuracy. The new version replaces the previous one. (PR #226, PR #232)
- A new default model option is added: `blank_nonblank`. This model performs blank detection only. The binary model can be trained and finetuned in the same way as the species classification models, and was trained on both African and European data totaling over 263,000 training videos. (PR #228)
- Detects if a user is training a binary model and preprocesses the labels accordingly (PR #215)
- Adds a validator to ensure that a model's default labels are only used when the species in the provided labels file are a subset of those default labels (PR #229)
- Refactors the logic in `instantiate_model` for clarity (PR #229)
- Uses pqdm to check for missing files in parallel (PR #224)
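The parallel missing-files check can be sketched with the standard library (zamba uses pqdm; this stdlib version is an equivalent illustration, not the project's actual code):

```python
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path


def find_missing(paths, max_workers=8):
    """Return the subset of paths that do not exist on disk,
    checking them in parallel threads (I/O-bound work)."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        exists = list(pool.map(lambda p: Path(p).exists(), paths))
    return [p for p, ok in zip(paths, exists) if not ok]


missing = find_missing([".", "definitely/not/a/real/file.mp4"])
print(missing)  # -> ['definitely/not/a/real/file.mp4']
```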
- Sets `model_name` based on the provided checkpoint so that user-trained models use the appropriate video loader config (PR #221)
- Leaves `data_dir` as a relative path (PR #219)
- Ensures hparams yaml files get included in the source distribution (PR #210)
- Hold back setuptools so mkdocstrings works (PR #207)
- Factors out `get_cached_array_path` (PR #202)
- Retrains the time distributed species classification model using the updated MegadetectorLite frame selection (PR #199)
- Replaces the MegadetectorLite frame selection model with an improved model trained on significantly more data (PR #195)
- Pins `thop` to an earlier version (PR #191)
- Fixes caching so a previously downloaded checkpoint file actually gets used (PR #190, PR #194)
- Removes a lightning deprecation warning for DDP (PR #187)
- Ignores extra columns in the user-provided labels or filepaths csv (PR #186)
Releasing to pick up #179.
- PR #179 removes the DensePose extra from the default dev requirements and tests. Docs are updated to clarify how to install and run tests for DensePose.
Releasing to pick up #172.
- PR #172 fixes a bug where videos failed to load when using the YoloX model (which all of the built-in models use).
Releasing to pick up #167 and #169.
- PR #169 fixes an error in splitting data into train/test/val sets when there are only a few videos.
- PR #167 refactors yolox into an `object_detection` module
Other documentation fixes are also included.
The algorithms used by `zamba` v1 were based on the winning solution from the Pri-matrix Factorization machine learning competition, hosted by DrivenData. Data for the competition was provided by the Chimp&See project and manually labeled by volunteers. The competition had over 300 participants and over 450 submissions throughout the three-month challenge. The v1 algorithm was adapted from the winning competition submission, with some aspects changed during development to improve performance.
The core algorithm in `zamba` v1 was a two-level stacked ensemble: the first level consisted of five Keras deep learning models, and their individual predictions were combined in the second level to form the final prediction.
In v2, the stacked ensemble algorithm from v1 is replaced with three more powerful single-model options: `time_distributed`, `slowfast`, and `european`. The new models use state-of-the-art image and video classification architectures and outperform the much more computationally intensive stacked ensemble.
`zamba` v2 incorporates data from Western Europe (Germany). The new data is packaged in the pretrained `european` model, which can predict 11 common European species not present in `zamba` v1.
`zamba` v2 also incorporates new training data from 15 countries in central and west Africa, and adds 12 additional species to the pretrained African models.
Model training is made available in `zamba` v2, so users can finetune a pretrained model on their own data to improve performance for a specific ecology or set of sites. `zamba` v2 also allows users to retrain a model on completely new species labels.