
calamanCy: NLP pipelines for Tagalog


calamanCy is a Tagalog natural language processing framework built on spaCy. Its goal is to provide pipelines and datasets for downstream NLP tasks. This repository contains materials for using calamanCy, reproducing its results, and usage guides.

calamanCy takes inspiration from other language-specific spaCy Universe frameworks such as DaCy, huSpaCy, and graCy. The name is based on calamansi, a citrus fruit native to the Philippines and used in traditional Filipino cuisine.

📰 News

🔧 Installation

To get started with calamanCy, simply install it using pip by running the following line in your terminal:

pip install calamanCy

Development

If you are developing calamanCy, first clone the repository:

git clone [email protected]:ljvmiranda921/calamanCy.git

Then, create a virtual environment and install the dependencies:

python -m venv venv
venv/bin/pip install -e .  # requires pip>=23.0
venv/bin/pip install -e ".[dev]"  # quote the extras so zsh does not expand the brackets

# Activate the virtual environment
source venv/bin/activate

Alternatively, run make dev.

Running the tests

We use pytest as our test runner:

python -m pytest --pyargs calamancy

πŸ‘©β€πŸ’» Usage

To use calamanCy you first have to download either the medium, large, or transformer model. To see a list of all available models, run:

import calamancy
for model in calamancy.models():
    print(model)

# ...
# tl_calamancy_md-0.1.0
# tl_calamancy_lg-0.1.0
# tl_calamancy_trf-0.1.0

To download and load a model, run:

nlp = calamancy.load("tl_calamancy_md-0.1.0")
doc = nlp("Ako si Juan de la Cruz")

The nlp object is an instance of spaCy's Language class, and you can use it like any other spaCy pipeline. You can also access these models on Hugging Face 🤗.

📦 Models and Datasets

calamanCy provides Tagalog models and datasets that you can use in your spaCy pipelines. You can download them directly or use the calamancy Python library to access them. The training procedure for each pipeline can be found in the models/ directory. They are further subdivided into versions. Each folder is an instance of a spaCy project.

Here are the models for the latest release:

| Model | Pipelines | Description |
|-------|-----------|-------------|
| tl_calamancy_md (73.7 MB) | tok2vec, tagger, morphologizer, parser, ner | CPU-optimized Tagalog NLP model. Pretrained using the TLUnified dataset. Uses floret vectors (50k keys). |
| tl_calamancy_lg (431.9 MB) | tok2vec, tagger, morphologizer, parser, ner | CPU-optimized large Tagalog NLP model. Pretrained using the TLUnified dataset. Uses fastText vectors (714k keys). |
| tl_calamancy_trf (775.6 MB) | transformer, tagger, parser, ner | GPU-optimized transformer Tagalog NLP model. Uses roberta-tagalog-base as context vectors. |

📓 API

The calamanCy library contains utility functions that help you load its models and infer on your text. You can think of these functions as "syntactic sugar" to the spaCy API. We highly recommend checking out the spaCy Doc object, as it provides the most flexibility.

Loaders

The loader functions provide an easier interface for downloading calamanCy models. These models are hosted on Hugging Face, so you can try them out before downloading.

function get_latest_version

Return the latest version of a calamanCy model.

| Argument | Type | Description |
|----------|------|-------------|
| `model` | `str` | The string indicating the model. |
| **RETURNS** | `str` | The latest version of the model. |
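Model names follow a name-version pattern (e.g., tl_calamancy_md-0.1.0). As an illustration of how a latest-version lookup over such names can work, here is a hypothetical helper; it is not calamanCy's actual implementation:

```python
# Hypothetical sketch, NOT calamanCy's actual implementation: pick the
# newest "model-X.Y.Z" entry from a list of available model names.
def latest_version(model: str, available: list) -> str:
    versions = [
        name.rsplit("-", 1)[1]          # "tl_calamancy_md-0.1.0" -> "0.1.0"
        for name in available
        if name.rsplit("-", 1)[0] == model
    ]
    # Compare version parts numerically so that "0.10.0" > "0.9.0".
    return max(versions, key=lambda v: tuple(int(p) for p in v.split(".")))

print(latest_version(
    "tl_calamancy_md",
    ["tl_calamancy_md-0.1.0", "tl_calamancy_md-0.2.0", "tl_calamancy_lg-0.1.0"],
))  # 0.2.0
```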

function models

Get a list of valid calamanCy models.

| Argument | Type | Description |
|----------|------|-------------|
| **RETURNS** | `List[str]` | List of valid calamanCy models. |

function load

Load a calamanCy model as a spaCy language pipeline.

| Argument | Type | Description |
|----------|------|-------------|
| `model` | `str` | The model to download. See the available models at `calamancy.models()`. |
| `force` | `bool` | Force download the model. Defaults to `False`. |
| `**kwargs` | `dict` | Additional arguments to `spacy.load()`. |
| **RETURNS** | `Language` | A spaCy language pipeline. |
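To show how the **kwargs passthrough behaves, here is a stubbed sketch; fake_spacy_load is a stand-in for spacy.load, and this is not calamanCy's actual code. Extra keyword arguments flow straight through to the underlying loader:

```python
# Stubbed illustration of the **kwargs passthrough pattern; this is NOT
# calamanCy's actual code. fake_spacy_load stands in for spacy.load.
def fake_spacy_load(name, **kwargs):
    return name, kwargs

def load(model, force=False, **kwargs):
    # A real loader would first download/cache the model (re-downloading
    # when force=True), then hand the extra kwargs to spacy.load.
    return fake_spacy_load(model, **kwargs)

nlp = load("tl_calamancy_md-0.1.0", exclude=["parser"])
print(nlp)  # ('tl_calamancy_md-0.1.0', {'exclude': ['parser']})
```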

Inference

Below are lightweight utility classes for users who are not familiar with spaCy's primitives. They are only useful for inference and not for training. If you wish to train on top of these calamanCy models (e.g., text categorization, task-specific NER, etc.), we advise you to follow the standard spaCy training workflow.

General usage: first, you need to instantiate a class with the name of a model. Then, you can use the __call__ method to perform the prediction. The output is of the type Iterable[Tuple[str, Any]] where the first part of the tuple is the token and the second part is its label.
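That contract can be sketched with a stub: StubRecognizer below is hypothetical (not part of calamanCy), but any callable that yields (token, label) tuples fits the same pattern.

```python
from typing import Iterable, Tuple

# StubRecognizer is a hypothetical stand-in for a calamanCy inference
# class: its __call__ yields (token, label) tuples, matching the
# Iterable[Tuple[str, Any]] contract described above.
class StubRecognizer:
    def __call__(self, text: str) -> Iterable[Tuple[str, str]]:
        for token in text.split():
            yield token, "O"  # a real model predicts genuine labels

ner = StubRecognizer()
print(list(ner("Ako si Juan")))  # [('Ako', 'O'), ('si', 'O'), ('Juan', 'O')]
```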

method EntityRecognizer.__call__

Perform named entity recognition (NER). By default, it uses v0.1.0 of TLUnified-NER with the following entity labels: PER (Person), ORG (Organization), LOC (Location).

| Argument | Type | Description |
|----------|------|-------------|
| `text` | `str` | The text to get the entities from. |
| **YIELDS** | `Iterable[Tuple[str, str]]` | The token and its entity in IOB format. |
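Because the labels come back in IOB format, downstream code typically merges B-/I- runs into entity spans. A small illustrative helper (not part of calamanCy) that groups (token, label) pairs into entities:

```python
# Illustrative helper, not part of calamanCy: merge (token, IOB-label)
# pairs, as yielded by EntityRecognizer.__call__, into entity spans.
def iob_to_spans(pairs):
    spans, current = [], None
    for token, label in pairs:
        if label.startswith("B-"):
            if current:
                spans.append(current)
            current = (label[2:], [token])      # start a new entity
        elif label.startswith("I-") and current and current[0] == label[2:]:
            current[1].append(token)            # continue the entity
        else:
            # "O" (or a stray I- tag, which this sketch simply drops)
            if current:
                spans.append(current)
                current = None
    if current:
        spans.append(current)
    return [(etype, " ".join(tokens)) for etype, tokens in spans]

print(iob_to_spans([
    ("Ako", "O"), ("si", "O"),
    ("Juan", "B-PER"), ("de", "I-PER"), ("la", "I-PER"), ("Cruz", "I-PER"),
]))  # [('PER', 'Juan de la Cruz')]
```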

method Tagger.__call__

Perform parts-of-speech tagging. It uses the annotations from the TRG and Ugnayan treebanks with the following tags: ADJ, ADP, ADV, AUX, DET, INTJ, NOUN, PART, PRON, PROPN, PUNCT, SCONJ, VERB.

| Argument | Type | Description |
|----------|------|-------------|
| `text` | `str` | The text to get the POS tags from. |
| **YIELDS** | `Iterable[Tuple[str, Tuple[str, str]]]` | The token and its coarse- and fine-grained POS tag. |

method Parser.__call__

Perform syntactic dependency parsing. It uses the annotations from the TRG and Ugnayan treebanks.

| Argument | Type | Description |
|----------|------|-------------|
| `text` | `str` | The text to get the dependency relations from. |
| **YIELDS** | `Iterable[Tuple[str, str]]` | The token and its dependency relation. |

πŸ“ Reporting Issues

If you have questions regarding the usage of calamanCy, bug reports, or just want to give us feedback after giving it a spin, please use the Issue tracker. Thank you!

📜 Citation

If you are citing the open-source software, please use:

@inproceedings{miranda-2023-calamancy,
    title = "calaman{C}y: A {T}agalog Natural Language Processing Toolkit",
    author = "Miranda, Lester James",
    editor = "Tan, Liling  and
      Milajevs, Dmitrijs  and
      Chauhan, Geeticka  and
      Gwinnup, Jeremy  and
      Rippeth, Elijah",
    booktitle = "Proceedings of the 3rd Workshop for Natural Language Processing Open Source Software (NLP-OSS 2023)",
    month = dec,
    year = "2023",
    address = "Singapore, Singapore",
    publisher = "Empirical Methods in Natural Language Processing",
    url = "https://aclanthology.org/2023.nlposs-1.1",
    pages = "1--7",
    abstract = "We introduce calamanCy, an open-source toolkit for constructing natural language processing (NLP) pipelines for Tagalog. It is built on top of spaCy, enabling easy experimentation and integration with other frameworks. calamanCy addresses the development gap by providing a consistent API for building NLP applications and offering general-purpose multitask models with out-of-the-box support for dependency parsing, parts-of-speech (POS) tagging, and named entity recognition (NER). calamanCy aims to accelerate the progress of Tagalog NLP by consolidating disjointed resources in a unified framework.The calamanCy toolkit is available on GitHub: https://github.com/ljvmiranda921/calamanCy.",
}

If you are citing the NER dataset, please use:

@inproceedings{miranda-2023-developing,
    title = "Developing a Named Entity Recognition Dataset for {T}agalog",
    author = "Miranda, Lester James",
    editor = "Wijaya, Derry  and
      Aji, Alham Fikri  and
      Vania, Clara  and
      Winata, Genta Indra  and
      Purwarianti, Ayu",
    booktitle = "Proceedings of the First Workshop in South East Asian Language Processing",
    month = nov,
    year = "2023",
    address = "Nusa Dua, Bali, Indonesia",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.sealp-1.2",
    doi = "10.18653/v1/2023.sealp-1.2",
    pages = "13--20",
}
