tsl (Torch Spatiotemporal) is a library built to accelerate research on neural spatiotemporal data processing methods, with a focus on Graph Neural Networks.
Built upon popular libraries such as PyTorch, PyG (PyTorch Geometric), and PyTorch Lightning, tsl provides a unified and user-friendly framework for efficient neural spatiotemporal data processing that spans the whole pipeline, from data preprocessing to model prototyping.
- **Create Custom Models and Datasets.** Easily build your own custom models and datasets for spatiotemporal data analysis. Whether you're working with sensor networks, environmental data, or any other spatiotemporal domain, tsl's high-level APIs empower you to develop tailored solutions.
- **Access a Wealth of Existing Datasets and Models.** Leverage a vast collection of datasets and models from the spatiotemporal data processing literature. Explore and benchmark against state-of-the-art baselines, and test your brand-new model on widely used public datasets.
- **Handle Irregularities and Missing Data.** Seamlessly manage irregularities in your spatiotemporal data streams, including missing data and variations in network structures, ensuring the robustness and reliability of your data processing pipelines.
- **Streamlined Preprocessing.** Automate the preprocessing phase with tsl's methods for scaling, resampling, and clustering time series. Spend less time on data preparation and more on extracting meaningful patterns and insights.
- **Efficient Data Structures.** Utilize tsl's straightforward data structures, seamlessly integrated with PyTorch and PyG, to accelerate your workflows (see the sketch after this list). Benefit from the flexibility and compatibility of these widely adopted libraries.
- **Scalability with PyTorch Lightning.** Scale your computations effortlessly, from a single CPU to clusters of GPUs, thanks to tsl's integration with PyTorch Lightning. Accelerate training and inference across various hardware configurations.
- **Modular Neural Layers.** Build powerful and modular neural spatiotemporal models using tsl's collection of specialized layers. Create architectures with ease, leveraging the flexibility and extensibility of the library.
- **Reproducible Experiments.** Ensure experiment reproducibility using the Hydra framework, a standard in the field. Validate and compare results confidently, promoting rigorous research in spatiotemporal data mining.
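As a taste of how these pieces fit together, here is a minimal sketch of the data pipeline, loosely following the gentle-introduction notebook. The class and method names (`MetrLA`, `SpatioTemporalDataset`, `get_connectivity`) come from tsl's documented API, but the exact keyword arguments shown here are assumptions that may differ across releases, so check the documentation for your version.

```python
# Minimal data-pipeline sketch. Names follow tsl's documented API, but the
# exact keyword arguments may differ across versions -- treat them as
# assumptions and check the docs.
from tsl.datasets import MetrLA
from tsl.data import SpatioTemporalDataset

# Load a benchmark traffic dataset (downloaded on first use).
dataset = MetrLA(root='./data')

# Build a sparse graph from sensor distances and wrap the time series into
# sliding windows of 12 steps, forecasting the following 12 steps.
connectivity = dataset.get_connectivity(threshold=0.1, layout='edge_index')
torch_dataset = SpatioTemporalDataset(target=dataset.dataframe(),
                                      connectivity=connectivity,
                                      mask=dataset.mask,
                                      window=12,
                                      horizon=12)

# Each item is a PyG-compatible batch carrying inputs, targets, the mask,
# and the graph connectivity.
print(torch_dataset[0])
```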
Before you start using tsl, please review the documentation to get an understanding of the library and its capabilities.
You can also explore the examples provided in the `examples` directory to see how to train deep learning models on spatiotemporal data.
Before installing tsl, make sure you have installed PyTorch (>=1.9.0) and PyG (>=2.0.3) in your virtual environment (see the PyG installation guidelines). tsl is available for Python>=3.8. We recommend installing from GitHub to stay up to date with the latest version:
```bash
pip install git+https://github.com/TorchSpatiotemporal/tsl.git
```
Alternatively, you can install the library from PyPI:
```bash
pip install torch-spatiotemporal
```
To avoid dependency issues, we recommend using Anaconda and the provided environment configuration:
```bash
conda env create -f conda_env.yml
```
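Whichever route you choose, a quick sanity check (assuming the package exposes a `__version__` attribute, as recent releases do) is:

```python
# The import should succeed and report the installed version
# (assumes tsl exposes `__version__`).
import tsl
print(tsl.__version__)
```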
The best way to start using tsl is by following the tutorial notebook in `examples/notebooks/a_gentle_introduction_to_tsl.ipynb`.
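For a quick feel of the full workflow before opening the notebook, the hedged sketch below goes from raw data to a trained forecasting baseline. It assumes the `Predictor` engine, the `RNNModel` baseline, and the masked metrics from tsl's documented API; module paths and argument names have changed between releases, so treat them as assumptions rather than a definitive recipe.

```python
# End-to-end quickstart sketch. Module paths and signatures follow tsl's
# documentation but may vary across versions -- treat them as assumptions.
import torch
import pytorch_lightning as pl

from tsl.data import SpatioTemporalDataset
from tsl.data.datamodule import SpatioTemporalDataModule, TemporalSplitter
from tsl.data.preprocessing import StandardScaler
from tsl.datasets import MetrLA
from tsl.engines import Predictor
from tsl.metrics.torch import MaskedMAE
from tsl.nn.models import RNNModel

# Data: windowed dataset with graph connectivity (as in the sketch above).
dataset = MetrLA(root='./data')
torch_dataset = SpatioTemporalDataset(
    target=dataset.dataframe(),
    connectivity=dataset.get_connectivity(threshold=0.1, layout='edge_index'),
    mask=dataset.mask,
    window=12, horizon=12)

# Standardize the target and split the sequence in time.
dm = SpatioTemporalDataModule(
    dataset=torch_dataset,
    scalers={'target': StandardScaler(axis=(0, 1))},
    splitter=TemporalSplitter(val_len=0.1, test_len=0.2),
    batch_size=64)

# Wrap a simple recurrent baseline in the Predictor engine, which handles
# optimization, masking, and metric logging through PyTorch Lightning.
predictor = Predictor(
    model_class=RNNModel,
    model_kwargs={'input_size': torch_dataset.n_channels,
                  'hidden_size': 32,
                  'output_size': torch_dataset.n_channels,
                  'horizon': torch_dataset.horizon},
    optim_class=torch.optim.Adam,
    optim_kwargs={'lr': 1e-3},
    loss_fn=MaskedMAE(),
    metrics={'mae': MaskedMAE()})

trainer = pl.Trainer(max_epochs=10, accelerator='auto')
trainer.fit(predictor, datamodule=dm)
```

Swapping the recurrent baseline for one of the graph-based architectures in `tsl.nn.models` should only require changing the `model_class` and `model_kwargs` passed to the engine.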
Visit the documentation to learn more about the library, including detailed API references, examples, and tutorials.
The documentation is hosted on Read the Docs. For local access, you can build it from the `docs` directory.
Contributions are welcome! For major changes or new features, please open an issue first to discuss your ideas. See the Contributing guidelines for more details on how to get involved. Help us build a better tsl!
Thanks to all contributors! 🧡
If you use Torch Spatiotemporal for your research, please consider citing the library:
```bibtex
@software{Cini_Torch_Spatiotemporal_2022,
  author = {Cini, Andrea and Marisca, Ivan},
  license = {MIT},
  month = {3},
  title = {{Torch Spatiotemporal}},
  url = {https://github.com/TorchSpatiotemporal/tsl},
  year = {2022}
}
```
By Andrea Cini and Ivan Marisca.
This project is licensed under the terms of the MIT license. See the LICENSE file for details.