Releases: Nixtla/neuralforecast
v1.7.6
New Features
- [FEAT]: Support providing DataLoader arguments to optimize GPU usage @jasminerienecker (#1186)
- [FEAT]: Set activation function in GRN of TFT @marcopeix (#1175)
- [FEAT]: Conformal Predictions in NeuralForecast @JQGoh (#1171) (usage sketch below)
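
For the conformal-prediction item above, here is a minimal sketch of the split-conformal idea behind the feature. This is illustrative plain NumPy, not NeuralForecast's API, and it uses a simplified empirical quantile rather than the exact finite-sample correction.

```python
# Illustrative split-conformal sketch (plain NumPy, not NeuralForecast's API):
# widen a point forecast by an empirical quantile of absolute calibration errors.
import numpy as np

def conformal_interval(cal_y, cal_yhat, yhat, level=90):
    """Symmetric split-conformal interval around point forecast(s) `yhat`."""
    scores = np.abs(np.asarray(cal_y) - np.asarray(cal_yhat))  # nonconformity scores
    q = np.quantile(scores, level / 100)                       # calibration quantile (simplified)
    return yhat - q, yhat + q

lo, hi = conformal_interval(cal_y=[10.0, 12.0, 9.0, 11.0],
                            cal_yhat=[11.0, 11.5, 10.0, 12.0],
                            yhat=10.5, level=90)
```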
Bug Fixes
- [FIX]: Ability to load models saved using versions before 1.7 @tylernisonoff (#1207)
- [FIX]: Conformal prediction issues @elephaint (#1179)
- [FIX]: Feature importance when using only hist_exog in TFT fails @elephaint (#1174)
- [FIX]: Remove unused output layer in NBEATSx @elephaint (#1168)
- [FIX]: Fix Tweedie loss @elephaint (#1164)
- [FIX]: MLPMultivariate incorrect static_exog parsing @elephaint (#1170)
- [FIX]: Deprecate activation functions for GRU @marcopeix (#1198)
Documentation
- [DOC]: Tutorial on cross-validation @marcopeix (#1176)
- [DOC]: Build docs on release only @elephaint (#1183)
v1.7.5
New Features
- [FEAT]: Move RevIN class to common module @JQGoh (#1083)
- [FEAT]: Add RMoK @marcopeix (#1148)
- [FEAT]: TimeLLM is faster and supports more LLMs @ive2go (#1139)
- [FEAT]: TFT-Interpretability @amirouyanis (#1104)
- [FEAT]: Add support for the local file dataloader with the Auto models @jasminerienecker (#1095)
Bug Fixes
- [FIX] CV Refit works with non-standard column names @elephaint (#1149)
- [FIX]: replace self.pred_len with self.h @carusyte (#1129)
- [FIX]: only define static encoder when applicable in TFT @jmoralez (#1114)
- [FIX]: remove cast to float in scalers @jmoralez (#1115)
- [FIX]: timemixer shapes mismatch and doc update @carusyte @marcopeix (#1138)
Dependencies
- Bump pypa/gh-action-pypi-publish from 1.10.0 to 1.10.1 in the ci-dependencies group @dependabot (#1146)
- Bump the ci-dependencies group with 2 updates @dependabot (#1135)
v1.7.4
New Features
- [FEAT] - Add KAN @marcopeix (#999)
- [FEAT] - Add TimeMixer @marcopeix (#1071)
- [FEAT] - Add support for datasets that can't fit in memory @jasminerienecker (#1049)
Bug Fixes
- [FIX] ignore pytorch lightning's PossibleUserWarning @fabianbergermann (#1081)
- [FIX] bug in the NBEATSx exogenous basis stack @jasminerienecker (#1072)
- [FIX] Fix nbdev_version in test & environment @elephaint (#1089)
Documentation
- [DOCS] add tutorial for large dataset DataLoader @jasminerienecker (#1074)
- [DOCS] Restructure documentation @elephaint (#1063)
- [DOCS] Fix examples @elephaint (#1092)
- [DOCS] Fix tables @elephaint (#1090)
- [DOCS] Fix docs layout issues @elephaint (#1085)
- [DOCS] Fix issues @elephaint (#1082)
Dependencies
- Bump actions/setup-python from 5.1.0 to 5.1.1 in the ci-dependencies group @dependabot (#1067)
- use commit hash in actions and add dependabot updates @jmoralez (#1066)
v1.7.3
New Features
- [FEAT] ISQF @elephaint (#1019)
- [FEAT] - Add SOFTS model @marcopeix (#1024)
- [FEAT] Add option to support user defined learning rate scheduler for NeuralForecast Models @JQGoh (#998) (sketch after this list)
- [FEAT] Implicit Quantile Networks @elephaint (#1007)
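
A hedged sketch for the learning-rate-scheduler item above. The argument names `lr_scheduler` and `lr_scheduler_kwargs` are assumptions here, not confirmed API; the scheduler class itself is standard PyTorch.

```python
# Hedged sketch: pass a user-defined PyTorch LR scheduler to a model.
# `lr_scheduler` / `lr_scheduler_kwargs` argument names are assumptions.
import torch
from neuralforecast import NeuralForecast
from neuralforecast.models import NHITS

model = NHITS(
    h=12,
    input_size=24,
    max_steps=200,
    lr_scheduler=torch.optim.lr_scheduler.StepLR,          # scheduler class (assumed argument name)
    lr_scheduler_kwargs={"step_size": 100, "gamma": 0.5},  # forwarded to the scheduler (assumed)
)
nf = NeuralForecast(models=[model], freq="D")
```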
Bug Fixes
- use assign argument if available in nn.Module.load_state_dict @jmoralez (#1032)
- update min_size in TimeSeriesDataset.append @jmoralez (#1033)
- fix num_tasks in spark integration @jmoralez (#1028)
Documentation
- fix: add tsmixer tutorial to sidebar @AzulGarza (#978)
- Update models in the README @candalfigomoro (#946)
v1.7.2
New Features
- [FEAT] DeepNPTS model @elephaint (#990)
- [FEAT] TiDE model @elephaint (#971)
Bug Fixes
- [FIX] Refit after validation boolean @elephaint (#991)
- fix cross_validation results with uneven windows @jmoralez (#989)
- [FIX] fix wrong import doc PatchTST @elephaint (#967)
- [FIX] raise exception nbeats h=1 with stacks @elephaint (#966)
Enhancement
- reduce default warnings @jmoralez (#974)
- Create CODE_OF_CONDUCT.md @tracykteal (#972)
v1.7.1
New Features
- multi-node distributed training with spark @jmoralez (#935)
- [FEAT] Add BiTCN model @elephaint (#958)
- [FEAT] - Add iTransformer to neuralforecast @marcopeix (#944)
- [FEAT] Add MLPMultivariate model @elephaint (#938)
Bug Fixes
- [FIX] Fixes default settings of BiTCN @elephaint (#961)
- [FIX] HINT not producing coherent forecasts @elephaint (#964)
- [FIX] Fixes 948 multivariate predict/val issues when n_series > 1024 @elephaint (#962)
- handle exogenous variables of TFT in parent class @jmoralez (#959)
- fix early stopping in ray auto models @jmoralez (#953)
- fix cross_validation when the id is the index @jmoralez (#951)
Documentation
- add MLflow logging example @cargecla1 (#892)
v1.7.0
New Features
- [FEAT] Added TSMixerx model @elephaint (#921)
- Add Time-LLM @marcopeix (#908)
- [FEAT] Added TSMixer model @elephaint (#914)
- Add option to support user defined optimizer for NeuralForecast Models @JQGoh (#901)
- [FEAT] Added NLinear model @ggattoni (#900)
- [FEAT] Added DLinear model @cchallu (#875)
- support refit in cross_validation @jmoralez (#842) (sketch after this list)
- use environment variable to get id as column in outputs @jmoralez (#841)
- support different column names for ids, times and targets @jmoralez (#838)
- polars support @jmoralez (#829)
- add callbacks to auto models @jmoralez (#795)
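
A hedged sketch combining the `refit` and custom-column-name items above. The `refit`, `id_col`, `time_col`, and `target_col` arguments follow the Nixtla-style API; treat the exact names and defaults as assumptions.

```python
# Hedged sketch: cross-validation with periodic refitting on a frame that
# does not use the default unique_id/ds/y column names.
import pandas as pd
from neuralforecast import NeuralForecast
from neuralforecast.models import NLinear

df = pd.DataFrame({
    "series": ["a"] * 60,
    "timestamp": pd.date_range("2020-01-01", periods=60, freq="D"),
    "sales": list(range(60)),
})
nf = NeuralForecast(models=[NLinear(h=7, input_size=14, max_steps=50)], freq="D")
cv_df = nf.cross_validation(
    df,
    n_windows=2,
    step_size=7,
    refit=True,            # retrain the model before each validation window (assumed arg)
    id_col="series",       # non-standard column names (assumed arg names)
    time_col="timestamp",
    target_col="sales",
)
```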
Bug Fixes
- [FIX] Avoid raised error for varied step_size parameter during predict_insample() @JQGoh (#933)
- [FIX] Ensure all Auto models support alias (#926) and allow configuring the hyperparameter space for Auto* models (#924) @elephaint (#927)
- fix base_multivariate window generation @jmoralez (#907)
- Fix optuna multigpu @jmoralez (#889)
- support saving and loading models with alias @jmoralez (#867)
- [FIX] Polars `.columns` produces list rather than Pandas Index @akmalsoliev (#862)
- add missing models to filename dict @jmoralez (#856)
- ensure exogenous features are lists @jmoralez (#851)
- fix save with save_dataset=False @jmoralez (#850)
- copy config in optuna @jmoralez (#844)
- Fixed: Exception: max_epochs is deprecated, use max_steps instead. @twobitunicorn (#835)
- fix single column 2d array polars df @jmoralez (#830)
- move scalers to core @jmoralez (#813)
- [FIX] Default AutoPatchTST config @cchallu (#811)
- [FIX] RevIN Numerical Stability @dluuo (#781)
- On Windows, prevent long trial directory names @tg2k (#735)
Documentation
- removed documentation for missing argument @yarnabrina (#913)
- feat: Added cross-validation tutorial @MMenchero (#897)
- chore: update license to apache-2 @AzulGarza (#882)
- [FEAT] Model table in README @cchallu (#880)
- redirect to mintlify docs @jmoralez (#816)
- add missing models to documentation @jmoralez (#775)
Dependencies
- add windows to CI @jmoralez (#814)
- address future warnings @jmoralez (#898)
- use scalers from coreforecast @jmoralez (#873)
- add python 3.11 to CI @jmoralez (#839)
Enhancement
- Reduce device transfers @elephaint (#923)
- extract common methods to BaseModel @jmoralez (#915)
- remove TQDMProgressBar callback @jmoralez (#899)
- use fsspec in save and load methods @jmoralez (#895)
- Feature/Check input for NaNs when available_mask = 1 @JQGoh (#894)
- switch `flake8` to `ruff` @Borda (#871)
- use future instead of deprecation warnings @jmoralez (#849)
- add frequency validation and futr_df debugging methods @jmoralez (#833)
v1.6.4
New Features
- TemporalNorm with ReVIN learnable parameters @kdgutier (#768)
- support optuna in auto models @jmoralez (#763)
- [FEAT] TimesNet model @cchallu (#757)
- add local_scaler_type @jmoralez (#754) (sketch after this list)
- [FEAT] Implementation of Exogenous - NBEATSx @akmalsoliev (#738)
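
A hedged sketch for the `local_scaler_type` item above: per-series scaling applied by the core class before training and inverted at prediction time. The value `"standard"` is an assumption; check the docs for the supported scaler names.

```python
# Hedged sketch: scale each series locally via the core class.
from neuralforecast import NeuralForecast
from neuralforecast.models import NHITS

nf = NeuralForecast(
    models=[NHITS(h=12, input_size=24, max_steps=100)],
    freq="D",
    local_scaler_type="standard",  # per-series scaling (value name assumed)
)
```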
Bug Fixes
- [FIX] futr_exog_list in Auto and HINT classes @cchallu (#773)
- fix off by one error in BaseRecurrent available_ts @KeAWang (#759)
Documentation
- [DOCS] Scaling tutorial @cchallu (#770)
- [DOCS] Auto hyperparameter selection with optuna @cchallu (#767)
- [DOCS] Update tutorials to v1.6.3 @cchallu (#741)
v1.6.2
What's Changed
- [FEAT] Add `horizon_weight` parameter to losses and `BasePointLoss` in #704 (sketch after this list)
- [FIX] Fix device error in `horizon_weight` in #706
- [FIX] Base Windows padding in #715
- [FIX] Fixed bug in validation loss scale in #720
- [FIX] Base recurrent valid loss on original scale in #721
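
A hedged sketch for the `horizon_weight` item above: weight the training loss by forecast step, for example to emphasize the first half of the horizon. Passing a tensor of length `h` is an assumption about the expected shape.

```python
# Hedged sketch: per-step weighting of a point loss during training.
import torch
from neuralforecast.losses.pytorch import MAE
from neuralforecast.models import NHITS

h = 12
weights = torch.cat([torch.full((6,), 2.0), torch.ones(6)])  # heavier weight on early steps
model = NHITS(h=h, input_size=2 * h, max_steps=100, loss=MAE(horizon_weight=weights))
```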
Full Changelog: v1.6.1...v1.6.2
v1.6.1
New Models
- DeepAR
- FEDformer
New features
- Available Mask to specify missing data in input data frame.
- Improve `fit` and `cross_validation` methods with `use_init_models` parameter to restore models to initial parameters (sketch after this list).
- Added robust losses: `HuberLoss`, `TukeyLoss`, `HuberQLoss`, and `HuberMQLoss`.
- Added Bernoulli `DistributionLoss` to build temporal classifiers.
- New `exclude_insample_y` parameter to all models to build models based only on exogenous regressors.
- Added dropout to `NBEATSx` and `NHITS` models.
- Improved `predict` method of windows-based models to create batches to control memory usage. Can be controlled with the new `inference_windows_batch_size` parameter (sketch after this list).
- Improvements to the `HINT` family of hierarchical models: identity reconciliation, `AutoHINT`, and reconciliation methods in hyperparameter selection.
- Added `inference_input_size` hyperparameter to recurrent-based methods to control historic length during inference to better control memory usage and inference times.
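
A hedged sketch for the `use_init_models` and `inference_windows_batch_size` items above; the exact argument placement (constructor vs. `fit`) is an assumption based on these notes.

```python
# Hedged sketch: reset to freshly initialized weights on fit, and cap how many
# windows are predicted per batch to bound inference memory.
import pandas as pd
from neuralforecast import NeuralForecast
from neuralforecast.models import NHITS

train_df = pd.DataFrame({
    "unique_id": "a",
    "ds": pd.date_range("2020-01-01", periods=60, freq="D"),
    "y": list(range(60)),
})
model = NHITS(
    h=12,
    input_size=24,
    max_steps=100,
    inference_windows_batch_size=256,  # max windows per inference batch (assumed constructor arg)
)
nf = NeuralForecast(models=[model], freq="D")
nf.fit(df=train_df, use_init_models=True)  # start from initial parameters (assumed fit arg)
forecasts = nf.predict()
```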
New tutorials and documentation
- Neuralforecast map and How-to add new models
- Transformers for time-series
- Predict insample tutorial
- Interpretable Decomposition
- Outlier Robust Forecasting
- Temporal Classification
- Predictive Maintenance
- Statistical, Machine Learning, and Neural Forecasting methods
Fixed bugs and new protections
- Fixed bug on `MinMax` scalers that returned NaN values when the mask had 0 values.
- Fixed bug on `y_loc` and `y_scale` being on different devices.
- Added `early_stopping_steps` to the `HINT` method.
- Added protection in the `fit` method of all models to stop training when training or validation loss becomes NaN. Print input and output tensors for debugging.
- Added protection to prevent the case `val_check_step` > `max_steps` from causing an error when early stopping is enabled.
- Added PatchTST to save and load methods dictionaries.
- Added `AutoNBEATSx` to core's `MODEL_DICT`.
- Added protection to the `NBEATSx-i` model where `horizon=1` causes an error due to collapsing trend and seasonality basis.