# Anomaly Detection

Implementation of various generative neural network models for anomaly detection in Julia, using the Flux framework. It serves as the codebase for the comparative study presented in the paper:

Škvára, Vít, Tomáš Pevný, and Václav Šmídl. "Are generative deep models for novelty detection truly better?" arXiv preprint arXiv:1807.05027 (2018).

## Models implemented

| Acronym | Name | Paper |
| --- | --- | --- |
| AE | Autoencoder | Vincent, Pascal, et al. "Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion." Journal of Machine Learning Research 11.Dec (2010): 3371-3408. link |
| VAE | Variational Autoencoder | Kingma, Diederik P., and Max Welling. "Auto-encoding variational bayes." arXiv preprint arXiv:1312.6114 (2013). link |
| sVAE | Symmetric Variational Autoencoder | Pu, Yunchen, et al. "Symmetric variational autoencoder and connections to adversarial learning." arXiv preprint arXiv:1709.01846 (2017). link |
| GAN | Generative Adversarial Network | Goodfellow, Ian, et al. "Generative adversarial nets." Advances in Neural Information Processing Systems. 2014. link |
| fmGAN | GAN with feature-matching loss | Salimans, Tim, et al. "Improved techniques for training GANs." Advances in Neural Information Processing Systems. 2016. link |
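To illustrate the common principle behind these models, the sketch below trains a plain autoencoder and uses its reconstruction error as an anomaly score. It is a minimal, self-contained example written against a recent Flux API; the layer sizes, data, and helper names (`loss`, `score`) are illustrative and do not correspond to the repository's own code.

```julia
using Flux, Statistics

# Illustrative dimensions and data; the real experiments use the Loda datasets.
input_dim, latent_dim = 20, 5
X = randn(Float32, input_dim, 100)          # placeholder "normal" training data

# A small autoencoder: the encoder compresses to a latent code, the decoder reconstructs.
model = Chain(
    Dense(input_dim => 10, relu), Dense(10 => latent_dim),   # encoder
    Dense(latent_dim => 10, relu), Dense(10 => input_dim),   # decoder
)

loss(m, x) = mean(abs2, m(x) .- x)          # reconstruction (MSE) loss used for training

# Anomaly score of each column (sample): its squared reconstruction error.
score(m, x) = vec(sum(abs2, m(x) .- x; dims=1))

opt_state = Flux.setup(Adam(1e-3), model)
for epoch in 1:200
    grads = Flux.gradient(m -> loss(m, X), model)
    Flux.update!(opt_state, model, grads[1])
end

scores = score(model, X)                    # higher score = more anomalous
```

The VAE, sVAE, GAN and fmGAN models replace this plain reconstruction objective with their respective variational or adversarial losses, but an anomaly score is typically derived in a conceptually similar way, e.g. from reconstruction error, estimated likelihood, or discriminator output.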

## Experiments

Experiments are executed on the Loda (Lightweight on-line detector of anomalies) datasets, which are located in the experiments directory. The sampling method is based on this paper. After downloading the datasets, you can create your own experimental datasets using the `experiments/prepare_data.jl` script. For experimental evaluation, you also need the EvalCurves package:

```julia
julia> Pkg.clone("https://github.com/vitskvara/EvalCurves.jl.git")
```
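As a rough illustration of the kind of evaluation the EvalCurves package is used for, the snippet below computes an ROC curve and its area under the curve (AUC) from anomaly scores and ground-truth labels. This is a self-contained sketch with a hypothetical `roc_auc` helper and made-up data, not the EvalCurves API; it also does not treat tied scores specially.

```julia
# Compute ROC points and AUC (trapezoidal rule) from anomaly scores and labels,
# where label 1 marks an anomaly and 0 marks a normal sample.
function roc_auc(scores::AbstractVector, labels::AbstractVector)
    order = sortperm(scores; rev=true)      # descending score = most anomalous first
    y = labels[order]
    P = count(==(1), y)                     # number of anomalies
    N = length(y) - P                       # number of normal samples
    tpr = cumsum(y .== 1) ./ P              # true positive rate at each threshold
    fpr = cumsum(y .== 0) ./ N              # false positive rate at each threshold
    fpr = vcat(0.0, fpr); tpr = vcat(0.0, tpr)
    return sum((fpr[2:end] .- fpr[1:end-1]) .* (tpr[2:end] .+ tpr[1:end-1]) ./ 2)
end

scores = [0.9, 0.1, 0.8, 0.3]               # illustrative anomaly scores
labels = [1, 0, 1, 0]                       # illustrative ground truth
println(roc_auc(scores, labels))            # 1.0 for this perfectly separated example
```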