Reproducing the paper "PADAM: Closing The Generalization Gap of Adaptive Gradient Methods In Training Deep Neural Networks" for the ICLR 2019 Reproducibility Challenge
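For context, the core of Padam ("partially adaptive" Adam) fits in a few lines. Below is a minimal NumPy sketch of the update rule described in the paper, assuming AMSGrad-style second-moment tracking and a partial exponent p in [0, 1/2]; the function name, state layout, and hyperparameter defaults are illustrative, not the repository's actual API:

```python
import numpy as np

def padam_step(theta, grad, state, lr=0.1, beta1=0.9, beta2=0.999,
               p=0.125, eps=1e-8):
    """One Padam update (illustrative): Adam-style moments, AMSGrad max,
    and a partial exponent p on the second moment.

    With p = 1/2 this recovers AMSGrad; as p -> 0 it approaches SGD with
    momentum. The paper tunes p in (0, 1/2] to close the generalization gap.
    """
    m, v, v_hat = state
    m = beta1 * m + (1 - beta1) * grad           # first moment
    v = beta2 * v + (1 - beta2) * grad ** 2      # second moment
    v_hat = np.maximum(v_hat, v)                 # AMSGrad-style running max
    theta = theta - lr * m / (v_hat ** p + eps)  # partially adaptive step
    return theta, (m, v, v_hat)
```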
A compressed adaptive optimizer for training large-scale deep learning models using PyTorch
A simple MATLAB toolbox for deep learning networks (version 1.0.3)
Implementations of different gradient descent variants in Python using NumPy
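As a rough illustration of what such variants look like, here is a minimal NumPy sketch of plain gradient descent and its momentum variant; the function names and defaults are made up for illustration:

```python
import numpy as np

def gd_step(theta, grad, lr=0.01):
    """Vanilla gradient descent: step directly against the gradient."""
    return theta - lr * grad

def momentum_step(theta, grad, velocity, lr=0.01, mu=0.9):
    """Momentum: accumulate an exponentially decaying sum of past gradients
    and move along that smoothed direction instead of the raw gradient."""
    velocity = mu * velocity - lr * grad
    return theta + velocity, velocity
```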
Effect of Optimizer Selection and Hyperparameter Tuning on Training Efficiency and LLM Performance
Deep Learning Optimizers
A flexible and extensible implementation of a multithreaded feedforward neural network in Java, including popular optimizers, wrapped in a console user interface
Implementation and comparison of the SGD, SGD with momentum, RMSProp, and AMSGrad optimizers on an image classification task using the MNIST dataset
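The two adaptive updates in that comparison can be sketched the same way (plain SGD and momentum appear in the sketch above); again, the names and defaults here are illustrative assumptions:

```python
import numpy as np

def rmsprop_step(theta, grad, sq_avg, lr=0.001, alpha=0.99, eps=1e-8):
    """RMSProp: scale each step by a decaying average of squared gradients."""
    sq_avg = alpha * sq_avg + (1 - alpha) * grad ** 2
    return theta - lr * grad / (np.sqrt(sq_avg) + eps), sq_avg

def amsgrad_step(theta, grad, state, lr=0.001, beta1=0.9, beta2=0.999,
                 eps=1e-8):
    """AMSGrad: Adam with a non-decreasing second-moment estimate."""
    m, v, v_hat = state
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    v_hat = np.maximum(v_hat, v)  # keep the running max of v
    return theta - lr * m / (np.sqrt(v_hat) + eps), (m, v, v_hat)
```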
Implementation of a 3-layer neural network in NumPy, trained and tested on the MNIST dataset
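A 3-layer NumPy classifier of this kind typically has a forward pass like the following; the ReLU hidden layers, parameter names, and shapes are assumptions for illustration, not the repository's code:

```python
import numpy as np

def forward(x, params):
    """Forward pass of a 3-layer net: two ReLU hidden layers, softmax output.

    `params` holds weight/bias pairs (W1, b1), (W2, b2), (W3, b3);
    `x` is a batch of flattened 28x28 MNIST images, shape (batch, 784).
    """
    (W1, b1), (W2, b2), (W3, b3) = params
    h1 = np.maximum(0, x @ W1 + b1)   # hidden layer 1, ReLU
    h2 = np.maximum(0, h1 @ W2 + b2)  # hidden layer 2, ReLU
    logits = h2 @ W3 + b3             # output layer, 10 classes
    exp = np.exp(logits - logits.max(axis=1, keepdims=True))
    return exp / exp.sum(axis=1, keepdims=True)  # softmax probabilities
```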
CNeuron is a simple single-neuron neural network implementation in C, designed for easy integration into C projects.
Advance Machine Learning (CSL 712) Course Lab Assignments
Week 1 assignment from Coursera's "Advanced Machine Learning - Introduction to Deep Learning"
A comparison study of different optimizers: visualizing where their behavior diverges and identifying which one suits a specific task best