zellyn/deeplearning-class-2011

Overview

My code for the Deep Learning class exercises. There should be nothing proprietary in here. For the earlier exercises, I tried to create parallel implementations in Octave and NumPy; for the later ones, class-supplied helper code necessitated MATLAB (for now).

Materials

Note

Questions

UFLDL

“Since J(W,b) is a non-convex function, gradient descent is susceptible to local optima; however, in practice gradient descent usually works fairly well.” - UFLDL/Backpropagation

Why? Is J(W,b) nearly convex in practice? Are the local optima all of similar quality? Are any of the variants (squared error, squared error + weight decay, squared error + weight decay + sparsity constraint) convex?
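
For reference, the three variants in question are the cost functions from the UFLDL sparse-autoencoder notes. Below is a minimal NumPy sketch of them for a one-hidden-layer autoencoder with sigmoid units; the function name and parameters (autoencoder_costs, rho, lam, beta) are illustrative assumptions, not code from the exercises.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def autoencoder_costs(W1, b1, W2, b2, x, rho=0.05, lam=1e-4, beta=3.0):
    """Sketch of the three UFLDL cost variants for a batch of inputs x
    (columns are examples). Names and defaults are illustrative."""
    m = x.shape[1]

    # Forward pass: x -> hidden activations a2 -> reconstruction a3.
    a2 = sigmoid(W1 @ x + b1[:, None])
    a3 = sigmoid(W2 @ a2 + b2[:, None])

    # 1. Plain squared reconstruction error.
    J_sq = 0.5 / m * np.sum((a3 - x) ** 2)

    # 2. Add L2 weight decay on the weight matrices (not the biases).
    J_wd = J_sq + 0.5 * lam * (np.sum(W1 ** 2) + np.sum(W2 ** 2))

    # 3. Add the KL-divergence sparsity penalty on the mean hidden activation.
    rho_hat = a2.mean(axis=1)
    kl = np.sum(rho * np.log(rho / rho_hat)
                + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
    J_sparse = J_wd + beta * kl

    return J_sq, J_wd, J_sparse
```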

Tasks

Python

Implement cnn_exercise.
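
As a starting point, here is a minimal NumPy sketch of the convolve-and-pool step a port of cnn_exercise would need, simplified to greyscale images, sigmoid activations, and mean pooling; the function name, argument shapes, and parameters are assumptions rather than the class-supplied interface (the UFLDL exercise itself also deals with color patches and whitened features).

```python
import numpy as np
from scipy.signal import convolve2d

def convolve_and_pool(images, W, b, pool_dim):
    """Convolve learned features over images and mean-pool the result.
    images: (num_images, image_dim, image_dim) greyscale images
    W:      (num_features, patch_dim, patch_dim) learned feature weights
    b:      (num_features,) feature biases
    A sketch only; shapes and names are assumptions, not the class code's API."""
    num_images, image_dim, _ = images.shape
    num_features, patch_dim, _ = W.shape
    conv_dim = image_dim - patch_dim + 1
    pooled_dim = conv_dim // pool_dim

    pooled = np.zeros((num_images, num_features, pooled_dim, pooled_dim))
    for i in range(num_images):
        for f in range(num_features):
            # 'valid' convolution with the flipped filter is cross-correlation
            # with the feature, matching the usual feedforward pass.
            feat = convolve2d(images[i], np.flipud(np.fliplr(W[f])), mode='valid')
            feat = 1.0 / (1.0 + np.exp(-(feat + b[f])))  # sigmoid activation
            # Mean-pool over non-overlapping pool_dim x pool_dim regions.
            for r in range(pooled_dim):
                for c in range(pooled_dim):
                    region = feat[r * pool_dim:(r + 1) * pool_dim,
                                  c * pool_dim:(c + 1) * pool_dim]
                    pooled[i, f, r, c] = region.mean()
    return pooled
```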

About

Code for Deep Learning class at Google
