https://www.almaany.com/ar/dict/ar-ar/%D9%86%D9%90%D8%AD%D9%92%D8%B1%D9%90%D9%8A%D8%B1/
https://www.almaany.com/ar/dict/ar-ar/%D9%86%D8%B7%D8%A7%D8%B3%D9%8A/
https://www.almaany.com/ar/thes/ar-ar/%D9%86%D8%B7%D8%A7%D8%B3%D9%8A/
https://www.maajim.com/dictionary/%D9%86%D8%B7%D8%A7%D8%B3%D9%8A
https://ar.wikipedia.org/wiki/%D8%A7%D8%B3%D8%AA%D9%82%D8%B1%D8%A7%D8%A1_(%D9%85%D9%86%D8%B7%D9%82)
Backpropagation
Softmax
https://www.ics.uci.edu/~pjsadows/notes.pdf
https://ai.stackexchange.com/questions/6343/how-do-i-implement-softmax-forward-propagation-and-backpropagation-to-replace-si?newreg=955c85b8c8704de1be03d7b566f51405
https://stats.stackexchange.com/questions/235528/backpropagation-with-softmax-cross-entropy
https://algorithmsdatascience.quora.com/BackPropagation-a-collection-of-notes-tutorials-demo-and-codes
https://eli.thegreenplace.net/2016/the-softmax-function-and-its-derivative/
https://stackoverflow.com/questions/33541930/how-to-implement-the-softmax-derivative-independently-from-any-loss-function
https://stackoverflow.com/questions/40575841/numpy-calculate-the-derivative-of-the-softmax-function
https://en.wikipedia.org/wiki/Softmax_function#Artificial_neural_networks
(?) http://www.wildml.com/2015/09/implementing-a-neural-network-from-scratch/
https://medium.com/@14prakash/back-propagation-is-very-simple-who-made-it-complicated-97b794c97e5c
https://medium.com/@aerinykim/how-to-implement-the-softmax-derivative-independently-from-any-loss-function-ae6d44363a9d
http://www.cs.toronto.edu/~tijmen/csc321/documents/softmax.pdf
https://math.stackexchange.com/questions/945871/derivative-of-softmax-loss-function
https://peterroelants.github.io/posts/cross-entropy-softmax/
https://stats.stackexchange.com/questions/79454/softmax-layer-in-a-neural-network
https://deepnotes.io/softmax-crossentropy
https://ljvmiranda921.github.io/notebook/2017/08/13/softmax-and-the-negative-log-likelihood/
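All of the softmax links above converge on the same two results: the Jacobian ∂p_i/∂z_j = p_i(δ_ij − p_j), and the fact that softmax followed by cross-entropy backpropagates the simple gradient p − y. A minimal NumPy sketch of both (my own illustration, not code from the posts):

```python
import numpy as np

def softmax(z):
    # Subtract the max for numerical stability; the result is unchanged.
    e = np.exp(z - np.max(z))
    return e / e.sum()

def softmax_jacobian(p):
    # J[i, j] = p_i * (delta_ij - p_j), as derived in the links above.
    return np.diag(p) - np.outer(p, p)

# With cross-entropy loss L = -sum(y * log(p)) and a one-hot target y,
# the gradient w.r.t. the logits collapses to the well-known p - y.
z = np.array([2.0, 1.0, 0.1])
y = np.array([1.0, 0.0, 0.0])
p = softmax(z)

grad_loss_wrt_p = -y / p                      # dL/dp
grad_via_jacobian = softmax_jacobian(p) @ grad_loss_wrt_p
print(np.allclose(grad_via_jacobian, p - y))  # True
```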
CNN
LSTM
RNN
Embeddings
https://towardsdatascience.com/neural-network-embeddings-explained-4d028e6f0526
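For a quick intuition to go with the post above: an embedding layer is just a trainable lookup table, and the lookup itself is plain row indexing. A tiny illustrative sketch (the names and sizes are mine):

```python
import numpy as np

vocab_size, embed_dim = 10, 4
rng = np.random.default_rng(0)
W = rng.normal(size=(vocab_size, embed_dim))  # the embedding table

token_ids = np.array([3, 1, 3])   # a tiny "sentence" of token indices
vectors = W[token_ids]            # embedding lookup is just row indexing
print(vectors.shape)              # (3, 4)
```

During training, gradients flow only into the rows of W that were actually looked up.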
Attention Mechanism
TensorFlow
NumPy
http://ajcr.net/Basic-guide-to-einsum/
https://stackoverflow.com/questions/26089893/understanding-numpys-einsum
https://machinelearningmastery.com/broadcasting-with-numpy-arrays/
https://docs.scipy.org/doc/numpy-1.15.0/user/basics.broadcasting.html
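A small self-contained demo of the two NumPy ideas covered above, einsum contractions and broadcasting (illustrative only):

```python
import numpy as np

A = np.arange(6).reshape(2, 3)
B = np.arange(12).reshape(3, 4)

# Matrix multiply written as an einsum: sum over the shared index k.
C = np.einsum('ik,kj->ij', A, B)
print(np.allclose(C, A @ B))          # True

# Batched dot products: one contraction per row, no Python loop.
x = np.arange(6).reshape(2, 3)
print(np.einsum('ij,ij->i', x, x))    # row-wise squared norms

# Broadcasting: a (3,) vector stretches across a (2, 3) matrix.
row = np.array([10.0, 20.0, 30.0])
print(A + row)                        # row is added to each row of A
```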
Ensemble learning
https://en.wikipedia.org/wiki/Ensemble_learning
https://towardsdatascience.com/ensemble-methods-in-machine-learning-what-are-they-and-why-use-them-68ec3f9fef5f
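A toy illustration of the simplest ensemble idea from these links, hard majority voting over several classifiers' predictions (the predictions here are made up):

```python
import numpy as np

# Hypothetical label predictions from three classifiers on five samples.
preds = np.array([
    [0, 1, 1, 0, 2],
    [0, 1, 0, 0, 2],
    [1, 1, 1, 0, 1],
])

# Hard-voting ensemble: the most common label per column (sample) wins.
def majority_vote(preds):
    return np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, preds)

print(majority_vote(preds))  # [0 1 1 0 2]
```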
Automatic Differentiation
https://www.youtube.com/watch?v=sq2gPzlrM0g
https://arxiv.org/abs/1502.05767
http://www.cs.toronto.edu/~rgrosse/courses/csc321_2017/slides/lec8a.pdf
http://www.cs.toronto.edu/~rgrosse/courses/csc321_2017/slides/lec8b.pdf
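The links above cover reverse-mode AD in depth; as a pocket illustration, here is a deliberately tiny scalar reverse-mode sketch (my own, supporting only + and *):

```python
class Var:
    """A scalar that records how it was computed, for reverse-mode AD."""
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents   # pairs of (parent Var, local gradient)
        self.grad = 0.0

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def backward(self, seed=1.0):
        # Accumulate d(output)/d(self) into every ancestor via the chain rule.
        self.grad += seed
        for parent, local_grad in self.parents:
            parent.backward(seed * local_grad)

x = Var(2.0)
y = Var(3.0)
z = x * y + x          # z = xy + x
z.backward()
print(x.grad, y.grad)  # dz/dx = y + 1 = 4.0, dz/dy = x = 2.0
```

A real implementation topologically sorts the graph and sweeps it once; this naive recursion revisits shared nodes, which is fine for a toy.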
Heatmapping
http://www.heatmapping.org/
http://www.heatmapping.org/tutorial/
Transformer
https://papers.nips.cc/paper/7181-attention-is-all-you-need.pdf (the original paper)
https://www.youtube.com/watch?v=iDulhoQ2pro (video on the original paper)
https://mchromiak.github.io/articles/2017/Sep/12/Transformer-Attention-is-all-you-need/
https://jalammar.github.io/illustrated-transformer/
https://towardsdatascience.com/bert-explained-state-of-the-art-language-model-for-nlp-f8b21a9b6270
https://arxiv.org/abs/1810.04805
https://medium.com/@kolloldas/building-the-mighty-transformer-for-sequence-tagging-in-pytorch-part-i-a1815655cd8
https://arxiv.org/abs/1807.03819
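The core equation of the original paper, Attention(Q, K, V) = softmax(QKᵀ/√d_k)·V, in a few lines of NumPy (single head, no mask; the shapes are chosen arbitrarily for illustration):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V, eq. (1) of the paper.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    return softmax(scores) @ V

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))   # 4 query positions, d_k = 8
K = rng.normal(size=(6, 8))   # 6 key positions
V = rng.normal(size=(6, 8))   # one value vector per key
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)
```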
https://joshvarty.com/2018/02/19/ltfn-6-weight-initialization/
http://adventuresinmachinelearning.com/weight-initialization-tutorial-tensorflow/
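The two initialization schemes both tutorials discuss, sketched in NumPy (Glorot/Xavier and He; the function names are mine):

```python
import numpy as np

rng = np.random.default_rng(0)

def xavier_init(fan_in, fan_out):
    # Glorot/Xavier: variance 2 / (fan_in + fan_out), suited to tanh/sigmoid.
    std = np.sqrt(2.0 / (fan_in + fan_out))
    return rng.normal(0.0, std, size=(fan_in, fan_out))

def he_init(fan_in, fan_out):
    # He/Kaiming: variance 2 / fan_in, the usual choice for ReLU layers.
    std = np.sqrt(2.0 / fan_in)
    return rng.normal(0.0, std, size=(fan_in, fan_out))

W = he_init(784, 256)
print(W.std())  # close to sqrt(2/784) ≈ 0.0505
```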
https://eli.thegreenplace.net/2015/memory-layout-of-multi-dimensional-arrays
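The row-major vs. column-major story from that post, visible directly in NumPy strides (illustrative):

```python
import numpy as np

a = np.arange(6, dtype=np.int64).reshape(2, 3)  # C order (row-major)
f = np.asfortranarray(a)                        # F order (column-major)

# Strides are the byte steps per axis; row-major steps fastest along columns.
print(a.strides)  # (24, 8): next row is 3*8 bytes away, next column 8
print(f.strides)  # (8, 16): next row is 8 bytes away, next column 2*8
```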
https://eli.thegreenplace.net/2018/elegant-python-code-for-a-markov-chain-text-generator/
https://hackernoon.com/automated-text-generator-using-markov-chain-de999a41e047
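A compact word-level Markov chain text generator in the spirit of the two posts above (my own sketch; the tiny corpus is made up):

```python
import random
from collections import defaultdict

def build_model(text, order=1):
    # Map each state (tuple of `order` words) to the words that follow it.
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        state = tuple(words[i:i + order])
        model[state].append(words[i + order])
    return model

def generate(model, length=10):
    state = random.choice(list(model.keys()))
    out = list(state)
    for _ in range(length):
        choices = model.get(state)
        if not choices:
            break
        nxt = random.choice(choices)  # sample a next word seen after this state
        out.append(nxt)
        state = state[1:] + (nxt,)
    return ' '.join(out)

corpus = "the cat sat on the mat and the cat ran off the mat"
print(generate(build_model(corpus), length=8))
```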
http://www.emergentmind.com/neural-network
http://www.cs.toronto.edu/~tijmen/csc321/
https://lilianweng.github.io/lil-log/2017/08/20/from-GAN-to-WGAN.html?fbclid=IwAR3vrT1pUt5xgzGcZdHPwkOqh-NjsgIKUEEa0JdmmBL66I2mKhA7DrHbGxc