- Course: https://www.coursera.org/specializations/deep-learning
- Offered by: https://www.deeplearning.ai/
- Instructor: https://www.andrewng.org/
- My certificates: https://www.coursera.org/account/accomplishments/specialization/55B8USWRBGBQ
- Neural networks, binary classification, logistic regression, gradient descent, vectorization (a vectorized logistic regression sketch follows this list)
- Python basics with NumPy, broadcasting
- Understanding neural network representations, activation functions and their derivatives, backpropagation, random initialization
- Deep L-layer networks, forward propagation, parameters and hyperparameters
- Building a deep neural network step by step
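
A minimal NumPy sketch of the vectorized logistic regression pass covered above, relying on broadcasting to add the bias to every example; the synthetic data, learning rate, and iteration count are illustrative assumptions, not the course's assignment code:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def propagate(w, b, X, Y):
    """One vectorized forward/backward pass of logistic regression.
    Shapes: X (n_features, m), Y (1, m), w (n_features, 1)."""
    m = X.shape[1]
    A = sigmoid(w.T @ X + b)          # broadcasting adds the scalar bias to every column
    cost = -np.mean(Y * np.log(A) + (1 - Y) * np.log(1 - A))
    dw = (X @ (A - Y).T) / m          # gradient of the cost w.r.t. w
    db = np.sum(A - Y) / m            # gradient of the cost w.r.t. b
    return dw, db, cost

# Toy gradient descent loop on random data
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 100))
Y = (X.sum(axis=0, keepdims=True) > 0).astype(float)
w, b = np.zeros((4, 1)), 0.0
for _ in range(1000):
    dw, db, cost = propagate(w, b, X, Y)
    w -= 0.1 * dw
    b -= 0.1 * db
print("final cost:", cost)
```
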
- Train/dev/test set splits; bias and variance
- Regularization: overfitting, underfitting, dropout
- Gradient checking; vanishing/exploding gradients
- Understanding mini-batch gradient descent, bias correction, RMSprop
- Adam optimization, learning rate decay (a minimal Adam update sketch follows this list)
- Hyperparameter tuning process, batch normalization, softmax regression
- Understanding deep learning frameworks: TensorFlow
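
A minimal sketch of one Adam update with bias correction, combining the momentum-style first moment with the RMSprop-style second moment; the function name `adam_update` and the toy quadratic objective are assumptions for illustration only:

```python
import numpy as np

def adam_update(param, grad, v, s, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam step for a single parameter array.
    v, s are running first/second moment estimates; t is the 1-based step count."""
    v = beta1 * v + (1 - beta1) * grad             # momentum-style moving average
    s = beta2 * s + (1 - beta2) * grad ** 2        # RMSprop-style moving average
    v_hat = v / (1 - beta1 ** t)                   # bias correction
    s_hat = s / (1 - beta2 ** t)
    param = param - lr * v_hat / (np.sqrt(s_hat) + eps)
    return param, v, s

# Toy usage: minimize f(w) = ||w||^2, whose gradient is 2w
w = np.array([3.0, -2.0])
v, s = np.zeros_like(w), np.zeros_like(w)
for t in range(1, 2001):
    w, v, s = adam_update(w, 2 * w, v, s, t, lr=0.01)
print(w)  # close to [0, 0]
```
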
- Machine learning strategy: train/dev/test set distributions
- Comparing to human-level performance: avoidable bias (a small worked example follows this list)
- Error analysis: mismatched training and dev/test distributions
- Transfer learning, multi-task learning, end-to-end deep learning
- Machine learning flight simulator
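
As a small worked example of avoidable bias versus variance (the error figures are made up for illustration): if human-level error is 1%, training error is 8%, and dev error is 10%, then avoidable bias is 8% - 1% = 7% and variance is 10% - 8% = 2%, so bias-reduction tactics (a bigger network, longer training) should take priority over variance reduction.
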
- The basics of computer vision: edge detection, padding, stride, filters, pooling layers (a convolution sketch follows this list)
- Building a simple one-layer convolutional network
- ResNets, Inception networks, transfer learning, data augmentation
- Building residual networks
- Detection algorithms: object detection, bounding box prediction, IoU, non-max suppression, the YOLO algorithm
- Face recognition and verification: Siamese networks, triplet loss
- Neural style transfer
- Classic networks: Gradient-Based Learning Applied to Document Recognition
- Classic networks: ImageNet Classification with Deep Convolutional Neural Networks
- Classic networks: Very Deep Convolutional Networks for Large-Scale Image Recognition
- ResNet: Deep Residual Learning for Image Recognition
- 1x1 convolution: Network In Network
- Inception network: Going Deeper With Convolutions
- Convolutional implementation of sliding windows: OverFeat: Integrated Recognition, Localization and Detection using Convolutional Networks
- Bounding box predictions: You Only Look Once: Unified, Real-Time Object Detection
- Region proposal: R-CNN: Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation
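
A minimal NumPy sketch of a single-channel convolution, showing how the filter size f, padding p, and stride s determine the output size floor((n + 2p - f)/s) + 1; the 6x6 input with a vertical edge mirrors the lecture example, while the helper name `conv2d_single_channel` is just an assumption:

```python
import numpy as np

def conv2d_single_channel(image, kernel, pad=0, stride=1):
    """Naive 2D convolution (cross-correlation, as in the course) on one channel.
    Output height/width: floor((n + 2*pad - f) / stride) + 1."""
    n_h, n_w = image.shape
    f = kernel.shape[0]
    padded = np.pad(image, pad, mode="constant")
    out_h = (n_h + 2 * pad - f) // stride + 1
    out_w = (n_w + 2 * pad - f) // stride + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = padded[i * stride:i * stride + f, j * stride:j * stride + f]
            out[i, j] = np.sum(patch * kernel)   # elementwise product, then sum
    return out

# 6x6 image whose left half is bright, convolved with a 3x3 vertical-edge filter
image = np.hstack([np.full((6, 3), 10.0), np.zeros((6, 3))])
vertical_edge = np.array([[1, 0, -1],
                          [1, 0, -1],
                          [1, 0, -1]], dtype=float)
print(conv2d_single_channel(image, vertical_edge))  # 4x4 map with the edge highlighted
```
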
- Building recurrent neural networks (RNNs) with gated recurrent units (GRUs) and long short-term memory (LSTM) cells (a minimal GRU step sketch follows this list)
- Word embeddings: word2vec and GloVe word vectors
- Solving NLP problems: text analysis / sentiment analysis
- Sequence-to-sequence architectures: beam search, BLEU score, attention models
- Speech recognition
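
A minimal NumPy sketch of one GRU time step in the course's notation (update gate Γu, relevance gate Γr); the weight shapes, random initialization, and toy sequence are illustrative assumptions rather than the course's assignment code:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(x_t, a_prev, p):
    """One GRU step. Shapes: x_t (n_x, m), a_prev (n_a, m)."""
    concat = np.vstack([a_prev, x_t])
    gamma_u = sigmoid(p["Wu"] @ concat + p["bu"])                  # update gate
    gamma_r = sigmoid(p["Wr"] @ concat + p["br"])                  # relevance gate
    c_tilde = np.tanh(p["Wc"] @ np.vstack([gamma_r * a_prev, x_t]) + p["bc"])  # candidate
    return gamma_u * c_tilde + (1 - gamma_u) * a_prev              # blend old and new state

# Toy usage: n_x = 3 input features, n_a = 5 hidden units, m = 2 examples, 4 time steps
rng = np.random.default_rng(0)
n_x, n_a, m = 3, 5, 2
p = {k: rng.normal(scale=0.1, size=(n_a, n_a + n_x)) for k in ("Wu", "Wr", "Wc")}
p.update({k: np.zeros((n_a, 1)) for k in ("bu", "br", "bc")})
a = np.zeros((n_a, m))
for x_t in rng.normal(size=(4, n_x, m)):
    a = gru_step(x_t, a, p)
print(a.shape)  # (5, 2)
```
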
- Siamese network: DeepFace: Closing the Gap to Human-Level Performance in Face Verification
- Triplet loss: FaceNet: A Unified Embedding for Face Recognition and Clustering
- Style transfer: A Neural Algorithm of Artistic Style
- GRU: On the Properties of Neural Machine Translation: Encoder-Decoder Approaches
- GRU: Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling
- LSTM: Long Short-Term Memory
- Word embeddings: Visualizing Data using t-SNE
- Word embeddings: Linguistic Regularities in Continuous Space Word Representations
- Language model: A Neural Probabilistic Language Model
- Negative sampling: Distributed Representations of Words and Phrases and their Compositionality
- Debiasing word embeddings: Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings
- RNN: Deep Captioning with Multimodal Recurrent Neural Networks (m-RNN)
- RNN: Deep Visual-Semantic Alignments for Generating Image Descriptions
- BLEU: BLEU: a Method for Automatic Evaluation of Machine Translation
- NMT: Neural Machine Translation by Jointly Learning to Align and Translate
- Attention model: Show, Attend and Tell: Neural Image Caption Generation with Visual Attention