CS231n: Deep Learning for Computer Vision is a classic introductory course for the computer vision field. After watching the lectures, I worked through the three assignments (Assignment 1, Assignment 2, and Assignment 3) in the order given on the Schedule, with the help of the Course Notes, and this post records my solutions.
Note: all of the content, comments, and derivations originally written in Chinese are my own; if you spot any mistakes, corrections are welcome.
The notebook knn.ipynb will walk you through implementing the kNN classifier.
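As a quick reference, here is a minimal NumPy sketch of the fully vectorized L2-distance trick and the k-nearest-neighbor vote; the function names and signatures are my own, not the notebook's exact starter-code API.

```python
import numpy as np

def compute_l2_distances(X_train, X_test):
    # Fully vectorized squared-L2 distances via the expansion
    # ||a - b||^2 = ||a||^2 - 2 a.b + ||b||^2
    test_sq = np.sum(X_test ** 2, axis=1, keepdims=True)   # (num_test, 1)
    train_sq = np.sum(X_train ** 2, axis=1)                 # (num_train,)
    cross = X_test @ X_train.T                              # (num_test, num_train)
    return np.sqrt(np.maximum(test_sq - 2 * cross + train_sq, 0))

def predict_labels(dists, y_train, k=1):
    # For each test point, vote among the labels of its k nearest neighbors.
    nearest = np.argsort(dists, axis=1)[:, :k]              # (num_test, k)
    votes = y_train[nearest]
    return np.array([np.bincount(row).argmax() for row in votes])
```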
The notebook svm.ipynb will walk you through implementing the SVM classifier.
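Below is a sketch of the vectorized multiclass SVM (hinge) loss and its gradient with margin delta = 1; variable names and the exact signature are illustrative rather than the assignment's own.

```python
import numpy as np

def svm_loss_vectorized(W, X, y, reg):
    # W: (D, C), X: (N, D), y: (N,) integer labels
    N = X.shape[0]
    scores = X @ W                                    # (N, C)
    correct = scores[np.arange(N), y][:, None]        # (N, 1)
    margins = np.maximum(0, scores - correct + 1.0)   # delta = 1
    margins[np.arange(N), y] = 0
    loss = margins.sum() / N + reg * np.sum(W * W)

    # Each positive margin contributes +x to its class column
    # and -x to the correct-class column.
    mask = (margins > 0).astype(float)
    mask[np.arange(N), y] = -mask.sum(axis=1)
    dW = X.T @ mask / N + 2 * reg * W
    return loss, dW
```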
The notebook softmax.ipynb will walk you through implementing the Softmax classifier.
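A similar sketch for the softmax (cross-entropy) loss, including the max-subtraction trick for numerical stability; again the signature is my own, not necessarily the notebook's.

```python
import numpy as np

def softmax_loss_vectorized(W, X, y, reg):
    # W: (D, C), X: (N, D), y: (N,) integer labels
    N = X.shape[0]
    scores = X @ W
    scores -= scores.max(axis=1, keepdims=True)        # numerical stability
    exp_scores = np.exp(scores)
    probs = exp_scores / exp_scores.sum(axis=1, keepdims=True)
    loss = -np.log(probs[np.arange(N), y]).mean() + reg * np.sum(W * W)

    dscores = probs.copy()
    dscores[np.arange(N), y] -= 1                      # dL/dscores = p - 1{y}
    dW = X.T @ dscores / N + 2 * reg * W
    return loss, dW
```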
The notebook two_layer_net.ipynb will walk you through the implementation of a two-layer neural network classifier.
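The core of the two-layer network is an affine–ReLU–affine forward pass producing unnormalized class scores; a minimal sketch with my own naming:

```python
import numpy as np

def two_layer_net_forward(X, W1, b1, W2, b2):
    # X: (N, D), W1: (D, H), W2: (H, C); returns scores and the hidden activations.
    hidden = np.maximum(0, X @ W1 + b1)   # (N, H) ReLU hidden layer
    scores = hidden @ W2 + b2             # (N, C) class scores
    return scores, hidden
```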
The notebook features.ipynb will examine the improvements gained by using higher-level representations as opposed to using raw pixel values.
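One of the higher-level representations used there is a hue histogram in HSV space (combined with HOG features); a simplified sketch of that idea, with a signature that may differ from the notebook's own feature-extraction helpers:

```python
import numpy as np
import matplotlib.colors as colors

def color_histogram_hsv(im, nbin=10):
    # im: (H, W, 3) RGB image with values in [0, 255].
    hsv = colors.rgb_to_hsv(im / 255.0)
    # Bin the hue channel into a normalized histogram.
    hist, _ = np.histogram(hsv[:, :, 0], bins=nbin, range=(0, 1), density=True)
    return hist
```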
The notebook FullyConnectedNets.ipynb will have you implement fully connected networks of arbitrary depth. To optimize these models you will implement several popular update rules.
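Two of the popular update rules are SGD with momentum and Adam. The sketches below follow a generic `w, config = update(w, dw, config)` pattern; the assignment's actual optim.py API may differ in details.

```python
import numpy as np

def sgd_momentum(w, dw, config):
    # config holds 'learning_rate', 'momentum', and the running 'velocity'.
    v = config.get('velocity', np.zeros_like(w))
    v = config['momentum'] * v - config['learning_rate'] * dw
    config['velocity'] = v
    return w + v, config

def adam(w, dw, config, t):
    # Adam with bias correction; t is the 1-indexed update count.
    config.setdefault('m', np.zeros_like(w))
    config.setdefault('v', np.zeros_like(w))
    lr = config.get('learning_rate', 1e-3)
    beta1, beta2 = config.get('beta1', 0.9), config.get('beta2', 0.999)
    eps = config.get('epsilon', 1e-8)
    config['m'] = beta1 * config['m'] + (1 - beta1) * dw
    config['v'] = beta2 * config['v'] + (1 - beta2) * dw ** 2
    m_hat = config['m'] / (1 - beta1 ** t)
    v_hat = config['v'] / (1 - beta2 ** t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), config
```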
In notebook BatchNormalization.ipynb you will implement batch normalization, and use it to train deep fully connected networks.
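The training-time forward pass of batch normalization in brief (my own signature; the full layer also tracks running statistics for test-time use):

```python
import numpy as np

def batchnorm_forward_train(x, gamma, beta, eps=1e-5):
    # x: (N, D); gamma, beta: (D,) learned scale and shift.
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)   # normalize each feature over the batch
    out = gamma * x_hat + beta              # restore representational power
    cache = (x_hat, gamma, var, eps)        # values needed by the backward pass
    return out, cache
```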
The notebook Dropout.ipynb will help you implement dropout and explore its effects on model generalization.
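Dropout is usually implemented in its "inverted" form, scaling activations at training time so the test-time forward pass is unchanged; a minimal sketch:

```python
import numpy as np

def dropout_forward(x, p_keep, mode):
    # Inverted dropout: divide by p_keep at train time so expected activations match test time.
    if mode == 'train':
        mask = (np.random.rand(*x.shape) < p_keep) / p_keep
        return x * mask, mask
    return x, None   # test mode: identity
```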
In the notebook ConvolutionalNetworks.ipynb you will implement several new layers that are commonly used in convolutional networks.
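One such layer is the convolution forward pass; below is a deliberately naive, loop-based NumPy sketch (illustrative only, not optimized), with my own parameter names.

```python
import numpy as np

def conv_forward_naive(x, w, b, stride=1, pad=1):
    # x: (N, C, H, W), w: (F, C, HH, WW), b: (F,)
    N, C, H, W = x.shape
    F, _, HH, WW = w.shape
    H_out = 1 + (H + 2 * pad - HH) // stride
    W_out = 1 + (W + 2 * pad - WW) // stride
    x_pad = np.pad(x, ((0, 0), (0, 0), (pad, pad), (pad, pad)))
    out = np.zeros((N, F, H_out, W_out))
    for n in range(N):                 # each image
        for f in range(F):             # each filter
            for i in range(H_out):     # each output row
                for j in range(W_out): # each output column
                    window = x_pad[n, :, i*stride:i*stride+HH, j*stride:j*stride+WW]
                    out[n, f, i, j] = np.sum(window * w[f]) + b[f]
    return out
```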
For this part, you will be working with PyTorch, a popular and powerful deep learning framework.
Open up PyTorch.ipynb. There, you will learn how the framework works, culminating in training a convolutional network of your own design on CIFAR-10 to get the best performance you can.
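For orientation, here is a minimal sketch of the Module-API workflow the notebook builds toward: define a small ConvNet, an optimizer, and a training step. The architecture is only a placeholder (for 32x32 CIFAR-10 images), not a recommended design.

```python
import torch
import torch.nn as nn
import torch.optim as optim

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# A tiny ConvNet; the notebook asks you to design your own, better one.
model = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(64 * 8 * 8, 10),   # 32x32 input -> 8x8 after two 2x pools
).to(device)

optimizer = optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_one_epoch(loader):
    # loader yields (images, labels) batches from CIFAR-10.
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
```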
The notebook Network_Visualization.ipynb will introduce the pretrained SqueezeNet model, compute gradients with respect to images, and use them to produce saliency maps and fooling images.
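A saliency map is the per-pixel magnitude of the gradient of the correct-class score with respect to the input image; a short PyTorch sketch (my own signature, assuming a pretrained classifier such as SqueezeNet):

```python
import torch

def compute_saliency_maps(X, y, model):
    # X: (N, 3, H, W) images, y: (N,) long labels, model: pretrained classifier.
    model.eval()
    X = X.clone().requires_grad_(True)
    scores = model(X)                                     # (N, num_classes)
    correct_scores = scores.gather(1, y.view(-1, 1)).squeeze()
    correct_scores.sum().backward()                       # grad of correct-class scores w.r.t. X
    saliency, _ = X.grad.abs().max(dim=1)                 # max over color channels
    return saliency                                       # (N, H, W)
```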
The notebook RNN_Captioning.ipynb will walk you through the implementation of vanilla recurrent neural networks and apply them to image captioning on COCO.
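The building block is a single vanilla RNN timestep, h_t = tanh(x_t Wx + h_{t-1} Wh + b); a NumPy sketch with my own naming:

```python
import numpy as np

def rnn_step_forward(x, prev_h, Wx, Wh, b):
    # x: (N, D) input at this timestep, prev_h: (N, H) previous hidden state,
    # Wx: (D, H), Wh: (H, H), b: (H,).
    next_h = np.tanh(x @ Wx + prev_h @ Wh + b)
    cache = (x, prev_h, Wx, Wh, next_h)   # saved for the backward pass
    return next_h, cache
```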
The notebook Transformer_Captioning.ipynb will walk you through the implementation of a Transformer model and apply it to image captioning on COCO.
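At the heart of the Transformer is scaled dot-product attention; a short PyTorch sketch (shapes and naming are my own, not the notebook's starter code):

```python
import math
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(Q, K, V, mask=None):
    # Q, K, V: (N, heads, T, d_head); the core op inside multi-head attention.
    d = Q.shape[-1]
    scores = Q @ K.transpose(-2, -1) / math.sqrt(d)     # (N, heads, T_q, T_k)
    if mask is not None:
        scores = scores.masked_fill(mask == 0, float('-inf'))  # e.g. causal mask
    attn = F.softmax(scores, dim=-1)
    return attn @ V
```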
In the notebook Generative_Adversarial_Networks.ipynb you will learn how to generate images that match a training dataset and use these models to improve classifier performance when training on a large amount of unlabeled data and a small amount of labeled data. When first opening the notebook, go to Runtime > Change runtime type and set Hardware accelerator to GPU.
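The basic GAN objective pits a discriminator (real vs. fake) against a generator that tries to fool it; a sketch of the standard binary-cross-entropy losses on raw logits (the notebook may also ask for other loss variants, e.g. least-squares):

```python
import torch
import torch.nn.functional as F

def discriminator_loss(logits_real, logits_fake):
    # The discriminator should label real images 1 and generated images 0.
    real_loss = F.binary_cross_entropy_with_logits(
        logits_real, torch.ones_like(logits_real))
    fake_loss = F.binary_cross_entropy_with_logits(
        logits_fake, torch.zeros_like(logits_fake))
    return real_loss + fake_loss

def generator_loss(logits_fake):
    # The generator tries to make the discriminator label its fakes as real.
    return F.binary_cross_entropy_with_logits(
        logits_fake, torch.ones_like(logits_fake))
```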
In the notebook Self_Supervised_Learning.ipynb, you will learn how to leverage self-supervised pretraining to obtain better performance on image classification tasks. When first opening the notebook, go to Runtime > Change runtime type and set Hardware accelerator to GPU.
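A common self-supervised pretraining objective is the SimCLR-style contrastive (NT-Xent) loss over two augmented views of each image; a hedged PyTorch sketch, whose formulation may differ in details from the notebook's:

```python
import torch
import torch.nn.functional as F

def simclr_nt_xent_loss(z1, z2, tau=0.5):
    # z1, z2: (N, D) projection-head outputs for two augmented views of N images.
    N = z1.shape[0]
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)        # (2N, D) unit vectors
    sim = (z @ z.T) / tau                                      # cosine similarity / temperature
    # Never treat a sample as its own negative.
    self_mask = torch.eye(2 * N, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float('-inf'))
    # The positive for row i is its augmented partner: i+N (first half) or i-N (second half).
    targets = torch.cat([torch.arange(N, 2 * N), torch.arange(0, N)]).to(z.device)
    return F.cross_entropy(sim, targets)
```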
The notebook LSTM_Captioning.ipynb will walk you through the implementation of Long Short-Term Memory (LSTM) RNNs and apply them to image captioning on COCO.
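A single LSTM timestep, with the input, forget, and output gates and the candidate cell update computed from one stacked affine transform; a NumPy sketch with my own naming:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step_forward(x, prev_h, prev_c, Wx, Wh, b):
    # x: (N, D), prev_h/prev_c: (N, H), Wx: (D, 4H), Wh: (H, 4H), b: (4H,)
    H = prev_h.shape[1]
    a = x @ Wx + prev_h @ Wh + b                 # (N, 4H) stacked gate activations
    i = sigmoid(a[:, 0*H:1*H])                   # input gate
    f = sigmoid(a[:, 1*H:2*H])                   # forget gate
    o = sigmoid(a[:, 2*H:3*H])                   # output gate
    g = np.tanh(a[:, 3*H:4*H])                   # candidate cell update
    next_c = f * prev_c + i * g                  # new cell state
    next_h = o * np.tanh(next_c)                 # new hidden state
    return next_h, next_c
```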
Reference: hanlulu1998/CS231n: Stanford University CS231n 2016 winter assignments (github.com)