Classic CNNs - LeNet-5, AlexNet, VGG-16

Original Source: https://www.coursera.org/specializations/deep-learning

Note that before being fed into the fully connected layers, the 3D activation volume is flattened into a 1D vector.

LeNet-5

LeCun et al., 1998. Gradient-based learning applied to document recognition


  • 60k parameters
  • As we get deeper, $n_H$ and $n_W$ decrease, and $n_c$ increases
  • Structure: conv+pool -> conv+pool … -> fc (see the sketch after this list)
  • Average pooling
  • Sigmoid/tanh activation functions
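A minimal tf.keras sketch of this layout, following the layer sizes from the lecture (32x32 grayscale input); the framework choice and names like `lenet5` are illustrative, not from the source:

```python
from tensorflow import keras
from tensorflow.keras import layers

# LeNet-5 as presented in the lecture: two conv + average-pool stages,
# then flatten and fully connected layers, with tanh activations.
lenet5 = keras.Sequential([
    layers.Input(shape=(32, 32, 1)),
    layers.Conv2D(6, kernel_size=5, activation="tanh"),   # -> 28x28x6
    layers.AveragePooling2D(pool_size=2, strides=2),      # -> 14x14x6
    layers.Conv2D(16, kernel_size=5, activation="tanh"),  # -> 10x10x16
    layers.AveragePooling2D(pool_size=2, strides=2),      # -> 5x5x16
    layers.Flatten(),                                     # 3D volume -> 1D vector (400)
    layers.Dense(120, activation="tanh"),
    layers.Dense(84, activation="tanh"),
    layers.Dense(10, activation="softmax"),
])
lenet5.summary()  # ~60k parameters in total
```

Note how $n_H$ and $n_W$ shrink (32 -> 28 -> 14 -> 10 -> 5) while $n_c$ grows (1 -> 6 -> 16), matching the pattern in the list above.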

AlexNet

Krizhevsky et al., 2012. ImageNet classification with deep convolutional neural networks


  • ~60M parameters
  • Structure similar to LeNet-5, but much bigger (see the sketch after this list)
  • Max pooling
  • Same padding
  • ReLU activation function
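A corresponding sketch of AlexNet, again assuming tf.keras; it follows the layer dimensions from the lecture and omits details of the original paper such as local response normalization and the two-GPU split:

```python
from tensorflow import keras
from tensorflow.keras import layers

# AlexNet: same conv+pool -> fc pattern as LeNet-5, but much bigger,
# with max pooling, same padding on the later convs, and ReLU activations.
alexnet = keras.Sequential([
    layers.Input(shape=(227, 227, 3)),
    layers.Conv2D(96, kernel_size=11, strides=4, activation="relu"),       # -> 55x55x96
    layers.MaxPooling2D(pool_size=3, strides=2),                           # -> 27x27x96
    layers.Conv2D(256, kernel_size=5, padding="same", activation="relu"),  # -> 27x27x256
    layers.MaxPooling2D(pool_size=3, strides=2),                           # -> 13x13x256
    layers.Conv2D(384, kernel_size=3, padding="same", activation="relu"),
    layers.Conv2D(384, kernel_size=3, padding="same", activation="relu"),
    layers.Conv2D(256, kernel_size=3, padding="same", activation="relu"),
    layers.MaxPooling2D(pool_size=3, strides=2),                           # -> 6x6x256
    layers.Flatten(),                                                      # -> 9216
    layers.Dense(4096, activation="relu"),
    layers.Dense(4096, activation="relu"),
    layers.Dense(1000, activation="softmax"),                              # 1000 ImageNet classes
])
alexnet.summary()  # ~60M parameters, most of them in the dense layers
```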

VGG-16

Simonyan & Zisserman 2015. Very deep convolutional networks for large-scale image recognition


  • ~138M parameters
  • Structure similar to AlexNet, but uses multiple 3x3 convs before each pool (see the sketch after this list)
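VGG-16's regular conv-conv-pool structure is easy to express with a small helper; this sketch assumes tf.keras, and `vgg_block` is an illustrative helper, not a library function:

```python
from tensorflow import keras
from tensorflow.keras import layers

def vgg_block(n_convs, n_filters):
    """n_convs 3x3 same-padded ReLU convs followed by a 2x2 max pool."""
    return [layers.Conv2D(n_filters, kernel_size=3, padding="same",
                          activation="relu") for _ in range(n_convs)] + \
           [layers.MaxPooling2D(pool_size=2, strides=2)]

# Each block halves n_H and n_W while n_c grows (up to 512).
vgg16 = keras.Sequential(
    [layers.Input(shape=(224, 224, 3))]
    + vgg_block(2, 64)     # -> 112x112x64
    + vgg_block(2, 128)    # -> 56x56x128
    + vgg_block(3, 256)    # -> 28x28x256
    + vgg_block(3, 512)    # -> 14x14x512
    + vgg_block(3, 512)    # -> 7x7x512
    + [layers.Flatten(),
       layers.Dense(4096, activation="relu"),
       layers.Dense(4096, activation="relu"),
       layers.Dense(1000, activation="softmax")]
)
vgg16.summary()  # ~138M parameters
```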
