Keras VGG implementation for CIFAR-10 classification

What is Keras?

"Keras is an open source neural network library written in Python and capable of running on top of either TensorFlow, CNTK or Theano.

Use Keras if you need a deep learning library that:

  • Allows for easy and fast prototyping
  • Supports both convolutional networks and recurrent networks, as well as combinations of the two
  • Runs seamlessly on CPU and GPU

Keras is compatible with Python 2.7-3.5"[1].

Since September 2016, Keras has been the second-fastest growing Deep Learning framework after Google's TensorFlow, and the third largest after TensorFlow and Caffe [2].

What is Deep Learning?

"Deep Learning is the application to learning tasks of artificial neural networks(ANNs) that contain more than one hidden layer. Deep learning is part of Machine Learning methods based on learning data representations. Learning can be supervised, parially supervised or unsupervised[3]."

What is a VGG model?

VGG is a family of deep convolutional neural network architectures introduced by Simonyan and Zisserman of the University of Oxford's Visual Geometry Group in the paper "Very Deep Convolutional Networks for Large-Scale Image Recognition" (2014). VGG-16 uses 16 weight layers: 13 convolutional layers with small 3x3 filters arranged in five blocks separated by 2x2 max pooling, followed by 3 fully connected layers. This project adapts the VGG-16 layout to the 32x32 CIFAR-10 images and a 10-class softmax output, as shown in the architecture table below.
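
Keras also ships a ready-made VGG-16 pretrained on ImageNet in keras.applications. It is not used in this project, but loading it is a quick way to compare against the from-scratch network defined below:

from keras.applications.vgg16 import VGG16

# Load the reference VGG-16 with ImageNet weights and its original 224x224 classifier head.
pretrained = VGG16(weights='imagenet')
pretrained.summary()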

What will you learn?

You will learn:

  • What the Keras library is and how to use it
  • What Deep Learning is
  • How to use ready-made datasets (see the loading sketch after this list)
  • What Convolutional Neural Networks (CNNs) are
  • How to build a Convolutional Neural Network (CNN) step by step
  • How the results of different models compare
  • What supervised and unsupervised learning are
  • Basics of Machine Learning
  • Introduction to Artificial Intelligence (AI)
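
As promised in the list above, CIFAR-10 is available in Keras as a ready-made dataset. A minimal sketch of loading and preparing it, using the same names (x_train, y_train, x_test, y_test, num_classes) that the training code below relies on; the exact preprocessing in vgg16.py may differ:

from keras.datasets import cifar10
from keras.utils import to_categorical

num_classes = 10

# CIFAR-10: 50,000 training and 10,000 test images of size 32x32x3 in 10 classes.
(x_train, y_train), (x_test, y_test) = cifar10.load_data()

# Scale pixels to [0, 1] and one-hot encode the labels,
# as required by the categorical_crossentropy loss used later.
x_train = x_train.astype('float32') / 255.0
x_test = x_test.astype('float32') / 255.0
y_train = to_categorical(y_train, num_classes)
y_test = to_categorical(y_test, num_classes)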

Project structure

  • vgg16.py - a simple example of a VGG-16 neural network
  • README.md - description of this project

Convolutional Neural Network

VGG-16 neural network

Network Architecture

OPERATION           DATA DIMENSIONS   WEIGHTS(N)   WEIGHTS(%)

               Input   #####      3   32   32
              Conv2D    \|/  -------------------      1792     0.0%
                relu   #####     64   32   32
              Conv2D    \|/  -------------------     36928     0.1%
                relu   #####     64   32   32
        MaxPooling2D   Y max -------------------         0     0.0%
                       #####     64   16   16
              Conv2D    \|/  -------------------     73856     0.2%
                relu   #####    128   16   16
              Conv2D    \|/  -------------------    147584     0.4%
                relu   #####    128   16   16
        MaxPooling2D   Y max -------------------         0     0.0%
                       #####    128    8    8
              Conv2D    \|/  -------------------    295168     0.9%
                relu   #####    256    8    8
              Conv2D    \|/  -------------------    590080     1.8%
                relu   #####    256    8    8
              Conv2D    \|/  -------------------    590080     1.8%
                relu   #####    256    8    8
        MaxPooling2D   Y max -------------------         0     0.0%
                       #####    256    4    4
              Conv2D    \|/  -------------------   1180160     3.5%
                relu   #####    512    4    4
              Conv2D    \|/  -------------------   2359808     7.0%
                relu   #####    512    4    4
              Conv2D    \|/  -------------------   2359808     7.0%
                relu   #####    512    4    4
        MaxPooling2D   Y max -------------------         0     0.0%
                       #####    512    2    2
              Conv2D    \|/  -------------------   2359808     7.0%
                relu   #####    512    2    2
              Conv2D    \|/  -------------------   2359808     7.0%
                relu   #####    512    2    2
              Conv2D    \|/  -------------------   2359808     7.0%
                relu   #####    512    2    2
        MaxPooling2D   Y max -------------------         0     0.0%
                       #####    512    1    1
             Flatten   ||||| -------------------         0     0.0%
                       #####         512
               Dense   XXXXX -------------------   2101248     6.2%
                relu   #####        4096
             Dropout    | || -------------------         0     0.0%
                       #####        4096
               Dense   XXXXX -------------------  16781312    49.9%
                relu   #####        4096
             Dropout    | || -------------------         0     0.0%
                       #####        4096
               Dense   XXXXX -------------------     40970     0.1%
             softmax   #####          10
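
The WEIGHTS(N) column can be checked by hand: a Conv2D layer has kernel_height x kernel_width x input_channels x filters weights, plus one bias per filter. A quick sanity check of the first two convolutional layers in the table:

# Parameter counts for the first two 3x3 convolutions of block 1.
print(3 * 3 * 3 * 64 + 64)    # 1792  (3 input channels -> 64 filters)
print(3 * 3 * 64 * 64 + 64)   # 36928 (64 channels -> 64 filters)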

Model

model = Sequential()

# Block 1: two 3x3 convolutions with 64 filters, followed by 2x2 max pooling.
model.add(Conv2D(64, (3, 3), padding='same', input_shape=x_train.shape[1:], name='block1_conv1'))
if BATCH_NORM: model.add(BatchNormalization())
model.add(Activation('relu'))

model.add(Conv2D(64, (3, 3), padding='same', name='block1_conv2'))
if BATCH_NORM: model.add(BatchNormalization())
model.add(Activation('relu'))

model.add(MaxPooling2D((2, 2), strides=(2, 2), name='block1_pool'))

# Block 2: two 3x3 convolutions with 128 filters, followed by 2x2 max pooling.
model.add(Conv2D(128, (3, 3), padding='same', name='block2_conv1'))
if BATCH_NORM: model.add(BatchNormalization())
model.add(Activation('relu'))

model.add(Conv2D(128, (3, 3), padding='same', name='block2_conv2'))
if BATCH_NORM: model.add(BatchNormalization())
model.add(Activation('relu'))

model.add(MaxPooling2D((2, 2), strides=(2, 2), name='block2_pool'))

# Block 3: three 3x3 convolutions with 256 filters, followed by 2x2 max pooling.
model.add(Conv2D(256, (3, 3), padding='same', name='block3_conv1'))
if BATCH_NORM: model.add(BatchNormalization())
model.add(Activation('relu'))

model.add(Conv2D(256, (3, 3), padding='same', name='block3_conv2'))
if BATCH_NORM: model.add(BatchNormalization())
model.add(Activation('relu'))

model.add(Conv2D(256, (3, 3), padding='same', name='block3_conv3'))
if BATCH_NORM: model.add(BatchNormalization())
model.add(Activation('relu'))

model.add(MaxPooling2D((2, 2), strides=(2, 2), name='block3_pool'))

# Block 4: three 3x3 convolutions with 512 filters, followed by 2x2 max pooling.
model.add(Conv2D(512, (3, 3), padding='same', name='block4_conv1'))
if BATCH_NORM: model.add(BatchNormalization())
model.add(Activation('relu'))

model.add(Conv2D(512, (3, 3), padding='same', name='block4_conv2'))
if BATCH_NORM: model.add(BatchNormalization())
model.add(Activation('relu'))

model.add(Conv2D(512, (3, 3), padding='same', name='block4_conv3'))
if BATCH_NORM: model.add(BatchNormalization())
model.add(Activation('relu'))

model.add(MaxPooling2D((2, 2), strides=(2, 2), name='block4_pool'))

# Block 5: three more 3x3 convolutions with 512 filters, followed by 2x2 max pooling.
model.add(Conv2D(512, (3, 3), padding='same', name='block5_conv1'))
if BATCH_NORM: model.add(BatchNormalization())
model.add(Activation('relu'))

model.add(Conv2D(512, (3, 3), padding='same', name='block5_conv2'))
if BATCH_NORM: model.add(BatchNormalization())
model.add(Activation('relu'))

model.add(Conv2D(512, (3, 3), padding='same', name='block5_conv3'))
if BATCH_NORM: model.add(BatchNormalization())
model.add(Activation('relu'))

model.add(MaxPooling2D((2, 2), strides=(2, 2), name='block5_pool'))

# Classifier head: flatten, two 4096-unit fully connected layers with dropout,
# and a 10-way softmax output.
model.add(Flatten())

model.add(Dense(4096))
if BATCH_NORM: model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dropout(0.5))

model.add(Dense(4096, name='fc2'))
if BATCH_NORM: model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dropout(0.5))

model.add(Dense(num_classes))
if BATCH_NORM: model.add(BatchNormalization())
model.add(Activation('softmax'))

sgd = SGD(lr=0.0005, decay=0, nesterov=True)

Compile model (the snippets above form the body of the base_model() function in vgg16.py, which compiles the network and returns it):

model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])
return model


cnn_n = base_model()
cnn_n.summary()

Fit model:

cnn = cnn_n.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, validation_data=(x_test, y_test), shuffle=True)
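
The repository's own evaluation code is not shown here, but a confusion matrix like the one in the Results section can be produced with a short sketch such as the following (scikit-learn is assumed here and is not a stated project dependency):

import numpy as np
from sklearn.metrics import confusion_matrix

# Overall test accuracy.
scores = cnn_n.evaluate(x_test, y_test, verbose=0)
print('Test accuracy: %.1f%%' % (scores[1] * 100))

# Rows = true classes, columns = predicted classes.
y_pred = np.argmax(cnn_n.predict(x_test), axis=1)
y_true = np.argmax(y_test, axis=1)
print(confusion_matrix(y_true, y_pred))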

Results:

All results are for 50k iterations with learning rate 0.0005. The neural networks were trained on 16 cores and 16 GB RAM on plon.io.

  • epochs = 10, accuracy = 10.0%


Confusion matrix result:

[[   0    0    0    0    0    0    0    0    0 1000]
 [   0    0    0    0    0    0    0    0    0 1000]
 [   0    0    0    0    0    0    0    0    0 1000]
 [   0    0    0    0    0    0    0    0    0 1000]
 [   0    0    0    0    0    0    0    0    0 1000]
 [   0    0    0    0    0    0    0    0    0 1000]
 [   0    0    0    0    0    0    0    0    0 1000]
 [   0    0    0    0    0    0    0    0    0 1000]
 [   0    0    0    0    0    0    0    0    0 1000]
 [   0    0    0    0    0    0    0    0    0 1000]]

With every test image assigned to a single class, and 1000 test images per class in CIFAR-10, this corresponds to the reported accuracy of 10.0%.

Time of learning process: xxx


Resources

Grab the code or run the project in an online IDE.

You can also check out my other Keras CIFAR-10 classification project, which uses 4- and 6-layer neural networks.
