Zk1001 February 2016

Low accuracy with change to TensorFlow Cifar10 example

I am trying to modify the network structure of the CIFAR-10 example in TensorFlow. Specifically, I added another convolution layer (conv12) after the first convolution layer (conv1). No matter how I set the filter size (I tried 1x1, 3x3, and 5x5), and whether or not I use weight decay, adding the new layer drops the accuracy to below 10%. That is equivalent to random guessing on CIFAR-10, since there are 10 classes.

The code structure is as follows. I don't modify any other part of the CIFAR-10 code except setting the input image size to 48x48 (instead of 24x24); I assume the input size should not matter.

Note that conv12 is a depthwise convolution layer, because I want to add just a linear layer after conv1 in order to minimize the change to the original code. I expected the accuracy to stay similar to the original version, but it drops to around 10%. (I also tried a normal convolution layer, but that didn't work either.)

  with tf.variable_scope('conv1') as scope:
    kernel1 = _variable_with_weight_decay('weights', shape=[5, 5, 3, 64],
                                         stddev=1e-4, wd=0.0)
    conv_1 = tf.nn.conv2d(images, kernel1, [1, 1, 1, 1], padding='SAME')
    biases1 = _variable_on_cpu('biases', [64], tf.constant_initializer(0.0))
    bias1 = tf.nn.bias_add(conv_1, biases1)
    conv1 = tf.nn.relu(bias1, name=scope.name)
    _activation_summary(conv1)


  with tf.variable_scope('conv12') as scope:
    kernel12 = _variable_with_weight_decay('weights', shape=[1, 1, 64, 1],
                                         stddev=1e-4, wd=0.0)
    #conv_12 = tf.nn.conv2d(conv1, kernel12, [1, 1, 1, 1], padding='SAME')
    conv_12 = tf.nn.depthwise_conv2d(conv1, kernel12, [1, 1, 1, 1], padding='SAME')
    biases12 = _variable_on_cpu('biases', [64], tf.constant_initializer(0.0))
    bias12 = tf.nn.bias_add(conv_12, biases12)        
    conv12 = tf.nn.relu(bias12)
    _activation_summary(conv12)

  pool1 = tf.nn.max_pool(conv12, ksize=[1, 3, 3, 1], strides=[1, 2, 2, 1],
                         padding='SAME', name='pool1')

Answers


dga February 2016

Your second convolution:

kernel12 = _variable_with_weight_decay('weights', shape=[1, 1, 64, 1], ...)

is taking the depth-64 output of the previous layer and squeezing it down to a depth-1 output. That is unlikely to match whatever code follows it. If that code is conv2 from the TensorFlow CIFAR-10 example, it definitely isn't going to work well, because that layer expects a depth-64 input.

Perhaps you really wanted shape=[1, 1, 64, 64], which would simply add an extra "inception-style" 1x1 convolutional layer into your model?
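To see why the kernel's last dimension controls the output depth, note that a 1x1 convolution reduces to a matrix multiply over the channel axis. Here is a minimal NumPy sketch of the shape arithmetic (the helper names `conv2d_1x1` and `depthwise_1x1` are illustrative, not TensorFlow APIs; bias and stride are omitted):

```python
import numpy as np

def conv2d_1x1(x, k):
    # Standard 1x1 conv: x is (H, W, C_in), k is (1, 1, C_in, C_out).
    # Each output channel mixes all input channels -> (H, W, C_out).
    return np.einsum('hwc,co->hwo', x, k[0, 0])

def depthwise_1x1(x, k):
    # Depthwise 1x1 conv: x is (H, W, C), k is (1, 1, C, multiplier).
    # Each input channel is scaled independently -> (H, W, C * multiplier).
    h, w, c = x.shape
    out = x[..., :, None] * k[0, 0]          # (H, W, C, multiplier)
    return out.reshape(h, w, c * k.shape[3])

x = np.random.randn(48, 48, 64)
print(conv2d_1x1(x, np.zeros((1, 1, 64, 1))).shape)     # (48, 48, 1):  depth collapses
print(depthwise_1x1(x, np.zeros((1, 1, 64, 1))).shape)  # (48, 48, 64): depth preserved
print(conv2d_1x1(x, np.zeros((1, 1, 64, 64))).shape)    # (48, 48, 64): depth preserved
```

So with `tf.nn.conv2d`, a kernel of shape [1, 1, 64, 1] collapses the feature map to depth 1, while shape [1, 1, 64, 64] keeps the depth-64 output that the downstream layers expect.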

Post Status

Asked in February 2016
Viewed 1,847 times
Voted 9
1 answer
