2.3 Convolutional Neural Networks - CNNs in Practice

4.1.3 Convolutional Neural Networks in Practice

  • Image classification with a neural network

    TensorFlow's higher-level wrapper functions can simplify the process of defining the model. Here is the manually defined output layer:

    # (3072, 10)
    w = tf.get_variable('w', [x.get_shape()[-1], 10],
                        initializer=tf.random_normal_initializer(0, 1))
    # (10, )
    b = tf.get_variable('b', [10],
                        initializer=tf.constant_initializer(0.0))
    # [None, 3072] * [3072, 10] = [None, 10]
    y_ = tf.matmul(x, w) + b
    

    The code above can be replaced with:

    y_ = tf.layers.dense(x, 10)
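
    Under the hood, tf.layers.dense creates the kernel and bias variables itself. If you want the same initialization as the manual version, the initializers can be passed in explicitly (a sketch; these arguments are optional and not part of the original code):

    # dense = matmul(x, kernel) + bias, with both variables created internally
    y_ = tf.layers.dense(x, 10,
                         kernel_initializer=tf.random_normal_initializer(0, 1),
                         bias_initializer=tf.constant_initializer(0.0))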
    

    The dense function is the fully connected layer API, and we can use it to stack more layers.

    The code below builds a network with three hidden layers: the first two have 100 neurons each, the third has 50, and all of them use the ReLU activation.

    # activation selects the activation function
    hidden1 = tf.layers.dense(x, 100, activation=tf.nn.relu)
    hidden2 = tf.layers.dense(hidden1, 100, activation=tf.nn.relu)
    hidden3 = tf.layers.dense(hidden2, 50, activation=tf.nn.relu)
    
    y_ = tf.layers.dense(hidden3, 10)
    

    After modifying the code and re-running the test, the model's accuracy reaches roughly 50%:

    [Train] Step: 500, loss: 2.14457, acc: 0.25000
    [Train] Step: 1000, loss: 1.38850, acc: 0.45000
    [Train] Step: 1500, loss: 1.48442, acc: 0.45000
    [Train] Step: 2000, loss: 1.30306, acc: 0.70000
    [Train] Step: 2500, loss: 1.81453, acc: 0.35000
    [Train] Step: 3000, loss: 1.25715, acc: 0.55000
    [Train] Step: 3500, loss: 1.24998, acc: 0.55000
    [Train] Step: 4000, loss: 1.52799, acc: 0.45000
    [Train] Step: 4500, loss: 1.40961, acc: 0.40000
    [Train] Step: 5000, loss: 1.29267, acc: 0.65000
    (10000, 3072)
    (10000,)
    [Test ] Step: 5000, acc: 0.46650
    [Train] Step: 5500, loss: 1.61286, acc: 0.20000
    [Train] Step: 6000, loss: 1.14901, acc: 0.45000
    [Train] Step: 6500, loss: 1.59980, acc: 0.60000
    [Train] Step: 7000, loss: 1.80693, acc: 0.40000
    [Train] Step: 7500, loss: 1.60266, acc: 0.45000
    [Train] Step: 8000, loss: 1.46613, acc: 0.65000
    [Train] Step: 8500, loss: 1.77019, acc: 0.45000
    [Train] Step: 9000, loss: 1.57591, acc: 0.50000
    [Train] Step: 9500, loss: 0.96180, acc: 0.75000
    [Train] Step: 10000, loss: 1.08688, acc: 0.70000
    (10000, 3072)
    (10000,)
    [Test ] Step: 10000, acc: 0.49750
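
    The loss, accuracy metric, and training loop that produce logs like the ones above are not shown in this section. A minimal sketch of what they could look like (the placeholders x and y, the batch size, and the train_data/test_data batch helpers are assumptions, not the author's exact code):

    import numpy as np
    import tensorflow as tf

    # assumed placeholders, defined before the model code above:
    # flattened 32*32*3 CIFAR-10 images and integer class labels
    x = tf.placeholder(tf.float32, [None, 3072])
    y = tf.placeholder(tf.int64, [None])

    # ... model definition producing the logits y_ goes here ...

    loss = tf.losses.sparse_softmax_cross_entropy(labels=y, logits=y_)
    predict = tf.argmax(y_, 1)
    accuracy = tf.reduce_mean(tf.cast(tf.equal(predict, y), tf.float64))

    with tf.name_scope('train_op'):
        train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)

    batch_size = 20
    train_steps = 10000

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for i in range(train_steps):
            batch_data, batch_labels = train_data.next_batch(batch_size)
            loss_val, acc_val, _ = sess.run(
                [loss, accuracy, train_op],
                feed_dict={x: batch_data, y: batch_labels})
            if (i + 1) % 500 == 0:
                print('[Train] Step: %d, loss: %4.5f, acc: %4.5f'
                      % (i + 1, loss_val, acc_val))
            if (i + 1) % 5000 == 0:
                # evaluate on the test set in batches and average the accuracy
                test_accs = []
                for _ in range(100):
                    test_batch_data, test_batch_labels = test_data.next_batch(batch_size)
                    test_accs.append(sess.run(
                        accuracy,
                        feed_dict={x: test_batch_data, y: test_batch_labels}))
                print('[Test ] Step: %d, acc: %4.5f' % (i + 1, np.mean(test_accs)))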
    
  • Image classification with a convolutional neural network

    Fully connected layers have an API, and so do pooling and convolution.

    # conv1: feature maps (the output image of this convolution)
    conv1 = tf.layers.conv2d(x_image,
                             32, # output channel number
                             (3,3), # kernel size
                             padding = 'same', # 'same' keeps the output the same size as the input; 'valid' applies no padding
                             activation = tf.nn.relu,
                             name = 'conv1'
                             )
    # 16*16
    pooling1 = tf.layers.max_pooling2d(conv1,
                                       (2, 2), # kernel size
                                       (2, 2), # stride
                                       name = 'pool1' # name labels this layer so the printed graph is easier to read
                                      )
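
    Note that x_image, the input fed to conv1, is not defined in this section. Presumably it is the flat 3072-dimensional CIFAR-10 vector x reshaped back into an image tensor; a sketch under that assumption (CIFAR-10 stores pixels channel-first, so a transpose into the NHWC layout that conv2d expects is needed):

    # [None, 3072] -> [None, 3, 32, 32] (channel-first, as stored in CIFAR-10)
    x_image = tf.reshape(x, [-1, 3, 32, 32])
    # -> [None, 32, 32, 3] (NHWC layout expected by tf.layers.conv2d)
    x_image = tf.transpose(x_image, perm=[0, 2, 3, 1])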
    

    This gives us one convolutional layer and one pooling layer. We can duplicate the pair twice more, changing the input of each copy, so that we end up with three convolutional layers and three pooling layers.

    Finally, add one fully connected layer at the end.

    conv2 = tf.layers.conv2d(pooling1,
                             32, # output channel number
                             (3,3), # kernel size
                             padding = 'same', # 'same' keeps the output the same size as the input; 'valid' applies no padding
                             activation = tf.nn.relu,
                             name = 'conv2'
                             )
    # 8*8
    pooling2 = tf.layers.max_pooling2d(conv2,
                                       (2, 2), # kernel size
                                       (2, 2), # stride
                                       name = 'pool2' # name labels this layer so the printed graph is easier to read
                                      )
    
    conv3 = tf.layers.conv2d(pooling2,
                             32, # output channel number
                             (3,3), # kernel size
                             padding = 'same', # 'same' keeps the output the same size as the input; 'valid' applies no padding
                             activation = tf.nn.relu,
                             name = 'conv3'
                             )
    # 4*4*32
    pooling3 = tf.layers.max_pooling2d(conv3,
                                       (2, 2), # kernel size
                                       (2, 2), # stride
                                       name = 'pool3' # name labels this layer so the printed graph is easier to read
                                      )
    
    # [None, 4*4*32]: flatten the 3-D feature map into a 2-D matrix
    flatten = tf.layers.flatten(pooling3)
    y_ = tf.layers.dense(flatten, 10)
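
    As a quick sanity check of the shape flow (the shapes follow from 'same' padding and 2x2 pooling applied to 32x32 CIFAR-10 inputs):

    print(conv1.get_shape())    # (?, 32, 32, 32) - 'same' padding keeps 32x32
    print(pooling1.get_shape()) # (?, 16, 16, 32) - each pooling halves the size
    print(pooling3.get_shape()) # (?, 4, 4, 32)
    print(flatten.get_shape())  # (?, 512)        - 4*4*32 flattened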
    

    With that, the convolutional network structure is complete. As you can see, once the data preparation and evaluation code are in place, building the network structure with TensorFlow's APIs is quite straightforward.

    Using the convolutional network, after 10,000 training steps the test accuracy reaches close to 70% (69.95%):

    [Train] Step: 500, loss: 1.24817, acc: 0.60000
    [Train] Step: 1000, loss: 1.24423, acc: 0.50000
    [Train] Step: 1500, loss: 1.15608, acc: 0.55000
    [Train] Step: 2000, loss: 0.89077, acc: 0.85000
    [Train] Step: 2500, loss: 0.91770, acc: 0.60000
    [Train] Step: 3000, loss: 1.09620, acc: 0.55000
    [Train] Step: 3500, loss: 0.83352, acc: 0.70000
    [Train] Step: 4000, loss: 1.00452, acc: 0.60000
    [Train] Step: 4500, loss: 1.13865, acc: 0.60000
    [Train] Step: 5000, loss: 0.63163, acc: 0.85000
    (10000, 3072)
    (10000,)
    [Test ] Step: 5000, acc: 0.64850
    [Train] Step: 5500, loss: 1.29329, acc: 0.55000
    [Train] Step: 6000, loss: 1.14539, acc: 0.65000
    [Train] Step: 6500, loss: 0.48069, acc: 0.80000
    [Train] Step: 7000, loss: 1.02633, acc: 0.65000
    [Train] Step: 7500, loss: 0.93267, acc: 0.70000
    [Train] Step: 8000, loss: 0.97426, acc: 0.70000
    [Train] Step: 8500, loss: 0.97432, acc: 0.75000
    [Train] Step: 9000, loss: 0.84112, acc: 0.70000
    [Train] Step: 9500, loss: 0.79695, acc: 0.70000
    [Train] Step: 10000, loss: 0.64198, acc: 0.80000
    (10000, 3072)
    (10000,)
    [Test ] Step: 10000, acc: 0.69950