TFLearn on tensorflow-gpu: Simple MNIST Handwritten Digit Recognition, Part 3

Handwritten Number Recognition with TFLearn and MNIST

# Import Numpy, TensorFlow, TFLearn, and MNIST data
import numpy as np
import tensorflow as tf
import tflearn
import tflearn.datasets.mnist as mnist

Retrieving training and test data

The MNIST data set already contains both training and test data. There are 55,000 data points of training data, and 10,000 points of test data.

Each MNIST data point has:

  1. an image of a handwritten digit and
  2. a corresponding label (a number 0-9 that identifies the image)

We'll call the images, which will be the input to our neural network, X, and their corresponding labels Y.

We're going to want our labels as one-hot vectors, which are vectors that hold mostly 0's and a single 1. It's easiest to see this in an example. As a one-hot vector, the number 0 is represented as [1, 0, 0, 0, 0, 0, 0, 0, 0, 0], and 4 is represented as [0, 0, 0, 0, 1, 0, 0, 0, 0, 0].
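
As a quick illustration (a minimal NumPy sketch, not the loader's own code; mnist.load_data below builds these vectors for us when we pass one_hot=True), converting a label to a one-hot vector looks like this:

def to_one_hot(label, num_classes=10):
    # A vector of zeros with a single 1 at the label's position
    vec = np.zeros(num_classes)
    vec[label] = 1
    return vec

print(to_one_hot(0))  # [1. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
print(to_one_hot(4))  # [0. 0. 0. 0. 1. 0. 0. 0. 0. 0.]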

Flattened data

For this example, we'll be using flattened data or a representation of MNIST images in one dimension rather than two. So, each handwritten number image, which is 28x28 pixels, will be represented as a one dimensional array of 784 pixel values.

Flattening the data throws away information about the 2D structure of the image, but it simplifies our data so that all of the training data can be contained in one array whose shape is [55000, 784]; the first dimension is the number of training images and the second dimension is the number of pixels in each image. This is the kind of data that is easy to analyze using a simple neural network.
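
For instance (a small NumPy sketch, not part of the original notebook), flattening one 28x28 image and recovering its 2D shape looks like this:

# A 28x28 "image" of random pixel values
image_2d = np.random.rand(28, 28)
# Flatten into a 784-element vector, as the loader does for us
image_flat = image_2d.reshape(784)
print(image_flat.shape)                  # (784,)
# The 2D structure comes back with another reshape
print(image_flat.reshape(28, 28).shape)  # (28, 28)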

# Retrieve the training and test data
trainX, trainY, testX, testY = mnist.load_data(one_hot=True)
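
To sanity-check the shapes described above (a quick check; the exact shapes assume the standard TFLearn loader output):

print(trainX.shape, trainY.shape)  # (55000, 784) (55000, 10)
print(testX.shape, testY.shape)    # (10000, 784) (10000, 10)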

Visualizing the data

import matplotlib.pyplot as plt
%matplotlib inline

# Function for displaying a training image by its index in the MNIST set
def show_digit(index):
    label = trainY[index].argmax(axis=0)
    # Reshape 784 array into 28x28 image
    image = trainX[index].reshape([28,28])
    plt.title('Training data, index: %d,  Label: %d' % (index, label))
    plt.imshow(image, cmap='gray_r')
    plt.show()

# Display a training image by its index (here, index 3)
show_digit(3)

Building the network

TFLearn lets you build the network by defining the layers in that network.

For this example, you'll define:

  1. The input layer, which tells the network the number of inputs it should expect for each piece of MNIST data.
  2. Hidden layers, which recognize patterns in data and connect the input to the output layer, and
  3. The output layer, which defines how the network learns and outputs a label for a given image.

Let's start with the input layer; to define the input layer, you'll define the type of data that the network expects. For example,

net = tflearn.input_data([None, 100])

would create a network with 100 inputs. The number of inputs to your network needs to match the size of your data. For this example, we're using 784-element vectors to encode our input data, so we need 784 input units.
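
So for MNIST, the input layer is:

net = tflearn.input_data([None, 784])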

Adding layers

To add new hidden layers, you use

net = tflearn.fully_connected(net, n_units, activation='ReLU')

This adds a fully connected layer where every unit (or node) in the previous layer is connected to every unit in this layer. The first argument, net, is the network you created in the tflearn.input_data call; it designates the input to the hidden layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeatedly calling tflearn.fully_connected(net, n_units).
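
For example (the layer sizes here are illustrative choices, not requirements), two stacked hidden layers look like this:

net = tflearn.fully_connected(net, 128, activation='ReLU')
net = tflearn.fully_connected(net, 20, activation='ReLU')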

Then, to set how you train the network, use:

net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')

Again, this is passing in the network you've been building. The keywords:

  • optimizer sets the training method, here stochastic gradient descent
  • learning_rate is the learning rate
  • loss determines how the network error is calculated; in this example, categorical cross-entropy.

Finally, you put all this together to create the model with tflearn.DNN(net).


Exercise: Below in the build_model() function, you'll put together the network using TFLearn. You get to choose how many layers to use, how many hidden units, etc.

Hint: The final output layer must have 10 output nodes (one for each digit 0-9). It's also recommended to use a softmax activation layer as your final output layer.

# Define the neural network
def build_model():
    # This resets all parameters and variables, leave this here
    tf.reset_default_graph()

    #### Your code ####
    # Include the input layer, hidden layer(s), and set how you want to train the model
    # Input layer: one unit per pixel of the flattened 28x28 image
    net = tflearn.input_data([None, 784])
    # Hidden layers
    net = tflearn.fully_connected(net, 128, activation='ReLU')
    net = tflearn.fully_connected(net, 20, activation='ReLU')
    # Output layer: 10 units (one per digit) with softmax activation
    net = tflearn.fully_connected(net, 10, activation='softmax')
    net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1,
                             loss='categorical_crossentropy')
    # This model assumes that your network is named "net"
    model = tflearn.DNN(net)
    return model

# Build the model
model = build_model()

Training the network

Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1, which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively.

Too few epochs don't effectively train your network, and too many take a long time to execute. Choose wisely!

Training

model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=100, n_epoch=110)

Testing

After you're satisfied with the training output and accuracy, you can then run the network on the test data set to measure its performance! Remember, only do this after you've done the training and are satisfied with the results.

A good result will be higher than 95% accuracy. Some simple models have been known to get up to 99.7% accuracy!

# Compare the labels that our model predicts with the actual labels

# Find the indices of the most confident prediction for each item. That tells us the predicted digit for that sample.
predictions = np.array(model.predict(testX)).argmax(axis=1)

# Calculate the accuracy, which is the percentage of times the predicted labels matched the actual labels
actual = testY.argmax(axis=1)
test_accuracy = np.mean(predictions == actual, axis=0)

# Print out the result
print("Test accuracy: ", test_accuracy)