Notes:
1. Differences between recurrent neural networks and BP (feed-forward) neural networks:
Difference: a recurrent neural network has feedback loops, so it can remember the previous output and feed it back in as part of the next input; a BP neural network has no feedback loops.
Similarity: both suffer from the vanishing-gradient problem: as time passes, the signal from earlier inputs keeps decaying and has less and less influence on the final decision.
With a y = x (identity) activation the gradient does not vanish, but then everything keeps propagating through the network: important information is remembered, and so is unimportant information. What we really want is to remember what matters and forget what does not, and that is why the LSTM network was created (see the small numeric sketch after this item).
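A minimal numeric sketch of the vanishing-gradient idea (the numbers are my own illustrative assumptions, not taken from these notes): backpropagation through time multiplies the error signal by roughly the same factor at every step, so a factor below 1 shrinks it exponentially, while the identity activation y = x keeps the factor at 1 and nothing ever decays (or gets forgotten).
# illustrative only: treat the per-step gradient factor as a single number
steps = 28                       # same number of steps as rows in one MNIST image
factor_saturating = 0.25         # assumed typical derivative of a saturating activation
factor_identity = 1.0            # derivative of y = x
print(factor_saturating ** steps)  # ~1.4e-17: the early signal has effectively vanished
print(factor_identity ** steps)    # 1.0: nothing decays, but nothing is forgotten either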
2. Only one pair of weights and biases (the output layer's) needs to be defined by hand; anything not set explicitly, i.e. the LSTM cell's own internal parameters, is created automatically by TensorFlow (a quick check is sketched below).
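A quick way to see this (my own sketch, assuming TensorFlow 1.x with tf.contrib available; the exact variable names are whatever the library chooses):
# build a tiny graph and list the trainable variables TensorFlow created for us
import tensorflow as tf
from tensorflow.contrib import rnn

x = tf.placeholder(tf.float32, [None, 28, 28])
cell = rnn.BasicLSTMCell(100)
outputs, final_state = tf.nn.dynamic_rnn(cell, x, dtype=tf.float32)

for v in tf.trainable_variables():
    print(v.name, v.shape)
# this should list the cell's automatically created parameters
# (e.g. a kernel and a bias under rnn/basic_lstm_cell/)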
3. Errors encountered while debugging the code:
<ipython-input-8-a4f3de4a35a8> in RNN(X, weights, biases)
     38 inputs = tf.reshape(X,[-1,max_time,n_inputs])
     39 # define the basic LSTM cell
---> 40 lstm_cell = tf.contrib.rnn.core_rnn_cell.BasicLSTMCell(lstm_size)
     41 #lstm_cell = rnn.BasicLSTMCell(lstm_size)
     42 # final_state[0] is the cell state
AttributeError: module 'tensorflow.contrib.rnn' has no attribute 'core_rnn_cell'
Fix:
# TensorFlow 1.0 changed a lot of APIs
# the original code was:
lstm_cell = tf.contrib.rnn.core_rnn_cell.BasicLSTMCell(lstm_size)
# it should be changed to:
from tensorflow.contrib import rnn
lstm_cell = rnn.BasicLSTMCell(lstm_size)
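If adding the extra import is inconvenient, the same cell class can usually also be reached through the contrib namespace directly on 1.x releases (my assumption; the exact path can vary between 1.x versions), given that tensorflow is already imported as tf:
# alternative spelling of the same fix, assuming TensorFlow 1.x
lstm_cell = tf.contrib.rnn.BasicLSTMCell(lstm_size)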
4. The final test accuracy is about 93%, and there is room for further optimization (one possible tweak is sketched below).
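One direction worth trying (my own assumption, not something verified in these notes): the code below applies tf.nn.softmax inside RNN() and then feeds that result to softmax_cross_entropy_with_logits, which applies softmax again. The usual pattern is to return raw logits from the network and let the loss apply softmax exactly once, which may train better:
# return raw logits from RNN() instead of softmax probabilities ...
results = tf.matmul(final_state[1], weights) + biases
# ... and keep the loss unchanged, so softmax is applied exactly once
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=prediction, labels=y))
Other simple knobs to try: training for more epochs, or increasing lstm_size.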
Source code
# coding: utf-8

# In[1]:
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
from tensorflow.contrib import rnn

# In[2]:
# load the dataset
mnist = input_data.read_data_sets("D://MNIST_data", one_hot=True)

# each input image is 28*28
n_inputs = 28      # one row is fed per step, and a row has 28 values
max_time = 28      # 28 rows in total
lstm_size = 100    # number of hidden units
n_classes = 10     # 10 classes
batch_size = 50    # 50 samples per batch
n_batch = mnist.train.num_examples // batch_size  # number of batches

# None means the first dimension can have any length
x = tf.placeholder(tf.float32, [None, 784])
# correct labels
y = tf.placeholder(tf.float32, [None, 10])

# initialize the weights
weights = tf.Variable(tf.truncated_normal([lstm_size, n_classes], stddev=0.1))
# initialize the biases
biases = tf.Variable(tf.constant(0.1, shape=[n_classes]))

# define the RNN network
def RNN(X, weights, biases):
    # inputs = [batch_size, max_time, n_inputs]
    inputs = tf.reshape(X, [-1, max_time, n_inputs])
    # define the basic LSTM cell
    #lstm_cell = tf.contrib.rnn.core_rnn_cell.BasicLSTMCell(lstm_size)
    lstm_cell = rnn.BasicLSTMCell(lstm_size)
    # final_state[0] is the cell state
    # final_state[1] is the hidden state
    outputs, final_state = tf.nn.dynamic_rnn(lstm_cell, inputs, dtype=tf.float32)
    results = tf.nn.softmax(tf.matmul(final_state[1], weights) + biases)
    return results

# compute the RNN's output
prediction = RNN(x, weights, biases)
# loss function
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=prediction, labels=y))
# optimize with AdamOptimizer
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
# store the results in a list of booleans
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(prediction, 1))  # argmax returns the position of the largest value in a 1-D tensor
# compute the accuracy
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))  # cast correct_prediction to float32
# initialization
init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)
    for epoch in range(6):
        for batch in range(n_batch):
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)
            sess.run(train_step, feed_dict={x: batch_xs, y: batch_ys})
        acc = sess.run(accuracy, feed_dict={x: mnist.test.images, y: mnist.test.labels})
        print("Iter " + str(epoch) + ", Testing Accuracy= " + str(acc))

# In[ ]:
Output
Extracting D://MNIST_data\train-images-idx3-ubyte.gz
Extracting D://MNIST_data\train-labels-idx1-ubyte.gz
Extracting D://MNIST_data\t10k-images-idx3-ubyte.gz
Extracting D://MNIST_data\t10k-labels-idx1-ubyte.gz
WARNING:tensorflow:From <ipython-input-7-89efd89d18dd>:52: softmax_cross_entropy_with_logits (from tensorflow.python.ops.nn_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Future major versions of TensorFlow will allow gradients to flow into the labels input on backprop by default.
See tf.nn.softmax_cross_entropy_with_logits_v2.
Iter 0, Testing Accuracy= 0.8148
Iter 1, Testing Accuracy= 0.8737
Iter 2, Testing Accuracy= 0.899
Iter 3, Testing Accuracy= 0.9196
Iter 4, Testing Accuracy= 0.9217
Iter 5, Testing Accuracy= 0.9274