tf.nn.rnn_cell.LSTMCell
Related: tf.contrib.rnn.LSTMCell (an alias of this class) and tf.nn.rnn_cell.BasicLSTMCell (a simplified variant).
See: tf.nn.rnn_cell.LSTMCell
Outputs:
- output: the LSTM cell output h. It differs from the LSTM cell state c in that h is produced by passing the state through an activation (tanh) and multiplying it element-wise by the output gate (a sigmoid); see the sketch after this list. shape: [batch_size, num_units]
- new_state: the LSTM cell state and LSTM output at the current time step, packed as an LSTMStateTuple (c, h), where h is the same tensor as output above. shape: ([batch_size, num_units], [batch_size, num_units])
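To make the relation between the output h and the cell state c concrete, here is a minimal NumPy sketch of a single LSTM step under the standard no-peephole formulation; the weight/bias names and the fused-gate layout are illustrative assumptions, not the TensorFlow implementation itself.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, c_prev, h_prev, W, b):
    """One illustrative LSTM step; returns (h, c)."""
    # One fused matmul produces all four gate pre-activations, [batch, 4 * num_units].
    z = np.concatenate([x, h_prev], axis=1) @ W + b
    i, j, f, o = np.split(z, 4, axis=1)                  # input gate, candidate, forget gate, output gate
    c = sigmoid(f) * c_prev + sigmoid(i) * np.tanh(j)    # new cell state
    h = sigmoid(o) * np.tanh(c)                          # output: tanh(state) gated by sigmoid(o)
    return h, c

# h and c have the same shape, but h is what the cell returns as "output".
batch, dim, units = 10, 300, 128
h, c = lstm_step(np.random.randn(batch, dim), np.random.randn(batch, units),
                 np.random.randn(batch, units), np.random.randn(dim + units, 4 * units),
                 np.zeros(4 * units))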
Example:
import tensorflow as tf

batch_size = 10
embedding_dim = 300
# One batch of inputs for a single time step.
inputs = tf.Variable(tf.random_normal([batch_size, embedding_dim]))
# Previous (c, h) state, each of shape [batch_size, 128].
previous_state = (tf.Variable(tf.random_normal([batch_size, 128])),
                  tf.Variable(tf.random_normal([batch_size, 128])))
lstmcell = tf.nn.rnn_cell.LSTMCell(128)
# Calling the cell runs one time step and returns (output, new_state).
outputs, (c_state, h_state) = lstmcell(inputs, previous_state)
Output:
(<tf.Tensor 'lstm_cell/mul_2:0' shape=(10, 128) dtype=float32>,
LSTMStateTuple(c=<tf.Tensor 'lstm_cell/add_1:0' shape=(10, 128) dtype=float32>, h=<tf.Tensor 'lstm_cell/mul_2:0' shape=(10, 128) dtype=float32>))
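Because the returned output and the h in new_state are literally the same tensor (both are 'lstm_cell/mul_2:0' above), a quick TF1-style session check makes this explicit; the sketch below reuses the tensors from the example:
import numpy as np

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    out_val, c_val, h_val = sess.run([outputs, c_state, h_state])
    print(out_val.shape, c_val.shape)       # (10, 128) (10, 128)
    print(np.array_equal(out_val, h_val))   # True: output is exactly state.h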
tf.nn.rnn_cell.MultiRNNCell
See: tf.nn.rnn_cell.MultiRNNCell
Outputs:
- outputs: the output of the top-most cell for the current time step. shape: [batch_size, cell.output_size]
- states: the state of every layer; for M stacked LSTM layers, a tuple of M LSTMStateTuples.
Example:
import tensorflow as tf

batch_size = 10
# Input for a single time step.
inputs = tf.Variable(tf.random_normal([batch_size, 128]))
# Previous (c, h) state for each of the three layers.
previous_state0 = (tf.random_normal([batch_size, 100]), tf.random_normal([batch_size, 100]))
previous_state1 = (tf.random_normal([batch_size, 200]), tf.random_normal([batch_size, 200]))
previous_state2 = (tf.random_normal([batch_size, 300]), tf.random_normal([batch_size, 300]))
num_units = [100, 200, 300]
# Stack three LSTM layers of different sizes.
cells = [tf.nn.rnn_cell.LSTMCell(num_unit) for num_unit in num_units]
mul_cells = tf.nn.rnn_cell.MultiRNNCell(cells)
# Run one time step through all three layers.
outputs, states = mul_cells(inputs, (previous_state0, previous_state1, previous_state2))
Output:
outputs:
<tf.Tensor 'multi_rnn_cell_1/cell_2/lstm_cell/mul_2:0' shape=(10, 300) dtype=float32>
states:
(LSTMStateTuple(c=<tf.Tensor 'multi_rnn_cell_1/cell_0/lstm_cell/add_1:0' shape=(10, 100) dtype=float32>, h=<tf.Tensor 'multi_rnn_cell_1/cell_0/lstm_cell/mul_2:0' shape=(10, 100) dtype=float32>),
LSTMStateTuple(c=<tf.Tensor 'multi_rnn_cell_1/cell_1/lstm_cell/add_1:0' shape=(10, 200) dtype=float32>, h=<tf.Tensor 'multi_rnn_cell_1/cell_1/lstm_cell/mul_2:0' shape=(10, 200) dtype=float32>),
LSTMStateTuple(c=<tf.Tensor 'multi_rnn_cell_1/cell_2/lstm_cell/add_1:0' shape=(10, 300) dtype=float32>, h=<tf.Tensor 'multi_rnn_cell_1/cell_2/lstm_cell/mul_2:0' shape=(10, 300) dtype=float32>))
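As with the single cell, outputs is the h of the top layer's state (both are 'multi_rnn_cell_1/cell_2/lstm_cell/mul_2:0' above); a short session check reusing the example's tensors:
import numpy as np

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    out_val, top_h = sess.run([outputs, states[-1].h])
    print(np.array_equal(out_val, top_h))   # True: outputs is the top layer's h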
tf.nn.dynamic_rnn
See: tf.nn.dynamic_rnn
Outputs:
- outputs: the output at every time step; with stacked LSTM layers, the output of the top-most layer at every time step. shape: [batch_size, max_time, cell.output_size]
- state: the state at the last time step. For a single LSTM layer it is one LSTMStateTuple with tensors of shape [batch_size, cell.output_size]; for M stacked layers it is a tuple of M LSTMStateTuples. In other words, outputs[:, -1, :] equals the h of the top layer's final state (state[-1].h), as checked in the sketches after the examples below.
Example:
import tensorflow as tf

batch_size = 10
max_time = 20
data = tf.Variable(tf.random_normal([batch_size, max_time, 128]))
# create a BasicRNNCell
rnn_cell = tf.nn.rnn_cell.BasicRNNCell(num_units=128)
# defining the initial state
initial_state = rnn_cell.zero_state(batch_size, dtype=tf.float32)
# 'outputs' is a tensor of shape [batch_size, max_time, cell_state_size]
# 'state' is a tensor of shape [batch_size, cell_state_size]
outputs, state = tf.nn.dynamic_rnn(cell=rnn_cell, inputs=data,
                                   initial_state=initial_state,
                                   dtype=tf.float32)
Output:
outputs:
<tf.Tensor 'rnn_2/transpose_1:0' shape=(10, 20, 128) dtype=float32>
state:
<tf.Tensor 'rnn_2/while/Exit_3:0' shape=(10, 128) dtype=float32>
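For a single BasicRNNCell the returned state is simply the output at the last time step, so the relation stated above can be checked directly; this sketch reuses the example's tensors and assumes no sequence_length was passed:
import numpy as np

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    out_val, state_val = sess.run([outputs, state])
    print(np.array_equal(out_val[:, -1, :], state_val))   # True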
Example with two stacked LSTMCells:
import tensorflow as tf

batch_size = 10
max_time = 20
data = tf.Variable(tf.random_normal([batch_size, max_time, 128]))
# create 2 LSTMCells
rnn_layers = [tf.nn.rnn_cell.LSTMCell(size) for size in [128, 256]]
# create an RNN cell composed sequentially of the two LSTMCells
multi_rnn_cell = tf.nn.rnn_cell.MultiRNNCell(rnn_layers)
# 'outputs' is a tensor of shape [batch_size, max_time, 256]
# 'state' is an N-tuple, where N is the number of LSTMCells, containing an
# LSTMStateTuple for each cell
outputs, state = tf.nn.dynamic_rnn(cell=multi_rnn_cell,
                                   inputs=data,
                                   dtype=tf.float32)
outputs:
<tf.Tensor 'rnn_1/transpose_1:0' shape=(10, 20, 256) dtype=float32>
state:
(LSTMStateTuple(c=<tf.Tensor 'rnn_1/while/Exit_3:0' shape=(10, 128) dtype=float32>, h=<tf.Tensor 'rnn_1/while/Exit_4:0' shape=(10, 128) dtype=float32>),
LSTMStateTuple(c=<tf.Tensor 'rnn_1/while/Exit_5:0' shape=(10, 256) dtype=float32>, h=<tf.Tensor 'rnn_1/while/Exit_6:0' shape=(10, 256) dtype=float32>))
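With stacked cells the same relation holds against the top layer only, i.e. outputs[:, -1, :] equals state[-1].h; a session check reusing the example's tensors:
import numpy as np

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    out_val, top_h = sess.run([outputs, state[-1].h])
    print(np.array_equal(out_val[:, -1, :], top_h))   # True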
tf.nn.bidirectional_dynamic_rnn
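This heading has no body in the source; the sketch below shows a minimal, assumed usage. tf.nn.bidirectional_dynamic_rnn returns (outputs, output_states), where outputs = (output_fw, output_bw), each of shape [batch_size, max_time, num_units], and output_states = (state_fw, state_bw); the cell sizes and the final concatenation here are illustrative choices.
import tensorflow as tf

batch_size = 10
max_time = 20
data = tf.Variable(tf.random_normal([batch_size, max_time, 128]))
cell_fw = tf.nn.rnn_cell.LSTMCell(64)   # forward cell
cell_bw = tf.nn.rnn_cell.LSTMCell(64)   # backward cell
# outputs: (output_fw, output_bw), each [batch_size, max_time, 64]
# states:  (state_fw, state_bw), each an LSTMStateTuple
(outputs_fw, outputs_bw), (state_fw, state_bw) = tf.nn.bidirectional_dynamic_rnn(
    cell_fw=cell_fw, cell_bw=cell_bw, inputs=data, dtype=tf.float32)
# The two directions are commonly concatenated along the feature axis.
outputs = tf.concat([outputs_fw, outputs_bw], axis=-1)   # [batch_size, max_time, 128]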
tf.contrib.seq2seq.dynamic_decode
Outputs:
- final_outputs: contains rnn_output and sample_id, accessible as final_outputs.rnn_output and final_outputs.sample_id respectively.
- final_state: the final decoder state. When the decoder cell is an AttentionWrapper built with alignment_history=True, the alignments can be recovered from it:
alignments = tf.transpose(final_decoder_state.alignment_history.stack(), [1, 2, 0])
- final_sequence_lengths: the length of each decoded sequence. A minimal decoding sketch follows this list.
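A minimal training-time decoding sketch, assuming a plain LSTM decoder with tf.contrib.seq2seq.TrainingHelper and BasicDecoder (no attention wrapper, so alignment_history is not available in this setup); the vocabulary size and input shapes are illustrative:
import tensorflow as tf

batch_size = 10
max_time = 20
embedding_dim = 300
vocab_size = 5000

decoder_inputs = tf.Variable(tf.random_normal([batch_size, max_time, embedding_dim]))
sequence_length = tf.fill([batch_size], max_time)

decoder_cell = tf.nn.rnn_cell.LSTMCell(128)
helper = tf.contrib.seq2seq.TrainingHelper(decoder_inputs, sequence_length)
projection_layer = tf.layers.Dense(vocab_size, use_bias=False)
decoder = tf.contrib.seq2seq.BasicDecoder(
    cell=decoder_cell,
    helper=helper,
    initial_state=decoder_cell.zero_state(batch_size, tf.float32),
    output_layer=projection_layer)

final_outputs, final_state, final_sequence_lengths = tf.contrib.seq2seq.dynamic_decode(decoder)
# final_outputs.rnn_output: [batch_size, max_time, vocab_size] logits
# final_outputs.sample_id:  [batch_size, max_time] argmax token ids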
Source: http://www./content-4-115051.html