I'm a bit confused as to why the dimensions are context_length by context_length. For a bit of context, I don't understand what you're doing in the following lines. Could you explain this to me?

Cheers
The label is the sentence position, represented as a length-20 one-hot vector. The sentence right above the question is label 1, and it is encoded as [1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0].
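For concreteness, here is a minimal sketch of how such position labels could be built; the variable names and batch size are illustrative, not necessarily what this repo uses.

```python
import numpy as np
import tensorflow as tf

context_length = 20   # sentences per context
batch_size = 4        # illustrative only

# Index 0 is the sentence right above the question, so its one-hot row is
# [1, 0, ..., 0], i.e. "label 1" in the description above.
positions = np.arange(context_length)                  # [0, 1, ..., 19]
one_hot = tf.one_hot(positions, depth=context_length)  # [20, 20]

# Repeat for every context in the batch -> [batch_size*20, 20], which lines
# up with the [batch_size*20, 32] sentence embeddings described below.
labels = tf.tile(one_hot, [batch_size, 1])
```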
s_embedded = sentenceLSTM(sentences, real_lens, reuse = reuse)
size: [batch_size*20, 32]
As you know, all 20 sentences in one context pass through the same sentenceLSTM. In TensorFlow it is really inefficient to use a Python for loop to handle the 20 sentences one at a time, so I treated the 20 sentences as part of the batch.
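In other words, the sentence axis is simply folded into the batch axis before the LSTM is called. A rough sketch of that reshape, assuming word-embedded sentences of shape [batch_size, 20, max_len, embed_dim] (the dimensions here are made up):

```python
import tensorflow as tf

batch_size, context_length, max_len, embed_dim = 4, 20, 12, 50

# Word-embedded context sentences for a batch of stories.
sentences = tf.zeros([batch_size, context_length, max_len, embed_dim])

# Fold the 20-sentence axis into the batch axis so one call to the shared
# sentenceLSTM processes every sentence of every context in parallel.
flat = tf.reshape(sentences, [batch_size * context_length, max_len, embed_dim])

# s_embedded = sentenceLSTM(flat, real_lens, reuse=reuse)  # -> [batch_size*20, 32]
```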
c_embedded = tf.concat([s_embedded, labels], axis=1)
size: [batch_size*20, 52]
This tags each sentence embedding with its position label.
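As a quick shape check (32-dimensional sentence embedding + 20-dimensional one-hot label = 52), using dummy tensors:

```python
import tensorflow as tf

batch_size, context_length = 4, 20
s_embedded = tf.zeros([batch_size * context_length, 32])  # sentenceLSTM outputs
labels = tf.zeros([batch_size * context_length, 20])      # one-hot positions

# Concatenate along the feature axis: 32 + 20 = 52.
c_embedded = tf.concat([s_embedded, labels], axis=1)
print(c_embedded.shape)  # (80, 52)
```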
tagged_c_objects is a 20-element list of the embedded sentences.
There is no permutation function in TensorFlow, so I made a 20-element list and built all the combinations with itertools.
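This pairing step is presumably where the context_length-by-context_length shape in the original question comes from: every object is indexed by a sentence position, so a full grid of ordered pairs would be 20 x 20, while itertools.combinations gives the 190 unordered pairs. A rough sketch with illustrative names (not the repo's exact code):

```python
from itertools import combinations

import tensorflow as tf

batch_size, context_length = 4, 20
c_embedded = tf.zeros([batch_size * context_length, 52])

# Back to [batch_size, 20, 52], then split into a Python list of 20 tensors,
# one per sentence position -- the "tagged_c_objects".
per_context = tf.reshape(c_embedded, [batch_size, context_length, 52])
tagged_c_objects = tf.unstack(per_context, num=context_length, axis=1)

# itertools enumerates the sentence-position pairs, since TensorFlow itself
# has no permutation/combination op for this.
object_pairs = [(tagged_c_objects[i], tagged_c_objects[j])
                for i, j in combinations(range(context_length), 2)]  # 190 pairs
```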