Issue
I want to compare the predicted values yp from my neural network in a pairwise fashion, and so I was using (back in my old NumPy implementation):
idx = np.repeat(np.arange(len(yp)), len(yp))
jdx = np.tile(np.arange(len(yp)), len(yp))
s = yp[idx] - yp[jdx]
This basically creates an indexing mesh which I then use: idx = [0, 0, 0, 1, 1, 1, ...] while jdx = [0, 1, 2, 0, 1, 2, ...]. I do not know if there is a simpler way of doing it...
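For comparison, here is a minimal NumPy sketch (the toy values are an illustration, not from the original code) showing that plain broadcasting produces the same flattened pairwise differences as the idx/jdx mesh:
import numpy as np

yp = np.array([1.0, 2.0, 4.0])                  # toy predictions
idx = np.repeat(np.arange(len(yp)), len(yp))    # [0, 0, 0, 1, 1, 1, 2, 2, 2]
jdx = np.tile(np.arange(len(yp)), len(yp))      # [0, 1, 2, 0, 1, 2, 0, 1, 2]
s_mesh = yp[idx] - yp[jdx]                      # pairwise differences via the mesh
s_bcast = (yp[:, None] - yp[None, :]).ravel()   # same result via broadcasting
assert np.array_equal(s_mesh, s_bcast)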
Anyhow, TensorFlow has a tf.tile(), but it seems to be lacking a tf.repeat(). Indexing a tensor v with a NumPy index array does not work:
idx = np.repeat(np.arange(n), n)
v2 = v[idx]
And I get the error:
TypeError: Bad slice index [ 0 0 0 ..., 215 215 215] of type <type 'numpy.ndarray'>
It also does not work to use a TensorFlow constant for the indexing:
idx = tf.constant(np.repeat(np.arange(n), n))
v2 = v[idx]
This gives:
TypeError: Bad slice index Tensor("Const:0", shape=TensorShape([Dimension(46656)]), dtype=int64) of type <class 'tensorflow.python.framework.ops.Tensor'>
The idea is to convert my RankNet implementation to TensorFlow.
Solution
You can achieve the effect of np.repeat() using a combination of tf.tile() and tf.reshape():
idx = tf.range(len(yp))
idx = tf.reshape(idx, [-1, 1]) # Convert to a len(yp) x 1 matrix.
idx = tf.tile(idx, [1, len(yp)]) # Create multiple columns.
idx = tf.reshape(idx, [-1]) # Convert back to a vector.
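As a quick sanity check, here is a minimal sketch of the resulting repeat pattern (the length-3 toy input and the TF 1.x-style session are assumptions for illustration):
import tensorflow as tf

n = 3                                  # static length of yp in this toy example
idx = tf.range(n)
idx = tf.reshape(idx, [-1, 1])         # shape [3, 1]
idx = tf.tile(idx, [1, n])             # shape [3, 3]
idx = tf.reshape(idx, [-1])            # shape [9]
with tf.Session() as sess:
    print(sess.run(idx))               # [0 0 0 1 1 1 2 2 2]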
You can compute jdx simply using tf.tile():
jdx = tf.range(len(yp))
jdx = tf.tile(jdx, [len(yp)])
For the indexing, you could try using tf.gather() to extract non-contiguous slices from the yp tensor:
s = tf.gather(yp, idx) - tf.gather(yp, jdx)
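Putting the pieces together, here is a minimal self-contained sketch (the toy data and the TF 1.x-style session are assumptions for illustration, not part of the original code):
import numpy as np
import tensorflow as tf

yp = tf.constant([1.0, 2.0, 4.0])       # toy predictions
n = 3                                    # static length of yp

idx = tf.reshape(tf.tile(tf.reshape(tf.range(n), [-1, 1]), [1, n]), [-1])  # [0, 0, 0, 1, 1, 1, ...]
jdx = tf.tile(tf.range(n), [n])                                            # [0, 1, 2, 0, 1, 2, ...]
s = tf.gather(yp, idx) - tf.gather(yp, jdx)                                # pairwise differences

with tf.Session() as sess:
    result = sess.run(s)
print(result.reshape(n, n))              # row i, column j holds yp[i] - yp[j]

# Matches the original NumPy mesh-indexing version.
expected = np.array([1.0, 2.0, 4.0])
assert np.allclose(result, np.repeat(expected, n) - np.tile(expected, n))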
Answered By - mrry