Issue
The data type of input_data is an array of numpy.float64, but the code still fails inside the TensorFlow library because it is not a 'double'. I am not sure how to remedy this.
import tensorflow as tf
import numpy as np
input_data = np.random.uniform(low=0.0, high=1.0, size=100)
print("type(input_data):", type(input_data), "type(input_data[0]):", type(input_data[0]))
class ArtificialNeuron(tf.Module):
    def __init__(self):
        self.w = tf.Variable(tf.random.normal(shape=(1, 1)))
        self.b = tf.Variable(tf.zeros(shape=(1,)))

    def __call__(self, x):
        return tf.sigmoid(tf.matmul(x, self.w) + self.b)
neuron = ArtificialNeuron()
# Fails here: InvalidArgumentError: cannot compute MatMul as input #1(zero-based) was expected to be a double tensor but is a float tensor [Op:MatMul] name:
output_data = neuron(input_data)
Solution
The error is thrown because you are mixing float32 and float64 tensors without conversion. By default NumPy uses float64, while TensorFlow uses float32. Normally the higher-level modules perform the conversion, but I think you are using low-level building blocks, so you have to do the conversion yourself.
You can simply test this:
import numpy as np
import tensorflow as tf
x = np.array([[1.0]], dtype=np.float64)
w = tf.zeros(shape=(1,1))
tf.matmul(x, w)
# => InvalidArgumentError: cannot compute MatMul as input #1(zero-based) was expected to be a double tensor but is a float tensor [Op:MatMul]
# Changing np.float64 to np.float32 above, the code works...
So you either have to convert your input to float32 by using np.float32(input_data), or use float64 tensors everywhere (see the sketch below). You might also change the default precision of TensorFlow to float64 as described here: TensorFlow default precision mode?
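As a minimal sketch of the "float64 everywhere" option (assuming the rest of the question's code stays unchanged), the only additions are the explicit dtype arguments when the variables are created:

import tensorflow as tf
import numpy as np

class ArtificialNeuron(tf.Module):
    def __init__(self):
        # Create the weights directly as float64 so they match the NumPy input.
        self.w = tf.Variable(tf.random.normal(shape=(1, 1), dtype=tf.float64))
        self.b = tf.Variable(tf.zeros(shape=(1,), dtype=tf.float64))

    def __call__(self, x):
        return tf.sigmoid(tf.matmul(x, self.w) + self.b)

neuron = ArtificialNeuron()
input_data = np.random.uniform(low=0.0, high=1.0, size=(100, 1))  # float64 by default
output_data = neuron(input_data)  # no dtype mismatch; the output is float64
print(output_data.dtype)  # => <dtype: 'float64'>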
In short, to fix your code, replace input_data in the last line by np.float32(input_data), and size=100 in the definition of input_data by size=(100,1).
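Putting those two changes together, the failing lines become:

# A (100, 1) shape so matmul gets a 2-D input, and a float32 cast at call time.
input_data = np.random.uniform(low=0.0, high=1.0, size=(100, 1))
output_data = neuron(np.float32(input_data))  # matmul now gets two float32 tensors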
Answered By - gabalz