Issue
I am encountering a strange problem in Python, where evaluating a condition involving a np.linalg.norm value behaves unexpectedly. This is my test script:
import numpy as np

def isGoalReached():
    start = np.array([4, 5, 6])
    goal = np.array([17, 18, 19])
    dist = np.linalg.norm(start - goal)
    return (dist < 0.5)

print(isGoalReached())
print(isGoalReached() == 0)
print(isGoalReached() is False)
Output:
False
True
False
If I remove the np.linalg.norm part and rewrite it like so:
import numpy as np

def isGoalReached():
    dist = 123.456
    return (dist < 0.5)

print(isGoalReached())
print(isGoalReached() == 0)
print(isGoalReached() is False)
Output:
False
True
True
What is causing this very strange behavior? I am on Python 3.7.3 (Anaconda).
Solution
It seems np.linalg.norm returns a numpy.float64, and when you compare that against a plain float, the result is a numpy.bool_ rather than the built-in bool.
In [83]: np.bool_?
Init signature: np.bool_(self, /, *args, **kwargs)
Docstring:
Boolean type (True or False), stored as a byte.
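You can confirm the types directly; here is a minimal sketch using the same numbers as in the question:

import numpy as np

dist = np.linalg.norm(np.array([4, 5, 6]) - np.array([17, 18, 19]))
print(type(dist))        # <class 'numpy.float64'>
print(type(dist < 0.5))  # <class 'numpy.bool_'>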
That is why is doesn't work: is tests object identity, and the numpy.bool_ False is a different object from Python's built-in False singleton. If you change the return statement to

return bool(dist < 0.5)

it will work as you expect here.
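Applied to the original function, the fix looks like this (a sketch of the change described above; only the return line differs):

import numpy as np

def isGoalReached():
    start = np.array([4, 5, 6])
    goal = np.array([17, 18, 19])
    dist = np.linalg.norm(start - goal)
    # bool() converts the numpy.bool_ to the built-in bool singleton,
    # so identity checks with "is" behave as expected
    return bool(dist < 0.5)

print(isGoalReached() is False)  # now prints True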
Answered By - Wesley Bowman