Issue
I tried this simple example in Python.
import numpy as np
a = np.array([1,2,3,4])
b = np.array([20])
a + b # broadcasting takes place!
np.dot(a,b) # no broadcasting here?! (raises ValueError: shapes (4,) and (1,) not aligned)
I thought np.dot also uses broadcasting, but it seems it doesn't. I wonder why, i.e. what is the philosophy behind this behavior?
Which operations in NumPy use broadcasting and which do not?
Is there another version of the dot function for the dot product which actually uses broadcasting?
Solution
The reason it doesn't broadcast is that the docs say so. However, that's not a very good, or satisfying, answer to the question, so perhaps I can shed some light on why.
The point of broadcasting is to take operators and apply them pointwise to different shapes of data without the programmer having to explicitly write for loops all the time.
print(a + b)
is way shorter and just as readable as
my_new_list = []
for a_elem, b_elem in zip(a, b):  # manual pointwise loop (assumes a and b have the same length)
    my_new_list.append(a_elem + b_elem)
print(my_new_list)
The reason it works for +, and -, and all of those operators is, and I'm going to borrow some terminology from J here, that they're rank 0. What that means is that, in the absence of any broadcasting rules, + is intended to operate on scalars, i.e. ordinary numbers. The original point of the + operator is to act on numbers, so Numpy comes along and extends that rank 0 behavior to higher ranks, allowing it to work on vectors (rank 1), matrices (rank 2), and tensors (rank 3 and beyond). Again, I'm borrowing J terminology here, but the concept is the same in Numpy.
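As a small illustration of that idea (this example is mine, not part of the original answer), the same rank 0 operator + gets lifted to whichever broadcast-compatible shapes you hand it:
import numpy as np
print(3 + 4)                   # rank 0: plain scalar addition -> 7
v = np.array([1, 2, 3, 4])
print(v + 10)                  # rank 1 + rank 0: the scalar is broadcast -> [11 12 13 14]
m = np.arange(8).reshape(2, 4)
print(m + v)                   # rank 2 + rank 1: v is added to each row of m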
Now, the fundamental difference is that dot doesn't work that way. The dot product function, in Numpy at least, is already special-cased to do different things for different rank arguments. For rank 1 arguments, it performs an inner product, what we usually call a "dot product" in a beginner calculus course. For rank 2 arguments, it acts like matrix multiplication. For higher ranks, it's an appropriate generalization of matrix multiplication that you can read about in the np.dot docs. But the point is that dot already works for all ranks. It's not an atomic operation, so we can't meaningfully broadcast it.
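To see that special-casing concretely, here is a quick sketch of my own (arbitrary shapes, not code from the answer):
import numpy as np
x = np.array([1, 2, 3])
y = np.array([4, 5, 6])
print(np.dot(x, y))            # rank 1 with rank 1: scalar inner product, 32
A = np.ones((2, 3))
B = np.ones((3, 4))
print(np.dot(A, B).shape)      # rank 2 with rank 2: matrix multiplication, (2, 4)
T = np.ones((5, 2, 3))
print(np.dot(T, B).shape)      # higher rank: sum product over the last axis of T
                               # and the second-to-last axis of B, giving (5, 2, 4)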
If dot was specialized to only work on rank 1 vectors, and it only performed the beginner calculus rank 1 inner product, then we would call it a rank 1 operator, and it could be broadcast over higher-rank tensors. So, for instance, this hypothetical dot function, which is designed to work on two arguments, each of shape (n,), could be applied to two arguments of shape (n, m) and (n, m), where the operation would be applied pointwise to each row. But Numpy's dot has different behavior. They decided (and probably rightly so) that dot should handle its own "broadcasting"-like behavior because it can do something smarter than just apply the operation pointwise.
Answered By - Silvio Mayolo