Issue
I want to create a matrix containing the sum of every pairwise combination of elements from two large vectors, using Torch, and ultimately running on a CUDA device.
The best way to describe it is with this (inefficient) code:
import torch

x = torch.tensor([1.1, 2.2, 3.3, 4.4, 5.5])
x_cent = torch.tensor([10.2, 20.2, 100.1])

# Preallocate the result: one row per element of x, one column per element of x_cent
res_matrix = torch.zeros(x.shape[0], x_cent.shape[0])

# Fill the matrix one column at a time: column i is x shifted by x_cent[i]
for i in range(x_cent.shape[0]):
    res_matrix[:, i] = x.add(x_cent[i])

print(res_matrix)
The output of this is:
tensor([[ 11.3000,  21.3000, 101.2000],
        [ 12.4000,  22.4000, 102.3000],
        [ 13.5000,  23.5000, 103.4000],
        [ 14.6000,  24.6000, 104.5000],
        [ 15.7000,  25.7000, 105.6000]])
There may be a standard term for this operation; if someone can point it out, I will edit the question to include it.
Can you suggest a more efficient (vectorised?) approach to this that I could implement using the CUDA device on very large vectors? I'm guessing that this is a very simple question, but I am a beginner with torch.
Thanks!
Solution
Prompted by a comment above, and after looking around, I found a similar post that computes every combination of the addition of two vectors, though it does not produce a matrix.
This can be modified to produce a matrix:
(x.unsqueeze(1) + x_cent.unsqueeze(0))
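For what it's worth, this operation is sometimes called an outer sum, by analogy with the outer product (NumPy exposes it as np.add.outer); in PyTorch it falls out of broadcasting. Below is a minimal sketch of the same computation on the GPU, assuming a CUDA device is available and falling back to the CPU otherwise; the device-selection line is just one common idiom, not part of the original question:

import torch

# Use the GPU if one is available, otherwise fall back to the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

x = torch.tensor([1.1, 2.2, 3.3, 4.4, 5.5], device=device)
x_cent = torch.tensor([10.2, 20.2, 100.1], device=device)

# Shape (5, 1) + shape (3,) broadcasts to shape (5, 3):
# row i, column j holds x[i] + x_cent[j]
res_matrix = x.unsqueeze(1) + x_cent

print(res_matrix)

Note that the explicit x_cent.unsqueeze(0) is optional, since broadcasting inserts the leading dimension automatically. The computation runs on whichever device the tensors live on, so no further changes are needed for CUDA.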
Any better approaches?
Answered By - userX