Issue
Consider the following simple neural net:
import torch
from time import sleep

class CustomNN(torch.nn.Module):
    def __init__(self):
        super(CustomNN, self).__init__()

    def forward(self, x):
        sleep(1)
        return x
I am wondering whether forward() can be called in parallel. Following the official tutorial, I thought the following code would work:
x = torch.rand(10, 5).cuda()
futures = [torch.jit.fork(model, x[i,:]) for i in range(10)]
results = [torch.jit.wait(fut) for fut in futures]
I expected this to run in about 1 second, but it still takes the full 10 seconds. Is there any way to call the model in parallel?
Solution
Unfortunately, you cannot do this from Python. As the torch.jit.fork documentation notes, to serve TorchScript modules with real parallelism you need to build a C++ application with a proper thread pool – see here.
Besides, your code does not show the conversion to TorchScript at all.
You have to wrap the module with torch.jit.script for it to be scripted (or traced) into a ScriptModule:

traced_NN = torch.jit.script(CustomNN())
Even then it won't work, as only PyTorch functions (and not even all of them), Python builtins, and the math module are supported in TorchScript (see here) – time.sleep is not among them.
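As an aside, if the goal is merely to overlap blocking calls like sleep from plain Python (with no TorchScript involved), an ordinary thread pool already does that, because sleep releases the GIL while waiting. This is only a sketch of that idea – slow_forward below is a hypothetical stand-in for your forward(), not PyTorch API:

```python
from concurrent.futures import ThreadPoolExecutor
from time import sleep, perf_counter

def slow_forward(x):
    # Stand-in for a forward() that blocks on sleep/I-O;
    # sleep releases the GIL, so the threads genuinely overlap.
    sleep(0.2)
    return x

start = perf_counter()
with ThreadPoolExecutor(max_workers=10) as pool:
    results = list(pool.map(slow_forward, range(10)))
elapsed = perf_counter() - start
print(f"10 calls finished in {elapsed:.2f}s")  # roughly 0.2s, not 2s
```

Note this helps only when the body of forward() actually releases the GIL; it is not a substitute for the C++ thread-pool serving that torch.jit.fork is designed for.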
Answered By - Ksoksodzhi