Issue
I'm using the Soft Actor-Critic implementation available here for one of my projects, but when I try to run it, I get the following error:
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [256, 1]], which is output 0 of TBackward, is at version 3; expected version 2 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
The error arises during the gradient computation in the sac.py file. I can't see which operation might be in-place. Any help?
The traceback:
Traceback (most recent call last)
<ipython-input-10-c124add9a61d> in <module>()
22 for i in range(updates_per_step):
23 # Update parameters of all the networks
---> 24 critic_1_loss, critic_2_loss, policy_loss, ent_loss, alpha = agent.update_parameters(memory, batch_size, updates)
25 updates += 1
26
2 frames
<ipython-input-7-a2432c4c3767> in update_parameters(self, memory, batch_size, updates)
87
88 self.policy_optim.zero_grad()
---> 89 policy_loss.backward()
90 self.policy_optim.step()
91
/usr/local/lib/python3.6/dist-packages/torch/tensor.py in backward(self, gradient, retain_graph, create_graph)
196 products. Defaults to ``False``.
197 """
--> 198 torch.autograd.backward(self, gradient, retain_graph, create_graph)
199
200 def register_hook(self, hook):
/usr/local/lib/python3.6/dist-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables)
98 Variable._execution_engine.run_backward(
99 tensors, grad_tensors, retain_graph, create_graph,
--> 100 allow_unreachable=True) # allow_unreachable flag
101
102
Solution
Just downgrade PyTorch to any version below 1.5.0 (which is the latest release as of this writing):
pip uninstall torch
pip install torch==1.4.0
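If downgrading is not an option, the hint in the error message itself points to a way of locating the offending operation: enable autograd anomaly detection before the training loop, and the backward pass will report which forward operation produced the tensor that was later modified in place. A minimal sketch (the comment about the update call is just a placeholder, not code from the linked repository):

import torch

# Enable once, before training starts. Anomaly detection slows training,
# so turn it off again after the offending line has been identified.
torch.autograd.set_detect_anomaly(True)

# ... then run agent.update_parameters(...) as usual; the RuntimeError will
# now include a second traceback pointing at the in-place operation.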
Answered By - Saptam