Issue
On Windows 10, if I create a GPU tensor directly, I can successfully release its memory:
import torch
a = torch.zeros(300000000, dtype=torch.int8, device='cuda')
del a
torch.cuda.empty_cache()
But if I create a normal (CPU) tensor and convert it to a GPU tensor, I can no longer release its memory:
import torch
a = torch.zeros(300000000, dtype=torch.int8)
a.cuda()
del a
torch.cuda.empty_cache()
Why is this happening?
Solution
At least on Ubuntu, your script does not release the memory when run in the interactive shell, but it works as expected when run as a script. I suspect a lingering reference is the problem: a.cuda() is not an in-place operation, it returns a new GPU tensor, and in the interactive shell that unassigned result is kept alive (the REPL binds the value of the last expression to _). Assigning the result back to a works in both the interactive shell and as a script.
import torch
a = torch.zeros(300000000, dtype=torch.int8)
a = a.cuda()  # assign the result; .cuda() returns a new tensor
del a
torch.cuda.empty_cache()
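To confirm that the memory is actually released, you can query PyTorch's caching allocator directly. This is a minimal sketch using torch.cuda.memory_allocated() and torch.cuda.memory_reserved(); the exact byte counts may vary with PyTorch version and allocator state:

import torch
a = torch.zeros(300000000, dtype=torch.int8)
a = a.cuda()  # reassign: .cuda() is not in-place
print(torch.cuda.memory_allocated())  # ~300 MB held by a
del a
print(torch.cuda.memory_allocated())  # 0: the tensor's allocation is freed
torch.cuda.empty_cache()
print(torch.cuda.memory_reserved())   # cached blocks returned to the driver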
Answered By - hkchengrex