Issue
This may be a very basic question; my apologies, as I am very new to PyTorch. I am trying to detect whether an image has been manipulated using MantraNet. After running 2-3 inferences I get a CUDA out-of-memory error, and even after restarting the kernel I keep getting the same error. The error is given below:
RuntimeError: CUDA out of memory. Tried to allocate 616.00 MiB (GPU 0; 4.00 GiB total capacity; 1.91 GiB already allocated; 503.14 MiB free; 1.93 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
The 'tried to allocate' amount (here 616.00 MiB) keeps changing. I checked the GPU statistics and memory usage shoots up while I run inference. In TensorFlow I know we can control memory usage by defining an upper limit; is there anything similar in PyTorch that one can try?
Solution
While running inference on a single image, I realized there was no way the image at its original resolution would fit on the GPU I was using. After resizing the image down to a smaller size, I was able to run inference without facing any memory issue.
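A minimal sketch of that resize-before-inference approach is below. It assumes a MantraNet-style model already loaded into a variable named model, and the 512-pixel cap and the file name suspect.jpg are illustrative values, not taken from the original post:

# Sketch: downscale the input so activations fit in a small GPU (e.g. 4 GiB).
# Assumes `model` is an already-loaded MantraNet-style PyTorch model.
import torch
from PIL import Image
from torchvision import transforms

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

MAX_SIDE = 512  # illustrative cap on the longer image side

def load_resized(image_path):
    img = Image.open(image_path).convert("RGB")
    scale = MAX_SIDE / max(img.size)
    if scale < 1.0:  # only downscale, never upscale
        new_size = (round(img.width * scale), round(img.height * scale))
        img = img.resize(new_size, Image.BILINEAR)
    return transforms.ToTensor()(img).unsqueeze(0)  # shape: (1, 3, H, W)

# Running under no_grad() avoids building the autograd graph, which also saves memory.
with torch.no_grad():
    x = load_resized("suspect.jpg").to(device)
    prediction = model(x)

Lowering MAX_SIDE trades detection detail for memory, so the value can be tuned to whatever fits on the available GPU.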
Answered By - arunmenon