Issue
I followed this answer to install PyTorch with CUDA support in my pipenv (running on a Windows machine). My Pipfile
looks like this:
[...]
[[source]]
name = "pytorch"
url = "https://download.pytorch.org/whl/cu118"
verify_ssl = false
[packages]
[...]
torch = {index = "pytorch", version = "==2.1.0"}
torchvision = {index = "pytorch", version = "==0.16.0"}
torchaudio = {index = "pytorch", version = "==2.1.0"}
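For reference, a complete minimal Pipfile along these lines might look as follows. The package versions and the cu118 index mirror the snippet above; the `[[source]]` block for PyPI and the `[requires]` section are assumptions about the elided parts:

```toml
[[source]]
name = "pypi"
url = "https://pypi.org/simple"
verify_ssl = true

[[source]]
name = "pytorch"
url = "https://download.pytorch.org/whl/cu118"
verify_ssl = false

[packages]
torch = {index = "pytorch", version = "==2.1.0"}
torchvision = {index = "pytorch", version = "==0.16.0"}
torchaudio = {index = "pytorch", version = "==2.1.0"}

[requires]
python_version = "3.11"
```

The per-package `index = "pytorch"` key is what tells pipenv to resolve those three packages from the CUDA wheel index rather than from PyPI.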
and when I execute pipenv graph, the output also looks good to me:
(venv) λ pipenv graph
[...]
torchaudio==2.1.0+cu118
└── torch [required: ==2.1.0+cu118, installed: 2.1.0+cu118]
    ├── filelock [required: Any, installed: 3.13.1]
    ├── fsspec [required: Any, installed: 2023.10.0]
    ├── jinja2 [required: Any, installed: 3.1.2]
    │   └── MarkupSafe [required: >=2.0, installed: 2.1.3]
    ├── networkx [required: Any, installed: 3.2.1]
    ├── sympy [required: Any, installed: 1.12]
    │   └── mpmath [required: >=0.19, installed: 1.3.0]
    └── typing-extensions [required: Any, installed: 4.8.0]
torchvision==0.16.0+cu118
├── numpy [required: Any, installed: 1.26.1]
├── pillow [required: >=5.3.0,!=8.3.*, installed: 10.1.0]
├── requests [required: Any, installed: 2.31.0]
│   ├── certifi [required: >=2017.4.17, installed: 2023.7.22]
│   ├── charset-normalizer [required: >=2,<4, installed: 3.3.2]
│   ├── idna [required: >=2.5,<4, installed: 3.4]
│   └── urllib3 [required: >=1.21.1,<3, installed: 2.0.7]
└── torch [required: ==2.1.0+cu118, installed: 2.1.0+cu118]
    ├── filelock [required: Any, installed: 3.13.1]
    ├── fsspec [required: Any, installed: 2023.10.0]
    ├── jinja2 [required: Any, installed: 3.1.2]
    │   └── MarkupSafe [required: >=2.0, installed: 2.1.3]
    ├── networkx [required: Any, installed: 3.2.1]
    ├── sympy [required: Any, installed: 1.12]
    │   └── mpmath [required: >=0.19, installed: 1.3.0]
    └── typing-extensions [required: Any, installed: 4.8.0]
However, the PyTorch in my venv reports that it was not built with CUDA support:
(venv) λ python
Python 3.11.6 (tags/v3.11.6:8b6ee5b, Oct 2 2023, 14:57:12) [MSC v.1935 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.cuda.is_available()
False
>>> torch.cuda.current_device()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\dev\projects\chessmait\venv\Lib\site-packages\torch\cuda\__init__.py", line 769, in current_device
    _lazy_init()
  File "C:\dev\projects\chessmait\venv\Lib\site-packages\torch\cuda\__init__.py", line 289, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
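As a side note, CPU-only and CUDA builds can often be told apart without calling into `torch.cuda` at all, because CUDA wheels from the cu118 index carry a local version tag such as `2.1.0+cu118`, while CPU-only wheels report a bare `2.1.0` or `2.1.0+cpu`. A minimal sketch of that check (`looks_cuda_enabled` is a hypothetical helper; with torch installed you would pass it `torch.__version__`):

```python
# CUDA wheels carry a local version tag like "2.1.0+cu118";
# CPU-only wheels report "2.1.0" or "2.1.0+cpu".
def looks_cuda_enabled(version: str) -> bool:
    """Heuristic: True if the version string carries a CUDA local tag."""
    _, _, local = version.partition("+")
    # "cu118" matches; "cpu" and an empty local tag do not.
    return local.startswith("cu") and not local.startswith("cpu")

print(looks_cuda_enabled("2.1.0+cu118"))  # True  (CUDA build)
print(looks_cuda_enabled("2.1.0"))        # False (CPU-only build)
```

In the situation described here this heuristic would have flagged the mismatch: `pipenv graph` showed `2.1.0+cu118`, but the interpreter that was actually running imported a different, CPU-only installation.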
In a different project on the same machine, I have a conda environment and CUDA is working there, so it is supported by my machine. In the conda environment, these packages are installed:
pytorch 2.1.0 py3.10_cuda11.8_cudnn8_0 pytorch
pytorch-cuda 11.8 h24eeafa_5 pytorch
pytorch-mutex 1.0 cuda pytorch
Furthermore, I checked this answer and my output is:
λ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Tue_Aug_15_22:09:35_Pacific_Daylight_Time_2023
Cuda compilation tools, release 12.2, V12.2.140
Build cuda_12.2.r12.2/compiler.33191640_0
Does anyone have an idea what is wrong / what I can try to get PyTorch running with CUDA support in my venv?
Solution
I solved the problem. The reason was that I had mixed up venv and pipenv: the interpreter I was running came from a stale, hand-created venv rather than from the environment pipenv had installed the CUDA wheels into. After deleting the environment via pipenv --rm and removing the venv directory in the project, I set it up correctly with pipenv:
pipenv install
pipenv shell
Now torch.cuda.is_available() returns True.
Answered By - FeivelFei