Issue
I am working on installing PyTorch from source but am unsure about the specific dependency versions to use for the version of PyTorch I want to install.
In particular, I noticed performance variations depending on the gcc version I use to compile PyTorch. Which compiler should I be using to get the best PyTorch performance?
The TensorFlow docs provide exactly this kind of information; they call it "Tested build configurations": https://www.tensorflow.org/install/source#tested_build_configurations.
Solution
The README.md has instructions to build from source.
If you are installing from source, you will need a C++14 compiler. Also, we highly recommend installing an Anaconda environment. You will get a high-quality BLAS library (MKL) and you get controlled dependency versions regardless of your Linux distro.
Once you have Anaconda installed, here are the instructions.
If you want to compile with CUDA support, install:
- NVIDIA CUDA 9 or above
- NVIDIA cuDNN v7 or above
- Compiler compatible with CUDA
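Once the build finishes, you can confirm which CUDA toolkit and cuDNN versions it was actually compiled against with a quick check from the Python interpreter (a minimal sketch; the exact values will depend on your setup):

    import torch

    # CUDA toolkit version the build was compiled against (None for a CPU-only build)
    print("CUDA:", torch.version.cuda)

    # cuDNN version picked up at build time, e.g. 7605 for cuDNN 7.6.5
    print("cuDNN:", torch.backends.cudnn.version())

    # Whether a CUDA-capable GPU is actually usable at runtime
    print("CUDA available:", torch.cuda.is_available())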
If you want to disable CUDA support, export the environment variable USE_CUDA=0. Other potentially useful environment variables may be found in setup.py.
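The usual approach is simply to export the variable in your shell before running python setup.py develop. Purely as an illustration, the same thing can be driven from Python (a hedged sketch: USE_CUDA comes from the instructions above, while MAX_JOBS is an assumed example of the extra variables documented in setup.py; check the setup.py of your checkout for the full list):

    import os
    import subprocess

    # Assumes the current directory is the root of a PyTorch source checkout.
    env = os.environ.copy()
    env["USE_CUDA"] = "0"   # disable CUDA support, as described above
    env["MAX_JOBS"] = "8"   # (assumption) limit parallel compile jobs; see setup.py for other flags

    # Kick off the normal source build with the modified environment.
    subprocess.run(["python", "setup.py", "develop"], env=env, check=True)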
If you are building for NVIDIA's Jetson platforms (Jetson Nano, TX1, TX2, AGX Xavier), instructions to install PyTorch are available here.
You can find the latest official compiler requirements here.
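To see which compiler, C++ flags, and BLAS library your finished build actually used (which is what matters for the gcc performance question above), you can print the build configuration. A minimal check, assuming a reasonably recent PyTorch release:

    import torch

    # Dump the compile-time configuration: compiler version, CXX flags, BLAS backend (e.g. MKL), CUDA/cuDNN, etc.
    print(torch.__config__.show())

    # Confirm whether MKL was linked in as the BLAS library
    print("MKL available:", torch.backends.mkl.is_available())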
Answered By - Tensorflow Warrior