Building PyTorch 2.2.1 on Windows

Tested: VS 2019 16.11.34 + CUDA 11.8 builds successfully.

VS 2022 17.8.8 fails with build errors.
CUDA 11.6 fails with build errors.

conda create -p E:\miniconda3\envs\pytorchdebug python=3.9
conda activate pytorchdebug
pip install astunparse numpy ninja pyyaml setuptools cmake cffi typing_extensions future six requests dataclasses

#git clone https://github.com/pytorch/pytorch.git
git clone https://gitee.com/veenlee/pytorch.git

cd D:\pytorch
git checkout v2.2.1
git submodule sync
git submodule update --init --recursive


Reference: https://github.com/pytorch/pytorch#from-source


conda install cmake ninja
# Run this command from the PyTorch directory after cloning the source code using the "Get the PyTorch Source" section above
pip install -r requirements.txt

conda install intel::mkl-static intel::mkl-include
conda install -c conda-forge libuv=1.39

"D:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\VC\Auxiliary\Build\vcvars64.bat" x64

rm -rf ./build/

https://github.com/pytorch/pytorch/blob/main/CONTRIBUTING.md#building-on-legacy-code-and-cuda
Release mode is the default:
#python setup.py build
To enable a build with debug symbols:
On the initial build, you can also speed things up with the environment variables DEBUG, USE_DISTRIBUTED, USE_MKLDNN, USE_CUDA, USE_FLASH_ATTENTION, USE_MEM_EFF_ATTENTION, BUILD_TEST, USE_FBGEMM, USE_NNPACK and USE_QNNPACK.
DEBUG=1 will enable debug builds (-g -O0)
REL_WITH_DEB_INFO=1 will enable debug symbols with optimizations (-g -O3)
USE_DISTRIBUTED=0 will disable distributed (c10d, gloo, mpi, etc.) build.
USE_MKLDNN=0 will disable using MKL-DNN.
USE_CUDA=0 will disable compiling CUDA (in case you are developing on something not CUDA related), to save compile time.
BUILD_TEST=0 will disable building C++ test binaries.
USE_FBGEMM=0 will disable using FBGEMM (quantized 8-bit server operators).
USE_NNPACK=0 will disable compiling with NNPACK.
USE_QNNPACK=0 will disable QNNPACK build (quantized 8-bit operators).
USE_XNNPACK=0 will disable compiling with XNNPACK.
USE_FLASH_ATTENTION=0 and USE_MEM_EFF_ATTENTION=0 will disable compiling flash attention and memory efficient kernels, respectively.
For example:
#set DEBUG=1
#set USE_MKLDNN=0
#set USE_FBGEMM=0
#set USE_NNPACK=0
#set USE_QNNPACK=0
#set USE_XNNPACK=0
set REL_WITH_DEB_INFO=1
set USE_CUDA=0
set USE_DISTRIBUTED=0
set BUILD_TEST=0
echo %USE_CUDA%
python setup.py develop
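Once `python setup.py develop` finishes, a quick sanity check confirms the build flags took effect (a minimal sketch; the exact values depend on the flags you set above):

```python
import torch

# Version string of the freshly built package
print(torch.__version__)

# True only for DEBUG=1 builds; REL_WITH_DEB_INFO builds typically report False
print(torch.version.debug)

# Both should be False when built with USE_CUDA=0 and USE_DISTRIBUTED=0
print(torch.cuda.is_available())
print(torch.distributed.is_available())
```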

When using LibTorch, set the C++ language standard to C++17.
Include directories:
D:\pytorch\torch\include
D:\pytorch\torch\include\torch\csrc\api\include
Library directory:
D:\pytorch\build\lib
Link libraries:
c10.lib;torch.lib;torch_cpu.lib
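If you drive the project with CMake instead of the Visual Studio IDE, the same settings can be expressed as a configuration like the following (the project name and main.cpp are hypothetical placeholders; adjust the paths to your checkout):

```cmake
cmake_minimum_required(VERSION 3.18)
project(libtorch_demo CXX)              # hypothetical project name

set(CMAKE_CXX_STANDARD 17)              # LibTorch requires C++17
set(CMAKE_CXX_STANDARD_REQUIRED ON)

add_executable(demo main.cpp)           # main.cpp is a placeholder source file

# Header and library paths from the local source build above
target_include_directories(demo PRIVATE
    "D:/pytorch/torch/include"
    "D:/pytorch/torch/include/torch/csrc/api/include")
target_link_directories(demo PRIVATE "D:/pytorch/build/lib")
target_link_libraries(demo c10 torch torch_cpu)
```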
