PyTorch to ONNX

pip install onnx onnxruntime-gpu   # for CPU-only inference install onnxruntime instead; avoid having both runtime packages in the same environment

PyTorch model to ONNX:
    import torch
    from torchvision import models

    model = models.efficientnet_v2_s().cuda()
    model.eval()

    torch_input = torch.randn(1, 3, 384, 384, device="cuda")
    torch.onnx.export(model,                    # model being run
                      torch_input,              # model input
                      "effnet.onnx",            # where to save the model (can be a file or file-like object)
                      opset_version=11,         # the ONNX opset version to export the model to
                      input_names=['input'],    # the model's input names
                      output_names=['output'])  # the model's output names

Test ONNX model:
import onnx
import onnxruntime as ort
import numpy as np

def main():
    onnx_model_path = "effnet.onnx"

    # Load the ONNX model
    model = onnx.load(onnx_model_path)

    # Check that the model is well formed
    onnx.checker.check_model(model)

    # Print a human readable representation of the graph
    print(onnx.helper.printable_graph(model.graph))

    #onnx_provider = 'CPUExecutionProvider'
    onnx_provider = 'CUDAExecutionProvider'
    ort_session = ort.InferenceSession(onnx_model_path, providers=[onnx_provider])

    outputs = ort_session.run(
        None,
        {"input": np.random.randn(1, 3, 384, 384).astype(np.float32)},
    )
    print(outputs[0].shape)

if __name__ == "__main__":
    main()
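The session returns raw logits. A small NumPy post-processing step (an illustrative sketch, not part of the original script; `postprocess` is a hypothetical helper) turns them into class probabilities and a predicted class index:

```python
import numpy as np

def postprocess(logits: np.ndarray):
    """Convert raw model logits of shape (1, num_classes) into
    (class_id, probability) via a numerically stable softmax."""
    z = logits - logits.max(axis=1, keepdims=True)  # stabilize the exponent
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    class_id = int(probs.argmax(axis=1)[0])
    return class_id, float(probs[0, class_id])

# Tiny demo with made-up logits for three classes:
class_id, prob = postprocess(np.array([[0.1, 2.3, -1.0]], dtype=np.float32))
print(class_id, round(prob, 3))  # → 1 0.871
```

In the script above you would call it as `postprocess(outputs[0])`, since `ort_session.run` returns a list of output arrays.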

EP Error C:\a\_work\1\s\onnxruntime\python\ onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasn't able to be loaded. Please install the correct version of CUDA and cuDNN as mentioned in the GPU requirements page, make sure they're in the PATH, and that your GPU is supported.
 when using ['CUDAExecutionProvider']
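When the CUDA execution provider fails to load like this, a common workaround is to fall back to whatever provider the installed build actually reports as available. A minimal sketch (`pick_provider` is a hypothetical helper, not an onnxruntime API):

```python
def pick_provider(available,
                  preferred=("CUDAExecutionProvider", "CPUExecutionProvider")):
    """Return the first preferred execution provider that the installed
    onnxruntime build reports as available."""
    for p in preferred:
        if p in available:
            return p
    raise RuntimeError(f"none of {preferred} available (got {available})")

# In a real script: available = onnxruntime.get_available_providers()
print(pick_provider(["AzureExecutionProvider", "CPUExecutionProvider"]))  # → CPUExecutionProvider
```

You would then create the session with `ort.InferenceSession(path, providers=[pick_provider(ort.get_available_providers())])`, so a machine without working CUDA still runs on CPU instead of raising the EP error.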
ONNX Runtime    CUDA    cuDNN                   Notes
1.18            12.4    (Linux) / (Windows)     The default CUDA version for ORT 1.18 is CUDA 11.8.
                                                To install the CUDA 12 package, see Install ORT.
                                                Java CUDA 12 support is back for release 1.18.
1.18            11.8    (Linux) / (Windows)

CUDA 12.4 or 11.8 plus a matching cuDNN must be installed.
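For ORT 1.18 the two CUDA versions map to different package sources; a sketch of the install paths (the version pin is illustrative):

```shell
# Default PyPI wheel for ORT 1.18 targets CUDA 11.8:
pip install onnxruntime-gpu==1.18.0

# For CUDA 12.x, ORT 1.18 wheels come from a separate package index --
# see the "Install ORT" page for the exact --extra-index-url to use.

# Quick sanity check: which CUDA toolkit is on the PATH?
nvcc --version
```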

Could not locate zlibwapi.dll. Please make sure it is in your library path!

(cuDNN 8.x on Windows depends on zlib; put zlibwapi.dll somewhere on the PATH, e.g. next to the cuDNN DLLs.)

