onnxruntime - ONNX Runtime CUDAExecutionProvider Fails to Load on Windows: "LoadLibrary failed with error 126"

I'm encountering an issue when trying to run a model with ONNX Runtime using GPU acceleration on Windows. The error message indicates that the CUDAExecutionProvider cannot be loaded because LoadLibrary fails with error 126. The traceback points to the following issue:

[ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "F:\path\to\env\Lib\site-packages\onnxruntime\capi\onnxruntime_providers_cuda.dll"

Additionally, I see this warning when trying to initialize the provider:

Failed to create CUDAExecutionProvider. Require cuDNN 9.* and CUDA 12.*, and the latest MSVC runtime.
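
For context, this is roughly how I'm creating the session when that warning appears. It's a minimal sketch of my actual code; the model path here is just a placeholder:

    import onnxruntime as ort

    # Placeholder model path; the real model file lives in my project.
    MODEL_PATH = "model.onnx"

    # Request the CUDA provider first, with CPU as a fallback.
    session = ort.InferenceSession(
        MODEL_PATH,
        providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
    )

    # With the error above, this reports only CPUExecutionProvider,
    # i.e. the session silently falls back to CPU.
    print(session.get_providers())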

For the CUDA toolkit directories, I have 12.4 through 12.6 installed. System info:

CUDA version: nvcc --version reports 12.6. I manually installed cuDNN 9.5 for it this week, which contradicts:
cuDNN version: 9.1 (verified via torch.backends.cudnn.version())
PyTorch version: 2.5.1+cu124 (working fine with CUDA 12.4)
ONNX Runtime: Installed via uv add onnxruntime-gpu

What I've Tried:

CUDA and cuDNN verification:
    CUDA 12.4 is installed and working with PyTorch.
    cuDNN 9.1 is verified by PyTorch.


Reinstalled ONNX Runtime with GPU support:
    I uninstalled onnxruntime and reinstalled onnxruntime-gpu using pip install onnxruntime-gpu.

Confirmed PyTorch is using CUDA:
    Verified with torch.cuda.is_available() and torch.backends.cudnn.enabled that CUDA is available in PyTorch.

Verified ONNX Runtime execution providers:
    Checked available execution providers with onnxruntime.get_available_providers(); CUDAExecutionProvider is listed, and onnxruntime.get_device() returns GPU (see the combined check below).
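
Roughly the checks I ran, with the outputs I get on my machine noted in comments:

    import torch
    import onnxruntime as ort

    # PyTorch side: CUDA and cuDNN as torch sees them
    print(torch.cuda.is_available())       # True
    print(torch.backends.cudnn.enabled)    # True
    print(torch.backends.cudnn.version())  # maps to cuDNN 9.1
    print(torch.version.cuda)              # 12.4

    # ONNX Runtime side
    print(ort.get_available_providers())   # includes 'CUDAExecutionProvider'
    print(ort.get_device())                # GPU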

Error Output:

D:\a_work\1\s\onnxruntime\core\session\provider_bridge_ort:1539 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "F:\path\to\env\Lib\site-packages\onnxruntime\capi\onnxruntime_providers_cuda.dll"

What I'm Looking For:

Suggestions on how to resolve the "LoadLibrary failed with error 126" failure.
Whether there's a mismatch between CUDA/cuDNN versions for ONNX Runtime and PyTorch (both are using CUDA 12.x but with different versions).
How to configure ONNX Runtime to use GPU acceleration with my current setup.
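
For reference, the configuration I'd like to end up with is along these lines; the model path is a placeholder and device_id 0 is an assumption (single GPU) on my side:

    import onnxruntime as ort

    session = ort.InferenceSession(
        "model.onnx",  # placeholder path
        providers=[
            ("CUDAExecutionProvider", {"device_id": 0}),  # device_id 0 assumed
            "CPUExecutionProvider",
        ],
    )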

Thanks in advance!
