If I run my test case, it basically uses only the CPU and takes around 50 s.

But when I force it to use only the CUDAExecutionProvider, as in:

    providers: list[tuple[str, dict]] = [("CPUExecutionProvider", {})]
    if provider >= 0:
        providers = [("CUDAExecutionProvider", {"device_id": provider})]

I get these warnings:

    2024-09-03 20:26:34.417215913 [W:onnxruntime:, session_state.cc:1166 VerifyEachNodeIsAssignedToAnEp] Some nodes were not assigned to the preferred execution providers which may or may not have an negative impact on performance. e.g. ORT explicitly assigns shape related ops to CPU to improve perf.
    2024-09-03 20:26:34.417281814 [W:onnxruntime:, session_state.cc:1168 VerifyEachNodeIsAssignedToAnEp] Rerunning with verbose output on a non-minimal build will show node assignments.

However, using the GPU it's much faster, about 5x, taking only 10 s.


So, what can I do here? Is there a better way to tell the runtime to prefer the GPU? Or how can I silence these warnings?
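For reference, a common pattern is to list CUDAExecutionProvider first and keep CPUExecutionProvider as an explicit fallback, so shape-related ops can still land on the CPU without leaving nodes unassigned. A minimal sketch, assuming the question's `provider` variable is a CUDA device index (`build_providers` is a hypothetical helper name, not from this thread):

```python
# Sketch only: build_providers is a hypothetical helper, not from the thread.
# Listing CUDAExecutionProvider first and CPUExecutionProvider second lets
# ONNX Runtime run most nodes on the GPU while still assigning shape-related
# ops to the CPU fallback.
def build_providers(device_id: int) -> list[tuple[str, dict]]:
    providers: list[tuple[str, dict]] = [("CPUExecutionProvider", {})]
    if device_id >= 0:
        providers = [
            ("CUDAExecutionProvider", {"device_id": device_id}),
            ("CPUExecutionProvider", {}),
        ]
    return providers

print(build_providers(0))
```

The resulting list can be passed directly as the `providers` argument of `onnxruntime.InferenceSession`.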

CUDAExecutionProvider and CPUExecutionProvider #6341

Answered by yuslepukhin
alanwilter asked this question in Q&A

There are logging entries that should tell you which kernels were assigned to CPU. Check your logging level. If those are shape related, then you are fine. Otherwise, file an issue with ONNXRuntime (not onnx).

Answer selected by justinchuby