I am running inference on a local server on a CPU on Windows. The server runs inside a Podman container. Prediction on 400 images takes ~6 minutes. Is there a way to reduce this duration? For example, is it possible to process the images in chunks?
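To illustrate what I mean by chunks, here is a minimal sketch of splitting the image list and sending chunks concurrently. The `predict_chunk` body is a placeholder, not the server's actual API; the chunk size and worker count are guesses:

```python
from concurrent.futures import ThreadPoolExecutor

def chunked(items, size):
    """Split a list into consecutive chunks of at most `size` items."""
    return [items[i:i + size] for i in range(0, len(items), size)]

def predict_chunk(chunk):
    # Placeholder: replace with the real request(s) to the local
    # inference server (e.g. one HTTP POST per image, or a batched
    # endpoint if the server supports one).
    return [f"prediction for {img}" for img in chunk]

def run_inference(image_paths, chunk_size=32, workers=4):
    # Keep several chunks in flight so the CPU server is not idle
    # between requests.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(predict_chunk, chunked(image_paths, chunk_size))
    # Flatten per-chunk results back into one list, in order.
    return [pred for chunk_result in results for pred in chunk_result]
```

Is something along these lines supported, or is there a recommended way to batch requests against the server?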
- Project Type:
- Operating System & Browser:
- Project Universe Link or Workspace/Project ID: