Can I define inference client for Inference pipeline?

Is there a way to pass the pipeline an argument that points it at an inference client, the same way we can with, for example, inference_sdk, by specifying the client?

client = InferenceHTTPClient(
    api_url="http://localhost:9001",
    api_key="API_KEY",
)

As pseudocode, I'm looking for something like this:

remote_gpu_server = "http://localhost:9001"

pipeline = InferencePipeline.init(
    video_reference=["ac.mp4"],
    model_id="yolov8n-640",
    on_prediction=render_boxes,
    host=remote_gpu_server
)

In this example I would have proxied the traffic from my server to my localhost port.
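The proxying I mention could be, for example, an SSH tunnel that forwards my localhost port to the remote GPU server (hostname and user here are placeholders):

```shell
# Forward localhost:9001 to port 9001 on the remote GPU server,
# so the pipeline can reach it via http://localhost:9001.
# -N: no remote command, just forwarding; -L: local port forward.
ssh -N -L 9001:localhost:9001 user@gpu-server
```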

Currently the pipeline seems to default to running inference on my local computer, instead of sending requests to localhost:9001.

Thanks!
