How do I run my semantic segmentation model on a video?


I’ve trained a semantic segmentation model to detect the track for an autonomous car. I can overlay the track boundary onto single images, but I’d like to do the same over a video. However, I recently learned that video inference is not supported for semantic segmentation models.

Is there an alternative way to run my model over a video?

Hey @devin

Yes, semantic segmentation is currently not supported by the predict_video method of our Python SDK, but the predict() method should still work: you can run it on each frame of the video and use Supervision to annotate and reassemble the frames.

See this Supervision cookbook here for a guide on how to annotate video.
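To make the per-frame idea concrete, here is a minimal, self-contained sketch of the loop: decode frames, run inference on each one, alpha-blend the predicted mask onto the frame, and collect the annotated frames. Everything here is a hypothetical stand-in, not the SDK’s actual API: `fake_predict` plays the role of your model’s predict() call, frames are plain nested lists of RGB tuples rather than real decoded video, and in practice you would read/write frames and draw the overlay with Supervision’s video utilities and annotators instead.

```python
# Sketch only: `fake_predict`, the list-based frame format, ALPHA, and
# TRACK_COLOR are illustrative assumptions, not part of any real SDK.

ALPHA = 0.5              # overlay opacity
TRACK_COLOR = (0, 255, 0)  # green track overlay

def overlay_mask(frame, mask, color=TRACK_COLOR, alpha=ALPHA):
    """Alpha-blend `color` onto every pixel where `mask` is True."""
    out = []
    for row, mask_row in zip(frame, mask):
        out_row = []
        for px, hit in zip(row, mask_row):
            if hit:
                # standard alpha compositing: (1-a)*pixel + a*color
                px = tuple(int((1 - alpha) * c + alpha * o)
                           for c, o in zip(px, color))
            out_row.append(px)
        out.append(out_row)
    return out

def fake_predict(frame):
    # Stand-in for model.predict(frame): pretend the bottom half is "track".
    h = len(frame)
    return [[r >= h // 2 for _ in row] for r, row in enumerate(frame)]

def process_video(frames, predict):
    """Run inference frame by frame and yield annotated frames."""
    for frame in frames:
        yield overlay_mask(frame, predict(frame))

# One tiny 4x4 gray "video frame":
frames = [[[(100, 100, 100)] * 4 for _ in range(4)]]
annotated = list(process_video(frames, fake_predict))
```

In real code, you would swap `fake_predict` for your model’s predict() call and replace the hand-rolled blending with Supervision’s annotators, iterating over frames from the source video and writing each annotated frame back out.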

