I’ve been working on this for 15 hours and could really use some guidance. I have an instance segmentation model trained on 3,000 photos to recognize beta and alpha particle trails, and I’m trying to figure out how to feed it a video so it can measure their length or speed. I don’t mind which metric it uses, but ideally I’d get a table listing each detected trail with its type and measured length or speed. Any tips or pointers would be greatly appreciated!
Hi @Sebastien_Berrang ,
How far are you with the model (I assume instance segmentation)? Have you trained it already? Here’s how I’d approach it:
- Train the model so it has good accuracy
- Use InferencePipeline to run your model locally
- Use Supervision to load the results (for visualization and further processing) and run object tracking
- Loop over each tracklet and calculate speed from pixel movement and the time between frames (to get pixels/sec)
- To get e.g. mm/sec, you’ll also need the camera intrinsics and the distance from the camera to the particles
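The last two steps above could look something like this in plain Python. This is a minimal sketch under the assumption that each tracklet has already been collected as a list of `(frame_index, cx, cy)` centroids; the names `tracks` and `fps` are placeholders for whatever your tracker and video reader actually give you.

```python
# Minimal sketch of the length/speed step. Assumes each tracklet is a
# list of (frame_index, cx, cy) centroids gathered during tracking.
import math

def tracklet_stats(points, fps):
    """Return (path_length_px, speed_px_per_sec) for one tracklet."""
    # Sum distances between consecutive centroids -> trail length in pixels.
    length = sum(
        math.hypot(x2 - x1, y2 - y1)
        for (_, x1, y1), (_, x2, y2) in zip(points, points[1:])
    )
    # Elapsed time from the first to the last frame the particle was seen in.
    frames = points[-1][0] - points[0][0]
    seconds = frames / fps
    return length, (length / seconds if seconds > 0 else 0.0)

# Example: a particle moving 10 px per frame over 5 frame gaps at 30 fps.
tracks = {1: [(i, 10.0 * i, 0.0) for i in range(6)]}
for tid, pts in tracks.items():
    length, speed = tracklet_stats(pts, fps=30.0)
    print(f"track {tid}: {length:.1f} px, {speed:.1f} px/s")
```

Printing one row per tracklet like this is already most of the way to the table of trail type + length/speed you described; the class label would come from the detection that produced each tracklet.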
Thoughts?
Hi @eric_roboflow, I have fully trained the model. I just can’t get it to do what I want: I see the right blocks in the workflow, but I can’t seem to get them to work.
Hi @Sebastien_Berrang, I’d probably just skip the workflows/blocks and go straight to Python programming (InferencePipeline), as you’ll need some custom Python coding anyway for the last two steps.
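For reference, the InferencePipeline route might be wired up roughly like this. This is a sketch, not something I’ve run end to end: the model ID and video path are placeholders, and the `inference`/`supervision` calls are from recent versions of those packages, so check them against your installed releases.

```python
# Sketch of the InferencePipeline + supervision tracking route.
# "your-workspace/your-model/1" and "particles.mp4" are placeholders.

def centroid(xyxy):
    """Center point of an (x1, y1, x2, y2) box."""
    x1, y1, x2, y2 = xyxy
    return (x1 + x2) / 2.0, (y1 + y2) / 2.0

tracks = {}  # tracker_id -> list of (frame_index, cx, cy)

def run():
    # Imported here so the sketch stays readable without the packages installed.
    import supervision as sv
    from inference import InferencePipeline

    tracker = sv.ByteTrack()

    def on_prediction(predictions, video_frame):
        # Convert the raw prediction dict to a Detections object, then
        # assign stable tracker IDs across frames.
        detections = sv.Detections.from_inference(predictions)
        detections = tracker.update_with_detections(detections)
        for box, tid in zip(detections.xyxy, detections.tracker_id):
            cx, cy = centroid(box)
            tracks.setdefault(int(tid), []).append((video_frame.frame_id, cx, cy))

    pipeline = InferencePipeline.init(
        model_id="your-workspace/your-model/1",  # placeholder model ID
        video_reference="particles.mp4",         # placeholder video path
        on_prediction=on_prediction,
    )
    pipeline.start()
    pipeline.join()
    # `tracks` now holds per-particle centroid histories for the speed step.
```

Once the pipeline finishes, `tracks` is exactly the per-tracklet centroid data you need for the pixel-speed calculation in the earlier steps.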
I’ll try that and see how it goes.
@Sebastien_Berrang Great, let me know at erik@roboflow.com if you have further challenges, and we can schedule a quick call :)
This topic was automatically closed 21 days after the last reply. New replies are no longer allowed.