I’m attempting to create a two-stage workflow where stage one detects the region(s) of interest, i.e. each specimen found in the input image, and stage two makes predictions about the abnormalities found within each specimen. The workflow is currently set up like this:
input image > ROI object detection model > dynamic crop > abnormality object detection model > bounding box visualization > output
where the bounding box visualization block is configured to draw on the original input image using the predictions from the abnormality object detection model.
I’ve found that using the ROI-cropped images dramatically increases the accuracy of the predictions; however, I’m struggling to find a way to map the predictions made on the cropped images back to the original input image. Is this possible without creating a custom block?
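For context, the translation I’d otherwise have to hand-roll in a custom block is just an offset by the crop’s top-left corner. A rough sketch of that with the supervision library (crop_x_min / crop_y_min are placeholders for the ROI box stage one returns for a given crop):

```python
import numpy as np
import supervision as sv


def shift_to_original(
    crop_detections: sv.Detections,
    crop_x_min: float,
    crop_y_min: float,
) -> sv.Detections:
    # Boxes from the abnormality model are relative to the crop, so add the
    # crop's top-left corner (from the ROI detection) to every box corner to
    # express them in original-image coordinates.
    offset = np.array([crop_x_min, crop_y_min, crop_x_min, crop_y_min])
    crop_detections.xyxy = crop_detections.xyxy + offset
    return crop_detections
```

I’d much rather use a built-in block than maintain something like this myself, if one exists.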
I’m currently getting this error when I run the workflow as specified above: For step bounding_box_visualization attempted to plug input data that are in different dimensions, whereas block defines the inputs to be equal in that terms.
I’ve attempted to use the Dimension Collapse block after the second model, but I get the same error.
Good morning @spence,
This is a great use case for workflows! To triage this issue, I am working to replicate this workflow and error in my own workspace.
To help me provide the best advice possible, do you mind elaborating on your use case and how you plan to implement this workflow in production?
I will keep you apprised of my progress. Thanks again for your patience and understanding!
Great strategy with the cropping! If I understand correctly, you just want to put that final abnormality detection back on the original image. If so, you can add a Detections Stitch block to accomplish that. My sample workflow is below, along with the image generated at each step along the way. (If it’s something else you need, maybe clarify with a specific example of the expected input vs. output to help others troubleshoot further!)
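If you’d like to sanity-check the stitched output outside the builder, a quick script along these lines should work with the inference SDK (the workspace name, workflow ID, and image path below are placeholders; this assumes you’re running the workflow through a hosted endpoint):

```python
from inference_sdk import InferenceHTTPClient

# Placeholder credentials and IDs; swap in your own workspace, workflow, and API key.
client = InferenceHTTPClient(
    api_url="https://detect.roboflow.com",
    api_key="YOUR_API_KEY",
)

result = client.run_workflow(
    workspace_name="your-workspace",
    workflow_id="two-stage-abnormality-detection",
    images={"image": "path/to/original_image.jpg"},
)

# After the Detections Stitch block, predictions are expressed in
# original-image coordinates, so the visualization drawn on the input
# image lines up with them.
print(result)
```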