Workflow Results Inconsistent with Model Test Results

Project Type: Instance Segmentation
Model: ppm-thumb-corner/1
OS/Browser: macOS / Safari
Workspace/Project ID: ppm-thumb-corner
Workflow Link: Thumb Corner 1
Roboflow Support Access Granted: Yes


Issue:

I’m running an instance segmentation model (ppm-thumb-corner/1) to detect plastic debris in images taken at a rocky shoreline. The model test results look reasonably accurate — detections are picking up the right objects with appropriate segmentation masks (see Model Predictions screenshots).

However, when I run the same images through my Workflow, the results appear essentially random. The workflow uses the model followed by a Property Definition step (plastic_pixel_count) and a Polygon Visualization step, with both outputs returned. The polygon outlines in the workflow output don’t correspond to the actual plastic objects the model correctly identifies during standalone testing.

I’m not sure if the issue is in how the workflow is configured, how the steps are connected, or something else entirely. I’ve attached:

  • A screenshot of the workflow structure
  • A PDF showing the model test predictions vs. the workflow output

Any help diagnosing why the workflow results differ so significantly from the model test results would be greatly appreciated!


Images (combined since I’m only allowed to upload one):

Hi Hoibun,

I checked the Polygon Visualization block in your workflow, and it appears to be referencing the wrong output from the model, which is why the results look so different from your standalone model tests.

Your instance segmentation model actually returns two separate things: bounding box coordinates and segmentation mask polygons. These are distinct fields, and right now the Polygon Visualization step appears to be pulling from the bounding box data and interpreting it as polygon points. This is exactly what produces those large, irregular purple blobs in your workflow output — the shapes are landing in roughly the right area of the image, but the geometry is completely off because it’s reading the wrong data type.
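To make the distinction concrete, here is a minimal sketch of what a single instance segmentation prediction typically looks like in Roboflow's JSON output. The field names (`x`/`y`/`width`/`height` for the center-based box, `points` for the mask polygon) are assumed from the hosted-API response format, so verify them against your actual workflow output; the values are made up for illustration:

```python
# Hedged sketch: assumed shape of one instance segmentation prediction.
# Field names follow Roboflow's hosted-API JSON; values are illustrative.
prediction = {
    "class": "plastic",
    "confidence": 0.91,
    # Bounding box: just four numbers (center x/y plus width/height).
    "x": 320.0, "y": 240.0, "width": 80.0, "height": 60.0,
    # Segmentation mask: an ordered list of polygon vertices.
    "points": [
        {"x": 290.0, "y": 215.0},
        {"x": 355.0, "y": 212.0},
        {"x": 360.0, "y": 268.0},
        {"x": 285.0, "y": 262.0},
    ],
}

def polygon_vertices(pred):
    """Pull the mask polygon (not the box fields) for visualization."""
    return [(p["x"], p["y"]) for p in pred["points"]]

print(len(polygon_vertices(prediction)))  # 4 mask vertices
```

If a visualization step instead treats the four box numbers as coordinate pairs, it draws a shape near the right location but with meaningless geometry, which matches the blobs you're seeing.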

The fix is straightforward. Inside the Polygon Visualization block, the input reference needs to be explicitly pointed to the segmentation mask output from the model rather than the bounding box output. Depending on your Roboflow workflow version, this may appear as a dropdown where you need to select between detection and mask outputs — it won’t always default to the mask automatically.

While you’re in there, it’s also worth confirming that your plastic_pixel_count Property Definition is similarly pulling from the segmentation mask field rather than the bounding box area. If it’s currently using the bounding box, the pixel counts being returned will be significantly larger than the actual plastic coverage, since bounding boxes include a lot of surrounding background area that isn’t plastic.
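To see how large that overcount can be, here's a small self-contained sketch comparing a bounding-box area against the true polygon area (via the shoelace formula) for a hypothetical triangular piece of plastic; the numbers are illustrative, not from your data:

```python
def shoelace_area(pts):
    """Area of a simple polygon from its vertices (shoelace formula)."""
    n = len(pts)
    total = 0.0
    for i in range(n):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % n]
        total += x1 * y2 - x2 * y1
    return abs(total) / 2.0

# Hypothetical triangular mask inside a 100x80 bounding box:
mask = [(0.0, 0.0), (100.0, 0.0), (0.0, 80.0)]
bbox_area = 100.0 * 80.0          # 8000 px -- includes background
mask_area = shoelace_area(mask)   # 4000 px -- actual coverage
```

Here the box reports double the real coverage; for irregular debris on a cluttered shoreline the gap is often even larger.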

Once those references are corrected, your workflow output should align closely with what you’re seeing in the standalone model test results. Happy to walk through the specific configuration steps if that would be helpful.

To summarize the recommended fixes: in your workflow, after your image input block, add:

  1. Resize → set to your model’s training size (e.g., 640×640) with letterbox/pad mode
  2. Model node → confirm version ID, set confidence and iou_threshold explicitly
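For step 1, the letterbox math is worth sanity-checking so masks aren't distorted by a plain stretch. This is a minimal sketch of the scale-and-pad arithmetic (the 640x640 target and the example 1280x960 input are assumptions, not values from your project):

```python
def letterbox_params(src_w, src_h, dst=640):
    """Scale to fit inside a dst x dst square, preserving aspect
    ratio, then center with symmetric padding (letterboxing)."""
    scale = min(dst / src_w, dst / src_h)
    new_w, new_h = round(src_w * scale), round(src_h * scale)
    pad_x = (dst - new_w) // 2
    pad_y = (dst - new_h) // 2
    return new_w, new_h, pad_x, pad_y

# A hypothetical 1280x960 shoreline photo letterboxed to 640x640:
print(letterbox_params(1280, 960))  # (640, 480, 0, 80)
```

The padding offsets matter because any polygon drawn in the resized frame must be shifted back by (pad_x, pad_y) and divided by the scale to land correctly on the original image.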

Let me know how it goes!

Bar Shimshon