Detections Classes Replacement - dimensionality 1, when expected was 2


  • Project Type: Object Detection → Multiple Bounding Boxes → Single Label Class Model → Detections Classes Replacement
  • Operating System & Browser: WSL, Local Inference, or Serverless Hosted API V2
  • Project Universe Link or Workspace/Project ID: Custom Workflow, but can be tested easily

I am trying to build a simple workflow (or at least I thought it would be simple).

Since I can apparently only upload ONE image, here’s the text of the error from the Detections Classes Replacement block:

Data fed into step `detections_classes_replacement` property `classification_predictions` has actual dimensionality 1, when expected was 2
Blocks with errors:
1.
detections_classes_replacement
Error in property: classification_predictions. Expected dimensionality 2, but received dimensionality 1.

Is there an example or a better way of doing this? The goal is to output the original input image with all of the bounding boxes detected by the object detection model, but with the class names in those boxes replaced by the predictions from the single-label classification model.

Would greatly appreciate help. I have spent hours on this, going in circles with the AI Agent and reading documentation. I am not sure how to produce the data in the expected format with the workflow editor. I am confident that doing it manually via Python wouldn’t be difficult, but for this exercise I need to map it out so my customers can do it in the workflow editor.
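For what it’s worth, here is a rough sketch of the “manual via Python” approach mentioned above: pair each detection with the top class predicted for its crop and overwrite the label. The field names (`class`, `confidence`, `top`) loosely mirror Roboflow’s JSON prediction format but are assumptions for illustration, not the exact Workflows internals.

```python
# Illustrative sketch only: field names are assumptions modeled loosely
# on Roboflow's prediction JSON, not the actual Workflows internals.

def replace_detection_classes(detections, crop_classifications):
    """Pair detection i with the classification result for crop i and
    return copies of the detections with class/confidence replaced."""
    if len(detections) != len(crop_classifications):
        raise ValueError("expected one classification result per detection")
    merged = []
    for det, cls in zip(detections, crop_classifications):
        det = dict(det)  # shallow copy so the input list is untouched
        det["class"] = cls["top"]
        det["confidence"] = cls["confidence"]
        merged.append(det)
    return merged

# Hypothetical example data: two detected boxes, two per-crop results.
detections = [
    {"x": 120, "y": 80, "width": 60, "height": 40, "class": "logo", "confidence": 0.91},
    {"x": 300, "y": 150, "width": 50, "height": 50, "class": "logo", "confidence": 0.88},
]
crop_classes = [
    {"top": "brand-a", "confidence": 0.97},
    {"top": "brand-b", "confidence": 0.84},
]

for det in replace_detection_classes(detections, crop_classes):
    print(det["class"], det["confidence"])
# prints:
# brand-a 0.97
# brand-b 0.84
```

This assumes the crops (and therefore the classification results) come back in the same order as the detections, which is what the Detections Classes Replacement block handles for you inside Workflows.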

–Andrew

Hi there, can you make sure you have it configured to pull predictions from the Object Detection block, rather than the dynamic crop block?

For example:
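Roughly, the Detections Classes Replacement step should reference the detection model for its boxes and the crop-fed classification model for its labels, something like the fragment below (step names are illustrative, matching a typical Detect → Classify workflow):

```json
{
  "type": "roboflow_core/detections_classes_replacement@v1",
  "name": "detections_classes_replacement",
  "object_detection_predictions": "$steps.detection_model.predictions",
  "classification_predictions": "$steps.classification_model.predictions"
}
```

The dimensionality error appears when one of these properties is wired to a step at the wrong nesting level (e.g. pulling predictions from the crop step rather than the model steps).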

Hi Lake,

Yes, it is setup like that.

I’ve even tried it straight from the Detect → Classify template, and it always produces this error. I would expect the out-of-the-box templates to work with a simple object detection → classification model.

{
  "version": "1.0",
  "inputs": [
    {
      "type": "InferenceImage",
      "name": "image"
    }
  ],
  "steps": [
    {
      "type": "roboflow_core/roboflow_object_detection_model@v1",
      "name": "detection_model",
      "images": "$inputs.image",
      "model_id": "iris-logo-object-recognition/2"
    },
    {
      "type": "roboflow_core/bounding_box_visualization@v1",
      "name": "bounding_box_visualization",
      "image": "$inputs.image",
      "predictions": "$steps.detection_model.predictions"
    },
    {
      "type": "roboflow_core/dynamic_crop@v1",
      "name": "dynamic_crop",
      "images": "$steps.bounding_box_visualization.image",
      "predictions": "$steps.detection_model.predictions"
    },
    {
      "type": "roboflow_core/roboflow_classification_model@v1",
      "name": "classification_model",
      "images": "$steps.dynamic_crop.crops",
      "model_id": "iris-logo-classification/1"
    },
    {
      "type": "roboflow_core/detections_classes_replacement@v1",
      "name": "detections_classes_replacement",
      "object_detection_predictions": "$steps.detection_model.predictions",
      "classification_predictions": "$steps.classification_model.predictions"
    },
    {
      "type": "roboflow_core/bounding_box_visualization@v1",
      "name": "detections_visualization",
      "image": "$inputs.image",
      "predictions": "$steps.detections_classes_replacement.predictions"
    },
    {
      "type": "roboflow_core/label_visualization@v1",
      "name": "label_visualization",
      "image": "$steps.detections_visualization.image",
      "predictions": "$steps.detections_classes_replacement.predictions"
    }
  ],
  "outputs": [
    {
      "type": "JsonField",
      "name": "output_image",
      "coordinates_system": "own",
      "selector": "$steps.label_visualization.image"
    },
    {
      "type": "JsonField",
      "name": "predictions",
      "coordinates_system": "own",
      "selector": "$steps.detections_classes_replacement.predictions"
    },
    {
      "type": "JsonField",
      "name": "dynamic_crop",
      "coordinates_system": "own",
      "selector": "$steps.dynamic_crop.crops"
    },
    {
      "type": "JsonField",
      "name": "detection_predictions",
      "coordinates_system": "own",
      "selector": "$steps.detection_model.predictions"
    },
    {
      "type": "JsonField",
      "name": "classification_predictions",
      "coordinates_system": "own",
      "selector": "$steps.classification_model.*"
    }
  ]
}

Hey Andrew. TLDR - maybe try from scratch today. Nothing conclusive to report here but I was playing around with this out of curiosity last night. It was failing for me a TON and then it started working. I couldn’t get it to start failing again. I don’t know if Lake was up all night pushing code or what. :slight_smile:

Today, I tried again (including with the “Detect and Classify” template) and it continues to work fine. There were two scenarios where I could force the error. The first is the one Lake mentioned, which seems to be a “gotcha”: when you build the workflow from scratch, it picks the Dynamic Crop block by accident and has to be switched. The other was if I “mistakenly” used my object detection model in the Single-Label Classification block. But that’s not your issue per your screenshots. (Unless “iris-logo-classification/1” is not actually a model generated as a classification model, but that looks like the default naming convention, so I’m guessing it’s fine.)

So all that to say, you might try starting from scratch with the template again, drop in your two models, and see if you can get it to break.