Project Type: Text-Image Pairs / Custom Florence‑2 Model
Operating System & Browser: Windows 11, Google Chrome
Workspace/Project ID: asfandiyars-workspace / testingflorence
Trying to run inference on a trained Florence-2 model:
Hi,
I’m working with a custom-trained Florence‑2 model in my Roboflow project, and I’m encountering issues when attempting to run local inference. It’s a super simple script since I’m just trying to test inference on the model before doing anything else.
Since this project setup is basically the same as a previous project where I used a trained standard object detection model (which worked perfectly), I expected the inference workflow to be similar for Florence‑2. I know Florence‑2 is a multimodal model that takes text and image inputs, but I still started from the standard Roboflow Python SDK inference documentation.
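For reference, the pattern that worked in my earlier object detection project was roughly this (a minimal sketch; the workspace/project names here are placeholders, not my real ones):

```python
# Minimal sketch of the SDK workflow that worked for my object detection model.
from roboflow import Roboflow

rf = Roboflow(api_key="YOUR_API_KEY")
model = rf.workspace("my-workspace").project("my-detector").version(1).model
print(model.predict("example.jpg", confidence=50, overlap=25).json())
```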
I’ve tried multiple variations, and they all behave the same way. Here’s the code I’m using to run inference:
import os
import traceback

from dotenv import load_dotenv
from roboflow import Roboflow


def find_image_path(image_name, folder="NutritionLabelsDataset"):
    """
    Searches for an image file, trying common extensions if none is provided.
    """
    extensions = ["jpg", "jpeg", "png"]
    if "." in image_name:
        potential_path = os.path.join(folder, image_name)
        if os.path.exists(potential_path):
            return potential_path
    else:
        for ext in extensions:
            potential_path = os.path.join(folder, f"{image_name}.{ext}")
            if os.path.exists(potential_path):
                return potential_path
    return None


def main():
    load_dotenv()
    api_key = os.getenv("ROBOFLOW_API_KEY")
    if not api_key:
        print("Error: Roboflow API key not found.")
        return

    try:
        rf = Roboflow(api_key=api_key)
    except Exception as e:
        print(f"Error initializing Roboflow: {e}")
        traceback.print_exc()
        return

    try:
        workspace = rf.workspace("asfandiyars-workspace")
        print("Successfully loaded workspace: asfandiyars-workspace")
    except Exception as e:
        print(f"Error loading workspace: {e}")
        traceback.print_exc()
        return

    try:
        project = workspace.project("testingflorence")
        print("Successfully loaded project: testingflorence")
    except Exception as e:
        print(f"Error loading project: {e}")
        traceback.print_exc()
        return

    try:
        version_obj = project.version(1)
        model = version_obj.model
        if model is None:
            print("Error: Model is None. The model failed to load properly!!!")
            print("Debug info for version(1):")
            print(vars(version_obj))
            return
        print("Successfully loaded model (version 1).")
    except Exception as e:
        print(f"Error loading model: {e}")
        traceback.print_exc()
        return

    print("Type of model object:", type(model))

    image_name = input("Enter the image name (with or without extension, e.g., '1' or '1.jpg'): ")
    image_path = find_image_path(image_name)
    if not image_path:
        print(f"Error: Image not found for '{image_name}'.")
        return
    print(f"Found image at: {image_path}")

    try:
        model.confidence = 50
        model.overlap = 25
        prediction = model.predict(image_path).json()
        print("Prediction Results:")
        print(prediction)
    except Exception as e:
        print(f"Error during inference: {e}")
        traceback.print_exc()


if __name__ == "__main__":
    main()
So, with the code out of the way, here's the initial error I got before adding the try/except blocks:
'NoneType' object has no attribute 'confidence'
Doesn't this mean the model object wasn't loaded correctly, since model was None when I tried to set its confidence attribute?
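Stripped down, the failing sequence is essentially this (same calls as in the full script above):

```python
# Minimal repro: version(1).model comes back as None for this Florence-2
# project, so the attribute assignment fails immediately.
import os
from roboflow import Roboflow

rf = Roboflow(api_key=os.environ["ROBOFLOW_API_KEY"])
version_obj = rf.workspace("asfandiyars-workspace").project("testingflorence").version(1)
model = version_obj.model  # None here
model.confidence = 50      # AttributeError: 'NoneType' object has no attribute 'confidence'
```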
After adding the try/except blocks and running the code above with the debug prints, I confirmed that:
- The workspace and project load correctly.
- The version(1) object is created, but its model attribute remains None.
- The debug output for the version object includes:
  'model_format': 'undefined',
  'model': None
Doesn’t this indicate that the model isn’t being returned properly by the API?
I also checked the model deployment and tried inference directly via the REST API.
Curl to confirm model deployment:
curl "https://api.roboflow.com/asfandiyars-workspace/testingflorence/1?api_key=$ROBOFLOW_API_KEY"
Sample output (summary):
{
  "workspace": {
    "name": "Asfandiyars Workspace",
    "url": "asfandiyars-workspace",
    "members": 1
  },
  "project": {
    "id": "asfandiyars-workspace/testingflorence",
    "type": "text-image-pairs",
    "name": "TestingFlorence",
    // ... more details ...
    "model": {
      "id": "testingflorence/1",
      "endpoint": "(expected endpoint here; removed since I'm only allowed 2 links per community post)",
      // ... more details ...
      "status": "finished"
    }
  }
}
This confirms that the project, version, and endpoint exist.
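For completeness, here's the same deployment check from Python (just a requests version of the curl above):

```python
# requests equivalent of the deployment-check curl; prints the model status,
# which comes back as "finished" for my version.
import os
import requests

resp = requests.get(
    "https://api.roboflow.com/asfandiyars-workspace/testingflorence/1",
    params={"api_key": os.environ["ROBOFLOW_API_KEY"]},
)
resp.raise_for_status()
print(resp.json()["project"]["model"]["status"])
```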
Inference via curl:
curl -F "file=@NutritionLabelsDataset/1.jpg" "https://detect.roboflow.com/testingflorence/1?api_key=$ROBOFLOW_API_KEY"
Response:
{"message":"Service misconfiguration."}
So while the model endpoint is active, inference fails with a 500 error / "Service misconfiguration" message.
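One avenue I haven't tried yet is running a local Roboflow Inference server and calling it from the SDK client instead of the hosted endpoint. This is purely an untested sketch on my part, and I'm not sure whether fine-tuned Florence-2 models can be served this way:

```python
# Untested sketch: query a locally running Roboflow Inference server
# (started with `inference server start`) rather than the hosted endpoint.
# Assumption on my part: that Florence-2 fine-tunes are servable like this.
from inference_sdk import InferenceHTTPClient

client = InferenceHTTPClient(
    api_url="http://localhost:9001",  # default address for the local server
    api_key="YOUR_API_KEY",
)
result = client.infer("NutritionLabelsDataset/1.jpg", model_id="testingflorence/1")
print(result)
```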
My Questions:
- Is my method for running inference on a custom Florence-2 model correct?
- Given that the same workflow works for my object detection model, what differences should I be aware of for Florence-2 models?
- Is this behavior expected while Florence-2 is in beta? Does the debug output ('model_format': 'undefined', 'model': None) indicate that local inference for custom Florence-2 models isn't fully supported yet?
- Am I overlooking something obvious? (I probably am, and I'm sorry if I look silly posting this.)
I'd appreciate any suggestions on what I might be missing in my code, workflow, or setup. If local inference isn't fully supported for Florence‑2 in its beta state, I might instead take the base model and fine-tune it myself, though that would be a pain since I'm limited to CPU power.
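For reference, the fallback I have in mind would look roughly like this: an untested CPU-only sketch using the base checkpoint from Hugging Face, where the <OCR> task prompt is just my guess for nutrition labels:

```python
# Untested sketch: run the base Florence-2 checkpoint on CPU via transformers.
# Florence-2 ships custom modeling code, hence trust_remote_code=True.
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "microsoft/Florence-2-base"
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

image = Image.open("NutritionLabelsDataset/1.jpg").convert("RGB")
prompt = "<OCR>"  # Florence-2 task token; OCR seems closest to reading nutrition labels
inputs = processor(text=prompt, images=image, return_tensors="pt")
generated = model.generate(
    input_ids=inputs["input_ids"],
    pixel_values=inputs["pixel_values"],
    max_new_tokens=512,
)
print(processor.batch_decode(generated, skip_special_tokens=False)[0])
```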
Happy to share more info or context if that helps.