Embedding generation with the ViT-L-14 model is not working

I am trying to generate embeddings with the ViT-L-14 model but am getting this error:

{
  "detail": "HTTPCallErrorError(description='500 Server Error: Internal Server Error for url: https://infer.roboflow.com/clip/embed_image?api_key=aa***OA', api_message='Internal Server Error',status_code=500)"
}

The helper function is below:
from fastapi import HTTPException

# roboflow_client is an inference_sdk client defined elsewhere in the app
def get_image_embeddings(image_link: str):
    try:
        print(image_link)
        res = roboflow_client.get_clip_image_embeddings(inference_input=image_link, clip_version="ViT-L-14")
        embeddings = res["embeddings"][0]
        return embeddings
    except Exception as e:
        raise HTTPException(status_code=500, detail=str(e))
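
For context, a minimal sketch of how the client is assumed to be constructed (the setup isn't shown in the original post; the api_url matches the hosted endpoint in the error message, and the key below is a placeholder):

from inference_sdk import InferenceHTTPClient

roboflow_client = InferenceHTTPClient(
    api_url="https://infer.roboflow.com",  # hosted endpoint from the error URL above
    api_key="YOUR_ROBOFLOW_API_KEY",       # placeholder; use your own key
)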

The strange thing is that this works when I use ViT-B-32 instead of ViT-L-14.
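
In other words, the same call succeeds when only the version string is swapped:

res = roboflow_client.get_clip_image_embeddings(inference_input=image_link, clip_version="ViT-B-32")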

Thanks in advance.

Hey @sakib

Thanks for reporting this. I’ve confirmed that L-14 is not working on our hosted inference endpoint. I’ve flagged this to the team.

In the meantime, as a workaround, I've confirmed that ViT-L-14 works when run locally (which you can also do with the inference package):

# pip install "inference[clip]"

from inference.models import Clip

# Load the ViT-L-14 CLIP checkpoint locally
clip = Clip(model_id="clip/ViT-L-14")

prompt = "an ace of spades playing card"
text_embedding = clip.embed_text(prompt)
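
Since your use case is image embeddings, here is a minimal sketch of the image side, assuming the local Clip model also exposes embed_image as the counterpart to embed_text and accepts a numpy array as input (check the inference docs for the accepted input types; the file path below is hypothetical):

import cv2
from inference.models import Clip

clip = Clip(model_id="clip/ViT-L-14")

image = cv2.imread("playing_card.jpg")  # hypothetical local image
image_embedding = clip.embed_image(image)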

Thanks a lot @leo for your reply. Actually, I am hosting my backend on Render, and I don't think it can take the load of running the model locally. Can you tell me when the API service might be working again? I built my vector DB using the L-14 model, so I pretty much don't have an alternative.

Hi @leo
It is still not working. Can you confirm whether I can use the Roboflow API for image embeddings with ViT-L-14? Otherwise I will need to regenerate the entire vector DB with B-32. Also, can you suggest any other API service that provides embeddings with ViT-L-14?
