Zero-shot CLIP example works on the default dataset, but on my own photos accuracy is 0.0

This category is for questions related to accounts on https://app.roboflow.com/

Please share the following so we may better assist you:

  1. Photo classification, on my own test set of photos (exported from an OpenAI project)
  2. Linux 5.15.95, Chrome 112.0.56

Running the standard zero-shot CLIP example on "rock, paper, scissors" works fine, using Colab to open and run the sample notebook:
Copy of Roboflow-CLIP-Zero-Shot-Classification.ipynb

But when I switch the dataset to my own photo dataset (4 classes, about 150 photos), the critical classification step in the notebook shows all accuracy readings as zero. I'm fairly sure my photos are fine, and I'm fairly sure my default tokens are fine, but I get bad results. Why?

tokenization.txt

An example picture from the org311 clip photos dataset depicting a encampment
An example picture from the org311 clip photos dataset depicting a garbage
An example picture from the org311 clip photos dataset depicting a graffiti
An example picture from the org311 clip photos dataset depicting a mural

Step "Run Clip Inference" from the notebook's stdout:

The only thing I know to do is shift the token values around?
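Before changing the tokens, it may be worth checking how the notebook computes accuracy. One plausible failure mode (a sketch, not the notebook's actual code; all variable names here are illustrative) is that the classifier returns the *full prompt string* as its prediction, while the ground-truth labels are the bare class names, so an exact string comparison never matches and accuracy reads 0.0 even when CLIP ranks the correct prompt first:

```python
# Hypothetical reconstruction of the accuracy step, assuming prompts are built
# from a template like the ones in tokenization.txt above.
template = "An example picture from the org311 clip photos dataset depicting a {}"
classes = ["encampment", "garbage", "graffiti", "mural"]
prompts = [template.format(c) for c in classes]

def accuracy(predictions, true_classes):
    # Naive exact-match accuracy, as a simple notebook might compute it.
    return sum(p == t for p, t in zip(predictions, true_classes)) / len(true_classes)

# Even with perfect predictions, comparing full prompts against bare class
# names gives 0.0:
perfect_predictions = prompts
print(accuracy(perfect_predictions, classes))  # 0.0

# Mapping each predicted prompt back to its class name fixes the comparison:
prompt_to_class = dict(zip(prompts, classes))
print(accuracy([prompt_to_class[p] for p in perfect_predictions], classes))  # 1.0
```

If something like this is happening, the prompts themselves are fine and no token shuffling is needed; the fix is to map predictions back to class names before comparing.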
I've used the sample notebooks to train a model on my 150 photos across 4 categories, and that looks good, so I'll get an endpoint from that deployed model and try classification there instead.

I have no idea why the zero-shot CLIP step is failing to match anything.

When I use the online demo for the model I trained on my own dataset photos, everything looks fine and I get accurate classifications for my 4 categories.

Roboflow Inference Example