How to implement Object Tracking

I am new to object tracking. I went through Jacob's blog on Object Tracking and ran into the error below.

# Run zero-shot object tracking; point it at your video and your detection engine
!python clip_object_tracker.py --source ./data/video/fish.mp4 --detection-engine yolov5

I get error below:

Namespace(agnostic_nms=False, api_key=None, augment=False, cfg='yolov4.cfg', classes=None, confidence=0.4, detection_engine='yolov5', device='', exist_ok=False, img_size=640, info=False, max_cosine_distance=0.4, name='exp', names='coco.names', nms_max_overlap=1.0, nn_budget=None, overlap=0.3, project='runs/detect', save_conf=False, save_txt=False, source='./data/video/fish.mp4', thickness=3, update=False, url=None, view_img=False, weights='')
Downloading to…
100% 14.5M/14.5M [00:00<00:00, 16.7MB/s]

Fusing layers…
Using torch 1.12.1+cu113 CUDA:0 (Tesla T4, 15109.75MB)

Traceback (most recent call last):
File "", line 360, in <module>
File "", line 141, in detect
_ = yolov5_engine.infer(img.half() if half else img) if device.type != 'cpu' else None # run once
File "/content/zero-shot-object-tracking/yolov5/zero-shot-object-tracking/utils/", line 16, in infer
pred = self.model(img, augment=self.augment)[0]
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/content/zero-shot-object-tracking/yolov5/zero-shot-object-tracking/models/", line 123, in forward
return self.forward_once(x, profile) # single-scale inference, train
File "/content/zero-shot-object-tracking/yolov5/zero-shot-object-tracking/models/", line 139, in forward_once
x = m(x) # run
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/", line 154, in forward
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/", line 1208, in __getattr__
type(self).__name__, name))
AttributeError: 'Upsample' object has no attribute 'recompute_scale_factor'
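For context, this AttributeError is a known mismatch between a YOLOv5 checkpoint pickled under an older torch release (where `nn.Upsample` did not yet carry a `recompute_scale_factor` attribute) and a newer torch runtime whose `forward()` reads that attribute. A minimal sketch that reproduces the mechanism in isolation (the `del` simulates the stale checkpoint; this is my reading of the traceback, not taken from the repo):

```python
import torch
import torch.nn as nn

# A freshly constructed Upsample has recompute_scale_factor; a module
# deserialized from an old checkpoint does not. Deleting the attribute
# here stands in for loading such a checkpoint.
up = nn.Upsample(scale_factor=2, mode="nearest")
del up.recompute_scale_factor

try:
    up(torch.zeros(1, 3, 4, 4))
except AttributeError as e:
    # Same failure as in the traceback: nn.Module.__getattr__ raises
    # because the attribute is missing from the unpickled module.
    print(e)
```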

Hi @yeong_nam_tan, since you’re running it with the YOLOv5 detection engine, you’ll need to specify the path to your model weights after including the --weights flag.


!python clip_object_tracker.py --source ./data/video/fish.mp4 --detection-engine yolov5 --weights /insert_path_to/

For saving and loading model weights in Google Colab: How to Save and Load Model Weights in Google Colab

Hi @Mohamed,
Jacob's YouTube video (see 3:28) says that we can use the yolov5 engine with or without our own weights:
!python clip_object_tracker.py --source ./data/video/fish.mp4 --detection-engine yolov5

cc @Jacob_Solawetz

I get the same error even after inserting weights.


I am doing a project on fish counting for a fish farm and would really like to test whether CLIP works. Can anyone help? I am stuck here.

This should be fixed now, and we’ve also added functionality for YOLOv7: Merge pull request #27 from MShirshekar/master · roboflow-ai/zero-shot-object-tracking@6b5e923 · GitHub
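For anyone pinned to an older commit, a workaround that is commonly used for this particular error (a sketch, assuming the cause is a checkpoint saved under an older torch release; this is not the repo's official fix) is to restore the missing attribute on every `Upsample` module right after loading the model:

```python
import torch
import torch.nn as nn

def patch_upsample(model: nn.Module) -> nn.Module:
    """Restore recompute_scale_factor on Upsample modules deserialized
    from a checkpoint saved with an older torch version."""
    for m in model.modules():
        if isinstance(m, nn.Upsample) and not hasattr(m, "recompute_scale_factor"):
            m.recompute_scale_factor = None
    return model

# Demo on a stand-in module; a real call site would pass the loaded YOLOv5 model.
net = nn.Sequential(nn.Upsample(scale_factor=2, mode="nearest"))
del net[0].recompute_scale_factor  # simulate the stale checkpoint state
patch_upsample(net)
print(net(torch.zeros(1, 3, 4, 4)).shape)  # torch.Size([1, 3, 8, 8])
```

Setting the attribute to `None` just tells `F.interpolate` to use its default behavior, so inference results are unchanged.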