Below is the code I use to run inference and then save the mask. The mask I get is always 512x512, which differs from the size of the inferred image. What am I doing wrong?
import base64
import io
import os
from pathlib import Path

import cv2
import numpy as np
from PIL import Image

def save_mask(model_res, img_path: Path, res_dir):
    prediction = model_res["predictions"][0]
    # The mask arrives as a base64-encoded PNG string
    encoded_mask = bytes(prediction["segmentation_mask"], "ascii")
    mask_data = base64.decodebytes(encoded_mask)

    mask_name = img_path.stem + "_mask" + img_path.suffix
    mask_path = os.path.join(res_dir, mask_name)

    # Decode the PNG into a single-channel array
    mask_1c = np.array(Image.open(io.BytesIO(mask_data)))
    mask_h, mask_w = mask_1c.shape  # numpy shape is (rows, cols)

    # Black 3-channel image; paint mask pixels white
    mask_3c = np.zeros((mask_h, mask_w, 3), dtype=np.uint8)
    mask_3c[mask_1c > 0] = 255

    cv2.imwrite(mask_path, mask_3c)
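If the 512x512 output is expected model behavior rather than a bug on my side, I assume the fix is to resize the mask back to the source image's size, using nearest-neighbor interpolation so the binary values aren't blended. A minimal sketch of what I have in mind (resize_mask_to_image is my own hypothetical helper, not part of any SDK):

import cv2
from PIL import Image

def resize_mask_to_image(mask_path, img_path, out_path):
    # Hypothetical helper: assumes a plain resize undoes the mismatch.
    # INTER_NEAREST keeps the mask binary instead of blending values.
    mask = cv2.imread(mask_path, cv2.IMREAD_GRAYSCALE)
    with Image.open(img_path) as im:
        target_w, target_h = im.size  # PIL reports (width, height)
    resized = cv2.resize(mask, (target_w, target_h),
                         interpolation=cv2.INTER_NEAREST)
    cv2.imwrite(out_path, resized)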
And here is the calling code:

from roboflow import Roboflow

rf = Roboflow(api_key="KEY")
project = rf.workspace().project("PROJECT")
model = project.version(9).model

res = model.predict("img.png")
res.save("img_res.png")
save_mask(res.json(), ..., ...)
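To show where the numbers diverge, this is roughly how I compare the decoded mask against what the response reports (the "image" key with width/height is an assumption on my part, hence the defensive .get):

import base64
import io

import numpy as np
from PIL import Image

res_json = res.json()
pred = res_json["predictions"][0]
mask = np.array(Image.open(io.BytesIO(
    base64.decodebytes(bytes(pred["segmentation_mask"], "ascii")))))

print("decoded mask shape (h, w):", mask.shape)             # always (512, 512) for me
print("size reported in response:", res_json.get("image"))  # assumed key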
Dataset Details

Preprocessing:
- Auto-Orient: Applied
- Resize: Stretch to 640x640
- Filter Null: Require at least 90% of images to contain annotations

Augmentations:
- Outputs per training example: 5
- 90° Rotate: Clockwise, Counter-Clockwise, Upside Down
- Grayscale: Apply to 20% of images
- Brightness: Between -25% and +25%
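In case the preprocessing matters here: my understanding is that "Resize: Stretch to 640x640" is a plain non-aspect-preserving resize (no padding, no cropping), which I reproduce locally like this when testing (a sketch; stretch_to_square is my own name):

from PIL import Image

def stretch_to_square(img_path, side=640):
    # My understanding of the 'Stretch' preprocessing: resize to
    # side x side without preserving the aspect ratio.
    with Image.open(img_path) as im:
        return im.resize((side, side), resample=Image.BILINEAR)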