Deploying an instance segmentation model to macOS

Hi all, I’ve successfully trained an instance segmentation model using Roboflow Instance Segmentation 2.1. I’d now like to run that model from an Xcode app on macOS. If that’s not possible, I’d like to train my own model and convert it to CoreML. My end goal is to run this on a video feed at near-realtime inference speed on a Mac Studio.

Any recommendations for which route I should take here?

Based on the information provided in the extracts, Roboflow supports a variety of deployment options for your trained model. However, specific details about deploying in an Xcode app on macOS or converting the model to CoreML are not mentioned in the provided extracts.

Roboflow does support running inference in a web browser, which could potentially be integrated into your macOS application. You can run inference on your model by uploading an image or video, choosing an image from your test set, providing the URL of a YouTube video or image, or using your webcam as a source.

In terms of inference speed, it’s important to note that there’s often a trade-off between accuracy and speed. More accurate models tend to be slower as they require more computations to make predictions. Inference speeds may also vary depending on your hardware and on custom models with different prediction behavior.

For more specific guidance on deploying your model in an Xcode app or converting it to CoreML, I recommend checking out the official Roboflow documentation or reaching out to their support team.

Hi! I’m Roboflow’s AI powered bot. This is an automated response powered using AI with knowledge from our docs, blog, knowledge base, and previous forum responses. If you find this to be a helpful answer, please mark it as the solution using the checkbox icon below. If this doesn’t solve your issue, please follow up with what you’ve tried/why, and the community will continue to chime in as usual.

Hey Hart! We don’t yet have a way of doing this. We prototyped it a few weeks ago and validated that the model runs (~90fps on the Mac we were using), but the CoreML NMS step doesn’t support instance segmentation, so we’ll need to re-implement the post-processing code.

The SDK is open-sourced here; any chance you’d want to collaborate with @Maxwell on it? GitHub - roboflow/roboflow-swift
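For anyone following along: the post-processing Brad mentions is essentially greedy non-maximum suppression over the model’s raw detections (for instance segmentation, the kept indices then select the corresponding masks). Here’s a minimal sketch of the idea in Python for clarity — the function names and the (x1, y1, x2, y2) box layout are my own assumptions, not the roboflow-swift API, but the logic ports straightforwardly to Swift:

```python
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes, all as (x1, y1, x2, y2)."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_threshold=0.45):
    """Greedy NMS: keep the highest-scoring box, drop boxes that overlap
    it above iou_threshold, repeat. Returns indices of kept detections;
    for instance segmentation, keep the masks at the same indices."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        order = rest[iou(boxes[i], boxes[rest]) <= iou_threshold]
    return keep
```

Since CoreML’s built-in NMS layer can’t be used here, something like this would run on the raw model outputs after each inference call.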

Thanks Brad, I’ll take a look!

Would running this via a local server on the same Mac have good FPS, or would it be better to try to achieve true CoreML integration?

I’d be glad to try to implement whatever classes are necessary to get it working – it might save me time in the long term not having to worry about setting up a separate model. If there are any pointers @Maxwell would like to give me to get started, please reach out to hart at or Discord @hartw – thanks!

Depends on what you mean by “good” FPS. macOS can’t pipe MPS through to Docker, unfortunately, so it’d run on the CPU, but the M1 and M2 chips are so fast it can get pretty good FPS anyway.

Makes sense, thanks 🙂