Exporting model


I was wondering if I can train a model on Roboflow and then export or retrieve the model file. In this case, having trained a model with YOLOv5, I wanted the .pt or .pth file. Is that possible?

Hi Luis,

Unfortunately, we do not release the weights files from Roboflow Train by default, but any custom model trained with our Model Zoo (models.roboflow.com) results in final weights files for export.

To train a custom YOLOv5 model: Video

To save the weights: How to Save and Load Model Weights in Google Colab
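As a rough sketch of that save step (the paths here are assumptions — by default, YOLOv5 writes its best checkpoint to `runs/train/exp*/weights/best.pt`, and in Colab you'd typically copy it to a mounted Google Drive folder before the runtime recycles):

```python
# Hypothetical helper: copy the best YOLOv5 checkpoint out of a run
# directory to a persistent location (e.g. a mounted Drive folder).
# Paths follow YOLOv5's default output layout; adjust for your run.
import shutil
from pathlib import Path

def save_weights(run_dir: str, dest_dir: str) -> Path:
    """Copy best.pt from a YOLOv5 run directory to dest_dir."""
    src = Path(run_dir) / "weights" / "best.pt"
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    return Path(shutil.copy2(src, dest / src.name))
```

Usage in Colab would look like `save_weights("runs/train/exp", "/content/drive/MyDrive/yolov5")` after mounting Drive.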

Do you have any plans to do so in the future? It would be really nice to have a completely no-code solution instead of having to go outside of Roboflow and use a Google Colab notebook. I would also like to be able to export it as a TFLite model.

We do have local deployment options: Launch: Test Computer Vision Models Locally and Inference - Object Detection - Roboflow (any option that reads “On-Device”).

For Raspberry Pi deployment for example: Raspberry Pi (On Device) - Roboflow

We do take feature request feedback into account. Implementation depends on how many people request a feature and on our development team’s timeline.

Being able to create a TFLite model would be great!
So far I have not managed to create a TFLite model in Colab from the Roboflow data with the instructions I have found.

The examples I found in Roboflow are based on an old TensorFlow version and don’t work properly anymore.

Hi, we’re working on an updated Colab notebook that works with TensorFlow 2 for export to TFLite.

I haven’t yet fully tested it myself, but it worked with the sample data provided from Google’s original notebook.

Once we get it tested, we’ll be updating the notebook to make the flow easier to follow, and including it in the Notebooks repository on GitHub.

Thank you :slight_smile: I will try it

Here’s an updated TFLite notebook: Google Colab

The model architecture used in training is: EfficientDet

I fully tested it with a project of my own.

Hi, thank you for the great support :slight_smile:

I’m just trying it out. I still have one question:

My images are 224x224.
The input size for EfficientDet-Lite1 is 640x640. Do I have to adjust the image size first, or can I work with 224x224?

So far I’ve created my models with Teachable Machine, which requires an image size of 224x224 for a TFLite model.

The following error occurs when exporting.

I’m using TensorFlow 2.8.4:

```python
tflite_filename = 'model_Cookie.tflite'
label_filename = 'labels.txt'
vocab_filename = 'vocab.txt'
saved_model_filename = 'saved_model'
tfjs_folder_name = 'tfjs'

## available formats: tflite, tfjs, saved_model, vocab, label
#export_format = ['tflite', 'tfjs', 'saved_model']  # export in TFLite, TFJS, TF SavedModel
export_format = ['tflite']
```

I didn’t specify any special characters either

It would be best to use a size most-comparable to the input size of the model architecture you’re training with. Generating a new version at 640x640 is an option in Roboflow, too.
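To make the 224-to-640 question concrete, here is a minimal, library-free sketch of the resize math. Letterboxing (scale to fit, then pad to square) is one common convention; your detection pipeline may handle this internally, so treat this as illustration rather than a required preprocessing step:

```python
# Sketch: compute the letterboxed size and padding needed to bring an
# image of (w, h) up to a square model input (e.g. 640x640 for
# EfficientDet-Lite1), preserving aspect ratio.
def letterbox_size(w, h, target=640):
    """Return ((new_w, new_h), (pad_w, pad_h)) for a target x target input."""
    scale = target / max(w, h)          # scale so the longer side fits
    new_w, new_h = round(w * scale), round(h * scale)
    return (new_w, new_h), (target - new_w, target - new_h)
```

For a square 224x224 image, `letterbox_size(224, 224)` scales cleanly to 640x640 with no padding; non-square images pick up padding on the shorter axis.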

I’m not sure what underlying model Teachable Machine uses. Whatever architecture they use is what is influencing those resize parameters.

I was able to fully run the notebook and get the export without issue:

Maybe it’s the `_Cookie` you’ve included in your tflite_filename? I also ran this notebook with TensorFlow 2.8.4 - I created the notebook while running through the full process last week.

And I do also want to add that you can change the EfficientDet model you use - I just happened to choose 1 in this case.

I modified the code as follows; then at least the .tflite file is created:

```python
label_filename=label_filename, saved_model_filename=saved_model_filename, tfjs_folder_name=tfjs_folder_name)
```

As soon as I add

, the error comes.
The labels.txt is not generated.
I tried with:

```python
model.export(export_dir='.', with_metadata=False)
```

@staebchen0 I just pulled up the notebook to test it out again.

Maybe something changed with Colab’s pre-installed libraries. I’ll report back with the results and whether I changed anything in the notebook.

Just found that Teachable Machine is using MobileNet to train image models. They’re using MobileNet to train a classification model on your images, and resizing to 224x224 for optimal training and inference speed.
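As an aside on that 224x224 input: the Keras MobileNet family expects pixel values scaled to the range [-1, 1] rather than raw 0–255 values. A minimal sketch of that scaling (the function name here is my own; in practice `tf.keras.applications.mobilenet.preprocess_input` does this for you):

```python
# Sketch of MobileNet-style input scaling: map pixel values from
# [0, 255] to [-1, 1], as tf.keras.applications.mobilenet expects.
def mobilenet_preprocess(pixel: float) -> float:
    return pixel / 127.5 - 1.0
```

This is separate from the 224x224 spatial resize, which only affects width and height, not the value range.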

Some other notes I wanted to add:

The description says that the data should be downloaded as a folder structure.

In the Roboflow export there is no entry with the name “Folder Structure”.

Description: How to Train MobileNetV2 On a Custom Dataset
“Then simply generate a new version of the dataset and export with a “Folder Structure”. You will recieve a Jupyter notebook command that looks something like this:”

To receive the export type of “Folder Structure,” you’ll need to be working out of a classification project.

Are you working out of a single label classification project?
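For context, a classification “Folder Structure” export generally unpacks to one sub-folder per class inside each split (train/valid/test), which is what Keras-style image loaders expect. A small sketch that lists the classes in a split (folder and class names here are hypothetical):

```python
# Hypothetical check of a "Folder Structure" classification export:
# each split directory (train/valid/test) contains one folder per class.
from pathlib import Path

def list_classes(dataset_dir: str, split: str = "train") -> list:
    """Return the class names (sub-folder names) found in one split."""
    split_dir = Path(dataset_dir) / split
    return sorted(p.name for p in split_dir.iterdir() if p.is_dir())
```

If `list_classes` comes back empty or raises, the export is probably not a classification "Folder Structure" export.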