- Project Type: Segmentation
- Operating System & Browser: Windows, Ubuntu / Google Chrome
- Project Universe Link or Workspace/Project ID: Sign in to Roboflow
I was trying to create a dataset with RGBD images. Uploading and annotating the RGBD images went fine; however, when generating the dataset, the images lost the depth component and became plain RGB images.
Is there any way to prevent this and generate the dataset with the RGBD images intact?
It seems like your question is incomplete. However, based on the information provided, I can tell you that Roboflow supports various project types including Instance Segmentation and Semantic Segmentation.
In an Instance Segmentation project, you can find the location of objects in an image down to the pixel level, with a unique reference created for each object found. Semantic Segmentation also locates objects at the pixel level, but it does not distinguish between individual instances of the same class.
To create a new project, you need to log in to your Roboflow account, go to the dashboard, and click “Create New Project”. You will be asked to specify the project type, project name, and what you are detecting.
As for the operating system, Roboflow can be used on various operating systems including Windows and macOS.
For more information, you can refer to the Roboflow Documentation.
If you have more specific questions or need further assistance, feel free to ask!
Hi! I’m Roboflow’s AI powered bot. This is an automated response powered using AI with knowledge from our docs, blog, and forum. If this solves your issue, please mark it as the solution using the checkbox icon. If this doesn’t solve your issue, please follow up with what you’ve tried/why, and the community will continue to chime in as usual.
I attempted to create a dataset for segmentation using RGBD images, which consist of RGB images with a depth component. The depth component was retained during upload and annotation. However, during dataset generation, the final images lost the depth component and became RGB only. My question is: how can I create an RGBD segmentation dataset?
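A quick way to reproduce the symptom described above is that JPEG (the generated dataset's format, per the reply below) simply cannot carry a fourth channel, while PNG can. This is a minimal sketch using synthetic in-memory images, not Roboflow's actual pipeline:

```python
from io import BytesIO
from PIL import Image

# Synthetic 4-channel image: RGB plus a depth-like 4th channel stored as alpha
rgbd = Image.new("RGBA", (8, 8), (10, 20, 30, 40))

# PNG round-trip keeps all 4 channels
png_buf = BytesIO()
rgbd.save(png_buf, format="PNG")
png_channels = len(Image.open(BytesIO(png_buf.getvalue())).getbands())

# JPEG cannot store a 4th channel; Pillow requires converting to RGB first,
# which is exactly where the extra channel is lost
jpg_buf = BytesIO()
rgbd.convert("RGB").save(jpg_buf, format="JPEG")
jpg_channels = len(Image.open(BytesIO(jpg_buf.getvalue())).getbands())

print(png_channels, jpg_channels)  # 4 3
```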
We currently do not have a way to output images in any format other than 3-channel (RGB) JPG. There are multiple reasons for this, but it's important to keep in mind that most model pipelines expect 3-channel images and, to my knowledge, wouldn't benefit from the extra channel.
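For anyone who does need the depth data downstream, one common workaround (not a Roboflow feature — a sketch of a custom loading step) is to keep the depth maps as separate single-channel files alongside the generated RGB dataset, matched by filename, and stack them back together at load time:

```python
import numpy as np
from PIL import Image

def stack_rgbd(rgb: Image.Image, depth: Image.Image) -> np.ndarray:
    """Combine a 3-channel RGB image with a single-channel depth map
    into one (H, W, 4) array for a custom training pipeline."""
    rgb_arr = np.asarray(rgb.convert("RGB"))
    depth_arr = np.asarray(depth.convert("L"))[..., None]
    return np.concatenate([rgb_arr, depth_arr], axis=-1)

# Demo with synthetic images; in practice these would be loaded from
# the generated dataset and your separately kept depth files
rgb = Image.new("RGB", (8, 8), (10, 20, 30))
depth = Image.new("L", (8, 8), 200)
rgbd = stack_rgbd(rgb, depth)
print(rgbd.shape)  # (8, 8, 4)
```

Note this requires a model pipeline that accepts 4-channel input (e.g. a first convolution layer with 4 input channels), which, as mentioned above, most off-the-shelf pipelines do not.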
Ok, thank you for the information!
This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.