Inquiry about YOLOv8/9

Hello. I am a college student living in Korea.

I have a question I'd like to ask.

I am currently researching object detection related to typhoons.

One concern is that typhoons are white, like clouds.

Their shapes also vary, and in some ways they look similar to ordinary clouds.

Even so, each one is definitely a typhoon.

I can get the coordinates (latitude, longitude) of each typhoon, and I think I can create a YAML file.

Given the problem mentioned above, is it possible to detect typhoons with YOLOv8 or YOLOv9?

I’m curious about the thoughts of experts on this matter.

Thank you for your help.

Hey @λ„ν›ˆ_이

Really interesting idea you’re working on. The short answer, I think, is yes, it is possible. Object detection models (including, but not limited to, YOLOv8 and YOLOv9) do take color into consideration, but the fact that the other, non-typhoon clouds are the same color shouldn’t make training a model impossible.
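For reference, a single-class YOLO dataset is usually described by a small YAML file. A minimal sketch might look like the following (the directory names and paths here are placeholders, not your actual layout):

```yaml
# Hypothetical minimal dataset config for a single-class YOLO model.
# All paths below are assumptions -- adjust to wherever your data lives.
path: datasets/typhoon   # dataset root
train: images/train      # training images, relative to path
val: images/val          # validation images, relative to path
names:
  0: typhoon             # the one class you want to detect
```

Each image then gets a matching `.txt` label file listing the boxes for that class.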

Also, it looks like you are using object detection, but it’s possible that instance segmentation might be better for this use case. Regardless, we’d love to see how your project ends up. Feel free to share it in Show & Tell.


Could you please explain a bit more about instance segmentation? Also, is it possible to detect typhoons without using coordinate values, but instead by detecting the objects directly? For example, can we create a YAML file using only 10,000 typhoon image files, or are there other methods? The image above is an example I made in PowerPoint.

Sure, what would you like to know? This article could help: What is Instance Segmentation? A Guide.

I’m not quite sure I understand what you are saying. Could you explain what you mean by coordinate values, as opposed to detecting objects?

Also, it is worth noting that you likely would not need 10,000 images to train an accurate model. I suggest you start with a much smaller number, then add and label more data if your model is not performant enough.


Firstly, as a beginner with YOLO, I’m not very familiar with it, so I kindly ask for your understanding. The data I have consists of images of typhoons, along with latitude and longitude data. The issue is the difficulty of converting latitude and longitude into coordinates on the image.

Therefore, I’m curious whether it’s absolutely necessary to have the (latitude, longitude) coordinates of the typhoon, or whether it’s possible to create a dataset solely from the pure typhoon images, without worrying about the coordinates. (The images I have depict typhoons on a map of Asia.)

I’ll check out the link you provided above. With the 10,000 pure typhoon images I will attach, will it be possible to perform object detection on an Asian map, without needing latitude, longitude, or coordinates?
Thank you.

Thanks to the link you provided, I feel like I have some understanding of instance segmentation now. I’ll delve a bit deeper into it and try to figure out how to apply it to typhoon images. Thank you.

Roboflow homepage → Create New Project → if I put images into an instance segmentation project, I can edit images like the above. Can I proceed with training by drawing bounding boxes arbitrarily for typhoon detection, ignoring the typhoon’s (latitude, longitude) coordinates? If I train by arbitrarily marking boxes on the typhoon images, will the accuracy suffer?

I’m now looking to create a dataset from my data images. Before I proceed, I have a question. When creating the dataset, is it better to upload all three types of images (color, normal, black/white) together to Roboflow for training, or is it better to upload the color, normal, and black/white data separately for training? (e.g., training only with color images for recognizing typhoons in color images.) If the color, normal, and black/white images are not correlated, I plan to train the dataset using all three types of images.

Hey @λ„ν›ˆ_이

Ah, okay, I see. For creating a computer vision model, having the coordinates of the object (in this case, a typhoon) is not required. However, if you do have them, you may be able to automate the annotation process.
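As a minimal sketch of that automation idea: if your map crops use an equirectangular projection with known corner coordinates, you can map (latitude, longitude) to a pixel position and emit a YOLO-format label line. The lat/lon bounds, image size, and fixed box size below are all hypothetical placeholders, not values from your data:

```python
# Sketch: turn a typhoon's (lat, lon) into a YOLO detection label.
# Assumes an equirectangular map crop with known corner coordinates;
# the bounds, image dimensions, and box size are hypothetical.

def latlon_to_pixel(lat, lon, img_w, img_h,
                    lat_top=60.0, lat_bottom=0.0,
                    lon_left=100.0, lon_right=160.0):
    """Map (lat, lon) to (x, y) pixel coordinates on the image."""
    x = (lon - lon_left) / (lon_right - lon_left) * img_w
    y = (lat_top - lat) / (lat_top - lat_bottom) * img_h  # y grows downward
    return x, y

def yolo_label(lat, lon, img_w, img_h, box_px=200, class_id=0):
    """Build a YOLO label line: class x_center y_center width height,
    all normalized to [0, 1]."""
    x, y = latlon_to_pixel(lat, lon, img_w, img_h)
    return (f"{class_id} {x / img_w:.6f} {y / img_h:.6f} "
            f"{box_px / img_w:.6f} {box_px / img_h:.6f}")

# Example: a typhoon centered at 25N, 130E on a 1200x1200 map crop.
print(yolo_label(25.0, 130.0, 1200, 1200))  # -> 0 0.500000 0.583333 0.166667 0.166667
```

A fixed box size is a crude assumption; if you have storm-radius data you could scale the box per typhoon instead.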

Labeling often does have some degree of arbitrariness. The most important thing is to make your annotations consistent. As I suggested before, I recommend you start with a much smaller number of images (than the 10,000 you mentioned earlier) and train a model.

I do recommend you use all three types of images together, especially if you are looking to predict/run inference on all three types of images. Doing so will make the model place a lower emphasis/importance on color, which I assume will be good for your use case.
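As a toy illustration of why mixing the three image types de-emphasizes color (this is not part of any training pipeline, just arithmetic): converting an RGB pixel to grayscale keeps luminance but discards hue, so a model trained on both color and black/white data has to rely on shape and texture instead of color.

```python
# Toy sketch: RGB -> grayscale using the ITU-R BT.601 luma weights.
# A white typhoon pixel and a white cloud pixel are already identical,
# so color carries little signal for this task in the first place.

def to_gray(r, g, b):
    """Return the BT.601 luma of an RGB pixel (0-255 range)."""
    return round(0.299 * r + 0.587 * g + 0.114 * b)

# Visually different colors can collapse to very different grays...
print(to_gray(255, 255, 255))  # white -> 255
print(to_gray(0, 0, 255))      # pure blue -> 29
```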

This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.