Download the data set
Here I am asking for 6000 pictures from the Mushroom class, but the maximum number of pictures available in the training set is 1782.
Now we have the pictures with their bounding boxes.
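For reference, the download step can be sketched like this from a notebook cell. The flag names follow OIDv4_ToolKit's README, but treat them as assumptions and check the toolkit's own help output:

```python
# Sketch of the OIDv4_ToolKit download call; run from inside the cloned
# OIDv4_ToolKit folder. Flag names are assumptions based on the toolkit's README.
cmd = [
    "python", "main.py", "downloader",
    "--classes", "Mushroom",   # the class we want
    "--type_csv", "train",     # which split to pull from
    "--limit", "6000",         # requested image count (only ~1782 exist)
]
print(" ".join(cmd))
# to actually run it:
# import subprocess; subprocess.run(cmd, check=True)
```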
See what I’ve downloaded locally so far.
The images are now available in the Mushroom folder, which contains the labels folder.
Two more steps and our data set is ready. First, we need to put all the labels in the same folder as the images. Easy! Second, we need to convert the label / bounding box files to the correct format. The entries stored in the .txt files are coordinates (XMin, YMin, XMax, YMax). Note that we need coordinates between 0 and 1.
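The first step — moving every label file next to its images — only takes a few lines. This is a sketch; the folder names are my assumptions about the toolkit's output layout:

```python
# Sketch: flatten the layout by moving each label .txt next to its image.
# The "Label" subfolder name is an assumption about the toolkit's output.
from pathlib import Path
import shutil

def move_labels(image_dir: str, label_subdir: str = "Label") -> int:
    """Move every .txt file from image_dir/label_subdir into image_dir."""
    root = Path(image_dir)
    moved = 0
    for txt in (root / label_subdir).glob("*.txt"):
        shutil.move(str(txt), str(root / txt.name))
        moved += 1
    return moved

# e.g. move_labels("OID/Dataset/train/Mushroom")  # hypothetical path
```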
To complete this step, I know you can use labeling applications. But I want to do it myself!
So first we edit the OIDv4_ToolKit classes.txt file to contain just the class Mushroom. After that I tried convert_annotations.py from AIGuyCode. It didn't work right away, so I put the code from convert_annotations.py in my notebook and hardcoded the file paths. I don't know why it didn't work as-is, but the code is good and it is open source.
Check it out here!
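The core of the conversion is simple. Here is a minimal sketch with my own function name, assuming the label files store absolute pixel corner coordinates: YOLO wants a normalized center point plus width and height, all between 0 and 1.

```python
# Minimal sketch of the label conversion: absolute corner coordinates
# (XMin, YMin, XMax, YMax) -> YOLO's normalized "class x_center y_center w h".
def to_yolo(xmin, ymin, xmax, ymax, img_w, img_h, class_id=0):
    """Convert absolute corner coords to YOLO's normalized center format."""
    x_center = (xmin + xmax) / 2 / img_w
    y_center = (ymin + ymax) / 2 / img_h
    width = (xmax - xmin) / img_w
    height = (ymax - ymin) / img_h
    return f"{class_id} {x_center:.6f} {y_center:.6f} {width:.6f} {height:.6f}"

print(to_yolo(100, 50, 300, 250, img_w=640, img_h=480))
# -> 0 0.312500 0.312500 0.312500 0.416667
```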
Let’s look at the results:
Check the difference in labeling
Now we are ready! We have an image data set with the bounding boxes in the correct format. Time to get the model!
YOLOv5 is available here. Let’s clone it.
We need all the requirements that come with the package. For this you can use the requirements.txt file, or use pip directly if you want more control over your environment.
Check out my YOLOv5 folder on GitHub.
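From a notebook, cloning the repo and installing the requirements can be sketched like this (the repo URL is the official Ultralytics one; adapt the commands to your shell if you prefer):

```python
# Sketch: clone YOLOv5 and install its requirements from a notebook cell.
cmds = [
    ["git", "clone", "https://github.com/ultralytics/yolov5"],  # official repo
    ["pip", "install", "-r", "yolov5/requirements.txt"],        # all dependencies
]
for cmd in cmds:
    print(" ".join(cmd))
    # import subprocess; subprocess.run(cmd, check=True)  # uncomment to run
```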
We need to add a yaml file so YOLOv5 can find the class. Here I added Sieni.yaml to the data folder. Note that the paths in it should point to the Mushroom folder that we prepared earlier with OIDv4_ToolKit.
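A sketch of what such a yaml file looks like — the paths here are placeholders, so adjust them to wherever your Mushroom folder actually lives:

```yaml
# Sieni.yaml — dataset definition for YOLOv5 (paths are examples)
train: ../Mushroom/train/
val: ../Mushroom/validation/
nc: 1                  # number of classes
names: ['Mushroom']    # class names
```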
Train the YOLOv5 model
The downloaded YOLOv5 package includes four model versions: YOLOv5s, YOLOv5m, YOLOv5l, and YOLOv5x. I use the small one, YOLOv5s. As you might have guessed, the letters s, m, l, and x correspond to the model size.
In the training command, you give the following arguments to the selected model:
- img: the size of the input images
- batch: batch size
- epochs: number of training epochs
- data: our previously created yaml file
- cfg: the selected model (here I used the small one)
- weights: custom path for weights; if empty, they are saved under yolov5/runs/train/yolov5_results/weights
- name: name for the results
- device: 0 to use the GPU
- cache: cache images for faster training
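Putting the arguments above together, the training call can be sketched like this — the values are examples, not the only reasonable choices:

```python
# Sketch: the full training call; run from inside the cloned yolov5 folder.
# Argument values (image size, batch, epochs) are example choices.
cmd = [
    "python", "train.py",
    "--img", "640",                  # input image size
    "--batch", "16",                 # batch size
    "--epochs", "100",               # training epochs
    "--data", "data/Sieni.yaml",     # our dataset yaml
    "--cfg", "models/yolov5s.yaml",  # the small model
    "--weights", "",                 # empty: train from scratch
    "--name", "yolov5_results",      # results folder name
    "--device", "0",                 # GPU 0
    "--cache",                       # cache images for faster training
]
print(" ".join(cmd))
# import subprocess; subprocess.run(cmd, check=True)  # uncomment to run
```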
And that’s all it takes to train it!