This repository provides the key configurations for model training and detection from the paper "Counting wheat heads using a simulation model" [--], together with unified code for evaluating model detection and counting performance.
- `main.py` in the `Assess` folder is the main code for evaluation and testing.
- The `Models` folder contains the configuration files used for model training and testing.
- The `Figs` folder contains the code for drawing the main figures and charts in the paper.
During training, testing, and inference, images are always first converted to grayscale.

The dataset can be downloaded from figshare. We use grayscale images for training and testing (the `images_grayscale` folders in the dataset); grayscale images can also be produced from the original RGB images with `util/to_gray.py`.

The pretrained YOLOv7 model trained on our simulated wheat images can be downloaded from Dropbox-yolo-wheat.
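The exact implementation of `util/to_gray.py` is not shown here; a minimal sketch of an RGB-to-grayscale conversion of the same kind, assuming the standard ITU-R BT.601 luminance weights, might look like:

```python
import numpy as np

# Hypothetical sketch of an RGB -> grayscale conversion in the spirit of
# util/to_gray.py (the actual script may differ); uses the standard
# ITU-R BT.601 luminance weights.
def to_gray(rgb: np.ndarray) -> np.ndarray:
    """Convert an (H, W, 3) uint8 RGB image to an (H, W) uint8 grayscale image."""
    weights = np.array([0.299, 0.587, 0.114])  # R, G, B contributions
    gray = rgb.astype(np.float64) @ weights    # weighted sum over the channel axis
    return np.clip(np.round(gray), 0, 255).astype(np.uint8)
```

In practice the script would also walk the image folder and save each converted image (e.g. with OpenCV or Pillow), which is omitted here.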
- Clone this repository to your local machine.
- Download the original YOLOv7 pretrained model from WongKinYiu's YOLOv7 and place it in the `weights` folder.
- In `Models/YOLOV7/data/MakeMyData.yaml`, specify the paths to your training set (processed in grayscale) and validation set (also processed in grayscale).
- Optionally, set custom parameters (defaults are available).
- Run `train.py` in `Models/YOLOV7` to start training the model. Expected results should appear within 25-75 epochs.
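Concretely, the training step might be invoked as follows. The paths and hyperparameters below are assumptions to adjust to your setup; the flags themselves are standard YOLOv7 `train.py` options.

```shell
# Sketch of a training invocation, assumed to run from inside Models/YOLOV7;
# weights path and hyperparameters are placeholders, not the paper's settings.
TRAIN_CMD="python train.py \
  --weights ../../weights/yolov7.pt \
  --data data/MakeMyData.yaml \
  --img-size 640 640 --batch-size 16 --epochs 75"
echo "$TRAIN_CMD"   # review the command, then run it with: eval "$TRAIN_CMD"
```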
- In `detect.py`, modify custom parameters as needed (defaults are acceptable), and set the path to the test dataset.
- Run `detect.py`. The results will be available in the `runs/detect` folder.
- In `main.py` inside `Assess`, specify the paths to the labels of the test dataset and the labels of the detection results.
- Run `main.py`. The results will be printed to the console.
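The detection step above can be invoked along these lines. The weights and source paths are assumptions; `--conf-thres`, `--img-size`, and `--save-txt` are standard YOLOv7 `detect.py` flags, with `--save-txt` writing per-image label files of the kind the evaluation step consumes.

```shell
# Sketch of a detection invocation, assumed to run from inside Models/YOLOV7;
# weights and source paths are placeholders for your own trained model and test set.
DETECT_CMD="python detect.py \
  --weights runs/train/exp/weights/best.pt \
  --source /path/to/test_images_grayscale \
  --img-size 640 --conf-thres 0.25 --save-txt"
echo "$DETECT_CMD"   # review the command, then run it with: eval "$DETECT_CMD"
```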
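The specific metrics printed by `Assess/main.py` are not listed here; counting performance of this kind is commonly summarized with MAE, RMSE, and R², which can be sketched as follows (a hypothetical illustration, not necessarily the script's actual metrics):

```python
import math

# Hypothetical counting metrics (MAE, RMSE, R^2) of the kind an evaluation
# script like Assess/main.py might report; the real script may differ.
def counting_metrics(true_counts, pred_counts):
    """Compare per-image ground-truth head counts with predicted counts."""
    n = len(true_counts)
    errors = [p - t for t, p in zip(true_counts, pred_counts)]
    mae = sum(abs(e) for e in errors) / n
    rmse = math.sqrt(sum(e * e for e in errors) / n)
    mean_t = sum(true_counts) / n
    ss_res = sum(e * e for e in errors)                    # residual sum of squares
    ss_tot = sum((t - mean_t) ** 2 for t in true_counts)   # total sum of squares
    r2 = 1.0 - ss_res / ss_tot
    return mae, rmse, r2
```

Here the counts per image would come from the number of label lines in each ground-truth and detection-result file.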