RoboCupHumanoid

In this project, we aim to implement [1] and improve it with a Conv + LSTM layer for detection and tracking of the soccer ball in the RoboCup Humanoid League.
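As a rough illustration of the Conv + LSTM idea, the sketch below shows a per-frame convolutional feature extractor followed by an LSTM over the time dimension. This is not the architecture used in this repository; the layer sizes, input resolution, and bounding-box regression head are all assumptions for illustration only.

```python
# Minimal sketch of a Conv + LSTM ball detector (illustrative only; the
# actual Sweaty-based architecture and layer sizes in this repo may differ).
import torch
import torch.nn as nn


class ConvLSTMBallDetector(nn.Module):
    def __init__(self, hidden_size=128):
        super().__init__()
        # Per-frame convolutional feature extractor.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((8, 8)),
        )
        # LSTM over the time dimension to track the ball across frames.
        self.lstm = nn.LSTM(input_size=64 * 8 * 8, hidden_size=hidden_size,
                            batch_first=True)
        # Regression head predicting a box (xmin, ymin, xmax, ymax) per frame.
        self.head = nn.Linear(hidden_size, 4)

    def forward(self, frames):
        # frames: (batch, time, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.backbone(frames.flatten(0, 1))   # (b*t, 64, 8, 8)
        feats = feats.flatten(1).view(b, t, -1)        # (b, t, 4096)
        out, _ = self.lstm(feats)                      # (b, t, hidden_size)
        return self.head(out)                          # (b, t, 4)


if __name__ == "__main__":
    model = ConvLSTMBallDetector()
    clip = torch.randn(2, 5, 3, 128, 128)  # two clips of five frames each
    print(model(clip).shape)               # torch.Size([2, 5, 4])
```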

Dataset

example_data.csv contains information about training images with the following fields:

  • image_file - path to the image,
  • width - width of the image,
  • height - height of the image,
  • label - label of the object,
  • xmin - x-coordinate of the top-left corner of the bounding box around the object,
  • ymin - y-coordinate of the top-left corner of the bounding box around the object,
  • xmax - x-coordinate of the bottom-right corner of the bounding box around the object,
  • ymax - y-coordinate of the bottom-right corner of the bounding box around the object.

To extract and label images we use Image Tagger. We used YOLO for automatic ball detection and then manually verified each detection.

Note: One image file may contain multiple objects of different types.
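For reference, here is a minimal sketch of reading example_data.csv and grouping the annotations per image. It assumes a standard CSV with a header row matching the fields above; pandas is used only for illustration and is not necessarily a dependency of this project.

```python
# Minimal sketch: load example_data.csv and collect the boxes for each image
# (assumes a standard CSV with a header row; pandas used only for illustration).
import pandas as pd

annotations = pd.read_csv("example_data.csv")

# One image file may contain multiple objects (one row per annotated object),
# so group by image_file to gather all boxes belonging to the same image.
for image_file, rows in annotations.groupby("image_file"):
    boxes = rows[["label", "xmin", "ymin", "xmax", "ymax"]].to_dict("records")
    print(image_file, boxes)
```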

Running the tests

  • Train Sweaty: python train.py --batch_size=16 --alpha=1000 --model_name=alpha1000 --epochs=50
  • Test Sweaty: python test.py --load=pretrained_models/alpha1000_epoch_50.model --testSet=data/test/ --trainSet=data/train/

References

[1] Fabian Schnekenburger, Manuel Scharffenberg, Michael Wulker, Ulrich Hochberg, Klaus Dorer. Detection and Localization of Features on a Soccer Field with Feedforward Fully Convolutional Neural Networks (FCNN) for the Adult-Size Humanoid Robot Sweaty.