farmbot_yolo
A YOLO-based fruit picking system for FarmBot

Brief introduction

https://pjreddie.com/darknet/yolo/

YOLO, implemented using Darknet

Darknet is a deep learning framework created by the author of YOLO. It is written in C and provides the Python bindings needed for object detection.

Detector Training

Check the instructions for how to train the YOLO detector.

Camera Calibration

Think of YOLO as a black box: it consumes photos that may or may not contain objects and outputs the objects' positions in pixel coordinates. However, we want to know their positions on the planting bed, i.e., in the global coordinate system. That is why we need camera calibration. This step was done separately in MATLAB and gives us the intrinsic matrix of the camera. You do not need to redo it unless you switch to a different camera model. Check the MATLAB instructions for how to use the camera calibration app.

Note: The result has to be converted to a struct, a MATLAB data type, before saving.

If you follow the instructions above, you will end up with a MATLAB object, cameraParameters. If you save it to a .mat file and read it with SciPy, you will unfortunately get an empty dictionary. Always use the function toStruct to convert it to a struct, which is close to a Python dictionary, before saving it to a .mat file. Check the function here.
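As a sketch of the Python side, the saved struct can be read back with SciPy. The struct field name for the intrinsic matrix (IntrinsicMatrix below) depends on the MATLAB release, so it is an assumption here and may need adjusting:

```python
from scipy.io import loadmat


def load_intrinsics(mat_path, field="IntrinsicMatrix"):
    """Read the camera struct saved from MATLAB via toStruct.

    `field` is assumed to be the struct field holding the 3x3
    intrinsic matrix; check the actual field names in your .mat file.
    """
    data = loadmat(mat_path, squeeze_me=True, struct_as_record=False)
    # loadmat adds bookkeeping keys such as '__header__'; the saved
    # struct is the remaining entry.
    structs = {k: v for k, v in data.items() if not k.startswith("__")}
    params = next(iter(structs.values()))
    return getattr(params, field)
```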

Coordinate Transform

Pixel Coordinate --(1)--> Camera Coordinate --(2)--> Global Coordinate

  • Pixel coordinate: output bounding boxes from YOLO in the form <x, y, w, h>. Currently we only use <x, y>; w and h might be useful for estimating an object's size in the future.
  • Camera coordinate: the coordinate system determined by the camera, whose origin is at the camera pinhole.
  • Global coordinate: the coordinate system of the FarmBot. The origin is at the upper right corner of the planting bed.

Note: Due to mechanical limitations, the origin of the global coordinate system does not perfectly align with the planting bed's corner.

1. From Pixel to Camera Coordinate
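Since the camera looks straight down at the bed from a known height, a pixel can be back-projected with the inverse of the intrinsic matrix. This is a minimal sketch of that idea; the function name and the assumption of a fixed, known depth are ours, not the repository's:

```python
import numpy as np


def pixel_to_camera(u, v, K, depth):
    """Back-project a pixel (u, v) to camera coordinates.

    K is the 3x3 intrinsic matrix from the MATLAB calibration and
    `depth` is the distance from the pinhole to the planting bed,
    assumed known because the camera points straight down.
    """
    pixel_h = np.array([u, v, 1.0])       # homogeneous pixel coordinate
    ray = np.linalg.inv(K) @ pixel_h      # ray direction through the pinhole
    return depth * ray                    # scale by the known depth
```

A pixel at the principal point maps to a point directly below the pinhole, which is a quick sanity check for the calibration values.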

2. From Camera to Global Coordinate

The camera coordinate can be transformed to the global one via a translation and a rotation. From the picture, we can write down the formula.
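In code, the rigid transform is a single matrix-vector product plus an offset. The rotation R and translation t depend on how the camera is mounted on the gantry, so the values below are placeholders, not measurements:

```python
import numpy as np


def camera_to_global(p_cam, R, t):
    """Map a point from the camera frame to the FarmBot global frame.

    R is the 3x3 rotation and t the 3-vector translation between the
    two frames; both must be measured for the actual camera mount.
    """
    return R @ np.asarray(p_cam, dtype=float) + np.asarray(t, dtype=float)
```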

Setup

  1. git clone git@gitlab.control.lth.se:robotlab/farmbot_yolo
  2. Initialize submodules: git submodule init && git submodule update.
  3. Run make to compile.
  4. Install Anaconda using the official installer and instructions.
  5. Set conda-forge as the package repo: conda config --add channels conda-forge && conda config --set channel_priority strict
  6. conda env create --name farm -f environment.yml installs all dependencies. Change the environment name to whatever you like.
  7. Install the local package: pip install -e .

Compile Darknet

Already done by the steps above.

Check the instructions for how to use Make to compile on Linux.

In the top-level Makefile, LIBSO=1 is set before the Makefile in darknet is run. This ensures that libdarknet.so is generated, which is then used by darknet.py.

Before starting the system

Always calibrate the position before use!
The purpose is to reset the zero positions of the x, y, and z axes. This should be done manually first and then via the web app, i.e., the user should push the gantry and z axis to the upper right corner. Check the official instructions for how to use the web app to set the zeros.

How to run this system?

The software has three main modules:

  1. move: drive FarmBot, take photos, and open/close the gripper
  2. detection: run darknet as a dynamic library for detection and output bounding boxes
  3. location: take bounding boxes as input and transform them to real-world coordinates

We also provide main.py as a wrapper for all the modules above. Running it makes FarmBot conduct the whole process automatically. The three modules can also be run separately, mostly for debugging purposes.
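The flow of the wrapper can be sketched as three chained stages. This is a hypothetical outline, not the actual main.py; the stub callables stand in for the real move, detection, and location modules:

```python
def run_pipeline(move, detect, locate):
    """Chain the three modules: move the bot and take photos, run
    YOLO on the photos, then map bounding boxes to bed coordinates.
    The three callables are stand-ins for the real modules."""
    photos = move()            # drive FarmBot, capture images
    boxes = detect(photos)     # bounding boxes in pixel coordinates
    return locate(boxes)       # positions in the global frame


# Example with trivial stubs:
targets = run_pipeline(
    move=lambda: ["img_001.jpg"],
    detect=lambda photos: [("tomato", 320, 240, 40, 40)],
    locate=lambda boxes: [(name, x / 10, y / 10) for name, x, y, *_ in boxes],
)
```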

First go to farmbot_yolo and run conda activate <env> before running the following scripts. <env> is the name of the environment you created in Setup.

Move Farmbot, take photos, and open/close the gripper

YOLO detection

All file-path arguments have default values.

python -m farmbot_yolo.detect --dont_show --ext_output --save_labels --input ../img --weights ../weights/yolov3-vattenhallen_best.weights --config_file ../cfg/yolov3-vattenhallen-test.cfg --data_file ../data/vattenhallen.data
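With --save_labels, darknet conventionally writes one text file per image where each line is "class_id cx cy w h" with values normalised to [0, 1]. Assuming that standard format, a line can be converted back to pixel values like this:

```python
def yolo_label_to_pixels(line, img_w, img_h):
    """Convert one darknet label line 'class_id cx cy w h' (values
    normalised to [0, 1]) into pixel units, returning
    (class_id, x, y, w, h) where (x, y) is the box centre.
    Assumes the standard darknet --save_labels format."""
    class_id, cx, cy, w, h = line.split()
    return (int(class_id),
            float(cx) * img_w, float(cy) * img_h,
            float(w) * img_w, float(h) * img_h)
```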

Calculate location

python -m farmbot_yolo.location -v -cam ../static/camera_no_distortion.mat -loc ../img/locations/ -a ../img/annotations -o ../static/distance.txt -l ../log/location.log

All the arguments have default values, so they can all be omitted if you do not change the directory structure.

Scan the bed and pick

TODO: regenerate the requirements file!!