Jing Wang1, Chris Helliwell1, Shannon Dillon1, Geoff Bull1, Julianne Lilley1
1CSIRO, Canberra, Australia
Phenotyping plants in the field is currently performed manually, which is expensive for both researchers and breeding companies; human scoring is also inevitably affected by subjective judgement and level of expertise. The growth stages of canola are defined mainly by visual properties, such as the number of leaves and the appearance of buds and flowers. Drones fitted with high-resolution cameras can rapidly image large agricultural fields, and with the availability of big data and parallel computing, deep learning has achieved great success in recognising patterns in images. We propose to combine drone imaging and deep learning methods to automate canola phenology detection. The system consists of 1) canola field imaging and annotation, 2) deep learning model design, training and validation, and 3) deployment of the trained model. We use a Sony A7 III camera with a 35 mm lens mounted on a DJI M600 drone to collect high-resolution images of leaves, buds, and flowers. Human annotators label the collected images with plant locations and key stage categories. In step two, deep object detection networks are designed, trained, and validated on the acquired image datasets; the networks are pretrained on the ImageNet dataset and fine-tuned on the canola image datasets. In step three, the trained model is deployed to detect canola growth stages in new trials. We have designed the image acquisition scheme, and preliminary results on collected images show promising detection performance. The system is expected to be completed by 2022 and to provide efficient automatic phenotyping.
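To illustrate the idea that growth stage follows from detected visual properties (leaf count, presence of buds and flowers), a minimal sketch of the post-detection step is given below. The stage names and the rule of precedence are hypothetical placeholders for illustration, not the project's actual staging scheme:

```python
from collections import Counter

def assign_growth_stage(detections):
    """Assign a coarse growth stage to one plant from its detected organs.

    `detections` is a list of class labels ("leaf", "bud", "flower")
    produced by an object detector for a single plant. The stage
    boundaries here are illustrative, not agronomic ground truth.
    """
    counts = Counter(detections)
    # Later reproductive organs take precedence over earlier ones.
    if counts["flower"] > 0:
        return "flowering"
    if counts["bud"] > 0:
        return "budding"
    # Vegetative stages are commonly indexed by leaf number.
    return f"vegetative_{counts['leaf']}_leaves"

# Example: a plant with six detected leaves and two visible buds.
print(assign_growth_stage(["leaf"] * 6 + ["bud"] * 2))  # → budding
```

In a deployed system this rule would consume the per-plant bounding boxes and class scores emitted by the trained detection network, rather than a flat label list.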
Jing is a postdoctoral fellow with CSIRO Agriculture and Food. His work involves phenomics, deep learning and computer vision.