WO2022016563A1 - Plant protection UAV ground monitoring system and monitoring method therefor - Google Patents

Plant protection UAV ground monitoring system and monitoring method therefor

Info

Publication number
WO2022016563A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
layer
convolution
matrix
lens
Prior art date
Application number
PCT/CN2020/104941
Other languages
English (en)
French (fr)
Inventor
段纳
张正强
苗珍
孟国华
Original Assignee
南京科沃信息技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 南京科沃信息技术有限公司 filed Critical 南京科沃信息技术有限公司
Publication of WO2022016563A1 publication Critical patent/WO2022016563A1/zh

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G06T7/187 Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/188 Vegetation
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30181 Earth observation
    • G06T2207/30188 Vegetation; Agriculture

Definitions

  • The invention relates to the field of unmanned aerial vehicles, and in particular to a ground monitoring system for plant protection UAVs and a monitoring method thereof.
  • In the traditional measurement method, researchers or agricultural operators travel to the farmland in person, randomly select several areas within hundreds of mu of farmland (1 mu is about 667 m²) for manual counting, and then estimate the basic seedling information of the whole field.
  • Such methods require considerable labor, and the movement of personnel through the field may damage the seedlings.
  • A vision platform carried by a UAV solves these problems and can also greatly improve counting accuracy and efficiency.
  • The purpose of the invention is to provide a ground monitoring system for plant protection UAVs, and further to propose a monitoring method based on this system, so as to solve the above problems in the prior art.
  • The monitoring system uses a UAV as the carrying platform and machine vision as its core to count the basic seedlings of farmland.
  • The monitoring system specifically includes a map acquisition system for capturing picture sets and stitching them; a region image processing system for marking and preprocessing regions of the stitched plant protection map; a convolutional neural network system for recognizing crops and weeds; and a statistics system for counting the basic crop seedlings in a region and thereby estimating the overall number of basic seedlings.
  • The map acquisition system further includes a UAV cruising on a preset route, and at least one VHF course beacon and a plurality of VHF marker beacons arranged on the farmland; a machine vision module is suspended from the underside of the UAV.
  • The machine vision module includes at least one main lens and several auxiliary lenses, which are switched by a lens switching frame.
  • The lens switching frame includes a servo motor fixed to the underside of the UAV, a rotating shaft connected to the output shaft of the servo motor through a coupling, and a switching disc fixed to one end of the shaft; lens mounting slots are arranged around the circumference of the switching disc, and the auxiliary lenses are mounted in these slots.
  • The auxiliary lenses include a telephoto lens, a macro lens, and a night vision lens.
  • The region image processing system stitches the collected pictures into one large image using a scale-invariant feature transform (SIFT) algorithm, randomly marks several regions on the resulting image, and binarizes the image with a predetermined threshold to remove interference from objects whose color is similar to that of the crop.
  • In the convolutional neural network system, labeled samples of crops and weeds are fed into the designed CNN to train suitable parameters, and the images segmented out by connected-domain analysis are recognized one by one to obtain the final result.
  • A ground monitoring method for a plant protection drone comprises the following steps:
  • Step 1: photograph the target farmland with the vision platform carried by the drone to obtain a set of scattered pictures;
  • Step 2: stitch the scattered picture set using a scale-invariant feature transform algorithm;
  • Step 3: randomly mark several regions on the stitched image;
  • Step 4: apply binarization, dilation, erosion, and similar operations to the marked region images one by one, and label the connected domains;
  • Step 5: train a convolutional neural network that distinguishes crops from weeds using collected samples of both;
  • Step 6: obtain the number of basic crop seedlings in the marked regions, and from it estimate the overall number of basic seedlings.
  • Step 1 further includes the following attitude control method when a quadrotor UAV photographs the target farmland:
  • Step 1-1: reduce magnetic interference. In calibration mode, collect the maximum and minimum values of each of the X, Y, and Z axes and compute the offsets X_offset = (X_max + X_min)/2, Y_offset = (Y_max + Y_min)/2, Z_offset = (Z_max + Z_min)/2.
  • The offset results are stored in the central control computer, and the corresponding offset is subtracted from each axis to obtain the hard-iron-calibrated magnetic field;
  • Step 1-2: establish the earth coordinate system. The origin o_e is at the center of the earth, x_e and y_e lie in the earth's equatorial plane, x_e points to the prime meridian, and z_e is the earth's rotation axis;
  • In the attitude matrix that transforms the earth coordinate system into the body coordinate system, θ is the angle between the body's longitudinal axis and the longitudinal horizontal axis (pitch), φ is the angle between the body's longitudinal symmetry plane and the longitudinal vertical plane (roll), and ψ is the angle between the projection of the body's longitudinal axis onto the horizontal plane xoy and the prime meridian of the geographic coordinate system (yaw);
  • Step 1-3: establish the UAV attitude dynamics model, in which m is the mass, g is the acceleration of gravity, I_x, I_y, I_z are the moments of inertia about the body coordinate axes, l is the distance from each propeller center to the center of the quadrotor, U_i (i = 1, 2, 3, 4) are the control inputs, θ, φ, and ψ are the pitch, roll, and yaw angles defined above, d_i (i = 1, ..., 6) are unknown external disturbances, and τ is the unknown time delay;
  • Step 1-4: define the current sampling time of the system as k; the optimal state estimate at the previous time (i.e., k-1) is x̂_{k-1}. The current system state is predicted as x̂_k⁻ = A·x̂_{k-1} + B·u_k, and the predicted value is recorded.
  • Matrix A is the state transition matrix, u_k is the input at the current time, and matrix B is the system control matrix; the prediction consists of two parts, the product of the previous optimal state with matrix A and the product of the current input with matrix B;
  • Step 1-5: to express the uncertainty of the prediction model, use the covariance equation and define the predicted current-state covariance matrix as P_k, giving P_k = A·P_{k-1}·Aᵀ + Q;
  • P_{k-1} is the optimal covariance estimate at the previous time, and Q is the inherent noise matrix of the prediction model;
  • Step 1-6: define the current observation as z_k, the current observation matrix as H, and the covariance matrix of the observation noise as R. Fusing the predicted and observed values gives the optimal estimate of the current state, x̂_k = x̂_k⁻ + K·(z_k − H·x̂_k⁻), where (z_k − H·x̂_k⁻) is the residual between the actual observation and the prediction;
  • K is called the Kalman coefficient matrix;
  • the expression of K is K = P_k·Hᵀ / [H·P_k·Hᵀ + R].
  • Step 2 further includes stitching the collected pictures into one large image by the scale-invariant feature transform algorithm, and step 3 further includes randomly marking several regions on the resulting image and binarizing the image with a predetermined threshold to remove interference from objects whose color is similar to that of the crop;
  • Step 4 further includes selecting a predetermined threshold to binarize the pictures and remove such interference.
  • The connected-domain method is used to separate the plants in the image, assigning each connected domain a unique label that distinguishes it from all others:
  • Step 4-1: all connected domains in the image are labeled by scanning the image twice; the first pass scans row by row and gives each foreground pixel a label;
  • Step 4-2: after the first pass, pixels within the same connected domain may carry different labels, but these labels are equivalent; the second pass therefore merges the different labels within one connected domain into a single label;
  • Step 4-3: alternatively, select a seed point and place all pixels of the same kind (foreground or background) adjacent to the seed point into one set, forming a connected domain.
  • Step 5 further comprises:
  • Step 5-1: convert the region picture to a gray-value picture and filter it. Set three parameter pairs (Lsize1, Lsize2), (Msize1, Msize2), (Fsize1, Fsize2) satisfying Lsize1 ≥ Msize1 ≥ Fsize1 and Lsize2 ≥ Msize2 ≥ Fsize2, and tile the captured picture into blocks of size Fsize1*Fsize2 according to (Fsize1, Fsize2);
  • when tiling, any remainder of the picture smaller than Fsize1 or Fsize2 is discarded or padded to Fsize1*Fsize2 size;
  • Step 5-2: compute the optimal binarization threshold over the Lsize1*Lsize2 area around each block, use the computed threshold to binarize the Msize1*Msize2 area around that block, and overlap-accumulate all Msize1*Msize2 blocks at their corresponding spatial positions;
  • Step 5-3: divide the accumulated image by the square root of [(Msize1/Fsize1)*(Msize2/Fsize2)] to obtain an image-enhanced watermark image;
  • Step 5-4: convert the original picture and video frame data into a YCbCr image and extract the luminance channel Y_L. Downsample Y_L to obtain the single-channel image Y_L′; apply neighbor interpolation to Y_L to obtain the interpolated image Y_Lc; sharpen Y_L with an intensity of 0.8 to obtain the sharpened image Y_LR; finally, mix Y_L′, Y_Lc, and Y_LR at a ratio of 0.8 : 0.9 : 1.1 to form the multi-channel image Y_m;
  • Step 5-5: divide the image data by convolution operations into an input layer, convolution layers, and an output layer. The input layer has size a×a with 16 channels; there are two convolution layers, denoted F1 and F2. The F1 layer has 128 convolution feature maps of size (a-8+1)×(a-8+1); each 8×8 convolution kernel performs an inner convolution on the input image, and a first activation is applied to the convolution result: F_c1 = max(0, W_1 × Y_m + B_1);
  • F_c1 is the activation of the first convolution layer F1, Y_m is the multi-channel image, B_1 is the ratio factor of layer F1, and W_1 is the amplification factor of layer F1;
  • The F2 layer has 128 convolution feature maps of size (a-16)×(a-16). The output of the F1 layer is fed into the F2 layer as input, 128 convolution kernels perform convolution on the data in the F2 layer, and a second activation is applied to the convolution result;
  • F_F2 is the activation of the second convolution layer F2, B_2 is the ratio factor of layer F2, W_2 is the amplification factor of layer F2, and the remaining symbols are as above;
  • The computation of the first convolution layer is pool1: x^l_{1,j} (1 ≤ j ≤ 1) = g(down(x^l_{1,j} (1 ≤ j ≤ 3))), and that of the second convolution layer is pool1: x^l_{2,j} (1 ≤ j ≤ 12) = g(down(x^l_{2,j} (1 ≤ j ≤ 15))), where x^l_{1,j} is the l-th input sample of layer 1, x^l_{2,j} is the l-th input sample of layer 2, and down(·) is the downsampling function applied within the corresponding input sample;
  • Step 5-6: construct a super-resolution reconstruction model and use the mean squared error as the loss function, where Y_L is the luminance channel, N is the number of extracted image blocks, and K_0 is the adjustment coefficient.
  • The present invention relates to a ground monitoring system for plant protection drones and a monitoring method thereof.
  • The monitoring system uses the drone as the carrying platform and machine vision as its core to count the basic seedlings of farmland. Through the vision platform carried by the drone, several regions are randomly selected within hundreds of mu of farmland for recognition and counting, and the basic seedling information of the whole field is estimated.
  • An image sequence is collected by the machine vision platform carried by the UAV and stitched into a panorama of the target farmland; an algorithm then recognizes the basic seedlings in the images and completes the count.
  • The invention promotes the application of plant protection UAVs, realizes platform-based detection and statistics, improves statistical and agricultural production efficiency, improves statistical accuracy, and protects the seedlings.
  • Fig. 1 is a flow chart of the monitoring method of the present invention.
  • Fig. 2 is a flow chart of the processing applied to the marked region pictures in the monitoring method.
  • Fig. 3 is a schematic diagram of the monitoring system of the present invention.
  • Fig. 4 is a flow chart of the UAV attitude control in the present invention.
  • The invention uses the unmanned aerial vehicle as the carrying platform and machine vision as its core to count the basic seedlings of farmland.
  • The main problems to be solved are: (1) how to obtain a picture of the whole farmland under uneven illumination; (2) how to distinguish the crop plants in the picture from other objects.
  • The invention relates to a plant protection UAV ground monitoring system and a monitoring method thereof.
  • The monitoring system specifically includes a map acquisition system for capturing picture sets and stitching them; a region image processing system for marking and preprocessing regions of the stitched plant protection maps; a convolutional neural network system for recognizing crops and weeds; and a statistics system for counting the basic crop seedlings in a region and thereby estimating the overall number of basic seedlings. The map acquisition system, machine vision module, region image processing system, and CNN system are configured as described above.
  • The specific monitoring method is as follows:
  • (1) A set of scattered pictures is obtained by photographing the target farmland with the vision platform carried by the UAV.
  • When photographing, attitude control is applied according to steps 1-1 through 1-6 described above.
  • (2) The pictures are stitched into one large image by the scale-invariant feature transform (SIFT) algorithm, which makes later re-counting convenient and improves credibility. SIFT features are based on interest points in the local appearance of objects, are independent of image size and rotation, and tolerate changes in illumination, noise, and small changes of viewpoint.
  • (3) Several regions are randomly marked on the large image. (4) These region pictures are binarized, dilated, and eroded one by one, and the connected domains (i.e., the plant positions) are labeled. Connected-domain labeling assigns each connected domain in the image a unique identifier distinct from all other connected domains. Many labeling algorithms exist; this work mainly uses the two-pass scanning method and the seed filling method.
  • ① Two-pass scanning: all connected domains in the image are labeled by scanning the image twice.
  • The first pass scans row by row and gives each foreground pixel a label.
  • After this pass, pixels in the same connected domain may carry different labels, but these labels are equivalent; the second pass therefore merges the different labels within one connected domain into a single label.
  • ② Seed filling: first select a seed point (either a foreground or a background pixel), then place all pixels of the same kind adjacent to the seed point into the same set, thus forming a connected domain.
  • (5) A convolutional neural network (CNN) that distinguishes crops from weeds is trained on the collected samples of both.
  • A CNN is a deep feedforward neural network. Its convolution operation not only avoids the huge parameter count of fully connected networks but also allows the original image to be used directly as the network input. CNNs are therefore widely used in large-scale visual processing, for computer vision problems such as image classification, object detection, image generation, and image semantic segmentation. Unlike statistical machine learning algorithms that require hand-designed features, a convolutional neural network takes raw data as input, such as image pixels or raw audio. By stacking convolution operations, pooling operations, and nonlinear activation function mappings, the CNN abstracts the raw input layer by layer and extracts high-level semantic information; this process is called the "feedforward" pass.
  • A "layer" in a convolutional neural network is a particular type of operation: a convolution operation is called a "convolution layer", a pooling operation a "pooling layer", and so on.
  • The convolution layer is the basic operation of the network; its core is to extract local image information by applying a convolution kernel of a given size to local image regions. The pooling layer extracts the main features and reduces the data volume by downsampling its input.
  • Common pooling methods are max pooling, average pooling, and random pooling. After labeling, the crop and weed samples are put into the designed CNN to train suitable parameters, and the images segmented from the connected domains are recognized one by one to obtain the final result.
  • Steps 5-1 through 5-6 then proceed exactly as described above.
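  • To make the overall flow concrete, the sketch below strings the published steps together using common open-source tools. It is an illustrative outline only, not the patented implementation: the function names, OpenCV's scan-mode stitcher, Otsu thresholding, the minimum-area filter, and the classify_patch stand-in for the trained CNN are all assumptions introduced here.

```python
# Illustrative sketch of the monitoring pipeline (not the patented implementation).
# Assumes OpenCV and NumPy; classify_patch is a stand-in for the trained CNN.
import cv2
import numpy as np

def stitch_images(images):
    """Stitch the scattered picture set into one large image."""
    stitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)
    status, panorama = stitcher.stitch(images)
    if status != cv2.Stitcher_OK:
        raise RuntimeError(f"stitching failed with status {status}")
    return panorama

def count_seedlings_in_region(region_bgr, classify_patch):
    """Binarize, clean up, label connected domains, and count CNN-confirmed crop plants."""
    gray = cv2.cvtColor(region_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = np.ones((3, 3), np.uint8)
    binary = cv2.dilate(binary, kernel)          # dilation
    binary = cv2.erode(binary, kernel)           # erosion
    n_labels, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    count = 0
    for i in range(1, n_labels):                 # label 0 is the background
        x, y, w, h, area = stats[i]
        if area < 10:                            # discard speckle noise
            continue
        patch = region_bgr[y:y + h, x:x + w]
        if classify_patch(patch) == "crop":      # CNN decides crop vs. weed
            count += 1
    return count
```

  • The per-region counts returned by such a routine would then be scaled by the fraction of the field that was sampled, as in step 6.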


Abstract

The invention relates to a plant protection UAV ground monitoring system and a monitoring method thereof. The monitoring system uses a UAV as the carrying platform and machine vision as its core to count the basic seedlings of farmland. The system specifically comprises a map acquisition system for capturing picture sets and stitching them; a region image processing system for marking and preprocessing regions of the stitched plant protection map; a convolutional neural network system for recognizing crops and weeds; and a statistics system for counting the number of basic crop seedlings within the regions and thereby estimating the overall number of basic seedlings. Through the vision platform carried by the UAV, several regions are randomly selected within hundreds of mu of farmland for recognition and counting, and the basic seedling information of the whole field is estimated. The machine vision platform carried by the UAV collects an image sequence and stitches it into a panorama of the target farmland, after which an algorithm recognizes the basic seedlings in the images and completes the count.

Description

Plant protection UAV ground monitoring system and monitoring method therefor
Technical Field
The invention relates to the field of unmanned aerial vehicles, and in particular to a ground monitoring system for plant protection UAVs and a monitoring method thereof.
Background Art
At present, basic seedlings are usually counted manually; with economic development and scientific progress, this traditional way of counting has become time-consuming and laborious. Machine vision has achieved remarkable success in many fields such as pedestrian-flow and traffic-flow statistics, but its application to basic seedling counting is still in its infancy. The key to counting basic seedlings is detecting the crop plants in pictures or video, and the application of machine vision to crop detection is comparatively mature.
In the traditional measurement method, researchers or agricultural operators travel to the farmland in person, randomly select several areas within hundreds of mu of farmland for manual counting, and then estimate the basic seedling information of the whole field. Such methods require considerable labor, and the movement of personnel through the field may damage the seedlings. A vision platform carried by a UAV solves these problems and can also greatly improve counting accuracy and efficiency.
Summary of the Invention
Purpose of the invention: to provide a ground monitoring system for plant protection UAVs, and further to propose a monitoring method based on this system, so as to solve the above problems in the prior art.
Technical solution: a plant protection UAV ground monitoring system that uses a UAV as the carrying platform and machine vision as its core to count the basic seedlings of farmland. The monitoring system specifically includes a map acquisition system for capturing picture sets and stitching them; a region image processing system for marking and preprocessing regions of the stitched plant protection map; a convolutional neural network system for recognizing crops and weeds; and a statistics system for counting the basic crop seedlings in a region and thereby estimating the overall number of basic seedlings.
In a further embodiment, the map acquisition system further includes a UAV cruising on a preset route, and at least one VHF course beacon and a plurality of VHF marker beacons arranged on the farmland; a machine vision module is suspended from the underside of the UAV.
In a further embodiment, the machine vision module includes at least one main lens and several auxiliary lenses switched by a lens switching frame; the lens switching frame includes a servo motor fixed to the underside of the UAV, a rotating shaft connected to the output shaft of the servo motor through a coupling, and a switching disc fixed to one end of the shaft; lens mounting slots are arranged around the circumference of the switching disc, and the auxiliary lenses are mounted in these slots; the auxiliary lenses include a telephoto lens, a macro lens, and a night vision lens.
In a further embodiment, the region image processing system further stitches the collected pictures into one large image by a scale-invariant feature transform algorithm, randomly marks several regions on the resulting image, and binarizes the image with a predetermined threshold to remove interference from objects whose color is similar to that of the crop.
In a further embodiment, in the convolutional neural network system, labeled crops and weeds are put into the designed CNN to train suitable parameters, and the images segmented from the connected domains are recognized one by one to obtain the final result.
A plant protection UAV ground monitoring method comprises the following steps:
Step 1: photograph the target farmland with the vision platform carried by the UAV to obtain a set of scattered pictures;
Step 2: stitch the scattered picture set using a scale-invariant feature transform algorithm;
Step 3: randomly mark several regions on the stitched image;
Step 4: apply binarization, dilation, erosion, and similar operations to the marked region pictures one by one, and label the connected domains;
Step 5: train a convolutional neural network that distinguishes crops from weeds using collected samples of both;
Step 6: obtain the number of basic crop seedlings in the marked regions, and from it estimate the overall number of basic seedlings.
In a further embodiment, when a quadrotor UAV photographs the target farmland, step 1 further includes the following attitude control method:
Step 1-1: reduce magnetic interference. In calibration mode, collect the maximum and minimum values of each of the X, Y, and Z axes to compute the offsets:
X_offset = (X_max + X_min) / 2
Y_offset = (Y_max + Y_min) / 2
Z_offset = (Z_max + Z_min) / 2
The offset results are stored in the central control computer, and the corresponding offset is subtracted from each axis to obtain the hard-iron-calibrated magnetic field;
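As a concrete illustration of step 1-1, the following minimal sketch computes hard-iron offsets from logged magnetometer samples and applies them. The array layout and function names are illustrative assumptions made here, not details from the patent.

```python
# Hard-iron magnetometer calibration sketch (step 1-1); illustrative only.
import numpy as np

def hard_iron_offsets(samples):
    """samples: (N, 3) array of raw magnetometer readings gathered while rotating the UAV.
    Returns the per-axis offsets (max + min) / 2."""
    return (samples.max(axis=0) + samples.min(axis=0)) / 2.0

def calibrate(samples, offsets):
    """Subtract the stored offsets from each axis to get the hard-iron-calibrated field."""
    return samples - offsets

# Usage: the offsets would be computed once in calibration mode, stored on the
# central control computer, and then applied to every subsequent reading.
raw = np.array([[32.0, -11.0, 5.0], [28.0, -15.0, 9.0], [35.0, -9.0, 2.0]])
offs = hard_iron_offsets(raw)
print(calibrate(raw, offs))
```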
Step 1-2: establish the earth coordinate system. The origin o_e is at the center of the earth, x_e and y_e lie in the earth's equatorial plane, x_e points to the prime meridian, and z_e is the earth's rotation axis.
Transform the earth coordinate system into the body coordinate system:
[equation image PCTCN2020104941-appb-000001]
where
[equation image PCTCN2020104941-appb-000002]
The attitude matrix is:
[equation image PCTCN2020104941-appb-000003]
where θ is the angle between the body's longitudinal axis and the longitudinal horizontal axis, φ is the angle between the body's longitudinal symmetry plane and the longitudinal vertical plane, and ψ is the angle between the projection of the body's longitudinal axis onto the horizontal plane xoy and the prime meridian of the geographic coordinate system.
Step 1-3: establish the UAV attitude dynamics model:
[equation image PCTCN2020104941-appb-000005]
where m is the mass; g is the acceleration of gravity; I_x, I_y, I_z are the moments of inertia about the body coordinate axes; l is the distance from each propeller center to the center of the quadrotor; U_i (i = 1, 2, 3, 4) are the control inputs; θ, φ, ψ are as defined above; d_i (i = 1, 2, 3, 4, 5, 6) are unknown external disturbances; and τ is the unknown time delay. The component equations (images PCTCN2020104941-appb-000007 through -000012) are the dynamics models of the UAV in the X, Y, and Z directions and the pitch, roll, and yaw angle models, respectively.
Step 1-4: define the current sampling time of the system as k, and let the optimal state estimate at the previous time (i.e., k-1) be x̂_{k-1}. The current system state is predicted according to
x̂_k⁻ = A·x̂_{k-1} + B·u_k
where matrix A is the state transition matrix, u_k is the input at the current time, and matrix B is the system control matrix; the predicted value x̂_k⁻ consists of two parts, the product of the previous optimal state with matrix A and the product of the current input with matrix B.
Step 1-5: to express the uncertainty of the prediction model, use the covariance equation and define the predicted current-state covariance matrix as P_k, giving
P_k = A·P_{k-1}·Aᵀ + Q
where P_{k-1} is the optimal covariance estimate at the previous time and Q is the inherent noise matrix of the prediction model.
Step 1-6: define the current observation as z_k, the current observation matrix as H, and the covariance matrix of the observation noise as R. Fusing the predicted and observed values gives the optimal estimate of the current state:
x̂_k = x̂_k⁻ + K·(z_k − H·x̂_k⁻)
where (z_k − H·x̂_k⁻) is the residual between the actual observation and the prediction, and K is called the Kalman coefficient matrix, with
K = P_k·Hᵀ / [H·P_k·Hᵀ + R]
The symbols have the same meanings as above.
In a further embodiment, step 2 further includes stitching the collected pictures into one large image by the scale-invariant feature transform algorithm, and step 3 further includes randomly marking several regions on the resulting image and binarizing the image with a predetermined threshold to remove interference from objects whose color is similar to that of the crop;
Step 4 further includes selecting a predetermined threshold to binarize the pictures and remove such interference. Because the basic seedlings have not yet grown into a continuous canopy and appear as separate individuals in the picture, the connected-domain method is used to separate the plants, assigning each connected domain in the image a unique label distinct from all others:
Step 4-1: all connected domains in the image are labeled by scanning the image twice; the first pass scans row by row and gives each foreground pixel a label;
Step 4-2: after scanning, pixels within the same connected domain may carry different labels, but these labels are equivalent; the second pass therefore merges the different labels within one connected domain into a single label;
Step 4-3: select a seed point and place all foreground or background pixels adjacent to the seed point into the same set, forming a connected domain.
In a further embodiment, step 5 further comprises:
Step 5-1: convert the region picture to a gray-value picture and filter it. Set three parameter pairs (Lsize1, Lsize2), (Msize1, Msize2), (Fsize1, Fsize2) satisfying Lsize1 ≥ Msize1 ≥ Fsize1 and Lsize2 ≥ Msize2 ≥ Fsize2, and tile the captured picture into blocks of size Fsize1*Fsize2 according to (Fsize1, Fsize2); when tiling, any remainder smaller than Fsize1 or Fsize2 is discarded or padded to Fsize1*Fsize2;
Step 5-2: compute the optimal binarization threshold over the Lsize1*Lsize2 area around each block, use the computed threshold to binarize the Msize1*Msize2 area around that block, and overlap-accumulate all Msize1*Msize2 blocks at their corresponding spatial positions;
Step 5-3: divide the accumulated image by the square root of [(Msize1/Fsize1)*(Msize2/Fsize2)] to obtain an image-enhanced watermark image;
Step 5-4: convert the original picture and video frame data into a YCbCr image and extract the luminance channel Y_L; downsample Y_L to obtain the single-channel image Y_L′; apply neighbor interpolation to Y_L to obtain the interpolated image Y_Lc; sharpen Y_L with an intensity of 0.8 to obtain the sharpened image Y_LR; finally, mix Y_L′, Y_Lc, and Y_LR at a ratio of 0.8 : 0.9 : 1.1 to form the multi-channel image Y_m;
Step 5-5: divide the image data by convolution operations into an input layer, convolution layers, and an output layer. The input layer has size a×a with 16 channels. There are two convolution layers, denoted F1 and F2. The F1 layer has 128 convolution feature maps of size (a-8+1)×(a-8+1); each 8×8 convolution kernel performs an inner convolution on the input image, and a first activation is applied to the convolution result:
F_c1 = max(0, W_1 × Y_m + B_1)
where F_c1 is the activation function of the first convolution layer F1, Y_m is the multi-channel image, B_1 is the ratio factor of layer F1, and W_1 is the amplification factor of layer F1.
The F2 layer has 128 convolution feature maps of size (a-16)×(a-16). The output of the F1 layer is fed into the F2 layer as input, 128 convolution kernels perform convolution on the data in the F2 layer, and a second activation is applied to the convolution result:
[equation image PCTCN2020104941-appb-000022]
where F_F2 is the activation function of the second convolution layer F2, B_2 is the ratio factor of layer F2, W_2 is the amplification factor of layer F2, and the remaining symbols are as above.
The computation of the first convolution layer is:
pool1: x^l_{1,j} (1 ≤ j ≤ 1) = g(down(x^l_{1,j} (1 ≤ j ≤ 3)))
and that of the second convolution layer is:
pool1: x^l_{2,j} (1 ≤ j ≤ 12) = g(down(x^l_{2,j} (1 ≤ j ≤ 15)))
where x^l_{1,j} is the l-th input sample of layer 1, x^l_{2,j} is the l-th input sample of layer 2, and down(·) is the downsampling function applied within the corresponding input sample.
Step 5-6: construct a super-resolution reconstruction model and use the mean squared error as the loss function:
[equation image PCTCN2020104941-appb-000023]
where [image PCTCN2020104941-appb-000024] denotes the high-resolution image block, Y_L is the luminance channel, N is the number of extracted image blocks, and K_0 is the adjustment coefficient.
Beneficial effects: the invention relates to a plant protection UAV ground monitoring system and a monitoring method thereof. The monitoring system uses the UAV as the carrying platform and machine vision as its core to count the basic seedlings of farmland. Through the vision platform carried by the UAV, several regions are randomly selected within hundreds of mu of farmland for recognition and counting, and the basic seedling information of the whole field is estimated. The machine vision platform carried by the UAV collects an image sequence and stitches it into a panorama of the target farmland; an algorithm then recognizes the basic seedlings in the images and completes the count. The invention promotes the application of plant protection UAVs, realizes platform-based detection and statistics, improves statistical and agricultural production efficiency, improves statistical accuracy, and protects the seedlings.
Brief Description of the Drawings
Fig. 1 is a flow chart of the monitoring method of the present invention.
Fig. 2 is a flow chart of the processing applied to the marked region pictures in the monitoring method.
Fig. 3 is a schematic diagram of the monitoring system of the present invention.
Fig. 4 is a flow chart of the UAV attitude control in the present invention.
Detailed Description of the Embodiments
In the following description, numerous specific details are given to provide a more thorough understanding of the invention. It will be apparent to those skilled in the art, however, that the invention can be practiced without one or more of these details. In other instances, some technical features well known in the art are not described in order to avoid obscuring the invention.
The invention uses the UAV as the carrying platform and machine vision as its core to count the basic seedlings of farmland. The main problems to be solved are: (1) how to obtain a picture of the whole farmland under uneven illumination; (2) how to distinguish the crop plants in the picture from other objects.
In the traditional measurement method, researchers or agricultural operators travel to the farmland in person, randomly select several areas within hundreds of mu of farmland for manual counting, and then estimate the basic seedling information of the whole field. Such methods require considerable labor, and the movement of personnel through the field may damage the seedlings. A vision platform carried by a UAV solves these problems and can also greatly improve counting accuracy and efficiency.
Embodiment 1
The invention relates to a plant protection UAV ground monitoring system and a monitoring method thereof. The monitoring system specifically includes a map acquisition system for capturing picture sets and stitching them; a region image processing system for marking and preprocessing regions of the stitched plant protection map; a convolutional neural network system for recognizing crops and weeds; and a statistics system for counting the basic crop seedlings in a region and thereby estimating the overall number of basic seedlings. The map acquisition system further includes a UAV cruising on a preset route, and at least one VHF course beacon and a plurality of VHF marker beacons arranged on the farmland; a machine vision module is suspended from the underside of the UAV.
The machine vision module includes at least one main lens and several auxiliary lenses switched by a lens switching frame; the lens switching frame includes a servo motor fixed to the underside of the UAV, a rotating shaft connected to the output shaft of the servo motor through a coupling, and a switching disc fixed to one end of the shaft; lens mounting slots are arranged around the circumference of the switching disc, and the auxiliary lenses are mounted in these slots; the auxiliary lenses include a telephoto lens, a macro lens, and a night vision lens.
The region image processing system further stitches the collected pictures into one large image by the scale-invariant feature transform algorithm, randomly marks several regions on the resulting image, and binarizes the image with a predetermined threshold to remove interference from objects whose color is similar to that of the crop.
In the convolutional neural network system, labeled crops and weeds are put into the designed CNN to train suitable parameters, and the images segmented from the connected domains are recognized one by one to obtain the final result.
The specific monitoring method is as follows:
(1) A set of scattered pictures is obtained by photographing the target farmland with the vision platform carried by the UAV.
When the UAV photographs the target farmland, the attitude control method of steps 1-1 through 1-6 is applied exactly as set out in the Summary above: magnetic interference is reduced by hard-iron calibration, the earth coordinate system is transformed into the body coordinate system through the attitude matrix, the attitude dynamics model is established, and the flight state is estimated by fusing predictions and observations with the Kalman coefficient matrix K = P_k·Hᵀ / [H·P_k·Hᵀ + R]. The symbols have the same meanings as above.
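To illustrate the predict-update cycle of steps 1-4 through 1-6, the minimal sketch below runs one iteration of the linear Kalman filter described above. The matrix dimensions, the matrix inverse standing in for the division in the gain formula, and the standard covariance update (I − KH)·P, which the patent text does not state, are assumptions for illustration; A, B, H, Q, and R are left unspecified by the patent.

```python
# One predict-update cycle of the Kalman filter in steps 1-4 to 1-6; illustrative sketch.
import numpy as np

def kalman_step(x_prev, P_prev, u_k, z_k, A, B, H, Q, R):
    # Step 1-4: predict the current state from the previous optimal estimate.
    x_pred = A @ x_prev + B @ u_k
    # Step 1-5: propagate the covariance of the prediction model.
    P_pred = A @ P_prev @ A.T + Q
    # Step 1-6: Kalman coefficient matrix K = P_k H^T / [H P_k H^T + R].
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    # Fuse prediction and observation: the residual (z_k - H x_pred) weighted by K.
    x_new = x_pred + K @ (z_k - H @ x_pred)
    # Standard covariance update, added here to make the loop self-contained.
    P_new = (np.eye(len(x_prev)) - K @ H) @ P_pred
    return x_new, P_new
```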
(2) The pictures are stitched into one large image by the scale-invariant feature transform (SIFT) algorithm, which makes it convenient to count again later and improves credibility. SIFT features are based on interest points in the local appearance of objects and are independent of image size and rotation; their tolerance to changes in illumination, noise, and small changes of viewing angle is also quite high.
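A minimal two-image stitching sketch in the spirit of step 2 is given below, using OpenCV's SIFT implementation, Lowe's ratio test, and a RANSAC homography. The thresholds, the pairwise formulation, and the canvas size are illustrative assumptions; the patent does not specify them.

```python
# SIFT-based pairwise stitching sketch (step 2); illustrative, not the patented procedure.
import cv2
import numpy as np

def stitch_pair(img_left, img_right):
    g1 = cv2.cvtColor(img_left, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(img_right, cv2.COLOR_BGR2GRAY)
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(g1, None)
    k2, d2 = sift.detectAndCompute(g2, None)
    # Lowe's ratio test on 2-nearest-neighbour matches.
    matches = cv2.BFMatcher().knnMatch(d2, d1, k=2)
    good = [p[0] for p in matches if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
    src = np.float32([k2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    # Homography mapping the right image into the left image's frame (needs >= 4 matches).
    Hmat, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = img_left.shape[:2]
    canvas = cv2.warpPerspective(img_right, Hmat, (w * 2, h))
    canvas[0:h, 0:w] = img_left  # overlay the left image onto the warped canvas
    return canvas
```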
(3) Several regions are randomly marked on the large image.
(4) These region pictures are binarized, dilated, and eroded one by one, and the connected domains (i.e., the plant positions) are labeled.
A suitable threshold is selected to binarize the pictures; this step easily removes the interference caused by objects whose color is similar to that of the crop. Because basic seedlings are counted shortly after germination, the plants have not yet grown into a continuous canopy and appear in the picture as separate individuals, so the connected-domain method can be used to separate the plants (if the targets already adhere to one another, other algorithms can be applied to weaken the adhesion). Connected-domain labeling assigns each connected domain in the image a unique identifier distinct from all other connected domains. There are many labeling algorithms; this work mainly uses the two-pass scanning method and the seed filling method.
① Two-pass scanning: all connected domains in the image are labeled by scanning the image twice. The first pass scans row by row and gives each foreground pixel a label. After this pass, pixels in the same connected domain may carry different labels, but these labels are equivalent; the second pass therefore merges the different labels within one connected domain into a single label.
② Seed filling: first select a seed point (either a foreground or a background pixel), then place all pixels of the same kind adjacent to the seed point into the same set, thus forming a connected domain.
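The seed filling method of ② can be sketched as a breadth-first flood fill. The queue-based traversal and 4-connectivity below are common implementation choices assumed here for illustration, not details taken from the patent.

```python
# Seed-filling connected-domain labeling sketch; implementation choices are illustrative.
from collections import deque
import numpy as np

def label_connected_domains(binary):
    """binary: 2-D array with foreground pixels != 0. Returns an int label image
    in which every foreground connected domain gets a unique positive label."""
    labels = np.zeros(binary.shape, dtype=np.int32)
    current = 0
    h, w = binary.shape
    for sy in range(h):
        for sx in range(w):
            if binary[sy, sx] and labels[sy, sx] == 0:
                current += 1                      # new seed point -> new domain label
                queue = deque([(sy, sx)])
                labels[sy, sx] = current
                while queue:                      # breadth-first flood fill
                    y, x = queue.popleft()
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = current
                            queue.append((ny, nx))
    return labels
```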
(5) A convolutional neural network that recognizes crops and weeds is trained on the collected samples of both.
A convolutional neural network (CNN) is a deep feedforward neural network. Its convolution operation not only avoids the huge parameter count of fully connected networks but also allows the original image to be used directly as the network input. CNNs are therefore widely used in large-scale visual processing, for computer vision problems such as image classification, object detection, image generation, and image semantic segmentation. Unlike statistical machine learning algorithms that require hand-designed features, a convolutional neural network takes raw data as input, such as image pixels or raw audio. By stacking convolution operations, pooling operations, and nonlinear activation function mappings, the CNN abstracts the raw input layer by layer and extracts high-level semantic information; this process is called the "feedforward" pass. A "layer" in a convolutional neural network is a particular type of operation: the convolution operation is called a "convolution layer", the pooling operation a "pooling layer", and so on. The convolution layer is the basic operation of the network; its core is to extract local image information by applying a convolution kernel of a given size to local image regions. The pooling layer extracts the main features and reduces the data volume by downsampling its input. Common pooling methods are max pooling, average pooling, and random pooling. After labeling, the crop and weed samples are put into the designed CNN to train suitable parameters, and the images segmented from the connected domains are further recognized one by one to obtain the final result.
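As a small illustration of the pooling operation described above, the snippet below applies 2×2 max pooling, which halves each spatial dimension and so reduces the data volume; the framework choice (PyTorch) is an assumption and matches the later network sketch.

```python
# 2x2 max pooling sketch: keeps the per-window maximum and reduces the data volume.
import torch
import torch.nn as nn

pool = nn.MaxPool2d(kernel_size=2)   # max pooling; nn.AvgPool2d would give average pooling
x = torch.randn(1, 1, 4, 4)          # one 4x4 single-channel feature map
y = pool(x)                          # -> shape (1, 1, 2, 2)
print(x.shape, y.shape)
```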
Step 5-1: convert the region picture to a gray-value picture and filter it. Set three parameter pairs (Lsize1, Lsize2), (Msize1, Msize2), (Fsize1, Fsize2) satisfying Lsize1 ≥ Msize1 ≥ Fsize1 and Lsize2 ≥ Msize2 ≥ Fsize2, and tile the captured picture into blocks of size Fsize1*Fsize2 according to (Fsize1, Fsize2); when tiling, any remainder smaller than Fsize1 or Fsize2 is discarded or padded to Fsize1*Fsize2.
Step 5-2: compute the optimal binarization threshold over the Lsize1*Lsize2 area around each block, use the computed threshold to binarize the Msize1*Msize2 area around that block, and overlap-accumulate all Msize1*Msize2 blocks at their corresponding spatial positions.
Step 5-3: divide the accumulated image by the square root of [(Msize1/Fsize1)*(Msize2/Fsize2)] to obtain an image-enhanced watermark image.
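Steps 5-1 and 5-2 amount to a locally adaptive binarization: each Fsize1*Fsize2 tile is thresholded with a value estimated from a larger surrounding window. The sketch below uses Otsu's method as the "optimal binarization threshold" and, for brevity, binarizes each tile directly instead of overlap-accumulating Msize blocks; both simplifications are assumptions made here.

```python
# Locally adaptive binarization sketch for steps 5-1/5-2; Otsu is an assumed choice
# for the "optimal binarization threshold" the patent mentions.
import cv2
import numpy as np

def block_binarize(gray, fsize=(32, 32), lsize=(128, 128)):
    """gray: uint8 grayscale image. Tiles of fsize are thresholded using a value
    computed over the larger lsize window centred on each tile."""
    h, w = gray.shape
    out = np.zeros_like(gray)
    fy, fx = fsize
    ly, lx = lsize
    for y in range(0, h - h % fy, fy):          # remainder rows/cols are discarded (step 5-1)
        for x in range(0, w - w % fx, fx):
            y0 = max(0, y + fy // 2 - ly // 2)
            x0 = max(0, x + fx // 2 - lx // 2)
            window = gray[y0:y0 + ly, x0:x0 + lx]
            t, _ = cv2.threshold(window, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
            out[y:y + fy, x:x + fx] = np.where(gray[y:y + fy, x:x + fx] > t, 255, 0)
    return out
```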
Step 5-4: convert the original picture and video frame data into a YCbCr image and extract the luminance channel Y_L; downsample Y_L to obtain the single-channel image Y_L′; apply neighbor interpolation to Y_L to obtain the interpolated image Y_Lc; sharpen Y_L with an intensity of 0.8 to obtain the sharpened image Y_LR; finally, mix Y_L′, Y_Lc, and Y_LR at a ratio of 0.8 : 0.9 : 1.1 to form the multi-channel image Y_m.
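A sketch of the channel construction of step 5-4 follows. The concrete operators (bilinear resizing for the downsample, nearest-neighbour interpolation, unsharp masking for the 0.8-strength sharpening) and the stacking of the three weighted results as channels are assumptions introduced for illustration; the patent does not fix them.

```python
# Channel-construction sketch for step 5-4; operator choices are assumptions.
import cv2
import numpy as np

def build_multichannel(bgr):
    ycbcr = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)      # OpenCV stores Y, Cr, Cb
    y = ycbcr[:, :, 0].astype(np.float32)
    h, w = y.shape
    # Y_L': downsampled single-channel image (restored to full size so it can be stacked).
    y_ds = cv2.resize(cv2.resize(y, (w // 2, h // 2)), (w, h))
    # Y_Lc: neighbour-interpolated image.
    y_interp = cv2.resize(cv2.resize(y, (w // 2, h // 2), interpolation=cv2.INTER_NEAREST),
                          (w, h), interpolation=cv2.INTER_NEAREST)
    # Y_LR: sharpened image with strength 0.8 (unsharp masking).
    blur = cv2.GaussianBlur(y, (5, 5), 0)
    y_sharp = y + 0.8 * (y - blur)
    # Mix at the ratio 0.8 : 0.9 : 1.1 to form the multi-channel image Y_m.
    return np.stack([0.8 * y_ds, 0.9 * y_interp, 1.1 * y_sharp], axis=-1)
```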
Step 5-5: divide the image data by convolution operations into an input layer, convolution layers, and an output layer. The input layer has size a×a with 16 channels. There are two convolution layers, denoted F1 and F2. The F1 layer has 128 convolution feature maps of size (a-8+1)×(a-8+1); each 8×8 convolution kernel performs an inner convolution on the input image, and a first activation is applied to the convolution result:
F_c1 = max(0, W_1 × Y_m + B_1)
where F_c1 is the activation function of the first convolution layer F1, Y_m is the multi-channel image, B_1 is the ratio factor of layer F1, and W_1 is the amplification factor of layer F1.
The F2 layer has 128 convolution feature maps of size (a-16)×(a-16). The output of the F1 layer is fed into the F2 layer as input, 128 convolution kernels perform convolution on the data in the F2 layer, and a second activation is applied to the convolution result:
[equation image PCTCN2020104941-appb-000046]
where F_F2 is the activation function of the second convolution layer F2, B_2 is the ratio factor of layer F2, W_2 is the amplification factor of layer F2, and the remaining symbols are as above.
The computation of the first convolution layer is:
pool1: x^l_{1,j} (1 ≤ j ≤ 1) = g(down(x^l_{1,j} (1 ≤ j ≤ 3)))
and that of the second convolution layer is:
pool1: x^l_{2,j} (1 ≤ j ≤ 12) = g(down(x^l_{2,j} (1 ≤ j ≤ 15)))
where x^l_{1,j} is the l-th input sample of layer 1, x^l_{2,j} is the l-th input sample of layer 2, and down(·) is the downsampling function applied within the corresponding input sample.
Step 5-6: construct a super-resolution reconstruction model and use the mean squared error as the loss function:
[equation image PCTCN2020104941-appb-000047]
where [image PCTCN2020104941-appb-000048] denotes the high-resolution image block, Y_L is the luminance channel, N is the number of extracted image blocks, and K_0 is the adjustment coefficient.
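A PyTorch sketch of the two-layer network of step 5-5, with the mean-squared-error loss of step 5-6, is given below. The 16-channel a×a input and the 8×8 kernels of F1 follow the text; the 10×10 kernel of F2 is an inference (it makes the output (a-16)×(a-16), matching the stated feature-map size), and the framework and module form are assumptions for illustration.

```python
# Two-layer convolutional model sketch for steps 5-5/5-6; the F2 kernel size (10x10)
# is inferred so that an (a x a) input yields (a-16) x (a-16) maps, as the text states.
import torch
import torch.nn as nn

class TwoLayerConv(nn.Module):
    def __init__(self):
        super().__init__()
        self.f1 = nn.Conv2d(16, 128, kernel_size=8)    # F1: 128 maps, (a-8+1) x (a-8+1)
        self.f2 = nn.Conv2d(128, 128, kernel_size=10)  # F2: 128 maps, (a-16) x (a-16)
        self.act = nn.ReLU()                           # max(0, Wx + B) style activations

    def forward(self, y_m):
        f_c1 = self.act(self.f1(y_m))   # first activation
        f_f2 = self.act(self.f2(f_c1))  # second activation
        return f_f2

model = TwoLayerConv()
x = torch.randn(1, 16, 64, 64)           # example with a = 64
out = model(x)                           # -> (1, 128, 48, 48), i.e. (a-16) x (a-16)
loss_fn = nn.MSELoss()                   # mean squared error loss of step 5-6
```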
(6) The number of basic crop seedlings in the marked regions is obtained, from which the overall number of basic seedlings is estimated.
As described above, although the invention has been shown and described with reference to certain preferred embodiments, this shall not be construed as limiting the invention itself. Various changes in form and detail may be made without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

  1. A plant protection UAV ground monitoring system, characterized in that it uses a UAV as the carrying platform and machine vision as its core to count the basic seedlings of farmland.
  2. The plant protection UAV ground monitoring system according to claim 1, characterized in that the monitoring system comprises:
    a map acquisition system for capturing picture sets and stitching them;
    a region image processing system for marking and preprocessing regions of the stitched plant protection map;
    a convolutional neural network system for recognizing crops and weeds;
    a statistics system for counting the number of basic crop seedlings in a region and thereby estimating the overall number of basic seedlings.
  3. The plant protection UAV ground monitoring system according to claim 2, characterized in that the map acquisition system further comprises a UAV cruising on a preset route, and at least one VHF course beacon and a plurality of VHF marker beacons arranged on the farmland; a machine vision module is suspended from the lower part of the UAV.
  4. The plant protection UAV ground monitoring system according to claim 3, characterized in that the machine vision module comprises at least one main lens and several auxiliary lenses, the auxiliary lenses being switched by a lens switching frame; the lens switching frame comprises a servo motor fixed to the lower part of the UAV, a rotating shaft connected to the output shaft of the servo motor through a coupling, and a switching disc fixed to one end of the rotating shaft; lens mounting slots are provided around the circumference of the switching disc, and the auxiliary lenses are mounted in the lens mounting slots; the auxiliary lenses include a telephoto lens, a macro lens, and a night vision lens.
  5. The plant protection UAV ground monitoring system according to claim 2, characterized in that the region image processing system further stitches the collected pictures into one large image by a scale-invariant feature transform algorithm, randomly marks several regions on the resulting image, and binarizes the image with a predetermined threshold to remove the interference caused by objects whose color is similar to that of the crop.
  6. The plant protection UAV ground monitoring system according to claim 2, characterized in that, in the convolutional neural network system, labeled crops and weeds are put into the designed CNN to train suitable parameters, and the images segmented from the connected domains are recognized one by one to obtain the final result.
  7. A ground monitoring method performed by a plant protection UAV ground monitoring system, characterized by comprising the following steps:
    Step 1: photograph the target farmland with the vision platform carried by the UAV to obtain a set of scattered pictures;
    Step 2: stitch the scattered picture set using a scale-invariant feature transform algorithm;
    Step 3: randomly mark several regions on the stitched image;
    Step 4: apply binarization, dilation, erosion, and similar operations to the marked region pictures one by one, and label the connected domains;
    Step 5: train a convolutional neural network that distinguishes crops from weeds using collected samples of both;
    Step 6: obtain the number of basic crop seedlings in the marked regions, and from it estimate the overall number of basic seedlings.
  8. The plant protection UAV ground monitoring method according to claim 7, characterized in that, when a quadrotor UAV photographs the target farmland, step 1 further comprises the following attitude control method:
    Step 1-1: reduce magnetic interference. In calibration mode, collect the maximum and minimum values of each of the X, Y, and Z axes to compute the offsets:
    X_offset = (X_max + X_min) / 2
    Y_offset = (Y_max + Y_min) / 2
    Z_offset = (Z_max + Z_min) / 2
    The offset results are stored in the central control computer, and the corresponding offset is subtracted from each axis to obtain the hard-iron-calibrated magnetic field;
    Step 1-2: establish the earth coordinate system. The origin o_e is at the center of the earth, x_e and y_e lie in the earth's equatorial plane, x_e points to the prime meridian, and z_e is the earth's rotation axis.
    Transform the earth coordinate system into the body coordinate system:
    [equation image PCTCN2020104941-appb-100001]
    where
    [equation image PCTCN2020104941-appb-100002]
    The attitude matrix is:
    [equation image PCTCN2020104941-appb-100003]
    where θ is the angle between the body's longitudinal axis and the longitudinal horizontal axis, φ is the angle between the body's longitudinal symmetry plane and the longitudinal vertical plane, and ψ is the angle between the projection of the body's longitudinal axis onto the horizontal plane xoy and the prime meridian of the geographic coordinate system;
    Step 1-3: establish the UAV attitude dynamics model:
    [equation image PCTCN2020104941-appb-100005]
    where m is the mass; g is the acceleration of gravity; I_x, I_y, I_z are the moments of inertia about the body coordinate axes; l is the distance from each propeller center to the center of the quadrotor; U_i (i = 1, 2, 3, 4) are the control inputs; θ, φ, ψ are as defined above; d_i (i = 1, 2, 3, 4, 5, 6) are unknown external disturbances; τ is the unknown time delay; and the component equations (images PCTCN2020104941-appb-100007 through -100012) are the dynamics models in the X, Y, and Z directions and the pitch, roll, and yaw angle models, respectively;
    Step 1-4: define the current sampling time of the system as k, and let the optimal state estimate at the previous time (i.e., k-1) be x̂_{k-1}; predict the current system state according to
    x̂_k⁻ = A·x̂_{k-1} + B·u_k
    where matrix A is the state transition matrix, u_k is the input at the current time, and matrix B is the system control matrix; the predicted value consists of two parts, the product of the previous optimal state with matrix A and the product of the current input with matrix B;
    Step 1-5: to express the uncertainty of the prediction model, use the covariance equation and define the predicted current-state covariance matrix as P_k, giving
    P_k = A·P_{k-1}·Aᵀ + Q
    where P_{k-1} is the optimal covariance estimate at the previous time and Q is the inherent noise matrix of the prediction model;
    Step 1-6: define the current observation as z_k, the current observation matrix as H, and the covariance matrix of the observation noise as R; fuse the predicted and observed values to obtain the optimal estimate of the current state:
    x̂_k = x̂_k⁻ + K·(z_k − H·x̂_k⁻)
    where (z_k − H·x̂_k⁻) is the residual between the actual observation and the prediction, and K is called the Kalman coefficient matrix, with
    K = P_k·Hᵀ / [H·P_k·Hᵀ + R]
    The symbols have the same meanings as above.
  9. The plant protection UAV ground monitoring method according to claim 7, characterized in that step 2 further comprises stitching the collected pictures into one large image by the scale-invariant feature transform algorithm, and step 3 further comprises randomly marking several regions on the resulting image and binarizing the image with a predetermined threshold to remove the interference caused by objects whose color is similar to that of the crop;
    step 4 further comprises selecting a predetermined threshold to binarize the pictures to remove such interference; where the basic seedling plants have not yet grown into a continuous canopy and appear in the picture as separate individuals, the connected-domain method is used to separate the plants, assigning each connected domain in the image a unique label distinct from all other connected domains:
    Step 4-1: all connected domains in the image are labeled by scanning the image twice; the first pass scans row by row and gives each foreground pixel a label;
    Step 4-2: in the second pass, the different labels within one connected domain are merged into a single label;
    Step 4-3: select a seed point and place all foreground or background pixels adjacent to the seed point into the same set, forming a connected domain.
  10. The plant protection UAV ground monitoring method according to claim 7, characterized in that step 5 further comprises:
    Step 5-1: convert the region picture to a gray-value picture and filter it; set three parameter pairs (Lsize1, Lsize2), (Msize1, Msize2), (Fsize1, Fsize2) satisfying Lsize1 ≥ Msize1 ≥ Fsize1 and Lsize2 ≥ Msize2 ≥ Fsize2, and tile the captured picture into blocks of size Fsize1*Fsize2 according to (Fsize1, Fsize2); when tiling, any remainder smaller than Fsize1 or Fsize2 is discarded or padded to Fsize1*Fsize2;
    Step 5-2: compute the optimal binarization threshold over the Lsize1*Lsize2 area around each block, use the computed threshold to binarize the Msize1*Msize2 area around that block, and overlap-accumulate all Msize1*Msize2 blocks at their corresponding spatial positions;
    Step 5-3: divide the accumulated image by the square root of [(Msize1/Fsize1)*(Msize2/Fsize2)] to obtain an image-enhanced watermark image;
    Step 5-4: convert the original picture and video frame data into a YCbCr image and extract the luminance channel Y_L; downsample Y_L to obtain the single-channel image Y_L′; apply neighbor interpolation to Y_L to obtain the interpolated image Y_Lc; sharpen Y_L with an intensity of 0.8 to obtain the sharpened image Y_LR; finally, mix Y_L′, Y_Lc, and Y_LR at a ratio of 0.8 : 0.9 : 1.1 to form the multi-channel image Y_m;
    Step 5-5: divide the image data by convolution operations into an input layer, convolution layers, and an output layer, in which the input layer has size a×a with 16 channels; there are two convolution layers, denoted F1 and F2; the F1 layer has 128 convolution feature maps of size (a-8+1)×(a-8+1); each 8×8 convolution kernel performs an inner convolution on the input image, and a first activation is applied to the convolution result:
    F_c1 = max(0, W_1 × Y_m + B_1)
    where F_c1 is the activation function of the first convolution layer F1, Y_m is the multi-channel image, B_1 is the ratio factor of layer F1, and W_1 is the amplification factor of layer F1;
    the F2 layer has 128 convolution feature maps of size (a-16)×(a-16); the output of the F1 layer is fed into the F2 layer as input, 128 convolution kernels perform convolution on the data in the F2 layer, and a second activation is applied to the convolution result:
    [equation image PCTCN2020104941-appb-100022]
    where F_F2 is the activation function of the second convolution layer F2, B_2 is the ratio factor of layer F2, W_2 is the amplification factor of layer F2, and the remaining symbols are as above;
    the computation of the first convolution layer is:
    pool1: x^l_{1,j} (1 ≤ j ≤ 1) = g(down(x^l_{1,j} (1 ≤ j ≤ 3)))
    and that of the second convolution layer is:
    pool1: x^l_{2,j} (1 ≤ j ≤ 12) = g(down(x^l_{2,j} (1 ≤ j ≤ 15)))
    where x^l_{1,j} is the l-th input sample of layer 1, x^l_{2,j} is the l-th input sample of layer 2, and down(·) is the downsampling function applied within the corresponding input sample;
    Step 5-6: construct a super-resolution reconstruction model and use the mean squared error as the loss function:
    [equation image PCTCN2020104941-appb-100023]
    where [image PCTCN2020104941-appb-100024] denotes the high-resolution image block, Y_L is the luminance channel, N is the number of extracted image blocks, and K_0 is the adjustment coefficient.
PCT/CN2020/104941 2020-07-23 2020-07-27 Plant protection UAV ground monitoring system and monitoring method therefor WO2022016563A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010728152.9 2020-07-23
CN202010728152.9A CN111860375A (zh) Plant protection UAV ground monitoring system and monitoring method therefor

Publications (1)

Publication Number Publication Date
WO2022016563A1 (zh)

Family

ID=72946995

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/104941 WO2022016563A1 (zh) 2020-07-23 2020-07-27 Plant protection UAV ground monitoring system and monitoring method therefor

Country Status (2)

Country Link
CN (1) CN111860375A (zh)
WO (1) WO2022016563A1 (zh)


Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112698661B (zh) * 2021-03-22 2021-08-24 成都睿铂科技有限责任公司 Aerial survey data acquisition method, apparatus, system and storage medium for an aircraft
CN113298889A (zh) * 2021-05-08 2021-08-24 江苏师范大学 Machine-vision-based basic seedling counting method
CN113989253A (zh) 2021-11-04 2022-01-28 广东皓行科技有限公司 Method and apparatus for acquiring information on farmland target objects
CN114418251B (zh) 2022-04-01 2022-06-24 北京新兴科遥信息技术有限公司 Intelligent monitoring system and monitoring method for permanent basic farmland
CN115861859A (zh) 2023-02-20 2023-03-28 中国科学院东北地理与农业生态研究所 Slope farmland environment monitoring method and system


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105035334A (zh) * 2015-06-25 2015-11-11 胡茂东 Agricultural unmanned aircraft controlled by Beidou and GPS dual satellites
CN106125762A (zh) * 2016-08-01 2016-11-16 北京艾森博航空科技股份有限公司 Internet-based UAV plant protection management system and method
US20190259134A1 (en) * 2018-02-20 2019-08-22 Element Ai Inc. Training method for convolutional neural networks for use in artistic style transfers for video
CN109814597A (zh) * 2019-02-03 2019-05-28 唐山坤翼创新科技有限公司 Control method for a clustered plant protection UAV control system
CN109816698A (zh) * 2019-02-25 2019-05-28 南京航空航天大学 UAV visual target tracking method based on scale-adaptive kernel correlation filtering
CN109977924A (zh) * 2019-04-15 2019-07-05 北京麦飞科技有限公司 On-board real-time image processing method and system for crops on a UAV
CN110631588A (zh) * 2019-09-23 2019-12-31 电子科技大学 UAV visual navigation and positioning method based on an RBF network

Cited By (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114663785A (zh) * 2022-03-18 2022-06-24 华南农业大学 Litchi disease detection method and system based on UAV hyperspectral imaging
CN114663785B (zh) 2022-03-18 2024-06-07 华南农业大学 Litchi disease detection method and system based on UAV hyperspectral imaging
CN114745500A (zh) * 2022-03-28 2022-07-12 联想(北京)有限公司 Image processing method and output detection system
CN114745500B (zh) 2022-03-28 2023-09-19 联想(北京)有限公司 Image processing method and output detection system
CN114637876A (zh) * 2022-05-19 2022-06-17 中国电子科技集团公司第五十四研究所 Fast localization method for large-scene UAV images based on vector map feature expression
CN114879744A (zh) * 2022-07-01 2022-08-09 浙江大学湖州研究院 Machine-vision-based night-operation UAV system
CN115826596A (zh) * 2022-09-19 2023-03-21 国家能源集团宝庆发电有限公司 Intelligent thermal power plant chimney inspection method and system based on a multi-rotor UAV
CN115826596B (zh) 2022-09-19 2023-08-25 国家能源集团宝庆发电有限公司 Intelligent thermal power plant chimney inspection method and system based on a multi-rotor UAV
CN115294482A (zh) * 2022-09-26 2022-11-04 山东常生源生物科技股份有限公司 Edible fungus yield estimation method based on UAV remote-sensing images
CN115294482B (zh) 2022-09-26 2022-12-20 山东常生源生物科技股份有限公司 Edible fungus yield estimation method based on UAV remote-sensing images
CN115619286B (zh) 2022-11-11 2023-10-03 中国农业科学院农业资源与农业区划研究所 Method and system for evaluating the quality of breeding field plots
CN115619286A (zh) * 2022-11-11 2023-01-17 中国农业科学院农业资源与农业区划研究所 Method and system for evaluating the quality of breeding field plots
CN116164711B (zh) * 2023-03-09 2024-03-29 广东精益空间信息技术股份有限公司 UAV surveying and mapping method, system, medium and computer
CN116222411A (zh) * 2023-04-06 2023-06-06 山东环宇地理信息工程有限公司 Surface deformation monitoring system, monitoring method and application
CN116222411B (zh) 2023-04-06 2023-10-20 山东环宇地理信息工程有限公司 Surface deformation monitoring system, monitoring method and application
CN116246225A (zh) * 2023-05-12 2023-06-09 青岛农业大学 Crop breeding monitoring method and system based on image processing
CN116412813A (zh) * 2023-06-09 2023-07-11 苏州青宸科技有限公司 UAV-based map construction method and system
CN116412813B (zh) 2023-06-09 2023-09-05 苏州青宸科技有限公司 UAV-based map construction method and system
CN116758081B (zh) 2023-08-18 2023-11-17 安徽乾劲企业管理有限公司 Image processing method for UAV road and bridge inspection
CN116758081A (zh) * 2023-08-18 2023-09-15 安徽乾劲企业管理有限公司 Image processing method for UAV road and bridge inspection
CN117676093A (zh) * 2023-12-19 2024-03-08 苏州伟卓奥科三维科技有限公司 Cloud-service-based remote wireless video monitoring system
CN117499887B (zh) 2024-01-02 2024-03-19 江西机电职业技术学院 Data acquisition method and system based on multi-sensor fusion technology
CN117499887A (zh) * 2024-01-02 2024-02-02 江西机电职业技术学院 Data acquisition method and system based on multi-sensor fusion technology
CN117579790A (zh) * 2024-01-16 2024-02-20 金钱猫科技股份有限公司 Construction site monitoring method and terminal
CN117579790B (zh) 2024-01-16 2024-03-22 金钱猫科技股份有限公司 Construction site monitoring method and terminal
CN117726541A (zh) * 2024-02-08 2024-03-19 北京理工大学 Low-light video enhancement method and apparatus based on a binarized neural network
CN117876222A (zh) * 2024-03-12 2024-04-12 昆明理工大学 UAV image stitching method for weak-texture lake-surface scenes
CN117876222B (zh) 2024-03-12 2024-06-11 昆明理工大学 UAV image stitching method for weak-texture lake-surface scenes
CN118155104A (zh) * 2024-05-10 2024-06-07 江西理工大学南昌校区 UAV autonomous landing method and system

Also Published As

Publication number Publication date
CN111860375A (zh) 2020-10-30

Similar Documents

Publication Publication Date Title
WO2022016563A1 (zh) Plant protection UAV ground monitoring system and monitoring method therefor
CN113012150A (zh) Feature-fusion rice ear counting method for high-density paddy field UAV images
CN104091175B (zh) Automatic pest image recognition method based on Kinect depth information acquisition technology
CN102663397B (zh) Automatic detection method for wheat seedling emergence
CN109949593A (zh) Traffic light recognition method and system based on intersection prior knowledge
CN102542560B (zh) Automatic detection method for rice density after transplanting
Xu et al. Classification method of cultivated land based on UAV visible light remote sensing
Song et al. Detection of maize tassels for UAV remote sensing image with an improved YOLOX model
CN114708208B (zh) Machine-vision-based method for recognizing premium tea buds and locating picking points
CN110288623A (zh) Data compression method for UAV inspection images of offshore cage aquaculture
CN117409339A (zh) UAV crop state visual recognition method for air-ground collaboration
Tubau Comas et al. Automatic apple tree blossom estimation from UAV RGB imagery
Xiang et al. Measuring stem diameter of sorghum plants in the field using a high-throughput stereo vision system
Zhong et al. Identification and depth localization of clustered pod pepper based on improved Faster R-CNN
CN110689022A (zh) Per-plant crop image extraction method based on leaf matching
CN111950524A (zh) Orchard local sparse mapping method and system based on binocular vision and RTK
CN117079125A (zh) Kiwifruit pollination flower recognition method based on an improved YOLOv5
Li et al. Image processing for crop/weed discrimination in fields with high weed pressure
CN116205879A (zh) Wheat lodging area estimation method based on UAV images and deep learning
CN116311218A (zh) Semantic segmentation method and system for noisy plant point clouds based on self-attention feature fusion
Habib et al. Wavelet frequency transformation for specific weeds recognition
CN105740805B (zh) Lane line detection method based on multi-region joint detection
CN115482501A (zh) Thrown-object recognition method fusing data augmentation and an object detection network
CN117011722A (zh) License plate recognition method and device based on real-time UAV surveillance video
Fang et al. Classification system study of soybean leaf disease based on deep learning

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20945909

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20945909

Country of ref document: EP

Kind code of ref document: A1