CN113192129B - Method for positioning adhered citrus based on deep convolutional neural network model - Google Patents

Method for positioning adhered citrus based on deep convolutional neural network model

Info

Publication number
CN113192129B
CN113192129B (granted publication of application CN202110571099.0A)
Authority
CN
China
Prior art keywords
citrus
image
visible light
tree
sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110571099.0A
Other languages
Chinese (zh)
Other versions
CN113192129A (en)
Inventor
唐宇
骆少明
陈尉钊
李嘉豪
杨捷鹏
符伊晴
赵晋飞
张晓迪
郭琪伟
庄鑫财
黄华盛
朱兴
侯超钧
庄家俊
苗爱敏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhongkai University of Agriculture and Engineering
Guangdong Polytechnic Normal University
Original Assignee
Zhongkai University of Agriculture and Engineering
Guangdong Polytechnic Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhongkai University of Agriculture and Engineering, Guangdong Polytechnic Normal University filed Critical Zhongkai University of Agriculture and Engineering
Priority to CN202110571099.0A priority Critical patent/CN113192129B/en
Publication of CN113192129A publication Critical patent/CN113192129A/en
Application granted granted Critical
Publication of CN113192129B publication Critical patent/CN113192129B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/70 - Determining position or orientation of objects or cameras
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 - Image enhancement or restoration
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/10 - Segmentation; Edge detection
    • G06T 7/136 - Segmentation; Edge detection involving thresholding
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30181 - Earth observation
    • G06T 2207/30188 - Vegetation; Agriculture

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The application discloses a method for positioning adhered citrus based on a deep convolutional neural network model, which comprises the following steps: obtaining a first visible light image; carrying out manual calibration to obtain a first training image, and carrying out training to obtain a first citrus positioning model; carrying out a first picking and moving treatment so that adhered citrus account for a first ratio value; obtaining a second visible light image; carrying out manual calibration to obtain a second training image, and carrying out training to obtain a second citrus positioning model; picking and moving so that adhered citrus account for an ith ratio value; obtaining an (i+1)th visible light image; carrying out manual calibration to obtain an (i+1)th training image, and carrying out training to obtain an (i+1)th citrus positioning model; obtaining a visible light image to be positioned and a preliminary citrus positioning result; obtaining the ratio of adhered citrus; and if the ratio of adhered citrus is greater than a ratio threshold value, recording the preliminary result as the final citrus positioning result, thereby realizing accurate positioning of adhered citrus.

Description

Method for positioning adhered citrus based on deep convolutional neural network model
Technical Field
The application relates to a method and a device for positioning adhered oranges based on a deep convolutional neural network model, computer equipment and a storage medium.
Background
Automatic, intelligent and accurate picking of mature citrus is an important component of agricultural automation, and it depends on a technology for positioning citrus fruit. This positioning must be implemented in field fruit-tree environments with pronounced unstructured characteristics: fruits growing in clusters at random spatial positions occlude one another and adhere into multi-fruit groups, which interferes with the identification and positioning of citrus, and the changeable illumination conditions in the field further degrade the quality of images of adhered citrus and therefore the positioning accuracy. Conventional citrus positioning schemes, for example those that add an artificial supplementary light source, still fail to solve the problem that adhered citrus are difficult to position. The prior art therefore lacks a solution for accurately positioning adhered citrus fruit.
Disclosure of Invention
The application provides a method for positioning adhered oranges based on a deep convolutional neural network model, which comprises the following steps:
s1, carrying out first image acquisition on the sample citrus tree by adopting a CCD camera to obtain a first visible light image; wherein the sample citrus tree has no adhered citrus fruit;
s2, manually calibrating the position of the orange in the first visible light image to obtain a first training image, and inputting the first training image into a preset deep convolution neural network model for training to obtain a first orange positioning model;
s3, carrying out first picking and moving treatment on the oranges on the sample orange trees, so that the positions of the oranges in the sample orange trees are changed, the sample orange trees have adhered oranges, and the ratio of the number of the adhered oranges to the number of all the oranges on the sample orange trees is a first ratio value; wherein the total number of citrus fruit on the sample citrus tree is unchanged after the first picking and moving process; the first ratio value is greater than 0 and less than 1;
s4, carrying out second image acquisition on the sample citrus tree subjected to the first picking and moving processing by adopting a CCD camera to obtain a second visible light image; the shooting parameters during the second image acquisition are the same as those during the first image acquisition;
s5, manually calibrating the position of the citrus in the second visible light image to obtain a second training image, and inputting the second training image into the first training image for training to obtain a second citrus positioning model;
s6, carrying out the ith picking and moving treatment on the oranges on the sample orange trees, so that the positions of the oranges in the sample orange trees are changed, the sample orange trees have adhered oranges, and the ratio of the number of the adhered oranges to the number of all the oranges on the sample orange trees is the ith ratio value; after the ith picking and moving treatment is carried out, the total number of the oranges on the sample orange tree is unchanged, and the ith ratio value is larger than the ith-1 ratio value; i is an integer which is more than or equal to 2 and less than or equal to n, and n is a preset integer which is more than or equal to 3; the ith ratio value is greater than 0 and less than 1;
s7, carrying out i +1 th image acquisition on the sample citrus tree subjected to the i-th picking and moving processing by adopting a CCD (charge coupled device) camera to obtain an i +1 th visible light image; the shooting parameters during the (i + 1) th image acquisition are the same as those during the ith image acquisition;
s8, manually calibrating the orange position in the (i + 1) th visible light image to obtain an (i + 1) th training image, and inputting the (i + 1) th training image into the (i) th training image for training to obtain an (i + 1) th orange positioning model;
s9, acquiring an image of a target citrus tree by using a CCD (charge coupled device) camera to obtain a visible light image to be positioned, and inputting the visible light image to be positioned into the ith citrus positioning model for processing to obtain a preliminary citrus positioning result output by the ith citrus positioning model;
s10, obtaining the ratio of the oranges adhered to the target orange tree according to the preliminary orange positioning result, and judging whether the ratio of the oranges adhered to the target orange tree is larger than a preset ratio threshold value or not;
and S11, if the ratio of the adhered oranges on the target orange tree is larger than a preset ratio threshold, taking the preliminary orange positioning result as a final orange positioning result.
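The training schedule of steps S1-S11 can be sketched as follows. `plan_training_rounds` is a hypothetical helper, not from the patent, that validates the strictly increasing adhesion-ratio schedule of S3/S6 and lists which model each round starts from:

```python
def plan_training_rounds(adhesion_ratios):
    """Validate the adhesion-ratio schedule (S3/S6) and describe each
    training round: round 1 trains model 1 on an adhesion-free tree;
    round k (k >= 2) fine-tunes model k-1 into model k on images whose
    adhered-fruit ratio is adhesion_ratios[k-2]."""
    rounds = [{"round": 1, "init_from": "scratch", "adhesion_ratio": 0.0}]
    prev = 0.0
    for k, ratio in enumerate(adhesion_ratios, start=2):
        if not 0.0 < ratio < 1.0:
            raise ValueError("each ratio must lie strictly between 0 and 1")
        if ratio <= prev:
            raise ValueError("ratios must be strictly increasing")
        rounds.append({"round": k,
                       "init_from": f"model_{k - 1}",
                       "adhesion_ratio": ratio})
        prev = ratio
    return rounds
```

With n = 3 picking-and-moving rounds this yields four models, the last of which would be the one used for inference in S9.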
Further, the step S1 of acquiring a first visible light image of the sample citrus tree by using the CCD camera includes:
s101, carrying out primary image acquisition on a sample citrus tree by adopting a CCD (charge coupled device) camera to obtain a primary visible light image;
s102, processing the preliminary visible light image according to a preset color difference map technology and an Otsu threshold segmentation algorithm to obtain an intermediate visible light image, and separating a first citrus fruit region from the intermediate visible light image;
s103, generating a mask image, so that only a first citrus fruit area is exposed after the mask image is superposed on the intermediate visible light image;
s104, superposing the mask image on the preliminary visible light image to obtain a second citrus fruit region in the preliminary visible light image;
s105, counting the brightness values of all pixel points in the second citrus fruit region, and calculating a first brightness average value of the second citrus fruit region;
s106, calling a high-brightness image of the mature citrus fruit collected in advance, and simultaneously calling an ambient light brightness value during collection of the high-brightness image; the high-brightness image means that when the image of the mature citrus fruit is collected, the brightness value of the visible light of the environment where the mature citrus fruit is located is larger than a preset brightness threshold value, or the high-brightness image means that when the image of the mature citrus fruit is collected, the mature citrus fruit is irradiated by the light generated by the visible light generator, and the power value of the visible light generator is larger than a preset power threshold value;
s107, calculating a second brightness average value of the high-brightness image according to a preset brightness average value calculation formula;
and S108, adjusting the brightness of the preliminary visible light image by adopting a first image enhancement algorithm of a retinex model according to the ambient light brightness value, the first brightness average value and the second brightness average value, so as to obtain a first visible light image.
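Steps S102-S105 can be illustrated with a minimal NumPy sketch. The colour-difference map is taken here as R minus G, a common choice for orange fruit on green foliage (the patent does not fix the exact map), Otsu's threshold is implemented directly, and the mask is applied back to the original image to average the fruit-region brightness. All function names are illustrative, not from the patent:

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: choose the threshold that maximises the
    between-class variance of the grey-level histogram (S102)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = gray.size
    sum_all = float(np.dot(np.arange(256), hist))
    w0 = sum0 = 0.0
    best_t, best_var = 0, -1.0
    for t in range(256):
        w0 += hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mean_diff = sum0 / w0 - (sum_all - sum0) / w1
        var_between = w0 * w1 * mean_diff * mean_diff
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def fruit_region_mean_luma(rgb):
    """S102-S105: segment the fruit region via a colour-difference map
    plus Otsu thresholding, then return the mask and the mean
    brightness of the masked pixels in the original image."""
    diff = np.clip(rgb[..., 0].astype(int) - rgb[..., 1].astype(int),
                   0, 255).astype(np.uint8)
    mask = diff > otsu_threshold(diff)
    luma = (0.299 * rgb[..., 0] + 0.587 * rgb[..., 1]
            + 0.114 * rgb[..., 2])
    first_mean = float(luma[mask].mean()) if mask.any() else 0.0
    return mask, first_mean
```

On a synthetic image of an orange patch on green foliage this isolates the patch and returns its average luma, i.e. the "first brightness average" of S105.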
Further, the step S107 of calculating the second brightness average value of the high brightness image according to a preset brightness average value calculation formula includes:
s1071, according to the formula:
v(i,j)=αR(i,j)+βG(i,j)+γB(i,j)
Figure BDA0003082743100000031
calculating a second brightness average value v (i, j); wherein i and j are respectively a horizontal coordinate and a vertical coordinate of a pixel point of the high-brightness image, R (i, j), G (i, j) and B (i, j) are respectively a brightness component of a red color channel, a brightness component of a green color channel, and a brightness component of a green and blue color channel, α, β, and γ are respectively preset parameters, a maximum value of the horizontal coordinate of the pixel point of the high-brightness image is R, and a maximum value of the vertical coordinate of the pixel point of the high-brightness image is c.
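As a worked instance of the formulas in S1071, the sketch below computes the per-pixel luma v(i,j) and its average over all r·c pixels. The BT.601 weights α=0.299, β=0.587, γ=0.114 are a common choice standing in for the patent's unspecified preset parameters:

```python
import numpy as np

def second_brightness_average(img, alpha=0.299, beta=0.587, gamma=0.114):
    """Per-pixel luma v(i,j) = alpha*R + beta*G + gamma*B,
    averaged over all r*c pixels of the high-brightness image (S107)."""
    v = alpha * img[..., 0] + beta * img[..., 1] + gamma * img[..., 2]
    return float(v.mean())
```

Because the three weights sum to 1, a uniform grey image returns its own grey level as the average.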
Further, step S9, in which the CCD camera is used to acquire an image of the target citrus tree to obtain a visible light image to be positioned, and the visible light image to be positioned is input into the ith citrus positioning model to be processed, so as to obtain a preliminary citrus positioning result output by the ith citrus positioning model, includes:
s901, acquiring an image of a target citrus tree by using a CCD (charge coupled device) camera to obtain an unprocessed visible light image;
s902, adjusting the brightness of the unprocessed visible light image by adopting a preset second image enhancement algorithm to obtain a visible light image to be positioned; wherein the second image enhancement algorithm is the same as the first image enhancement algorithm;
and S903, inputting the visible light image to be positioned into the ith citrus positioning model for processing to obtain a preliminary citrus positioning result output by the ith citrus positioning model.
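Steps S108 and S902 apply the same brightness adjustment at training and at inference time. The patent specifies a retinex-model enhancement driven by the ambient brightness and the two average values; the sketch below substitutes a plain global gain toward the reference average, purely to illustrate the data flow, and is not the patent's actual algorithm:

```python
import numpy as np

def adjust_brightness(img, first_mean, second_mean):
    """Scale the image so the fruit-region average (first_mean) moves
    toward the high-brightness reference average (second_mean).
    Stands in here for the retinex-based enhancement of S108/S902."""
    gain = second_mean / max(first_mean, 1e-6)
    return np.clip(img.astype(float) * gain, 0, 255).astype(np.uint8)
```

Using the same enhancement in S108 and S902 keeps the brightness statistics of the training images and of the image to be positioned consistent.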
The application provides a positioner of adhesion oranges and tangerines based on degree of depth convolution neural network model, includes:
the first visible light image acquisition unit is used for acquiring a first image of the sample citrus tree by adopting a CCD (charge coupled device) camera to obtain a first visible light image; wherein the sample citrus tree has no adhered citrus fruit;
a first citrus positioning model obtaining unit, configured to perform manual calibration on a citrus position in the first visible light image to obtain a first training image, and input the first training image into a preset deep convolutional neural network model for training to obtain a first citrus positioning model;
a first picking and moving unit for performing a first picking and moving process on the citrus fruit on the sample citrus tree, so that the position of the citrus fruit in the sample citrus tree is changed, so that adhered citrus fruit exists on the sample citrus tree, and the ratio of the number of the adhered citrus fruit to the number of all citrus fruit on the sample citrus tree is a first ratio value; wherein the total number of citrus fruit on the sample citrus tree is unchanged after the first picking and moving process; the first ratio value is greater than 0 and less than 1;
the second visible light image acquisition unit is used for carrying out second image acquisition on the sample citrus tree subjected to the first picking and moving processing by adopting the CCD camera to obtain a second visible light image; the shooting parameters during the second image acquisition are the same as those during the first image acquisition;
a second citrus positioning model obtaining unit, configured to perform manual calibration on the citrus positions in the second visible light image to obtain a second training image, and input the second training image into the first citrus positioning model for further training to obtain a second citrus positioning model;
the ith picking and moving unit is used for carrying out the ith picking and moving treatment on the citrus on the sample citrus tree, so that the position of the citrus in the sample citrus tree is changed, adhered citrus exists on the sample citrus tree, and the ratio of the number of the adhered citrus to the number of all the citrus on the sample citrus tree is an ith ratio value; after the ith picking and moving treatment is carried out, the total number of the citrus on the sample citrus tree is unchanged, and the ith ratio value is larger than the (i-1)th ratio value; i is an integer which is more than or equal to 2 and less than or equal to n, and n is a preset integer which is more than or equal to 3; the ith ratio value is greater than 0 and less than 1;
the i +1 th visible light image acquisition unit is used for performing i +1 th image acquisition on the sample citrus tree subjected to the i-th picking and moving processing by adopting a CCD (charge coupled device) camera to obtain an i +1 th visible light image; the shooting parameters during the (i + 1) th image acquisition are the same as those during the ith image acquisition;
an (i+1)th citrus positioning model obtaining unit, configured to perform manual calibration on the citrus positions in the (i+1)th visible light image to obtain an (i+1)th training image, and input the (i+1)th training image into the ith citrus positioning model for further training to obtain an (i+1)th citrus positioning model;
the preliminary citrus positioning result acquisition unit is used for acquiring images of a target citrus tree by adopting a CCD (charge coupled device) camera to obtain a visible light image to be positioned, and inputting the visible light image to be positioned into the ith citrus positioning model for processing to obtain a preliminary citrus positioning result output by the ith citrus positioning model;
the proportion threshold judging unit is used for acquiring the proportion of the citrus adhered to the target citrus tree according to the preliminary citrus positioning result and judging whether the proportion of the citrus adhered to the target citrus tree is larger than a preset proportion threshold or not;
and the final citrus positioning result obtaining unit is used for taking the preliminary citrus positioning result as a final citrus positioning result if the ratio of the citrus adhered to the target citrus tree is greater than a preset ratio threshold value.
The present application provides a computer device comprising a memory storing a computer program and a processor implementing the steps of any of the above methods when the processor executes the computer program.
The present application provides a computer-readable storage medium having stored thereon a computer program which, when being executed by a processor, carries out the steps of the method of any of the above.
According to the method, the device, the computer equipment and the storage medium for positioning adhered citrus based on the deep convolutional neural network model, a first image acquisition is carried out to obtain a first visible light image; manual calibration is carried out to obtain a first training image, which is input into a deep convolutional neural network model for training to obtain a first citrus positioning model; a first picking and moving treatment is carried out so that the ratio of the number of adhered citrus is a first ratio value; a second image acquisition is carried out to obtain a second visible light image; manual calibration is carried out to obtain a second training image, which is input into the first citrus positioning model for further training to obtain a second citrus positioning model; picking and moving are carried out so that the ratio of the number of adhered citrus is the ith ratio value; an (i+1)th image acquisition is carried out to obtain an (i+1)th visible light image; manual calibration is carried out to obtain an (i+1)th training image, and training is carried out to obtain an (i+1)th citrus positioning model; an image of a target citrus tree is acquired to obtain a visible light image to be positioned and a preliminary citrus positioning result; the ratio of adhered citrus is obtained; and if the ratio of adhered citrus is greater than the ratio threshold value, the preliminary result is recorded as the final citrus positioning result, thereby realizing accurate positioning of adhered citrus.
One reason why adhered citrus are difficult to identify and locate is that their image features differ from those of individual citrus, while common citrus recognition models focus on the image recognition of individual fruits, which easily causes misjudgment of adhered citrus. The present method adopts a progressive model training approach: it keeps the data source (namely the sample citrus tree and the total number of its fruits) unchanged throughout, and gradually, manually changes the positions of the fruits to adjust the proportion of adhered citrus. This facilitates the extraction and recognition of adhered-citrus image features by the citrus positioning model and ultimately improves the positioning accuracy.
Drawings
Fig. 1 is a schematic flowchart of a method for positioning adhered citrus fruit based on a deep convolutional neural network model according to an embodiment of the present application;
fig. 2 is a block diagram illustrating a structure of a computer device according to an embodiment of the present application.
The implementation, functional features and advantages of the objectives of the present application will be further explained with reference to the accompanying drawings.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
Referring to fig. 1, an embodiment of the present application provides a method for positioning adhered citrus based on a deep convolutional neural network model, including the following steps:
s1, carrying out first image acquisition on the sample citrus tree by adopting a CCD camera to obtain a first visible light image; wherein the sample citrus tree has no adhered citrus fruit;
s2, manually calibrating the position of the orange in the first visible light image to obtain a first training image, and inputting the first training image into a preset deep convolution neural network model for training to obtain a first orange positioning model;
s3, carrying out first picking and moving treatment on the oranges on the sample orange trees, so that the positions of the oranges in the sample orange trees are changed, the sample orange trees have adhered oranges, and the ratio of the number of the adhered oranges to the number of all the oranges on the sample orange trees is a first ratio value; wherein the total number of citrus fruit on the sample citrus tree is unchanged after the first picking and moving process; the first ratio value is greater than 0 and less than 1;
s4, carrying out second image acquisition on the sample citrus tree subjected to the first picking and moving processing by adopting a CCD camera to obtain a second visible light image; the shooting parameters during the second image acquisition are the same as those during the first image acquisition;
s5, manually calibrating the position of the citrus in the second visible light image to obtain a second training image, and inputting the second training image into the first training image for training to obtain a second citrus positioning model;
s6, carrying out the ith picking and moving treatment on the oranges on the sample orange trees, so that the positions of the oranges in the sample orange trees are changed, the sample orange trees have adhered oranges, and the ratio of the number of the adhered oranges to the number of all the oranges on the sample orange trees is the ith ratio value; after the ith picking and moving treatment is carried out, the total number of the oranges on the sample orange tree is unchanged, and the ith ratio value is larger than the ith-1 ratio value; i is an integer which is more than or equal to 2 and less than or equal to n, and n is a preset integer which is more than or equal to 3; the ith ratio value is greater than 0 and less than 1;
s7, carrying out i +1 th image acquisition on the sample citrus tree subjected to the i-th picking and moving processing by adopting a CCD (charge coupled device) camera to obtain an i +1 th visible light image; the shooting parameters during the (i + 1) th image acquisition are the same as those during the ith image acquisition;
s8, manually calibrating the orange position in the (i + 1) th visible light image to obtain an (i + 1) th training image, and inputting the (i + 1) th training image into the (i) th training image for training to obtain an (i + 1) th orange positioning model;
s9, acquiring an image of a target citrus tree by using a CCD (charge coupled device) camera to obtain a visible light image to be positioned, and inputting the visible light image to be positioned into the ith citrus positioning model for processing to obtain a preliminary citrus positioning result output by the ith citrus positioning model;
s10, obtaining the ratio of the oranges adhered to the target orange tree according to the preliminary orange positioning result, and judging whether the ratio of the oranges adhered to the target orange tree is larger than a preset ratio threshold value or not;
and S11, if the ratio of the adhered oranges on the target orange tree is larger than a preset ratio threshold, taking the preliminary orange positioning result as a final orange positioning result.
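The decision in steps S10-S11 can be sketched as follows. How the adhered-fruit ratio is derived from the preliminary detections is not fixed by the patent; here a detection is counted as adhered when its bounding box overlaps another one, which is an illustrative assumption:

```python
def accept_preliminary_result(boxes, ratio_threshold):
    """S10-S11: count detections whose boxes overlap another box as
    adhered, and accept the preliminary positioning result as final
    only when the adhered ratio exceeds the threshold.
    Boxes are axis-aligned tuples (x1, y1, x2, y2)."""
    def overlaps(a, b):
        return not (a[2] <= b[0] or b[2] <= a[0]
                    or a[3] <= b[1] or b[3] <= a[1])
    n = len(boxes)
    adhered = sum(
        1 for i in range(n)
        if any(overlaps(boxes[i], boxes[j]) for j in range(n) if j != i)
    )
    ratio = adhered / n if n else 0.0
    return boxes if ratio > ratio_threshold else None
```

Returning None here simply marks the case where the ratio does not exceed the threshold; the patent leaves the handling of that case to the surrounding pipeline.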
As described in the above steps S1-S5, a CCD camera is used to perform a first image acquisition on the sample citrus tree, obtaining a first visible light image, wherein the sample citrus tree has no adhered citrus fruit. The citrus positions in the first visible light image are manually calibrated to obtain a first training image, which is input into a preset deep convolutional neural network model for training to obtain a first citrus positioning model. A first picking and moving treatment is then performed on the citrus on the sample citrus tree, so that the positions of the citrus change, adhered citrus appear on the sample citrus tree, and the ratio of the number of adhered citrus to the number of all citrus on the sample citrus tree is a first ratio value; the total number of citrus fruit on the sample citrus tree is unchanged after the first picking and moving process, and the first ratio value is greater than 0 and less than 1. A second image acquisition is performed on the treated tree with the CCD camera, using the same shooting parameters as the first acquisition, to obtain a second visible light image. Finally, the citrus positions in the second visible light image are manually calibrated to obtain a second training image, which is input into the first citrus positioning model for further training to obtain a second citrus positioning model.
The sample citrus tree of the present application is a prepared citrus tree characterized by the absence of adhered citrus. Such a tree may be obtained in any feasible manner, for example by finding a naturally grown tree without adhered fruit, or by treating a citrus tree, i.e. removing its adhered fruit, to form a tree without adhered citrus. In addition, although the manual calibration of citrus positions in the first visible light image is described as producing "a first training image", the training process of the model necessarily requires a plurality of training images; the singular is used only for convenience of description, and a practitioner in the art can unambiguously confirm that the first training image and the subsequent training images are each multiple in number. The first training image is input into a preset deep convolutional neural network model for training to obtain a first citrus positioning model. At this point the first citrus positioning model is only suitable for positioning non-adhered citrus; this is the basis of the progressive training of the present application, and because the positions of most citrus on the sample tree are never changed, the model can quickly discover the image features of adhered citrus.
A first picking and moving treatment is carried out on the citrus on the sample citrus tree, so that the positions of the citrus change, adhered citrus appear on the sample citrus tree, and the ratio of the number of adhered citrus to the number of all citrus on the sample citrus tree is the first ratio value. After this treatment, a small number of adhered citrus exist on the sample citrus tree while most fruits and the tree itself are unaffected, which suits the second round of training to detecting changes in image detail and helps the model discover adhered-citrus image features. Next, the CCD camera acquires an image of the treated tree to obtain a second visible light image; the shooting parameters during the second image acquisition are the same as those during the first, including the shooting position, lighting conditions and object distance, so as to avoid negative effects caused by noise as far as possible. The citrus positions in the second visible light image are then manually calibrated to obtain a second training image, which is input into the first citrus positioning model for further training to obtain a second citrus positioning model. Compared with the first training image, the second training image differs only in that a small portion of the citrus has been moved and become adhered citrus, so the second citrus positioning model trains faster, finds the image features of adhered citrus more easily, and thus positions adhered citrus more readily.
Further, the step S1 of acquiring a first visible light image of the sample citrus tree by using the CCD camera includes:
s101, carrying out primary image acquisition on a sample citrus tree by adopting a CCD (charge coupled device) camera to obtain a primary visible light image;
s102, processing the preliminary visible light image according to a preset color difference map technology and an Otsu threshold segmentation algorithm to obtain an intermediate visible light image, and separating a first citrus fruit region from the intermediate visible light image;
s103, generating a mask image, so that only a first citrus fruit area is exposed after the mask image is superposed on the intermediate visible light image;
s104, superposing the mask image on the preliminary visible light image to obtain a second citrus fruit region in the preliminary visible light image;
s105, counting the brightness values of all pixel points in the second citrus fruit region, and calculating a first brightness average value of the second citrus fruit region;
s106, calling a high-brightness image of the mature citrus fruit collected in advance, and simultaneously calling an ambient light brightness value during collection of the high-brightness image; the high-brightness image means that when the image of the mature citrus fruit is collected, the brightness value of the visible light of the environment where the mature citrus fruit is located is larger than a preset brightness threshold value, or the high-brightness image means that when the image of the mature citrus fruit is collected, the mature citrus fruit is irradiated by the light generated by the visible light generator, and the power value of the visible light generator is larger than a preset power threshold value;
s107, calculating a second brightness average value of the high-brightness image according to a preset brightness average value calculation formula;
and S108, adjusting the brightness of the preliminary visible light image by adopting a first image enhancement algorithm of a retinex model according to the ambient light brightness value, the first brightness average value and the second brightness average value, so as to obtain a first visible light image.
In this way, image quality is enhanced by means of a light-compensation algorithm, which facilitates the training of the first citrus positioning model; before the training of each subsequent citrus positioning model, brightness adjustment can be performed with the same image enhancement algorithm. One reason adhered citrus is difficult to identify and position accurately is that the variable illumination conditions of the field orchard environment degrade the quality of adhered-citrus images and hence the positioning accuracy. The brightness of the preliminary visible light image is therefore adjusted to obtain the first visible light image, improving positioning accuracy. Specifically, the preliminary visible light image is processed according to a preset color difference map technique and the Otsu threshold segmentation algorithm to obtain an intermediate visible light image, from which a first citrus fruit region is separated; that is, the intermediate visible light image becomes a binarized image, so that the first citrus fruit region can be isolated. The Otsu threshold segmentation algorithm is an efficient method for binarizing an image; combined with the preset color difference map technique, regions are divided according to the color differences between pixels to obtain the binarized image. A mask image is then generated such that, when superimposed on the intermediate visible light image, only the first citrus fruit region is exposed. The purpose of the mask image is to obtain a non-binarized second citrus fruit region: it only needs to be superimposed on the preliminary visible light image.
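The color-difference segmentation described above can be sketched as follows. This is an illustrative Python sketch only: the patent does not specify the exact color-difference formula, so R - G (high for orange fruit, near zero for green foliage) is assumed here, together with a textbook implementation of the Otsu algorithm.

```python
def color_difference_map(pixels):
    # pixels: rows of (R, G, B) tuples; citrus fruit tends to have R >> G,
    # while green foliage has G >= R, so R - G separates fruit from leaves
    return [[max(0, min(255, r - g)) for (r, g, _b) in row] for row in pixels]

def otsu_threshold(values):
    # classic Otsu: choose the gray level maximising between-class variance
    hist = [0] * 256
    for v in values:
        hist[v] += 1
    total = len(values)
    sum_all = sum(level * count for level, count in enumerate(hist))
    w_bg, sum_bg, best_t, best_var = 0, 0.0, 0, -1.0
    for t in range(256):
        w_bg += hist[t]
        if w_bg == 0:
            continue
        w_fg = total - w_bg
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg, mean_fg = sum_bg / w_bg, (sum_all - sum_bg) / w_fg
        between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if between > best_var:
            best_var, best_t = between, t
    return best_t

def binarize(diff_map, t):
    # pixels strictly above the threshold form the citrus fruit region
    return [[1 if v > t else 0 for v in row] for row in diff_map]
```

On a synthetic two-column image (fruit pixels on the left, foliage on the right), the threshold cleanly isolates the fruit column.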
A first brightness average value of the second citrus fruit region is then calculated. In addition, a previously collected high-brightness image of mature citrus fruit is retrieved, together with the ambient light brightness value recorded at the time of its collection, and a second brightness average value of the high-brightness image is calculated. Brightness adjustment is then performed on the preliminary visible light image using a first image enhancement algorithm based on the retinex model, yielding the first visible light image. The basic idea of the retinex model is that the perceived color and brightness of a point depend not only on the absolute light entering the human eye but also on the color and brightness of its surroundings. Its basic assumption is that the original image S is the product of an illumination image L and a reflection image R, i.e., it can be expressed as: S(x, y) = R(x, y) · L(x, y), where x and y are the coordinate values of the pixel points. R(x, y) represents the reflective properties of the object, i.e., the intrinsic content of the image, and should be retained to the greatest extent; L(x, y) represents the incident light image, determines the dynamic range the image pixels can reach, and should be removed as far as possible. The first image enhancement algorithm of the retinex model therefore aims to remove L(x, y). The application introduces a high-brightness image, for which L is known, namely the ambient light brightness value; since the S image is the high-brightness image itself, the R image corresponding to the high-brightness image of the mature citrus fruit can be calculated.
For the preliminary visible light image, S is known while L and R are unknown. However, because the reflectivity of mature citrus fruit is approximately the same (more precisely, the reflectivity of corresponding areas of two mature citrus fruits is the same), the R image obtained from the introduced high-brightness image can be reused, the illumination image L(x, y) corresponding to the preliminary visible light image can be calculated, and the first visible light image is obtained by removing the influence of L(x, y).
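The retinex-based recovery described above can be sketched as follows. The function names and the treatment of the reference illumination as a single known scalar are assumptions made here for illustration, not the patent's exact procedure.

```python
def reflectance_from_reference(s_high, ambient_l):
    # S = R * L with a known, uniform ambient brightness L for the
    # high-brightness reference, so R(x, y) = S(x, y) / L
    return [[s / ambient_l for s in row] for row in s_high]

def estimate_illumination(s_pre, reflectance):
    # for the preliminary image S is known and R is borrowed from the
    # reference (same fruit reflectance), so L(x, y) = S(x, y) / R(x, y)
    return [[s / r if r else 0.0 for s, r in zip(s_row, r_row)]
            for s_row, r_row in zip(s_pre, reflectance)]

def remove_illumination(s_pre, illumination, target_l):
    # divide out the estimated illumination, then re-light uniformly
    return [[(s / l) * target_l if l else 0.0 for s, l in zip(s_row, l_row)]
            for s_row, l_row in zip(s_pre, illumination)]
```

With a synthetic reflectance image lit unevenly, dividing out the estimated L and re-lighting at the reference level reproduces the high-brightness appearance.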
Further, the step S107 of calculating the second brightness average value of the high brightness image according to a preset brightness average value calculation formula includes:
s1071, according to the formulas:

v(i, j) = αR(i, j) + βG(i, j) + γB(i, j)

v_mean = (1 / (r · c)) · Σ_{i=1}^{r} Σ_{j=1}^{c} v(i, j)

calculating the second brightness average value v_mean; wherein i and j are respectively the horizontal and vertical coordinates of a pixel point of the high-brightness image; R(i, j), G(i, j) and B(i, j) are respectively the brightness components of the red, green and blue color channels; α, β and γ are respectively preset weighting parameters; the maximum value of the horizontal coordinate of a pixel point of the high-brightness image is r, and the maximum value of the vertical coordinate is c.
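The two formulas can be sketched in Python as follows. The weighting parameters α, β and γ are preset in the patent and not given numerically, so the common BT.601 luma weights (0.299, 0.587, 0.114), whose sum is 1, are assumed here for illustration.

```python
def pixel_brightness(r, g, b, alpha=0.299, beta=0.587, gamma=0.114):
    # v(i, j) = alpha*R(i, j) + beta*G(i, j) + gamma*B(i, j)
    # default weights follow ITU-R BT.601 (an assumption, not from the patent)
    return alpha * r + beta * g + gamma * b

def mean_brightness(image):
    # image: r rows by c columns of (R, G, B) tuples;
    # average of v(i, j) over all r*c pixels
    rows, cols = len(image), len(image[0])
    total = sum(pixel_brightness(*p) for row in image for p in row)
    return total / (rows * cols)
```

Because the assumed weights sum to 1, a uniform gray pixel keeps its level, and the mean of a two-pixel image is the midpoint of the two levels.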
Performing the i-th picking and moving treatment on the citrus fruit on the sample citrus tree as described in the previous steps S6-S8, so that the positions of the citrus fruit on the sample citrus tree change, adhered citrus exists on the tree, and the ratio of the number of adhered citrus to the number of all citrus on the sample citrus tree is the i-th ratio value; after the i-th picking and moving treatment, the total number of citrus on the sample citrus tree is unchanged, and the i-th ratio value is greater than the (i-1)-th ratio value; i is an integer greater than or equal to 2 and less than or equal to n, and n is a preset integer greater than or equal to 3; the i-th ratio value is greater than 0 and less than 1. A CCD camera is used to perform the (i+1)-th image acquisition on the sample citrus tree after the i-th picking and moving treatment to obtain an (i+1)-th visible light image; the shooting parameters of the (i+1)-th acquisition are the same as those of the i-th acquisition. The citrus positions in the (i+1)-th visible light image are manually calibrated to obtain an (i+1)-th training image, which is input into the i-th citrus positioning model for training to obtain an (i+1)-th citrus positioning model.
The progressive model training of the application is carried out at least 3 times, i.e., i is an integer greater than or equal to 2 and less than or equal to n. The more rounds of progressive training, the more accurately adhered citrus is positioned, but the training time grows correspondingly. The i-th picking and moving treatment, which changes the positions of the citrus fruit on the sample tree, is essentially similar to the previous treatments; the difference is that the ratio of the number of adhered citrus to the number of all citrus on the sample tree is now the i-th ratio value, which is greater than the (i-1)-th ratio value, indicating that the number of adhered citrus has increased. Furthermore, in the i-th picking and moving treatment, the previously adhered citrus are kept in place, and the number of adhered citrus is increased by picking and moving new single fruits, which facilitates model training. A CCD camera is used to perform the (i+1)-th image acquisition on the sample citrus tree after the i-th picking and moving treatment to obtain an (i+1)-th visible light image, with the same shooting parameters as the i-th acquisition; as with the previous acquisitions, the obtained visible light image serves as training data for the model. The citrus positions in the (i+1)-th visible light image are manually calibrated to obtain an (i+1)-th training image, which is input into the i-th citrus positioning model for training to obtain an (i+1)-th citrus positioning model.
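The progressive training procedure described above can be sketched as the following skeleton, in which the callables stand in for the field operations (picking and moving, image capture, manual calibration) and the actual network training; all names are illustrative assumptions.

```python
def progressive_training(train, capture, calibrate, pick_and_move, n=3):
    # train(prev_model, training_image) -> new model
    # capture() -> visible light image (same shooting parameters every round)
    # calibrate(image) -> manually calibrated training image
    # pick_and_move(i) -> i-th adhered-fruit ratio after the i-th treatment
    model = train(None, calibrate(capture()))   # first model: no adhered fruit
    prev_ratio = 0.0
    for i in range(1, n + 1):
        ratio = pick_and_move(i)
        # the i-th ratio must exceed the (i-1)-th and stay in (0, 1)
        assert prev_ratio < ratio < 1.0
        model = train(model, calibrate(capture()))
        prev_ratio = ratio
    return model
```

With stub callables, n = 3 rounds on top of the initial round yields a model that has seen four calibrated training images in total.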
As described in the above steps S9-S11, a CCD camera is used to acquire an image of the target citrus tree to obtain a visible light image to be positioned, and the visible light image to be positioned is input into the (i+1)-th citrus positioning model for processing to obtain a preliminary citrus positioning result output by the (i+1)-th citrus positioning model. The adhered-citrus ratio of the target citrus tree is then obtained according to the preliminary citrus positioning result, and it is judged whether this ratio is greater than a preset ratio threshold; if the adhered-citrus ratio of the target citrus tree is greater than the preset ratio threshold, the preliminary citrus positioning result is taken as the final citrus positioning result.
Because an (i+1)-th citrus positioning model suitable for positioning adhered citrus has been obtained, the positioning result can be obtained simply by inputting the visible light image to be positioned into the (i+1)-th citrus positioning model for processing. Since the application specializes in processing adhered citrus, the adhered-citrus ratio of the target citrus tree is obtained from the preliminary citrus positioning result, and it is judged whether this ratio is greater than a preset ratio threshold, so as to determine whether the target citrus tree needs to be positioned by the specialized (i+1)-th citrus positioning model of the application. If the adhered-citrus ratio of the target citrus tree is greater than the preset ratio threshold, the target citrus tree is a suitable processing object for the (i+1)-th citrus positioning model, and the preliminary citrus positioning result is therefore taken as the final citrus positioning result.
Further, if the adhered-citrus ratio of the target citrus tree is not greater than the preset ratio threshold, most of the citrus on the target tree are not adhered, and the tree is not a suitable object for the (i+1)-th citrus positioning model of the application; an ordinary citrus positioning model can therefore be called to position the citrus on the target citrus tree.
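The threshold decision can be sketched as follows. The patent does not define how the adhered-citrus ratio is computed from the preliminary positioning result, so counting predicted bounding boxes that overlap at least one other box is assumed here purely for illustration.

```python
def boxes_overlap(a, b):
    # axis-aligned boxes as (x1, y1, x2, y2); strict overlap test
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    return ax1 < bx2 and bx1 < ax2 and ay1 < by2 and by1 < ay2

def adhered_ratio(boxes):
    # a fruit counts as adhered if its predicted box overlaps any other box
    if not boxes:
        return 0.0
    adhered = sum(
        any(boxes_overlap(boxes[i], boxes[j])
            for j in range(len(boxes)) if j != i)
        for i in range(len(boxes)))
    return adhered / len(boxes)

def final_result(boxes, ratio_threshold, ordinary_locator):
    # keep the specialised model's result only when adhesion is common enough;
    # otherwise fall back to an ordinary citrus positioning model
    if adhered_ratio(boxes) > ratio_threshold:
        return boxes
    return ordinary_locator()
```

With three detections of which two touch, the ratio is 2/3, so a threshold of 0.5 keeps the preliminary result while a threshold of 0.9 falls back to the ordinary model.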
Further, step S9, in which the CCD camera is used to acquire an image of the target citrus tree to obtain a visible light image to be positioned, and the visible light image to be positioned is input into the ith citrus positioning model to be processed, so as to obtain a preliminary citrus positioning result output by the ith citrus positioning model, includes:
s901, acquiring an image of a target citrus tree by using a CCD (charge coupled device) camera to obtain an unprocessed visible light image;
s902, adjusting the brightness of the unprocessed visible light image by adopting a preset second image enhancement algorithm to obtain a visible light image to be positioned; wherein the second image enhancement algorithm is the same as the first image enhancement algorithm;
and S903, inputting the visible light image to be positioned into the (i+1)-th citrus positioning model for processing to obtain a preliminary citrus positioning result output by the (i+1)-th citrus positioning model.
The visible light image to be positioned is subjected to the same image enhancement processing as the training images, which guarantees consistency of data processing and improves the credibility of the preliminary citrus positioning result.
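The consistency requirement above, one enhancement function shared by training and inference, can be sketched as follows; the names are illustrative.

```python
def make_locator(enhance, model):
    # reuse the SAME enhancement function for training data and inference,
    # so the model never sees a brightness distribution it was not trained on
    def locate(raw_image):
        return model(enhance(raw_image))
    return locate
```

A stub enhancement and a stub model show the composition order: enhancement runs first, then the positioning model.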
According to the method for positioning adhered citrus based on the deep convolutional neural network model, a first image acquisition is carried out to obtain a first visible light image; manual calibration yields a first training image, which is input into a deep convolutional neural network model for training to obtain a first citrus positioning model; a first picking and moving treatment makes the ratio of the number of adhered citrus a first ratio value; a second image acquisition yields a second visible light image; manual calibration yields a second training image, which is input into the first citrus positioning model for training to obtain a second citrus positioning model; further picking and moving treatments make the ratio of the number of adhered citrus the i-th ratio value; an (i+1)-th image acquisition yields an (i+1)-th visible light image; manual calibration yields an (i+1)-th training image, which is trained on to obtain an (i+1)-th citrus positioning model; an image of the target citrus tree is acquired to obtain a visible light image to be positioned and a preliminary citrus positioning result; the adhered-citrus ratio is obtained; and if the adhered-citrus ratio is greater than the ratio threshold, the preliminary citrus positioning result is taken as the final citrus positioning result, thereby achieving accurate positioning of adhered citrus.
The embodiment of the application provides a positioner of adhesion oranges and tangerines based on degree of depth convolution neural network model, includes:
the first visible light image acquisition unit is used for acquiring a first image of the sample citrus tree by adopting a CCD (charge coupled device) camera to obtain a first visible light image; wherein the sample citrus tree has no adhered citrus fruit;
a first citrus positioning model obtaining unit, configured to perform manual calibration on a citrus position in the first visible light image to obtain a first training image, and input the first training image into a preset deep convolutional neural network model for training to obtain a first citrus positioning model;
a first picking and moving unit for performing a first picking and moving process on the citrus fruit on the sample citrus tree, so that the position of the citrus fruit in the sample citrus tree is changed, so that adhered citrus fruit exists on the sample citrus tree, and the ratio of the number of the adhered citrus fruit to the number of all citrus fruit on the sample citrus tree is a first ratio value; wherein the total number of citrus fruit on the sample citrus tree is unchanged after the first picking and moving process; the first ratio value is greater than 0 and less than 1;
the second visible light image acquisition unit is used for carrying out second image acquisition on the sample citrus tree subjected to the first picking and moving processing by adopting the CCD camera to obtain a second visible light image; the shooting parameters during the second image acquisition are the same as those during the first image acquisition;
a second citrus positioning model obtaining unit, configured to perform manual calibration on the citrus positions in the second visible light image to obtain a second training image, and input the second training image into the first citrus positioning model for training to obtain a second citrus positioning model;
the ith picking and moving unit is used for carrying out ith picking and moving treatment on the citrus on the sample citrus tree, so that the position of the citrus in the sample citrus tree is changed, the adhered citrus exists on the sample citrus tree, and the ratio of the number of the adhered citrus to the number of all the citrus on the sample citrus tree is an ith ratio value; after the ith picking and moving treatment is carried out, the total number of the oranges on the sample orange tree is unchanged, and the ith ratio value is larger than the ith-1 ratio value; i is an integer which is more than or equal to 2 and less than or equal to n, and n is a preset integer which is more than or equal to 3; the ith ratio value is greater than 0 and less than 1;
the i +1 th visible light image acquisition unit is used for performing i +1 th image acquisition on the sample citrus tree subjected to the i-th picking and moving processing by adopting a CCD (charge coupled device) camera to obtain an i +1 th visible light image; the shooting parameters during the (i + 1) th image acquisition are the same as those during the ith image acquisition;
an (i+1)-th citrus positioning model obtaining unit, configured to perform manual calibration on the citrus positions in the (i+1)-th visible light image to obtain an (i+1)-th training image, and input the (i+1)-th training image into the i-th citrus positioning model for training to obtain an (i+1)-th citrus positioning model;
the preliminary citrus positioning result acquisition unit is used for acquiring an image of the target citrus tree by means of a CCD camera to obtain a visible light image to be positioned, and inputting the visible light image to be positioned into the (i+1)-th citrus positioning model for processing to obtain a preliminary citrus positioning result output by the (i+1)-th citrus positioning model;
the proportion threshold judging unit is used for acquiring the proportion of the citrus adhered to the target citrus tree according to the preliminary citrus positioning result and judging whether the proportion of the citrus adhered to the target citrus tree is larger than a preset proportion threshold or not;
and the final citrus positioning result obtaining unit is used for taking the preliminary citrus positioning result as a final citrus positioning result if the ratio of the citrus adhered to the target citrus tree is greater than a preset ratio threshold value.
The operations respectively executed by the above units correspond to the steps of the method for positioning the adhered citrus fruit based on the deep convolutional neural network model in the foregoing embodiment one by one, and are not described herein again.
The positioning device for adhered citrus based on the deep convolutional neural network model of the application carries out a first image acquisition to obtain a first visible light image; manual calibration yields a first training image, which is input into a deep convolutional neural network model for training to obtain a first citrus positioning model; a first picking and moving treatment makes the ratio of the number of adhered citrus a first ratio value; a second image acquisition yields a second visible light image; manual calibration yields a second training image, which is input into the first citrus positioning model for training to obtain a second citrus positioning model; further picking and moving treatments make the ratio of the number of adhered citrus the i-th ratio value; an (i+1)-th image acquisition yields an (i+1)-th visible light image; manual calibration yields an (i+1)-th training image, which is trained on to obtain an (i+1)-th citrus positioning model; an image of the target citrus tree is acquired to obtain a visible light image to be positioned and a preliminary citrus positioning result; the adhered-citrus ratio is obtained; and if the adhered-citrus ratio is greater than the ratio threshold, the preliminary citrus positioning result is taken as the final citrus positioning result, thereby achieving accurate positioning of adhered citrus.
Referring to fig. 2, an embodiment of the present invention further provides a computer device, which may be a server and whose internal structure may be as shown in the figure. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. The processor of the computer device is used to provide computation and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The database of the computer device is used to store the data used by the method for positioning adhered citrus based on the deep convolutional neural network model. The network interface of the computer device is used to communicate with an external terminal through a network connection. The computer program, when executed by the processor, implements the method for positioning adhered citrus based on the deep convolutional neural network model.
The processor executes the method for positioning the adhered citrus fruit based on the deep convolutional neural network model, wherein the method includes steps corresponding to the steps of executing the method for positioning the adhered citrus fruit based on the deep convolutional neural network model in the foregoing embodiment one to one, and details are not repeated herein.
It will be understood by those skilled in the art that the structures shown in the drawings are only block diagrams of some of the structures associated with the embodiments of the present application and do not constitute a limitation on the computer apparatus to which the embodiments of the present application may be applied.
The computer device carries out a first image acquisition to obtain a first visible light image; manual calibration yields a first training image, which is input into a deep convolutional neural network model for training to obtain a first citrus positioning model; a first picking and moving treatment makes the ratio of the number of adhered citrus a first ratio value; a second image acquisition yields a second visible light image; manual calibration yields a second training image, which is input into the first citrus positioning model for training to obtain a second citrus positioning model; further picking and moving treatments make the ratio of the number of adhered citrus the i-th ratio value; an (i+1)-th image acquisition yields an (i+1)-th visible light image; manual calibration yields an (i+1)-th training image, which is trained on to obtain an (i+1)-th citrus positioning model; an image of the target citrus tree is acquired to obtain a visible light image to be positioned and a preliminary citrus positioning result; the adhered-citrus ratio is obtained; and if the adhered-citrus ratio is greater than the ratio threshold, the preliminary citrus positioning result is taken as the final citrus positioning result, thereby achieving accurate positioning of adhered citrus.
An embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored thereon, and when the computer program is executed by a processor, the method for positioning adhered citrus fruit based on a deep convolutional neural network model is implemented, where steps included in the method correspond to steps of the method for positioning adhered citrus fruit based on the deep convolutional neural network model in the foregoing embodiment one to one, and are not described herein again.
The computer-readable storage medium of the application carries out a first image acquisition to obtain a first visible light image; manual calibration yields a first training image, which is input into a deep convolutional neural network model for training to obtain a first citrus positioning model; a first picking and moving treatment makes the ratio of the number of adhered citrus a first ratio value; a second image acquisition yields a second visible light image; manual calibration yields a second training image, which is input into the first citrus positioning model for training to obtain a second citrus positioning model; further picking and moving treatments make the ratio of the number of adhered citrus the i-th ratio value; an (i+1)-th image acquisition yields an (i+1)-th visible light image; manual calibration yields an (i+1)-th training image, which is trained on to obtain an (i+1)-th citrus positioning model; an image of the target citrus tree is acquired to obtain a visible light image to be positioned and a preliminary citrus positioning result; the adhered-citrus ratio is obtained; and if the adhered-citrus ratio is greater than the ratio threshold, the preliminary citrus positioning result is taken as the final citrus positioning result, thereby achieving accurate positioning of adhered citrus.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments can be implemented by instructing the relevant hardware through a computer program, which can be stored in a non-volatile computer-readable storage medium; when executed, the computer program can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium provided herein and used in the embodiments may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, apparatus, article, or method that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, apparatus, article, or method. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, apparatus, article, or method that includes the element.
The above description is only a preferred embodiment of the present application, and not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application, or which are directly or indirectly applied to other related technical fields, are also included in the scope of the present application.

Claims (7)

1. A method for positioning adhered citrus based on a deep convolutional neural network model is characterized by comprising the following steps:
s1, carrying out first image acquisition on the sample citrus tree by adopting a CCD camera to obtain a first visible light image; wherein the sample citrus tree has no adhered citrus fruit;
s2, manually calibrating the position of the orange in the first visible light image to obtain a first training image, and inputting the first training image into a preset deep convolution neural network model for training to obtain a first orange positioning model;
s3, carrying out first picking and moving treatment on the oranges on the sample orange trees, so that the positions of the oranges in the sample orange trees are changed, the sample orange trees have adhered oranges, and the ratio of the number of the adhered oranges to the number of all the oranges on the sample orange trees is a first ratio value; wherein the total number of citrus fruit on the sample citrus tree is unchanged after the first picking and moving process; the first ratio value is greater than 0 and less than 1;
s4, carrying out second image acquisition on the sample citrus tree subjected to the first picking and moving processing by adopting a CCD camera to obtain a second visible light image; the shooting parameters during the second image acquisition are the same as those during the first image acquisition;
s5, manually calibrating the position of the citrus in the second visible light image to obtain a second training image, and inputting the second training image into the first citrus positioning model for training to obtain a second citrus positioning model;
s6, carrying out the ith picking and moving treatment on the oranges on the sample orange trees, so that the positions of the oranges in the sample orange trees are changed, the sample orange trees have adhered oranges, and the ratio of the number of the adhered oranges to the number of all the oranges on the sample orange trees is the ith ratio value; after the ith picking and moving treatment is carried out, the total number of the oranges on the sample orange tree is unchanged, and the ith ratio value is larger than the ith-1 ratio value; i is an integer which is more than or equal to 2 and less than or equal to n, and n is a preset integer which is more than or equal to 3; the ith ratio value is greater than 0 and less than 1;
s7, carrying out i +1 th image acquisition on the sample citrus tree subjected to the i-th picking and moving processing by adopting a CCD (charge coupled device) camera to obtain an i +1 th visible light image; the shooting parameters during the (i + 1) th image acquisition are the same as those during the ith image acquisition;
s8, manually calibrating the orange position in the (i + 1) th visible light image to obtain an (i + 1) th training image, and inputting the (i + 1) th training image into the (i) th training image for training to obtain an (i + 1) th orange positioning model;
s9, acquiring an image of a target citrus tree by using a CCD (charge coupled device) camera to obtain a visible light image to be positioned, and inputting the visible light image to be positioned into the (i + 1) th citrus positioning model for processing to obtain a primary citrus positioning result output by the (i + 1) th citrus positioning model;
s10, obtaining the ratio of the oranges adhered to the target orange tree according to the preliminary orange positioning result, and judging whether the ratio of the oranges adhered to the target orange tree is larger than a preset ratio threshold value or not;
and S11, if the ratio of the adhered oranges on the target orange tree is larger than a preset ratio threshold, taking the preliminary orange positioning result as a final orange positioning result.
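The stepwise retraining of steps S2 through S8 can be sketched as follows. This is a minimal illustration, not the claimed implementation: `capture_image`, `annotate` and `train_model` are hypothetical stand-ins for the CCD acquisition, manual calibration and deep convolutional neural network training steps, which the claim leaves abstract.

```python
# Sketch of the iterative training loop in S2-S8 (assumed structure).

def train_model(training_image, base_model=None):
    # Stand-in for fine-tuning: the "model" is just the list of training
    # images it has seen, so each round builds on the previous model.
    seen = [] if base_model is None else list(base_model)
    return seen + [training_image]

def iterative_training(n, capture_image, annotate):
    """Produce citrus positioning models 1..(n+1): one initial round with no
    adhered fruit, then n picking-and-moving rounds with a rising ratio of
    adhered fruit."""
    model = None
    for round_idx in range(n + 1):
        image = capture_image(round_idx)            # S1 / S4 / S7
        training_image = annotate(image)            # S2 / S5 / S8: manual calibration
        model = train_model(training_image, model)  # retrain on the new round
    return model

# Toy run: n = 3 picking rounds -> the final model has seen 4 training images.
final = iterative_training(3, capture_image=lambda k: f"img_{k}",
                           annotate=lambda img: f"annotated_{img}")
```

The design point the claim relies on is that each round reuses the previous model's weights rather than training from scratch, so the detector adapts gradually to ever-denser fruit adhesion.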
2. The method for positioning adhered citrus based on the deep convolutional neural network model according to claim 1, wherein the step S1 of performing first image acquisition on the sample citrus tree by using a CCD camera to obtain the first visible light image comprises:
S101, performing preliminary image acquisition on the sample citrus tree by using a CCD camera to obtain a preliminary visible light image;
S102, processing the preliminary visible light image according to a preset color difference map technique and the Otsu threshold segmentation algorithm to obtain an intermediate visible light image, and separating a first citrus fruit region from the intermediate visible light image;
S103, generating a mask image such that only the first citrus fruit region is exposed after the mask image is superimposed on the intermediate visible light image;
S104, superimposing the mask image on the preliminary visible light image to obtain a second citrus fruit region in the preliminary visible light image;
S105, counting the brightness values of all pixel points in the second citrus fruit region, and calculating a first brightness average value of the second citrus fruit region;
S106, retrieving a pre-collected high-brightness image of mature citrus fruit, and retrieving the ambient light brightness value recorded when the high-brightness image was collected; wherein the high-brightness image means that, when the image of the mature citrus fruit was collected, the visible light brightness value of the environment in which the mature citrus fruit was located was greater than a preset brightness threshold, or that the mature citrus fruit was irradiated by light generated by a visible light generator whose power value was greater than a preset power threshold;
S107, calculating a second brightness average value of the high-brightness image according to a preset brightness average value calculation formula;
and S108, adjusting the brightness of the preliminary visible light image by using a first image enhancement algorithm based on the retinex model, according to the ambient light brightness value, the first brightness average value and the second brightness average value, so as to obtain the first visible light image.
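Steps S101 through S108 can be illustrated with a minimal sketch. The Otsu threshold below is a standard textbook implementation; the color difference map is replaced by a toy chroma array, and a plain gain adjustment stands in for the retinex-based enhancement, whose exact formula the claim does not disclose.

```python
import numpy as np

def otsu_threshold(gray):
    """Textbook Otsu threshold on a uint8 grayscale array (S102)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = gray.size
    cum_w = np.cumsum(hist)                   # class-0 pixel counts
    cum_m = np.cumsum(hist * np.arange(256))  # class-0 intensity sums
    best_t, best_var = 0, 0.0
    for t in range(1, 255):
        w0, w1 = cum_w[t], total - cum_w[t]
        if w0 == 0 or w1 == 0:
            continue
        m0 = cum_m[t] / w0
        m1 = (cum_m[-1] - cum_m[t]) / w1
        between_var = w0 * w1 * (m0 - m1) ** 2  # between-class variance
        if between_var > best_var:
            best_var, best_t = between_var, t
    return best_t

# Toy "color difference map": 50 = foliage/background, 200 = fruit.
chroma = np.array([[50] * 4] * 2 + [[200] * 4] * 2, dtype=np.uint8)
t = otsu_threshold(chroma)
mask = chroma > t                          # S102-S104: fruit-region mask
fruit_mean = float(chroma[mask].mean())    # S105: first brightness average
gain = 180.0 / fruit_mean                  # S108 stand-in: plain gain toward a
adjusted = np.clip(chroma * gain, 0, 255)  # reference mean (not full retinex)
```

The target mean of 180 here plays the role of the second brightness average value from the reference high-brightness image; in the claimed method the correction additionally accounts for the ambient light brightness value.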
3. The method for positioning adhered citrus based on the deep convolutional neural network model according to claim 2, wherein the step S107 of calculating the second brightness average value of the high-brightness image according to a preset brightness average value calculation formula comprises:
S1071, according to the formula:

v = (1 / (r·c)) · Σ_{i=1..r} Σ_{j=1..c} [α·R(i,j) + β·G(i,j) + γ·B(i,j)]

calculating the second brightness average value v; wherein i and j are respectively the horizontal coordinate and the vertical coordinate of a pixel point of the high-brightness image, R(i,j), G(i,j) and B(i,j) are respectively the brightness component of the red color channel, the brightness component of the green color channel and the brightness component of the blue color channel, α, β and γ are respectively preset parameters, the maximum value of the horizontal coordinate of a pixel point of the high-brightness image is r, and the maximum value of the vertical coordinate of a pixel point of the high-brightness image is c.
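A direct transcription of the formula. The Rec. 601 luma weights used as defaults are only example values for the preset parameters α, β and γ, which the claim leaves unspecified:

```python
import numpy as np

def second_brightness_average(rgb, alpha=0.299, beta=0.587, gamma=0.114):
    """v = (1/(r*c)) * sum over all r*c pixels of alpha*R + beta*G + gamma*B."""
    r_ch, g_ch, b_ch = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return float(np.mean(alpha * r_ch + beta * g_ch + gamma * b_ch))

# A uniform grey image averages to its own level because these example
# weights sum to 1 (0.299 + 0.587 + 0.114 = 1.0).
rgb = np.full((4, 5, 3), 100.0)   # r = 4 rows, c = 5 columns
v = second_brightness_average(rgb)
```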
4. The method for positioning adhered citrus based on the deep convolutional neural network model according to claim 2, wherein the step S9 of performing image acquisition on a target citrus tree by using a CCD camera to obtain a visible light image to be positioned, and inputting the visible light image to be positioned into the (i+1)th citrus positioning model for processing to obtain a preliminary citrus positioning result output by the (i+1)th citrus positioning model comprises:
S901, performing image acquisition on the target citrus tree by using a CCD camera to obtain an unprocessed visible light image;
S902, adjusting the brightness of the unprocessed visible light image by using a preset second image enhancement algorithm to obtain the visible light image to be positioned; wherein the second image enhancement algorithm is the same as the first image enhancement algorithm;
and S903, inputting the visible light image to be positioned into the (i+1)th citrus positioning model for processing to obtain the preliminary citrus positioning result output by the (i+1)th citrus positioning model.
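The claims do not state how the adhered-fruit ratio of steps S10 and S11 is derived from the preliminary positioning result. One plausible reading, sketched below, treats detections with overlapping bounding boxes as adhered fruit; `boxes_overlap` and `adhered_ratio` are illustrative names, not part of the patent.

```python
# Hypothetical S10-S11 decision: fruit whose bounding box overlaps another
# detection is counted as adhered.

def boxes_overlap(a, b):
    # a, b = (x1, y1, x2, y2); True if the rectangles intersect
    return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

def adhered_ratio(boxes):
    """Share of detected fruit whose box overlaps at least one other box."""
    if not boxes:
        return 0.0
    adhered = sum(
        1 for i, a in enumerate(boxes)
        if any(i != j and boxes_overlap(a, b) for j, b in enumerate(boxes))
    )
    return adhered / len(boxes)

# Two of the three boxes touch, so the ratio is 2/3; being above the preset
# threshold, the preliminary result is kept as the final result (S11).
boxes = [(0, 0, 10, 10), (8, 0, 18, 10), (30, 30, 40, 40)]
ratio = adhered_ratio(boxes)
preset_threshold = 0.5
final_result = boxes if ratio > preset_threshold else None
```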
5. A device for positioning adhered citrus based on a deep convolutional neural network model, comprising:
the first visible light image acquisition unit is used for acquiring a first image of the sample citrus tree by adopting a CCD (charge coupled device) camera to obtain a first visible light image; wherein the sample citrus tree has no adhered citrus fruit;
a first citrus positioning model obtaining unit, configured to perform manual calibration on a citrus position in the first visible light image to obtain a first training image, and input the first training image into a preset deep convolutional neural network model for training to obtain a first citrus positioning model;
a first picking and moving unit for performing a first picking and moving process on the citrus fruit on the sample citrus tree, so that the position of the citrus fruit in the sample citrus tree is changed, so that adhered citrus fruit exists on the sample citrus tree, and the ratio of the number of the adhered citrus fruit to the number of all citrus fruit on the sample citrus tree is a first ratio value; wherein the total number of citrus fruit on the sample citrus tree is unchanged after the first picking and moving process; the first ratio value is greater than 0 and less than 1;
the second visible light image acquisition unit is used for carrying out second image acquisition on the sample citrus tree subjected to the first picking and moving processing by adopting the CCD camera to obtain a second visible light image; the shooting parameters during the second image acquisition are the same as those during the first image acquisition;
a second citrus positioning model obtaining unit, configured to perform manual calibration on the citrus positions in the second visible light image to obtain a second training image, and input the second training image into the first citrus positioning model for training to obtain a second citrus positioning model;
the ith picking and moving unit is used for performing ith picking and moving processing on the citrus fruit on the sample citrus tree, so that the positions of the citrus fruit on the sample citrus tree are changed, adhered citrus fruit exists on the sample citrus tree, and the ratio of the number of the adhered citrus fruit to the number of all citrus fruit on the sample citrus tree is an ith ratio value; wherein the total number of citrus fruit on the sample citrus tree is unchanged after the ith picking and moving processing, and the ith ratio value is greater than the (i-1)th ratio value; i is an integer greater than or equal to 2 and less than or equal to n, and n is a preset integer greater than or equal to 3; the ith ratio value is greater than 0 and less than 1;
the i +1 th visible light image acquisition unit is used for performing i +1 th image acquisition on the sample citrus tree subjected to the i-th picking and moving processing by adopting a CCD (charge coupled device) camera to obtain an i +1 th visible light image; the shooting parameters during the (i + 1) th image acquisition are the same as those during the ith image acquisition;
an (i+1)th citrus positioning model obtaining unit, configured to perform manual calibration on the citrus positions in the (i+1)th visible light image to obtain an (i+1)th training image, and input the (i+1)th training image into the ith citrus positioning model for training to obtain an (i+1)th citrus positioning model;
the preliminary citrus positioning result acquisition unit is used for acquiring images of a target citrus tree by adopting a CCD (charge coupled device) camera to obtain a visible light image to be positioned, and inputting the visible light image to be positioned into the (i + 1) th citrus positioning model for processing to obtain a preliminary citrus positioning result output by the (i + 1) th citrus positioning model;
the ratio threshold judging unit is used for obtaining the ratio of adhered citrus fruit on the target citrus tree according to the preliminary citrus positioning result, and judging whether the ratio of adhered citrus fruit on the target citrus tree is greater than a preset ratio threshold;
and the final citrus positioning result obtaining unit is used for taking the preliminary citrus positioning result as the final citrus positioning result if the ratio of adhered citrus fruit on the target citrus tree is greater than the preset ratio threshold.
6. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 4 when executing the computer program.
7. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 4.
CN202110571099.0A 2021-05-25 2021-05-25 Method for positioning adhered citrus based on deep convolutional neural network model Active CN113192129B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110571099.0A CN113192129B (en) 2021-05-25 2021-05-25 Method for positioning adhered citrus based on deep convolutional neural network model

Publications (2)

Publication Number Publication Date
CN113192129A CN113192129A (en) 2021-07-30
CN113192129B true CN113192129B (en) 2022-03-25

Family

ID=76985271

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110571099.0A Active CN113192129B (en) 2021-05-25 2021-05-25 Method for positioning adhered citrus based on deep convolutional neural network model

Country Status (1)

Country Link
CN (1) CN113192129B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114758243B (en) * 2022-04-29 2022-11-11 广东技术师范大学 Tea leaf picking method and device based on supplementary training and dual-class position prediction

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108319973A (en) * 2018-01-18 2018-07-24 仲恺农业工程学院 Detection method for citrus fruits on tree
CN109190493A (en) * 2018-08-06 2019-01-11 甘肃农业大学 Image-recognizing method, device and robotic vision system

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10860888B2 (en) * 2018-01-05 2020-12-08 Whirlpool Corporation Detecting objects in images
CN109711325B (en) * 2018-12-25 2023-05-23 华南农业大学 Mango picking point identification method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Immature citrus detection method based on image segmentation and decision-level fusion; Zhuang Jiajun et al.; Journal of Zhongkai University of Agriculture and Engineering; 2020-12-31; Vol. 33, No. 4; pp. 46-52 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant