CN117115661A - Crop identification method and device - Google Patents

Crop identification method and device

Info

Publication number
CN117115661A
Authority
CN
China
Prior art keywords
crop
sample picture
model
crops
identification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311246222.7A
Other languages
Chinese (zh)
Inventor
潘志强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Tianchuang Jinnong Technology Co ltd
Original Assignee
Beijing Tianchuang Jinnong Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Tianchuang Jinnong Technology Co ltd filed Critical Beijing Tianchuang Jinnong Technology Co ltd
Priority to CN202311246222.7A
Publication of CN117115661A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/10 Terrestrial scenes
    • G06V 20/188 Vegetation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/761 Proximity, similarity or dissimilarity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Sorting Of Articles (AREA)

Abstract

Embodiments of the present application provide a crop identification method and device. The crop identification method comprises: acquiring crop characteristics of a crop to be identified from a crop picture; inputting the crop characteristics into a classification recognition model, which recognizes the crop classification of the crop to be identified according to the crop characteristics, the crop classifications comprising fruit crops and leaf crops; where the crop is classified as a fruit crop, inputting the crop characteristics into a first model of a crop identification model, which identifies a fruit identification result for the crop to be identified according to the group characteristics and/or individual characteristics included in the crop characteristics; and where the crop is classified as a leaf crop, inputting the crop characteristics into a second model of the crop identification model, which identifies a leaf identification result for the crop to be identified according to the group characteristics included in the crop characteristics.

Description

Crop identification method and device
Technical Field
The application relates to the field of intelligent planting, in particular to a crop identification method and device.
Background
To better meet the development needs of modern agriculture, real-time image monitoring has been applied to farming: the indoor environment of a greenhouse, the growth of crops and crop quality can be monitored in real time, and guidance on planting management of the corresponding crops can be given remotely. However, the inventors found a problem: real-time monitoring still requires considerable manpower and time to analyze the growth and variety of the crops before corresponding planting guidance can be given.
Disclosure of Invention
The present application provides a crop identification method and device that can intelligently analyze the type of a crop at any stage of its growth period. This saves the manpower otherwise spent observing each growth period when analyzing crop types, and allows reasonable, science-based planting guidance to be given according to the growth needs of that crop type, so that crops grow more scientifically and healthily.
The application provides a crop identification method, which comprises the following steps:
acquiring crop characteristics of crops to be identified in a crop picture;
inputting the crop characteristics into a classification recognition model, wherein the classification recognition model is used for recognizing the crop classification of the crop to be identified according to the crop characteristics, and the crop classifications comprise fruit crops and leaf crops;
inputting the crop characteristics into a first model of a crop identification model in the case that the crop is classified as a fruit crop, wherein the first model is used for identifying a fruit identification result of the crop to be identified according to the group characteristics and/or individual characteristics included in the crop characteristics;
and inputting the crop characteristics into a second model of the crop identification model in the case that the crop is classified as a leaf crop, wherein the second model is used for identifying a leaf identification result of the crop to be identified according to the group characteristics included in the crop characteristics.
Optionally, the training process of the classification recognition model includes:
acquiring a sample picture;
cleaning the sample picture to obtain a cleaned sample picture;
expanding the cleaned sample picture to obtain an expanded sample picture, wherein the expanded sample picture comprises a first sample picture of a fruit crop and a second sample picture of a leaf crop, the first sample picture comprises group characteristics and/or individual characteristics, the second sample picture comprises group characteristics, the individual characteristics comprise fruit characteristics and whole-plant characteristics of at least a single crop plant, and the group characteristics comprise leaf characteristics and stem characteristics of at least two plants;
labeling group characteristics and/or individual characteristics of the first sample picture and group characteristics of the second sample picture;
inputting the first sample picture and the group characteristics and/or individual characteristics of the first sample picture, and the second sample picture and the group characteristics of the second sample picture to an initial classification recognition model for training to obtain the classification recognition model.
Optionally, the training process of the crop identification model includes:
setting a first weight parameter for the group characteristics and/or the individual characteristics, and a second weight parameter for the group characteristics;
inputting the first sample picture, its group characteristics and/or individual characteristics, and the first weight parameter into a first initial crop identification model for training to obtain the first model;
and inputting the second sample picture, its group characteristics and the second weight parameter into a second initial crop identification model for training to obtain the second model.
Optionally, cleaning the sample picture comprises:
determining a similarity parameter of the sample picture, and deleting the sample picture when the similarity parameter is smaller than a preset similarity threshold;
determining a sharpness parameter of the sample picture, and deleting the sample picture when the sharpness parameter is smaller than a preset sharpness threshold.
Optionally, the similarity parameter of the sample picture is determined by the following formula:

SSIM(X, Y) = l(X, Y) · c(X, Y) · s(X, Y)

l(X, Y) = (2 μ_X μ_Y + C_1) / (μ_X² + μ_Y² + C_1)
c(X, Y) = (2 σ_X σ_Y + C_2) / (σ_X² + σ_Y² + C_2)
s(X, Y) = (σ_XY + C_3) / (σ_X σ_Y + C_3)

wherein μ_X and μ_Y represent the means of image X and image Y respectively, σ_X and σ_Y represent the standard deviations of image X and image Y respectively, σ_X² and σ_Y² represent the variances of image X and image Y respectively, σ_XY represents the covariance of image X and image Y, and C_1, C_2 and C_3 are constants.
Optionally, determining the sharpness parameter of the sample picture comprises:
determining the gray mean value of all pixels of the sample picture;
determining the difference between each pixel of the sample picture and the gray mean value, and the sum of the squares of the differences;
and normalizing the sum-of-squares parameter based on the number of pixels of the sample picture to determine the sharpness parameter of the sample picture, wherein the sharpness parameter represents the average degree of gray-level variation in the sample picture.
Optionally,
the gray mean value μ of all pixels of the sample picture is determined by the formula:

μ = (1 / (N_x · N_y)) · Σ_x Σ_y f(x, y)

and the sum-of-squares parameter s of the differences is determined by the formula:

s = Σ_x Σ_y (f(x, y) - μ)²
the application also provides a crop identification apparatus comprising: the crop identification system comprises an acquisition unit, a classification identification model and a crop identification model;
the acquisition unit is configured to acquire crop characteristics of crops to be identified in the crop picture;
the classification recognition model is configured to receive the crop characteristics, and recognize crop classifications of the crops to be recognized according to the crop characteristics, wherein the crop classifications comprise fruit crops and leaf crops;
in the case that the crop is classified as a fruit crop, the first model of the crop identification model is configured to receive the crop characteristics and identify a fruit identification result of the crop to be identified according to the group characteristics and/or individual characteristics included in the crop characteristics;
in the case that the crop is classified as a leaf crop, the second model of the crop identification model is configured to receive the crop characteristics and identify a leaf identification result of the crop to be identified according to the group characteristics included in the crop characteristics.
Optionally, the training process of the classification recognition model includes:
acquiring a sample picture;
cleaning the sample picture to obtain a cleaned sample picture;
expanding the cleaned sample picture to obtain an expanded sample picture, wherein the expanded sample picture comprises a first sample picture of a fruit crop and a second sample picture of a leaf crop, the first sample picture comprises group characteristics and/or individual characteristics, the second sample picture comprises group characteristics, the individual characteristics comprise fruit characteristics and whole-plant characteristics of at least a single crop plant, and the group characteristics comprise leaf characteristics and stem characteristics of at least two plants;
labeling group characteristics and/or individual characteristics of the first sample picture and group characteristics of the second sample picture;
inputting the first sample picture and the group characteristics and/or individual characteristics of the first sample picture, and the second sample picture and the group characteristics of the second sample picture to an initial classification recognition model for training to obtain the classification recognition model.
Optionally, the training process of the crop identification model includes:
setting a first weight parameter for the group characteristics and/or the individual characteristics, and a second weight parameter for the group characteristics;
inputting the first sample picture, its group characteristics and/or individual characteristics, and the first weight parameter into a first initial crop identification model for training to obtain the first model;
and inputting the second sample picture, its group characteristics and the second weight parameter into a second initial crop identification model for training to obtain the second model.
Optionally, cleaning the sample picture comprises:
determining a similarity parameter of the sample picture, and deleting the sample picture when the similarity parameter is smaller than a preset similarity threshold;
determining a sharpness parameter of the sample picture, and deleting the sample picture when the sharpness parameter is smaller than a preset sharpness threshold.
Optionally, the similarity parameter of the sample picture is determined by the following formula:

SSIM(X, Y) = l(X, Y) · c(X, Y) · s(X, Y)

l(X, Y) = (2 μ_X μ_Y + C_1) / (μ_X² + μ_Y² + C_1)
c(X, Y) = (2 σ_X σ_Y + C_2) / (σ_X² + σ_Y² + C_2)
s(X, Y) = (σ_XY + C_3) / (σ_X σ_Y + C_3)

wherein μ_X and μ_Y represent the means of image X and image Y respectively, σ_X and σ_Y represent the standard deviations of image X and image Y respectively, σ_X² and σ_Y² represent the variances of image X and image Y respectively, σ_XY represents the covariance of image X and image Y, and C_1, C_2 and C_3 are constants.
Optionally, determining the sharpness parameter of the sample picture comprises:
determining the gray mean value of all pixels of the sample picture;
determining the difference between each pixel of the sample picture and the gray mean value, and the sum of the squares of the differences;
and normalizing the sum-of-squares parameter based on the number of pixels of the sample picture to determine the sharpness parameter of the sample picture, wherein the sharpness parameter represents the average degree of gray-level variation in the sample picture.
Optionally,
the gray mean value μ of all pixels of the sample picture is determined by the formula:

μ = (1 / (N_x · N_y)) · Σ_x Σ_y f(x, y)

wherein N_x represents the width of the gray image corresponding to the sample picture, N_y represents the height of the gray image corresponding to the sample picture, x represents the horizontal position of a pixel in the gray image, y represents the vertical position of a pixel in the gray image, and f(x, y) represents the gray value of the pixel at position (x, y) in the gray image;

the sum-of-squares parameter s of the differences is determined by the formula:

s = Σ_x Σ_y (f(x, y) - μ)²
according to the technical scheme provided by the embodiment of the application, the types of crops can be intelligently analyzed at any stage of the crop growth period, so that the manpower consumed by observing each growth period when the types of crops are analyzed is saved, and reasonable and scientific planting guidance suggestion can be given by combining the growth needs of the crops of the types, so that the crops can grow more scientifically and healthily.
Drawings
To more clearly illustrate the embodiments of the present application or the technical solutions of the prior art, the drawings used in describing the embodiments or the prior art are briefly introduced below. It is apparent that the drawings described below show only some embodiments of the present application, and that other drawings can be obtained from them by a person skilled in the art without inventive effort.
FIG. 1 is a schematic diagram of a crop identification process according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a crop identification process according to another embodiment of the present application;
fig. 3 is a schematic illustration of labeling a tomato photo according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a Chinese chive photograph according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a Chinese chive photograph according to another embodiment of the present application;
fig. 6 is a schematic block diagram of a crop identification apparatus according to an embodiment of the application.
Detailed Description
To enable those skilled in the art to better understand the present application, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings.
Some of the flows described in the specification, the claims and the figures above include a number of operations that occur in a particular order; these operations may also be performed out of the order in which they appear, or concurrently. Sequence numbers such as 101 and 102 merely distinguish the operations and do not by themselves represent any order of execution. In addition, the flows may include more or fewer operations, which may be performed sequentially or in parallel. Note that the terms "first" and "second" herein distinguish different messages, devices, modules, etc.; they do not represent an order, nor do they require that the "first" and "second" items be of different types.
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application. It will be apparent that the described embodiments are only some, but not all, embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The related art applies real-time image monitoring to agriculture: the indoor environment of a greenhouse, the growth of crops and crop quality can be monitored in real time, and guidance on planting management of the corresponding crops can be given remotely. However, the inventors found a problem: before planting guidance can be given, the crop type must be accurately analyzed, and the real-time monitoring adopted in the related art requires considerable manpower and time to observe the crop's characteristics in each growth period before the crop type can be accurately determined and corresponding planting guidance given in combination with the crop's growth.
It is therefore necessary to provide a crop identification method in which, according to the crop characteristics, a classification recognition model first roughly classifies the crop (fruit crop or leaf crop), and a crop identification model then identifies the specific crop type. In this way the crop type can be intelligently analyzed at any stage of the growth period, saving the manpower spent observing each growth period when analyzing crop types, and allowing reasonable, science-based planting guidance to be given according to the growth needs of that crop type, so that crops grow more scientifically and healthily.
Referring to fig. 1, an embodiment of the present application provides a crop identification method, which at least includes 102-108.
102. Acquire crop characteristics of the crop to be identified in a crop picture.
In one embodiment, the crop to be identified comprises at least one of a fruit crop and a leaf crop. Fruit crops are crops that bear fruit, for example: strawberry, tomato, cucumber, eggplant, sweet potato leaf, balsam pear, capsicum, dragon fruit, grape, watermelon, cherry, corn and peach. Leaf crops are non-fruit crops, i.e. crops that do not bear fruit, such as Chinese chives, celery and rape.
In a greenhouse planting scene, crop pictures are taken by sensors in the greenhouse. Because these sensors cover a wide field of view and cannot photograph a single plant at close range, the crops in the photos exhibit group characteristics. For fruit crops, the photo includes fruits, so a fruit-crop photo also exhibits individual characteristics of the crop (e.g. fruits). Thus different crop types are identified from different characteristics: fruit crops from group characteristics, individual characteristics, or a combination of the two; leaf crops from group characteristics. In other embodiments, the crop may also be identified in combination with surrounding environmental characteristics.
It should be understood that in a conventional open-air planting scene, to improve recognition efficiency, crop pictures are taken automatically by photographing devices such as unmanned aerial vehicles and remote-sensing cameras. These devices cover a wide area, so the crops in the pictures exhibit group characteristics; likewise, fruit-crop photos also exhibit individual characteristics of the crop (e.g. fruits).
104. Input the crop characteristics into a classification recognition model, which recognizes the crop classification of the crop to be identified according to the crop characteristics; the crop classifications comprise fruit crops and leaf crops.
106. Where the crop is classified as a fruit crop, input the crop characteristics into the first model of a crop identification model, which identifies the fruit identification result of the crop to be identified according to the group characteristics and/or individual characteristics included in the crop characteristics.
108. Where the crop is classified as a leaf crop, input the crop characteristics into the second model of the crop identification model, which identifies the leaf identification result of the crop to be identified according to the group characteristics included in the crop characteristics.
In the embodiment of the present application, the classification recognition model first roughly classifies the crop as a fruit crop or a leaf crop according to its crop characteristics, and the crop identification model then identifies the specific crop type. Recognizing first whether the crop is a fruit crop or a leaf crop, and only then its specific type, allows the crop type to be intelligently analyzed at any stage of the growth period, saving the manpower spent observing each growth period when analyzing crop types, and allowing reasonable, science-based planting guidance to be given according to the growth needs of that crop type, so that crops grow more scientifically and healthily.
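The two-stage flow of 102-108 can be sketched as follows. The three callables here are illustrative stand-ins for the trained classification recognition model and the first and second models, and the dictionary-based feature encoding is an assumption for the sake of the sketch:

```python
# Sketch of the two-stage routing: coarse classification, then type-specific
# identification. All three "models" below are placeholders, not the trained
# models described in this application.

def classify_coarse(features: dict) -> str:
    # Stand-in for the classification recognition model: here we simply
    # treat the presence of individual (fruit) features as "fruit crop".
    return "fruit" if features.get("individual") else "leaf"

def identify_fruit(features: dict) -> str:
    # Stand-in for the first model (uses group and/or individual features).
    return f"fruit crop identified from {sorted(features)}"

def identify_leaf(features: dict) -> str:
    # Stand-in for the second model (uses group features only).
    return f"leaf crop identified from {sorted(features)}"

def identify_crop(features: dict) -> str:
    coarse = classify_coarse(features)
    if coarse == "fruit":
        return identify_fruit(features)
    return identify_leaf(features)

print(identify_crop({"group": ["leaves", "stems"], "individual": ["fruits"]}))
print(identify_crop({"group": ["leaves", "stems"]}))
```

The point of the structure is that the second-stage models never see a crop of the wrong coarse class, which is what lets each of them specialize on its own feature set.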
Referring to fig. 2, the present embodiment provides a crop identification method, which includes three processes of training a classification identification model, training a crop identification model, and crop identification.
Process one, training classification recognition model
2011. Obtain sample pictures.
In an embodiment, greenhouse planting is taken as the application scene, and sample pictures of 10 common crops (tomato, celery, cucumber, strawberry, eggplant, capsicum, rape, Chinese chives, grape and balsam pear) are obtained, with multiple sample pictures per crop. Of these, tomato, cucumber, strawberry, eggplant, capsicum, grape and balsam pear are fruit crops, and celery, rape and Chinese chives are leaf crops.
2012. Clean the sample pictures to obtain cleaned sample pictures.
Data cleaning is a process of re-examining and verifying data. Its purpose is to delete duplicate information, correct existing errors and ensure data consistency; it is an important step in guaranteeing the correctness of subsequent results.
In one embodiment, cleaning the sample pictures comprises 20121-20122. It should be understood that 20121 and 20122 are not limited to a particular execution order: they may be executed simultaneously, 20121 may be executed before 20122, or 20122 before 20121.
20121. Determine the similarity parameter of the sample picture, and delete the sample picture when the similarity parameter is smaller than a preset similarity threshold. In the embodiment of the present application, the similarity parameter is the structural similarity index (SSIM) of the image, whose value lies in the range [0, 1]; the larger the SSIM value, the smaller the image distortion and the more similar the two images.
Specifically, the similarity parameter of the sample picture is determined by the following formula:

SSIM(X, Y) = l(X, Y) · c(X, Y) · s(X, Y)

l(X, Y) = (2 μ_X μ_Y + C_1) / (μ_X² + μ_Y² + C_1)
c(X, Y) = (2 σ_X σ_Y + C_2) / (σ_X² + σ_Y² + C_2)
s(X, Y) = (σ_XY + C_3) / (σ_X σ_Y + C_3)

wherein μ_X and μ_Y represent the means of image X and image Y respectively, σ_X and σ_Y represent the standard deviations of image X and image Y respectively, σ_X² and σ_Y² represent the variances of image X and image Y respectively, σ_XY represents the covariance of image X and image Y, and C_1, C_2 and C_3 are constants. Usually C_1 = (K_1 L)², C_2 = (K_2 L)² and C_3 = C_2 / 2, with K_1 = 0.01 and K_2 = 0.03, where L is the dynamic range of the pixel values, generally taken as 255. With C_3 = C_2 / 2, the product reduces to the common single-fraction form SSIM(X, Y) = ((2 μ_X μ_Y + C_1)(2 σ_XY + C_2)) / ((μ_X² + μ_Y² + C_1)(σ_X² + σ_Y² + C_2)).
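As a minimal illustration of this computation, the following NumPy sketch evaluates SSIM over the whole image in a single window with the usual constants K_1 = 0.01, K_2 = 0.03, L = 255. Note this is a simplification: practical SSIM implementations (e.g. scikit-image's) average the index over local sliding windows, and the function name here is an illustrative placeholder:

```python
import numpy as np

def ssim_global(x: np.ndarray, y: np.ndarray, L: float = 255.0,
                K1: float = 0.01, K2: float = 0.03) -> float:
    """Single-window SSIM between two equal-sized grayscale images."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    C1, C2 = (K1 * L) ** 2, (K2 * L) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()            # sigma_X^2, sigma_Y^2
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()  # sigma_XY
    return ((2 * mu_x * mu_y + C1) * (2 * cov_xy + C2)) / \
           ((mu_x ** 2 + mu_y ** 2 + C1) * (var_x + var_y + C2))

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64))
print(ssim_global(img, img))        # identical images -> 1.0
print(ssim_global(img, 255 - img))  # inverted image -> far below 1
```

A near-duplicate pair scores close to 1 and would be pruned by the similarity threshold described above.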
20122. Determine the sharpness parameter of the sample picture, and delete the sample picture when the sharpness parameter is smaller than a preset sharpness threshold. In the embodiment of the present application, the clearer the image, the higher its quality, the larger the sharpness parameter and the smaller the degree of blur; the less clear (more blurred) the image, the lower its quality, the smaller the sharpness parameter and the greater the blur.
Specifically, determining the sharpness parameter of the sample picture comprises:
determining the gray mean value of all pixels of the sample picture;
determining the difference between each pixel of the sample picture and the gray mean value, and the sum of the squares of the differences;
and normalizing the sum-of-squares parameter based on the number of pixels of the sample picture to determine the sharpness parameter, wherein the sharpness parameter represents the average degree of gray-level variation in the sample picture.
Optionally,
the gray mean value μ of all pixels of the sample picture is determined by the formula:

μ = (1 / (N_x · N_y)) · Σ_x Σ_y f(x, y)

wherein N_x represents the width of the gray image corresponding to the sample picture, N_y represents the height of the gray image corresponding to the sample picture, x represents the horizontal position of a pixel in the gray image, y represents the vertical position of a pixel in the gray image, and f(x, y) represents the gray value of the pixel at position (x, y) in the gray image;

the sum-of-squares parameter s of the differences is determined by the formula:

s = Σ_x Σ_y (f(x, y) - μ)²
In 2012, sample pictures whose similarity parameter or sharpness parameter does not meet the preset conditions are deleted, completing the cleaning of the pictures.
2013. Expand the cleaned sample pictures to obtain expanded sample pictures, wherein the expanded sample pictures comprise first sample pictures of fruit crops and second sample pictures of leaf crops; a first sample picture comprises group characteristics and/or individual characteristics, and a second sample picture comprises group characteristics. The individual characteristics comprise fruit characteristics and whole-plant characteristics of at least a single crop plant; the group characteristics comprise whole-plant characteristics of at least two crop plants; and the whole-plant characteristics comprise leaf characteristics and stem characteristics.
2014. Labeling the group characteristics and/or individual characteristics of the first sample picture and the group characteristics of the second sample picture.
Here, labeling means enclosing the crop features in a picture in feature boxes.
For example, the first sample picture is a photograph of tomatoes, including photographs of the immature stage and of the mature stage. In a photograph of the immature stage, the group characteristics of the tomatoes are enclosed in feature boxes; in a photograph of the mature stage, the individual characteristics of the tomatoes must also be enclosed in feature boxes.
Since only leaves and stems can be recognized on tomatoes in the immature stage, leaves and stems are recognized as group elements and cannot be labeled individually. The group characteristics of the tomatoes, comprising leaf characteristics and stem characteristics of at least two plants, are therefore enclosed in feature boxes. As shown in fig. 3, tomato leaves and stems are labeled in the same box. Because tomatoes do not grow in one continuous sheet, a gap is left between the first box and the second box and the features are divided by sub-region, each box containing both leaves and stems; clear, easily recognizable near-field regions are labeled, and blurred far-field regions are discarded.
The most important characteristics of the mature stage are the fruits and their color. When labeling crop characteristics, obvious, large fruits are chosen as the crop characteristics and enclosed in feature boxes, with the fruit positioned in the middle of the box; densely fruited regions are labeled, and regions where the fruit is small or occluded are discarded. In addition to the fruit, the whole plant should also be labeled, i.e. regions containing leaves and stems, so that the leaf and stem characteristics of the plant are also enclosed in feature boxes. In other embodiments, to recognize the various crop types more accurately, the labeled crop characteristics also include environmental characteristics, including the ground environment in which the crop is planted.
For another example, the second sample picture is a picture of leeks (Chinese chives), including a photograph of an immature stage and a photograph of a mature stage. For the photograph of the immature stage, the population characteristics of the leeks and the ground environment are divided into characteristic frames; for the photograph of the mature stage, the population characteristics of the leeks are divided into characteristic frames.
The characteristics of leeks in the immature stage are sparse growth, crops growing in rows (columns), linear leaves, and exposed ground. Only the clearly visible near-end region is retained during marking, and each marking frame contains the crop and the corresponding ground environment, as shown in fig. 4.
The characteristics of leeks in the mature stage are relatively dense growth, with the exposed ground occluded and long linear leaves. The marked region needs to contain the leaf characteristics and be of an appropriate size, neither too large nor too small. Clearly visible near-end regions that are easy to identify are marked, and blurred far-end regions are discarded, as shown in fig. 5.
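The feature-frame marking described above can be recorded in a plain-text annotation format. The sketch below writes one marked frame as a normalized YOLO-style label line, a common convention for detection datasets; the class names, image size, and box coordinates are illustrative assumptions, as the embodiment does not specify a storage format.

```python
# Sketch: serialize a marked characteristic frame as a YOLO-style label
# line "class_id x_center y_center width height" with values normalized
# to [0, 1]. Class names and coordinates here are hypothetical.

def to_yolo_line(class_id, box, img_w, img_h):
    """box = (x_min, y_min, x_max, y_max) in pixels."""
    x_min, y_min, x_max, y_max = box
    xc = (x_min + x_max) / 2 / img_w    # normalized box center x
    yc = (y_min + y_max) / 2 / img_h    # normalized box center y
    w = (x_max - x_min) / img_w         # normalized box width
    h = (y_max - y_min) / img_h         # normalized box height
    return f"{class_id} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}"

# Assumed class list: population boxes vs. individual (fruit) boxes.
CLASSES = ["tomato_population", "tomato_fruit", "leek_population"]

# A clearly visible near-end population frame on a 1920x1080 photograph.
line = to_yolo_line(CLASSES.index("tomato_population"),
                    (100, 200, 900, 1000), 1920, 1080)
```

One such line would be written per characteristic frame, so a picture with several sub-region frames produces several label lines.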
2014. Inputting the first sample picture and the group characteristics and/or individual characteristics of the first sample picture, and the second sample picture and the group characteristics of the second sample picture to an initial classification recognition model for training to obtain a classification recognition model.
In one embodiment, the initial classification recognition model employs a ViT-16 model with a Transformer architecture based on a self-attention mechanism. After training, the obtained classification recognition model can be used to coarsely classify crops as fruit crops or leaf crops.
Process two, training the crop identification model
2021. Setting a first weight parameter of the group characteristics and/or the individual characteristics and a second weight parameter of the group characteristics.
2022. Inputting the first sample picture, the population characteristics and/or individual characteristics, and the first weight parameter to a first type crop identification initial model for training to obtain the first type model.
In this embodiment, the first type of crop identification initial model is a YoloX deep learning classification network. After training, the obtained first type model can be used for identifying the specific type of the fruit crops.
2023. And inputting a second sample picture, group characteristics and a second weight parameter to a second crop identification initial model for training to obtain a second model.
In this embodiment, the second type of crop identification initial model is a YoloX deep learning classification network. After the training is finished, the obtained second model can be used for identifying the specific type of the leaf crops.
Process three, crop identification
2031. And acquiring crop characteristics of crops to be identified in the crop picture.
In one embodiment, the crop to be identified comprises at least one of a fruit crop and a leaf crop. Fruit crops are crops that bear fruit, for example: strawberry, tomato, cucumber, eggplant, sweet potato leaf, balsam pear, capsicum, dragon fruit, grape, watermelon, cherry, corn, peach, and the like. Leaf crops are non-fruit crops, i.e., crops that do not bear fruit, such as leek, celery, rape, and the like.
In a greenhouse planting scene, crop pictures are taken by sensors in the greenhouse. Because a sensor in the greenhouse has a wide shooting range and cannot photograph a single plant at close range, the crops in the photographs present population characteristics. For fruit crops, the crop photograph includes fruits, so a fruit crop photograph also presents individual characteristics of the crop (e.g., fruits). Thus, the crop characteristics used for identification differ for different types of crops: fruit crops use population characteristics or individual characteristics, or a combination of the two, whereas leaf crops use population characteristics. In other embodiments, the crop may also be identified in combination with surrounding environmental characteristics.
It should be understood that in a conventional open-air planting scene, to improve identification efficiency, crop pictures are taken automatically by photographing devices such as unmanned aerial vehicles and remote-sensing cameras. Since these devices photograph a wide area, the crops in the pictures present population characteristics. Likewise, fruit crop photographs also present individual characteristics of the crop (e.g., fruits).
2032. And inputting crop characteristics into a classification recognition model, wherein the classification recognition model is used for recognizing crop classification of crops to be recognized according to the crop characteristics, and the crop classification comprises fruit crops and leaf crops.
In one embodiment, the crop picture is a tomato picture, and the classification result output by the classification recognition model is a fruit crop. In another embodiment, the crop picture is a leek picture, and the classification result output by the classification recognition model is a leaf crop.
2034. And under the condition that the crops are classified into fruit crops, inputting the crop characteristics into a first model of a crop identification model, wherein the first model is used for identifying the fruit identification result of the crop to be identified according to the group characteristics and/or the individual characteristics included by the crop characteristics.
In one embodiment, the crop picture is a tomato picture and the classification result output by the classification recognition model is a fruit crop; the crop characteristics are then input into the first type model corresponding to fruit crops, and the first type model identifies the crop as a tomato according to the crop characteristics of the fruit crop (such as the population characteristics in the immature stage or the individual characteristics in the mature stage).
2036. And under the condition that the crops are classified into leaf crops, inputting the crop characteristics into a second model of the crop identification model, wherein the second model is used for identifying leaf identification results of the crops to be identified according to group characteristics included by the crop characteristics.
In another embodiment, the crop picture is a leek picture and the classification result output by the classification recognition model is a leaf crop; the crop characteristics are then input into the second model corresponding to leaf crops, and the second model identifies the crop as a leek according to the crop characteristics of the leaf crop (for example, stem characteristics and leaf characteristics).
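The two-stage dispatch of steps 2031 through 2036 can be sketched as follows; the three model callables are hypothetical stubs standing in for the trained classification recognition model and the two crop identification models, not the embodiment's actual networks.

```python
# Sketch of the two-stage pipeline: a coarse classifier routes the crop
# characteristics to the fruit-crop or leaf-crop recognizer. All three
# functions are illustrative stubs for the trained ViT / YoloX models.

def classify_coarse(features):
    """Stand-in for the classification recognition model (step 2032)."""
    return "fruit" if "fruit" in features else "leaf"

def recognize_fruit(features):
    """Stand-in for the first type model (step 2034)."""
    return "tomato"

def recognize_leaf(features):
    """Stand-in for the second model (step 2036)."""
    return "leek"

def identify_crop(features):
    category = classify_coarse(features)      # coarse: fruit vs. leaf
    if category == "fruit":
        return category, recognize_fruit(features)
    return category, recognize_leaf(features)

# A tomato picture exposes population and fruit (individual) features.
result = identify_crop({"population", "fruit"})
```

The routing logic is the point here: the specific-type recognizer is only invoked after the coarse classification has selected which model to use.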
According to the embodiments of the present application, the classification recognition model first coarsely classifies a crop as a fruit crop or a leaf crop according to its crop characteristics, and the crop identification model then identifies the specific type of the crop. In other words, whether the crop is a fruit crop or a leaf crop is identified first, and the specific type is identified afterwards. In this way, the type of a crop can be analyzed intelligently at any stage of its growth period, saving the manpower otherwise consumed in observing each growth period, and reasonable, scientific planting guidance can be provided in combination with the growth needs of that type of crop, so that the crop grows more scientifically and healthily.
Referring to fig. 6, an embodiment of the present application provides a crop identification apparatus, including: an acquisition unit 600, a classification model 601 and a crop identification model 602.
The obtaining unit 600 is configured to obtain a crop feature of the crop to be identified in the crop picture.
The classification recognition model 601 is configured to receive crop features, and to recognize crop classifications of crops to be recognized according to the crop features, the crop classifications including fruit crops and leaf crops.
In the case of a crop classified as a fruit crop, the first type of model of the crop identification model 602 is configured to receive crop characteristics, identify a fruit identification result of the crop to be identified based on population characteristics and/or individual characteristics included by the crop characteristics.
In the case of a crop classified as a leaf crop, the second model of the crop identification model 602 is configured to receive crop features, identify leaf identification results of the crop to be identified from population features included in the crop features.
Optionally, the training process of the classification recognition model includes:
acquiring a sample picture;
cleaning the sample picture to obtain a cleaned sample picture;
expanding the cleaned sample picture to obtain an expanded sample picture, wherein the expanded sample picture comprises a first sample picture of a fruit crop and a second sample picture of a leaf crop, the first sample picture comprises population characteristics and/or individual characteristics, the second sample picture comprises population characteristics, the individual characteristics comprise fruit characteristics and whole-plant characteristics of at least a single plant, and the population characteristics comprise leaf characteristics and stem characteristics of at least two plants;
labeling group characteristics and/or individual characteristics of the first sample picture and group characteristics of the second sample picture;
inputting the first sample picture and the group characteristics and/or individual characteristics of the first sample picture, and the second sample picture and the group characteristics of the second sample picture to an initial classification recognition model for training to obtain a classification recognition model.
Optionally, the training process of the crop identification model includes:
setting first weight parameters of group characteristics and/or individual characteristics and second weight parameters of the group characteristics;
inputting a first sample picture, group characteristics and/or individual characteristics and a first weight parameter to a first crop identification initial model for training to obtain a first model;
and inputting a second sample picture, group characteristics and a second weight parameter to a second crop identification initial model for training to obtain a second model.
Optionally, cleaning the sample picture includes:
determining similarity parameters of the sample pictures, and deleting the sample pictures when the similarity parameters are smaller than a preset similarity threshold;
and determining the definition parameters of the sample pictures, and deleting the sample pictures when the definition parameters are smaller than a preset definition threshold.
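The two cleaning rules above can be sketched as a single filter pass over the sample set; the threshold values, score functions, and sample ids below are illustrative assumptions, not values from the embodiment.

```python
# Sketch of the cleaning step: drop any sample whose similarity or
# definition (sharpness) parameter falls below its preset threshold.
# Thresholds and scores here are hypothetical.

def clean(samples, similarity, sharpness, sim_thresh=0.5, sharp_thresh=100.0):
    """samples: iterable of ids; similarity/sharpness: id -> float."""
    kept = []
    for s in samples:
        if similarity(s) < sim_thresh:      # below similarity threshold: delete
            continue
        if sharpness(s) < sharp_thresh:     # below sharpness threshold: delete
            continue
        kept.append(s)
    return kept

# (similarity, sharpness) scores for three hypothetical sample pictures.
scores = {"a": (0.9, 150.0), "b": (0.3, 200.0), "c": (0.8, 20.0)}
kept = clean(scores, lambda s: scores[s][0], lambda s: scores[s][1])
```

Only samples passing both checks survive into the expansion step.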
Optionally, the formula for determining the similarity parameter of the sample picture is:

$$\mathrm{SSIM}(X,Y)=\frac{2\mu_X\mu_Y+C_1}{\mu_X^2+\mu_Y^2+C_1}\cdot\frac{2\sigma_X\sigma_Y+C_2}{\sigma_X^2+\sigma_Y^2+C_2}\cdot\frac{\sigma_{XY}+C_3}{\sigma_X\sigma_Y+C_3}$$

wherein $\mu_X$, $\mu_Y$ represent the mean values of image X and image Y respectively, $\sigma_X$, $\sigma_Y$ represent the standard deviations of image X and image Y respectively, $\sigma_X^2$, $\sigma_Y^2$ represent the variances of image X and image Y respectively, $\sigma_{XY}$ represents the covariance of image X and image Y, and $C_1$, $C_2$ and $C_3$ are constants.
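A minimal single-window implementation of such a structural-similarity parameter, assuming the conventional SSIM constants ($C_1=(0.01L)^2$, $C_2=(0.03L)^2$, $C_3=C_2/2$ for gray range $L$, values not given in the embodiment), could look like:

```python
import numpy as np

# Sketch: global (single-window) SSIM between two grayscale images,
# built from the means, standard deviations, and covariance named in
# the text. Constant choices are conventional assumptions.

def ssim(X, Y, L=255.0):
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    C3 = C2 / 2
    mu_x, mu_y = X.mean(), Y.mean()
    sig_x, sig_y = X.std(), Y.std()
    sig_xy = ((X - mu_x) * (Y - mu_y)).mean()   # covariance
    l = (2 * mu_x * mu_y + C1) / (mu_x**2 + mu_y**2 + C1)   # luminance
    c = (2 * sig_x * sig_y + C2) / (sig_x**2 + sig_y**2 + C2)  # contrast
    s = (sig_xy + C3) / (sig_x * sig_y + C3)                # structure
    return l * c * s

img = np.random.default_rng(0).uniform(0, 255, (32, 32))
```

An identical image pair yields a score of 1, and the cleaning step would compare this score against the preset similarity threshold.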
Optionally, determining the sharpness parameter of the sample picture includes:
determining the gray average value of all pixels of a sample picture;
determining a difference value between each pixel of the sample picture and the gray average value and a square sum parameter of the difference value;
based on the pixel number of the sample picture, the square sum parameter is standardized, the definition parameter of the sample picture is determined, and the definition parameter represents the average degree of the image gray level change of the sample picture.
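The definition (sharpness) parameter described above, i.e. the sum of squared deviations from the gray average normalized by the pixel count, can be sketched as follows, assuming a NumPy array of gray values:

```python
import numpy as np

# Sketch: definition parameter = mean squared deviation of pixel gray
# values from the picture's gray average (the gray-level variance).

def sharpness(gray):
    mu = gray.mean()                 # gray average over all pixels
    s = ((gray - mu) ** 2).sum()     # sum-of-squares parameter
    return s / gray.size             # normalized by the pixel count

flat = np.full((8, 8), 128.0)        # uniform image: no gray variation
```

A uniform (fully blurred) image scores 0, while an image with strong gray-level variation scores high, matching the "average degree of gray-level change" interpretation in the text.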
Optionally, the formula for determining the gray average value $\bar{f}$ of all pixels of the sample picture is:

$$\bar{f}=\frac{1}{N_x N_y}\sum_{x=1}^{N_x}\sum_{y=1}^{N_y} f(x,y)$$

wherein $N_x$ represents the width of the gray image corresponding to the sample picture, $N_y$ represents the height of the gray image corresponding to the sample picture, x represents the horizontal position of a pixel in the gray image, y represents the vertical position of a pixel in the gray image, and f(x, y) represents the gray value of the pixel at position (x, y) in the gray image corresponding to the sample picture;

the formula for determining the sum-of-squares parameter s of the differences is:

$$s=\sum_{x=1}^{N_x}\sum_{y=1}^{N_y}\bigl(f(x,y)-\bar{f}\,\bigr)^2$$
accordingly, the embodiments of the present application also provide a computer readable storage medium storing a computer program, where the computer program when executed by a computer can implement the steps or functions of the data monitoring method during diversion provided in the foregoing embodiments.
The apparatus embodiments described above are merely illustrative; the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement it without creative effort.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus necessary general hardware platforms, or of course may be implemented by means of hardware. Based on this understanding, the foregoing technical solution may be embodied essentially or in a part contributing to the prior art in the form of a software product, which may be stored in a computer readable storage medium, such as ROM/RAM, a magnetic disk, an optical disk, etc., including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method described in the respective embodiments or some parts of the embodiments.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present application, and are not limiting; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (10)

1. A method of crop identification comprising:
acquiring crop characteristics of crops to be identified in a crop picture;
inputting the crop characteristics to a classification recognition model, wherein the classification recognition model is used for recognizing crop classifications of the crops to be recognized according to the crop characteristics, and the crop classifications comprise fruit crops and leaf crops;
inputting the crop characteristics into a first type model of a crop identification model under the condition that the crop is classified as a fruit crop, wherein the first type model is used for identifying a fruit identification result of the crop to be identified according to group characteristics and/or individual characteristics included by the crop characteristics;
and under the condition that the crop is classified as a leaf crop, inputting the crop characteristics into a second model of the crop identification model, wherein the second model is used for identifying leaf identification results of the crop to be identified according to group characteristics included by the crop characteristics.
2. The method of claim 1, wherein the training process of the classification recognition model comprises:
acquiring a sample picture;
cleaning the sample picture to obtain a cleaned sample picture;
expanding the cleaned sample picture to obtain an expanded sample picture, wherein the expanded sample picture comprises a first sample picture of a fruit crop and a second sample picture of a leaf crop, the first sample picture comprises population characteristics and/or individual characteristics, the second sample picture comprises population characteristics, the individual characteristics comprise fruit characteristics and whole plant characteristics of at least a single plant crop, and the population characteristics comprise leaf characteristics and stem characteristics of at least two plants;
labeling group characteristics and/or individual characteristics of the first sample picture and group characteristics of the second sample picture;
inputting the first sample picture and the group characteristics and/or individual characteristics of the first sample picture, and the second sample picture and the group characteristics of the second sample picture to an initial classification recognition model for training to obtain the classification recognition model.
3. The method of claim 2, wherein the training process of the crop identification model comprises:
setting a first weight parameter of the group characteristics and/or the individual characteristics and a second weight parameter of the group characteristics;
inputting the first sample picture, the group characteristics and/or the individual characteristics and the first weight parameters to a first type crop identification initial model for training to obtain the first type model;
and inputting the second sample picture, the group characteristics and the second weight parameters to a second type crop identification initial model for training to obtain the second type model.
4. A method according to claim 2 or 3, wherein said washing said sample picture comprises:
determining a similarity parameter of the sample picture, and deleting the sample picture when the similarity parameter is smaller than a preset similarity threshold;
determining a definition parameter of the sample picture, and deleting the sample picture when the definition parameter is smaller than a preset definition threshold.
5. The method of claim 4, wherein the similarity parameter of the sample picture is determined by the following formula:

$$\mathrm{SSIM}(X,Y)=\frac{2\mu_X\mu_Y+C_1}{\mu_X^2+\mu_Y^2+C_1}\cdot\frac{2\sigma_X\sigma_Y+C_2}{\sigma_X^2+\sigma_Y^2+C_2}\cdot\frac{\sigma_{XY}+C_3}{\sigma_X\sigma_Y+C_3}$$

wherein $\mu_X$, $\mu_Y$ represent the mean values of image X and image Y respectively, $\sigma_X$, $\sigma_Y$ represent the standard deviations of image X and image Y respectively, $\sigma_X^2$, $\sigma_Y^2$ represent the variances of image X and image Y respectively, $\sigma_{XY}$ represents the covariance of image X and image Y, and $C_1$, $C_2$ and $C_3$ are constants.
6. The method of claim 4, wherein determining sharpness parameters of the sample picture comprises:
determining the gray average value of all pixels of the sample picture;
determining a difference value between each pixel of the sample picture and the gray average value and a square sum parameter of the difference value;
and based on the pixel number of the sample picture, normalizing the square sum parameter, and determining a definition parameter of the sample picture, wherein the definition parameter represents the average degree of the image gray level change of the sample picture.
7. The method of claim 6, wherein the formula for determining the gray average value $\bar{f}$ of all pixels of the sample picture is:

$$\bar{f}=\frac{1}{N_x N_y}\sum_{x=1}^{N_x}\sum_{y=1}^{N_y} f(x,y)$$

wherein $N_x$ represents the width of the gray image corresponding to the sample picture, $N_y$ represents the height of the gray image corresponding to the sample picture, x represents the horizontal position of a pixel in the gray image corresponding to the sample picture, y represents the vertical position of a pixel in the gray image corresponding to the sample picture, and f(x, y) represents the gray value of the pixel at position (x, y) in the gray image corresponding to the sample picture;

and the formula for determining the sum-of-squares parameter s of the differences is:

$$s=\sum_{x=1}^{N_x}\sum_{y=1}^{N_y}\bigl(f(x,y)-\bar{f}\,\bigr)^2$$
8. A crop identification apparatus, comprising: an acquisition unit, a classification recognition model, and a crop identification model;
the acquisition unit is configured to acquire crop characteristics of crops to be identified in the crop picture;
the classification recognition model is configured to receive the crop characteristics, and recognize crop classifications of the crops to be recognized according to the crop characteristics, wherein the crop classifications comprise fruit crops and leaf crops;
in the case of the crop being classified as a fruit crop, the first type model of the crop identification model is configured to receive the crop characteristics, identify a fruit identification result of the crop to be identified according to population characteristics and/or individual characteristics included in the crop characteristics;
in the case of the crop being classified as a foliar crop, the second model of the crop identification model is configured to receive the crop characteristics, identify foliar identification results of the crop to be identified according to group characteristics comprised by the crop characteristics.
9. The apparatus of claim 8, wherein the training process of the classification recognition model comprises:
acquiring a sample picture;
cleaning the sample picture to obtain a cleaned sample picture;
expanding the cleaned sample picture to obtain an expanded sample picture, wherein the expanded sample picture comprises a first sample picture of a fruit crop and a second sample picture of a leaf crop, the first sample picture comprises population characteristics and/or individual characteristics, the second sample picture comprises population characteristics, the individual characteristics comprise fruit characteristics and whole plant characteristics of at least a single plant crop, and the population characteristics comprise leaf characteristics and stem characteristics of at least two plants;
labeling group characteristics and/or individual characteristics of the first sample picture and group characteristics of the second sample picture;
inputting the first sample picture and the group characteristics and/or individual characteristics of the first sample picture, and the second sample picture and the group characteristics of the second sample picture to an initial classification recognition model for training to obtain the classification recognition model.
10. The apparatus of claim 9, wherein the training process of the crop identification model comprises:
setting a first weight parameter of the group characteristics and/or the individual characteristics and a second weight parameter of the group characteristics;
inputting the first sample picture, the group characteristics and/or the individual characteristics and the first weight parameters to a first type crop identification initial model for training to obtain the first type model;
and inputting the second sample picture, the group characteristics and the second weight parameters to a second type crop identification initial model for training to obtain the second type model.
CN202311246222.7A 2023-09-25 2023-09-25 Crop identification method and device Pending CN117115661A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311246222.7A CN117115661A (en) 2023-09-25 2023-09-25 Crop identification method and device


Publications (1)

Publication Number Publication Date
CN117115661A true CN117115661A (en) 2023-11-24

Family

ID=88807655

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311246222.7A Pending CN117115661A (en) 2023-09-25 2023-09-25 Crop identification method and device

Country Status (1)

Country Link
CN (1) CN117115661A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination