CN116704497B - Rape phenotype parameter extraction method and system based on three-dimensional point cloud


Info

Publication number: CN116704497B
Application number: CN202310587705.7A
Authority: CN (China)
Other languages: Chinese (zh)
Other versions: CN116704497A
Prior art keywords: rape, point cloud, point, phenotype, points
Legal status: Active
Inventors: 张喜海, 郭锐超, 孟繁峰, 龚鑫晶, 朱家喜, 张茹雯
Current assignee: Northeast Agricultural University
Original assignee: Northeast Agricultural University
Application filed by: Northeast Agricultural University
Priority date and filing date: 2023-05-24
Publication of CN116704497A: 2023-09-05
Application granted and publication of CN116704497B: 2024-03-26

Classifications (all under G: PHYSICS; G06: COMPUTING, CALCULATING OR COUNTING)

    • G06V 20/68: Scenes; scene-specific elements; type of objects; food, e.g. fruit or vegetables
    • G06T 17/00: Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 7/62: Image analysis; analysis of geometric attributes of area, perimeter, diameter or volume
    • G06V 10/267: Image preprocessing; segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
    • G06V 10/44: Extraction of image or video features; local feature extraction by analysis of parts of the pattern, e.g. edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06V 10/52: Extraction of image or video features; scale-space analysis, e.g. wavelet analysis
    • G06V 10/56: Extraction of image or video features relating to colour
    • G06V 10/763: Pattern recognition or machine learning using clustering; non-hierarchical techniques, e.g. based on statistics of modelling distributions
    • G06T 2207/10028: Indexing scheme for image analysis; image acquisition modality; range image, depth image, 3D point clouds


Abstract

The invention provides a rape phenotype parameter extraction method and system based on three-dimensional point clouds, relates to the technical field of crop phenotype parameter extraction, and aims to solve the problem that existing methods struggle to effectively extract plant phenotype parameters at the level of a single leaf organ. The method comprises the following steps: S1, acquiring multi-angle images of rape and constructing a rape image data set; S2, performing three-dimensional reconstruction on the images in the rape image data set to obtain a rape point cloud data set, and preprocessing the point clouds in the data set; S3, performing semantic segmentation on the preprocessed rape point cloud to obtain the point clouds of the rape leaves, stems and base, screening out the leaf point clouds, removing the connection points between leaves by edge filtering, and performing instance segmentation of individual leaves with a clustering algorithm; S4, extracting the phenotype parameters of the rape. The invention achieves organ-level point cloud segmentation of individual rape leaves, and the extracted rape phenotype parameters show high accuracy.

Description

Rape phenotype parameter extraction method and system based on three-dimensional point cloud
Technical Field
The invention relates to the technical field of crop phenotype parameter extraction, in particular to a rape phenotype parameter extraction method and system based on three-dimensional point cloud.
Background
Plant phenotyping is the comprehensive assessment of complex plant traits, and the basic measurement of the single quantitative parameters underlying more complex traits is one of the important means of characterizing plant growth, development, architecture and yield. Crop shape (plant height, leaf area, etc.) is an important component of the plant phenotype, reflecting both the traits of the crop itself and environmental factors (temperature, humidity, etc.). How to obtain shape phenotype parameters non-destructively has long been a major challenge in plant phenotyping. With the rapid development of computer vision technology and optical sensing systems, researchers have come close to reproducing whole plant systems by technical means, obtaining plant phenotype parameters through various data analysis methods without damaging the plant at any point in the process.
Since two-dimensional image methods observe objects from specific angles and lack depth information, they are commonly used to process monocotyledonous plants or plants with few leaves. For plants with many leaves and large areas of leaf overlap, they cannot accurately describe the complete structure of the plant, so the statistics derived from phenotypic trait measurements are unreliable. Crop phenotype analysis using three-dimensional images depends mainly on three-dimensional reconstruction of the plant. Current plant phenotype research based on three-dimensional images uses four common three-dimensional reconstruction representations: depth maps, point clouds, voxels and meshes. Phenotypic features are extracted by analyzing plant point cloud data, but the point cloud data is usually not consumed directly; the resulting computational overhead and the mismatch between data representation and analysis method also limit the development of plant phenotyping.
It is therefore challenging to introduce deep learning methods that consume point cloud data directly into the processing of three-dimensional plant point clouds and phenotype extraction. Point cloud data is unordered and must be handled in a rotation-invariant manner; moreover, although point clouds contain color information, for predominantly green plants the color boundaries between organs remain unclear, making it difficult to segment the plant point cloud at the organ level.
Disclosure of Invention
The invention aims to solve the following technical problem:
for plants that are predominantly green and whose organs are not clearly distinguished by color, existing methods have difficulty effectively extracting plant phenotype parameters at the level of a single leaf organ.
The invention adopts the technical scheme for solving the technical problems:
the invention provides a three-dimensional point cloud-based rape phenotype parameter extraction method, which comprises the following steps of:
s1, acquiring rape multi-angle images and constructing a rape image data set;
s2, carrying out three-dimensional reconstruction on the images in the rape image data set to obtain a rape point cloud data set, and preprocessing point clouds in the point cloud data set;
s3, carrying out semantic segmentation on the preprocessed rape point cloud points to obtain point clouds of leaves, stems and bottoms of rape, screening leaf point clouds based on characteristics of colors of rape leaves after semantic segmentation, removing connection points among the leaves through edge filtering, eliminating point cloud noise among the leaves, and carrying out instance segmentation of single-piece leaves through a clustering algorithm;
s4, extracting phenotype parameters of the rape.
Further, the acquiring of rape multi-angle images comprises acquiring rape images at multiple angles around the rape circumference, and at each angle acquiring rape images from three viewpoints: head-on, top-down and bottom-up.
Further, in step S2, three-dimensional reconstruction is performed based on the structure-from-motion method SFM. Firstly, a subset of the images is selected for feature point extraction and matching; since matching results generally contain many errors, a KNN algorithm is used to find the 2 features that best match each feature, and if the ratio of the matching distance of the first feature to that of the second feature is smaller than a preset threshold the match is accepted, otherwise it is regarded as a mismatch, yielding point cloud data. The coordinates of the point cloud are converted to the world coordinate system, and these operations are executed in sequence until all the images have been processed, giving the three-dimensional point cloud of the complete rape plant.
Further, the point cloud preprocessing in S2 is as follows: firstly, a radius filter and a statistical filter are applied to remove discrete point cloud noise, and then a downsampling operation and a Gaussian noise addition operation are performed for data augmentation and enhancement.
Further, in S3, semantic segmentation is performed on the rape point cloud using PointNet++, comprising:
firstly, sampling the point cloud in each layer through a set abstraction layer (Set Abstraction Layer) to obtain a new point set; comprising the following steps:
(1) An input Point Cloud Set (Point Cloud Set) consisting of points in three-dimensional space;
(2) Dividing the input point cloud set into a number of spherical regions, each with a specified radius;
(3) The following steps are performed for each spherical region: a. selecting a representative point from the points in the spherical region as a center point in the new point set; b. encoding points within the spherical region using relative position information between points within the spherical region and the center point; c. feature extraction of points within the sphere using an MLP (Multi-Layer Perceptron); d. pooling (max pooling) the points in the spherical region to obtain a fixed-length feature vector representation; e. splicing the feature vector with the feature vector of the center point to form a new point set;
(4) Repeating the steps (2) and (3) until the set layering times are reached;
secondly, propagating the features of the sampled point set to the original point set through a feature propagation layer (Feature Propagation Layer), thereby obtaining the feature information of each point;
thirdly, aggregating the features with different scales through a Multi-scale Grouping layer (Multi-scale Grouping) to obtain more comprehensive feature representation;
finally, the characteristics of the point cloud are mapped to a space with higher dimension through a multi-layer perceptron, so that the segmented rape point cloud is obtained.
Further, in S3, the instance segmentation of individual leaves is performed with a clustering algorithm, specifically the DBSCAN clustering algorithm: the leaf point cloud with the connection points between leaves removed is input into the DBSCAN clustering algorithm, and two iterations of the computation are performed to obtain the individual leaf instances.
Further, the phenotypic parameters of S4 include plant height, leaf number, leaf length, leaf width and leaf area.
Further, the leaf length is calculated by measuring the Euclidean distance between the maximum and minimum points on the axis corresponding to leaf length, and the leaf width is calculated by measuring the Euclidean distance between the maximum and minimum points on the axis corresponding to leaf width, namely

$L = \|p_i - p_j\|_2$, $W = \|p_l - p_r\|_2$,

where $p_i$ and $p_j$ denote the i-th and j-th points on the axis corresponding to length, and $p_l$ and $p_r$ denote the l-th and r-th points on the axis corresponding to width;
measuring the leaf area by the triangular mesh method, namely

$S = \sum_{i=1}^{n} \frac{1}{2} \left\| \overrightarrow{AB_i} \times \overrightarrow{AC_i} \right\|$,

where $\overrightarrow{AB_i}$ and $\overrightarrow{AC_i}$ denote the vectors of the AB and AC sides of the i-th triangle, the leaf area being obtained from the modular length of the cross product of the vectors;
calculating the scale based on the base of the rape, namely

$r = h_{real} / h_{reconstructed}$,

where $r$ denotes the scale between the real point cloud and the reconstructed point cloud, $h_{real}$ the true height of the rape base, and $h_{reconstructed}$ the reconstructed height of the rape base;
the obtained phenotype parameters are converted into the true values of the rape phenotype parameters through scaling.
A three-dimensional point cloud-based rape phenotype parameter extraction system, which has program modules corresponding to the steps of any one of the above technical schemes and, when run, executes the steps of the three-dimensional point cloud-based rape phenotype parameter extraction method.
A rape phenotype parameter extraction device based on three-dimensional point clouds comprises a rotating base, a depth camera, a fill light, black light-shielding cloth and a computer. The rotating base holds the rape and is rotated to adjust the circumferential angle of the rape; rape images are collected by the depth camera; the fill light and the black light-shielding cloth improve the imaging conditions; and the captured rape image data is transmitted to the computer for storage and processing.
Compared with the prior art, the invention has the beneficial effects that:
according to the rape phenotype parameter extraction method and system based on the three-dimensional point cloud, rape plants are used as research objects, a high-resolution rape three-dimensional point cloud data set is obtained through three-dimensional reconstruction of rape images, rape leaf point clouds are extracted through semantic segmentation, connection points among leaves are removed through edge filtering, single-leaf leaves are segmented through a clustering algorithm, and finally point cloud segmentation of organ levels of the single-leaf leaves of rape is achieved, so that higher segmentation accuracy is achieved. The rape phenotype parameter results obtained by the method are compared with manually measured data, and the results show that the method has higher accuracy, meets the requirements of the existing rape phenotype research, and shows higher correlation. The method provides a reference for extracting the target parameters of other plants.
Drawings
FIG. 1 is a flow chart of a three-dimensional point cloud-based rape phenotype parameter extraction method in an embodiment of the invention;
FIG. 2 is a diagram of a three-dimensional point cloud-based rape phenotype parameter extraction device in an embodiment of the invention;
FIG. 3 is a diagram of a rape point cloud semantic segmentation model based on PointNet++ in an embodiment of the invention;
FIG. 4 is a graph showing the results of three-dimensional reconstruction of part of the rape in an embodiment of the present invention;
FIG. 5 is a graph of accuracy and loss trend in PointNet++ semantic segmentation model training in an embodiment of the present invention;
FIG. 6 is a graph showing the comparison between the extraction result of plant height parameters and the manual measurement result in the embodiment of the invention;
FIG. 7 shows graphs comparing the leaf length and leaf width parameter extraction results with manual measurements in an embodiment of the present invention, wherein graph (a) compares the leaf length extraction results with the manual measurements and graph (b) compares the leaf width extraction results with the manual measurements;
FIG. 8 is a graph comparing the extraction results of leaf area parameters with the results of manual measurements in an embodiment of the present invention.
Detailed Description
In the description of the present invention, it should be noted that the terms "first", "second" and "third" mentioned in the embodiments of the present invention are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined by "first", "second" or "third" may explicitly or implicitly include one or more such features.
In order that the above objects, features and advantages of the invention will be readily understood, a more particular description of the invention will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings.
Example 1
As shown in fig. 2, this embodiment uses a rape phenotype parameter extraction device based on three-dimensional point clouds, comprising a rotating base, an Intel RealSense D435i depth camera, an LED fill light, black light-shielding cloth and a computer. The rotating base holds the rape and is rotated to adjust the circumferential angle of the rape, and rape images are collected with the depth camera, whose shooting position is adjusted according to the height and size of the rape. After each image is collected, the rotating base is rotated clockwise by 2-5 degrees and paused for 1-2 s at each angle to ensure that the rape is still. The fill light and the black light-shielding cloth improve the imaging conditions; about 150 images are acquired for each rape plant, and the captured rape image data is transmitted to the computer for storage and processing.
The invention provides a three-dimensional point cloud-based rape phenotype parameter extraction method, which is shown in figure 1 and comprises the following steps:
and acquiring rape multi-angle images and constructing a rape image data set.
The rape multi-angle image acquisition comprises acquiring rape images at multiple angles around the rape circumference, and at each angle acquiring rape images from three viewpoints: head-on, top-down and bottom-up.
And carrying out three-dimensional reconstruction on the images in the rape image data set to obtain a rape point cloud data set, and preprocessing the point cloud in the point cloud data set.
Three-dimensional reconstruction is performed based on the structure-from-motion method SFM. Firstly, a subset of the images is selected for feature point extraction and matching; since matching results generally contain many errors, a KNN algorithm is used to find the 2 features that best match each feature, and if the ratio of the matching distance of the first feature to that of the second feature is smaller than a preset threshold the match is accepted, otherwise it is regarded as a mismatch, yielding point cloud data. The coordinates of the point cloud are converted to the world coordinate system, and these operations are executed in sequence until all the images have been processed, giving the three-dimensional point cloud of the complete rape plant.
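For illustration, the ratio-test matching step described above might look like the following Python sketch using OpenCV SIFT features and brute-force k-NN matching; the 0.7 ratio threshold and the choice of SIFT are assumptions, since the text only specifies a KNN search against a preset threshold.

```python
import cv2

def match_features(img1_path, img2_path, ratio_thresh=0.7):
    img1 = cv2.imread(img1_path, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(img2_path, cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # For each descriptor, find its 2 nearest neighbours in the other image.
    matcher = cv2.BFMatcher()
    knn_matches = matcher.knnMatch(des1, des2, k=2)

    # Accept a match only when the best distance is clearly smaller than the
    # second-best distance; otherwise treat it as a mismatch.
    good = [m for m, n in knn_matches if m.distance < ratio_thresh * n.distance]
    return kp1, kp2, good
```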
The point cloud preprocessing process is as follows: firstly, a radius filter and a statistical filter are applied to remove discrete point cloud noise, and then a downsampling operation and a Gaussian noise addition operation are performed for data augmentation and enhancement.
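A minimal preprocessing sketch in Python with Open3D is given below; all numeric parameters (neighbor counts, radii, voxel size, noise sigma) are illustrative assumptions, as the values actually used are not stated.

```python
import numpy as np
import open3d as o3d

def preprocess(pcd: o3d.geometry.PointCloud) -> o3d.geometry.PointCloud:
    # Radius filter: drop points with too few neighbours inside a sphere.
    pcd, _ = pcd.remove_radius_outlier(nb_points=16, radius=0.05)
    # Statistical filter: drop points whose mean neighbour distance is an outlier.
    pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
    # Downsample to a regular density.
    return pcd.voxel_down_sample(voxel_size=0.002)

def augment_with_noise(points: np.ndarray, sigma: float = 0.002) -> np.ndarray:
    # Add zero-mean Gaussian noise to the xyz coordinates to expand the data set.
    return points + np.random.normal(0.0, sigma, size=points.shape)
```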
A rape point cloud semantic segmentation model is established, and semantic segmentation is performed on the preprocessed rape point cloud to obtain the point clouds of the rape leaves, stems and base; the leaf point clouds are screened out based on the color characteristics of the rape leaves after semantic segmentation, the connection points between leaves are removed by edge filtering to eliminate point cloud noise between leaves, and instance segmentation of individual leaves is performed with a clustering algorithm.
The rape point cloud semantic segmentation model is built on PointNet++, and the rape point cloud files are converted to NPY format as required by PointNet++. The point cloud data is labeled and randomly split into a training set and a test set at a ratio of 5:1. The rape data is an n x 1024 x 6 array, where n is the total number of blocks fed into the network, 1024 is the number of points contained in each input block, and 6 is the number of dimensions, i.e. spatial position information (x, y, z) and color information (R, G, B). The model is trained on the training set and its performance is evaluated on the test set; the initial learning rate is 0.001, the batch size defaults to 16, and the number of epochs is 32. A Softmax activation function maps the feature vectors into the class probability space, realizing the classification of the point cloud data. The loss function is a weighted cross-entropy loss,

$Loss = -\sum_i w_i \, y_i \log(p_i)$,

where $y_i$ denotes the true label of the point cloud, $p_i$ the predicted value calculated by the model, and $w_i$ the weight of class $i$.
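In a PyTorch-style implementation, the weighted cross-entropy above could be set up as follows; the class weights are placeholders, since the actual values of $w_i$ are not given.

```python
import torch
import torch.nn as nn

# Hypothetical per-class weights w_i for leaf / stem / base.
class_weights = torch.tensor([1.0, 2.0, 2.0])
criterion = nn.CrossEntropyLoss(weight=class_weights)

# logits: (batch, classes, points) scores from the segmentation head;
# labels: (batch, points) integer class IDs in {0, 1, 2}.
logits = torch.randn(16, 3, 1024)   # batch size 16, 1024 points per block
labels = torch.randint(0, 3, (16, 1024))
loss = criterion(logits, labels)    # weighted cross-entropy over all points
```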
As shown in fig. 3, semantic segmentation of the rape point cloud using PointNet++ comprises:
firstly, sampling the point cloud in each layer through a set abstraction layer (Set Abstraction Layer) to obtain a new point set; comprising the following steps:
(1) An input Point Cloud Set (Point Cloud Set) consisting of points in three-dimensional space;
(2) Dividing the input point cloud set into a number of spherical regions, each with a specified radius;
(3) The following steps are performed for each spherical region: a. selecting a representative point from the points in the spherical region as a center point in the new point set; b. encoding points within the spherical region using relative position information between points within the spherical region and the center point; c. feature extraction of points within the sphere using an MLP (Multi-Layer Perceptron); d. pooling (max pooling) the points in the spherical region to obtain a fixed-length feature vector representation; e. splicing the feature vector with the feature vector of the center point to form a new point set;
(4) Repeating the steps (2) and (3) until the set layering times are reached;
secondly, propagating the features of the sampled point set to the original point set through a feature propagation layer (Feature Propagation Layer), thereby obtaining the feature information of each point;
thirdly, aggregating the features with different scales through a Multi-scale Grouping layer (Multi-scale Grouping) to obtain more comprehensive feature representation;
finally, the characteristics of the point cloud are mapped to a space with higher dimension through a multi-layer perceptron, so that the segmented rape point cloud is obtained.
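The grouping and pooling inside one set abstraction step (items (2)-(3) above) can be sketched in NumPy as follows. This only illustrates the ball-query, relative-position encoding and max-pooling logic; a real PointNet++ implementation would use farthest point sampling and a learned MLP, both of which are stubbed here.

```python
import numpy as np

def ball_query_group(points, centers, radius, k):
    """Group up to k neighbours inside a sphere around each center.

    points: (N, 3) xyz coordinates; centers: (M, 3) points chosen from
    `points` (e.g. by farthest point sampling), so each sphere is non-empty.
    """
    groups = []
    for c in centers:
        dist = np.linalg.norm(points - c, axis=1)
        idx = np.where(dist < radius)[0][:k]
        neigh = points[idx] - c                     # relative-position encoding
        if len(neigh) < k:                          # pad by repeating a point
            pad = np.repeat(neigh[:1], k - len(neigh), axis=0)
            neigh = np.vstack([neigh, pad])
        groups.append(neigh)
    return np.stack(groups)                         # (M, k, 3)

def set_abstraction(points, centers, radius, k, mlp):
    """One set abstraction step: group, apply a point-wise MLP, max-pool."""
    grouped = ball_query_group(points, centers, radius, k)  # (M, k, 3)
    feats = mlp(grouped)                                    # (M, k, C) point features
    return feats.max(axis=1)                                # max pooling -> (M, C)
```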
When semantic segmentation of the rape point cloud is performed, the MLP (multi-layer perceptron) of the model produces k classifications, and k is set to 3, i.e. the rape point cloud is classified into the three classes of leaf, stem and base. After semantic segmentation, the colors corresponding to the point clouds of the three organs (leaf, stem and base) are green, blue and red, respectively.
The instance segmentation of individual leaves is performed with a clustering algorithm, specifically the DBSCAN clustering algorithm: the leaf point cloud with the connection points between leaves removed is input into the DBSCAN clustering algorithm and two iterations of the computation are performed, yielding the individual leaf instances; the point cloud of each independent leaf is saved.
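The clustering step might look like the following scikit-learn sketch; eps and min_samples are assumptions, and the second pass mentioned above would rerun the same clustering (e.g. with a tighter eps) on any cluster that still merges two leaves.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def segment_leaves(leaf_points, eps=0.01, min_samples=30):
    """Cluster the edge-filtered leaf point cloud into single-leaf instances.

    leaf_points: (N, 3) xyz array of leaf points with the inter-leaf
    connection points already removed. eps / min_samples are illustrative.
    """
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(leaf_points)
    # Label -1 marks residual noise points between leaves.
    return [leaf_points[labels == k] for k in sorted(set(labels)) if k != -1]
```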
Extracting phenotype parameters of rape.
The phenotypic parameters include plant height, leaf number, leaf length, leaf width and leaf area.
Because the principal direction of the reconstructed point cloud deviates somewhat from the three-dimensional coordinate axes of the visual image, the point cloud must undergo a coordinate transformation before the phenotype parameters are extracted. In this embodiment, the three-dimensional point cloud transformation function of MATLAB 2020a is used: the created rotation-translation matrix is applied to the original point cloud about its center point to obtain a new point cloud whose principal direction is consistent with the three-dimensional coordinate axes, namely

$T_A = M_T T_O$,

where $M_T$ denotes the translation-rotation matrix of the point cloud, and $T_O$ and $T_A$ denote the original point cloud data and the transformed point cloud data, respectively.
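An equivalent alignment can be sketched in Python with a PCA-derived rotation; the embodiment uses MATLAB's point cloud transform, so this NumPy version illustrates the idea rather than reproducing the authors' code.

```python
import numpy as np

def align_to_axes(points):
    """Rotate a point cloud so its principal directions match the xyz axes.

    points: (N, 3) original point cloud T_O. Returns the transformed cloud
    T_A, with the rotation M_T derived from PCA of the centered points.
    """
    centered = points - points.mean(axis=0)      # rotate about the center point
    # Right singular vectors of the centered cloud are its principal axes.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt.T                       # columns of vt.T = principal axes
```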
The phenotype parameters of the rape are extracted using CloudCompare. Specifically, the rape plant height is extracted by CloudCompare measurement; the leaf length is calculated by measuring the Euclidean distance between the maximum and minimum points on the axis corresponding to leaf length, and likewise the leaf width is calculated by measuring the Euclidean distance between the maximum and minimum points on the axis corresponding to leaf width, namely

$L = \|p_i - p_j\|_2$, $W = \|p_l - p_r\|_2$,

where $p_i$ and $p_j$ denote the i-th and j-th points on the axis corresponding to length, and $p_l$ and $p_r$ denote the l-th and r-th points on the axis corresponding to width;

the leaf area is measured by the triangular mesh method, namely

$S = \sum_{i=1}^{n} \frac{1}{2} \left\| \overrightarrow{AB_i} \times \overrightarrow{AC_i} \right\|$,

where $\overrightarrow{AB_i}$ and $\overrightarrow{AC_i}$ denote the vectors of the AB and AC sides of the i-th triangle, the leaf area being obtained from the modular length of the cross product of the vectors;
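The two measurement formulas translate directly into NumPy; the sketch below assumes the leaf cloud is already axis-aligned and that a triangulation of the leaf surface is available.

```python
import numpy as np

def axis_extent(leaf_pts, axis):
    """Euclidean distance between the extreme points along one axis,
    used for leaf length (length axis) and leaf width (width axis)."""
    i = leaf_pts[:, axis].argmax()
    j = leaf_pts[:, axis].argmin()
    return np.linalg.norm(leaf_pts[i] - leaf_pts[j])

def mesh_area(triangles):
    """Leaf area by the triangular mesh method.

    triangles: (n, 3, 3) array, one row of three xyz vertices (A, B, C)
    per triangle. Each triangle contributes half the norm of AB x AC.
    """
    ab = triangles[:, 1] - triangles[:, 0]
    ac = triangles[:, 2] - triangles[:, 0]
    return 0.5 * np.linalg.norm(np.cross(ab, ac), axis=1).sum()
```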
because the part of phenotype parameters of the rape can change along with the gesture, the scaling is calculated according to the bottom benchmark of the rape, namely
Where r represents the scaling between the real point cloud and the reconstructed point cloud, h real Refers to the true height of the rape base, h reconstructed Refers to the reconstructed height of the rape base;
and converting the obtained phenotype parameters such as plant height, leaf length, leaf width and leaf area into true values of the rape phenotype parameters through scaling.
Result analysis
The three-dimensional reconstruction results for part of the rape are shown in FIG. 4. It can be seen that the 3D reconstruction performs well overall, except that small regions of the point cloud may be missing at the rape shoot tips during reconstruction owing to viewing angle and camera resolution. As shown in Tables 1 and 2, six sets of rape point clouds were randomly selected for leaf segmentation, and the number of leaves and the leaf integrity were compared with ground truth data.
TABLE 1
TABLE 2
The results show that the absolute error in the number of leaves is very low, and reconstruction failure occurs only for seedlings with very high leaf overlap. Although small regions of the point cloud are missing in some leaves, the effect on the subsequent extraction of the rape plant phenotype parameters is small. The method of the invention is therefore considered to reconstruct and segment rape leaves with high accuracy.
As shown in FIG. 5, the training accuracy of the PointNet++ semantic segmentation model increases continuously while the training loss converges as it decreases, with both accuracy and loss improving markedly over the first 15 epochs. The results show that the PointNet++ semantic segmentation model segments rape with high accuracy: the test accuracy reaches 0.92 and the IoU is 0.66. Although some errors still occur at the junctions between different organs in the rape point cloud, experiments using these segmented point clouds for rape phenotype parameter extraction still demonstrate good accuracy.
As shown in Table 3, to verify that PointNet++ is better suited to plant point cloud semantic segmentation, a comparative experiment with PointNet was performed under the same environment variables. The results show that the PointNet++ model has better feature extraction and fusion capability on point cloud data, and is therefore better suited than the PointNet model to semantic segmentation research on plant point clouds.
TABLE 3
As shown in figs. 6 to 8, the plant height extraction results for 30 rape plants and the leaf length, leaf width and leaf area extraction results for 80 rape leaves were compared with manual measurements. For plant height, $R^2 = 0.9623$ and RMSE = 0.48 cm; for leaf length, $R^2 = 0.9067$ and RMSE = 0.68 cm; for leaf width, $R^2 = 0.9528$ and RMSE = 0.39 cm; for leaf area, $R^2 = 0.9412$ and RMSE = 6.24 cm². These results are far better than existing findings (Xiang, L., et al. Automated morphological traits extraction for sorghum plants via 3D point cloud data analysis. Computers and Electronics in Agriculture, 2019. 162: p. 951-961), which reported R = 0.81 and RMSE = 2.67 cm for plant height and $R^2 = 0.94$ and RMSE = 12.94 cm² for leaf area. From the results of this example, the rape plant height and leaf width results are more accurate, with smaller errors, than the leaf length and leaf area results. This is because the leaves gradually bend as the rape grows, producing excessive inclination angles, so the leaf length results as a whole are biased low; for the same reason, the leaf area results are smaller than the manually measured values. Overall, the accuracy of the rape phenotype parameters, such as plant height, leaf length, leaf width and leaf area, is high. The rape phenotype parameter extraction method based on three-dimensional point clouds can be applied to research on the phenotypic traits of other plants, providing an effective means of extracting plant phenotype parameters.
Although the invention is disclosed above, its scope is not limited thereto. Various changes and modifications may be made by those skilled in the art without departing from the spirit and scope of the invention, and such changes and modifications fall within the scope of the invention.

Claims (5)

1. A rape phenotype parameter extraction method based on three-dimensional point clouds, characterized by comprising the following steps:
s1, acquiring rape multi-angle images and constructing a rape image data set;
s2, carrying out three-dimensional reconstruction on the images in the rape image data set to obtain a rape point cloud data set, and preprocessing point clouds in the point cloud data set;
s3, carrying out semantic segmentation on the preprocessed rape point cloud points to obtain point clouds of leaves, stems and bottoms of rape, screening leaf point clouds based on characteristics of colors of rape leaves after semantic segmentation, removing connection points among the leaves through edge filtering, eliminating point cloud noise among the leaves, and carrying out instance segmentation of single-piece leaves through a clustering algorithm;
s4, extracting phenotype parameters of the rape;
s2, carrying out three-dimensional reconstruction based on a motion structure method SFM, firstly selecting a part of images to extract and match feature points, wherein a lot of errors generally exist in matching results, searching 2 features which are matched with the features most by using a KNN algorithm, accepting the matching if the ratio of the matching distance of the first feature to the matching distance of the second feature is smaller than a preset threshold, otherwise, regarding the matching as mismatching to obtain point cloud data, converting coordinates of the point cloud into a world coordinate system, and sequentially executing the operations until all images are completely calculated to obtain the three-dimensional point cloud of the complete rape;
the preprocessing process of the point cloud in the S2 is as follows: firstly, removing discrete point cloud noise by adopting a radius filter and a statistical filter, and then performing downsampling operation and Gaussian noise adding operation to perform data expansion and enhancement;
s3, performing instance segmentation of the single blade through a clustering algorithm, specifically adopting a DBscan clustering algorithm, inputting blade point clouds with connection points among the blades removed into the DBscan clustering algorithm, and performing iterative computation twice to obtain an instance of the single blade;
calculating the length of the blade by measuring the Euclidean distance between the maximum value point and the minimum value point on the corresponding axis of the length of the blade, and likewise, calculating the width of the blade by measuring the Euclidean distance between the maximum value point and the minimum value point on the corresponding axis of the width of the blade, namely
Wherein, i and j indicating the i-th and j-th points on the length-corresponding axis respectively, l and r respectively representing the first point and the r point on the corresponding width axis;
measuring the leaf area by the triangular mesh method, namely

$S = \sum_{i=1}^{n} \frac{1}{2} \left\| \overrightarrow{AB_i} \times \overrightarrow{AC_i} \right\|$,

where $\overrightarrow{AB_i}$ and $\overrightarrow{AC_i}$ denote the vectors of the AB and AC sides of the i-th triangle, the leaf area being obtained from the modular length of the cross product of the vectors;
calculating the scale based on the base of the rape, namely

$r = h_{real} / h_{reconstructed}$,

where $r$ denotes the scale between the real point cloud and the reconstructed point cloud, $h_{real}$ the true height of the rape base, and $h_{reconstructed}$ the reconstructed height of the rape base;
the obtained phenotype parameters are converted into the true values of the rape phenotype parameters through scaling.
2. The rape phenotype parameter extraction method based on three-dimensional point clouds of claim 1, wherein the acquiring of rape multi-angle images comprises acquiring rape images at multiple angles around the rape circumference, and at each angle acquiring rape images from three viewpoints: head-on, top-down and bottom-up.
3. The three-dimensional point cloud based rape phenotype parameter extraction method according to claim 1, wherein the semantic segmentation of rape point cloud by using PointNet++ in S3 comprises the following steps:
firstly, sampling the point cloud in each layer through a set abstraction layer (Set Abstraction Layer) to obtain a new point set; comprising the following steps:
(1) An input Point Cloud Set (Point Cloud Set) consisting of points in three-dimensional space;
(2) Dividing the input point cloud set into a number of spherical regions, each with a specified radius;
(3) The following steps are performed for each spherical region: a. selecting a representative point from the points in the spherical region as a center point in the new point set; b. encoding points within the spherical region using relative position information between points within the spherical region and the center point; c. feature extraction of points within the sphere using an MLP (Multi-Layer Perceptron); d. pooling (max pooling) the points in the spherical region to obtain a fixed-length feature vector representation; e. splicing the feature vector with the feature vector of the center point to form a new point set;
(4) Repeating the steps (2) and (3) until the set layering times are reached;
secondly, propagating the features of the sampled point set to the original point set through a feature propagation layer (Feature Propagation Layer), thereby obtaining the feature information of each point;
thirdly, aggregating the features with different scales through a Multi-scale Grouping layer (Multi-scale Grouping) to obtain more comprehensive feature representation;
finally, the characteristics of the point cloud are mapped to a space with higher dimension through a multi-layer perceptron, so that the segmented rape point cloud is obtained.
4. The rape phenotype parameter extraction method based on three-dimensional point clouds of claim 1, wherein the phenotype parameters of S4 comprise plant height, leaf number, leaf length, leaf width and leaf area.
5. A rape phenotype parameter extraction system based on three-dimensional point clouds, characterized in that: the system has program modules corresponding to the steps of any one of claims 1 to 4, and when run executes the steps of the rape phenotype parameter extraction method based on three-dimensional point clouds.
CN202310587705.7A 2023-05-24 2023-05-24 Rape phenotype parameter extraction method and system based on three-dimensional point cloud Active CN116704497B (en)

Priority Applications (1)

Application Number: CN202310587705.7A; Priority Date: 2023-05-24; Filing Date: 2023-05-24
Title: Rape phenotype parameter extraction method and system based on three-dimensional point cloud

Publications (2)

Publication Number / Publication Date
CN116704497A (en): 2023-09-05
CN116704497B (en): 2024-03-26

Family

ID=87844286

Family Applications (1)

Application Number: CN202310587705.7A (Active, CN116704497B); Priority Date: 2023-05-24; Filing Date: 2023-05-24
Title: Rape phenotype parameter extraction method and system based on three-dimensional point cloud

Country Status (1)

Country: CN; Status: granted (CN116704497B)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117635839A (en) * 2023-12-05 2024-03-01 四川省农业科学院科技保障中心 Crop information acquisition and three-dimensional image presentation method, device and system


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103471523A (en) * 2013-09-30 2013-12-25 北京林业大学 Method for detecting profile phenotype of arabidopsis
CN111583328A (en) * 2020-05-06 2020-08-25 南京农业大学 Three-dimensional estimation method for epipremnum aureum leaf external phenotype parameters based on geometric model
CN111667529A (en) * 2020-05-25 2020-09-15 东华大学 Plant point cloud blade segmentation and phenotype characteristic measurement method
CN111724433A (en) * 2020-06-24 2020-09-29 广西师范大学 Crop phenotype parameter extraction method and system based on multi-view vision
CN112200854A (en) * 2020-09-25 2021-01-08 华南农业大学 Leaf vegetable three-dimensional phenotype measurement method based on video image
CN112435239A (en) * 2020-11-25 2021-03-02 南京农业大学 Scindapsus aureus leaf shape parameter estimation method based on MRE-PointNet and self-encoder model
CN114792372A (en) * 2022-06-22 2022-07-26 广东工业大学 Three-dimensional point cloud semantic segmentation method and system based on multi-head two-stage attention

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Automatic Branch-Leaf Segmentation and Leaf Phenotypic Parameter Estimation of Pear Trees Based on Three-Dimensional Point Clouds; Haitao Li et al.; Sensors; Section 2 *
Apple leaf point cloud clustering and growth parameter extraction based on a dynamic K threshold; 刘刚 (Liu Gang), 张伟洁 (Zhang Weijie), 郭彩玲 (Guo Cailing); Transactions of the Chinese Society for Agricultural Machinery (04) *

Also Published As

Publication number Publication date
CN116704497A (en) 2023-09-05

Similar Documents

Publication Publication Date Title
Li et al. PlantNet: A dual-function point cloud segmentation network for multiple plant species
Kumar et al. Image based leaf segmentation and counting in rosette plants
Li et al. Automatic organ-level point cloud segmentation of maize shoots by integrating high-throughput data acquisition and deep learning
CN113112504B (en) Plant point cloud data segmentation method and system
Li et al. An overlapping-free leaf segmentation method for plant point clouds
CN112819830A (en) Individual tree crown segmentation method based on deep learning and airborne laser point cloud
CN108229347A (en) For the method and apparatus of the deep layer displacement of the plan gibbs structure sampling of people's identification
CN116704497B (en) Rape phenotype parameter extraction method and system based on three-dimensional point cloud
CN112766155A (en) Deep learning-based mariculture area extraction method
CN115880487A (en) Forest laser point cloud branch and leaf separation method based on deep learning method
CN113435254A (en) Sentinel second image-based farmland deep learning extraction method
Yu et al. LFPNet: Lightweight network on real point sets for fruit classification and segmentation
Paturkar et al. Plant trait segmentation for plant growth monitoring
Saeed et al. Cotton plant part 3D segmentation and architectural trait extraction using point voxel convolutional neural networks
Wang et al. Three-dimensional reconstruction of soybean canopy based on multivision technology for calculation of phenotypic traits
CN113096080B (en) Image analysis method and system
CN116721345A (en) Morphology index nondestructive measurement method for pinus massoniana seedlings
CN115205691B (en) Rice planting area identification method and device, storage medium and equipment
Sodhi et al. Robust plant phenotyping via model-based optimization
CN113096079B (en) Image analysis system and construction method thereof
CN113011506A (en) Texture image classification method based on depth re-fractal spectrum network
CN108073934A Near-duplicate image detection method and device
Nazeri Evaluation of multi-platform LiDAR-based leaf area index estimates over row crops
Saeed et al. 3D Annotation and deep learning for cotton plant part segmentation and architectural trait extraction
Vatresia et al. Automatic Fish Identification Using Single Shot Detector

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant