CN115035184B - Honey pomelo volume estimation method based on lateral multi-view reconstruction

Honey pomelo volume estimation method based on lateral multi-view reconstruction

Info

Publication number
CN115035184B
CN115035184B (application number CN202210662277.5A)
Authority
CN
China
Prior art keywords
honey pomelo
point cloud
camera
volume
honey
Prior art date
Legal status
Active
Application number
CN202210662277.5A
Other languages
Chinese (zh)
Other versions
CN115035184A (en)
Inventor
饶秀勤
林洋洋
朱逸航
张小敏
黄心瑶
徐涛
应义斌
徐惠荣
Current Assignee
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date
Filing date
Publication date
Application filed by Zhejiang University ZJU
Priority to CN202210662277.5A
Publication of CN115035184A
Application granted
Publication of CN115035184B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G06T7/10 Segmentation; Edge detection
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G06T2207/20 Special algorithmic details
    • G06T2207/20092 Interactive image processing based on input by user
    • G06T2207/20104 Interactive definition of region of interest [ROI]
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30181 Earth observation
    • G06T2207/30188 Vegetation; Agriculture

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a honey pomelo volume estimation method based on lateral multi-view reconstruction. The method comprises the following steps: first, a multi-view image acquisition system is constructed; then, a dense point cloud of the honey pomelo is reconstructed with this system based on the principles of structure from motion and multi-view stereo vision; next, a closed convex hull is formed through dense point cloud segmentation, segmentation point cloud filtering, filtering point cloud downsampling and downsampling point cloud triangularization, and the volume of the closed convex hull is calculated and taken as the volume estimate of the honey pomelo. The method effectively solves the problem that fruit volume is difficult to calculate and is suitable for estimating the volume of fruits with various shapes, such as drop-shaped, spherical, ellipsoidal and pear-shaped fruits. Meanwhile, the image acquisition device with a uniform light environment constructed by the invention can well suppress the bright-spot areas formed by strong reflection of light on the fruit surface. Furthermore, the invention can realize nondestructive and accurate measurement of fruit volume and can provide an important reference basis for fruit quality grading.

Description

Honey pomelo volume estimation method based on lateral multi-view reconstruction
Technical Field
The invention relates to a fruit volume estimation method, in particular to a honey pomelo volume estimation method based on lateral multi-view reconstruction.
Background
Fruit has developed into the third largest category of planted products in China, after grains and vegetables. Meanwhile, China has the largest pomelo planting area in the world and ranks first in pomelo yield: its pomelo output in 2019 reached 5.08 million tons, and its planting area and yield accounted for 61.41% and 51.45% of the world totals, respectively. With a cultivation history of 500 years, honey pomelo has long been one of the excellent representative pomelo varieties. At present, the quality grading of honey pomelo is mainly based on weight, yet research such as Huang et al. (2015) (Huang R., Zhu Donghuang, Lin Jinxing, Shen Hong, Li Jian. Research on honey pomelo fruit grading standards [J]. Southern China Fruit Trees, 2015, 44(03): 28-31+34) shows that using volume as a grading index is more scientific than using weight: fruit volume has the strongest correlation with internal quality, with the largest correlation found for the juice-cell granulation rate. Therefore, estimating the volume of honey pomelo and then performing commercial grading on that basis is of great significance.
The traditional fruit volume estimation method is mainly manual measurement, for example by the water displacement method, but manual measurement suffers from high labor intensity, low efficiency and long measurement time. With the development of image processing technology, estimating external geometric characteristics of fruit, represented by fruit volume and mass, has shown great advantages. Koc (2007) (Koc A B. Determination of watermelon volume using ellipsoid approximation and image processing [J]. Postharvest Biology and Technology, 2007, 45(3): 366-371.) took watermelon as the study object, fitted watermelon volume by ellipsoid approximation and by image-area estimation respectively, and found that the image-area fitting method was more accurate. Omid et al. (2010) (Omid M, Khojastehnazhand M, Tabatabaeefar A. Estimating volume and mass of citrus fruits by image processing technique [J]. Journal of Food Engineering, 2010, 100(2): 315-321.) constructed a two-camera image acquisition system, used it to acquire surface images of citrus, then estimated citrus volume and mass with an ellipsoid approximation, and found that volume and mass were highly correlated. Nyalala et al. (2019) (Nyalala I, Okinda C, Nyalala L, et al. Tomato volume and mass estimation using computer vision and machine learning algorithms: Cherry tomato model [J]. Journal of Food Engineering, 2019, 263: 288-298.) took cherry tomatoes as the study object and estimated their volume and mass by combining machine vision and machine learning; the estimation accuracy of mass and volume with the RBF-SVM model reached 0.9706.
In recent years, with the development of three-dimensional reconstruction technology and consumer-grade image acquisition equipment, researchers have measured fruit size or volume by three-dimensional reconstruction of a full-surface point cloud of the fruit. Wang and Chen (2020) (Yawei W, Yifei C. Fruit Morphological Measurement Based on Three-Dimensional Reconstruction. Agronomy, 2020, 10, 455) placed pears on a rotating table, obtained three-dimensional point clouds of the pears from 9 pictures, and calculated their three-dimensional dimensions. Ni et al. (2021) (Ni X, Li C, Jiang H, et al. Three-dimensional photogrammetry with deep learning instance segmentation to extract berry fruit harvestability traits [J]. ISPRS Journal of Photogrammetry and Remote Sensing, 2021, 171: 297-309.) reconstructed blueberry fruit of 4 varieties based on SFM and MVS, calculated blueberry compactness using a minimum bounding box, and estimated the number, volume and maturity of blueberry fruit; the results showed that the detection accuracy of blueberry fruit number reached 97.3%.
However, the study objects above are all ellipsoidal or spherical fruits, and these methods are difficult to apply to volume estimation of honey pomelo, whose fruit shape varies among drop-shaped, spherical, ellipsoidal, pear-shaped and other forms. In addition, because honey pomelo fruit is large and its surface is covered with oil cells that reflect light easily and tend to form bright-spot areas, constructing a reconstruction device with uniform illumination is also a major difficulty.
Disclosure of Invention
In order to solve the problems and needs in the background art, the invention provides a honey pomelo volume estimation method based on lateral multi-view reconstruction.
The technical scheme of the invention is as follows:
1) Building a multi-view image acquisition system: the multi-view image acquisition system comprises an illumination box, a rotary platform, cameras and a host. The rotary platform is installed at the center of the interior of the illumination box and is used for placing the honey pomelo. The cameras consist of a top-view camera C1, a head-up camera C2 and a bottom-view camera C3, which are installed from top to bottom in sequence on the circumferential side surface of the illumination box; the optical axes of the top-view camera C1, the head-up camera C2 and the bottom-view camera C3 all point to the honey pomelo, and all three cameras are connected with the host;
2) Multi-view honey pomelo dense point cloud reconstruction: the rotary platform is started, and the top-view camera C1, the head-up camera C2 and the bottom-view camera C3 synchronously photograph the honey pomelo, each camera obtaining one group of original honey pomelo images; according to the 3 groups of original honey pomelo images, the three-dimensional point coordinates of the current honey pomelo are recovered using a structure-from-motion and multi-view stereo vision method, obtaining the honey pomelo dense point cloud;
3) Dense point cloud segmentation: background noise in the honey pomelo dense point cloud is removed with the ConditionAnd algorithm, obtaining the honey pomelo segmentation point cloud;
4) Segmentation point cloud filtering: for any point k in the honey pomelo segmentation point cloud, several neighboring points adjacent to the current point k are taken and the average distance between the current point k and these neighboring points is calculated; a Gaussian distribution is then fitted to the average distances and outliers are selected and removed accordingly; every point in the honey pomelo segmentation point cloud is traversed and all outliers are removed, yielding the current honey pomelo filtering point cloud;
5) Filtering point cloud downsampling: randomly downsampling the obtained honey pomelo filtering point cloud to obtain a honey pomelo downsampling point cloud;
6) Down-sampling point cloud triangularization: triangularizing the honey pomelo downsampling point cloud to form a closed convex hull;
7) Volume estimation: the volume of the internal cavity of the closed convex hull is calculated and taken as the volume estimate of the current honey pomelo.
In step 2), the rotary platform rotates through at least one full turn.
In step 2), feature points of all original honey pomelo images are first extracted with the scale-invariant feature transform algorithm, and feature points are matched between the original images to obtain the image matching pairs; then, according to the matched feature points of each image matching pair, the intrinsic and extrinsic parameter matrices of the 3 cameras are recovered using the epipolar geometry principle; finally, three-dimensional point cloud reconstruction is performed on all original honey pomelo images using binocular stereo vision and multi-view stereo vision methods according to the intrinsic and extrinsic parameter matrices of the 3 cameras, obtaining the honey pomelo dense point cloud.
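As a concrete illustration of the feature-matching and epipolar-geometry stage, the following Python sketch uses OpenCV to match SIFT features between two views and recover the relative camera pose. It is an illustration rather than the patent's implementation; the image paths and the intrinsic matrix K are assumed to be known from camera calibration.

```python
import cv2
import numpy as np

def relative_pose(img_path_1, img_path_2, K):
    """Two-view sketch: SIFT matching, then essential-matrix pose recovery."""
    img1 = cv2.imread(img_path_1, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(img_path_2, cv2.IMREAD_GRAYSCALE)

    # Scale-invariant feature transform: keypoints and descriptors in both views
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Feature matching with Lowe's ratio test to form the image matching pair
    matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

    # Epipolar geometry: essential matrix, then relative rotation and translation
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, inliers = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t, pts1, pts2
```

In a full SFM/MVS pipeline these pairwise poses are refined jointly by bundle adjustment before dense reconstruction.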
In step 4), points lying beyond 0.5 times the average distance are taken as outliers.
In step 5), the obtained honey pomelo filtering point cloud is randomly downsampled using a point cloud random downsampling method.
The illumination box in step 1) comprises a transparent acrylic cylinder, a top LED light source, a bottom LED light source and a light guide film;
The rotary platform is installed inside the transparent acrylic cylinder, and camera grooves are formed in the circumferential side surface of the cylinder, in which the top-view camera C1, the head-up camera C2 and the bottom-view camera C3 are installed from top to bottom in sequence; light source mounting grooves are formed in the upper and lower end faces of the cylinder, the top LED light source and the bottom LED light source are embedded in them respectively, and the outer circumferential side surface of the cylinder is covered with the light guide film.
The beneficial effects of the invention are as follows:
The method effectively solves the problem that fruit volume is difficult to calculate and is suitable for estimating the volume of honey pomelo with various fruit shapes, such as drop-shaped, spherical, ellipsoidal and pear-shaped fruit. Meanwhile, the image acquisition device with a uniform light environment constructed by the invention can well suppress the bright-spot areas formed by strong reflection of light on the fruit surface. Furthermore, the invention can realize nondestructive and accurate measurement of fruit volume and can provide an important reference basis for fruit quality grading.
Drawings
Fig. 1 is a flow chart of the method of the present invention.
Fig. 2 is a schematic diagram of the image acquisition system according to the present invention.
Fig. 3 is a schematic diagram of the binocular vision system of the present invention.
Fig. 4 is a dense point cloud of honey pomelo of the present invention.
Fig. 5 is the honey pomelo downsampling point cloud of the present invention.
Fig. 6 is a convex hull diagram of the honey pomelo of the present invention.
Fig. 7 is a plot of the fit between the volume estimates of the present invention and the actual honey pomelo volume measurements.
Fig. 8 is a schematic view of the illumination scheme of the present invention.
Fig. 9 is a V-component diagram of the present invention.
Fig. 10 is a ROI area luminance distribution diagram of the present invention.
In the figures: 1. illumination box; 2. rotary platform; 3. honey pomelo; 4. camera; 5. host computer; 6. display.
Detailed Description
The invention is further described below with reference to the drawings and examples.
An embodiment of the invention, taking honey pomelo as the object, is described as follows:
As shown in fig. 1, the present invention includes the steps of:
1) Building a multi-view image acquisition system: as shown in fig. 2, the multi-view image acquisition system comprises an illumination box 1, a rotary platform 2, industrial cameras 4, a host 5 and a display 6. The rotary platform 2 is installed at the center of the interior of the illumination box 1 and is used for placing the honey pomelo 3. The cameras 4 consist of a top-view camera C1, a head-up camera C2 and a bottom-view camera C3, which are installed from top to bottom in sequence along the circumferential side surface of the illumination box 1. The optical axes of the top-view camera C1, the head-up camera C2 and the bottom-view camera C3 are all directed toward the honey pomelo 3, i.e., an included angle exists between their optical axes. The circumferential positions of the three cameras are not fixed; for example, the top-view camera C1, the head-up camera C2 and the bottom-view camera C3 may lie on the same vertical axis or may be arranged at equal intervals in the circumferential direction. The top-view camera C1, the head-up camera C2 and the bottom-view camera C3 are all connected with the host 5, and the host 5 is connected with the display 6;
The illumination box 1 in step 1) comprises a transparent acrylic cylinder, a top LED light source, a bottom LED light source and a light guide film;
The rotary platform 2 is installed inside the transparent acrylic cylinder, and camera grooves are formed in the circumferential side surface of the cylinder, in which the top-view camera C1, the head-up camera C2 and the bottom-view camera C3 are installed from top to bottom in sequence; light source mounting grooves are formed in the upper and lower end faces of the cylinder, the top LED light source and the bottom LED light source are embedded in them respectively, and the outer circumferential side surface of the cylinder is covered with the light guide film.
2) Multi-view honey pomelo dense point cloud reconstruction: the rotary platform 2 is started, and the top-view camera C1, the head-up camera C2 and the bottom-view camera C3 synchronously photograph the honey pomelo 3 while the rotary platform 2 makes at least one full turn, each camera obtaining one group of original images of the current honey pomelo 3; according to the 3 groups of original honey pomelo images, the three-dimensional point coordinates of the current honey pomelo are recovered using structure from motion (Structure From Motion, SFM) and multi-view stereo (Multi View Stereo, MVS) methods, obtaining the honey pomelo dense point cloud;
In step 2), feature points of all original honey pomelo images are first extracted with the scale-invariant feature transform (Scale-Invariant Feature Transform, SIFT) algorithm, and feature points are matched between the original images to obtain the image matching pairs; then, according to the matched feature points of each image matching pair, the intrinsic and extrinsic parameter matrices of the 3 cameras are recovered using the epipolar geometry principle, and the three-dimensional coordinates of the matched feature points are further calculated, giving a sparse three-dimensional point cloud of the honey pomelo. Finally, three-dimensional point cloud reconstruction is performed on all original honey pomelo images using binocular stereo vision and multi-view stereo vision methods according to the intrinsic and extrinsic parameter matrices of the 3 cameras, obtaining the honey pomelo dense point cloud (figure 4).
In a specific implementation, the rotation speed of the rotary platform 2 is set to 1°/s; the top-view camera C1, the head-up camera C2 and the bottom-view camera C3 are all connected to a trigger board through trigger control lines, and the trigger board is connected to an NPN-type laser sensor. Working with the cameras' host software, the cameras are triggered to acquire an image every 1 s, so that 3 groups of original honey pomelo images (1,080 images in total) are collected.
As shown in fig. 3, any two cameras corresponding to an image matching pair in step 2) form a binocular vision system, where $O_{CL}$ is the left-eye camera and $O_{CR}$ is the right-eye camera; the focal length of the left-eye camera is $f_l$ and that of the right-eye camera is $f_r$. For an arbitrary spatial point $P(X_W, Y_W, Z_W)$ in the world coordinate system, its imaging point in the image coordinate system of the left-eye camera is $p_l(x_l, y_l)$, its imaging point in the image coordinate system of the right-eye camera is $p_r(x_r, y_r)$, and its coordinates in the right-eye camera coordinate system are $(X_{WR}, Y_{WR}, Z_{WR})$. In the specific implementation, the left-eye camera coordinate system is used as the world coordinate system, and the other two camera coordinate systems are converted into the current world coordinate system according to the intrinsic and extrinsic parameter matrices of the cameras.
Specifically, the pinhole imaging model gives, for the left-eye and right-eye cameras respectively:

$$x_l = f_l \frac{X_W}{Z_W},\qquad y_l = f_l \frac{Y_W}{Z_W} \tag{1}$$

$$x_r = f_r \frac{X_{WR}}{Z_{WR}},\qquad y_r = f_r \frac{Y_{WR}}{Z_{WR}} \tag{2}$$

Let the rotation matrix R and the translation vector t obtained in step 2), which relate the right-eye camera coordinate system and the left-eye camera (world) coordinate system, be respectively:

$$R = \begin{bmatrix} r_1 & r_2 & r_3 \\ r_4 & r_5 & r_6 \\ r_7 & r_8 & r_9 \end{bmatrix} \tag{3}$$

$$t^{T} = \begin{bmatrix} t_1 & t_2 & t_3 \end{bmatrix} \tag{4}$$

The transformation between the right-eye camera coordinate system and the world coordinate system can then be expressed as:

$$\begin{bmatrix} X_{WR} \\ Y_{WR} \\ Z_{WR} \end{bmatrix} = R \begin{bmatrix} X_W \\ Y_W \\ Z_W \end{bmatrix} + t \tag{5}$$

Substituting (5) into (2) and combining with (1) gives the simultaneous system:

$$\begin{cases} x_l = f_l X_W / Z_W \\ y_l = f_l Y_W / Z_W \\ x_r = f_r \dfrac{r_1 X_W + r_2 Y_W + r_3 Z_W + t_1}{r_7 X_W + r_8 Y_W + r_9 Z_W + t_3} \\ y_r = f_r \dfrac{r_4 X_W + r_5 Y_W + r_6 Z_W + t_2}{r_7 X_W + r_8 Y_W + r_9 Z_W + t_3} \end{cases} \tag{6}$$

Solving the system yields the three-dimensional coordinates of the spatial point:

$$X_W = \frac{x_l Z_W}{f_l},\qquad Y_W = \frac{y_l Z_W}{f_l},\qquad Z_W = \frac{f_l\,(f_r t_1 - x_r t_3)}{x_r\,(r_7 x_l + r_8 y_l + r_9 f_l) - f_r\,(r_1 x_l + r_2 y_l + r_3 f_l)} \tag{7}$$
Traversing each group of image matching pairs gives the three-dimensional coordinates of all spatial points, and these points together form the honey pomelo dense point cloud (figure 4).
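The closed-form solution (7) can be written directly in a few lines of numpy. The sketch below is illustrative rather than the patent's code: it assumes that R and t map world (left-camera) coordinates to right-camera coordinates and that the image coordinates are already expressed on the metric image plane; in practice the SFM/MVS pipeline performs this triangulation together with bundle adjustment.

```python
import numpy as np

def triangulate_point(xl, yl, xr, yr, fl, fr, R, t):
    """Recover (X_W, Y_W, Z_W) for one matched pair using equation (7)."""
    r1, r2, r3 = R[0]                  # first row of the rotation matrix
    r7, r8, r9 = R[2]                  # third row of the rotation matrix
    t1, t3 = t[0], t[2]                # first and third translation components

    num = fl * (fr * t1 - xr * t3)
    den = xr * (r7 * xl + r8 * yl + r9 * fl) - fr * (r1 * xl + r2 * yl + r3 * fl)
    Zw = num / den                     # depth in the world (left-camera) frame
    Xw = xl * Zw / fl
    Yw = yl * Zw / fl
    return np.array([Xw, Yw, Zw])
```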
3) Dense point cloud segmentation: for the honey pomelo dense point cloud, the value range of the X axis of the point cloud world coordinate system is set to $(x_{min}, x_{max})$, the value range of the Y axis to $(y_{min}, y_{max})$ and the value range of the Z axis to $(z_{min}, z_{max})$; based on these ranges for the 3 coordinate axes, background noise in the honey pomelo dense point cloud is removed with the ConditionAnd algorithm, obtaining the honey pomelo segmentation point cloud;
4) Segmentation point cloud filtering: for any point k in the honey pomelo segmentation point cloud, several neighboring points adjacent to the current point k are taken (12 neighboring points in the specific implementation), and the average distance between the current point k and these neighboring points is calculated; using a Gaussian distribution over the average distances, points lying beyond the 0.5-times threshold are selected as outliers and removed; every point in the honey pomelo segmentation point cloud is traversed and all outliers are removed, giving the current honey pomelo filtering point cloud;
5) Filtering point cloud downsampling: the obtained honey pomelo filtering point cloud is randomly downsampled with a point cloud random downsampling method to reduce the number of points; the number of randomly downsampled points is set to 4096, and the honey pomelo downsampling point cloud is obtained after downsampling (figure 5);
6) Down-sampling point cloud triangularization: triangularizing the honey pomelo downsampling point cloud to form a closed convex hull, as shown in fig. 6;
7) Volume estimation: the closed convex hull encloses a closed cavity; the volume of this internal cavity is calculated and taken as the volume estimate of the current honey pomelo 3.
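Steps 3) to 7) map directly onto standard point-cloud tooling. The sketch below is an illustration rather than the patent's code: it uses Open3D for box cropping, statistical filtering and random downsampling and scipy's convex hull for the volume; the file path and axis bounds are placeholders, and the "0.5 times" threshold is read here as 0.5 standard deviations of the Gaussian fitted to the average neighbor distances (an assumption).

```python
import numpy as np
import open3d as o3d
from scipy.spatial import ConvexHull

# Hypothetical path to the reconstructed dense cloud
pcd = o3d.io.read_point_cloud("pomelo_dense.ply")

# 3) Dense point cloud segmentation: keep points inside the axis-aligned box
#    defined by (x_min, y_min, z_min) and (x_max, y_max, z_max) (assumed values)
box = o3d.geometry.AxisAlignedBoundingBox(
    np.array([-0.10, -0.10, 0.00]),
    np.array([0.10, 0.10, 0.20]),
)
seg = pcd.crop(box)

# 4) Segmentation point cloud filtering: statistical outlier removal with
#    12 neighbors and a 0.5 standard-deviation threshold, as in this embodiment
filt, _ = seg.remove_statistical_outlier(nb_neighbors=12, std_ratio=0.5)

# 5) Filtering point cloud downsampling: randomly keep 4096 points
n = min(4096, len(filt.points))
idx = np.random.choice(len(filt.points), size=n, replace=False)
down = filt.select_by_index(idx)

# 6)-7) Triangularize the downsampled cloud into its closed convex hull and
#        take the hull's enclosed volume as the volume estimate
hull = ConvexHull(np.asarray(down.points))
print(f"Estimated honey pomelo volume: {hull.volume:.6f} (world units cubed)")
```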
As shown in fig. 8, in this embodiment the top LED light source and the bottom LED light source are both flexible strip light sources of the 2835 LED type. When the 2835 LED light sources are turned on, the source light (black arrows in fig. 8) propagates along the transparent acrylic cylinder 1.8 and is refracted at the laser-perforated points on the light guide film 1.5, and the conducted light (gray arrows) is scattered into the transparent acrylic cylinder 1.8, finally forming an internal illumination environment with excellent uniformity.
The original honey pomelo image is converted from RGB space to HSV space and the V-component image is extracted, as shown in fig. 9. A rectangular region of the V-component image is then selected as the ROI, the brightness values of all pixels in the ROI are counted pixel by pixel to obtain the ROI brightness distribution (fig. 10), and the brightness MEAN and standard deviation STD are calculated. In this example, MEAN = 196.1 and STD = 5.8.
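This brightness-uniformity check can be reproduced with a few OpenCV and numpy calls; in the minimal sketch below the image path and the ROI rectangle are placeholders, not values from the patent.

```python
import cv2

img = cv2.imread("pomelo_view.png")                  # hypothetical image path
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)           # OpenCV loads images as BGR
v = hsv[:, :, 2]                                     # V (brightness) component

x, y, w, h = 100, 100, 200, 200                      # placeholder rectangular ROI
roi = v[y:y + h, x:x + w]

# Pixel-by-pixel brightness histogram of the ROI, plus its MEAN and STD
hist = cv2.calcHist([roi], [0], None, [256], [0, 256])
print(f"MEAN = {roi.mean():.1f}, STD = {roi.std():.1f}")
```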
In the experiment, 180 honey pomelos 3 were randomly selected and steps 2) to 7) were repeated for each; the average relative error between the volume estimated from the point cloud downsampled to 4096 points and the actually measured volume of the honey pomelo 3 was 3.91%, with an R² value of 0.942. Fig. 7 shows the fit between the estimated and actually measured volumes of the honey pomelo 3.
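For reference, the two reported figures of merit, average relative error and R², can be computed from paired volume estimates and measured volumes as in the following generic sketch; the numbers in the usage line are placeholders, not the experimental data.

```python
import numpy as np

def evaluate(estimates, measurements):
    """Return average relative error (%) and coefficient of determination R^2."""
    est = np.asarray(estimates, dtype=float)
    mea = np.asarray(measurements, dtype=float)
    rel_err = np.mean(np.abs(est - mea) / mea) * 100.0
    ss_res = np.sum((mea - est) ** 2)
    ss_tot = np.sum((mea - mea.mean()) ** 2)
    return rel_err, 1.0 - ss_res / ss_tot

# Placeholder usage with made-up volumes (cm^3), purely illustrative
err, r2 = evaluate([980.0, 1210.0, 1055.0], [1002.0, 1180.0, 1100.0])
print(f"average relative error = {err:.2f}%, R^2 = {r2:.3f}")
```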

Claims (5)

1. The honey pomelo volume estimation method based on lateral multi-view reconstruction is characterized by comprising the following steps of:
1) Building a multi-view image acquisition system: the multi-view image acquisition system comprises an illumination box (1), a rotary platform (2), a camera (4) and a host (5), wherein the rotary platform (2) is arranged at the center of the interior of the illumination box (1), the rotary platform (2) is used for placing the honey pomelo (3), the camera (4) consists of a top-view camera and a head-up camera, the top-view camera and the head-up camera are sequentially arranged on the circumferential side surface of the illumination box (1) from top to bottom, the optical axes of the top-view camera and the head-up camera point to the honey pomelo (3), and the top-view camera and the head-up camera are connected with the host (5);
2) Multi-view honey pomelo dense point cloud reconstruction: the rotary platform (2) is started, the top-view camera and the head-up camera synchronously photograph the honey pomelo (3), and each camera obtains one group of original images of the honey pomelo (3); according to the 3 groups of original honey pomelo images, the three-dimensional point coordinates of the current honey pomelo are recovered using a structure-from-motion-based multi-view stereo vision method to obtain the honey pomelo dense point cloud;
3) Dense point cloud segmentation: removing background noise in the honey pomelo dense point cloud by adopting ConditionAnd algorithm according to the honey pomelo dense point cloud to obtain honey pomelo segmentation point cloud;
4) Segmentation point cloud filtering: for any point in the honey pomelo segmentation point cloud, a plurality of adjacent points adjacent to the current point are taken, the average distance between the current point and the current plurality of adjacent points is calculated, then outliers are selected for removal according to the average distance by using Gaussian distribution, each point in the honey pomelo segmentation point cloud is traversed, all outliers of each point are removed, and the honey pomelo filtering point cloud is obtained currently;
5) Filtering point cloud downsampling: randomly downsampling the obtained honey pomelo filtering point cloud to obtain a honey pomelo downsampling point cloud;
6) Down-sampling point cloud triangularization: triangularizing the honey pomelo downsampling point cloud to form a closed convex hull;
7) Volume estimation: calculating the volume of the internal cavity of the closed convex hull, and taking the volume as the volume estimate of the current honey pomelo (3);
The illumination box (1) in the step 1) comprises a transparent acrylic cylinder, a top LED light source, a bottom LED light source and a light guide film;
The rotary platform (2) is arranged in the transparent acrylic cylinder, a camera groove is formed in the circumferential side surface of the transparent acrylic cylinder, the top-view camera and the head-up camera are sequentially arranged on the circumferential side surface of the transparent acrylic cylinder from top to bottom, light source mounting grooves are formed in the upper end surface and the lower end surface of the transparent acrylic cylinder, the top LED light source and the bottom LED light source are respectively embedded in the light source mounting grooves of the upper and lower end surfaces, and the light guide film covers the outer circumferential side surface of the transparent acrylic cylinder.
2. The method for estimating the volume of honey pomelo based on the lateral multi-view reconstruction according to claim 1, wherein in the step 2), the rotating platform (2) rotates at least one turn.
3. The method for estimating the volume of the honey pomelo based on the lateral multi-view reconstruction according to claim 1, wherein in the step 2), firstly, feature points of all the original images of the honey pomelo are extracted by utilizing a scale-invariant feature transformation algorithm, and feature points among the original images of the honey pomelo are matched to obtain matched pairs of the images of each group; then, according to the matched characteristic points of the matched pairs of the images of each group, restoring an internal reference matrix and an external reference matrix corresponding to the 3 cameras by utilizing the epipolar geometry principle; and finally, carrying out three-dimensional point cloud reconstruction on all original honey pomelo images by utilizing a binocular stereoscopic vision and multi-view stereoscopic vision principle method according to an internal reference matrix and an external reference matrix corresponding to the 3 cameras to obtain dense honey pomelo point clouds.
4. The method for estimating the volume of honey pomelo based on the lateral multi-view reconstruction according to claim 1, wherein in the step 4), points lying beyond 0.5 times the average distance are taken as outliers.
5. The method for estimating the volume of the honey pomelo based on the lateral multi-view reconstruction of claim 1, wherein the step 5) uses a random down-sampling method of point clouds to randomly down-sample the obtained honey pomelo filtering point clouds.
CN202210662277.5A 2022-06-13 2022-06-13 Honey pomelo volume estimation method based on lateral multi-view reconstruction Active CN115035184B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210662277.5A CN115035184B (en) 2022-06-13 2022-06-13 Honey pomelo volume estimation method based on lateral multi-view reconstruction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210662277.5A CN115035184B (en) 2022-06-13 2022-06-13 Honey pomelo volume estimation method based on lateral multi-view reconstruction

Publications (2)

Publication Number Publication Date
CN115035184A CN115035184A (en) 2022-09-09
CN115035184B true CN115035184B (en) 2024-05-28

Family

ID=83125147

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210662277.5A Active CN115035184B (en) 2022-06-13 2022-06-13 Honey pomelo volume estimation method based on lateral multi-view reconstruction

Country Status (1)

Country Link
CN (1) CN115035184B (en)


Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10520482B2 (en) * 2012-06-01 2019-12-31 Agerpoint, Inc. Systems and methods for monitoring agricultural products
US11327178B2 (en) * 2019-09-06 2022-05-10 Volvo Car Corporation Piece-wise network structure for long range environment perception
WO2021214714A1 (en) * 2020-04-22 2021-10-28 Oxford University Innovation Limited Partial volume estimation from surface reconstructions

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104535582A (en) * 2014-12-08 2015-04-22 昆明理工大学 Horizontal swing magnetic tile detection device
WO2022086739A2 (en) * 2020-10-23 2022-04-28 Argo AI, LLC Systems and methods for camera-lidar fused object detection
CN112686877A (en) * 2021-01-05 2021-04-20 同济大学 Binocular camera-based three-dimensional house damage model construction and measurement method and system
CN112884880A (en) * 2021-01-20 2021-06-01 浙江大学 Line laser-based honey pomelo three-dimensional modeling device and method
CN113870179A (en) * 2021-08-20 2021-12-31 浙江大学 Honey pomelo longitudinal and transverse diameter measuring method based on multi-view profile map reconstruction

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
A method for organs classification and fruit counting on pomegranate trees based on multi-features fusion and support vector machine by 3D point cloud; Chunlong Zhang et al.; Scientia Horticulturae; 2021-02-01; full text *
Research on estimation of external geometric characteristics of honey pomelo based on three-dimensional reconstruction; 林洋洋; Wanfang Data; 2023-02-23; full text *
Three-dimensional scene reconstruction based on monocular multi-view images; 吴铮铮; 寇展; Optics & Optoelectronic Technology; 2020-10-10 (05); full text *
Three-dimensional modeling and volume calculation of space objects based on lidar; 胡燕威; ***; 范媛媛; 卢云鹏; 白崇岳; 张荠匀; Chinese Journal of Lasers; 2020-01-10 (05); full text *
温维亮; 王勇健; 许童羽; 杨涛; 郭新宇; 朱宏宇; 董成玉. Geometric modeling of maize ears based on three-dimensional point clouds; Journal of Agricultural Science and Technology (China); (05); full text *

Also Published As

Publication number Publication date
CN115035184A (en) 2022-09-09

Similar Documents

Publication Publication Date Title
CN110264416B (en) Sparse point cloud segmentation method and device
Wang et al. Localisation of litchi in an unstructured environment using binocular stereo vision
CN106356757B (en) A kind of power circuit unmanned plane method for inspecting based on human-eye visual characteristic
CN105716539B (en) A kind of three-dimentioned shape measurement method of quick high accuracy
CN113112504A (en) Plant point cloud data segmentation method and system
CN110910437B (en) Depth prediction method for complex indoor scene
CN102833486A (en) Method and device for real-time adjusting face display scale in video image
Lou et al. Accurate multi-view stereo 3D reconstruction for cost-effective plant phenotyping
CN112200854B (en) Leaf vegetable three-dimensional phenotype measuring method based on video image
CN104794737A (en) Depth-information-aided particle filter tracking method
CN114298151A (en) 3D target detection method based on point cloud data and image data fusion
CN111914615A (en) Fire-fighting area passability analysis system based on stereoscopic vision
CN116129037B (en) Visual touch sensor, three-dimensional reconstruction method, system, equipment and storage medium thereof
CN112270694B (en) Method for detecting urban environment dynamic target based on laser radar scanning pattern
CN110766782A (en) Large-scale construction scene real-time reconstruction method based on multi-unmanned aerial vehicle visual cooperation
CN113686314A (en) Monocular water surface target segmentation and monocular distance measurement method of shipborne camera
CN116883480A (en) Corn plant height detection method based on binocular image and ground-based radar fusion point cloud
CN113379824B (en) Quasi-circular fruit longitudinal and transverse diameter measuring method based on double-view-point cloud registration
CN110889868A (en) Monocular image depth estimation method combining gradient and texture features
CN115035184B (en) Honey pomelo volume estimation method based on lateral multi-view reconstruction
CN110310371B (en) Method for constructing three-dimensional contour of object based on vehicle-mounted monocular focusing sequence image
CN115761137B (en) High-precision curved surface reconstruction method and device based on mutual fusion of normal vector and point cloud data
Neverova et al. 2 1/2 D scene reconstruction of indoor scenes from single RGB-D images
CN113129348B (en) Monocular vision-based three-dimensional reconstruction method for vehicle target in road scene
Sun et al. A vision system based on TOF 3D imaging technology applied to robotic citrus harvesting

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant