CN117671228A - Near-end plant canopy spectrum compensation method based on hyperspectral three-dimensional point cloud - Google Patents


Info

Publication number
CN117671228A
CN117671228A (application CN202311488456.2A)
Authority
CN
China
Prior art keywords
hyperspectral
compensation
image
canopy
point cloud
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311488456.2A
Other languages
Chinese (zh)
Inventor
朱逢乐
周壮飞
彭继宇
蒋建东
乔欣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN202311488456.2A
Publication of CN117671228A
Legal status: Pending

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N 21/17 Systems in which incident light is modified in accordance with the properties of the material investigated
    • G01N 21/25 Colour; Spectral properties, i.e. comparison of effect of material on the light at two or more different wavelengths or wavelength bands
    • G01N 21/31 Investigating relative effect of material at wavelengths characteristic of specific elements or molecules, e.g. atomic absorption spectrometry
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/24 Aligning, centring, orientation detection or correction of the image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V 10/751 Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/762 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V 10/806 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 40/00 Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
    • Y02A 40/10 Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in agriculture


Abstract

The invention discloses a near-end plant canopy spectrum compensation method based on a hyperspectral three-dimensional point cloud, which comprises the following steps. S1: build an experimental platform and complete the debugging of the equipment. S2: collect hyperspectral images and depth images of the experimental sample and of a checkerboard, and preprocess the collected image data. S3: based on the checkerboard images captured in S2, register and fuse the preprocessed hyperspectral and depth images of the sample to generate a hyperspectral three-dimensional point cloud of the plant canopy. S4: compensate the plant canopy spectra using the depth and angle information of each point in the hyperspectral three-dimensional point cloud of S3, reducing the influence of two important factors, working distance and leaf angle. S5: perform k-means cluster analysis and mean spectral curve comparison on the original and the S4-compensated canopy hyperspectral images, and evaluate the effectiveness of the spectral compensation by comparing the results before and after compensation.

Description

Near-end plant canopy spectrum compensation method based on hyperspectral three-dimensional point cloud
Technical Field
The invention relates to the field of plant phenotype research, in particular to a near-end plant canopy spectrum compensation method based on hyperspectral three-dimensional point cloud.
Background
In recent years, high-throughput plant phenotyping has developed rapidly; it is of great significance for plant breeding and for revealing plant growth and stress responses under various environments. Among the many phenotyping approaches, near-end (proximal) hyperspectral imaging is a very promising technique. It acquires spectral and spatial information simultaneously, characterises the physiological and biochemical traits of plants rapidly and without contact, and is widely applied to the study of plants in environments such as greenhouses and fields. In plant phenotyping experiments, a standard white board with a reflectance close to 100% is generally used as the white reference to calculate the reflectance of the sample. At present, most research on near-end hyperspectral imaging is limited to flattened, excised leaves, which greatly reduces its value for in-situ phenotyping of the plant canopy. Indeed, hyperspectral imaging of the near-end plant canopy is considerably more challenging. Owing to the interaction between the complex three-dimensional structure of the canopy and the light source, the canopy reflectance usually cannot be calculated correctly with conventional two-dimensional white-board correction. In particular, it is affected by factors such as working distance, leaf angle, shading and multiple scattering, which interfere with and may even mask the true spectral information associated with plant biochemical traits. Spectral compensation of the near-end plant canopy is therefore critical for improving the accuracy of the acquired spectral information and revealing the true optical properties of the canopy.
For spectral compensation of near-end canopy hyperspectral images, the following methods are currently in use. The most common are spectral preprocessing techniques derived from chemometrics, such as multiplicative scatter correction (MSC), the standard normal variate (SNV) transformation and variable sorting for normalization (VSN), all of which show an effective compensation effect. However, because the performance of different preprocessing techniques varies greatly across data sets, no single most effective technique applicable to all data sets can be identified. The second method is the PROCOSINE model, a physically based radiative transfer model that introduces parameters representing leaf angle and specular reflection; it is robust for estimating plant physiological and biochemical traits at the leaf scale, but its feasibility at the canopy scale is still unknown. Deep learning is a further compensation method, but it generally relies on training with large data sets, which are often difficult to obtain in the plant phenotyping field.
In recent years, researchers have found that three-dimensional (3D) data of the plant canopy are strongly complementary to two-dimensional hyperspectral images. The fused hyperspectral three-dimensional point cloud provides the spectrum, depth and local angle of each point, holds great potential for precise plant phenotyping, and makes spectral compensation of near-end canopy hyperspectral images feasible. So far, however, related studies have mainly remained at the stage of fusing three-dimensional data with hyperspectral images. Some researchers captured multispectral images from different angles and registered them to a common coordinate system to generate a multispectral three-dimensional point cloud, achieving accurate identification of pests; others captured multi-view depth images and multispectral images of the plant canopy simultaneously, registered the spectral reflectance to the depth-image coordinate system, fused the two, and achieved accurate prediction of chlorophyll content. Few reports, however, concern spectral compensation of the near-end plant canopy by fusing 3D data. One group made a specific 3D white hemispherical reference, photographed it at multiple positions with a hyperspectral camera and a Kinect depth camera, and, combined with deep learning, constructed a 3D white reference library for the canopy, thereby compensating the canopy spectral reflectance so that the result was closer to the standard data measured by a spectrometer. However, constructing such a 3D white reference is very complex, time-consuming and labour-intensive, and cannot be applied in practice; in addition, owing to the low resolution of the Kinect sensor, the resulting hyperspectral three-dimensional point cloud is sparse, so the compensation result is inadequate.
In summary, the problems of the prior art are:
(1) Spectral compensation based on spectral preprocessing techniques performs inconsistently, and no single most effective preprocessing technique applicable to all data sets can be identified; compensation based on the PROCOSINE model applies only at the leaf scale, and its feasibility at the canopy scale is unknown; compensation based on deep learning relies too heavily on modelling with large data sets, which are difficult to obtain in the plant phenotyping field.
(2) Spectral compensation based on a 3D white reference database is very complex, time-consuming and labour-intensive, lacks practical applicability, and yields sparse and inadequate compensation results.
Therefore, in the field of plant phenotypes, a high-efficiency and accurate near-end plant canopy spectrum compensation method is needed to reveal the true optical characteristics of plants and promote the further development of accurate plant phenotypes.
Disclosure of Invention
Aiming at the problems existing in the prior art, the invention provides a near-end plant canopy spectrum compensation method based on hyperspectral three-dimensional point cloud, which has the following specific technical scheme:
a near-end plant canopy spectrum compensation method based on hyperspectral three-dimensional point cloud comprises the following steps:
s1, building an experimental platform: setting up an experiment platform and completing the debugging of equipment;
s2, image data acquisition and pretreatment: respectively collecting data of hyperspectral images and depth images of experimental samples and checkerboards, and carrying out preprocessing such as background segmentation, noise removal and the like on collected image data;
s3, image registration and fusion: based on the checkerboard image shot in the step S2, carrying out image registration and fusion on the hyperspectral image and the depth image of the preprocessed sample in the step S2 to generate hyperspectral three-dimensional point cloud of the plant canopy;
s4, plant canopy spectrum compensation: based on the depth and angle information of each point in the hyperspectral three-dimensional point cloud in the step S3, the spectral compensation of the plant canopy is realized, and the influence of two important factors of working distance and blade angle is reduced;
s5, evaluating a spectrum compensation result: and (3) respectively carrying out k-means (k-means) cluster analysis and average spectrum curve comparison on the original plant canopy hyperspectral images compensated in the step (S4), and evaluating the effectiveness of spectrum compensation by comparing the results before and after compensation.
The near-end plant canopy spectrum compensation method provided by the invention is beneficial to accurate calculation of canopy spectrum reflectivity so as to reveal the true optical characteristics of plant canopy, and has the advantages of high compensation accuracy, good adaptability, high robustness and great popularization value in the field of accurate plant phenotypes.
Preferably, the experimental platform in step S1 mainly comprises a SNAPSCAN VNIR hyperspectral camera (IMEC, Leuven, Belgium), a Raytrix light field camera (Raytrix GmbH, Kiel, Germany), a 150 W annular halogen light source and two 100 W LED white light sources; the hyperspectral camera carries a 35 mm lens and the light field camera a 17 mm lens;
Unlike a conventional line-scanning camera, the sensor of the SNAPSCAN VNIR hyperspectral camera moves inside the camera body, so sample and camera remain at rest relative to each other, which greatly facilitates hyperspectral image acquisition;
The Raytrix light field camera in the experimental platform obtains the colour, depth, 3D point cloud and related information of the target sample in a single shot;
The annular halogen light source in the experimental platform is used mainly for the hyperspectral camera and the two LED white light sources mainly for the light field camera; the halogen light source is coaxial with, and points in the same direction as, the hyperspectral camera;
In step S1, the hyperspectral camera and the light field camera are mounted tightly together and fixed on a bracket about 45 cm above the ground, with both cameras about 100 cm from the experimental sample, so that the sample canopy lies within the field of view of both cameras;
The debugging of the equipment in S1 includes adjustment of the camera apertures, exposure times and light source intensity: the aperture of the hyperspectral camera is set to 3 and its exposure time to 32 ms; the aperture of the light field camera is set to 4 and its exposure time to 30 ms. The equipment is debugged to ensure clear acquisition of the spectral and depth information of the sample canopy, thereby facilitating the spectral compensation of the plant canopy.
Preferably, in step S2 the image acquisition environment is kept identical for each experimental sample and the checkerboard: the hyperspectral image and depth image of the sample are acquired first, the checkerboard is then leaned against the sample flowerpot, and the two images of the checkerboard are acquired;
The experimental samples in step S2 are normally cultivated purple perilla seedlings and tea seedlings at growth stage V5-V7, about 30 cm tall, with a canopy depth of about 15 cm;
The checkerboard in step S2 is black and white, with 9 × 9 squares, an overall size of 10 × 10 cm and a square size of 1 × 1 cm;
In step S2, the hyperspectral image is corrected with a white reference image and a dark reference image to obtain the reflectance; during correction the white board is placed behind the plant canopy. The correction formula is given by (1):

R = (R_0 - R_d) / (R_w - R_d)    (1)

where R is the corrected image, R_0 is the original hyperspectral image, and R_d and R_w are the dark (black) reference image and the white reference image, respectively;
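The correction in formula (1) can be sketched in a few lines of NumPy (an illustrative sketch only; the array names are hypothetical, and a small epsilon is added to guard against division by zero):

```python
import numpy as np

def correct_reflectance(raw, dark, white, eps=1e-12):
    """Flat-field correction, R = (R0 - Rd) / (Rw - Rd), applied per pixel and band."""
    raw, dark, white = (np.asarray(a, dtype=float) for a in (raw, dark, white))
    return (raw - dark) / (white - dark + eps)
```

The same expression applies unchanged to a full hypercube, since NumPy broadcasts the subtraction and division over all pixels and bands.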
Before the depth image is acquired in step S2, the Raytrix light field camera is self-calibrated once to ensure accurate acquisition of the subsequent depth information;
The hyperspectral images collected in step S2 have a spatial resolution of 1500 × 1024 and a spectral range of 610-850 nm; the depth images have a spatial resolution of 1102 × 766.
The image preprocessing in step S2 is aimed mainly at the hyperspectral images and consists mainly of background segmentation and noise removal. The background segmentation uses a threshold band of 577 nm with a threshold of 0.015.
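The threshold-based background segmentation described above can be sketched as follows (a hedged illustration; the (rows, cols, bands) cube layout and variable names are assumptions, not the patent's implementation):

```python
import numpy as np

def segment_canopy(cube, wavelengths, band_nm=577.0, threshold=0.015):
    """Mask plant pixels: pick the band nearest band_nm, keep pixels above threshold."""
    idx = int(np.argmin(np.abs(np.asarray(wavelengths, dtype=float) - band_nm)))
    return cube[:, :, idx] > threshold
```

The returned Boolean mask is then applied to every band of the cube, setting background pixels aside before the later registration and compensation steps.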
Preferably, the image registration in step S3 comprises two steps: detecting the corner points of the checkerboard and solving the homography transformation matrix;
The corner detection algorithm in step S3 is based on the following formula (2):

∇I(C_j) · (C_i - C_j) = 0    (2)

where C_i and C_j are two closely spaced corner points in the checkerboard image, ∇I(C_j) denotes the gray-level gradient at point C_j, and C_i - C_j is the direction vector between the two points; the dot product of the two is always 0;
In step S3 the corner detection is implemented with the OpenCV toolkit on a Python 3.7 coding platform, with a corner search window of 11 × 11;
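The orthogonality constraint of formula (2) can be checked directly in NumPy; in practice the detection itself would use OpenCV routines such as cv2.findChessboardCorners and cv2.cornerSubPix with the 11 × 11 search window. A minimal sketch of the constraint (function and argument names are hypothetical):

```python
import numpy as np

def corner_constraint(grad_at_cj, ci, cj):
    """Dot product of the gray-level gradient at Cj with the direction Ci - Cj.

    On an ideal checkerboard the gradient near a corner is perpendicular to the
    direction towards a neighbouring corner, so this value is approximately 0.
    """
    direction = np.asarray(ci, dtype=float) - np.asarray(cj, dtype=float)
    return float(np.dot(np.asarray(grad_at_cj, dtype=float), direction))
```

Candidate corner pairs whose constraint value deviates far from zero can be rejected as false detections.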
The solution of the homography transformation matrix in step S3 is based on the following formula (3):

Σ_i ‖P_h(x_i, y_i) - P_d'(x_i, y_i)‖ < e    (3)

where P_h denotes the corner points in the hyperspectral image, P_d' denotes the transformed corner points of the depth image, (x_i, y_i) are the image coordinates, and e is the allowed error. The optimal homography transformation matrix is obtained by iterative calculation;
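For illustration, the homography can be estimated with the standard direct linear transform (DLT); this NumPy sketch is a stand-in for a library routine such as cv2.findHomography, not the patent's exact solver:

```python
import numpy as np

def fit_homography(src, dst):
    """Estimate the 3x3 homography H mapping src points to dst points via DLT."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1.0, 0.0, 0.0, 0.0, u * x, u * y, u])
        A.append([0.0, 0.0, 0.0, -x, -y, -1.0, v * x, v * y, v])
    # the null space of A (last right singular vector) holds the 9 entries of H
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_homography(H, pts):
    """Apply H to an (N, 2) array of points and dehomogenise."""
    pts = np.asarray(pts, dtype=float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = (H @ homog.T).T
    return mapped[:, :2] / mapped[:, 2:3]
```

With four or more non-degenerate correspondences the SVD null space yields H; robust variants iterate this fit while discarding correspondences whose residual exceeds the allowed error e.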
the image registration at step S3 is performed at the pixel level;
the generation of the hyperspectral three-dimensional point cloud of the plant canopy in the step S3 is obtained by fusing the registered hyperspectral image and the depth image, and comprises two steps of obtaining internal and external parameters of a camera and constructing the hyperspectral three-dimensional point cloud.
In step S3 the intrinsic and extrinsic camera parameters are obtained by calibrating the Raytrix light field camera against the checkerboard images, yielding the intrinsic-extrinsic parameter matrix m of the light field camera;
s3, constructing the hyperspectral three-dimensional point cloud based on the following formula (4):
where s is the scaling factor, (u, v) is the pixel coordinates of the depth image, m is the light field camera parameter matrix of 3 x 4, and (x, y, z) is the homogeneous coordinates of the points in the point cloud.
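Formula (4) projects a 3D point through the 3 × 4 matrix m onto a depth-image pixel; inverting it lifts each pixel with a known depth into the point cloud. A minimal sketch, assuming a pinhole model with only a 3 × 3 intrinsic matrix K (the extrinsic part of m is omitted here for brevity):

```python
import numpy as np

def backproject(u, v, z, K):
    """Back-project pixel (u, v) with depth z into camera coordinates.

    Assumes intrinsics K = [[fx, 0, cx], [0, fy, cy], [0, 0, 1]].
    """
    x = (u - K[0, 2]) * z / K[0, 0]
    y = (v - K[1, 2]) * z / K[1, 1]
    return np.array([x, y, z], dtype=float)
```

Running this over every foreground pixel of the registered depth image, and attaching the co-registered spectrum to each resulting point, yields the hyperspectral three-dimensional point cloud.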
Preferably, the spectral compensation of the plant canopy in step S4 is aimed chiefly at compensating the working distance and leaf angle factors; the original spectral reflectance of a point on the canopy is expressed by the following formula (5):

λ_m = λ_r · (d_w / d)² · cos θ    (5)

where λ_m is the collected spectral reflectance, λ_r is the true reflectance of the canopy, d_w is the distance between the white board and the camera during white-board correction of the hyperspectral camera, d is the distance between the canopy point and the camera, and θ is the angle between the normal vector at the canopy point and the direction of the incident light. In this formula, the influence of working distance on the original spectral reflectance obeys the inverse square law, and that of leaf angle obeys Lambert's cosine law.
The spectral compensation of the plant canopy in step S4 is based on the depth and angle information of each point in the hyperspectral three-dimensional point cloud; the original spectral reflectance of each canopy point is compensated according to the following formula (6) to reduce the influence of the two important factors, working distance and leaf angle:

λ_r = λ_m · (d / d_w)² / cos θ    (6)

where θ is obtained by estimating the normal vector of each point with a K-nearest-neighbour (KNN) algorithm and then calculating the angle between that normal and the incident light, with K set to 300;
The spectral compensation in step S4 is performed point by point, achieving point-wise compensation of the plant canopy spectra.
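The per-point compensation of formula (6) can be sketched as follows (illustrative only; in the method itself the normal of each point would be estimated from its K = 300 nearest neighbours in the point cloud rather than supplied directly):

```python
import numpy as np

def compensate_point(refl_measured, d, d_w, normal, incident):
    """Undo the inverse-square distance falloff and the Lambertian cos(theta) factor."""
    n = np.asarray(normal, dtype=float)
    L = np.asarray(incident, dtype=float)
    cos_theta = abs(np.dot(n, L)) / (np.linalg.norm(n) * np.linalg.norm(L))
    return refl_measured * (d / d_w) ** 2 / cos_theta
```

At the reference geometry (d = d_w, normal aligned with the incident light) the correction factor is 1 and the measured reflectance is returned unchanged, as formula (5) requires.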
Preferably, the compensation results of step S5 comprise the results of distance compensation (DC), inclination (angle) compensation (IC) and both compensations together (BC);
The optimal cluster number for the k-means cluster analysis in S5 is determined with the elbow method, which seeks the number of clusters k that reduces the within-cluster sum of squared errors; compared with the silhouette coefficient method, it determines the optimal cluster number more accurately across different sample data sets. The core idea of the elbow method is that as the cluster number k increases, the samples are divided more finely, the cohesion of each cluster rises, and the sum of squared errors drops sharply; from a certain k onwards the decrease becomes slow, and that k is the optimal cluster number;
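The elbow criterion can be illustrated with a tiny k-means in NumPy (Lloyd's algorithm with a deterministic initialisation for reproducibility; a real analysis would use a library implementation with k-means++ initialisation):

```python
import numpy as np

def kmeans_inertia(X, k, iters=50):
    """Lloyd's k-means; returns the within-cluster sum of squared errors (inertia)."""
    X = np.asarray(X, dtype=float)
    # deterministic init: k evenly spaced samples (illustrative; prefer k-means++)
    C = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(iters):
        d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        for j in range(k):
            members = X[labels == j]
            if len(members):
                C[j] = members.mean(axis=0)
    d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(axis=2)
    return d2.min(axis=1).sum()

def elbow_curve(X, k_max):
    """Inertia for k = 1..k_max; the k where the drop flattens is the elbow."""
    return [kmeans_inertia(X, k) for k in range(1, k_max + 1)]
```

Plotting the returned inertias against k, the point where the steep drop turns into a slow decline marks the elbow, i.e. the suggested cluster count.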
The k-means cluster analysis and mean spectral curve evaluation in S5 show that after BC compensation most pixels of the plant canopy are grouped into one class and the mean spectral curve of the canopy is closer to the standard curve, indicating that the canopy spectra are well compensated and that the adverse effects of working distance and leaf angle on the canopy spectra are effectively eliminated.
The beneficial effects of the invention are as follows:
(1) Addressing the problem that the correct reflectance of the near-end plant canopy cannot be obtained through conventional two-dimensional white-board correction, the invention provides a point-by-point, accurate compensation method for near-end plant canopy spectra based on the hyperspectral three-dimensional point cloud;
(2) Compared with the complex spectral compensation methods that construct a 3D white reference database, the proposed method is based on physical laws and relies only on accurate, synchronous acquisition of the canopy hyperspectral image and depth map; it is efficient and accurate, highly repeatable, adaptable and robust.
(3) The proposed near-end plant canopy spectrum compensation method can be popularised and applied in the plant phenotyping field, and has great potential for the accurate calculation of near-end canopy spectral reflectance and the revelation of the true optical characteristics of the plant canopy.
Drawings
FIG. 1 is a flow chart of a near-end plant canopy spectral compensation method based on hyperspectral three-dimensional point clouds;
FIG. 2 is a schematic diagram of the registration result of the hyperspectral image and the depth image of the plant canopy ((A) single-channel spectral image; (B) and (C) depth images before and after registration, respectively);
FIG. 3 is a schematic diagram of the hyperspectral three-dimensional point clouds of the canopies of the purple perilla (a) and tea seedling (b) samples at the 795.4 nm band;
FIG. 4 shows the k-means clustering results and mean spectral curve comparisons for the canopies of the purple perilla (a) and tea seedling (b) samples before and after compensation.
Detailed Description
The invention is further described below with reference to examples. The following examples are presented only to aid in the understanding of the invention. It should be noted that it will be apparent to those skilled in the art that various modifications and adaptations of the invention can be made without departing from the principles of the invention and these modifications and adaptations are intended to be within the scope of the invention as defined in the following claims.
Aiming at the problems existing in the prior art, the invention provides a near-end plant canopy spectrum compensation method based on hyperspectral three-dimensional point cloud, and the invention is described in detail below with reference to the accompanying drawings:
a near-end plant canopy spectrum compensation method based on hyperspectral three-dimensional point cloud is shown in a flow chart in fig. 1, and specifically comprises the following steps:
s1, building an experimental platform: and (5) building an experiment platform and completing the debugging of the equipment. The specific method comprises the following steps: the SNAPSCAN VNIR hyperspectral camera and the Raytrix light field camera are fixed at a position 45cm away from the ground, so that the two cameras are guaranteed to be attached together as much as possible, and the distance from the camera to an experimental sample is about 100cm. The annular halogen light source is hung on the hyperspectral lens, and the two LED white light lamp light sources are symmetrically arranged on two sides of the sample, so that the light sources are ensured to be uniformly directed to the sample.
The annular halogen light source in the experimental platform is mainly used for a hyperspectral camera, the two LED white light lamp light sources are mainly used for a light field camera, and the halogen light source and the hyperspectral camera are coaxial and in the same direction in the experimental process;
the power of the annular halogen light source is 150W, and the power of the two LED white light sources is 100W;
the model of the hyperspectral lens is 35mm, and the model of the light field camera lens is 17mm;
the hyperspectral lens aperture is adjusted to be 3, and the light field camera lens aperture is adjusted to be 4;
s2, image data acquisition and pretreatment: and respectively collecting data of the hyperspectral image and the depth image of the experimental sample and the checkerboard, and carrying out preprocessing such as background segmentation, noise removal and the like.
The experimental samples are normally cultivated purple perilla seedlings and tea seedlings, the growth of the purple perilla seedlings and the tea seedlings is in the stage of V5-V7, the height of the purple perilla seedlings and the tea seedlings is about 30cm, and the depth of a canopy is about 15cm;
the hyperspectral image is corrected by adopting a white reference image and a black reference image, and reflectivity is obtained through calculation. In the correction process, the white board is placed behind the plant canopy, and the correction formula is as follows (1):
where R is the corrected image, R 0 Is the original hyperspectral image, R d And R is w Black reference image and white reference image respectively;
Before the depth image is acquired in S2, the Raytrix light field camera is self-calibrated once to ensure accurate acquisition of the subsequent depth information. This involves covering the lens of the light field camera with an annular calibration plate and adjusting the light source, aperture and exposure time so that the depth and three-dimensional information of the target in the field of view is sufficiently rich;
In S2 the image acquisition environment is kept identical for each experimental sample and the checkerboard: the hyperspectral image and depth image of the sample are acquired first, the checkerboard is then leaned against the sample flowerpot, and the two images of the checkerboard are acquired;
The checkerboard is black and white, with 9 × 9 squares, an overall size of 10 × 10 cm and a square size of 1 × 1 cm;
The spatial resolution of the hyperspectral image is 1500 × 1024, its spectral range is 610-850 nm, and the spatial resolution of the depth image is 1102 × 766;
The image preprocessing is aimed mainly at the hyperspectral images and consists mainly of background segmentation and noise removal. The background segmentation uses a threshold band of 577 nm with a threshold of 0.015.
S3, image registration and fusion: based on the checkerboard images captured in step S2, the preprocessed sample hyperspectral image and depth image from step S2 are registered and fused to generate the hyperspectral three-dimensional point cloud of the canopy.
In S3, image registration comprises two steps: detecting the corner points of the checkerboard to determine the feature points in the checkerboard images, and registering the sample depth image to the hyperspectral image by solving and applying a homography transformation matrix;
the corner detection algorithm is based on the following formula (2):

∇I(C_j) · (C_i − C_j) = 0

where C_i and C_j are two closely spaced corner points in the checkerboard image, ∇I(C_j) denotes the gray-scale gradient at point C_j, and C_i − C_j is the direction vector between the two points; their dot product is always 0;
corner detection is implemented with the OpenCV toolkit on a Python 3.7 coding platform, with an 11 × 11 corner search window;
the homography transformation matrix H is solved based on the following formula (3):

Σ_i ‖ P_h^(i) − H · P_d^(i) ‖² < e

where P_h denotes points in the hyperspectral image, P_d' = H · P_d denotes the transformed points of the depth image, (x_i, y_i) are the image coordinates of the i-th feature point, and e is the allowed error. The optimal homography transformation matrix is obtained by iterative calculation;
in S3, image registration is achieved at the pixel level;
the result of image registration in S3 is shown in FIG. 2, where panel (A) is a single-channel spectral image and panels (B) and (C) are the depth image before and after registration, respectively;
in S3, the hyperspectral three-dimensional point cloud of the plant canopy is generated by fusing the registered hyperspectral image and depth image, which comprises two steps: obtaining the intrinsic and extrinsic camera parameters, and constructing the hyperspectral three-dimensional point cloud.
The intrinsic and extrinsic camera parameters are obtained by calibrating the Raytrix light field camera on the checkerboard images, yielding the intrinsic-extrinsic parameter matrix m of the light field camera;
the hyperspectral three-dimensional point cloud is constructed based on the following formula (4):

s · [u, v, 1]^T = m · [x, y, z, 1]^T

where s is the scaling factor, (u, v) are the pixel coordinates of the depth image, m is the 3 × 4 light field camera parameter matrix, and [x, y, z, 1]^T are the homogeneous coordinates of a point in the point cloud.
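In the simplest case, with the camera frame as the world frame (m = K[I | 0]), formula (4) inverts to a per-pixel back-projection. The sketch below makes that assumption; the intrinsic values in K are illustrative only:

```python
import numpy as np

def depth_to_points(depth, K):
    """Back-project a depth map into 3-D points using the pinhole model
    s*[u, v, 1]^T = m*[x, y, z, 1]^T with m = K[I | 0] (camera frame)."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    v, u = np.indices(depth.shape)          # pixel coordinate grids
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Illustrative intrinsics and a flat depth map at the 1102x766 depth resolution
K = np.array([[800.0, 0.0, 551.0],
              [0.0, 800.0, 383.0],
              [0.0,   0.0,   1.0]])
depth = np.full((766, 1102), 1.0)           # a flat plane 1 m from the camera
pts = depth_to_points(depth, K)
```

Each back-projected point then carries the spectrum of its registered hyperspectral pixel, which is what makes the fused cloud "hyperspectral".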
The hyperspectral three-dimensional point cloud generated in step S3 is shown in FIG. 3, where panel (a) is the hyperspectral three-dimensional point cloud of purple perilla and panel (b) is that of the tea seedlings.
S4, plant canopy spectral compensation: based on the depth and angle information of each point in the hyperspectral three-dimensional point cloud of step S3, spectral compensation of the plant canopy is carried out to reduce the influence of two important factors, working distance and leaf angle.
The spectral compensation of the plant canopy mainly targets the working-distance and leaf-angle factors; the raw spectral reflectance at a point of the canopy is expressed by the following formula (5):

λ_m = λ_r · (d_w² / d²) · cos θ

where λ_m is the measured spectral reflectance, λ_r is the true reflectance of the canopy, d_w is the distance between the white board and the camera during white-board correction of the hyperspectral camera, d is the distance between the canopy point and the camera, and θ is the angle between the normal vector at the canopy point and the incident-light direction. In this formula, the influences of the working distance and the leaf angle on the raw spectral reflectance obey the inverse-square law and Lambert's cosine law, respectively.
In S4, the spectral compensation of the plant canopy is based on the depth and angle information of each point in the hyperspectral three-dimensional point cloud; the raw spectral reflectance of each canopy point is compensated according to the following formula (6) to reduce the influence of the two important factors, working distance and leaf angle:

λ_r = λ_m · d² / (d_w² · cos θ)
where the normal vector of each point is estimated with the K-nearest-neighbor (KNN) algorithm and the angle θ between the normal and the incident light is then computed, with K set to 300;
in S4, the spectral compensation is performed point by point over the plant canopy.
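The point-by-point compensation of formula (6) can be sketched as below, with PCA over each point's k nearest neighbours standing in for the KNN normal estimation. The incident-light direction, the brute-force neighbour search, and the toy planar cloud are assumptions for illustration (a KD-tree and K = 300 would be used on real canopies):

```python
import numpy as np

def compensate(points, spectra, d_w, k=300, incident=np.array([0.0, 0.0, -1.0])):
    """Per-point compensation, formula (6): lambda_r = lambda_m * d^2 / (d_w^2 * cos(theta)).
    The normal of each point is the smallest-eigenvalue eigenvector of the
    covariance of its k nearest neighbours (PCA plane fit)."""
    k = min(k, len(points))
    out = np.empty_like(spectra)
    for i in range(len(points)):
        dists = np.linalg.norm(points - points[i], axis=1)   # brute-force KNN
        nbrs = points[np.argsort(dists)[:k]]
        _, vecs = np.linalg.eigh(np.cov((nbrs - nbrs.mean(0)).T))
        normal = vecs[:, 0]                                   # plane normal
        cos_t = abs(normal @ incident) / np.linalg.norm(incident)
        d = np.linalg.norm(points[i])     # distance from the camera at the origin
        out[i] = spectra[i] * d**2 / (d_w**2 * max(cos_t, 1e-6))
    return out

# Toy cloud: a flat leaf facing the camera at the white-board distance d_w = 1,
# so the compensated spectrum of the central point equals the measured one.
xs = np.linspace(-0.1, 0.1, 11)
X, Y = np.meshgrid(xs, xs)
points = np.stack([X.ravel(), Y.ravel(), np.ones(121)], axis=1)
spectra = np.ones((121, 5))
out = compensate(points, spectra, d_w=1.0, k=50)
```

On this plane cos θ = 1 everywhere, so only the d²/d_w² distance term acts; off-axis points are scaled up slightly because they are farther from the camera.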
S5, evaluation of the spectral compensation result: k-means cluster analysis and average spectral curve comparison are carried out on the original and the compensated plant canopy hyperspectral images of step S4, and the effectiveness of the spectral compensation is evaluated by comparing the results before and after compensation.
The compensation results include the results of distance compensation (DC), inclination (angle) compensation (IC), and both compensations combined (BC);
the optimal number of clusters for the k-means cluster analysis in S5 is determined with the elbow method, which minimizes the within-cluster sum of squared errors to determine the optimal number of category clusters k. The core idea of the elbow method is that as the cluster number k increases, the samples are divided more finely and each cluster becomes more compact, so the sum of squared errors first drops sharply; from a certain k onward the decrease becomes slow, and that k is the optimal cluster number;
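A minimal NumPy sketch of the elbow criterion follows; the synthetic three-cluster "spectra" are an assumption for illustration, not the patent's data:

```python
import numpy as np

def kmeans_sse(X, k, iters=50, seed=0):
    """Plain k-means (k-means++ seeding + Lloyd iterations); returns the
    within-cluster sum of squared errors (SSE) used by the elbow method."""
    rng = np.random.RandomState(seed)
    centers = [X[rng.randint(len(X))]]
    for _ in range(k - 1):                       # k-means++ initialisation
        d2 = np.min(((X[:, None] - np.array(centers)[None]) ** 2).sum(-1), axis=1)
        centers.append(X[rng.choice(len(X), p=d2 / d2.sum())])
    centers = np.array(centers)
    for _ in range(iters):                       # Lloyd iterations
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return float(((X - centers[labels]) ** 2).sum())

# Three well-separated synthetic spectral clusters: the SSE drops sharply
# up to k = 3 and flattens afterwards, so the elbow selects k = 3.
rng = np.random.RandomState(1)
X = np.vstack([rng.normal(c, 0.02, (50, 4)) for c in (0.2, 0.5, 0.8)])
sse = {k: kmeans_sse(X, k) for k in range(1, 6)}
```

Plotting sse against k and picking the bend of the curve is exactly the elbow criterion described above.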
the results of the k-means cluster analysis and average spectral curve comparison described in S5 are shown in FIG. 4, where panels (a) and (b) show the comparison results for purple perilla and tea seedlings, respectively;
in the k-means clustering results of S5, the leaves of the original canopy are grouped into 3 categories according to their different distances and angles; the clustering result after DC does not change much; the clustering result after IC is more uniform, indicating that angle has the larger influence on the canopy spectrum; after BC most leaves in the canopy are grouped into 1 category and the spectral information of the whole canopy becomes more homogeneous, indicating that the influence of distance and angle on the canopy spectrum is reduced to a great extent;
in the spectral curve comparison of S5, spectra are collected from four excised and flattened leaves at different distances and angles in each canopy, and the average spectral curve of the four leaves is used as the standard curve;
in S5, the spectral curves gradually flatten beyond 750 nm, so the values around 800 nm are used as the comparison data;
in the spectral curve comparison of S5, the standard spectral curve is about 0.52 and the average spectral curve of the original canopy is about 0.45; after DC the average spectral curve drops below 0.4, after IC it rises to about 0.57, and after BC it is about 0.5, closest to the standard curve, showing that the influence of distance and angle on the plant canopy spectrum is effectively reduced.
The k-means cluster analysis and average spectral curve evaluation in S5 show that after BC most pixels of the plant canopy are grouped into one category and the average spectral curve of the canopy is closer to the standard curve, indicating that the plant canopy spectrum is well compensated and the adverse effects of working distance and leaf angle on the canopy spectrum are effectively eliminated.
In conclusion, the method achieves a remarkable technical effect, contributes to the development of precise plant phenotyping, and has broad application prospects and considerable economic benefits in the field of near-end plant canopy phenotyping research. The foregoing description of the preferred embodiments of the invention is not intended to limit the invention to the particular embodiments disclosed; on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention.
Finally, it should be noted that the above examples are only specific embodiments of the present invention and are not intended to limit its protection scope. Although the invention has been described in detail with reference to the foregoing examples, those skilled in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently substituted, without departing from the spirit and scope of the technical solutions of the embodiments of the present invention; such modifications, changes, and substitutions are intended to be included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (6)

1. A near-end plant canopy spectrum compensation method based on hyperspectral three-dimensional point cloud is characterized by comprising the following steps:
s1, building an experimental platform: setting up an experiment platform and completing the debugging of equipment;
s2, image data acquisition and pretreatment: respectively collecting data of hyperspectral images and depth images of the experimental sample and the checkerboard, and preprocessing the collected image data;
s3, image registration and fusion: based on the checkerboard image shot in the step S2, carrying out image registration and fusion on the hyperspectral image and the depth image of the preprocessed sample in the step S2 to generate hyperspectral three-dimensional point cloud of the plant canopy;
s4, plant canopy spectrum compensation: based on the depth and angle information of each point in the hyperspectral three-dimensional point cloud in the step S3, the spectral compensation of the plant canopy is realized, and the influence of two important factors of working distance and blade angle is reduced;
s5, evaluating a spectrum compensation result: and (3) carrying out k-means cluster analysis and average spectrum curve comparison on the original hyperspectral images of the plant canopy after compensation in the step (S4) respectively, and evaluating the effectiveness of spectrum compensation by comparing the results before and after compensation.
2. The near-end plant canopy spectrum compensation method based on hyperspectral three-dimensional point cloud as claimed in claim 1, wherein the method comprises the following steps:
the experimental platform in the step S1 mainly comprises a SNAPSCAN VNIR hyperspectral camera, a Raytrix light field camera, a 150W annular halogen lamp light source and two 100W LED white light lamp light sources, wherein the lens of the hyperspectral camera is 35mm, and the lens of the light field camera is 17mm;
the annular halogen light source is mainly used for a hyperspectral camera, and the two LED white light lamp light sources are mainly used for a light field camera, wherein the halogen light source and the hyperspectral camera are coaxial and in the same direction;
when the experimental platform is built in step S1, the hyperspectral camera and the light field camera are fixed closely together on a bracket; the two cameras are about 45 cm from the ground and about 100 cm from the experimental sample;
the debugging of the equipment in the step S1 comprises the adjustment of camera aperture, exposure time and light source intensity, wherein the aperture of the hyperspectral camera is set to be 3, and the exposure time is set to be 32ms; the aperture of the light field camera was set to 4 and the exposure time was set to 30ms.
3. The near-end plant canopy spectrum compensation method based on hyperspectral three-dimensional point cloud as claimed in claim 1, wherein the method comprises the following steps:
in step S2, the image acquisition environment is kept identical for each experimental sample and the checkerboard: the hyperspectral image and the depth image of the sample are acquired first, the checkerboard is then leaned against the sample flowerpot, and the two checkerboard images are acquired;
the checkerboard in step S2 is black and white, with a 9 × 9 grid, an overall size of 10 × 10 cm, and a square size of 1 × 1 cm;
the image preprocessing in the step S2 mainly aims at hyperspectral images, and the preprocessing mode mainly comprises background segmentation and noise removal; the background segmentation selects a threshold wave band of 577nm and a threshold size of 0.015.
4. The near-end plant canopy spectrum compensation method based on hyperspectral three-dimensional point cloud as claimed in claim 1, wherein the method comprises the following steps:
the image registration of the hyperspectral image and the depth image in the step S3 comprises two steps of detecting angular points of a checkerboard and solving a homography transformation matrix respectively;
corner detection is implemented with the OpenCV toolkit on a Python 3.7 coding platform, with an 11 × 11 corner search window;
the generation of the hyperspectral three-dimensional point cloud of the plant canopy in the step S3 is obtained by carrying out image fusion on the registered hyperspectral image and the depth image, and the image fusion comprises two steps of respectively obtaining internal and external parameters of a light field camera and constructing the hyperspectral three-dimensional point cloud;
the internal and external parameters of the light field camera are obtained by calibrating the Raytrix light field camera based on the checkerboard image, so as to obtain an internal and external parameter matrix m of the light field camera;
the hyperspectral three-dimensional point cloud is constructed based on the following formula (1):

s · [u, v, 1]^T = m · [x, y, z, 1]^T

where s is the scaling factor, (u, v) are the pixel coordinates of the depth image, m is the 3 × 4 light field camera parameter matrix, and [x, y, z, 1]^T are the homogeneous coordinates of a point in the point cloud.
5. The near-end plant canopy spectrum compensation method based on hyperspectral three-dimensional point cloud as claimed in claim 1, wherein the method comprises the following steps:
in step S4, the spectral compensation of the plant canopy mainly targets the working-distance and leaf-angle factors, and the raw spectral reflectance at a point of the canopy is expressed by the following formula (2):

λ_m = λ_r · (d_w² / d²) · cos θ

where λ_m is the measured spectral reflectance, λ_r is the true reflectance of the canopy, d_w is the distance between the white board and the camera during white-board correction of the hyperspectral camera, d is the distance between the canopy point and the camera, and θ is the angle between the normal vector at the canopy point and the incident-light direction; in this formula, the influences of the working distance and the leaf angle on the raw spectral reflectance obey the inverse-square law and Lambert's cosine law, respectively;
the spectral compensation of the plant canopy in step S4 is based on the depth and angle information of each point in the hyperspectral three-dimensional point cloud, and the raw spectral reflectance of each canopy point is compensated according to the following formula (3) to reduce the influence of the two important factors, working distance and leaf angle:

λ_r = λ_m · d² / (d_w² · cos θ)

where the normal vector of each point is estimated with the K-nearest-neighbor algorithm and the angle θ between the normal and the incident light is then computed, with K set to 300;
the spectral compensation in step S4 is performed point by point.
6. The near-end plant canopy spectrum compensation method based on hyperspectral three-dimensional point cloud as claimed in claim 1, wherein the method comprises the following steps:
the compensation results in step S5 include the results of distance compensation, angle compensation, and both compensations combined;
the optimal cluster number for the k-means cluster analysis in the step S5 is determined by using an elbow method, which aims to reduce the sum of squares of the cluster errors to determine the optimal cluster number k of categories;
the k-means cluster analysis and average spectral curve evaluation in step S5 show that after BC compensation most pixels of the plant canopy are grouped into one category and the average spectral curve of the canopy is closer to the standard curve, indicating that the plant canopy spectrum is well compensated and the adverse effects of working distance and leaf angle on the canopy spectrum are effectively eliminated.