CN117541537B - Space-time difference detection method and system based on all-scenic-spot cloud fusion technology - Google Patents

Space-time difference detection method and system based on all-scenic-spot cloud fusion technology Download PDF

Info

Publication number
CN117541537B
CN117541537B (application CN202311343854.5A)
Authority
CN
China
Prior art keywords
point cloud
data
panoramic image
space
cloud data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311343854.5A
Other languages
Chinese (zh)
Other versions
CN117541537A (en)
Inventor
尤万成
方超
苟新平
张志�
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Xinghu Technology Co ltd
Original Assignee
Jiangsu Xinghu Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu Xinghu Technology Co ltd filed Critical Jiangsu Xinghu Technology Co ltd
Priority to CN202311343854.5A priority Critical patent/CN117541537B/en
Publication of CN117541537A publication Critical patent/CN117541537A/en
Application granted granted Critical
Publication of CN117541537B publication Critical patent/CN117541537B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/40Image enhancement or restoration using histogram techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a space-time difference detection method and system based on an all-scene point cloud fusion technology. The method comprises the following steps: S1, collecting a panoramic image; S2, preprocessing the panoramic image; S3, collecting original point cloud data; S4, processing the original point cloud data; S5, fusing the panoramic image and the point cloud data; S6, comparing the data; S7, visually displaying the differences; S8, ending: applying the difference detection results displayed in step S7 to the corresponding applications and decisions. With this method, space-time differences are detected and analysed accurately. Through data fusion and the application of advanced algorithms, the accuracy and processing efficiency of the space-time data are improved, and a more reliable space-time difference detection solution is provided for various application scenarios.

Description

Space-time difference detection method and system based on all-scenic-spot cloud fusion technology
Technical Field
The invention relates to the technical field of image processing, in particular to a space-time difference detection method and system based on an all-scene point cloud fusion technology.
Background
In the prior art, detection of space-time differences usually depends on a single data source or on traditional methods (such as the difference-image method or the pixel-difference method) and suffers from problems such as low precision and low efficiency. An innovative method is therefore needed that comprehensively exploits panoramic and point cloud data to achieve accurate and efficient space-time difference detection.
Disclosure of Invention
The invention aims to solve at least one of the technical problems in the prior art and provides a space-time difference detection method and a space-time difference detection system based on an all-scene point cloud fusion technology.
The invention provides a space-time difference detection method based on the all-scene point cloud fusion technology, which comprises the following steps:
S1, collecting a panoramic image: acquiring data of a target area with a panoramic camera to obtain a panoramic image;
S2, preprocessing the panoramic image: preprocessing the panoramic image obtained in step S1 to obtain a target panoramic image, wherein the preprocessing comprises image correction, denoising and color correction;
S3, collecting original point cloud data: acquiring data of the target area with a laser scanner to obtain original point cloud data;
S4, processing the original point cloud data: preprocessing and post-processing the original point cloud data obtained in step S3 to obtain target point cloud data;
S5, fusing the target panoramic image and the target point cloud data: fusing the target panoramic image and the target point cloud data by point cloud projection to generate comprehensive space-time data, wherein the target panoramic image corresponds to panoramic images acquired at different times or different positions, and the target point cloud data corresponds to original point cloud data acquired at different times or different positions;
S6, data comparison: comparing the space-time data of different times through an iterative closest point algorithm, and identifying and analysing the differences;
S7, visual display of the differences: displaying the space-time differences obtained in step S6 with visualization tools and statistical methods;
S8, ending: applying the difference detection results displayed in step S7 to the corresponding applications and decisions.
According to the space-time difference detection method based on the all-scene point cloud fusion technology provided by the invention, the preprocessing in S2 comprises the following steps:
1) Panoramic image correction: correcting the panoramic image with a spherical equidistant projection model;
equidistant projection describes the mapping between the incidence angle of a ray and the image height by a linear relationship;
2) Filtering: filtering the panoramic image with a Gaussian filtering algorithm;
Gaussian filtering is a filter based on the Gaussian normal distribution;
3) Color correction: correcting the image colors with a histogram equalization algorithm;
histogram equalization is a common method of image enhancement. It changes the gray values of the image through a mapping that increases the dynamic range of the gray values and thus the contrast of the image. With histogram equalization the gray values of the whole image are distributed uniformly over the whole dynamic range, which increases the contrast of the image and improves its visual appearance.
According to the space-time difference detection method based on the all-scene point cloud fusion technology, the specific steps of color correction are as follows:
1. Determine the gray levels of the image; the gray levels range from 0 to 255;
2. Compute the probabilities of the original histogram by counting, for each gray level, the proportion of pixels of that level in the original image relative to the total;
3. Compute the cumulative values of the histogram probabilities;
4. Compute the mapping relation of the pixels;
5. Perform the gray-scale mapping.
According to the space-time difference detection method based on the all-scene point cloud fusion technology, the preprocessing comprises invalid data removal, noise filtering and point cloud registration, and the post-processing comprises point cloud segmentation, feature extraction and reconstruction.
According to the space-time difference detection method based on the all-scene point cloud fusion technology, the preprocessing specifically comprises the following steps:
1) Removing invalid data with a radius-based outlier removal algorithm;
2) Filtering the noise of the point cloud with a Gaussian filtering algorithm;
a weighted average is taken according to the weights of a Gaussian distribution. The weights decrease with distance, which means that closer points contribute more to the average while more distant points contribute less. Such a weight distribution effectively reduces the influence of noise while preserving the details of the point cloud.
3) Registering the point clouds with an iterative closest point registration algorithm, aligning multiple point cloud data sets so that they lie in the same coordinate system.
According to the space-time difference detection method based on the all-scene point cloud fusion technology, the post-processing specifically comprises the following steps:
1) Segmenting the point cloud with a Euclidean clustering algorithm and grouping the points into clusters;
Euclidean clustering is a clustering algorithm based on the Euclidean distance metric.
2) Extracting the feature information of the object with a point feature histogram extraction method;
the point feature histogram encodes the geometric properties of a point's neighbourhood by generalizing the mean curvature around the point with a multi-dimensional histogram. This high-dimensional hyperspace provides an informative signature for the feature representation, is invariant to the pose of the surface, and copes well with the noise present at different sampling densities or in different neighbourhoods.
3) Reconstructing the point cloud with a Poisson reconstruction algorithm.
A space-time difference detection system based on the all-scene point cloud fusion technology comprises:
a panoramic image sensor: for obtaining a panoramic image;
a point cloud sensor: for obtaining point cloud data;
an image processing module: for processing the obtained panoramic image;
a point cloud data processing module: for processing the obtained point cloud data;
a data fusion module: for fusing the processed panoramic image with the processed point cloud data;
a comparison module: for comparing the fused panoramic image and point cloud data with the historical panoramic image and point cloud data;
a display module: for visually displaying the differences obtained by the comparison module.
According to the space-time difference detection system based on the all-scene point cloud fusion technology provided by the invention, the panoramic image sensor and the point cloud sensor work synchronously, and the data fusion module is connected with the point cloud data processing module and the image processing module.
According to the space-time difference detection system based on the all-scene point cloud fusion technology provided by the invention, the data fusion module is realized by point cloud projection between the panoramic image and the point cloud data.
According to the space-time difference detection system based on the all-scene point cloud fusion technology provided by the invention, the system can be applied to fields such as urban planning, engineering quality supervision and infrastructure management.
The invention combines panoramic and point cloud technologies and realizes accurate detection and analysis of space-time differences. Through data fusion and the application of advanced algorithms, the accuracy and processing efficiency of the space-time data are improved, and a more reliable space-time difference detection solution is provided for various application scenarios.
Drawings
The invention is further described below with reference to the drawings and examples;
FIG. 1 is a flow diagram of a space-time difference detection method based on a full-scene point cloud fusion technology;
FIG. 2 is a schematic view of the spherical equidistant projection in an embodiment of the present invention;
FIG. 3 is a schematic diagram of a two-dimensional Gaussian distribution in an embodiment of the invention;
FIG. 4 is a schematic diagram of the Euclidean clustering process in an embodiment of the present invention;
FIG. 5 is a diagram of the point feature histogram calculation region in an embodiment of the present invention;
FIG. 6 is a uvw coordinate system diagram of an embodiment of the present invention;
FIG. 7 is a basic functional diagram of a poisson reconstruction according to an embodiment of the present invention;
FIG. 8 is a flowchart of a Poisson algorithm according to an embodiment of the present invention;
fig. 9 is a schematic flow chart of a space-time difference detection system based on the full-scene point cloud fusion technology.
Detailed Description
Reference will now be made in detail to the embodiments of the present invention, examples of which are illustrated in the accompanying drawings. The drawings supplement the written description so that each technical feature and the overall technical scheme of the invention can be understood intuitively, but they do not limit the scope of the invention.
Referring to fig. 1, the space-time difference detection method based on the all-scene point cloud fusion technology in the embodiment of the invention comprises the following steps:
S1, collecting panoramic images: acquiring data of a target area by using a panoramic camera to obtain a panoramic image;
S2, preprocessing the panoramic image: preprocessing the panoramic image obtained in the step S1 to obtain a target panoramic image, wherein the preprocessing comprises image correction, denoising and color correction;
The preprocessing in S2 comprises the following steps:
1) Panoramic image correction: carrying out panoramic image correction by using a spherical equidistant projection model; the formula of the spherical equidistant projection model is as follows: r=f·θ, where r represents the height of the image formed on the sensor, f represents the focal length of the camera, and θ represents the magnitude of the incident angle.
The spherical equidistant projection model algorithm is as follows:
Let the coordinates of a pixel point on the panoramic image be (x, y) (the target image plane is the plane through the point P(x, y, z) in fig. 2). Here z = R is taken, where R is the radius of the panoramic image. From the schematic, the image height of the pixel is r = √(x² + y²), and with the equidistant model r = f·θ it can be obtained that:
θ = √(x² + y²) / f,  φ = arctan(y / x)
where θ is the incidence angle of the ray, φ is the angle, measured in the XOY plane, of the projection of the line connecting the imaging point with the origin of the hemisphere, and f is the focal length of the camera.
Equivalently, φ is the angle between the image height r in the actual imaging plane and the X axis of the actual image. After θ and φ have been obtained, the polar-coordinate expression of the point L₁(x₁, y₁, z₁) on the sphere is obtained: the two-dimensional plane coordinates are mapped into the three-dimensional coordinate system through polar coordinates, and the point L can be written in polar form as L(θ, φ).
The polar coordinates of a point in space can be converted into its three-dimensional coordinates, so the three-dimensional coordinates of the point L are x₁ = R·sinθ·cosφ, y₁ = R·sinθ·sinφ and z₁ = R·cosθ.
After the three-dimensional coordinates of the space point have been obtained, a spherical perspective transformation is applied to it: the point L₁ on the virtual sphere is mapped onto the target image plane. The triangles OPP₃ and OP₁P₃′ are similar, from which it can be seen that
x₂ / x₁ = y₂ / y₁ = R / z₁
with the corrected target image plane being the plane
z = R.
It is therefore assumed that the coordinates of the corrected target image are (x₂, y₂, R). A mapping relationship between the source image coordinates (x, y) of the panoramic image and the target image coordinates is thus obtained:
x₂ = R·x₁ / z₁ = R·tanθ·cosφ,  y₂ = R·y₁ / z₁ = R·tanθ·sinφ
where R is the radius of the panoramic image and f is the focal length of the panoramic camera.
The effect of the transformed panoramic image can be observed, and the radius of the image can be adjusted continuously until the best result is achieved.
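As an illustration only, the following is a minimal Python/numpy sketch of the reverse-mapping form of this correction: for every pixel of the corrected target plane z = R, the corresponding source pixel of the equidistant (r = f·θ) panoramic image is sampled. The function name, the choice of the image centre as principal point and the nearest-neighbour sampling are illustrative assumptions, not part of the claimed method.

```python
import numpy as np

def correct_equidistant(src: np.ndarray, f: float, R: float) -> np.ndarray:
    """Reverse-map a panoramic (fisheye) image taken under the equidistant model
    r = f*theta onto the perspective target plane z = R (illustrative sketch)."""
    h, w = src.shape[:2]
    cx, cy = w / 2.0, h / 2.0                  # assumption: principal point at the image centre
    dst = np.zeros_like(src)

    ys, xs = np.mgrid[0:h, 0:w]                # coordinates of every target pixel
    x2, y2 = xs - cx, ys - cy
    rho = np.hypot(x2, y2)                     # radius on the target plane z = R
    theta = np.arctan2(rho, R)                 # incidence angle of the ray
    phi = np.arctan2(y2, x2)                   # azimuth angle phi

    r_src = f * theta                          # equidistant model: image height r = f*theta
    xs_src = np.round(cx + r_src * np.cos(phi)).astype(int)
    ys_src = np.round(cy + r_src * np.sin(phi)).astype(int)

    valid = (xs_src >= 0) & (xs_src < w) & (ys_src >= 0) & (ys_src < h)
    dst[ys[valid], xs[valid]] = src[ys_src[valid], xs_src[valid]]
    return dst
```

Adjusting R changes the magnification of the corrected image, corresponding to the continuous adjustment of the image radius mentioned above.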
2) Filtering: the panoramic image is filtered with a Gaussian filtering algorithm. Gaussian filtering applies the two-dimensional normal distribution shown in fig. 3 to a two-dimensional matrix: the values of G(x, y) are the weights of the matrix, the obtained weights are normalized so that each weight lies in the range [0, 1] and the sum of all weights is 1 (an illustrative sketch follows this subsection).
The two-dimensional Gaussian function is:
G(x, y) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²))
Here σ can be regarded as two components, σx on the x axis and σy on the y axis, and the resulting weight matrix is applied to the image by row-and-column operations. The larger the value of σ, the flatter the overall shape; the smaller the value of σ, the more sharply peaked the shape.
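As an illustration only, a short Python sketch of this filtering: the weight matrix is built from G(x, y), normalized so that the weights sum to 1, and convolved with the image. The kernel radius and σ value are illustrative assumptions.

```python
import numpy as np
from scipy.signal import convolve2d

def gaussian_kernel(sigma: float, radius: int) -> np.ndarray:
    """Weight matrix G(x, y) on a (2*radius+1)^2 window, normalized to sum to 1."""
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2)) / (2.0 * np.pi * sigma ** 2)
    return g / g.sum()                          # normalization keeps the overall brightness unchanged

def gaussian_filter_image(img: np.ndarray, sigma: float = 1.5, radius: int = 3) -> np.ndarray:
    """Apply the Gaussian weights to a single-channel image by two-dimensional convolution."""
    kernel = gaussian_kernel(sigma, radius)
    return convolve2d(img.astype(float), kernel, mode="same", boundary="symm")
```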
3) Color correction: image color correction is performed using a histogram equalization algorithm.
Histogram equalization is a common method of image enhancement. It changes the gray values of the image through a mapping that increases the dynamic range of the gray values and thus the contrast of the image. With histogram equalization the gray values of the whole image are distributed uniformly over the whole dynamic range, which increases the contrast of the image and improves its visual appearance.
The specific steps of color correction are as follows (an illustrative sketch in code follows the steps):
1. Determine the gray levels of the image; the gray levels range from 0 to 255;
2. Compute the probabilities of the original histogram by counting, for each gray level i, the proportion of pixels of that level in the original image relative to the total, denoted P_i;
3. Compute the cumulative value of the histogram probabilities, S_i = P_0 + P_1 + … + P_i;
4. Compute the mapping relation of the pixels. The mapping is computed with the following formula:
U_i = int((max(pix) − min(pix)) · S_i + 0.5)
where pix denotes the gray level;
5. Perform the gray-scale mapping.
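As an illustration only, the five steps can be written down directly in Python/numpy; applying the mapping per colour channel (or to a luminance channel) gives the colour correction. The 8-bit input and the function name are illustrative assumptions.

```python
import numpy as np

def equalize_histogram(gray: np.ndarray) -> np.ndarray:
    """Histogram equalization of an 8-bit gray image following steps 1-5 above."""
    levels = 256                                          # step 1: gray levels 0..255
    hist = np.bincount(gray.ravel(), minlength=levels)
    p = hist / gray.size                                  # step 2: probability P_i of each level
    s = np.cumsum(p)                                      # step 3: cumulative probability S_i
    lo, hi = int(gray.min()), int(gray.max())
    u = ((hi - lo) * s + 0.5).astype(np.uint8)            # step 4: U_i = int((max(pix)-min(pix))*S_i + 0.5)
    return u[gray]                                        # step 5: gray-scale mapping
```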
S3, collecting original point cloud data: acquiring data of the target area with a laser scanner to obtain original point cloud data;
S4, processing the original point cloud data: preprocessing and post-processing the original point cloud data obtained in step S3 to obtain target point cloud data; the preprocessing comprises invalid data removal, noise filtering and point cloud registration, and the post-processing comprises point cloud segmentation, feature extraction and reconstruction.
The preprocessing comprises the following specific steps:
1) Removing invalid data with a radius-based outlier removal algorithm (an illustrative sketch follows step five):
Step one: organise the point cloud data into a KD-Tree data structure to allow fast queries.
Step two: define a radius range and a density threshold. The radius range defines the search radius centred on each point and is chosen according to the scale of the application scene and of the point cloud data; the density threshold defines how many points must lie within the radius for a point to be considered valid, and its choice is usually related to the density and noise level of the point cloud. If the point cloud is dense, a higher density threshold can be chosen to reject outliers more strictly; if the point cloud is sparse, a lower density threshold can be chosen to reduce the risk of falsely rejecting valid points.
Step three: traverse each point.
Step four: count the number of points within the radius range of each point.
Step five: if the density of a point is below the density threshold, mark the point as invalid and remove it.
2) Filtering the noise of the point cloud with a Gaussian filtering algorithm (an illustrative sketch follows step three):
For each point in the point cloud, the surrounding neighbouring points are considered and a weighted average is taken according to the weights of a Gaussian distribution. The weights decrease with distance: points that are closer contribute more to the average, while points that are farther away contribute less. Such a weight distribution effectively reduces the influence of noise while preserving the details of the point cloud.
Step one: select the size of the filter window, which determines the size of the neighbourhood used to compute the average; the larger the window, the stronger the smoothing effect, but the greater the loss of detail.
Step two: compute the weights. For each point within the window, a Gaussian weight is computed from its distance to the centre point using the following formula:
W(x) = exp(−x² / (2σ²))
where W(x) is the weight of the point, x is the distance between the point and the centre point, and σ is a parameter controlling the weight distribution.
Step three: for each point in the window, perform a weighted average according to its weight to obtain the new, smoothed value.
3) Carrying out point cloud registration with an iterative closest point (ICP) registration algorithm, aligning a plurality of point cloud data sets so that they lie in the same coordinate system (an illustrative sketch follows step six):
Step one: sample the source point cloud P and find the point set corresponding to the target point cloud Q by minimizing the Euclidean distance between corresponding points, then obtain two new point sets from which points without correspondences and erroneous points have been removed.
Step two: compute the centroids of the new point sets from their coordinate information.
Step three: using the centroids, compute the rotation matrix R and the translation vector T that minimize the value of the error function E(R, T).
Step four: apply the rigid transformation given by the rotation matrix R and the translation vector T computed in step three to the points of the source point cloud P to obtain a new point set P₁.
Step five: compute the average distance between the new point set P₁ and the corresponding points in the target point cloud Q, denoted
d = (1/n) · Σᵢ ‖p₁ᵢ − qᵢ‖²
Step six: decide whether the iteration ends. If the average distance d is larger than a preset threshold and the number of iterations is smaller than the preset maximum k, return to step two; otherwise the iteration terminates.
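As an illustration only, the iteration of steps one to six is available ready-made in the Open3D library; the sketch below assumes roughly pre-aligned clouds, a point-to-point error metric, and placeholder values for the correspondence distance and the iteration limit.

```python
import numpy as np
import open3d as o3d

def register_icp(source: o3d.geometry.PointCloud,
                 target: o3d.geometry.PointCloud,
                 max_corr_dist: float = 0.05,
                 max_iter: int = 50) -> np.ndarray:
    """Align `source` to `target`; returns the 4x4 rigid transformation built from R and T."""
    result = o3d.pipelines.registration.registration_icp(
        source, target, max_corr_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint(),
        o3d.pipelines.registration.ICPConvergenceCriteria(max_iteration=max_iter))
    return result.transformation
```

The convergence criteria of the library play the role of the threshold and iteration-count check of steps five and six.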
The post-processing comprises the following specific steps:
1) Segmenting the point cloud with a Euclidean clustering algorithm and grouping the points into clusters (an illustrative sketch follows this step):
Euclidean clustering is a clustering algorithm based on the Euclidean distance metric. The specific algorithm is as follows: for a point p in space, the k points nearest to p are found with a KD-Tree nearest-neighbour search algorithm, and those whose distance to p is smaller than a set threshold are clustered into a set Q. A point of Q other than p is then selected and the process is repeated. If the number of elements in Q no longer increases, the whole clustering process for this set ends. The flow chart is shown in fig. 4.
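As an illustration only, a minimal Python sketch of this clustering: a KD-Tree region growing in which every point within the distance threshold of a cluster member is pulled into the cluster. The threshold and minimum cluster size are placeholder values.

```python
import numpy as np
from scipy.spatial import cKDTree

def euclidean_clusters(points: np.ndarray, tol: float = 0.1, min_size: int = 10):
    """Group points into clusters; points closer than `tol` end up in the same cluster."""
    tree = cKDTree(points)
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        queue, cluster = [seed], {seed}
        while queue:                                      # grow the set Q until it stops increasing
            idx = queue.pop()
            for j in tree.query_ball_point(points[idx], r=tol):
                if j in unvisited:
                    unvisited.remove(j)
                    cluster.add(j)
                    queue.append(j)
        if len(cluster) >= min_size:
            clusters.append(np.fromiter(cluster, dtype=int))
    return clusters
```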
2) Extracting the feature information of the object with the point feature histogram (PFH) extraction method (an illustrative sketch follows this subsection);
The point feature histogram representation is based on the relationships between the points in a k-neighbourhood and their estimated surface normals. Briefly, it attempts to capture the variation of the sampled surface as well as possible by taking into account all interactions between the estimated normal directions. The resulting hyperspace therefore depends on the quality of the surface normal estimate at each point.
Fig. 5 shows the calculation region of the point feature histogram. The point to be computed is P_q; P_q is placed in a 3D sphere of a given radius and connected with all the points of its k-neighbourhood. The final point feature histogram descriptor is the histogram of the relationships between all pairs of points in the neighbourhood, so the computational complexity is O(k²).
A fixed coordinate system is defined at one of the two points, as shown in fig. 6. The relative deviation between two points P_s, P_t and the associated normals n_s, n_t is computed with the frame
u = n_s
v = u × (P_t − P_s) / ‖P_t − P_s‖
w = u × v
Using this uvw coordinate system, the difference between the two normals n_s and n_t can be expressed as a set of angular features:
α = v · n_t
φ = u · (P_t − P_s) / d
θ = arctan(w · n_t, u · n_t)
where d is the Euclidean distance between the points P_s and P_t, namely d = ‖P_t − P_s‖₂.
Computing the quadruple ⟨α, φ, θ, d⟩ for each pair of points in the k-neighbourhood reduces the 12 values describing the pair (the coordinates and normals of the two points) to these 4 values.
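As an illustration only, the quadruple ⟨α, φ, θ, d⟩ for a single point pair follows directly from the formulas above; the normals are assumed to be already estimated and normalized. The PFH descriptor is then the histogram of these quadruples over all point pairs of the k-neighbourhood.

```python
import numpy as np

def pfh_pair_features(p_s: np.ndarray, n_s: np.ndarray,
                      p_t: np.ndarray, n_t: np.ndarray):
    """Return (alpha, phi, theta, d) for one pair of oriented points."""
    diff = p_t - p_s
    d = float(np.linalg.norm(diff))           # Euclidean distance between the two points
    u = n_s                                   # Darboux frame: u = n_s
    v = np.cross(u, diff / d)                 # v, perpendicular to u and to the connecting line
    w = np.cross(u, v)                        # w = u x v completes the frame
    alpha = float(np.dot(v, n_t))
    phi = float(np.dot(u, diff / d))
    theta = float(np.arctan2(np.dot(w, n_t), np.dot(u, n_t)))
    return alpha, phi, theta, d
```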
3) A poisson reconstruction algorithm is used for point cloud reconstruction.
The core idea of Poisson reconstruction is the following: space is divided into the region inside and the region outside the object; the normal vectors of the object's point cloud data mark inside and outside, and an estimate of the object surface is obtained by implicitly fitting the indicator function of the object, that is, by converting the information of the discrete surface points into a continuous surface function from which the surface is constructed. The specific method is as follows:
Assume the object is M and denote its surface by ∂M; the indicator function χ_M is:
χ_M(q) = 1 if q is inside M, and χ_M(q) = 0 if q is outside M,
and the relationship between the point cloud normal vectors and χ_M is shown in fig. 7.
From this definition a point has the value 1 inside the surface and the value 0 outside the surface, so if χ_M(q₀) is obtained for every point q₀ the surface of the whole object is known. Because χ_M is not continuous, interpolating χ_M(q₀) directly is not meaningful; the indicator function is therefore first smoothed with a smoothing filter function.
Step one: from the normal vectors to gradient space
First a smoothing filter F is used to smooth χ_M. For an arbitrary point p ∈ ∂M, let N_∂M(p) be the surface normal vector pointing towards the inside, and let F(q − p) denote the filter F shifted to the position p. Since the indicator function χ_M is not directly differentiable, the gradient of the smoothed indicator function is expressed through the surface normal field, namely:
∇(χ_M ∗ F)(q) = ∫_∂M F(q − p) · N_∂M(p) dp
where ∗ denotes convolution, here the smoothing filtering. This completes the step from the normal vectors of the point cloud data to gradient space.
Step two: from gradient space to a vector field
Because the surface points are discrete, the integrand above is not known for every point q of the surface, i.e. the distribution of the normal field is unknown; this problem is solved by a piecewise approximation based on the observed samples p = (p_i, n_i).
Denote the initial discrete sample point set by S; a sample s ∈ S contains position information s.p and normal vector information s.N. The surface ∂M is divided into small surface patches ∂_s, s ∈ S; the integral above is converted into a sum of integrals over the patches, and each small integral is approximated by a constant, namely the value at the sample point multiplied by the area of the patch |∂_s|:
∇(χ_M ∗ F)(q) ≈ Σ_{s∈S} |∂_s| · F(q − s.p) · s.N ≡ V(q)
Assuming the sample points are uniformly distributed, the constant patch area |∂_s| can be omitted, and the vector field V is obtained from the above formula by discrete approximation.
Step three: conversion into a Poisson equation
The vector field V and the indicator function χ_M satisfy the following relation, which is the problem finally to be solved:
∇χ_M = V
If this were solved directly, an integral of V would be required; however, the vector field V is in general not curl-free and cannot be integrated in the ordinary sense. Taking the divergence on both sides of the above formula gives:
Δχ_M ≡ ∇·∇χ_M = ∇·V
where Δ is the Laplace operator and ∇· is the divergence operator; the above formula is a Poisson equation and χ_M is the function to be solved. The equation states that the divergence of the gradient equals the divergence of the vector field. Its solution can be obtained by convolving the fundamental solution of the Laplace equation with the right-hand side, from which the indicator function is found.
Step four: Poisson algorithm flow
The input of Poisson reconstruction is point cloud data with normal vectors; the handling of the normal vectors of the point cloud data was completed in the previous steps. The output of the algorithm is a triangular mesh model, and the algorithm flow is shown in fig. 8 (an illustrative sketch follows).
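As an illustration only, the whole pipeline of fig. 8 (normal estimation, octree construction, solving the Poisson equation, surface extraction) is available in the Open3D library; the octree depth and the normal-estimation parameters below are placeholder values.

```python
import open3d as o3d

def poisson_mesh(pcd: o3d.geometry.PointCloud, depth: int = 9) -> o3d.geometry.TriangleMesh:
    """Reconstruct a triangular mesh from an oriented point cloud with Poisson reconstruction."""
    if not pcd.has_normals():                             # the algorithm requires normal vectors
        pcd.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))
        pcd.orient_normals_consistent_tangent_plane(30)
    mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=depth)
    return mesh
```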
S5, fusing the target panoramic image and the target point cloud data: fusing the target panoramic image and the target point cloud data by point cloud projection (as sketched below) to generate comprehensive space-time data, wherein the target panoramic image corresponds to panoramic images acquired at different times or different positions, and the target point cloud data corresponds to original point cloud data acquired at different times or different positions;
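As an illustration only, the point cloud projection of S5 can be sketched as follows: every 3-D point is projected into the panoramic image and the colour of the pixel it falls on is attached to the point. The sketch assumes an equirectangular panorama and a known rigid transformation from the scanner frame to the camera frame; both assumptions are illustrative and not part of the claimed method.

```python
import numpy as np

def colorize_points_from_panorama(points: np.ndarray, pano: np.ndarray,
                                  T_cam_from_lidar: np.ndarray) -> np.ndarray:
    """Attach a panorama colour to every 3-D point by projecting it into an
    equirectangular panoramic image (nearest-neighbour sampling)."""
    h, w = pano.shape[:2]
    pts_h = np.hstack([points, np.ones((len(points), 1))])       # homogeneous coordinates
    pc = (T_cam_from_lidar @ pts_h.T).T[:, :3]                   # points in the camera frame

    lon = np.arctan2(pc[:, 1], pc[:, 0])                         # azimuth in (-pi, pi]
    lat = np.arcsin(pc[:, 2] / np.linalg.norm(pc, axis=1))       # elevation in [-pi/2, pi/2]

    cols = np.clip(((lon + np.pi) / (2 * np.pi) * w).astype(int), 0, w - 1)
    rows = np.clip(((np.pi / 2 - lat) / np.pi * h).astype(int), 0, h - 1)
    return pano[rows, cols]                                      # per-point RGB values
```

Comparing two such fused data sets acquired at different times then reduces to the registration and difference analysis of S6.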
S6, data comparison: comparing the space-time data of different times through an iterative closest point algorithm, and identifying and analysing the differences;
S7, visual display of the differences: displaying the space-time differences obtained in step S6 with visualization tools and statistical methods;
S8, ending: applying the difference detection results displayed in step S7 to the corresponding applications and decisions.
As shown in fig. 9, a space-time difference detection system based on the all-scene point cloud fusion technology comprises:
a panoramic image sensor: for obtaining a panoramic image;
a point cloud sensor: for obtaining point cloud data;
an image processing module: for processing the obtained panoramic image;
a point cloud data processing module: for processing the obtained point cloud data;
a data fusion module: for fusing the processed panoramic image with the processed point cloud data;
a comparison module: for comparing the fused panoramic image and point cloud data with the historical panoramic image and point cloud data;
a display module: for visually displaying the differences obtained by the comparison module.
The panoramic image sensor and the point cloud sensor work synchronously, and the data fusion module is connected with the point cloud data processing module and the image processing module.
The data fusion module is realized by carrying out point cloud projection on the panoramic image and the point cloud data.
The space-time difference detection system based on the full scenic spot cloud fusion technology can be applied to the fields of city planning, engineering quality supervision, infrastructure management and the like.
The embodiments of the present invention have been described in detail with reference to the accompanying drawings, but the present invention is not limited to the above embodiments, and various changes can be made within the knowledge of one of ordinary skill in the art without departing from the spirit of the present invention.

Claims (8)

1. The space-time difference detection method based on the all-scenic spot cloud fusion technology is characterized by comprising the following steps of:
S1, collecting panoramic images: acquiring data of a target area by using a panoramic camera to obtain a panoramic image;
S2, preprocessing the panoramic image: preprocessing the panoramic image obtained in the step S1 to obtain a target panoramic image, wherein the preprocessing comprises image correction, denoising and color correction;
S3, collecting original point cloud data: acquiring data of the target area with a laser scanner to obtain original point cloud data;
S4, processing the original point cloud data: preprocessing and post-processing the original point cloud data obtained in step S3 to obtain target point cloud data;
S5, fusing the target panoramic image and target point cloud data: fusing the target panoramic image and the target point cloud data by point cloud projection to generate comprehensive space-time data, wherein the target panoramic image corresponds to panoramic images acquired at different times or different positions, and the target point cloud data corresponds to original point cloud data acquired at different times or different positions;
S6, data comparison: comparing the space-time data of different times through an iterative closest point algorithm, and identifying and analysing the differences;
s7, difference visual display: displaying the space-time difference obtained in the step S6 by using a visualization tool and a statistical method;
S8, ending: according to the difference detection result displayed in the step S7, corresponding application and decision are carried out;
The preprocessing in the S4 comprises invalid data removal, noise filtering and point cloud registration, and the post-processing comprises point cloud segmentation, feature extraction and reconstruction;
The post-treatment specifically comprises the following steps:
1) Performing point cloud segmentation with a Euclidean clustering algorithm and grouping the points into clusters;
2) Extracting feature information of the object by using a point feature histogram extraction method;
3) A poisson reconstruction algorithm is used for point cloud reconstruction.
2. The method for detecting space-time difference based on the all-scene cloud fusion technology according to claim 1, wherein the preprocessing in S2 comprises:
panoramic image correction: carrying out panoramic image correction by using a spherical equidistant projection model;
and (3) filtering: filtering the panoramic image by using a Gaussian filtering algorithm;
color correction: image color correction is performed using a histogram equalization algorithm.
3. The space-time difference detection method based on the full-scene cloud fusion technology according to claim 2, wherein the specific steps of color correction are as follows:
determining the gray level of the image, wherein the gray level is 0-255;
calculating the probability of an original histogram, and counting the proportion of the pixels of each gray level on the original image to the total;
Calculating an accumulated value of the histogram probability;
calculating the mapping relation of pixels;
and performing the gray-scale mapping.
4. The space-time difference detection method based on the full-scene cloud fusion technology according to claim 1, wherein the preprocessing specifically comprises the following steps:
1) Removing invalid data by a radius-based outlier removal algorithm;
2) Using a Gaussian filter algorithm to filter noise of the point cloud;
3) Carrying out point cloud registration with an iterative closest point registration algorithm, aligning a plurality of point cloud data sets so that they lie in the same coordinate system.
5. A space-time difference detection system implementing the space-time difference detection method according to claim 1, characterized by comprising
Panoramic image sensor: for obtaining a panoramic image;
A point cloud sensor: for obtaining point cloud data;
An image processing module: for processing the obtained panoramic image;
A point cloud data processing module: for processing the obtained point cloud data;
and a data fusion module: fusing the processed panoramic image with the processed point cloud data;
And a comparison module: comparing the fused panoramic image and point cloud data with the historical panoramic image and point cloud data;
And a display module: and visually displaying the difference obtained by the comparison of the comparison module.
6. The space-time difference detection system based on the full-scene point cloud fusion technology according to claim 5, wherein the panoramic image sensor and the point cloud sensor work synchronously, and the data fusion module is connected with the point cloud data processing module and the image processing module.
7. The system of claim 5, wherein the data fusion module is configured to perform point cloud projection on the panoramic image and the point cloud data.
8. The space-time difference detection system based on the full-scene cloud fusion technology according to claim 5, wherein the space-time difference detection system based on the full-scene cloud fusion technology can be applied to the fields of city planning, engineering quality supervision, infrastructure management and the like.
CN202311343854.5A 2023-10-16 2023-10-16 Space-time difference detection method and system based on all-scenic-spot cloud fusion technology Active CN117541537B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311343854.5A CN117541537B (en) 2023-10-16 2023-10-16 Space-time difference detection method and system based on all-scenic-spot cloud fusion technology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311343854.5A CN117541537B (en) 2023-10-16 2023-10-16 Space-time difference detection method and system based on all-scenic-spot cloud fusion technology

Publications (2)

Publication Number Publication Date
CN117541537A CN117541537A (en) 2024-02-09
CN117541537B true CN117541537B (en) 2024-05-24

Family

ID=89790766

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311343854.5A Active CN117541537B (en) 2023-10-16 2023-10-16 Space-time difference detection method and system based on all-scenic-spot cloud fusion technology

Country Status (1)

Country Link
CN (1) CN117541537B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109544456A (en) * 2018-11-26 2019-03-29 湖南科技大学 The panorama environment perception method merged based on two dimensional image and three dimensional point cloud
CN113223145A (en) * 2021-04-19 2021-08-06 中国科学院国家空间科学中心 Sub-pixel measurement multi-source data fusion method and system for planetary surface detection
CN113506318A (en) * 2021-07-12 2021-10-15 广东工业大学 Three-dimensional target perception method under vehicle-mounted edge scene
CN113822891A (en) * 2021-11-24 2021-12-21 深圳市智源空间创新科技有限公司 Tunnel disease detection method fusing laser point cloud and panoramic image
CN113850869A (en) * 2021-09-10 2021-12-28 国网重庆市电力公司建设分公司 Deep foundation pit collapse water seepage detection method based on radar scanning and image analysis
CN113869629A (en) * 2021-08-13 2021-12-31 广东电网有限责任公司广州供电局 Laser point cloud-based power transmission line safety risk analysis, judgment and evaluation method
CN114677435A (en) * 2021-07-20 2022-06-28 武汉海云空间信息技术有限公司 Point cloud panoramic fusion element extraction method and system
CN115359021A (en) * 2022-08-29 2022-11-18 上海大学 Target positioning detection method based on laser radar and camera information fusion
CN115376109A (en) * 2022-10-25 2022-11-22 杭州华橙软件技术有限公司 Obstacle detection method, obstacle detection device, and storage medium
CN115761550A (en) * 2022-12-20 2023-03-07 江苏优思微智能科技有限公司 Water surface target detection method based on laser radar point cloud and camera image fusion
CN116778104A (en) * 2023-08-16 2023-09-19 江西省国土资源测绘工程总院有限公司 Mapping method and system for dynamic remote sensing monitoring

Also Published As

Publication number Publication date
CN117541537A (en) 2024-02-09

Similar Documents

Publication Publication Date Title
CN109544456B (en) Panoramic environment sensing method based on two-dimensional image and three-dimensional point cloud data fusion
CN109345620B (en) Improved object point cloud splicing method for ICP (inductively coupled plasma) to-be-measured object by fusing fast point feature histogram
Fan et al. Rethinking road surface 3-d reconstruction and pothole detection: From perspective transformation to disparity map segmentation
JP6681729B2 (en) Method for determining 3D pose of object and 3D location of landmark point of object, and system for determining 3D pose of object and 3D location of landmark of object
JP3735344B2 (en) Calibration apparatus, calibration method, and calibration program
CN109903372B (en) Depth map super-resolution completion method and high-quality three-dimensional reconstruction method and system
CN107767456A (en) A kind of object dimensional method for reconstructing based on RGB D cameras
CN109961506A (en) A kind of fusion improves the local scene three-dimensional reconstruction method of Census figure
CN111524168B (en) Point cloud data registration method, system and device and computer storage medium
WO2015006224A1 (en) Real-time 3d computer vision processing engine for object recognition, reconstruction, and analysis
CN110610505A (en) Image segmentation method fusing depth and color information
CN109470149B (en) Method and device for measuring position and posture of pipeline
CN115345822A (en) Automatic three-dimensional detection method for surface structure light of aviation complex part
CN110751097B (en) Semi-supervised three-dimensional point cloud gesture key point detection method
CN113393524B (en) Target pose estimation method combining deep learning and contour point cloud reconstruction
CN112630469B (en) Three-dimensional detection method based on structured light and multiple light field cameras
O'Byrne et al. A stereo‐matching technique for recovering 3D information from underwater inspection imagery
CN116129037B (en) Visual touch sensor, three-dimensional reconstruction method, system, equipment and storage medium thereof
CN115222884A (en) Space object analysis and modeling optimization method based on artificial intelligence
CN114612412A (en) Processing method of three-dimensional point cloud data, application of processing method, electronic device and storage medium
CN114998448A (en) Method for calibrating multi-constraint binocular fisheye camera and positioning space point
CN110969650B (en) Intensity image and texture sequence registration method based on central projection
CN113409242A (en) Intelligent monitoring method for point cloud of rail intersection bow net
CN115112098B (en) Monocular vision one-dimensional two-dimensional measurement method
CN116883590A (en) Three-dimensional face point cloud optimization method, medium and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant