CN116071283B - Three-dimensional point cloud image fusion method based on computer vision - Google Patents


Info

Publication number
CN116071283B
CN116071283B (application CN202310363382.3A)
Authority
CN
China
Prior art keywords
point cloud
image
dimensional
frame
data
Prior art date
Legal status
Active
Application number
CN202310363382.3A
Other languages
Chinese (zh)
Other versions
CN116071283A
Inventor
吴剑
胡波
黄嵩衍
Current Assignee
Hunan Teng Kun Information Technology Co ltd
Original Assignee
Hunan Teng Kun Information Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Hunan Teng Kun Information Technology Co ltd
Priority to CN202310363382.3A
Publication of CN116071283A
Application granted
Publication of CN116071283B
Status: Active

Classifications

    • G06T 5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06V 10/25 — Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V 10/44 — Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06V 10/761 — Image or video pattern matching; proximity, similarity or dissimilarity measures in feature spaces
    • G06V 10/762 — Pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • G06T 2207/10016 — Image acquisition modality: video; image sequence
    • G06T 2207/10028 — Image acquisition modality: range image; depth image; 3D point clouds
    • G06T 2207/20221 — Image combination: image fusion; image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to the technical field of image data processing, in particular to a three-dimensional point cloud image fusion method based on computer vision, which comprises the following steps: dividing continuous frame images into segments according to the texture features of adjacent frame images, acquiring a region of interest from the local features of the point cloud data in each segment, and performing data interpolation according to the local density change of the region of interest. By interpolating the point cloud data, the invention avoids the loss of the spatial characteristics of the point cloud data that occurs in traditional point cloud and image fusion, achieves a more accurate target detection effect, and provides more accurate and stable intelligent automatic inspection assistance.

Description

Three-dimensional point cloud image fusion method based on computer vision
Technical Field
The invention relates to the technical field of image data processing, in particular to a three-dimensional point cloud image fusion method based on computer vision.
Background
With the development of science and technology, intelligent management has been implemented inside many enterprise parks, and the security of the park is a major issue of intelligent management. With the popularization of unmanned inspection vehicles, they are often adopted in parks to replace traditional methods such as manual inspection, which greatly improves security efficiency and reduces park operation cost. During inspection, however, the unmanned inspection vehicle needs to collect real-time image information in the park, and the digital images collected by a single image acquisition device cannot meet practical application requirements; cooperative work of multiple sensors is needed, among which the fusion of three-dimensional point cloud data and image data is widely applied in unmanned inspection vehicle systems. Three-dimensional point cloud data differ from two-dimensional images in application: point cloud data are usually characterized by three-dimensional information features such as the depth and geometry of a target, while two-dimensional image data are characterized by two-dimensional information features such as the color and texture of a target. Fusing the two therefore yields a three-dimensional point cloud model carrying color and other information, which can accurately represent the objective features of a target and provide better automatic inspection assistance when the unmanned inspection vehicle system detects targets accurately.
However, in existing methods, the three-dimensional point cloud data are projected onto a two-dimensional plane for fusion according to a coordinate conversion relation. This loses the spatial distribution characteristics of the three-dimensional point cloud data, affects the spatial relations between the point cloud data, and causes large errors in the projection result, so the fusion of three-dimensional point cloud data and image data cannot meet the requirements of the unmanned inspection vehicle during driving. A three-dimensional point cloud image fusion method based on computer vision is therefore needed.
Disclosure of Invention
The invention provides a three-dimensional point cloud image fusion method based on computer vision, which aims to solve the existing problems.
The three-dimensional point cloud image fusion method based on computer vision adopts the following technical scheme:
one embodiment of the invention provides a three-dimensional point cloud image fusion method based on computer vision, which comprises the following steps:
acquiring three-dimensional point cloud data and a visible light image, acquiring image segmentation areas according to edges in the visible light image, and combining the image segmentation areas with the largest adjacent frame structure similarity to acquire the average value of the structure similarity of all the image area combinations;
obtaining the segmentation degree of each frame of visible light image according to the average value of the number proportion relation and the structural similarity of the image segmentation areas in the adjacent frames of visible light images, and obtaining a segmentation time interval according to the segmentation degree of all frames;
clustering three-dimensional point cloud data of each frame, selecting three point clouds as an initial point in each point cloud cluster, gradually increasing the number of the point clouds, gradually fitting the normal vector of the three-dimensional plane where the point clouds are located and fitting times, obtaining an initial first structure complexity according to the cosine similarity and fitting times of the normal vector of the corresponding three-dimensional plane adjacent to two times, and selecting a plurality of initial points in each point cloud cluster to obtain the average value of the first structure complexity of all initial points;
in the process of successively fitting the three-dimensional plane in which the point clouds are located, each point cloud corresponds to a normal vector of the three-dimensional plane before and after the point clouds participate in fitting;
acquiring a time interval between adjacent frames during three-dimensional point cloud data acquisition, taking the ratio of the centroid distance of the point cloud clusters of the adjacent frames to the time interval as a centroid change rate, and taking the product of the distance between the centroid coordinates of the point cloud clusters and the whole three-dimensional point cloud data and the centroid change rate as an offset correction value;
correcting the average value of the first structural complexity by using the offset correction value to obtain a second structural complexity;
obtaining an interested region according to the complexity of the second structure, obtaining target point clouds in the interested region according to the cosine similarity change difference of each point cloud before and after the point clouds participate in fitting, obtaining interpolation weights according to the specific gravity of the point cloud densities of the target point clouds at different positions in the neighborhood, and carrying out data interpolation between the target point clouds by utilizing the interpolation weights to obtain new three-dimensional point cloud data;
and fusing the new three-dimensional point cloud data with the visible light image to obtain a fused image, and realizing intelligent inspection assistance by using the fused image.
Further, the average value of the structural similarity is obtained by the following steps:
any image segmentation region in the j-th frame visible light image is marked as the target image segmentation region, and the structural similarity between each image segmentation region in the (j-1)-th frame visible light image and the target image segmentation region of the j-th frame is calculated; the target image segmentation region of the j-th frame and the image segmentation region of the (j-1)-th frame with the maximum structural similarity to it are taken as an image region combination; repeating this process yields all image region combinations between the (j-1)-th frame and the j-th frame visible light images together with their structural similarities, and the mean of the structural similarity of all image region combinations of the (j-1)-th and j-th frame visible light images is obtained.
Further, the segmentation degree is obtained by the following steps:
the segmentation degree of the j-th frame visible light image is calculated from: the number of image regions in the (j-1)-th frame visible light image; the number of image regions in the j-th frame visible light image; and the maximum similarity mean, i.e. the mean of the structural similarity of all image region combinations of the (j-1)-th and j-th frame visible light images.
Further, the method for obtaining the segment time interval is as follows:
and taking part of visible light images with segmentation degree continuously larger than a first preset threshold value in the visible light images of the continuous frames as segmented frames, taking the left side and the right side of each segmented frame as end points of the section, and recording the section corresponding to the visible light images contained in the end points and the end points as a segmented time section to obtain a plurality of segmented time sections of the visible light images of the continuous frames.
Further, the first structure complexity is obtained by the following steps:
taking any one point cloud cluster as the target point cloud cluster, randomly selecting a plurality of point clouds in the target point cloud cluster, randomly selecting any one of them as the target starting center, acquiring the point clouds closest to the target starting center to form a three-dimensional plane, and acquiring the normal vector n1 of the three-dimensional plane; then, without considering the point clouds that have already participated in fitting the three-dimensional plane, acquiring the point cloud closest to the three-dimensional plane, acquiring a new three-dimensional plane by the least-squares plane fitting method, and acquiring its normal vector n2; and so on, until all points in the point cloud cluster participate in the calculation;

first structure complexity: the r-th of the randomly selected point clouds is used as the starting center of the three-dimensional plane fitting; the number of fits is the number of times the i-th point cloud cluster of the j-th frame in the segmented time interval fits a plane; the complexity factor is the cosine similarity between the plane normal vector of the (m-1)-th fitted plane and the plane normal vector of the m-th fitted plane acquired under the r-th starting center; and the first structure complexity is the structure complexity of the i-th point cloud cluster of the j-th frame of three-dimensional point cloud data acquired with the r-th starting center, accumulated from these complexity factors over all fits.
Further, the offset correction value is obtained by the following method:
offset correction value: the centroid change rate is the ratio of the Euclidean distance between the centroid of the i-th point cloud cluster of the (j-1)-th frame of three-dimensional point cloud data and the centroid of the i-th point cloud cluster of the j-th frame in the combined point cloud cluster to the acquisition interval between the (j-1)-th and j-th frames of three-dimensional point cloud data, i.e. the average rate of change of the centroid position of the cluster over the data acquisition interval; the offset correction value is the product of the centroid change rate and the Euclidean distance between the centroid of the i-th point cloud cluster of the j-th frame and the centroid of the whole j-th frame of three-dimensional point cloud data.
Further, the second structure complexity is obtained by the following method:
the second structure complexity of the i-th point cloud cluster of the j-th frame of three-dimensional point cloud data is obtained by correcting the mean of the first structure complexity of the cluster with the linearly normalized correction value of its structure complexity.
Further, the new three-dimensional point cloud data is obtained by the following steps:
when the second structure complexity value of the point cloud cluster is larger than a second preset threshold value, setting the point cloud cluster as an interested area;
in the region of interest, the cosine similarity of the plane normal vectors before and after each point cloud participates in the three-dimensional plane fitting is calculated; if the cosine similarity is smaller than a preset threshold, the corresponding point cloud is located at a position of critical structural change, and data interpolation is carried out within the sphere around that point cloud, where the sphere radius is the Euclidean distance r between the point cloud at the critical position and its nearest point cloud; positions on the connecting lines between the point cloud and the other point clouds within the sphere are interpolated; interpolation weights are obtained from the proportions of the point cloud densities within the different sphere ranges; the Euclidean distance between the target point cloud and each of the other point clouds is reduced by the interpolation weight to obtain the interpolation position, and data interpolation is performed at these positions along the line segments between the other point clouds and the target point cloud to obtain the three-dimensional point cloud data after data interpolation.
Further, the intelligent inspection assistance is realized by using the fused image, which specifically comprises the following steps:
according to the coordinate conversion relation between the three-dimensional point cloud data and the two-dimensional plane image data, the interpolated three-dimensional point cloud data is projected into the two-dimensional plane image data, the pixel value of each pixel point in the two-dimensional plane image data is also given to the pixel point of the projected point cloud data position, fusion of the three-dimensional point cloud data and the two-dimensional plane image data is carried out, the fused image is input into a convolutional neural network to obtain patrol instruction information for obstacle avoidance, forward, backward and parking, and intelligent patrol assistance is realized.
The technical scheme of the invention has the beneficial effects that: the method comprises the steps of obtaining image interval sections through distribution characteristics of two-dimensional image data of continuous frames, obtaining positions of an interested region according to local distribution characteristics of point cloud data in each interval section, and interpolating the interested region according to density changes in a local range of the data of the point cloud, so that a local characteristic relation on a three-dimensional space after projection is reserved. The acquisition process of the region of interest performs cluster analysis by considering three-dimensional point cloud data of a single frame, acquires the complexity of a structure according to the distribution characteristics of the point cloud data in each point cloud cluster, and corrects the three-dimensional point cloud data by combining the changes of the point cloud data of continuous frames in the regional segments, thereby acquiring the final complexity of the structure and acquiring the region of interest. The method has the advantages that the defect that in the traditional fusion process of the three-dimensional point cloud data and the two-dimensional image data, the spatial distribution characteristics of the three-dimensional point cloud data are lost, the spatial relationship among the point cloud data is influenced, and a large error occurs in a projection result is avoided, and the spatial local distribution characteristics of the three-dimensional point cloud data and the corresponding spatial relationship among the three-dimensional point cloud data are greatly reserved.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flow chart of steps of a three-dimensional point cloud image fusion method based on computer vision.
Detailed Description
In order to further describe the technical means and effects adopted by the invention to achieve the intended aim, the following is a detailed description of a specific implementation, structure, characteristics and effects of the three-dimensional point cloud image fusion method based on computer vision according to the invention, with reference to the accompanying drawings and the preferred embodiments. In the following description, different references to "one embodiment" or "another embodiment" do not necessarily refer to the same embodiment. Furthermore, the particular features, structures, or characteristics of one or more embodiments may be combined in any suitable manner.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The following specifically describes a specific scheme of the three-dimensional point cloud image fusion method based on computer vision provided by the invention with reference to the accompanying drawings.
Referring to fig. 1, a flowchart illustrating steps of a three-dimensional point cloud image fusion method based on computer vision according to an embodiment of the present invention is shown, where the method includes the following steps:
and S001, acquiring image data and corresponding three-dimensional point cloud data, and acquiring a coordinate conversion relation between the image data and the corresponding three-dimensional point cloud data according to a parameter relation between acquisition sensors.
In this embodiment, three-dimensional point cloud data is obtained by using a laser radar and a visible light camera mounted on an unmanned inspection vehicle, and multi-frame two-dimensional image data of visible light is obtained by using a visible light camera, wherein the laser radar and the visible light camera are calibrated, and a coordinate conversion relationship between the three-dimensional point cloud data and the two-dimensional image is obtained according to calibration parameters of the visible light camera in a calibration process (this process is a known technology and is not described in detail in this embodiment).
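For illustration, a minimal Python sketch of such a coordinate conversion is given below; the pinhole projection model, the function name project_points and the calibration values are assumptions for the example rather than the calibration procedure of this embodiment.

```python
import numpy as np

def project_points(points_xyz: np.ndarray, K: np.ndarray,
                   R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Project Nx3 lidar points into Nx2 pixel coordinates of the visible-light camera."""
    cam = points_xyz @ R.T + t          # lidar frame -> camera frame
    cam = cam[cam[:, 2] > 0]            # keep only points in front of the camera
    uv = cam @ K.T                      # pinhole projection with intrinsics K
    return uv[:, :2] / uv[:, 2:3]       # normalize by depth

# example with made-up calibration values
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
points = np.random.rand(100, 3) * np.array([10.0, 10.0, 20.0]) + np.array([0.0, 0.0, 1.0])
pixels = project_points(points, K, R, t)
```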
Step S002, according to the distribution relation of the similar areas between the continuous frames of visible light images of the two-dimensional image data, segments of the continuous frames of visible light image sections are obtained, and segments of the continuous frames of point cloud data sections are obtained.
In order to greatly reduce the amount of calculation in the matching and fusion of the three-dimensional point cloud data and the two-dimensional image data, while preserving the spatial local distribution characteristics of the three-dimensional point cloud data and the corresponding spatial relations between them, the feature distributions of the three-dimensional point cloud data and the two-dimensional image data are obtained to determine the corresponding regions of interest, and the coordinate conversion is carried out after interpolation based on the distribution characteristics within the regions of interest, so that the spatial local distribution characteristics of the three-dimensional point cloud data and the corresponding spatial relations are preserved.
In addition, because the scene changes continuously while the unmanned inspection vehicle is driving, the region of interest must first be obtained from the local distribution characteristics of the point cloud data before the two-dimensional image data and the three-dimensional point cloud data are fused, and interpolation is carried out according to the relation between the three-dimensional point cloud data in the regions of interest and the pixels of the two-dimensional image. To obtain the regions of interest, the distribution of time periods must first be obtained: within a single time period the image information is basically the same, with only small changes (changes of lane lines on the road and small changes in the distribution of vehicle information), so the frames in the same time period can be analyzed together; that is, the three-dimensional point cloud data and the two-dimensional images of consecutive frames in the corresponding time period are strongly correlated. Because the data volume of the three-dimensional point cloud data of consecutive frames is large, the interval segmentation is performed on the visible light images of consecutive frames; since the corresponding three-dimensional point cloud data are acquired at the same time as the visible light images, the interval segmentation of the point cloud data of consecutive frames is obtained as well.
Firstly, a single-frame visible light image is converted to gray scale to obtain a gray image. Edge detection is performed on the gray image with the Sobel operator to obtain all edge lines and an edge image; the edge image is processed with a morphological closing operation to connect broken edge lines and obtain a new edge image, which is used as a mask (the gray value of edge-line pixels in the edge image is 255 and that of non-edge pixels is 0). The edge lines in the new edge image are used as region segmentation lines to segment the corresponding single-frame visible light image: the pixel values of the pixels corresponding to the edge lines in the single-frame visible light image are set to 0, and each image segmentation region of the current frame is generated, yielding a number of image segmentation regions in the single-frame image. The same region segmentation processing is applied to the other consecutive frame images, so that the image segmentation regions contained in each frame image are obtained.
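A sketch of this region-segmentation step using OpenCV is given below; the gradient threshold, the 5×5 closing kernel and the use of connected components to label the non-edge regions are illustrative assumptions.

```python
import cv2
import numpy as np

def segment_frame(bgr: np.ndarray, edge_thresh: float = 60.0):
    """Sobel edges + morphological closing, then label the non-edge regions."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    edges = (cv2.magnitude(gx, gy) > edge_thresh).astype(np.uint8) * 255   # edge mask (255 on edges)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
    closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)              # connect broken edge lines
    num_labels, labels = cv2.connectedComponents((closed == 0).astype(np.uint8))
    return labels, num_labels - 1    # label 0 covers the edge lines; regions are 1..num_labels-1

labels, n_regions = segment_frame(cv2.imread("frame_0001.png"))            # placeholder file name
```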
Secondly, the structural similarity (SSIM) between the image segmentation regions of the previous and current frames is used: when the structural similarity between a segmentation region of the previous frame visible light image and a segmentation region of the current frame visible light image is maximal, the two image segmentation regions are recorded as an image region combination. Any image segmentation region of the j-th frame visible light image is marked as the target image segmentation region, the structural similarity between every image segmentation region of the (j-1)-th frame visible light image and the target region of the j-th frame is calculated, and the target region of the j-th frame together with the region of the (j-1)-th frame of maximum structural similarity is taken as an image region combination. Repeating this process yields all image region combinations between the (j-1)-th and j-th frame visible light images together with their structural similarities, and the mean of the structural similarity of all image region combinations of the two frames is obtained and recorded as the maximum similarity mean.
Then, for each visible light image other than the 1st frame (the j-th frame, j ≥ 2), the segmentation degree of that frame as a segmented frame is calculated from: the number of image segmentation regions in the (j-1)-th frame visible light image; the number of image segmentation regions in the j-th frame visible light image; and the maximum similarity mean, i.e. the mean structural similarity of all image region combinations of the (j-1)-th and j-th frame visible light images (the calculation formula of the structural similarity is a known technique and is not described in detail in this embodiment; note that the image region combinations in this embodiment are the combinations of image regions of the two frames with the largest structural similarity). The ratio of the numbers of image regions of two consecutive visible light frames characterizes the change of image content between them: the larger its difference from 1, the larger the change of image features between the two consecutive frames. A second term characterizes how dissimilar the two image regions of each image combination of two consecutive frames are: the larger its value, the less similar the image features of the two frames. The maximum similarity mean, as an index of the overall similarity of the two images, is adjusted by these terms as weights, and the resulting segmentation degree reflects the degree of similarity between the j-th frame image and its previous frame image: the higher the similarity, the more consistent the image information of the j-th frame visible light image and the previous frame visible light image, the more the two frames can be analyzed as the same time period, and the more likely the j-th frame visible light image is an image of that period, recorded as a segmented frame.
Finally, according to the above calculation process, the segmentation degree of every visible light image other than the 1st frame is obtained, and a segmentation degree threshold is set (the embodiment gives an empirical reference value, which may be adjusted according to the implementation). If the segmentation degree of a certain frame of visible light image is greater than the threshold, the frame is a segmented frame.
Thus the segments of the continuous-frame visible light image intervals are obtained from the distribution of similar regions between consecutive visible light frames, and the segments of the continuous-frame point cloud data intervals are obtained at the same time: consecutive visible light frames whose segmentation degree is greater than the segmentation degree threshold are recorded as a similar image group, the time period covered by the similar image group is recorded as a segmented time interval, and all segmented time intervals are obtained.
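The following sketch illustrates this segmentation step. The pairing of regions by maximal SSIM follows the description above; the concrete functional form of the segmentation degree and the threshold value are assumptions, since the patent gives the formula only as an image.

```python
import cv2
import numpy as np
from skimage.metrics import structural_similarity as ssim

def region_patches(gray, labels, size=64):
    """Crop each labelled region's bounding box and resize it to a fixed patch for SSIM."""
    patches = []
    for lab in range(1, labels.max() + 1):
        ys, xs = np.nonzero(labels == lab)
        if len(ys) == 0:
            continue
        crop = gray[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
        patches.append(cv2.resize(crop, (size, size)))
    return patches

def segmentation_degree(prev_patches, cur_patches):
    """Mean of the maximal SSIM over all region combinations, discounted by the count change."""
    sims = [max(ssim(c, p, data_range=255) for p in prev_patches) for c in cur_patches]
    count_ratio = len(cur_patches) / max(len(prev_patches), 1)
    return float(np.mean(sims)) / (1.0 + abs(count_ratio - 1.0))   # assumed combination

def split_into_intervals(degrees, threshold=0.8):
    """Consecutive frames whose degree exceeds the threshold form one segmented time interval."""
    intervals, current = [], []
    for frame_idx, d in enumerate(degrees, start=2):   # degrees are defined from frame 2 on
        if d > threshold:
            current.append(frame_idx)
        elif current:
            intervals.append(current)
            current = []
    if current:
        intervals.append(current)
    return intervals
```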
Step S003, the region of interest is obtained according to the local distribution characteristics of the point cloud data in each segment, and data interpolation is carried out in the region of interest.
Within each segmented interval, the aggregation degree of the point cloud data differs, so large projection errors can appear after projection to two-dimensional coordinates: the more complex the local structure of the point cloud data, the larger the probability of deviation when it is projected onto the two-dimensional plane, and the more three-dimensional information features the point cloud data with larger structural complexity loses after projection. In the projection process, in order to preserve the spatial local distribution characteristics and the corresponding spatial relations of the point cloud data, the local features of a point cloud after projection should remain basically the same as its local features in the corresponding three-dimensional space; that is, additional point clouds should be inserted at certain positions so that the local relations are preserved after projection. Therefore, the position of the region of interest is obtained from the local distribution characteristics of the point cloud data, and interpolation is performed in the region of interest, so that the local feature relations in three-dimensional space are preserved after projection.
K-Means clustering is performed on each frame of three-dimensional point cloud data in each segmented time interval, where the number of clusters K is set to the number N of image segmentation regions in the corresponding frame of the two-dimensional visible light image. This yields a number of point cloud clusters of the three-dimensional point cloud data, recorded as point cloud clusters; any one of them is the i-th point cloud cluster of the j-th frame of three-dimensional point cloud data in the corresponding segmented time interval. The structural complexity of each point cloud cluster in the single-frame three-dimensional point cloud data is then calculated: take any point cloud cluster as the target point cloud cluster, randomly select R point clouds in the target point cloud cluster (R = 20 empirically), and use the R point clouds as starting centers respectively;
taking any one of the R starting centers as an example, the 2 point clouds closest to the starting center are acquired to form a three-dimensional plane, and the normal vector n1 of the three-dimensional plane is acquired. Then, without considering the point clouds that have already participated in fitting the three-dimensional plane, the point cloud closest to the three-dimensional plane is acquired, a new three-dimensional plane is fitted by the least-squares plane fitting method, and its normal vector n2 is acquired (least-squares plane fitting is a known technique and is not described in detail in this embodiment). With this operation, all points in the point cloud cluster participate in the three-dimensional plane fitting calculation and the normal vector of the final plane is obtained. When a plane is fitted successively several times starting from any one of the R point clouds, a sequence of normal vectors is obtained; the final number of fits when fitting stops is recorded, and the normal vector obtained after the m-th fit is recorded as nm. Three-dimensional planes are fitted in this way for all R starting centers.
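A sketch of this successive plane fitting is given below; the SVD-based least-squares plane fit is a standard choice assumed here, and the fitted set is grown by the point nearest to the current plane, as described above.

```python
import numpy as np

def fit_plane_normal(points: np.ndarray) -> np.ndarray:
    """Unit normal of the least-squares plane through an Nx3 point set (N >= 3)."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[-1]                                   # direction of smallest variance

def successive_normals(cluster: np.ndarray, start_idx: int):
    """Grow the fitted set from one starting centre; return the list of plane normals."""
    dists = np.linalg.norm(cluster - cluster[start_idx], axis=1)
    used = list(np.argsort(dists)[:3])              # starting centre + its 2 nearest points
    remaining = [i for i in range(len(cluster)) if i not in used]
    normals = [fit_plane_normal(cluster[used])]
    while remaining:
        n = normals[-1]
        centroid = cluster[used].mean(axis=0)
        plane_d = np.abs((cluster[remaining] - centroid) @ n)   # distance to current plane
        used.append(remaining.pop(int(np.argmin(plane_d))))     # add the nearest remaining point
        normals.append(fit_plane_normal(cluster[used]))         # refit and record the normal
    return normals
```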
The point cloud cluster of the corresponding three-dimensional point cloud data acquired from the target starting center is recorded as the i-th point cloud cluster of the j-th frame of three-dimensional point cloud data in the corresponding segmented time interval, and its structural complexity is obtained as follows: the r-th of the R randomly selected starting centers is used as the starting center of the three-dimensional plane fitting; the number of fits is the number of times the i-th point cloud cluster of the j-th frame in the segmented time interval fits a plane; the complexity factor is the cosine similarity between the plane normal vector of the (m-1)-th fitted plane and the plane normal vector of the m-th fitted plane acquired under the r-th starting center; and the first structure complexity is the structure complexity of the i-th point cloud cluster of the j-th frame of three-dimensional point cloud data acquired with the r-th starting center, accumulated from these complexity factors over all fits. If the normal vector of the plane after each fit differs little from the normal vector of the previous fit, i.e. the cosine similarity of the two plane normal vectors differs little from 1, the distribution directions of the two corresponding three-dimensional planes are similar; conversely, the larger the difference, the more dissimilar the distribution directions of the two three-dimensional planes. As the accumulation proceeds, the larger the accumulated result, the more scattered in three-dimensional space the point clouds forming the three-dimensional planes are and the less they can be attributed to one common three-dimensional plane, i.e. the greater the structural complexity of the corresponding point cloud cluster.

Using the above method of calculating the structural complexity with a certain point cloud as the starting center, the structural complexity is calculated for each of the R starting centers of the i-th point cloud cluster of the j-th frame of three-dimensional point cloud data, and the mean of the first structure complexity over all starting centers of the i-th point cloud cluster of the j-th frame is obtained.
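A sketch of the first structure complexity, reusing successive_normals from the previous sketch, is shown below; accumulating one minus the absolute cosine similarity of consecutive normals is an assumed concrete form, since the patent shows the exact formula only as an image.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def first_structure_complexity(normals) -> float:
    """Accumulate how far consecutive fitted-plane normals are from parallel."""
    return sum(1.0 - abs(cosine_similarity(n1, n2))        # abs() because SVD normal sign is arbitrary
               for n1, n2 in zip(normals[:-1], normals[1:]))

def mean_first_complexity(cluster: np.ndarray, n_starts: int = 20, seed: int = 0) -> float:
    """Average the complexity over several randomly chosen starting centres (R = 20)."""
    rng = np.random.default_rng(seed)
    starts = rng.choice(len(cluster), size=min(n_starts, len(cluster)), replace=False)
    return float(np.mean([first_structure_complexity(successive_normals(cluster, int(s)))
                          for s in starts]))
```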
Although the image feature information within an interval is similar, the relative position of each point cloud cluster in the acquired point cloud data may change while the unmanned inspection vehicle is driving. The larger the change of the relative position of the point cloud clusters of two adjacent frames, the larger the possibility that the point cloud data in those clusters produce errors in the corresponding image segmentation regions. In addition, point cloud clusters that are closer to the bottom (corresponding to the road surface) or closer to the top (corresponding to the sky) are less important to the driving of the unmanned inspection vehicle.
The three-dimensional point cloud data is converted into two-dimensional point cloud data using the conversion relation between the three-dimensional point cloud data and the two-dimensional visible light image; the point clouds contained in each point cloud cluster are unchanged, but the cluster becomes a two-dimensional point cloud cluster. The centroids of the two-dimensional point cloud clusters and of the image segmentation regions in the two-dimensional visible light image are recorded as point cloud cluster centers and region centers, respectively. For each point cloud cluster center, the distances to the region centers are calculated, and the point cloud cluster and the image segmentation region whose centers are closest are regarded as corresponding to each other. In addition, the previous-frame and current-frame point cloud clusters corresponding to the image region combination formed by the previous-frame and current-frame image segmentation regions with the largest structural similarity are recorded as a combined point cloud cluster, i.e. the i-th point cloud cluster of the (j-1)-th frame of three-dimensional point cloud data and the i-th point cloud cluster of the j-th frame of three-dimensional point cloud data correspond to each other.
In addition, the time interval Δt between adjacent frames of the three-dimensional point cloud data during acquisition, the centroid coordinates of the whole three-dimensional point cloud of each frame, and the centroid coordinates of each combined point cloud cluster in the three-dimensional point cloud data of all previous and current frames are acquired. This embodiment corrects the structural complexity of the point cloud clusters through the overall change of the point cloud clusters of consecutive frames, where the offset correction value P_{j,i} of the first structure complexity of the i-th point cloud cluster of the j-th frame of three-dimensional point cloud data is calculated as:

$$P_{j,i}=\frac{\sqrt{(x_{j,i}-x_{j-1,i})^{2}+(y_{j,i}-y_{j-1,i})^{2}+(z_{j,i}-z_{j-1,i})^{2}}}{\Delta t}\cdot d_{j,i}$$
where (x_{j-1,i}, y_{j-1,i}, z_{j-1,i}) denotes the centroid coordinates of the i-th point cloud cluster of the (j-1)-th frame of three-dimensional point cloud data in the combined point cloud cluster; (x_{j,i}, y_{j,i}, z_{j,i}) denotes the centroid coordinates of the i-th point cloud cluster of the j-th frame in the combined point cloud cluster; Δt denotes the acquisition interval between the (j-1)-th and j-th frames of three-dimensional point cloud data; the first factor is the centroid change rate, i.e. the average rate of change of the centroid position of the i-th point cloud cluster between the (j-1)-th and j-th frames of three-dimensional point cloud data over the data acquisition interval; and d_{j,i} denotes the Euclidean distance between the centroid of the i-th point cloud cluster of the j-th frame of three-dimensional point cloud data and the centroid of the whole j-th frame of three-dimensional point cloud data. The larger d_{j,i}, the more the point cloud cluster lies in the edge area of the whole point cloud data, i.e. the closer it is to the bottom or the top, and the less important it is. The centroid change rate characterizes the change of the relative position of the same object in adjacent frames: the larger the change of the relative position in adjacent frames, the larger the possibility of error, and the more the corresponding structural complexity needs to be increased (the structural complexity is later used to obtain the region of interest: the greater the structural complexity of a region, the more likely it is a region of interest). The centroid change rate, multiplied by d_{j,i} as the degree of deviation, therefore represents the correction value of the structural complexity to be acquired.
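A sketch of the offset correction value follows; it directly implements the product of the centroid change rate and the cluster-to-frame-centroid distance described above, with illustrative variable names.

```python
import numpy as np

def offset_correction(cluster_prev: np.ndarray, cluster_cur: np.ndarray,
                      frame_cur: np.ndarray, dt: float) -> float:
    """Offset correction value of one combined point cloud cluster between frames j-1 and j."""
    c_prev = cluster_prev.mean(axis=0)       # centroid of the cluster in frame j-1
    c_cur = cluster_cur.mean(axis=0)         # centroid of the same (combined) cluster in frame j
    c_frame = frame_cur.mean(axis=0)         # centroid of the whole frame-j point cloud
    centroid_rate = np.linalg.norm(c_cur - c_prev) / dt      # centroid change rate
    edge_distance = np.linalg.norm(c_cur - c_frame)          # distance to the whole-frame centroid
    return float(centroid_rate * edge_distance)
```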
The correction values of the structural complexity of all point cloud clusters of the acquired j-th frame of three-dimensional point cloud data are linearly normalized to obtain normalized correction values. The final structural complexity of the i-th point cloud cluster of the j-th frame of three-dimensional point cloud data, recorded as the second structure complexity, is obtained by correcting the mean of the first structure complexity of the i-th point cloud cluster of the j-th frame with its normalized correction value.
According to the second structure complexity of each point cloud cluster of the current frame, a structure complexity threshold is set (the embodiment gives an empirical reference value, which may be adjusted according to the implementation). If the second structure complexity of a point cloud cluster is greater than the set threshold, the point cloud cluster is set as a region of interest.
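A sketch of the correction and region-of-interest selection is given below; applying the normalized correction as a multiplicative (1 + w) factor and the threshold value are assumptions, since the exact combination appears only as a formula image in the original.

```python
import numpy as np

def second_structure_complexity(mean_complexities, corrections):
    """Scale each cluster's mean first complexity by its linearly normalized correction."""
    corrections = np.asarray(corrections, dtype=float)
    span = corrections.max() - corrections.min()
    w = (corrections - corrections.min()) / span if span > 0 else np.zeros_like(corrections)
    return np.asarray(mean_complexities, dtype=float) * (1.0 + w)   # assumed correction form

def select_regions_of_interest(second_complexities, threshold: float):
    """Indices of the point cloud clusters whose second complexity exceeds the threshold."""
    return [i for i, c in enumerate(second_complexities) if c > threshold]
```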
Each point cloud of a single region of interest is processed to perform data interpolation. The purpose of the interpolation is to ensure that local spatial information is not lost when the point cloud data is projected onto the image plane, so the corresponding point cloud interpolation needs to be performed in the region of interest; the point clouds located at positions of critical structural change in the region of interest are where interpolation is most needed. Therefore, in this embodiment, within the region of interest, the points for which the cosine similarity calculated in the plane-fitting step above changes most are taken as positions of critical structural change. A cosine similarity threshold is set for the plane-fitting process (the embodiment gives an empirical reference value, which may be adjusted according to the implementation); the cosine similarity of the plane normal vectors before and after a point participates in the plane fit is obtained, and if it is smaller than the set threshold, the point cloud is located at a position of critical structural change. A local range of that point cloud position is then interpolated, where the local range is determined as the sphere whose radius r is the Euclidean distance between the point cloud at the critical position and its nearest point cloud.
The local range (the sphere of radius r) of the point cloud at the current critical structural change position is analyzed: positions on the connecting lines between this point cloud and the other point clouds within the local range are interpolated, which preserves the local relations of the point cloud in the projection process. When interpolating, the aggregation degree of the points in the local range is calculated: the larger the aggregation degree of point clouds around a point cloud, the more important that point cloud is within the whole local range, i.e. the more its local relations need to be preserved, and the more the corresponding interpolation position needs to be biased towards that point cloud.
The specific process is as follows: the point cloud at any one critical structural change position is recorded as the target point cloud. For the k-th point cloud within the local range of the target point cloud, the sphere of radius r' around it (r' is an empirical reference value that may be adjusted according to the implementation) and the sphere of radius r' around the target point cloud are considered, and the point cloud densities of the two sphere ranges are calculated (when counting the densities, only the point clouds within the local range of the target point cloud are counted, so point clouds of the two sphere ranges that lie outside the local range are not analyzed). The point cloud density of the sphere range of the k-th point cloud is obtained by dividing the number of point clouds in its sphere range by the number of all point clouds in the local sphere range of the target point cloud, and the density of the point clouds in the sphere range of the target point cloud is calculated by the same operation. The interpolation weight of the k-th point cloud and the interpolation weight of the target point cloud are then obtained from the proportions of these two densities, and on the connecting line between the target point cloud and the k-th point cloud the interpolation is placed at the position whose distance from the target point cloud is their Euclidean distance scaled by the interpolation weight. The interpolation operation on the point cloud at the critical structural change position is thus realized with the interpolation weight and the interpolation position; by applying this operation to the point clouds at all critical structural change positions in the region of interest, the three-dimensional point cloud data after data interpolation is obtained.
And obtaining the region of interest according to the local distribution characteristics of the point cloud data in each segment, and realizing the interpolation of the point cloud data in the region of interest.
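A sketch of the interpolation inside one region of interest follows; the weight rho_k / (rho_k + rho_0) is an assumed reading of the density proportions described above, since the exact expression appears only as an image.

```python
import numpy as np

def interpolate_around(target: np.ndarray, cloud: np.ndarray,
                       r: float, r_small: float) -> np.ndarray:
    """Insert one point on each segment between the target and its local neighbours."""
    d_to_target = np.linalg.norm(cloud - target, axis=1)
    local = cloud[(d_to_target > 0) & (d_to_target <= r)]      # neighbours in the local sphere
    if len(local) == 0:
        return np.empty((0, 3))
    total = len(local)
    rho_0 = np.sum(np.linalg.norm(local - target, axis=1) <= r_small) / total   # target-side density
    new_points = []
    for p in local:
        rho_k = np.sum(np.linalg.norm(local - p, axis=1) <= r_small) / total    # neighbour-side density
        w_k = rho_k / (rho_k + rho_0) if (rho_k + rho_0) > 0 else 0.5
        new_points.append(target + w_k * (p - target))   # biased toward the denser side
    return np.asarray(new_points)
```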
And S004, carrying out coordinate conversion on the interpolated point cloud data, converting the point cloud data into a two-dimensional plane image, further carrying out three-dimensional point cloud image fusion, and realizing intelligent automatic inspection of the unmanned inspection vehicle by utilizing a neural network.
According to the coordinate conversion relation between the three-dimensional point cloud data and the two-dimensional plane image data obtained in step S001, coordinate projection conversion is performed and the interpolated three-dimensional point cloud data is projected into the two-dimensional plane image data; the pixel value of each pixel in the two-dimensional plane image data is assigned to the pixel at the projected point cloud position, and the three-dimensional point cloud data and the two-dimensional plane image data are fused. The fused image contains both three-dimensional point cloud information and color information and accurately represents the objective features of a target, so that the intelligent auxiliary inspection system of the unmanned inspection vehicle can accurately detect the target and provide better inspection route assistance. The specific inspection assistance method is as follows:
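A minimal sketch of this fusion step is given below; representing the fused result as the color image stacked with a sparse depth channel built from the projected points is an assumed concrete encoding, not prescribed by the embodiment.

```python
import numpy as np

def fuse(points_xyz: np.ndarray, image: np.ndarray, K, R, t):
    """Project interpolated points into the image; attach pixel colors and a depth channel."""
    cam = points_xyz @ R.T + t
    cam = cam[cam[:, 2] > 0]                       # points in front of the camera
    uv = cam @ K.T
    uv = uv[:, :2] / uv[:, 2:3]
    h, w = image.shape[:2]
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    inb = (u >= 0) & (u < w) & (v >= 0) & (v < h)  # keep projections inside the image
    u, v, z = u[inb], v[inb], cam[inb, 2]
    colors = image[v, u]                           # pixel value assigned to each projected point
    depth = np.zeros((h, w), dtype=np.float32)
    depth[v, u] = z                                # sparse depth channel from the point cloud
    fused = np.dstack([image.astype(np.float32), depth])
    return colors, fused
```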
firstly, constructing a VGGNet convolutional neural network which takes an image fused by point cloud data and a visible light image as input and takes patrol instruction information such as obstacle avoidance, forward, backward, parking and the like as output;
secondly, taking the point cloud data of any road traffic and the image data fused with the visible light image as samples, giving artificial tags containing inspection instruction information such as obstacle avoidance, forward, backward and parking to the fused images by professionals in the industry, taking a large number of fused image samples with the artificial tags as data sets, and training the VGGNet convolutional neural network;
finally, the VGGNet convolutional neural network after training is put into use, so that parking or backing is realized when pedestrians and animals appear in front of or behind the unmanned patrol vehicle, obstacle avoidance is realized when obstacles appear, and forward intelligent automatic patrol is performed during normal running, so that better patrol route assistance is provided.
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, alternatives, and improvements that fall within the spirit and scope of the invention.

Claims (6)

1. The three-dimensional point cloud image fusion method based on computer vision is characterized by comprising the following steps of:
acquiring three-dimensional point cloud data and a visible light image, acquiring image segmentation areas according to edges in the visible light image, and combining the image segmentation areas with the largest adjacent frame structure similarity to acquire the average value of the structure similarity of all the image area combinations;
obtaining the segmentation degree of each frame of visible light image according to the average value of the number proportion relation and the structural similarity of the image segmentation areas in the adjacent frames of visible light images, and obtaining a segmentation time interval according to the segmentation degree of all frames;
clustering three-dimensional point cloud data of each frame, selecting three point clouds as an initial point in each point cloud cluster, gradually increasing the number of the point clouds, gradually fitting the normal vector of the three-dimensional plane where the point clouds are located and fitting times, obtaining an initial first structure complexity according to the cosine similarity and fitting times of the normal vector of the corresponding three-dimensional plane adjacent to two times, and selecting a plurality of initial points in each point cloud cluster to obtain the average value of the first structure complexity of all initial points;
in the process of successively fitting the three-dimensional plane in which the point clouds are located, each point cloud corresponds to a normal vector of the three-dimensional plane before and after the point clouds participate in fitting;
acquiring a time interval between adjacent frames during three-dimensional point cloud data acquisition, taking the ratio of the centroid distance of the point cloud clusters of the adjacent frames to the time interval as a centroid change rate, and taking the product of the distance between the centroid coordinates of the point cloud clusters and the whole three-dimensional point cloud data and the centroid change rate as an offset correction value;
correcting the average value of the first structural complexity by using the offset correction value to obtain a second structural complexity;
obtaining an interested region according to the complexity of the second structure, obtaining target point clouds in the interested region according to the cosine similarity change difference of each point cloud before and after the point clouds participate in fitting, obtaining interpolation weights according to the specific gravity of the point cloud densities of the target point clouds at different positions in the neighborhood, and carrying out data interpolation between the target point clouds by utilizing the interpolation weights to obtain new three-dimensional point cloud data;
fusing the new three-dimensional point cloud data with the visible light image to obtain a fused image, and realizing intelligent inspection assistance by using the fused image;
the segmentation degree is obtained by the following steps:
[Formula: segmentation degree — published as an image in the original document]

wherein the quantities involved are: the number of image areas in the (j-1)-th frame visible light image; the number of image areas in the j-th frame visible light image; and the maximum similarity mean, namely the mean value of the structural similarity of all image area combinations of the (j-1)-th frame and the j-th frame visible light images;
the segmented time interval is obtained by the following steps:
taking the visible light images whose segmentation degree is continuously greater than a first preset threshold value among the continuous frames of visible light images as segmented frames, taking the left and right sides of each group of segmented frames as the end points of an interval, and recording the interval consisting of the end points and the visible light images contained between them as a segmented time interval, so as to obtain a plurality of segmented time intervals of the continuous frames of visible light images;
the first structure complexity is obtained by the following steps:
taking any one point cloud cluster as a target point cloud cluster, randomly selecting a plurality of point clouds in the target point cloud cluster, and taking any one of them as a target starting center; acquiring the point clouds closest to the target starting center to form a three-dimensional plane and acquiring the normal vector of this three-dimensional plane; then, without considering the point clouds that have already participated in fitting, acquiring the point cloud closest to the current three-dimensional plane, fitting a new three-dimensional plane by the least-squares plane fitting method and acquiring its normal vector; and so on, until all points in the point cloud cluster have participated in the calculation;
first structural complexity:
[Formula: first structural complexity — published as an image in the original document]

wherein the quantities involved are: the number of point clouds randomly selected as initial centers of the three-dimensional plane fitting; the number of times the plane is fitted in the n-th point cloud cluster of the j-th frame of three-dimensional point cloud data; and the complexity factor, namely the cosine similarity between the plane normal vectors of two adjacent fitted planes obtained under the same initial center; the first structural complexity represents the structural complexity of the n-th point cloud cluster of the j-th frame of three-dimensional point cloud data obtained by using the plurality of initial centers.
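For illustration only, a minimal sketch of the incremental least-squares plane fitting and normal-vector comparison described in claim 1 follows; the function names are hypothetical, and the aggregation of the per-fit cosine similarities into a single score is an assumption, since the patented formula itself is published as an image.

```python
import numpy as np

def fit_plane_normal(points):
    """Least-squares plane normal of an (M, 3) point set via SVD of the centred points."""
    centred = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return vt[-1] / np.linalg.norm(vt[-1])   # singular vector of the smallest singular value

def first_structure_complexity(cluster, start_idx):
    """Grow a plane fit point by point from one start centre and record how much
    the fitted normal turns between consecutive fits.

    cluster   : (P, 3) points of one point-cloud cluster (P >= 3).
    start_idx : index of the point used as the initial centre.
    Returns the mean of (1 - |cosine similarity|) over consecutive fits
    (an assumed aggregation of the per-fit similarities).
    """
    used = [start_idx]
    remaining = [i for i in range(len(cluster)) if i != start_idx]
    # Seed with the two nearest neighbours so the first plane is defined by 3 points.
    d = np.linalg.norm(cluster[remaining] - cluster[start_idx], axis=1)
    for k in np.argsort(d)[:2]:
        used.append(remaining[k])
    remaining = [i for i in remaining if i not in used]

    normal = fit_plane_normal(cluster[used])
    turns = []
    while remaining:
        # Next point: the not-yet-used point closest to the current fitted plane.
        centroid = cluster[used].mean(axis=0)
        dist_to_plane = np.abs((cluster[remaining] - centroid) @ normal)
        used.append(remaining.pop(int(np.argmin(dist_to_plane))))
        new_normal = fit_plane_normal(cluster[used])
        turns.append(1.0 - abs(float(np.dot(normal, new_normal))))  # normals are sign-ambiguous
        normal = new_normal
    return float(np.mean(turns)) if turns else 0.0
```

Averaging this score over several randomly chosen start centres gives the mean first structural complexity of the cluster referred to in claim 1.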
2. The method for fusing three-dimensional point cloud images based on computer vision according to claim 1, wherein the method for obtaining the mean value of the structural similarity is as follows:
denoting any one image segmentation area in the (j-1)-th frame visible light image as a target image segmentation area, and calculating the structural similarity between every image segmentation area in the j-th frame visible light image and the target image segmentation area of the (j-1)-th frame; taking the target image segmentation area and the image segmentation area with the maximum structural similarity in the j-th frame image as an image area combination; traversing the (j-1)-th frame visible light image and the j-th frame image to obtain all image area combinations and the corresponding structural similarities, and obtaining the mean value of the structural similarity of all image area combinations of the (j-1)-th frame and the j-th frame visible light images.
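A minimal sketch of the region pairing in claim 2, under the assumption that each segmentation region is supplied as a grayscale patch; the single-window SSIM used here is a simplification of the usual windowed structural similarity, and all names are illustrative.

```python
import numpy as np

def global_ssim(a, b, L=255.0):
    """Single-window SSIM between two equal-size grayscale patches."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    a, b = a.astype(float), b.astype(float)
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))

def resize_nn(patch, shape):
    """Nearest-neighbour resize so patches of different sizes can be compared."""
    rows = np.linspace(0, patch.shape[0] - 1, shape[0]).astype(int)
    cols = np.linspace(0, patch.shape[1] - 1, shape[1]).astype(int)
    return patch[rows][:, cols]

def mean_best_pair_similarity(regions_prev, regions_curr, size=(64, 64)):
    """For every region of frame j-1, find the most similar region of frame j
    and return the mean of those maximal similarities."""
    best = []
    for r_prev in regions_prev:
        a = resize_nn(r_prev, size)
        sims = [global_ssim(a, resize_nn(r_curr, size)) for r_curr in regions_curr]
        best.append(max(sims))
    return float(np.mean(best))
```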
3. The method for fusing three-dimensional point cloud images based on computer vision according to claim 1, wherein the offset correction value is obtained by the following method:
offset correction value:
[Formula: offset correction value — published as an image in the original document]

wherein the quantities involved are: the coordinates of the centroid of the n-th point cloud cluster of the (j-1)-th frame of three-dimensional point cloud data among the matched point cloud clusters; the coordinates of the centroid of the n-th point cloud cluster of the j-th frame of three-dimensional point cloud data; the interval time for acquiring the (j-1)-th and j-th frames of three-dimensional point cloud data; the centroid change rate, namely the average change rate of the centroid position of the n-th point cloud cluster between the (j-1)-th and j-th frames of three-dimensional point cloud data within the data acquisition interval; the centroid coordinates of the whole j-th frame of three-dimensional point cloud data; and the Euclidean distance between the centroid of the n-th point cloud cluster of the j-th frame of three-dimensional point cloud data and the centroid of the whole three-dimensional point cloud data.
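A minimal sketch of the offset correction value as verbally defined in claim 3 (centroid change rate multiplied by the cluster-to-whole-cloud centroid distance); cluster matching across frames is assumed to be given, and the function name is illustrative.

```python
import numpy as np

def offset_correction_value(cluster_prev, cluster_curr, frame_curr_all, dt):
    """Offset correction value for one matched point-cloud cluster.

    cluster_prev   : (P, 3) points of the matched cluster in frame j-1.
    cluster_curr   : (Q, 3) points of the same cluster in frame j.
    frame_curr_all : (N, 3) all points of frame j.
    dt             : acquisition interval between frame j-1 and frame j.
    """
    c_prev = cluster_prev.mean(axis=0)        # cluster centroid, frame j-1
    c_curr = cluster_curr.mean(axis=0)        # cluster centroid, frame j
    c_all = frame_curr_all.mean(axis=0)       # centroid of the whole frame-j cloud
    rate = np.linalg.norm(c_curr - c_prev) / dt          # centroid change rate
    return rate * np.linalg.norm(c_curr - c_all)         # product with the offset distance
```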
4. The method for fusing three-dimensional point cloud images based on computer vision according to claim 1, wherein the second structure complexity is obtained by the following steps:
[Formula: second structural complexity — published as an image in the original document]

wherein the quantities involved are: the linearly normalized correction value of the structural complexity of the n-th point cloud cluster of the j-th frame of three-dimensional point cloud data; and the mean value of the first structural complexity of the n-th point cloud cluster of the j-th frame of three-dimensional point cloud data.
5. The method for fusing the three-dimensional point cloud images based on computer vision according to claim 1, wherein the new three-dimensional point cloud data is obtained by the following steps:
when the second structure complexity value of the point cloud cluster is larger than a second preset threshold value, setting the point cloud cluster as an interested area;
in the region of interest, the point clouds whose cosine similarity changes greatly between adjacent fits of the three-dimensional plane are regarded as positions of key structural change: if the cosine similarity is smaller than a preset threshold value, the corresponding point cloud is located at a position of key structural change; data interpolation is carried out within a sphere centered on each point cloud located at a position of key structural change, the sphere radius being the Euclidean distance between that point cloud and the nearest other point cloud located at a position of key structural change; positions on the connecting lines between the point cloud and the other point clouds within the sphere are interpolated, interpolation weights are obtained according to the proportion of the point cloud densities within different sphere ranges, the Euclidean distance between the target point cloud and the other point clouds is reduced by using the interpolation weights to obtain the interpolation position of the interpolated data, and data interpolation is carried out at the interpolation position on the line segment between each other point cloud and the target point cloud, so as to obtain the three-dimensional point cloud data after data interpolation.
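A minimal sketch of the sphere-constrained interpolation of claim 5; the density-ratio weight below is an assumed stand-in for the patented interpolation weight, whose exact formula is published as an image, and the fixed density radius is an assumption.

```python
import numpy as np

def densify_around_key_points(points, key_idx, density_radius=0.2):
    """Insert new points on segments between each key point and its in-sphere neighbours.

    points  : (N, 3) point cloud of one region of interest.
    key_idx : indices of points flagged as structure-change key positions.
    Returns the original points plus the interpolated ones.
    """
    def local_density(i):
        # Number of points within a small fixed radius around point i (includes i itself).
        return np.sum(np.linalg.norm(points - points[i], axis=1) < density_radius)

    new_pts = []
    key_idx = list(key_idx)
    for i in key_idx:
        others = [k for k in key_idx if k != i]
        if not others:
            continue
        # Sphere radius: distance to the nearest *other* key point.
        radius = min(np.linalg.norm(points[k] - points[i]) for k in others)
        dists = np.linalg.norm(points - points[i], axis=1)
        for j in np.where((dists > 0) & (dists <= radius))[0]:
            # Assumed density-based weight deciding where on the segment the new point lands.
            w = local_density(i) / (local_density(i) + local_density(j))
            new_pts.append(points[i] + w * (points[j] - points[i]))
    return np.vstack([points, np.array(new_pts)]) if new_pts else points
```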
6. The method for fusing the three-dimensional point cloud images based on computer vision according to claim 1, wherein the intelligent inspection assistance is realized by utilizing the fused images, specifically comprising the following steps:
according to the coordinate conversion relation between the three-dimensional point cloud data and the two-dimensional plane image data, the interpolated three-dimensional point cloud data are projected into the two-dimensional plane image data, the pixel value of each pixel point in the two-dimensional plane image data is assigned to the pixel point at the projected point cloud position, and the three-dimensional point cloud data and the two-dimensional plane image data are fused; the fused image is input into a convolutional neural network to obtain inspection instruction information for obstacle avoidance, forward, backward and parking, thereby realizing intelligent inspection assistance.
CN202310363382.3A 2023-04-07 2023-04-07 Three-dimensional point cloud image fusion method based on computer vision Active CN116071283B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310363382.3A CN116071283B (en) 2023-04-07 2023-04-07 Three-dimensional point cloud image fusion method based on computer vision

Publications (2)

Publication Number Publication Date
CN116071283A CN116071283A (en) 2023-05-05
CN116071283B true CN116071283B (en) 2023-06-16

Family

ID=86171829

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310363382.3A Active CN116071283B (en) 2023-04-07 2023-04-07 Three-dimensional point cloud image fusion method based on computer vision

Country Status (1)

Country Link
CN (1) CN116071283B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116561216B (en) * 2023-07-04 2023-09-15 湖南腾琨信息科技有限公司 Multi-dimensional space-time data visualization performance optimization method and system
CN116824526B (en) * 2023-08-29 2023-10-27 广州视安智能科技有限公司 Digital intelligent road monitoring system based on image processing
CN116993627B (en) * 2023-09-26 2023-12-15 山东莱恩光电科技股份有限公司 Laser scanning image data correction method

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112346073A (en) * 2020-09-25 2021-02-09 中山大学 Dynamic vision sensor and laser radar data fusion method

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111274976B (en) * 2020-01-22 2020-09-18 清华大学 Lane detection method and system based on multi-level fusion of vision and laser radar
KR102338665B1 (en) * 2020-03-02 2021-12-10 건국대학교 산학협력단 Apparatus and method for classficating point cloud using semantic image
CN112396650B (en) * 2020-03-30 2023-04-07 青岛慧拓智能机器有限公司 Target ranging system and method based on fusion of image and laser radar
CN111337947B (en) * 2020-05-18 2020-09-22 深圳市智绘科技有限公司 Instant mapping and positioning method, device, system and storage medium
CN115761550A (en) * 2022-12-20 2023-03-07 江苏优思微智能科技有限公司 Water surface target detection method based on laser radar point cloud and camera image fusion
CN115830015B (en) * 2023-02-09 2023-04-25 深圳市威祥五金制品有限公司 Hardware stamping accessory quality detection method based on computer vision

Also Published As

Publication number Publication date
CN116071283A (en) 2023-05-05

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant