CN116071283B - Three-dimensional point cloud image fusion method based on computer vision - Google Patents
- Publication number
- CN116071283B CN116071283B CN202310363382.3A CN202310363382A CN116071283B CN 116071283 B CN116071283 B CN 116071283B CN 202310363382 A CN202310363382 A CN 202310363382A CN 116071283 B CN116071283 B CN 116071283B
- Authority
- CN
- China
- Prior art keywords
- point cloud
- image
- dimensional
- frame
- data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/761—Proximity, similarity or dissimilarity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/762—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
Abstract
The invention relates to the technical field of image data processing, and in particular to a three-dimensional point cloud image fusion method based on computer vision, comprising the following steps: segmenting continuous frame images according to the texture features of adjacent frames, acquiring a region of interest from the local features of the point cloud data in each segmented interval, and performing data interpolation according to the local density change of the region of interest. By interpolating the point cloud data, the invention avoids the loss of spatial characteristics that occurs when traditional point cloud data are fused with images, achieves a more accurate target detection effect, and provides more accurate and stable intelligent automatic inspection assistance.
Description
Technical Field
The invention relates to the technical field of image data processing, in particular to a three-dimensional point cloud image fusion method based on computer vision.
Background
With the development of science and technology, intelligent management has been implemented in many enterprise parks, where campus security is a major concern. With the popularization of unmanned inspection vehicles, such vehicles often replace traditional methods such as manual inspection in a park, greatly improving security efficiency and reducing operating costs. During inspection, the unmanned inspection vehicle must collect real-time image information in the campus, but the digital images collected by a single image acquisition device cannot meet practical application requirements, so multiple sensors must work cooperatively; in particular, the fusion of three-dimensional point cloud data and image data is widely applied in unmanned inspection vehicle systems. Unlike two-dimensional images, three-dimensional point cloud data characterize three-dimensional information such as the depth and geometry of a target, whereas two-dimensional image data characterize two-dimensional information such as color and texture. Fusing the two therefore yields a three-dimensional point cloud model carrying color information that can accurately represent the objective features of a target, so that the unmanned inspection vehicle system can detect targets accurately and provide better automatic inspection assistance.
However, existing methods project three-dimensional point cloud data into a two-dimensional plane for fusion according to a coordinate conversion relation. This loses the spatial distribution characteristics of the point cloud, disturbs the spatial relationships between points, and introduces large errors into the projection result, so the fusion of three-dimensional point cloud data and image data cannot meet the requirements of an unmanned inspection vehicle during operation. A three-dimensional point cloud image fusion method based on computer vision is therefore needed.
Disclosure of Invention
The invention provides a three-dimensional point cloud image fusion method based on computer vision, which aims to solve the existing problems.
The three-dimensional point cloud image fusion method based on computer vision adopts the following technical scheme:
one embodiment of the invention provides a three-dimensional point cloud image fusion method based on computer vision, which comprises the following steps:
acquiring three-dimensional point cloud data and a visible light image; obtaining image segmentation regions according to the edges in the visible light image; pairing each image segmentation region with the most structurally similar region in the adjacent frame, and obtaining the mean structural similarity over all such region combinations;
obtaining the segmentation degree of each frame of visible light image from the ratio of the numbers of image segmentation regions in adjacent frames and the mean structural similarity, and obtaining segmented time intervals from the segmentation degrees of all frames;
clustering the three-dimensional point cloud data of each frame; in each point cloud cluster, selecting three point clouds as an initial set, gradually adding point clouds while successively fitting the three-dimensional plane in which they lie and counting the number of fits; obtaining an initial first structure complexity from the cosine similarity between the normal vectors of consecutively fitted planes and the number of fits; and selecting several starting points in each cluster to obtain the mean first structure complexity over all starting points;
in this successive plane-fitting process, each point cloud is associated with the normal vectors of the fitted plane before and after it participates in the fit;
acquiring the time interval between adjacent frames of three-dimensional point cloud data; taking the ratio of the centroid displacement of a point cloud cluster between adjacent frames to this time interval as the centroid change rate, and taking the product of the centroid change rate and the distance between the cluster centroid and the centroid of the whole three-dimensional point cloud as an offset correction value;
correcting the mean first structure complexity with the offset correction value to obtain a second structure complexity;
obtaining a region of interest according to the second structure complexity; obtaining target point clouds in the region of interest according to the change in cosine similarity of each point cloud before and after it participates in the fit; obtaining interpolation weights from the relative point cloud density at different positions in the neighborhood of each target point cloud; and performing data interpolation between target point clouds using these weights to obtain new three-dimensional point cloud data;
fusing the new three-dimensional point cloud data with the visible light image to obtain a fused image, and using the fused image to realize intelligent inspection assistance.
Further, the average value of the structural similarity is obtained by the following steps:
will be the firstAny one image segmentation area in the frame visible light image is marked as a target image segmentation area and calculated in the first stepEach image segmentation area and the first image segmentation area in the frame visible light imageStructural similarity between target image segmentation regions of a frame, will beTarget image segmentation region of frame and the firstThe image segmentation area with the maximum structural similarity in the frame image is taken as an image area combination, and the first process is obtained by repeating the above processesFrame visible light image and the firstAll image area combinations and corresponding structural similarity in the frame image are obtainedFrame and thThe mean of the structural similarity of all image area combinations of the frame visible image.
Further, the segmentation degree is obtained by the following steps:
The segmentation degree of the i-th frame is computed from: n_i, the number of image regions in the i-th frame of visible light image; n_{i−1}, the number of image regions in the (i−1)-th frame; and the maximum similarity mean, i.e. the mean structural similarity of all image region combinations of the (i−1)-th and i-th frames.
Further, the method for obtaining the segment time interval is as follows:
Frames whose segmentation degree is continuously larger than a first preset threshold in the continuous-frame visible light images are taken as segmented frames. The left and right ends of each run of segmented frames are taken as the endpoints of an interval, and the interval formed by these endpoints and the visible light images they contain is recorded as a segmented time interval, giving a number of segmented time intervals over the continuous frames of visible light images.
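The run-detection that turns per-frame segmentation degrees into segmented time intervals can be sketched as follows (frame indices and threshold are illustrative):

```python
def segment_intervals(degrees, threshold):
    """Group consecutive frame indices whose segmentation degree stays
    above `threshold` into closed intervals [start, end]."""
    intervals, start = [], None
    for i, d in enumerate(degrees):
        if d > threshold and start is None:
            start = i                                  # a run of segmented frames begins
        elif d <= threshold and start is not None:
            intervals.append((start, i - 1))           # the run ended at the previous frame
            start = None
    if start is not None:                              # run reaches the last frame
        intervals.append((start, len(degrees) - 1))
    return intervals

ivals = segment_intervals([0.9, 0.9, 0.2, 0.8, 0.8, 0.8], threshold=0.5)
```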
Further, the first structure complexity is obtained by the following steps:
Take any point cloud cluster as the target point cloud cluster and randomly select several point clouds within it. Take any one of these as a starting center, find the point clouds closest to it to form an initial three-dimensional plane, and obtain the normal vector of this plane. Then, ignoring the point clouds that have already participated in the fit, find the point cloud closest to the current plane, refit a new three-dimensional plane by the least squares plane fitting method, and again obtain its normal vector; continue in this way until all points in the cluster have participated in the calculation;
first structural complexity:
For the g-th of the randomly selected point clouds used as the starting center of the plane fitting, the fitting count denotes the number of times the k-th point cloud cluster of the i-th frame within the segmented time interval was fitted to a plane. The complexity factor is the cosine similarity between the plane normal vectors of two consecutively fitted planes obtained under the g-th starting center, and the first structural complexity is the structural complexity of the k-th point cloud cluster of the i-th frame of three-dimensional point cloud data obtained using the g-th starting center.
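A minimal sketch of the incremental plane fitting and a first-structure-complexity measure follows. The seed selection (start point plus its two nearest neighbours), the nearest-to-plane growth rule, and averaging (1 − cosine similarity) over the number of fits are assumptions standing in for the published formula, which is not reproduced in this text:

```python
import numpy as np

def plane_normal(pts):
    """Least-squares plane normal of an Nx3 point set via SVD."""
    centered = pts - pts.mean(axis=0)
    return np.linalg.svd(centered)[2][-1]   # right-singular vector of least variance

def first_structure_complexity(cloud, start_idx):
    """Grow a plane fit from a start point by repeatedly adding the
    point nearest to the current plane, accumulating (1 - |cos|)
    between consecutive plane normals, averaged over the fits."""
    used = [start_idx]
    rest = [i for i in range(len(cloud)) if i != start_idx]
    # seed plane: start point and its two nearest neighbours
    rest.sort(key=lambda i: np.linalg.norm(cloud[i] - cloud[start_idx]))
    used += rest[:2]
    rest = rest[2:]
    normal = plane_normal(cloud[used])
    total, fits = 0.0, 0
    while rest:
        plane_pt = cloud[used].mean(axis=0)
        rest.sort(key=lambda i: abs((cloud[i] - plane_pt) @ normal))
        used.append(rest.pop(0))            # add the point closest to the plane
        new_normal = plane_normal(cloud[used])
        total += 1.0 - abs(normal @ new_normal)   # sign of a normal is arbitrary
        fits += 1
        normal = new_normal
    return total / fits

# A perfectly planar cluster should have (near) zero complexity.
flat = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0],
                 [1, 1, 0], [0.5, 0.2, 0], [0.3, 0.8, 0]])
c = first_structure_complexity(flat, 0)
```

A curved or corner-shaped cluster would produce changing normals between fits and hence a larger value, which is the behaviour the complexity measure is meant to capture.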
Further, the offset correction value is obtained by the following method:
offset correction value:

Q_{j,k} = (‖c_{j+1,k} − c_{j,k}‖ / Δt_{j,j+1}) · ‖c_{j,k} − C_j‖

where c_{j,k} denotes the centroid coordinates of the k-th point cloud cluster of the j-th frame of three-dimensional point cloud data in a combined point cloud cluster; c_{j+1,k} denotes the centroid coordinates of the corresponding cluster of the (j+1)-th frame; Δt_{j,j+1} denotes the acquisition interval between the j-th and (j+1)-th frames of three-dimensional point cloud data; the centroid change rate ‖c_{j+1,k} − c_{j,k}‖ / Δt_{j,j+1} denotes the average rate of change of the cluster centroid position between the two frames within the acquisition interval; C_j denotes the centroid coordinates of the whole j-th frame of three-dimensional point cloud data; and ‖c_{j,k} − C_j‖ denotes the Euclidean distance between the centroid of the k-th cluster and the centroid of the whole point cloud.
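The offset correction value defined above (centroid change rate times the distance from the cluster centroid to the overall centroid) reduces to a few lines; the input vectors and interval are illustrative values:

```python
import numpy as np

def offset_correction(centroid_j, centroid_j1, overall_centroid_j, dt):
    """Offset correction: product of (a) the centroid change rate of a
    cluster between frames j and j+1 and (b) the distance from the
    cluster centroid to the centroid of the whole frame-j cloud."""
    rate = np.linalg.norm(centroid_j1 - centroid_j) / dt   # centroid change rate
    dist = np.linalg.norm(centroid_j - overall_centroid_j)
    return rate * dist

# Cluster centroid moves 1 m in 0.5 s and sits 1 m from the overall centroid.
q = offset_correction(np.array([1.0, 0, 0]), np.array([2.0, 0, 0]),
                      np.array([0.0, 0, 0]), dt=0.5)
```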
Further, the second structure complexity is obtained by the following method:
where the correction term denotes the linearly normalized offset correction of the k-th point cloud cluster of the j-th frame of three-dimensional point cloud data, and the mean term denotes the mean of the first structural complexity of the point cloud clusters of the j-th frame; the second structure complexity is obtained by weighting the mean first structural complexity with the normalized offset correction.
Further, the new three-dimensional point cloud data is obtained by the following steps:
when the second structure complexity value of the point cloud cluster is larger than a second preset threshold value, setting the point cloud cluster as an interested area;
In the region of interest, the cosine similarity of each point cloud's plane normal vectors before and after it participates in the three-dimensional plane fit is computed. If the cosine similarity is smaller than a preset threshold, the corresponding point cloud lies at a key position of structural change, and data interpolation is performed within a sphere around it. The sphere radius is the Euclidean distance from the point cloud to its nearest point cloud at a key position of structural change. Within the sphere, positions on the line segments between this point cloud and the other point clouds are interpolated: interpolation weights are obtained from the relative point cloud density within different parts of the sphere, the Euclidean distance between the target point cloud and each other point cloud is reduced by the interpolation weight to obtain the interpolation position, and data interpolation is performed at these positions along the line segments between the other point clouds and the target point cloud, yielding the three-dimensional point cloud data after data interpolation.
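The density-weighted interpolation around a key structural-change point can be sketched as follows; normalized inverse distances stand in for the density-based weights, which the text describes only qualitatively, so the weighting rule here is an assumption:

```python
import numpy as np

def interpolate_around(key_pt, neighbours, radius):
    """For each neighbour inside the sphere of the given radius, place
    one interpolated point on the segment from the key point toward
    that neighbour, pulled proportionally closer when the inverse
    distance (used here as a density proxy) is larger."""
    inside = [p for p in neighbours if np.linalg.norm(p - key_pt) <= radius]
    d = np.array([np.linalg.norm(p - key_pt) for p in inside])
    w = (1.0 / d) / (1.0 / d).sum()        # closer neighbour -> larger weight
    # interpolation position: distance to the neighbour shrunk by the weight
    return [key_pt + wi * (p - key_pt) for wi, p in zip(w, inside)]

key = np.array([0.0, 0.0, 0.0])
nbrs = [np.array([1.0, 0, 0]), np.array([0, 2.0, 0]), np.array([5.0, 0, 0])]
new = interpolate_around(key, nbrs, radius=2.5)   # third neighbour lies outside
```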
Further, the intelligent inspection assistance is realized by using the fused image, which specifically comprises the following steps:
According to the coordinate conversion relation between the three-dimensional point cloud data and the two-dimensional plane image data, the interpolated three-dimensional point cloud data are projected into the two-dimensional plane image data, and the pixel value of each pixel in the two-dimensional image is assigned to the projected point cloud position, fusing the three-dimensional point cloud data with the two-dimensional plane image data. The fused image is input into a convolutional neural network to obtain inspection instruction information for obstacle avoidance, forward, backward and parking, realizing intelligent inspection assistance.
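The fusion step, projecting interpolated points into the image and copying pixel values onto them, can be sketched as follows; the toy intrinsics and the 3×3 grayscale image are illustrative stand-ins for real calibration data, and the convolutional-network stage is omitted:

```python
import numpy as np

def colorize_points(points, R, t, K, image):
    """Assign each 3-D point the pixel value it projects to, yielding a
    colored point cloud for the fused output. Points behind the camera
    or outside the image are dropped."""
    cam = points @ R.T + t                       # lidar frame -> camera frame
    front = cam[:, 2] > 0
    cam, pts = cam[front], points[front]
    uvw = cam @ K.T                              # pinhole projection
    uv = np.round(uvw[:, :2] / uvw[:, 2:3]).astype(int)
    h, w = image.shape[:2]
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    colors = image[uv[ok, 1], uv[ok, 0]]         # row = v, column = u
    return np.hstack([pts[ok], colors.reshape(len(colors), -1)])

K = np.array([[1.0, 0, 1], [0, 1.0, 1], [0, 0, 1]])
img = np.arange(9, dtype=float).reshape(3, 3)    # 3x3 "grayscale image"
fused = colorize_points(np.array([[0.0, 0.0, 1.0]]), np.eye(3), np.zeros(3), K, img)
```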
The technical scheme of the invention has the following beneficial effects. Image interval segments are obtained from the distribution characteristics of continuous frames of two-dimensional image data; the position of the region of interest is obtained from the local distribution characteristics of the point cloud data in each segment; and the region of interest is interpolated according to the density changes in the local range of the point cloud data, so that the local feature relations in three-dimensional space are preserved after projection. To obtain the region of interest, cluster analysis is performed on the three-dimensional point cloud data of a single frame, the structure complexity is obtained from the distribution characteristics of the point cloud data in each point cloud cluster, and it is corrected using the changes of the point cloud data across continuous frames within the segment, giving the final structure complexity and the region of interest. This avoids the defect of the traditional fusion process of three-dimensional point cloud data and two-dimensional image data, in which the spatial distribution characteristics of the point cloud are lost, the spatial relationships between points are disturbed, and large errors appear in the projection result; the spatial local distribution characteristics of the three-dimensional point cloud data and the corresponding spatial relationships between them are largely preserved.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flow chart of steps of a three-dimensional point cloud image fusion method based on computer vision.
Detailed Description
In order to further describe the technical means and effects adopted by the invention to achieve its intended purpose, a detailed description of the specific implementation, structure, features and effects of the three-dimensional point cloud image fusion method based on computer vision is given below with reference to the accompanying drawings and preferred embodiments. In the following description, different references to "one embodiment" or "another embodiment" do not necessarily refer to the same embodiment. Furthermore, the particular features, structures, or characteristics of one or more embodiments may be combined in any suitable manner.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The following specifically describes a specific scheme of the three-dimensional point cloud image fusion method based on computer vision provided by the invention with reference to the accompanying drawings.
Referring to fig. 1, a flowchart illustrating steps of a three-dimensional point cloud image fusion method based on computer vision according to an embodiment of the present invention is shown, where the method includes the following steps:
and S001, acquiring image data and corresponding three-dimensional point cloud data, and acquiring a coordinate conversion relation between the image data and the corresponding three-dimensional point cloud data according to a parameter relation between acquisition sensors.
In this embodiment, three-dimensional point cloud data are obtained using a laser radar mounted on the unmanned inspection vehicle, and multi-frame two-dimensional visible light image data are obtained using a visible light camera. The laser radar and the visible light camera are calibrated, and the coordinate conversion relation between the three-dimensional point cloud data and the two-dimensional image is obtained from the calibration parameters of the visible light camera (this process is a known technique and is not described in detail in this embodiment).
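As a concrete illustration of the coordinate conversion, the projection of lidar points into pixel coordinates under a pinhole camera model can be sketched as follows; the intrinsic matrix K, rotation R and translation t below are hypothetical toy values, not calibration results of the embodiment:

```python
import numpy as np

def project_points(points_lidar, R, t, K):
    """Project Nx3 lidar points into pixel coordinates using the
    extrinsic rotation R, translation t, and camera intrinsics K.
    Points behind the camera (z <= 0) are discarded."""
    cam = points_lidar @ R.T + t          # lidar frame -> camera frame
    cam = cam[cam[:, 2] > 0]              # keep points in front of the camera
    uvw = cam @ K.T                       # pinhole projection
    return uvw[:, :2] / uvw[:, 2:3]       # perspective divide -> (u, v)

# Toy calibration: identity rotation, camera 2 m away along z.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
R, t = np.eye(3), np.array([0.0, 0.0, 2.0])
pts = np.array([[0.0, 0.0, 0.0], [0.5, 0.0, 0.0]])
uv = project_points(pts, R, t, K)
```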
Step S002, according to the distribution relation of the similar areas between the continuous frames of visible light images of the two-dimensional image data, segments of the continuous frames of visible light image sections are obtained, and segments of the continuous frames of point cloud data sections are obtained.
In order to greatly reduce the amount of calculation in the matching and fusion of the three-dimensional point cloud data and the two-dimensional image data while preserving the spatial local distribution characteristics of the point cloud and the corresponding spatial relationships, the region of interest is obtained from the feature distributions of the three-dimensional point cloud data and the two-dimensional image data, interpolation is performed according to the distribution characteristics within the region of interest, and the coordinate conversion is carried out afterwards, so that the spatial local distribution characteristics of the three-dimensional point cloud data and the corresponding spatial relationships are preserved.
In addition, because the scene changes continuously while the unmanned inspection vehicle is driving, the region of interest must first be obtained from the local distribution characteristics of the point cloud data before fusing the two-dimensional image data and the three-dimensional point cloud data, and interpolation is then performed according to the relationship between the three-dimensional point cloud data in the regions of interest and the pixels in the two-dimensional image. To obtain the region of interest, the distribution of time periods must first be obtained. Within a single time period the image information is basically the same, with only small changes (changes of road lane lines and small changes of vehicle information), so the frames within it can be analyzed together; that is, the continuous frames of three-dimensional point cloud data and two-dimensional images within the same time period are strongly correlated. Because the data volume of continuous frames of three-dimensional point cloud data is large, interval segmentation is performed using the continuous frames of visible light images; since the corresponding three-dimensional point cloud data are acquired at the same time as the visible light images, this also yields the interval segmentation of the continuous frames of point cloud data.
First, a single frame of visible light image is converted to grayscale, and edge detection is performed on the grayscale image with the Sobel operator to obtain all edge lines in the image and an edge image. The edge image is processed with a morphological closing operation to connect broken edge lines, giving a new edge image that is used as a mask: the gray value of edge lines in the edge image is 255 and that of non-edge parts is 0. Using the edge lines of the new edge image as region dividing lines, region segmentation is performed on the corresponding single frame of visible light image: the pixel values of the pixels corresponding to edge lines are set to 0, and the remaining pixels form the image segmentation regions of the current frame, giving a number of image segmentation regions in the single frame. The same region segmentation is applied to the other continuous frames, yielding the image segmentation regions contained in each frame;
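The edge-based region segmentation described above can be sketched in plain NumPy; the Sobel kernels and 3×3 structuring element are standard, while the step-edge test image is an illustrative stand-in for a real frame:

```python
import numpy as np

def sobel_edges(gray, thresh=1.0):
    """Sobel gradient magnitude thresholded to a binary edge map."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    h, w = gray.shape
    p = np.pad(gray.astype(float), 1, mode="edge")
    gx, gy = np.zeros((h, w)), np.zeros((h, w))
    for i in range(3):
        for j in range(3):
            win = p[i:i + h, j:j + w]
            gx += kx[i, j] * win
            gy += ky[i, j] * win
    return (np.hypot(gx, gy) > thresh).astype(np.uint8)

def close3(edge):
    """Morphological closing (dilate, then erode) with a 3x3 kernel,
    joining broken edge fragments."""
    def dilate(a):
        p = np.pad(a, 1)
        return np.max([p[i:i + a.shape[0], j:j + a.shape[1]]
                       for i in range(3) for j in range(3)], axis=0)
    def erode(a):
        p = np.pad(a, 1, constant_values=1)
        return np.min([p[i:i + a.shape[0], j:j + a.shape[1]]
                       for i in range(3) for j in range(3)], axis=0)
    return erode(dilate(edge))

def count_regions(edge):
    """4-connected flood fill over non-edge pixels; returns region count."""
    h, w = edge.shape
    lab, n = np.zeros((h, w), int), 0
    for si in range(h):
        for sj in range(w):
            if edge[si, sj] or lab[si, sj]:
                continue
            n += 1
            stack = [(si, sj)]
            while stack:
                i, j = stack.pop()
                if not (0 <= i < h and 0 <= j < w):
                    continue
                if edge[i, j] or lab[i, j]:
                    continue
                lab[i, j] = n
                stack += [(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)]
    return n

img = np.zeros((9, 9))
img[:, 5:] = 10.0                        # vertical step edge -> two regions
n = count_regions(close3(sobel_edges(img)))
```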
Second, the structural similarity SSIM between the image segmentation regions of two consecutive frames is computed. When the structural similarity between a segmentation region of the previous frame and a segmentation region of the next frame is maximal, the two regions are recorded as an image region combination. Concretely, denote any image segmentation region of the (i−1)-th frame of visible light image as a target image segmentation region, compute the structural similarity between each image segmentation region of the i-th frame and this target region, and take the i-th-frame region with the maximum structural similarity together with the target region as an image region combination. Repeating this process yields all image region combinations between the (i−1)-th and i-th frames with their structural similarities, from which the mean structural similarity of all image region combinations of the two frames is obtained and recorded as the maximum similarity mean.
Then, for every frame of visible light image except the 1st frame, the segmentation degree of the frame is computed from: n_i, the number of image segmentation regions in the i-th frame; n_{i−1}, the number of image segmentation regions in the (i−1)-th frame; and the maximum similarity mean, i.e. the mean structural similarity of all image region combinations of the (i−1)-th and i-th frames (the calculation of structural similarity is a known technique and is not described in detail in this embodiment; note that the image region combinations here are the pairings with maximal structural similarity among the possible pairings of regions in the two frames). The ratio n_i/n_{i−1} characterizes the change in the number of image regions between two consecutive frames: the larger its difference from 1, the larger the change in image features between the two frames. The maximum similarity mean characterizes how similar the paired regions of the two frames are and serves as the evaluation index of the overall similarity of the images, with the region-count term acting as an adjusting weight on it. The segmentation degree thus reflects the degree of similarity between the i-th frame and its previous frame: the higher it is, the more consistent the image information of the two frames, the more they can be analyzed as the same time period, and the more likely the i-th frame belongs to that period and is recorded as a segmented frame.
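Since the published expression for the segmentation degree is given only in terms of its ingredients, the following sketch approximates it: regions are paired by best SSIM and the mean best similarity is damped by the relative change in region count. The global single-window SSIM, the pairing rule, and the exact combination are assumptions, not the patent's formula, and regions are assumed resized to a common shape:

```python
import numpy as np

def ssim_patch(a, b, c1=6.5025, c2=58.5225):
    """Global SSIM of two equally sized grayscale patches
    (c1, c2 are the usual stabilizing constants for 8-bit data)."""
    a, b = a.astype(float), b.astype(float)
    ma, mb = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - ma) * (b - mb)).mean()
    return ((2 * ma * mb + c1) * (2 * cov + c2)) / \
           ((ma ** 2 + mb ** 2 + c1) * (va + vb + c2))

def segmentation_degree(regions_prev, regions_cur):
    """Pair each current-frame region with its most similar previous-frame
    region, then damp the mean best SSIM by the relative change in the
    number of regions (an assumed stand-in for the published formula)."""
    best = [max(ssim_patch(r, p) for p in regions_prev) for r in regions_cur]
    mean_ssim = float(np.mean(best))
    count_change = abs(len(regions_cur) / len(regions_prev) - 1.0)
    return mean_ssim / (1.0 + count_change)
```

With this form, identical consecutive frames give a degree near 1, and frames whose regions change in number or appearance score lower, matching the qualitative behaviour described above.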
Finally, according to the above calculation process, the segmentation degree of every visible light frame other than the 1st frame is obtained, and a segmentation degree threshold is set (its value depends on the implementation; this embodiment gives an empirical reference value). If the segmentation degree of a certain frame of visible light image is greater than the threshold, that frame is marked as a segmented frame.
According to the distribution relation of similar areas among consecutive visible light frames, the sequence of consecutive visible light frames is segmented, which also segments the corresponding sequence of point cloud data frames: within the segmented sequence, every run of consecutive frames whose segmentation degree exceeds the threshold is marked as a similar image group, the time period covered by a similar image group is recorded as a segmentation time interval, and in this way all segmentation time intervals are obtained.
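As a minimal sketch of the interval segmentation described above (the function name and the 1-based frame indexing are illustrative assumptions, not from the patent):

```python
def segment_intervals(degrees, threshold):
    """Group consecutive frames whose segmentation degree exceeds the
    threshold into similar image groups (segmentation time intervals).
    degrees[k] is the segmentation degree of frame k+2 relative to its
    predecessor (frame 1 has no segmentation degree).
    Returns 1-based, inclusive (start_frame, end_frame) pairs."""
    intervals = []
    start = None
    for k, f in enumerate(degrees):
        frame = k + 2  # the frame this degree belongs to
        if f > threshold:
            if start is None:
                start = frame - 1  # the predecessor is part of the group
        else:
            if start is not None:
                intervals.append((start, frame - 1))
                start = None
    if start is not None:  # close a group that runs to the last frame
        intervals.append((start, len(degrees) + 1))
    return intervals
```

For example, degrees `[0.9, 0.8, 0.1, 0.95]` with threshold 0.5 yield the two similar image groups spanning frames 1–3 and 4–5.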
Step S003, the region of interest is obtained according to the local distribution characteristics of the point cloud data in each segment, and data interpolation is carried out in the region of interest.
Within each segmentation time interval, the aggregation degree of the point cloud data varies, so large projection errors can arise after projecting to two-dimensional coordinates: the more complex the local structure of the point cloud data, the higher the probability of deviation when it is projected onto a two-dimensional plane, and the more three-dimensional information a structurally complex region loses after projection. To preserve the spatial local distribution characteristics and spatial relationships of the point cloud during projection — that is, to ensure that the local characteristics of a point cloud after projection remain essentially the same as its local characteristics in three-dimensional space — additional point clouds must be inserted at certain positions so that the local relationships survive the projection. Therefore, the position of the region of interest is obtained from the local distribution characteristics of the point cloud data, and interpolation is performed within the region of interest, preserving the local feature relationships of the three-dimensional space after projection.
For any frame of three-dimensional point cloud data in any segmentation time interval, K-Means clustering is performed, where the number of clusters K is set to the number N of image segmentation areas in the corresponding frame of the two-dimensional visible light image. This yields several clusters of the three-dimensional point cloud data, recorded as point cloud clusters; any one of them belongs to a given frame of three-dimensional point cloud data within the corresponding segmentation time interval. The structural complexity of each point cloud cluster in the single-frame three-dimensional point cloud data is then calculated: take any one point cloud cluster as the target point cloud cluster, randomly select R point clouds within it (R = 20 is taken according to experience), and use each of the R point clouds as an initial center;
taking any one of the R point clouds as an initial center as an example, the 2 point clouds nearest to the initial center are acquired to form a three-dimensional plane, and the normal vector of this plane is obtained. Then, without considering the point clouds that have already participated in fitting the three-dimensional plane, the point cloud closest to the current plane is acquired, a new three-dimensional plane is obtained again by the least-squares plane fitting method, and the normal vector of the new plane is again recorded (least-squares plane fitting is a known technique and is not detailed in this embodiment). By repeating this operation until all points in the point cloud cluster have participated in the three-dimensional plane fitting, the normal vector of the final plane is obtained. When planes are fitted repeatedly starting from any one of the R point clouds as an initial center, a sequence of normal vectors is obtained; the total number of fits performed when the fitting stops is recorded, as is the normal vector obtained after each fit. Three-dimensional planes are fitted in this way for all R initial centers.
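The iterative plane-growing step can be sketched as follows; the SVD-based least-squares plane fit and the helper names are assumptions, since the patent only names the least-squares technique without fixing a numerical method:

```python
import numpy as np

def plane_normal(points):
    """Least-squares plane normal of an (m, 3) point set: the right singular
    vector of the centered coordinates with the smallest singular value."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[-1]

def grow_plane_normals(cluster, start_idx):
    """From one starting center, seed a plane with the 2 nearest neighbours,
    then repeatedly add the point closest to the current plane and refit,
    recording the normal vector after every fit."""
    pts = np.asarray(cluster, dtype=float)
    used = [start_idx]
    d = np.linalg.norm(pts - pts[start_idx], axis=1)
    d[start_idx] = np.inf
    used += list(np.argsort(d)[:2])          # 2 nearest point clouds
    normals = [plane_normal(pts[used])]
    while len(used) < len(pts):
        n = normals[-1]
        c = pts[used].mean(axis=0)
        dist = np.abs((pts - c) @ n)         # point-to-plane distances
        dist[used] = np.inf                  # skip points already fitted
        used.append(int(np.argmin(dist)))
        normals.append(plane_normal(pts[used]))
    return np.array(normals)
```

For a coplanar cluster, every recorded normal stays (up to sign) the same vector, which is exactly the low-complexity case the next step measures.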
The point cloud cluster containing the target initial center is the given point cloud cluster of the given frame of three-dimensional point cloud data in the corresponding segmentation time interval, and the structural complexity of this point cloud cluster is obtained as follows:
in the formula, the index term denotes which of the R randomly selected initial centers is used to start the three-dimensional plane fitting; the fitting count denotes the number of plane fits performed for the given point cloud cluster of the given frame in the segmentation time interval; the complexity factor is the cosine similarity between the plane normal vector of one fitted plane and the plane normal vector of the previously fitted plane acquired under that initial center; and the structural complexity is the structural complexity of the given point cloud cluster of the given frame obtained with that initial center. If the plane normal vector after each fit differs little from the normal vector of the previous fit — that is, if the cosine similarity of the two plane normal vectors differs little from 1 — the distribution directions of the two corresponding three-dimensional planes are similar; conversely, the larger the difference, the more dissimilar the directions of the two planes. As these differences accumulate, a larger accumulated result indicates that the point clouds forming the three-dimensional planes are more dispersed in three-dimensional space and less attributable to a single three-dimensional plane, i.e. the structural complexity of the corresponding point cloud cluster is greater.
Using the same method by which the structural complexity is calculated from a single point cloud taken as an initial center, the structural complexity corresponding to each of the R initial centers of the given point cloud cluster of the current frame of three-dimensional point cloud data is calculated, and the mean value of the first structural complexity over all initial centers of that point cloud cluster is computed.
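A hedged sketch of the first structural complexity: the patent's expression is given only as an image, so the code below reproduces just the described behaviour — accumulating, fit by fit, how far the cosine similarity of consecutive plane normals falls short of 1, then averaging over the R starting centers:

```python
import numpy as np

def first_structural_complexity(normals):
    """Accumulate the deviation of consecutive plane-normal cosine
    similarities from 1 over a fitting sequence. Larger accumulated
    deviation means the cluster is less like a single plane (more
    structurally complex). The exact formula is an assumption."""
    normals = np.asarray(normals, dtype=float)
    total = 0.0
    for n_prev, n_cur in zip(normals[:-1], normals[1:]):
        cos = abs(np.dot(n_prev, n_cur) /
                  (np.linalg.norm(n_prev) * np.linalg.norm(n_cur)))
        total += 1.0 - cos  # ~0 while consecutive planes stay aligned
    return total

def mean_first_complexity(normal_sequences):
    """First structural complexity mean over the R starting centers of one
    point cloud cluster (one normal sequence per starting center)."""
    return float(np.mean([first_structural_complexity(s)
                          for s in normal_sequences]))
```

A sequence of identical normals scores 0, while a sequence containing an orthogonal turn contributes a full unit of complexity.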
Although the image feature information within an interval is similar, the relative position of each acquired point cloud cluster may change while the unmanned inspection vehicle is running. The greater the relative position change of a point cloud cluster between two adjacent frames, the greater the possibility that the point cloud data in that cluster produce errors in the corresponding image segmentation area. In addition, point cloud clusters closer to the bottom (corresponding to the road surface) or closer to the top (corresponding to the sky) are less important during the operation of the unmanned inspection vehicle.
The three-dimensional point cloud data is converted into two-dimensional point cloud data using the conversion relation between the three-dimensional point cloud data and the two-dimensional visible light image; the point cloud data contained in each cluster is unchanged, but each cluster becomes a two-dimensional point cloud cluster. The centroid of each two-dimensional point cloud cluster and the centroid of each image segmentation area in the two-dimensional visible light image are recorded as the point cloud cluster center and the area center, respectively. For each point cloud cluster center, the distances to the area centers are calculated, and the cluster and area whose centers are closest are regarded as the point cloud cluster and image segmentation area corresponding to each other. In addition, for each image area combination formed by front- and back-frame image segmentation areas with the largest structural similarity, the corresponding front- and back-frame point cloud clusters are recorded as a combined point cloud cluster; that is, within a combined point cloud cluster, the point cloud cluster of one frame of three-dimensional point cloud data and the point cloud cluster of the next frame belong to a mutually corresponding relationship.
In addition, the time interval between adjacent frames of the three-dimensional point cloud data during acquisition is acquired, together with the centroid coordinates of each frame's three-dimensional point cloud as a whole and the centroid coordinates of each combined point cloud cluster in the front and back frames of three-dimensional point cloud data.
The present embodiment corrects the structural complexity of the point cloud clusters by the overall variation of the point cloud clusters across consecutive frames, where the offset correction value of the first structural complexity of a point cloud cluster of the current frame of three-dimensional point cloud data is calculated as follows:
in the formula, the two centroid coordinate terms denote the centroid of the given point cloud cluster in the current frame of three-dimensional point cloud data and the centroid of the corresponding cluster in the next frame, taken within a combined point cloud cluster; the interval term denotes the acquisition interval time between the two frames of three-dimensional point cloud data; the centroid change rate denotes the average rate at which the centroid position of the cluster changes within the data acquisition interval between the two frames; and the remaining term denotes the centroid coordinates of the current frame of three-dimensional point cloud data as a whole. The Euclidean distance between the centroid of a point cloud cluster and the centroid of the whole three-dimensional point cloud data indicates how peripheral the cluster is: the larger the distance, the closer the cluster lies to the bottom or top edge of the whole point cloud data and the less important it is. The centroid change rate characterizes the change of the relative position of the same object in adjacent frames; if that relative position changes more strongly, the possibility of error is correspondingly greater, so the structural complexity needs to be increased (the region of interest is later obtained through the structural complexity, and the greater the structural complexity of a region, the more likely it becomes a region of interest). The product of the distance and the centroid change rate is taken as the degree of deviation, which represents the correction value of the structural complexity to be acquired.
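Following the verbal definition (distance to the whole-frame centroid multiplied by the centroid change rate), the offset correction value can be sketched as below; variable names are illustrative:

```python
import numpy as np

def offset_correction(centroid_j, centroid_j1, frame_centroid_j, dt):
    """Offset correction value of the first structural complexity:
    (Euclidean distance between cluster centroid and whole-frame centroid)
    x (centroid change rate across the acquisition interval dt)."""
    c_j = np.asarray(centroid_j, dtype=float)
    c_j1 = np.asarray(centroid_j1, dtype=float)
    C_j = np.asarray(frame_centroid_j, dtype=float)
    change_rate = np.linalg.norm(c_j1 - c_j) / dt  # centroid change rate
    edge_dist = np.linalg.norm(c_j - C_j)          # how peripheral the cluster is
    return edge_dist * change_rate
```

A cluster whose centroid sits 1 unit from the frame centroid and drifts 2 units over a 2-second interval receives a correction of 1.0.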
The correction values of the structural complexity of all point cloud clusters of the current frame of three-dimensional point cloud data are linearly normalized to obtain the normalized correction values.
Finally, the final structural complexity of each point cloud cluster of the current frame of three-dimensional point cloud data is recorded as the second structural complexity:
wherein the first term denotes the normalized correction value of the first structural complexity mean of the given point cloud cluster of the current frame of three-dimensional point cloud data, and the second term denotes the first structural complexity mean of that point cloud cluster.
A structural complexity threshold is set against the second structural complexity of each point cloud cluster of the current frame (its value depends on the implementation; this embodiment gives an empirical reference value). If the second structural complexity of a point cloud cluster is greater than the set threshold, the cluster is set as a region of interest.
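The normalization and thresholding steps can be sketched as follows (helper names are illustrative; computing the second structural complexity itself follows the preceding step):

```python
def normalize_corrections(values):
    """Linear (min-max) normalization of the offset correction values of all
    point cloud clusters in one frame."""
    lo, hi = min(values), max(values)
    if hi == lo:               # all corrections equal: nothing to spread
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

def regions_of_interest(second_complexities, threshold):
    """Indices of the point cloud clusters whose second structural
    complexity exceeds the structural complexity threshold."""
    return [i for i, g in enumerate(second_complexities) if g > threshold]
```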
Each point cloud of a single region of interest is then processed for data interpolation. The purpose of interpolation is to ensure that local spatial information is not lost when the point cloud data is projected onto the image plane, so point cloud interpolation must be performed inside the region of interest, and the point clouds located at positions of critical structural change are where interpolation is most needed. In this embodiment, within the region of interest, the points for which the cosine similarity computed during the plane fitting described above changes most are taken as the positions of critical structural change: a cosine similarity threshold is set for the fitting process (its value depends on the implementation; this embodiment gives an empirical reference value), the cosine similarity of the plane normal vectors before and after a point participates in the fit is obtained, and if it is smaller than the set threshold, the point cloud is located at a position of critical structural change. A local range around such a point cloud is then interpolated, where the local range is determined as the sphere whose radius is the Euclidean distance between the point cloud at the critical structural-change position and the point cloud closest to it.
The local range of a point cloud at a critical structural-change position is analysed as follows: interpolation is performed on the connecting lines between that point cloud and the other point clouds in its local range, thereby preserving the local relationships of the point cloud through the projection. The interpolation is characterized by the aggregation degree of the points in the local range: the greater the aggregation degree around a point cloud, the more important that point cloud is within the whole local range — that is, its local relationships need to be preserved — and the more the interpolation should be biased towards that point cloud.
The specific process is as follows: the point cloud at any one critical structural-change position is recorded as the target point cloud. For each other point cloud in the local range of the target point cloud, take the sphere of a given radius centred on that point cloud (the radius depends on the implementation; this embodiment gives an empirical reference value) and the sphere of the same radius centred on the target point cloud, and calculate the point cloud density of the two sphere ranges (when counting the densities, only the point clouds inside the local range of the target point cloud are counted, so point clouds of the two spheres that lie outside the local range are not analysed). The point cloud density of the sphere of the other point cloud is obtained by dividing the number of point clouds in that sphere by the number of all point clouds in the local sphere range of the target point cloud; the density of the sphere of the target point cloud is calculated by the same operation. The interpolation weights of the other point cloud and of the target point cloud are then formed from these two densities, the Euclidean distance between the two endpoints of the connecting line between the target point cloud and the other point cloud is obtained, and interpolation is performed at the position along the connecting line determined by the interpolation weights, so that the interpolated point is biased towards the denser of the two point clouds. In this way the interpolation operation using the interpolation weight and the interpolation position is applied to the point cloud of every critical structural-change position in the region of interest, and three-dimensional point cloud data after data interpolation is obtained.
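A hedged sketch of placing one interpolated point between the target point cloud and a neighboring point cloud: since the weight and position expressions appear only as images in the source, the density proportions are used as weights here, which matches the described bias toward the denser side but is an assumption, not the patent's exact formula:

```python
import numpy as np

def interpolate_toward_denser(target, neighbor, rho_target, rho_neighbor):
    """Place one interpolated point on the segment from the target point
    cloud to a neighbor, at the fraction of the distance given by the
    neighbor's density proportion, so a denser neighbor pulls the new
    point toward itself."""
    p = np.asarray(target, dtype=float)
    q = np.asarray(neighbor, dtype=float)
    w_neighbor = rho_neighbor / (rho_target + rho_neighbor)  # interpolation weight
    return p + w_neighbor * (q - p)
```

With equal densities the new point lands at the midpoint; a neighbor three times denser pulls it three quarters of the way along the connecting line.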
Thus the region of interest is obtained according to the local distribution characteristics of the point cloud data in each segment, and interpolation of the point cloud data within the region of interest is realized.
Step S004, coordinate conversion is performed on the interpolated point cloud data to convert it into a two-dimensional plane image, three-dimensional point cloud image fusion is then carried out, and intelligent automatic inspection of the unmanned inspection vehicle is realized by using a neural network.
According to the coordinate conversion relation between the three-dimensional point cloud data and the two-dimensional plane image data obtained in step S001, coordinate projection is performed: the interpolated three-dimensional point cloud data is projected into the two-dimensional plane image data, and the pixel value of each pixel point in the two-dimensional plane image data is assigned to the pixel point at the projected point cloud position, fusing the three-dimensional point cloud data with the two-dimensional plane image data. The fused image contains both three-dimensional point cloud information and color information and objectively and accurately characterizes the target, so that the intelligent auxiliary inspection system of the unmanned inspection vehicle can accurately detect the condition of the target and provide better inspection route assistance. The specific inspection assistance method is as follows:
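The projection-and-fusion step can be sketched as follows, assuming a pinhole model with intrinsics K and extrinsics R, t (the patent only refers to the conversion relation obtained in step S001, so this parameterization is an assumption):

```python
import numpy as np

def fuse_point_cloud_with_image(points, colors_image, K, R, t):
    """Project 3D points into the image and attach the image's color at each
    projected pixel, yielding fused (x, y, z, r, g, b) records. Points that
    project outside the image are dropped."""
    pts = np.asarray(points, dtype=float)
    cam = pts @ R.T + t                  # world -> camera coordinates
    uv = cam @ K.T
    uv = uv[:, :2] / uv[:, 2:3]          # perspective divide -> pixel coords
    h, w = colors_image.shape[:2]
    fused = []
    for p, (u, v) in zip(pts, uv):
        ui, vi = int(round(u)), int(round(v))
        if 0 <= vi < h and 0 <= ui < w:  # keep points landing inside the image
            fused.append(np.concatenate([p, colors_image[vi, ui]]))
    return np.array(fused)
```

Each surviving record carries both the three-dimensional point cloud information and the color information, which is the fused representation fed to the inspection network.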
firstly, a VGGNet convolutional neural network is constructed that takes the image fused from the point cloud data and the visible light image as input, and outputs inspection instruction information such as obstacle avoidance, forward, backward and parking;
secondly, image data in which road traffic point cloud data has been fused with visible light images are taken as samples; professionals in the industry assign to the fused images artificial labels containing inspection instruction information such as obstacle avoidance, forward, backward and parking; and a large number of fused image samples with artificial labels are used as the data set to train the VGGNet convolutional neural network;
finally, the trained VGGNet convolutional neural network is put into use, so that the unmanned inspection vehicle parks or reverses when pedestrians or animals appear in front of or behind it, avoids obstacles when obstacles appear, and moves forward during normal operation, performing intelligent automatic inspection and providing better inspection route assistance.
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, alternatives, and improvements that fall within the spirit and scope of the invention.
Claims (6)
1. The three-dimensional point cloud image fusion method based on computer vision is characterized by comprising the following steps of:
acquiring three-dimensional point cloud data and a visible light image, acquiring image segmentation areas according to edges in the visible light image, and combining the image segmentation areas with the largest adjacent frame structure similarity to acquire the average value of the structure similarity of all the image area combinations;
obtaining the segmentation degree of each frame of visible light image according to the average value of the number proportion relation and the structural similarity of the image segmentation areas in the adjacent frames of visible light images, and obtaining a segmentation time interval according to the segmentation degree of all frames;
clustering three-dimensional point cloud data of each frame, selecting three point clouds as an initial point in each point cloud cluster, gradually increasing the number of the point clouds, gradually fitting the normal vector of the three-dimensional plane where the point clouds are located and fitting times, obtaining an initial first structure complexity according to the cosine similarity and fitting times of the normal vector of the corresponding three-dimensional plane adjacent to two times, and selecting a plurality of initial points in each point cloud cluster to obtain the average value of the first structure complexity of all initial points;
in the process of successively fitting the three-dimensional plane in which the point clouds are located, each point cloud corresponds to a normal vector of the three-dimensional plane before and after the point clouds participate in fitting;
acquiring a time interval between adjacent frames during three-dimensional point cloud data acquisition, taking the ratio of the centroid distance of the point cloud clusters of the adjacent frames to the time interval as a centroid change rate, and taking the product of the distance between the centroid coordinates of the point cloud clusters and the whole three-dimensional point cloud data and the centroid change rate as an offset correction value;
correcting the average value of the first structural complexity by using the offset correction value to obtain a second structural complexity;
obtaining a region of interest according to the second structural complexity, obtaining target point clouds in the region of interest according to the change of the cosine similarity difference of each point cloud before and after it participates in fitting, obtaining interpolation weights according to the proportion of the point cloud densities of the target point cloud at different positions in the neighborhood, and carrying out data interpolation between the target point clouds by utilizing the interpolation weights to obtain new three-dimensional point cloud data;
fusing the new three-dimensional point cloud data with the visible light image to obtain a fused image, and realizing intelligent inspection assistance by using the fused image;
the segmentation degree is obtained by the following steps:
in the formula, the first count denotes the number of image areas in the previous frame of the visible light image; the second count denotes the number of image areas in the current frame of the visible light image; and the maximum similarity mean denotes the mean of the structural similarity of all image region combinations of the previous and current frames of the visible light image;
the segmented time interval is obtained by the following steps:
taking part of visible light images with segmentation degree continuously larger than a first preset threshold value in the visible light images of the continuous frames as segmented frames, taking the left side and the right side of each segmented frame as end points of a section, and recording the section corresponding to the visible light images contained in the end points and the end points as a segmented time section to obtain a plurality of segmented time sections of the visible light images of the continuous frames;
the first structure complexity is obtained by the following steps:
taking any one point cloud cluster as a target point cloud cluster, randomly selecting a plurality of point clouds in the target point cloud cluster, randomly selecting any one of the plurality of point clouds as a target starting center, acquiring the point clouds closest to the target starting center to form a three-dimensional plane, and acquiring the normal vector of that three-dimensional plane; then, without considering the point clouds that have already participated in fitting the three-dimensional plane, acquiring the point cloud closest to the plane, acquiring a new three-dimensional plane again by the least-squares plane fitting method, and again acquiring the normal vector of the new three-dimensional plane; and so on, until all points in the point cloud cluster participate in the calculation;
first structural complexity:
in the formula, the index term denotes which of the randomly selected point clouds serves as the starting center of the three-dimensional plane fitting; the fitting count denotes the number of plane fits performed for the given point cloud cluster of the given frame in the segmentation time interval; the complexity factor denotes the cosine similarity between the plane normal vector of one fitted plane and the plane normal vector of the previously fitted plane acquired under that starting center; and the first structural complexity denotes the structural complexity of the given point cloud cluster of the given frame of three-dimensional point cloud data acquired with that starting center.
2. The method for fusing three-dimensional point cloud images based on computer vision according to claim 1, wherein the method for obtaining the mean value of the structural similarity is as follows:
recording any one of the image segmentation regions in the current frame of the visible light image as the target image segmentation region, calculating the structural similarity between every image segmentation area in the previous frame of the visible light image and the target image segmentation region of the current frame, taking the target image segmentation region and the image segmentation area of the previous frame with the maximum structural similarity as an image area combination, acquiring all image area combinations of the current and previous frame images with their corresponding structural similarities, and obtaining the mean of the structural similarity of all image area combinations of the previous and current frames of the visible light image.
3. The method for fusing three-dimensional point cloud images based on computer vision according to claim 1, wherein the offset correction value is obtained by the following method:
offset correction value:
in the formula, the first centroid term denotes the centroid coordinates of the given point cloud cluster of the current frame of three-dimensional point cloud data within a combined point cloud cluster; the second centroid term denotes the centroid coordinates of the corresponding cluster of the next frame within the combined point cloud cluster; the interval term denotes the acquisition interval time between the two frames of three-dimensional point cloud data; the centroid change rate denotes the average change rate of the centroid position of the cluster between the two frames within the data acquisition interval; a further term denotes the centroid coordinates of the j-th frame of three-dimensional point cloud data as a whole; and the distance term denotes the Euclidean distance between the centroid of the point cloud cluster and the centroid of the whole three-dimensional point cloud data.
4. The method for fusing three-dimensional point cloud images based on computer vision according to claim 1, wherein the second structure complexity is obtained by the following steps:
5. The method for fusing the three-dimensional point cloud images based on computer vision according to claim 1, wherein the new three-dimensional point cloud data is obtained by the following steps:
when the second structure complexity value of the point cloud cluster is larger than a second preset threshold value, setting the point cloud cluster as an interested area;
in the region of interest, the point clouds whose cosine similarity changes most during the three-dimensional plane fitting are taken as the positions of critical structural change: if the cosine similarity is smaller than a preset threshold, the corresponding point cloud is located at a position of critical structural change; data interpolation is performed within the sphere range of each point cloud located at a position of critical structural change, where the sphere radius is determined as the Euclidean distance between the point cloud at the critical position and the point cloud closest to it; within the sphere range, positions on the connecting lines between the point cloud and the other point clouds are interpolated; interpolation weights are obtained according to the proportions of the point cloud density within the different sphere ranges; the interpolation weights are used to scale the Euclidean distance between the target point cloud and each other point cloud to obtain the interpolation position; and data interpolation is performed at the interpolation positions, along the line segments between the target point cloud and the other point clouds, to obtain the three-dimensional point cloud data after data interpolation.
6. The method for fusing the three-dimensional point cloud images based on computer vision according to claim 1, wherein the intelligent inspection assistance is realized by utilizing the fused images, specifically comprising the following steps:
according to the coordinate conversion relation between the three-dimensional point cloud data and the two-dimensional plane image data, the interpolated three-dimensional point cloud data is projected into the two-dimensional plane image data; the pixel value of each pixel point in the two-dimensional plane image data is also assigned to the pixel point at the projected point cloud position, fusing the three-dimensional point cloud data and the two-dimensional plane image data; and the fused image is input into a convolutional neural network to obtain inspection instruction information for obstacle avoidance, forward, backward and parking, realizing intelligent inspection assistance.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310363382.3A CN116071283B (en) | 2023-04-07 | 2023-04-07 | Three-dimensional point cloud image fusion method based on computer vision |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310363382.3A CN116071283B (en) | 2023-04-07 | 2023-04-07 | Three-dimensional point cloud image fusion method based on computer vision |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116071283A CN116071283A (en) | 2023-05-05 |
CN116071283B true CN116071283B (en) | 2023-06-16 |
Family
ID=86171829
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310363382.3A Active CN116071283B (en) | 2023-04-07 | 2023-04-07 | Three-dimensional point cloud image fusion method based on computer vision |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116071283B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116561216B (en) * | 2023-07-04 | 2023-09-15 | 湖南腾琨信息科技有限公司 | Multi-dimensional space-time data visualization performance optimization method and system |
CN116824526B (en) * | 2023-08-29 | 2023-10-27 | 广州视安智能科技有限公司 | Digital intelligent road monitoring system based on image processing |
CN116993627B (en) * | 2023-09-26 | 2023-12-15 | 山东莱恩光电科技股份有限公司 | Laser scanning image data correction method |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112346073A (en) * | 2020-09-25 | 2021-02-09 | 中山大学 | Dynamic vision sensor and laser radar data fusion method |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111274976B (en) * | 2020-01-22 | 2020-09-18 | 清华大学 | Lane detection method and system based on multi-level fusion of vision and laser radar |
KR102338665B1 (en) * | 2020-03-02 | 2021-12-10 | 건국대학교 산학협력단 | Apparatus and method for classficating point cloud using semantic image |
CN112396650B (en) * | 2020-03-30 | 2023-04-07 | 青岛慧拓智能机器有限公司 | Target ranging system and method based on fusion of image and laser radar |
CN111337947B (en) * | 2020-05-18 | 2020-09-22 | 深圳市智绘科技有限公司 | Instant mapping and positioning method, device, system and storage medium |
CN115761550A (en) * | 2022-12-20 | 2023-03-07 | 江苏优思微智能科技有限公司 | Water surface target detection method based on laser radar point cloud and camera image fusion |
CN115830015B (en) * | 2023-02-09 | 2023-04-25 | 深圳市威祥五金制品有限公司 | Hardware stamping accessory quality detection method based on computer vision |
Also Published As
Publication number | Publication date |
---|---|
CN116071283A (en) | 2023-05-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN116071283B (en) | Three-dimensional point cloud image fusion method based on computer vision | |
CN110569704B (en) | Multi-strategy self-adaptive lane line detection method based on stereoscopic vision | |
CN114708585B (en) | Attention mechanism-based millimeter wave radar and vision fusion three-dimensional target detection method | |
CN110222626B (en) | Unmanned scene point cloud target labeling method based on deep learning algorithm | |
CN109658442B (en) | Multi-target tracking method, device, equipment and computer readable storage medium | |
CN114581887B (en) | Method, device, equipment and computer readable storage medium for detecting lane line | |
CN105160649A (en) | Multi-target tracking method and system based on kernel function unsupervised clustering | |
CN111383333A (en) | Segmented SFM three-dimensional reconstruction method | |
CN111144213A (en) | Object detection method and related equipment | |
CN115187964A (en) | Automatic driving decision-making method based on multi-sensor data fusion and SoC chip | |
CN112749654A (en) | Deep neural network model construction method, system and device for video fog monitoring | |
WO2023060632A1 (en) | Street view ground object multi-dimensional extraction method and system based on point cloud data | |
CN111178193A (en) | Lane line detection method, lane line detection device and computer-readable storage medium | |
CN112947419A (en) | Obstacle avoidance method, device and equipment | |
CN117274749B (en) | Fused 3D target detection method based on 4D millimeter wave radar and image | |
CN108416798A (en) | A kind of vehicle distances method of estimation based on light stream | |
CN116258817A (en) | Automatic driving digital twin scene construction method and system based on multi-view three-dimensional reconstruction | |
CN115032648A (en) | Three-dimensional target identification and positioning method based on laser radar dense point cloud | |
CN106709432B (en) | Human head detection counting method based on binocular stereo vision | |
CN117994987B (en) | Traffic parameter extraction method and related device based on target detection technology | |
CN113408550B (en) | Intelligent weighing management system based on image processing | |
CN113516853B (en) | Multi-lane traffic flow detection method for complex monitoring scene | |
CN117284320A (en) | Vehicle feature recognition method and system for point cloud data | |
CN114820931B (en) | Virtual reality-based CIM (common information model) visual real-time imaging method for smart city | |
CN116794650A (en) | Millimeter wave radar and camera data fusion target detection method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||