CN116777963A - Point cloud and image registration method and device, electronic equipment and storage medium

Point cloud and image registration method and device, electronic equipment and storage medium

Info

Publication number
CN116777963A
CN116777963A
Authority
CN
China
Prior art keywords
point cloud
image
point
target
panoramic
Prior art date
Legal status
Pending
Application number
CN202310860561.8A
Other languages
Chinese (zh)
Inventor
童朋飞
张晨光
樊邵宗
肖玉强
Current Assignee
Zhejiang Geely Holding Group Co Ltd
Ningbo Geely Automobile Research and Development Co Ltd
Original Assignee
Zhejiang Geely Holding Group Co Ltd
Ningbo Geely Automobile Research and Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Geely Holding Group Co Ltd, Ningbo Geely Automobile Research and Development Co Ltd
Priority to CN202310860561.8A
Publication of CN116777963A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)

Abstract

The application provides a point cloud and image registration method and apparatus, an electronic device, and a storage medium, relating to the technical field of intelligent driving. The method acquires a point cloud depth image and a camera panoramic image, extracts a first contour feature of a target object in the point cloud depth image and a second contour feature of the target object in the camera panoramic image, determines a position mapping relationship between the point cloud depth image and the camera panoramic image based on the positional relationship between the first contour feature and the second contour feature, and maps the pixel values of the camera panoramic image into the point cloud depth image based on the position mapping relationship and the coordinate mapping relationship between the point cloud coordinates of the point cloud depth image and the pixel coordinates of the camera panoramic image, thereby obtaining the target image. The method avoids scanning the features of all target objects in the point cloud depth image or the camera panoramic image, reduces computational complexity, and improves the speed of point cloud and image registration.

Description

Point cloud and image registration method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of intelligent driving technologies, and in particular, to a point cloud and image registration method and apparatus, an electronic device, and a storage medium.
Background
Image registration refers to the process of matching and superimposing two or more images of the same scene acquired by different sensors or under different conditions (illumination, imaging position, and angle).
In intelligent driving technology, a radar can detect a 3D target and acquire position information such as its distance, azimuth, and height, but cannot acquire texture features of the target, such as its color; an image sensor can acquire texture features of the target but cannot determine its depth (distance). Therefore, to obtain an accurate target image, the two different data sources, point cloud and image, must be registered.
Existing 2D-2D point cloud and image registration methods generally convert the point cloud into a depth image or an intensity image according to the point cloud height values or intensity values, and then register the two different data sources. Such methods are usually implemented with a mutual-information image registration algorithm, which suffers from high computational complexity and poor real-time performance.
Disclosure of Invention
The application provides a point cloud and image registration method for improving the speed of point cloud and image registration. The specific technical solution is as follows:
In a first aspect, the present application provides a point cloud and image registration method, including:
acquiring a point cloud depth image and a camera panoramic image;
extracting a first contour feature of a target object in the point cloud depth image and a second contour feature of the target object in the camera panoramic image;
determining a position mapping relationship between the point cloud depth image and the camera panoramic image based on the position relationship between the first contour feature and the second contour feature;
and mapping the pixel value of the camera panoramic image into the point cloud depth image based on the position mapping relation and the coordinate mapping relation between the point cloud coordinates of the point cloud depth image and the pixel coordinates of the camera panoramic image to obtain a target image.
Based on the method, the point cloud depth image and the camera panoramic image are registered according to the contour features of target objects in the road that have distinct edge information, as detected by an edge detection algorithm. This avoids a global scan of the features of the target objects in the point cloud depth image or the camera panoramic image, reduces computational complexity, and improves the registration speed of the point cloud depth image and the camera panoramic image.
In one possible implementation, the acquiring a camera panoramic image includes:
acquiring a first image and a second image acquired by an image acquisition device, wherein the first image and the second image are captured either at the same camera time from different shooting angles or at two adjacent camera times;
selecting the same target interest point from a first interest point set corresponding to the first image and a second interest point set corresponding to the second image;
determining a position mapping relationship between the first image and the second image based on first position information of the target point of interest in the first image and second position information of the target point of interest in the second image;
and generating the panoramic image of the camera based on the position mapping relation.
Based on the method, interest points in the first image and the second image that do not change with rotation, scaling, or brightness changes, that is, interest points with feature invariance, can be extracted. According to the position mapping relationship of the target interest points in the first image and the second image, a camera panoramic image containing various target objects can be obtained, so that the target object information contained in the camera panoramic image is richer.
In one possible implementation, before the acquiring the point cloud depth image, the method further includes:
acquiring a first point cloud set and a second point cloud set acquired by a radar detection device, wherein the first point cloud set and the second point cloud set are obtained by scanning either at the same laser time from different scanning angles or at two adjacent laser times;
selecting one candidate point cloud subset from the first point cloud set, and determining a target point cloud subset corresponding to the candidate point cloud subset in the second point cloud set;
constructing a transformation matrix between the candidate point cloud subset and the target point cloud subset based on the positional relationship between the candidate point cloud subset and the target point cloud subset;
transforming the candidate point cloud subset to the corresponding position of the target point cloud subset based on the transformation matrix;
calculating a position deviation value between the candidate point cloud subset and the target point cloud subset;
judging whether the position deviation value is larger than a set threshold value;
if yes, iteratively updating the transformation matrix until the position deviation value is smaller than or equal to the threshold value, or the number of iterations equals the set maximum number of iterations;
and if not, generating a panoramic point cloud set based on the transformation matrix.
Based on the method, the point cloud data sets scanned by the radar detection device at all angles or at successive laser times can be rapidly synthesized into a panoramic point cloud set, which improves the speed of obtaining a panoramic point cloud set corresponding to the scene of the camera panoramic image.
In one possible implementation, the acquiring the point cloud depth image includes:
selecting, from the panoramic point cloud set, all candidate panoramic point clouds located in a set point cloud space, and taking all the candidate panoramic point clouds as a target panoramic point cloud subset;
mapping the target panoramic point cloud subset into a pixel space corresponding to the point cloud space, and recording a coordinate mapping relationship between the point cloud space and the pixel space;
determining the respective vertical coordinates of the panoramic point clouds in the target panoramic point cloud subset;
and calculating the average of the vertical coordinates to obtain an elevation value, and generating the point cloud depth image by taking the elevation value as the depth value of the point cloud depth image.
Based on the method, the 3D panoramic point cloud set is converted into a point cloud depth image, which facilitates the subsequent registration of the point cloud depth image and the camera panoramic image.
In a second aspect, the present application provides a point cloud and image registration apparatus, comprising:
the data acquisition module is used for acquiring the point cloud depth image and the camera panoramic image;
the feature extraction module is used for extracting a first contour feature of a target object in the point cloud depth image and a second contour feature of the target object in the camera panoramic image;
the mapping module is used for determining the position mapping relation between the point cloud depth image and the camera panoramic image based on the position relation between the first contour feature and the second contour feature;
and the point cloud and image registration module is used for mapping the pixel value of the camera panoramic image into the point cloud depth image based on the position mapping relation and the coordinate mapping relation between the point cloud coordinates of the point cloud depth image and the pixel coordinates of the camera panoramic image to obtain a target image.
In one possible implementation, the data acquisition module is specifically configured to:
acquiring a first image and a second image acquired by an image acquisition device, wherein the first image and the second image are captured either at the same camera time from different shooting angles or at two adjacent camera times;
Selecting the same target interest point from a first interest point set corresponding to the first image and a second interest point set corresponding to the second image;
determining a position mapping relationship between the first image and the second image based on first position information of the target point of interest in the first image and second position information of the target point of interest in the second image;
and generating the panoramic image of the camera based on the position mapping relation.
In one possible implementation, the data acquisition module is further configured to:
acquiring a first point cloud set and a second point cloud set acquired by a radar detection device, wherein the first point cloud set and the second point cloud set are obtained by scanning either at the same laser time from different scanning angles or at two adjacent laser times;
selecting one candidate point cloud subset from the first point cloud set, and determining a target point cloud subset corresponding to the candidate point cloud subset in the second point cloud set;
constructing a transformation matrix between the candidate point cloud subset and the target point cloud subset based on the positional relationship between the candidate point cloud subset and the target point cloud subset;
transforming the candidate point cloud subset to the corresponding position of the target point cloud subset based on the transformation matrix;
calculating a position deviation value between the candidate point cloud subset and the target point cloud subset;
judging whether the position deviation value is larger than a set threshold value;
if yes, iteratively updating the transformation matrix until the position deviation value is smaller than or equal to the threshold value, or the number of iterations equals the set maximum number of iterations;
and if not, generating a panoramic point cloud set based on the transformation matrix.
In one possible implementation, the data acquisition module is specifically configured to:
selecting, from the panoramic point cloud set, all candidate panoramic point clouds located in a set point cloud space, and taking all the candidate panoramic point clouds as a target panoramic point cloud subset;
mapping the target panoramic point cloud subset into a pixel space corresponding to the point cloud space, and recording a coordinate mapping relationship between the point cloud space and the pixel space;
determining the respective vertical coordinates of the panoramic point clouds in the target panoramic point cloud subset;
and calculating the average of the vertical coordinates to obtain an elevation value, and generating the point cloud depth image by taking the elevation value as the depth value of the point cloud depth image.
In a third aspect, the present application provides an electronic device, comprising:
a memory for storing a computer program;
and a processor, configured to implement the steps of the above point cloud and image registration method when executing the computer program stored in the memory.
In a fourth aspect, the present application provides a computer-readable storage medium having stored therein a computer program which, when executed by a processor, implements the steps of the above point cloud and image registration method.
For the technical effects of the second to fourth aspects and each of their possible implementations, refer to the technical effects achievable by the first aspect or each possible implementation of the first aspect described above; the details are not repeated here.
Drawings
FIG. 1 is a flowchart of a point cloud and image registration method provided by the application;
FIG. 2 is a schematic diagram of a point cloud and image registration system architecture provided by the application;
FIG. 3 is a schematic diagram of camera positions provided by the application;
FIG. 4 is a schematic diagram of the positions of a first candidate panoramic point cloud and a second candidate panoramic point cloud provided by the application;
FIG. 5 is a schematic diagram of the coordinate mapping between a point cloud space and a pixel space provided by the application;
FIG. 6 is a schematic structural diagram of a point cloud and image registration apparatus provided by the application;
FIG. 7 is a schematic structural diagram of an electronic device provided by the application.
Detailed Description
In order to make the objects, technical solutions, and advantages of the present application clearer, the present application is further described in detail below with reference to the accompanying drawings. The specific operations in the method embodiments may also be applied to the apparatus embodiments or the system embodiments. In the description of the present application, "a plurality of" means "at least two". "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may indicate that A exists alone, that A and B exist together, or that B exists alone. "A is connected with B" covers both the case where A and B are connected directly and the case where A and B are connected through C. In addition, in the description of the present application, the words "first", "second", and the like are used merely to distinguish between descriptions and are not to be construed as indicating or implying relative importance or order.
Embodiments of the present application will be described in detail below with reference to the accompanying drawings.
Image registration refers to the process of matching and superimposing two or more images of the same scene acquired by different sensors or under different conditions (illumination, imaging position, and angle).
In intelligent driving technology, a radar can detect a 3D target and acquire position information such as its distance, azimuth, and height, but cannot acquire texture features of the target, such as its color; an image sensor can acquire texture features of the target but cannot determine its depth (distance). Therefore, to obtain an accurate target image, the two different data sources, point cloud and image, must be registered.
Existing 2D-2D point cloud and image registration methods generally convert the point cloud into a depth image or an intensity image according to the point cloud height values or intensity values, and then register the two different data sources. Such methods are usually implemented with a mutual-information image registration algorithm, which suffers from high computational complexity and poor real-time performance.
In view of this, in order to register images between the two different data sources of point cloud and image while reducing computational complexity and increasing the registration speed, the application provides a point cloud and image registration method, comprising: first acquiring a point cloud depth image and a camera panoramic image; then extracting a first contour feature of a target object in the point cloud depth image and a second contour feature of the target object in the camera panoramic image; next determining a position mapping relationship between the point cloud depth image and the camera panoramic image based on the positional relationship between the first contour feature and the second contour feature; and finally mapping the pixel values of the camera panoramic image into the point cloud depth image based on the position mapping relationship and the coordinate mapping relationship between the point cloud coordinates of the point cloud depth image and the pixel coordinates of the camera panoramic image, to obtain a target image.
With this method, a point cloud depth image converted from the point cloud coordinate system to the pixel coordinate system is obtained first. Then, by extracting only the first contour feature of the target object in the point cloud depth image and the second contour feature of the target object in the camera panoramic image, a global scan of the features in the two images is avoided, reducing computational complexity. Finally, the pixel values of the camera panoramic image are mapped into the point cloud depth image through the positional relationship between the first contour feature and the second contour feature and the coordinate mapping relationship between the point cloud coordinates of the point cloud depth image and the pixel coordinates of the camera panoramic image, obtaining the target image and improving the speed of point cloud and image registration.
Referring to fig. 1, a flowchart of a point cloud and image registration method according to an embodiment of the present application is shown, where the method includes:
s1, acquiring a point cloud depth image and a camera panoramic image.
First, the method provided by the application can be applied to the system architecture shown in fig. 2, which comprises a vehicle, an image acquisition device, a radar detection device, and an image processing device.
The number of each device is not particularly limited. The devices can be physically deployed on a vehicle, and the method provided by the application can run in the image processing device. The devices and their respective functions are briefly described below.
The image acquisition device is a physical sensor with environment perception capability that can acquire texture features of a target object in real time, where the target object can be a lane line, a street lamp, a vehicle, a pedestrian, or any other entity in the scene. The image acquisition device includes any type of vehicle-mounted camera, for example a front-view camera, a rear-view camera, a side-view camera, a surround-view camera, or a built-in camera. The image acquisition device captures different image data in real time from multiple viewing angles of the vehicle. It should be noted that, at the same camera time, two images captured from two adjacent viewing angles have an overlapping portion, so that the images captured from multiple viewing angles can later be combined into a panoramic image.
The radar detection device is a physical sensor with environment perception capability that can acquire position information of a target object, such as its distance and height. The radar detection device may be any one of a laser radar (LiDAR), a millimeter-wave radar, or an ultrasonic radar, or a combination thereof, which is not particularly limited in the present application. The radar detection device can likewise collect different point cloud data in real time from multiple angles of the vehicle at the same laser time, so that the point cloud data collected from multiple viewing angles can later be combined into a panoramic point cloud set.
The image processing device is used for receiving the image data acquired by the image acquisition device and the point cloud data acquired by the radar detection device, and correspondingly processing the data sources of the two different data types according to the set data processing rule.
The vehicle may be any vehicle with an autonomous driving function. In addition to the above devices, the vehicle is equipped with a positioning system, a control system, and the like; the positioning system may be an Inertial Measurement Unit (IMU), a Real-Time Kinematic (RTK) system, or a Global Positioning System (GPS), which is not particularly limited in the present application.
In the embodiment of the application, a single camera can only capture a scene image within a certain field of view; that is, an image captured by a single camera has a certain limitation. To obtain a panoramic image containing various target objects in all directions, cameras need to be installed at multiple angles of the vehicle, for example one camera every 90 degrees. The image processing device may then receive the image data collected by each camera via wireless or wired transmission; the communication manner between the image acquisition device and the image processing device is not particularly limited in the present application.
The image processing device receives the first image and the second image acquired by the image acquisition device. As shown in fig. 3, with the driver's seat as the origin of coordinates, camera 1 captures the first image in the positive x-axis direction and camera 2 captures the second image in the positive y-axis direction.
After receiving the first image and the second image, the image processing device needs to extract interest points (feature points) in the first image and the second image that do not change with rotation, scaling, brightness changes, and other factors. In the embodiment of the present application, the Scale-Invariant Feature Transform (SIFT) algorithm may be used to extract feature points, with the following steps:
step one, detecting a scale space extremum: all pictures on the gaussian pyramid scale space are searched, and potential feature points which are unchanged for scale and rotation are identified through a gaussian difference function.
Step two, feature point positioning: at each candidate feature point location, a fine model is simulated to determine the feature point location and scale.
Step three, determining the direction of the characteristic points: based on the local gradient direction of the image, one or more directions are allocated to each feature point, and all subsequent processing of the image data is transformed relative to the direction, scale and position of the feature point, so that the invariance of the feature is ensured.
Step four, describing characteristic points: gradients of the image local are measured at selected scales within a neighborhood around each feature point, these gradients acting as descriptors of the feature points, which allow for relatively large local deformations and illumination intensities.
Here, it should be noted that the scale space refers to a scale set formed by performing gaussian filtering on any one image cycle to form images with different scales and formed by scales of various different images.
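Purely as an illustration (the patent publishes no code), the four SIFT steps above can be sketched with OpenCV, whose SIFT implementation performs all of them inside a single call; the file names and the OpenCV version are assumptions, not values from the patent:

```python
# Sketch of SIFT feature extraction (assumes OpenCV >= 4.4, where SIFT is
# exposed as cv2.SIFT_create); image paths are placeholders, not from the patent.
import cv2

def extract_sift_features(image_path: str):
    """Detect scale- and rotation-invariant interest points with descriptors."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    # detectAndCompute covers all four steps: scale-space extremum detection,
    # feature point localization, orientation assignment, and description.
    keypoints, descriptors = sift.detectAndCompute(gray, None)
    return keypoints, descriptors  # descriptors: one 128-dimensional vector per point

kp1, des1 = extract_sift_features("first_image.png")
kp2, des2 = extract_sift_features("second_image.png")
```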
After extracting the interest points in the first image and the second image, the image processing device can select the same target interest point from the first interest point set corresponding to the first image and the second interest point set corresponding to the second image. The target interest point may be a lane line, a traffic light, a vehicle, or a pedestrian in the image, which is not particularly limited by the present application. The target interest point may be represented by a 128×1 feature matrix, which may be used to describe the color, dimensions, size, position, and the like of the target interest point.
The image processing device may determine the position mapping relationship between the first image and the second image according to the first position information of the target interest point in the first image and the second position information of the target interest point in the second image. The position mapping relationship can be described by a rotation-translation matrix, according to which the first image can be transformed to the position corresponding to the second image, or the second image to the position of the first image. Finally, the second image is merged with the transformed first image and the overlapping portion between them is removed, yielding a camera image synthesized from the two images (the first image and the second image) captured from two shooting angles at the same camera time.
In a possible implementation, following the above steps, all images acquired from multiple angles at the same camera time can be synthesized into a camera panoramic image for that time, or two images captured at two adjacent camera times can be synthesized into the camera panoramic image.
Through the SIFT algorithm, interest points in the first image and the second image that do not change with rotation, scaling, or brightness changes, that is, interest points with feature invariance, can be extracted, improving the feature extraction speed. According to the position mapping relationship of the target interest points in the first image and the second image, a camera panoramic image containing various target objects can be obtained, so that the target object information contained in the camera panoramic image is richer.
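A minimal sketch of this matching-and-stitching step follows, again purely illustrative: it assumes OpenCV, uses a brute-force matcher with Lowe's ratio test on the SIFT descriptors, estimates the mapping with RANSAC, and overlays the second image so the overlap appears only once. The canvas size and file names are placeholder assumptions:

```python
# Illustrative panorama synthesis from two overlapping views (not patent code).
import cv2
import numpy as np

img1 = cv2.imread("first_image.png")   # placeholder paths
img2 = cv2.imread("second_image.png")
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY), None)
kp2, des2 = sift.detectAndCompute(cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY), None)

# Keep only unambiguous matches (Lowe's ratio test) as target interest points.
matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # position mapping

# Warp the first image into the second image's frame, then overlay the second
# image so the overlapping portion appears only once.
h, w = img2.shape[:2]
panorama = cv2.warpPerspective(img1, H, (w * 2, h))   # assumed canvas width
panorama[0:h, 0:w] = img2
cv2.imwrite("camera_panorama.png", panorama)
```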
In the embodiment of the application, after the image processing device acquires the camera panoramic image, a panoramic point cloud set corresponding to the scene of the camera panoramic image needs to be synthesized, to ensure that the scene of the point cloud data acquired by the radar detection device is unified with the scene of the acquired camera panoramic image.
Specifically, the image processing device may synthesize the panoramic point cloud set using the Iterative Closest Point (ICP) algorithm, with the following steps:
The image processing device first acquires a first point cloud set and a second point cloud set acquired by the radar detection device. The first point cloud set and the second point cloud set can be obtained by the radar detection device scanning either at the same laser time from different scanning angles or at two adjacent laser times; the radar detection devices can be arranged in the same manner as the cameras on the vehicle described above, which is not repeated here.
Select a candidate point cloud subset P from the first point cloud set M, for example P = {P₁, P₂, …, Pₙ}.
Search the second point cloud set N for the target point cloud subset Q corresponding to the candidate point cloud subset P, for example Q = {Q₁, Q₂, …, Qₙ}, where the target point cloud Q₁ in the target point cloud subset may be the point cloud with the closest relative distance to the candidate point cloud P₁ in the candidate point cloud subset; the other point clouds correspond in the same manner, which is not repeated here.
Construct a transformation matrix between the candidate point cloud subset P and the target point cloud subset Q according to the positional relationship between them, and transform the candidate point cloud subset to the corresponding position Q* of the target point cloud subset according to the transformation matrix, for example Q* = R × P + T, where R is a rotation matrix and T is a translation matrix.
Calculate the position deviation between the candidate point cloud subset P and the target point cloud subset Q; it can be calculated, for example, as the mean of the squared distances between the transformed points and their corresponding target points, d = (1/n) Σᵢ ‖Q*ᵢ − Qᵢ‖².
Judge whether the position deviation value between the candidate point cloud subset P and the target point cloud subset Q is larger than a set threshold value. If yes, iteratively update the transformation matrix until the position deviation value is smaller than or equal to the set threshold value, or the number of iterative updates of the transformation matrix equals the set maximum number of iterations; if not, generate, based on the transformation matrix, the panoramic point cloud set synthesized from the first point cloud set and the second point cloud set at the same laser time.
In the same way, after generating the panoramic point cloud set synthesized from the first point cloud set and the second point cloud set at the same laser time, the image processing device can also synthesize all multi-angle point cloud sets acquired at the same laser time into a panoramic point cloud set for that time, or synthesize a third point cloud set and a fourth point cloud set scanned at two adjacent laser times into a panoramic point cloud set corresponding to the scene of the camera panoramic image.
Through the ICP algorithm, the point cloud data sets scanned by the radar detection device at all angles or at successive laser times can be rapidly synthesized into a panoramic point cloud set, improving the speed of obtaining a panoramic point cloud set corresponding to the scene of the camera panoramic image.
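The ICP loop described above can be sketched as follows; this is a generic point-to-point ICP under stated assumptions (SVD-based rigid transform, KD-tree nearest-neighbour correspondence, hypothetical threshold values), not the patent's exact implementation:

```python
# Minimal point-to-point ICP sketch (illustrative; thresholds are assumed).
import numpy as np
from scipy.spatial import cKDTree

def icp(P: np.ndarray, N: np.ndarray, max_iter: int = 50, tol: float = 1e-4):
    """Align candidate points P (n x 3) to the reference point cloud set N (m x 3)."""
    R, T = np.eye(3), np.zeros(3)
    tree = cKDTree(N)
    for _ in range(max_iter):
        P_t = P @ R.T + T                  # Q* = R x P + T
        _, idx = tree.query(P_t)           # target subset Q: closest points in N
        Q = N[idx]
        # Best rigid transform between the centred point sets via SVD (Kabsch).
        mu_p, mu_q = P.mean(axis=0), Q.mean(axis=0)
        U, _, Vt = np.linalg.svd((P - mu_p).T @ (Q - mu_q))
        if np.linalg.det(Vt.T @ U.T) < 0:  # guard against reflections
            Vt[-1] *= -1
        R = Vt.T @ U.T
        T = mu_q - R @ mu_p
        # Position deviation: mean squared distance between Q* and Q.
        deviation = np.mean(np.sum((P @ R.T + T - Q) ** 2, axis=1))
        if deviation <= tol:               # at or below the set threshold: stop
            break
    return R, T, deviation
```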
After the image processing device acquires the panoramic point cloud set, the 3D panoramic point cloud set needs to be converted into a point cloud depth image in order to register it with the 2D camera panoramic image.
In the embodiment of the application, the panoramic point cloud set is converted into the point cloud depth image in the following manner:
The image processing device first selects, from the panoramic point cloud set, all candidate panoramic point clouds located in a set point cloud space, and takes them as the target panoramic point cloud subset. As shown in fig. 4, a first candidate panoramic point cloud A with point cloud coordinates (0.2 m, 0.4 m, 0.5 m) and a second candidate panoramic point cloud B with point cloud coordinates (0.3 m, 0.5 m, 0.9 m) are selected from the panoramic point cloud set and taken as the target panoramic point cloud subset. It should be noted that there may be many candidate panoramic point clouds; the application is described with only the first candidate panoramic point cloud A and the second candidate panoramic point cloud B as an example.
The target panoramic point cloud subset is then mapped into a pixel space (grid) corresponding to the point cloud space, and the coordinate mapping relationship (index relationship) between the point cloud space and the pixel space is recorded; the mapping process is shown in fig. 5.
Next, determine the respective vertical coordinates of the panoramic point clouds in the target panoramic point cloud subset; for example, the first vertical coordinate of the first candidate panoramic point cloud is 0.5 m and the second vertical coordinate of the second candidate panoramic point cloud is 0.9 m.
Calculate the average of the vertical coordinates of the panoramic point clouds in the target panoramic point cloud subset to obtain an elevation value, for example (0.5 m + 0.9 m) / 2 = 0.7 m, and generate the point cloud depth image by taking the elevation value as the depth value of the point cloud depth image. Alternatively, the average of the intensity values of the panoramic point clouds in the target panoramic point cloud subset may be used as the depth value, where an intensity value refers to the laser energy reflected from the surface of the target object; the type of depth value of the point cloud depth image is not particularly limited in the application.
Here, it should be noted that if a certain pixel space contains no candidate panoramic point cloud, the empty pixel corresponding to that pixel space is removed.
By means of the above method, the 3D panoramic point cloud set is converted into a point cloud depth image, which facilitates the subsequent registration of the point cloud depth image and the camera panoramic image.
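The elevation mapping can be sketched as below; the grid cell size, image dimensions, and variable names are assumptions made for illustration, not values from the patent:

```python
# Illustrative conversion of a panoramic point cloud set into a point cloud
# depth image by averaging the vertical (z) coordinates per pixel cell.
import numpy as np

def cloud_to_depth_image(points: np.ndarray, cell: float = 1.0,
                         width: int = 200, height: int = 200):
    """points: n x 3 (x, y, z in metres). Returns the depth image and the
    recorded point-to-pixel coordinate mapping (index relationship)."""
    depth = np.zeros((height, width), dtype=np.float32)
    count = np.zeros((height, width), dtype=np.int32)
    index_map = {}
    for i, (x, y, z) in enumerate(points):
        u, v = int(x / cell), int(y / cell)    # pixel cell for this point
        if 0 <= u < width and 0 <= v < height:
            depth[v, u] += z                   # accumulate vertical coordinates
            count[v, u] += 1
            index_map.setdefault((v, u), []).append(i)
    occupied = count > 0                       # empty cells stay removed (zero)
    depth[occupied] /= count[occupied]         # elevation value = mean of z
    return depth, index_map

# With cell = 1.0, points A (0.2, 0.4, 0.5) and B (0.3, 0.5, 0.9) share one
# cell, whose depth value is (0.5 + 0.9) / 2 = 0.7 m, matching the example above.
points = np.array([[0.2, 0.4, 0.5], [0.3, 0.5, 0.9]])
depth_image, index_map = cloud_to_depth_image(points)
```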
S2, extracting a first contour feature of a target object in the point cloud depth image and a second contour feature of the target object in the camera panoramic image.
In the embodiment of the application, after the point cloud depth image and the camera panoramic image are acquired, the image processing device can use an edge detection algorithm to extract the contour features of each candidate target object in the point cloud depth image and in the camera panoramic image, then match the contour features in the point cloud depth image against those in the camera panoramic image, and determine one identical target object present in both the point cloud depth image and the camera panoramic image. This yields the first contour feature of the target object in the point cloud depth image and the second contour feature of the target object in the camera panoramic image. The first contour feature and the second contour feature can be structural edge information such as a road edge, a boundary line, or a lane line, which is not particularly limited in the application.
Extracting the first contour feature of the target object in the point cloud depth image and the second contour feature of the target object in the camera panoramic image through the edge detection algorithm avoids a global scan of the features of the target objects in the point cloud depth image or the camera panoramic image, reduces computational complexity, and improves the registration speed of the point cloud depth image and the camera panoramic image.
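The patent does not name a specific edge detector; as one hedged possibility, this step can be sketched with the Canny detector and Hu-moment shape matching from OpenCV, with file names and thresholds assumed:

```python
# Illustrative contour feature extraction and matching (assumed algorithm choices).
import cv2

def contours_of(image):
    edges = cv2.Canny(image, 50, 150)          # thresholds are assumed values
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return contours

depth_img = cv2.imread("point_cloud_depth.png", cv2.IMREAD_GRAYSCALE)
pano_img = cv2.imread("camera_panorama.png", cv2.IMREAD_GRAYSCALE)

# Take the contour pair with the smallest shape distance as the first and
# second contour features of one identical target object.
first_contour, second_contour = min(
    ((c1, c2) for c1 in contours_of(depth_img) for c2 in contours_of(pano_img)),
    key=lambda pair: cv2.matchShapes(pair[0], pair[1],
                                     cv2.CONTOURS_MATCH_I1, 0))
```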
S3, determining a position mapping relationship between the point cloud depth image and the camera panoramic image based on the positional relationship between the first contour feature and the second contour feature.
In the embodiment of the present application, after acquiring the first contour feature of the target object in the point cloud depth image and the second contour feature of the target object in the camera panoramic image, the image processing device may acquire the first contour position information corresponding to the first contour feature and the second contour position information corresponding to the second contour feature. According to the first contour position information and the second contour position information, the image processing device may establish the position mapping relationship between the point cloud depth image and the camera panoramic image. The position mapping relationship may be a rotation-translation matrix between the point cloud depth image and the camera panoramic image; it serves the same function as the position mapping relationship (rotation-translation matrix) between the first image and the second image described above, which is not repeated here.
By constructing a rotation-translation matrix between the point cloud depth image and the camera panoramic image, the point cloud depth image can be transformed to the position corresponding to the camera panoramic image, registering the positions of the point cloud depth image and the camera panoramic image.
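A sketch of estimating that rotation-translation matrix from corresponding contour points follows; the sample coordinates are hypothetical, and the assumption that corresponding points have been sampled in matched order along the two contours is an illustration choice, not the patent's:

```python
# Illustrative estimation of the rotation-translation matrix between the two
# images from corresponding contour points (coordinates are made-up examples).
import cv2
import numpy as np

src_pts = np.float32([[10, 40], [60, 42], [110, 45], [160, 47]])       # depth image
dst_pts = np.float32([[215, 140], [265, 141], [315, 143], [365, 146]]) # panorama

# A partial 2D affine transform = rotation + translation (+ uniform scale),
# playing the role of the rotation-translation matrix between the images.
M, inliers = cv2.estimateAffinePartial2D(src_pts, dst_pts, method=cv2.RANSAC)
print(M)  # 2 x 3 [R | t] matrix mapping depth-image pixels to panorama pixels
```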
And S4, mapping pixel values of the camera panoramic image into the point cloud depth image based on the position mapping relation and the coordinate mapping relation between the point cloud coordinates of the point cloud depth image and the pixel coordinates of the camera panoramic image, so as to obtain a target image.
In the embodiment of the application, the image processing device can first transform the point cloud depth image to the position corresponding to the camera panoramic image according to the position mapping relationship. After the positions of the point cloud depth image and the camera panoramic image are registered, the device can, according to the coordinate mapping (index) relationship between the point cloud coordinates of the point cloud depth image and the pixel coordinates of the camera panoramic image, reversely assign all pixel values in the camera panoramic image through the index, thereby mapping the pixel values of the camera panoramic image into the point cloud depth image to generate a colored point cloud, which serves as the target image after point cloud and image registration.
Here, it should be noted that the coordinate mapping (index) relationship between the point cloud coordinates of the point cloud depth image and the pixel coordinates of the camera panoramic image was already recorded when the target panoramic point cloud subset was mapped into the pixel space (grid) corresponding to the point cloud space, as described above.
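A sketch of this reverse assignment is given below; it reuses the names of the earlier sketches (points, index_map, the transform M, the panorama) as function parameters and is an assumption-laden illustration rather than the patent's code:

```python
# Illustrative reverse pixel assignment producing a colored point cloud.
import numpy as np

def colorize(points, index_map, M, panorama):
    """Return an n x 6 array (x, y, z, b, g, r) as the registered target image.
    points/index_map follow cloud_to_depth_image above; M is the 2 x 3
    rotation-translation matrix; panorama is the camera panoramic image."""
    colored = np.zeros((len(points), 6), dtype=np.float32)
    colored[:, :3] = points
    for (v, u), point_ids in index_map.items():
        # Map the depth-image pixel (u, v) into panorama pixel coordinates.
        x, y = M @ np.array([u, v, 1.0])
        x, y = int(round(x)), int(round(y))
        if 0 <= y < panorama.shape[0] and 0 <= x < panorama.shape[1]:
            colored[point_ids, 3:] = panorama[y, x]  # assign the pixel value
    return colored
```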
In summary, with the point cloud and image registration method provided by the application: through the SIFT algorithm, interest points with feature invariance in the first image and the second image can be extracted, the images acquired at each angle or at adjacent camera times can be registered according to the position mapping relationship of the target interest points in the first image and the second image, and a camera panoramic image containing various target objects can be synthesized, so that the target object information in the camera panoramic image is richer. Through the ICP algorithm, the point cloud data sets scanned by the radar detection device from different angles or at adjacent laser times can be rapidly registered and synthesized into a panoramic point cloud set corresponding to the scene of the camera panoramic image. The panoramic point cloud set is then converted into a point cloud depth image through elevation mapping, unifying the spaces of multiple laser radars and the image. Finally, the point cloud depth image and the camera panoramic image are registered according to the contour features, with distinct edge information, of the target object in the road detected by the edge detection algorithm, which avoids a global scan of the features of the target objects in the point cloud depth image or the camera panoramic image, reduces computational complexity, and improves the registration speed of the point cloud depth image and the camera panoramic image.
Based on the method provided in the foregoing embodiments, an embodiment of the present application further provides a point cloud and image registration apparatus. Fig. 6 shows a schematic structural diagram of the point cloud and image registration apparatus in the embodiment of the present application; the apparatus includes:
a data acquisition module 601, configured to acquire a point cloud depth image and a camera panoramic image;
a feature extraction module 602, configured to extract a first contour feature of a target object in the point cloud depth image and a second contour feature of the target object in the camera panoramic image;
a mapping module 603, configured to determine a positional mapping relationship between the point cloud depth image and the camera panoramic image based on a positional relationship between the first contour feature and the second contour feature;
the point cloud and image registration module 604 is configured to map the pixel value of the camera panoramic image to the point cloud depth image based on the position mapping relationship and the coordinate mapping relationship between the point cloud coordinates of the point cloud depth image and the pixel coordinates of the camera panoramic image, so as to obtain a target image.
In one possible implementation, the data acquisition module 601 is specifically configured to:
acquiring a first image and a second image acquired by an image acquisition device, wherein the first image and the second image are captured either at the same camera time from different shooting angles or at two adjacent camera times;
selecting the same target interest point from a first interest point set corresponding to the first image and a second interest point set corresponding to the second image;
determining a position mapping relationship between the first image and the second image based on first position information of the target point of interest in the first image and second position information of the target point of interest in the second image;
and generating the panoramic image of the camera based on the position mapping relation.
In one possible implementation, the data acquisition module 601 is further configured to:
acquiring a first point cloud set and a second point cloud set acquired by a radar detection device, wherein the first point cloud set and the second point cloud set are obtained by scanning either at the same laser time from different scanning angles or at two adjacent laser times;
selecting one candidate point cloud subset from the first point cloud set, and determining a target point cloud subset corresponding to the candidate point cloud subset in the second point cloud set;
constructing a transformation matrix between the candidate point cloud subset and the target point cloud subset based on the positional relationship between the candidate point cloud subset and the target point cloud subset;
transforming the candidate point cloud subset to the corresponding position of the target point cloud subset based on the transformation matrix;
calculating a position deviation value between the candidate point cloud subset and the target point cloud subset;
judging whether the position deviation value is larger than a set threshold value;
if yes, iteratively updating the transformation matrix until the position deviation value is smaller than or equal to the threshold value, or the number of iterations equals the set maximum number of iterations;
and if not, generating a panoramic point cloud set based on the transformation matrix.
In one possible implementation, the data acquisition module 601 is specifically configured to:
selecting, from the panoramic point cloud set, all candidate panoramic point clouds located in a set point cloud space, and taking all the candidate panoramic point clouds as a target panoramic point cloud subset;
mapping the target panoramic point cloud subset into a pixel space corresponding to the point cloud space, and recording a coordinate mapping relationship between the point cloud space and the pixel space;
determining the respective vertical coordinates of the panoramic point clouds in the target panoramic point cloud subset;
and calculating the average of the vertical coordinates to obtain an elevation value, and generating the point cloud depth image by taking the elevation value as the depth value of the point cloud depth image.
Based on the same inventive concept, the embodiment of the present application further provides an electronic device, where the electronic device may implement the functions of the foregoing point cloud and image registration apparatus, and referring to fig. 7, the electronic device includes:
at least one processor 701, and a memory 702 connected to the at least one processor 701. The specific connection medium between the processor 701 and the memory 702 is not limited in the embodiment of the present application; in fig. 7, the processor 701 and the memory 702 are connected through a bus 700 as an example. The bus 700 is shown with a thick line in fig. 7; the connections between the other components are merely illustrative and not limiting. The bus 700 may be divided into an address bus, a data bus, a control bus, and the like; for ease of representation it is shown with only one thick line in fig. 7, but this does not mean there is only one bus or one type of bus. Alternatively, the processor 701 may be referred to as a controller, and the name is not limiting.
In an embodiment of the present application, the memory 702 stores instructions executable by the at least one processor 701, and by executing the instructions stored in the memory 702, the at least one processor 701 can perform the point cloud and image registration method discussed above. The processor 701 may implement the functions of the respective modules of the apparatus shown in fig. 6.
The processor 701 is the control center of the device; it can connect the various parts of the entire control device using various interfaces and lines, and, by running or executing the instructions stored in the memory 702 and invoking the data stored in the memory 702, perform the various functions of the device and process data, thereby monitoring the device as a whole.
In one possible design, the processor 701 may include one or more processing units, and may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, application programs, and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 701. In some embodiments, the processor 701 and the memory 702 may be implemented on the same chip, or they may be implemented separately on different chips.
The processor 701 may be a general purpose processor such as a Central Processing Unit (CPU), digital signal processor, application specific integrated circuit, field programmable gate array or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, and may implement or perform the methods, steps, and logic blocks disclosed in embodiments of the application. The general purpose processor may be a microprocessor or any conventional processor or the like. The steps of the point cloud and image registration method disclosed in connection with the embodiment of the application can be directly embodied as being executed by a hardware processor or by a combination of hardware and software modules in the processor.
The memory 702 is a non-volatile computer-readable storage medium that can be used to store non-volatile software programs, non-volatile computer-executable programs, and modules. The memory 702 may include at least one type of storage medium, for example flash memory, hard disk, multimedia card, card memory, Random Access Memory (RAM), Static Random Access Memory (SRAM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), magnetic memory, magnetic disk, optical disc, and the like. The memory 702 may also be, without limitation, any other medium that can carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory 702 in the embodiments of the present application may also be a circuit or any other device capable of performing a storage function, for storing program instructions and/or data.
By programming the processor 701, the code corresponding to the point cloud and image registration method described in the foregoing embodiments may be burned into the chip, so that the chip can execute the steps of the point cloud and image registration method of the embodiment shown in fig. 1 at runtime. How to design and program the processor 701 is well known to those skilled in the art and is not described in detail here.
Based on the same inventive concept, embodiments of the present application also provide a storage medium storing computer instructions that, when run on a computer, cause the computer to perform the point cloud and image registration method discussed previously.
In some possible embodiments, aspects of the point cloud and image registration method provided by the present application may also be implemented in the form of a program product comprising program code for causing the control apparatus to carry out the steps in the point cloud and image registration method according to the various exemplary embodiments of the application as described herein above when the program product is run on an apparatus.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present application without departing from the spirit or scope of the application. Thus, it is intended that the present application also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (10)

1. A method of point cloud and image registration, the method comprising:
acquiring a point cloud depth image and a camera panoramic image;
extracting a first contour feature of a target object in the point cloud depth image and a second contour feature of the target object in the camera panoramic image;
determining a position mapping relationship between the point cloud depth image and the camera panoramic image based on the position relationship between the first contour feature and the second contour feature;
and mapping the pixel values of the camera panoramic image into the point cloud depth image based on the position mapping relationship and the coordinate mapping relationship between the point cloud coordinates of the point cloud depth image and the pixel coordinates of the camera panoramic image, to obtain a target image.
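For orientation only (not part of the claims), the following minimal Python sketch shows one plausible realisation of the flow in claim 1. It assumes OpenCV and NumPy, treats the largest contour in each image as the target object, and derives the position mapping as a crude scale-plus-translation fit between contour centroids and sizes; the claim fixes neither the feature extractor nor the form of the mapping, so every concrete choice below is an assumption.

```python
import cv2
import numpy as np

def contour_anchor(gray_8bit):
    """Centroid and diagonal size of the largest external contour,
    standing in for the claimed contour feature of the target object."""
    edges = cv2.Canny(gray_8bit, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    c = max(contours, key=cv2.contourArea)
    m = cv2.moments(c)
    center = np.array([m["m10"] / m["m00"], m["m01"] / m["m00"]])
    _, _, w, h = cv2.boundingRect(c)
    return center, float(np.hypot(w, h))

def register(depth_img, pano_img):
    """Map panoramic pixel values into the depth image frame (claim 1).
    depth_img: 8-bit single-channel rendering of the point cloud depth
    image; pano_img: BGR camera panoramic image."""
    c1, s1 = contour_anchor(depth_img)                        # first feature
    c2, s2 = contour_anchor(cv2.cvtColor(pano_img,
                                         cv2.COLOR_BGR2GRAY)) # second feature
    # Position mapping: uniform scale plus translation aligning the
    # two contour features (an assumed, deliberately simple model).
    s = s1 / s2
    t = c1 - s * c2
    M = np.array([[s, 0.0, t[0]], [0.0, s, t[1]]])
    h, w = depth_img.shape[:2]
    return cv2.warpAffine(pano_img, M, (w, h))                # target image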
2. The method of claim 1, wherein the acquiring a camera panoramic image comprises:
acquiring a first image and a second image captured by image acquisition equipment, wherein the first image and the second image are captured either at the same camera moment from different shooting angles or at two adjacent camera moments;
selecting the same target interest point from a first interest point set corresponding to the first image and a second interest point set corresponding to the second image;
determining a position mapping relationship between the first image and the second image based on first position information of the target point of interest in the first image and second position information of the target point of interest in the second image;
and generating the camera panoramic image based on the position mapping relationship.
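As an illustrative aside, the stitching flow of claim 2 can be sketched in Python with standard OpenCV tools: ORB interest points, brute-force matching to select the same target interest points in both images, and a RANSAC homography as the position mapping. The claim names none of these specific tools, so they are assumptions; the doubled canvas width is likewise a simplification.

```python
import cv2
import numpy as np

def stitch_panorama(img1, img2):
    """Sketch of claim 2: build a camera panoramic image from two views."""
    g1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(g1, None)   # first interest point set
    k2, d2 = orb.detectAndCompute(g2, None)   # second interest point set
    # Select the same target interest points by descriptor matching.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:100]
    p1 = np.float32([k1[m.queryIdx].pt for m in matches])
    p2 = np.float32([k2[m.trainIdx].pt for m in matches])
    # Position mapping between the images from the matched positions.
    H, _ = cv2.findHomography(p2, p1, cv2.RANSAC, 5.0)
    # Warp img2 into img1's frame on a wider canvas and paste img1 in.
    h, w = img1.shape[:2]
    pano = cv2.warpPerspective(img2, H, (2 * w, h))
    pano[:h, :w] = img1
    return pano
```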
3. The method of claim 1, wherein, prior to the acquiring of the point cloud depth image, the method further comprises:
acquiring a first point cloud set and a second point cloud set acquired by radar detection equipment, wherein the first point cloud set and the second point cloud set are obtained by scanning either at the same laser moment from different scanning angles or at two adjacent laser moments;
selecting a candidate point cloud subset from the first point cloud set, and determining a target point cloud subset corresponding to the candidate point cloud subset in the second point cloud set;
constructing a transformation matrix between the candidate point cloud subset and the target point cloud subset based on the position relationship between the candidate point cloud subset and the target point cloud subset;
transforming the candidate point cloud subset to the corresponding position of the target point cloud subset based on the transformation matrix;
calculating a position deviation value between the candidate point cloud subset and the target point cloud subset;
judging whether the position deviation value is greater than a set threshold;
if so, iteratively updating the transformation matrix until the position deviation value is less than or equal to the threshold, or the number of iterations reaches the set maximum number of iterations;
and if not, generating a panoramic point cloud set based on the transformation matrix.
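The iterative loop of claim 3 reads like an iterative-closest-point (ICP) style alignment. The NumPy sketch below is one plausible reading, with nearest-neighbour correspondences, an SVD-based rigid-transform update, and the two claimed stopping conditions (deviation at or below the threshold, or the maximum number of iterations). The claim names no specific algorithm, so every detail here is an assumption.

```python
import numpy as np

def align_subsets(src, dst, threshold=1e-3, max_iter=50):
    """ICP-style sketch of claim 3. src: candidate point cloud subset,
    dst: target point cloud subset, as (N, 3) and (M, 3) arrays."""
    R, t = np.eye(3), np.zeros(3)
    for _ in range(max_iter):
        moved = src @ R.T + t
        # Nearest-neighbour correspondence (O(N*M) memory; a sketch only).
        d = np.linalg.norm(moved[:, None, :] - dst[None, :, :], axis=2)
        nn = dst[d.argmin(axis=1)]
        # Position deviation value between the two subsets.
        deviation = float(np.mean(np.linalg.norm(moved - nn, axis=1)))
        if deviation <= threshold:
            break                      # deviation within the set threshold
        # Update the transformation via the Kabsch/SVD rigid fit.
        mu_s, mu_d = moved.mean(axis=0), nn.mean(axis=0)
        U, _, Vt = np.linalg.svd((moved - mu_s).T @ (nn - mu_d))
        R_step = Vt.T @ U.T
        if np.linalg.det(R_step) < 0:  # guard against reflections
            Vt[-1] *= -1
            R_step = Vt.T @ U.T
        t_step = mu_d - R_step @ mu_s
        R, t = R_step @ R, R_step @ t + t_step
    return R, t                        # maps src into dst's frame
```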
4. The method of claim 3, wherein the acquiring the point cloud depth image comprises:
selecting, from the panoramic point cloud set, all candidate panoramic point clouds within a set point cloud space, and taking all of the candidate panoramic point clouds as a target panoramic point cloud subset;
mapping the target panoramic point cloud subset into a pixel space corresponding to the point cloud space, and recording a coordinate mapping relationship between the point cloud space and the pixel space;
determining the respective vertical coordinates of the panoramic point clouds in the target panoramic point cloud subset;
and calculating the average of all the vertical coordinates to obtain an elevation value, and generating the point cloud depth image by taking the elevation value as the depth value of the point cloud depth image.
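To make the elevation computation of claim 4 concrete, the NumPy sketch below rasterises a panoramic point cloud subset into a depth image whose pixel value is the mean of the vertical coordinates of the points mapped to that pixel. The bounds and resolution parameters, and the per-pixel (rather than global) averaging, are illustrative assumptions about how the set point cloud space and the elevation value could be realised.

```python
import numpy as np

def cloud_to_depth_image(points, bounds, resolution):
    """Sketch of claim 4. points: (N, 3) panoramic point cloud set;
    bounds: (xmin, xmax, ymin, ymax) of the set point cloud space;
    resolution: point-cloud units per pixel (both assumed parameters)."""
    xmin, xmax, ymin, ymax = bounds
    # Select the candidate panoramic point clouds inside the space.
    inside = ((points[:, 0] >= xmin) & (points[:, 0] < xmax) &
              (points[:, 1] >= ymin) & (points[:, 1] < ymax))
    target = points[inside]                      # target subset
    # Coordinate mapping between point cloud space and pixel space.
    cols = ((target[:, 0] - xmin) / resolution).astype(int)
    rows = ((target[:, 1] - ymin) / resolution).astype(int)
    h = int(np.ceil((ymax - ymin) / resolution))
    w = int(np.ceil((xmax - xmin) / resolution))
    # Elevation value per pixel: mean of the vertical coordinates of
    # the points that land in that pixel; empty pixels stay at zero.
    z_sum = np.zeros((h, w))
    count = np.zeros((h, w))
    np.add.at(z_sum, (rows, cols), target[:, 2])
    np.add.at(count, (rows, cols), 1)
    return np.divide(z_sum, count,
                     out=np.zeros_like(z_sum), where=count > 0)
```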
5. A point cloud and image registration apparatus, comprising:
the data acquisition module is used for acquiring the point cloud depth image and the camera panoramic image;
the feature extraction module is used for extracting a first contour feature of a target object in the point cloud depth image and a second contour feature of the target object in the camera panoramic image;
the mapping module is used for determining the position mapping relation between the point cloud depth image and the camera panoramic image based on the position relation between the first contour feature and the second contour feature;
and the point cloud and image registration module is used for mapping the pixel values of the camera panoramic image into the point cloud depth image based on the position mapping relationship and the coordinate mapping relationship between the point cloud coordinates of the point cloud depth image and the pixel coordinates of the camera panoramic image, to obtain a target image.
6. The apparatus of claim 5, wherein the data acquisition module is specifically configured to:
acquiring a first image and a second image captured by image acquisition equipment, wherein the first image and the second image are captured either at the same camera moment from different shooting angles or at two adjacent camera moments;
selecting the same target interest point from a first interest point set corresponding to the first image and a second interest point set corresponding to the second image;
determining a position mapping relationship between the first image and the second image based on first position information of the target point of interest in the first image and second position information of the target point of interest in the second image;
and generating the camera panoramic image based on the position mapping relationship.
7. The apparatus of claim 5, wherein the data acquisition module is further configured to:
acquire a first point cloud set and a second point cloud set acquired by radar detection equipment, wherein the first point cloud set and the second point cloud set are obtained by scanning either at the same laser moment from different scanning angles or at two adjacent laser moments;
select a candidate point cloud subset from the first point cloud set, and determine a target point cloud subset corresponding to the candidate point cloud subset in the second point cloud set;
construct a transformation matrix between the candidate point cloud subset and the target point cloud subset based on the position relationship between the candidate point cloud subset and the target point cloud subset;
transform the candidate point cloud subset to the corresponding position of the target point cloud subset based on the transformation matrix;
calculate a position deviation value between the candidate point cloud subset and the target point cloud subset;
judge whether the position deviation value is greater than a set threshold;
if so, iteratively update the transformation matrix until the position deviation value is less than or equal to the threshold, or the number of iterations reaches the set maximum number of iterations;
and if not, generate a panoramic point cloud set based on the transformation matrix.
8. The apparatus of claim 5, wherein the data acquisition module is specifically configured to:
selecting, from the panoramic point cloud set, all candidate panoramic point clouds within a set point cloud space, and taking all of the candidate panoramic point clouds as a target panoramic point cloud subset;
mapping the target panoramic point cloud subset into a pixel space corresponding to the point cloud space, and recording a coordinate mapping relationship between the point cloud space and the pixel space;
determining the respective vertical coordinates of the panoramic point clouds in the target panoramic point cloud subset;
and calculating the average of all the vertical coordinates to obtain an elevation value, and generating the point cloud depth image by taking the elevation value as the depth value of the point cloud depth image.
9. An electronic device, comprising:
a memory for storing a computer program;
a processor for carrying out the method steps of any one of claims 1-4 when executing a computer program stored on said memory.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored therein a computer program which, when executed by a processor, implements the method steps of any of claims 1-4.
CN202310860561.8A 2023-07-13 2023-07-13 Point cloud and image registration method and device, electronic equipment and storage medium Pending CN116777963A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310860561.8A CN116777963A (en) 2023-07-13 2023-07-13 Point cloud and image registration method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310860561.8A CN116777963A (en) 2023-07-13 2023-07-13 Point cloud and image registration method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116777963A true CN116777963A (en) 2023-09-19

Family

ID=87991366

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310860561.8A Pending CN116777963A (en) 2023-07-13 2023-07-13 Point cloud and image registration method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116777963A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116958220A (en) * 2023-09-20 2023-10-27 深圳市信润富联数字科技有限公司 Camera visual field range generation method and device, storage medium and electronic equipment
CN116958220B (en) * 2023-09-20 2024-01-12 深圳市信润富联数字科技有限公司 Camera visual field range generation method and device, storage medium and electronic equipment
CN117876430A (en) * 2024-03-13 2024-04-12 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) Method, equipment and medium for predicting glance path in panoramic image and video
CN117994309A (en) * 2024-04-07 2024-05-07 绘见科技(深圳)有限公司 SLAM laser point cloud and panoramic image automatic registration method based on large model

Similar Documents

Publication Publication Date Title
CN116777963A (en) Point cloud and image registration method and device, electronic equipment and storage medium
EP2491529B1 (en) Providing a descriptor for at least one feature of an image
KR102200299B1 (en) A system implementing management solution of road facility based on 3D-VR multi-sensor system and a method thereof
US10621446B2 (en) Handling perspective magnification in optical flow processing
CN111815707A (en) Point cloud determining method, point cloud screening device and computer equipment
CN111507327A (en) Target detection method and device
CN110634138A (en) Bridge deformation monitoring method, device and equipment based on visual perception
Zingoni et al. Real-time 3D reconstruction from images taken from an UAV
Gupta et al. Augmented reality system using lidar point cloud data for displaying dimensional information of objects on mobile phones
CN114119682A (en) Laser point cloud and image registration method and registration system
CN117765039A (en) Point cloud coarse registration method, device and equipment
CN110298320B (en) Visual positioning method, device and storage medium
CN116977671A (en) Target tracking method, device, equipment and storage medium based on image space positioning
CN116912417A (en) Texture mapping method, device, equipment and storage medium based on three-dimensional reconstruction of human face
US20220301176A1 (en) Object detection method, object detection device, terminal device, and medium
CN112146647B (en) Binocular vision positioning method and chip for ground texture
WO2019080257A1 (en) Electronic device, vehicle accident scene panoramic image display method and storage medium
JP7251631B2 (en) Template creation device, object recognition processing device, template creation method, object recognition processing method, and program
CN112528918A (en) Road element identification method, map marking method and device and vehicle
CN112215048A (en) 3D target detection method and device and computer readable storage medium
CN113255405A (en) Parking space line identification method and system, parking space line identification device and storage medium
CN114780762B (en) Point cloud ranging automatic labeling method and system for night vision image of power transmission line
Kim et al. Geo-registration of wide-baseline panoramic image sequences using a digital map reference
CN116740681B (en) Target detection method, device, vehicle and storage medium
KR102249380B1 (en) System for generating spatial information of CCTV device using reference image information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination