CN114820809A - Parameter determination method, equipment and computer storage medium - Google Patents

Parameter determination method, equipment and computer storage medium

Info

Publication number
CN114820809A
Authority
CN
China
Prior art keywords
feature vector
determining
image
target
point cloud
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210343808.4A
Other languages
Chinese (zh)
Inventor
田疆
刘林虎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd
Priority to CN202210343808.4A
Publication of CN114820809A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00: 2D [Two Dimensional] image generation
    • G06T11/003: Reconstruction from projections, e.g. tomography
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/10: Segmentation; Edge detection
    • G06T7/12: Edge-based segmentation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77: Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80: Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the application discloses a parameter determination method, a device, and a computer storage medium. The method comprises: for the same scene, obtaining point cloud data detected by a detection device and image data acquired by an image acquisition device; based on a target object in the scene, performing segmentation and two-dimensional projection transformation on the point cloud data to determine a first feature vector; performing image segmentation on the image data based on the target object to determine a second feature vector; and determining, according to the first feature vector and the second feature vector, target external parameters for data fusion between the detection device and the image acquisition device. In this way, the point cloud data is subjected to a two-dimensional projection transformation and fused with the image data, so that the target external parameters between the detection device and the image acquisition device can be accurately registered, and the data fused from the two devices matches the external scene.

Description

Parameter determination method, equipment and computer storage medium
Technical Field
The present application relates to the field of automatic registration technologies, and in particular, to a parameter determination method, device, and computer storage medium.
Background
Sensors are one of the key technologies of the Internet of Vehicles; common sensors include detection devices and image acquisition devices. For example, the detection device may be a lidar and the image acquisition device may be a camera, where the lidar obtains three-dimensional position information of objects around the vehicle and the camera obtains two-dimensional information, color information, and the like. Because a single sensor inevitably has limitations, a multi-sensor fusion scheme is usually adopted to improve the robustness of the system.
In the related art, a multi-sensor fusion scheme is generally an all-in-one machine integrating a 2D camera and a 3D lidar, with the geometric relationship between the two fixed. Because of this fixed geometric relationship, such a scheme cannot adapt to other scenes and cannot accurately fuse the data of the 3D lidar and the 2D camera.
Disclosure of Invention
The application provides a parameter determination method, equipment and a computer storage medium.
The technical scheme of the application is realized as follows:
In a first aspect, an embodiment of the present application provides a parameter determination method, which may include:
for the same scene, obtaining point cloud data detected by a detection device and image data acquired by an image acquisition device;
based on a target object in the scene, performing segmentation and two-dimensional projection transformation on the point cloud data to determine a first feature vector;
performing image segmentation on the image data based on the target object, and determining a second feature vector;
and determining target external parameters for data fusion between the detection device and the image acquisition device according to the first feature vector and the second feature vector.
In some embodiments, the point cloud data and the image data are obtained based on a target scene at the same time; wherein the target scene comprises at least one object.
In some embodiments, the method further comprises:
acquiring initial external parameters; wherein the initial external parameters are obtained according to the relative postures of the detection device and the image acquisition device;
correspondingly, the performing segmentation and two-dimensional projection transformation on the point cloud data and determining a first feature vector comprises:
carrying out object segmentation on the point cloud data to obtain a three-dimensional model;
carrying out two-dimensional projection transformation on the three-dimensional model by using the initial external parameters to obtain a first image in a two-dimensional format; wherein the first image comprises at least one object;
and determining a first feature vector corresponding to each of the at least one object based on the contour information of the at least one object.
In some embodiments, the determining the first feature vector corresponding to each of the at least one object based on the contour information of the at least one object includes:
performing normal vector distribution calculation according to contour information of a first object to obtain a contour feature vector of the first object;
triangulation processing is carried out on the contour information of the at least one object, normal vector distribution calculation is carried out on other contours of the first object in a preset neighborhood, and a background feature vector of the first object is obtained;
determining a first feature vector corresponding to the first object according to the contour feature vector of the first object and the background feature vector of the first object;
wherein the first object is any one of the at least one object.
In some embodiments, the image segmenting the image data based on the target object, determining a second feature vector, includes:
carrying out object segmentation on the image data to obtain a second image in a two-dimensional format; wherein the second image comprises at least one object;
and determining a second feature vector corresponding to each of the at least one object based on the contour information of the at least one object.
In some embodiments, the determining the second feature vector corresponding to each of the at least one object based on the contour information of the at least one object includes:
performing normal vector distribution calculation according to contour information of a second object to obtain a contour feature vector of the second object;
triangulation processing is carried out on the contour information of the at least one object, and normal vector distribution calculation is carried out on other contours of the second object in a preset neighborhood to obtain a background feature vector of the second object;
determining a second feature vector corresponding to the second object according to the contour feature vector of the second object and the background feature vector of the second object;
wherein the second object is any one of the at least one object.
In some embodiments, the determining, according to the first feature vector and the second feature vector, a target external parameter for data fusion between the detection device and the image acquisition device includes:
determining an objective function according to a similarity matrix of the first feature vector and the second feature vector;
and determining target external parameters for data fusion between the detection device and the image acquisition device according to the objective function.
In some embodiments, the method further comprises:
and when the target external parameters do not meet a preset condition, taking the target external parameters as the initial external parameters and returning to the step of performing segmentation and two-dimensional projection transformation on the point cloud data and determining the first feature vector, so that the target external parameters are updated.
In a second aspect, an embodiment of the present application provides an electronic device, including:
a memory for storing a computer program capable of running on the processor;
a processor for performing the method according to any of the first aspect when running the computer program.
In a third aspect, an embodiment of the present application provides a computer storage medium storing a computer program, which when executed by at least one processor implements the method according to any one of the first aspect.
The embodiment of the application provides a parameter determination method, a device, and a computer storage medium: for the same scene, point cloud data detected by a detection device and image data acquired by an image acquisition device are obtained; based on a target object in the scene, segmentation and two-dimensional projection transformation are performed on the point cloud data to determine a first feature vector; image segmentation is performed on the image data based on the target object to determine a second feature vector; and target external parameters for data fusion between the detection device and the image acquisition device are determined according to the first feature vector and the second feature vector. In this way, the point cloud data is subjected to a two-dimensional projection transformation and fused with the image data, so that the target external parameters between the detection device and the image acquisition device can be accurately registered, and the data fused from the two devices matches the external scene.
Drawings
Fig. 1 is a schematic flow chart of a parameter determining method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of another parameter determination method according to an embodiment of the present application;
FIG. 3 is a schematic diagram illustrating a parameter determination method according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of a parameter determining apparatus according to an embodiment of the present application;
fig. 5 is a schematic diagram of a specific hardware structure of an electronic device according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
So that the manner in which the features and elements of the present embodiments can be understood in detail, a more particular description of the embodiments, briefly summarized above, may be had by reference to embodiments, some of which are illustrated in the appended drawings.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict. It should also be noted that the terms "first/second/third" in the embodiments of the present application are only used to distinguish similar objects and do not represent a specific ordering of the objects; it should be understood that "first/second/third" may be interchanged in a specific order or sequence where permitted, so that the embodiments of the present application described herein can be implemented in an order other than that shown or described herein.
It can be understood that sensor fusion is one of the key technologies of the Internet of Vehicles, and a single sensor inevitably has limitations; to improve the robustness of the system, a multi-sensor fusion scheme is adopted. The camera is a well-known 2D sensor that outputs bounding boxes, lane line positions, traffic light colors, traffic signs, and much more. Lidar stands for light detection and ranging; it is a 3D sensor that outputs a set of point clouds, each point carrying a 3D coordinate. By fusing lidar and camera, the camera's resolution and its ability to understand context and classify objects are combined with lidar's ability to estimate range and perceive the 3D world.
In the related art, most current products are all-in-one machines integrating a 2D camera and a 3D lidar with a fixed geometric relationship, and the intended use scene is autonomous driving. In an Internet of Vehicles scene, more accurate and lower-cost environmental perception requires the geometric layout between the 3D lidar and the 2D camera to depend on the specific road environment, and requires fusion to rely on the target vehicles in the real road scene; a fixed scheme therefore lacks adaptability to the external scene and cannot accurately fuse the data of the 3D lidar and the 2D camera.
Based on this, the embodiment of the present application provides a parameter determination method, the basic idea of which is: for the same scene, obtain point cloud data detected by a detection device and image data acquired by an image acquisition device; based on a target object in the scene, perform segmentation and two-dimensional projection transformation on the point cloud data to determine a first feature vector; perform image segmentation on the image data based on the target object to determine a second feature vector; and determine, according to the first feature vector and the second feature vector, target external parameters for data fusion between the detection device and the image acquisition device. In this way, the point cloud data is subjected to a two-dimensional projection transformation and fused with the image data, so that the target external parameters between the detection device and the image acquisition device can be accurately registered, and the data fused from the two devices matches the external scene.
In an embodiment of the present application, referring to fig. 1, a flowchart of a parameter determination method provided in the embodiment of the present application is shown. As shown in fig. 1, the method may include:
S101: for the same scene, acquire point cloud data detected by the detection device and image data acquired by the image acquisition device.
It should be noted that the parameter determination method provided by the embodiment of the present application may be applied to any electronic device that needs to determine parameters during multi-sensor fusion. Here, the electronic device may be, for example, an automobile, a robot, an Augmented Reality (AR) device, or the like; the embodiments of the present application are not limited thereto.
It should be noted that the detecting device here may be a laser radar, and the image collecting device may be a camera. In the process of data fusion of a plurality of sensors in the same scene, data acquired by different sensors in the same scene need to be acquired, and specifically, point cloud data detected by a laser radar and image data acquired by a camera need to be acquired.
In some embodiments, for S101, the point cloud data and the image data are obtained based on a target scene at a same time; wherein the target scene comprises at least one object.
It should be noted that, when the point cloud data and the image data are collected, besides being captured in the same scene, they must also be captured at the same time. This ensures that the objects contained in the scene are the same for both sensors, so that the point cloud data and the image data reflect the parameters of the same objects in the same scene and can then be fused accordingly.
S102: based on a target object in the scene, perform segmentation and two-dimensional projection transformation on the point cloud data, and determine a first feature vector.
It should be noted that segmenting the point cloud data based on the target objects in the scene specifically means dividing the point cloud data into a plurality of portions according to the different objects, where each portion contains one object. The two-dimensional projection transformation then converts the original three-dimensional point cloud data into two-dimensional data through projection, so that the converted point cloud data can be better segmented into the subsequent two-dimensional image data used for data fusion.
In some embodiments, the method may further comprise: acquiring initial external parameters, where the initial external parameters are obtained according to the relative attitude of the detection device and the image acquisition device.
It should be further noted that, after the point cloud data from the lidar is obtained, the initial external parameters, derived from the relative attitude of the lidar and the camera, are obtained as well. A corresponding initial transformation matrix is built from these initial external parameters, and the point cloud data is coordinate-transformed with this matrix, so that the three-dimensional point cloud data is transformed into the two-dimensional camera coordinate system. The initial external parameters may be rough values between the lidar and the camera measured manually.
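As an illustrative sketch of this coordinate transformation (not part of the claimed method; the names R_init, t_init, and K and all numeric values are assumptions), the projection of lidar points into the camera image with a rough initial extrinsic pose may look as follows:

```python
import numpy as np

def project_points(points_lidar, R_init, t_init, K):
    """Project (N, 3) lidar-frame points to 2D pixel coordinates.

    R_init, t_init: rough initial extrinsics (lidar -> camera).
    K:              3x3 camera intrinsic matrix.
    """
    pts_cam = points_lidar @ R_init.T + t_init   # into the camera frame
    pts_cam = pts_cam[pts_cam[:, 2] > 0]         # keep points in front of the camera
    pts_img = (K @ pts_cam.T).T                  # perspective projection
    return pts_img[:, :2] / pts_img[:, 2:3]      # normalize by depth

# Roughly hand-measured initial pose (illustrative values only).
K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0,   0.0,   1.0]])
R_init = np.eye(3)
t_init = np.array([0.10, -0.05, 0.20])
cloud = np.random.rand(1000, 3) * 10.0 + np.array([0.0, 0.0, 5.0])
pixels = project_points(cloud, R_init, t_init, K)
```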
Thus, after the initial external parameters are obtained, for S102, refer to fig. 2, which shows a schematic flow chart of another parameter determination method provided in an embodiment of the present application. As shown in fig. 2, the performing segmentation and two-dimensional projection transformation on the point cloud data to determine a first feature vector may include:
S201: perform object segmentation on the point cloud data to obtain a three-dimensional model.
S202: perform two-dimensional projection transformation on the three-dimensional model by using the initial external parameters to obtain a first image in a two-dimensional format; wherein the first image comprises at least one object.
S203: determine a first feature vector corresponding to each of the at least one object based on the contour information of the at least one object.
It should be noted that the initial external parameters reflect the current pose of the lidar before the parameters are determined, without taking environmental factors into account. The image data acquired by the camera is two-dimensional, while the point cloud data detected by the lidar is three-dimensional, so the point cloud data needs to be converted into two-dimensional data for data fusion.
It should be further noted that performing object segmentation on the point cloud data to obtain a three-dimensional model means dividing the point cloud data by object, so that finally each part of the point cloud data contains only one object; the point cloud data corresponding to each object constitutes a three-dimensional model. The three-dimensional model is then projected to two-dimensional coordinates using the initial external parameters of the lidar, obtaining a first image in a two-dimensional format.
Specifically, in some embodiments, the determining the first feature vector corresponding to each of the at least one object based on the contour information of the at least one object may include:
performing normal vector distribution calculation according to contour information of a first object to obtain a contour feature vector of the first object;
triangulation processing is carried out on the contour information of the at least one object, normal vector distribution calculation is carried out on other contours of the first object in a preset neighborhood, and a background feature vector of the first object is obtained;
determining a first feature vector corresponding to the first object according to the contour feature vector of the first object and the background feature vector of the first object;
wherein the first object is any one of the at least one object.
The contour feature vector obtained by the normal vector distribution calculation represents the contour information of the first object in vector form. The background feature vector, obtained through triangulation with the first object as the center point, represents the normal vectors of the other three-dimensional models around the first object. Finally, the contour feature vector and the background feature vector of the first object are combined to serve as the first feature vector corresponding to the first object.
It should be further noted that the preset neighborhood consists of the other objects in the same scene that are adjacent to the first object, and each object in the scene is used both to calculate its own contour vector and to calculate the background feature vectors within the preset neighborhoods of the other objects.
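As a hedged illustration of how such feature vectors might be computed (the 16-bin angular histogram, the edge-length weighting, and the definition of neighbors as objects sharing a Delaunay triangle are assumptions not fixed by the embodiments; at least three objects are assumed so that the triangulation exists), the same construction also applies to the image-side descriptors below:

```python
import numpy as np
from scipy.spatial import Delaunay

def contour_normal_hist(contour, bins=16):
    """Normal-direction histogram of a closed 2D contour, given as ordered (P, 2) points."""
    edges = np.roll(contour, -1, axis=0) - contour
    normals = np.stack([edges[:, 1], -edges[:, 0]], axis=1)  # 90-degree rotation of edges
    angles = np.arctan2(normals[:, 1], normals[:, 0])
    hist, _ = np.histogram(angles, bins=bins, range=(-np.pi, np.pi),
                           weights=np.linalg.norm(edges, axis=1))
    return hist / max(hist.sum(), 1e-9)

def build_feature_vectors(contours):
    """Per-object feature vectors d_i = [background; contour], mirroring the embodiments."""
    centers = np.array([c.mean(axis=0) for c in contours])
    own = [contour_normal_hist(c) for c in contours]
    tri = Delaunay(centers)                  # triangulate the contour center points
    neighbors = {i: set() for i in range(len(contours))}
    for simplex in tri.simplices:            # objects sharing a triangle are neighbors
        for a in simplex:
            for b in simplex:
                if a != b:
                    neighbors[a].add(b)
    feats = []
    for i, h in enumerate(own):
        nbrs = sorted(neighbors[i])
        bg = np.mean([own[j] for j in nbrs], axis=0) if nbrs else np.zeros_like(h)
        feats.append(np.concatenate([bg, h]))
    return np.array(feats)
```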
In this way, the point cloud data is first segmented and projection-transformed into two dimensions to obtain a plurality of objects; a contour vector is obtained by calculating the normal vector distribution of each object's contour, a background feature vector is obtained by calculating the normal vector distribution of the other objects within the object's preset neighborhood, and the contour vector and the background feature vector together are determined as the first feature vector.
S103: perform image segmentation on the image data based on the target object, and determine a second feature vector.
It should be noted that segmenting the image data based on the target objects specifically means dividing the image data according to the different target objects; after the division is completed, each part contains one object, and corresponding second feature vectors are determined for the plurality of segmented objects.
In some embodiments, for S103, the image segmenting the image data based on the target object, and determining the second feature vector may include:
carrying out object segmentation on the image data to obtain a second image in a two-dimensional format; wherein the second image comprises at least one object;
and determining a second feature vector corresponding to each of the at least one object based on the contour information of the at least one object.
It should be noted that, in the process of segmenting the image data, the image data is divided according to the plurality of objects it contains; after the division is completed, each second image includes at least one object. Normal vector distribution calculation is performed on the at least one object to obtain its contour vector, the normal vector distributions of the surrounding objects are then determined to obtain a background feature vector, and finally the contour vector and the background feature vector together are determined as the second feature vector.
Specifically, in some embodiments, the determining the second feature vector corresponding to each of the at least one object based on the contour information of the at least one object may include:
performing normal vector distribution calculation according to contour information of a second object to obtain a contour feature vector of the second object;
triangulation processing is carried out on the contour information of the at least one object, and normal vector distribution calculation is carried out on other contours of the second object in a preset neighborhood to obtain a background feature vector of the second object;
determining a second feature vector corresponding to the second object according to the contour feature vector of the second object and the background feature vector of the second object;
wherein the second object is any one of the at least one object.
It should be noted that the contour feature vector obtained by the normal vector distribution calculation represents the contour information of the second object in vector form. The background feature vector, obtained through triangulation with the second object as the center point, represents the normal vectors of the other objects around the second object. Finally, the contour feature vector and the background feature vector of the second object are combined to serve as the second feature vector corresponding to the second object.
It should be further noted that the preset neighborhood consists of the other objects in the same scene that are adjacent to the second object, and each object in the scene is used both to calculate its own contour vector and to calculate the background feature vectors within the preset neighborhoods of the other objects.
S104: determine target external parameters for data fusion between the detection device and the image acquisition device according to the first feature vector and the second feature vector.
It should be noted that determining the target external parameters for data fusion between the detection device and the image acquisition device according to the first feature vector and the second feature vector specifically means using the first feature vectors corresponding to the plurality of objects determined from the point cloud data together with the second feature vectors corresponding to the plurality of objects determined from the image data.
Specifically, in some embodiments, for S104, the determining, according to the first feature vector and the second feature vector, a target external parameter for data fusion between the detection device and the image acquisition device may include:
determining an objective function according to a similarity matrix of the first feature vector and the second feature vector;
and determining target external parameters for data fusion between the detection device and the image acquisition device according to the objective function.
Specifically, a similarity matrix of the first feature vector and the second feature vector is calculated; the similarity matrix is clustered and normalized to obtain an objective function; and the target external parameters for data fusion between the detection device and the image acquisition device are determined according to the objective function.
It should be further noted that the entries of the similarity matrix are the similarities between the first feature vectors and the second feature vectors, computed using the cosine distance. After the objective function is obtained, it is abstracted as a function of the external parameters. Each time the external parameters are updated, the first and second feature vectors are re-determined and the objective function value is recalculated; the quality of the current external parameters is then evaluated, and it is decided whether updating needs to continue, until the external parameters with the optimal objective function value are obtained.
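A minimal sketch of this scoring step follows (the reduction of the clustered, normalized similarity matrix to a single scalar is an assumption; a best-match average is used here as one plausible reading):

```python
import numpy as np

def cosine_similarity_matrix(d3, d2):
    """s_ij = cos(d3_i, d2_j) for lidar features d3 (N, F) and image features d2 (M, F)."""
    a = d3 / np.linalg.norm(d3, axis=1, keepdims=True)
    b = d2 / np.linalg.norm(d2, axis=1, keepdims=True)
    return a @ b.T

def objective_value(d3, d2):
    """Scalar objective: mean best-match similarity (higher means a better extrinsic)."""
    s = cosine_similarity_matrix(d3, d2)
    return s.max(axis=1).mean()
```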
In some embodiments, the method may further comprise:
and when the target external parameters do not meet a preset condition, taking the target external parameters as the initial external parameters and returning to the step of performing segmentation and two-dimensional projection transformation on the point cloud data and determining the first feature vector, so that the target external parameters are updated.
It should be noted that, in some embodiments, the external parameters only need to meet the user's requirements, and the optimal external parameters need not be confirmed; a corresponding trade-off between time cost and effect can therefore be made. In general, when the external scene changes, the external parameters need to be re-determined; they may also be determined and updated periodically as needed.
In this way, the target external parameters for data fusion between the detection device and the image acquisition device can be continuously updated according to the first feature vector and the second feature vector.
The embodiment of the application provides a parameter determination method: for the same scene, point cloud data detected by a detection device and image data acquired by an image acquisition device are obtained; based on a target object in the scene, segmentation and two-dimensional projection transformation are performed on the point cloud data to determine a first feature vector; image segmentation is performed on the image data based on the target object to determine a second feature vector; and target external parameters for data fusion between the detection device and the image acquisition device are determined according to the first feature vector and the second feature vector. In this way, the point cloud data is subjected to a two-dimensional projection transformation and fused with the image data, so that the target external parameters between the detection device and the image acquisition device can be accurately registered, and the data fused from the two devices matches the external scene.
In another embodiment of the present application, a parameter determination method based on the foregoing embodiment is described with reference to fig. 3, which shows a schematic diagram of the principle of a parameter determination method provided in an embodiment of the present application. As shown in fig. 3, for the same scene, the camera and the lidar collect data at the same time; corresponding feature vectors are determined for each sensor, calculations are performed on the similarity matrices of the feature vectors corresponding to the different sensors (e.g., the camera and the lidar), the external parameters corresponding to the camera and the lidar are determined, and the newly determined external parameters replace the original ones.
Based on this principle, taking a camera and a lidar as the example sensors, the embodiment of the application provides a fully automatic lidar-camera fusion method based on a real road environment, which may comprise the following steps:
Step one: acquire lidar point cloud data and camera image data of the same scene at the same time;
Step two: segment the 3D target vehicles from the lidar point cloud data, project the segmented target vehicles to a two-dimensional view to obtain the corresponding 2D target-vehicle projection images, and on this basis calculate the corresponding feature vector d3;
It should be noted that the target vehicles are the objects in the foregoing embodiments; the number of target vehicles may be determined according to the specific vehicles present in the image data and the point cloud data, and the vehicles around the vehicle carrying the sensors are selected as the target vehicles.
Step three: perform 2D target-vehicle segmentation on the camera image data to obtain 2D image segmentations of the target vehicles, and on this basis calculate the corresponding feature vector d2.
By calculating the similarity between d3 and d2 and performing iterative optimization registration, the associated rotation matrix R and translation matrix t are obtained, and the external parameters corresponding to the camera and the lidar are finally determined.
It should be further noted that, in a specific implementation manner, the specific steps of the embodiment of the present application are as follows:
Step one: determine the initial external parameters (denoted here as T0) according to the relative attitude of the lidar and the camera, and acquire the intrinsic matrix P of the camera using checkerboard calibration;
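Checkerboard intrinsic calibration is a standard procedure; a sketch using OpenCV follows (the 9x6 board size and the calib/*.png image paths are placeholders):

```python
import glob
import cv2
import numpy as np

pattern = (9, 6)  # inner-corner count of the checkerboard (assumed)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points, size = [], [], None
for path in glob.glob("calib/*.png"):     # placeholder image directory
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# P is the camera intrinsic matrix used in the projection of step two.
ret, P, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, size, None, None)
```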
Step two: 3D target-vehicle segmentation and projection of the lidar point cloud data. The segmentation model uses a fully convolutional neural network to obtain a plurality of target vehicles, and the target vehicles are subdivided to make the 3D models smoother. The 3D models are projected with the intrinsic matrix P and the initial external parameters T0 to form a 2D image containing N 2D target vehicles. The outer contour of each target vehicle is obtained, and the distribution of the contour normal vectors is calculated as the contour feature vector d3_u_i of target vehicle i (1 ≤ i ≤ N). The center points of the N contours are triangulated (Delaunay), and the normal vector distribution of the other contours in the neighborhood of contour i is acquired as the background feature vector d3_c_i of target vehicle i. The feature vector of target vehicle i is then d3_i = [d3_c_i; d3_u_i];
Step three: 2D target-vehicle segmentation of the camera image. The segmentation model uses a fully convolutional neural network to obtain a plurality of target vehicles; the image contains M 2D target vehicles. The outer contour of each target vehicle is obtained, and the distribution of the contour normal vectors is calculated as the contour feature vector d2_u_j of target vehicle j (1 ≤ j ≤ M). Delaunay triangulation is likewise performed on the center points of the M contours, and the normal vector distribution of the other contours in the neighborhood of contour j is acquired as the background feature vector d2_c_j of target vehicle j. The feature vector of target vehicle j is then d2_j = [d2_c_j; d2_u_j];
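The steps above specify only that the segmentation model is a fully convolutional neural network. As an assumption-laden stand-in, an off-the-shelf instance segmentation model can produce per-vehicle masks from which the outer contours are extracted:

```python
import cv2
import numpy as np
import torch
import torchvision

# Stand-in for the unspecified fully convolutional segmentation network.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = torch.rand(3, 720, 1280)  # placeholder camera image, values in [0, 1]
with torch.no_grad():
    out = model([image])[0]

contours = []
for mask, score in zip(out["masks"], out["scores"]):
    if score < 0.7:  # assumed confidence threshold
        continue
    m = (mask[0].numpy() > 0.5).astype(np.uint8)
    found, _ = cv2.findContours(m, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if found:
        contours.append(max(found, key=cv2.contourArea).reshape(-1, 2))
```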
Step four: calculate the similarity matrix between the feature vectors d3_i obtained from the point cloud data and the feature vectors d2_j obtained from the camera image, S = [s_ij] with N rows and M columns, where the similarity of the feature vectors is computed using the cosine distance: s_ij = cos(d3_i, d2_j);
Step five: clustering the similar matrix, and normalizing the cosine distance of the clustering result as a target function;
Step six: abstract the objective function as a function of the external parameters (R, t). Each time the external parameters are updated, repeat steps two through five to compute the objective function value, then evaluate the quality of the current external parameters, and continue updating until the external parameters with the optimal objective function value are obtained.
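Tying steps two through six together, the outer loop is a black-box search over the external parameters. No particular optimizer is specified above, so a simple random local search is sketched here under that caveat, with evaluate() standing in for a full pass through steps two to five:

```python
import numpy as np

def evaluate(extrinsic6):
    """Placeholder for steps two to five: project the point cloud with the
    candidate extrinsic, segment, build d3 and d2, and return the objective
    value (see the earlier sketches). Must be wired in by the user."""
    raise NotImplementedError

def optimize_extrinsics(x0, iters=200, step=0.01, seed=0):
    """x0: 6-vector (3 rotation, 3 translation) from the rough initial pose."""
    rng = np.random.default_rng(seed)
    best_x, best_f = x0.copy(), evaluate(x0)
    for _ in range(iters):
        cand = best_x + rng.normal(scale=step, size=6)  # perturb the current best
        f = evaluate(cand)
        if f > best_f:          # keep the extrinsic with the better objective
            best_x, best_f = cand, f
    return best_x, best_f
```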
Briefly, through the above steps, the embodiment of the present application obtains the transformation relationship between the lidar and the camera, so that visual information and lidar information can be conveniently fused.
This embodiment provides a parameter determination method and elaborates its specific implementation based on the foregoing embodiments. It can be seen that, with the technical solution of this embodiment, registration is performed through fully automatic iterative optimization, which allows a more flexible deployment of the geometric relationship; during long-term operation, regular automatic registration can compensate for changes in the geometric layout between the lidar and the camera caused by external disturbances. In this way, the point cloud data is subjected to a two-dimensional projection transformation and fused with the image data, so that the target external parameters between the detection device and the image acquisition device can be accurately registered, and the data fused from the two devices matches the external scene.
In another embodiment of the present application, refer to fig. 4, which shows a schematic structural diagram of a parameter determining apparatus provided in the embodiment of the present application. As shown in fig. 4, the parameter determining device 40 may include:
a data obtaining unit 401 configured to obtain point cloud data detected by the detecting device and image data acquired by the image acquiring device for the same scene;
a data processing unit 402 configured to perform segmentation and two-dimensional projection transformation on the point cloud data based on a target object in the scene to determine a first feature vector;
a data processing unit 402, further configured to perform image segmentation on the image data based on the target object, and determine a second feature vector;
a parameter determining unit 403 configured to determine a target external parameter for data fusion between the detecting device and the image capturing device according to the first feature vector and the second feature vector.
In some embodiments, the point cloud data and the image data are obtained based on a target scene at the same time; wherein the target scene comprises at least one object.
In some embodiments, the data obtaining unit 401 is further configured to obtain initial external parameters, where the initial external parameters are obtained according to the relative attitude of the detection device and the image acquisition device;
a data processing unit 402, further configured to perform object segmentation on the point cloud data to obtain a three-dimensional model; perform two-dimensional projection transformation on the three-dimensional model by using the initial external parameters to obtain a first image in a two-dimensional format, wherein the first image comprises at least one object; and determine a first feature vector corresponding to each of the at least one object based on the contour information of the at least one object.
In some embodiments, the data processing unit 402 is further configured to perform normal vector distribution calculation according to the contour information of the first object to obtain a contour feature vector of the first object; perform triangulation processing on the contour information of the at least one object and normal vector distribution calculation on other contours of the first object in a preset neighborhood to obtain a background feature vector of the first object; and determine a first feature vector corresponding to the first object according to the contour feature vector of the first object and the background feature vector of the first object; wherein the first object is any one of the at least one object.
In some embodiments, the data processing unit 402 is further configured to perform object segmentation on the image data to obtain a second image in a two-dimensional format, wherein the second image comprises at least one object; and determine a second feature vector corresponding to each of the at least one object based on the contour information of the at least one object.
In some embodiments, the data processing unit 402 is further configured to perform normal vector distribution calculation according to the contour information of the second object to obtain a contour feature vector of the second object; perform triangulation processing on the contour information of the at least one object and normal vector distribution calculation on other contours of the second object in a preset neighborhood to obtain a background feature vector of the second object; and determine a second feature vector corresponding to the second object according to the contour feature vector of the second object and the background feature vector of the second object; wherein the second object is any one of the at least one object.
In some embodiments, the parameter determining unit 403 is further configured to determine an objective function according to a similarity matrix of the first feature vector and the second feature vector, and determine the target external parameters for data fusion between the detection device and the image acquisition device according to the objective function.
It is understood that in this embodiment, a "unit" may be a part of a circuit, a part of a processor, a part of a program or software, etc., and may also be a module, or may also be non-modular. Moreover, each component in the embodiment may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware or a form of a software functional module.
Based on such understanding, the technical solution of the present embodiment, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the method of the present embodiment. The aforementioned storage medium includes various media capable of storing program code, such as a USB disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Accordingly, the present embodiments provide a computer storage medium storing a computer program which, when executed by at least one processor, performs the steps of the method of any of the preceding embodiments.
Based on the composition of the parameter determination apparatus 40 and the computer storage medium, refer to fig. 5, which shows a specific hardware structure diagram of an electronic device provided in an embodiment of the present application. As shown in fig. 5, the electronic device 50 may include: a communication interface 501, a memory 502, and a processor 503; the various components are coupled together by a bus system 504. It is understood that the bus system 504 is used to enable communications among the components. The bus system 504 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled as bus system 504 in fig. 5. The communication interface 501 is used for receiving and sending signals in the process of receiving and sending information with other external network elements;
a memory 502 for storing a computer program capable of running on the processor 503;
a processor 503 for executing, when running the computer program, the following:
for the same scene, obtaining point cloud data detected by a detection device and image data acquired by an image acquisition device;
based on a target object in the scene, performing segmentation and two-dimensional projection transformation on the point cloud data to determine a first feature vector;
performing image segmentation on the image data based on the target object, and determining a second feature vector;
and determining target external parameters for data fusion between the detection device and the image acquisition device according to the first feature vector and the second feature vector.
It will be appreciated that the memory 502 in the embodiments of the subject application can be either volatile memory or nonvolatile memory, or can include both volatile and nonvolatile memory. The non-volatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. Volatile memory can be Random Access Memory (RAM), which acts as an external cache. By way of example, but not limitation, many forms of RAM are available, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), and Direct Rambus RAM (DRRAM). The memory 502 of the systems and methods described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
And the processor 503 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by instructions in the form of hardware integrated logic circuits or software in the processor 503. The Processor 503 may be a general-purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic device, or discrete hardware components. The various methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in ram, flash memory, rom, prom, or eprom, registers, etc. storage media as is well known in the art. The storage medium is located in the memory 502, and the processor 503 reads the information in the memory 502 and completes the steps of the above method in combination with the hardware thereof.
It is to be understood that the embodiments described herein may be implemented in hardware, software, firmware, middleware, microcode, or any combination thereof. For a hardware implementation, the Processing units may be implemented within one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), general purpose processors, controllers, micro-controllers, microprocessors, other electronic units configured to perform the functions described herein, or a combination thereof.
For a software implementation, the techniques described herein may be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described herein. The software codes may be stored in a memory and executed by a processor. The memory may be implemented within the processor or external to the processor.
Optionally, as another embodiment, the processor 503 is further configured to perform the steps of the method of any one of the preceding embodiments when running the computer program.
In some embodiments, refer to fig. 6, which shows a schematic structural diagram of an electronic device 50 provided in an embodiment of the present application. As shown in fig. 6, the electronic device 50 at least comprises the parameter determining apparatus 40 according to any of the previous embodiments.
In the embodiment of the present application, for the electronic device 50, point cloud data detected by the detection device and image data acquired by the image acquisition device are obtained for the same scene; based on a target object in the scene, segmentation and two-dimensional projection transformation are performed on the point cloud data to determine a first feature vector; image segmentation is performed on the image data based on the target object to determine a second feature vector; and target external parameters for data fusion between the detection device and the image acquisition device are determined according to the first feature vector and the second feature vector. In this way, the point cloud data is subjected to a two-dimensional projection transformation and fused with the image data, so that the target external parameters between the detection device and the image acquisition device can be accurately registered, and the data fused from the two devices matches the external scene.
It should be noted that, in the present application, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
The methods disclosed in the several method embodiments provided in the present application may be combined arbitrarily without conflict to obtain new method embodiments.
Features disclosed in several of the product embodiments provided in the present application may be combined in any combination to yield new product embodiments without conflict.
The features disclosed in the several method or apparatus embodiments provided in the present application may be combined arbitrarily, without conflict, to arrive at new method embodiments or apparatus embodiments.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A method of parameter determination, the method comprising:
obtaining, for the same scene, point cloud data detected by a detection device and image data acquired by an image acquisition device;
based on a target object in the scene, performing segmentation and two-dimensional projection transformation on the point cloud data to determine a first feature vector;
performing image segmentation on the image data based on the target object, and determining a second feature vector;
and determining target external parameters for data fusion between the detection device and the image acquisition device according to the first feature vector and the second feature vector.
2. The method of claim 1, the point cloud data and the image data being obtained based on a target scene at a same time; wherein the target scene comprises at least one object.
3. The method of claim 2, further comprising:
acquiring initial external parameters; wherein the initial external parameters are obtained according to the relative postures of the detection device and the image acquisition device;
correspondingly, the performing segmentation and two-dimensional projection transformation on the point cloud data and determining a first feature vector comprises:
carrying out object segmentation on the point cloud data to obtain a three-dimensional model;
carrying out two-dimensional projection transformation on the three-dimensional model by using the initial external parameters to obtain a first image in a two-dimensional format; wherein the first image comprises at least one object;
and determining a first feature vector corresponding to each of the at least one object based on the contour information of the at least one object.
4. The method of claim 3, wherein the determining a first feature vector for each of the at least one object based on the contour information of the at least one object comprises:
performing normal vector distribution calculation according to contour information of a first object to obtain a contour feature vector of the first object;
triangulation processing is carried out on the contour information of the at least one object, normal vector distribution calculation is carried out on other contours of the first object in a preset neighborhood, and a background feature vector of the first object is obtained;
determining a first feature vector corresponding to the first object according to the contour feature vector of the first object and the background feature vector of the first object;
wherein the first object is any one of the at least one object.
5. The method of claim 2, the image segmenting the image data based on the target object, determining a second feature vector, comprising:
carrying out object segmentation on the image data to obtain a second image in a two-dimensional format; wherein the second image comprises at least one object;
and determining a second feature vector corresponding to each of the at least one object based on the contour information of the at least one object.
6. The method of claim 5, wherein the determining a second feature vector for each of the at least one object based on the contour information of the at least one object comprises:
performing normal vector distribution calculation according to contour information of a second object to obtain a contour feature vector of the second object;
triangulation processing is carried out on the contour information of the at least one object, and normal vector distribution calculation is carried out on other contours of the second object in a preset neighborhood to obtain a background feature vector of the second object;
determining a second feature vector corresponding to the second object according to the contour feature vector of the second object and the background feature vector of the second object;
wherein the second object is any one of the at least one object.
7. The method according to any one of claims 1 to 6, wherein determining the target external parameters for data fusion between the detection device and the image acquisition device according to the first feature vector and the second feature vector comprises:
determining an objective function according to a similarity matrix of the first feature vector and the second feature vector;
and determining target external parameters for data fusion between the detection device and the image acquisition device according to the objective function.
8. The method of claim 7, further comprising:
and when the target external parameters do not meet a preset condition, taking the target external parameters as the initial external parameters and returning to the step of performing segmentation and two-dimensional projection transformation on the point cloud data and determining the first feature vector, so that the target external parameters are updated.
9. An electronic device, comprising:
a memory for storing a computer program capable of running on the processor;
a processor for performing the method of any one of claims 1 to 8 when running the computer program.
10. A computer storage medium storing a computer program which, when executed by at least one processor, implements the method of any one of claims 1 to 8.
CN202210343808.4A 2022-03-31 2022-03-31 Parameter determination method, equipment and computer storage medium Pending CN114820809A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210343808.4A CN114820809A (en) 2022-03-31 2022-03-31 Parameter determination method, equipment and computer storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210343808.4A CN114820809A (en) 2022-03-31 2022-03-31 Parameter determination method, equipment and computer storage medium

Publications (1)

Publication Number Publication Date
CN114820809A 2022-07-29

Family

ID=82532410

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210343808.4A Pending CN114820809A (en) 2022-03-31 2022-03-31 Parameter determination method, equipment and computer storage medium

Country Status (1)

Country Link
CN (1) CN114820809A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117422848A (en) * 2023-10-27 2024-01-19 神力视界(深圳)文化科技有限公司 Method and device for segmenting three-dimensional model


Similar Documents

Publication Publication Date Title
CN110988912B (en) Road target and distance detection method, system and device for automatic driving vehicle
US11632536B2 (en) Method and apparatus for generating three-dimensional (3D) road model
CN111126269B (en) Three-dimensional target detection method, device and storage medium
CN104833370B (en) System and method for mapping, positioning and pose correction
CN111797734B (en) Vehicle point cloud data processing method, device, equipment and storage medium
JP6767998B2 (en) Estimating external parameters of the camera from the lines of the image
CN111639663B (en) Multi-sensor data fusion method
US10909395B2 (en) Object detection apparatus
WO2018142900A1 (en) Information processing device, data management device, data management system, method, and program
KR102249769B1 (en) Estimation method of 3D coordinate value for each pixel of 2D image and autonomous driving information estimation method using the same
CN111862673B (en) Parking lot vehicle self-positioning and map construction method based on top view
US10872246B2 (en) Vehicle lane detection system
US11887336B2 (en) Method for estimating a relative position of an object in the surroundings of a vehicle and electronic control unit for a vehicle and vehicle
JP2020057387A (en) Vehicle positioning method, vehicle positioning device, electronic apparatus, and computer-readable storage medium
CN110969064A (en) Image detection method and device based on monocular vision and storage equipment
CN113030960B (en) Vehicle positioning method based on monocular vision SLAM
CN117576652B (en) Road object identification method and device, storage medium and electronic equipment
CN113706633B (en) Three-dimensional information determination method and device for target object
CN114820809A (en) Parameter determination method, equipment and computer storage medium
CN114494466A (en) External parameter calibration method, device and equipment and storage medium
KR102249381B1 (en) System for generating spatial information of mobile device using 3D image information and method therefor
CN111754388B (en) Picture construction method and vehicle-mounted terminal
CN116817891A (en) Real-time multi-mode sensing high-precision map construction method
CN114648639B (en) Target vehicle detection method, system and device
CN116343165A (en) 3D target detection system, method, terminal equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination