CN113269840A - Combined calibration method for camera and multi-laser radar and electronic equipment - Google Patents

Combined calibration method for camera and multi-laser radar and electronic equipment

Info

Publication number
CN113269840A
CN113269840A
Authority
CN
China
Prior art keywords
camera
point cloud
target
cloud data
laser radar
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110586612.3A
Other languages
Chinese (zh)
Other versions
CN113269840B (en)
Inventor
陈飞逸
马福龙
李明阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Yiqing Innovation Technology Co ltd
Original Assignee
Shenzhen Yiqing Innovation Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Yiqing Innovation Technology Co ltd filed Critical Shenzhen Yiqing Innovation Technology Co ltd
Priority to CN202110586612.3A priority Critical patent/CN113269840B/en
Priority claimed from CN202110586612.3A external-priority patent/CN113269840B/en
Publication of CN113269840A publication Critical patent/CN113269840A/en
Application granted granted Critical
Publication of CN113269840B publication Critical patent/CN113269840B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85Stereo camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30244Camera pose

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The embodiment of the invention relates to the technical field of sensor calibration, in particular to a combined calibration method for a camera and multiple laser radars and electronic equipment. Point cloud data collected by each laser radar and image data collected by the camera are acquired, and for each point cloud data, first point cloud data belonging to the three surfaces of a structured object are extracted. Then, pseudo point cloud data are generated according to the image data, external parameters between any two laser radars are determined according to the first point cloud data, and external parameters between the camera and a target laser radar are determined according to the pseudo point cloud data and the first point cloud data corresponding to the target laser radar. Based on the known geometric relationship of the three surfaces of the structured object, the point cloud corresponding to the calibration plate does not need to be manually extracted and no prior information is needed, so the position of the calibration plate in the point cloud data can be located automatically and accurately, and the first point cloud data used for determining the external parameters are accurate. Therefore, the method realizes integrated, automatic calibration between the camera and multiple laser radars with high external parameter calibration accuracy.

Description

Combined calibration method for camera and multi-laser radar and electronic equipment
Technical Field
The embodiment of the invention relates to the technical field of sensor calibration, in particular to a combined calibration method for a camera and a multi-laser radar and electronic equipment.
Background
In recent years, autonomous machines in various fields have increasingly come into view. With the rapid development of related fields such as automatic driving and robotics, information from a single sensor often cannot meet the requirements of environmental perception. Taking automatic driving as an example, a vehicle typically uses all installed sensors to sense and understand its surroundings, so that it can identify the type and position of objects more accurately and classify the surrounding environment, thereby obtaining accurate real-time road condition information. The camera and the laser radar are the most commonly used sensors and are generally used for perception tasks such as 2D/3D object detection, depth map completion, and real-time positioning; accurate external parameters between camera and radar, and between radar and radar, are a prerequisite for completing these tasks.
At the present stage, for radar-to-radar external parameter calibration, point cloud registration is performed by constructing a local map for each radar and iterating continuously to obtain the final external parameters, which requires distinctive buildings nearby; alternatively, a calibration object with a special shape is used to assist the calibration, for example, in an underground parking lot environment, geometric features are extracted from the different radar point clouds and matched according to their autocorrelation to obtain the external parameters of the radars. For camera-radar external parameter calibration, the camera detects the corner points of a specific object, the corresponding points are searched for in the radar point cloud, and PnP is used to solve the camera-radar external parameters.
In the above methods for calibrating radar-to-radar and camera-radar external parameters, after the external parameters are initially calibrated, they need to be manually fine-tuned, and the precision is low. Furthermore, external parameters between the camera and any two of multiple radars cannot be calibrated simultaneously. That is, the existing calibration methods have low precision, require manual participation, cannot meet the needs of industrial large-scale automatic calibration, and cannot complete independent calibration of multiple sensors.
Disclosure of Invention
The embodiment of the invention mainly solves the technical problem of providing a combined calibration method for a camera and multiple laser radars, an electronic device, and a storage medium, which realize integrated, automatic calibration between the camera and multiple laser radars with high external parameter calibration accuracy.
In order to solve the above technical problem, in a first aspect, an embodiment of the present invention provides a combined calibration method for a camera and multiple lidar, where the camera and at least two lidar are fixed on an electronic device, and a structured object having three surfaces is provided in a shooting range of the camera and in scanning ranges of the at least two lidar, and calibration plates are respectively provided on the three surfaces of the structured object, the method includes:
acquiring point cloud data respectively acquired by the at least two laser radars and acquiring image data acquired by the camera;
for each point cloud data in the point cloud data, extracting first point cloud data belonging to three surfaces of the structured object from the point cloud data;
generating pseudo point cloud data of each calibration plate in a camera coordinate system according to the image data;
determining external parameters between any two laser radars in each laser radar according to the first point cloud data corresponding to each laser radar;
and determining external parameters between the camera and a target laser radar according to the pseudo-point cloud data and first point cloud data corresponding to the target laser radar, wherein the target laser radar is any one of the at least two laser radars.
In some embodiments, the determining, according to the first point cloud data respectively corresponding to each of the lidar, an external parameter between any two of the lidar in each of the lidar includes:
extracting a plane group corresponding to each laser radar from each first point cloud data, wherein the plane group comprises plane point clouds reflecting three surfaces of the structured object;
and determining external parameters between any two laser radars in the laser radars according to the plane group corresponding to each laser radar.
In some embodiments, the determining an external parameter between any two of the lidar according to the plane group corresponding to each of the lidar includes:
determining the plane corresponding relation between the plane groups according to the plane groups of the laser radars;
acquiring initial external parameters between any two laser radars according to the plane corresponding relation between each plane group by adopting a preset rotation matrix algorithm;
and carrying out nonlinear optimization on each initial external parameter to obtain the external parameter between any two laser radars.
In some embodiments, the performing nonlinear optimization on each initial external parameter to obtain the external parameter between any two laser radars includes:
for each initial external parameter, adjusting the initial external parameter until a first optimization function meets a first preset optimization condition, and obtaining the external parameter between any two laser radars;
wherein the first optimization function is:
$$\min_{R,T}\ \sum_{i=1}^{3}\ \sum_{p\in P_i} d\big(Rp+T,\ \hat{\pi}_i\big)$$

wherein

$$d\big(Rp+T,\ \hat{\pi}_i\big)=\big|\hat{n}_i^{\top}(Rp+T)+\hat{d}_i\big|$$

wherein $\pi_i$ and $\hat{\pi}_i$ are corresponding planes in the two plane groups, $P_i$ is the point cloud on plane $\pi_i$, $\hat{n}_i$ is the normal vector of plane $\hat{\pi}_i$, $\hat{d}_i$ is the intercept of plane $\hat{\pi}_i$, and (R, T) is the initial external parameters between any two of the laser radars.
In some embodiments, the determining external parameters between the camera and the target lidar according to the pseudo-point cloud data and the first point cloud data corresponding to the target lidar includes:
extracting a target plane group from first point cloud data corresponding to the target laser radar, and acquiring a camera plane group from the pseudo-point cloud data;
acquiring initial external parameters between the camera and the target laser radar according to the camera plane group and the target plane group by adopting a preset rotation matrix algorithm;
performing two-step optimization on the initial external parameters between the camera and the target laser radar to obtain the external parameters between the camera and the target laser radar.
In some embodiments, the performing two-step optimization on the initial external parameters between the camera and the target laser radar to obtain the external parameters between the camera and the target laser radar comprises:
performing first optimization on the initial external parameters between the camera and the target laser radar to obtain first external parameters between the camera and the target laser radar, wherein the first optimization is an optimization of the translation vector in the initial external parameters between the camera and the target laser radar;
and performing second optimization on the first external parameters between the camera and the target laser radar to obtain the external parameters between the camera and the target laser radar, wherein the second optimization is an optimization of the rotation matrix in the first external parameters between the camera and the target laser radar.
In some embodiments, the performing first optimization on the initial external parameters between the camera and the target laser radar to obtain the first external parameters between the camera and the target laser radar comprises:
acquiring a plurality of camera poses and a plurality of radar poses during synchronous motion of the camera and the target laser radar, wherein the camera poses and the radar poses are all in the same coordinate system;
adjusting the initial external parameters by adopting the following second optimization function until the second optimization function meets a second preset optimization condition, to obtain the first external parameters between the camera and the target laser radar;
wherein the second optimization function is:
$$\min_{R,T}\ \sum_{i}\Big(\big\|R\cdot\mathrm{CamPos}_i+T-\mathrm{LidarPos}_i\big\|^{2}+\big\|R\cdot\mathrm{CamAngle}_i-\mathrm{LidarAngle}_i\big\|^{2}\Big)$$

wherein $\mathrm{CamPos}_i=(x_i,y_i,z_i)$ and $\mathrm{CamAngle}_i=(\alpha_i,\beta_i,\gamma_i)$ are the camera pose, and $\mathrm{LidarPos}_i=(x_i,y_i,z_i)$ and $\mathrm{LidarAngle}_i=(\alpha_i,\beta_i,\gamma_i)$ are the pose of the target laser radar.
In some embodiments, the performing second optimization on the first external parameters between the camera and the target laser radar to obtain the external parameters between the camera and the target laser radar includes:
acquiring a point cloud boundary of a target object from the first point cloud data corresponding to the target laser radar;
acquiring a pixel boundary of the target object from the image data acquired by the camera;
adjusting the first external parameters between the camera and the target laser radar until a boundary alignment function meets a third preset optimization condition, to obtain the external parameters between the camera and the target laser radar;
wherein the boundary alignment function is:
$$\min_{R,T}\ \sum_{i}\Big\|\Big(I-\hat{d}_i^{C}\big(\hat{d}_i^{C}\big)^{\top}\Big)\Big(\frac{1}{Z_0}K\big(R\,p_i^{L}+T\big)-q_i^{C}\Big)\Big\|^{2}$$

wherein i is the label of a point on the boundary, I is the identity matrix, $\hat{d}_i^{C}$ is the direction of the pixel boundary (the superscript C denotes a camera variable and the superscript L a target laser radar variable), K is the internal parameter matrix of the camera, (R, T) is the external parameters to be optimized, where R is the rotation matrix and T is the translation vector, $p_i^{L}$ is a point on the point cloud boundary, $q_i^{C}$ is the coordinate of a vertex on the pixel boundary, and $Z_0$ is a coefficient.
In some embodiments, the extracting first point cloud data belonging to three surfaces of the structured object from the point cloud data comprises:
performing semantic segmentation on the point cloud data by adopting a pre-trained deep learning network model to obtain a labeling result of each point in the point cloud data;
and separating the first point cloud data according to the marking result.
In some embodiments, the generating, from the image data, pseudo point cloud data of each calibration plate in a camera coordinate system includes:
and acquiring the coordinate position of the center of each calibration plate under a camera coordinate system according to the image data so as to enable each coordinate position to form the pseudo point cloud data.
In order to solve the above technical problem, in a second aspect, an embodiment of the present invention provides an electronic device, including:
a camera and at least two lidar;
at least one processor in communication with the camera and the at least two lidar respectively;
a memory communicatively coupled to the at least one processor, wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the first aspect as described above.
In order to solve the above technical problem, in a third aspect, an embodiment of the present invention provides a non-transitory computer-readable storage medium storing computer-executable instructions for causing an electronic device to perform the method according to the first aspect.
The embodiment of the invention has the following beneficial effects: different from the prior art, in the joint calibration method for a camera and multiple laser radars provided by the embodiment of the invention, the camera and at least two laser radars are fixed on the electronic device, a structured object with three surfaces is arranged within the shooting range of the camera and the scanning ranges of the at least two laser radars, and calibration plates are respectively arranged on the three surfaces of the structured object. In this environment, the point cloud data respectively collected by each laser radar and the image data collected by the camera are acquired, and for each point cloud data, first point cloud data belonging to the three surfaces of the structured object are extracted. Then, pseudo point cloud data of each calibration plate in the camera coordinate system are generated according to the image data, external parameters between any two laser radars are determined according to the first point cloud data corresponding to each laser radar, and external parameters between the camera and a target laser radar are determined according to the pseudo point cloud data and the first point cloud data corresponding to the target laser radar. Based on the known geometric relationship of the three surfaces of the structured object, the point cloud corresponding to the calibration plate does not need to be manually extracted, and no prior information, such as the height of the calibration plate above the ground, is needed, so the position of the calibration plate in the point cloud data can be located automatically and accurately, and the first point cloud data used for determining the external parameters are accurate. Therefore, the method realizes integrated, automatic calibration between the camera and multiple laser radars with high external parameter calibration accuracy.
Drawings
One or more embodiments are illustrated by way of example in the accompanying drawings, in which like reference numerals refer to similar elements; the figures are not to scale unless otherwise specified.
FIG. 1 is a schematic diagram of a structured object for use in a combined calibration method for a camera and a multi-lidar according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of an electronic device for a camera and multi-lidar joint calibration method according to an embodiment of the present invention;
fig. 3 is a block diagram of an electronic device according to an embodiment of the present invention;
FIG. 4 is a schematic flowchart of a joint calibration method for a camera and a multi-lidar according to an embodiment of the present invention;
FIG. 5 is a schematic flow chart illustrating a sub-process of step S20 in the method of FIG. 4;
FIG. 6 is a schematic flow chart illustrating a sub-process of step S40 in the method of FIG. 4;
FIG. 7 is a schematic flow chart illustrating a sub-process of step S42 in the method of FIG. 6;
FIG. 8 is a schematic flow chart illustrating a sub-process of step S50 in the method of FIG. 4;
FIG. 9 is a schematic view of a sub-flow chart of step S53 in the method shown in FIG. 8;
FIG. 10(a) is a schematic diagram of a point cloud boundary and a pixel boundary of a target object according to an embodiment of the present invention;
fig. 10(b) is a schematic diagram of the point cloud boundary and the pixel boundary of the target object shown in fig. 10(a) after being aligned by the boundary.
Detailed Description
The present invention will be described in detail with reference to specific examples. The following examples will assist those skilled in the art in further understanding the invention, but are not intended to limit the invention in any way. It should be noted that variations and modifications can be made by persons skilled in the art without departing from the spirit of the invention, all of which fall within the scope of the present invention.
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
It should be noted that, where there is no conflict, the various features of the embodiments of the invention may be combined with each other within the protection scope of the present application. Additionally, although functional blocks are divided in the apparatus schematics and logical sequences are shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from the block division in the apparatus or the sequence in the flowcharts. Further, the terms "first," "second," "third," and the like used herein do not limit the data or the execution order, but merely distinguish identical or similar items having substantially the same function and effect.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
Please refer to fig. 1, a schematic diagram of a structured object applied in an embodiment of the combined calibration method for a camera and multiple laser radars of the present invention, and to fig. 2, a schematic structural diagram of an electronic device applied in such an embodiment. The application system includes an electronic device 10 and a structured object 20, where the electronic device 10 includes a camera 11 and at least two laser radars 12.
Specifically, the camera 11 and the at least two laser radars 12 are fixed on the body of the electronic device 10, the laser radar 12 in the embodiment of the present invention may be a single line laser radar, the camera 11 may be a monocular camera, and the electronic device 10 may be an automobile or a robot. Before the laser radar 12 scans and the camera 11 captures images, a structured object 20 is arranged in advance in the image capture range of the camera 11 and in the scanning range of the respective laser radar 12.
The structured object 20 has three surfaces, and calibration plates are provided on the three surfaces respectively. It will be appreciated that the geometric relationships of the three surfaces of the structured object are known, i.e., no measurement is required to obtain them. In some embodiments, the structured object 20 can be a wall corner; it is understood that the three surfaces of a wall corner are perpendicular to each other. In other embodiments, the structured object 20 may also be constructed manually.
The calibration board may be an AprilTag calibration board, an ArUco (Augmented Reality University of Cordoba) tag calibration board, a black-and-white checkerboard calibration board, or the like, which is not limited in the embodiment of the present invention. In the embodiment of the present invention, an AprilTag calibration board is taken as an example for explanation.
Next, on the basis of the above-mentioned fig. 2, another embodiment of the present invention provides a hardware structure diagram of an electronic device, wherein the electronic device 10 may be any type of electronic device with computing capability, such as an autonomous automobile or a robot.
Specifically, as shown in fig. 3, the electronic device 10 further includes at least one processor 13 and a memory 14 (a bus connection, one processor, and two laser radars are taken as an example in fig. 3) which are communicatively connected in addition to the camera 11 and the at least two laser radars 12, and the processor 13 is communicatively connected to the camera 11 and each laser radar 12, respectively.
The communication connection may be a wired connection, such as fiber optic cables or controller area network (CAN) buses, or a wireless communication connection, such as a WIFI connection, Bluetooth connection, 4G wireless communication connection, or 5G wireless communication connection.
The processor 13 is configured to provide computing and control capabilities to control the electronic device 10 to perform corresponding tasks, for example, control the electronic device 10 to perform any one of the joint calibration methods for a camera and a multi-lidar provided in the following embodiments of the present invention.
It is understood that the processor 13 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
The memory 14, which is a non-transitory computer readable storage medium, may be used to store non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules corresponding to the joint calibration method for a camera and a multi-lidar in an embodiment of the present invention. The processor 13 may implement the joint calibration method for the camera and the multi-lidar in any of the method embodiments described below by running non-transitory software programs, instructions, and modules stored in the memory 14. In particular, the memory 14 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 14 may also include memory located remotely from the processor 13, which may be connected to the processor through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
In the following, a detailed description is given of a combined calibration method for a camera and a multi-lidar according to an embodiment of the present invention, referring to fig. 4, the method includes, but is not limited to, the following steps:
s10: and acquiring point cloud data respectively acquired by the at least two laser radars and acquiring image data acquired by the camera.
S20: for each point cloud data of the point cloud data, first point cloud data belonging to three surfaces of the structured object are extracted from the point cloud data.
S30: and generating pseudo point cloud data of each calibration plate in a camera coordinate system according to the image data.
S40: and determining external parameters between any two laser radars in each laser radar according to the first point cloud data corresponding to each laser radar.
S50: and determining external parameters between the camera and a target laser radar according to the pseudo-point cloud data and first point cloud data corresponding to the target laser radar, wherein the target laser radar is any one of the at least two laser radars.
Each laser radar illuminates the target (the three surfaces of the structured object) with laser light and can acquire point cloud data reflecting the spatial coordinates of each spot where the laser hits the surface of the structured object. From the point cloud data, the topographical features of the surface of the structured object and the distance between the laser radar and the structured object can be obtained; each laser radar corresponds to one set of point cloud data. Likewise, the camera shoots the structured object, so image data about the structured object, including for example the wall corner and the calibration plates, can be collected.
It is understood that the point cloud data includes point clouds corresponding to other background objects in addition to the corresponding point cloud of the structured object, and in order to remove the point clouds corresponding to other background objects, for each point cloud data in the point cloud data, the first point cloud data belonging to the three surfaces of the structured object is extracted from the point cloud data. Therefore, the first point cloud data can accurately reflect the three surfaces of the structured object and is not interfered by other background objects. For example, the first point cloud data belonging to the three surfaces of the structured object may be extracted according to the features of the point cloud data and the appearance features of the structured object.
Based on the known geometric relationship of the three surfaces of the structured object, the point cloud corresponding to the calibration plate does not need to be manually extracted, and no prior information, such as the height of the calibration plate above the ground, is needed, so the position of the calibration plate in the point cloud data can be located automatically and accurately, and the first point cloud data used for determining the external parameters are accurate.
In some embodiments, referring to fig. 5, the step S20 specifically includes:
s21: and performing semantic segmentation on the point cloud data by adopting a pre-trained deep learning network model to obtain a labeling result of each point in the point cloud data.
S22: and separating the first point cloud data according to the marking result.
It can be understood that the deep learning network model can be a RandLA-Net deep learning network; the RandLA-Net network is trained on a training set to obtain a trained model that can be used to identify the surfaces of the structured object. The training set is a plurality of point cloud data prepared in advance, and each point in each point cloud data of the training set is labeled with a corresponding label; for example, when the structured object is a wall corner, the labels include "wall corner point cloud" and "non-wall-corner point cloud".
In this way, the trained deep learning network model has the capability of distinguishing the point cloud corresponding to the structured object. For the point cloud data A under each laser radar, the trained model is adopted to perform semantic segmentation on the point cloud data A to obtain a labeling result for each point, where the labeling result records whether the point lies on the structured object, for example, "wall corner point cloud" or "non-wall-corner point cloud". Then, according to the labeling results, the points labeled "wall corner point cloud" are extracted, and the first point cloud data can be separated.
In this embodiment, semantic segmentation is performed on the point cloud data with a pre-trained deep learning network model to obtain the labeling results, the first point cloud data can be accurately separated according to the labeling results, and the input point cloud data can be processed end to end, i.e., the point cloud reflecting the surfaces of the structured object is output directly.
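For illustration, a minimal sketch of the separation in step S22, assuming the segmentation network outputs one integer label per point (the label encoding here is an assumption):

```python
import numpy as np

def separate_first_point_cloud(points, labels, corner_label=1):
    """Separate the first point cloud data from a lidar scan by label.

    points: (N, 3) array of lidar points.
    labels: (N,) per-point labels from the trained segmentation model,
            e.g. 1 for "wall corner point cloud" and 0 for "non-wall-corner
            point cloud" (this encoding is assumed for illustration).
    """
    mask = labels == corner_label
    return points[mask]  # points lying on the three surfaces of the corner
```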
In step S30, pseudo point cloud data of each calibration plate in the camera coordinate system are generated from the image data. It can be understood that the camera acquires an image in the camera coordinate system, and the image includes the structured object and each calibration plate on it, so each pixel point can be converted into the three-dimensional space with the camera as the coordinate origin according to the pixel points in the camera coordinate system and the image data, generating three-dimensional coordinate data of each calibration plate in the camera coordinate system, that is, the pseudo point cloud data. Specifically, the pixel coordinates of each calibration plate can be extracted from the image data by an existing analysis method such as the gray-level centroid method, and the pixel coordinates of each calibration plate can be converted into camera coordinates through projective transformation to obtain the three-dimensional coordinates of each calibration plate in the camera coordinate system, thereby generating the pseudo point cloud data.
In some embodiments, the step S30 specifically includes:
s31: and acquiring the coordinate position of the center of each calibration plate under a camera coordinate system according to the image data so as to enable each coordinate position to form the pseudo point cloud data.
In this embodiment, to simplify the calculation, instead of performing coordinate conversion on every pixel point in the image data, only the center of each calibration plate is converted from pixel coordinates to three-dimensional coordinates in the camera coordinate system; that is, the coordinate position of the center of each calibration plate in the camera coordinate system (three-dimensional coordinate data with the camera as the coordinate origin) is obtained, and the pseudo point cloud data is thus generated.
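As a sketch of this step, assuming AprilTag boards, the pupil_apriltags detector, and OpenCV's PnP solver (the tag size, intrinsics, and corner ordering are assumptions to verify against the actual setup):

```python
import cv2
import numpy as np
from pupil_apriltags import Detector  # assumed detector package

def board_centers_in_camera_frame(gray_image, K, dist_coeffs, tag_size=0.20):
    """Estimate the 3D position (camera frame) of each AprilTag board center.

    tag_size is the physical tag side length in meters; K and dist_coeffs are
    the camera intrinsics. All three are placeholders for the real setup.
    """
    half = tag_size / 2.0
    # Tag corners in the tag's own frame; the ordering must match the
    # detector's corner convention (assumed here, verify in practice).
    obj_pts = np.array([[-half,  half, 0.0], [ half,  half, 0.0],
                        [ half, -half, 0.0], [-half, -half, 0.0]])
    detector = Detector(families="tag36h11")
    centers = []
    for det in detector.detect(gray_image):
        img_pts = det.corners.astype(np.float64)
        ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, dist_coeffs)
        if ok:
            centers.append(tvec.ravel())  # board center in camera coordinates
    return np.asarray(centers)  # one point per board: the pseudo point cloud
```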
In step S40, an external parameter between any two of the laser radars is determined based on the first point cloud data corresponding to each of the laser radars. For example, for lidar 1# and lidar 2#, lidar 1# corresponds to first point cloud data 1# and lidar 2# corresponds to first point cloud data 2#, because first point cloud data 1# and first point cloud data 2# both reflect the spatial coordinates of the surface of the structured object, the difference is only that first point cloud data 1# is the spatial coordinates in the lidar 1# coordinate system, and first point cloud data 2# is the spatial coordinates in the lidar 2# coordinate system, so that the rotation matrix and the translation vector between first point cloud data 1# and first point cloud data 2# can be determined in combination with the characteristics of the structured object, and the external parameters (the rotation matrix and the translation vector) between lidar 1# and lidar 2# can be obtained. By analogy, external parameters between any two laser radars in each laser radar can be determined.
In step S50, in order to determine an external reference between the camera and each of the laser radars, for any one of the target laser radars 3#, the external reference between the camera and the target laser radar is determined based on the pseudo point cloud data and the first point cloud data 3# under the target laser radar 3 #. Specifically, it can be understood that the pseudo-point cloud data and the first point cloud data 3# both reflect the spatial coordinates of the surface of the structured object, the pseudo-point cloud data is the spatial coordinates in the camera coordinate system, and the first point cloud data 3# is the spatial coordinates in the target lidar coordinate system, so that the rotation matrix and the translation vector between the pseudo-point cloud data and the first point cloud data 3# can be determined by combining the characteristics of the structured object, and the external reference between the camera and the target lidar can be obtained. By analogy, the external parameters between the camera and each lidar can be determined.
In this embodiment, the point cloud data respectively collected by each laser radar and the image data collected by the camera are acquired, and for each point cloud data, first point cloud data belonging to the three surfaces of the structured object are extracted. Then, pseudo point cloud data of each calibration plate in the camera coordinate system are generated according to the image data, external parameters between any two laser radars are determined according to the first point cloud data corresponding to each laser radar, and external parameters between the camera and the target laser radar are determined according to the pseudo point cloud data and the first point cloud data corresponding to the target laser radar. Based on the known geometric relationship of the three surfaces of the structured object, the point cloud corresponding to the calibration plate does not need to be manually extracted, and no prior information, such as the height of the calibration plate above the ground, is needed, so the position of the calibration plate in the point cloud data can be located automatically and accurately, and the first point cloud data used for determining the external parameters are accurate. Therefore, the method realizes integrated, automatic calibration between the camera and multiple laser radars with high external parameter calibration accuracy.
In some embodiments, referring to fig. 6, the step S40 specifically includes:
s41: and extracting a plane group corresponding to each laser radar from the first point cloud data respectively, wherein the plane group comprises plane point clouds reflecting three surfaces of the structured object.
S42: and determining external parameters between any two laser radars in the laser radars according to the plane group corresponding to each laser radar.
It is understood that the first point cloud data includes the point clouds on the three surfaces of the structured object. In order to determine the correspondence between the first point cloud data, a plane group corresponding to each laser radar is extracted from the first point cloud data, where the plane group includes plane point clouds reflecting the three surfaces of the structured object. For example, the plane group corresponding to laser radar i# includes three planes (αi, βi, πi), and each plane may be represented by a plane equation ax + by + cz + d = 0, where a, b, c, and d are the coefficients of the plane equation and (x, y, z) are the three-dimensional coordinates of a point in the point cloud. It will be appreciated that the plane equation of each plane in each plane group may be calculated by the RANSAC algorithm.
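For illustration, a minimal sketch of extracting a plane group with RANSAC, using Open3D's plane segmentation (API per recent Open3D versions; the thresholds are illustrative values):

```python
import numpy as np
import open3d as o3d

def extract_plane_group(first_points, dist_thresh=0.01):
    """Fit the three surface planes of the structured object by RANSAC.

    first_points: (N, 3) first point cloud data of one lidar.
    Returns three plane models [a, b, c, d] satisfying ax + by + cz + d = 0.
    """
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(first_points)
    planes = []
    for _ in range(3):
        model, inliers = pcd.segment_plane(distance_threshold=dist_thresh,
                                           ransac_n=3, num_iterations=1000)
        planes.append(np.asarray(model))
        pcd = pcd.select_by_index(inliers, invert=True)  # peel off this plane
    return planes
```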
Then, the external parameters between any two laser radars are determined according to the plane group corresponding to each laser radar. For example, for any two laser radars 4# and 5#, the plane group corresponding to laser radar 4# is (α4, β4, π4) and the plane group corresponding to laser radar 5# is (α5, β5, π5), so the external parameters between laser radar 4# and laser radar 5# can be determined according to the conversion parameters between (α4, β4, π4) and (α5, β5, π5), for example, the conversion parameters between π4 and π5.
In this embodiment, the first point cloud data is divided into a plane group including three planes according to the surface features of the structured object, and the external parameters between any two laser radars can be determined from the plane-to-plane conversion relationship between the plane groups; the calculation is simple and the accuracy is high.
In order to make the external parameters between any two laser radars more precise, in some embodiments, referring to fig. 7, the step S42 specifically includes:
s421: and determining the plane corresponding relation among the plane groups according to the plane groups of the laser radars.
S422: and acquiring initial external parameters between any two laser radars according to the plane corresponding relation between the plane groups by adopting a preset rotation matrix algorithm.
S423: and carrying out nonlinear optimization on each initial external parameter to obtain the external parameter between any two laser radars.
For example, for the plane group (α4, β4, π4) and the plane group (α5, β5, π5) described above, if the transformation parameters are too large, there is a risk of plane correspondence errors, i.e., π4 and π5 may not correspond. Thus, it is first necessary to determine the correspondence of the planes between the plane groups. Specifically, for each plane group, the coordinate origin $(o_x, o_y, o_z)$ is assumed to lie above all three planes, i.e., $a\,o_x + b\,o_y + c\,o_z + d \geq 0$, so that the normal vector direction of each of the three planes is uniquely determined; then, according to the right-hand rule, a unique plane correspondence can be obtained, for example, α4 corresponds to α5, β4 to β5, and π4 to π5.
Then, a preset rotation matrix algorithm is adopted to acquire the initial external parameters between any two laser radars according to the plane correspondence between the plane groups. The preset rotation matrix algorithm may be the Kabsch algorithm, that is, the Kabsch algorithm is used to calculate the corresponding rotation matrix and translation vector, i.e., the initial external parameters between laser radar 4# and laser radar 5#, from the plane equations of corresponding planes, for example, the plane equations of the corresponding planes π4 and π5. It is understood that the specific calculation process of the Kabsch algorithm is prior art and is not described in detail herein.
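For illustration, a minimal sketch of this closed-form step, assuming three corresponding planes with unit normals oriented consistently (e.g., with the origin above each plane, as described above): the rotation follows from the normals via the Kabsch/SVD construction, and the translation from the intercept relation n_b . T = d_a - d_b implied by x_b = R x_a + T.

```python
import numpy as np

def initial_extrinsic_from_planes(planes_a, planes_b):
    """Closed-form initial extrinsic between lidar A and lidar B.

    planes_a, planes_b: three corresponding plane models [a, b, c, d] per
    lidar, with unit normals oriented consistently in both frames.
    """
    Na = np.array([p[:3] for p in planes_a])  # rows: normals in frame A
    Nb = np.array([p[:3] for p in planes_b])  # rows: normals in frame B
    da = np.array([p[3] for p in planes_a])
    db = np.array([p[3] for p in planes_b])
    # Kabsch: rotation that best maps the A-normals onto the B-normals.
    H = Na.T @ Nb
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    # Translation from the intercepts: for each plane, n_b . T = d_a - d_b.
    T = np.linalg.solve(Nb, da - db)
    return R, T
```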
After each initial external parameter is obtained, nonlinear optimization is performed on each initial external parameter to obtain the external parameters between any two laser radars. Specifically, an existing nonlinear optimization algorithm may be used, for example, gradient descent, Newton's method, or the conjugate gradient method, to minimize the distance between the two corresponding planes after conversion. For example, π4 is converted into π4' through the corresponding initial external parameters; π4' and π5 should theoretically coincide, but in practice errors cause misalignment, so the initial external parameters can be continuously adjusted by gradient descent until the deviation between π4' and π5 reaches a preset condition, for example a minimum distance, at which point the adjusted external parameters are taken as the external parameters between laser radar 4# and laser radar 5#.
In this embodiment, determining the correspondence of the planes between the plane groups first yields an accurate plane correspondence, which helps improve the accuracy of the subsequent initial external parameters; then, the preset rotation matrix algorithm acquires the initial external parameters between any two laser radars according to this correspondence, and nonlinear optimization of the initial external parameters yields high-accuracy external parameters between any two laser radars.
In some embodiments, the step S423 specifically includes:
for each initial external parameter, adjusting the initial external parameter until a first optimization function meets a first preset optimization condition, and obtaining the external parameter between any two laser radars;
wherein the first optimization function is:
$$\min_{R,T}\ \sum_{i=1}^{3}\ \sum_{p\in P_i} d\big(Rp+T,\ \hat{\pi}_i\big)$$

wherein

$$d\big(Rp+T,\ \hat{\pi}_i\big)=\big|\hat{n}_i^{\top}(Rp+T)+\hat{d}_i\big|$$

wherein $\pi_i$ and $\hat{\pi}_i$ are corresponding planes in the two plane groups, $P_i$ is the point cloud on plane $\pi_i$, $\hat{n}_i$ is the normal vector of plane $\hat{\pi}_i$, $\hat{d}_i$ is the intercept of plane $\hat{\pi}_i$, and (R, T) is the initial external parameters between any two of the laser radars.
In this embodiment, by setting the first optimization function, the initial external parameters (R, T) can be continuously adjusted, i.e., iteratively updated, by gradient descent or the Levenberg-Marquardt (LM) method, so that the external parameters between the two laser radars are obtained when the first optimization function satisfies the first preset optimization condition. The first preset optimization condition may be that the value of the first optimization function is smaller than a preset value, or that the number of iterative updates reaches a preset number.
wherein $d\big(Rp+T,\ \hat{\pi}_i\big)$ denotes the distance from plane $\pi_i$, after conversion by (R, T), to plane $\hat{\pi}_i$. It will be appreciated that the first optimization function makes the distance between the two corresponding planes as small as possible, so the accuracy of the optimized external parameters can be continuously improved.
In this embodiment, the initial external parameters are continuously and iteratively adjusted through the distance-minimizing first optimization function, and by constraining the first optimization function with the first preset optimization condition, external parameters with high accuracy can be obtained.
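As an illustration of this refinement, a sketch using scipy's least-squares solver with the point-to-plane residual described above (the rotation-vector parameterization and solver choice are assumptions):

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def refine_lidar_extrinsic(R0, T0, plane_points_a, planes_b):
    """Nonlinear refinement of the initial lidar-lidar extrinsic.

    plane_points_a: three (Ni, 3) arrays, the points on each plane of lidar A.
    planes_b: the three corresponding plane models [a, b, c, d] of lidar B.
    Minimizes the point-to-plane distances |n_b . (R p + T) + d_b|.
    """
    x0 = np.hstack([Rotation.from_matrix(R0).as_rotvec(), T0])

    def residuals(x):
        R = Rotation.from_rotvec(x[:3]).as_matrix()
        T = x[3:]
        res = []
        for pts, plane in zip(plane_points_a, planes_b):
            n, d = np.asarray(plane[:3]), plane[3]
            res.append((pts @ R.T + T) @ n + d)  # signed plane distances
        return np.concatenate(res)

    sol = least_squares(residuals, x0)  # local least-squares refinement
    return Rotation.from_rotvec(sol.x[:3]).as_matrix(), sol.x[3:]
```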
In some embodiments, referring to fig. 8, the step S50 specifically includes:
s51: and extracting a target plane group from the first point cloud data corresponding to the target laser radar, and acquiring a camera plane group from the pseudo-point cloud data.
S52: and acquiring initial external parameters between the camera and the target radar according to the camera plane group and the target plane group by adopting a preset rotation matrix algorithm.
S53: performing secondary step optimization on the initial external parameters between the camera and the target radar to obtain the external parameters between the camera and the target laser radar.
It is to be understood that the first point cloud data corresponding to the target laser radar includes the point clouds on the three surfaces of the structured object. In order to determine the plane correspondence between the target plane group and the camera plane group, the target plane group is extracted from the first point cloud data corresponding to the target laser radar, the target plane group including plane point clouds reflecting the three surfaces of the structured object. It will be appreciated that the plane equations of the planes in the target plane group may be calculated by the RANSAC algorithm.
The pseudo point cloud data is generated from the image data, and in the image data the information of each calibration plate directly indicates which surface of the structured object it belongs to, so the three planes can be obtained unambiguously. Therefore, the camera plane group, namely the three planes in the pseudo point cloud data, can be acquired directly from the pseudo point cloud data.
After the target plane group and the camera plane group are obtained, the initial external parameters between the camera and the target laser radar are determined according to the plane correspondence between the camera plane group and the target plane group by adopting a preset rotation matrix algorithm. The preset rotation matrix algorithm may be the Kabsch algorithm, that is, the Kabsch algorithm is used to calculate the corresponding rotation matrix and translation vector, i.e., the initial external parameters between the camera and the target laser radar, from the plane equations of corresponding planes.
Then, two-step optimization is performed on the initial external parameters between the camera and the target laser radar to obtain the external parameters between the camera and the target laser radar. The two-step optimization includes rotation matrix optimization and translation vector optimization. There is no required order between the two steps: the rotation matrix may be optimized first and the translation vector second, or vice versa. Based on this two-step optimization, the rotation matrix and the translation vector can each be optimized better, avoiding the problem that a single joint optimization leaves one of them insufficiently optimized and the external parameters less accurate.
In this embodiment, the pseudo point cloud is generated from the image data acquired by the camera, so the camera plane group in the pseudo point cloud and its correspondence with the target plane group can be determined directly; then, the preset rotation matrix algorithm acquires the initial external parameters between the target laser radar and the camera according to this correspondence, and the subsequent two-step optimization gives the external parameters between the camera and the target laser radar higher accuracy.
In some embodiments, referring to fig. 9, the step S53 specifically includes:
s531: and performing first optimization on the initialized external parameters between the camera and the target radar to obtain the first external parameters between the camera and the target laser radar, wherein the first optimization is the optimization of translation vectors in the initialized external parameters between the camera and the target radar.
S532: and performing second optimization on the first external parameter between the camera and the target radar to obtain the external parameter between the camera and the target laser radar, wherein the second optimization is the optimization on the first external parameter middle rotation torque matrix between the camera and the target laser radar.
In this embodiment, a first optimization is performed on the initial external parameters between the camera and the target laser radar, and the first external parameters are obtained through this first optimization, which is directed at the translation vector in the initial external parameters. That is, in the first optimization, the emphasis is on improving the accuracy of the translation vector.
Specifically, in some embodiments, the step S531 includes:
step S5311: and acquiring a plurality of camera postures and a plurality of radar postures in the synchronous motion process of the camera and the target laser radar, wherein the camera postures and the radar postures are all in the same coordinate system.
It will be appreciated that the camera and the target laser radar both collect data from the structured object, so they can share the same coordinate system, with the structured object as the coordinate origin. For example, when the structured object is a wall corner, the intersection lines of the three surfaces are orthogonal to each other, and the three intersection lines can be used as the three coordinate axes to form a wall-corner coordinate system shared by the camera and the target laser radar.
Since the camera and the target laser radar continuously acquire pseudo point cloud data and point cloud data of the wall corner, their motion trajectories can be obtained when they move synchronously. Specifically, the poses of the camera and the radar at multiple moments are recorded, yielding a plurality of camera poses and radar poses. Because the camera and the target laser radar share the wall-corner coordinate system, the camera poses and the radar poses are all in the same coordinate system, so the camera trajectory formed by the camera poses and the target laser radar trajectory formed by the radar poses should theoretically coincide after conversion by the initial external parameters. In practice, however, noise exists, and the accuracy of the sensors themselves introduces errors into the initial external parameters. For this reason, the initial external parameters are optimized in combination with the motion trajectories (the camera poses and radar poses), as shown in step S5312.
Step S5312: adjusting the initial external parameters by adopting the following second optimization function until the second optimization function meets a second preset optimization condition, and obtaining the first external parameters between the camera and the target laser radar;
wherein the second optimization function is:
$$\min_{R,T}\ \sum_{i}\Big(\big\|R\cdot\mathrm{CamPos}_i+T-\mathrm{LidarPos}_i\big\|^{2}+\big\|R\cdot\mathrm{CamAngle}_i-\mathrm{LidarAngle}_i\big\|^{2}\Big)$$

wherein $\mathrm{CamPos}_i=(x_i,y_i,z_i)$ and $\mathrm{CamAngle}_i=(\alpha_i,\beta_i,\gamma_i)$ are the camera pose, and $\mathrm{LidarPos}_i=(x_i,y_i,z_i)$ and $\mathrm{LidarAngle}_i=(\alpha_i,\beta_i,\gamma_i)$ are the pose of the target laser radar.
In this embodiment, by setting the second optimization function, the initial external parameters can be continuously adjusted, i.e., iteratively updated, by gradient descent or the Levenberg-Marquardt (LM) method, so that the first external parameters between the camera and the target laser radar are obtained when the second optimization function satisfies the second preset optimization condition. The second preset optimization condition may be that the value of the second optimization function is smaller than a preset value, or that the number of iterative updates reaches a preset number.
Here, $\mathrm{CamPos}_i=(x_i,y_i,z_i)$ is the three-dimensional coordinate of the camera, $\mathrm{CamAngle}_i=(\alpha_i,\beta_i,\gamma_i)$ is the angular orientation of the camera, $\mathrm{LidarPos}_i=(x_i,y_i,z_i)$ is the three-dimensional coordinate of the target laser radar, and $\mathrm{LidarAngle}_i=(\alpha_i,\beta_i,\gamma_i)$ is the angular orientation of the target laser radar. From the formula of the second optimization function, the second optimization function makes the camera pose, after conversion by (R, T), coincide with the radar pose as far as possible, where (R, T) is the initial external parameters or the adjusted initial external parameters. In addition, compared with optimizing the rotation matrix, the second optimization function focuses on optimizing the translation vector T; that is, in this embodiment the translation vector in the first external parameters is better optimized and therefore more accurate.
In this embodiment, based on the second optimization function, which makes the two trajectories coincide as far as possible, the initial external parameters between the camera and the target laser radar are continuously and iteratively adjusted, and the second optimization function is constrained by the second preset optimization condition, so that the first external parameters between the camera and the target laser radar are more accurate.
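For illustration, a sketch of this translation-focused first optimization under the stated assumptions (synchronized pose samples in the shared wall-corner coordinate system; only the position term of the second optimization function is shown, since it is what mainly constrains T):

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def first_optimization(R0, T0, cam_pos, lidar_pos):
    """Refine (R, T) so the converted camera trajectory matches the lidar's.

    cam_pos, lidar_pos: (M, 3) camera and target-lidar positions recorded at
    the same instants in the shared wall-corner coordinate system.
    """
    x0 = np.hstack([Rotation.from_matrix(R0).as_rotvec(), T0])

    def residuals(x):
        R = Rotation.from_rotvec(x[:3]).as_matrix()
        T = x[3:]
        # Camera positions converted by (R, T) should coincide with the
        # lidar positions; the residual is the remaining trajectory gap.
        return ((cam_pos @ R.T + T) - lidar_pos).ravel()

    sol = least_squares(residuals, x0)
    return Rotation.from_rotvec(sol.x[:3]).as_matrix(), sol.x[3:]
```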
In step S532, after the first optimization yields the first external parameters, the second optimization is performed on the first external parameters between the camera and the target laser radar to obtain the external parameters between the camera and the target laser radar, wherein the second optimization is an optimization of the rotation matrix in the first external parameters. That is, in the second optimization, the emphasis is on improving the accuracy of the rotation matrix.
In some embodiments, the step S532 specifically includes:
s5321: and acquiring a point cloud boundary of the target object from the first point cloud data corresponding to the target laser radar.
S5322: and acquiring the pixel boundary of the target object from the image data acquired by the camera.
S5323: adjusting a first external parameter between the camera and the target laser radar until the external parameter between the camera and the target laser radar is obtained when a boundary alignment function meets a third preset optimization condition;
wherein the boundary alignment function is:
$$\min_{R,T}\ \sum_{i}\ \left\| \big( I - \hat{n}_i^{C} (\hat{n}_i^{C})^{T} \big) \left( \frac{1}{Z_0} K \big( R P_i^{L} + T \big) - p_i^{C} \right) \right\|^2$$

wherein $i$ is the label of a point on the boundary, $I$ is the identity matrix, $\hat{n}_i^{C}$ is the direction of the pixel boundary (the superscript $C$ denotes a camera variable and $L$ a target-laser-radar variable), $K$ is the internal reference of the camera, $(R, T)$ is the external parameter to be optimized, where $R$ is the rotation matrix and $T$ is the translation vector, $P_i^{L}$ are the points on the point cloud boundary, $p_i^{C}$ are the coordinates of the vertices of the pixel boundary, and $Z_0$ is the depth coefficient of the projection.
The target object may be the calibration plate, or any object within the scanning range of the target laser radar and the shooting range of the camera; for example, the target object may also be a box. The following description schematically takes the calibration plate as the target object.
It can be understood that the laser lines of the target lidar fall on the calibration plate, so the point cloud boundary of the calibration plate (the target object) can be obtained from the first point cloud data corresponding to the target lidar; the point cloud boundary consists of the points that fall on the boundary of the calibration plate.
Because the image data contains the calibration plate, the pixel boundary of the calibration plate can be acquired from the image data.
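As an illustration only, since the patent does not specify how either boundary is extracted, the following is a hedged sketch of one possible extraction: Canny edges plus the largest contour stand in for the pixel-boundary detector, and a convex hull in the plate plane stands in for the point-cloud-boundary detector; all names are hypothetical.

```python
# A hedged sketch of extracting the two boundaries; the patent itself does
# not disclose these particular detector choices.
import cv2
import numpy as np
from scipy.spatial import ConvexHull


def pixel_boundary(image_bgr):
    """Largest external contour, taken as the calibration-plate pixel boundary."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea).reshape(-1, 2)


def point_cloud_boundary(plate_points):
    """Boundary points of the plate cloud via a convex hull in the plate plane."""
    centered = plate_points - plate_points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    uv = centered @ vt[:2].T            # project onto the two in-plane axes
    hull = ConvexHull(uv)
    return plate_points[hull.vertices]  # points lying on the plate's outline
```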
As shown in fig. 10(a), the point cloud boundary and the pixel boundary of the target object deviate from each other, which indicates a matching error between the camera and the target lidar. To eliminate this matching error, in this embodiment the first external parameter between the camera and the target lidar is adjusted until the boundary alignment function satisfies the third preset optimization condition, thereby obtaining the external parameter between the camera and the target lidar. The third preset optimization condition may be that the value of the boundary alignment function is smaller than a preset value, or that the number of iterative updates reaches a preset number.
The boundary alignment function makes the points on the point cloud boundary align with the pixel boundary as closely as possible after conversion by the external parameter $(R, T)$ to be optimized, where $(R, T)$ is the first external parameter or the adjusted first external parameter. As shown in fig. 10(b), after boundary alignment the point cloud boundary substantially coincides with the pixel boundary. It will be appreciated that boundary alignment mainly optimizes the rotation component of the first external parameter, making the external parameter between the camera and the lidar more accurate.
In this embodiment, based on the boundary alignment, the first external parameter between the camera and the target laser radar is continuously iteratively adjusted, and the boundary alignment function is constrained through a third preset optimization condition, so that the external parameter between the camera and the target laser radar is more accurate.
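Assuming the reconstructed form of the boundary alignment function above, the residual can be sketched as follows: each point-cloud boundary point is projected into the image with the intrinsics $K$ and the candidate extrinsics $(R, T)$, and only the component of the reprojection error perpendicular to the local pixel-edge direction is penalized. The point-to-vertex matching is assumed to be done elsewhere, and all names are illustrative.

```python
# A sketch of the boundary-alignment residual, assuming the reconstructed
# cost (I - n n^T)((1/Z0) K (R P^L + T) - p^C).
import numpy as np


def boundary_residuals(R, T, K, lidar_pts, pixel_pts, edge_dirs):
    """Perpendicular reprojection error of lidar boundary points.

    lidar_pts: (N, 3) points P^L on the point cloud boundary (lidar frame).
    pixel_pts: (N, 2) matched vertices p^C on the pixel boundary.
    edge_dirs: (N, 2) unit directions n^C of the pixel boundary at each vertex.
    """
    cam_pts = lidar_pts @ R.T + T        # R P^L + T: into the camera frame
    proj = cam_pts @ K.T                 # homogeneous pixel coordinates
    uv = proj[:, :2] / proj[:, 2:3]      # divide by the depth coefficient Z0
    err = uv - pixel_pts                 # raw reprojection error
    # (I - n n^T) err: drop the component along the edge and keep the
    # perpendicular distance, which rotation errors mostly affect.
    along = np.sum(err * edge_dirs, axis=1, keepdims=True) * edge_dirs
    return (err - along).ravel()
```

Minimizing these residuals over $(R, T)$, for example with the same least-squares routine as in the earlier sketch, corresponds to the second optimization and mainly tightens the rotation matrix.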
In summary, in the joint calibration method for a camera and multiple laser radars provided by the embodiments of the present invention, the camera and at least two laser radars are fixed on the electronic device, a structured object having three surfaces is arranged within the shooting range of the camera and the scanning ranges of the at least two laser radars, and calibration plates are respectively arranged on the three surfaces of the structured object. In this environment, the point cloud data respectively collected by each laser radar and the image data collected by the camera are obtained, and, for each point cloud data, the first point cloud data belonging to the three surfaces of the structured object is extracted. Then, pseudo point cloud data of each calibration plate in the camera coordinate system is generated from the image data, the external parameters between any two laser radars are determined from the first point cloud data corresponding to each laser radar, and the external parameters between the camera and the target laser radar are determined from the pseudo point cloud data and the first point cloud data corresponding to the target laser radar. Based on the known geometric relationship of the three surfaces of the structured object, there is no need to manually segment out the point cloud corresponding to the calibration plates, nor is prior information such as the height of the calibration plates above the ground required, so the positions of the calibration plates in the point cloud data can be located automatically and accurately, making the first point cloud data used to determine the external parameters accurate. Therefore, the method realizes integrated, automatic calibration between the camera and multiple laser radars with high external-parameter calibration precision.
Another embodiment of the present invention further provides a non-transitory computer-readable storage medium storing computer-executable instructions for causing an electronic device to perform the joint calibration method for a camera and multiple laser radars according to any of the above embodiments.
It should be noted that the above-described device embodiments are merely illustrative, where the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a general hardware platform, and certainly can also be implemented by hardware. It will be understood by those of ordinary skill in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware; the program can be stored in a computer-readable storage medium and, when executed, can include the processes of the method embodiments described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; within the idea of the invention, also technical features in the above embodiments or in different embodiments may be combined, steps may be implemented in any order, and there are many other variations of the different aspects of the invention as described above, which are not provided in detail for the sake of brevity; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (12)

1. A combined calibration method for a camera and multiple laser radars, wherein the camera and at least two laser radars are fixed on an electronic device, and a structured object with three surfaces is arranged in the camera shooting range of the camera and the scanning ranges of the at least two laser radars, and calibration plates are respectively arranged on the three surfaces of the structured object, the method comprises the following steps:
acquiring point cloud data respectively acquired by the at least two laser radars and acquiring image data acquired by the camera;
for each point cloud data in the point cloud data, extracting first point cloud data belonging to three surfaces of the structured object from the point cloud data;
generating pseudo point cloud data of each calibration plate in a camera coordinate system according to the image data;
determining external parameters between any two laser radars in each laser radar according to the first point cloud data corresponding to each laser radar;
and determining external parameters between the camera and a target laser radar according to the pseudo-point cloud data and first point cloud data corresponding to the target laser radar, wherein the target laser radar is any one of the at least two laser radars.
2. The method of claim 1, wherein determining the external parameters between any two of the lidar according to the first point cloud data corresponding to each of the lidar comprises:
extracting a plane group corresponding to each laser radar from each first point cloud data, wherein the plane group comprises plane point clouds reflecting three surfaces of the structured object;
and determining external parameters between any two laser radars in the laser radars according to the plane group corresponding to each laser radar.
3. The method of claim 2, wherein determining the external parameters between any two of the lidar in each of the lidar based on the plane groups to which each of the lidar corresponds comprises:
determining the plane corresponding relation between the plane groups according to the plane groups of the laser radars;
acquiring initial external parameters between any two laser radars according to the plane corresponding relation between each plane group by adopting a preset rotation matrix algorithm;
and carrying out nonlinear optimization on each initial external parameter to obtain the external parameter between any two laser radars.
4. The method of claim 3, wherein the performing nonlinear optimization on each initial external parameter to obtain the external parameter between any two laser radars comprises:
for each initial external parameter, adjusting the initial external parameter until a first optimization function meets a first preset optimization condition, and obtaining the external parameter between any two laser radars;
wherein the first optimization function is:
$$\min_{R,T}\ \sum_{i}\ \sum_{p \in \pi_i} \left( (\hat{n}_i')^{T} \big( R\,p + T - q_i' \big) \right)^2$$

wherein $\pi_i$ and $\pi_i'$ are corresponding planes in the two plane groups, $p$ is a point of the point cloud belonging to plane $\pi_i$, $\hat{n}_i'$ is the normal vector of plane $\pi_i'$, $q_i'$ is a point on plane $\pi_i'$, and $(R, T)$ is the initial external parameter between any two of the laser radars.
5. The method of any one of claims 1-4, wherein determining the external reference between the camera and the target lidar from the pseudo-point cloud data and the first point cloud data corresponding to the target lidar comprises:
extracting a target plane group from first point cloud data corresponding to the target laser radar, and acquiring a camera plane group from the pseudo-point cloud data;
acquiring initial external parameters between the camera and the target radar according to the camera plane group and the target plane group by adopting a preset rotation matrix algorithm;
performing two-step optimization on the initial external parameters between the camera and the target radar to obtain the external parameters between the camera and the target laser radar.
6. The method of claim 5, wherein the performing two-step optimization on the initial external parameters between the camera and the target radar to obtain the external parameters between the camera and the target laser radar comprises:
performing first optimization on the initial external parameters between the camera and the target radar to obtain a first external parameter between the camera and the target laser radar, wherein the first optimization is optimization of the translation vector in the initial external parameters between the camera and the target radar;
and performing second optimization on the first external parameter between the camera and the target radar to obtain the external parameters between the camera and the target laser radar, wherein the second optimization is optimization of the rotation matrix in the first external parameter between the camera and the target laser radar.
7. The method of claim 6, wherein the first optimizing the initialized external parameters between the camera and the target lidar to obtain first external parameters between the camera and the target lidar comprises:
acquiring a plurality of camera postures and a plurality of radar postures in the synchronous motion process of the camera and the target laser radar, wherein the camera postures and the radar postures are all in the same coordinate system;
adjusting the initial external parameters by adopting the following second optimization function until the second optimization function meets a second preset optimization condition, and obtaining the first external parameters between the camera and the target laser radar;
wherein the second optimization function is:
$$\min_{R,T}\ \sum_{i}\ \big\| R\cdot \mathrm{CamPose}_i + T - \mathrm{LidarPose}_i \big\|^2$$

wherein $\mathrm{CamPose}_i=(x_i,y_i,z_i)$ and $\mathrm{CamAngle}_i=(\alpha_i,\beta_i,\gamma_i)$ are the camera pose, and $\mathrm{LidarPose}_i=(x_i,y_i,z_i)$ and $\mathrm{LidarAngle}_i=(\alpha_i,\beta_i,\gamma_i)$ are the pose of the target laser radar.
8. The method of claim 6, wherein the second optimizing the first external reference between the camera and the target radar to obtain the external reference between the camera and the target lidar comprises:
acquiring a point cloud boundary of a target object from first point cloud data corresponding to the target laser radar;
acquiring a pixel boundary of the target object from image data acquired by the camera;
adjusting the first external parameter between the camera and the target laser radar until the boundary alignment function meets a third preset optimization condition, thereby obtaining the external parameter between the camera and the target laser radar;
wherein the boundary alignment function is:
$$\min_{R,T}\ \sum_{i}\ \left\| \big( I - \hat{n}_i^{C} (\hat{n}_i^{C})^{T} \big) \left( \frac{1}{Z_0} K \big( R P_i^{L} + T \big) - p_i^{C} \right) \right\|^2$$

wherein $i$ is the label of a point on the boundary, $I$ is the identity matrix, $\hat{n}_i^{C}$ is the direction of the pixel boundary (the superscript $C$ denotes a camera variable and $L$ a target-laser-radar variable), $K$ is the internal reference of the camera, $(R, T)$ is the external parameter to be optimized, where $R$ is the rotation matrix and $T$ is the translation vector, $P_i^{L}$ are the points on the point cloud boundary, $p_i^{C}$ are the coordinates of the vertices of the pixel boundary, and $Z_0$ is the depth coefficient of the projection.
9. The method of claim 1, wherein extracting first point cloud data belonging to three surfaces of the structured object from the point cloud data comprises:
performing semantic segmentation on the point cloud data by adopting a pre-trained deep learning network model to obtain a labeling result of each point in the point cloud data;
and separating the first point cloud data according to the marking result.
10. The method according to claim 1, wherein the generating pseudo point cloud data of each calibration plate in a camera coordinate system according to the image data comprises:
and acquiring the coordinate position of the center of each calibration plate in the camera coordinate system according to the image data, so that the coordinate positions form the pseudo point cloud data.
11. An electronic device, comprising:
a camera and at least two lidar;
at least one processor in communication with the camera and the at least two lidar respectively;
a memory communicatively coupled to the at least one processor, wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-10.
12. A non-transitory computer-readable storage medium having stored thereon computer-executable instructions for causing an electronic device to perform the method of any of claims 1-10.
CN202110586612.3A 2021-05-27 Combined calibration method for camera and multi-laser radar and electronic equipment Active CN113269840B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110586612.3A CN113269840B (en) 2021-05-27 Combined calibration method for camera and multi-laser radar and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110586612.3A CN113269840B (en) 2021-05-27 Combined calibration method for camera and multi-laser radar and electronic equipment

Publications (2)

Publication Number Publication Date
CN113269840A true CN113269840A (en) 2021-08-17
CN113269840B CN113269840B (en) 2024-07-09



Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108399643A (en) * 2018-03-15 2018-08-14 南京大学 A kind of outer ginseng calibration system between laser radar and camera and method
CN110031824A (en) * 2019-04-12 2019-07-19 杭州飞步科技有限公司 Laser radar combined calibrating method and device
CN110333503A (en) * 2019-05-29 2019-10-15 菜鸟智能物流控股有限公司 Laser radar calibration method and device and electronic equipment
CN112184824A (en) * 2019-07-05 2021-01-05 杭州海康机器人技术有限公司 Camera external parameter calibration method and device
CN110596683A (en) * 2019-10-25 2019-12-20 中山大学 Multi-group laser radar external parameter calibration system and method thereof
CN111127563A (en) * 2019-12-18 2020-05-08 北京万集科技股份有限公司 Combined calibration method and device, electronic equipment and storage medium
CN111325801A (en) * 2020-01-23 2020-06-23 天津大学 Combined calibration method for laser radar and camera
CN112654886A (en) * 2020-05-27 2021-04-13 华为技术有限公司 External parameter calibration method, device, equipment and storage medium
CN111815716A (en) * 2020-07-13 2020-10-23 北京爱笔科技有限公司 Parameter calibration method and related device
CN111965624A (en) * 2020-08-06 2020-11-20 北京百度网讯科技有限公司 Calibration method, device and equipment for laser radar and camera and readable storage medium
CN112379352A (en) * 2020-11-04 2021-02-19 广州文远知行科技有限公司 Laser radar calibration method, device, equipment and storage medium
CN112462350A (en) * 2020-12-10 2021-03-09 苏州一径科技有限公司 Radar calibration method and device, electronic equipment and storage medium
CN112233182A (en) * 2020-12-15 2021-01-15 北京云测网络科技有限公司 Method and device for marking point cloud data of multiple laser radars

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113569812A (en) * 2021-08-31 2021-10-29 东软睿驰汽车技术(沈阳)有限公司 Unknown obstacle identification method and device and electronic equipment
CN113884104A (en) * 2021-09-27 2022-01-04 苏州挚途科技有限公司 Multi-sensor combined calibration method and device and electronic equipment
CN113884104B (en) * 2021-09-27 2024-02-02 苏州挚途科技有限公司 Multi-sensor joint calibration method and device and electronic equipment
CN114152935A (en) * 2021-11-19 2022-03-08 苏州一径科技有限公司 Method, device and equipment for evaluating radar external parameter calibration precision
CN114152935B (en) * 2021-11-19 2023-02-03 苏州一径科技有限公司 Method, device and equipment for evaluating radar external parameter calibration precision
CN115100287A (en) * 2022-04-14 2022-09-23 美的集团(上海)有限公司 External reference calibration method and robot
CN115184909A (en) * 2022-07-11 2022-10-14 中国人民解放军国防科技大学 Vehicle-mounted multi-spectral laser radar calibration system and method based on target detection
CN115482294A (en) * 2022-09-19 2022-12-16 北京斯年智驾科技有限公司 External reference accurate calibration method and system for camera and laser radar
CN116577796A (en) * 2022-11-17 2023-08-11 昆易电子科技(上海)有限公司 Verification method and device for alignment parameters, storage medium and electronic equipment
CN116594028A (en) * 2022-11-17 2023-08-15 昆易电子科技(上海)有限公司 Verification method and device for alignment parameters, storage medium and electronic equipment
CN116594028B (en) * 2022-11-17 2024-02-06 昆易电子科技(上海)有限公司 Verification method and device for alignment parameters, storage medium and electronic equipment
CN116577796B (en) * 2022-11-17 2024-03-19 昆易电子科技(上海)有限公司 Verification method and device for alignment parameters, storage medium and electronic equipment
CN115840196A (en) * 2023-02-24 2023-03-24 新石器慧通(北京)科技有限公司 Laser radar inter-calibration method and device based on entity calibration
CN116740197A (en) * 2023-08-11 2023-09-12 之江实验室 External parameter calibration method and device, storage medium and electronic equipment
CN116740197B (en) * 2023-08-11 2023-11-21 之江实验室 External parameter calibration method and device, storage medium and electronic equipment
CN117274402A (en) * 2023-11-24 2023-12-22 魔视智能科技(武汉)有限公司 Calibration method and device for camera external parameters, computer equipment and storage medium
CN117274402B (en) * 2023-11-24 2024-04-19 魔视智能科技(武汉)有限公司 Calibration method and device for camera external parameters, computer equipment and storage medium

Similar Documents

Publication Publication Date Title
CN108717712B (en) Visual inertial navigation SLAM method based on ground plane hypothesis
CN109270534B (en) Intelligent vehicle laser sensor and camera online calibration method
CN110148185B (en) Method and device for determining coordinate system conversion parameters of imaging equipment and electronic equipment
CN107063228B (en) Target attitude calculation method based on binocular vision
KR102490521B1 (en) Automatic calibration through vector matching of the LiDAR coordinate system and the camera coordinate system
CN112700486B (en) Method and device for estimating depth of road surface lane line in image
Yan et al. Joint camera intrinsic and lidar-camera extrinsic calibration
CN113848931B (en) Agricultural machinery automatic driving obstacle recognition method, system, equipment and storage medium
Mi et al. A vision-based displacement measurement system for foundation pit
CN115685160A (en) Target-based laser radar and camera calibration method, system and electronic equipment
Zhu et al. Object detection and localization in 3D environment by fusing raw fisheye image and attitude data
JPH07103715A (en) Method and apparatus for recognizing three-dimensional position and attitude based on visual sense
CN117115252A (en) Bionic ornithopter space pose estimation method based on vision
CN111198563A (en) Terrain recognition method and system for dynamic motion of foot type robot
CN114792343B (en) Calibration method of image acquisition equipment, method and device for acquiring image data
CN113269840B (en) Combined calibration method for camera and multi-laser radar and electronic equipment
Deng et al. Automatic true orthophoto generation based on three-dimensional building model using multiview urban aerial images
CN113269840A (en) Combined calibration method for camera and multi-laser radar and electronic equipment
CN111862146A (en) Target object positioning method and device
CN114353779B (en) Method for rapidly updating robot local cost map by adopting point cloud projection
CN114879168A (en) Laser radar and IMU calibration method and system
CN112348874B (en) Method and device for determining structural parameter representation of lane line
CN113256726A (en) Online calibration and inspection method for sensing system of mobile device and mobile device
CN112308905B (en) Method and device for determining coordinates of plane marker
CN112348875B (en) Zxfoom sign rod sign mark rod parameter representation determination method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant