CN112330756B - Camera calibration method and device, intelligent vehicle and storage medium

Info

Publication number
CN112330756B
Authority
CN
China
Prior art keywords
camera
radar
rotation
sequence
translation
Legal status: Active
Application number
CN202110000597.XA
Other languages
Chinese (zh)
Other versions
CN112330756A (en)
Inventor
汪丹
许全优
徐伟健
范国泽
闵锐
莫耀凯
王劲
Current Assignee
Tianyi Transportation Technology Co.,Ltd.
Original Assignee
Ciic Technology Co ltd
Application filed by Ciic Technology Co ltd
Priority to CN202110000597.XA
Publication of CN112330756A
Application granted
Publication of CN112330756B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration


Abstract

The embodiment of the invention discloses a camera calibration method and device, an intelligent vehicle and a storage medium. The method comprises: obtaining an initial camera pose transformation sequence and an initial radar pose transformation sequence; performing time synchronization processing on the two sequences to obtain a camera pose transformation sequence and a radar pose transformation sequence; determining a first rotation external parameter according to a first association relation; determining a second rotation external parameter, a scale factor and a translation parameter according to the radar pose transformation sequence, the camera pose transformation sequence, the scale factor of the target camera, a second association relation and the first rotation external parameter; correcting the camera pose transformation sequence according to the scale factor to obtain a corrected camera pose transformation sequence; and recalculating according to the radar pose transformation sequence, the corrected camera pose transformation sequence, the first association relation and the second association relation to obtain a target rotation parameter and a target translation parameter. Camera calibration can therefore be achieved without using a calibration plate, and the calibration accuracy is improved.

Description

Camera calibration method and device, intelligent vehicle and storage medium
Technical Field
The invention relates to the technical field of unmanned driving, in particular to a camera calibration method and device, an intelligent vehicle and a storage medium.
Background
Unmanned driving technology generally comprises technologies such as environment perception, behavior decision, motion control and vehicle positioning. These technologies need to fuse the data information acquired by the vehicle-mounted camera and the radar of the unmanned vehicle, and in order to accurately analyze the data acquired by the camera, the camera generally needs to be calibrated to determine camera parameters such as the external parameters of the camera.
At present, when a camera is calibrated jointly with a radar, a calibration board is generally used: the laser points scanned onto the calibration board by the radar are matched with the corresponding pixel points in the images shot by the camera, a reprojection error equation is constructed, and the external parameter information of the camera is estimated from it. In this method, the laser point area corresponding to the calibration board needs to be manually selected, so the camera external parameters obtained by calibration are influenced by human factors.
Disclosure of Invention
The embodiment of the invention provides a camera calibration method and device, an intelligent vehicle and a storage medium, which can determine external reference information of a camera without a calibration plate, reduce the influence of human factors on a calibration result and are beneficial to improving the accuracy of the external reference of the camera.
The embodiment of the invention provides a camera calibration method, which comprises the following steps:
acquiring an initial camera pose transformation sequence of a target camera and an initial radar pose transformation sequence of a target radar, wherein the relative positions of the target camera and the target radar are unchanged, the initial camera pose transformation sequence comprises rotation transformation information and translation transformation information of the target camera in a camera coordinate system, and the initial radar pose transformation sequence comprises the rotation transformation information and the translation transformation information of the target radar in a radar coordinate system;
performing time synchronization processing on the initial camera pose transformation sequence and the initial radar pose transformation sequence to obtain a camera pose transformation sequence and a radar pose transformation sequence;
according to a first incidence relation which is satisfied between the camera pose transformation sequence and rotation transformation information in the radar pose transformation sequence and rotation parameters required by the conversion of a camera coordinate system into a radar coordinate system, determining a first rotation external parameter of the target camera in the rotation parameters;
determining a second rotation external parameter, the scale factor and the translation parameter in the rotation parameters according to a second incidence relation which is satisfied between rotation transformation information and translation transformation information in the radar pose transformation sequence, scale factors of a target camera, translation transformation information in the camera pose transformation sequence, rotation parameters and translation parameters required by a camera coordinate system to be converted into a radar coordinate system, and the determined first rotation external parameter;
correcting the camera pose transformation sequence according to the scale factors to obtain a corrected camera pose transformation sequence;
and recalculating rotation parameters and translation parameters according to the radar pose transformation sequence, the corrected camera pose transformation sequence and the first incidence relation and the second incidence relation to obtain target rotation parameters and target translation parameters.
Correspondingly, an embodiment of the present invention further provides a camera calibration device, where the camera calibration device includes:
the system comprises a sequence acquisition unit, a radar acquisition unit and a radar conversion unit, wherein the sequence acquisition unit is used for acquiring an initial camera pose transformation sequence of a target camera and acquiring an initial radar pose transformation sequence of a target radar, the relative positions of the target camera and the target radar are unchanged, the initial camera pose transformation sequence comprises rotation transformation information and translation transformation information of the target camera in a camera coordinate system, and the initial radar pose transformation sequence comprises the rotation transformation information and the translation transformation information of the target radar in a radar coordinate system;
the time synchronization unit is used for carrying out time synchronization processing on the initial camera pose transformation sequence and the initial radar pose transformation sequence to obtain a camera pose transformation sequence and a radar pose transformation sequence;
the first parameter determining unit is used for determining a first rotation external parameter of the target camera in the rotation parameters according to a first incidence relation which is satisfied between the rotation transformation information in the camera pose transformation sequence and the radar pose transformation sequence and the rotation parameters required by converting the camera coordinate system into the radar coordinate system;
a second parameter determination unit, configured to determine, according to a second association relation that is satisfied between rotation transformation information and translation transformation information in the radar pose transformation sequence, a scale factor of a target camera, translation transformation information in the camera pose transformation sequence, and a rotation parameter and a translation parameter that are required for converting a camera coordinate system into a radar coordinate system, and the determined first rotation external parameter, a second rotation external parameter in the rotation parameters, the scale factor, and the translation parameter;
the sequence correction unit is used for correcting the camera pose transformation sequence according to the scale factors to obtain a corrected camera pose transformation sequence;
and the external parameter correcting unit is used for recalculating the rotation parameters and the translation parameters according to the radar pose transformation sequence, the corrected camera pose transformation sequence and the first incidence relation and the second incidence relation to obtain target rotation parameters and target translation parameters.
In an optional example, the initial camera pose transformation sequence further includes time information corresponding to rotation transformation information and translation transformation information of the target camera, and the initial radar pose transformation sequence further includes time information corresponding to rotation transformation information and translation transformation information of the target radar;
correspondingly, the time synchronization unit is configured to calculate time delays of the initial camera pose sequence and the initial radar pose sequence respectively;
adjusting the time information of the initial camera pose sequence and the initial radar pose sequence based on the time delays of the initial camera pose sequence and the initial radar pose sequence;
and performing interpolation calculation on the initial camera pose transformation sequence and the initial radar pose transformation sequence according to the adjusted time information to obtain a camera pose transformation sequence and a radar pose transformation sequence, wherein the time information corresponding to the same sequence position in the camera pose transformation sequence and the radar pose transformation sequence is the same.
In an optional example, the time synchronization unit includes an interpolation calculation subunit, and the interpolation calculation subunit is configured to determine, according to the adjusted time information, camera interpolation point time information in the initial camera pose transformation sequence and radar interpolation point time information in the initial radar pose transformation sequence;
according to the rotation transformation information and the translation transformation information in the initial camera pose transformation sequence, obtaining rotation transformation information and translation transformation information corresponding to the camera interpolation point time information through interpolation calculation, and arranging the corresponding rotation transformation information and translation transformation information according to the sequence of the camera interpolation point time information to obtain a camera pose transformation sequence;
and according to the rotation transformation information and the translation transformation information in the initial radar pose transformation sequence, obtaining rotation transformation information and translation transformation information corresponding to the radar interpolation point time information through interpolation calculation, and arranging the corresponding rotation transformation information and translation transformation information according to the sequence of the radar interpolation point time information to obtain a radar pose transformation sequence.
In an optional example, the first association relationship is that the product of the rotation transformation information corresponding to first time information in the radar pose transformation sequence and the rotation parameter is equal to the product of the rotation parameter and the rotation transformation information corresponding to the first time information in the camera pose transformation sequence;
correspondingly, the first parameter determining unit is configured to construct a first equation between a product of rotation transformation information corresponding to first time information in a radar pose transformation sequence and the rotation parameter and a product of the rotation parameter and rotation transformation information corresponding to the first time information in the camera pose transformation sequence according to a first association relationship, move all calculation items on a target side of the first equation to the other side, and then replace a numerical value on the target side with a preset residual error to obtain a rotation residual error relational expression;
and taking the second rotation external parameter in the rotation parameters as a fixed quantity, and performing residual calculation according to the rotation residual relation to obtain the first rotation external parameter of the target camera in the rotation parameters.
In an optional example, the second association relationship is that the difference between the product of the rotation transformation information corresponding to second time information in the radar pose transformation sequence and the translation parameter, and the translation parameter itself, is equal to the difference between the product of the scale factor, the rotation parameter and the translation transformation information corresponding to the second time information in the camera pose transformation sequence, and the translation transformation information corresponding to the second time information in the radar pose transformation sequence;

correspondingly, the second parameter determining unit is configured to construct a second equation between the two sides of the second association relationship, move all calculation items on a target side of the second equation to the other side, and then replace the value on the target side with a preset residual to obtain a translation residual relation;
and performing residual calculation according to the translation residual relation and the determined first rotation external parameter to obtain a second rotation external parameter, the scale factor and the translation parameter in the rotation parameters.
In an optional example, after the external reference correcting unit recalculates the rotation parameter and the translation parameter according to the radar pose transformation sequence, the corrected camera pose transformation sequence, the first association relationship and the second association relationship to obtain a target rotation parameter and a target translation parameter, the camera calibration apparatus further includes a parameter optimizing unit, configured to calculate the rotation residual relation according to the target rotation parameter, the target translation parameter, the radar pose transformation sequence, and the corrected camera pose transformation sequence to obtain a rotation residual;
calculating the translation residual error relation according to the target rotation parameter, the target translation parameter, the radar pose transformation sequence and the corrected camera pose transformation sequence to obtain a translation residual error;
and optimizing a matrix corresponding to the rotation parameter and the translation parameter according to the rotation residual and the translation residual to obtain the optimized rotation parameter and translation parameter.
In an optional example, the initial camera pose transformation sequence is obtained through image data acquired by a camera, and the initial radar pose transformation sequence is obtained through point cloud data acquired by a radar;
correspondingly, the external reference correcting unit comprises an external reference correcting subunit and an external reference error determining unit, wherein the external reference correcting subunit is used for obtaining a first corrected rotation external reference in the corrected rotation parameters according to the corrected camera pose transformation sequence, the radar pose transformation sequence and the first association relation;
obtaining a second corrected rotation external parameter, a corrected scale factor and a corrected translation parameter in the corrected rotation parameters according to the first corrected rotation external parameter, the corrected camera pose transformation sequence, the radar pose transformation sequence and the second incidence relation;
the external parameter error determining unit is used for performing projection conversion between the image data acquired by the camera and the point cloud data acquired by the radar according to the corrected rotation parameter, the corrected scale factor and the corrected translation parameter, and determining a projection error based on information obtained by the projection conversion;
when the projection error is smaller than a preset projection error threshold value, taking the corrected rotation parameter as a target rotation parameter, and taking the corrected translation parameter as a target translation parameter;
and when the projection error is not smaller than the projection error threshold, correcting the corrected camera pose transformation sequence to obtain a new corrected camera pose sequence, and based on the new corrected camera pose sequence, re-executing the step of obtaining a first corrected rotation external parameter in the corrected rotation parameters according to the corrected camera pose transformation sequence, the radar pose transformation sequence and the first incidence relation.
Correspondingly, the embodiment of the invention also provides an intelligent vehicle, which comprises a processor, a memory, a target camera and a target radar, wherein the processor is used for realizing any step in the camera calibration method provided by the embodiment of the invention when executing the computer program stored in the memory.
In addition, an embodiment of the present invention further provides a storage medium, where the storage medium has a plurality of instructions, and the instructions are suitable for being loaded by a processor to perform any of the steps in the camera calibration method provided in the embodiment of the present invention.
By adopting the scheme of the embodiment of the invention, an initial camera pose transformation sequence of a target camera and an initial radar pose transformation sequence of a target radar can be obtained, wherein the relative positions of the target camera and the target radar are unchanged, the initial camera pose transformation sequence comprises rotation transformation information and translation transformation information of the target camera in a camera coordinate system, and the initial radar pose transformation sequence comprises rotation transformation information and translation transformation information of the target radar in a radar coordinate system. The initial camera pose transformation sequence and the initial radar pose transformation sequence are subjected to time synchronization processing to obtain a camera pose transformation sequence and a radar pose transformation sequence. According to a first association relation satisfied between the rotation transformation information in the camera pose transformation sequence and the radar pose transformation sequence and the rotation parameter required for converting the camera coordinate system into the radar coordinate system, a first rotation external parameter of the target camera in the rotation parameters is determined. According to a second association relation satisfied between the rotation transformation information and translation transformation information in the radar pose transformation sequence, the scale factor of the target camera, the translation transformation information in the camera pose transformation sequence, and the rotation parameter and translation parameter required for converting the camera coordinate system into the radar coordinate system, together with the determined first rotation external parameter, a second rotation external parameter in the rotation parameters, the scale factor and the translation parameter are determined. The camera pose transformation sequence is then corrected according to the scale factor to obtain a corrected camera pose transformation sequence, and the rotation parameter and translation parameter are recalculated according to the radar pose transformation sequence, the corrected camera pose transformation sequence, the first association relation and the second association relation to obtain a target rotation parameter and a target translation parameter. In this embodiment, because the first association relation and the second association relation hold between the target camera and the target radar whose relative positions remain unchanged, the rotation parameter and translation parameter of the camera can be calculated based on the rotation transformation information and translation transformation information in the camera pose transformation sequence and the radar pose transformation sequence obtained after time synchronization processing. Therefore, when the camera is calibrated, a good calibration effect can be achieved without using a calibration plate, the calibration accuracy is improved, and human resources are saved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a scene schematic diagram of a camera calibration method provided in an embodiment of the present invention;
fig. 2 is a schematic flowchart of a camera calibration method according to an embodiment of the present invention;
fig. 3 is another schematic flow chart of a camera calibration method according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a camera calibration device according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an intelligent vehicle according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment of the invention provides a camera calibration method and device, an intelligent vehicle and a storage medium. Specifically, the embodiment of the present invention provides a camera calibration method suitable for a camera calibration device, which may be integrated in an electronic device.
The electronic equipment can be a terminal or other equipment, such as an intelligent vehicle, a smartphone, a smartwatch, a tablet computer, a notebook computer and the like. An intelligent vehicle is an integrated system combining functions such as environmental perception, planning decision and multi-level assisted driving; it concentrates the application of computer, modern sensing, information fusion, communication, artificial intelligence and automatic control technologies, and is a typical high and new technology integrated body. Current research on intelligent vehicles mainly aims to improve the safety and comfort of automobiles and to provide excellent human-vehicle interaction interfaces. An unmanned vehicle is one kind of intelligent vehicle, also called a wheeled mobile robot, which mainly relies on an intelligent driver inside the vehicle, mainly a computer system, to achieve unmanned driving: vehicle-mounted sensors sense the surrounding environment of the vehicle, and the steering and speed of the vehicle are controlled according to the road, vehicle position and obstacle information so obtained, so that the vehicle can drive on roads safely and reliably.
The electronic device may also be a device such as a server, and the server may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a network service, cloud communication, middleware service, a domain name service, a security service, a CDN, and a big data and artificial intelligence platform, but is not limited thereto.
The camera calibration method of the embodiment of the invention can be realized by a server, and can also be realized by a terminal and the server together.
The method is described below by taking an example that the terminal and the server jointly implement the camera calibration method, where the terminal may include a moving device such as an intelligent vehicle.
As shown in fig. 1, a camera calibration system provided by the embodiment of the present invention includes a motion device 10, a server 20, and the like; the moving device 10 and the server 20 may be connected through a network, for example, a wired or wireless network connection, and the like, wherein the moving device 10 may exist as a terminal loaded with a target camera and a target radar.
The moving equipment 10 may receive image data acquired by a target camera loaded on the moving equipment 10 and point cloud data acquired by a target radar loaded on the moving equipment 10.
After receiving the image data and the point cloud data, the moving equipment 10 may directly send the image data and the point cloud data to the server 20, or may store the image data and the point cloud data in a storage space of the moving equipment 10 and send them to the server 20 when receiving a data acquisition request sent by the server 20.
After receiving the image data and the point cloud data sent by the moving equipment 10, the server 20 may obtain an initial camera pose transformation sequence of the target camera according to the image data, and obtain an initial radar pose transformation sequence of the target radar according to the point cloud data, where the relative positions of the target camera and the target radar are unchanged, the initial camera pose transformation sequence includes rotation transformation information and translation transformation information of the target camera in a camera coordinate system, and the initial radar pose transformation sequence includes rotation transformation information and translation transformation information of the target radar in a radar coordinate system. The server 20 may then perform time synchronization processing on the initial camera pose transformation sequence and the initial radar pose transformation sequence to obtain a camera pose transformation sequence and a radar pose transformation sequence; determine a first rotation external parameter of the target camera in the rotation parameters according to a first association relation satisfied between the rotation transformation information in the camera pose transformation sequence and the radar pose transformation sequence and the rotation parameter required for converting the camera coordinate system into the radar coordinate system; determine a second rotation external parameter in the rotation parameters, the scale factor and the translation parameter according to a second association relation satisfied between the rotation transformation information and translation transformation information in the radar pose transformation sequence, the scale factor of the target camera, the translation transformation information in the camera pose transformation sequence, and the rotation parameter and translation parameter required for converting the camera coordinate system into the radar coordinate system, together with the determined first rotation external parameter; correct the camera pose transformation sequence according to the scale factor to obtain a corrected camera pose transformation sequence; and recalculate the rotation parameter and translation parameter according to the radar pose transformation sequence, the corrected camera pose transformation sequence, the first association relation and the second association relation to obtain a target rotation parameter and a target translation parameter.
After obtaining the target rotation parameter and the target translation parameter, the server 20 may continue to perform operations such as pose estimation of the moving apparatus 10 according to the target rotation parameter and the target translation parameter, or may send the target rotation parameter and the target translation parameter to the moving apparatus 10, so that the moving apparatus 10 performs further processing according to the target rotation parameter and the target translation parameter.
The following are detailed below. It should be noted that the following description of the embodiments is not intended to limit the preferred order of the embodiments.
The embodiments of the present invention will be described from the perspective of a camera calibration device, which may be specifically integrated in a server or a terminal.
As shown in fig. 2, a specific process of the camera calibration method of the present embodiment may be as follows:
201. the method comprises the steps of obtaining an initial camera pose transformation sequence of a target camera and obtaining an initial radar pose transformation sequence of a target radar, wherein the relative positions of the target camera and the target radar are unchanged, the initial camera pose transformation sequence comprises rotation transformation information and translation transformation information of the target camera in a camera coordinate system, and the initial radar pose transformation sequence comprises the rotation transformation information and the translation transformation information of the target radar in a radar coordinate system.
It should be noted that the target camera in this embodiment includes, but is not limited to, an ordinary camera, a video camera, and other image acquisition devices, and the target radar includes, but is not limited to, a millimeter wave radar, a laser radar, and other detection devices.
Here, the relative position of the target camera and the target radar being unchanged can be understood as follows: within the time span of the initial camera pose transformation sequence and the initial radar pose transformation sequence, the target camera and the target radar remain in a relatively static state, i.e. the relative distance between them does not change. The relative distance can be calculated based on specific points on the target camera and the target radar, and the positions of these specific points are not limited; for example, the relative distance may be the distance from the center point of the lens of the target camera to the center of gravity of the radar. It can be understood that if the camera rotates, the lens center will change, which would change the relative distance; therefore the requirement in this embodiment that the relative position of the target camera and the target radar be unchanged, which is likewise limited to the time span of the two sequences, means that the camera and radar each do not move independently.
For example, in one example, the target camera and the target radar may both be located on the smart vehicle; a fixture may be provided on the smart vehicle for each of them, and the target camera and the target radar may be fixed to the smart vehicle by the fixtures. Although the target camera and the target radar may themselves be movable, such as rotating, they remain relatively stationary while camera calibration is performed.
The initial camera pose transformation sequence may be derived from image data acquired by the target camera and may include pose transformation information of the target camera between adjacent times. The initial radar pose transformation sequence may be derived from point cloud data collected by the target radar, and may include pose transformation information for the target radar between adjacent times.
In this embodiment, the camera pose represents the position and the posture of the target camera in the camera coordinate system, and the radar pose represents the position and the posture of the target radar in the radar coordinate system.
For example, in one example, the initial camera pose transformation sequence may be of the form $\{T_1, T_2, T_3, \ldots, T_n\}$, where $T_i$ denotes the pose state information of the target camera in the camera coordinate system at the $i$-th position ($i = 1, 2, 3, \ldots, n$). In this embodiment, the pose state information may include position information and posture information of the target camera in the camera coordinate system.
Pose transformation information for the target camera between adjacent times may be determined from pose state information for the initial camera pose transformation sequence, the pose transformation information including rotation transformation information and translation transformation information.
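For illustration only, a minimal sketch (in Python) of one way such per-step pose transformation information can be derived from consecutive pose states; representing each pose state as a 4x4 homogeneous matrix and the relative-pose convention inv(T_prev) @ T_curr are assumptions of this sketch, not requirements of the embodiment:

```python
import numpy as np

def pose_deltas(pose_states):
    """Derive per-step pose transformation information from pose states.

    pose_states: list of 4x4 homogeneous matrices T_1..T_n, all expressed in
    one coordinate system (camera or radar). Returns the (rotation, translation)
    transformation information between each pair of adjacent times.
    """
    deltas = []
    for T_prev, T_curr in zip(pose_states[:-1], pose_states[1:]):
        d = np.linalg.inv(T_prev) @ T_curr          # relative transform
        deltas.append((d[:3, :3], d[:3, 3]))        # 3x3 rotation, 3-vector translation
    return deltas
```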
For example, in another example, the initial camera pose transformation sequence may also be of the form $\{\Delta T_{1,2}, \Delta T_{2,3}, \ldots, \Delta T_{n-1,n}\}$, where $\Delta T_{i-1,i}$ denotes the pose transformation information of the target camera between adjacent moments, determined according to the image data collected by the target camera; that is, $\Delta T_{i-1,i}$ denotes the pose transformation information of the target camera between the $(i-1)$-th acquisition time and the $i$-th acquisition time, determined from the camera pose state information at the $(i-1)$-th and $i$-th acquisition times.
Where the rotation transformation information refers to information for reflecting a change in the pose of the target camera in the camera coordinate system, in one example, the rotation transformation information may be pose change information of the target camera between adjacent acquisition time instants in the camera coordinate system.
The translation transformation information refers to information for reflecting a position change of the target camera in the camera coordinate system, and in one example, the translation transformation information may be position change information of the target camera between adjacent acquisition time instants in the camera coordinate system.
It can be understood that the initial camera pose transformation sequence needs to be obtained by processing image data acquired by the target camera, and in an example, the step of "acquiring an initial camera pose transformation sequence of the target camera" may specifically include:
acquiring image data acquired by a target camera, wherein the image data comprises at least two images and an acquisition timestamp of each image;
detecting the characteristic points of each image to obtain the characteristic points of each image;
carrying out feature point matching between the images and determining target images with the same feature points;
obtaining an image sequence according to the target image and the acquisition time stamp of the target image;
and carrying out camera pose extraction operation on the image sequence to obtain an initial camera pose transformation sequence.
In this example, the acquisition timestamp of each image includes a time identifier used to determine the acquisition time of that image, and the target images need to be processed according to their acquisition timestamps to generate the image sequence.
The feature points represent points with characteristic properties in the image data acquired by the target camera; they can represent an image or target in an identical, or at least very similar, invariant form in other similar images containing the same scene or target. It will be appreciated that a feature point need not be a single point; it may also comprise a set of local information, and in many cases it is itself a small local region.
In the step "detecting the feature points of each image to obtain the feature points of each image", any algorithm capable of extracting feature points from image data may be used, such as the Features from Accelerated Segment Test (FAST) algorithm or the Speeded-Up Robust Features (SURF) algorithm.
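For illustration, a minimal sketch of the feature detection and matching steps, assuming the OpenCV library; the choice of the ORB detector (FAST keypoints with binary descriptors) and the match threshold are assumptions of this sketch, since the embodiment leaves the feature algorithm open:

```python
import cv2

def match_frames(img_a, img_b, min_matches=20):
    """Detect feature points in two images and match them between the images."""
    orb = cv2.ORB_create(nfeatures=1000)            # FAST corners + binary descriptors
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
    # Treat the pair as target images with the same feature points only if
    # enough correspondences survive the cross-check.
    return matches if len(matches) >= min_matches else None
```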
It can be understood that the initial radar pose transformation sequence needs to be obtained by processing point cloud data acquired by the target radar, and in an example, the step of "acquiring the initial radar pose transformation sequence of the target radar" may specifically include:
acquiring a plurality of point cloud data acquired by a target radar and an acquisition timestamp of each point cloud data;
carrying out point cloud registration processing on the point cloud data to obtain a registered point cloud data set, wherein any point cloud data in the registered point cloud data set and other point cloud data are successfully registered;
and obtaining an initial radar pose transformation sequence according to the point cloud data in the registration point cloud data set and the acquisition time stamp of the point cloud data.
In this embodiment, the point cloud data refers to a set of vectors in a radar coordinate system.
In this example, the acquisition timestamp of each point cloud data may include a time identifier used to determine the acquisition time of each point cloud data, and the point cloud data after point cloud registration needs to be processed according to the acquisition timestamp to generate an initial radar pose transformation sequence.
In the step of performing point cloud registration processing on the point cloud data to obtain the registered point cloud data, the registration may be achieved by Iterative Closest Point (ICP) registration or the Normal Distributions Transform (NDT). The ICP algorithm is a common algorithm for point cloud matching in laser simultaneous localization and mapping (SLAM). It is essentially an optimal registration method based on the least squares method: the two closest points are taken to correspond, corresponding point pairs are repeatedly selected, and the optimal rigid body transformation is recomputed until the convergence accuracy requirement for correct registration is met.
It can be understood that before the point cloud data is subjected to point cloud registration processing, it can also be subjected to point cloud filtering and other processing. Originally acquired point cloud data often contains a large number of noise points and isolated points, so filtering retains the effective point cloud data and reduces the computation load.
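As an illustration of the registration step, a minimal sketch assuming the Open3D library; the voxel size, outlier-removal parameters and correspondence distance are assumptions of this sketch:

```python
import open3d as o3d

def relative_radar_pose(cloud_prev, cloud_curr, max_dist=0.5):
    """Filter two adjacent scans, then estimate their relative pose with ICP."""
    # Downsample and drop isolated points first, as suggested above, to retain
    # the effective point cloud data and reduce the computation load.
    prev = cloud_prev.voxel_down_sample(voxel_size=0.1)
    curr = cloud_curr.voxel_down_sample(voxel_size=0.1)
    prev, _ = prev.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
    curr, _ = curr.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
    result = o3d.pipelines.registration.registration_icp(
        curr, prev, max_dist,
        estimation_method=o3d.pipelines.registration.
        TransformationEstimationPointToPoint())
    return result.transformation  # 4x4 pose transform between adjacent scans
```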
202. And carrying out time synchronization processing on the initial camera pose transformation sequence and the initial radar pose transformation sequence to obtain a camera pose transformation sequence and a radar pose transformation sequence.
In one example, the initial camera pose transformation sequence further includes time information corresponding to rotation transformation information and translation transformation information of the target camera, and the initial radar pose transformation sequence further includes time information corresponding to rotation transformation information and translation transformation information of the target radar.
The time synchronization processing in this embodiment has an effect of making the time information corresponding to the same sequence position in the camera pose transformation sequence and the radar pose transformation sequence obtained by processing the same, where a specific scheme adopted by the time synchronization processing is not limited, and for example, the time synchronization processing may be implemented based on an interpolation method.
The time synchronization processing is performed on the initial camera pose transformation sequence and the initial radar pose transformation sequence to obtain a camera pose transformation sequence and a radar pose transformation sequence, and the method comprises the following steps:
calculating time delays of the initial camera pose sequence and the initial radar pose sequence respectively;
adjusting the time information of the initial camera pose sequence and the initial radar pose sequence based on the time delays of the initial camera pose sequence and the initial radar pose sequence;
and performing interpolation calculation on the initial camera pose transformation sequence and the initial radar pose transformation sequence according to the adjusted time information to obtain a camera pose transformation sequence and a radar pose transformation sequence, wherein the time information corresponding to the same sequence position in the camera pose transformation sequence and the radar pose transformation sequence is the same.
The time delay is the difference in time information between the target camera and the target radar due to different acquisition frame rates, different data transmission rates, and the like.
The fact that the time information corresponding to the same sequence position is the same means that, after the rotation transformation information and translation transformation information of the target camera are written into the camera pose transformation sequence in time order and the rotation transformation information and translation transformation information of the target radar are written into the radar pose transformation sequence in time order, the time information corresponding to the rotation transformation information and translation transformation information at the same position in the two sequences is the same. For example, the time information corresponding to the rotation transformation information and translation transformation information at the $i$-th position of the camera pose transformation sequence is the same as the time information corresponding to the rotation transformation information and translation transformation information at the $i$-th position of the radar pose transformation sequence.
It can be understood that the camera pose time information and the radar pose time information need to be processed according to the acquisition frame rate, the data transmission rate and other information between the target camera and the target radar, so as to eliminate the time delay of the camera pose time information and the radar pose time information.
After the time delay is eliminated, the initial camera pose transformation sequence and the initial radar pose transformation sequence may be further processed to synchronize the initial camera pose transformation sequence and the initial radar pose transformation sequence, i.e., to obtain a camera pose transformation sequence and a radar pose transformation sequence.
In one example, the performing interpolation calculation on the initial camera pose transformation sequence and the initial radar pose transformation sequence according to the synchronized radar pose time information and the synchronized camera pose time information to obtain a camera pose transformation sequence and a radar pose transformation sequence includes:
determining camera interpolation point time information in the initial camera pose transformation sequence and radar interpolation point time information in the initial radar pose transformation sequence;
according to the rotation transformation information and the translation transformation information in the initial camera pose transformation sequence, obtaining rotation transformation information and translation transformation information corresponding to the camera interpolation point time information through interpolation calculation to obtain a camera pose transformation sequence;
and according to the rotation transformation information and the translation transformation information in the initial radar pose transformation sequence, obtaining rotation transformation information and translation transformation information corresponding to the radar interpolation point time information through interpolation calculation to obtain a radar pose transformation sequence.
For example, in the initial camera pose transformation sequence, the rotation transformation information and translation transformation information corresponding to one or more groups of time information before and after the camera interpolation point time information may be selected, and interpolation may be performed by averaging. Alternatively, the rotation transformation information and translation transformation information corresponding to several groups of time information before and after the camera interpolation point time information may be selected to fit a curve, and the rotation transformation information and translation transformation information corresponding to the camera interpolation point time information may then be read from the fitted curve. Interpolation methods with larger computation cost, such as inverse distance weighting or trend-surface smoothing interpolation, can also be used according to actual requirements.
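As one concrete possibility, the following sketch interpolates a pose sequence at the chosen interpolation point times, using spherical linear interpolation (SLERP) for the rotation part and per-axis linear interpolation for the translation part; SLERP is not named in the text and is shown here only as one standard choice:

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def interpolate_poses(times, rotations, translations, query_times):
    """Interpolate rotation/translation information at the interpolation points.

    times: (N,) sorted timestamps; rotations: scipy Rotation of length N;
    translations: (N, 3); query_times must lie within [times[0], times[-1]].
    """
    slerp = Slerp(times, rotations)                 # rotation interpolation
    interp_rots = slerp(query_times)
    interp_trans = np.stack(
        [np.interp(query_times, times, translations[:, k]) for k in range(3)],
        axis=1)                                     # linear per translation axis
    return interp_rots, interp_trans

# usage: rots = Rotation.from_quat(quats); interpolate_poses(t, rots, trans, t_query)
```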
In another example, in step 202, interpolation processing may also be performed only on the initial camera pose transformation sequence or the initial radar pose transformation sequence. For example:
taking the initial radar pose transformation sequence as a radar pose transformation sequence;
determining time information corresponding to the rotation transformation information and the translation transformation information of the target radar in the radar pose transformation sequence;
determining corresponding camera interpolation point time information in the initial camera pose transformation sequence based on the time information corresponding to the rotation transformation information and the translation transformation information of the target radar;
and according to the rotation transformation information and the translation transformation information in the initial camera pose transformation sequence, obtaining rotation transformation information and translation transformation information corresponding to the time information of the camera interpolation points through interpolation calculation to obtain a camera pose transformation sequence.
203. And determining a first rotation external parameter of the target camera in the rotation parameters according to a first association relation which is satisfied between the rotation transformation information in the camera pose transformation sequence and the radar pose transformation sequence and the rotation parameters required by converting the camera coordinate system into the radar coordinate system.
The rotation parameter is such that, after the camera coordinate system is rotated around its origin according to the rotation parameter, the directions of its coordinate axes coincide with the directions of the coordinate axes of the radar coordinate system.
The first incidence relation is that the product of the rotation transformation information corresponding to first time information in the radar pose transformation sequence and the rotation parameter is equal to the product of the rotation parameter and the rotation transformation information corresponding to the first time information in the camera pose transformation sequence.
For example, the first incidence relation may be of the form shown in the following formula:

$$R^{L}_{i}\, R_{CL} = R_{CL}\, R^{C}_{i}$$

where $R^{L}_{i}$ denotes the rotation transformation information of the target radar in the radar coordinate system between time $i$ and time $i+1$ (the first time information), obtained from the radar pose transformation sequence; $R_{CL}$ denotes the rotation parameter from the camera coordinate system to the radar coordinate system; and $R^{C}_{i}$ denotes the rotation transformation information of the target camera in the camera coordinate system between time $i$ and time $i+1$ (the first time information), obtained from the camera pose transformation sequence. The product $R_{CL}\, R^{C}_{i}$ represents the rotation transformation information of the target camera between time $i$ and time $i+1$ in the radar coordinate system.
In this example, the rotation transformation information may be represented in the form of a matrix, and the rotation transformation information of the target camera in the camera coordinate system may be transformed into the rotation transformation information of the target camera in the radar coordinate system by the rotation parameters.
In order to calculate the rotation parameter accurately and improve its precision, in this embodiment the rotation parameter between the camera coordinate system and the radar coordinate system may be expressed as a product of rotation sub-parameters resolved about the three coordinate axes Z, Y, X of the radar coordinate system, as shown in the following equation:

$$R_{CL} = R_{z}(\gamma)\, R_{y}(\beta)\, R_{x}(\alpha)$$

where $R_{z}(\gamma)$ denotes the rotation sub-parameter between the origin of the camera coordinate system and the z-axis of the radar coordinate system, $R_{y}(\beta)$ denotes the rotation sub-parameter between the origin of the camera coordinate system and the y-axis of the radar coordinate system, and $R_{x}(\alpha)$ denotes the rotation sub-parameter between the origin of the camera coordinate system and the x-axis of the radar coordinate system. $R_{z}(\gamma)$, $R_{y}(\beta)$ and $R_{x}(\alpha)$ may each be a matrix.
The arrangement of the radar coordinate system is not limited in this embodiment.
The first rotation external parameter among the rotation parameters refers to the rotation sub-parameters $\alpha$ and $\beta$, and the second rotation external parameter refers to the rotation sub-parameter $\gamma$, where $\alpha$ is the roll angle of the camera external parameters, $\beta$ is the pitch angle of the camera external parameters, and $\gamma$ is the yaw angle of the camera external parameters.
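For illustration, a minimal sketch composing the rotation parameter from the three rotation sub-parameters in the ZYX order described above (the angle order of the arguments is an assumption of this sketch):

```python
import numpy as np

def rotation_from_zyx(yaw, pitch, roll):
    """Compose R = Rz(yaw) @ Ry(pitch) @ Rx(roll)."""
    cz, sz = np.cos(yaw), np.sin(yaw)
    cy, sy = np.cos(pitch), np.sin(pitch)
    cx, sx = np.cos(roll), np.sin(roll)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    return Rz @ Ry @ Rx
```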
In one example, the determining a first rotation external parameter of the target camera in the rotation parameters according to a first association relation satisfied between rotation transformation information of the target camera and the target radar in respective coordinate systems and the rotation parameters required by converting the camera coordinate system into a radar coordinate system includes:
constructing a first equation between a product of rotation transformation information corresponding to first time information in a radar pose transformation sequence and the rotation parameter and a product of the rotation parameter and the rotation transformation information corresponding to the first time information in the camera pose transformation sequence according to a first incidence relation, moving all calculation items on a target side of the first equation to the other side, and then replacing a numerical value on the target side with a preset residual error to obtain a rotation residual error relational expression;
and taking the second rotation external parameter in the rotation parameters as a quantitative quantity, and performing residual calculation according to the rotation residual relation to obtain the first rotation external parameter of the target camera in the rotation parameters.
In one example, the rotation residual relational expression may be expressed in the form of the following equation:

$e_R = R_l^i \cdot R - R \cdot R_c^i$

wherein $e_R$ represents the preset residual that replaces the value on the target side, $R_l^i$ is the rotation transformation information corresponding to the first time information in the radar pose transformation sequence, and $R_c^i$ is the rotation transformation information corresponding to the first time information in the camera pose transformation sequence. When residual calculation is performed according to the rotation residual relational expression, the first rotation external parameter among the rotation parameters can be solved by minimizing the rotation residual.
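As a minimal sketch of this solving step, assuming the synchronized sequences are available as lists of 3x3 rotation matrices, and using a generic numerical optimizer rather than the embodiment's specific solver (yaw0, R_l_seq and R_c_seq are hypothetical inputs):

    import numpy as np
    from scipy.optimize import minimize
    from scipy.spatial.transform import Rotation

    def rotation_residual(angles, yaw_fixed, R_l_seq, R_c_seq):
        # accumulated squared modulus of e_R = R_l^i * R - R * R_c^i
        roll, pitch = angles
        # intrinsic 'ZYX' order gives R = Rz(yaw) @ Ry(pitch) @ Rx(roll)
        R = Rotation.from_euler('ZYX', [yaw_fixed, pitch, roll]).as_matrix()
        return sum(np.linalg.norm(Rl @ R - R @ Rc, 'fro') ** 2
                   for Rl, Rc in zip(R_l_seq, R_c_seq))

    # yaw0: the second rotation external parameter held fixed as a known value
    result = minimize(rotation_residual, x0=[0.0, 0.0],
                      args=(yaw0, R_l_seq, R_c_seq), method='Nelder-Mead')
    roll_est, pitch_est = result.x  # the first rotation external parameter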
204. And determining a second rotation external parameter among the rotation parameters, as well as the scale factor and the translation parameter, according to the second association relation satisfied among the rotation transformation information and the translation transformation information in the radar pose transformation sequence, the scale factor of the target camera, the translation transformation information in the camera pose transformation sequence, and the rotation parameters and translation parameters required for converting the camera coordinate system into the radar coordinate system, together with the determined first rotation external parameter.
The translation parameters are such that, after the camera coordinate system is translated according to them, the origin of the camera coordinate system coincides with the origin of the radar coordinate system.
Wherein the scale factor is the ratio of the distance between the object and the camera in the real space to the focal length of the target camera.
The second association relation states that the difference between the product of the rotation transformation information corresponding to second time information in the radar pose transformation sequence and the translation parameter, and the translation parameter itself, is equal to the product of the scale factor, the rotation parameter and the translation transformation information corresponding to the second time information in the camera pose transformation sequence, minus the translation transformation information corresponding to the second time information in the radar pose transformation sequence.

For example, the second association relation may be in the form of the following formula:

$R_l^i \cdot t - t = s \cdot R \cdot t_c^i - t_l^i$

wherein $t$ denotes the translation parameter, $s$ denotes the scale factor, $t_c^i$ denotes the translation transformation information of the target camera in the camera coordinate system between time $i$ and time $i+1$ (the second time information), extracted from the camera pose transformation sequence, and $t_l^i$ denotes the translation transformation information of the target radar in the radar coordinate system between time $i$ and time $i+1$, extracted from the radar pose transformation sequence. Under ideal conditions, $s$ may be 1.
In one example, the determining of the second rotation external parameter among the rotation parameters, the scale factor and the translation parameter according to the second association relation satisfied among the rotation transformation information and the translation transformation information in the radar pose transformation sequence, the scale factor of the target camera, the translation transformation information in the camera pose transformation sequence, and the rotation parameters and translation parameters required for converting the camera coordinate system into the radar coordinate system, together with the determined first rotation external parameter, includes:

constructing, according to the second association relation, a second equation between the difference between the product of the rotation transformation information corresponding to second time information in the radar pose transformation sequence and the translation parameter and the translation parameter itself, on one side, and the difference between the product of the scale factor, the rotation parameter and the translation transformation information corresponding to the second time information in the camera pose transformation sequence and the translation transformation information corresponding to the second time information in the radar pose transformation sequence, on the other; moving all calculation terms on a target side of the second equation to the other side, and replacing the value on the target side with a preset residual to obtain a translation residual relational expression;

and performing residual calculation according to the translation residual relational expression and the determined first rotation external parameter to obtain the second rotation external parameter among the rotation parameters, the scale factor and the translation parameter.
Wherein the second equation may be in the form of:

$(R_l^i - I_3) \cdot t = s \cdot R \cdot t_c^i - t_l^i$

wherein $I_3$ is the identity matrix obtained after the translation parameter $t$ is factored out of the left-hand side of the second association relation.
In one example, the translation residual relational expression may be expressed in the form of the following equation:

$e_t = (R_l^i - I_3) \cdot t - (s \cdot R \cdot t_c^i - t_l^i)$

wherein $e_t$ represents the preset residual that replaces the value on the target side. When residual calculation is performed according to the translation residual relational expression and the determined first rotation external parameter, the second rotation external parameter among the rotation parameters, the scale factor and the translation parameter can be solved by minimizing the translation residual. Note that $e_t$ and the rotation residual $e_R$ are different residuals.
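Analogously, a hedged sketch of minimizing the translation residual with the previously estimated roll and pitch held fixed; the variable names continue the sketch above, and t_c_seq and t_l_seq are hypothetical per-step translation vectors:

    import numpy as np
    from scipy.optimize import minimize
    from scipy.spatial.transform import Rotation

    def translation_residual(params, roll, pitch, R_l_seq, t_c_seq, t_l_seq):
        # accumulated squared modulus of
        # e_t = (R_l^i - I3) @ t - (s * R @ t_c^i - t_l^i)
        yaw, s = params[0], params[1]
        t = np.asarray(params[2:5])
        R = Rotation.from_euler('ZYX', [yaw, pitch, roll]).as_matrix()
        I3 = np.eye(3)
        return sum(np.linalg.norm((Rl - I3) @ t - (s * R @ tc - tl)) ** 2
                   for Rl, tc, tl in zip(R_l_seq, t_c_seq, t_l_seq))

    result = minimize(translation_residual, x0=[0.0, 1.0, 0.0, 0.0, 0.0],
                      args=(roll_est, pitch_est, R_l_seq, t_c_seq, t_l_seq),
                      method='Nelder-Mead')
    yaw_est, s_est, t_est = result.x[0], result.x[1], result.x[2:5]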
205. And correcting the camera pose transformation sequence according to the scale factor to obtain a corrected camera pose transformation sequence.
The correcting of the camera pose transformation sequence according to the scale factor may specifically be: modifying the rotation transformation information and the translation transformation information of the target camera in the camera coordinate system in the camera pose transformation sequence according to the calculated scale factor. In step 206, the rotation parameters and the translation parameters are recalculated according to the corrected camera pose transformation sequence, so that the accuracy of the target rotation external parameters and the target translation external parameters can be improved.
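A minimal sketch of this correction, assuming the camera pose transformation sequence is stored as (rotation, translation) pairs; in this sketch only the up-to-scale monocular translations are rescaled by the recovered factor:

    import numpy as np

    def correct_camera_sequence(camera_seq, s):
        # camera_seq: list of (R_c, t_c) per step; monocular visual odometry
        # recovers t_c only up to scale, so multiply it by the scale factor s
        return [(Rc, s * np.asarray(tc)) for Rc, tc in camera_seq]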
206. And recalculating rotation parameters and translation parameters according to the radar pose transformation sequence, the corrected camera pose transformation sequence and the first incidence relation and the second incidence relation to obtain target rotation parameters and target translation parameters.
In practical applications, as sensors such as cameras and radars continue to improve, the influence of factors such as noise on the acquired image data and point cloud data becomes smaller and smaller. Therefore, denoising may be skipped after the image data and point cloud data are acquired; instead, an optimization calculation is performed after the rotation parameters and translation parameters are obtained, so as to eliminate the influence of noise.
In one example, after step 206, further comprising:
calculating the relation of the rotation residual errors according to the target rotation parameters, the target translation parameters, the radar pose transformation sequence and the corrected camera pose transformation sequence to obtain rotation residual errors;
calculating the translation residual error relation according to the target rotation parameter, the target translation parameter, the radar pose transformation sequence and the corrected camera pose transformation sequence to obtain a translation residual error;
and optimizing a matrix corresponding to the rotation parameter and the translation parameter according to the rotation residual and the translation residual to obtain the optimized rotation parameter and translation parameter.
The step of optimizing the matrix corresponding to the rotation parameter and the translation parameter according to the rotation residual and the translation residual to obtain the optimized rotation parameter and translation parameter may be specifically implemented by the following formula:

$T^{*} = \arg\min_{T} \sum_{i=1}^{N-1} \sum_{j=i+1}^{N} \left\| e_{i,j} \right\|^{2}$

wherein $T$ may represent the matrix of the rotation parameter and the translation parameter; optionally, $T$ may be obtained by splicing the matrices corresponding to the rotation parameter and the translation parameter. $i$ denotes the $i$-th moment in the camera pose transformation sequence and the radar pose transformation sequence, with a minimum value of 1 and a maximum value less than the number of moments $N$ in the two sequences; $j$ denotes the $j$-th moment, with a minimum value of 2 and a maximum value of $N$; and $\left\| e_{i,j} \right\|^{2}$ denotes that the rotation and translation residuals between the $i$-th and $j$-th moments are evaluated first, and the squared modulus is then taken.
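A sketch of this joint optimization under simplifying assumptions: it reuses the residual definitions above and, for brevity, accumulates per-step residuals rather than ranging over all pairs of moments i < j as in the formula:

    import numpy as np
    from scipy.optimize import minimize
    from scipy.spatial.transform import Rotation

    def joint_residual(params, s, R_l_seq, R_c_seq, t_c_seq, t_l_seq):
        # total cost: squared moduli of the rotation and translation residuals
        roll, pitch, yaw = params[:3]
        t = np.asarray(params[3:6])
        R = Rotation.from_euler('ZYX', [yaw, pitch, roll]).as_matrix()
        I3 = np.eye(3)
        cost = 0.0
        for Rl, Rc, tc, tl in zip(R_l_seq, R_c_seq, t_c_seq, t_l_seq):
            e_R = Rl @ R - R @ Rc                    # rotation residual
            e_t = (Rl - I3) @ t - (s * R @ tc - tl)  # translation residual
            cost += np.linalg.norm(e_R, 'fro') ** 2 + np.linalg.norm(e_t) ** 2
        return cost

    # refine jointly, starting from the previously recalculated parameters
    x0 = [roll_est, pitch_est, yaw_est, *t_est]
    result = minimize(joint_residual, x0=x0,
                      args=(s_est, R_l_seq, R_c_seq, t_c_seq, t_l_seq),
                      method='Nelder-Mead')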
It can be understood that, in order to ensure the accuracy of the target rotation parameter and the target translation parameter, the rotation external parameter and the translation external parameter should be checked after they are calculated, to confirm that the calculated values can be used as the target rotation parameter and the target translation parameter.
In one example, the initial camera pose transformation sequence is obtained through image data acquired by a camera, and the initial radar pose transformation sequence is obtained through point cloud data acquired by a radar;
the recalculating rotation parameters and translation parameters according to the radar pose transformation sequence, the corrected camera pose transformation sequence and the first and second association relations to obtain target rotation parameters and target translation parameters includes:
obtaining a first corrected rotation external parameter in the corrected rotation parameters according to the corrected camera pose transformation sequence, the radar pose transformation sequence and the first incidence relation;
obtaining a second corrected rotation external parameter, a corrected scale factor and a corrected translation parameter in the corrected rotation parameters according to the first corrected rotation external parameter, the corrected camera pose transformation sequence, the radar pose transformation sequence and the second incidence relation;
performing projection conversion between the image data acquired by the camera and the point cloud data acquired by the radar according to the corrected rotation parameter, the corrected scale factor and the corrected translation parameter, and determining a projection error based on information obtained by the projection conversion;
when the projection error is smaller than a preset projection error threshold value, taking the corrected rotation parameter as a target rotation parameter, and taking the corrected translation parameter as a target translation parameter;
and when the projection error is not smaller than the projection error threshold, correcting the corrected camera pose transformation sequence to obtain a new corrected camera pose sequence, and based on the new corrected camera pose sequence, re-executing the step of obtaining a first corrected rotation external parameter in the corrected rotation parameters according to the corrected camera pose transformation sequence, the radar pose transformation sequence and the first incidence relation.
The projection error threshold is a preset parameter value used for judging whether the projection error meets the requirement, and a developer can set it according to the actual situation; for example, if the developer wants the projection error to be less than 99%, the projection error threshold can be set to 99%. When the projection error is smaller than the projection error threshold, the obtained corrected rotation parameter and corrected translation parameter can be considered to meet the requirement; the corrected rotation parameter is then taken as the target rotation parameter, and the corrected translation parameter as the target translation parameter.
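A sketch of the projection-error check, assuming a set of radar points with matched pixel locations and the camera intrinsic matrix K are available (the matching itself is outside this sketch), and assuming R, t map camera coordinates into the radar coordinate system:

    import numpy as np

    def mean_projection_error(points_radar, pixels, K, R, t):
        # bring radar points into the camera frame with the inverse extrinsics,
        # project with the pinhole model, and compare against matched pixels
        errors = []
        for P, uv in zip(points_radar, pixels):
            p_cam = R.T @ (np.asarray(P) - t)   # radar frame -> camera frame
            u = K @ (p_cam / p_cam[2])          # pinhole projection
            errors.append(np.linalg.norm(u[:2] - np.asarray(uv)))
        return float(np.mean(errors))

    # accept the corrected parameters once the error is below the threshold
    if mean_projection_error(pts_radar, pts_img, K, R_corr, t_corr) < err_thresh:
        R_target, t_target = R_corr, t_corr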
As can be seen from the above, the embodiment of the present invention obtains an initial camera pose transformation sequence of a target camera and an initial radar pose transformation sequence of a target radar, wherein the relative position between the target camera and the target radar is unchanged; the initial camera pose transformation sequence includes rotation transformation information and translation transformation information of the target camera in the camera coordinate system, and the initial radar pose transformation sequence includes rotation transformation information and translation transformation information of the target radar in the radar coordinate system. The two sequences are time-synchronized to obtain a camera pose transformation sequence and a radar pose transformation sequence. According to the first association relation satisfied between the rotation transformation information in the camera pose transformation sequence and the radar pose transformation sequence and the rotation parameters required for converting the camera coordinate system into the radar coordinate system, a first rotation external parameter of the target camera among the rotation parameters is determined. According to the second association relation satisfied among the rotation transformation information and translation transformation information in the radar pose transformation sequence, the scale factor of the target camera, the translation transformation information in the camera pose transformation sequence, and the rotation parameters and translation parameters required for converting the camera coordinate system into the radar coordinate system, together with the determined first rotation external parameter, a second rotation external parameter among the rotation parameters, the scale factor and the translation parameter are determined. The camera pose transformation sequence is then corrected according to the scale factor to obtain a corrected camera pose transformation sequence, and the rotation parameters and translation parameters are recalculated according to the radar pose transformation sequence, the corrected camera pose transformation sequence, the first association relation and the second association relation to obtain target rotation parameters and target translation parameters. In this embodiment, because the first association relation and the second association relation exist between a target camera and a target radar whose relative positions remain unchanged, the rotation parameters and translation parameters of the camera can be calculated based on the rotation transformation information and translation transformation information in the time-synchronized camera pose transformation sequence and radar pose transformation sequence. Therefore, a good calibration effect can be achieved without using a calibration plate when calibrating the camera, the calibration accuracy is improved, and human resources are saved.
The method according to the preceding embodiment is illustrated in further detail below by way of example.
In this embodiment, the system shown in fig. 1 will be described as a system in which a camera calibration device is integrated into a smart vehicle, and a target camera and a target radar are loaded on the smart vehicle.
As shown in fig. 3, the specific process of the camera calibration method of this embodiment may be as follows:
301. the target camera collects image data and sends the image data to the intelligent vehicle, and the target radar collects point cloud data and sends the point cloud data to the intelligent vehicle.
Wherein the target camera and the target radar can move along with the movement of the intelligent vehicle, but the relative positions of the target camera and the target radar are kept unchanged.
302. The intelligent vehicle receives image data collected by the target camera and point cloud data collected by the target radar.
After receiving the image data and the point cloud data, the intelligent vehicle can perform denoising processing on the image data and the point cloud data, and remove noise points generated by factors such as camera shake and radar shake in the image data and the point cloud data.
303. The intelligent vehicle acquires an initial camera pose transformation sequence of a target camera according to the image data and the point cloud data, and acquires an initial radar pose transformation sequence of a target radar, wherein the relative position of the target camera and the target radar is unchanged, the initial camera pose transformation sequence comprises rotation transformation information and translation transformation information of the target camera in a camera coordinate system, and the initial radar pose transformation sequence comprises the rotation transformation information and the translation transformation information of the target radar in a radar coordinate system.
In one embodiment, the initial camera pose transformation sequence is derived from image feature point matching of image data captured by the target camera, and may include pose transformation information of the target camera between adjacent time instants. The initial radar pose transformation sequence is obtained by point cloud registration of point cloud data acquired by a target radar, and can comprise pose transformation information of the target radar between adjacent moments.
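For illustration only, the two front ends could be built with off-the-shelf tooling, for example OpenCV feature matching for the camera and Open3D ICP for the radar; this is an assumption about tooling, not the embodiment's prescribed implementation:

    import numpy as np
    import cv2
    import open3d as o3d

    def camera_step(img_prev, img_next, K):
        # relative camera pose between adjacent frames from feature matching;
        # the recovered monocular translation is known only up to scale
        orb = cv2.ORB_create()
        k1, d1 = orb.detectAndCompute(img_prev, None)
        k2, d2 = orb.detectAndCompute(img_next, None)
        matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
        p1 = np.float32([k1[m.queryIdx].pt for m in matches])
        p2 = np.float32([k2[m.trainIdx].pt for m in matches])
        E, _ = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC)
        _, R, t, _ = cv2.recoverPose(E, p1, p2, K)
        return R, t.ravel()

    def radar_step(cloud_prev, cloud_next, max_corr_dist=1.0):
        # relative radar pose between adjacent scans from point cloud ICP
        reg = o3d.pipelines.registration.registration_icp(
            cloud_prev, cloud_next, max_corr_dist)
        return reg.transformation  # 4x4 homogeneous pose transform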
304. And the intelligent vehicle carries out time synchronization processing on the initial camera pose transformation sequence and the initial radar pose transformation sequence to obtain a camera pose transformation sequence and a radar pose transformation sequence.
In one example, step 304 includes:
calculating time delays of the initial camera pose sequence and the initial radar pose sequence respectively;
adjusting the time information of the initial camera pose sequence and the initial radar pose sequence based on the time delays of the initial camera pose sequence and the initial radar pose sequence;
determining camera interpolation point time information in the initial camera pose transformation sequence and radar interpolation point time information in the initial radar pose transformation sequence;
according to the rotation transformation information and the translation transformation information in the initial camera pose transformation sequence, obtaining rotation transformation information and translation transformation information corresponding to the camera interpolation point time information through interpolation calculation to obtain a camera pose transformation sequence;
and according to the rotation transformation information and the translation transformation information in the initial radar pose transformation sequence, obtaining rotation transformation information and translation transformation information corresponding to the radar interpolation point time information through interpolation calculation to obtain a radar pose transformation sequence.
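A compact sketch of the interpolation above, assuming SciPy's rotation utilities: spherical linear interpolation for the rotation transformation information and per-axis linear interpolation for the translation transformation information:

    import numpy as np
    from scipy.spatial.transform import Rotation, Slerp

    def resample_pose_sequence(times, rots, trans, query_times):
        # times: delay-adjusted timestamps; rots: a scipy Rotation holding one
        # rotation per timestamp; trans: (N, 3) translations at the same times
        slerp = Slerp(times, rots)
        R_q = slerp(query_times)                      # interpolated rotations
        t_q = np.stack([np.interp(query_times, times, trans[:, i])
                        for i in range(3)], axis=1)   # linear interpolation
        return R_q, t_q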
305. And the intelligent vehicle determines a first rotation external parameter of the target camera in the rotation parameters according to a first association relation which is satisfied between the rotation transformation information in the camera pose transformation sequence and the radar pose transformation sequence and the rotation parameters required by converting the camera coordinate system into the radar coordinate system.
After the camera coordinate system is rotated about its own origin according to the rotation parameters, the directions of its coordinate axes are consistent with the directions of the coordinate axes in the radar coordinate system.
306. And the intelligent vehicle determines a second rotation external parameter among the rotation parameters, as well as the scale factor and the translation parameter, according to the second association relation satisfied among the rotation transformation information and the translation transformation information in the radar pose transformation sequence, the scale factor of the target camera, the translation transformation information in the camera pose transformation sequence, and the rotation parameters and translation parameters required for converting the camera coordinate system into the radar coordinate system, together with the determined first rotation external parameter.

The translation parameters are such that, after the camera coordinate system is translated according to them, the origin of the camera coordinate system coincides with the origin of the radar coordinate system. The scale factor is the ratio of the distance between the object and the camera in real space to the focal length of the target camera.
307. And the intelligent vehicle corrects the camera pose transformation sequence according to the scale factor to obtain a corrected camera pose transformation sequence.
Correcting the camera pose transformation sequence according to the scale factor means modifying the rotation transformation information and the translation transformation information of the target camera in the camera coordinate system in the camera pose transformation sequence according to the calculated scale factor, so as to improve the accuracy of the target rotation external parameters and the target translation external parameters.
308. And the intelligent vehicle recalculates the rotation parameters and the translation parameters according to the radar pose transformation sequence, the corrected camera pose transformation sequence, the first incidence relation and the second incidence relation to obtain target rotation parameters and target translation parameters.
Optionally, step 308 may specifically include: obtaining a first corrected rotation external parameter in the corrected rotation parameters according to the corrected camera pose transformation sequence, the radar pose transformation sequence and the first incidence relation;
obtaining a second corrected rotation external parameter, a corrected scale factor and a corrected translation parameter in the corrected rotation parameters according to the first corrected rotation external parameter, the corrected camera pose transformation sequence, the radar pose transformation sequence and the second incidence relation;
performing projection conversion between the image data acquired by the camera and the point cloud data acquired by the radar according to the corrected rotation parameter, the corrected scale factor and the corrected translation parameter, and determining a projection error based on information obtained by the projection conversion;
when the projection error is smaller than a preset projection error threshold value, taking the corrected rotation parameter as a target rotation parameter, and taking the corrected translation parameter as a target translation parameter;
and when the projection error is not smaller than the projection error threshold, correcting the corrected camera pose transformation sequence to obtain a new corrected camera pose sequence, and based on the new corrected camera pose sequence, re-executing the step of obtaining a first corrected rotation external parameter in the corrected rotation parameters according to the corrected camera pose transformation sequence, the radar pose transformation sequence and the first incidence relation.
309. And the intelligent vehicle calculates the relation of the rotation residual errors according to the target rotation parameters, the target translation parameters, the radar pose transformation sequence and the corrected camera pose transformation sequence to obtain rotation residual errors, and calculates the relation of the translation residual errors to obtain translation residual errors.
310. And the intelligent vehicle optimizes the matrix corresponding to the rotation parameter and the translation parameter according to the rotation residual and the translation residual to obtain the optimized rotation parameter and translation parameter.
In one example, step 310 may be specifically implemented by the following formula:

$T^{*} = \arg\min_{T} \sum_{i=1}^{N-1} \sum_{j=i+1}^{N} \left\| e_{i,j} \right\|^{2}$

wherein $T$ represents the matrix formed jointly by the rotation parameters and the translation parameters, and the remaining symbols are as defined above.
Therefore, when this embodiment is used to calibrate the camera, a good calibration effect can be achieved without a calibration plate, which improves the calibration accuracy and helps save human resources.
In order to better implement the method, correspondingly, the embodiment of the invention also provides a camera calibration device.
Referring to fig. 4, the apparatus includes:
a sequence obtaining unit 401, configured to obtain an initial camera pose transform sequence of a target camera, where a relative position of the target camera and a target radar is unchanged, obtain an initial radar pose transform sequence of the target radar, where the initial camera pose transform sequence includes rotation transform information and translation transform information of the target camera in a camera coordinate system, and the initial radar pose transform sequence includes rotation transform information and translation transform information of the target radar in a radar coordinate system;
a time synchronization unit 402, configured to perform time synchronization processing on the initial camera pose transformation sequence and the initial radar pose transformation sequence to obtain a camera pose transformation sequence and a radar pose transformation sequence;
a first parameter determining unit 403, configured to determine, according to a first association relation that is satisfied between the rotation transformation information in the camera pose transformation sequence and the radar pose transformation sequence and rotation parameters required for converting the camera coordinate system into the radar coordinate system, a first rotation external parameter of the target camera in the rotation parameters;
a second parameter determining unit 404, configured to determine, according to rotation transformation information and translation transformation information in the radar pose transformation sequence, a scale factor of a target camera, translation transformation information in the camera pose transformation sequence, a second association relation that is satisfied between a rotation parameter and a translation parameter required for converting a camera coordinate system into a radar coordinate system, and the determined first rotation external parameter, a second rotation external parameter in the rotation parameters, the scale factor, and the translation parameter;
a sequence correction unit 405, configured to correct the camera pose transformation sequence according to the scale factor to obtain a corrected camera pose transformation sequence;
and an external reference correcting unit 406, configured to recalculate the rotation parameter and the translation parameter according to the radar pose transformation sequence, the corrected camera pose transformation sequence, and the first association relationship and the second association relationship, so as to obtain a target rotation parameter and a target translation parameter.
In an optional example, the initial camera pose transformation sequence further includes time information corresponding to rotation transformation information and translation transformation information of the target camera, and the initial radar pose transformation sequence further includes time information corresponding to rotation transformation information and translation transformation information of the target radar;
correspondingly, the time synchronization unit 402 is configured to calculate time delays of the initial camera pose sequence and the initial radar pose sequence, respectively;
adjusting the time information of the initial camera pose sequence and the initial radar pose sequence based on the time delays of the initial camera pose sequence and the initial radar pose sequence;
and performing interpolation calculation on the initial camera pose transformation sequence and the initial radar pose transformation sequence according to the adjusted time information to obtain a camera pose transformation sequence and a radar pose transformation sequence, wherein the time information corresponding to the same sequence position in the camera pose transformation sequence and the radar pose transformation sequence is the same.
In an optional example, the time synchronization unit 402 includes an interpolation calculation subunit 407, configured to determine, according to the adjusted time information, camera interpolation point time information in the initial camera pose transformation sequence and radar interpolation point time information in the initial radar pose transformation sequence;
according to the rotation transformation information and the translation transformation information in the initial camera pose transformation sequence, obtaining rotation transformation information and translation transformation information corresponding to the camera interpolation point time information through interpolation calculation, and arranging the corresponding rotation transformation information and translation transformation information according to the sequence of the camera interpolation point time information to obtain a camera pose transformation sequence;
and according to the rotation transformation information and the translation transformation information in the initial radar pose transformation sequence, obtaining rotation transformation information and translation transformation information corresponding to the radar interpolation point time information through interpolation calculation, and arranging the corresponding rotation transformation information and translation transformation information according to the sequence of the radar interpolation point time information to obtain a radar pose transformation sequence.
In an optional example, the first association relation is that the product of the rotation transformation information corresponding to first time information in the radar pose transformation sequence and the rotation parameter is equal to the product of the rotation parameter and the rotation transformation information corresponding to the first time information in the camera pose transformation sequence;

correspondingly, the first parameter determining unit 403 is configured to construct, according to the first association relation, a first equation between the product of the rotation transformation information corresponding to the first time information in the radar pose transformation sequence and the rotation parameter, and the product of the rotation parameter and the rotation transformation information corresponding to the first time information in the camera pose transformation sequence; move all calculation terms on a target side of the first equation to the other side; and replace the value on the target side with a preset residual to obtain the rotation residual relational expression;

and take the second rotation external parameter among the rotation parameters as a known quantity and perform residual calculation according to the rotation residual relational expression to obtain the first rotation external parameter of the target camera among the rotation parameters.
In an optional example, the second association relation is that the difference between the product of the rotation transformation information corresponding to second time information in the radar pose transformation sequence and the translation parameter, and the translation parameter itself, is equal to the product of the scale factor, the rotation parameter and the translation transformation information corresponding to the second time information in the camera pose transformation sequence, minus the translation transformation information corresponding to the second time information in the radar pose transformation sequence;

correspondingly, the second parameter determining unit 404 is configured to construct, according to the second association relation, a second equation between the two sides described above; move all calculation terms on a target side of the second equation to the other side; and replace the value on the target side with a preset residual to obtain the translation residual relational expression;

and perform residual calculation according to the translation residual relational expression and the determined first rotation external parameter to obtain the second rotation external parameter among the rotation parameters, the scale factor and the translation parameter.
In an optional example, after the external reference correcting unit 406 recalculates the rotation parameter and the translation parameter according to the radar pose transformation sequence, the corrected camera pose transformation sequence, the first association relation and the second association relation to obtain a target rotation parameter and a target translation parameter, the camera calibration apparatus further includes a parameter optimizing unit, configured to calculate the rotation residual relation according to the target rotation parameter, the target translation parameter, the radar pose transformation sequence, and the corrected camera pose transformation sequence to obtain a rotation residual;
calculating the translation residual error relation according to the target rotation parameter, the target translation parameter, the radar pose transformation sequence and the corrected camera pose transformation sequence to obtain a translation residual error;
and optimizing a matrix corresponding to the rotation parameter and the translation parameter according to the rotation residual and the translation residual to obtain the optimized rotation parameter and translation parameter.
In an optional example, the initial camera pose transformation sequence is obtained through image data acquired by a camera, and the initial radar pose transformation sequence is obtained through point cloud data acquired by a radar;
correspondingly, the external reference correcting unit 406 includes an external reference correcting subunit 408 and an external reference error determining unit 409, where the external reference correcting subunit 408 is configured to obtain a first corrected rotation external reference in the corrected rotation parameters according to the corrected camera pose transformation sequence, the radar pose transformation sequence, and the first association relation;
obtaining a second corrected rotation external parameter, a corrected scale factor and a corrected translation parameter in the corrected rotation parameters according to the first corrected rotation external parameter, the corrected camera pose transformation sequence, the radar pose transformation sequence and the second incidence relation;
the external reference error determining unit 409 is configured to perform projection conversion between the image data acquired by the camera and the point cloud data acquired by the radar according to the corrected rotation parameter, the corrected scale factor, and the corrected translation parameter, and determine a projection error based on information obtained by the projection conversion;
when the projection error is smaller than a preset projection error threshold value, taking the corrected rotation parameter as a target rotation parameter, and taking the corrected translation parameter as a target translation parameter;
and when the projection error is not smaller than the projection error threshold, correcting the corrected camera pose transformation sequence to obtain a new corrected camera pose sequence, and based on the new corrected camera pose sequence, re-executing the step of obtaining a first corrected rotation external parameter in the corrected rotation parameters according to the corrected camera pose transformation sequence, the radar pose transformation sequence and the first incidence relation.
Therefore, by adopting the device provided by the embodiment of the invention, based on the first incidence relation and the second incidence relation existing between the camera and the radar with relatively unchanged positions, when the camera is calibrated, a good calibration effect can be achieved without using a calibration plate, the calibration accuracy is improved, and the manpower resource is saved.
Accordingly, an embodiment of the present invention further provides an electronic device, as shown in fig. 5, the electronic device may include Radio Frequency (RF) circuit 501, memory 502 including one or more computer-readable storage media, input unit 503, display unit 504, sensor 505, audio circuit 506, Wireless Fidelity (WiFi) module 507, processor 508 including one or more processing cores, and power supply 509. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 5 does not constitute a limitation of the electronic device and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components. Wherein:
the RF circuit 501 may be used for receiving and transmitting signals during information transmission and reception or during a call, and in particular, for receiving downlink information of a base station and then sending the received downlink information to the one or more processors 508 for processing; in addition, data relating to uplink is transmitted to the base station. In general, RF circuit 501 includes, but is not limited to, an antenna, at least one Amplifier, a tuner, one or more oscillators, a Subscriber Identity Module (SIM) card, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, the RF circuitry 501 may also communicate with networks and other devices via wireless communications. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, Short Message Service (SMS), and the like.
The memory 502 may be used to store software programs and modules, and the processor 508 executes various functional applications and data processing by operating the software programs and modules stored in the memory 502. The memory 502 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the electronic device, and the like. Further, the memory 502 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device. Accordingly, the memory 502 may also include a memory controller to provide the processor 508 and the input unit 503 access to the memory 502.
The input unit 503 may be used to receive input numeric or character information and generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control. In particular, in one particular embodiment, the input unit 503 may include a touch-sensitive surface as well as other input devices. The touch-sensitive surface, also referred to as a touch display screen or a touch pad, may collect touch operations by a user (e.g., operations by a user on or near the touch-sensitive surface using a finger, a stylus, or any other suitable object or attachment) thereon or nearby, and drive the corresponding connection device according to a predetermined program. Alternatively, the touch sensitive surface may comprise two parts, a touch detection means and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 508, and can receive and execute commands sent by the processor 508. In addition, touch sensitive surfaces may be implemented using various types of resistive, capacitive, infrared, and surface acoustic waves. The input unit 503 may include other input devices in addition to the touch-sensitive surface. In particular, other input devices may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
The display unit 504 may be used to display information input by or provided to a user and various graphical user interfaces of the electronic device, which may be made up of graphics, text, icons, video, and any combination thereof. The Display unit 504 may include a Display panel, and optionally, the Display panel may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like. Further, the touch-sensitive surface may overlay the display panel, and when a touch operation is detected on or near the touch-sensitive surface, the touch operation is transmitted to the processor 508 to determine the type of touch event, and then the processor 508 provides a corresponding visual output on the display panel according to the type of touch event. Although in FIG. 5 the touch-sensitive surface and the display panel are two separate components to implement input and output functions, in some embodiments the touch-sensitive surface may be integrated with the display panel to implement input and output functions.
The electronic device may also include at least one sensor 505, such as light sensors, motion sensors, and other sensors. In particular, the light sensor may include an ambient light sensor that may adjust the brightness of the display panel according to the brightness of ambient light, and a proximity sensor that may turn off the display panel and/or the backlight when the electronic device is moved to the ear. As one of the motion sensors, the gravity acceleration sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when the mobile phone is stationary, can be used for applications for recognizing the posture of the mobile phone (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor and the like which can be configured for the electronic device, and are not described herein again.
Speakers, microphones in audio circuitry 506 may provide an audio interface between the user and the electronic device. The audio circuit 506 may transmit the electrical signal converted from the received audio data to a speaker, and convert the electrical signal into a sound signal for output; on the other hand, the microphone converts the collected sound signal into an electrical signal, which is received by the audio circuit 506 and converted into audio data, which is then processed by the audio data output processor 508 and then sent to, for example, another electronic device via the RF circuit 501, or the audio data is output to the memory 502 for further processing. The audio circuit 506 may also include an earbud jack to provide communication of a peripheral headset with the electronic device.
WiFi belongs to short-distance wireless transmission technology, and the electronic equipment can help a user to receive and send emails, browse webpages, access streaming media and the like through the WiFi module 507, and provides wireless broadband internet access for the user. Although fig. 5 shows the WiFi module 507, it is understood that it does not belong to the essential constitution of the electronic device, and may be omitted entirely as needed within the scope not changing the essence of the invention.
The processor 508 is a control center of the electronic device, connects various parts of the entire mobile phone by using various interfaces and lines, and performs various functions of the electronic device and processes data by operating or executing software programs and/or modules stored in the memory 502 and calling data stored in the memory 502, thereby integrally monitoring the mobile phone. Optionally, processor 508 may include one or more processing cores; preferably, the processor 508 may integrate an application processor, which primarily handles operating systems, user interfaces, application programs, etc., and a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 508.
The electronic device also includes a power supply 509 (e.g., a battery) for powering the various components, which may preferably be logically coupled to the processor 508 via a power management system to manage charging, discharging, and power consumption management functions via the power management system. The power supply 509 may also include any component such as one or more dc or ac power sources, recharging systems, power failure detection circuitry, power converters or inverters, power status indicators, and the like.
Although not shown, the electronic device may further include a camera, a bluetooth module, and the like, which are not described in detail herein. Specifically, in this embodiment, the processor 508 in the electronic device loads the executable file corresponding to the process of one or more application programs into the memory 502 according to the following instructions, and the processor 508 runs the application programs stored in the memory 502, so as to implement various functions:
acquiring an initial camera pose transformation sequence of a target camera and an initial radar pose transformation sequence of a target radar, wherein the relative positions of the target camera and the target radar are unchanged, the initial camera pose transformation sequence comprises rotation transformation information and translation transformation information of the target camera in a camera coordinate system, and the initial radar pose transformation sequence comprises the rotation transformation information and the translation transformation information of the target radar in a radar coordinate system;
performing time synchronization processing on the initial camera pose transformation sequence and the initial radar pose transformation sequence to obtain a camera pose transformation sequence and a radar pose transformation sequence;
according to a first incidence relation which is satisfied between the camera pose transformation sequence and rotation transformation information in the radar pose transformation sequence and rotation parameters required by the conversion of a camera coordinate system into a radar coordinate system, determining a first rotation external parameter of the target camera in the rotation parameters;
determining a second rotation external parameter, the scale factor and the translation parameter in the rotation parameters according to a second incidence relation which is satisfied between rotation transformation information and translation transformation information in the radar pose transformation sequence, scale factors of a target camera, translation transformation information in the camera pose transformation sequence, rotation parameters and translation parameters required by a camera coordinate system to be converted into a radar coordinate system, and the determined first rotation external parameter;
correcting the camera pose transformation sequence according to the scale factors to obtain a corrected camera pose transformation sequence;
and recalculating rotation parameters and translation parameters according to the radar pose transformation sequence, the corrected camera pose transformation sequence and the first incidence relation and the second incidence relation to obtain target rotation parameters and target translation parameters.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions or by associated hardware controlled by the instructions, which may be stored in a computer readable storage medium and loaded and executed by a processor.
To this end, the present invention provides a storage medium, in which a plurality of instructions are stored, where the instructions can be loaded by a processor to execute the steps in any one of the camera calibration methods provided by the embodiments of the present invention. For example, the instructions may perform the steps of:
acquiring an initial camera pose transformation sequence of a target camera and an initial radar pose transformation sequence of a target radar, wherein the relative positions of the target camera and the target radar are unchanged, the initial camera pose transformation sequence comprises rotation transformation information and translation transformation information of the target camera in a camera coordinate system, and the initial radar pose transformation sequence comprises the rotation transformation information and the translation transformation information of the target radar in a radar coordinate system;
performing time synchronization processing on the initial camera pose transformation sequence and the initial radar pose transformation sequence to obtain a camera pose transformation sequence and a radar pose transformation sequence;
according to a first incidence relation which is satisfied between the camera pose transformation sequence and rotation transformation information in the radar pose transformation sequence and rotation parameters required by the conversion of a camera coordinate system into a radar coordinate system, determining a first rotation external parameter of the target camera in the rotation parameters;
determining a second rotation external parameter, the scale factor and the translation parameter in the rotation parameters according to a second incidence relation which is satisfied between rotation transformation information and translation transformation information in the radar pose transformation sequence, scale factors of a target camera, translation transformation information in the camera pose transformation sequence, rotation parameters and translation parameters required by a camera coordinate system to be converted into a radar coordinate system, and the determined first rotation external parameter;
correcting the camera pose transformation sequence according to the scale factors to obtain a corrected camera pose transformation sequence;
and recalculating rotation parameters and translation parameters according to the radar pose transformation sequence, the corrected camera pose transformation sequence and the first incidence relation and the second incidence relation to obtain target rotation parameters and target translation parameters.
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
Wherein the storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
Since the instructions stored in the storage medium can execute the steps in any camera calibration method provided in the embodiments of the present invention, the beneficial effects that can be achieved by any camera calibration method provided in the embodiments of the present invention can be achieved, which are detailed in the foregoing embodiments and will not be described herein again.
According to an aspect of the application, there is also provided a computer program product or a computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the method provided in the various alternative implementations in the embodiments described above.
An embodiment of the present invention further provides an intelligent vehicle, as shown in fig. 6, which shows a schematic structural diagram of the intelligent vehicle according to the embodiment of the present invention, specifically:
the smart vehicle may include a vehicle body 601, a sensing device 602, an execution device 603, and an in-vehicle processing device 604, and those skilled in the art will appreciate that the electronic device configuration shown in fig. 6 does not constitute a limitation of the smart vehicle, and may include more or fewer components than those shown, or combine certain components, or a different arrangement of components. Wherein:
the vehicle body 601 is a vehicle body structure of the smart vehicle, and may include hardware structures such as a frame, a door, a vehicle body, and an internal seat.
The sensing device 602 is a sensing structure of the smart vehicle for sensing internal state information of the smart vehicle and environmental information in the external driving environment. Specifically, the device can comprise a wheel speed meter, a positioning meter, a tire pressure meter, a target radar, a target camera and the like.
The executing device 603 is a structure for executing a running function of the intelligent vehicle, and the executing device may include a power device such as an engine, a power battery, a transmission structure, a display device such as a display screen and a sound device, a steering device such as a steering wheel, and a tire.
The on-vehicle processing device 604 is the "brain" of the intelligent vehicle, and integrates a control device for controlling vehicle operation parameters such as vehicle speed, direction, acceleration steering, etc., a vehicle running safety monitoring device for monitoring the running state of the unmanned vehicle, an information acquisition device for analyzing information sensed by the sensing device, a planning device for planning a vehicle running route, and the like.
The execution device, the sensing device, and the on-vehicle processing device are all mounted on the vehicle body, and the on-vehicle processing device is connected with the execution device and the sensing device through a bus. The on-vehicle processing device can therefore execute the steps of any camera calibration method provided in the embodiments of the present application, and can accordingly achieve the beneficial effects achievable by any such method. For example, the on-vehicle processing device can execute the following steps:
acquiring an initial camera pose transformation sequence of a target camera and an initial radar pose transformation sequence of a target radar, wherein the relative positions of the target camera and the target radar are unchanged, the initial camera pose transformation sequence comprises rotation transformation information and translation transformation information of the target camera in a camera coordinate system, and the initial radar pose transformation sequence comprises the rotation transformation information and the translation transformation information of the target radar in a radar coordinate system;
performing time synchronization processing on the initial camera pose transformation sequence and the initial radar pose transformation sequence to obtain a camera pose transformation sequence and a radar pose transformation sequence;
determining a first rotation external parameter of the target camera in the rotation parameters according to a first association relation satisfied among the rotation transformation information in the camera pose transformation sequence and the radar pose transformation sequence and the rotation parameter required for converting the camera coordinate system into the radar coordinate system;
determining a second rotation external parameter in the rotation parameters, the scale factor, and the translation parameter according to a second association relation satisfied among the rotation transformation information and the translation transformation information in the radar pose transformation sequence, the scale factor of the target camera, the translation transformation information in the camera pose transformation sequence, and the rotation parameter and the translation parameter required for converting the camera coordinate system into the radar coordinate system, together with the determined first rotation external parameter;
correcting the camera pose transformation sequence according to the scale factor to obtain a corrected camera pose transformation sequence;
and recalculating the rotation parameters and the translation parameters according to the radar pose transformation sequence, the corrected camera pose transformation sequence, the first association relation, and the second association relation to obtain target rotation parameters and target translation parameters.
The above operations can be implemented as described in the foregoing embodiments and are not detailed again here; a schematic code sketch of this flow is given below.
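The following is a minimal, illustrative numpy sketch of these steps. It is a sketch under assumptions, not the patented implementation: solve_rotation and solve_translation_and_scale are hypothetical helper names (sketched alongside claims 4 and 5 below), and the inputs are assumed to be the time-synchronized per-step rotation matrices and translation vectors of the two sensors.

    import numpy as np

    def calibrate(R_l, t_l, R_c, t_c):
        """One pass over the steps above: solve the first relation for the
        rotation, the second for scale and translation, rescale the camera
        translations, then recompute both parameters on the corrected
        sequence (the rotations are unaffected by the scale correction)."""
        R1 = solve_rotation(R_l, R_c)                          # first association relation
        _, s = solve_translation_and_scale(R_l, t_l, t_c, R1)  # second association relation
        t_c_corr = [s * tc for tc in t_c]      # scale-correct the camera translations
        R = solve_rotation(R_l, R_c)           # recompute on the corrected sequence
        t, _ = solve_translation_and_scale(R_l, t_l, t_c_corr, R)
        return R, t                            # target rotation / translation parameters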
The camera calibration method, the camera calibration device, the intelligent vehicle, and the storage medium provided by the embodiments of the present invention are described in detail above. Specific examples are used herein to explain the principle and implementation of the present invention, and the description of the embodiments is only intended to help in understanding the method and its core idea. Meanwhile, those skilled in the art may make changes to the specific embodiments and the application scope according to the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (10)

1. A camera calibration method is characterized by comprising the following steps:
acquiring an initial camera pose transformation sequence of a target camera and an initial radar pose transformation sequence of a target radar, wherein the relative positions of the target camera and the target radar are unchanged, the initial camera pose transformation sequence comprises rotation transformation information and translation transformation information of the target camera in a camera coordinate system, and the initial radar pose transformation sequence comprises the rotation transformation information and the translation transformation information of the target radar in a radar coordinate system;
performing time synchronization processing on the initial camera pose transformation sequence and the initial radar pose transformation sequence to obtain a camera pose transformation sequence and a radar pose transformation sequence;
determining a first rotation external parameter of the target camera in the rotation parameters according to a first association relation, $R^{l}_{k,k+1} R^{lc} = R^{lc} R^{c}_{k,k+1}$, satisfied among the rotation transformation information in the camera pose transformation sequence and the radar pose transformation sequence and the rotation parameter required for converting the camera coordinate system into the radar coordinate system, wherein $R^{l}_{k,k+1}$ represents the rotation transformation information of the target radar between time $k$ and time $k+1$ in the radar coordinate system, obtained from the radar pose transformation sequence, $R^{lc}$ represents the rotation parameter from the camera coordinate system to the radar coordinate system, and $R^{c}_{k,k+1}$ represents the rotation transformation information of the target camera between time $k$ and time $k+1$ in the camera coordinate system, obtained from the camera pose transformation sequence;
determining a second rotation external parameter in the rotation parameters, the scale factor, and the translation parameter according to a second association relation, $(R^{l}_{k,k+1} - I)\,t^{lc} = s\,R^{lc}\,t^{c}_{k,k+1} - t^{l}_{k,k+1}$, satisfied among the rotation transformation information and the translation transformation information in the radar pose transformation sequence, the scale factor of the target camera, the translation transformation information in the camera pose transformation sequence, and the rotation parameter and the translation parameter required for converting the camera coordinate system into the radar coordinate system, together with the determined first rotation external parameter, wherein $t^{lc}$ represents the translation parameter, $s$ represents the scale factor, $I$ represents the identity matrix, $t^{c}_{k,k+1}$ represents the translation transformation information of the target camera between time $k$ and time $k+1$ in the camera coordinate system, extracted from the camera pose transformation sequence, and $t^{l}_{k,k+1}$ represents the translation transformation information of the target radar between time $k$ and time $k+1$ in the radar coordinate system, extracted from the radar pose transformation sequence;
correcting the camera pose transformation sequence according to the scale factor to obtain a corrected camera pose transformation sequence;
and recalculating the rotation parameters and the translation parameters according to the radar pose transformation sequence, the corrected camera pose transformation sequence, the first association relation, and the second association relation to obtain target rotation parameters and target translation parameters.
2. The method according to claim 1, wherein the initial camera pose transformation sequence further comprises time information corresponding to rotation transformation information and translation transformation information of the target camera, and the initial radar pose transformation sequence further comprises time information corresponding to rotation transformation information and translation transformation information of the target radar;
the time synchronization processing is performed on the initial camera pose transformation sequence and the initial radar pose transformation sequence to obtain a camera pose transformation sequence and a radar pose transformation sequence, and the method comprises the following steps:
calculating time delays of the initial camera pose sequence and the initial radar pose sequence respectively;
adjusting the time information of the initial camera pose sequence and the initial radar pose sequence based on the time delays of the initial camera pose sequence and the initial radar pose sequence;
and performing interpolation calculation on the initial camera pose transformation sequence and the initial radar pose transformation sequence according to the adjusted time information to obtain a camera pose transformation sequence and a radar pose transformation sequence, wherein the time information corresponding to the same sequence position in the camera pose transformation sequence and the radar pose transformation sequence is the same.
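The patent does not fix a particular estimator for the time delays of claim 2; one common choice, sketched here purely as an assumption, is to cross-correlate the angular-speed magnitude profiles of the two pose streams on a shared uniform grid (numpy assumed; all names are illustrative).

    import numpy as np

    def estimate_delay(t_a, w_a, t_b, w_b, dt=0.005):
        """Cross-correlate two angular-speed profiles sampled at times t_a
        and t_b; a positive result means stream a is delayed relative to
        stream b by that many seconds."""
        lo, hi = max(t_a[0], t_b[0]), min(t_a[-1], t_b[-1])
        grid = np.arange(lo, hi, dt)                  # shared uniform grid
        a = np.interp(grid, t_a, w_a); a -= a.mean()
        b = np.interp(grid, t_b, w_b); b -= b.mean()
        lag = np.argmax(np.correlate(a, b, mode="full")) - (len(grid) - 1)
        return lag * dt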
3. The method of claim 2, wherein the interpolating the initial camera pose transformation sequence and the initial radar pose transformation sequence according to the adjusted time information to obtain a camera pose transformation sequence and a radar pose transformation sequence comprises:
according to the adjusted time information, determining camera interpolation point time information in the initial camera pose transformation sequence and radar interpolation point time information in the initial radar pose transformation sequence;
according to the rotation transformation information and the translation transformation information in the initial camera pose transformation sequence, obtaining rotation transformation information and translation transformation information corresponding to the camera interpolation point time information through interpolation calculation, and arranging the corresponding rotation transformation information and translation transformation information according to the sequence of the camera interpolation point time information to obtain a camera pose transformation sequence;
and according to the rotation transformation information and the translation transformation information in the initial radar pose transformation sequence, obtaining rotation transformation information and translation transformation information corresponding to the radar interpolation point time information through interpolation calculation, and arranging the corresponding rotation transformation information and translation transformation information according to the sequence of the radar interpolation point time information to obtain a radar pose transformation sequence.
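A minimal scipy-based sketch of this interpolation step: rotations are interpolated with spherical linear interpolation (SLERP) and translations linearly, at the common interpolation-point timestamps. The function name and argument layout are illustrative assumptions.

    import numpy as np
    from scipy.spatial.transform import Rotation, Slerp

    def resample_poses(times, rotations, translations, query_times):
        """`rotations` is a scipy Rotation holding one rotation per
        timestamp; `translations` is an (N, 3) array; `query_times` must
        lie within [times[0], times[-1]]."""
        R_q = Slerp(times, rotations)(query_times)       # SLERP for rotations
        t_q = np.column_stack([np.interp(query_times, times, translations[:, k])
                               for k in range(3)])       # linear for translations
        return R_q, t_q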
4. The method according to claim 1, wherein the first association relation specifies that the product of the rotation transformation information corresponding to first time information in the radar pose transformation sequence and the rotation parameter is equal to the product of the rotation parameter and the rotation transformation information corresponding to the first time information in the camera pose transformation sequence;
and the determining a first rotation external parameter of the target camera in the rotation parameters according to the first association relation, $R^{l}_{k,k+1} R^{lc} = R^{lc} R^{c}_{k,k+1}$, satisfied among the rotation transformation information in the camera pose transformation sequence and the radar pose transformation sequence and the rotation parameter required for converting the camera coordinate system into the radar coordinate system comprises:
constructing, according to the first association relation, a first equation between the product of the rotation transformation information corresponding to the first time information in the radar pose transformation sequence and the rotation parameter, and the product of the rotation parameter and the rotation transformation information corresponding to the first time information in the camera pose transformation sequence, moving all calculation terms on a target side of the first equation to the other side, and replacing the value on the target side with a preset residual to obtain a rotation residual relational expression;
and taking the second rotation external parameter in the rotation parameters as a fixed quantity, and performing residual calculation according to the rotation residual relational expression to obtain the first rotation external parameter of the target camera in the rotation parameters.
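A hedged sketch of one way to carry out this residual calculation, assuming numpy/scipy: the axis-alignment property implied by the first relation gives a closed-form starting value (a Kabsch/SVD fit over the rotation vectors of matched motion pairs, a standard technique not named in the claim), after which the rotation residual is minimized by least squares. All names are illustrative.

    import numpy as np
    from scipy.optimize import least_squares
    from scipy.spatial.transform import Rotation

    def solve_rotation(R_l, R_c):
        """Solve R in R_l[i] @ R == R @ R_c[i] (the first association
        relation); R_l, R_c are lists of 3x3 rotation matrices."""
        # Similar rotations share the same angle, and each radar rotation
        # axis is the camera axis mapped by R, so the rotation vectors
        # satisfy a_l = R a_c; a Kabsch/SVD fit gives the initial value.
        A = np.stack([Rotation.from_matrix(m).as_rotvec() for m in R_l])
        B = np.stack([Rotation.from_matrix(m).as_rotvec() for m in R_c])
        U, _, Vt = np.linalg.svd(B.T @ A)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        x0 = Rotation.from_matrix(Vt.T @ D @ U.T).as_rotvec()

        def residual(x):            # the rotation residual relational expression
            R = Rotation.from_rotvec(x).as_matrix()
            return np.concatenate([(Rl @ R - R @ Rc).ravel()
                                   for Rl, Rc in zip(R_l, R_c)])

        return Rotation.from_rotvec(least_squares(residual, x0).x).as_matrix()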
5. The method according to claim 4, wherein the second association relation specifies that the difference between the product of the rotation transformation information corresponding to second time information in the radar pose transformation sequence and the translation parameter, and the translation parameter, is equal to the difference between the product of the scale factor, the rotation parameter, and the translation transformation information corresponding to the second time information in the camera pose transformation sequence, and the translation transformation information corresponding to the second time information in the radar pose transformation sequence;
and the determining a second rotation external parameter in the rotation parameters, the scale factor, and the translation parameter according to the second association relation, $(R^{l}_{k,k+1} - I)\,t^{lc} = s\,R^{lc}\,t^{c}_{k,k+1} - t^{l}_{k,k+1}$, satisfied among the rotation transformation information and the translation transformation information in the radar pose transformation sequence, the scale factor of the target camera, the translation transformation information in the camera pose transformation sequence, and the rotation parameter and the translation parameter required for converting the camera coordinate system into the radar coordinate system, together with the determined first rotation external parameter comprises:
constructing, according to the second association relation, a second equation between the difference between the product of the rotation transformation information corresponding to the second time information in the radar pose transformation sequence and the translation parameter and the translation parameter, and the difference between the product of the scale factor, the rotation parameter, and the translation transformation information corresponding to the second time information in the camera pose transformation sequence and the translation transformation information corresponding to the second time information in the radar pose transformation sequence, moving all calculation terms on a target side of the second equation to the other side, and replacing the value on the target side with a preset residual to obtain a translation residual relational expression;
and performing residual calculation according to the translation residual relational expression and the determined first rotation external parameter to obtain the second rotation external parameter in the rotation parameters, the scale factor, and the translation parameter.
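Because the translation residual of this claim is linear in the translation parameter and the scale factor, both can be recovered in one stacked least-squares solve once the rotation is known. A minimal numpy sketch under these assumptions (names illustrative):

    import numpy as np

    def solve_translation_and_scale(R_l, t_l, t_c, R):
        """Solve (R_l[i] - I) t = s * R @ t_c[i] - t_l[i] (the second
        association relation) for the translation t and scale factor s."""
        rows = [np.hstack([Rl - np.eye(3), -(R @ tc)[:, None]])
                for Rl, tc in zip(R_l, t_c)]          # unknown x = [t, s]
        rhs = [-tl for tl in t_l]
        x, *_ = np.linalg.lstsq(np.vstack(rows), np.concatenate(rhs), rcond=None)
        return x[:3], float(x[3])         # translation parameter, scale factor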
6. The method of claim 5, further comprising, after the recalculating rotation parameters and translation parameters according to the radar pose transformation sequence, the corrected camera pose transformation sequence, the first association relation, and the second association relation to obtain target rotation parameters and target translation parameters:
calculating the rotation residual relational expression according to the target rotation parameters, the target translation parameters, the radar pose transformation sequence, and the corrected camera pose transformation sequence to obtain a rotation residual;
calculating the translation residual relational expression according to the target rotation parameters, the target translation parameters, the radar pose transformation sequence, and the corrected camera pose transformation sequence to obtain a translation residual;
and optimizing a matrix corresponding to the rotation parameter and the translation parameter according to the rotation residual and the translation residual to obtain the optimized rotation parameter and translation parameter.
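One way to realize this joint optimization, sketched as an assumption with scipy: both residual relational expressions are stacked into a single nonlinear least-squares problem over the rotation (as a rotation vector), the translation, and the scale factor, starting from the recomputed target parameters.

    import numpy as np
    from scipy.optimize import least_squares
    from scipy.spatial.transform import Rotation

    def refine(R0, t0, s0, R_l, t_l, R_c, t_c):
        """Jointly minimize the rotation residual R_l R - R R_c and the
        translation residual (R_l - I) t - (s R t_c - t_l)."""
        x0 = np.concatenate([Rotation.from_matrix(R0).as_rotvec(), t0, [s0]])

        def residual(x):
            R = Rotation.from_rotvec(x[:3]).as_matrix()
            t, s = x[3:6], x[6]
            out = []
            for Rl, tl, Rc, tc in zip(R_l, t_l, R_c, t_c):
                out.append((Rl @ R - R @ Rc).ravel())                   # rotation residual
                out.append((Rl - np.eye(3)) @ t - (s * (R @ tc) - tl))  # translation residual
            return np.concatenate(out)

        x = least_squares(residual, x0).x
        return Rotation.from_rotvec(x[:3]).as_matrix(), x[3:6], float(x[6])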
7. The method of claim 1, wherein the initial camera pose transformation sequence is derived from image data acquired by a camera, and the initial radar pose transformation sequence is derived from point cloud data acquired by a radar;
the recalculating rotation parameters and translation parameters according to the radar pose transformation sequence, the corrected camera pose transformation sequence, the first association relation, and the second association relation to obtain target rotation parameters and target translation parameters comprises:
obtaining a first corrected rotation external parameter in the corrected rotation parameters according to the corrected camera pose transformation sequence, the radar pose transformation sequence, and the first association relation;
obtaining a second corrected rotation external parameter in the corrected rotation parameters, a corrected scale factor, and a corrected translation parameter according to the first corrected rotation external parameter, the corrected camera pose transformation sequence, the radar pose transformation sequence, and the second association relation;
performing projection conversion between the image data acquired by the camera and the point cloud data acquired by the radar according to the corrected rotation parameter, the corrected scale factor and the corrected translation parameter, and determining a projection error based on information obtained by the projection conversion;
when the projection error is smaller than a preset projection error threshold value, taking the corrected rotation parameter as a target rotation parameter, and taking the corrected translation parameter as a target translation parameter;
and when the projection error is not smaller than the projection error threshold, correcting the corrected camera pose transformation sequence to obtain a new corrected camera pose sequence, and re-executing, based on the new corrected camera pose sequence, the step of obtaining a first corrected rotation external parameter in the corrected rotation parameters according to the corrected camera pose transformation sequence, the radar pose transformation sequence, and the first association relation.
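A minimal numpy sketch of the projection-error check, under the assumption that matched radar-point/image-pixel pairs are available from the projection conversion and that R, t map camera coordinates into radar coordinates as in the claims (so the inverse transform is applied before projecting with the camera intrinsic matrix K; all names are illustrative):

    import numpy as np

    def projection_error(points_l, pixels, K, R, t):
        """Mean reprojection error of matched radar points (N, 3) against
        image pixels (N, 2)."""
        P_c = (points_l - t) @ R               # x_c = R^T (x_l - t), row-wise
        uv = P_c @ K.T                         # project with the intrinsics
        uv = uv[:, :2] / uv[:, 2:3]            # perspective division
        return float(np.mean(np.linalg.norm(uv - pixels, axis=1)))

    # The loop of claim 7 then compares this error with the preset threshold
    # and, if it is not smaller, corrects the camera pose sequence again and
    # re-runs the recalculation.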
8. A camera calibration device is characterized by comprising:
a sequence acquisition unit, configured to acquire an initial camera pose transformation sequence of a target camera and an initial radar pose transformation sequence of a target radar, wherein the relative positions of the target camera and the target radar are unchanged, the initial camera pose transformation sequence comprises rotation transformation information and translation transformation information of the target camera in a camera coordinate system, and the initial radar pose transformation sequence comprises rotation transformation information and translation transformation information of the target radar in a radar coordinate system;
a time synchronization unit, configured to perform time synchronization processing on the initial camera pose transformation sequence and the initial radar pose transformation sequence to obtain a camera pose transformation sequence and a radar pose transformation sequence;
a first parameter determination unit, configured to determine a first rotation external parameter of the target camera in the rotation parameters according to a first association relation, $R^{l}_{k,k+1} R^{lc} = R^{lc} R^{c}_{k,k+1}$, satisfied among the rotation transformation information in the camera pose transformation sequence and the radar pose transformation sequence and the rotation parameter required for converting the camera coordinate system into the radar coordinate system, wherein $R^{l}_{k,k+1}$ represents the rotation transformation information of the target radar between time $k$ and time $k+1$ in the radar coordinate system, obtained from the radar pose transformation sequence, $R^{lc}$ represents the rotation parameter from the camera coordinate system to the radar coordinate system, and $R^{c}_{k,k+1}$ represents the rotation transformation information of the target camera between time $k$ and time $k+1$ in the camera coordinate system, obtained from the camera pose transformation sequence;
a second parameter determination unit, configured to determine a second rotation external parameter in the rotation parameters, the scale factor, and the translation parameter according to a second association relation, $(R^{l}_{k,k+1} - I)\,t^{lc} = s\,R^{lc}\,t^{c}_{k,k+1} - t^{l}_{k,k+1}$, satisfied among the rotation transformation information and the translation transformation information in the radar pose transformation sequence, the scale factor of the target camera, the translation transformation information in the camera pose transformation sequence, and the rotation parameter and the translation parameter required for converting the camera coordinate system into the radar coordinate system, together with the determined first rotation external parameter, wherein $t^{lc}$ represents the translation parameter, $s$ represents the scale factor, $I$ represents the identity matrix, $t^{c}_{k,k+1}$ represents the translation transformation information of the target camera between time $k$ and time $k+1$ in the camera coordinate system, extracted from the camera pose transformation sequence, and $t^{l}_{k,k+1}$ represents the translation transformation information of the target radar between time $k$ and time $k+1$ in the radar coordinate system, extracted from the radar pose transformation sequence;
a sequence correction unit, configured to correct the camera pose transformation sequence according to the scale factor to obtain a corrected camera pose transformation sequence;
and an external parameter correction unit, configured to recalculate rotation parameters and translation parameters according to the radar pose transformation sequence, the corrected camera pose transformation sequence, the first association relation, and the second association relation to obtain target rotation parameters and target translation parameters.
9. An intelligent vehicle, comprising a processor, a memory, and the target camera and the target radar according to any one of claims 1 to 7, wherein the processor is configured to implement the camera calibration method according to any one of claims 1 to 7 when executing a computer program stored in the memory.
10. A storage medium storing instructions adapted to be loaded by a processor to perform the steps of the camera calibration method according to any one of claims 1 to 7.
CN202110000597.XA 2021-01-04 2021-01-04 Camera calibration method and device, intelligent vehicle and storage medium Active CN112330756B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110000597.XA CN112330756B (en) 2021-01-04 2021-01-04 Camera calibration method and device, intelligent vehicle and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110000597.XA CN112330756B (en) 2021-01-04 2021-01-04 Camera calibration method and device, intelligent vehicle and storage medium

Publications (2)

Publication Number Publication Date
CN112330756A CN112330756A (en) 2021-02-05
CN112330756B CN112330756B (en) 2021-03-23

Family

ID=74302386

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110000597.XA Active CN112330756B (en) 2021-01-04 2021-01-04 Camera calibration method and device, intelligent vehicle and storage medium

Country Status (1)

Country Link
CN (1) CN112330756B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113724303B (en) * 2021-09-07 2024-05-10 广州文远知行科技有限公司 Point cloud and image matching method and device, electronic equipment and storage medium
CN114387347B (en) * 2021-10-26 2023-09-19 浙江视觉智能创新中心有限公司 Method, device, electronic equipment and medium for determining external parameter calibration
CN114037814B (en) * 2021-11-11 2022-12-23 北京百度网讯科技有限公司 Data processing method, device, electronic equipment and medium
CN114049404B (en) * 2022-01-12 2022-04-05 深圳佑驾创新科技有限公司 Method and device for calibrating internal phase and external phase of vehicle
CN114782556B (en) * 2022-06-20 2022-09-09 季华实验室 Camera and laser radar registration method and system and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101882313B (en) * 2010-07-14 2011-12-21 中国人民解放军国防科学技术大学 Calibration method of correlation between single line laser radar and CCD (Charge Coupled Device) camera
CN105976353B (en) * 2016-04-14 2020-01-24 南京理工大学 Spatial non-cooperative target pose estimation method based on model and point cloud global matching
CN106997614B (en) * 2017-03-17 2021-07-20 浙江光珀智能科技有限公司 Large-scale scene 3D modeling method and device based on depth camera
CN110456330B (en) * 2019-08-27 2021-07-09 中国人民解放军国防科技大学 Method and system for automatically calibrating external parameter without target between camera and laser radar

Also Published As

Publication number Publication date
CN112330756A (en) 2021-02-05


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20211209

Address after: 215000 room 808, 8 / F, building 9a, launch area of Yangtze River Delta International R & D community, No. 286, qinglonggang Road, high speed rail new town, Xiangcheng District, Suzhou City, Jiangsu Province

Patentee after: Tianyi Transportation Technology Co.,Ltd.

Address before: 2nd floor, building A3, Hongfeng science and Technology Park, Nanjing Economic and Technological Development Zone, Nanjing, Jiangsu Province 210033

Patentee before: CIIC Technology Co.,Ltd.

EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20210205

Assignee: Zhongzhixing (Shanghai) Transportation Technology Co.,Ltd.

Assignor: Tianyi Transportation Technology Co.,Ltd.

Contract record no.: X2022980005387

Denomination of invention: Camera calibration method, device, intelligent vehicle and storage medium

Granted publication date: 20210323

License type: Common License

Record date: 20220518

EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20210205

Assignee: CIIC Technology Co.,Ltd.

Assignor: Tianyi Transportation Technology Co.,Ltd.

Contract record no.: X2022980005922

Denomination of invention: Camera calibration method, device, intelligent vehicle and storage medium

Granted publication date: 20210323

License type: Common License

Record date: 20220524