Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention. It is obvious that the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments that can be derived by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
The embodiment of the invention provides a camera calibration method and device, an intelligent vehicle and a storage medium. Specifically, the embodiment of the present invention provides a camera calibration method suitable for a camera calibration device, which may be integrated in an electronic device.
The electronic device may be a terminal such as an intelligent vehicle, a smart phone, a smart watch, a tablet computer, or a notebook computer. An intelligent vehicle is an integrated system combining functions such as environmental perception, planning and decision-making, and multi-level driving assistance; it concentrates technologies including computing, modern sensing, information fusion, communication, artificial intelligence, and automatic control, and is a typical high-tech complex. Current research on intelligent vehicles mainly aims to improve the safety and comfort of automobiles and to provide an excellent human-vehicle interaction interface. An unmanned vehicle is a kind of intelligent vehicle, also called a wheeled mobile robot, which relies mainly on an intelligent driver inside the vehicle, chiefly a computer system, to achieve unmanned driving. In so-called unmanned driving, vehicle-mounted sensors sense the surrounding environment of the vehicle, and the steering and speed of the vehicle are controlled according to the road, vehicle position, and obstacle information thus obtained, so that the vehicle travels on the road safely and reliably.
The electronic device may also be a device such as a server. The server may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, a CDN, and big data and artificial intelligence platforms, but is not limited thereto.
The camera calibration method of the embodiment of the invention can be realized by a server, and can also be realized by a terminal and the server together.
The method is described below by taking an example that the terminal and the server jointly implement the camera calibration method, where the terminal may include a moving device such as an intelligent vehicle.
As shown in fig. 1, a camera calibration system provided by the embodiment of the present invention includes a motion device 10, a server 20, and the like; the moving device 10 and the server 20 may be connected through a network, for example, a wired or wireless network connection, and the like, wherein the moving device 10 may exist as a terminal loaded with a target camera and a target radar.
The moving device 10 may receive image data acquired by a target camera mounted on the moving device 10 and point cloud data acquired by a target radar mounted on the moving device 10.
After receiving the image data and the point cloud data, the moving device 10 may directly send the image data and the point cloud data to the server 20, or may store them in a storage space of the moving device 10 and send them to the server 20 when a data acquisition request from the server 20 is received.
After receiving the image data and the point cloud data sent by the moving device 10, the server 20 may obtain an initial camera pose transformation sequence of the target camera and an initial radar pose transformation sequence of the target radar from the image data and the point cloud data, where the relative positions of the target camera and the target radar are unchanged, the initial camera pose transformation sequence includes rotation transformation information and translation transformation information of the target camera in a camera coordinate system, and the initial radar pose transformation sequence includes rotation transformation information and translation transformation information of the target radar in a radar coordinate system. The server 20 may then perform time synchronization processing on the initial camera pose transformation sequence and the initial radar pose transformation sequence to obtain a camera pose transformation sequence and a radar pose transformation sequence; determine, according to a first association relation satisfied between the rotation transformation information in the camera pose transformation sequence and the radar pose transformation sequence and the rotation parameter required for converting the camera coordinate system into the radar coordinate system, a first rotation external parameter of the target camera in the rotation parameter; determine, according to a second association relation satisfied among the rotation transformation information and the translation transformation information in the radar pose transformation sequence, the scale factor of the target camera, the translation transformation information in the camera pose transformation sequence, and the rotation parameter and the translation parameter required for converting the camera coordinate system into the radar coordinate system, together with the determined first rotation external parameter, a second rotation external parameter in the rotation parameter, the scale factor, and the translation parameter; correct the camera pose transformation sequence according to the scale factor to obtain a corrected camera pose transformation sequence; and recalculate the rotation parameter and the translation parameter according to the radar pose transformation sequence, the corrected camera pose transformation sequence, the first association relation, and the second association relation to obtain a target rotation parameter and a target translation parameter.
After obtaining the target rotation parameter and the target translation parameter, the server 20 may continue to perform operations such as pose estimation of the moving device 10 according to the target rotation parameter and the target translation parameter, or may send the target rotation parameter and the target translation parameter to the moving device 10 so that the moving device 10 performs further processing according to them.
The following are detailed below. It should be noted that the following description of the embodiments is not intended to limit the preferred order of the embodiments.
The embodiments of the present invention will be described from the perspective of a camera calibration device, which may be specifically integrated in a server or a terminal.
As shown in fig. 2, a specific process of the camera calibration method of the present embodiment may be as follows:
201. the method comprises the steps of obtaining an initial camera pose transformation sequence of a target camera and obtaining an initial radar pose transformation sequence of a target radar, wherein the relative positions of the target camera and the target radar are unchanged, the initial camera pose transformation sequence comprises rotation transformation information and translation transformation information of the target camera in a camera coordinate system, and the initial radar pose transformation sequence comprises the rotation transformation information and the translation transformation information of the target radar in a radar coordinate system.
It should be noted that the target camera in this embodiment includes, but is not limited to, an ordinary camera, a video camera, and other image acquisition devices, and the target radar includes, but is not limited to, a millimeter wave radar, a laser radar, and other detection devices.
Here, "the relative positions of the target camera and the target radar are unchanged" can be understood to mean that, over the duration covered by the initial camera pose transformation sequence and the initial radar pose transformation sequence, the target camera and the target radar remain relatively static; for example, the relative distance between the target camera and the target radar does not change. The relative distance may be calculated based on specific points on the target camera and the target radar, and the positions of these specific points are not limited; for example, the relative distance may be the distance from the center point of the lens of the target camera to the center of gravity of the radar. It can be understood that if the camera rotates, the center of the lens moves, which changes the relative distance. Therefore, the requirement in this embodiment that the relative positions of the target camera and the target radar remain unchanged, which is likewise limited to the duration of the initial camera pose transformation sequence and the initial radar pose transformation sequence, also means that the camera and the radar do not move independently of each other.
For example, in one example, the target camera and the target radar may be both located on the smart vehicle, and a fixture may be provided on the smart vehicle for each of the target camera and the target radar, and the target camera and the target radar may be fixed to the smart vehicle by the fixture, and the target camera and the target radar may be movable themselves, such as rotating, but may remain relatively stationary while camera calibration is performed.
The initial camera pose transformation sequence may be derived from image data acquired by the target camera and may include pose transformation information of the target camera between adjacent times. The initial radar pose transformation sequence may be derived from point cloud data collected by the target radar, and may include pose transformation information for the target radar between adjacent times.
In this embodiment, the camera pose represents the position and the posture of the target camera in the camera coordinate system, and the radar pose represents the position and the posture of the target radar in the radar coordinate system.
For example, in one example, the initial camera pose transformation sequence may be in the form {P_C(1), P_C(2), P_C(3), ...}, where P_C(i) denotes the pose state information of the target camera in the camera coordinate system at the i-th acquisition time (i = 1, 2, 3, ...). In this embodiment, the pose state information may include position information and posture information of the target camera in the camera coordinate system.
Pose transformation information of the target camera between adjacent times may be determined from the pose state information in the initial camera pose transformation sequence, the pose transformation information including rotation transformation information and translation transformation information.
For example, in another example, the initial camera pose transformation sequence may also be in the form {T_C(1, 2), T_C(2, 3), ...}, where T_C(i, i+1) denotes the pose transformation information of the target camera between the i-th acquisition time and the (i+1)-th acquisition time. The pose transformation information of the target camera between adjacent times may be determined from the image data collected by the target camera; for example, T_C(i, i+1) is determined from the camera pose state information at the i-th acquisition time and at the (i+1)-th acquisition time.
Where the rotation transformation information refers to information for reflecting a change in the pose of the target camera in the camera coordinate system, in one example, the rotation transformation information may be pose change information of the target camera between adjacent acquisition time instants in the camera coordinate system.
The translation transformation information refers to information for reflecting a position change of the target camera in the camera coordinate system, and in one example, the translation transformation information may be position change information of the target camera between adjacent acquisition time instants in the camera coordinate system.
It can be understood that the initial camera pose transformation sequence needs to be obtained by processing image data acquired by the target camera, and in an example, the step of "acquiring an initial camera pose transformation sequence of the target camera" may specifically include:
acquiring image data acquired by a target camera, wherein the image data comprises at least two images and an acquisition timestamp of each image;
detecting feature points in each image to obtain the feature points of each image;
carrying out feature point matching between the images and determining target images with the same feature points;
obtaining an image sequence according to the target image and the acquisition time stamp of the target image;
and carrying out camera pose extraction operation on the image sequence to obtain an initial camera pose transformation sequence.
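As a rough illustration of the steps above, the following sketch keeps only the images that share feature points with another image (the "target images") and orders them by acquisition timestamp. It rests on a simplifying assumption: each image is reduced to a timestamp plus a map from feature-point IDs to pixel coordinates, with matching already expressed through shared IDs; a real pipeline would detect and match descriptors with an algorithm such as FAST or SURF.

```python
def build_image_sequence(images):
    """images: list of dicts with 'timestamp' and 'features' (id -> (x, y)).
    Returns the target images (those sharing feature points with another
    image) ordered by acquisition timestamp."""
    targets = []
    for i, img in enumerate(images):
        # An image is a target image if it shares at least one feature
        # point with some other image in the data set.
        shared = any(
            img["features"].keys() & other["features"].keys()
            for j, other in enumerate(images) if j != i
        )
        if shared:
            targets.append(img)
    # Order the target images by their acquisition timestamps.
    return sorted(targets, key=lambda img: img["timestamp"])

imgs = [
    {"timestamp": 0.2, "features": {1: (10, 12), 2: (40, 41)}},
    {"timestamp": 0.1, "features": {2: (42, 40), 3: (7, 8)}},
    {"timestamp": 0.3, "features": {9: (1, 1)}},   # shares no feature points
]
seq = build_image_sequence(imgs)
```

The resulting sequence would then be passed to the camera pose extraction step.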
In this example, the acquisition timestamp of each image includes a time identifier used to determine the acquisition time of that image, and the target images need to be processed according to the acquisition timestamps to generate the image sequence.
A feature point is a point in the image data acquired by the target camera that has a distinctive property: it can be identified in an identical, or at least very similar, invariant form in other images containing the same scene or target. It will be appreciated that a feature point need not be a single point; it may also comprise a set of local information, and in many cases is itself a small image region.
In the step of detecting feature points in each image to obtain the feature points of each image, any algorithm capable of extracting feature points from image data may be used, for example the Features from Accelerated Segment Test (FAST) algorithm or the Speeded-Up Robust Features (SURF) algorithm.
It can be understood that the initial radar pose transformation sequence needs to be obtained by processing point cloud data acquired by the target radar, and in an example, the step of "acquiring the initial radar pose transformation sequence of the target radar" may specifically include:
acquiring a plurality of point cloud data acquired by a target radar and an acquisition timestamp of each point cloud data;
carrying out point cloud registration processing on the point cloud data to obtain a registered point cloud data set, wherein any point cloud data in the registered point cloud data set and other point cloud data are successfully registered;
and obtaining an initial radar pose transformation sequence according to the point cloud data in the registration point cloud data set and the acquisition time stamp of the point cloud data.
In this embodiment, the point cloud data refers to a set of vectors in a radar coordinate system.
In this example, the acquisition timestamp of each point cloud data may include a time identifier used to determine the acquisition time of each point cloud data, and the point cloud data after point cloud registration needs to be processed according to the acquisition timestamp to generate an initial radar pose transformation sequence.
In the step of performing point cloud registration processing on the point cloud data to obtain the registered point cloud data set, the registration may be achieved by Iterative Closest Point (ICP) registration or the Normal Distributions Transform (NDT). The ICP algorithm is commonly used for point cloud matching in laser-based simultaneous localization and mapping (SLAM). It is essentially an optimal registration method based on least squares: the two points closest to each other are considered to correspond, corresponding point pairs are repeatedly selected, and the optimal rigid-body transformation is recomputed until the convergence accuracy requirement for correct registration is met.
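The repeated "pair closest points, then solve the optimal rigid-body transformation" loop of ICP can be sketched in two dimensions as follows. This is a minimal pure-Python illustration, not the 3-D variant used in laser SLAM; the point sets, the fixed iteration count, and the closed-form 2-D least-squares step are all simplifications.

```python
import math

def closest_pairs(src, dst):
    # For each source point, pick the closest destination point
    # ("the two points closest to each other are considered to correspond").
    return [(p, min(dst, key=lambda d: (d[0]-p[0])**2 + (d[1]-p[1])**2))
            for p in src]

def best_rigid_transform(pairs):
    # Least-squares rotation angle + translation for matched 2-D pairs.
    n = len(pairs)
    pcx = sum(p[0] for p, _ in pairs) / n
    pcy = sum(p[1] for p, _ in pairs) / n
    qcx = sum(q[0] for _, q in pairs) / n
    qcy = sum(q[1] for _, q in pairs) / n
    s_cos = s_sin = 0.0
    for (px, py), (qx, qy) in pairs:
        ax, ay = px - pcx, py - pcy
        bx, by = qx - qcx, qy - qcy
        s_cos += ax * bx + ay * by
        s_sin += ax * by - ay * bx
    theta = math.atan2(s_sin, s_cos)
    c, s = math.cos(theta), math.sin(theta)
    return theta, (qcx - (c * pcx - s * pcy), qcy - (s * pcx + c * pcy))

def icp(src, dst, iters=20):
    theta_total, cur = 0.0, list(src)
    # Repeatedly select correspondences and apply the optimal transform.
    for _ in range(iters):
        theta, (tx, ty) = best_rigid_transform(closest_pairs(cur, dst))
        c, s = math.cos(theta), math.sin(theta)
        cur = [(c*x - s*y + tx, s*x + c*y + ty) for x, y in cur]
        theta_total += theta
    return theta_total, cur

# Usage: recover a known rotation (0.1 rad) and translation (0.5, -0.2).
src = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0), (3.0, 1.0)]
c, s = math.cos(0.1), math.sin(0.1)
dst = [(c*x - s*y + 0.5, s*x + c*y - 0.2) for x, y in src]
theta, aligned = icp(src, dst)
```

With well-separated points and a small motion, as here, the initial nearest-neighbour pairing is already correct and the loop converges in one iteration.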
It can be understood that, before the point cloud registration processing, the point cloud data may also undergo point cloud filtering and similar processing. Originally acquired point cloud data often contains a large number of noise points and isolated points, so processing such as point cloud filtering retains the effective point cloud data and reduces the computational load.
202. And carrying out time synchronization processing on the initial camera pose transformation sequence and the initial radar pose transformation sequence to obtain a camera pose transformation sequence and a radar pose transformation sequence.
In one example, the initial camera pose transformation sequence further includes time information corresponding to rotation transformation information and translation transformation information of the target camera, and the initial radar pose transformation sequence further includes time information corresponding to rotation transformation information and translation transformation information of the target radar.
The time synchronization processing in this embodiment has an effect of making the time information corresponding to the same sequence position in the camera pose transformation sequence and the radar pose transformation sequence obtained by processing the same, where a specific scheme adopted by the time synchronization processing is not limited, and for example, the time synchronization processing may be implemented based on an interpolation method.
The time synchronization processing is performed on the initial camera pose transformation sequence and the initial radar pose transformation sequence to obtain a camera pose transformation sequence and a radar pose transformation sequence, and the method comprises the following steps:
calculating time delays of the initial camera pose sequence and the initial radar pose sequence respectively;
adjusting the time information of the initial camera pose sequence and the initial radar pose sequence based on the time delays of the initial camera pose sequence and the initial radar pose sequence;
and performing interpolation calculation on the initial camera pose transformation sequence and the initial radar pose transformation sequence according to the adjusted time information to obtain a camera pose transformation sequence and a radar pose transformation sequence, wherein the time information corresponding to the same sequence position in the camera pose transformation sequence and the radar pose transformation sequence is the same.
The time delay is the difference in time information between the target camera and the target radar due to different acquisition frame rates, different data transmission rates, and the like.
The fact that the time information corresponding to the same sequence position is the same means that, after the rotation transformation information and translation transformation information of the target camera are written into the camera pose transformation sequence in chronological order and the rotation transformation information and translation transformation information of the target radar are written into the radar pose transformation sequence in chronological order, the time information corresponding to the rotation transformation information and translation transformation information at the same position in the two sequences is the same. For example, the time information corresponding to the rotation transformation information and the translation transformation information at the i-th position in the camera pose transformation sequence is the same as the time information corresponding to the rotation transformation information and the translation transformation information at the i-th position in the radar pose transformation sequence.
It can be understood that the camera pose time information and the radar pose time information need to be processed according to the acquisition frame rate, the data transmission rate and other information between the target camera and the target radar, so as to eliminate the time delay of the camera pose time information and the radar pose time information.
After the time delay is eliminated, the initial camera pose transformation sequence and the initial radar pose transformation sequence may be further processed to synchronize the initial camera pose transformation sequence and the initial radar pose transformation sequence, i.e., to obtain a camera pose transformation sequence and a radar pose transformation sequence.
In one example, the performing interpolation calculation on the initial camera pose transformation sequence and the initial radar pose transformation sequence according to the synchronized radar pose time information and the synchronized camera pose time information to obtain a camera pose transformation sequence and a radar pose transformation sequence includes:
determining camera interpolation point time information in the initial camera pose transformation sequence and radar interpolation point time information in the initial radar pose transformation sequence;
according to the rotation transformation information and the translation transformation information in the initial camera pose transformation sequence, obtaining rotation transformation information and translation transformation information corresponding to the camera interpolation point time information through interpolation calculation to obtain a camera pose transformation sequence;
and according to the rotation transformation information and the translation transformation information in the initial radar pose transformation sequence, obtaining rotation transformation information and translation transformation information corresponding to the radar interpolation point time information through interpolation calculation to obtain a radar pose transformation sequence.
For example, in the initial camera pose transformation sequence, the rotation transformation information and translation transformation information corresponding to one or more groups of time information before and after the camera interpolation point time information may be selected, and the interpolation may be calculated by averaging. Alternatively, the rotation transformation information and translation transformation information corresponding to several groups of time information before and after the camera interpolation point time information may be selected to fit a curve, and the rotation transformation information and translation transformation information corresponding to the camera interpolation point time information may then be read off the fitted curve. Depending on actual requirements, more computationally expensive methods such as inverse distance weighting or trend surface smoothing interpolation may also be used.
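The single-bracketing-pair case described above (select the samples immediately before and after the interpolation point and interpolate linearly) might look like the sketch below. Representing rotation as a single scalar angle and translation as a 2-D offset is a simplifying assumption for illustration; a real implementation would interpolate rotations with, for example, quaternion slerp.

```python
def interpolate_pose(seq, t_query):
    """seq: list of (time, rotation_angle, (tx, ty)) tuples sorted by time.
    Returns the pose linearly interpolated at t_query between the two
    samples that bracket it."""
    for (t0, r0, p0), (t1, r1, p1) in zip(seq, seq[1:]):
        if t0 <= t_query <= t1:
            a = (t_query - t0) / (t1 - t0)           # interpolation weight
            rot = r0 + a * (r1 - r0)                  # rotation information
            pos = tuple(x0 + a * (x1 - x0)            # translation information
                        for x0, x1 in zip(p0, p1))
            return rot, pos
    raise ValueError("query time lies outside the sequence")

# Resample a camera pose sequence at a radar timestamp (0.5 s here):
camera_seq = [(0.0, 0.0, (0.0, 0.0)), (1.0, 10.0, (2.0, 4.0))]
rot, pos = interpolate_pose(camera_seq, 0.5)
```

Evaluating such a function at every radar interpolation point time yields a camera pose transformation sequence aligned with the radar one.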
In another example, in step 202, interpolation processing may also be performed only on the initial camera pose transformation sequence or the initial radar pose transformation sequence. For example:
taking the initial radar pose transformation sequence as a radar pose transformation sequence;
determining time information corresponding to the rotation transformation information and the translation transformation information of the target radar in the radar pose transformation sequence;
determining corresponding camera interpolation point time information in the initial camera pose transformation sequence based on the time information corresponding to the rotation transformation information and the translation transformation information of the target radar;
and according to the rotation transformation information and the translation transformation information in the initial camera pose transformation sequence, obtaining rotation transformation information and translation transformation information corresponding to the time information of the camera interpolation points through interpolation calculation to obtain a camera pose transformation sequence.
203. And determining a first rotation external parameter of the target camera in the rotation parameters according to a first association relation which is satisfied between the rotation transformation information in the camera pose transformation sequence and the radar pose transformation sequence and the rotation parameters required by converting the camera coordinate system into the radar coordinate system.
The rotation parameter is such that, after the camera coordinate system is rotated about its origin according to the rotation parameter, the directions of its coordinate axes coincide with the directions of the coordinate axes of the radar coordinate system.
The first association relation includes: the product of the rotation transformation information corresponding to first time information in the radar pose transformation sequence and the rotation parameter is equal to the product of the rotation parameter and the rotation transformation information corresponding to the first time information in the camera pose transformation sequence.
For example, the first association relation may be of the form shown in the following formula:

R_L(i, i+1) · R_cl = R_cl · R_C(i, i+1)

where R_L(i, i+1) represents the rotation transformation information of the target radar in the radar coordinate system between the i-th time and the (i+1)-th time (the first time information), obtained from the radar pose transformation sequence; R_cl represents the rotation parameter for converting the camera coordinate system into the radar coordinate system; and R_C(i, i+1) represents the rotation transformation information of the target camera in the camera coordinate system between the i-th time and the (i+1)-th time (the first time information), obtained from the camera pose transformation sequence. The product of R_cl and R_C(i, i+1) represents the rotation transformation information of the target camera between the i-th time and the (i+1)-th time in the radar coordinate system.
In this example, the rotation transformation information may be represented in the form of a matrix, and the rotation transformation information of the target camera in the camera coordinate system may be transformed into the rotation transformation information of the target camera in the radar coordinate system by the rotation parameters.
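A quick numerical check of the first association relation (the radar-side rotation times the extrinsic rotation equals the extrinsic rotation times the camera-side rotation) can be sketched as follows. The angles are chosen arbitrarily for illustration: an extrinsic rotation R_cl and a camera rotation R_C are constructed, the same physical motion is expressed in the radar coordinate system as R_L = R_cl · R_C · R_cl^T, and the relation is then verified.

```python
import math

def mat_mul(a, b):
    # 3x3 matrix product.
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(a):
    return [list(row) for row in zip(*a)]

def rot_z(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def rot_y(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

# Extrinsic rotation from camera to radar coordinates (example values).
R_cl = mat_mul(rot_z(0.3), rot_y(-0.1))
# Camera rotation between time i and i+1, in the camera coordinate system.
R_C = rot_z(0.25)
# The same physical rotation expressed in the radar coordinate system.
R_L = mat_mul(mat_mul(R_cl, R_C), transpose(R_cl))

# First association relation: R_L · R_cl == R_cl · R_C.
lhs = mat_mul(R_L, R_cl)
rhs = mat_mul(R_cl, R_C)
```

Both sides agree element by element, up to floating-point error.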
In order to calculate the rotation parameter accurately and improve its accuracy, in this embodiment the rotation parameter between the camera coordinate system and the radar coordinate system may be expressed as a product of rotation sub-parameters obtained by decomposition about the three coordinate axes Z, Y, and X of the radar coordinate system, as shown in the following formula:

R_cl = R_z(yaw) · R_y(pitch) · R_x(roll)

where R_z(yaw) represents the rotation sub-parameter of the camera coordinate system about the z-axis of the radar coordinate system, R_y(pitch) represents the rotation sub-parameter about the y-axis of the radar coordinate system, and R_x(roll) represents the rotation sub-parameter about the x-axis of the radar coordinate system. Each of R_z(yaw), R_y(pitch), and R_x(roll) may be a matrix.
The arrangement of the radar coordinate system is not limited in this embodiment.
The first rotation external parameter in the rotation parameter refers to the rotation sub-parameters R_y(pitch) and R_x(roll), and the second rotation external parameter refers to the rotation sub-parameter R_z(yaw).
Here, roll refers to the roll angle in the camera extrinsic parameters, pitch refers to the pitch angle in the camera extrinsic parameters, and yaw refers to the yaw angle in the camera extrinsic parameters.
In one example, the determining a first rotation external parameter of the target camera in the rotation parameter according to the first association relation satisfied between the rotation transformation information of the target camera and the target radar in their respective coordinate systems and the rotation parameter required for converting the camera coordinate system into the radar coordinate system includes:
constructing, according to the first association relation, a first equation between the product of the rotation transformation information corresponding to the first time information in the radar pose transformation sequence and the rotation parameter, and the product of the rotation parameter and the rotation transformation information corresponding to the first time information in the camera pose transformation sequence; moving all terms on the target side of the first equation to the other side, and then replacing the value on the target side with a preset residual to obtain a rotation residual relational expression;
and taking the second rotation external parameter in the rotation parameter as a fixed quantity, and performing residual calculation according to the rotation residual relational expression to obtain the first rotation external parameter of the target camera in the rotation parameter.
In one example, the rotation residual relational expression may be of the form shown in the following formula:

r_R = R_L(i, i+1) · R_cl − R_cl · R_C(i, i+1)

where r_R represents the preset residual that replaces the value on the target side. When residual calculation is performed according to the rotation residual relational expression, the first rotation external parameter in the rotation parameter corresponding to the minimized rotation residual can be solved by minimizing the rotation residual.
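To make the minimization concrete, the sketch below generates synthetic data from assumed true angles (roll 0.05 rad, pitch −0.02 rad), holds the second rotation external parameter (yaw) as a fixed quantity, and recovers roll and pitch by brute-force grid search over the rotation residual r = R_L · R_cl − R_cl · R_C. The grid search, the chosen angles, and the single motion sample are all illustrative assumptions; a real implementation would use a nonlinear least-squares solver over many samples.

```python
import math

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(a):
    return [list(row) for row in zip(*a)]

def rot_x(t):
    c, s = math.cos(t), math.sin(t)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def extrinsic(roll, pitch, yaw):
    # R_cl = R_z(yaw) · R_y(pitch) · R_x(roll), the ZYX decomposition above.
    return mat_mul(rot_z(yaw), mat_mul(rot_y(pitch), rot_x(roll)))

def rotation_residual(R_L, R_C, roll, pitch, yaw):
    # Squared Frobenius norm of r = R_L · R_cl − R_cl · R_C.
    R_cl = extrinsic(roll, pitch, yaw)
    lhs, rhs = mat_mul(R_L, R_cl), mat_mul(R_cl, R_C)
    return sum((lhs[i][j] - rhs[i][j]) ** 2
               for i in range(3) for j in range(3))

# Synthetic ground truth (assumed values for this sketch).
true_roll, true_pitch, yaw_fixed = 0.05, -0.02, 0.0
R_true = extrinsic(true_roll, true_pitch, yaw_fixed)
R_C = rot_z(0.3)   # camera rotation between adjacent times (yaw-like motion)
R_L = mat_mul(mat_mul(R_true, R_C), transpose(R_true))

# Grid search over roll and pitch with yaw held fixed.
grid = [i * 0.01 - 0.1 for i in range(21)]
_, est_roll, est_pitch = min((rotation_residual(R_L, R_C, r, p, yaw_fixed), r, p)
                             for r in grid for p in grid)
```

The residual vanishes at the true roll and pitch, so the grid minimum lands on them.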
204. And determining a second rotation external parameter in the rotation parameter, the scale factor, and the translation parameter according to a second association relation satisfied among the rotation transformation information and the translation transformation information in the radar pose transformation sequence, the scale factor of the target camera, the translation transformation information in the camera pose transformation sequence, and the rotation parameter and the translation parameter required for converting the camera coordinate system into the radar coordinate system, together with the determined first rotation external parameter.
The translation parameter is such that, after the camera coordinate system is translated according to the translation parameter, the origin of the camera coordinate system coincides with the origin of the radar coordinate system.
Wherein the scale factor is the ratio of the distance between the object and the camera in the real space to the focal length of the target camera.
The second association relation includes: the difference between the product of the rotation transformation information corresponding to second time information in the radar pose transformation sequence and the translation parameter, and the translation parameter itself, is equal to the difference between the product of the scale factor, the rotation parameter, and the translation transformation information corresponding to the second time information in the camera pose transformation sequence, and the translation transformation information corresponding to the second time information in the radar pose transformation sequence.
For example, the second incidence relation may be in the form of the following formula:

R_L^i * t - t = s * R * t_C^i - t_L^i

wherein t represents the translation parameter, s represents the scale factor, R represents the rotation parameter, t_C^i represents the translation transformation information of the target camera, extracted from the camera pose transformation sequence in the camera coordinate system, between the i-th time and the (i+1)-th time (the second time information), t_L^i represents the translation transformation information of the target radar, extracted from the radar pose transformation sequence in the radar coordinate system, between the i-th time and the (i+1)-th time, and R_L^i represents the rotation transformation information of the target radar between the same two times. Under ideal conditions, s may be 1.
In one example, the determining a second rotation external parameter in the rotation parameters, the scale factor, and the translation parameter according to the second association relation, which is satisfied between the rotation transformation information and the translation transformation information in the radar pose transformation sequence, the scale factor of the target camera, the translation transformation information in the camera pose transformation sequence, and the rotation parameters and the translation parameters required for converting the camera coordinate system into the radar coordinate system, and according to the determined first rotation external parameter, includes:
constructing, according to the second incidence relation, a second equation between the difference value between the product of the rotation transformation information corresponding to second time information in the radar pose transformation sequence and the translation parameter, and the translation parameter, on one side, and the difference value between the product of the scale factor, the rotation parameter, and the translation transformation information corresponding to the second time information in the camera pose transformation sequence, and the translation transformation information corresponding to the second time information in the radar pose transformation sequence, on the other side; moving all calculation terms on a target side of the second equation to the other side; and then replacing the numerical value on the target side with a preset residual to obtain a translation residual relational expression;
and performing residual calculation according to the translation residual relation and the determined first rotation external parameter to obtain a second rotation external parameter, the scale factor and the translation parameter in the rotation parameters.
Wherein the second equation may be in the form of:

(R_L^i - I_3) * t = s * R * t_C^i - t_L^i

wherein I_3 is the 3 × 3 identity matrix obtained after the translation parameter is factored out of the formula in the second incidence relation.
In one example, the translation residual relation may be expressed in the form of the following equation:

r_t^i = (R_L^i - I_3) * t - s * R * t_C^i + t_L^i

wherein r_t^i represents a preset residual, which replaces the value on the target side. When residual calculation is performed according to the translation residual relation and the determined first rotation external parameter, the translation residual can be minimized to solve the second rotation external parameter, the scale factor, and the translation parameter in the rotation parameters corresponding to the minimized translation residual. The translation residual r_t^i and the rotation residual are different residuals.
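Given the first rotation external parameter, the translation residual relation is linear in the unknown translation parameter t and scale factor s, so a minimal sketch (assuming numpy arrays; the function name is hypothetical, not the claimed implementation) can stack the relations for all second time information and solve them in one linear least-squares step:

```python
import numpy as np

def solve_translation_and_scale(R, radar_rots, radar_trans, cam_trans):
    """Given the rotation extrinsic R, solve
        (R_L^i - I) @ t = s * (R @ t_C^i) - t_L^i
    for the translation t (3-vector) and scale s by least squares,
    with the unknown vector x = [t; s]."""
    A, b = [], []
    for R_L, t_L, t_C in zip(radar_rots, radar_trans, cam_trans):
        A.append(np.hstack([R_L - np.eye(3), (-(R @ t_C)).reshape(3, 1)]))
        b.append(-t_L)
    x, *_ = np.linalg.lstsq(np.vstack(A), np.concatenate(b), rcond=None)
    return x[:3], x[3]
```

Each time pair contributes three equations for four unknowns, so a handful of sufficiently varied motions already makes the system well determined.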
205. And correcting the camera pose transformation sequence according to the scale factor to obtain a corrected camera pose transformation sequence.
Wherein, the correcting the camera pose transformation sequence according to the scale factor may specifically be: and modifying the rotation transformation information and the translation transformation information of the target camera in the camera coordinate system in the camera pose transformation sequence according to the calculated scale factor. In step 206, the rotation parameters and the translation parameters are recalculated according to the corrected camera pose transformation sequence, so that the accuracy of the target rotation external parameters and the target translation external parameters can be improved.
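For illustration, in a typical monocular pipeline only the translation components carry the scale ambiguity (the rotation information is scale-invariant), so under that assumption the correction might be sketched as (hypothetical function name):

```python
import numpy as np

def correct_camera_sequence(cam_trans, scale):
    """Convert the up-to-scale monocular translation transformation
    information into metric units using the solved scale factor."""
    return [scale * np.asarray(t, dtype=float) for t in cam_trans]
```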
206. And recalculating rotation parameters and translation parameters according to the radar pose transformation sequence, the corrected camera pose transformation sequence and the first incidence relation and the second incidence relation to obtain target rotation parameters and target translation parameters.
In the practical application process, with the continuous improvement of sensors such as cameras and radars, the influence of factors such as noise in the process of acquiring image data and point cloud data becomes smaller and smaller. Therefore, operations such as denoising may not be performed after the image data and the point cloud data are acquired; instead, after the rotation parameters and the translation parameters are obtained through calculation, an optimization calculation is performed to eliminate the influence of noise.
In one example, after step 206, the method further includes:
calculating the relation of the rotation residual errors according to the target rotation parameters, the target translation parameters, the radar pose transformation sequence and the corrected camera pose transformation sequence to obtain rotation residual errors;
calculating the translation residual error relation according to the target rotation parameter, the target translation parameter, the radar pose transformation sequence and the corrected camera pose transformation sequence to obtain a translation residual error;
and optimizing a matrix corresponding to the rotation parameter and the translation parameter according to the rotation residual and the translation residual to obtain the optimized rotation parameter and translation parameter.
The step of optimizing the matrix corresponding to the rotation parameter and the translation parameter according to the rotation residual and the translation residual to obtain the optimized rotation parameter and translation parameter may be specifically implemented by the following formula:
T* = argmin_T Σ_{i=1}^{n} ( ||r_R^i||^2 + ||r_t^i||^2 )

wherein T may represent a matrix formed by the rotation parameter and the translation parameter, and optionally T may be obtained by splicing the matrices corresponding to the rotation parameter and the translation parameter; r_R^i and r_t^i respectively represent the rotation residual and the translation residual at the i-th time; i represents the i-th time in the camera pose transformation sequence and the radar pose transformation sequence, the minimum value of i being 1; n represents the number of summed times, and its maximum value is less than the number of times in the camera pose transformation sequence and the radar pose transformation sequence; and || · ||^2 indicates that, from the first time to the n-th time, the squared modulus of each residual is calculated and the results are summed.
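This objective can be sketched as a plain cost function (an illustrative numpy version, assuming the rotation residual R_L^i R - R R_C^i and the translation residual (R_L^i - I) t - s R t_C^i + t_L^i; function name hypothetical), which could then be handed to any nonlinear least-squares solver:

```python
import numpy as np

def calibration_cost(R, t, s, radar_rots, radar_trans, cam_rots, cam_trans):
    """Sum over all times i of
       ||R_L^i @ R - R @ R_C^i||^2 + ||(R_L^i - I) @ t - s * R @ t_C^i + t_L^i||^2."""
    cost = 0.0
    for R_L, t_L, R_C, t_C in zip(radar_rots, radar_trans, cam_rots, cam_trans):
        r_rot = R_L @ R - R @ R_C
        r_tr = (R_L - np.eye(3)) @ t - s * (R @ t_C) + t_L
        cost += np.sum(r_rot ** 2) + np.sum(r_tr ** 2)
    return cost
```

At the true extrinsics and scale the cost is zero for noise-free data, which is a convenient sanity check before optimizing.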
It can be understood that, in order to ensure the accuracy of the target rotation parameter and the target translation parameter, after the rotation external parameter and the translation external parameter are calculated, the rotation external parameter and the translation external parameter should be checked, and it is determined that the calculated rotation external parameter and the calculated translation external parameter can be used as the target rotation parameter and the target translation parameter.
In one example, the initial camera pose transformation sequence is obtained through image data acquired by a camera, and the initial radar pose transformation sequence is obtained through point cloud data acquired by a radar;
the recalculating rotation parameters and translation parameters according to the radar pose transformation sequence, the corrected camera pose transformation sequence and the first and second association relations to obtain target rotation parameters and target translation parameters includes:
obtaining a first corrected rotation external parameter in the corrected rotation parameters according to the corrected camera pose transformation sequence, the radar pose transformation sequence and the first incidence relation;
obtaining a second corrected rotation external parameter, a corrected scale factor and a corrected translation parameter in the corrected rotation parameters according to the first corrected rotation external parameter, the corrected camera pose transformation sequence, the radar pose transformation sequence and the second incidence relation;
performing projection conversion between the image data acquired by the camera and the point cloud data acquired by the radar according to the corrected rotation parameter, the corrected scale factor and the corrected translation parameter, and determining a projection error based on information obtained by the projection conversion;
when the projection error is smaller than a preset projection error threshold value, taking the corrected rotation parameter as a target rotation parameter, and taking the corrected translation parameter as a target translation parameter;
and when the projection error is not smaller than the projection error threshold, correcting the corrected camera pose transformation sequence to obtain a new corrected camera pose sequence, and based on the new corrected camera pose sequence, re-executing the step of obtaining a first corrected rotation external parameter in the corrected rotation parameters according to the corrected camera pose transformation sequence, the radar pose transformation sequence and the first incidence relation.
The projection error threshold is a preset parameter value used for judging whether the projection error meets the requirement, and the developer can set it according to the actual situation; for example, if the developer wants the projection error to be less than 99%, the projection error threshold can be set to 99%. When the projection error value is smaller than the projection error threshold, the obtained corrected rotation parameter and corrected translation parameter can be considered to meet the requirement, so the corrected rotation parameter is taken as the target rotation parameter, and the corrected translation parameter is taken as the target translation parameter.
As can be seen from the above, according to the embodiment of the present invention, an initial camera pose transformation sequence of a target camera and an initial radar pose transformation sequence of a target radar are obtained, wherein the relative position between the target camera and the target radar is unchanged, the initial camera pose transformation sequence includes rotation transformation information and translation transformation information of the target camera in a camera coordinate system, and the initial radar pose transformation sequence includes rotation transformation information and translation transformation information of the target radar in a radar coordinate system. The initial camera pose transformation sequence and the initial radar pose transformation sequence are time-synchronized to obtain a camera pose transformation sequence and a radar pose transformation sequence. According to a first association relation satisfied between the rotation transformation information in the camera pose transformation sequence and the radar pose transformation sequence and the rotation parameters required for converting the camera coordinate system into the radar coordinate system, a first rotation external parameter of the target camera in the rotation parameters is determined. According to a second association relation satisfied between the rotation transformation information and the translation transformation information in the radar pose transformation sequence, the scale factor of the target camera, the translation transformation information in the camera pose transformation sequence, and the rotation parameters and translation parameters required for converting the camera coordinate system into the radar coordinate system, and according to the determined first rotation external parameter, a second rotation external parameter in the rotation parameters, the scale factor, and the translation parameter are determined. The camera pose transformation sequence is corrected according to the scale factor to obtain a corrected camera pose transformation sequence, and the rotation parameters and translation parameters are recalculated according to the radar pose transformation sequence, the corrected camera pose transformation sequence, and the first association relation and the second association relation to obtain target rotation parameters and target translation parameters. In this embodiment, because the first association relation and the second association relation exist between the target camera and the target radar whose relative positions are kept unchanged, the rotation parameters and the translation parameters of the camera can be calculated based on the rotation transformation information and the translation transformation information in the camera pose transformation sequence and the radar pose transformation sequence obtained after time synchronization processing. Therefore, when the camera is calibrated, a good calibration effect can be achieved without using a calibration plate, which improves the calibration accuracy and saves human resources.
The method according to the preceding embodiment is illustrated in further detail below by way of example.
In this embodiment, description will be made by taking the system shown in fig. 1 as an example, in which the camera calibration device is integrated in an intelligent vehicle, and the target camera and the target radar are mounted on the intelligent vehicle.
As shown in fig. 3, the specific process of the camera calibration method of this embodiment may be as follows:
301. the target camera collects image data and sends the image data to the intelligent vehicle, and the target radar collects point cloud data and sends the point cloud data to the intelligent vehicle.
Wherein the target camera and the target radar can move along with the movement of the intelligent vehicle, but the relative positions of the target camera and the target radar are kept unchanged.
302. The intelligent vehicle receives image data collected by the target camera and point cloud data collected by the target radar.
After receiving the image data and the point cloud data, the intelligent vehicle can perform denoising processing on the image data and the point cloud data, and remove noise points generated by factors such as camera shake and radar shake in the image data and the point cloud data.
303. The intelligent vehicle acquires an initial camera pose transformation sequence of the target camera according to the image data, and acquires an initial radar pose transformation sequence of the target radar according to the point cloud data, wherein the relative position of the target camera and the target radar is unchanged, the initial camera pose transformation sequence includes rotation transformation information and translation transformation information of the target camera in a camera coordinate system, and the initial radar pose transformation sequence includes rotation transformation information and translation transformation information of the target radar in a radar coordinate system.
In one embodiment, the initial camera pose transformation sequence is derived from image feature point matching of image data captured by the target camera, and may include pose transformation information of the target camera between adjacent time instants. The initial radar pose transformation sequence is obtained by point cloud registration of point cloud data acquired by a target radar, and can comprise pose transformation information of the target radar between adjacent moments.
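For example (a minimal sketch, assuming each estimated pose is available as a 4 × 4 homogeneous matrix; the function name is hypothetical), the pose transformation information between adjacent time instants can be derived from absolute poses as:

```python
import numpy as np

def relative_motions(poses):
    """Turn a list of absolute 4x4 poses into relative transforms
    between adjacent time instants: T_rel_i = inv(T_i) @ T_{i+1}."""
    return [np.linalg.inv(poses[i]) @ poses[i + 1]
            for i in range(len(poses) - 1)]
```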
304. And the intelligent vehicle carries out time synchronization processing on the initial camera pose transformation sequence and the initial radar pose transformation sequence to obtain a camera pose transformation sequence and a radar pose transformation sequence.
In one example, step 304 includes:
calculating time delays of the initial camera pose sequence and the initial radar pose sequence respectively;
adjusting the time information of the initial camera pose sequence and the initial radar pose sequence based on the time delays of the initial camera pose sequence and the initial radar pose sequence;
determining camera interpolation point time information in the initial camera pose transformation sequence and radar interpolation point time information in the initial radar pose transformation sequence;
according to the rotation transformation information and the translation transformation information in the initial camera pose transformation sequence, obtaining rotation transformation information and translation transformation information corresponding to the camera interpolation point time information through interpolation calculation to obtain a camera pose transformation sequence;
and according to the rotation transformation information and the translation transformation information in the initial radar pose transformation sequence, obtaining rotation transformation information and translation transformation information corresponding to the radar interpolation point time information through interpolation calculation to obtain a radar pose transformation sequence.
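A minimal sketch of the interpolation step for the translation transformation information (assuming per-axis linear interpolation with numpy; rotation transformation information would typically require spherical interpolation instead, which is omitted here; the function name is hypothetical) might be:

```python
import numpy as np

def interpolate_translations(times, trans, query_times):
    """Linearly interpolate per-axis translation information at the
    interpolation-point time instants."""
    trans = np.asarray(trans, dtype=float)
    return np.stack([np.interp(query_times, times, trans[:, k])
                     for k in range(trans.shape[1])], axis=1)
```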
305. And the intelligent vehicle determines a first rotation external parameter of the target camera in the rotation parameters according to a first association relation which is satisfied between the rotation transformation information in the camera pose transformation sequence and the radar pose transformation sequence and the rotation parameters required by converting the camera coordinate system into the radar coordinate system.
The camera coordinate system can rotate around the origin of the camera coordinate system according to the rotation parameters, and the directions of the coordinate axes are consistent with the directions of the coordinate axes in the radar coordinate system.
306. And the intelligent vehicle determines a second rotation external parameter, the scale factor and the translation parameter in the rotation parameters according to the rotation transformation information and the translation transformation information in the radar pose transformation sequence, the scale factor of the target camera, the translation transformation information in the camera pose transformation sequence, a second incidence relation which is satisfied between the rotation parameters and the translation parameters required by the conversion of the camera coordinate system into the radar coordinate system, and the determined first rotation external parameter.
The translation parameters are parameters which can enable the origin of the camera coordinate system to coincide with the origin of the radar coordinate system after the camera coordinate system translates according to the translation parameters. The scale factor is the ratio of the distance between the object and the camera in real space to the focal length of the target camera.
307. And the intelligent vehicle corrects the camera pose transformation sequence according to the scale factor to obtain a corrected camera pose transformation sequence.
Correcting the camera pose transformation sequence according to the scale factor means modifying the rotation transformation information and the translation transformation information of the target camera in the camera coordinate system in the camera pose transformation sequence according to the calculated scale factor, so as to improve the accuracy of the target rotation external parameter and the target translation external parameter.
308. And the intelligent vehicle recalculates the rotation parameters and the translation parameters according to the radar pose transformation sequence, the corrected camera pose transformation sequence, the first incidence relation and the second incidence relation to obtain target rotation parameters and target translation parameters.
Optionally, step 308 may specifically include: obtaining a first corrected rotation external parameter in the corrected rotation parameters according to the corrected camera pose transformation sequence, the radar pose transformation sequence and the first incidence relation;
obtaining a second corrected rotation external parameter, a corrected scale factor and a corrected translation parameter in the corrected rotation parameters according to the first corrected rotation external parameter, the corrected camera pose transformation sequence, the radar pose transformation sequence and the second incidence relation;
performing projection conversion between the image data acquired by the camera and the point cloud data acquired by the radar according to the corrected rotation parameter, the corrected scale factor and the corrected translation parameter, and determining a projection error based on information obtained by the projection conversion;
when the projection error is smaller than a preset projection error threshold value, taking the corrected rotation parameter as a target rotation parameter, and taking the corrected translation parameter as a target translation parameter;
and when the projection error is not smaller than the projection error threshold, correcting the corrected camera pose transformation sequence to obtain a new corrected camera pose sequence, and based on the new corrected camera pose sequence, re-executing the step of obtaining a first corrected rotation external parameter in the corrected rotation parameters according to the corrected camera pose transformation sequence, the radar pose transformation sequence and the first incidence relation.
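The check-and-retry logic described above can be sketched as a generic loop (all callables here are hypothetical placeholders for the solving, projection-error, and sequence-correction steps, not the claimed implementation):

```python
def calibrate_until_converged(solve_step, projection_error, correct_sequence,
                              cam_seq, radar_seq, threshold, max_iters=10):
    """Repeatedly solve the extrinsics; accept them once the projection
    error falls below the threshold, otherwise re-correct the camera
    pose sequence and solve again."""
    params = None
    for _ in range(max_iters):
        params = solve_step(cam_seq, radar_seq)
        if projection_error(params) < threshold:
            break
        cam_seq = correct_sequence(cam_seq, params)
    return params
```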
309. And the intelligent vehicle calculates the rotation residual relation according to the target rotation parameters, the target translation parameters, the radar pose transformation sequence and the corrected camera pose transformation sequence to obtain a rotation residual, and calculates the translation residual relation to obtain a translation residual.
310. And the intelligent vehicle optimizes the matrix corresponding to the rotation parameter and the translation parameter according to the rotation residual and the translation residual to obtain the optimized rotation parameter and translation parameter.
In one example, step 310 may be specifically implemented by the following formula:

T* = argmin_T Σ_{i=1}^{n} ( ||r_R^i||^2 + ||r_t^i||^2 )

wherein T represents a matrix formed by the rotation parameters and the translation parameters together, and r_R^i and r_t^i respectively represent the rotation residual and the translation residual at the i-th time.
Therefore, when the embodiment is used for calibrating the camera, a calibration plate can be omitted, a good calibration effect can be achieved, the calibration accuracy is improved, and manpower resource saving is facilitated.
In order to better implement the method, correspondingly, the embodiment of the invention also provides a camera calibration device.
Referring to fig. 4, the apparatus includes:
a sequence obtaining unit 401, configured to obtain an initial camera pose transform sequence of a target camera, where a relative position of the target camera and a target radar is unchanged, obtain an initial radar pose transform sequence of the target radar, where the initial camera pose transform sequence includes rotation transform information and translation transform information of the target camera in a camera coordinate system, and the initial radar pose transform sequence includes rotation transform information and translation transform information of the target radar in a radar coordinate system;
a time synchronization unit 402, configured to perform time synchronization processing on the initial camera pose transformation sequence and the initial radar pose transformation sequence to obtain a camera pose transformation sequence and a radar pose transformation sequence;
a first parameter determining unit 403, configured to determine, according to a first association relation that is satisfied between the rotation transformation information in the camera pose transformation sequence and the radar pose transformation sequence and rotation parameters required for converting the camera coordinate system into the radar coordinate system, a first rotation external parameter of the target camera in the rotation parameters;
a second parameter determining unit 404, configured to determine, according to rotation transformation information and translation transformation information in the radar pose transformation sequence, a scale factor of a target camera, translation transformation information in the camera pose transformation sequence, a second association relation that is satisfied between a rotation parameter and a translation parameter required for converting a camera coordinate system into a radar coordinate system, and the determined first rotation external parameter, a second rotation external parameter in the rotation parameters, the scale factor, and the translation parameter;
a sequence correction unit 405, configured to correct the camera pose transformation sequence according to the scale factor to obtain a corrected camera pose transformation sequence;
and an external reference correcting unit 406, configured to recalculate the rotation parameter and the translation parameter according to the radar pose transformation sequence, the corrected camera pose transformation sequence, and the first association relationship and the second association relationship, so as to obtain a target rotation parameter and a target translation parameter.
In an optional example, the initial camera pose transformation sequence further includes time information corresponding to rotation transformation information and translation transformation information of the target camera, and the initial radar pose transformation sequence further includes time information corresponding to rotation transformation information and translation transformation information of the target radar;
correspondingly, the time synchronization unit 402 is configured to calculate time delays of the initial camera pose sequence and the initial radar pose sequence, respectively;
adjusting the time information of the initial camera pose sequence and the initial radar pose sequence based on the time delays of the initial camera pose sequence and the initial radar pose sequence;
and performing interpolation calculation on the initial camera pose transformation sequence and the initial radar pose transformation sequence according to the adjusted time information to obtain a camera pose transformation sequence and a radar pose transformation sequence, wherein the time information corresponding to the same sequence position in the camera pose transformation sequence and the radar pose transformation sequence is the same.
In an optional example, the time synchronization unit 402 includes an interpolation calculation subunit 407, configured to determine, according to the adjusted time information, camera interpolation point time information in the initial camera pose transformation sequence and radar interpolation point time information in the initial radar pose transformation sequence;
according to the rotation transformation information and the translation transformation information in the initial camera pose transformation sequence, obtaining rotation transformation information and translation transformation information corresponding to the camera interpolation point time information through interpolation calculation, and arranging the corresponding rotation transformation information and translation transformation information according to the sequence of the camera interpolation point time information to obtain a camera pose transformation sequence;
and according to the rotation transformation information and the translation transformation information in the initial radar pose transformation sequence, obtaining rotation transformation information and translation transformation information corresponding to the radar interpolation point time information through interpolation calculation, and arranging the corresponding rotation transformation information and translation transformation information according to the sequence of the radar interpolation point time information to obtain a radar pose transformation sequence.
In an optional example, the first association relation includes that a product of the rotation transformation information corresponding to first time information in the radar pose transformation sequence and the rotation parameter is equal to a product of the rotation parameter and the rotation transformation information corresponding to the first time information in the camera pose transformation sequence;
correspondingly, the first parameter determining unit 403 is configured to construct a first equation between a product of rotation transformation information corresponding to first time information in a radar pose transformation sequence and the rotation parameter and a product of the rotation parameter and rotation transformation information corresponding to the first time information in the camera pose transformation sequence according to a first association relationship, move all calculation items on a target side of the first equation to another side, and then replace a numerical value on the target side with a preset residual to obtain a rotation residual relation;
and taking the second rotation external parameter in the rotation parameters as a known quantity, and performing residual calculation according to the rotation residual relation to obtain the first rotation external parameter of the target camera in the rotation parameters.
In an optional example, the second association relation includes that a difference value between the product of the rotation transformation information corresponding to second time information in the radar pose transformation sequence and the translation parameter, and the translation parameter itself, is equal to a difference value between the product of the scale factor, the rotation parameter, and the translation transformation information corresponding to the second time information in the camera pose transformation sequence, and the translation transformation information corresponding to the second time information in the radar pose transformation sequence;
correspondingly, the second parameter determining unit 404 is configured to construct, according to the second association relation, a second equation between the difference value between the product of the rotation transformation information corresponding to second time information in the radar pose transformation sequence and the translation parameter, and the translation parameter, on one side, and the difference value between the product of the scale factor, the rotation parameter, and the translation transformation information corresponding to the second time information in the camera pose transformation sequence, and the translation transformation information corresponding to the second time information in the radar pose transformation sequence, on the other side; move all computation terms on a target side of the second equation to the other side; and then replace the numerical value on the target side with a preset residual to obtain a translation residual relation;
and performing residual calculation according to the translation residual relation and the determined first rotation external parameter to obtain a second rotation external parameter, the scale factor and the translation parameter in the rotation parameters.
In an optional example, after the external reference correcting unit 406 recalculates the rotation parameter and the translation parameter according to the radar pose transformation sequence, the corrected camera pose transformation sequence, the first association relation and the second association relation to obtain a target rotation parameter and a target translation parameter, the camera calibration apparatus further includes a parameter optimizing unit, configured to calculate the rotation residual relation according to the target rotation parameter, the target translation parameter, the radar pose transformation sequence, and the corrected camera pose transformation sequence to obtain a rotation residual;
calculating the translation residual relation according to the target rotation parameter, the target translation parameter, the radar pose transformation sequence and the corrected camera pose transformation sequence to obtain a translation residual;
and optimizing a matrix corresponding to the rotation parameter and the translation parameter according to the rotation residual and the translation residual to obtain an optimized rotation parameter and an optimized translation parameter.
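As a sketch of how such residuals might be evaluated, the following Python function assumes hand-eye-style relations R_L·R = R·R_C and (R_L − I)·t = s·R·t_C − t_L (the residual forms, function name and stacking are illustrative assumptions); the stacked residual vector it returns can be handed to any generic nonlinear least-squares optimizer to refine the extrinsic parameters:

```python
import numpy as np

def calibration_residuals(R_ext, t_ext, s, R_lidar, t_lidar, R_cam, t_cam):
    """Evaluate rotation and translation residuals for every synchronized frame pair.

    Assumed residual forms (hand-eye style):
      rotation:    R_L_i @ R_ext - R_ext @ R_C_i          (9 entries per pair)
      translation: (R_L_i - I) @ t_ext - s * (R_ext @ t_C_i) + t_L_i   (3 entries)
    The stacked vector is zero for perfectly consistent data.
    """
    res = []
    for R_L, t_L, R_C, t_C in zip(R_lidar, t_lidar, R_cam, t_cam):
        res.append((R_L @ R_ext - R_ext @ R_C).ravel())
        res.append((R_L - np.eye(3)) @ t_ext - s * (R_ext @ t_C) + t_L)
    return np.concatenate(res)
```

Minimizing the squared norm of this vector over the rotation and translation parameters is one concrete way to realize the optimization step described above.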
In an optional example, the initial camera pose transformation sequence is obtained through image data acquired by a camera, and the initial radar pose transformation sequence is obtained through point cloud data acquired by a radar;
correspondingly, the external reference correcting unit 406 includes an external reference correcting subunit 408 and an external reference error determining unit 409, where the external reference correcting subunit 408 is configured to obtain a first corrected rotation external parameter in the corrected rotation parameters according to the corrected camera pose transformation sequence, the radar pose transformation sequence, and the first association relation;
and to obtain a second corrected rotation external parameter in the corrected rotation parameters, a corrected scale factor and a corrected translation parameter according to the first corrected rotation external parameter, the corrected camera pose transformation sequence, the radar pose transformation sequence and the second association relation;
the external reference error determining unit 409 is configured to perform projection conversion between the image data acquired by the camera and the point cloud data acquired by the radar according to the corrected rotation parameter, the corrected scale factor, and the corrected translation parameter, and determine a projection error based on information obtained by the projection conversion;
when the projection error is smaller than a preset projection error threshold, taking the corrected rotation parameter as the target rotation parameter, and taking the corrected translation parameter as the target translation parameter;
and when the projection error is not smaller than the projection error threshold, correcting the corrected camera pose transformation sequence to obtain a new corrected camera pose transformation sequence, and, based on the new corrected camera pose transformation sequence, re-executing the step of obtaining a first corrected rotation external parameter in the corrected rotation parameters according to the corrected camera pose transformation sequence, the radar pose transformation sequence and the first association relation.
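The projection-error check can be sketched as follows. This is only an illustrative Python example: the intrinsic matrix K, the assumed extrinsic direction (camera coordinates to radar coordinates, p_L = R·p_C + t), the default threshold, and the availability of lidar points matched to image pixels are all assumptions not fixed by the embodiment:

```python
import numpy as np

def projection_error(points_lidar, pixels_obs, R_ext, t_ext, K):
    """Mean reprojection error of lidar points against matched image pixels.

    Assumes R_ext, t_ext map camera coordinates to radar coordinates
    (p_L = R_ext @ p_C + t_ext), K is the 3x3 camera intrinsic matrix,
    and `pixels_obs` are image features matched to `points_lidar`.
    """
    errors = []
    for p_L, uv in zip(points_lidar, pixels_obs):
        p_C = R_ext.T @ (p_L - t_ext)        # radar frame -> camera frame
        uvw = K @ p_C                        # perspective projection
        proj = uvw[:2] / uvw[2]
        errors.append(np.linalg.norm(proj - uv))
    return float(np.mean(errors))

def accept_extrinsics(err, threshold=1.0):
    """Accept the corrected extrinsics when the projection error is below threshold."""
    return err < threshold
```

In the loop described above, a rejected candidate would trigger another correction of the camera pose transformation sequence before the solve is re-run.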
Therefore, by adopting the device provided by the embodiment of the invention, based on the first association relation and the second association relation that hold between a camera and a radar whose relative positions are fixed, a good calibration effect can be achieved without using a calibration board when the camera is calibrated, which improves calibration accuracy and saves human resources.
Accordingly, an embodiment of the present invention further provides an electronic device, as shown in fig. 5, the electronic device may include Radio Frequency (RF) circuit 501, memory 502 including one or more computer-readable storage media, input unit 503, display unit 504, sensor 505, audio circuit 506, Wireless Fidelity (WiFi) module 507, processor 508 including one or more processing cores, and power supply 509. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 5 does not constitute a limitation of the electronic device and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components. Wherein:
the RF circuit 501 may be used for receiving and transmitting signals during information transmission and reception or during a call, and in particular, for receiving downlink information of a base station and then sending the received downlink information to the one or more processors 508 for processing; in addition, uplink data is transmitted to the base station. In general, the RF circuit 501 includes, but is not limited to, an antenna, at least one amplifier, a tuner, one or more oscillators, a Subscriber Identity Module (SIM) card, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 501 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, Short Message Service (SMS), and the like.
The memory 502 may be used to store software programs and modules, and the processor 508 executes various functional applications and data processing by running the software programs and modules stored in the memory 502. The memory 502 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the data storage area may store data (such as audio data, a phonebook, etc.) created according to the use of the electronic device, and the like. Further, the memory 502 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. Accordingly, the memory 502 may also include a memory controller to provide the processor 508 and the input unit 503 with access to the memory 502.
The input unit 503 may be used to receive input numeric or character information and generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control. In particular, in one specific embodiment, the input unit 503 may include a touch-sensitive surface as well as other input devices. The touch-sensitive surface, also referred to as a touch display screen or a touch pad, may collect touch operations by a user on or near it (e.g., operations performed by the user on or near the touch-sensitive surface using a finger, a stylus, or any other suitable object or accessory) and drive a corresponding connection device according to a predetermined program. Optionally, the touch-sensitive surface may include two parts: a touch detection device and a touch controller. The touch detection device detects a touch orientation of the user, detects a signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 508, and can receive and execute commands sent by the processor 508. In addition, the touch-sensitive surface may be implemented using various types, such as resistive, capacitive, infrared, and surface acoustic wave types. The input unit 503 may include other input devices in addition to the touch-sensitive surface. In particular, the other input devices may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, a switch key, etc.), a trackball, a mouse, a joystick, and the like.
The display unit 504 may be used to display information input by or provided to the user and various graphical user interfaces of the electronic device, which may be made up of graphics, text, icons, video, and any combination thereof. The display unit 504 may include a display panel, and optionally, the display panel may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like. Further, the touch-sensitive surface may overlay the display panel; when a touch operation is detected on or near the touch-sensitive surface, the touch operation is transmitted to the processor 508 to determine the type of the touch event, and then the processor 508 provides a corresponding visual output on the display panel according to the type of the touch event. Although in fig. 5 the touch-sensitive surface and the display panel are two separate components implementing input and output functions, in some embodiments the touch-sensitive surface may be integrated with the display panel to implement the input and output functions.
The electronic device may also include at least one sensor 505, such as a light sensor, a motion sensor, and other sensors. In particular, the light sensor may include an ambient light sensor that may adjust the brightness of the display panel according to the brightness of ambient light, and a proximity sensor that may turn off the display panel and/or the backlight when the electronic device is moved to the ear. As one kind of motion sensor, a gravity acceleration sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when the device is stationary, and can be used for applications that recognize the posture of the electronic device (such as horizontal and vertical screen switching, related games, and magnetometer posture calibration) and for vibration recognition related functions (such as a pedometer and tapping). Other sensors, such as a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor and the like, may also be configured for the electronic device, and are not described herein again.
A speaker and a microphone in the audio circuit 506 may provide an audio interface between the user and the electronic device. On one hand, the audio circuit 506 may transmit an electrical signal converted from received audio data to the speaker, which converts the electrical signal into a sound signal for output; on the other hand, the microphone converts a collected sound signal into an electrical signal, which is received by the audio circuit 506 and converted into audio data; the audio data is then processed by the processor 508 and sent, for example, to another electronic device via the RF circuit 501, or output to the memory 502 for further processing. The audio circuit 506 may also include an earbud jack to provide communication between a peripheral headset and the electronic device.
WiFi is a short-range wireless transmission technology; through the WiFi module 507, the electronic device can help the user to receive and send emails, browse webpages, access streaming media and the like, and provides the user with wireless broadband internet access. Although fig. 5 shows the WiFi module 507, it can be understood that it is not an essential component of the electronic device and may be omitted as needed without changing the essence of the invention.
The processor 508 is the control center of the electronic device, connects various parts of the entire electronic device by using various interfaces and lines, and performs various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 502 and calling data stored in the memory 502, thereby monitoring the electronic device as a whole. Optionally, the processor 508 may include one or more processing cores; preferably, the processor 508 may integrate an application processor, which mainly handles the operating system, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communication. It can be understood that the modem processor described above may also not be integrated into the processor 508.
The electronic device also includes a power supply 509 (e.g., a battery) for powering the various components. Preferably, the power supply may be logically coupled to the processor 508 via a power management system, so as to manage charging, discharging, and power consumption via the power management system. The power supply 509 may also include any component such as one or more DC or AC power sources, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and the like.
Although not shown, the electronic device may further include a camera, a bluetooth module, and the like, which are not described in detail herein. Specifically, in this embodiment, the processor 508 in the electronic device loads the executable file corresponding to the process of one or more application programs into the memory 502 according to the following instructions, and the processor 508 runs the application programs stored in the memory 502, so as to implement various functions:
acquiring an initial camera pose transformation sequence of a target camera and an initial radar pose transformation sequence of a target radar, wherein the relative positions of the target camera and the target radar are unchanged, the initial camera pose transformation sequence comprises rotation transformation information and translation transformation information of the target camera in a camera coordinate system, and the initial radar pose transformation sequence comprises the rotation transformation information and the translation transformation information of the target radar in a radar coordinate system;
performing time synchronization processing on the initial camera pose transformation sequence and the initial radar pose transformation sequence to obtain a camera pose transformation sequence and a radar pose transformation sequence;
according to a first association relation that holds among the rotation transformation information in the camera pose transformation sequence, the rotation transformation information in the radar pose transformation sequence, and the rotation parameters required for converting the camera coordinate system into the radar coordinate system, determining a first rotation external parameter of the target camera in the rotation parameters;
determining a second rotation external parameter in the rotation parameters, the scale factor and the translation parameter according to the determined first rotation external parameter and a second association relation that holds among the rotation transformation information and the translation transformation information in the radar pose transformation sequence, the scale factor of the target camera, the translation transformation information in the camera pose transformation sequence, and the rotation parameters and translation parameters required for converting the camera coordinate system into the radar coordinate system;
correcting the camera pose transformation sequence according to the scale factors to obtain a corrected camera pose transformation sequence;
and recalculating the rotation parameters and the translation parameters according to the radar pose transformation sequence, the corrected camera pose transformation sequence, the first association relation and the second association relation to obtain target rotation parameters and target translation parameters.
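As a minimal illustration of how the first rotation external parameter might be recovered from the first association relation, assuming that relation takes the classical hand-eye form R_L·R = R·R_C (in quaternion form, q_L ⊗ q = q ⊗ q_C), the following Python sketch stacks one linear constraint per synchronized frame pair and extracts the null vector via SVD. All function names are illustrative, and the quaternion conversion assumes rotation angles well below 180 degrees:

```python
import numpy as np

def quat_from_rot(R):
    # Rotation matrix -> unit quaternion (w, x, y, z); assumes the rotation
    # angle is well below 180 degrees so that w does not vanish.
    w = np.sqrt(max(0.0, 1.0 + np.trace(R))) / 2.0
    return np.array([w,
                     (R[2, 1] - R[1, 2]) / (4 * w),
                     (R[0, 2] - R[2, 0]) / (4 * w),
                     (R[1, 0] - R[0, 1]) / (4 * w)])

def rot_from_quat(q):
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z), 2 * (x * z + w * y)],
        [2 * (x * y + w * z), 1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (x * z - w * y), 2 * (y * z + w * x), 1 - 2 * (x * x + y * y)]])

def solve_rotation_extrinsic(R_lidar, R_cam):
    """Solve R in R_L @ R = R @ R_C over all synchronized frame pairs.

    In quaternions, q_L ⊗ q = q ⊗ q_C is linear in q: (L(q_L) - R(q_C)) q = 0,
    so the extrinsic quaternion is the right singular vector of the stacked
    matrix associated with the smallest singular value.
    """
    rows = []
    for RL, RC in zip(R_lidar, R_cam):
        w, x, y, z = quat_from_rot(RL)
        Lm = np.array([[w, -x, -y, -z],
                       [x, w, -z, y],
                       [y, z, w, -x],
                       [z, -y, x, w]])       # left-multiplication matrix L(q_L)
        w, x, y, z = quat_from_rot(RC)
        Rm = np.array([[w, -x, -y, -z],
                       [x, w, z, -y],
                       [y, -z, w, x],
                       [z, y, -x, w]])       # right-multiplication matrix R(q_C)
        rows.append(Lm - Rm)
    _, _, Vt = np.linalg.svd(np.vstack(rows))
    return rot_from_quat(Vt[-1])             # null vector = extrinsic quaternion
```

For the null space to be one-dimensional, the pose sequence must contain rotations about at least two non-parallel axes, which is the usual observability condition for hand-eye calibration.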
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions or by associated hardware controlled by the instructions, which may be stored in a computer readable storage medium and loaded and executed by a processor.
To this end, the present invention provides a storage medium, in which a plurality of instructions are stored, where the instructions can be loaded by a processor to execute the steps in any one of the camera calibration methods provided by the embodiments of the present invention. For example, the instructions may perform the steps of:
acquiring an initial camera pose transformation sequence of a target camera and an initial radar pose transformation sequence of a target radar, wherein the relative positions of the target camera and the target radar are unchanged, the initial camera pose transformation sequence comprises rotation transformation information and translation transformation information of the target camera in a camera coordinate system, and the initial radar pose transformation sequence comprises the rotation transformation information and the translation transformation information of the target radar in a radar coordinate system;
performing time synchronization processing on the initial camera pose transformation sequence and the initial radar pose transformation sequence to obtain a camera pose transformation sequence and a radar pose transformation sequence;
according to a first association relation that holds among the rotation transformation information in the camera pose transformation sequence, the rotation transformation information in the radar pose transformation sequence, and the rotation parameters required for converting the camera coordinate system into the radar coordinate system, determining a first rotation external parameter of the target camera in the rotation parameters;
determining a second rotation external parameter in the rotation parameters, the scale factor and the translation parameter according to the determined first rotation external parameter and a second association relation that holds among the rotation transformation information and the translation transformation information in the radar pose transformation sequence, the scale factor of the target camera, the translation transformation information in the camera pose transformation sequence, and the rotation parameters and translation parameters required for converting the camera coordinate system into the radar coordinate system;
correcting the camera pose transformation sequence according to the scale factors to obtain a corrected camera pose transformation sequence;
and recalculating the rotation parameters and the translation parameters according to the radar pose transformation sequence, the corrected camera pose transformation sequence, the first association relation and the second association relation to obtain target rotation parameters and target translation parameters.
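By way of illustration only, the time-synchronization step in the instructions above can be approximated by nearest-timestamp matching; the function name, the timestamp representation (seconds), and the `max_dt` tolerance in this Python sketch are assumptions for the example, not requirements of the embodiment:

```python
import numpy as np

def synchronize(cam_stamps, lidar_stamps, max_dt=0.05):
    """Nearest-timestamp matching sketch for the time-synchronization step.

    Assumes each pose in the two sequences carries a timestamp in seconds.
    Returns index pairs (i_cam, i_lidar) whose timestamps differ by at most
    `max_dt`; the paired poses form the synchronized camera and radar pose
    transformation sequences.
    """
    pairs = []
    lidar = np.asarray(lidar_stamps)
    for i, tc in enumerate(cam_stamps):
        j = int(np.argmin(np.abs(lidar - tc)))   # closest radar timestamp
        if abs(lidar[j] - tc) <= max_dt:
            pairs.append((i, j))
    return pairs
```

Camera frames with no radar pose inside the tolerance are simply dropped, so only well-aligned pose pairs enter the subsequent solves.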
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
Wherein the storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
Since the instructions stored in the storage medium can execute the steps in any camera calibration method provided in the embodiments of the present invention, the beneficial effects that can be achieved by any camera calibration method provided in the embodiments of the present invention can be achieved, which are detailed in the foregoing embodiments and will not be described herein again.
According to an aspect of the application, there is also provided a computer program product or a computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the method provided in the various alternative implementations in the embodiments described above.
An embodiment of the present invention further provides an intelligent vehicle, as shown in fig. 6, which shows a schematic structural diagram of the intelligent vehicle according to the embodiment of the present invention, specifically:
the smart vehicle may include a vehicle body 601, a sensing device 602, an execution device 603, and an in-vehicle processing device 604. Those skilled in the art will appreciate that the smart vehicle structure shown in fig. 6 does not constitute a limitation of the smart vehicle, and the smart vehicle may include more or fewer components than those shown, or combine certain components, or adopt a different arrangement of components. Wherein:
the vehicle body 601 is the body structure of the smart vehicle, and may include hardware structures such as a frame, doors, a shell, and interior seats.
The sensing device 602 is the sensing structure of the smart vehicle, and is used for sensing internal state information of the smart vehicle and environmental information of the external driving environment. Specifically, it may include a wheel speed sensor, a positioning device, a tire pressure sensor, a target radar, a target camera, and the like.
The execution device 603 is a structure for executing the running functions of the smart vehicle, and may include power devices such as an engine, a power battery and a transmission structure; display devices such as a display screen and a sound device; a steering device such as a steering wheel; and tires.
The in-vehicle processing device 604 is the "brain" of the smart vehicle, and integrates a control device for controlling vehicle operation parameters such as vehicle speed, direction and acceleration, a vehicle running safety monitoring device for monitoring the running state of the unmanned vehicle, an information acquisition device for analyzing the information sensed by the sensing device, a planning device for planning a vehicle running route, and the like.
The execution device, the sensing device and the in-vehicle processing device are all mounted on the vehicle body, and the in-vehicle processing device is connected with the execution device and the sensing device through a bus, so that the in-vehicle processing device can execute the steps in any camera calibration method provided by the embodiments of the present application, and can therefore achieve the beneficial effects that can be achieved by any camera calibration method provided by the embodiments of the present application. For example, the in-vehicle processing device can execute the following steps:
acquiring an initial camera pose transformation sequence of a target camera and an initial radar pose transformation sequence of a target radar, wherein the relative positions of the target camera and the target radar are unchanged, the initial camera pose transformation sequence comprises rotation transformation information and translation transformation information of the target camera in a camera coordinate system, and the initial radar pose transformation sequence comprises the rotation transformation information and the translation transformation information of the target radar in a radar coordinate system;
performing time synchronization processing on the initial camera pose transformation sequence and the initial radar pose transformation sequence to obtain a camera pose transformation sequence and a radar pose transformation sequence;
according to a first association relation that holds among the rotation transformation information in the camera pose transformation sequence, the rotation transformation information in the radar pose transformation sequence, and the rotation parameters required for converting the camera coordinate system into the radar coordinate system, determining a first rotation external parameter of the target camera in the rotation parameters;
determining a second rotation external parameter in the rotation parameters, the scale factor and the translation parameter according to the determined first rotation external parameter and a second association relation that holds among the rotation transformation information and the translation transformation information in the radar pose transformation sequence, the scale factor of the target camera, the translation transformation information in the camera pose transformation sequence, and the rotation parameters and translation parameters required for converting the camera coordinate system into the radar coordinate system;
correcting the camera pose transformation sequence according to the scale factors to obtain a corrected camera pose transformation sequence;
and recalculating the rotation parameters and the translation parameters according to the radar pose transformation sequence, the corrected camera pose transformation sequence, the first association relation and the second association relation to obtain target rotation parameters and target translation parameters.
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
The camera calibration method and apparatus, the electronic device, the intelligent vehicle, and the storage medium provided by the embodiments of the present invention are described in detail above. Specific examples are applied herein to explain the principles and implementations of the present invention, and the description of the above embodiments is only intended to help understand the method and core idea of the present invention. Meanwhile, for those skilled in the art, there may be variations in the specific implementation and application scope according to the idea of the present invention. In summary, the content of this specification should not be construed as a limitation of the present invention.