CN116012428A - Method, device and storage medium for radar-vision joint positioning - Google Patents

Method, device and storage medium for radar-vision joint positioning

Info

Publication number
CN116012428A
CN116012428A (application CN202211660709.5A)
Authority
CN
China
Prior art keywords
coordinate system
radar
camera
information
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211660709.5A
Other languages
Chinese (zh)
Inventor
曹林
庄晟彬
赵宗民
杜康宁
张黎
王东峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Information Science and Technology University
Original Assignee
Beijing Information Science and Technology University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Information Science and Technology University
Priority to CN202211660709.5A
Publication of CN116012428A
Pending legal-status Critical Current


Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30: Computing systems specially adapted for manufacturing

Landscapes

  • Image Processing (AREA)

Abstract

The embodiments of the specification provide a method, a device and a storage medium for radar-vision joint positioning, which can be applied to the technical field of intelligent transportation. The method comprises the following steps: performing internal reference calibration on the camera and the radar respectively, the camera and the radar corresponding to a camera coordinate system and a radar coordinate system respectively; determining control points based on a set of reference points and radar information, the radar information comprising radar speed information, radar depth information and radar rotation information acquired by the radar; determining the association relationship between the control points under the world coordinate system and the camera coordinate system; determining a center fusion relationship between the camera coordinate system and the world coordinate system; and mapping the radar information to the image coordinate system of the camera by integrating the association relationship and the center fusion relationship. The method solves the problem of uncertainty in control point propagation, reduces errors in the positioning process, improves the effect of radar-vision joint positioning, and facilitates positioning of surrounding vehicles in practical applications.

Description

Method, device and storage medium for radar-vision joint positioning
Technical Field
The embodiments of the specification relate to the technical field of intelligent transportation, and in particular to a method, a device and a storage medium for radar-vision joint positioning.
Background
In intelligent transportation application scenarios, perceiving and positioning the objects around a vehicle is one of the key technologies. Currently, a camera and a millimeter-wave radar are generally used for sensing. Since the camera has difficulty measuring the position and speed of a target, and the millimeter-wave radar cannot distinguish target types, the sensing effect is generally optimized through multi-sensor fusion.
In practical applications, because the positions of the camera, the millimeter-wave radar and the object to be detected cannot be in an absolutely ideal state, the image and the radar information do not correspond to each other exactly, and directly fusing them causes the combined information to lack accuracy. For example, directly superimposing the radar information onto the image leads to poor correlation between the image and the center of the radar information, which degrades the positioning effect. Therefore, a method for optimizing the effect of radar-vision joint positioning is needed.
Disclosure of Invention
An objective of the embodiments of the present disclosure is to provide a method, an apparatus, and a storage medium for radar-vision joint positioning, so as to solve the problem of how to optimize the radar-vision joint positioning effect.
In order to solve the above technical problem, an embodiment of the present disclosure provides a radar-vision joint positioning method, including: performing internal reference calibration on the camera and the radar respectively; the camera and the radar correspond to a camera coordinate system and a radar coordinate system respectively; determining control points based on a set of reference points and radar information; the radar information comprises radar speed information, radar depth information and radar rotation information acquired by the radar; determining the association relationship between the control points under the world coordinate system and the camera coordinate system; determining a center fusion relationship between the camera coordinate system and the world coordinate system; and mapping the radar information to the image coordinate system of the camera by integrating the association relationship and the center fusion relationship.
The embodiments of the specification also provide a radar-vision joint positioning device, comprising: an internal reference calibration module for performing internal reference calibration on the camera and the radar respectively, the camera and the radar corresponding to a camera coordinate system and a radar coordinate system respectively; a control point determination module for determining control points based on a set of reference points and radar information, the radar information comprising radar speed information, radar depth information and radar rotation information acquired by the radar; an association relationship determination module for determining the association relationship between the control points under the world coordinate system and the camera coordinate system; a center fusion relationship determination module for determining a center fusion relationship between the camera coordinate system and the world coordinate system; and an information mapping module for mapping the radar information to the image coordinate system of the camera by integrating the association relationship and the center fusion relationship.
The embodiments of the present specification also provide a computer storage medium on which a computer program is stored, which, when executed, implements the steps of the above radar-vision joint positioning method.
As can be seen from the technical scheme provided by the embodiments of the specification, when radar-vision joint positioning is performed, internal reference calibration is first carried out on the camera and the radar respectively, ensuring that the parameters of the camera and the radar are correct. Then, the control points are determined in combination with the radar information, so that the control points are associated with the radar information; the association relationship between the control points, the camera coordinate system and the world coordinate system is then determined, which guarantees a strong association between the control points and the different coordinate systems and introduces the association between the radar coordinate system and the coordinate systems corresponding to the camera. Finally, by determining the center fusion relationship between the camera coordinate system and the world coordinate system and combining it with the association relationship, the radar information is mapped into the image acquired by the camera, so that objects in the image also carry information such as depth and speed, realizing radar-vision joint positioning. In this way, the problem of uncertainty in control point propagation is solved, errors in the positioning process are reduced, the effect of radar-vision joint positioning is improved, and surrounding vehicles can be positioned in practical applications.
Drawings
In order to more clearly illustrate the embodiments of the present description or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments described in the present description, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a radar-vision joint positioning method in an embodiment of the present disclosure;
FIG. 2 is a schematic diagram illustrating the conversion from a global world coordinate system to a camera coordinate system according to an embodiment of the present disclosure;
FIG. 3 is a schematic view of a camera-to-image coordinate system plane mapping in an embodiment of the present disclosure;
FIG. 4 is a schematic diagram illustrating the establishment of a relationship between control points in a world coordinate system according to an embodiment of the present disclosure;
FIG. 5 is a block diagram of a radar-vision joint positioning device according to an embodiment of the present disclosure.
Detailed Description
The technical solutions of the embodiments of the present specification will be clearly and completely described below with reference to the drawings in the embodiments of the present specification, and it is apparent that the described embodiments are only some embodiments of the present specification, not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are intended to be within the scope of the present disclosure.
In order to solve the above technical problem, an embodiment of the present disclosure provides a radar-vision joint positioning method. Specifically, the method can be implemented by a radar-vision joint positioning device, which can be applied to intelligent transportation vehicles to detect driving information of such vehicles, such as the distribution of surrounding vehicles. As shown in fig. 1, the radar-vision joint positioning method includes the following implementation steps.
S110: performing internal reference calibration on the camera and the radar respectively; the camera and the radar correspond to a camera coordinate system and a radar coordinate system respectively.
Because the manufacturing of the equipment cannot reach an absolutely ideal state, when the camera captures an image or the radar acquires sensing information, there is a certain difference between the obtained data and the real data. For example, different images captured by a single camera may exhibit various forms of deformation, known as image distortion. In order to minimize the errors produced by the devices, internal reference calibration of the camera and the radar is required.
When calibrating the internal references of the camera, the BRIEF (Binary Robust Independent Elementary Features) algorithm from computer vision can be used to detect targets in the two-dimensional image acquired by the camera, extract their information, and map that information into three-dimensional space. The optical path from the target object to the camera, and even inside the camera, needs to be considered, and the physical path of the light as a whole must be modeled. To model the optical model of the camera so that a target in the three-dimensional world can be projected onto the two-dimensional image plane, what is required is the transfer and unification of several coordinate systems.
When extracting features from a two-dimensional image acquired by the camera using the BRIEF algorithm, the image may first be Gaussian filtered to reduce noise interference.
Then, a neighborhood window of a certain size, such as an S×S window, is taken centered on the feature point. A pair of points is randomly selected in the window, and a binary value is assigned by comparing the pixel intensities of the two points. Specifically, let p(x) and p(y) be the intensities at the random points x = (u_1, v_1) and y = (u_2, v_2); the assignment follows the binary test
τ(p; x, y) = 1 if p(x) < p(y), and 0 otherwise.
Then n = 256 pairs of random points are selected from the neighborhood window, ensuring that the selected points x_i, y_i all follow a Gaussian distribution, while the sampling criterion obeys an isotropic Gaussian distribution. The above assignment is repeated for the random point pairs to form a binary code, which constitutes the descriptor of the feature point.
The method greatly optimizes the speed of the algorithm and ensures the accuracy of the obtained binary codes.
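To make the above steps concrete, the following Python sketch (NumPy and OpenCV assumed; the 5×5 Gaussian kernel, the σ = S/5 sampling spread and the default window size are illustrative choices, not values taken from the patent) computes a BRIEF-style binary descriptor around one feature point.

```python
import numpy as np
import cv2

def brief_descriptor(image, keypoint, S=31, n_pairs=256, seed=0):
    """BRIEF-style descriptor: Gaussian-smooth the image, sample n point pairs
    from an isotropic Gaussian inside an SxS window around the keypoint, and
    binarize by comparing pixel intensities."""
    rng = np.random.default_rng(seed)
    blurred = cv2.GaussianBlur(image, (5, 5), 2.0)   # suppress noise before sampling
    cx, cy = int(keypoint[0]), int(keypoint[1])      # keypoint assumed > S//2 px from the border
    half = S // 2
    # Point pairs (x_i, y_i) drawn i.i.d. from an isotropic Gaussian (sigma = S/5).
    pairs = rng.normal(0.0, S / 5.0, size=(n_pairs, 2, 2))
    pairs = np.clip(np.rint(pairs), -half, half).astype(int)
    bits = np.empty(n_pairs, dtype=np.uint8)
    for i, (x, y) in enumerate(pairs):
        # tau(p; x, y): 1 if the intensity at x is smaller than at y, else 0.
        bits[i] = 1 if blurred[cy + x[1], cx + x[0]] < blurred[cy + y[1], cx + y[0]] else 0
    return np.packbits(bits)                         # 256 bits -> 32-byte descriptor
```

The image is assumed to be a single-channel array; repeating this over all detected feature points yields the binary codes described above.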
After the image features are obtained through the above steps, coordinate system conversion can be performed to ensure that all data can be fused in the same coordinate system. The positive central axis of the camera coordinate system coincides with the optical axis of the lens: it passes through the center of the camera lens and the center of the image and is perpendicular to the image plane. After the four key coordinate systems are introduced, the transformation relationship between them can be modeled.
Specifically, the conversion relationship between the world coordinate system and the camera coordinate system, the camera coordinate system and the image coordinate system, and the image coordinate system and the pixel coordinate system can be sequentially determined.
The world global coordinate system and the camera coordinate system are two independent three-dimensional coordinate systems, and according to geometric principles any two three-dimensional coordinate systems can be brought into the same coordinate system through a rotation and a translation. An orthonormal matrix R expresses the rotational transformation from the world global coordinate system to the camera coordinate system, and a column vector t expresses the translational transformation. The rotation matrix R and the translation vector t constitute the extrinsic matrix between the camera and the world global coordinate system. The conversion from the world global coordinate system to the camera coordinate system is shown in fig. 2. Specifically, the conversion relationship between the world coordinate system and the camera coordinate system is
[x_c, y_c, z_c]^T = R · [x_w, y_w, z_w]^T + t
where x_c, y_c, z_c are the coordinates in the camera coordinate system, x_w, y_w, z_w are the coordinates in the world coordinate system, R is the rotation matrix, and t is the translation vector.
For the conversion from the camera coordinate system to the image coordinate system, the lens is assumed to pass light in the same way as a pinhole: the light from an object point passes through the aperture and then strikes the image plane, and the distance from the center of the lens to the image plane is the focal length f. According to optical principles, the essence of pinhole imaging is a projective transformation from three dimensions to two dimensions, i.e. the point where the light crosses the image plane. In fig. 3 the image plane is placed in front of the projection center, so that the position information of the target object becomes two-dimensional after projection; point P is a coordinate in the image coordinate system and point M is a coordinate in the world coordinate system. Specifically, the conversion relationship between the camera coordinate system and the image coordinate system is
x_p = f · x_c / z_c,  y_p = f · y_c / z_c
where x_p, y_p are the coordinates in the image coordinate system, x_c, y_c, z_c are the coordinates in the camera coordinate system, and f is the focal length of the camera.
For the conversion from the image coordinate system to the pixel coordinate system, both are two-dimensional coordinate systems lying in the same plane, but their origins differ: in actual camera imaging the origin of the image coordinate system is offset from the image midpoint by a certain amount. Assume that the projection of an object point from the world global coordinate system into the image coordinate system is (x_p, y_p). The conversion relationship between the pixel coordinate system and the image coordinate system is
u = x_p / d_x + u_0,  v = y_p / d_y + v_0
where u and v are the coordinates in the pixel coordinate system, x_p, y_p are the coordinates in the image coordinate system, (u_0, v_0) is the origin position of the image coordinate system, and d_x, d_y are the width and height of a pixel in the image coordinate system, respectively.
The result obtained from the above formulas is the internal reference matrix Cipm of the camera, specifically
Cipm = [[f/d_x, 0, u_0], [0, f/d_y, v_0], [0, 0, 1]].
Through the above process, the conversion relation of the world coordinate system, the camera coordinate system, the image coordinate system and the pixel coordinate system in sequence is completed.
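A minimal NumPy sketch of this conversion chain, assuming the extrinsics R, t and the intrinsic quantities f, d_x, d_y, u_0, v_0 are already known from the calibration above (symbol names follow the text; the values are placeholders):

```python
import numpy as np

def world_to_pixel(p_w, R, t, f, dx, dy, u0, v0):
    """Chain the three conversions: world -> camera -> image -> pixel."""
    p_c = R @ p_w + t                  # world to camera: [xc, yc, zc] = R*[xw, yw, zw] + t
    xp = f * p_c[0] / p_c[2]           # camera to image plane (pinhole projection)
    yp = f * p_c[1] / p_c[2]
    u = xp / dx + u0                   # image to pixel coordinates
    v = yp / dy + v0
    # In matrix form, the last two steps are one multiplication by the internal
    # reference matrix Cipm = [[f/dx, 0, u0], [0, f/dy, v0], [0, 0, 1]].
    return np.array([u, v])
```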
Accordingly, the millimeter-wave radar also requires calibration before use. In practical applications, this internal reference calibration is generally performed after the millimeter-wave radar is installed in the vehicle, and mainly concerns the yaw angle, pitch angle and roll angle of the radar. The pitch, roll and yaw angles are calibrated using devices such as a small level bar and a corner reflector. The vehicle carrying the radar is parked on level ground. The driving direction of the vehicle is determined from the vehicle body, a test line 5-20 m in front of the vehicle and perpendicular to the driving direction is determined, and the verticality and horizontality of the vehicle-mounted radar are calibrated with the small level bar. The corner reflector is then adjusted to the same height as the radar and placed at several points, and the angle values are read by the radar; finally, the pitch, roll and yaw angles are determined from the slope of, and the angle to, the straight line fitted to the data of these points, as sketched below.
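A sketch of the yaw-angle part of that procedure: the corner-reflector positions reported by the radar along the test line are fitted with a straight line, and the tilt of that line relative to the ideal perpendicular gives the yaw offset. The use of numpy.polyfit and the sign/axis conventions are assumptions, not taken from the patent.

```python
import numpy as np

def radar_yaw_from_reflectors(lateral, forward):
    """Corner reflectors are placed along a test line perpendicular to the
    driving direction. For a perfectly aligned radar the fitted line has zero
    slope in the (lateral, forward) plane; any residual slope is the yaw offset."""
    slope, _intercept = np.polyfit(lateral, forward, deg=1)
    return np.degrees(np.arctan(slope))

# Example: five reflector detections (metres) read by the radar along the test line.
yaw_deg = radar_yaw_from_reflectors(lateral=[-2.0, -1.0, 0.0, 1.0, 2.0],
                                    forward=[10.05, 10.02, 10.00, 9.97, 9.94])
```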
S120: determining a control point based on the set of reference points and the radar information; the radar information includes radar speed information, radar depth information and radar curl information acquired by the radar.
After the internal reference calibration is completed, external reference calibration is required for the radar system. The external reference calibration may be performed using the EPnP (Efficient Perspective-n-Point) algorithm. The EPnP algorithm involves the selection of control points, which in current practice are selected randomly. However, since the control points need to achieve joint calibration between the coordinate systems corresponding to the different sensors, randomly selected control points are not necessarily applicable to all of them; for example, control points chosen in the pixel coordinate system are not necessarily applicable to the radar information, so the correlation of the object centers is poor and a large number of overlapping depth values appear. Because the radar information must be enriched into the image in the subsequent steps, the association between the different coordinate systems needs to be strengthened through the control points.
Therefore, in the present embodiment, the control point is determined by combining radar information. Thus, the initial control point may be determined from the center of gravity of the reference point set and the radar information, and further other control points may be determined from the initial control point.
Specifically, the initial control point is selected using a formula that combines the center of gravity of the reference point set with the radar information, where n is the number of reference points, P_i^w denotes a reference point, f is the focal length of the camera, d_x is the width of a pixel point in the image coordinate system, γ is the deflection angle of the radar rotation information, X_r and Y_r denote the projected coordinates of the radar depth information, t_1 and t_2 are translation variables, and the formula also involves the center-of-gravity variable of the feature points in space under the world coordinate system.
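For comparison, the control-point initialization of the standard EPnP algorithm is sketched below: the first control point is the barycenter of the reference points and the other three lie along the principal directions of the point set. The radar-augmented selection of this embodiment additionally involves f, d_x, γ, X_r, Y_r and the translation variables t_1, t_2 as described above; the sketch does not attempt to reproduce that formula.

```python
import numpy as np

def epnp_control_points(P_w):
    """Standard EPnP control points: barycenter plus three points along the
    principal axes of the reference set P_w (n x 3 array of world points)."""
    c1 = P_w.mean(axis=0)                           # barycenter of the reference points
    A = P_w - c1
    # Principal directions and spreads of the centred point cloud.
    eigvals, eigvecs = np.linalg.eigh(A.T @ A / len(P_w))
    ctrl = [c1]
    for k in range(3):
        ctrl.append(c1 + np.sqrt(eigvals[k]) * eigvecs[:, k])
    return np.stack(ctrl)                           # 4 x 3 control points
```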
S130: and determining the association relation between the control points and the camera coordinate system under the world coordinate system.
After the control points are determined in the above manner, the association relationship between the coordinate systems can be determined based on them. Specifically, any three-dimensional point in the world coordinate system can be expressed as a linear combination of the 4 control points, and the same linear combination expresses that point in the camera coordinate system. Accordingly, the correspondence between the world coordinate system and the camera coordinate system can be determined based on a formula in which x_w, y_w, z_w are the coordinates in the world coordinate system, γ is the radar depth information, θ is the radar rotation information, f is the focal length of the camera, and x_c, y_c, z_c are the coordinates in the camera coordinate system.
The image information and radar information corresponding to the spatial object are parameterized and converted between coordinate systems; control points are selected in the coordinate system, and a connection is established between the control points and the other feature points in that coordinate system. As shown in fig. 4, the relationship between the camera coordinate system and the control points in the world coordinate system is established, i.e. the control points are transferred to the camera coordinate system together with the other feature points. The coordinates of the control points in the camera coordinate system are solved, the relationship between the radar coordinate system and the image coordinate system is then established, and, after the pose of the object to be measured under the camera and the radar has been solved using the depth, angle and rotation information, the radar sensor is jointly calibrated.
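In the EPnP formulation, the "connection" between the control points and the other feature points is a set of barycentric coefficients that sum to one and carry over unchanged from the world frame to the camera frame. A minimal sketch of computing them (standard EPnP, assumed here rather than quoted from the patent):

```python
import numpy as np

def barycentric_coords(P_w, ctrl_w):
    """Coefficients alpha (n x 4) such that each world point satisfies
    P_i^w = sum_j alpha_ij * c_j^w with the alphas summing to 1."""
    C = np.vstack([ctrl_w.T, np.ones((1, 4))])        # 4 x 4: control points plus affine row
    P = np.vstack([P_w.T, np.ones((1, len(P_w)))])    # 4 x n: points plus affine row
    alpha = np.linalg.solve(C, P).T                   # n x 4 barycentric coefficients
    return alpha

# Because the combination is affine, the same alphas express the feature points
# in the camera frame once the camera-frame control points have been solved.
```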
S140: a central fusion relationship between the camera coordinate system and the world coordinate system is determined.
Center position fusion is the spatial fusion step in radar-vision fusion: after the internal and external parameters of the two sensors have been calibrated, the centers of the two sensors in the same frame must be unified, laying the groundwork for the subsequent joint positioning task. The center fusion problem maps to the following mathematical problem: the world global coordinate system and the camera coordinate system are two independent three-dimensional coordinate systems, and according to geometric principles any two three-dimensional coordinate systems can be brought into the same coordinate system through a rotation and a translation. External reference calibration is in fact the process of acquiring image and millimeter-wave radar data and solving for the external parameter matrix M (composed of the rotation matrix R and the translation vector t) by minimizing the projection loss.
Specifically, the radar data are converted and the sensor-relative coordinates of each radar point are output. Before data fusion, the camera and the millimeter-wave radar must be spatially calibrated. After calibration is completed, all millimeter-wave radar points are projected onto the image plane using the obtained R and t. For each target in the image, the projected radar points falling inside its detection box are cut out to obtain a radar region of interest. Within this region, each radar point is matched with the target detected in the image, so that an accurate millimeter-wave radar point is matched to each image target and its position, speed and other information are acquired.
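A sketch of this spatial-fusion step, assuming the extrinsics (R, t) and the internal reference matrix Cipm are known from calibration; the box format (u_min, v_min, u_max, v_max) and the nearest-to-box-center matching rule are illustrative assumptions.

```python
import numpy as np

def project_radar_points(radar_xyz, R, t, Cipm):
    """Project millimeter-wave radar points (n x 3, radar/world frame) to pixels."""
    cam = (R @ radar_xyz.T).T + t                     # into the camera frame
    uv = (Cipm @ (cam / cam[:, 2:3]).T).T[:, :2]      # pinhole projection to (u, v)
    return uv

def match_boxes_to_radar(boxes, uv, radar_xyz):
    """For each detection box, keep the radar points whose projections fall inside
    it and attach the one closest to the box center."""
    matches = []
    for (u_min, v_min, u_max, v_max) in boxes:
        inside = ((uv[:, 0] >= u_min) & (uv[:, 0] <= u_max) &
                  (uv[:, 1] >= v_min) & (uv[:, 1] <= v_max))
        idx = np.flatnonzero(inside)
        if idx.size == 0:
            matches.append(None)                      # no radar return for this target
            continue
        center = np.array([(u_min + u_max) / 2, (v_min + v_max) / 2])
        best = idx[np.argmin(np.linalg.norm(uv[idx] - center, axis=1))]
        matches.append(radar_xyz[best])               # position (and, in practice, speed)
    return matches
```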
Multiple regions of interest may exist within the box of the same object, and the position of each region of interest on the associated image can overlap or interfere, leading to incorrect matches between the object and the point cloud. After the target is fused and positioned, a large deviation then appears between the real position and the computed position. Meanwhile, because the millimeter-wave radar point cloud is extremely sparse, it is difficult to filter it simply from the data side: when a density-based clustering algorithm is used to remove low-density points, the radar points of a single target often appear in isolation, so the point cloud of a single target is easily filtered out, causing missed localizations after fusion.
S150: and mapping radar information to an image coordinate system acquired by the camera by integrating the association relation and the center fusion relation.
After the association relationship and the center fusion relationship are determined, the relationships can be integrated to map radar information into images acquired by the camera.
Specifically, the mapping of radar information may proceed as follows: a target object, such as a target vehicle, is detected in the image acquired by the camera; the object position of the target object in the camera coordinate system is determined; that position is converted into the target position in the radar coordinate system; and finally the radar information at the target position is extracted and mapped onto the target object in the image.
When the data of the target object contain only the lateral distance x and the longitudinal distance y, the data dimension is reduced from 2 to 1 by the projection, i.e. only the pixel abscissa u in the pixel coordinate system can be obtained after projection. From this procedure, the equation of the projection conversion between the camera coordinate system and the millimeter-wave radar coordinate system is obtained, where u is the pixel abscissa of the target object in the pixel coordinate system, f is the focal length of the camera, d_x is the width of a pixel in the image coordinate system, γ is the yaw angle, X_w is the lateral distance of the target object in the radar coordinate system, Y_w is its longitudinal distance, t_x is the lateral translation vector, t_y is the longitudinal translation vector, and u_0 is the abscissa of the origin of the image coordinate system.
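The exact expression for u is given by the equation referenced above; the sketch below is one plausible reconstruction under standard assumptions (a planar rotation by the yaw angle γ, a translation (t_x, t_y), and a pinhole projection with focal length f and pixel width d_x), not a formula taken verbatim from the patent.

```python
import numpy as np

def radar_to_pixel_u(Xw, Yw, gamma, tx, ty, f, dx, u0):
    """Map a radar target (lateral Xw, longitudinal Yw) to the pixel abscissa u.
    Rotate by the yaw angle gamma, translate, then project: only u survives,
    so the 2-D radar position collapses to a 1-D pixel coordinate."""
    x_cam = np.cos(gamma) * Xw - np.sin(gamma) * Yw + tx   # lateral offset in the camera frame
    z_cam = np.sin(gamma) * Xw + np.cos(gamma) * Yw + ty   # depth along the optical axis
    return f * x_cam / (dx * z_cam) + u0
```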
The target height and width are estimated from the imaging model. Assume that the target height is h, the width is w, the distance between the target and the sensor is γ, the focal length of the camera is f, the pixel height of the target in the image is |y_1 - y_2| and its pixel width is |x_1 - x_2|. A correlation matching model is constructed from the width and height of the target in pixels, from which the coordinates of the radar information in the image coordinate system are obtained.
The joint target-radar coordinate system reflects the motion state of the object, so the previous and current time instants must both be considered. Let the state variable denote the change in the coordinate position of the target between the two instants, where the target undergoes uniformly accelerated motion with acceleration a_k and a sensor error e_k exists between the sensors. The motion model of the radar mapping under the image coordinate system can then be obtained, in which the position coordinate of the target object at the next instant is expressed through the coefficients A, B and C, the acceleration a_k of the target object, the sensor error e_k, and the H_k matrix of the L-BFGS (Limited-memory Broyden-Fletcher-Goldfarb-Shanno) algorithm; the optimal position motion estimation model for each instant is obtained simply by iterating the L-BFGS algorithm.
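A sketch of fitting such a motion model with L-BFGS, assuming a one-step form x_{k+1} ≈ A·x_k + B·a_k + C·e_k (the coefficient structure is an assumption; the patent's exact model is given by the equations referenced above) and using SciPy's L-BFGS-B implementation rather than a hand-written iteration:

```python
import numpy as np
from scipy.optimize import minimize

def fit_motion_model(positions, accel, err):
    """Fit coefficients (A, B, C) of x_{k+1} ~= A*x_k + B*a_k + C*e_k to a track
    of fused positions (1-D NumPy arrays) by minimising the squared one-step
    prediction error."""
    x_k, x_next = positions[:-1], positions[1:]

    def loss(theta):
        A, B, C = theta
        pred = A * x_k + B * accel[:-1] + C * err[:-1]
        return np.sum((x_next - pred) ** 2)

    res = minimize(loss, x0=np.array([1.0, 0.5, 0.0]), method="L-BFGS-B")
    return res.x   # estimated (A, B, C)

# positions, accel and err would come from the fused radar/image track; repeating
# this fit over time yields the per-instant position motion estimate.
```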
To summarize the above process: first, feature extraction and coordinate system conversion are performed with the BRIEF algorithm to complete the internal reference calibration of the camera, while a level bar and a corner reflector are used to complete the internal reference calibration of the millimeter-wave radar. Then, the image data and the radar information are integrated, external reference calibration of the sensors is performed, joint calibration is carried out with the EPnP algorithm, the center positions of the two coordinate systems are aligned, and finally the radar information is mapped onto the image data.
From the above embodiment it can be seen that, when radar-vision joint positioning is performed with this method, internal reference calibration is first carried out on the camera and the radar respectively, ensuring that their parameters are correct. Then, the control points are determined in combination with the radar information, so that the control points are associated with the radar information; the association relationship between the control points, the camera coordinate system and the world coordinate system is then determined, guaranteeing a strong association between the control points and the different coordinate systems and introducing the association between the radar coordinate system and the coordinate systems corresponding to the camera. Finally, by determining the center fusion relationship between the camera coordinate system and the world coordinate system and combining it with the association relationship, the radar information is mapped into the image acquired by the camera, so that objects in the image also carry information such as depth and speed, realizing radar-vision joint positioning. In this way, the problem of uncertainty in control point propagation is solved, errors in the positioning process are reduced, the effect of radar-vision joint positioning is improved, and surrounding vehicles can be positioned in practical applications.
Based on the above joint positioning method, the embodiments of the present specification also provide a radar-vision joint positioning device. As shown in fig. 5, the radar-vision joint positioning device may include the following modules.
The internal reference calibration module 510 is used for respectively calibrating internal references for the camera and the radar; the camera and the radar correspond to a camera coordinate system and a radar coordinate system respectively.
A control point determination module 520 for determining control points based on the set of reference points and the radar information; the radar information includes radar speed information, radar depth information and radar rotation information acquired by the radar.
The association determination module 530 is configured to determine an association between the control point and the camera coordinate system in the world coordinate system.
The center fusion relation determining module 540 is configured to determine a center fusion relation between the camera coordinate system and the world coordinate system.
And the information mapping module 550 is used for mapping the radar information to the image coordinate system acquired by the camera by integrating the association relation and the center fusion relation.
Based on the above joint positioning method, the embodiments of the present disclosure further provide a radar-vision joint positioning device. The radar-vision joint positioning device may include a memory and a processor.
In this embodiment, the memory may be implemented in any suitable manner. For example, the memory may be a read-only memory, a mechanical hard disk, a solid state hard disk, or a usb disk. The memory may be used to store computer program instructions.
In this embodiment, the processor may be implemented in any suitable manner. For example, the processor may take the form of, for example, a microprocessor or processor, and a computer-readable medium storing computer-readable program code (e.g., software or firmware) executable by the (micro) processor, logic gates, switches, an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a programmable logic controller, and an embedded microcontroller, among others. The processor may execute the computer program instructions to implement the steps of the radar-vision joint positioning method corresponding to fig. 1.
The present description also provides one embodiment of a computer storage medium. The computer storage medium includes, but is not limited to, random access memory (Random Access Memory, RAM), read-only memory (Read-Only Memory, ROM), cache (Cache), hard disk (HDD), memory card (Memory Card), and the like. The computer storage medium stores computer program instructions. When executed, the computer program instructions implement the steps of the embodiment corresponding to fig. 1 of the present specification.
The radar-vision joint positioning method introduced in this embodiment can be applied to the technical field of intelligent transportation as well as other technical fields, without limitation.
While the process flows described above include a plurality of operations occurring in a particular order, it should be apparent that the processes may include more or fewer operations, which may be performed sequentially or in parallel (e.g., using a parallel processor or a multi-threaded environment).
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the specification. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random Access Memory (RAM) and/or nonvolatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of computer-readable media.
Computer readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of storage media for a computer include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium which can be used to store information that can be accessed by a computing device. Computer-readable media, as defined herein, do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It will be appreciated by those skilled in the art that embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, the present specification embodiments may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present description embodiments may take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The present embodiments may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The embodiments of the specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments. In particular, for system embodiments, since they are substantially similar to method embodiments, the description is relatively simple, as relevant to see a section of the description of method embodiments. In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the embodiments of the present specification. In this specification, schematic representations of the above terms are not necessarily directed to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and changes may be made to the present application by those skilled in the art. Any modifications, equivalent substitutions, improvements, etc. which are within the spirit and principles of the present application are intended to be included within the scope of the claims of the present application.

Claims (10)

1. A radar-vision joint positioning method, comprising:
performing internal reference calibration on the camera and the radar respectively; the camera and the radar respectively correspond to a camera coordinate system and a radar coordinate system;
determining a control point based on the set of reference points and the radar information; the radar information comprises radar speed information, radar depth information and radar rotation information acquired by a radar;
determining the association relation between the control points and the camera coordinate system under the world coordinate system;
determining a central fusion relationship between a camera coordinate system and a world coordinate system;
and mapping radar information to an image coordinate system acquired by the camera by integrating the association relation and the center fusion relation.
2. The method of claim 1, wherein the performing internal reference calibration for the camera and the radar, respectively, comprises:
extracting image feature codes from images acquired by the camera using a BRIEF algorithm, wherein the random point selection in the BRIEF algorithm is modified so that the selected random points x_i and y_i follow a Gaussian distribution while the sampling criterion obeys an isotropic Gaussian distribution;
and determining the conversion relation among the world coordinate system, the camera coordinate system, the image coordinate system and the pixel coordinate system in sequence.
3. The method of claim 2, wherein the conversion relationship between the world coordinate system and the camera coordinate system is
[x_c, y_c, z_c]^T = R · [x_w, y_w, z_w]^T + t
wherein x_c, y_c, z_c are the coordinates in the camera coordinate system, x_w, y_w, z_w are the coordinates in the world coordinate system, R is a rotation matrix, and t is a translation vector;
the conversion relationship between the camera coordinate system and the image coordinate system is
x_p = f · x_c / z_c,  y_p = f · y_c / z_c
wherein x_p, y_p are the coordinates in the image coordinate system, x_c, y_c, z_c are the coordinates in the camera coordinate system, and f is the focal length of the camera;
the conversion relationship between the pixel coordinate system and the image coordinate system is
u = x_p / d_x + u_0,  v = y_p / d_y + v_0
wherein u and v are the coordinates in the pixel coordinate system, x_p, y_p are the coordinates in the image coordinate system, u_0, v_0 denote the origin position of the image coordinate system, and d_x, d_y are the width and height of a pixel in the image coordinate system, respectively.
4. The method of claim 1, wherein the determining a control point based on a set of reference points and radar information comprises:
selecting an initial control point according to the center of gravity of the reference point set, using a formula in which n is the number of reference points, P_i^w denotes a reference point, f is the focal length of the camera, d_x is the width of a pixel point in the image coordinate system, γ is the deflection angle of the radar rotation information, X_r and Y_r denote the projected coordinates of the radar depth information, t_1 and t_2 are translation variables, and the formula also involves the center-of-gravity variable of the feature points in space under the world coordinate system.
5. The method of claim 1, wherein determining the association relationship between the control points under the world coordinate system and the camera coordinate system comprises:
determining the correspondence between the world coordinate system and the camera coordinate system based on a formula in which x_w, y_w, z_w are the coordinates in the world coordinate system, γ is the radar depth information, θ is the radar rotation information, f is the focal length of the camera, and x_c, y_c, z_c are the coordinates in the camera coordinate system.
6. The method of claim 1, wherein said integrating the association and center fusion relationship to map radar information to an image coordinate system acquired by a camera comprises:
detecting a target object from an image acquired by a camera; the target object comprises a target vehicle;
determining an object position of a target object under a camera coordinate system;
converting the object position under the camera coordinate system into a target position under a radar coordinate system;
and extracting radar information at the target position and mapping the radar information to a target object in the image.
7. The method of claim 6, wherein determining the object position of the target object in the camera coordinate system comprises:
determining a transverse distance x and a longitudinal distance y of a target object under a camera coordinate system;
determining the pixel abscissa of the target object in the pixel coordinate system from a formula in which u is the pixel abscissa of the target object in the pixel coordinate system, f is the focal length of the camera, d_x is the width of a pixel in the image coordinate system, γ is the yaw angle, X_w is the lateral distance of the target object in the radar coordinate system, Y_w is the longitudinal distance of the target object in the radar coordinate system, t_x is a lateral translation vector, t_y is a longitudinal translation vector, and u_0 is the abscissa of the origin of the image coordinate system;
the converting the object position under the camera coordinate system into the target position under the radar coordinate system comprises:
determining the abscissa and the ordinate corresponding to the target position, where f is the focal length of the camera, h is the target object height, w is the target object width, |x_1 - x_2| is the pixel width of the target object in the image, and |y_1 - y_2| is the pixel height of the target object in the image.
8. The method of claim 6, wherein the extracting radar information at the target location to map to a target object in an image comprises:
mapping radar information into an image in combination with the motion state of a target object; wherein the motion estimation position of the target object is determined in combination with a motion model, in which the position coordinate of the target object at the next instant is expressed through the coefficients A, B and C, the acceleration a_k of the target object, the sensor error e_k, and the H_k matrix of the L-BFGS algorithm.
9. A radar-vision joint positioning device, comprising:
the internal reference calibration module is used for respectively calibrating internal references of the camera and the radar; the camera and the radar respectively correspond to a camera coordinate system and a radar coordinate system;
the control point determining module is used for determining a control point based on the reference point set and the radar information; the radar information comprises radar speed information, radar depth information and radar rotation information acquired by a radar;
the association relation determining module is used for determining the association relation between the control points and the camera coordinate system under the world coordinate system;
the center fusion relation determining module is used for determining a center fusion relation between the camera coordinate system and the world coordinate system;
and the information mapping module is used for integrating the association relation and the center fusion relation to map the radar information to an image coordinate system acquired by the camera.
10. A computer storage medium having stored thereon computer program instructions which, when executed, implement the steps of the radar-vision joint positioning method of any of claims 1-8.
CN202211660709.5A 2022-12-23 2022-12-23 Method, device and storage medium for radar-vision joint positioning Pending CN116012428A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211660709.5A CN116012428A (en) 2022-12-23 2022-12-23 Method, device and storage medium for radar-vision joint positioning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211660709.5A CN116012428A (en) 2022-12-23 2022-12-23 Method, device and storage medium for radar-vision joint positioning

Publications (1)

Publication Number Publication Date
CN116012428A true CN116012428A (en) 2023-04-25

Family

ID=86027561

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211660709.5A Pending CN116012428A (en) 2022-12-23 2022-12-23 Method, device and storage medium for radar-vision joint positioning

Country Status (1)

Country Link
CN (1) CN116012428A (en)


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116758150A (en) * 2023-05-15 2023-09-15 阿里云计算有限公司 Position information determining method and device
CN116758150B (en) * 2023-05-15 2024-04-30 阿里云计算有限公司 Position information determining method and device
CN117541910A (en) * 2023-10-27 2024-02-09 北京市城市规划设计研究院 Fusion method and device for urban road multi-radar data
CN117406185A (en) * 2023-12-14 2024-01-16 深圳市其域创新科技有限公司 External parameter calibration method, device and equipment between radar and camera and storage medium
CN117406185B (en) * 2023-12-14 2024-02-23 深圳市其域创新科技有限公司 External parameter calibration method, device and equipment between radar and camera and storage medium

Similar Documents

Publication Publication Date Title
CN109961468B (en) Volume measurement method and device based on binocular vision and storage medium
EP3517997B1 (en) Method and system for detecting obstacles by autonomous vehicles in real-time
CN116012428A (en) Method, device and storage medium for radar-vision joint positioning
CN113673282A (en) Target detection method and device
CN111123242B (en) Combined calibration method based on laser radar and camera and computer readable storage medium
KR20170139548A (en) Camera extrinsic parameters estimation from image lines
CN113256729B (en) External parameter calibration method, device and equipment for laser radar and camera and storage medium
CN105551020A (en) Method and device for detecting dimensions of target object
CN110458885B (en) Positioning system and mobile terminal based on stroke perception and vision fusion
WO2023035301A1 (en) A camera calibration method
CN113111513B (en) Sensor configuration scheme determining method and device, computer equipment and storage medium
CN117590362B (en) Multi-laser radar external parameter calibration method, device and equipment
CN110148205B (en) Three-dimensional reconstruction method and device based on crowdsourcing image
KR100933304B1 (en) An object information estimator using the single camera, a method thereof, a multimedia device and a computer device including the estimator, and a computer-readable recording medium storing a program for performing the method.
CN111986248B (en) Multi-vision sensing method and device and automatic driving automobile
CN116258687A (en) Data labeling method, system, device, electronic equipment and storage medium
CN113203424B (en) Multi-sensor data fusion method and device and related equipment
CN116385997A (en) Vehicle-mounted obstacle accurate sensing method, system and storage medium
Stănescu et al. Mapping the environment at range: implications for camera calibration
CN114755663A (en) External reference calibration method and device for vehicle sensor and computer readable storage medium
CN113834463A (en) Intelligent vehicle side pedestrian/vehicle monocular depth distance measuring method based on absolute size
CN112183378A (en) Road slope estimation method and device based on color and depth image
CN113450415B (en) Imaging equipment calibration method and device
CN112802117B (en) Laser radar and camera calibration parameter blind restoration method
US20240200953A1 (en) Vision based cooperative vehicle localization system and method for gps-denied environments

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination