CN116295353A - Positioning method, device and equipment of unmanned vehicle and storage medium - Google Patents


Info

Publication number
CN116295353A
Authority
CN
China
Prior art keywords
point cloud
camera
positioning mark
laser radar
matrix
Prior art date
Legal status
Pending
Application number
CN202310320159.0A
Other languages
Chinese (zh)
Inventor
刘平
周文彬
李庭潘
孙金泉
蔡登胜
Current Assignee
Guangxi Liugong Machinery Co Ltd
Original Assignee
Guangxi Liugong Machinery Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangxi Liugong Machinery Co Ltd
Priority to CN202310320159.0A
Publication of CN116295353A
Legal status: Pending

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/005 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 Instruments for performing navigational calculations
    • G01C21/206 Instruments for performing navigational calculations specially adapted for indoor navigation
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86 Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The invention discloses a positioning method, device, equipment and storage medium for an unmanned vehicle. The method comprises the following steps: acquiring images acquired by cameras on a vehicle and point clouds acquired by laser radars; acquiring the feature point pixel coordinates of a positioning mark in the vehicle running environment according to the images, and acquiring the feature point cloud coordinates of the positioning mark according to the point clouds; constructing a target error function according to the feature point pixel coordinates of each camera and the feature point cloud coordinates of each laser radar; and determining the pose of the unmanned vehicle according to the target error function. The positioning mark is observed by a plurality of cameras and a plurality of laser radars, which enlarges the field of view with which the vehicle observes the positioning mark and reduces the deployment cost of positioning marks, and the data of the camera and laser radar sensors are fused, so that the accuracy and stability of vehicle positioning are improved.

Description

Positioning method, device and equipment of unmanned vehicle and storage medium
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to a method, an apparatus, a device, and a storage medium for locating an unmanned vehicle.
Background
In the process of autonomous indoor operation of an unmanned engineering vehicle, the pose of the unmanned engineering vehicle in the working scene usually needs to be determined. At present, when the unmanned engineering vehicle is positioned in real time, positioning marks can be arranged at key positions in the environment, and the positions of the positioning marks are detected from the point cloud of a laser radar or from the pixel positions in a camera image of the driving scene, so that the pose of the vehicle body can be deduced in reverse.
However, current positioning methods based on positioning marks rely on only a single sensor. Because the field of view of a single sensor is narrow, a large number of positioning marks have to be deployed in the map, which increases the deployment cost. Relying only on a monocular camera, which lacks depth information, can lead to inaccurate positioning; relying only on the laser radar, the positioning coordinate information can be lost when the positioning mark is occluded by the working device, for example when a loader is unloading, which affects the accuracy of vehicle positioning.
Disclosure of Invention
The invention provides a positioning method, device, equipment and storage medium for an unmanned vehicle, so as to realize accurate positioning of the unmanned vehicle.
According to a first aspect of the present invention, there is provided a method of locating an unmanned vehicle, comprising: acquiring images acquired by cameras on a vehicle and point clouds acquired by laser radars;
Acquiring the pixel coordinates of the characteristic points of the positioning mark in the vehicle running environment according to the image, and acquiring the point cloud coordinates of the characteristic points of the positioning mark according to the point cloud;
constructing a target error function according to the characteristic point pixel coordinates of each camera and the characteristic point cloud coordinates of each laser radar;
and determining the pose of the unmanned vehicle according to the target error function.
According to another aspect of the present invention, there is provided a positioning device of an unmanned vehicle, including: the multi-sensor acquisition module is used for acquiring images acquired by cameras on the vehicle and point clouds acquired by the laser radars;
the positioning mark coordinate acquisition module is used for acquiring the characteristic point pixel coordinates of the positioning mark in the vehicle running environment according to the image and acquiring the characteristic point cloud coordinates of the positioning mark according to the point cloud;
the target error function construction module is used for constructing a target error function according to the characteristic point pixel coordinates of each camera and the characteristic point cloud coordinates of each laser radar;
and the vehicle pose determining module is used for determining the pose of the unmanned vehicle according to the target error function.
According to another aspect of the present invention, there is provided an electronic apparatus including:
At least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the method of any one of the embodiments of the present invention.
According to another aspect of the invention, there is provided a computer readable storage medium storing computer instructions for causing a processor to perform the method according to any of the embodiments of the invention.
According to the technical scheme, the positioning marks are acquired through the cameras and the laser radars, the field of view of the vehicle for observing the positioning marks is increased, the deployment cost of the positioning marks is reduced, and the data of the cameras and the laser radar sensors are fused, so that the accuracy and the stability of vehicle positioning are improved.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the invention or to delineate the scope of the invention. Other features of the present invention will become apparent from the description that follows.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a positioning method of an unmanned vehicle according to a first embodiment of the present invention;
fig. 2 is a flowchart of a positioning method of an unmanned vehicle according to a second embodiment of the present invention;
fig. 3 is a schematic structural view of a positioning device for an unmanned vehicle according to a third embodiment of the present invention;
fig. 4 is a schematic structural diagram of an electronic device according to a fourth embodiment of the present invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings of the embodiments. It is apparent that the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without making any inventive effort shall fall within the scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
Fig. 1 is a flowchart of a positioning method of an unmanned vehicle according to an embodiment of the present invention, where the embodiment is applicable to a case of positioning the unmanned vehicle, the method may be performed by a positioning device of the unmanned vehicle, and the device may be implemented in a form of hardware and/or software. As shown in fig. 1, the method includes:
Step S101, acquiring images acquired by each camera and point clouds acquired by each lidar on the vehicle.
Specifically, in this embodiment, a plurality of cameras and a plurality of laser radars may be disposed at different positions on the unmanned vehicle. During autonomous operation of the unmanned vehicle, each camera and each laser radar are started: each camera captures the environment scene during vehicle travel and outputs a frame of image, and meanwhile each laser radar detects the environment scene during vehicle travel and outputs a frame of point cloud.
The control equipment of the vehicle receives each frame of image acquired by each camera and each frame of point cloud detected by each laser radar, and analyzes and processes them so as to locate the positioning mark during vehicle travel.
Step S102, obtaining the pixel coordinates of the characteristic points of the positioning mark in the vehicle running environment according to the image, and obtaining the point cloud coordinates of the characteristic points of the positioning mark according to the point cloud.
Optionally, acquiring, according to the image, the feature point pixel coordinates of the positioning mark in the vehicle running environment includes: determining a deep learning algorithm, and identifying the image by adopting the deep learning algorithm to obtain the positioning mark, wherein the deep learning algorithm comprises a neural network; and determining the feature point pixel coordinates of the positioning mark under the camera coordinates.
Specifically, for each frame of image sent by each camera, a deep learning algorithm, for example a neural network of the yolov5-face type, is adopted to identify the positioning mark contained in the frame of image sent by camera A; during identification, the identifier (ID) of the positioning mark can also be determined, i.e., which specific positioning mark in the running environment has been observed.
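As an illustration of this step only, the following sketch assumes a hypothetical detector object with a predict(image) method returning marker IDs and landmark pixel coordinates; the interface, field names and output layout are assumptions, not the patent's implementation or a real yolov5 API.

```python
import numpy as np

def detect_marker_pixels(image, detector):
    """Run a trained positioning-mark detector (e.g. a yolov5-face-style network,
    wrapped behind a hypothetical `predict` method) on one camera frame and return
    {marker_id: (N, 2) pixel coordinates of the mark's feature points}."""
    results = {}
    for det in detector.predict(image):            # assumed: list of detection dicts
        results[det["marker_id"]] = np.asarray(det["landmarks"], dtype=float)
    return results
```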
Optionally, obtaining the feature point cloud coordinates of the positioning mark according to the point cloud includes: determining the intensity information of each point in the point cloud, and clipping the point cloud according to the intensity information to obtain a target point cloud; clustering the target point cloud to obtain a single positioning mark or multiple positioning marks; acquiring the position, under the vehicle coordinate system at the previous moment, of the point cloud of a known positioning mark in the map; determining the target positioning mark corresponding to the target point cloud according to the position of the positioning mark at the previous moment and the point cloud position of the positioning mark at the current moment; and performing matching processing according to the point cloud of the target positioning mark and the point cloud of the positioning mark template, and determining the feature point cloud coordinates of the positioning mark under the laser radar coordinate system.
Optionally, the matching processing according to the point cloud of the target positioning mark and the positioning mark template, and the determining of the feature point cloud coordinates of the positioning mark under the laser radar coordinate system, include: performing coarse position matching adjustment on the point cloud of the target positioning mark and the point cloud of the positioning mark template by adopting the principal component analysis (PCA) method, to obtain a coarsely matched first rotation translation matrix and a position-adjusted point cloud; performing fine matching on the position-adjusted point cloud and the point cloud of the positioning mark template by adopting the iterative closest point (ICP) algorithm, to obtain a finely matched second rotation translation matrix; and acquiring the feature point cloud coordinates of the target positioning mark according to the known corner point coordinates of the positioning mark template point cloud, the first rotation translation matrix and the second rotation translation matrix.
Specifically, in this embodiment, point cloud preprocessing is performed on each frame of point cloud sent by each laser radar to obtain the feature point cloud coordinates of the positioning mark, specifically the coordinates in the laser radar coordinate system. When each frame of point cloud of each laser radar is preprocessed, the intensity information of each point is determined first. For example, one frame of point cloud sent by laser radar C contains 100 points, and the position information and the intensity information of each point are known. Because the point cloud detected by the laser radar may also include obstacles that are not positioning marks, such as roads, vehicles or cargo, and the points corresponding to such obstacles are interference information for locating the positioning mark, an intensity threshold for positioning mark points is preset, and the points below the intensity threshold among the 100 points are clipped, so that the points corresponding to the obstacles are deleted; for example, 40 points remain after clipping, all of which form the target point cloud corresponding to positioning marks. However, it is not yet known how many positioning marks these 40 target points correspond to, nor which specific positioning mark each of them belongs to, so the intensity-clipped point cloud also needs to be identified. The clipped target point cloud is clustered to determine how many positioning marks the 40 target points correspond to and which points belong to each positioning mark; for example, there are two positioning marks among the 40 target points, one containing 25 points and the other containing 15 points. When the clustered point cloud acquired by laser radar C is identified, the positions of the positioning marks in the map can be acquired and converted into positions under the vehicle coordinate system at the previous moment, and nearest-neighbor matching is performed against the position of each positioning mark point cloud at the current moment; if the position is nearest and the distance is smaller than a certain threshold, the matching is successful, and the identifier of the successfully matched positioning mark in the map is assigned as the identifier of the positioning mark at the current moment. For example: if the identifier of the successfully matched positioning mark in the map is 001 and the positioning mark at the current moment that matches it contains 25 points, it is determined that the 25 target points acquired at the current moment correspond to positioning mark 001; if the identifier of the successfully matched positioning mark in the map is 002 and the positioning mark at the current moment that matches it contains 15 points, it is determined that the 15 target points acquired at the current moment correspond to positioning mark 002.
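A minimal sketch of this preprocessing under several assumptions: the lidar points are already expressed in the vehicle coordinate system, scikit-learn's DBSCAN is used for clustering, and map_markers_prev_vehicle is a hypothetical dictionary of known marker positions converted to the vehicle frame at the previous moment; the thresholds are illustrative, not values from the patent.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def identify_marker_clusters(points_xyz, intensities, map_markers_prev_vehicle,
                             intensity_thresh=100.0, match_dist_thresh=0.5):
    """points_xyz: (N, 3) points of one lidar frame (assumed already in the vehicle frame);
    intensities: (N,) reflectivity values;
    map_markers_prev_vehicle: {marker_id: (3,) marker position in the vehicle frame
    at the previous moment}.  Returns {marker_id: (M, 3) clipped cluster points}."""
    # 1) clip by intensity: keep only high-reflectivity points (candidate marks)
    keep = intensities >= intensity_thresh
    target = points_xyz[keep]
    if len(target) == 0:
        return {}

    # 2) cluster the clipped target cloud into one or more candidate marks
    labels = DBSCAN(eps=0.3, min_samples=5).fit_predict(target)

    # 3) nearest-neighbor match each cluster centroid to a known marker position
    result = {}
    for lbl in set(labels):
        if lbl == -1:                           # DBSCAN noise points
            continue
        cluster = target[labels == lbl]
        centroid = cluster.mean(axis=0)
        best_id, best_d = None, np.inf
        for marker_id, pos in map_markers_prev_vehicle.items():
            d = np.linalg.norm(centroid - np.asarray(pos))
            if d < best_d:
                best_id, best_d = marker_id, d
        if best_id is not None and best_d < match_dist_thresh:
            result[best_id] = cluster           # assign the map marker's identifier
    return result
```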
After the point cloud of the target positioning mark is determined, principal component analysis (Principal Component Analysis, PCA) is adopted to perform coarse position matching adjustment with the pre-designed positioning mark template point cloud, that is, the position and orientation of the point cloud are adjusted so that the position-adjusted point cloud has a better initial orientation when ICP matching with the template point cloud is subsequently performed; the principle of PCA coarse matching is not the focus of this application, so it is not repeated in this embodiment. After the point cloud of the target positioning mark is coarsely matched, a first rotation translation matrix and the position-adjusted point cloud are obtained, and the position-adjusted point cloud is finely matched by adopting the iterative closest point algorithm (Iterative Closest Point, ICP) to obtain a second rotation translation matrix. The known corner point coordinates of the template point cloud, after being converted by the first rotation translation matrix and the second rotation translation matrix, give the feature points of the target positioning mark under the laser radar coordinate system, and the feature point cloud coordinates of the target positioning mark under the laser radar coordinate system are thereby determined.
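A rough sketch of this coarse-to-fine template matching, assuming numpy for the PCA alignment and Open3D's point-to-point ICP for the fine matching; the 4x4 matrix conventions, the direction in which the template corners are mapped back into the lidar frame, and the template_corners input are illustrative assumptions rather than the patent's exact procedure.

```python
import numpy as np
import open3d as o3d

def _pca_frame(points):
    """Centroid and principal axes (columns, sorted by decreasing variance)."""
    c = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - c, full_matrices=False)
    return c, vt.T

def match_marker_to_template(marker_points, template_points, template_corners):
    """marker_points: (N, 3) cluster of one positioning mark in the lidar frame.
    template_points: (M, 3) template cloud; template_corners: (K, 3) known corner
    (feature) points in the template frame.
    Returns the corner/feature points expressed in the lidar frame."""
    # --- coarse matching with PCA: align centroids and principal axes ---
    c_src, ax_src = _pca_frame(marker_points)
    c_tpl, ax_tpl = _pca_frame(template_points)
    R1 = ax_tpl @ ax_src.T                     # rotate source axes onto template axes
    if np.linalg.det(R1) < 0:                  # keep a proper rotation (sign heuristic)
        ax_tpl[:, -1] *= -1
        R1 = ax_tpl @ ax_src.T
    t1 = c_tpl - R1 @ c_src
    T1 = np.eye(4); T1[:3, :3] = R1; T1[:3, 3] = t1   # first rotation translation matrix
    adjusted = (R1 @ marker_points.T).T + t1

    # --- fine matching with ICP (position-adjusted cloud -> template cloud) ---
    src = o3d.geometry.PointCloud(); src.points = o3d.utility.Vector3dVector(adjusted)
    tgt = o3d.geometry.PointCloud(); tgt.points = o3d.utility.Vector3dVector(template_points)
    icp = o3d.pipelines.registration.registration_icp(
        src, tgt, 0.1, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    T2 = np.asarray(icp.transformation)               # second rotation translation matrix

    # --- map the known template corners back into the lidar frame ---
    # lidar -> template is T2 @ T1, so template corners return via its inverse
    # (one direction convention among several; an assumption in this sketch)
    T_template_to_lidar = np.linalg.inv(T2 @ T1)
    corners_h = np.hstack([template_corners, np.ones((len(template_corners), 1))])
    return (T_template_to_lidar @ corners_h.T).T[:, :3]
```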
And step S103, constructing a target error function according to the characteristic point pixel coordinates of each camera and the characteristic point cloud coordinates of each laser radar.
Optionally, constructing the target error function according to the feature point pixel coordinates of each camera and the feature point cloud coordinates of each laser radar includes: determining camera sub-errors according to pixel coordinates of each camera feature point, the position of a positioning mark in an image acquired by each camera under a map coordinate system and a rotation translation matrix from a vehicle coordinate system to each camera coordinate system; determining a laser radar sub-error according to the characteristic point cloud coordinates of each laser radar, the position of a positioning mark in each laser radar acquisition point cloud under a map coordinate system and a rotation translation matrix from a vehicle coordinate system to each laser radar coordinate system; and constructing an objective error function according to the camera sub-errors and the laser radar sub-errors.
Specifically, in this embodiment, the target error function of the following formula (1) is constructed from the feature point pixel coordinates of each camera and the feature point cloud coordinates of each laser radar:

T* = argmin_T { Σ_{i=1..n} Σ_{P_j ∈ C_i} α_i·‖ u_j − (1/s_j)·K_i·T_i·T·P_j ‖² + Σ_{i=1..m} Σ_{N_j ∈ D_i} β_i·‖ p_j − M_i·T·N_j ‖² }    (1)

where n is the total number of cameras; u_j is the pixel coordinate of the feature point of the j-th positioning mark detected by the camera; s_j is the normalization factor that normalizes the coordinate in the Z direction of the camera to 1; K_i is the internal reference (intrinsic matrix) of the i-th camera; T_i is the rotation translation matrix from the whole vehicle coordinate system to the i-th camera coordinate system; P_j is the coordinate, in the map coordinate system, of a positioning mark seen by the i-th camera; T is the rotation translation matrix from the map to the whole vehicle coordinate system that needs to be optimized iteratively; the set of positioning mark coordinates within the view angle of the i-th camera is C_i, so P_j ∈ C_i; α_i is the camera weight factor, i.e., it represents the degree of influence of the camera on the positioning result; m is the total number of laser radars; p_j is the 3D position coordinate, in the laser radar coordinate system, of the feature point of the j-th positioning mark detected by the radar; β_i is the laser radar weight factor, i.e., it represents the degree of influence of the laser radar on the positioning result; M_i is the rotation translation matrix from the whole vehicle coordinate system to the i-th laser radar coordinate system; N_j is the coordinate, in the map coordinate system, of a positioning mark seen by the i-th laser radar; and the set of positioning mark coordinates within the view angle of the i-th laser radar is D_i, so N_j ∈ D_i. Specifically, the first summation term

Σ_{i=1..n} Σ_{P_j ∈ C_i} α_i·‖ u_j − (1/s_j)·K_i·T_i·T·P_j ‖²

is taken as the camera sub-error, the second summation term

Σ_{i=1..m} Σ_{N_j ∈ D_i} β_i·‖ p_j − M_i·T·N_j ‖²

is taken as the laser radar sub-error, and T* constructed from the camera sub-error and the laser radar sub-error is the target error function.
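To make formula (1) concrete, a small numpy sketch that evaluates the camera and laser radar sub-errors for a candidate map-to-vehicle transform T; the observation containers and the weight values are illustrative assumptions.

```python
import numpy as np

def target_error(T, cam_obs, lidar_obs):
    """T: 4x4 map -> vehicle rotation translation matrix being optimized.
    cam_obs: list of (alpha_i, K_i(3x3), T_i(4x4 vehicle->camera),
                      [(u_j(2,), P_j(3,) map coords), ...]) per camera.
    lidar_obs: list of (beta_i, M_i(4x4 vehicle->lidar),
                        [(p_j(3,), N_j(3,) map coords), ...]) per laser radar."""
    def hom(p):                          # 3-vector -> homogeneous 4-vector
        return np.append(p, 1.0)

    err = 0.0
    # camera sub-error: sum_i sum_j alpha_i * || u_j - (1/s_j) K_i T_i T P_j ||^2
    for alpha, K, T_i, feats in cam_obs:
        for u, P in feats:
            pc = (T_i @ T @ hom(P))[:3]          # point in the camera frame
            proj = K @ pc
            uv = proj[:2] / proj[2]              # s_j normalizes the Z coordinate to 1
            err += alpha * np.sum((u - uv) ** 2)
    # lidar sub-error: sum_i sum_j beta_i * || p_j - M_i T N_j ||^2
    for beta, M_i, feats in lidar_obs:
        for p, N in feats:
            pl = (M_i @ T @ hom(N))[:3]          # point in the laser radar frame
            err += beta * np.sum((p - pl) ** 2)
    return err
```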
Step S104, determining the pose of the unmanned vehicle according to the target error function.
Optionally, determining the pose of the unmanned vehicle according to the target error function includes: acquiring a camera rotation translation matrix and a laser radar rotation translation matrix; based on the camera rotation translation matrix and the laser radar rotation translation matrix, performing iterative optimization solution on the target error function by adopting the Gauss-Newton method, so that the target error function reaches its minimum value; and taking the solving result as the pose of the unmanned vehicle.
Specifically, in this embodiment, the camera rotation translation matrix dx1 and the laser radar rotation translation matrix dx2 are obtained, and the coordinates of the positioning marks observed by the cameras or the laser radars are optimized by Gauss-Newton iteration so that the value of the target error function is minimized; at this time, the iteratively optimized T is the rotation translation matrix from the map coordinate system to the whole vehicle coordinate system, so the pose of the vehicle in the map coordinate system at this moment can be obtained, thereby realizing real-time positioning of the vehicle.
In this embodiment, the data of multiple sensors, for example, the laser radar and the camera are fused, so that the field of view of the vehicle is improved, the deployment of positioning marks is reduced, and in addition, the target error function is constructed together to solve the pose of the vehicle body, so that the global optimal solution is obtained, the local optimal solution of a single sensor is avoided, and the accuracy and stability of vehicle positioning are improved.
According to the embodiment of the invention, the positioning marks are acquired through the cameras and the laser radars, the field of view of the vehicle for observing the positioning marks is increased, the deployment cost of the positioning marks is reduced, and the data of the cameras and the laser radar sensors are fused, so that the accuracy of vehicle positioning is improved.
Example 2
Fig. 2 is a flowchart of a positioning method of an unmanned vehicle according to a second embodiment of the present invention. On the basis of the foregoing embodiment, this embodiment describes in detail how the pose of the unmanned vehicle is determined according to the target error function. As shown in fig. 2, the method includes:
step S201, a multi-camera jacobian matrix and a multi-lidar jacobian matrix are acquired.
Here, n three-dimensional space points P and their projections p in the image are known, and the camera pose R, t is to be calculated; its rotation translation matrix is denoted as T. Suppose the coordinate of a space point is P_i = [X_i, Y_i, Z_i] and its projected pixel coordinate is p_i = [u_i, v_i]. For multiple cameras, the intrinsic parameters of each camera, the extrinsic parameters from the vehicle body coordinate system to each camera coordinate system, and the pixel coordinates of the three-dimensional space points under the different cameras are known. The relation between a pixel and the spatial point position can therefore be expressed as s_i·u_i = K_1·T_1·T·P_i and s_i·u_i = K_2·T_2·T·P_i, where P_i is the 3D point coordinate in the world coordinate system, T is the rotation translation matrix converting the world coordinate system into the whole vehicle coordinate system, i.e., the quantity to be optimized, T_1 is the rotation translation matrix from the whole vehicle coordinate system to the first camera, T_2 is the rotation translation matrix from the whole vehicle coordinate system to the second camera, K_1 is the intrinsic matrix of the first camera, and K_2 is the intrinsic matrix of the second camera. In general form, the following formula (2) is obtained:

s·u = K_1 · T_1 · P′    (2)

where P′ = T·P_i = [X′, Y′, Z′] denotes the point in the whole vehicle coordinate system, and the extrinsic matrix from the whole vehicle coordinate system to the first camera is written as T_1 = [R_1 | t_1], with rotation entries R_00 … R_22 and translation [T_x, T_y, T_z]. The position relationship between the pixel coordinate in the first camera coordinate system and the 3D point in the whole vehicle coordinate system can therefore be deduced as

u = f_x·(R_00·X′ + R_01·Y′ + R_02·Z′ + T_x) / (R_20·X′ + R_21·Y′ + R_22·Z′ + T_z) + c_x,
v = f_y·(R_10·X′ + R_11·Y′ + R_12·Z′ + T_y) / (R_20·X′ + R_21·Y′ + R_22·Z′ + T_z) + c_y,

where f_x, f_y, c_x and c_y are the focal lengths and principal point taken from K_1.
Taking the partial derivatives of u and v with respect to X′, Y′ and Z′ respectively, and writing X_c = R_00·X′ + R_01·Y′ + R_02·Z′ + T_x, Y_c = R_10·X′ + R_11·Y′ + R_12·Z′ + T_y, Z_c = R_20·X′ + R_21·Y′ + R_22·Z′ + T_z for the point in the first camera coordinate system, the obtained formulas are:

∂u/∂X′ = f_x·(R_00·Z_c − R_20·X_c)/Z_c²,  ∂u/∂Y′ = f_x·(R_01·Z_c − R_21·X_c)/Z_c²,  ∂u/∂Z′ = f_x·(R_02·Z_c − R_22·X_c)/Z_c²,
∂v/∂X′ = f_y·(R_10·Z_c − R_20·Y_c)/Z_c²,  ∂v/∂Y′ = f_y·(R_11·Z_c − R_21·Y_c)/Z_c²,  ∂v/∂Z′ = f_y·(R_12·Z_c − R_22·Y_c)/Z_c².

With the coordinates of the whole vehicle coordinate system defined as intermediate variables, T is multiplied by a disturbance quantity δξ, and the derivative of the change of the pixel error e with respect to the disturbance quantity is considered; the derivative of the pixel coordinate error with respect to the whole vehicle coordinate system can then be obtained as the following formula (3):

∂e/∂δξ = (∂e/∂P′) · (∂P′/∂δξ)    (3)

where

∂P′/∂δξ = [ I_3  −P′^ ],

I_3 being the 3×3 identity matrix and P′^ the antisymmetric (skew-symmetric) matrix of P′. The multi-camera Jacobian matrix can therefore be obtained as shown in the following formula (4):

J = −(∂(u, v)/∂P′) · [ I_3  −P′^ ]    (4)

From the above formulas, each feature point observed by each camera contributes a 2×6 Jacobian block, obtained by multiplying the 2×3 matrix of partial derivatives above by the 3×6 matrix [I_3  −P′^]; stacking these blocks for all cameras gives the multi-camera Jacobian matrix.
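A numpy sketch of the per-feature-point 2x6 camera Jacobian block outlined above; the translation-first ordering of the 6-dimensional disturbance and the intrinsics layout are assumptions made for illustration.

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix v^ such that skew(v) @ w == np.cross(v, w)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def camera_point_jacobian(K, T_i, P_prime):
    """K: 3x3 intrinsics; T_i: 4x4 vehicle->camera extrinsics;
    P_prime: point in the whole vehicle frame.
    Returns the 2x6 Jacobian of the pixel error e = u_true - u_proj with respect
    to the disturbance [d_translation, d_rotation] of T."""
    fx, fy = K[0, 0], K[1, 1]
    R, t = T_i[:3, :3], T_i[:3, 3]
    Xc, Yc, Zc = R @ P_prime + t                            # point in the camera frame
    # d(u, v) / d(camera-frame point)
    du_dPc = np.array([[fx / Zc, 0.0, -fx * Xc / Zc**2],
                       [0.0, fy / Zc, -fy * Yc / Zc**2]])
    du_dPprime = du_dPc @ R                                 # chain through the extrinsics
    dPprime_dxi = np.hstack([np.eye(3), -skew(P_prime)])    # [I | -P'^]
    return -du_dPprime @ dPprime_dxi                        # minus sign: e = u_true - u_proj
```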
A plurality of laser radars are mounted on the loader, and the extrinsic matrix from the vehicle body coordinate system to each laser radar coordinate system and the three-dimensional space coordinates of a space point under the different laser radar coordinate systems are known. Since the extrinsic parameters from the different laser radars to the vehicle body coordinate system are different, the camera derivation, which used the camera coordinate system as the intermediate variable, is adapted so that the whole vehicle coordinate system is used as the intermediate variable, and the laser-radar-based formulas become P_lidar_1 = T_1·T·P_i and P_lidar_2 = T_2·T·P_i, where P_i is the 3D point coordinate in the world coordinate system, T is the rotation translation matrix converting the world coordinate system into the whole vehicle coordinate system, i.e., the quantity to be optimized, T_1 is the rotation translation matrix from the whole vehicle coordinate system to the first laser radar, and T_2 is the rotation translation matrix from the whole vehicle coordinate system to the second laser radar. In general form, the following formula (5) can be obtained:

P_lidar = T_1 · P′    (5)

where P′ = T·P_i = [X′, Y′, Z′] is the point in the whole vehicle coordinate system, and the extrinsic matrix from the whole vehicle coordinate system to the first laser radar is written with rotation entries R_00 … R_22 and translation [T_x, T_y, T_z]. The position relation between the 3D point coordinate in the first laser radar coordinate system and the 3D point in the whole vehicle coordinate system can therefore be deduced as

X_lidar = R_00·X′ + R_01·Y′ + R_02·Z′ + T_x,
Y_lidar = R_10·X′ + R_11·Y′ + R_12·Z′ + T_y,
Z_lidar = R_20·X′ + R_21·Y′ + R_22·Z′ + T_z.

Taking the partial derivatives of X_lidar, Y_lidar and Z_lidar in the laser radar coordinate system with respect to X′, Y′ and Z′ respectively gives

∂P_lidar/∂P′ = [R_00 R_01 R_02; R_10 R_11 R_12; R_20 R_21 R_22] = R_1,

i.e., the rotation part of the extrinsic matrix from the whole vehicle coordinate system to the first laser radar.
The coordinates of the whole vehicle coordinate system are again defined as intermediate variables; T is multiplied by the disturbance quantity δξ, the derivative of the change of the error e with respect to the disturbance is considered, and by using the chain rule the following formula (6) can be written:

∂e/∂δξ = (∂e/∂P′) · (∂P′/∂δξ)    (6)

where

∂e/∂P′ = −∂P_lidar/∂P′ = −R_1,  ∂P′/∂δξ = [ I_3  −P′^ ].

Thus, the multi-laser-radar Jacobian matrix can be obtained as shown in the following formula (7): each positioning mark feature point observed by each laser radar contributes a 3×6 Jacobian block

J = −R_i · [ I_3  −P′^ ]    (7)

and stacking these blocks for all laser radars gives the multi-laser-radar Jacobian matrix.
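The corresponding 3x6 laser radar block, under the same ordering assumptions as the camera sketch above.

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix v^ such that skew(v) @ w == np.cross(v, w)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def lidar_point_jacobian(M_i, P_prime):
    """M_i: 4x4 vehicle->lidar extrinsics; P_prime: point in the whole vehicle frame.
    Returns the 3x6 Jacobian of e = p_true - p_proj with respect to the
    disturbance [d_translation, d_rotation] of T (formula (7), per point)."""
    R = M_i[:3, :3]                                          # from the linear relations above
    dPprime_dxi = np.hstack([np.eye(3), -skew(P_prime)])     # [I | -P'^]
    return -R @ dPprime_dxi
```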
step S202, a camera rotation translation matrix is obtained according to the multi-camera Jacobian matrix, and a laser radar rotation translation matrix is obtained according to the multi-laser radar Jacobian matrix.
Optionally, acquiring the camera rotation translation matrix according to the multi-camera Jacobian matrix includes: acquiring the coordinate errors of all pixels; determining a camera first accumulation matrix according to the multi-camera Jacobian matrix, and determining a camera second accumulation matrix according to the multi-camera Jacobian matrix and the coordinate errors of all pixels; and determining the camera rotation translation matrix from the camera first accumulation matrix and the camera second accumulation matrix.
Specifically, in this embodiment, before the target error function is iteratively optimized with the Gauss-Newton method, the number of iterations and the iteration exit condition are designed first, for example the pixel error being smaller than a certain value or the iteration step length being smaller than a certain value. For each iteration, a matrix H of 6 rows and 6 columns, a matrix b of 6 rows and 1 column, and a total pixel error of 0 are initialized, and then all detected pixel points in one camera are traversed. For the front-view camera, for example, the pixel coordinate obtained by projecting the point coordinate of the world coordinate system onto the front-view camera is calculated as P_car = T·P_W, uv_proj = K·(R·P_car + t); the difference between each projected pixel coordinate and the true image coordinate is then calculated as e = uv_true − uv_proj, and the differences between all pixel coordinates and projected pixel coordinates of the front-view camera are summed as cost = cost + abs(e). The camera first accumulation matrix is determined from the multi-camera Jacobian matrix as H += Jᵀ·J, and the camera second accumulation matrix is determined from the multi-camera Jacobian matrix and the pixel coordinate errors as b += −Jᵀ·e. Thus the value of the camera first accumulation matrix H is related only to the multi-camera Jacobian matrix J, and the multi-camera Jacobian matrix is related to the 3D point coordinates of the world coordinate system, i.e., the value of the camera first accumulation matrix H changes as these coordinates change at each iteration; the value of the camera second accumulation matrix b is related to the multi-camera Jacobian matrix J and the pixel coordinate error e, and since the 3D point coordinates of the world coordinate system and the pixel coordinates change the pixel error e at each iteration, the value of the camera second accumulation matrix b also changes.
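A sketch of the per-iteration camera accumulation following the H += Jᵀ·J, b += −Jᵀ·e scheme described above; it reuses the camera_point_jacobian sketch from earlier, assumes a simple observation layout, and carries the per-camera weight α along as in formula (1).

```python
import numpy as np

def accumulate_camera_terms(T, cam_obs):
    """T: current 4x4 map->vehicle estimate.
    cam_obs: list of (alpha_i, K_i, T_i, [(uv_true(2,), P_w(3,)), ...]) per camera.
    Returns H (6x6), b (6,), total absolute pixel cost."""
    H = np.zeros((6, 6))
    b = np.zeros(6)
    cost = 0.0
    for alpha, K, T_i, feats in cam_obs:
        for uv_true, P_w in feats:
            P_prime = (T @ np.append(P_w, 1.0))[:3]        # P_car = T * P_W
            pc = T_i[:3, :3] @ P_prime + T_i[:3, 3]
            proj = K @ pc
            uv_proj = proj[:2] / proj[2]                   # uv_proj = K(R P_car + t), normalized
            e = uv_true - uv_proj                          # e = uv_true - uv_proj
            cost += np.sum(np.abs(e))                      # cost = cost + abs(e)
            J = camera_point_jacobian(K, T_i, P_prime)     # 2x6 block from the earlier sketch
            H += alpha * J.T @ J                           # H += J^T J
            b += -alpha * J.T @ e                          # b += -J^T e
    return H, b, cost
```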
Optionally, acquiring the laser radar rotation translation matrix according to the multi-laser-radar Jacobian matrix includes: acquiring the point cloud coordinate error of each laser radar; determining a laser radar first accumulation matrix according to the multi-laser-radar Jacobian matrix, and determining a laser radar second accumulation matrix according to the multi-laser-radar Jacobian matrix and the point cloud coordinate errors of each laser radar; and determining the laser radar rotation translation matrix according to the laser radar first accumulation matrix and the laser radar second accumulation matrix.
Specifically, all detected positioning mark feature points in one laser radar are then traversed. For the front-view laser radar, for example, the projection of the world-coordinate-system point into the laser radar coordinate system is calculated as P_car = T·P_W, P_proj = R·P_car + t; the difference between each space coordinate projected from the world coordinate system into the laser radar coordinate system and the true laser radar coordinate is then calculated as e = P_lidar_true − P_proj, and the differences between all space point coordinates of the front-view laser radar and the projected space point coordinates are summed as cost = cost + abs(e). The laser radar first accumulation matrix is determined from the multi-laser-radar Jacobian matrix as H += Jᵀ·J, and the laser radar second accumulation matrix is determined from the multi-laser-radar Jacobian matrix and the point cloud coordinate errors of each laser radar as b += −Jᵀ·e. From the above formulas it follows that the value of the laser radar first accumulation matrix H is related only to the multi-laser-radar Jacobian matrix J, and the multi-laser-radar Jacobian matrix is related to the 3D point coordinates of the world coordinate system, i.e., the value of the laser radar first accumulation matrix H changes as these coordinates change at each iteration; the value of the laser radar second accumulation matrix b is related to the multi-laser-radar Jacobian matrix J and the spatial 3D point projection error e, and since the 3D point coordinates of the world coordinate system and the laser radar point coordinates change the spatial 3D point projection error e at each iteration, the value of the laser radar second accumulation matrix b also changes.
After the matrices H and b have been accumulated for all cameras, the iteration update direction matrix dx is calculated, where dx is a matrix of 6 rows and 1 column: the first three numbers represent the translation vector of the rotation translation matrix, and the last three numbers represent the rotation, namely the yaw angle around the z axis, the pitch angle around the y axis and the roll angle around the x axis. From H·dx = b it follows that dx = H.ldlt().solve(b). Once the matrix dx is obtained, an iteration direction is obtained, so the 6-row, 1-column matrix dx can be converted into a rotation translation matrix. Similarly, a rotation translation matrix of the laser radar can be obtained for the laser radars; the principle is similar to that of the cameras and is not repeated in this embodiment.
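A sketch of the solve-and-convert step: dx is obtained from H·dx = b (a plain linear solve standing in for Eigen's H.ldlt().solve(b)), and its translation and Euler-angle entries are turned back into a rotation translation matrix; the translation-first, z-y-x Euler ordering follows the description above but remains an assumption.

```python
import numpy as np

def solve_update(H, b):
    """Solve H dx = b for the 6x1 update (equivalent of dx = H.ldlt().solve(b))."""
    return np.linalg.solve(H, b)

def dx_to_transform(dx):
    """dx = [tx, ty, tz, yaw, pitch, roll] -> 4x4 rotation translation matrix T_dx."""
    tx, ty, tz, yaw, pitch, roll = dx
    cz, sz = np.cos(yaw), np.sin(yaw)
    cy, sy = np.cos(pitch), np.sin(pitch)
    cx, sx = np.cos(roll), np.sin(roll)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])   # yaw about z
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])   # pitch about y
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])   # roll about x
    T_dx = np.eye(4)
    T_dx[:3, :3] = Rz @ Ry @ Rx
    T_dx[:3, 3] = [tx, ty, tz]
    return T_dx
```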
And step S203, based on the camera rotation translation matrix and the laser radar rotation translation matrix, performing iterative optimization solution on the target error function by adopting the Gauss-Newton method, so that the target error function reaches its minimum value.
The rotation translation matrix T from the world coordinate system to the whole vehicle coordinate system is iteratively optimized as T_after = T_dx · T_before, which ends one iteration; T_after is then fed into the next round of iterative optimization, so that the target error gradually becomes smaller, and the loop continues until the loop exit condition is met or the number of loops is exhausted, thereby obtaining the optimal solution T of the target error function.
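Putting the pieces together, a sketch of the laser radar accumulation and the outer Gauss-Newton loop with T_after = T_dx·T_before; it reuses the helper sketches above (lidar_point_jacobian, accumulate_camera_terms, solve_update, dx_to_transform), and the exit thresholds and data layout are assumptions.

```python
import numpy as np

def accumulate_lidar_terms(T, lidar_obs):
    """Laser radar counterpart of the camera accumulation sketch: H += J^T J, b += -J^T e.
    lidar_obs: list of (beta_i, M_i(4x4 vehicle->lidar), [(p_true(3,), N_w(3,)), ...])."""
    H, b, cost = np.zeros((6, 6)), np.zeros(6), 0.0
    for beta, M_i, feats in lidar_obs:
        for p_true, N_w in feats:
            P_prime = (T @ np.append(N_w, 1.0))[:3]          # P_car = T * P_W
            p_proj = M_i[:3, :3] @ P_prime + M_i[:3, 3]      # P_proj = R P_car + t
            e = p_true - p_proj
            cost += np.sum(np.abs(e))
            J = lidar_point_jacobian(M_i, P_prime)           # 3x6 block from the earlier sketch
            H += beta * J.T @ J
            b += -beta * J.T @ e
    return H, b, cost

def estimate_pose(T_init, cam_obs, lidar_obs, max_iters=20, step_eps=1e-6):
    """Outer Gauss-Newton loop: T_after = T_dx * T_before until the exit condition."""
    T = T_init.copy()
    for _ in range(max_iters):
        H_c, b_c, _ = accumulate_camera_terms(T, cam_obs)    # camera terms
        H_l, b_l, _ = accumulate_lidar_terms(T, lidar_obs)   # laser radar terms
        H, b = H_c + H_l, b_c + b_l
        dx = solve_update(H, b)                              # dx = H.ldlt().solve(b) equivalent
        if np.linalg.norm(dx) < step_eps:                    # iteration exit condition
            break
        T = dx_to_transform(dx) @ T                          # T_after = T_dx * T_before
    return T
```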
And step S204, taking the solving result as the pose of the unmanned vehicle.
When the loop exit condition is satisfied or the number of loops is exhausted in the above iterative operation on the target error function, the optimal solution T obtained above is taken as the pose of the unmanned vehicle in this embodiment; T contains the coordinate position of the unmanned vehicle in the map coordinate system and pose information such as the yaw angle, the pitch angle and the roll angle.
According to the embodiment of the invention, the positioning marks are acquired through the cameras and the laser radars, the field of view of the vehicle for observing the positioning marks is increased, the deployment cost of the positioning marks is reduced, and the data of the cameras and the laser radar sensors are fused, so that the accuracy of vehicle positioning is improved.
Example 3
Fig. 3 is a schematic structural diagram of a positioning device for an unmanned vehicle according to a third embodiment of the present invention. As shown in fig. 3, the apparatus includes: the system comprises a multi-sensor acquisition module 310, a positioning mark coordinate acquisition module 320, an objective error function construction module 330 and a vehicle pose determination module 340.
A multi-sensor acquisition module 310, configured to acquire images acquired by cameras on a vehicle and point clouds acquired by laser radars;
The positioning mark coordinate obtaining module 320 is configured to obtain, according to the image, a feature point pixel coordinate of a positioning mark in a vehicle driving environment, and obtain, according to a point cloud, a feature point cloud coordinate of the positioning mark;
the target error function construction module 330 is configured to construct a target error function according to the pixel coordinates of the feature points of each camera and the cloud coordinates of the feature points of each laser radar;
the vehicle pose determining module 340 is configured to determine the pose of the unmanned vehicle according to the target error function.
Optionally, the positioning mark coordinate acquisition module comprises a feature point pixel coordinate acquisition sub-module, and is used for determining a deep learning algorithm, and identifying the image by adopting the deep learning algorithm to acquire the positioning mark, wherein the deep learning algorithm comprises a neural network;
feature point pixel coordinates are determined for locating the marker at the camera coordinates.
Optionally, the positioning mark coordinate acquisition module comprises a characteristic point cloud coordinate acquisition sub-module, which is used for determining the intensity information of each point cloud and cutting the point cloud according to the intensity information to acquire the target point cloud; clustering the target point cloud to obtain single or multiple positioning marks; acquiring the position of a point cloud of a known positioning mark in a map under a vehicle coordinate system at the last moment; determining a target positioning mark corresponding to the target point cloud according to the position of the positioning mark at the previous moment and the point cloud position of the positioning mark at the current moment; and carrying out matching processing according to the point cloud of the target positioning mark and the point cloud of the positioning mark template, and determining the characteristic point cloud coordinates of the positioning mark under the laser radar coordinate system.
Optionally, the characteristic point cloud coordinate obtaining sub-module is used for performing position rough matching adjustment on the point cloud of the target positioning mark and the point cloud of the positioning mark template by adopting a Principal Component Analysis (PCA) method to obtain a rough matched first rotation translation matrix and a point cloud after position adjustment;
performing fine matching on the point cloud subjected to position adjustment and the point cloud of the positioning mark template by adopting an iterative closest point algorithm ICP to obtain a second rotation translation matrix subjected to fine matching;
and acquiring the characteristic point cloud coordinates of the target positioning mark according to the known corner point coordinates of the positioning mark template point cloud, the first rotation translation matrix and the second rotation translation matrix.
Optionally, the objective error function construction module is configured to determine a camera sub-error according to pixel coordinates of feature points of each camera, a position of a positioning mark in an acquired image of each camera under a map coordinate system, and a rotation translation matrix from a vehicle coordinate system to each camera coordinate system;
determining a laser radar sub-error according to the characteristic point cloud coordinates of each laser radar, the position of a positioning mark in each laser radar acquisition point cloud under a map coordinate system and a rotation translation matrix from a vehicle coordinate system to each laser radar coordinate system;
and constructing an objective error function according to the camera sub-errors and the laser radar sub-errors.
Optionally, the vehicle pose determining module is used for acquiring a multi-camera jacobian matrix and a multi-laser radar jacobian matrix;
acquiring a camera rotation translation matrix according to the multi-camera Jacobian matrix, and acquiring a laser radar rotation translation matrix according to the multi-laser radar Jacobian matrix;
based on the camera rotation translation matrix and the laser radar rotation translation matrix, carrying out iterative optimization solution on the target error function by adopting a Gauss Newton method so as to obtain the minimum value of the target function;
and taking the solving result as the pose of the unmanned vehicle.
Optionally, the vehicle pose determining module includes a camera rotation translation matrix obtaining sub-module, configured to obtain coordinate errors of each pixel;
determining a first accumulation matrix of the camera according to the multi-camera Jacobian matrix, and determining a second accumulation matrix of the camera according to the multi-camera Jacobian matrix and the coordinate errors of all pixels;
a camera rotational translation matrix is determined from the camera first accumulation matrix and the camera second accumulation matrix.
Optionally, the vehicle pose determining module comprises a laser radar rotation translation matrix obtaining sub-module, which is used for obtaining the coordinate error of each laser radar point cloud;
determining a first accumulation matrix of the laser radar according to the multi-laser-radar Jacobian matrix, and determining a second accumulation matrix of the laser radar according to the multi-laser-radar Jacobian matrix and the point cloud coordinate errors of each laser radar;
And determining a laser radar rotation translation matrix according to the first laser radar accumulation matrix and the second laser radar accumulation matrix.
The positioning device of the unmanned vehicle provided by the embodiment of the invention can execute the positioning method of the unmanned vehicle provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of the executed method.
Example 4
Fig. 4 shows a schematic diagram of the structure of an electronic device 10 that may be used to implement an embodiment of the invention. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic equipment may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices (e.g., helmets, glasses, watches, etc.), and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed herein.
As shown in fig. 4, the electronic device 10 includes at least one processor 11, and a memory, such as a Read Only Memory (ROM) 12, a Random Access Memory (RAM) 13, etc., communicatively connected to the at least one processor 11, in which the memory stores a computer program executable by the at least one processor, and the processor 11 may perform various appropriate actions and processes according to the computer program stored in the Read Only Memory (ROM) 12 or the computer program loaded from the storage unit 18 into the Random Access Memory (RAM) 13. In the RAM 13, various programs and data required for the operation of the electronic device 10 may also be stored. The processor 11, the ROM 12 and the RAM 13 are connected to each other via a bus 14. An input/output (I/O) interface 15 is also connected to bus 14.
Various components in the electronic device 10 are connected to the I/O interface 15, including: an input unit 16 such as a keyboard, a mouse, etc.; an output unit 17 such as various types of displays, speakers, and the like; a storage unit 18 such as a magnetic disk, an optical disk, or the like; and a communication unit 19 such as a network card, modem, wireless communication transceiver, etc. The communication unit 19 allows the electronic device 10 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The processor 11 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of processor 11 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various processors running machine learning model algorithms, digital Signal Processors (DSPs), and any suitable processor, controller, microcontroller, etc. The processor 11 performs the various methods and processes described above, such as the method of locating an unmanned vehicle.
In some embodiments, the method of locating an unmanned vehicle may be implemented as a computer program tangibly embodied on a computer-readable storage medium, such as storage unit 18. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 10 via the ROM 12 and/or the communication unit 19. When the computer program is loaded into the RAM 13 and executed by the processor 11, one or more steps of the above-described method of locating an unmanned vehicle may be performed. Alternatively, in other embodiments, the processor 11 may be configured to perform the method of locating the unmanned vehicle in any other suitable manner (e.g., by means of firmware).
Various implementations of the systems and techniques described here may be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems On Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special purpose or general-purpose programmable processor, that may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
A computer program for carrying out methods of the present invention may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the computer programs, when executed by the processor, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be implemented. The computer program may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of the present invention, a computer-readable storage medium may be a tangible medium that can contain, or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. The computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer readable storage medium may be a machine readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on an electronic device having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) through which a user can provide input to the electronic device. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), blockchain networks, and the internet.
The computing system may include clients and servers. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or a cloud host, and is a host product in a cloud computing service system, so that the defects of high management difficulty and weak service expansibility in the traditional physical hosts and VPS service are overcome.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps described in the present invention may be performed in parallel, sequentially, or in a different order, so long as the desired results of the technical solution of the present invention are achieved, and the present invention is not limited herein.
The above embodiments do not limit the scope of the present invention. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention should be included in the scope of the present invention.

Claims (11)

1. A method of locating an unmanned vehicle, comprising:
acquiring images acquired by cameras on a vehicle and point clouds acquired by laser radars;
acquiring the pixel coordinates of the characteristic points of the positioning mark in the vehicle running environment according to the image, and acquiring the point cloud coordinates of the characteristic points of the positioning mark according to the point cloud;
constructing a target error function according to the characteristic point pixel coordinates of each camera and the characteristic point cloud coordinates of each laser radar;
and determining the pose of the unmanned vehicle according to the target error function.
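To make the four steps of claim 1 concrete, the following is a minimal, non-authoritative Python sketch of how such a pipeline could be wired together; every helper name here (`extract_marker_pixels`, `extract_marker_points`, `build_target_error`, `minimize_pose`) is a hypothetical placeholder rather than anything defined by this disclosure.

```python
# Illustrative sketch only; all helper names are hypothetical placeholders.
def locate_vehicle(cameras, lidars, marker_map, initial_pose):
    # Step 1: one image per camera, one point cloud per laser radar.
    images = [cam.grab() for cam in cameras]
    clouds = [lidar.grab() for lidar in lidars]

    # Step 2: feature-point pixel coordinates and feature point cloud coordinates
    # of the positioning mark, per sensor.
    pixel_obs = [extract_marker_pixels(img) for img in images]
    point_obs = [extract_marker_points(pc) for pc in clouds]

    # Step 3: target error function built from all cameras and all laser radars.
    error_fn = build_target_error(pixel_obs, point_obs, marker_map)

    # Step 4: minimizing the target error function yields the vehicle pose.
    return minimize_pose(error_fn, initial_pose)
```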
2. The method of claim 1, wherein the acquiring feature point pixel coordinates of a locating marker in a vehicle driving environment from the image comprises:
determining a deep learning algorithm, and identifying the image by adopting the deep learning algorithm to obtain a positioning mark, wherein the deep learning algorithm comprises a neural network;
and determining the pixel coordinates of the characteristic points of the positioning mark under the camera coordinate system.
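As an illustration of the image branch in claim 2 (not the claimed implementation), one common pattern is to let a neural-network detector locate the positioning mark and then refine the feature points to sub-pixel accuracy, for example with OpenCV; the `detector` object and its `detect` method below are assumed placeholders.

```python
import cv2
import numpy as np

def marker_pixel_coords(image, detector):
    """Detect the positioning mark and return feature-point pixel coordinates."""
    box = detector.detect(image)          # hypothetical neural-network detector
    roi = image[box.top:box.bottom, box.left:box.right]
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    # Strong corners inside the detected region serve as feature points.
    corners = cv2.goodFeaturesToTrack(gray, maxCorners=4, qualityLevel=0.01, minDistance=10)
    corners = cv2.cornerSubPix(
        gray, corners, (5, 5), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    # Shift ROI coordinates back to full-image pixel coordinates.
    return corners.reshape(-1, 2) + np.array([box.left, box.top], dtype=np.float32)
```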
3. The method according to claim 1, wherein the obtaining the characteristic point cloud coordinates of the positioning mark according to the point cloud includes:
determining the intensity information of each point cloud, and cropping the point cloud according to the intensity information to obtain a target point cloud;
clustering the target point cloud to obtain one or more positioning marks;
acquiring the position, under the vehicle coordinate system, of the point cloud of a known positioning mark in the map at the previous moment;
determining a target positioning mark corresponding to the target point cloud according to the position of the positioning mark at the previous moment and the point cloud position of the positioning mark at the current moment;
and carrying out matching processing according to the point cloud of the target positioning mark and the point cloud of the positioning mark template, and determining the characteristic point cloud coordinates of the positioning mark under a laser radar coordinate system.
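A rough sketch of the point cloud branch in claim 3, using NumPy and scikit-learn's DBSCAN for the intensity cropping, clustering, and previous-moment association steps; all thresholds are illustrative values, not parameters taught by the disclosure.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def candidate_marker_clouds(points_xyz, intensity, intensity_min=200, eps=0.2, min_samples=10):
    """Crop a scan by reflectance intensity, then cluster the remaining points."""
    target = points_xyz[intensity > intensity_min]   # retro-reflective marks return high intensity
    if len(target) == 0:
        return []
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(target)
    return [target[labels == k] for k in set(labels) if k != -1]   # -1 = noise

def associate_to_known_marks(clusters, known_positions_prev):
    """Match each cluster to the nearest known mark position from the previous moment."""
    matches = []
    for cluster in clusters:
        centroid = cluster.mean(axis=0)
        d = np.linalg.norm(known_positions_prev - centroid, axis=1)
        matches.append((int(np.argmin(d)), cluster))   # (mark id, target point cloud)
    return matches
```

The matched clusters would then be registered against the positioning-mark template as described in claim 4.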
4. A method according to claim 3, wherein the determining the characteristic point cloud coordinates of the positioning mark under the laser radar coordinate system according to the matching process between the point cloud of the target positioning mark and the point cloud of the positioning mark template comprises:
performing coarse position matching and adjustment on the point cloud of the target positioning mark and the point cloud of the positioning mark template by adopting a principal component analysis (PCA) method to obtain a coarsely matched first rotation translation matrix and a position-adjusted point cloud;
performing fine matching on the position-adjusted point cloud and the point cloud of the positioning mark template by adopting an iterative closest point (ICP) algorithm to obtain a finely matched second rotation translation matrix;
and acquiring the characteristic point cloud coordinates of the target positioning mark according to the known corner point coordinates of the point cloud of the positioning mark template, the first rotation translation matrix and the second rotation translation matrix.
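The coarse-to-fine registration of claim 4 can be sketched, purely for illustration, with a NumPy PCA alignment followed by Open3D's point-to-point ICP; axis-sign ambiguities of PCA and robustness concerns are ignored here.

```python
import numpy as np
import open3d as o3d

def pca_coarse_align(mark_pts, template_pts):
    """Coarse alignment: map the mark's principal axes onto the template's (first matrix)."""
    cm, ct = mark_pts.mean(axis=0), template_pts.mean(axis=0)
    _, vm = np.linalg.eigh(np.cov((mark_pts - cm).T))      # principal axes of the mark
    _, vt = np.linalg.eigh(np.cov((template_pts - ct).T))  # principal axes of the template
    R = vt @ vm.T
    t = ct - R @ cm
    T1 = np.eye(4)
    T1[:3, :3], T1[:3, 3] = R, t
    adjusted = mark_pts @ R.T + t                          # position-adjusted point cloud
    return T1, adjusted

def icp_fine_align(adjusted_pts, template_pts, max_dist=0.05):
    """Fine alignment with point-to-point ICP (second matrix)."""
    src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(adjusted_pts))
    tgt = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(template_pts))
    result = o3d.pipelines.registration.registration_icp(
        src, tgt, max_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation

# T = T2 @ T1 maps the mark's points onto the template; with this registration
# direction, the template's known corner points are carried back into the laser
# radar frame by the inverse transform, giving the feature point cloud coordinates.
```

Whether the template corners are mapped with the composed transform or its inverse depends on which cloud is treated as the registration source; the sketch above registers the mark onto the template.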
5. The method of claim 1, wherein the constructing an objective error function from the feature point pixel coordinates of each camera and the feature point cloud coordinates of each lidar comprises:
determining camera sub-errors according to the pixel coordinates of the feature points of each camera, the position of a positioning mark in the acquired image of each camera under a map coordinate system, and a rotation translation matrix from a vehicle coordinate system to each camera coordinate system;
determining a laser radar sub-error according to the characteristic point cloud coordinates of each laser radar, the position under a map coordinate system of the positioning mark in the point cloud acquired by each laser radar, and a rotation translation matrix from a vehicle coordinate system to each laser radar coordinate system;
and constructing the target error function according to the camera sub-error and the laser radar sub-error.
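One hedged way to read claim 5 is as a sum of reprojection errors (camera sub-errors) and point-to-point errors (laser radar sub-errors), all expressed through the unknown vehicle pose; the formulation below is an assumption for illustration, not the patent's exact error function.

```python
import numpy as np

def _apply(T, p):
    """Apply a 4x4 rotation-translation matrix to a 3-D point."""
    return (T @ np.append(p, 1.0))[:3]

def target_error(T_map_to_vehicle, cam_obs, lidar_obs):
    """Camera sub-errors plus laser radar sub-errors (illustrative only).

    cam_obs:   per camera, (K, T_vehicle_to_cam, [(p_map, pixel_observed), ...])
    lidar_obs: per laser radar, (T_vehicle_to_lidar, [(p_map, xyz_observed), ...])
    """
    err = 0.0
    for K, T_vc, matches in cam_obs:                    # camera sub-error
        for p_map, pix in matches:
            p_cam = _apply(T_vc, _apply(T_map_to_vehicle, p_map))
            proj = (K @ (p_cam / p_cam[2]))[:2]         # pinhole projection to pixels
            err += np.sum((proj - pix) ** 2)
    for T_vl, matches in lidar_obs:                     # laser radar sub-error
        for p_map, xyz in matches:
            p_lidar = _apply(T_vl, _apply(T_map_to_vehicle, p_map))
            err += np.sum((p_lidar - xyz) ** 2)
    return err
```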
6. The method of claim 1, wherein the determining the pose of the unmanned vehicle from the target error function comprises:
acquiring a multi-camera Jacobian matrix and a multi-laser radar Jacobian matrix;
acquiring a camera rotation translation matrix according to the multi-camera Jacobian matrix, and acquiring a laser radar rotation translation matrix according to the multi-laser radar Jacobian matrix;
based on the camera rotation translation matrix and the laser radar rotation translation matrix, carrying out iterative optimization solution on the target error function by adopting a Gauss-Newton method, so that the target error function reaches a minimum value;
and taking the solving result as the pose of the unmanned vehicle.
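As a non-authoritative sketch of the iterative optimization in claim 6, a generic Gauss-Newton loop over a stacked residual vector looks like this; the pose is taken as a 6-vector, and the Jacobian is formed numerically only to keep the example short, whereas the claims describe analytic multi-camera and multi-laser-radar Jacobian matrices.

```python
import numpy as np

def gauss_newton(residual_fn, x0, iters=20, eps=1e-6, tol=1e-8):
    """Minimize ||residual_fn(x)||^2 by Gauss-Newton iteration."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = residual_fn(x)
        # Numerical Jacobian J[i, j] = d r_i / d x_j (analytic in the claimed method).
        J = np.empty((r.size, x.size))
        for j in range(x.size):
            dx = np.zeros_like(x)
            dx[j] = eps
            J[:, j] = (residual_fn(x + dx) - r) / eps
        H = J.T @ J                       # normal-equation matrix
        b = J.T @ r
        step = np.linalg.solve(H, -b)     # Gauss-Newton update
        x = x + step
        if np.linalg.norm(step) < tol:    # stop once the target error function settles
            break
    return x
```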
7. The method of claim 6, wherein the obtaining a camera rotation translation matrix from the multi-camera jacobian matrix comprises:
acquiring coordinate errors of all pixels;
determining a first accumulation matrix of the camera according to the multi-camera Jacobian matrix, and determining a second accumulation matrix of the camera according to the multi-camera Jacobian matrix and the coordinate errors of all pixels;
and determining the camera rotation translation matrix according to the camera first accumulation matrix and the camera second accumulation matrix.
8. The method of claim 6, wherein the obtaining a lidar rotational translation matrix from the multi-lidar jacobian matrix comprises:
acquiring the point cloud coordinate error of each laser radar;
determining a first accumulation matrix of the laser radar according to the multi-laser-radar Jacobian matrix, and determining a second accumulation matrix of the laser radar according to the multi-laser-radar Jacobian matrix and the point cloud coordinate errors of the laser radars;
and determining a laser radar rotation translation matrix according to the first laser radar accumulation matrix and the second laser radar accumulation matrix.
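Claims 7 and 8 describe the same accumulation pattern for the two sensor families: the first accumulation matrix collects the J^T J terms and the second collects the J^T e terms, from which the rotation translation update is solved. A minimal sketch follows; the combination of camera and laser radar terms in the trailing comments is an assumed usage example, not language from the claims.

```python
import numpy as np

def accumulate(jacobian_blocks, error_blocks, dof=6):
    """First and second accumulation matrices from per-observation Jacobian/error blocks.

    jacobian_blocks: list of (m_i x dof) Jacobians (per pixel or per point)
    error_blocks:    list of length-m_i error vectors (pixel or point cloud coordinate errors)
    """
    H = np.zeros((dof, dof))     # first accumulation matrix:  sum of J_i^T J_i
    b = np.zeros(dof)            # second accumulation matrix: sum of J_i^T e_i
    for J, e in zip(jacobian_blocks, error_blocks):
        H += J.T @ J
        b += J.T @ e
    return H, b

# Assumed usage: combine both sensor families into one pose increment.
# H_cam, b_cam = accumulate(camera_jacobians, pixel_errors)
# H_lid, b_lid = accumulate(lidar_jacobians, point_errors)
# delta = np.linalg.solve(H_cam + H_lid, -(b_cam + b_lid))
```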
9. A positioning device for an unmanned vehicle, comprising:
the multi-sensor acquisition module is used for acquiring images acquired by cameras on the vehicle and point clouds acquired by the laser radars;
the positioning mark coordinate acquisition module is used for acquiring the characteristic point pixel coordinates of the positioning mark in the vehicle running environment according to the image and acquiring the characteristic point cloud coordinates of the positioning mark according to the point cloud;
the target error function construction module is used for constructing a target error function according to the characteristic point pixel coordinates of each camera and the characteristic point cloud coordinates of each laser radar;
and the vehicle pose determining module is used for determining the pose of the unmanned vehicle according to the target error function.
10. An electronic device, the electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-8.
11. A computer readable storage medium storing computer instructions for causing a processor to perform the method of any one of claims 1-8.
CN202310320159.0A 2023-03-29 2023-03-29 Positioning method, device and equipment of unmanned vehicle and storage medium Pending CN116295353A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310320159.0A CN116295353A (en) 2023-03-29 2023-03-29 Positioning method, device and equipment of unmanned vehicle and storage medium

Publications (1)

Publication Number Publication Date
CN116295353A (en) 2023-06-23

Family

ID=86823988

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310320159.0A Pending CN116295353A (en) 2023-03-29 2023-03-29 Positioning method, device and equipment of unmanned vehicle and storage medium

Country Status (1)

Country Link
CN (1) CN116295353A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination