CN115388880A - Low-cost memory parking map building and positioning method and device and electronic equipment


Info

Publication number
CN115388880A
Authority
CN
China
Prior art keywords: map, wheel speed, data, pose, positioning
Legal status: Granted
Application number
CN202211326333.4A
Other languages
Chinese (zh)
Other versions
CN115388880B (en)
Inventor
谢浪雄
Current Assignee
Lianyou Zhilian Technology Co ltd
Original Assignee
Lianyou Zhilian Technology Co ltd
Application filed by Lianyou Zhilian Technology Co ltd
Priority application: CN202211326333.4A
Published as CN115388880A; application granted and published as CN115388880B
Current legal status: Active

Classifications

    • G01C 21/3841: Electronic maps specially adapted for navigation; creation or updating of map data from two or more sources, e.g. probe vehicles
    • G01C 21/28, 21/30, 21/32: Navigation in a road network with correlation of data from several navigational instruments; map- or contour-matching; structuring or formatting of map data
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters (camera calibration)
    • G06T 2207/10028: Range image; depth image; 3D point clouds
    • Y02T 10/40: Engine management systems (ICE-based road transport)


Abstract

The invention discloses a low-cost memory parking mapping and positioning method and device and electronic equipment, and relates to the technical field of memory parking positioning. The invention comprises the following steps: computing wheel speed odometry from the rear-axle wheel speed pulse data output by the vehicle body, and time-synchronizing the wheel speed odometer data with the forward-looking fisheye image data; constructing a SLAM sparse point cloud map from the forward-looking fisheye image data; recovering the scale of the sparse point cloud map using partial keyframe trajectories and the wheel speed odometer trajectory; performing false-loop detection with the vehicle-body wheel speed odometer information during mapping to improve mapping robustness; and fusing the SLAM positioning pose with the wheel speed odometer, improving the positioning frequency while ensuring positioning accuracy. The method uses only two conventional sensors, the forward-looking fisheye camera of the surround-view system and the vehicle wheel speed odometer, achieves a memorized route length of more than 1 kilometer, and has the characteristics of low cost, simple implementation and long parking distance.

Description

Low-cost memory parking map building and positioning method and device and electronic equipment
Technical Field
The invention belongs to the technical field of memory parking positioning, and particularly relates to a low-cost memory parking map building and positioning method, device and electronic equipment.
Background
With the development of automobile technology, the demand for automatic driving is increasing. In automatic driving applications, accurate positioning is the most important basic technology: perception, prediction, planning and control are all based on accurate positioning results. To achieve more robust positioning, intelligent-driving vehicles are typically equipped with a variety of sensors, such as GPS, cameras, lidar, IMU and wheel odometers, with corresponding positioning algorithms that are vision-based, lidar-based and so on. From a commercial point of view, low-cost sensor solutions such as cameras and wheel speed odometers are more readily accepted by the market.
Memory parking systems, as a special application of autonomous driving, are also receiving increasing market attention. Memory parking mainly comprises two core modules: the system first learns and memorizes the scene of the target parking lot, and then automatically parks the car into a designated parking space based on the memorized environment. Because parking lot environments are usually narrow, crowded and small in extent, any positioning error may cause a collision, so accurate positioning is all the more important, and a positioning mode relying only on wheel speed obviously cannot satisfy this demand.
Moreover, intelligent-driving cars are generally already fitted with low-cost sensors such as a surround-view camera system in addition to the vehicle-body wheel speed odometer, so completing high-precision long-distance SLAM mapping and positioning using only the surround-view fisheye camera and the vehicle-body wheel speed odometer, without adding other expensive sensors, would have high commercial value. For example, the patent with publication number CN109887053A discloses a vehicle positioning method and system based on monocular vision SLAM: during mapping, in the monocular vision SLAM initialization process, the scale of the monocular vision SLAM map is determined according to the actual moving distance of the vehicle, and the scale and other information of the SLAM map are continuously optimized according to that distance; during map-based positioning, the target image is matched against the feature points in the SLAM map to determine the relocalization pose of the vehicle in the SLAM map, the conversion relation between the vehicle body pose measured by the vehicle positioning module and the visual pose of the monocular camera is obtained, and this conversion relation is continuously optimized using multiple vehicle body poses and visual poses. The shortcomings of this method are: first, in the SLAM positioning process, when two very similar scenes exist in a parking lot, relocalization runs the risk of mismatching, ultimately causing the whole system to malfunction; second, when every frame of image data received by the system is used to build the map, the mapping speed of the whole system is low, resulting in poor user experience; in addition, because many similar scenes exist in parking lots, the system has a risk of false loop closures during mapping.
As a further example, the patent with publication number CN110132280A discloses a vehicle positioning method, vehicle positioning device and vehicle for indoor scenes. The method acquires the current frame image of the road ahead shot by a camera and obtains the inertial attitude information of the vehicle from an Inertial Measurement Unit (IMU); extracts the main vanishing point in the current frame image; judges whether the main vanishing point is abnormal according to the inertial attitude information; if it is not abnormal, calculates the global attitude information of the camera from the main vanishing point; corrects the SLAM algorithm according to the global attitude information; and positions the vehicle according to the corrected SLAM algorithm. By fusing the main vanishing point with inertial navigation in indoor scenes, the method can assist the SLAM algorithm in correcting accumulated error, thereby improving vehicle positioning accuracy. Its shortcomings are: first, when a low-precision IMU is used to calculate the vehicle pose, factors such as zero bias and temperature drift introduce large errors, while using a high-precision IMU increases system cost; second, in indoor scenes such as parking structures, if the camera faces outdoors the main vanishing point may never be computable, risking algorithm failure; in addition, the method can only be used in indoor scenes, which greatly limits the user when using this function.
Therefore, the invention provides a low-cost memory parking mapping and positioning method, device and electronic equipment.
Disclosure of Invention
The invention aims to provide a low-cost memory parking mapping and positioning method, device and electronic equipment which map and position using low-cost sensors such as a camera and a vehicle-body wheel speed odometer; screen images by using the wheel speed odometer data synchronized with two adjacent image frames to calculate the distance between them; detect false loop closures using the wheel speed odometer information of the current keyframe and the loop keyframe; recover the scale of the constructed SLAM map from partial keyframe trajectories and the wheel speed odometer trajectory based on the Umeyama model; and perform fusion positioning with the SLAM pose information and the wheel speed odometer information, thereby solving the existing problems.
In order to solve the technical problems, the invention is realized by the following technical scheme:
as a first aspect, the present invention provides a low-cost memory parking map and positioning method, including the following steps:
step1: parameter calibration: establishing a coordinate system by taking the center of a rear axle of the vehicle body as an origin of coordinates and adopting a right-hand rule, and calibrating parameters of a forward-looking fisheye camera in the looking around;
step2: map construction: perform time synchronization using the timestamps of the image and wheel speed odometer data, then screen image frames; perform local BA optimization on the keyframe poses and the map points they generate; apply a similarity transformation to the poses of the keyframe and its adjacent keyframes and the related map points to complete loop correction, and perform false-loop detection using the wheel speed odometer data; recover the scale of the sparse point cloud map and then store the map in blocks;
and step 3: and (3) real-time positioning: after the initial position is successfully matched, predicting and tracking the pose of the next frame by combining the poses of the first two frames and the feature points, and optimizing the previously tracked pose by loading the local map of the current frame;
and 4, step 4: fusion positioning: fuse the SLAM output pose and the wheel speed odometer using an extended Kalman filter algorithm; at moments far from the update point, use the SLAM direction of the two most recent frames as the reference direction, project the wheel speed trajectory onto this reference direction for correction, and publish the corrected trajectory as the fused trajectory.
Further, the time synchronization method comprises the following steps:
finding the two wheel speed odometer data nearest to the image timestamp, and, when the time difference between the image time and each of the two nearest wheel speed odometer data is less than a fixed threshold, obtaining the wheel speed odometer data synchronized with each image frame by linear interpolation; that is, the wheel speed odometer data at the previous moment and at the next moment must both lie within the threshold of the image time;
the wheel speed odometer data of the two nearest neighbors refers to the wheel speed odometer data at the previous moment and the wheel speed odometer data at the next moment.
Further, the method of linear interpolation is as follows:
Let (x1, y1, θ1) and (x2, y2, θ2) be the pose data corresponding to times t1 and t2 respectively; the pose data (xt, yt, θt) corresponding to the image at the current time t is:
xt = x1 + (t - t1)/(t2 - t1) · (x2 - x1)
yt = y1 + (t - t1)/(t2 - t1) · (y2 - y1)
θt = θ1 + (t - t1)/(t2 - t1) · (θ2 - θ1)
wherein the pose data is composed as (abscissa of the vehicle position, ordinate of the vehicle position, vehicle heading angle);
the pose data corresponding to the wheel speed odometer is calculated from the pulse counts of the left and right rear-axle wheels output by the wheel speed odometer, in the following manner:
x_t = x_{t-1} + (s_l + s_r)/2 · cos(θ_{t-1} + (s_r - s_l)/(2 · wheelbase))
y_t = y_{t-1} + (s_l + s_r)/2 · sin(θ_{t-1} + (s_r - s_l)/(2 · wheelbase))
θ_t = θ_{t-1} + (s_r - s_l)/wheelbase
in the formula:
s_l: the moving distance of the left wheel;
s_r: the moving distance of the right wheel;
wheelbase: the vehicle wheelbase;
(x_{t-1}, y_{t-1}, θ_{t-1}): the abscissa, ordinate and vehicle heading angle of the vehicle position at the previous moment;
(x_t, y_t, θ_t): the predicted abscissa, ordinate and vehicle heading angle of the vehicle position at the current moment.
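The interpolation and dead-reckoning computations above can be sketched in Python as follows (a minimal illustration, not the patent's implementation; function names are hypothetical):

```python
import math

def interpolate_pose(t, t1, pose1, t2, pose2):
    """Linearly interpolate a wheel-odometer pose (x, y, heading) at image time t,
    given the two nearest odometer samples at t1 and t2."""
    a = (t - t1) / (t2 - t1)
    return tuple(p1 + a * (p2 - p1) for p1, p2 in zip(pose1, pose2))

def dead_reckon(pose, s_l, s_r, wheelbase):
    """Advance the vehicle pose from the left/right rear-wheel travel distances."""
    x, y, th = pose
    ds = 0.5 * (s_l + s_r)           # distance moved by the rear-axle centre
    dth = (s_r - s_l) / wheelbase    # heading change over the interval
    # integrate position at the midpoint heading
    x += ds * math.cos(th + 0.5 * dth)
    y += ds * math.sin(th + 0.5 * dth)
    return (x, y, th + dth)
```

Interpolation is applied per component, including the heading angle, which matches the formulas above for small inter-sample intervals.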
Further, the image frame screening comprises the following steps:
The vehicle travel distance between two adjacent image frames is calculated from the wheel speed odometer data synchronized with those frames; when the distance is smaller than a fixed threshold, the current image frame is deleted, preventing a large number of redundant images from being generated when the vehicle travels slowly or is stationary.
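The screening rule reduces to a distance check between the synchronized odometer poses of consecutive frames; a minimal sketch (the 0.1 m threshold is a hypothetical value, not from the patent):

```python
def should_keep_frame(prev_pose, cur_pose, min_dist=0.1):
    """Return True when the vehicle has moved at least min_dist metres
    between the odometer poses (x, y, heading) of two adjacent frames."""
    dx = cur_pose[0] - prev_pose[0]
    dy = cur_pose[1] - prev_pose[1]
    return (dx * dx + dy * dy) ** 0.5 >= min_dist
```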
Further, in the step2, the scale recovery method of the sparse point cloud map comprises:
a Umeyama model is constructed by using partial key frame trajectory data calculated by using an SLAM algorithm and a wheel speed odometer trajectory synchronous with the partial key frame trajectory data;
solving to obtain an optimal scale factor between the pose track of part of the key frame images and the track of the wheel speed odometer;
multiplying the sparse point cloud map by the optimal scale factor to complete scale recovery of the sparse point cloud map;
the partial key frame track data refers to corresponding image key frame data in a certain range of the starting position, so that the condition that the scale factor is inaccurate due to the accumulated deviation of the wheel speed odometer can be reduced.
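The optimal scale factor between the keyframe trajectory and the odometer trajectory can be obtained in closed form from Umeyama's method (1991); the sketch below recovers only the scale (array names are hypothetical, and the patent does not specify 2-D versus 3-D alignment):

```python
import numpy as np

def umeyama_scale(slam_pts, odom_pts):
    """Estimate the similarity-transform scale aligning SLAM keyframe
    positions (N x d) to synchronized wheel-odometer positions (N x d)."""
    mu_s = slam_pts.mean(axis=0)
    mu_o = odom_pts.mean(axis=0)
    xs = slam_pts - mu_s                     # centred source (SLAM)
    xo = odom_pts - mu_o                     # centred target (odometer)
    cov = xo.T @ xs / len(slam_pts)          # cross-covariance
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(cov.shape[0])
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[-1, -1] = -1                       # enforce a proper rotation
    var_s = (xs ** 2).sum() / len(slam_pts)  # variance of source points
    return np.trace(np.diag(D) @ S) / var_s
```

Multiplying the sparse point cloud coordinates by this factor completes the scale recovery described above.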
Further, in step2, the method for storing the map in blocks includes:
dividing the map into a plurality of data blocks, respectively carrying out binary conversion on the data blocks and recording byte positions of the data blocks in a file in a mode of indexing the file;
the data block is composed of a plurality of keyframes and a sparse point map cloud.
Further, the real-time positioning method in step3 includes:
step31: loading a map: loading the constructed sparse point cloud map;
step32: extracting characteristic points: detecting corner point information of a real-time incoming picture data frame;
step33: initial position matching: position matching is performed on the images received at the initial position using the bag-of-words information; if the matched position is beyond a certain range of the mapping start point, the initial position matching is considered to have failed, which prevents the risk of wrong relocalization matches caused by overly similar parking lot scenes;
step34: tracking a constant-speed model: when the initial position is successfully matched, predicting the pose of the next frame by using the poses of the first two frames, and tracking the pose of the next frame through the feature points;
step35: pose optimization: and optimizing the pose tracked in Step34 by loading the local map of the current frame, and improving the pose precision of the current frame.
Further, the fusion positioning method in step 4 comprises:
step1: first, the wheel speed data corresponding to the current SLAM frame is obtained by linear interpolation and used as the prediction, and the SLAM positioning information is used as the measurement; both are input into an extended Kalman filter and fused to obtain the posterior pose. Because the timestamps may not be synchronized, the posterior pose obtained by fusion is used to re-predict the wheel speed pose of the previous frame, and the real-time solving thread predicts the pose of the current frame based on the wheel speed of the previous frame, so that the update produced by fusion is propagated to the current moment;
step2: on the basis of the previous frame of wheel speed pose, predicting the position and the course angle by using wheel speed integration; at the time far away from the update point, the error of the integral prediction is gradually accumulated, and in order to reduce the accumulated error, the wheel speed track is projected to the reference direction to be corrected by using the SLAM direction of the two latest frames as the reference direction, and the corrected track is issued as a fusion track.
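The measurement-update half of this fusion, wheel-odometer pose as prediction, SLAM pose observed directly, can be sketched for a (x, y, heading) state; this is a minimal illustration with H = I, not the patent's full filter, and the covariances are assumed values:

```python
import numpy as np

def ekf_fuse(x_pred, P_pred, z_slam, R_meas):
    """One EKF measurement update: x_pred/P_pred come from wheel-speed
    integration; z_slam is the SLAM pose observed directly (H = I)."""
    H = np.eye(3)
    S = H @ P_pred @ H.T + R_meas            # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_post = x_pred + K @ (z_slam - x_pred)  # posterior pose
    P_post = (np.eye(3) - K @ H) @ P_pred
    return x_post, P_post
```

With equal prediction and measurement covariances, the posterior pose lies midway between the odometer prediction and the SLAM measurement, which matches the intuition of trusting both sources equally.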
As a second aspect of the present invention, the present invention provides a low-cost memory parking map and positioning device, which is used to implement the memory parking map and positioning method of the first aspect, and the device includes:
the parameter calibration module is used for calibrating internal parameters and external parameters of the forward-looking fisheye camera in the panoramic view;
the map building module comprises a time synchronization sub-module, an image frame screening sub-module, a false loop detection sub-module, a scale recovery sub-module and a map block storage sub-module;
a map positioning module for initial position matching, and the position of the initial relocation is limited in a certain range of the starting point;
and the fusion positioning module is used for correcting the track and issuing the corrected track as a fusion track.
As a third aspect provided by the present invention, the present invention is an electronic device, which includes a memory and a processor, wherein the memory stores a computer program operable on the processor, and the processor implements the method provided by the first aspect when executing the computer program.
The invention has the following beneficial effects:
the invention utilizes low-cost sensors such as a camera and a vehicle body wheel speed odometer to establish a picture and position, and has higher commercial value; the distance between two adjacent frames is calculated by using a wheel speed odometer for synchronizing the two adjacent frames of images to screen the images, so that the image building speed of the LAPA system is increased; because a large number of similar scenes exist in the parking lot, when the image is built, the detection of the false loop is realized by using the wheel speed odometer information of the current key frame and the loop key frame, and the image building robustness of the system is improved; and the SLAM pose information and the wheel speed odometer information are used for fusion positioning, so that the positioning frequency of the whole system is finally improved, and the problem that the SLAM cannot be positioned in real time when the domain control performance is low is well solved.
Of course, it is not necessary for any product to practice the invention to achieve all of the above-described advantages at the same time.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a flow chart of a low-cost memory parking map and positioning method of the present invention;
FIG. 2 is a schematic view of a central coordinate system of a rear axle of a vehicle body according to the present invention;
FIG. 3 is a flow chart of a map construction method of the present invention;
fig. 4 is a flowchart of a real-time positioning method according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the present invention, it is to be understood that the terms "front", "rear", "left", "right", "center", and the like, indicate orientation or positional relationship, are merely for convenience in describing the present invention and for simplicity in description, and do not indicate or imply that the referenced components or elements must have a particular orientation, be constructed and operated in a particular orientation, and therefore, should not be taken as limiting the present invention.
The first embodiment is as follows:
as an embodiment provided by the present invention, the present invention is a low-cost memory parking map and positioning method, as shown in fig. 1, the method includes the following steps:
step1: parameter calibration: as an embodiment of the present invention, preferably, the parameter calibration is performed on the forward looking fisheye camera in the looking around, and includes the following steps:
step11: intrinsic calibration: perform intrinsic calibration on the forward-looking fisheye camera of the surround-view system; as an embodiment provided by the present invention, preferably, the calibrated intrinsic parameters include fx, fy, cx and cy, where fx = F/dx and fy = F/dy;
F is the length of the focal length of the front-looking fisheye camera;
dx refers to the length of one pixel in the horizontal x-direction;
dy refers to the length of one pixel in the horizontal y-direction;
cx and cy refer to the horizontal and vertical pixel offsets between the image center pixel coordinate and the image origin pixel coordinate (the principal point);
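The definitions above assemble into the standard pinhole intrinsic matrix; a minimal sketch (the numeric values in the usage below are hypothetical, and fisheye distortion coefficients, which a real fisheye calibration also produces, are omitted):

```python
import numpy as np

def intrinsic_matrix(F, dx, dy, cx, cy):
    """Build the pinhole intrinsic matrix K from the focal length F (mm),
    pixel sizes dx, dy (mm per pixel) and principal point (cx, cy)."""
    fx, fy = F / dx, F / dy
    return np.array([[fx, 0.0, cx],
                     [0.0, fy, cy],
                     [0.0, 0.0, 1.0]])
```

For example, F = 4 mm with 0.002 mm pixels gives fx = fy = 2000 pixels.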
step12: extrinsic calibration: compute the extrinsic parameters of the forward-looking fisheye camera of the surround-view system, namely the transform from the camera coordinate system to the vehicle-body rear-axle center coordinate system; the rear-axle center coordinate system is a right-handed system with the rear-axle center as origin, the x-axis pointing forward, the y-axis to the left and the z-axis upward, as shown in FIG. 2;
step2: map construction: performing time synchronization by using an image (an image acquired by a forward looking fisheye camera) and a time stamp of wheel speed odometer data, then performing image frame screening, performing local BA optimization on the pose of a key frame and a map point generated by the pose, and performing false loop detection according to the wheel speed odometer data of a current key frame and a loop key frame; carrying out scale recovery on the sparse point cloud map, and then partitioning and storing the map; BA is called Bundle Adjustment in English, and is also called a Bundle Adjustment optimization model;
as an embodiment provided by the present invention, preferably, as shown in fig. 3, the map construction steps are as follows:
step21: tracking: firstly, graying an extracted image (an image collected by a forward-looking fisheye camera), and extracting characteristic angular points of the grayed image; then, time synchronization is carried out by utilizing the images and the time stamps of the wheel speed odometer data; then, image frame screening is carried out by utilizing wheel speed odometer data on image synchronization so as to accelerate the speed of drawing establishment of the LAPA system; finally, screening out common frames with rich characteristic points and high common visual range as key frames to be provided to a local image building module and a loop detection module;
step22: local map building: firstly, triangularization is carried out by utilizing the poses of key frames and the common viewpoint between the first-level adjacent key frames and the second-level adjacent key frames to generate new map points, and some redundant map points are fused; then, local BA optimization is carried out on the pose of the key frame and the map points generated by the pose, so that the precision of the pose of the key frame and the map points of the key frame are improved; finally, if more than 80% of feature points in the key frame can be observed by other key frames, removing the redundant key frame;
step23: loop detection: firstly, detecting the descriptor co-view degree of a current key frame and a historical key frame in a word bag mode, and if the co-view degree is detected to be greater than a certain threshold value, putting the current key frame into a subsequent key frame queue; then, carrying out scene similarity identification on subsequent key frames, when the scene similarity is greater than certain preset conditions, regarding the key frame as a loop key frame, and carrying out transformation of a similarity matrix, namely loop correction, on the key frame, the poses of adjacent key frames and relevant map points; in addition, because a lot of similar scenes exist in the parking lot, the system can utilize wheel speed odometer information to realize false loop detection, and therefore the graph building robustness of the whole system is improved.
Step24, scale recovery: a Umeyama model is constructed from partial keyframe trajectory data computed by the SLAM algorithm and the wheel speed odometer trajectory synchronized with it; the optimal scale factor between the keyframe pose trajectory and the wheel speed odometer trajectory is solved, and finally the sparse point cloud map is multiplied by the optimal scale factor to complete its scale recovery. Because the accumulated deviation of the wheel speed odometer grows with distance during integration, the partial keyframe pose data refers to the image keyframe data within a certain range of the starting position, which reduces scale-factor inaccuracy caused by the odometer's accumulated deviation. SLAM stands for Simultaneous Localization and Mapping;
step25: and (4) map saving: in an embedded system, because memory resources are limited, when a mapping distance is long, if a whole map is directly stored into binary data, the memory is increased sharply, and even the risk of memory overflow can exist, therefore, in the invention, the map is divided into a plurality of data blocks, the binary conversion is respectively carried out on the data blocks, and the byte positions of the data blocks in a file are recorded in an index file manner, so as to prevent the risk of abnormal exit of the system caused by memory overflow when the whole map is stored; the data block is a data module consisting of a plurality of key frames and sparse point clouds thereof;
and step 3: and (3) real-time positioning: after the initial position is successfully matched, predicting and tracking the pose of the next frame by combining the poses of the first two frames and the feature points, and optimizing the previously tracked pose by loading the local map of the current frame; as shown in fig. 4, the real-time positioning includes the following steps:
step31, map loading: loading the previously constructed sparse point cloud map;
Step32, feature point extraction: detecting corner information, namely FAST corners, of each picture data frame arriving in real time;
Step33, initial position matching: position matching is performed on the pictures arriving at the starting position using bag-of-words information; if the matched position is beyond a certain range of the mapping starting point, the initial matching is considered to have failed, which prevents the risk of a wrong relocation match caused by excessive similarity of the parking lot environment; the starting position (mapping starting point) refers to the learning starting point of the memory parking process;
Step34, constant velocity model tracking: when the initial position is matched successfully, the pose of the next frame is predicted from the poses of the previous two frames and tracked through the feature points;
Step35, pose optimization: the previously tracked pose is optimized by loading the local map of the current frame to improve the pose accuracy of the current frame, i.e. the pose is optimized by means of graph optimization;
as an embodiment provided by the present invention, preferably, the step of tracking the pose of the next frame is:
SS1, projecting a map point of a previous frame into a current frame;
SS2, if the projection can be carried out on the current frame, searching all related characteristic points in the current frame in a window range by taking the projection point of the current frame as the center;
SS3, comparing the descriptor of the map point of the previous frame with the descriptors of all related feature points of the current frame to find the feature point with the most similar descriptor, then taking that map point of the previous frame as the map point corresponding to this feature point;
SS4, repeating the SS1-SS3 process for all map points in the previous frame, and finally obtaining a current frame feature point and a map point pair set corresponding to the current frame feature point;
SS5, constructing a map optimization model by using the map point pairs, and further optimizing the predicted pose to obtain a more accurate pose of the current frame;
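The SS1-SS4 projection-and-matching loop can be sketched as below, assuming a pinhole intrinsic matrix K and binary descriptors compared by Hamming distance; all names are illustrative, and the patent does not prescribe this exact implementation:

```python
import numpy as np

def track_by_projection(map_points, descriptors_prev, pose_pred, K,
                        kps_cur, desc_cur, window=15):
    """SS1-SS4: project each previous-frame map point with the predicted
    pose, search current-frame keypoints inside a window around the
    projection, and pair the map point with the keypoint whose binary
    descriptor has the smallest Hamming distance."""
    R, t = pose_pred
    matches = []
    for P, d_prev in zip(map_points, descriptors_prev):
        Pc = R @ P + t                                   # SS1: into camera frame
        if Pc[2] <= 0:
            continue                                     # behind the camera
        u = K[0, 0] * Pc[0] / Pc[2] + K[0, 2]            # pinhole projection
        v = K[1, 1] * Pc[1] / Pc[2] + K[1, 2]
        best, best_dist = None, np.inf
        for j, (ku, kv) in enumerate(kps_cur):           # SS2: window search
            if abs(ku - u) <= window and abs(kv - v) <= window:
                dist = np.count_nonzero(d_prev != desc_cur[j])  # SS3: Hamming
                if dist < best_dist:
                    best, best_dist = j, dist
        if best is not None:
            matches.append((best, P))                    # SS4: point pair
    return matches
```

The resulting keypoint/map-point pairs are exactly the input that the SS5 graph-optimization step consumes.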
Step 4: fusion positioning: the SLAM output pose and the wheel speed odometer are fused using an extended Kalman filter algorithm; the fusion positioning method comprises the following steps:
Step1: wheel speed data corresponding to the current SLAM frame are first obtained by linear interpolation and used as the prediction; the SLAM positioning information is used as the measurement, and both are input into the extended Kalman filter and fused to obtain a posterior pose. Because the timestamps may be asynchronous, the posterior pose obtained by fusion is used to re-predict the wheel speed pose of the previous frame, and the real-time solving thread predicts the pose of the current frame based on the wheel speed of the previous frame, so that the update produced by the fusion is propagated to the current moment;
Step2: on the basis of the wheel speed pose of the previous frame, the position and heading angle are predicted using wheel speed integration. At moments far from the update point, the error of the integral prediction gradually accumulates. To reduce this accumulated error, the direction given by the last two SLAM frames, which approximates the direction of vehicle travel, is used as a reference direction; the wheel speed trajectory is projected onto this reference direction for correction, and the corrected trajectory is published as the fused trajectory.
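A minimal sketch of the Step2 correction, under the assumption that "projecting onto the reference direction" means replacing each wheel-odometry point by its scalar projection along the unit vector through the last two SLAM poses; the function and argument names are illustrative, not the patent's:

```python
import math

def correct_with_reference_direction(base_pose, wheel_points, slam_prev, slam_cur):
    """Project wheel-odometry displacements (relative to the last update
    pose) onto the reference direction given by the last two SLAM poses,
    limiting lateral drift accumulated far from the update point."""
    # reference direction = unit vector from the 2nd-last to the last SLAM pose
    dx, dy = slam_cur[0] - slam_prev[0], slam_cur[1] - slam_prev[1]
    n = math.hypot(dx, dy) or 1.0
    ux, uy = dx / n, dy / n
    corrected = []
    for px, py in wheel_points:
        # displacement of this wheel-odometry point relative to the base pose
        rx, ry = px - base_pose[0], py - base_pose[1]
        s = rx * ux + ry * uy            # scalar projection onto the reference
        corrected.append((base_pose[0] + s * ux, base_pose[1] + s * uy))
    return corrected
```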
As an embodiment provided by the present invention, preferably, the time synchronization method is:
The two nearest-neighbor wheel speed odometer data are found using the image timestamp; when the time differences between the image time and these two nearest-neighbor wheel speed odometer data are smaller than a preset threshold, the wheel speed odometer data synchronized with each image frame are obtained by linear interpolation.
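The nearest-neighbor search plus linear interpolation can be sketched as follows; the 50 ms threshold is an assumed value, not one taken from the patent, and the heading is interpolated naively without angle wrap-around handling:

```python
import bisect

def sync_wheel_to_image(img_t, odo_times, odo_poses, max_dt=0.05):
    """Find the two wheel-odometry samples bracketing the image timestamp
    (odo_times must be sorted) and linearly interpolate between them;
    return None when either neighbor is farther than max_dt in time."""
    i = bisect.bisect_left(odo_times, img_t)
    if i == 0 or i == len(odo_times):
        return None                          # no bracketing pair exists
    t1, t2 = odo_times[i - 1], odo_times[i]
    if img_t - t1 > max_dt or t2 - img_t > max_dt:
        return None                          # neighbors too far in time: reject
    a = (img_t - t1) / (t2 - t1)             # interpolation weight in [0, 1]
    p1, p2 = odo_poses[i - 1], odo_poses[i]
    return tuple(v1 + a * (v2 - v1) for v1, v2 in zip(p1, p2))
```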
As an embodiment provided by the present invention, preferably, the linear interpolation method is:
Let (x_1, y_1, θ_1) and (x_2, y_2, θ_2) be the wheel speed odometer pose data at time t_1 and time t_2 respectively; the pose data (x_t, y_t, θ_t) corresponding to the current image time t are:

x_t = x_1 + (x_2 − x_1)·(t − t_1)/(t_2 − t_1)
y_t = y_1 + (y_2 − y_1)·(t − t_1)/(t_2 − t_1)
θ_t = θ_1 + (θ_2 − θ_1)·(t − t_1)/(t_2 − t_1)

wherein the pose data take the form (abscissa of the vehicle position, ordinate of the vehicle position, vehicle heading angle); that is, x_1, y_1 and θ_1 are respectively the abscissa of the vehicle position, the ordinate of the vehicle position and the vehicle heading angle at time t_1; x_2, y_2 and θ_2 are respectively those at time t_2; and x_t, y_t and θ_t are respectively the abscissa of the vehicle position, the ordinate of the vehicle position and the vehicle heading angle at the current time t.
The pose data corresponding to the wheel speed odometer are obtained from the rear-axle left and right wheel pulse values output by the wheel speed odometer, calculated as follows:

x_t = x_{t−1} + ((s_l + s_r)/2)·cos(θ_{t−1} + (s_r − s_l)/(2·wheelbase))
y_t = y_{t−1} + ((s_l + s_r)/2)·sin(θ_{t−1} + (s_r − s_l)/(2·wheelbase))
θ_t = θ_{t−1} + (s_r − s_l)/wheelbase

in the formula:
s_l: the moving distance of the left wheel;
s_r: the moving distance of the right wheel;
wheelbase: the vehicle wheelbase;
x_{t−1}, y_{t−1}, θ_{t−1}: respectively the abscissa of the vehicle position, the ordinate of the vehicle position and the vehicle heading angle at the previous moment;
x_t, y_t, θ_t: respectively the predicted values of the abscissa, the ordinate and the vehicle heading angle of the vehicle position at the current moment.
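The integration above corresponds to the standard differential odometry update with a mid-point heading; a minimal sketch:

```python
import math

def odom_update(x, y, theta, s_l, s_r, wheelbase):
    """One differential odometry step from left/right rear-wheel travel
    distances: the axle center advances by the mean distance along the
    mid-point heading, and the heading changes by (s_r - s_l)/wheelbase."""
    ds = (s_l + s_r) / 2.0               # distance travelled by the axle center
    dtheta = (s_r - s_l) / wheelbase     # heading change over the step
    x += ds * math.cos(theta + dtheta / 2.0)
    y += ds * math.sin(theta + dtheta / 2.0)
    theta += dtheta
    return x, y, theta
```

In practice s_l and s_r come from the wheel pulse counts multiplied by the distance per pulse.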
As an embodiment provided by the present invention, preferably, the image frame screening includes:
The vehicle travel distance corresponding to two adjacent image frames is calculated using the wheel speed odometer data synchronized with them; when the distance is smaller than a preset threshold, the current image frame is deleted.
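A minimal sketch of this screening rule, with an assumed threshold value:

```python
import math

def keep_frame(prev_odo, cur_odo, min_dist=0.1):
    """Drop the current image frame when the vehicle has moved less than
    min_dist (an assumed threshold, in meters) since the previous kept
    frame, as measured by the synchronized wheel-odometry poses (x, y, theta)."""
    dist = math.hypot(cur_odo[0] - prev_odo[0], cur_odo[1] - prev_odo[1])
    return dist >= min_dist
```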
As an embodiment provided by the present invention, preferably, a first-level neighboring keyframe refers to a neighboring keyframe whose number of common-view map points with the current keyframe exceeds a certain threshold; second-level neighboring keyframes refer to neighboring keyframes whose number of common-view map points with a first-level neighboring keyframe exceeds a certain threshold.
As an embodiment provided by the present invention, preferably, the BA optimization model is:

{R_k, t_k, P_p | k ∈ K_L, p ∈ P_L} = argmin Σ_{k ∈ K_L ∪ K_F} Σ_{p ∈ P_L} ‖ u_{k,p} − π(R_k·P_p + t_k) ‖²

wherein π(·) represents the mapping function from the camera coordinate system to the pixel coordinate system, as follows:

π([x, y, z]ᵀ) = [ f_x·x/z + c_x, f_y·y/z + c_y ]ᵀ = [u, v]ᵀ

wherein:
u and v represent the pixel coordinate values in the x and y directions of the image, respectively; x, y and z are the coordinates of a spatial point expressed in the camera coordinate system, i.e. a world-frame map point after the transformation R_k·P_p + t_k;
f_x, f_y, c_x and c_y are the camera intrinsic parameters calibrated as described above, wherein:
f_x = F/dx
f_y = F/dy
F refers to the focal length;
dx refers to the length of one pixel in the horizontal x direction;
dy refers to the length of one pixel in the vertical y direction;
c_x and c_y refer to the number of horizontal and vertical pixels between the image center pixel coordinate and the image origin pixel coordinate;
K_L represents the first-level neighboring keyframe list of the current keyframe;
P_L represents the local map point list corresponding to the first-level neighboring keyframes of the current keyframe;
K_F represents the second-level neighboring keyframe list of the current keyframe, excluding keyframes already contained in the first-level list;
u_{k,p} represents the pixel coordinate at which the p-th map point is observed in the k-th keyframe;
P_p represents the coordinate value of the p-th map point among the first-level local map points;
R_k represents the rotation matrix and t_k the translation vector of the k-th keyframe. In particular, the index sets on the two sides of the equation have different meanings: the set on the left, {R_k, t_k, P_p | k ∈ K_L, p ∈ P_L}, contains the variables to be optimized, while the sum on the right runs over k ∈ K_L ∪ K_F and therefore splits into two parts: the poses of keyframes in K_L are optimized together with the map points, whereas the poses of keyframes in K_F only participate in the computation as fixed constraints and are not optimized. Since the sets are different, they are denoted by different index symbols.
As an embodiment provided by the present invention, preferably, a loopback means that an accumulated error arises during operation of the SLAM system. To eliminate it, the SLAM system performs loop closure detection: it judges from the semantics of the picture feature points whether it has visited the current position before, and if so, it corrects the keyframes carrying accumulated error and their feature point clouds using a similarity transformation matrix, thereby eliminating the accumulated error. It should be noted, however, that when similar scenes exist, the SLAM system may take a wrong position as the loop position and then perform loop correction, corrupting the whole mapping process; false loop detection is therefore important. In the invention, auxiliary detection is performed mainly through the wheel speed odometer data synchronized with the images: when the SLAM system detects a loop keyframe, the wheel speed odometer data of the current keyframe and of the loop keyframe are compared, and if the distance between them is greater than a certain threshold, the loop is considered false.
As an embodiment provided by the present invention, preferably, in the step2, the scale recovery method for the sparse point cloud map comprises:
a Umeyama model is constructed by using partial key frame trajectory data calculated by using an SLAM algorithm and a wheel speed odometer trajectory synchronous with the partial key frame trajectory data;
solving to obtain an optimal scale factor between the pose track of part of the key frame images and the track of the wheel speed odometer;
multiplying the sparse point cloud map by the optimal scale factor to complete scale recovery of the sparse point cloud map;
the partial key frame track data refers to corresponding image key frame data within a certain range of the initial position;
As an embodiment provided by the present invention, preferably, the Umeyama model is mainly used to align two trajectories. The principle is as follows: let x be the position coordinates of the trajectory to be evaluated and y the position coordinates of the wheel speed odometer trajectory. A scale s, a rotation matrix R and a translation vector t are sought such that the trajectory to be evaluated and the wheel speed odometer trajectory are aligned in scale; the error model is:

e_i = y_i − (s·R·x_i + t)

wherein y_i and x_i represent the coordinate values of the i-th position of the wheel speed odometer trajectory and of the trajectory to be evaluated, respectively. Writing all trajectory points in least-squares form gives:

(s*, R*, t*) = argmin_{s,R,t} Σ_i ‖ y_i − (s·R·x_i + t) ‖²

Solving this least-squares model yields the optimal scale factor of the SLAM point cloud map.
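The least-squares scale has the closed form given by Umeyama's method; below is a sketch using the SVD of the cross-covariance of the two centered point sets (illustrative, not the patent's exact implementation):

```python
import numpy as np

def optimal_scale(traj_est, traj_odo):
    """Closed-form optimal scale aligning the estimated (SLAM) trajectory
    to the wheel-odometry trajectory (Umeyama): center both point sets,
    take the SVD of their cross-covariance, and return tr(D·S)/var(x),
    where S fixes a possible reflection."""
    x = np.asarray(traj_est, dtype=float)
    y = np.asarray(traj_odo, dtype=float)
    x = x - x.mean(axis=0)                       # center the SLAM trajectory
    y = y - y.mean(axis=0)                       # center the odometry trajectory
    cov = y.T @ x / len(x)                       # cross-covariance matrix
    U, S, Vt = np.linalg.svd(cov)
    D = np.eye(x.shape[1])
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        D[-1, -1] = -1.0                         # guard against reflections
    var_x = (x ** 2).sum(axis=1).mean()          # variance of the centered x
    return float(np.trace(np.diag(S) @ D)) / var_x
```

The returned factor is what the sparse point cloud map is multiplied by in the scale-recovery step.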
A low-cost memory parking mapping and positioning method: intrinsic and extrinsic calibration is performed on the forward-looking fisheye camera of the surround-view system, wherein the extrinsic parameters refer to the transformation matrix from the camera coordinate system to the vehicle body coordinate system; the wheel speed odometer is calculated from the rear-axle wheel speed pulse data output by the vehicle; the wheel speed odometer data and the forward-looking fisheye data are time-synchronized; an SLAM sparse point cloud map is constructed from the forward-looking fisheye picture data, and the scale of the sparse point cloud map is recovered using partial keyframe trajectories and wheel speed odometer trajectories; during automatic parking, based on the constructed sparse point cloud map, relocation or constant-velocity model tracking is performed on the feature points of the acquired real-time image data to realize the positioning function;
It is worth noting that in the relocation stage, if the matched position is beyond a certain range of the mapping starting point, the initial relocation is considered to have failed, which reduces the risk of relocation matching errors caused by excessive similarity of the parking lot environment. Because the domain controller's performance is limited and the positioning frequency achieved by the algorithm alone is low, the SLAM positioning pose and the wheel speed odometer must be fused, improving the positioning frequency while ensuring positioning accuracy. The method uses only two conventional sensors, the forward-looking fisheye camera of the surround-view system and the vehicle wheel speed odometer, and achieves a memory distance of more than 1 kilometer, so it has the characteristics of low cost, simple implementation, long parking distance and the like.
Example two:
as another embodiment provided by the present invention, the present invention is a low-cost memory parking map and positioning device, which is used to implement the memory parking map and positioning method provided by the first embodiment, and the device includes:
the parameter calibration module is used for calibrating internal parameters and external parameters of the forward-looking fisheye camera in the looking-around process, and the external parameters are calibrated, namely a transformation matrix from a camera coordinate system to a vehicle body rear axle center coordinate system;
a map construction module, comprising:
The time synchronization submodule finds the two wheel speed odometer data nearest to the image timestamp; when the time difference between the image time and the adjacent wheel speed odometer data is smaller than a fixed threshold, the wheel speed odometer data synchronized with each image frame are obtained by linear interpolation, the wheel speed odometer data being calculated from the rear-axle left and right wheel pulse values output by the wheel speed odometer on the basis of the Ackermann model;
the image frame screening submodule calculates the distance between two adjacent frames by using a wheel speed odometer synchronous with the two adjacent frames of images, and deletes the current image frame when the distance is smaller than a certain fixed threshold value;
The false loop detection submodule performs false loop detection through the wheel speed odometer data synchronized with the images: when the SLAM system detects a loop keyframe, the wheel speed odometer data synchronized with the current keyframe and with the loop keyframe are compared, and if the distance between them is greater than a certain threshold, the loop is considered false;
The scale recovery submodule constructs a Umeyama model from part of the keyframe trajectory data computed by the SLAM algorithm and the wheel speed odometer trajectory synchronized with it, solves for the optimal scale factor between the keyframe image pose trajectory and the wheel speed odometer trajectory, and finally multiplies the sparse point cloud map by the optimal scale factor to complete its scale recovery. The partial keyframe trajectory data refer to the image keyframes within a certain range of the starting position, which reduces the risk of an inaccurate scale factor caused by the odometer's accumulated deviation;
the map partitioning storage sub-module is used for dividing a map into a plurality of data blocks, respectively carrying out binary conversion on the data blocks and recording byte positions of the data blocks in a file in an index file mode, wherein the data blocks are data modules consisting of a plurality of key frames and sparse point clouds thereof;
The map positioning module is used for initial position matching; the initial relocation position is limited to a certain range of the starting point (consistent with the mapping starting position, i.e. the learning starting point of the memory parking process), which prevents the risk of a wrong relocation match caused by excessive similarity of parking lot environments. The initial relocation refers to the moment when the SLAM system has just started in positioning mode and the initial position of the current vehicle in the SLAM map needs to be determined;
As an embodiment provided by the invention, preferably, the memory parking process is divided into two stages, a learning stage and a memory stage. In the learning stage, the user drives the vehicle once along a certain route so that the system memorizes the environment of that route; in the memory stage, the system recognizes, from the memorized environment, where the vehicle is within the learned environment. The vehicle therefore has a starting point and an end point from learning; when memorizing, the vehicle is driven to the vicinity of the learning starting point and automatically identifies its position in the memorized environment from the pictures transmitted by the camera, the position near the starting point being the initial relocation position;
The fusion positioning module is used for correcting the trajectory and publishing the corrected trajectory as the fused trajectory. Specifically, an extended Kalman filter algorithm is used to fuse the SLAM output pose and the wheel speed odometer; at moments far from the update point, the error of the wheel speed integral prediction gradually accumulates. To reduce this accumulated error, the direction of the last two SLAM frames is used as the reference direction, the wheel speed trajectory is projected onto it for correction, and the corrected trajectory is published as the fused trajectory.
Example three:
as another embodiment provided by the present invention, the present invention is an electronic device, which includes a memory and a processor, where the memory stores a computer program that can be executed on the processor, and the processor implements the method provided in the first embodiment when executing the computer program.
Example four:
as a further embodiment provided by the present invention, the present invention is a computer-readable storage medium storing a computer program which, when executed by a processor, implements the method provided by the first embodiment.
In the description herein, references to the description of "one embodiment," "an example," "a specific example" or the like are intended to mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
The preferred embodiments of the invention disclosed above are intended to be illustrative only. The preferred embodiments are not intended to be exhaustive or to limit the invention to the precise embodiments disclosed. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the invention and the practical application, to thereby enable others skilled in the art to best utilize the invention. The invention is limited only by the claims and their full scope and equivalents.

Claims (10)

1. A low-cost memory parking map building and positioning method is characterized by comprising the following steps:
step1: parameter calibration: calibrating parameters of a forward-looking fisheye camera in the looking around;
step2: map construction: performing time synchronization by using the image and a time stamp of wheel speed odometer data, then screening image frames, performing local BA optimization on the pose of a key frame and a map point generated by the pose, and performing false loop detection according to the wheel speed odometer data of the current key frame and a loop key frame; carrying out scale recovery on the sparse point cloud map, and then partitioning and storing the map;
and step 3: and (3) real-time positioning: after the initial position is successfully matched, predicting and tracking the pose of the next frame by combining the poses of the first two frames and the feature points, and optimizing the previously tracked pose by loading the local map of the current frame;
and 4, step 4: fusion and positioning: and fusing the SLAM output pose and the wheel speed odometer by using an extended Kalman filtering algorithm.
2. The low-cost memory parking mapping and positioning method as claimed in claim 1, wherein the time synchronization method comprises:
and finding the wheel speed odometer data of two nearest neighbors by using the image time stamp, and obtaining the wheel speed odometer data synchronized with each frame of image in a linear interpolation mode when the time difference between the image time and the two wheel speed odometer data of the nearest neighbors is smaller than a preset threshold value.
3. The low-cost memory parking mapping and positioning method as claimed in claim 2, wherein the linear interpolation method is:
Let (x_1, y_1, θ_1) and (x_2, y_2, θ_2) be the wheel speed odometer pose data at time t_1 and time t_2 respectively; the pose data (x_t, y_t, θ_t) corresponding to the current image time t are:

x_t = x_1 + (x_2 − x_1)·(t − t_1)/(t_2 − t_1)
y_t = y_1 + (y_2 − y_1)·(t − t_1)/(t_2 − t_1)
θ_t = θ_1 + (θ_2 − θ_1)·(t − t_1)/(t_2 − t_1)

wherein the pose data take the form (abscissa of the vehicle position, ordinate of the vehicle position, vehicle heading angle);

the pose data corresponding to the wheel speed odometer are obtained from the rear-axle left and right wheel pulse values output by the wheel speed odometer, calculated as follows:

x_t = x_{t−1} + ((s_l + s_r)/2)·cos(θ_{t−1} + (s_r − s_l)/(2·wheelbase))
y_t = y_{t−1} + ((s_l + s_r)/2)·sin(θ_{t−1} + (s_r − s_l)/(2·wheelbase))
θ_t = θ_{t−1} + (s_r − s_l)/wheelbase

in the formula:
s_l: the moving distance of the left wheel;
s_r: the moving distance of the right wheel;
wheelbase: the vehicle wheelbase;
x_{t−1}, y_{t−1}, θ_{t−1}: respectively the abscissa, the ordinate and the vehicle heading angle of the vehicle position at the previous moment;
x_t, y_t, θ_t: respectively the predicted values of the abscissa, the ordinate and the vehicle heading angle of the vehicle position at the current moment.
4. A low-cost memory parking mapping and positioning method as claimed in claim 2, wherein the image frame screening step comprises:
and calculating the vehicle travelling distance corresponding to the two adjacent frames of images by using the wheel speed odometer data synchronized with the two adjacent frames of images, and deleting the current image frame when the distance is smaller than a preset threshold value.
5. The low-cost memory parking map and positioning method according to claim 1, wherein in the step2, the scale recovery method of the sparse point cloud map comprises the following steps:
a Umeyama model is constructed by using partial key frame trajectory data calculated by using an SLAM algorithm and a wheel speed odometer trajectory synchronous with the partial key frame trajectory data;
solving to obtain an optimal scale factor between the pose track of part of the key frame images and the track of the wheel speed odometer;
multiplying the sparse point cloud map by the optimal scale factor to complete scale recovery of the sparse point cloud map;
the partial key frame track data refers to corresponding image key frame data within a certain range of the initial position.
6. A low-cost memory parking map building and positioning method according to claim 1, wherein in the step2, the method for storing the map in blocks is as follows:
dividing the map into a plurality of data blocks, respectively carrying out binary conversion on the data blocks and recording byte positions of the data blocks in a file in a mode of indexing the file;
the data block is composed of a plurality of keyframes and a sparse point map cloud.
7. A low-cost memory parking mapping and positioning method according to claim 1, wherein the real-time positioning method in step3 is:
step31: loading a map: loading the constructed sparse point cloud map;
step32: extracting characteristic points: detecting corner point information of a picture data frame coming in real time;
step33: initial position matching: performing position matching on the picture coming from the initial position by utilizing the bag-of-words information, and if the matched position exceeds a certain range of the initial point of the picture construction, considering that the initial position matching fails;
step34: tracking a constant-speed model: when the initial position is matched successfully, predicting the pose of the next frame of image by using the corresponding poses of the previous two frames of images, and tracking the pose of the next frame through the feature points;
step35: pose optimization: and optimizing the pose tracked in Step34 by loading a local map of the current frame.
8. The low-cost memory parking mapping and positioning method according to claim 1, wherein the method for fusing and positioning in step 4 comprises:
step1: firstly, wheel speed data corresponding to a current SLAM frame is obtained through a linear interpolation mode and used as prediction, SLAM positioning information is used as measurement and input into an extended Kalman filter for fusion to obtain a posterior pose;
step2: on the basis of the previous frame of wheel speed pose, predicting the position and the course angle by using wheel speed integration; and at the time far away from the updating point, projecting the wheel speed track to the reference direction by using the SLAM direction of the two latest frames as the reference direction for correction, and issuing the corrected track as a fused track.
9. A low-cost memory map-parking and positioning device, wherein the device is used for implementing the memory map-parking and positioning method according to any one of claims 1-8, and the device comprises:
the parameter calibration module is used for calibrating internal parameters and external parameters of the forward-looking fisheye camera in the panoramic view;
the map building module comprises a time synchronization sub-module, an image frame screening sub-module, a false loop detection sub-module, a scale recovery sub-module and a map block storage sub-module;
a map location module for initial location matching, and the location of the initial relocation is limited within a certain range of the starting point;
and the fusion positioning module is used for correcting the track and releasing the corrected track as a fusion track.
10. An electronic device comprising a memory and a processor, the memory having stored therein a computer program operable on the processor, wherein the processor, when executing the computer program, implements the method of any one of claims 1-8.
CN202211326333.4A 2022-10-27 2022-10-27 Low-cost parking map construction and positioning method and device and electronic equipment Active CN115388880B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211326333.4A CN115388880B (en) 2022-10-27 2022-10-27 Low-cost parking map construction and positioning method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN115388880A true CN115388880A (en) 2022-11-25
CN115388880B CN115388880B (en) 2023-02-03

Family

ID=84129388

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211326333.4A Active CN115388880B (en) 2022-10-27 2022-10-27 Low-cost parking map construction and positioning method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN115388880B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024120269A1 (en) * 2022-12-05 2024-06-13 武汉大学 Position recognition method for fusing point cloud map, motion model and local feature

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110322511A (en) * 2019-06-28 2019-10-11 Huazhong University of Science and Technology Semantic SLAM method and system based on object and plane features
US20200047340A1 (en) * 2018-08-13 2020-02-13 Beijing Jingdong Shangke Information Technology Co., Ltd. System and method for autonomous navigation using visual sparse map
CN111862672A (en) * 2020-06-24 2020-10-30 Beijing Yihang Yuanzhi Technology Co., Ltd. Parking lot vehicle self-positioning and map construction method based on top view
CN112381841A (en) * 2020-11-27 2021-02-19 Zhaoqing Power Supply Bureau of Guangdong Power Grid Co., Ltd. Semantic SLAM method based on GMS feature matching in dynamic scenes
CN112833892A (en) * 2020-12-31 2021-05-25 Hangzhou Puruishi Technology Co., Ltd. Semantic mapping method based on trajectory alignment
CN113865580A (en) * 2021-09-15 2021-12-31 Beijing Yihang Yuanzhi Technology Co., Ltd. Map construction method and device, electronic device and computer-readable storage medium
CN113870379A (en) * 2021-09-15 2021-12-31 Beijing Yihang Yuanzhi Technology Co., Ltd. Map generation method and device, electronic device and computer-readable storage medium
CN114612847A (en) * 2022-03-31 2022-06-10 Changsha University of Science and Technology Method and system for detecting distortion in Deepfake videos
CN114693787A (en) * 2022-03-18 2022-07-01 Dongfeng Motor Group Co., Ltd. Parking garage mapping and positioning method and system, and vehicle
CN114812573A (en) * 2022-04-22 2022-07-29 Chongqing Changan Automobile Co., Ltd. Vehicle positioning method based on monocular visual feature fusion, and readable storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Yang Lixin et al., "Research on visual odometry methods based on multiple cameras", Machine Design & Research *

Also Published As

Publication number Publication date
CN115388880B (en) 2023-02-03

Similar Documents

Publication Publication Date Title
CN109631896B (en) Parking lot autonomous parking positioning method based on vehicle vision and motion information
CN111986506B (en) Mechanical parking space parking method based on multi-vision system
CN107167826B (en) Vehicle longitudinal positioning system and method based on variable grid image feature detection in automatic driving
CN110033489B (en) Method, device and equipment for evaluating vehicle positioning accuracy
CN112734852B (en) Robot mapping method and device and computing equipment
CN109931939B (en) Vehicle positioning method, device, equipment and computer readable storage medium
CN112197770B (en) Robot positioning method and positioning device thereof
CN110044256B (en) Self-parking position estimation device
JP7036400B2 (en) Vehicle position estimation device, vehicle position estimation method, and vehicle position estimation program
CN111830953A (en) Vehicle self-positioning method, device and system
US12008785B2 (en) Detection, 3D reconstruction and tracking of multiple rigid objects moving in relation to one another
CN111274847B (en) Positioning method
CN113903011B (en) Semantic map construction and positioning method suitable for indoor parking lot
JP2014034251A (en) Vehicle traveling control device and method thereof
CN110570453A (en) Visual odometer method based on binocular vision and closed-loop tracking characteristics
US11151729B2 (en) Mobile entity position estimation device and position estimation method
WO2022012316A1 (en) Control method, vehicle, and server
CN110794828A (en) Road sign positioning method fusing semantic information
CN115388880B (en) Low-cost parking map construction and positioning method and device and electronic equipment
CN114550042A (en) Road vanishing point extraction method, vehicle-mounted sensor calibration method and device
CN116359873A (en) Method, device, processor and storage medium for realizing SLAM processing of vehicle-end 4D millimeter wave radar by combining fisheye camera
CN114719840A (en) Vehicle intelligent driving guarantee method and system based on road characteristic fusion
EP3288260B1 (en) Image processing device, imaging device, equipment control system, equipment, image processing method, and carrier means
CN115546303A (en) Method and device for positioning indoor parking lot, vehicle and storage medium
CN114998436A (en) Object labeling method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant