CN111351493B - Positioning method and system - Google Patents

Positioning method and system

Info

Publication number
CN111351493B
CN111351493B (application CN201811583366.0A)
Authority
CN
China
Prior art keywords
vehicle
real-time
point cloud
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811583366.0A
Other languages
Chinese (zh)
Other versions
CN111351493A (en)
Inventor
张祎
刘勇超
王路遥
张磊
蒋如意
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Ofilm Intelligent Vehicle Co ltd
Original Assignee
Shanghai Ofilm Intelligent Vehicle Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Ofilm Intelligent Vehicle Co ltd filed Critical Shanghai Ofilm Intelligent Vehicle Co ltd
Priority to CN201811583366.0A
Publication of CN111351493A
Application granted
Publication of CN111351493B
Legal status: Active

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30: Map- or contour-matching
    • G01C21/32: Structuring or formatting of map data
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/20: Analysis of motion
    • G06T7/277: Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33: Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10028: Range image; Depth image; 3D point clouds
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/10: Internal combustion engine [ICE] based vehicles
    • Y02T10/40: Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The present invention relates to the field of navigation, and in particular to a positioning method and system. The positioning method is applied to real-time positioning of a first vehicle in a preset scene and comprises the following steps: providing a point cloud map matched with the preset scene, the point cloud map carrying a plurality of reference feature points; acquiring a real-time image of the environment in which the first vehicle is currently located within the preset scene; performing feature point matching between the real-time image and the point cloud map to calculate positioning parameters of the first vehicle in the point cloud map; and obtaining real-time positioning information of the first vehicle according to the positioning parameters. By acquiring a real-time image of the vehicle's current environment in the preset scene and matching it against the corresponding point cloud map, the method obtains real-time positioning information of the vehicle; it can be applied to a wide variety of scenes and achieves high positioning accuracy and speed.

Description

Positioning method and system
Technical Field
The present invention relates to the field of navigation, and in particular, to a positioning method and system.
Background
Vehicle positioning is an important part of any vehicle system, yet general-purpose vehicle positioning methods cannot localize accurately, or fail outright, in many environments such as enclosed spaces, urban canyons, and residential communities.
Disclosure of Invention
Based on this, a positioning method and system are provided.
A positioning method is applied to real-time positioning of a first vehicle in a preset scene, and comprises the following steps:
providing a point cloud map matched with the preset scene, wherein the point cloud map is provided with a plurality of reference characteristic points;
acquiring a real-time image of the current environment of the first vehicle in the preset scene;
performing feature point matching on the real-time image and the point cloud map to calculate a positioning parameter of the first vehicle in the point cloud map; and
obtaining real-time positioning information of the first vehicle according to the positioning parameters.
According to this positioning method, the real-time positioning information of the vehicle is obtained by acquiring a real-time image of the first vehicle's current environment in the preset scene and matching it against the corresponding point cloud map. The method can be applied to a wide variety of scenes and offers high positioning accuracy and speed.
In one embodiment, the positioning method further comprises:
providing a semantic map matched with the preset scene, wherein the semantic map is provided with a plurality of reference targets;
performing target matching between the real-time image and the semantic map to correct the positioning parameters. This makes the positioning more accurate, strengthens resistance to interference, and improves safety; the vehicle can still be localized even when the surrounding environment has changed significantly, giving the system high precision and robustness.
In one embodiment, the positioning parameters include at least one of coordinates and a heading (declination) angle.
In one embodiment, the step of providing the point cloud map matching the preset scene comprises:
acquiring, with a second vehicle, first reference images covering the environment of all traffic lanes in the preset scene;
obtaining a motion estimate of the second vehicle;
acquiring the pose of each frame of the first reference images;
extracting and matching feature points across the frames of the first reference images, and generating an unordered point cloud by combining the motion estimate of the second vehicle, the pose of each frame, and the matched feature points in each frame;
triangulating the unordered point cloud to obtain a three-dimensional sparse point cloud; and
optimizing the three-dimensional coordinates of each feature point in the three-dimensional sparse point cloud by bundle adjustment, and generating the point cloud map matched with the preset scene.
In one embodiment, the step of providing the semantic map matching the preset scene includes:
acquiring, with a third vehicle, second reference images covering all traffic lanes in the preset scene;
detecting reference targets in the second reference images by loading a convolutional neural network model;
classifying each frame of the second reference images and labeling it with two-dimensional boxes; and
matching the frames of the second reference images to generate cuboid boxes, thereby generating the semantic map matched with the preset scene.
In one embodiment, the positioning method further comprises:
performing Kalman filtering on the real-time positioning information of the first vehicle to reduce its noise and obtain accurate positioning information;
wherein the step of performing Kalman filtering on the real-time positioning information of the first vehicle comprises:
obtaining a real-time motion estimate of the first vehicle;
taking the real-time positioning information of the first vehicle acquired at a first moment as the initial value, the real-time motion estimate of the first vehicle as the process prediction, and the real-time positioning information acquired at a second moment as the observation update, and calculating the variance of the observation update according to a preset confidence level; and
fusing the observation update and its variance at a preset frequency by means of extended Kalman filtering to obtain and output accurate positioning information.
The real-time positioning information of the first vehicle is acquired at preset time intervals: it is acquired at the first moment, and again at the second moment, which follows the first moment by the preset interval.
In one embodiment, the positioning method further includes:
recording differences between the feature points of the real-time image and those of the point cloud map;
when the differences between the real-time image and the feature points of the point cloud map accumulate to a first preset degree, updating the feature points in the point cloud map;
wherein updating the feature points is achieved by adding or removing the differing feature points in the point cloud map.
In one embodiment, the positioning method further includes:
recording differences between the targets of the real-time image and those of the semantic map;
when the differences between the real-time image and the targets of the semantic map accumulate to a second preset degree, updating the targets in the semantic map;
wherein updating the targets is achieved by adding or removing the differing targets in the semantic map.
By continuously updating the point cloud map and/or the semantic map, positioning accuracy is maintained even as the surrounding environment changes gradually.
A positioning system is applied to real-time positioning of a first vehicle in a preset scene, the system comprising:
a storage device for storing a point cloud map matched with the preset scene;
a vehicle-mounted surround-view device for acquiring a real-time image of the environment in which the first vehicle is currently located within the preset scene; and
a processing device connected to the storage device and to the vehicle-mounted surround-view device, respectively;
the processing device comprises a vehicle positioning module for performing feature point matching between the real-time image and the point cloud map so as to calculate the positioning parameters of the first vehicle in the point cloud map.
In one embodiment, the system further comprises:
a vehicle-mounted odometer connected to the processing device and used for obtaining a real-time motion estimate of the first vehicle;
wherein the processing device further comprises:
a point cloud map generation module for generating the point cloud map and transmitting the generated point cloud map to the storage device;
a semantic map generation module for generating the semantic map and transmitting the generated semantic map to the storage device;
a Kalman filtering module, connected to the vehicle positioning module and to the vehicle-mounted odometer respectively, for performing Kalman filtering on the real-time positioning information of the first vehicle to reduce its noise and obtain accurate positioning information; and
a map updating module for updating the point cloud map and/or the semantic map.
The positioning system uses the vehicle-mounted surround-view device to realize the vehicle positioning function at low cost. By acquiring a real-time image of the first vehicle's current environment in the preset scene and matching it against the point cloud map, the system obtains real-time positioning information of the vehicle; it can be applied to a wide variety of scenes and offers high positioning accuracy and speed.
Drawings
FIG. 1 is a flow diagram of a positioning method in one embodiment;
FIG. 2 is a flow chart of a positioning method in another embodiment;
FIG. 3 is a flow diagram of the steps of providing a point cloud map in one embodiment;
FIG. 4 is a flow diagram of the steps of providing a semantic map in one embodiment;
FIG. 5 is a flow diagram of the steps of Kalman filtering in one embodiment;
FIG. 6 is a flow diagram of the steps to update a point cloud map in one embodiment;
FIG. 7 is a flow diagram of the steps of updating a semantic map in one embodiment;
FIG. 8 is a block diagram of the positioning system in one embodiment;
FIG. 9 is a block diagram of a positioning system in another embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
FIG. 1 is a flowchart of a positioning method in one embodiment. As shown in FIG. 1, the positioning method is applied to real-time positioning of a first vehicle in a preset scene, and comprises:
S210, providing a point cloud map matched with the preset scene, where the preset scene may be, for example, a parking lot or a residential community, and the point cloud map carries a plurality of reference feature points;
S220, acquiring a real-time image of the environment in which the first vehicle is currently located within the preset scene; specifically, the first vehicle is the vehicle that currently requires real-time positioning;
S230, performing feature point matching between the real-time image and the point cloud map to calculate positioning parameters of the first vehicle in the point cloud map; and
S240, obtaining real-time positioning information of the first vehicle according to the positioning parameters.
According to this positioning method, the real-time positioning information of the vehicle is obtained by acquiring a real-time image of the first vehicle's current environment in the preset scene and matching it against the corresponding point cloud map; the method can be applied to a wide variety of scenes and offers high positioning accuracy and speed.
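For illustration only (this sketch is not part of the claimed method), steps S230-S240 could be realized in Python with OpenCV, assuming the point cloud map is stored as an N x 3 array of reference feature point coordinates with matching binary descriptors and that the camera intrinsic matrix K is known; all variable and function names here are hypothetical:

    import cv2
    import numpy as np

    def localize(frame, map_points_3d, map_descriptors, K, dist_coeffs=None):
        """Match features of the live frame (8-bit grayscale) against the map's
        reference feature points, then solve a PnP problem for the vehicle pose."""
        orb = cv2.ORB_create(2000)
        kp, desc = orb.detectAndCompute(frame, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = matcher.match(desc, map_descriptors)
        if len(matches) < 6:
            return None  # too few correspondences to localize
        img_pts = np.float32([kp[m.queryIdx].pt for m in matches])
        obj_pts = np.float32([map_points_3d[m.trainIdx] for m in matches])
        ok, rvec, tvec, inliers = cv2.solvePnPRansac(obj_pts, img_pts, K, dist_coeffs)
        if not ok:
            return None
        R, _ = cv2.Rodrigues(rvec)              # world-to-camera rotation
        position = (-R.T @ tvec).ravel()        # vehicle position in map coordinates
        yaw = np.arctan2(R.T[1, 0], R.T[0, 0])  # heading (declination) angle
        return position, yaw

The returned position and yaw correspond to the positioning parameters of step S230; step S240 then reports them as the vehicle's real-time positioning information.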
FIG. 2 is a flowchart of a positioning method in another embodiment. As shown in FIG. 2, the method may further include:
S250, providing a semantic map matched with the preset scene, the semantic map carrying a plurality of reference targets, which may be trees, vehicles, lane lines, and the like;
S260, performing target matching between the real-time image and the semantic map to correct the positioning parameters. This makes the positioning more accurate, strengthens resistance to interference, and improves safety; the vehicle can still be localized even when the surrounding environment has changed significantly, giving the system high precision and robustness;
S270, performing Kalman filtering on the real-time positioning information of the first vehicle to reduce its noise and obtain accurate positioning information;
S280, updating the point cloud map; and
S290, updating the semantic map.
Specifically, by continuously updating the point cloud map and/or the semantic map, positioning accuracy is maintained even as the surrounding environment changes gradually.
Specifically, in one embodiment, the positioning parameters may be at least one of coordinates and a heading (declination) angle. It should be noted that the positioning parameters may also be any other positioning index that a person skilled in the art can obtain from target matching; they are not limited to coordinates and the heading angle.
Specifically, it should be noted that no particular order is required between step S210 of providing the point cloud map matched with the preset scene and step S250 of providing the semantic map matched with the preset scene: step S210 may be performed before step S250, or step S250 may be performed before step S210. Likewise, no particular order is required between step S280 of updating the point cloud map and step S290 of updating the semantic map.
Specifically, in one embodiment, as shown in FIG. 3, the step S210 of providing the point cloud map matched with the preset scene may include:
S211, acquiring, with a second vehicle, first reference images covering the environment of all traffic lanes in the preset scene;
S212, obtaining a motion estimate of the second vehicle;
S213, acquiring the pose of each frame of the first reference images;
S214, extracting and matching feature points across the frames of the first reference images, and generating an unordered point cloud by combining the motion estimate of the second vehicle, the pose of each frame, and the matched feature points in each frame;
S215, triangulating the unordered point cloud to obtain a three-dimensional sparse point cloud; and
S216, optimizing the three-dimensional coordinates of each feature point in the three-dimensional sparse point cloud by bundle adjustment, and generating the point cloud map matched with the preset scene.
Specifically, the second vehicle may be the first vehicle itself, a dedicated mapping vehicle, or indeed any other vehicle in the broadest sense; the unordered point cloud is generated by combining the motion estimate of the second vehicle, the pose of each frame of the reference images, and the matched feature points in each frame.
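For illustration only, the triangulation of step S215 and the residuals that the bundle adjustment of step S216 would minimize can be sketched as follows, assuming OpenCV and NumPy, frame poses given as 3 x 4 world-to-camera matrices recovered from the motion estimate, and a shared intrinsic matrix K; all names are hypothetical:

    import cv2
    import numpy as np

    def triangulate_pair(kp1, kp2, matches, pose1, pose2, K):
        """Steps S214-S215: triangulate matched feature points from two posed
        frames of the first reference images into an unordered 3-D point set."""
        P1, P2 = K @ pose1, K @ pose2                                # 3x4 projections
        pts1 = np.float32([kp1[m.queryIdx].pt for m in matches]).T  # 2xN pixels
        pts2 = np.float32([kp2[m.trainIdx].pt for m in matches]).T
        pts4d = cv2.triangulatePoints(P1, P2, pts1, pts2)            # 4xN homogeneous
        return (pts4d[:3] / pts4d[3]).T                              # Nx3 sparse cloud

    def reprojection_residuals(points3d, poses, observations, K):
        """Step S216: the residual vector a bundle-adjustment solver (e.g.
        scipy.optimize.least_squares) would minimize over points and poses;
        `observations` maps (frame_idx, point_idx) to the observed 2-D pixel."""
        res = []
        for (f, i), uv in observations.items():
            pc = poses[f][:, :3] @ points3d[i] + poses[f][:, 3]  # to camera frame
            proj = K @ pc
            res.extend(proj[:2] / proj[2] - uv)                  # pixel error
        return np.asarray(res)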
In one embodiment, as shown in FIG. 4, the step S250 of providing the semantic map matched with the preset scene may include:
S251, acquiring, with a third vehicle, second reference images covering all traffic lanes in the preset scene;
S252, detecting reference targets in the second reference images by loading a convolutional neural network model;
S253, classifying each frame of the second reference images and labeling it with two-dimensional boxes; and
S254, matching the frames of the second reference images to generate cuboid boxes, thereby generating the semantic map matched with the preset scene.
Specifically, the third vehicle may be the second vehicle, the first vehicle, a dedicated mapping vehicle, or any other vehicle in the broadest sense; when the third vehicle is the second vehicle, the second reference images may be the first reference images. The convolutional neural network model can be trained in advance, and a cuboid box may carry information such as the category, position, and size of the target.
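For illustration only, the detection of steps S252-S253 could be sketched with an off-the-shelf pre-trained detector standing in for the patent's convolutional neural network model; the class-id mapping and score threshold below are illustrative assumptions:

    import torch
    import torchvision

    # An off-the-shelf detector standing in for the pre-trained CNN of step S252.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()

    TARGET_CLASSES = {3: "car", 10: "traffic light"}  # illustrative COCO ids only

    @torch.no_grad()
    def detect_targets(frame_tensor, score_thresh=0.6):
        """Return (label, score, 2-D box) tuples for one CHW float image in [0, 1],
        i.e. the classification and two-dimensional boxes of step S253."""
        out = model([frame_tensor])[0]
        return [(TARGET_CLASSES[int(l)], float(s), b.tolist())
                for b, l, s in zip(out["boxes"], out["labels"], out["scores"])
                if s >= score_thresh and int(l) in TARGET_CLASSES]

Lifting the per-frame two-dimensional boxes to the cuboid boxes of step S254 would then associate detections across frames and use the frame poses to recover each target's category, position, and size in three dimensions.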
In one embodiment, as shown in FIG. 5, the step S270 of performing Kalman filtering on the real-time positioning information of the first vehicle may include:
S271, obtaining a real-time motion estimate of the first vehicle;
S272, taking the real-time positioning information of the first vehicle acquired at a first moment as the initial value, the real-time motion estimate of the first vehicle as the process prediction, and the real-time positioning information acquired at a second moment as the observation update, and calculating the variance of the observation update according to a preset confidence level; and
S273, fusing the observation update and its variance at a preset frequency by means of extended Kalman filtering to obtain and output accurate positioning information.
The real-time positioning information of the first vehicle is acquired at preset time intervals: it is acquired at the first moment, and again at the second moment, which follows the first moment by the preset interval. Specifically, the confidence level can be computed from the number of features matched between an image key frame and the point cloud map or semantic map, together with the optimized minimum error. The real-time motion estimate of the first vehicle may be obtained with a vehicle-mounted odometer, in which case the preset frequency may be the frequency of the odometer's input signal, or the highest frequency it can reach; the preset frequency may be, for example, 25 Hz.
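For illustration only, the fusion of steps S271-S273 could be sketched as a small filter over the state [x, y, yaw], with the odometry increment as the process prediction and the visual fix as the observation update whose variance shrinks as the matching confidence grows; a full implementation would be an extended Kalman filter that linearizes the motion model, and all noise values here are illustrative:

    import numpy as np

    class PoseFilter:
        """Minimal sketch of steps S271-S273 (yaw wrap-around ignored)."""
        def __init__(self, x0, P0=None, Q=None):
            self.x = np.asarray(x0, float)                 # state [x, y, yaw]
            self.P = np.eye(3) if P0 is None else P0       # state covariance
            self.Q = np.diag([0.05, 0.05, 0.01]) if Q is None else Q

        def predict(self, odom_delta):
            """Process prediction: odometry increment (dx, dy, dyaw) in the map
            frame, applied at the preset frequency (e.g. 25 Hz)."""
            self.x = self.x + np.asarray(odom_delta, float)
            self.P = self.P + self.Q

        def update(self, z, confidence):
            """Observation update: a visual fix z = [x, y, yaw]; its variance is
            derived from the preset confidence (low confidence -> large R)."""
            R = np.eye(3) / max(confidence, 1e-3)
            K = self.P @ np.linalg.inv(self.P + R)         # Kalman gain (H = I)
            self.x = self.x + K @ (np.asarray(z, float) - self.x)
            self.P = (np.eye(3) - K) @ self.P

In use, predict() runs at the odometer rate while update() runs whenever a visual fix from the point cloud and semantic matching arrives, so the filter outputs smoothed, accurate positioning information at the preset frequency.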
In one embodiment, as shown in FIG. 6, the step S280 of updating the point cloud map may include:
S281, recording differences between the feature points of the real-time image and those of the point cloud map; and
S282, when the differences between the real-time image and the feature points of the point cloud map accumulate to a first preset degree, updating the feature points in the point cloud map;
wherein updating the feature points is achieved by adding or removing the differing feature points in the point cloud map.
In one embodiment, as shown in FIG. 7, the step S290 of updating the semantic map may include:
S291, recording differences between the targets of the real-time image and those of the semantic map; and
S292, when the differences between the real-time image and the targets of the semantic map accumulate to a second preset degree, updating the targets in the semantic map;
wherein updating the targets is achieved by adding or removing the differing targets in the semantic map.
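For illustration only, both update procedures (steps S281-S282 and S291-S292) follow the same accumulate-then-update pattern, sketched here with a hypothetical element identifier standing for either a feature point or a semantic target; the threshold of 5 mirrors the worked example later in the description:

    from collections import defaultdict

    class MapUpdater:
        """Count how often a map element (feature point or target) disagrees
        with the live image, and report it for adding or removal once the
        accumulated difference reaches the preset degree."""
        def __init__(self, threshold=5):
            self.threshold = threshold
            self.diff_counts = defaultdict(int)   # element id -> difference count

        def record(self, element_id, seen_in_image, present_in_map):
            if seen_in_image != present_in_map:
                self.diff_counts[element_id] += 1
            else:
                self.diff_counts.pop(element_id, None)  # reset on agreement

        def pending_updates(self):
            """Elements whose differences accumulated to the preset degree."""
            return [e for e, n in self.diff_counts.items() if n >= self.threshold]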
FIG. 8 is a block diagram of a positioning system in one embodiment. As shown in FIG. 8, a positioning system 10 is applied to real-time positioning of a first vehicle in a preset scene. The positioning system 10 includes a storage device 110, a vehicle-mounted surround-view device 120, and a processing device 130, with the processing device 130 connected to the storage device 110 and to the vehicle-mounted surround-view device 120, respectively. The storage device 110 stores a point cloud map matched with the preset scene; the vehicle-mounted surround-view device 120 is configured to acquire a real-time image of the environment in which the first vehicle is currently located within the preset scene; and the processing device 130 may include a vehicle positioning module 131 configured to perform feature point matching between the real-time image and the point cloud map to calculate the positioning parameters of the first vehicle in the point cloud map.
The positioning system uses the vehicle-mounted surround-view device to realize the vehicle positioning function at low cost.
In one embodiment, as shown in FIG. 9, the positioning system 10 may further include a vehicle-mounted odometer 140, connected to the processing device 130 and usable for obtaining a real-time motion estimate of the first vehicle.
In one embodiment, the processing device 130 may further include a point cloud map generation module 132 configured to generate the point cloud map from the first reference images of all traffic lane environments in the preset scene, the extrinsic camera parameters of the vehicle-mounted surround-view device 120, and the real-time motion estimate of the first vehicle; the point cloud map generation module 132 may also transmit the generated point cloud map to the storage device 110. Specifically, the pose of each frame of the first reference images can be calculated from the extrinsic camera parameters of the vehicle-mounted surround-view device 120, and the real-time motion estimate of the first vehicle can be calculated with the vehicle-mounted odometer 140.
In one embodiment, the vehicle positioning module 131 may further be configured to perform target matching between the real-time image and the semantic map to correct the positioning parameters of the first vehicle in the point cloud map. In one embodiment, the storage device 110 may also store a semantic map matched with the preset scene; specifically, the point cloud map and/or semantic map stored in the storage device 110 may be generated by the positioning system 10 itself or obtained directly from elsewhere.
In one embodiment, the processing device 130 may further include a semantic map generation module 133 configured to generate the semantic map from the second reference images of all traffic lane environments in the preset scene and the convolutional neural network model stored in the storage device 110; the semantic map generation module 133 may also transmit the generated semantic map to the storage device 110.
In one embodiment, the processing device 130 may further include a Kalman filtering module 134, connected to the vehicle positioning module 131 and to the vehicle-mounted odometer 140 respectively, and configured to perform Kalman filtering on the real-time positioning information of the first vehicle so as to reduce its noise and obtain accurate positioning information.
In one embodiment, the processing device 130 may further include a map updating module 135 configured to update the point cloud map and/or the semantic map, ensuring that the positioning accuracy of the positioning system 10 does not degrade as the environment of the preset scene changes gradually.
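For illustration only, the block diagram of FIGS. 8-9 could be wired together as follows; every attribute and method name is hypothetical and merely mirrors the patent's module labels:

    from dataclasses import dataclass

    @dataclass
    class PositioningSystem:
        storage: dict        # device 110: holds the point cloud and semantic maps
        camera: object       # device 120: vehicle-mounted surround-view device
        odometer: object     # device 140: real-time motion estimate
        localizer: object    # module 131: feature/target matching
        ekf: object          # module 134: Kalman filtering

        def step(self):
            """One positioning cycle: live image -> point cloud matching ->
            semantic correction -> fusion with the odometry prediction."""
            frame = self.camera.grab()
            fix = self.localizer.match(frame, self.storage["cloud_map"])
            fix = self.localizer.correct(fix, self.storage["semantic_map"])
            self.ekf.predict(self.odometer.delta())
            self.ekf.update(fix.pose, fix.confidence)
            return self.ekf.x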
The following describes the positioning method and system in detail with reference to specific applications:
The preset scene is an underground garage, the point cloud map and the semantic map are generated by the positioning system itself, and the second vehicle and the first vehicle are the same vehicle to be positioned. In the underground garage, a driver first drives the vehicle one full loop through the scene at 0-15 km/h (for example 0 km/h, 5 km/h, 10 km/h, or 15 km/h). The system collects the multiple image channels input by the vehicle-mounted surround-view device and, in combination with the vehicle-mounted odometer, obtains the vehicle's motion estimate. The point cloud map generation module extracts and matches feature points across the images, computes a three-dimensional point cloud, and generates the point cloud map, while the semantic map generation module continuously detects and extracts obstacles, markers, and moving objects from the images to form the semantic map. When the vehicle later re-enters this scene for which the point cloud map and semantic map exist, that is, the underground garage, the vehicle positioning module extracts feature points from the image and matches them against the point cloud map to quickly obtain the initial coordinates of the vehicle body, with semantic matching assisting in correcting the positioning information. Point cloud and semantic matching then continue as the vehicle drives, and the positioning result is output in combination with the vehicle-mounted odometer. During this matching, the map updating module determines the differences between the current scene and the point cloud and semantic maps and records their state; when a difference state reaches a critical value (for example, an element appears or disappears 5 times), the corresponding point cloud and semantic information is created or erased in the map, thereby updating the map.
The technical features of the embodiments described above may be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the embodiments described above are not described, but should be considered as being within the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present invention, and the description thereof is specific and detailed, but not to be understood as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the inventive concept, which falls within the scope of the present invention. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (9)

1. A positioning method, characterized by being applied to real-time positioning of a first vehicle in a preset scene, wherein the preset scene comprises a parking lot and/or a residential community; the method comprises the following steps:
providing a point cloud map matched with the preset scene, wherein the point cloud map carries a plurality of reference feature points, the reference feature points having three-dimensional coordinates;
acquiring a real-time image of the current environment of the first vehicle in the preset scene;
performing feature point matching on the real-time image and the point cloud map to calculate a positioning parameter of the first vehicle in the point cloud map;
providing a semantic map matched with the preset scene, wherein the semantic map carries a plurality of reference targets and cuboid boxes, a cuboid box comprising the category, position, and size of a target; performing target matching between the real-time image and the semantic map to correct the positioning parameters; and
obtaining real-time positioning information of the first vehicle according to the positioning parameters;
the step of providing the semantic map matched with the preset scene comprises:
acquiring, with a third vehicle, second reference images covering all traffic lanes in the preset scene;
detecting reference targets in the second reference images by loading a convolutional neural network model;
classifying each frame of the second reference images and labeling it with two-dimensional boxes; and
matching the frames of the second reference images to generate the cuboid boxes, thereby generating the semantic map matched with the preset scene;
the step of providing the point cloud map matched with the preset scene comprises:
acquiring, with a second vehicle, first reference images covering the environment of all traffic lanes in the preset scene;
obtaining a motion estimate of the second vehicle;
acquiring the pose of each frame of the first reference images;
extracting and matching feature points across the frames of the first reference images, and generating an unordered point cloud by combining the motion estimate of the second vehicle, the pose of each frame, and the matched feature points in each frame; and
generating the point cloud map from the unordered point cloud.
2. The method of claim 1, wherein the reference target comprises at least one of a tree, a vehicle, and a lane line.
3. The method according to claim 1, wherein the positioning parameters include at least one of coordinates and a heading (declination) angle.
4. The method according to claim 1, wherein the step of providing the point cloud map matched with the preset scene further comprises:
triangulating the unordered point cloud to obtain a three-dimensional sparse point cloud; and
optimizing the three-dimensional coordinates of each feature point in the three-dimensional sparse point cloud by bundle adjustment, and generating the point cloud map matched with the preset scene.
5. The positioning method according to any one of claims 1 to 4, further comprising:
performing Kalman filtering on the real-time positioning information of the first vehicle to reduce its noise and obtain accurate positioning information;
wherein the step of performing Kalman filtering on the real-time positioning information of the first vehicle comprises:
obtaining a real-time motion estimate of the first vehicle;
taking the real-time positioning information of the first vehicle acquired at a first moment as the initial value, the real-time motion estimate of the first vehicle as the process prediction, and the real-time positioning information acquired at a second moment as the observation update, and calculating the variance of the observation update according to a preset confidence level; and
fusing the observation update and its variance at a preset frequency by means of extended Kalman filtering to obtain and output accurate positioning information;
acquiring the real-time positioning information of the first vehicle at preset time intervals; and
wherein the second moment follows the first moment by a preset time interval: the real-time positioning information of the first vehicle is acquired at the first moment, and again at the second moment after the preset time has elapsed.
6. The positioning method according to any one of claims 1 to 4, further comprising:
recording differences between the feature points of the real-time image and those of the point cloud map;
when the differences between the real-time image and the feature points of the point cloud map accumulate to a first preset degree, updating the feature points in the point cloud map;
wherein updating the feature points is achieved by adding or removing the differing feature points in the point cloud map.
7. The positioning method according to any one of claims 1 to 4, further comprising:
recording differences between the targets of the real-time image and those of the semantic map;
when the differences between the real-time image and the targets of the semantic map accumulate to a second preset degree, updating the targets in the semantic map;
wherein updating the targets is achieved by adding or removing the differing targets in the semantic map.
8. A positioning system, applied to real-time positioning of a first vehicle in a preset scene, wherein the preset scene comprises a parking lot and/or a residential community; the system comprises:
a storage device for storing a point cloud map matched with the preset scene, wherein the point cloud map carries a plurality of reference feature points, the reference feature points having three-dimensional coordinates;
a vehicle-mounted surround-view device for acquiring a real-time image of the environment in which the first vehicle is currently located within the preset scene; and
a processing device connected to the storage device and to the vehicle-mounted surround-view device, respectively;
wherein the processing device comprises a vehicle positioning module and a point cloud map generation module, the vehicle positioning module being configured to: perform feature point matching between the real-time image and the point cloud map to calculate the positioning parameters of the first vehicle in the point cloud map; provide a semantic map matched with the preset scene, wherein the semantic map carries a plurality of reference targets and cuboid boxes, a cuboid box comprising the category, position, and size of a target; perform target matching between the real-time image and the semantic map to correct the positioning parameters; and obtain real-time positioning information of the first vehicle according to the positioning parameters;
the vehicle positioning module is further configured to:
acquire, with a third vehicle, second reference images covering all traffic lanes in the preset scene;
detect reference targets in the second reference images by loading a convolutional neural network model;
classify each frame of the second reference images and label it with two-dimensional boxes; and
match the frames of the second reference images to generate the cuboid boxes, thereby generating the semantic map matched with the preset scene;
the point cloud map generation module is configured to:
acquire, with a second vehicle, first reference images covering the environment of all traffic lanes in the preset scene;
obtain a motion estimate of the second vehicle;
acquire the pose of each frame of the first reference images;
extract and match feature points across the frames of the first reference images, and generate an unordered point cloud by combining the motion estimate of the second vehicle, the pose of each frame, and the matched feature points in each frame; and
generate the point cloud map from the unordered point cloud, and transmit the generated point cloud map to the storage device.
9. The positioning system of claim 8, further comprising:
a vehicle-mounted odometer connected to the processing device and used for obtaining a real-time motion estimate of the first vehicle;
wherein the processing device further comprises:
a semantic map generation module for generating the semantic map and transmitting the generated semantic map to the storage device;
a Kalman filtering module, connected to the vehicle positioning module and to the vehicle-mounted odometer respectively, for performing Kalman filtering on the real-time positioning information of the first vehicle to reduce its noise and obtain accurate positioning information; and
a map updating module for updating the point cloud map and/or the semantic map.
CN201811583366.0A 2018-12-24 2018-12-24 Positioning method and system Active CN111351493B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201811583366.0A (CN111351493B) | 2018-12-24 | 2018-12-24 | Positioning method and system

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201811583366.0A (CN111351493B) | 2018-12-24 | 2018-12-24 | Positioning method and system

Publications (2)

Publication Number | Publication Date
CN111351493A | 2020-06-30
CN111351493B | 2023-04-18

Family

ID=71196805

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201811583366.0A (Active; granted as CN111351493B) | Positioning method and system | 2018-12-24 | 2018-12-24

Country Status (1)

Country Link
CN (1) CN111351493B (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114063091A (en) * 2020-07-30 2022-02-18 北京四维图新科技股份有限公司 High-precision positioning method and product
CN114323035A (en) * 2020-09-30 2022-04-12 华为技术有限公司 Positioning method, device and system
CN112284402B (en) * 2020-10-15 2021-12-07 广州小鹏自动驾驶科技有限公司 Vehicle positioning method and device
CN112631288B (en) * 2020-12-09 2023-01-06 上海欧菲智能车联科技有限公司 Parking positioning method and device, vehicle and storage medium
CN112762824B (en) * 2020-12-24 2022-04-22 中南大学 Unmanned vehicle positioning method and system
CN112782733B (en) * 2021-01-28 2023-08-01 北京斯年智驾科技有限公司 High-precision positioning method, device, system, electronic device and storage medium
CN114913491A (en) * 2021-02-08 2022-08-16 广州汽车集团股份有限公司 Vehicle positioning method and system and computer readable storage medium
CN113419235A (en) * 2021-05-28 2021-09-21 同济大学 Unmanned aerial vehicle positioning method based on millimeter wave radar
CN113343830B (en) * 2021-06-01 2024-05-24 上海追势科技有限公司 Method for quickly repositioning vehicles in underground parking garage
CN113465620A (en) * 2021-06-02 2021-10-01 上海追势科技有限公司 Parking lot particle filter positioning method based on semantic information
CN113327444A (en) * 2021-06-09 2021-08-31 上海智能新能源汽车科创功能平台有限公司 Control method for cooperatively optimizing vehicle speed based on vehicle road cloud
EP4375856A1 (en) * 2021-08-19 2024-05-29 Zhejiang Geely Holding Group Co., Ltd. Environment matching-based vehicle localization method and apparatus, vehicle, and storage medium
CN114053730B (en) * 2021-10-25 2023-08-18 浙江小族智能科技有限公司 Amusement vehicle and running control method and device thereof, storage medium and terminal
CN114646320B (en) * 2022-02-09 2023-04-28 江苏泽景汽车电子股份有限公司 Path guiding method and device, electronic equipment and readable storage medium
CN114577215B (en) * 2022-03-10 2023-10-27 山东新一代信息产业技术研究院有限公司 Method, equipment and medium for updating characteristic map of mobile robot

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106940186B (en) * 2017-02-16 2019-09-24 华中科技大学 A kind of robot autonomous localization and navigation methods and systems
CN107145578B (en) * 2017-05-08 2020-04-10 深圳地平线机器人科技有限公司 Map construction method, device, equipment and system
CN107144285B (en) * 2017-05-08 2020-06-26 深圳地平线机器人科技有限公司 Pose information determination method and device and movable equipment
CN107340522B (en) * 2017-07-10 2020-04-17 浙江国自机器人技术有限公司 Laser radar positioning method, device and system
CN107742311B (en) * 2017-09-29 2020-02-18 北京易达图灵科技有限公司 Visual positioning method and device
CN107741234B (en) * 2017-10-11 2021-10-19 深圳勇艺达机器人有限公司 Off-line map construction and positioning method based on vision
CN109029463B (en) * 2018-08-20 2021-08-24 重庆师范大学 Indoor multi-balance vehicle autonomous navigation and scheduling system for safe driving of vehicle

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108230337A (en) * 2017-12-31 2018-06-29 厦门大学 A kind of method that semantic SLAM systems based on mobile terminal are realized

Also Published As

Publication Number | Publication Date
CN111351493A | 2020-06-30

Similar Documents

Publication Publication Date Title
CN111351493B (en) Positioning method and system
US20220292711A1 (en) Pose estimation method and device, related equipment and storage medium
US11294387B2 (en) Systems and methods for training a vehicle to autonomously drive a route
WO2019092418A1 (en) Method of computer vision based localisation and navigation and system for performing the same
JP5162849B2 (en) Fixed point position recorder
US20220057806A1 (en) Systems and methods for obstacle detection using a neural network model, depth maps, and segmentation maps
CN110945320B (en) Vehicle positioning method and system
JP2017181870A (en) Information processing device and information processing server
CN115552200A (en) Method and system for generating importance occupancy grid map
KR102595886B1 (en) Multi-modal segmentation network for enhanced semantic labeling in mapping
US11200679B1 (en) System and method for generating a probability distribution of a location of an object
KR20230051035A (en) Object detection using radar and lidar fusion
US20230394694A1 (en) Methods and apparatus for depth estimation using stereo cameras in a vehicle system
CN114842455B (en) Obstacle detection method, device, equipment, medium, chip and vehicle
US20230159033A1 (en) High fidelity data-driven multi-modal simulation
Chipka et al. Estimation and navigation methods with limited information for autonomous urban driving
US11400958B1 (en) Learning to identify safety-critical scenarios for an autonomous vehicle
Elfring et al. Vehicle localization using a traffic sign map
CN115164910A (en) Travel route generation method, travel route generation device, vehicle, storage medium, and chip
CN114426030A (en) Pedestrian passing intention estimation method, device and equipment and automobile
US20240239378A1 (en) Systems and Methods for Handling Traffic Signs
KR102631148B1 (en) Automatically detecting traffic signals using sensor data
US20240127603A1 (en) Unified framework and tooling for lane boundary annotation
US20230024799A1 (en) Method, system and computer program product for the automated locating of a vehicle
JP7294476B2 (en) Camera control system, camera control method, and non-transitory computer readable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant