CN111780771A - Positioning method, positioning device, electronic equipment and computer readable storage medium - Google Patents

Positioning method, positioning device, electronic equipment and computer readable storage medium Download PDF

Info

Publication number
CN111780771A
Authority
CN
China
Prior art keywords
landmark
vector
information
map
semantic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010398086.3A
Other languages
Chinese (zh)
Other versions
CN111780771B (en)
Inventor
He Xiao
Han Wenhua
Zhang Dan
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Uisee Technologies Beijing Co Ltd
Original Assignee
Uisee Technologies Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Uisee Technologies Beijing Co Ltd filed Critical Uisee Technologies Beijing Co Ltd
Priority to CN202010398086.3A priority Critical patent/CN111780771B/en
Publication of CN111780771A publication Critical patent/CN111780771A/en
Application granted granted Critical
Publication of CN111780771B publication Critical patent/CN111780771B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00
    • G01C 21/26 Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00 specially adapted for navigation in a road network
    • G01C 21/28 Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G01C 21/30 Map- or contour-matching

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Traffic Control Systems (AREA)
  • Navigation (AREA)

Abstract

The application discloses a positioning method comprising: acquiring image information; detecting landmark vector information based on the image information; determining a local vector semantic map based on a vector semantic map, where the vector semantic map is a vector map containing landmark semantic features; matching based on the landmark vector information and the local vector semantic map; and determining a positioning result based on the matching result.

Description

Positioning method, positioning device, electronic equipment and computer readable storage medium
Technical Field
The application relates to the field of unmanned driving, and in particular to a positioning method, a positioning apparatus, an electronic device and a storage medium.
Background
Accurate vehicle positioning is a key problem in the field of intelligent driving. The positioning technologies currently applied to intelligent driving mainly comprise GPS-based positioning, positioning based on external facilities such as base stations, and positioning based on vehicle-mounted sensors such as cameras, lidar and inertial measurement units. GPS positioning is the most widely applied, but GPS readily fails in scenes with tall occluding structures and in underground garages; base-station positioning depends on the construction of external facilities, has inherent limitations, and is costly to apply. Positioning based on vehicle-mounted sensors is therefore a main research focus in the field of intelligent driving. Among the vehicle-mounted sensors, lidar offers high precision but suffers from high cost, and inertial measurement units drift easily over large positioning areas; camera-based visual positioning is low in cost while still offering a degree of precision, and thus has broad development prospects.
Existing vision-based positioning techniques mainly use the direct method, the feature-point method and the optical-flow method. These methods offer a degree of robustness while maintaining precision; however, they readily fail when illumination changes greatly or when feature points are difficult to extract.
Disclosure of Invention
The embodiments of the present application provide a positioning method, a positioning apparatus, an electronic device and a computer-readable storage medium, addressing the prior-art problems of high positioning cost and the difficulty of extracting feature points in scenes with large illumination changes.
A first aspect of the embodiments of the present application provides a positioning method, comprising: acquiring image information; detecting landmark vector information based on the image information, where the landmark vector information is vector information representing a landmark; determining a local vector semantic map based on a vector semantic map, where the vector semantic map is a vector map containing landmark semantic features; matching based on the landmark vector information and the local vector semantic map; and determining a positioning result based on the matched landmarks in the local semantic map.
In some embodiments, acquiring the image information includes preprocessing the image information, the preprocessing including grayscale transformation and de-distortion.
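As a rough illustration of the grayscale-transformation step (the function name and the luma weights are illustrative assumptions, not taken from the application; de-distortion additionally needs the camera's intrinsics):

```python
import numpy as np

def preprocess(image_rgb: np.ndarray) -> np.ndarray:
    """Grayscale step of the preprocessing: ITU-R BT.601 luma weighting.

    De-distortion is omitted here because it needs the camera's intrinsic
    matrix and distortion coefficients (in practice e.g. cv2.undistort).
    """
    weights = np.array([0.299, 0.587, 0.114])  # R, G, B luma weights
    return np.rint(image_rgb.astype(np.float64) @ weights).astype(np.uint8)
```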
In some embodiments, detecting landmark vector information based on the image information comprises: acquiring all pixel points of a landmark based on a semantic segmentation algorithm; and fitting the pixel points with a shape fitting algorithm to obtain the landmark vector information.
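The segment-then-fit route can be sketched as follows for a straight-line landmark; the function name and the line parameterization v = a·u + b are illustrative assumptions (a near-vertical landmark such as a pole would need a different parameterization or total least squares):

```python
import numpy as np

def fit_line_landmark(pixels: np.ndarray):
    """Fit a straight-line landmark (e.g. a lane line) to the pixel
    coordinates returned by semantic segmentation, and return the two end
    points of the fitted segment as the landmark's vector representation.

    pixels: (N, 2) array of (u, v) image coordinates of one landmark mask.
    """
    u, v = pixels[:, 0], pixels[:, 1]
    # Least-squares fit v = a*u + b over the pixel group.
    a, b = np.polyfit(u, v, deg=1)
    # Evaluate the fitted line at the extreme u values to get end points.
    u0, u1 = u.min(), u.max()
    return (u0, a * u0 + b), (u1, a * u1 + b)
```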
In some embodiments, detecting landmark vector information based on the image information comprises acquiring the landmark vector information based on a deep learning detection algorithm.
In some embodiments, the construction process of the vector semantic map comprises: establishing a global point cloud map based on perception point cloud data and positioning data; extracting point cloud semantic information based on the global point cloud map to obtain a point cloud semantic map; screening landmark point clouds based on the point cloud semantic map; performing shape fitting on each screened landmark point cloud group and determining a shape feature vector for each landmark; and establishing the vector semantic map based on the landmark shape feature vectors.
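A toy sketch of the screening and shape-fitting steps (the label set, data layout, and the centroid/extent "shape feature" are all illustrative assumptions, not the application's method):

```python
import numpy as np

LANDMARK_LABELS = {"lane_line", "pole", "traffic_sign"}  # illustrative labels

def build_vector_semantic_map(points, labels, ids):
    """Reduce a semantically labeled global point cloud to a small vector
    semantic map: screen out non-landmark points, group the rest per
    landmark instance id, and fit each group to a compact shape feature
    (here simply centroid and axis-aligned extent).
    """
    vector_map = {}
    for lid in set(ids):
        lbls = [l for l, i in zip(labels, ids)
                if i == lid and l in LANDMARK_LABELS]
        group = np.array([p for p, l, i in zip(points, labels, ids)
                          if i == lid and l in LANDMARK_LABELS])
        if group.size == 0:
            continue  # non-landmark clutter is screened out
        vector_map[lid] = {
            "label": lbls[0],
            "centroid": group.mean(axis=0),
            "extent": group.max(axis=0) - group.min(axis=0),
        }
    return vector_map
```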
In some embodiments, extracting point cloud semantic information based on the global point cloud map to obtain the point cloud semantic map comprises acquiring semantic information by any one of the following methods: manual labeling; clustering; or semantic segmentation based on a deep learning algorithm. Each point cloud in the point cloud semantic map carries at least one semantic label.
In some embodiments, the landmarks include road surface landmarks and non-road space landmarks.
In some embodiments, matching based on the landmark vector information and the local vector semantic map comprises: when the landmark vector information corresponds to a road surface landmark, transforming the landmark vector information to the road plane via a top-view transformation for matching; and when the landmark vector information corresponds to a non-road space landmark, transforming the non-road space landmark in the local semantic map to the image plane via a perspective transformation for matching.
In some embodiments, the method further comprises: a landmark match satisfies the following conditions: the landmark vector information and the landmark in the vector semantic map have the same semantic label; the metric distance between the landmark vector information and the landmark in the vector semantic map is smaller than a preset threshold; and that metric distance is the smallest among the candidate landmarks.
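The three matching conditions can be sketched as follows (the dict layout and the use of a centroid Euclidean distance are illustrative assumptions; real landmarks would use shape-specific metric distances):

```python
import numpy as np

def match_landmark(detected, map_landmarks, max_dist=2.0):
    """Associate one detected landmark with a map landmark under the three
    conditions: same semantic label, metric distance below a preset
    threshold, and minimal distance among all candidate landmarks.

    detected: dict with 'label' and 'pos'; map_landmarks: list of such dicts.
    Returns the matched map landmark, or None if no candidate qualifies.
    """
    best, best_d = None, max_dist
    for cand in map_landmarks:
        if cand["label"] != detected["label"]:
            continue  # condition 1: semantic labels must match
        d = float(np.linalg.norm(np.asarray(cand["pos"], float)
                                 - np.asarray(detected["pos"], float)))
        if d < best_d:  # conditions 2 and 3: below threshold, and minimal
            best, best_d = cand, d
    return best
```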
In some embodiments, determining a local vector semantic map based on the vector semantic map comprises: determining an initial positioning estimate for the vehicle based on an external positioning source or the vehicle's motion trend; and determining a preset number of landmarks within a preset range based on the initial positioning estimate.
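A minimal sketch of deriving the initial estimate from the motion trend when no external positioning source is available (a constant-velocity model and the function name are illustrative assumptions):

```python
import numpy as np

def predict_initial_estimate(prev_pose: np.ndarray,
                             prev_velocity: np.ndarray,
                             dt: float) -> np.ndarray:
    """Propagate the previous positioning value along the motion trend
    (constant-velocity model) to obtain the initial estimate used as the
    query point for extracting the local vector semantic map."""
    return prev_pose + prev_velocity * dt
```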
In some embodiments, determining a positioning result based on the matched landmarks in the local semantic map comprises: determining the positioning result based on the function T* = argmin_T Σ e(m_i, M_i), where T* is the positioning result (the optimal pose T), e is the observation error function related to T, m_i is the landmark vector information in the image, and M_i is the matched landmark vector information in the local semantic map.
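For point landmarks with a squared-Euclidean observation error, the argmin has a closed-form solution (Kabsch/Umeyama alignment); the sketch below is illustrative only and assumes a 2D pose T = (x, y, yaw), which the application does not prescribe. Shape-aware error functions would instead need an iterative solver.

```python
import numpy as np

def localize(detections, map_points):
    """Sketch of T* = argmin_T sum_i e(m_i, M_i): find the 2D rigid pose
    (x, y, yaw) that best aligns detected landmark positions m_i (vehicle
    frame) with their matched map landmarks M_i (map frame), minimizing
    the summed squared Euclidean residuals."""
    m = np.asarray(detections, dtype=float)  # (N, 2) detected positions
    M = np.asarray(map_points, dtype=float)  # (N, 2) matched map positions
    mc, Mc = m - m.mean(axis=0), M - M.mean(axis=0)
    # SVD of the cross-covariance yields the optimal rotation (Kabsch).
    U, _, Vt = np.linalg.svd(mc.T @ Mc)
    S = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflections
    R = Vt.T @ S @ U.T
    t = M.mean(axis=0) - R @ m.mean(axis=0)
    return t[0], t[1], float(np.arctan2(R[1, 0], R[0, 0]))
```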
A second aspect of the embodiments of the present application provides a positioning apparatus, comprising: an information acquisition unit for acquiring image information; a landmark acquisition unit for detecting landmark vector information based on the image information, where the landmark vector information is vector information representing a landmark; a map query unit for determining a local vector semantic map based on a vector semantic map, where the vector semantic map is a vector map containing landmark semantic features; a matching unit for matching based on the landmark vector information and the local vector semantic map; and a positioning unit for determining a positioning result based on the matched landmarks in the local semantic map.
In some embodiments, the positioning apparatus further comprises a preprocessing unit for preprocessing the image information, the preprocessing including grayscale transformation and de-distortion.
In some embodiments, the landmark acquisition unit is specifically configured to: acquire all pixel points of a landmark based on a semantic segmentation algorithm; and fit the pixel points with a shape fitting algorithm to obtain the landmark vector information.
In some embodiments, the landmark acquisition unit is specifically configured to acquire the landmark vector information based on a deep learning detection algorithm.
In some embodiments, the construction process of the vector semantic map comprises: establishing a global point cloud map based on perception point cloud data and positioning data; extracting point cloud semantic information based on the global point cloud map to obtain a point cloud semantic map; screening landmark point clouds based on the point cloud semantic map; performing shape fitting on each screened landmark point cloud group and determining a shape feature vector for each landmark; and establishing the vector semantic map based on the landmark shape feature vectors.
In some embodiments, extracting point cloud semantic information based on the global point cloud map to obtain the point cloud semantic map comprises acquiring semantic information by any one of the following methods: manual labeling; clustering; or semantic segmentation based on a deep learning algorithm. Each point cloud in the point cloud semantic map carries at least one semantic label.
In some embodiments, the landmarks include road surface landmarks and non-road space landmarks.
In some embodiments, the matching unit is specifically configured to: when the landmark vector information corresponds to a road surface landmark, transform the landmark vector information to the road plane via a top-view transformation for matching; and when the landmark vector information corresponds to a non-road space landmark, transform the non-road space landmark in the local semantic map to the image plane via a perspective transformation for matching.
In some embodiments, the matching unit performs landmark matching subject to the following conditions: the landmark vector information and the landmark in the vector semantic map have the same semantic label; the metric distance between the landmark vector information and the landmark in the vector semantic map is smaller than a preset threshold; and that metric distance is the smallest among the candidate landmarks.
In some embodiments, the map query unit is specifically configured to: determine an initial positioning estimate for the vehicle based on an external positioning source or the vehicle's motion trend; and determine a preset number of landmarks within a preset range based on the initial positioning estimate.
In some embodiments, the positioning unit is specifically configured to determine the positioning result based on the function T* = argmin_T Σ e(m_i, M_i), where T* is the positioning result (the optimal pose T), e is the observation error function related to T, m_i is the landmark vector information in the image, and M_i is the matched landmark vector information in the local semantic map.
A third aspect of an embodiment of the present application provides an electronic device, including: a memory and one or more processors; wherein the memory is communicatively connected to the one or more processors, and the memory stores instructions executable by the one or more processors, and when the instructions are executed by the one or more processors, the electronic device is configured to implement the positioning method according to the foregoing embodiments.
A fourth aspect of the embodiments of the present application provides a computer-readable storage medium, on which computer-executable instructions are stored, and when the computer-executable instructions are executed by a computing device, the computer-executable instructions can be used to implement the positioning method according to the foregoing embodiments.
The invention provides a localization framework based on landmark-level semantics, in which a deep learning algorithm is used to detect landmarks. Compared with traditional schemes, the framework is more robust and maintains its positioning performance in scenes with large illumination changes. Owing to the sparsity of the landmark representation, the map's storage requirement is small and real-time localization consumes few computational resources. In addition, the framework exploits high-level semantic information left unused by traditional methods, and is thus a good complement to them.
Compared with the prior art, the application has the following beneficial effects:
firstly, a deep learning algorithm is used to detect the landmarks, which is more robust than traditional schemes;
secondly, the positioning performance is maintained in scenes with large illumination changes, while, owing to the sparsity of the representation, the map requires little storage space and real-time positioning consumes few computational resources;
thirdly, high-level semantic information not utilized by traditional methods is exploited, improving the positioning accuracy.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings used in the description of the embodiments will be briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the application, and that it is also possible for a person skilled in the art to apply the application to other similar scenarios without inventive effort on the basis of these drawings. Unless otherwise apparent from the context of language or otherwise indicated, like reference numerals in the figures refer to like structures and operations.
Fig. 1 is an application scenario diagram of an intelligent driving vehicle according to an embodiment of the present application;
FIG. 2 is a functional block diagram of an intelligent driving system 200 provided in the disclosed embodiments of the present application;
FIG. 3 illustrates a functional block diagram of a positioning sub-module 300 provided in accordance with an embodiment of the present application;
FIG. 4 is a flowchart of a method for creating a global landmark semantic map according to an embodiment of the present disclosure;
FIG. 5 is a flowchart illustrating a method for vehicle localization based on a vector semantic map according to an embodiment of the present disclosure;
fig. 6 shows a schematic structural diagram of an electronic device suitable for implementing the embodiments according to the present application.
Detailed Description
In the following detailed description, numerous specific details of the present application are set forth by way of examples in order to provide a thorough understanding of the relevant disclosure. It will be apparent, however, to one skilled in the art that the present application may be practiced without these specific details. It should be understood that the terms "system", "apparatus", "unit" and/or "module" are used herein to distinguish between different components, elements, parts or assemblies at different levels. However, these terms may be replaced by other expressions that achieve the same purpose.
It will be understood that when a device, unit or module is referred to as being "on" … … "," connected to "or" coupled to "another device, unit or module, it can be directly on, connected or coupled to or in communication with the other device, unit or module, or intervening devices, units or modules may be present, unless the context clearly dictates otherwise. For example, as used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the scope of the present application. As used in the specification and claims of this application, the singular forms "a", "an" and/or "the" may include plural forms unless the context clearly indicates otherwise. In general, the terms "comprise" and "include" indicate only that the explicitly identified features, integers, steps, operations, elements and/or components are included, without excluding other features, integers, steps, operations, elements and/or components.
These and other features and characteristics of the present application, as well as the methods of operation and functions of the related elements of structure and the combination of parts and economies of manufacture, will be better understood upon consideration of the following description and the accompanying drawings, which form a part of this specification. It is to be expressly understood, however, that the drawings are for the purpose of illustration and description only and are not intended as a definition of the limits of the application. It will be understood that the figures are not drawn to scale.
Various block diagrams are used in this application to illustrate various variations of embodiments according to the application. It should be understood that the foregoing and following structures are not intended to limit the present application. The protection scope of this application is subject to the claims.
The technical method of the embodiment is mainly used for positioning the vehicle in the automatic driving process.
Fig. 1 is an application scenario diagram of an intelligent driving vehicle according to an embodiment of the present application. As shown in Fig. 1, the application scenario includes a plurality of intelligent driving vehicles 110-1, 110-2, … 110-3 and a cloud server 120. In some embodiments, each intelligent driving vehicle includes a sensor group 111, an intelligent driving system 112, an underlying execution system 113, and other components or modules for vehicle travel, and the plurality of intelligent driving vehicles have the same or similar functional architecture.
In some embodiments, the smart driving vehicle 110 may support both manual driving and smart driving. In some embodiments, when the intelligent driving vehicle 110 is in manual driving mode, the driver may drive the vehicle by operating devices that control the vehicle's travel, such as a brake pedal, a steering wheel, and an accelerator pedal. In some embodiments, when the smart driving vehicle 110 is in the smart driving mode, the smart driving system 112 may sense the surrounding environment and position the smart driving vehicle based on the sensing information of the sensor group 111, perform a planning decision on the driving of the smart driving vehicle according to the sensing information and the positioning result, generate a control instruction based on the planning decision, and issue the control instruction to the bottom layer execution system 113 for controlling the driving of the vehicle.
In some embodiments, the sensor group 111 is used to sense the vehicle's surroundings, position the vehicle, and obtain the vehicle state. The sensor group includes, but is not limited to, a camera, a lidar, a millimeter-wave radar, a GPS (Global Positioning System), an IMU (Inertial Measurement Unit), a wheel speed sensor, a speed sensor, an acceleration sensor, a steering wheel angle sensor, a front wheel angle sensor, and the like.
In some embodiments, the intelligent driving system 112 is used to control the vehicle to travel in the intelligent driving mode. The smart driving system 112 may receive the sensed information from the sensor group and determine the ambient information, pose, and vehicle state of the smart driven vehicle based on the sensed information. The intelligent driving system generates planning decision information of the intelligent driving vehicle according to the driving information, the environment information, the pose and the vehicle state, then generates control information based on the planning decision information, and sends the control information to the bottom layer execution system 113 for controlling the vehicle. In some embodiments, the smart driving system may further wirelessly communicate with a cloud server for information interaction, including but not limited to sensory information, environmental information, pose, vehicle status, cloud instructions, smart driving vehicle planning decision information, map information, and the like.
In some embodiments, the intelligent driving system 112 may be a software system, a hardware system, or a combination of software and hardware. For example, the smart driving system is a software system running on an operating system, and the on-board hardware system is a hardware system supporting the operating system.
In some embodiments, the underlying execution system 113 is used to execute the travel of the vehicle. The underlying execution system 113 includes, but is not limited to, a chassis system, a drive system, a steering system, a braking system, and the like. In the manual driving mode, the underlying execution system 113 receives information from the operating devices and controls the vehicle's travel; in the intelligent driving mode, it receives control instructions from the intelligent driving system and controls the vehicle's travel. The underlying execution system is a mature system in the existing vehicle field and is therefore not described further here.
In some embodiments, the cloud server 120 may be used to uniformly schedule and interact with smart driving vehicles.
Fig. 2 is a functional block diagram of an intelligent driving system 200 according to an embodiment of the present disclosure. In some embodiments, the intelligent driving system 200 has the same or similar structure as the intelligent driving system 112 described in Fig. 1. As shown in Fig. 2, the intelligent driving system 200 includes a perception module 210, a positioning module 220, a planning module 230, a control module 240, and other functional modules or components that may be used to control the intelligent driving of a vehicle.
In some embodiments, the perception module 210 is configured to obtain perception information of the environment surrounding the intelligent driving vehicle. In some embodiments, the perception module 210 obtains the sensing information of the sensor group, and interaction information of roadside devices or the cloud server, to generate the perception information. The perception information includes, but is not limited to, at least one of: obstacle information, road signs/markings, pedestrian/vehicle information, and drivable areas.
In some embodiments, the positioning module 220 is configured to obtain the positioning information of the vehicle. The positioning information comprises the vehicle pose, which includes the vehicle coordinates and the angles between the vehicle heading and the coordinate axes. In some embodiments, the positioning module 220 may perform positioning based on multiple positioning methods or devices. Positioning sources include, but are not limited to, a GPS positioning source, a visual positioning source, a lidar positioning source, and the like. In some embodiments, when multiple positioning sources exist, the positioning module 220 may select the positioning source with the highest confidence, or perform fused positioning. In some embodiments, the positioning module 220 further includes a positioning sub-module 221, which is configured to position the vehicle based on the landmark semantic map and obtain the vehicle's pose.
In some embodiments, the planning module 230 is used to perform path planning and decision making. In some embodiments, the planning module 230 generates planning and decision information based on the perception information generated by the perception module 210 and the positioning information generated by the positioning module 220. In some embodiments, the planning module 230 may generate planning and decision information in conjunction with at least one of V2X data, high-precision maps, and the like. The decision information may include, but is not limited to, at least one of: behavior (e.g., including but not limited to following, overtaking, stopping and avoiding), vehicle heading, vehicle speed, desired vehicle acceleration, desired steering wheel angle, and the like.
In some embodiments, the control module 240 is configured to generate a control instruction of the vehicle bottom layer execution system based on the planning and decision information, and issue the control instruction, so that the vehicle bottom layer execution system controls the vehicle to travel according to the desired path. The control instructions may include, but are not limited to: steering wheel steering, lateral control commands, longitudinal control commands, and the like.
In some embodiments, the intelligent driving system further comprises a map module (not shown in the figures). The map module is configured to provide the maps used by the intelligent driving system while controlling the vehicle's travel, including a high-precision map, a global visual map, a global landmark semantic map, a landmark vector semantic map, and the like. The global landmark semantic map is a global map established from landmark semantics and can be used to position vehicles based on landmarks. For the map-building process of the global landmark semantic map and the landmark vector semantic map, refer to Fig. 4.
Fig. 3 shows a functional block diagram of the positioning sub-module 300. In some embodiments, the localization submodule 300 is configured to localize the vehicle based on the landmark vector semantic map. The positioning sub-module 300 has the same or similar functions as the positioning sub-module 221 shown in fig. 2. As shown in fig. 3, the positioning sub-module 300 includes an information obtaining unit 310, a landmark obtaining unit 320, a map querying unit 330, a matching unit 340, a positioning unit 350, and other functional unit modules or components that can be used for positioning a vehicle.
In some embodiments, the information acquisition unit 310 is configured to acquire sensing information, where the sensing information comprises image information or point cloud data. In some embodiments, the image information is a real-time image and may be a single-frame image or multi-frame images captured synchronously by multiple cameras. The point cloud data may be laser point cloud data captured by a lidar. In some embodiments, the information acquisition unit 310 preprocesses the sensing information after acquiring it. The preprocessing includes, but is not limited to, image grayscale transformation, de-distortion, and the like. The preprocessing may be performed by a preprocessing unit additionally included in the positioning sub-module, or by a preprocessing sub-unit additionally included in the information acquisition unit.
In some embodiments, the landmark acquisition unit 320 is configured to acquire landmark vector information. In some embodiments, the landmark acquisition unit 320 may use, but is not limited to, a deep-learning-based detection algorithm, a semantic segmentation algorithm, or any other algorithm capable of acquiring landmark information. In some embodiments, the landmark acquisition unit 320 may be a landmark detector. In some embodiments, a deep-learning-based detection algorithm can output landmark vector information directly, whereas a semantic segmentation algorithm generally works at the pixel level and outputs the coordinates of all pixel points belonging to a given landmark; the pixel groups are then fitted by a shape fitting algorithm (for example, straight-line fitting or polygon fitting) to obtain the landmark vector information. In some embodiments, the landmark vector information is a vector representation of a landmark's shape, such as the end-point coordinates of a utility pole, or the corner-point or center coordinates and radius of a traffic sign panel. In some embodiments, the output of the landmark acquisition unit 320 includes a label indicating whether the detected landmark vector information corresponds to a road surface landmark or a non-road space landmark.
In some embodiments, the map query unit 330 is configured to determine a local semantic map. In some embodiments, the map query unit 330 may be a map querier that, given a query point, a query range and a query count, outputs the local landmarks within that range for subsequent matching. In some embodiments, to make map queries efficient, the vector semantic map is structured in advance; this can be done at map-building time or when the positioning program initializes. The core idea is to cluster the landmark coordinates in the map under a Euclidean distance metric, using algorithms such as, but not limited to, KD-Tree and K-means. Once the map is structured, the map query unit 330 can quickly query a specified number of landmark IDs within a preset range around specified coordinates, which serve as the local semantic map to be matched. The specified coordinates may be an initial estimate of the vehicle's current position, determined from another positioning source such as GPS, visual SLAM or lidar SLAM; when no external positioning source is available, the specified coordinates may be an initial estimate extrapolated from the previous positioning value and the motion trend.
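The structure-then-query idea can be sketched with SciPy's KD-tree (the class and method names are illustrative; the application does not prescribe an implementation):

```python
import numpy as np
from scipy.spatial import cKDTree

class MapQuerier:
    """Structure the vector semantic map's landmark coordinates in a KD-tree
    (done once, at map build or localizer init), then answer 'up to k
    landmark IDs within radius r of the query point' for local-map
    extraction."""

    def __init__(self, landmark_ids, landmark_xy):
        self.ids = list(landmark_ids)
        self.tree = cKDTree(np.asarray(landmark_xy, dtype=float))

    def query_local_map(self, query_xy, k=5, radius=50.0):
        d, i = self.tree.query(query_xy, k=k, distance_upper_bound=radius)
        # query() pads missing neighbours with inf distance (index == n).
        return [self.ids[j] for dj, j in zip(np.atleast_1d(d),
                                             np.atleast_1d(i))
                if np.isfinite(dj)]
```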
In some embodiments, the matching unit 340 is configured to match the landmark vector information against the landmarks of the local semantic map to obtain matching landmarks. In some embodiments, the matching unit adopts different matching methods for different landmark types: landmarks include road-surface landmarks, such as lane lines and sidewalks, and non-road-surface spatial landmarks, such as utility poles and traffic signs. In some embodiments, for a non-road-surface spatial landmark (i.e., when the landmark vector information corresponds to a non-road-surface landmark), the matching unit projects the map landmark onto the image plane by perspective transformation and matches it against the landmark vector information; for a road-surface landmark (i.e., when the landmark vector information corresponds to a road-surface landmark), perspective transformation would introduce large distortion and cause mismatches, so the matching unit instead projects the landmark vector information onto the road plane by a top-view transformation and matches it against the road-surface landmarks in the local semantic map. In some embodiments, the matching unit matches the landmark vector information against the landmarks of the local semantic map based on Equation 1:
d(m, M) = f(m, p(M)),  when m corresponds to a non-road-surface landmark
d(m, M) = f(h(m), M),  when m corresponds to a road-surface landmark        (1)
where d is the metric distance, m is the detected landmark, M is the map landmark, p denotes the perspective transformation, h denotes the top-view transformation, and f denotes the distance metric function. In some embodiments, since landmarks have different shape features (points, lines, circles, polygons, etc.), the distance metric function f is defined differently for each shape: for a point it may be the Euclidean distance; for a line, the average Euclidean distance from the two endpoints of one line to the other line; for a polygon, the average Euclidean distance between corresponding corner points. In some embodiments, a detected landmark is a landmark extracted from the sensing data by the landmark obtaining unit 320.
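The per-shape distance metric functions just described might look like the following sketch. The conventions (a line given by two points, a polygon by an ordered corner list) are assumptions for illustration.

```python
import numpy as np

def point_distance(p, q):
    """Euclidean distance between two point landmarks."""
    return np.linalg.norm(np.asarray(p, float) - np.asarray(q, float))

def point_to_line(p, a, b):
    """Perpendicular distance from point p to the infinite line through a and b."""
    a, b, p = (np.asarray(v, float) for v in (a, b, p))
    d = b - a
    t = np.dot(p - a, d) / np.dot(d, d)
    return np.linalg.norm(p - (a + t * d))

def line_distance(seg, line):
    """Average distance of one segment's two end points to the other line."""
    return 0.5 * (point_to_line(seg[0], *line) + point_to_line(seg[1], *line))

def polygon_distance(corners_a, corners_b):
    """Average Euclidean distance between corresponding corner points."""
    a = np.asarray(corners_a, float)
    b = np.asarray(corners_b, float)
    return float(np.mean(np.linalg.norm(a - b, axis=1)))
```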
In some embodiments, a detected landmark and a map landmark are considered a matching pair when, based on the metric distance, the following conditions are met:
1) the detected landmark and the map landmark have the same semantic label;
2) their metric distance is smaller than a preset threshold;
3) their metric distance is the smallest among all candidate landmarks that are not yet matched to each other.
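The three conditions above can be sketched as a greedy mutually-exclusive matching routine. The names and the greedy strategy are illustrative; the patent does not specify the exact algorithm.

```python
def match_landmarks(detections, map_landmarks, distance, threshold):
    """Greedy mutual matching under the three conditions above.

    detections / map_landmarks: lists of (label, shape) pairs.
    distance: callable giving the metric distance between two shapes.
    Returns index pairs (i, j) of accepted matches.
    """
    # Collect all label-compatible candidate pairs within the threshold.
    pairs = []
    for i, (det_label, det_shape) in enumerate(detections):
        for j, (map_label, map_shape) in enumerate(map_landmarks):
            if det_label == map_label:          # condition 1: same semantic label
                d = distance(det_shape, map_shape)
                if d < threshold:               # condition 2: below the threshold
                    pairs.append((d, i, j))
    # Accept pairs in ascending distance; each landmark may match at most
    # once, which enforces condition 3 (smallest among unmatched candidates).
    pairs.sort()
    used_i, used_j, matches = set(), set(), []
    for d, i, j in pairs:
        if i not in used_i and j not in used_j:
            used_i.add(i)
            used_j.add(j)
            matches.append((i, j))
    return matches
```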
In some embodiments, the positioning unit 350 is configured to determine the vehicle pose by optimizing the positioning result based on the matching landmarks. In some embodiments, the positioning unit 350 may determine the optimal solution for the vehicle pose by minimizing an observation error function using a nonlinear optimization method. In some embodiments, the nonlinear optimization method may be least squares. When the observation error function is minimized by least squares, the solution can be determined according to Equation 2:
T* = arg min_T Σ_i e(m_i, M_i)        (2)
where T* is the optimal solution for the vehicle positioning result, and e is the observation error function with respect to T.
In some embodiments, different observation error functions are employed for landmarks of different shapes. The observation error function can be expressed by Equation 3:
e(m, M) = ||p − P||,                        for point landmarks
e(m, M) = d(p, v),                          for line landmarks, where d(p, v) is the distance from p to the line with parameters v
e(m, M) = (1/n) Σ_{i=1}^{n} ||p_i − P_i||,  for polygon landmarks        (3)
where p is an endpoint or corner point of the detected landmark, P is the corresponding endpoint or corner point of the map landmark, v is the line parameter, and n is the number of landmark corner points.
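A minimal sketch of the least-squares pose optimization of Equation 2, restricted for brevity to point landmarks and a planar pose (x, y, yaw); `scipy.optimize.least_squares` stands in for the unspecified nonlinear optimizer, and the pose parameterization is an assumption.

```python
import numpy as np
from scipy.optimize import least_squares

def optimize_pose(detected_pts, map_pts, pose0=(0.0, 0.0, 0.0)):
    """Solve T* = argmin sum_i e(m_i, M_i) for a planar pose (x, y, theta).

    detected_pts: (N, 2) landmark end/corner points in the vehicle frame.
    map_pts: (N, 2) matched map points in the map frame.
    A point-landmark error (Euclidean residual) is used for every
    correspondence; line/polygon errors would replace `residuals` per shape.
    """
    detected_pts = np.asarray(detected_pts, float)
    map_pts = np.asarray(map_pts, float)

    def residuals(pose):
        x, y, th = pose
        R = np.array([[np.cos(th), -np.sin(th)],
                      [np.sin(th),  np.cos(th)]])
        # Transform detections into the map frame with the candidate pose.
        world = detected_pts @ R.T + np.array([x, y])
        return (world - map_pts).ravel()

    return least_squares(residuals, pose0).x
```

Given matched correspondences, the returned vector is the optimal pose T* in the sense of Equation 2.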
In some embodiments, the nonlinear optimization algorithm may be combined with a random sample consensus algorithm to optimize the vehicle positioning result; for example, the random sample consensus algorithm may be RANSAC. In some embodiments, when the input consists of multiple image frames, the matching results of all frames can be combined into a single observation error function to be minimized, so that the determined optimal solution is the vehicle pose.
FIG. 4 is a flowchart of a method for building a global landmark semantic map. In some embodiments, the global landmark semantic map may be built by a dedicated mapping vehicle or by an intelligent driving vehicle. The execution subject of the method is assumed to be a mapping device, which may be located in either the intelligent driving vehicle or the mapping vehicle. In some embodiments, when the mapping device is located in the intelligent driving vehicle, it may update and optimize the existing global landmark semantic map using information collected while the vehicle is driving. As shown in FIG. 4, the global landmark semantic map is built as follows:
in step 402, the mapping device obtains mapping region sensing data and positioning information. The sensing data refers to laser point cloud data or image data acquired by sensing equipment installed on a vehicle, such as a laser radar, a vision sensor and the like. The positioning information refers to positioning information acquired by a GPS, a visual positioning source, a laser radar positioning source or other positioning sources.
In step 404, the mapping device constructs a global point cloud map based on the sensing data and the positioning information. In some embodiments, the mapping device builds the global point cloud map by associating each single frame of sensing data with its positioning information and stitching the frames into a global coordinate system. In some embodiments, a single frame of sensing data includes information such as the spatial coordinates of the points collected by the sensing device.
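The frame-stitching in step 404 can be sketched as follows for a planar pose. The function names and the (x, y, yaw) pose convention are illustrative assumptions; a real system would use full 6-DoF poses.

```python
import numpy as np

def frame_to_global(points, pose):
    """Transform one frame of sensed points into the global frame using
    the positioning information (x, y, yaw) associated with that frame."""
    x, y, th = pose
    R = np.array([[np.cos(th), -np.sin(th)],
                  [np.sin(th),  np.cos(th)]])
    return np.asarray(points, float) @ R.T + np.array([x, y])

def build_global_map(frames):
    """Stack (points, pose) pairs into one global point cloud."""
    return np.vstack([frame_to_global(pts, pose) for pts, pose in frames])
```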
In step 406, the mapping device extracts point cloud semantic information from the global point cloud map. The mapping device may obtain the point cloud semantic information by manual labeling, or by semantic segmentation or clustering; the semantic segmentation may be based on a deep learning algorithm. In some embodiments, the mapping device attaches a semantic tag to each point in the global point cloud map based on the extracted semantic information, i.e., every point in the global point cloud map carries at least one semantic tag. The semantic tags may be lane line, utility pole, building, vehicle, pedestrian, other types, and the like.
In step 408, the mapping device screens out landmark point clouds based on the point cloud semantic information and constructs a vector semantic map. The landmark point clouds include, but are not limited to, lane lines, sidewalks, utility poles, and traffic signs. In some embodiments, after screening out the landmark point clouds, the mapping device determines a shape feature vector for each individual landmark point cloud cluster by methods including, but not limited to, shape fitting and line fitting. For example, when a landmark point cloud cluster is a lane line, its feature vector can be described by straight-line parameters; when it is a utility pole, its feature vector can be represented by the three-dimensional coordinates of its two endpoints; when it is a traffic sign, its feature vector can be represented by the three-dimensional coordinates of its four corners (for a rectangular sign) or by its center coordinates and radius (for a circular sign). In some embodiments, the mapping device constructs the vector semantic map from the shape feature vectors of the screened landmark point clouds. Because the vector semantic map converts point cloud information into vector information, the storage size of the map is greatly reduced.
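As one example of the shape fitting in step 408, a circular traffic sign's center and radius can be recovered from its plane-projected point cluster with an algebraic circle fit. This particular fit (Kasa's method) is an illustrative choice, not mandated by the text.

```python
import numpy as np

def fit_circle(points):
    """Algebraic (Kasa) circle fit for a circular-sign point cluster
    projected into its plane.

    Solves x^2 + y^2 + a*x + b*y + c = 0 in least squares; the circle
    parameters follow from a = -2*cx, b = -2*cy, c = cx^2 + cy^2 - r^2.
    """
    pts = np.asarray(points, float)
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x ** 2 + y ** 2)
    a, b, c = np.linalg.lstsq(A, rhs, rcond=None)[0]
    center = (-a / 2.0, -b / 2.0)
    radius = np.sqrt(center[0] ** 2 + center[1] ** 2 - c)
    return center, radius
```

The resulting (center, radius) pair is the feature vector the text describes for circular sign panels.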
In some embodiments, the mapping device may structure the vector semantic map for efficient querying, where the structured vector semantic map is obtained by clustering landmark coordinates under a Euclidean distance metric, using algorithms such as KD-Tree or K-means. In some embodiments, structuring the vector map may be performed by the mapping device after the vector semantic map is constructed, or when the vehicle is positioned based on the vector semantic map, for example during initial positioning.
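The Euclidean clustering used for structuring can be sketched with a minimal Lloyd's k-means. The deterministic initialization is chosen here only for reproducibility; a real system would use a library implementation such as a KD-Tree or K-means package.

```python
import numpy as np

def kmeans_buckets(coords, k, iters=20):
    """Minimal Lloyd's k-means: bucket landmark coordinates so that a
    query only has to scan the nearest bucket(s), not the whole map.

    Deterministic: initializes centroids from the first k points.
    Returns (centroids, labels).
    """
    X = np.asarray(coords, float)
    centroids = X[:k].copy()
    for _ in range(iters):
        # Assign each landmark to its nearest centroid (Euclidean metric).
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute each centroid as the mean of its bucket.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids, labels
```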
Fig. 5 shows a flowchart of a method for vehicle positioning based on a vector semantic map. The method is executed by a vehicle-mounted device. In some embodiments, the execution subject of the method may be the positioning module or positioning submodule shown in Fig. 2. For ease of explanation, the embodiments of the present disclosure are described below with the vehicle-mounted device as the execution subject, without limiting the scope of the disclosure.
In step 502, the vehicle-mounted device acquires sensing information, which includes image information or point cloud data. In some embodiments, the vehicle-mounted device preprocesses the sensing information after acquiring it. The preprocessing includes, but is not limited to, grayscale transformation, de-distortion, and the like.
In step 504, the vehicle-mounted device obtains landmark vector information from the sensing information. In some embodiments, the vehicle-mounted device may use any algorithm capable of obtaining landmark information, including but not limited to deep learning-based detection algorithms and semantic segmentation algorithms. The landmark vector information refers to landmarks expressed as vectors, such as the endpoint coordinates of a utility pole, or the corner coordinates, or the center coordinates and radius, of a traffic sign panel. The landmark vector information takes different vector representations for different landmark types.
In step 506, the vehicle-mounted device obtains a local semantic map based on initial positioning information. In some embodiments, the initial positioning information is an estimate of the vehicle's current position, which may be determined from other positioning sources such as GPS, visual SLAM, or lidar SLAM; when no external positioning source is available, it may be estimated for the current time from the previous positioning value and motion trend. In some embodiments, the vehicle-mounted device may query a specified number of landmark IDs within a specified range as the local semantic map to be matched, based on the initial positioning information. In some embodiments, the vehicle-mounted device may structure the vector semantic map when obtaining the initial positioning information, since a structured vector semantic map allows the local semantic map to be determined more efficiently. In some embodiments, structuring the vector semantic map means clustering the landmark coordinates in the map under a Euclidean distance metric. In some embodiments, the structuring step may instead be performed at mapping time.
In step 508, the vehicle-mounted device matches the landmark vector information against the local semantic map to obtain matching landmarks. In some embodiments, the vehicle-mounted device adopts different matching methods for different landmark types; for example, landmarks include road-surface landmarks and non-road-surface spatial landmarks. In some embodiments, when the landmark vector information corresponds to a road-surface landmark, the vehicle-mounted device projects the landmark vector information onto the road plane to match it against the map landmarks and determine matching landmarks; when the landmark vector information corresponds to a non-road-surface landmark, the vehicle-mounted device projects the non-road-surface landmarks in the local semantic map onto the image plane to match them against the landmark vector information. In some embodiments, the vehicle-mounted device may determine matching landmarks based on the metric distance between the landmark vector information and the map landmarks in the local semantic map. In some embodiments, a map landmark is a matching landmark when it and the landmark vector information satisfy the following conditions:
1) the detected landmark and the map landmark have the same semantic label;
2) their metric distance is smaller than a preset threshold;
3) their metric distance is the smallest among all candidate landmarks that are not yet matched to each other.
In some embodiments, the vehicle-mounted device may determine the metric distance between the landmark vector information and a map landmark in the local semantic map based on a distance metric function. The metric distance may be determined according to Equation 4 below:
d(m, M) = f(m, p(M)),  when m corresponds to a non-road-surface landmark
d(m, M) = f(h(m), M),  when m corresponds to a road-surface landmark        (4)
where d is the metric distance, m is the detected landmark, M is the map landmark, p denotes the perspective transformation, h denotes the top-view transformation, and f denotes the distance metric function.
In some embodiments, the distance metric function is defined differently for landmarks of different shapes. For example, the distance metric function for a point may be the Euclidean distance; for a line, the average Euclidean distance from the two endpoints of one line to the other line; for a polygon, the average Euclidean distance between corresponding corner points.
In step 510, the vehicle-mounted device determines the vehicle position based on the matching landmarks. In some embodiments, the vehicle-mounted device may determine the optimal solution for the vehicle pose by minimizing an observation error function, which can be determined according to Equation 5 below:
T* = arg min_T Σ_i e(m_i, M_i)        (5)
where T* is the optimal solution for the vehicle positioning result, and e is the observation error function with respect to T.
In some embodiments, the vehicle-mounted device may employ different observation error functions for landmarks of different shapes; for example, separate observation error functions are defined for point-like, line-like, and polygonal landmarks. More specifically, the observation error function may be determined according to the following formula:
e(m, M) = ||p − P||,                        for point landmarks
e(m, M) = d(p, v),                          for line landmarks, where d(p, v) is the distance from p to the line with parameters v
e(m, M) = (1/n) Σ_{i=1}^{n} ||p_i − P_i||,  for polygon landmarks
where p is an endpoint or corner point of the detected landmark, P is the corresponding endpoint or corner point of the map landmark, v is the line parameter, and n is the number of landmark corner points.
In some embodiments, the vehicle-mounted device can minimize the observation error function by a nonlinear optimization method to determine the optimal solution for the vehicle pose. In some embodiments, the vehicle-mounted device can optimize the vehicle positioning result by combining nonlinear optimization with a random sample consensus algorithm.
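Combining the nonlinear optimizer with random sample consensus might look like this generic RANSAC wrapper. The minimal-sample size and the pluggable solve/error callables are illustrative assumptions; in practice the least-squares pose solver and the observation error function above would be plugged in.

```python
import numpy as np

def ransac_pose(correspondences, solve_pose, error, threshold, iters=100, seed=0):
    """RANSAC wrapper for pose estimation: repeatedly fit a pose from a
    minimal random subset of matches, keep the pose with the most inliers,
    then refit on all inliers.  `solve_pose` and `error` are supplied by
    the caller (e.g. a least-squares optimizer and the observation error).
    """
    rng = np.random.default_rng(seed)
    n = len(correspondences)
    best_inliers = []
    for _ in range(iters):
        picks = rng.choice(n, size=min(3, n), replace=False)
        pose = solve_pose([correspondences[i] for i in picks])
        inliers = [c for c in correspondences if error(pose, c) < threshold]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    # Final refit on the best inlier set rejects the outlier matches.
    return solve_pose(best_inliers), best_inliers
```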
In some embodiments, when the vehicle-mounted device receives multiple image frames, the matching results of all frames can be combined into a single observation error function to be minimized, so that the determined optimal solution is the vehicle pose.
Embodiments of the present disclosure also provide a non-transitory computer-readable storage medium storing a program or instructions that cause a computer to execute the steps of the embodiments of the method for constructing a landmark vector semantic map or for positioning a vehicle based on such a map; details are not repeated here to avoid redundancy.
Fig. 6 is a schematic structural diagram suitable for implementing an electronic device according to an embodiment of the present application.
As shown in fig. 6, the electronic apparatus 600 includes a central processing unit (CPU) 601, which can execute various processes in the foregoing embodiments according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage section 608 into a random access memory (RAM) 603. The RAM 603 also stores various programs and data necessary for the operation of the electronic apparatus 600. The CPU 601, ROM 602, and RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
The following components are connected to the I/O interface 605: an input section 606 including a keyboard, a mouse, and the like; an output section 607 including a display such as a cathode ray tube (CRT) or liquid crystal display (LCD) and a speaker; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card or a modem. The communication section 609 performs communication processing via a network such as the Internet. A drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, optical disk, magneto-optical disk, or semiconductor memory, is mounted on the drive 610 as necessary, so that a computer program read therefrom is installed into the storage section 608 as needed.
In particular, according to embodiments of the present application, the methods described above may be implemented as computer software programs. For example, embodiments of the present application include a computer program product comprising a computer program tangibly embodied on a machine-readable medium, the computer program comprising program code for performing the aforementioned positioning method. In such embodiments, the computer program may be downloaded and installed from a network through the communication section 609, and/or installed from the removable medium 611.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment, or a portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units or modules described in the embodiments of the present application may be implemented by software or hardware. The units or modules described may also be provided in a processor, and the names of the units or modules do not in some cases constitute a limitation of the units or modules themselves.
As another aspect, the present application also provides a computer-readable storage medium, which may be the computer-readable storage medium included in the apparatus in the above-described embodiment; or it may be a separate computer readable storage medium not incorporated into the device. The computer readable storage medium stores one or more programs for use by one or more processors in performing the methods described herein.
In summary, the present application provides a positioning method, apparatus, electronic device, and computer-readable storage medium. Embodiments of the present application form a positioning framework based on landmark-level semantics, in which landmark detection uses deep learning algorithms; compared with traditional schemes, the framework is more robust and maintains positioning performance in scenes with large illumination changes.
It should be understood that the above-described embodiments are merely illustrative of the principles of the present application and are not to be construed as limiting it. Any modification, equivalent replacement, or improvement made without departing from the spirit and scope of the present application shall fall within its protection scope, and the appended claims are intended to cover all such changes and modifications that fall within their scope and range of equivalents.

Claims (10)

1. A method of positioning, the method comprising:
acquiring image information;
detecting road sign vector information based on the image information, wherein the road sign vector information is vector information representing a road sign;
determining a local vector semantic map based on a vector semantic map, wherein the vector semantic map is a vector map comprising landmark semantic features;
matching based on the landmark vector information and the local vector semantic map;
and determining a positioning result based on the matched landmarks in the local vector semantic map.
2. The method of claim 1, wherein the obtaining image information comprises preprocessing the image information, the preprocessing comprising grey-scale image transformation and distortion removal.
3. The positioning method according to claim 1, wherein the detecting landmark vector information based on the image information comprises:
acquiring all pixel points of the road sign based on a semantic segmentation algorithm;
and fitting the pixel points based on a shape fitting algorithm to obtain the landmark vector information.
4. The positioning method according to claim 1, wherein the detecting landmark vector information based on the image information comprises:
and acquiring the landmark vector information based on a deep learning detection algorithm.
5. The positioning method according to claim 1, wherein the construction process of the vector semantic map comprises:
establishing a global point cloud map based on the perception point cloud data and the positioning data;
extracting point cloud semantic information based on the global point cloud map to obtain a point cloud semantic map;
screening road sign point clouds based on the point cloud semantic map;
performing shape fitting on each point cloud group of the screened road signs, and determining a shape characteristic vector of each road sign;
and establishing a vector semantic map based on the shape feature vectors of the road signs.
6. The method according to claim 5, wherein the extracting point cloud semantic information based on the global point cloud map and obtaining the point cloud semantic map comprises:
semantic information is acquired by any one of the following methods:
manual marking; clustering; semantic segmentation based on a deep learning algorithm;
and each point cloud in the point cloud semantic map has at least one semantic label.
7. The method of claim 5, wherein the landmarks comprise road-surface landmarks and non-road-surface spatial landmarks.
8. The method of claim 7, wherein the matching based on the landmark vector information and a local vector semantic map comprises:
when the landmark vector information corresponds to a road-surface landmark, projecting the landmark vector information onto a road plane by a top-view transformation for matching;
and when the landmark vector information corresponds to a non-road-surface spatial landmark, projecting the non-road-surface landmarks in the local semantic map onto an image plane by perspective transformation for matching.
9. The method of claim 1, wherein the matched landmarks satisfy the following conditions:
the landmark vector information and the landmarks in the vector semantic map have the same semantic tags;
the measurement distance between the landmark vector information and the landmarks in the vector semantic map is smaller than a preset threshold value;
the metric distance between the landmark vector information and the landmarks in the vector semantic map is the smallest among the candidate landmarks.
10. The method according to claim 1, wherein the determining a local vector semantic map based on a vector semantic map comprises:
determining an initial positioning estimate of the vehicle based on an external positioning source or a motion trend;
and determining a preset number of pieces of landmark vector information within a preset range based on the initial positioning estimate.
CN202010398086.3A 2020-05-12 2020-05-12 Positioning method, positioning device, electronic equipment and computer readable storage medium Active CN111780771B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010398086.3A CN111780771B (en) 2020-05-12 2020-05-12 Positioning method, positioning device, electronic equipment and computer readable storage medium


Publications (2)

Publication Number Publication Date
CN111780771A true CN111780771A (en) 2020-10-16
CN111780771B CN111780771B (en) 2022-09-23

Family

ID=72753548

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010398086.3A Active CN111780771B (en) 2020-05-12 2020-05-12 Positioning method, positioning device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN111780771B (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112767477A (en) * 2020-12-31 2021-05-07 北京纵目安驰智能科技有限公司 Positioning method, positioning device, storage medium and electronic equipment
CN112836698A (en) * 2020-12-31 2021-05-25 北京纵目安驰智能科技有限公司 Positioning method, positioning device, storage medium and electronic equipment
CN113034584A (en) * 2021-04-16 2021-06-25 广东工业大学 Mobile robot visual positioning method based on object semantic road sign
CN113191323A (en) * 2021-05-24 2021-07-30 上海商汤临港智能科技有限公司 Semantic element processing method and device, electronic equipment and storage medium
CN114526720A (en) * 2020-11-02 2022-05-24 北京四维图新科技股份有限公司 Positioning processing method, device, equipment and storage medium
CN114550172A (en) * 2020-11-24 2022-05-27 株式会社理光 Electronic equipment positioning method and device and computer readable storage medium
WO2022188154A1 (en) * 2021-03-12 2022-09-15 深圳市大疆创新科技有限公司 Front view to top view semantic segmentation projection calibration parameter determination method and adaptive conversion method, image processing device, mobile platform, and storage medium
CN115638788A (en) * 2022-12-23 2023-01-24 安徽蔚来智驾科技有限公司 Semantic vector map construction method, computer equipment and storage medium
EP4151951A1 (en) * 2021-09-16 2023-03-22 Beijing Xiaomi Mobile Software Co., Ltd. Vehicle localization method and device, electronic device and storage medium
CN116105603A (en) * 2023-04-13 2023-05-12 安徽蔚来智驾科技有限公司 Method and system for determining the position of a moving object in a venue
CN116106853A (en) * 2023-04-12 2023-05-12 陕西欧卡电子智能科技有限公司 Method for identifying dynamic and static states of water surface scene target based on millimeter wave radar

Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008287379A (en) * 2007-05-16 2008-11-27 Hitachi Ltd Road sign data input system
US20110224902A1 (en) * 2010-03-09 2011-09-15 Oi Kenichiro Information processing device, map update method, program, and information processing system
US20120203455A1 (en) * 2010-05-05 2012-08-09 Thales Method of definition of a navigation system
CN104330090A (en) * 2014-10-23 2015-02-04 北京化工大学 Robot distributed type representation intelligent semantic map establishment method
CN106525057A (en) * 2016-10-26 2017-03-22 陈曦 Generation system for high-precision road map
CN107144285A (en) * 2017-05-08 2017-09-08 深圳地平线机器人科技有限公司 Posture information determines method, device and movable equipment
CN107339992A (en) * 2017-08-24 2017-11-10 武汉大学 A kind of method of the semantic mark of the indoor positioning and terrestrial reference of Behavior-based control
CN107957266A (en) * 2017-11-16 2018-04-24 北京小米移动软件有限公司 Localization method, device and storage medium
GB201803801D0 (en) * 2017-03-14 2018-04-25 Ford Global Tech Llc Vehicle localization using cameras
US20180188059A1 (en) * 2016-12-30 2018-07-05 DeepMap Inc. Lane Line Creation for High Definition Maps for Autonomous Vehicles
EP3462377A2 (en) * 2017-09-28 2019-04-03 Samsung Electronics Co., Ltd. Method and apparatus for identifying driving lane
CN109583329A (en) * 2018-11-13 2019-04-05 杭州电子科技大学 Winding detection method based on the screening of road semanteme road sign
CN110097064A (en) * 2019-05-14 2019-08-06 驭势科技(北京)有限公司 One kind building drawing method and device
CN110398255A (en) * 2019-07-05 2019-11-01 上海博泰悦臻网络技术服务有限公司 Localization method, device and vehicle
CN110455306A (en) * 2018-05-07 2019-11-15 南京图易科技有限责任公司 A kind of robot scene identification and semantic navigation map label method based on deep learning
CN110794828A (en) * 2019-10-08 2020-02-14 福瑞泰克智能***有限公司 Road sign positioning method fusing semantic information
CN110866079A (en) * 2019-11-11 2020-03-06 桂林理工大学 Intelligent scenic spot real scene semantic map generating and auxiliary positioning method

Patent Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008287379A (en) * 2007-05-16 2008-11-27 Hitachi Ltd Road sign data input system
US20110224902A1 (en) * 2010-03-09 2011-09-15 Oi Kenichiro Information processing device, map update method, program, and information processing system
US20120203455A1 (en) * 2010-05-05 2012-08-09 Thales Method of definition of a navigation system
CN104330090A (en) * 2014-10-23 2015-02-04 北京化工大学 Robot distributed type representation intelligent semantic map establishment method
CN106525057A (en) * 2016-10-26 2017-03-22 陈曦 Generation system for high-precision road map
US20180188059A1 (en) * 2016-12-30 2018-07-05 DeepMap Inc. Lane Line Creation for High Definition Maps for Autonomous Vehicles
GB201803801D0 (en) * 2017-03-14 2018-04-25 Ford Global Tech Llc Vehicle localization using cameras
US20180268566A1 (en) * 2017-03-14 2018-09-20 Ford Global Technologies, Llc Vehicle Localization Using Cameras
US20190385336A1 (en) * 2017-03-14 2019-12-19 Ford Global Technologies, Llc Vehicle Localization Using Cameras
CN107144285A (en) * 2017-05-08 2017-09-08 深圳地平线机器人科技有限公司 Pose information determination method and device, and movable equipment
CN107339992A (en) * 2017-08-24 2017-11-10 武汉大学 Behavior-based indoor positioning and landmark semantic annotation method
EP3462377A2 (en) * 2017-09-28 2019-04-03 Samsung Electronics Co., Ltd. Method and apparatus for identifying driving lane
CN107957266A (en) * 2017-11-16 2018-04-24 北京小米移动软件有限公司 Localization method, device and storage medium
CN110455306A (en) * 2018-05-07 2019-11-15 南京图易科技有限责任公司 Deep-learning-based robot scene recognition and semantic navigation map labeling method
CN109583329A (en) * 2018-11-13 2019-04-05 杭州电子科技大学 Loop closure detection method based on road semantic landmark screening
CN110097064A (en) * 2019-05-14 2019-08-06 驭势科技(北京)有限公司 Mapping method and device
CN110398255A (en) * 2019-07-05 2019-11-01 上海博泰悦臻网络技术服务有限公司 Localization method, device and vehicle
CN110794828A (en) * 2019-10-08 2020-02-14 福瑞泰克智能***有限公司 Road sign positioning method fusing semantic information
CN110866079A (en) * 2019-11-11 2020-03-06 桂林理工大学 Real-scene semantic map generation and auxiliary positioning method for smart scenic areas

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114526720B (en) * 2020-11-02 2024-04-16 北京四维图新科技股份有限公司 Positioning processing method, device, equipment and storage medium
CN114526720A (en) * 2020-11-02 2022-05-24 北京四维图新科技股份有限公司 Positioning processing method, device, equipment and storage medium
CN114550172A (en) * 2020-11-24 2022-05-27 株式会社理光 Electronic equipment positioning method and device and computer readable storage medium
CN112836698A (en) * 2020-12-31 2021-05-25 北京纵目安驰智能科技有限公司 Positioning method, positioning device, storage medium and electronic equipment
CN112767477A (en) * 2020-12-31 2021-05-07 北京纵目安驰智能科技有限公司 Positioning method, positioning device, storage medium and electronic equipment
WO2022188154A1 (en) * 2021-03-12 2022-09-15 深圳市大疆创新科技有限公司 Method for determining projection calibration parameters for front-view-to-top-view semantic segmentation, adaptive conversion method, image processing device, mobile platform, and storage medium
CN113034584A (en) * 2021-04-16 2021-06-25 广东工业大学 Mobile robot visual positioning method based on object semantic road sign
CN113034584B (en) * 2021-04-16 2022-08-30 广东工业大学 Mobile robot visual positioning method based on object semantic road sign
CN113191323A (en) * 2021-05-24 2021-07-30 上海商汤临港智能科技有限公司 Semantic element processing method and device, electronic equipment and storage medium
EP4151951A1 (en) * 2021-09-16 2023-03-22 Beijing Xiaomi Mobile Software Co., Ltd. Vehicle localization method and device, electronic device and storage medium
CN115638788A (en) * 2022-12-23 2023-01-24 安徽蔚来智驾科技有限公司 Semantic vector map construction method, computer equipment and storage medium
CN116106853A (en) * 2023-04-12 2023-05-12 陕西欧卡电子智能科技有限公司 Method for identifying dynamic and static states of water surface scene target based on millimeter wave radar
CN116106853B (en) * 2023-04-12 2023-09-01 陕西欧卡电子智能科技有限公司 Method for identifying dynamic and static states of water surface scene target based on millimeter wave radar
CN116105603A (en) * 2023-04-13 2023-05-12 安徽蔚来智驾科技有限公司 Method and system for determining the position of a moving object in a venue
CN116105603B (en) * 2023-04-13 2023-09-19 安徽蔚来智驾科技有限公司 Method and system for determining the position of a moving object in a venue

Also Published As

Publication number Publication date
CN111780771B (en) 2022-09-23

Similar Documents

Publication Publication Date Title
CN111780771B (en) Positioning method, positioning device, electronic equipment and computer readable storage medium
US20220092816A1 (en) Vehicle Localization Using Cameras
CN110019570B (en) Map construction method and device and terminal equipment
WO2020098316A1 (en) Visual point cloud-based semantic vector map building method, device, and electronic apparatus
CN111220993B (en) Target scene positioning method and device, computer equipment and storage medium
CN111874006B (en) Route planning processing method and device
US20190034740A1 (en) Method, apparatus, and system for vanishing point/horizon estimation using lane models
US20180031384A1 (en) Augmented road line detection and display system
EP3644013B1 (en) Method, apparatus, and system for location correction based on feature point correspondence
CN113657224A (en) Method, device and equipment for determining object state in vehicle-road cooperation
US11055862B2 (en) Method, apparatus, and system for generating feature correspondence between image views
CN110969055A (en) Method, apparatus, device and computer-readable storage medium for vehicle localization
WO2021190167A1 (en) Pose determination method and apparatus, and medium and device
Jeong et al. Hdmi-loc: Exploiting high definition map image for precise localization via bitwise particle filter
CN112115857A (en) Lane line identification method and device for intelligent automobile, electronic equipment and medium
CN117576652B (en) Road object identification method and device, storage medium and electronic equipment
JP2021103160A (en) Semantic map construction system and method for autonomous vehicles
Yoneda et al. Mono-camera based vehicle localization using lidar intensity map for automated driving
CN112735163B (en) Method for determining static state of target object, road side equipment and cloud control platform
CN115345944A (en) Method and device for determining external parameter calibration parameters, computer equipment and storage medium
KR20220151572A (en) Method and System for change detection and automatic updating of road marking in HD map through IPM image and HD map fitting
CN111860084B (en) Image feature matching and positioning method and device and positioning system
CN112530270B (en) Mapping method and device based on region allocation
Cattaruzza Design and simulation of autonomous driving algorithms
EP4235616A1 (en) Obstacle information acquisition system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant