WO2020083103A1 - Vehicle positioning method based on deep neural network image recognition - Google Patents

Vehicle positioning method based on deep neural network image recognition

Info

Publication number
WO2020083103A1
WO2020083103A1 (PCT/CN2019/111840)
Authority
WO
WIPO (PCT)
Prior art keywords
neural network
deep neural
road sign
vehicle
coordinate system
Prior art date
Application number
PCT/CN2019/111840
Other languages
English (en)
Chinese (zh)
Inventor
冯江华
胡云卿
袁浩
林军
刘悦
游俊
熊群芳
丁驰
岳伟
Original Assignee
中车株洲电力机车研究所有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中车株洲电力机车研究所有限公司 filed Critical 中车株洲电力机车研究所有限公司
Priority to SG11202103814PA priority Critical patent/SG11202103814PA/en
Publication of WO2020083103A1 publication Critical patent/WO2020083103A1/fr


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/582 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of traffic signs
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30 Map- or contour-matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/24 Aligning, centring, orientation detection or correction of the image
    • G06V10/245 Aligning, centring, orientation detection or correction of the image by locating a pattern; Special marks for positioning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/09 Recognition of logos

Definitions

  • The invention relates to the technical field of image recognition and positioning, and in particular to a vehicle positioning method based on deep neural network image recognition, and to a training method for the deep neural network.
  • Existing vehicle positioning technology mainly uses GPS and high-precision map matching.
  • GPS has the following problems in use: in ordinary GPS mode the positioning error reaches the meter level, which cannot meet the accuracy requirements of the vehicle; GPS RTK mode requires both satellite information and ground reference positioning information, so reference positioning communication equipment must be installed along the road, at high equipment and operating cost; and when the vehicle enters a road section with poor satellite reception, such as a dense forest or a tunnel, the GPS signal is easily lost, and the positioning information is lost with it.
  • For map matching, map data must be established and stored on the vehicle in advance.
  • Point cloud data or image data of the vehicle's current environment is obtained through an external lidar or camera device and matched against the pre-stored map data.
  • The cost of map making, and the software and hardware cost of the matching calculation, are relatively high.
  • A low-cost, high-precision vehicle positioning method is therefore needed to provide reliable data support for vehicle positioning, inbound route planning, and speed control.
  • The invention provides a vehicle positioning method based on deep neural network image recognition, and a training method for the deep neural network.
  • The invention increases the training accuracy of the deep neural network by increasing the number of training samples and optimizing the network parameters, thereby improving the positioning accuracy of the vehicle, while the required equipment and operating costs are low.
  • The first aspect of the present invention provides a deep neural network training method for road sign recognition, including the following steps:
  • Road sign graphic setting step: set a road sign graphic on the road surface in the station inbound direction, the distance between the marking point of the road sign graphic and the station edge in the inbound direction being L;
  • Shooting device setting step: install the shooting device on the vehicle so that the optical axis of its lens coincides with the longitudinal centerline of the vehicle body, the distance between the lens optical center and the ground being H;
  • Image sample collection step: photograph the road sign graphic with the shooting device under different lighting and weather conditions to obtain image samples;
  • Training sample production step: calculate the position coordinates, in the image coordinate system, of the marking point of the road sign graphic in each image sample, make a label set, and pair each image sample with its corresponding label set to form training samples;
  • Deep neural network construction step: on the basis of a target recognition/classification deep neural network, modify the final classification output layer into an output layer composed of 2 nodes that outputs the position coordinates of the marking point of the road sign graphic;
  • Deep neural network training step: input the training samples to the deep neural network for training.
  • The shooting time is selected as noon on a sunny day and night on a sunny day.
  • The shooting time is selected as noon on a rainy day and night on a rainy day.
  • The shooting time is selected as noon on a foggy day and night on a foggy day.
  • The shooting device photographs an image sample of the road sign graphic every 5° within the angle range of 5° to 180° between its optical axis and the road surface.
  • The lens parameters of the shooting device are selected so that, when the entire road sign graphic appears in the frame, the graphic occupies more than 20% of the frame area.
  • The shooting device is installed at the front roof position of the vehicle and points in the forward direction of the vehicle.
  • The road sign graphic adopts a triangle, a rectangle, an arc, or another easily recognizable combination of geometric elements.
  • The road sign graphic may also be a bar code or a two-dimensional code.
  • The marking point of the road sign graphic is its geometric center.
  • The deep neural network adopts the ResNet50 network, replacing the network's final classification output layer with two fully connected layers of 1024 nodes each, the fully connected layers being followed by an output layer of 2 nodes.
  • Alternatively, the deep neural network adopts the ResNet50 network, replacing the final classification output layer with two fully connected layers of 2048 nodes each, the fully connected layers being followed by an output layer of 2 nodes.
  • The floating-point data output by the two nodes lie in the closed interval [0, 1]; the pixel coordinates are obtained by multiplying these outputs by the corresponding image width and height.
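Concretely, the scaling from the 2-node output to pixel coordinates can be sketched as follows (the function name and the defensive clamping are additions of this sketch; only the multiply-by-width/height rule comes from the text):

```python
def to_pixel_coords(norm_xy, img_w, img_h):
    """Map the network's two [0, 1] outputs to pixel coordinates.

    norm_xy: (u_norm, v_norm) floating-point outputs of the 2-node layer.
    Returns integer pixel coordinates (u, v) in the image frame.
    """
    u_norm, v_norm = norm_xy
    # Clamp defensively in case the network output drifts slightly outside [0, 1].
    u_norm = min(max(u_norm, 0.0), 1.0)
    v_norm = min(max(v_norm, 0.0), 1.0)
    return round(u_norm * img_w), round(v_norm * img_h)

# Example: a 1920x1080 frame and a network output of (0.5, 0.25)
print(to_pixel_coords((0.5, 0.25), 1920, 1080))  # (960, 270)
```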
  • A second aspect of the present invention provides a vehicle positioning method using the above deep neural network training method, including the following steps:
  • Road sign graphic recognition step: use the trained deep neural network to recognize the road sign graphic photographed during the vehicle's actual station entry and obtain the position coordinates (u, v) of its marking point P in the image coordinate system;
  • Road sign graphic positioning step: calculate the coordinates of the marking point P in the world coordinate system through the transformation between the image coordinate system and the world coordinate system, thereby obtaining the distance between the marking point P and the shooting device;
  • Vehicle positioning step: from that distance, determine the distance between the shooting device and the station edge in the inbound direction and then, combined with the installation position of the shooting device on the vehicle, determine the distance between the vehicle and the station edge in the inbound direction.
  • The marking point P of the road sign graphic is on the optical axis of the lens of the shooting device;
  • The origin of the camera coordinate system is set at the imaging aperture of the shooting device, and the horizontal distance between the lens optical center of the shooting device and the marking point P of the road sign graphic is Z_C;
  • The positive Z axis of the camera coordinate system points in the forward direction of the vehicle, the positive Y axis in the downward direction of the vehicle, and the positive X axis in the rightward direction of the vehicle;
  • The world coordinate system coincides with the camera coordinate system;
  • The origin of the image coordinate system is on the Z axis of the camera coordinate system, and the X and Y axes of the image coordinate system are parallel to the X and Y axes of the camera coordinate system, respectively;
  • The image sample collection process is carried out over multiple periods under different lighting and weather conditions, which reduces the influence of environmental factors on the training results and improves the environmental adaptability of the deep neural network.
  • The above method can provide distance data between the vehicle and the station, providing data support for vehicle positioning, inbound route planning, and speed control, with the advantages of simple operation, low cost, and high reliability.
  • FIG. 1 is a flowchart of the deep neural network training method for road sign recognition;
  • FIG. 2 is a schematic diagram of shooting in the image sample collection step;
  • FIG. 3 is a flowchart of the vehicle positioning method based on the trained deep neural network;
  • FIG. 4 is a side view of the vehicle during station entry;
  • FIG. 5 is a plan view of the vehicle during station entry;
  • FIG. 6 is a schematic diagram of calculating the distance between the vehicle and the station edge in the inbound direction.
  • FIG. 1 is a flowchart of the deep neural network training method for road sign recognition provided by the present invention, which includes a road sign graphic setting step 101, a shooting device setting step 102, an image sample collection step 103, a training sample production step 104, a deep neural network construction step 105, and a deep neural network training step 106.
  • Road sign graphic setting step 101: set a road sign graphic on the road surface in the station inbound direction, the distance between the marking point of the road sign graphic and the station edge in the inbound direction being L.
  • The road sign graphic may be, but is not limited to, a triangle, a rectangle, an arc, another easily recognizable combination of geometric elements, a text graphic, or a bar code or two-dimensional code incorporating station-related information.
  • The marking point of the road sign graphic may be its geometric center, a vertex, or another geometric feature point.
  • Shooting device setting step 102: install a shooting device on the vehicle.
  • The lens of the shooting device points in the forward direction of the vehicle, and the device is installed on the roof at the front of the vehicle or at another position from which the road sign graphic can be photographed.
  • The optical axis of the lens coincides with the longitudinal symmetry centerline of the vehicle body, and the distance H between the lens optical center of the shooting device and the ground is recorded.
  • The lens parameters of the shooting device are selected such that, when the entire road sign graphic appears in the frame, it occupies more than 20% of the frame area; the larger this area, the more precise the positioning of the marking point of the road sign graphic.
  • Image sample collection step 103: under different lighting or weather conditions, such as sunny noon and sunny night, rainy noon and rainy night, foggy noon and foggy night, photograph the road sign graphic using the above-mentioned shooting device.
  • The shooting angles are shown in FIG. 2, where the letter A represents the road sign graphic. The angle between the optical axis of the shooting device and the road surface is varied both in the direction of the vehicle's travel and in the direction perpendicular to it, so that the shooting device captures an image sample of the road sign graphic every 5° within the angle range of 5° to 180° between its optical axis and the road surface.
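The sampling schedule described above can be enumerated programmatically; a sketch follows (the tuple layout is illustrative — only the 5°–180° range, the 5° step, the six lighting/weather conditions, and the two scan directions come from the text):

```python
conditions = ["sunny noon", "sunny night", "rainy noon",
              "rainy night", "foggy noon", "foggy night"]
directions = ["along vehicle travel", "perpendicular to travel"]
angles = range(5, 181, 5)  # optical-axis-to-road angle: 5°..180° in 5° steps -> 36 poses

# One (condition, direction, angle) entry per image sample to capture.
capture_plan = [(c, d, a) for c in conditions for d in directions for a in angles]
print(len(capture_plan))  # 432 image samples per road sign graphic
```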
  • Training sample production step 104: calculate the position coordinates, in the image coordinate system, of the marking point of the road sign graphic in each image sample, make a label set, and pair each image sample with its corresponding label set to form a training sample that can then be input to the deep neural network for training.
  • Deep neural network construction step 105: use a target recognition/classification deep neural network, but modify its final classification output layer into an output layer composed of two nodes; the values output by these two nodes are the coordinates of the marking point of the road sign graphic in the image frame. More specifically, the ResNet50 network can be used: its final classification output layer is removed and, according to the required recognition effect, replaced with two fully connected layers of 1024 or 2048 nodes each, followed by an output layer of 2 nodes. The floating-point data output by these two nodes lie in the closed interval [0, 1], and the pixel coordinates are obtained by multiplying the outputs by the corresponding image width and height.
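The modified head can be sketched framework-agnostically; the NumPy stand-in below only illustrates the layer shapes (2048-dimensional ResNet50 pooled feature, two 1024-node fully connected layers, 2-node output). The random weights and the sigmoid squashing are assumptions of this sketch — the text fixes only the layer widths and the [0, 1] output range:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# ResNet50's global-average-pooled feature is 2048-dimensional; the final
# classification layer is replaced by two 1024-node FC layers plus a 2-node output.
W1 = rng.standard_normal((2048, 1024)) * 0.01
W2 = rng.standard_normal((1024, 1024)) * 0.01
W3 = rng.standard_normal((1024, 2)) * 0.01

def regression_head(feature_2048):
    h = relu(feature_2048 @ W1)
    h = relu(h @ W2)
    return sigmoid(h @ W3)  # two floats in (0, 1): normalized (u, v)

out = regression_head(rng.standard_normal(2048))
print(out.shape)  # (2,)
```

In a deep learning framework, the same change amounts to swapping ResNet50's final classification layer for this small regression head and training end to end.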
  • Deep neural network training step 106: input the aforementioned training samples to the deep neural network for training. After training is complete, the deep neural network can be used to recognize the road sign graphic and obtain the position coordinates of its geometric center.
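The patent does not name a training loss; for a 2-node coordinate regression of this kind, mean squared error against the normalized label coordinates is the conventional choice, sketched here under that assumption:

```python
import numpy as np

def coord_mse(pred_norm, label_px, img_w, img_h):
    """MSE between predicted normalized coordinates and a pixel-space label.

    Labels from the training-sample production step are pixel coordinates;
    they are normalized by image width/height to match the [0, 1] outputs.
    """
    label_norm = np.array([label_px[0] / img_w, label_px[1] / img_h])
    diff = np.asarray(pred_norm) - label_norm
    return float(np.mean(diff ** 2))

print(coord_mse((0.5, 0.5), (960, 540), 1920, 1080))  # 0.0 for a perfect prediction
```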
  • FIG. 3 is a flowchart of the vehicle positioning method, provided by the present invention, that uses the above deep neural network training method; it includes a road sign graphic recognition step 201, a road sign graphic positioning step 202, and a vehicle positioning step 203.
  • Road sign graphic recognition step 201: using the trained deep neural network, recognize the road sign graphic photographed during the vehicle's actual station entry and obtain the position coordinates (u, v) of the marking point P in the image coordinate system.
  • Road sign graphic positioning step 202: calculate the coordinates (X_w, Y_w, Z_w) of the marking point P in the world coordinate system through the transformation between the image coordinate system and the world coordinate system, thereby obtaining the distance between the marking point P of the road sign graphic and the shooting device.
  • The transformation between the image coordinate system and the world coordinate system can be described using the pinhole imaging model.
  • Z_C represents the horizontal distance between the marking point P of the road sign graphic and the optical center of the camera lens;
  • d_x, d_y, u_0, v_0 and f are internal parameters of the camera lens, specifically:
  • d_x and d_y represent the physical length of a unit pixel in the X and Y directions of the image coordinate system; u_0 and v_0 represent the offsets of the origin of the image coordinate system from the origin of the camera coordinate system in the X and Y directions, respectively; f represents the imaging focal length of the lens.
  • R represents the rotation relationship between the world coordinate system and the camera coordinate system,
  • and is calculated using formula (2);
  • α, β and γ respectively represent the angles through which the world coordinate system must be rotated about the X, Y and Z axes to coincide with the camera coordinate system.
  • T represents the translation relationship between the world coordinate system and the camera coordinate system,
  • and is calculated using formula (3);
  • t_x, t_y and t_z represent the translations between the world coordinate system and the camera coordinate system along the X, Y and Z axes, respectively.
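The equation images referenced above (the pinhole projection and formulas (2) and (3)) do not survive in this text version. From the parameter definitions, they almost certainly take the standard forms below (a hedged reconstruction, not a reproduction of the original figures):

```latex
% Pinhole projection (transformation between image and world coordinates):
Z_C \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
= \begin{bmatrix} f/d_x & 0 & u_0 \\ 0 & f/d_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}
  \begin{bmatrix} R & T \end{bmatrix}
  \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}

% Formula (2): R as a composition of rotations about the Z, Y and X axes:
R = \begin{bmatrix} \cos\gamma & -\sin\gamma & 0 \\ \sin\gamma & \cos\gamma & 0 \\ 0 & 0 & 1 \end{bmatrix}
    \begin{bmatrix} \cos\beta & 0 & \sin\beta \\ 0 & 1 & 0 \\ -\sin\beta & 0 & \cos\beta \end{bmatrix}
    \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\alpha & -\sin\alpha \\ 0 & \sin\alpha & \cos\alpha \end{bmatrix}

% Formula (3): T as the translation vector between the two frames:
T = \begin{bmatrix} t_x & t_y & t_z \end{bmatrix}^{\mathsf{T}}
```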
  • The above parameters d_x, d_y, u_0, v_0, f, α, β, γ, t_x, t_y and t_z can be calibrated under, but not limited to, the conditions described below.
  • The shooting device is installed at the front roof position of the vehicle and points in the forward direction.
  • The optical axis of the lens of the shooting device coincides with the longitudinal geometric symmetry centerline of the vehicle.
  • The marking point P of the road sign graphic on the road surface in front of the vehicle lies on the optical axis of the lens of the shooting device, at horizontal distance Z_C from the lens optical center.
  • The origin of the camera coordinate system is set at the imaging aperture of the shooting device.
  • The world coordinate system coincides with the camera coordinate system; the forward direction of the vehicle is selected as the positive Z axis, the downward direction as the positive Y axis, and the rightward direction as the positive X axis.
  • The origin of the image coordinate system is on the Z axis of the camera coordinate system, and the X and Y axes of the image coordinate system are parallel to the X and Y axes of the camera coordinate system, respectively.
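Under these calibration conditions the extrinsics collapse (the world frame coincides with the camera frame, so R is the identity and T is zero), leaving only the intrinsic matrix. A minimal projection sketch in NumPy; all numeric values (focal length, pixel pitch, principal point) are illustrative, not from the patent:

```python
import numpy as np

# Illustrative intrinsics: f = 4 mm lens, 2 µm pixels, principal point (960, 540).
f, dx, dy, u0, v0 = 4e-3, 2e-6, 2e-6, 960.0, 540.0
K = np.array([[f / dx, 0.0, u0],
              [0.0, f / dy, v0],
              [0.0, 0.0, 1.0]])

def project(p_camera):
    """Pinhole projection with R = I, T = 0 (world frame == camera frame)."""
    uvw = K @ np.asarray(p_camera, dtype=float)
    return uvw[:2] / uvw[2]  # pixel coordinates (u, v)

# A ground point 10 m ahead of and 2 m below the optical center:
u, v = project((0.0, 2.0, 10.0))
print(u, v)  # 960.0, and 540 + (f/dy) * 2/10 = 940.0
```

Reading the same relation in reverse, a known camera height above the ground constrains Y and lets Z_C (and hence the horizontal distance to the marking point) be recovered from a single image.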
  • Vehicle positioning step 203: as shown in FIG. 6, after the horizontal distance Z_C between the marking point P of the road sign graphic and the lens optical center of the shooting device is obtained, it is combined with the distance L between the marking point P and the station edge in the inbound direction to calculate the horizontal distance L_CZ between the lens optical center of the shooting device and the station edge in the inbound direction:
  • Combined with the installation position of the shooting device on the vehicle, the distance between the vehicle and the station edge in the inbound direction is then determined, realizing vehicle positioning.
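The formula image for L_CZ is likewise not reproduced here. Under the layout described (the sign painted on the road, its marking point a surveyed distance L from the station edge), the relation reduces to a sum plus a fixed installation offset. A sketch under those assumptions — the function and parameter names are inventions of this sketch, and the "+" sign assumes the sign lies between the vehicle and the station edge:

```python
def distance_to_station_edge(z_c, sign_to_edge, cam_to_front):
    """Horizontal distance from the vehicle front to the station edge.

    z_c          : horizontal distance, camera optical center -> marking point P
                   (recovered from the image via the pinhole model)
    sign_to_edge : L, the surveyed distance from P to the station edge
    cam_to_front : fixed offset from the camera optical center to the vehicle
                   front, known from the camera installation position
    """
    l_cz = z_c + sign_to_edge   # camera optical center -> station edge (L_CZ)
    return l_cz - cam_to_front  # vehicle front -> station edge

print(distance_to_station_edge(z_c=12.0, sign_to_edge=3.0, cam_to_front=1.5))  # 13.5
```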

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Biology (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Automation & Control Theory (AREA)
  • Multimedia (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a vehicle positioning method based on deep neural network image recognition, and to a training method for a deep neural network. The training method comprises: road sign graphic setting (101), shooting device setting (102), image sample collection (103), training sample production (104), deep neural network construction (105), and deep neural network training (106). The image sample collection process is carried out over different periods under different lighting and weather conditions, so that the environmental adaptability of the deep neural network is improved. In addition, by taking sample images at every given angle in the direction of travel of the vehicle and in the direction perpendicular to the direction of travel, a large quantity of training sample data is obtained, the training accuracy of the deep neural network is improved, and the positioning accuracy of the vehicle is therefore improved.
PCT/CN2019/111840 2018-10-24 2019-10-18 Vehicle positioning method based on deep neural network image recognition WO2020083103A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
SG11202103814PA SG11202103814PA (en) 2018-10-24 2019-10-18 Vehicle positioning method based on deep neural network image recognition

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811245274.1A CN109446973B (zh) 2018-10-24 2018-10-24 Vehicle positioning method based on deep neural network image recognition
CN201811245274.1 2018-10-24

Publications (1)

Publication Number Publication Date
WO2020083103A1 true WO2020083103A1 (fr) 2020-04-30

Family

ID=65547888

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/111840 WO2020083103A1 (fr) Vehicle positioning method based on deep neural network image recognition

Country Status (3)

Country Link
CN (1) CN109446973B (fr)
SG (1) SG11202103814PA (fr)
WO (1) WO2020083103A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111914691A (zh) * 2020-07-15 2020-11-10 北京埃福瑞科技有限公司 Rail transit vehicle positioning method and system
CN113378735A (zh) * 2021-06-18 2021-09-10 北京东土科技股份有限公司 Road marking line recognition method and device, electronic device, and storage medium

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109446973B (zh) * 2018-10-24 2021-01-22 中车株洲电力机车研究所有限公司 Vehicle positioning method based on deep neural network image recognition
CN110726414B (zh) * 2019-10-25 2021-07-27 百度在线网络技术(北京)有限公司 Method and apparatus for outputting information
CN111161227B (zh) * 2019-12-20 2022-09-06 成都数之联科技股份有限公司 Bullseye positioning method and system based on a deep neural network
CN113496594A (zh) * 2020-04-03 2021-10-12 郑州宇通客车股份有限公司 Bus station-entry control method, device, and system
CN112699823A (zh) * 2021-01-05 2021-04-23 浙江得图网络有限公司 Fixed-point return method for shared electric vehicles
CN112950922B (zh) * 2021-01-26 2022-06-10 浙江得图网络有限公司 Fixed-point return method for shared electric vehicles
EP4375856A1 (fr) * 2021-08-19 2024-05-29 Zhejiang Geely Holding Group Co., Ltd. Environment-matching-based vehicle localization method and apparatus, vehicle, and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN202350794U (zh) * 2011-11-29 2012-07-25 高德软件有限公司 Navigation data acquisition device
CN103925927A (zh) * 2014-04-18 2014-07-16 中国科学院软件研究所 Traffic sign positioning method based on vehicle-mounted video
CN108009518A (zh) * 2017-12-19 2018-05-08 大连理工大学 Hierarchical traffic sign recognition method based on a fast binary convolutional neural network
US20180211120A1 (en) * 2017-01-25 2018-07-26 Ford Global Technologies, Llc Training An Automatic Traffic Light Detection Model Using Simulated Images
CN109446973A (zh) * 2018-10-24 2019-03-08 中车株洲电力机车研究所有限公司 Vehicle positioning method based on deep neural network image recognition

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9940553B2 * 2013-02-22 2018-04-10 Microsoft Technology Licensing, Llc Camera/object pose from predicted coordinates
CN105718860B (zh) * 2016-01-15 2019-09-10 武汉光庭科技有限公司 Positioning method and system based on a driving safety map and binocular traffic sign recognition
US9773196B2 * 2016-01-25 2017-09-26 Adobe Systems Incorporated Utilizing deep learning for automatic digital image segmentation and stylization
CN106326858A (zh) * 2016-08-23 2017-01-11 北京航空航天大学 Automatic highway traffic sign recognition and management system based on deep learning
CN106403926B (zh) * 2016-08-30 2020-09-11 上海擎朗智能科技有限公司 Positioning method and system
CN106845547B (zh) * 2017-01-23 2018-08-14 重庆邮电大学 Camera-based intelligent vehicle positioning and road sign recognition system and method
CN107563419B (zh) * 2017-08-22 2020-09-04 交控科技股份有限公司 Train positioning method combining image matching and two-dimensional codes
CN107703936A (zh) * 2017-09-22 2018-02-16 南京轻力舟智能科技有限公司 Automatic guided vehicle system and vehicle positioning method based on a convolutional neural network


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111914691A (zh) * 2020-07-15 2020-11-10 北京埃福瑞科技有限公司 Rail transit vehicle positioning method and system
CN111914691B (zh) * 2020-07-15 2024-03-19 北京埃福瑞科技有限公司 Rail transit vehicle positioning method and system
CN113378735A (zh) * 2021-06-18 2021-09-10 北京东土科技股份有限公司 Road marking line recognition method and device, electronic device, and storage medium
CN113378735B (zh) * 2021-06-18 2023-04-07 北京东土科技股份有限公司 Road marking line recognition method and device, electronic device, and storage medium

Also Published As

Publication number Publication date
SG11202103814PA (en) 2021-05-28
CN109446973B (zh) 2021-01-22
CN109446973A (zh) 2019-03-08

Similar Documents

Publication Publication Date Title
WO2020083103A1 (fr) Vehicle positioning method based on deep neural network image recognition
CN109945858B Multi-sensor fusion positioning method for low-speed parking driving scenarios
CN108802785B Vehicle self-positioning method based on a high-precision vector map and a monocular vision sensor
CN106651953B Vehicle pose estimation method based on traffic signs
CN106441319B Generation system and method for a lane-level navigation map for driverless vehicles
US10860871B2 Integrated sensor calibration in natural scenes
CN108256413B Passable area detection method and device, storage medium, and electronic device
US10127461B2 Visual odometry for low illumination conditions using fixed light sources
CN106525057A Generation system for high-precision road maps
US11625851B2 Geographic object detection apparatus and geographic object detection method
CN109815300B Vehicle positioning method
CN105930819A Real-time urban traffic light recognition system based on monocular vision and an integrated GPS navigation system
JP2021508815A System and method for correcting a high-definition map based on detection of obstructing objects
CN109212545A Multi-source target tracking and measurement system and tracking method based on active vision
CN109583409A Cognitive-map-oriented intelligent vehicle positioning method and system
CN112740225B Road surface element determination method and device
WO2022041706A1 (fr) Positioning method, positioning system, and vehicle
CN110018503B Vehicle positioning method and positioning system
CN112446915B Image-group-based mapping method and device
CN113673386A Method for annotating traffic lights in a prior map
CN112444251B Vehicle driving position determination method and device, storage medium, and computer device
CN110135387B Fast image recognition method based on sensor fusion
CN115127547B Tunnel inspection vehicle positioning method based on a strapdown inertial navigation system and image positioning
CN118274815A Real-time localization and mapping method in a long tunnel environment
CN116630559A Construction method for a lightweight road semantic map

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19876299

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19876299

Country of ref document: EP

Kind code of ref document: A1