CN112633035B - Driverless vehicle-based lane line coordinate true value acquisition method and device

Info

Publication number
CN112633035B
Authority
CN
China
Prior art keywords
lane line
point
abstraction
value
image
Legal status
Active
Application number
CN201910897619.XA
Other languages
Chinese (zh)
Other versions
CN112633035A
Inventor
李俊
许宝杯
蒋竺希
李翔
Current Assignee
Momenta Suzhou Technology Co Ltd
Original Assignee
Momenta Suzhou Technology Co Ltd
Application filed by Momenta Suzhou Technology Co Ltd
Priority to CN201910897619.XA
Publication of CN112633035A
Application granted
Publication of CN112633035B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/50: Context or environment of the image
    • G06V 20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/588: Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00: Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/02: Systems using the reflection of electromagnetic waves other than radio waves
    • G01S 17/06: Systems determining position data of a target
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00: Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/88: Lidar systems specially adapted for specific applications
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30248: Vehicle exterior or interior
    • G06T 2207/30252: Vehicle exterior; Vicinity of vehicle
    • G06T 2207/30256: Lane; Road marking


Abstract

The embodiment of the invention discloses a lane line coordinate true value acquisition method and device based on an unmanned vehicle. The method comprises the following steps: acquiring an image to be processed, the image to be processed being collected by a camera while the unmanned vehicle is traveling; determining the position of a lane line in the image to be processed; determining two-dimensional coordinate information, in the image coordinate system, of each point contained in the lane line, and multiplying the two-dimensional coordinate information of each point by a projection matrix between the camera and the laser radar obtained by calibration in advance, so as to obtain three-dimensional coordinate information of the points projected onto the road by the laser radar that correspond to the points contained in the lane line in the image to be processed; and determining the true coordinate value of each point contained in the lane line in the vehicle coordinate system according to the three-dimensional coordinate information of each point. By applying the scheme provided by the embodiment of the invention, the accuracy of acquiring the true value of the lane line coordinates can be improved.

Description

Driverless vehicle-based lane line coordinate true value acquisition method and device
Technical Field
The invention relates to the technical field of intelligent driving, in particular to a lane line coordinate true value acquisition method and device based on an unmanned vehicle.
Background
While an unmanned vehicle is driving, it needs to detect the lane lines on the road, so that it drives in the correct area and its driving safety is ensured. Specifically, a camera may be installed on the vehicle, images of the surrounding environment are collected by the camera during driving, and a lane line detection algorithm is used to detect lane lines in the images collected by the camera, so as to obtain the coordinate information of the lane lines.
The accuracy of the lane line detection algorithm affects the accuracy of the lane line detection result. To determine the detection accuracy of a lane line detection algorithm, the true value of the lane line coordinates, that is, the coordinate information of the lane line in the real scene, needs to be obtained first. The accuracy of the algorithm can then be determined by comparing the true coordinate values of the lane line with the lane line detection result.
The known method for acquiring the true value of lane line coordinates mainly consists of surveying marker points for the lane lines and the unmanned vehicle on a test field, and deriving the true coordinate values of the lane lines from the surveyed points. However, because a test field differs from real scenes, the true coordinate values obtained in this way cannot objectively reflect lane line behavior in real scenes; in other words, their accuracy is poor. Therefore, a method for acquiring the true value of lane line coordinates with improved accuracy is needed.
Disclosure of Invention
The invention provides a method and a device for acquiring a true value of a lane line coordinate based on an unmanned vehicle, which are used for improving the accuracy of acquiring the true value of the lane line coordinate. The specific technical scheme is as follows.
In a first aspect, an embodiment of the present invention provides a method for acquiring a true value of a lane line coordinate based on an unmanned vehicle, where the unmanned vehicle is equipped with a camera and a lidar, and the method includes:
acquiring an image to be processed; the image to be processed is acquired by the camera while the unmanned vehicle is traveling;
displaying the image to be processed, and receiving an annotation result input by a user for the image to be processed; the annotation result comprises the position of the lane line in the image to be processed;
determining two-dimensional coordinate information, in the image coordinate system, of each point contained in the lane line, and multiplying the two-dimensional coordinate information of each point by a projection matrix between the camera and the laser radar obtained by calibration in advance, so as to obtain three-dimensional coordinate information of the points projected onto the road by the laser radar that correspond to the points contained in the lane line in the image to be processed; the projection matrix identifies the mapping relation between the position of a point projected onto the road by the laser radar and the position of the corresponding point in an image acquired by the camera;
and determining the true coordinate value of each point contained in the lane line in the vehicle coordinate system according to the three-dimensional coordinate information of each point.
Optionally, the determining, according to the three-dimensional coordinate information of each point, a true coordinate value of each point included in the lane line in a vehicle coordinate system includes:
taking the coordinates in the x direction and the y direction in the three-dimensional coordinate information of each point as the true coordinate value of each point contained in the lane line in the vehicle coordinate system; the x-direction is a traveling direction of the unmanned vehicle, and the y-direction is a horizontal direction perpendicular to the x-direction.
Optionally, the method further includes:
and performing curve fitting on the true coordinate values of all points included in the lane line in the vehicle coordinate system to obtain a true lane line.
Optionally, the method further includes:
selecting target truth-value lane lines with preset lengths in the x direction from the truth-value lane lines;
acquiring, from predicted lane lines, target predicted lane lines of each preset length in the x direction; the predicted lane lines are lane lines obtained by detecting the image to be processed with a lane line detection algorithm;
calculating first coordinates of the true value abstraction points corresponding to the target truth-value lane lines and second coordinates of the prediction abstraction points corresponding to the target predicted lane lines;
and determining the accuracy of the lane line detection algorithm according to the first coordinates of the true value abstraction points and the second coordinates of the prediction abstraction points.
Optionally, the calculating a first coordinate of each true abstraction point corresponding to each target true value lane line and a second coordinate of each predicted abstraction point corresponding to each target predicted lane line includes:
for each target truth-value lane line, taking a distance value in the x direction corresponding to the target truth-value lane line as an abscissa of a first coordinate of a truth-value abstraction point corresponding to the target truth-value lane line, and taking a distance value in the y direction corresponding to the target truth-value lane line as an ordinate of the first coordinate of a truth-value abstraction point corresponding to the target truth-value lane line;
and regarding each target prediction lane line, taking the distance value in the x direction corresponding to the target prediction lane line as the abscissa of the second coordinate of the prediction abstraction point corresponding to the target prediction lane line, and taking the distance value in the y direction corresponding to the target prediction lane line as the ordinate of the second coordinate of the prediction abstraction point corresponding to the target prediction lane line.
Optionally, the determining, according to the first coordinate of each true abstraction point and the second coordinate of each predicted abstraction point, the accuracy of the lane line detection algorithm includes:
calculating the distance between each true value abstraction point and each prediction abstraction point according to the first coordinate of each true value abstraction point and the second coordinate of each prediction abstraction point;
acquiring each distance threshold, and regarding each distance threshold, taking a true value abstraction point of which the distance between the true value abstraction point and the corresponding prediction abstraction point is smaller than the distance threshold as a matching abstraction point;
counting first numbers of the matching abstraction points corresponding to the distance thresholds, second numbers of the true abstraction points and third numbers of the prediction abstraction points;
and calculating a first average value of the quotient of each first quantity and the second quantity and a second average value of the quotient of each first quantity and the third quantity, and taking the first average value and the second average value as the accuracy of the lane line detection algorithm.
In a second aspect, an embodiment of the present invention provides an apparatus for acquiring a true value of a lane line coordinate based on an unmanned vehicle, where the unmanned vehicle is equipped with a camera and a lidar, the apparatus including:
the image acquisition module is used for acquiring an image to be processed; the image to be processed is acquired by the camera while the unmanned vehicle is traveling;
the image annotation module is used for displaying the image to be processed and receiving an annotation result input by a user for the image to be processed; the annotation result comprises the position of the lane line in the image to be processed;
the coordinate projection module is used for determining two-dimensional coordinate information, in the image coordinate system, of each point contained in the lane line, and for multiplying the two-dimensional coordinate information of each point by a projection matrix between the camera and the laser radar obtained by calibration in advance, so as to obtain three-dimensional coordinate information of the points projected onto the road by the laser radar that correspond to the points contained in the lane line in the image to be processed; the projection matrix identifies the mapping relation between the position of a point projected onto the road by the laser radar and the position of the corresponding point in an image acquired by the camera;
and the truth value determining module is used for determining the true coordinate values of the points contained in the lane line in the vehicle coordinate system according to the three-dimensional coordinate information of the points.
Optionally, the truth value determining module is specifically configured to use coordinates in x and y directions in the three-dimensional coordinate information of each point as a true coordinate value of each point included in the lane line in a vehicle coordinate system; the x-direction is a traveling direction of the unmanned vehicle, and the y-direction is a horizontal direction perpendicular to the x-direction.
Optionally, the apparatus further comprises:
and the curve fitting module is used for performing curve fitting on the coordinate true value of each point contained in the lane line in the vehicle coordinate system to obtain a true value lane line.
Optionally, the apparatus further comprises:
the truth value lane line selection module is used for selecting each target truth value lane line with each preset length in the x direction from the truth value lane lines;
the predicted lane line selection module is used for acquiring each target predicted lane line with each preset length in the x direction in the predicted lane lines; the predicted lane line is a lane line obtained by detecting the image to be processed by a lane line detection algorithm;
the coordinate calculation module is used for calculating first coordinates of each true value abstraction point corresponding to each target true value lane line and second coordinates of each predicted abstraction point corresponding to each target predicted lane line;
and the accuracy calculation module is used for determining the accuracy of the lane line detection algorithm according to the first coordinates of the true value abstraction points and the second coordinates of the prediction abstraction points.
Optionally, the coordinate calculation module is specifically configured to:
for each target truth-value lane line, taking a distance value in the x direction corresponding to the target truth-value lane line as an abscissa of a first coordinate of a truth-value abstraction point corresponding to the target truth-value lane line, and taking a distance value in the y direction corresponding to the target truth-value lane line as an ordinate of the first coordinate of a truth-value abstraction point corresponding to the target truth-value lane line;
and regarding each target prediction lane line, taking the distance value in the x direction corresponding to the target prediction lane line as the abscissa of the second coordinate of the prediction abstraction point corresponding to the target prediction lane line, and taking the distance value in the y direction corresponding to the target prediction lane line as the ordinate of the second coordinate of the prediction abstraction point corresponding to the target prediction lane line.
Optionally, the accuracy calculating module includes:
the distance calculation submodule is used for calculating the distance between each true value abstraction point and each prediction abstraction point according to the first coordinate of each true value abstraction point and the second coordinate of each prediction abstraction point;
a matching point determining submodule, configured to acquire each distance threshold and, for each distance threshold, take a true value abstraction point whose distance to the corresponding prediction abstraction point is smaller than the distance threshold as a matching abstraction point;
the number counting submodule is used for counting each first number of the matching abstraction points corresponding to each distance threshold, the second number of the true value abstraction points and the third number of the prediction abstraction points;
and the accuracy determination submodule is used for calculating a first average value of the quotient of each first quantity and the second quantity and a second average value of the quotient of each first quantity and the third quantity, and taking the first average value and the second average value as the accuracy of the lane line detection algorithm.
As can be seen from the above, according to the method and device for acquiring a true value of lane line coordinates based on an unmanned vehicle provided by the embodiments of the present invention, the unmanned vehicle is provided with a camera and a laser radar and can acquire an image to be processed, the image being collected by the camera while the unmanned vehicle is traveling; display the image to be processed and receive an annotation result input by a user for the image, the annotation result comprising the position of the lane line in the image; determine two-dimensional coordinate information, in the image coordinate system, of each point contained in the lane line, and multiply the two-dimensional coordinate information of each point by a projection matrix between the camera and the laser radar obtained by calibration in advance, so as to obtain three-dimensional coordinate information of the points projected onto the road by the laser radar that correspond to the points contained in the lane line, where the projection matrix identifies the mapping relation between the position of a point projected onto the road by the laser radar and the position of the corresponding point in an image acquired by the camera; and determine the true coordinate value of each point contained in the lane line in the vehicle coordinate system according to the three-dimensional coordinate information of each point. In this way, the true coordinate values of the lane line can be determined from images acquired by the unmanned vehicle during actual driving. Moreover, since the relative positions of the camera, the laser radar and the unmanned vehicle are fixed, the relative relation between the position of a point projected onto the road by the laser radar in the actual scene and the position of the corresponding point in the image acquired by the camera is also fixed. Therefore, once the annotated position of the lane line in the image, that is, the accurate position of the lane line in the image, is known, the accurate position of the lane line in the actual scene can be deduced from the pre-calibrated projection matrix between the camera and the laser radar, which ensures the accuracy of the acquired lane line coordinate information. Of course, not all of the advantages described above need to be achieved at the same time in the practice of any one product or method of the invention.
The innovation points of the embodiment of the invention comprise:
1. Compared with acquiring the true value of lane line coordinates on a test field, determining the true coordinate values of the lane line from images acquired by the unmanned vehicle during actual driving objectively reflects lane line behavior in real scenes and improves the accuracy of the acquired true values. Moreover, since the relative positions of the camera, the laser radar and the unmanned vehicle are fixed, the relative relation between the position of a point projected onto the road by the laser radar in the actual scene and the position of the corresponding point in the image acquired by the camera is also fixed. Therefore, once the annotated position of the lane line in the image, that is, the accurate position of the lane line in the image, is known, the accurate position of the lane line in the actual scene can be deduced from the pre-calibrated projection matrix between the camera and the laser radar, which ensures the accuracy of the acquired true values of the lane line coordinates.
2. Considering that the points scanned by the laser radar are sparse and may contain small deviations, curve fitting is performed on the points included in the lane line, so that deviating points can be removed and an accurate true value lane line is obtained.
3. The accuracy of the lane line detection algorithm can be obtained by comparing the true value lane line with the predicted lane line produced by the lane line detection algorithm. In addition, considering that the true value lane line and the predicted lane line may differ in length, partial lane lines of preset lengths may be selected from each, and the accuracy of the lane line detection algorithm can be accurately determined by comparing the selected partial lane lines.
4. According to the perception characteristics required for automatic driving, each selected partial true value lane line and predicted lane line is abstracted into one point. By calculating the distances between the true value abstraction points and the predicted abstraction points and comparing them with a plurality of preset distance thresholds, the number of matched points is obtained, and this number is divided by the number of true value abstraction points and by the number of predicted abstraction points, respectively. A high ratio indicates more matched points, that is, higher lane line detection accuracy; a low ratio indicates fewer matched points, that is, lower accuracy. Determining these ratios as the accuracy of the lane line detection algorithm therefore allows the lane line perception algorithm to be accurately evaluated and indicates the iteration direction for lane line perception.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is to be understood that the drawings in the following description are merely exemplary of some embodiments of the invention. For a person skilled in the art, without inventive effort, further figures can be obtained from these figures.
Fig. 1 is a flowchart of a method for acquiring a true value of a lane line coordinate based on an unmanned vehicle according to an embodiment of the present invention;
fig. 2 is another flowchart of a method for acquiring a true value of a lane line coordinate based on an unmanned vehicle according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a lane line coordinate true value acquisition device based on an unmanned vehicle according to an embodiment of the present invention.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention. It is to be understood that the described embodiments are merely a few embodiments of the invention, and not all embodiments. All other embodiments, which can be obtained by a person skilled in the art without inventive effort based on the embodiments of the present invention, are within the scope of the present invention.
It is to be noted that the terms "comprises" and "comprising" and any variations thereof in the embodiments and drawings of the present invention are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
The embodiment of the invention discloses a method and a device for acquiring a true value of a lane line coordinate based on an unmanned vehicle, which can improve the accuracy of acquiring the true value of the lane line coordinate. The following provides a detailed description of embodiments of the invention.
In an embodiment of the present invention, a camera and a laser radar may be installed on an unmanned vehicle, where the camera and the laser radar satisfy time synchronization and space synchronization. The camera can collect images of the surrounding environment of the unmanned vehicle during driving, and the laser radar can project laser points and obtain the position coordinates of the points projected onto the road.
It will be appreciated that the relative positions of the camera and lidar and the unmanned vehicle are fixed, so the relative relationship between the position of the point projected by the lidar onto the road in the actual scene and the position of the corresponding point in the image captured by the camera is also fixed. Therefore, the camera and the laser radar can be calibrated in advance, and the projection matrix between the camera and the laser radar is determined. That is, the mapping relationship between the position of a point projected to the road by the lidar and the position of the corresponding point in the image captured by the camera may be determined. Therefore, after the accurate position of the lane line in the image collected by the camera is known, the accurate position of the lane line in the actual scene can be deduced according to the calibrated projection matrix, and the accuracy of acquiring the true value of the coordinate of the lane line is ensured.
The calibration process may be, for example, as follows: with the unmanned vehicle at a certain position, acquire the two-dimensional coordinates, in the image coordinate system, of points contained in a lane line in an image acquired by the camera, and acquire the three-dimensional coordinates of the corresponding points projected onto the lane line on the road by the laser radar. From the two-dimensional coordinates of the points in the image coordinate system and the three-dimensional coordinates of the corresponding points on the road, a matrix that converts the two-dimensional coordinates into the three-dimensional coordinates, namely the projection matrix between the camera and the laser radar, is calculated.
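The embodiment does not prescribe a particular algorithm for computing this matrix. As a minimal sketch, assuming the road surface is locally planar so that the 2D-to-3D mapping for points on the road reduces to a plane-to-plane homography, the matrix could be estimated from the point correspondences as follows (the function name and the use of OpenCV are illustrative assumptions, not part of the embodiment):

```python
import numpy as np
import cv2  # assumption: OpenCV is used to estimate the homography

def calibrate_projection(image_pts: np.ndarray, lidar_pts: np.ndarray) -> np.ndarray:
    """Estimate the camera-lidar projection matrix from corresponding points.

    image_pts: (N, 2) pixel coordinates of lane line points in the image.
    lidar_pts: (N, 3) lidar coordinates of the same points on the road.
    Lane line points lie on the road surface, so the mapping from the image
    plane to the ground plane can be modeled as a 3x3 homography.
    """
    ground_xy = lidar_pts[:, :2].astype(np.float64)  # road-plane (x, y) coordinates
    H, _ = cv2.findHomography(image_pts.astype(np.float64), ground_xy, cv2.RANSAC)
    return H  # maps homogeneous (u, v, 1) to homogeneous (x, y, 1) on the road
```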
Fig. 1 is a schematic flow chart of a method for acquiring a true value of a lane line coordinate based on an unmanned vehicle according to an embodiment of the present invention. The method is applied to the electronic equipment. The method specifically comprises the following steps.
S110: acquiring an image to be processed; the image to be processed is collected by the camera while the unmanned vehicle is traveling.
The method provided by the embodiment of the present invention can be performed offline in an electronic device, that is, after the unmanned vehicle has driven on the road. While the unmanned vehicle drives on the road, the camera can acquire images of the surrounding environment of the vehicle in real time and store the acquired images at a preset location.
When the electronic device obtains the true value of the lane line coordinate, the image collected by the camera can be obtained from the preset position and used as the image to be processed.
S120: displaying the image to be processed, and receiving an annotation result input by a user for the image to be processed; the annotation result comprises the position of the lane line in the image to be processed.
In the embodiment of the invention, in order to improve the accuracy of acquiring the true value of the lane line coordinate, the lane line position can be manually marked based on the image to be processed.
Specifically, the electronic device may display the image to be processed on a display screen, so that the user can mark the position of the lane line in it. For example, the user may select the area of the lane line and change it to a preset color. Upon receiving the annotation result input by the user, the electronic device can determine the position of the lane line in the image to be processed.
S130: determining two-dimensional coordinate information, in the image coordinate system, of each point contained in the lane line, and multiplying the two-dimensional coordinate information of each point by a projection matrix between the camera and the laser radar obtained by calibration in advance, so as to obtain three-dimensional coordinate information of the points projected onto the road by the laser radar that correspond to the points contained in the lane line in the image to be processed; the projection matrix identifies the mapping relation between the position of a point projected onto the road by the laser radar and the position of the corresponding point in an image acquired by the camera.
In the embodiment of the present invention, an image coordinate system can be constructed in the image to be processed. For example, the image coordinate system may take the center of the image as the origin and the two perpendicular sides of the image as the x direction and the y direction, respectively. After the position of the lane line in the image to be processed is determined, the electronic device may further determine the two-dimensional coordinate information, in the image coordinate system, of each point included in the lane line. For example, points on the lane line may be selected according to a preset selection rule, and the two-dimensional coordinate information of each selected point in the constructed image coordinate system may then be determined.
The two-dimensional coordinate information, in the image coordinate system, of each point contained in the lane line in the image to be processed is then multiplied by the projection matrix between the camera and the laser radar obtained by calibration in advance, to obtain the three-dimensional coordinate information of the points projected onto the road by the laser radar that correspond to the points contained in the lane line in the image to be processed.
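As a minimal sketch of this step, continuing the plane-to-plane homography assumption from the calibration example above (the helper name is illustrative), the multiplication is one matrix product per point in homogeneous coordinates:

```python
import numpy as np

def project_lane_points(points_2d: np.ndarray, H: np.ndarray) -> np.ndarray:
    """Map annotated 2D lane line points to road coordinates.

    points_2d: (N, 2) coordinates of lane line points in the image coordinate system.
    H: 3x3 projection matrix obtained by calibration in advance.
    """
    homo = np.hstack([points_2d, np.ones((len(points_2d), 1))])  # (u, v) -> (u, v, 1)
    mapped = (H @ homo.T).T   # multiply each point by the projection matrix
    mapped /= mapped[:, 2:3]  # normalize the homogeneous scale factor
    return mapped[:, :2]      # (x, y) road coordinates of each lane line point
```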
S140: and determining the true coordinate value of each point contained in the lane line in the vehicle coordinate system according to the three-dimensional coordinate information of each point.
For example, the three-dimensional coordinate information of each point may be directly used as the true coordinate value of each point included in the lane line in the vehicle coordinate system. The vehicle coordinate system may take the center of the unmanned vehicle as the origin, the traveling direction of the vehicle as the x direction, and the horizontal direction perpendicular to the x direction as the y direction.
Alternatively, when the laser radar is mounted facing forward, the coordinates in the x and y directions in the three-dimensional coordinate information of each point may be used as the true coordinate values of the points included in the lane line in the vehicle coordinate system.
As can be seen from the above, according to the lane line coordinate true value acquisition method based on an unmanned vehicle provided by the embodiment of the present invention, the unmanned vehicle is provided with a camera and a laser radar and can acquire an image to be processed, the image being collected by the camera while the unmanned vehicle is traveling; display the image to be processed and receive an annotation result input by a user for the image, the annotation result comprising the position of the lane line in the image; determine two-dimensional coordinate information, in the image coordinate system, of each point contained in the lane line, and multiply the two-dimensional coordinate information of each point by a projection matrix between the camera and the laser radar obtained by calibration in advance, so as to obtain three-dimensional coordinate information of the points projected onto the road by the laser radar that correspond to the points contained in the lane line, where the projection matrix identifies the mapping relation between the position of a point projected onto the road by the laser radar and the position of the corresponding point in an image acquired by the camera; and determine the true coordinate value of each point contained in the lane line in the vehicle coordinate system according to the three-dimensional coordinate information of each point. In this way, the true coordinate values of the lane line can be determined from images acquired by the unmanned vehicle during actual driving. Moreover, since the relative positions of the camera, the laser radar and the unmanned vehicle are fixed, the relative relation between the position of a point projected onto the road by the laser radar in the actual scene and the position of the corresponding point in the image acquired by the camera is also fixed. Therefore, once the annotated position of the lane line in the image, that is, the accurate position of the lane line in the image, is known, the accurate position of the lane line in the actual scene can be deduced from the pre-calibrated projection matrix between the camera and the laser radar, which ensures the accuracy of the acquired lane line coordinate information. Of course, not all of the advantages described above need to be achieved at the same time in the practice of any one product or method of the invention.
It will be appreciated that the points scanned by the laser radar are sparse and small deviations may be present. Therefore, as an implementation of the embodiment of the present invention, after obtaining the true coordinate values of the points included in the lane line in the vehicle coordinate system, the electronic device may further perform curve fitting on those true coordinate values to obtain a true value lane line. For example, a polynomial curve of at most third order may be fitted as the true value lane line.
Curve fitting over the points contained in the lane line allows deviating points to be removed, so that an accurate true value lane line is obtained.
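A minimal sketch of such a fit with numpy follows; the residual-based rejection of deviating points (a 3-sigma rule) is an illustrative assumption, since the embodiment only states that deviating points can be removed:

```python
import numpy as np

def fit_truth_lane_line(xs: np.ndarray, ys: np.ndarray) -> np.ndarray:
    """Fit a curve of at most third order, y = f(x), to the lane line points
    in the vehicle coordinate system, dropping deviating points."""
    coeffs = np.polyfit(xs, ys, deg=3)               # initial cubic fit
    residuals = np.abs(np.polyval(coeffs, xs) - ys)  # distance of each point to the curve
    keep = residuals <= 3.0 * residuals.std()        # assumed 3-sigma outlier rejection
    return np.polyfit(xs[keep], ys[keep], deg=3)     # refit on the remaining points
```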
As an implementation manner of the embodiment of the present invention, after obtaining the true value of the lane line, the electronic device may determine the accuracy of the lane line detection algorithm. Specifically, as shown in fig. 2, the process may include the following steps.
S210: selecting, from the true value lane lines, target true value lane lines of each preset length in the x direction.
In the embodiment of the present invention, because the lengths of the lane lines differ from image to image, the cubic-term coefficients of the fitted curves may differ, and the cubic-term coefficient has no clear physical meaning. In addition, what matters in practical applications is the position of the lane line at a certain distance ahead.
Therefore, when determining the accuracy of the lane line detection algorithm, the electronic device may select, from the true lane lines, each target true lane line having each preset length in the x direction. The preset length may be, for example, 5 meters, 6 meters, 10 meters, and the like, which is not limited in the embodiment of the present invention.
S220: acquiring target predicted lane lines with preset lengths in the x direction from the predicted lane lines; the predicted lane line is a lane line obtained by detecting the image to be processed by a lane line detection algorithm.
Similarly, for the predicted lane lines obtained by the lane line detection algorithm, target predicted lane lines of each preset length in the x direction may be acquired.
S230: and calculating a first coordinate of each true value abstraction point corresponding to each target true value lane line and a second coordinate of each predicted abstraction point corresponding to each target predicted lane line.
That is, each target truth-value lane line is abstracted into one point, called a truth-value abstraction point, and each target predicted lane line is abstracted into one point, called a predicted abstraction point.
In an implementation manner, when calculating the first coordinate of each true value abstraction point corresponding to each target true value lane line, for each target true value lane line, the distance value in the x direction corresponding to the target true value lane line may be taken as the abscissa of the first coordinate of the true value abstraction point corresponding to the target true value lane line, and the distance value in the y direction corresponding to the target true value lane line may be taken as the ordinate of the first coordinate of the true value abstraction point corresponding to the target true value lane line.
When the second coordinates of the prediction abstraction points corresponding to the target prediction lane lines are calculated, for each target prediction lane line, the distance value in the x direction corresponding to the target prediction lane line may be used as the abscissa of the second coordinate of the prediction abstraction point corresponding to the target prediction lane line, and the distance value in the y direction corresponding to the target prediction lane line may be used as the ordinate of the second coordinate of the prediction abstraction point corresponding to the target prediction lane line.
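A sketch of this abstraction, assuming each target lane line is represented by the coefficients of its fitted cubic (the helper name and the example preset lengths of 5, 6 and 10 meters are taken from the illustration above):

```python
import numpy as np

def abstraction_point(coeffs: np.ndarray, preset_length: float) -> tuple:
    """Abstract one target lane line into a single point: the abscissa is the
    preset x-direction distance, the ordinate is the fitted lateral offset of
    the lane line at that distance."""
    return (preset_length, float(np.polyval(coeffs, preset_length)))

# Illustrative usage for one truth-value lane line:
coeffs = np.array([1e-4, -2e-3, 0.05, 1.8])  # example cubic coefficients
truth_points = [abstraction_point(coeffs, d) for d in (5.0, 6.0, 10.0)]
```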
S240: and determining the accuracy of the lane line detection algorithm according to the first coordinates of the true abstraction points and the second coordinates of the predicted abstraction points.
In one implementation, the electronic device may calculate the distance between each true value abstraction point and each predicted abstraction point according to the first coordinates of the true value abstraction points and the second coordinates of the predicted abstraction points, and use the ratio of the number of distances smaller than a preset threshold to the total number as the accuracy of the lane line detection algorithm.
In another implementation, the electronic device may preset a plurality of distance thresholds, calculate an accuracy parameter corresponding to each distance threshold, and determine the final accuracy of the lane line detection algorithm from the accuracy parameters corresponding to the distance thresholds.
For example, the electronic device may first calculate a distance between each true-value abstraction point and each predicted abstraction point according to the first coordinate of each true-value abstraction point and the second coordinate of each predicted abstraction point; then obtaining each distance threshold, and regarding each distance threshold, taking the true value abstraction point with the distance between the true value abstraction point and the corresponding prediction abstraction point smaller than the distance threshold as a matching abstraction point; then, counting each first number of the matching abstraction points corresponding to each distance threshold, the second number of the true abstraction points and the third number of the predicted abstraction points; and finally, calculating a first average value of the quotient of each first quantity and the second quantity and a second average value of the quotient of each first quantity and the third quantity, and taking the first average value and the second average value as the accuracy of the lane line detection algorithm.
That is, for any distance threshold, denote the number of matched points by TP, the number of predicted points by PR, and the number of ground-truth points by GT; the indexes corresponding to that distance threshold can then be calculated as:
Precision=TP/PR
Recall=TP/GT
By varying the distance threshold, a Precision curve and a Recall curve are obtained respectively, and averaging over the whole curves yields the overall evaluation indexes AP (Average Precision) and AR (Average Recall).
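A minimal sketch of this evaluation, assuming each truth-value abstraction point is matched to its nearest predicted abstraction point and using an illustrative set of distance thresholds:

```python
import numpy as np

def evaluate_lane_detection(truth_pts, pred_pts, thresholds=(0.1, 0.2, 0.5, 1.0)):
    """Compute AP and AR over several distance thresholds.

    truth_pts, pred_pts: lists of (x, y) abstraction points (GT and PR).
    A truth-value point whose nearest predicted point is closer than the
    current threshold counts as a matched point (TP).
    """
    truth = np.asarray(truth_pts, dtype=float)
    pred = np.asarray(pred_pts, dtype=float)
    dists = np.linalg.norm(truth[:, None, :] - pred[None, :, :], axis=2)
    nearest = dists.min(axis=1)            # closest prediction for each truth point
    precisions, recalls = [], []
    for t in thresholds:
        tp = int((nearest < t).sum())      # number of matched abstraction points
        precisions.append(tp / len(pred))  # Precision = TP / PR
        recalls.append(tp / len(truth))    # Recall   = TP / GT
    return float(np.mean(precisions)), float(np.mean(recalls))  # AP, AR
```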
In this embodiment, the accuracy of the lane line detection algorithm can be obtained by comparing the true lane line with the predicted lane line obtained according to the lane line detection algorithm. In addition, the determined true value lane line and the predicted lane line may have different lengths, so that a part of the lane line with the preset length can be selected from the true value lane line and the predicted lane line, and the accuracy of the lane line detection algorithm can be accurately determined by comparing the selected part of the lane line.
According to the perception characteristics required for automatic driving, each selected partial true value lane line and predicted lane line is abstracted into one point. By calculating the distances between the true value abstraction points and the predicted abstraction points and comparing them with a plurality of preset distance thresholds, the number of matched points is obtained, and this number is divided by the number of true value abstraction points and by the number of predicted abstraction points, respectively. A high ratio indicates more matched points, that is, higher lane line detection accuracy; a low ratio indicates fewer matched points, that is, lower accuracy. Determining these ratios as the accuracy of the lane line detection algorithm therefore allows the lane line perception algorithm to be accurately evaluated and indicates the iteration direction for lane line perception.
Fig. 3 is a schematic structural diagram of a lane line coordinate true value acquisition apparatus based on an unmanned vehicle provided with a camera and a laser radar, according to an embodiment of the present invention. The apparatus includes:
an image obtaining module 310, configured to obtain an image to be processed; the image to be processed is acquired by the camera while the unmanned vehicle is traveling;
the image annotation module 320, configured to display the image to be processed and receive an annotation result input by a user for the image to be processed; the annotation result comprises the position of the lane line in the image to be processed;
the coordinate projection module 330, configured to determine two-dimensional coordinate information, in the image coordinate system, of each point contained in the lane line, and to multiply the two-dimensional coordinate information of each point by a projection matrix between the camera and the laser radar obtained by calibration in advance, so as to obtain three-dimensional coordinate information of the points projected onto the road by the laser radar that correspond to the points contained in the lane line in the image to be processed; the projection matrix identifies the mapping relation between the position of a point projected onto the road by the laser radar and the position of the corresponding point in an image acquired by the camera;
and the true value determining module 340 is configured to determine a true value of coordinates of each point included in the lane line in the vehicle coordinate system according to the three-dimensional coordinate information of each point.
As can be seen from the above, according to the lane line coordinate true value acquisition device based on an unmanned vehicle provided by the embodiment of the present invention, the unmanned vehicle is provided with a camera and a laser radar and can acquire an image to be processed, the image being collected by the camera while the unmanned vehicle is traveling; display the image to be processed and receive an annotation result input by a user for the image, the annotation result comprising the position of the lane line in the image; determine two-dimensional coordinate information, in the image coordinate system, of each point contained in the lane line, and multiply the two-dimensional coordinate information of each point by a projection matrix between the camera and the laser radar obtained by calibration in advance, so as to obtain three-dimensional coordinate information of the points projected onto the road by the laser radar that correspond to the points contained in the lane line, where the projection matrix identifies the mapping relation between the position of a point projected onto the road by the laser radar and the position of the corresponding point in an image acquired by the camera; and determine the true coordinate value of each point contained in the lane line in the vehicle coordinate system according to the three-dimensional coordinate information of each point. In this way, the true coordinate values of the lane line can be determined from images acquired by the unmanned vehicle during actual driving. Moreover, since the relative positions of the camera, the laser radar and the unmanned vehicle are fixed, the relative relation between the position of a point projected onto the road by the laser radar in the actual scene and the position of the corresponding point in the image acquired by the camera is also fixed. Therefore, once the annotated position of the lane line in the image, that is, the accurate position of the lane line in the image, is known, the accurate position of the lane line in the actual scene can be deduced from the pre-calibrated projection matrix between the camera and the laser radar, which ensures the accuracy of the acquired lane line coordinate information.
Optionally, the truth value determining module 340 is specifically configured to use coordinates in x and y directions in the three-dimensional coordinate information of each point as a true coordinate value of each point included in the lane line in a vehicle coordinate system; the x-direction is a traveling direction of the unmanned vehicle, and the y-direction is a horizontal direction perpendicular to the x-direction.
Optionally, the apparatus further comprises:
and the curve fitting module is used for performing curve fitting on the coordinate true value of each point contained in the lane line in the vehicle coordinate system to obtain a true value lane line.
Optionally, the apparatus further comprises:
the truth value lane line selection module is used for selecting each target truth value lane line with each preset length in the x direction from the truth value lane lines;
the predicted lane line selection module is used for acquiring each target predicted lane line with each preset length in the x direction in the predicted lane lines; the predicted lane line is a lane line obtained by detecting the image to be processed by a lane line detection algorithm;
the coordinate calculation module is used for calculating first coordinates of each true value abstraction point corresponding to each target true value lane line and second coordinates of each predicted abstraction point corresponding to each target predicted lane line;
and the accuracy calculation module is used for determining the accuracy of the lane line detection algorithm according to the first coordinates of the true value abstraction points and the second coordinates of the prediction abstraction points.
Optionally, the coordinate calculation module is specifically configured to:
for each target truth-value lane line, taking a distance value in the x direction corresponding to the target truth-value lane line as an abscissa of a first coordinate of a truth-value abstraction point corresponding to the target truth-value lane line, and taking a distance value in the y direction corresponding to the target truth-value lane line as an ordinate of the first coordinate of a truth-value abstraction point corresponding to the target truth-value lane line;
and regarding each target predicted lane line, taking the distance value in the x direction corresponding to the target predicted lane line as the abscissa of the second coordinate of the predicted abstraction point corresponding to the target predicted lane line, and taking the distance value in the y direction corresponding to the target predicted lane line as the ordinate of the second coordinate of the predicted abstraction point corresponding to the target predicted lane line.
Optionally, the accuracy calculating module includes:
the distance calculation submodule is used for calculating the distance between each true value abstraction point and each prediction abstraction point according to the first coordinate of each true value abstraction point and the second coordinate of each prediction abstraction point;
a matching point determining submodule, configured to acquire each distance threshold and, for each distance threshold, take a true value abstraction point whose distance to the corresponding prediction abstraction point is smaller than the distance threshold as a matching abstraction point;
the number counting submodule is used for counting each first number of the matching abstraction points corresponding to each distance threshold, the second number of the true value abstraction points and the third number of the prediction abstraction points;
and the accuracy determination submodule is used for calculating a first average value of the quotient of each first quantity and the second quantity and a second average value of the quotient of each first quantity and the third quantity, and taking the first average value and the second average value as the accuracy of the lane line detection algorithm.
The above device embodiment corresponds to the method embodiment and has the same technical effect; for a detailed description, refer to the method embodiment, which is not repeated here.
Those of ordinary skill in the art will understand that: the figures are merely schematic representations of one embodiment, and the blocks or flow diagrams in the figures are not necessarily required to practice the present invention.
Those of ordinary skill in the art will understand that: modules in the devices in the embodiments may be distributed in the devices in the embodiments according to the description of the embodiments, or may be located in one or more devices different from the embodiments with corresponding changes. The modules of the above embodiments may be combined into one module, or further split into multiple sub-modules.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (5)

1. A lane line coordinate true value acquisition method based on an unmanned vehicle, wherein the unmanned vehicle is provided with a camera and a laser radar, the method comprising:
acquiring an image to be processed; the image to be processed is acquired by the camera while the unmanned vehicle is traveling;
displaying the image to be processed, and receiving an annotation result input by a user for the image to be processed; the annotation result comprises the position of the lane line in the image to be processed;
determining two-dimensional coordinate information, in the image coordinate system, of each point contained in the lane line, and multiplying the two-dimensional coordinate information of each point by a projection matrix between the camera and the laser radar obtained by calibration in advance, so as to obtain three-dimensional coordinate information of the points projected onto the road by the laser radar that correspond to the points contained in the lane line in the image to be processed; the projection matrix identifies the mapping relation between the position of a point projected onto the road by the laser radar and the position of the corresponding point in an image acquired by the camera;
determining a true coordinate value of each point contained in the lane line in a vehicle coordinate system according to the three-dimensional coordinate information of each point;
performing curve fitting on the true coordinate values of the points contained in the lane line in the vehicle coordinate system to obtain a true value lane line;
selecting, from the true value lane line, target true value lane lines each having a preset length in the x direction;
acquiring, from predicted lane lines, target predicted lane lines each having a preset length in the x direction; wherein a predicted lane line is a lane line obtained by detecting the image to be processed by a lane line detection algorithm;
calculating first coordinates of true value abstraction points corresponding to each target true value lane line and second coordinates of prediction abstraction points corresponding to each target predicted lane line;
determining the accuracy of the lane line detection algorithm according to the first coordinates of the true value abstraction points and the second coordinates of the prediction abstraction points;
wherein the determining the accuracy of the lane line detection algorithm according to the first coordinates of the true value abstraction points and the second coordinates of the prediction abstraction points comprises:
calculating the distance between each true value abstraction point and each prediction abstraction point according to the first coordinate of each true value abstraction point and the second coordinate of each prediction abstraction point;
acquiring distance thresholds, and for each distance threshold, taking each true value abstraction point whose distance to the corresponding prediction abstraction point is smaller than that distance threshold as a matching abstraction point;
counting, for each distance threshold, a first number of the matching abstraction points, as well as a second number of the true value abstraction points and a third number of the prediction abstraction points;
and calculating a first average value of the quotients of each first number divided by the second number and a second average value of the quotients of each first number divided by the third number, and taking the first average value and the second average value as the accuracy of the lane line detection algorithm.
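To make the coordinate projection and curve fitting of claim 1 concrete, the following is a minimal Python sketch. It assumes, purely for illustration, that the pre-calibrated projection matrix acts as a 3x3 homography from homogeneous pixel coordinates to road-plane coordinates, and that the true value lane line is fitted as a polynomial y = f(x); the claim fixes neither the shape of the matrix nor the curve model.

```python
import numpy as np

def pixels_to_road(uv_points, projection_matrix):
    """Multiply labeled 2D lane line pixels by the pre-calibrated camera to
    laser radar projection matrix (assumed 3x3 here) to get road coordinates."""
    uv = np.asarray(uv_points, dtype=float)
    uv_h = np.hstack([uv, np.ones((len(uv), 1))])  # homogeneous pixel coords
    xyw = uv_h @ np.asarray(projection_matrix).T   # the claimed matrix product
    return xyw[:, :2] / xyw[:, 2:3]                # x forward, y lateral

def fit_true_value_lane(road_xy, degree=3):
    """Curve-fit the true coordinate values to obtain the true value lane line."""
    x, y = road_xy[:, 0], road_xy[:, 1]
    return np.poly1d(np.polyfit(x, y, degree))
```

Segments of each preset length in the x direction can then be cut from the fitted curve, matching the selecting and acquiring steps of the claim.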
2. The method according to claim 1, wherein the determining the true coordinate values of the points included in the lane line in the vehicle coordinate system according to the three-dimensional coordinate information of the points comprises:
taking the coordinates in the x direction and the y direction in the three-dimensional coordinate information of each point as the true coordinate value of each point contained in the lane line in the vehicle coordinate system; the x-direction is a traveling direction of the unmanned vehicle, and the y-direction is a horizontal direction perpendicular to the x-direction.
3. The method of claim 1, wherein calculating the first coordinates of the true value abstraction points corresponding to each target true value lane line and the second coordinates of the prediction abstraction points corresponding to each target predicted lane line comprises:
for each target true value lane line, taking the distance value in the x direction corresponding to the target true value lane line as an abscissa of the first coordinate of the true value abstraction point corresponding to the target true value lane line, and taking the distance value in the y direction corresponding to the target true value lane line as an ordinate of the first coordinate of the true value abstraction point corresponding to the target true value lane line;
and for each target predicted lane line, taking the distance value in the x direction corresponding to the target predicted lane line as the abscissa of the second coordinate of the prediction abstraction point corresponding to the target predicted lane line, and taking the distance value in the y direction corresponding to the target predicted lane line as the ordinate of the second coordinate of the prediction abstraction point corresponding to the target predicted lane line.
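As an illustration of claim 3, the sketch below samples a fitted lane line at the preset x-direction distance values: each x value becomes the abscissa and the curve's y value the ordinate of an abstraction point. The polynomial curve representation carries over from the previous sketch and is an assumption, not part of the claim.

```python
import numpy as np

def abstraction_points(lane_curve, preset_x_lengths):
    # Abscissa: the x-direction distance value of each target lane line;
    # ordinate: the corresponding y-direction distance value on the curve.
    xs = np.asarray(preset_x_lengths, dtype=float)
    return np.stack([xs, lane_curve(xs)], axis=1)  # (K, 2) abstraction points
```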
4. A lane line coordinate true value acquisition device based on an unmanned vehicle, characterized in that the unmanned vehicle is equipped with a camera and a laser radar, the device includes:
the image acquisition module is used for acquiring an image to be processed; wherein the image to be processed is acquired by the camera while the unmanned vehicle is traveling;
the image annotation module is used for displaying the image to be processed and receiving a labeling result input by a user for the image to be processed; wherein the labeling result comprises the position of the lane line in the image to be processed;
the coordinate projection module is used for determining two-dimensional coordinate information, in an image coordinate system, of each point contained in the lane line, and multiplying the two-dimensional coordinate information of each point by a projection matrix between the camera and the laser radar obtained by calibration in advance, to obtain three-dimensional coordinate information of each road point projected by the laser radar that corresponds to each point contained in the lane line in the image to be processed; wherein the projection matrix identifies the mapping relation between the position of a point projected onto the road by the laser radar and the position of the corresponding point of that point in an image acquired by the camera;
the true value determining module is used for determining the true coordinate value, in a vehicle coordinate system, of each point contained in the lane line according to the three-dimensional coordinate information of each point;
the curve fitting module is used for performing curve fitting on the true coordinate values of the points contained in the lane line in the vehicle coordinate system to obtain a true value lane line;
the true value lane line selection module is used for selecting, from the true value lane line, each target true value lane line having a preset length in the x direction;
the predicted lane line selection module is used for acquiring, from predicted lane lines, each target predicted lane line having a preset length in the x direction; wherein a predicted lane line is a lane line obtained by detecting the image to be processed by a lane line detection algorithm;
the coordinate calculation module is used for calculating first coordinates of the true value abstraction points corresponding to each target true value lane line and second coordinates of the prediction abstraction points corresponding to each target predicted lane line;
and the accuracy calculation module is used for determining the accuracy of the lane line detection algorithm according to the first coordinates of the true value abstraction points and the second coordinates of the prediction abstraction points;
wherein the accuracy calculation module comprises:
the distance calculation submodule is used for calculating the distance between each true value abstraction point and each prediction abstraction point according to the first coordinate of each true value abstraction point and the second coordinate of each prediction abstraction point;
the matching point determining submodule is used for acquiring distance thresholds and, for each distance threshold, taking each true value abstraction point whose distance to the corresponding prediction abstraction point is smaller than that distance threshold as a matching abstraction point;
the number counting submodule is used for counting, for each distance threshold, a first number of the matching abstraction points, as well as a second number of the true value abstraction points and a third number of the prediction abstraction points;
and the accuracy determination submodule is used for calculating a first average value of the quotients of each first number divided by the second number and a second average value of the quotients of each first number divided by the third number, and taking the first average value and the second average value as the accuracy of the lane line detection algorithm.
5. The apparatus according to claim 4, wherein the true value determining module is configured to take the coordinates in the x direction and the y direction in the three-dimensional coordinate information of each point as the true coordinate value of each point contained in the lane line in a vehicle coordinate system; the x-direction is the traveling direction of the unmanned vehicle, and the y-direction is a horizontal direction perpendicular to the x-direction.
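Tying the sketches together, a hypothetical end-to-end evaluation under the same assumptions could look as follows; every function name and numeric value here comes from the earlier illustrative sketches and is made up for demonstration, not taken from the patent.

```python
import numpy as np

# Hypothetical pre-calibrated camera to laser radar homography (made-up values).
H = np.array([[0.0, -0.1, 45.0],
              [0.02, 0.0, -6.5],
              [0.0, -0.001, 1.0]])

labeled_pixels = np.array([[320.0, 400.0], [330.0, 360.0], [345.0, 320.0]])
road_xy = pixels_to_road(labeled_pixels, H)          # true coordinate values
true_lane = fit_true_value_lane(road_xy, degree=2)   # few points, low degree

preset_lengths = [10.0, 14.0, 18.0]                  # x-direction distances (m)
truth_pts = abstraction_points(true_lane, preset_lengths)

# Prediction abstraction points from some detector, sampled the same way;
# simulated here as the truth plus noise purely to exercise the metric.
pred_pts = truth_pts + np.random.default_rng(0).normal(0.0, 0.1, truth_pts.shape)

recall_avg, precision_avg = lane_detection_accuracy(
    truth_pts, pred_pts, distance_thresholds=[0.1, 0.2, 0.5])
print(recall_avg, precision_avg)
```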
CN201910897619.XA 2019-09-23 2019-09-23 Driverless vehicle-based lane line coordinate true value acquisition method and device Active CN112633035B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910897619.XA CN112633035B (en) 2019-09-23 2019-09-23 Driverless vehicle-based lane line coordinate true value acquisition method and device

Publications (2)

Publication Number Publication Date
CN112633035A CN112633035A (en) 2021-04-09
CN112633035B (en) 2022-06-24

Family

ID=75282571

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910897619.XA Active CN112633035B (en) 2019-09-23 2019-09-23 Driverless vehicle-based lane line coordinate true value acquisition method and device

Country Status (1)

Country Link
CN (1) CN112633035B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113205447A (en) * 2021-05-11 2021-08-03 北京车和家信息技术有限公司 Road picture marking method and device for lane line identification
CN113340334B (en) * 2021-07-29 2021-11-30 新石器慧通(北京)科技有限公司 Sensor calibration method and device for unmanned vehicle and electronic equipment
CN114252082B (en) * 2022-03-01 2022-05-17 苏州挚途科技有限公司 Vehicle positioning method and device and electronic equipment

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107463918A (en) * 2017-08-17 2017-12-12 武汉大学 Lane line extracting method based on laser point cloud and image data fusion
CN109858460A (en) * 2019-02-20 2019-06-07 重庆邮电大学 A kind of method for detecting lane lines based on three-dimensional laser radar
CN109902637A (en) * 2019-03-05 2019-06-18 长沙智能驾驶研究院有限公司 Method for detecting lane lines, device, computer equipment and storage medium
CN110163930A (en) * 2019-05-27 2019-08-23 北京百度网讯科技有限公司 Lane line generation method, device, equipment, system and readable storage medium storing program for executing

Also Published As

Publication number Publication date
CN112633035A (en) 2021-04-09

Similar Documents

Publication Publication Date Title
CN108694882B (en) Method, device and equipment for labeling map
CN110567469B (en) Visual positioning method and device, electronic equipment and system
CN112633035B (en) Driverless vehicle-based lane line coordinate true value acquisition method and device
CN112540352B (en) Method and device for evaluating target detection algorithm based on unmanned vehicle
CN102612634B (en) A calibration apparatus, a distance measurement system and a calibration method
US20060078197A1 (en) Image processing apparatus
US10909395B2 (en) Object detection apparatus
US10015394B2 (en) Camera-based speed estimation and system calibration therefor
US11971961B2 (en) Device and method for data fusion between heterogeneous sensors
CN110608746B (en) Method and device for determining the position of a motor vehicle
US10354401B2 (en) Distance measurement method using vision sensor database
JP6552448B2 (en) Vehicle position detection device, vehicle position detection method, and computer program for vehicle position detection
CN113034586B (en) Road inclination angle detection method and detection system
KR20140144047A (en) System and method for estimating traffic characteristics information in the road network on a real-time basis
EP3333829B1 (en) Step detection device and step detection method
CN114724104B (en) Method, device, electronic equipment, system and medium for detecting visual recognition distance
CN110068826B (en) Distance measurement method and device
CN112833889B (en) Vehicle positioning method and device
CN109344677B (en) Method, device, vehicle and storage medium for recognizing three-dimensional object
KR20140102831A (en) Location Correction Method Using Additional Information of Mobile Instrument
JP2018036225A (en) State estimation device
CN111260538A (en) Positioning and vehicle-mounted terminal based on long-baseline binocular fisheye camera
JPH1096607A (en) Object detector and plane estimation method
CN112837365A (en) Image-based vehicle positioning method and device
CN112017241A (en) Data processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20211125

Address after: 215100 floor 23, Tiancheng Times Business Plaza, No. 58, qinglonggang Road, high speed rail new town, Xiangcheng District, Suzhou, Jiangsu Province

Applicant after: MOMENTA (SUZHOU) TECHNOLOGY Co.,Ltd.

Address before: Room 601-a32, Tiancheng information building, No. 88, South Tiancheng Road, high speed rail new town, Xiangcheng District, Suzhou City, Jiangsu Province

Applicant before: MOMENTA (SUZHOU) TECHNOLOGY Co.,Ltd.

GR01 Patent grant