CN113139031B - Method and related device for generating traffic sign for automatic driving


Info

Publication number
CN113139031B
CN113139031B
Authority
CN
China
Prior art keywords
guideboard
images
point set
feature point
coordinates
Prior art date
Legal status
Active
Application number
CN202110541380.XA
Other languages
Chinese (zh)
Other versions
CN113139031A
Inventor
单国航
朱磊
贾双成
李倩
李成军
Current Assignee
Zhidao Network Technology Beijing Co Ltd
Original Assignee
Zhidao Network Technology Beijing Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhidao Network Technology Beijing Co Ltd
Priority to CN202110541380.XA
Publication of CN113139031A
Application granted
Publication of CN113139031B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 - Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/29 - Geographical information databases
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C 21/26 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C 21/28 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G01C 21/30 - Map- or contour-matching
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/50 - Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F 16/58 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/587 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using geographical or spatial information, e.g. location
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/30 - Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33 - Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T 7/337 - Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving reference images or patches
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/50 - Context or environment of the image
    • G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V 20/582 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of traffic signs
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10016 - Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Library & Information Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Automation & Control Theory (AREA)
  • Multimedia (AREA)
  • Traffic Control Systems (AREA)
  • Navigation (AREA)

Abstract

The application relates to a method and a related device for generating traffic signs for automatic driving. The method comprises the following steps: acquiring two images containing the same guideboard; calculating a rotation matrix and a translation matrix between the two images; performing guideboard identification on the two images to obtain the pixel coordinates of a first guideboard feature point set in each of the two images; calculating the spatial coordinates of the first guideboard feature point set relative to the camera from the rotation matrix and the translation matrix between the two images and the pixel coordinates of the first guideboard feature point set in the two images; determining the guideboard space plane in which the first guideboard feature point set lies from the spatial coordinates of the first guideboard feature point set relative to the camera and a horizontal reference plane; and calculating the spatial coordinates of a second guideboard feature point set relative to the camera from the guideboard space plane and the pixel coordinates of the second guideboard feature point set in one of the images. The scheme provided by the application can obtain the geographic coordinates of the guideboard with high accuracy.

Description

Method and related device for generating traffic sign for automatic driving
Technical Field
The application relates to the technical field of navigation, and in particular to a method and a related device for generating traffic signs for automatic driving.
Background
With the development of artificial intelligence, automatic driving and other technologies, the construction of intelligent transportation has become a research hotspot, and high-precision maps are an indispensable part of intelligent transportation data. A high-precision map can contain various traffic signs; for example, ground feature elements in the real world such as lane lines, stop lines and crosswalk lines, as well as overhead feature elements such as guideboards and traffic lights, can be expressed in a detailed lane-level map, providing data support for navigation in application scenarios such as automatic driving.
As an information carrier for urban geographic entities, the guideboard among the traffic signs provides navigation information such as place names, routes, distances and directions. As infrastructure distributed at urban road intersections, it occupies a specific position in space and is a good carrier for the urban Internet of Things.
In the related art, the spatial coordinates of each feature point on the guideboard are calculated separately to generate the geographic coordinates of the guideboard, so that the guideboard is produced. If the spatial coordinate of one of the feature points has a large calculation error, the accuracy of guideboard production is directly affected.
Disclosure of Invention
In order to solve or at least partially solve the above problems in the related art, the application provides a method and a related device for generating traffic signs for automatic driving, which can obtain the geographic coordinates of a guideboard with high accuracy.
The first aspect of the present application provides a method for generating a traffic sign for automatic driving, including:
acquiring two images containing the same guideboard, and acquiring geographic position information of a camera when the two images are respectively shot;
calculating a rotation matrix and a translation matrix between the two images;
performing guideboard identification on the two images, and acquiring pixel coordinates of a first guideboard feature point set in each of the two images, wherein the first guideboard feature point set comprises at least three feature points on the guideboard;
calculating spatial coordinates of the first guideboard feature point set relative to the camera according to the rotation matrix and the translation matrix between the two images and the pixel coordinates of the first guideboard feature point set in the two images;
determining a guideboard space plane in which the first guideboard feature point set lies by using the spatial coordinates of the first guideboard feature point set relative to the camera and a horizontal reference plane; wherein the guideboard space plane is perpendicular to the horizontal reference plane;
calculating spatial coordinates of a second guideboard feature point set relative to the camera according to the guideboard space plane and the pixel coordinates of the second guideboard feature point set in one of the images, wherein the second guideboard feature point set comprises at least two feature points at preset positions on the guideboard;
and generating the geographic coordinates of the guideboard by using the spatial coordinates of the second guideboard feature point set relative to the camera and the geographic position information of the camera when the two images were shot.
In one embodiment, the calculating a rotation matrix and a translation matrix between the two images includes:
acquiring characteristic points of each image in the two images;
matching the characteristic points of the two images to obtain a target characteristic point set successfully matched in the two images;
and calculating a rotation matrix and a translation matrix between the two images by using the target feature point set.
In one embodiment, the acquiring pixel coordinates of the first road sign feature point set in the two images respectively includes:
acquiring characteristic points in the identified guideboard area in each of the two images;
Matching the characteristic points in the guideboard areas in the two images to obtain a successfully matched first guideboard characteristic point set in the two images;
and acquiring pixel coordinates of the first road sign feature point set in the two images respectively.
In one embodiment, the determining a guideboard space plane in which the first guideboard feature point set lies, by using the spatial coordinates of the first guideboard feature point set relative to the camera and a horizontal reference plane, includes:
constructing a vertical plane error equation by utilizing a least square optimization algorithm according to the space coordinates of the first road sign feature point set relative to the camera and a horizontal reference plane;
and obtaining a guideboard space plane where the first guideboard feature point set is located according to the vertical plane error equation.
In one embodiment, the calculating the spatial coordinates of the second guideboard feature point set relative to the camera according to the guideboard spatial plane and the pixel coordinates of the second guideboard feature point set in one of the images includes:
constructing a characteristic point space coordinate solving equation set by utilizing the guideboard space plane and a preset calculation formula;
And substituting pixel coordinates of the second guideboard feature point set in one of the images into the equation set in turn to obtain space coordinates of the second guideboard feature point set relative to the camera.
In one embodiment, the preset position includes:
and one or a combination of more than one of corner points, center points, line segment intersection points, points on edge lines and vertexes of fonts of the guideboard.
A second aspect of the present application provides a traffic sign generating apparatus for automatic driving, including:
the acquisition unit is used for acquiring two images containing the same guideboard and acquiring geographic position information of a camera when the two images are respectively shot;
a first calculation unit configured to calculate a rotation matrix and a translation matrix between the two images;
the identification unit is used for carrying out guideboard identification on the two images, and acquiring pixel coordinates of a first guideboard feature point set in the two images respectively, wherein the first guideboard feature point set comprises at least three feature points in the guideboard;
the second calculating unit is used for calculating the spatial coordinates of the first guideboard feature point set relative to the camera according to the rotation matrix and the translation matrix between the two images and the pixel coordinates of the first guideboard feature point set in the two images;
the determining unit is used for determining the guideboard space plane in which the first guideboard feature point set lies by using the spatial coordinates of the first guideboard feature point set relative to the camera and a horizontal reference plane; wherein the guideboard space plane is perpendicular to the horizontal reference plane;
a third calculation unit, configured to calculate, according to the spatial plane of the guideboard and the pixel coordinates of a second guideboard feature point set in one of the images, spatial coordinates of the second guideboard feature point set relative to the camera, where the second guideboard feature point set includes at least two feature points at preset positions on the guideboard;
and the generation unit is used for generating the geographic coordinates of the guideboard by utilizing the spatial coordinates of the second guideboard feature point set relative to the camera and the geographic position information of the camera when the two images are shot.
In one embodiment, the preset position includes:
and one or a combination of more than one of corner points, center points, line segment intersection points, points on edge lines and vertexes of fonts of the guideboard.
A third aspect of the present application provides an electronic apparatus, comprising:
a processor; and
a memory having executable code stored thereon which, when executed by the processor, causes the processor to perform the method as described above.
A fourth aspect of the application provides a non-transitory machine-readable storage medium having stored thereon executable code which, when executed by a processor of an electronic device, causes the processor to perform a method as described above.
The technical scheme provided by the application can comprise the following beneficial effects:
according to the method provided by the embodiment of the application, the space coordinate of the first road sign feature point set relative to the camera is utilized to determine the guideboard space plane where the first road sign feature point set is located. Because the guideboard space plane is determined together according to all the first road board feature points in the first road board feature point set, the influence on the calculation accuracy of the guideboard space plane caused by calculation errors of certain first road board feature points is avoided to a great extent. By setting the guideboard space plane to be perpendicular to the horizontal reference plane, the fact that the guideboard is perpendicular to the horizontal reference plane in the real world is reflected, so that the guideboard space plane is corrected, the influence of calculation errors of certain first guideboard feature points is removed, and the calculation accuracy of the guideboard space plane is ensured. And calculating the space coordinates of the second guideboard feature point set relative to the camera according to the guideboard space plane and the pixel coordinates of the second guideboard feature point set in one of the images, wherein the second guideboard feature point set comprises at least two feature points at preset positions on the guideboard. The second guideboard feature points may be feature points at preset positions on the guideboard, so that the second guideboard feature points are more representative, and accuracy of identification and acquisition of the second guideboard feature points is guaranteed. The accurate and reliable guideboard space plane and the second guideboard feature point set which is more representative and has high accuracy are utilized for calculation, so that the accuracy, reliability and stability of the second guideboard feature point set relative to the space coordinates of the camera are ensured, and the high-accuracy guideboard geographic coordinates are further facilitated to be obtained, and the high-accuracy guideboard is manufactured.
Furthermore, the method provided by the embodiment of the application can acquire the feature points of each of the two images, match the feature points of the two images to obtain a successfully matched target feature point set, and calculate the rotation matrix and the translation matrix between the two images using the target feature point set. This ensures the accuracy of the calculated spatial coordinates of the first guideboard feature point set relative to the camera and further helps to obtain high-accuracy guideboard geographic coordinates.
Furthermore, the method provided by the embodiment of the application can acquire the feature points within the guideboard area identified in each of the two images and match the feature points within the guideboard areas of the two images to obtain a successfully matched first guideboard feature point set, so as to acquire the pixel coordinates of the first guideboard feature point set in each of the two images, which further helps to obtain high-accuracy guideboard geographic coordinates.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application as claimed.
Drawings
The foregoing and other objects, features and advantages of the application will be apparent from the following more particular descriptions of exemplary embodiments of the application as illustrated in the accompanying drawings wherein like reference numbers generally represent like parts throughout the exemplary embodiments of the application.
FIG. 1 is a flow chart diagram of a method of generating traffic identifications for autopilot, shown in an embodiment of the present application;
fig. 2 is a schematic structural view of a traffic sign generating apparatus for automatic driving according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Embodiments of the present application will be described in more detail below with reference to the accompanying drawings. While embodiments of the present application are illustrated in the drawings, it should be understood that the present application may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the application to those skilled in the art.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms "first," "second," "third," etc. may be used herein to describe various information, these information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the application. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. In the description of the present application, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
In the related art, the spatial coordinates of each feature point on the guideboard are calculated separately to generate the geographic coordinates of the guideboard, so that the guideboard is produced. If the spatial coordinate of one of the feature points has a large calculation error, the accuracy of guideboard production is directly affected.
In view of the above problems, embodiments of the present application provide a method and a related device for generating traffic signs for automatic driving, which can obtain a guideboard geographic coordinate with high accuracy.
The following describes the technical scheme of the embodiment of the present application in detail with reference to the accompanying drawings.
Fig. 1 is a flow chart illustrating a method of generating traffic signs for automatic driving according to an embodiment of the present application.
Referring to fig. 1, the method includes:
step S101, acquiring two images containing the same guideboard, and acquiring geographic position information of a camera when the two images are respectively shot.
In the embodiment of the application, video data captured while driving can be collected by an image pickup device, which may include, but is not limited to, a dashcam mounted on the vehicle, a camera, the driver's mobile phone, or any other device with an imaging function. The image pickup device may be a monocular device. It may be mounted at the front of the vehicle to record the guideboards ahead of the vehicle and obtain a continuous video containing the guideboard. For later processing, frames need to be extracted from the video data containing the guideboard that was acquired while the vehicle was traveling. Typically the video frame rate is 30 frames per second, and the video may be decimated according to a preset rule, for example to 10, 15, 20 or some other number of frames per second, to obtain a multi-frame image sequence in which the time interval between two adjacent frames is the decimation interval. In addition, the image pickup device records the shooting time of each captured image. In the embodiment of the application, the image pickup device that captures the images is referred to as the camera.
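For illustration only (the patent does not prescribe a specific implementation), the frame decimation described above might be sketched with OpenCV as follows; the frame rate fallback, the number of frames kept per second and the file name are assumptions.

```python
import cv2

def decimate_video(video_path, frames_per_second_kept=10):
    """Keep a fixed number of frames per second from a dashcam video and
    return them together with their time offsets (a rough sketch)."""
    cap = cv2.VideoCapture(video_path)
    src_fps = cap.get(cv2.CAP_PROP_FPS) or 30.0            # e.g. 30 frames per second
    step = max(1, round(src_fps / frames_per_second_kept))
    frames = []
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            timestamp_s = index / src_fps                   # shooting time offset of the frame
            frames.append((timestamp_s, frame))
        index += 1
    cap.release()
    return frames

kept = decimate_video("dashcam.mp4", frames_per_second_kept=10)
```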
The orientation of the camera (i.e., the optical axis of the camera) may be set parallel to a horizontal reference plane (i.e., a horizontal plane).
In addition, the geographic position information of the vehicle or the camera can be collected by a positioning device configured on the car machine (in-vehicle unit) or the mobile phone; the positioning device can be implemented with existing equipment such as GPS (Global Positioning System), BeiDou or RTK (real-time kinematic), and the application is not limited in this respect. The geographic position information of the vehicle (or camera) may include, but is not limited to, the geographic coordinates (e.g., GPS coordinates or latitude and longitude), azimuth, heading angle and direction of travel of the vehicle (or camera).
The method provided by the embodiment of the application can be executed by the car machine or the mobile phone, and can also be executed by other devices with computing and processing capabilities, such as a computer. Taking the car machine as an example, the camera and the positioning device can be arranged inside or outside the car machine, with a communication connection established between them and the car machine.
The camera captures images, and the positioning device collects the geographic position information of the vehicle or the camera and transmits it to the car machine. According to the shooting time of an image, the geographic position information acquired by the positioning device at the same moment can be looked up. It will be appreciated that the clocks of the camera and the positioning device may be synchronized in advance so that each captured image corresponds exactly to the current position of the vehicle or camera.
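A minimal sketch of looking up the positioning sample recorded nearest to an image's shooting time is given below; it assumes the clocks have already been synchronized and that the track is a time-sorted list of records, neither of which is specified in this form by the patent.

```python
import bisect

def geo_info_at(shoot_time, gps_track):
    """Return the positioning record whose timestamp is closest to the image's
    shooting time. gps_track is a list of (timestamp, record) tuples sorted by
    timestamp; clocks are assumed to have been synchronized beforehand."""
    times = [t for t, _ in gps_track]
    i = bisect.bisect_left(times, shoot_time)
    candidates = [j for j in (i - 1, i) if 0 <= j < len(gps_track)]
    best = min(candidates, key=lambda j: abs(times[j] - shoot_time))
    return gps_track[best][1]

track = [(0.0, {"lat": 39.9000, "lon": 116.4000, "heading": 75.0}),
         (1.0, {"lat": 39.9001, "lon": 116.4003, "heading": 75.2})]
print(geo_info_at(0.4, track))
```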
Step S102, calculating a rotation matrix and a translation matrix between the two images.
In an alternative embodiment, the specific embodiment of calculating the rotation matrix and the translation matrix between the two images in step S102 may include the following steps:
11) Acquire the feature points of each of the two images.
The feature points may include points on the guideboard, or feature points on other fixed objects (such as buildings or billboards); this is not limited here. Specifically, the BRISK operator may be used to extract the feature points of each of the two images and to compute a descriptor for each feature point; the described feature points are used as the feature points of the image.
12) Match the feature points of the two images to obtain the target feature point set that is successfully matched between the two images.
In the embodiment of the present application, the two images may contain the same objects (such as buildings, billboards, guideboards, etc.) under different viewing angles. By matching the feature points on the images, certain feature points of the same object in the two images can be successfully matched. The target feature point set is the set of feature points successfully matched in each of the two images.
13) Using the target feature point set, calculate the rotation matrix and translation matrix between the two images.
For example, during driving, an image A containing a guideboard is acquired at position a, and an image B containing the same guideboard is acquired at position b. Assuming there are eight pairs of successfully matched feature points in the two images, the eight-point method can be used to calculate the rotation matrix and translation matrix between the two images. It will be appreciated that the eight-point method is used here as an example and not as a limitation. When more than eight pairs of feature points are matched between the two images, a least-squares problem can be constructed from the epipolar constraint to obtain the translation matrix and rotation matrix between the two images.
It can be understood that in step S102 the feature points acquired in the two images may lie inside or outside the guideboard area; that is, the feature points acquired in step S102 are selected from the entire area of each of the two images, and the feature points in the entire areas of the two images are matched to obtain the successfully matched target feature point set. Because the target feature points may lie inside or outside the guideboard area of each image, using the target feature point set makes the calculated rotation matrix and translation matrix between the two images more accurate and reliable.
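A compact sketch of steps 11) to 13) using OpenCV is shown below. It assumes BRISK features, brute-force Hamming matching and the essential-matrix decomposition; the intrinsic matrix K and the image files are placeholders, and the recovered translation is known only up to scale at this stage.

```python
import cv2
import numpy as np

def relative_pose(img_a, img_b, K):
    """Match BRISK features across the whole area of both images and recover
    the rotation matrix R and (unit-scale) translation t between them."""
    brisk = cv2.BRISK_create()
    kp_a, desc_a = brisk.detectAndCompute(img_a, None)
    kp_b, desc_b = brisk.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(desc_a, desc_b), key=lambda m: m.distance)
    pts_a = np.float32([kp_a[m.queryIdx].pt for m in matches])
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in matches])
    # Essential matrix from the epipolar constraint (RANSAC rejects bad matches),
    # then decompose it into R and t; t has unit norm here.
    E, mask = cv2.findEssentialMat(pts_a, pts_b, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, mask = cv2.recoverPose(E, pts_a, pts_b, K, mask=mask)
    return R, t, pts_a, pts_b

K = np.array([[1000.0, 0.0, 960.0], [0.0, 1000.0, 540.0], [0.0, 0.0, 1.0]])  # assumed intrinsics
# R, t_unit, pts_a, pts_b = relative_pose(cv2.imread("A.jpg", 0), cv2.imread("B.jpg", 0), K)
```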
Step S103, performing guideboard identification on the two images, and acquiring the pixel coordinates of the first guideboard feature point set in each of the two images.
The first guideboard feature point set may include at least three feature points on the guideboard.
In an optional embodiment, step S103 of performing guideboard identification on the two images and acquiring the pixel coordinates of the first guideboard feature point set in each of the two images, where the first guideboard feature point set includes at least three feature points on the guideboard, may include the following steps:
14) Acquire the feature points within the identified guideboard area in each of the two images.
The two images can be recognized separately to identify the guideboard contained in each image. The image recognition process may include: training a model on samples with a deep learning algorithm, verifying the accuracy of the trained model, using the model that passes the accuracy verification to identify the guideboard in the image, and then extracting the feature points on the guideboard with a preset algorithm. In this embodiment, the guideboard in each of the two images may be identified by the YOLO v5 algorithm to ensure that the guideboard in the image is reliably obtained. Further, the BRISK operator may be used to extract the feature points within the guideboard area identified in each of the two images and to compute a descriptor for each of them; the described feature points are used as the feature points of the image.
It will be appreciated that other algorithms, such as other deep learning detection algorithms, may also be used to identify the guideboard area in the image, which is not limited here. Other algorithms, such as ORB, SURF or SIFT, may likewise be used to extract the feature points in the image; this is not limited either.
15) Match the feature points within the guideboard areas of the two images to obtain the first guideboard feature point set that is successfully matched between the two images.
The first guideboard feature point set is the set of successfully matched feature points within the guideboard area of each of the two images. In step S103, the feature points acquired in the two images are feature points within the guideboard area, and the feature points to be matched are also feature points within the guideboard area; in step S102, by contrast, the feature points acquired in the two images may lie inside or outside the guideboard area.
16) Acquire the pixel coordinates of the first guideboard feature point set in each of the two images.
In the embodiment of the application, the extracted feature points can be represented by pixels: a feature point can be regarded as a pixel point, and each pixel point can be represented by pixel coordinates. Pixel coordinates describe the position of a pixel point on the digital image formed when the object is imaged. To determine pixel coordinates, a pixel coordinate system is first defined: a rectangular coordinate system u-v is set up with the top-left corner of the image plane as the origin, the abscissa u and ordinate v of a pixel are the column number and row number of that pixel in the image array, and the pixel coordinates of a point can be written as Puv(u, v). Because the guideboard is imaged at different positions in different images, the pixel coordinates of the same feature point on the guideboard differ between images, so the pixel coordinates of each feature point must be acquired in both images.
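The following sketch illustrates steps 14) to 16) under the assumption that the guideboard has already been detected (for example by a YOLO-type detector) and is supplied as a bounding box per image; only keypoints inside the box are matched, and their pixel coordinates in both images are returned.

```python
import cv2
import numpy as np

def guideboard_pixel_pairs(img_a, box_a, img_b, box_b):
    """Match BRISK keypoints restricted to the detected guideboard boxes
    (x, y, w, h) and return the matched pixel coordinates in both images."""
    brisk = cv2.BRISK_create()

    def keypoints_in_box(img, box):
        x, y, w, h = box
        kps, desc = brisk.detectAndCompute(img, None)
        keep = [i for i, kp in enumerate(kps)
                if x <= kp.pt[0] <= x + w and y <= kp.pt[1] <= y + h]
        return [kps[i] for i in keep], desc[keep]

    kp_a, desc_a = keypoints_in_box(img_a, box_a)
    kp_b, desc_b = keypoints_in_box(img_b, box_b)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(desc_a, desc_b)
    uv_a = np.float32([kp_a[m.queryIdx].pt for m in matches])   # pixel coordinates in image A
    uv_b = np.float32([kp_b[m.trainIdx].pt for m in matches])   # pixel coordinates in image B
    return uv_a, uv_b
```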
Step S104, calculating the spatial coordinates of the first guideboard feature point set relative to the camera according to the rotation matrix and the translation matrix between the two images and the pixel coordinates of the first guideboard feature point set in the two images.
The spatial coordinates of the first guideboard feature point set relative to the camera can be calculated by a triangulation algorithm from the pixel coordinates of the first guideboard feature point set in the two images and the rotation matrix and translation matrix between the two images.
In an alternative embodiment, step S104 of calculating the spatial coordinates of the first guideboard feature point set relative to the camera according to the rotation matrix and the translation matrix between the two images and the pixel coordinates of the first guideboard feature point set in the two images may include the following steps:
17) Calculate the moving distance of the camera using the geographic position information of the camera when the two images were shot;
18) Optimize the translation matrix between the two images according to the moving distance of the camera to obtain a new translation matrix;
19) Obtain the spatial coordinates of the first guideboard feature point set relative to the camera from the pixel coordinates of the first guideboard feature point set in the two images, the rotation matrix between the two images, and the new translation matrix.
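Steps 17) to 19) might be sketched as follows, with cv2.triangulatePoints standing in for the triangulation algorithm and the camera displacement assumed to come from the positioning data at the two shooting times.

```python
import cv2
import numpy as np

def triangulate_guideboard_points(uv_a, uv_b, K, R, t_unit, camera_move_dist_m):
    """Scale the unit-norm translation by the GPS-derived camera displacement,
    then triangulate the matched guideboard pixels into camera-frame points."""
    t = t_unit / np.linalg.norm(t_unit) * camera_move_dist_m    # new translation matrix
    P_a = K @ np.hstack([np.eye(3), np.zeros((3, 1))])          # projection matrix for image A
    P_b = K @ np.hstack([R, t.reshape(3, 1)])                   # projection matrix for image B
    pts_h = cv2.triangulatePoints(P_a, P_b, uv_a.T, uv_b.T)     # 4 x N homogeneous points
    return (pts_h[:3] / pts_h[3]).T                             # N x 3, relative to camera A

# Hypothetical usage, with R, t_unit from the pose step and uv_a, uv_b from matching:
# pts_cam = triangulate_guideboard_points(uv_a, uv_b, K, R, t_unit, camera_move_dist_m=8.4)
```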
Step S105, determining the guideboard space plane in which the first guideboard feature point set lies, using the spatial coordinates of the first guideboard feature point set relative to the camera and a horizontal reference plane; wherein the guideboard space plane is perpendicular to the horizontal reference plane.
In practical applications, the camera is generally oriented parallel to a horizontal reference plane (i.e., a horizontal plane) when images are acquired; that is, the optical axis of the camera is kept parallel to the horizontal plane, so that the acquired images preserve the positional relationship between objects and the ground plane as far as possible. However, even if the camera is oriented parallel to the horizontal plane, the calculated guideboard space plane may not be perpendicular to the horizontal plane because of point-selection errors or calculation errors in later processing, so the guideboard space plane needs to be corrected to be perpendicular to the horizontal plane.
In an alternative embodiment, the determining in step S105 of the guideboard space plane in which the first guideboard feature point set lies, using the spatial coordinates of the first guideboard feature point set relative to the camera and a horizontal reference plane, may include the following steps:
20) Construct a vertical-plane error equation with a least-squares optimization algorithm, according to the spatial coordinates of the first guideboard feature point set relative to the camera and the horizontal reference plane;
21) Obtain the guideboard space plane in which the first guideboard feature point set lies from the vertical-plane error equation.
Specifically, a least-squares optimization algorithm is used to construct the plane error equation.
Because a guideboard in the real world is perpendicular to the horizontal reference plane, the guideboard space plane must also be perpendicular to it. Starting from the general plane equation Ax + By + Cz + D = 0 and setting B = 0 guarantees this perpendicularity, so the guideboard space plane in which the first guideboard feature point set lies is described by the vertical plane equation Ax + Cz + D = 0. In this way the guideboard space plane is corrected, the influence of calculation errors of individual first guideboard feature points is removed, and the calculation accuracy of the guideboard space plane is ensured.
That is, the plane error equation is restricted to a vertical-plane error equation: the least-squares error is the sum of the squared residuals A*x_i + C*z_i + D over the points of the first guideboard feature point set.
The spatial coordinates of the first guideboard feature point set relative to the camera are substituted into the error equation in sequence; minimizing the error gives the corresponding values of A, C and D, which determine the guideboard space plane in which the first guideboard feature point set lies. That is, the guideboard space plane can be expressed by the vertical plane equation Ax + Cz + D = 0.
It can be understood that, because the guideboard space plane is determined jointly from all the feature points in the first guideboard feature point set, the influence of a calculation error of an individual first guideboard feature point on the calculation accuracy of the guideboard space plane is avoided to a great extent.
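A minimal sketch of the vertical-plane fit of steps 20) and 21) is given below. It assumes the camera y-axis is the vertical direction and uses a total-least-squares fit in the x-z plane; the exact error equation used in practice may differ.

```python
import numpy as np

def fit_vertical_plane(points_cam):
    """Fit a vertical plane A*x + C*z + D = 0 (B = 0) to 3D points in camera
    coordinates, where the y-axis is assumed to be the vertical direction.
    Returns (A, C, D) normalized so that A**2 + C**2 == 1."""
    pts = np.asarray(points_cam, dtype=float)       # shape (N, 3): columns x, y, z
    xz = pts[:, [0, 2]]                             # ignore y: the plane is vertical
    centroid = xz.mean(axis=0)
    # Total least squares: the plane normal (A, C) is the direction of least
    # variance of the centred (x, z) coordinates.
    _, _, vt = np.linalg.svd(xz - centroid)
    A, C = vt[-1]
    D = -(A * centroid[0] + C * centroid[1])
    return A, C, D

# Example: three reconstructed guideboard feature points (camera frame, metres)
pts = [(2.0, 1.5, 10.1), (2.5, 1.4, 10.0), (2.2, 0.9, 10.05)]
A, C, D = fit_vertical_plane(pts)
residuals = [A * x + C * z + D for x, _, z in pts]
print(A, C, D, residuals)
```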
Step S106, calculating the spatial coordinates of the second guideboard feature point set relative to the camera according to the guideboard space plane and the pixel coordinates of the second guideboard feature point set in one of the images.
The second guideboard feature point set may include at least two feature points at preset positions on the guideboard.
In an optional embodiment, step S106 of calculating the spatial coordinates of the second guideboard feature point set relative to the camera according to the guideboard space plane and the pixel coordinates of the second guideboard feature point set in one of the images, where the second guideboard feature point set includes at least two feature points at preset positions on the guideboard, may include the following steps:
22) Construct a feature point spatial coordinate solving equation set using the guideboard space plane and a preset calculation formula;
23) Substitute the pixel coordinates of the second guideboard feature point set in one of the images into the equation set in turn to obtain the spatial coordinates of the second guideboard feature point set relative to the camera.
Specifically, the vertical plane equation of the guideboard space plane, Ax + Cz + D = 0, is combined with the preset calculation formula (the pinhole projection model) Z_c * [u, v, 1]^T = K * P to construct the feature point spatial coordinate solving equation set,
where Z_c is the unknown quantity, u and v respectively denote the abscissa and ordinate of the pixel coordinates of the feature point, K is the camera intrinsic (in-camera) parameter matrix, and P denotes the spatial coordinates of the feature point relative to the camera.
The pixel coordinates of the second guideboard feature point set in one of the images are substituted into the equation set in turn to obtain the spatial coordinates of the second guideboard feature point set relative to the camera. For example, if a second guideboard feature point P(x_p, y_p, z_p) has pixel coordinates P(u_A, v_A) in image A and P(u_B, v_B) in image B, then substituting P(u_A, v_A) or P(u_B, v_B) into the equation set gives the specific values of the spatial coordinates of P(x_p, y_p, z_p). Preferably, the pixel coordinates of the second guideboard feature point P in the image acquired closest to the current time are used.
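The equation set above can be solved in closed form by intersecting the viewing ray of a pixel with the guideboard space plane; the sketch below illustrates this, with the intrinsic matrix K and the plane coefficients used as placeholder values.

```python
import numpy as np

def backproject_to_plane(u, v, K, plane):
    """Intersect the viewing ray of pixel (u, v) with the vertical guideboard
    plane A*x + C*z + D = 0 and return the point's camera-frame coordinates.
    Derivation: P = Z_c * inv(K) @ [u, v, 1]; substituting into the plane
    equation gives Z_c = -D / (A*m_x + C*m_z)."""
    A, C, D = plane
    m = np.linalg.inv(K) @ np.array([u, v, 1.0])    # ray direction, with m[2] == 1
    Zc = -D / (A * m[0] + C * m[2])
    return Zc * m                                   # (X_c, Y_c, Z_c)

# Hypothetical intrinsics and plane coefficients (A, C, D) from step S105
K = np.array([[1000.0, 0.0, 960.0],
              [0.0, 1000.0, 540.0],
              [0.0, 0.0, 1.0]])
corner_cam = backproject_to_plane(1200.0, 380.0, K, plane=(0.6, 0.8, -9.0))
print(corner_cam)
```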
In the embodiment of the application, the preset positions include one or a combination of several of the corner points, the center point, line segment intersection points, points on the edge lines, and the vertices of the fonts on the guideboard. That is, the second guideboard feature points are selected from the corner points, the center point, line segment intersections, points on the edge lines, and the vertices of the fonts. These second feature points are therefore more representative and easier to identify and acquire, which ensures the accuracy of the calculated spatial coordinates of the second guideboard feature point set relative to the camera and helps to obtain high-accuracy guideboard geographic coordinates. For example, when the guideboard is triangular, the second guideboard feature point set may include the three corner points of the guideboard; when the guideboard is square, it may include the four corner points of the guideboard; and when the guideboard is circular, it may include the intersection points of two diameters (e.g., a vertical diameter and a horizontal diameter) with the circumference, the center of the circle, and so on.
Step S107, generating the geographic coordinates of the guideboard by using the spatial coordinates of the second guideboard feature point set relative to the camera and the geographic position information of the camera when the two images were shot.
In the embodiment of the application, when the geographic coordinates of the guideboard are determined and the geographic position of the vehicle is also known, the distance between the vehicle and the guideboard can be obtained, so that data support is provided for vehicle navigation, and accurate driving guidance is provided for the vehicle.
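The patent does not spell out the conversion from camera-relative coordinates to geographic coordinates, so the sketch below is only one plausible illustration: a local east-north offset computed from the vehicle heading and added to the camera's latitude and longitude, ignoring earth curvature over short distances.

```python
import math

def camera_point_to_geographic(point_cam, cam_lat, cam_lon, heading_deg):
    """Rough sketch (not taken from the patent text): convert a camera-frame
    point (X right, Y down, Z forward, metres) into latitude/longitude,
    assuming the camera optical axis is level and points along the vehicle
    heading (degrees clockwise from north)."""
    x, _, z = point_cam
    h = math.radians(heading_deg)
    # East/north offset of the point relative to the camera
    east = z * math.sin(h) + x * math.cos(h)
    north = z * math.cos(h) - x * math.sin(h)
    dlat = north / 111320.0                                    # metres per degree of latitude
    dlon = east / (111320.0 * math.cos(math.radians(cam_lat)))
    return cam_lat + dlat, cam_lon + dlon

print(camera_point_to_geographic((2.1, -1.3, 18.7), 39.9042, 116.4074, heading_deg=75.0))
```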
According to the method provided by the embodiment of the application, the spatial coordinates of the first guideboard feature point set relative to the camera are used to determine the guideboard space plane in which the first guideboard feature point set lies. Because the guideboard space plane is determined jointly from all the feature points in the first guideboard feature point set, the influence of a calculation error of an individual first guideboard feature point on the accuracy of the guideboard space plane is largely avoided. Setting the guideboard space plane to be perpendicular to the horizontal reference plane reflects the fact that a guideboard in the real world is perpendicular to the horizontal reference plane, so the guideboard space plane is corrected, the influence of calculation errors of individual first guideboard feature points is removed, and the calculation accuracy of the guideboard space plane is ensured. The spatial coordinates of the second guideboard feature point set relative to the camera are then calculated from the guideboard space plane and the pixel coordinates of the second guideboard feature point set in one of the images, where the second guideboard feature point set comprises at least two feature points at preset positions on the guideboard. Because the second guideboard feature points are taken at preset positions on the guideboard, they are more representative, and the accuracy of their identification and acquisition is guaranteed. Performing the calculation with an accurate and reliable guideboard space plane and a representative, high-accuracy second guideboard feature point set ensures the accuracy, reliability and stability of the spatial coordinates of the second guideboard feature point set relative to the camera, which in turn helps to obtain high-accuracy guideboard geographic coordinates and to produce a high-accuracy guideboard.
Corresponding to the above embodiments of the method, the application also provides an apparatus for generating traffic signs for automatic driving, an electronic device, and corresponding embodiments.
Fig. 2 is a schematic structural view of a traffic sign generating apparatus for automatic driving according to an embodiment of the present application.
Referring to fig. 2, an embodiment of the present application provides a traffic sign generating apparatus for automatic driving, including:
an acquisition unit 201 for acquiring two images including the same guideboard, and acquiring geographical position information of a camera when the two images are respectively photographed;
a first calculation unit 202 for calculating a rotation matrix and a translation matrix between two images;
the identifying unit 203 is configured to identify the two images by using a guideboard, and obtain pixel coordinates of a first road sign feature point set in the two images, where the first road sign feature point set includes at least three feature points in the guideboard;
a second calculating unit 204, configured to calculate spatial coordinates of the first road sign feature point set relative to the camera according to the rotation matrix and the translation matrix between the two images and pixel coordinates of the first road sign feature point set in the two images respectively;
a determining unit 205, configured to determine a guideboard space plane where the first road sign feature point set is located by using a space coordinate of the first road sign feature point set relative to the camera and a horizontal reference plane; wherein the guideboard space plane is perpendicular to the horizontal reference plane;
A third calculating unit 206, configured to calculate, according to the spatial plane of the guideboard and the pixel coordinates of the second guideboard feature point set in one of the images, the spatial coordinates of the second guideboard feature point set relative to the camera, where the second guideboard feature point set includes at least two feature points at preset positions on the guideboard;
a generating unit 207, configured to generate geographic coordinates of the guidepost by using the spatial coordinates of the second guidepost feature point set relative to the camera and the geographic position information of the camera when capturing the two images.
Optionally, the manner in which the first computing unit 202 computes the rotation matrix and the translation matrix between the two images may include:
acquiring characteristic points of each of the two images; matching the characteristic points of the two images to obtain a target characteristic point set successfully matched in the two images; and calculating a rotation matrix and a translation matrix between the two images by using the target feature point set.
Optionally, the manner in which the identifying unit 203 obtains the pixel coordinates of the first road sign feature point set in the two images respectively may include:
acquiring characteristic points in the identified guideboard area in each of the two images; matching the characteristic points in the guideboard areas in the two images to obtain a successfully matched first guideboard characteristic point set in the two images; and acquiring pixel coordinates of the first road sign feature point set in the two images respectively.
Optionally, the determining unit 205 may determine the guideboard space plane in which the first guideboard feature point set lies, using the spatial coordinates of the first guideboard feature point set relative to the camera and a horizontal reference plane, in the following way:
constructing a vertical-plane error equation with a least-squares optimization algorithm according to the spatial coordinates of the first guideboard feature point set relative to the camera and the horizontal reference plane; and obtaining the guideboard space plane in which the first guideboard feature point set lies from the vertical-plane error equation.
Optionally, the third calculating unit 206 may calculate, according to the spatial plane of the guideboard and the pixel coordinates of the second guideboard feature point set in one of the images, the spatial coordinates of the second guideboard feature point set with respect to the camera by:
constructing a characteristic point space coordinate solving equation set by utilizing a guideboard space plane and a preset calculation formula; and substituting pixel coordinates of the second guideboard feature point set in one of the images into an equation set in turn to obtain space coordinates of the second guideboard feature point set relative to the camera.
Optionally, the preset position on the guideboard may include: one or a combination of a plurality of corner points, center points, line segment intersection points, points on edge lines and vertexes of fonts of the guideboard.
By implementing the device shown in fig. 2, the geographical coordinates of the guideboard with high accuracy can be obtained.
The specific manner in which the respective modules perform the operations in the apparatus of the above embodiments has been described in detail in the embodiments related to the method, and will not be described in detail herein.
Fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Referring to fig. 3, an embodiment of the application further provides an electronic device 300. The electronic device 300 may be configured to perform the method for generating traffic signs for autopilot provided in the above embodiments. The electronic device 300 may be any device having a computing unit, such as a computer, a server, a handheld device (e.g., a smart phone, a tablet computer, etc.), a vehicle recorder, etc., which is not limited by the embodiments of the present application.
Referring to fig. 3, an electronic device 300 includes a memory 310 and a processor 320.
The processor 320 may be a central processing unit (Central Processing Unit, CPU), but may also be other general purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), field programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
Memory 310 may include various types of storage units, such as system memory, read-only memory (ROM), and persistent storage. The ROM may store static data or instructions required by the processor 320 or other modules of the computer. The persistent storage may be a readable and writable storage device, i.e., a non-volatile device that does not lose stored instructions and data even after the computer is powered down. In some embodiments, a mass storage device (e.g., a magnetic or optical disk, or flash memory) is employed as the persistent storage. In other embodiments, the persistent storage may be a removable storage device (e.g., a diskette or optical drive). The system memory may be a read-write memory device or a volatile read-write memory device, such as dynamic random access memory. The system memory may store instructions and data needed by some or all of the processors at runtime. Furthermore, memory 310 may include any combination of computer-readable storage media, including various types of semiconductor memory chips (DRAM, SRAM, SDRAM, flash memory, programmable read-only memory); magnetic disks and/or optical disks may also be employed. In some implementations, memory 310 may include a readable and/or writable removable storage device such as a compact disc (CD), a read-only digital versatile disc (e.g., DVD-ROM or dual-layer DVD-ROM), a read-only Blu-ray disc, an ultra-density optical disc, a flash memory card (e.g., SD card, mini SD card, micro-SD card, etc.), a magnetic floppy disk, and so forth. Computer-readable storage media do not contain carrier waves or transitory electronic signals transmitted wirelessly or over wires.
The memory 310 has stored thereon executable code that, when processed by the processor 320, causes the processor 320 to perform some or all of the steps of the methods described above.
Furthermore, the method according to the application may also be implemented as a computer program or computer program product comprising computer program code instructions for performing part or all of the steps of the above-described method of the application.
Alternatively, the application may also be embodied as a non-transitory machine-readable storage medium (or computer-readable storage medium, or machine-readable storage medium) having stored thereon executable code (or a computer program, or computer instruction code) that, when executed by a processor of an electronic device (or electronic device, server, etc.), causes the processor to perform some or all of the steps of the above-described method according to the application.
The foregoing description of embodiments of the application has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the various embodiments described. The terminology used herein was chosen in order to best explain the principles of the embodiments, the practical application, or the improvement of technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (10)

1. A method of generating a traffic sign for automatic driving, comprising:
acquiring two images containing the same guideboard, and acquiring geographic position information of a camera when the two images are respectively shot;
calculating a rotation matrix and a translation matrix between the two images;
performing guideboard identification on the two images, and acquiring pixel coordinates of a first guideboard feature point set in the two images respectively, wherein the first guideboard feature point set comprises at least three feature points in the guideboard;
calculating the space coordinates of the first road sign feature point set relative to the camera according to the rotation matrix and the translation matrix between the two images and the pixel coordinates of the first road sign feature point set in the two images respectively;
determining a guideboard space plane where the first road board feature point set is located by using the space coordinates of the first road board feature point set relative to the camera and the horizontal reference plane; wherein the guideboard space plane is perpendicular to the horizontal reference plane;
calculating the space coordinates of a second guideboard feature point set relative to the camera according to the guideboard space plane and the pixel coordinates of the second guideboard feature point set in one of the images, wherein the second guideboard feature point set comprises at least two feature points at preset positions on the guideboard;
And generating the geographic coordinates of the guideboard by using the spatial coordinates of the second guideboard feature point set relative to the camera and the geographic position information of the camera when the two images are shot.
2. The method of claim 1, wherein the calculating a rotation matrix and a translation matrix between the two images comprises:
acquiring characteristic points of each image in the two images;
matching the characteristic points of the two images to obtain a target characteristic point set successfully matched in the two images;
and calculating a rotation matrix and a translation matrix between the two images by using the target feature point set.
3. The method of claim 1, wherein the acquiring pixel coordinates of the first guideboard feature point set in the two images respectively comprises:
acquiring feature points in the identified guideboard area in each of the two images;
matching the feature points in the guideboard areas of the two images to obtain a successfully matched first guideboard feature point set in the two images;
and acquiring the pixel coordinates of the first guideboard feature point set in the two images respectively.
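As an illustrative sketch of claim 3 (not part of the claims): feature detection can be confined to the recognised guideboard area by masking the detector. The bounding box bbox is assumed to come from whatever sign detector is used; the names are hypothetical.

```python
import cv2
import numpy as np

def guideboard_keypoints(img, bbox):
    """Detect feature points only inside the identified guideboard area
    (illustrative sketch; bbox = (x, y, w, h) from a sign detector is assumed)."""
    x, y, w, h = bbox
    mask = np.zeros(img.shape[:2], dtype=np.uint8)
    mask[y:y + h, x:x + w] = 255             # limit detection to the guideboard area
    orb = cv2.ORB_create(1000)
    return orb.detectAndCompute(img, mask)    # keypoints and descriptors in the region
```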
4. The method according to any one of claims 1 to 3, wherein the determining a guideboard space plane where the first guideboard feature point set is located by using the space coordinates of the first guideboard feature point set relative to the camera and a horizontal reference plane comprises:
constructing a vertical plane error equation by utilizing a least squares optimization algorithm according to the space coordinates of the first guideboard feature point set relative to the camera and the horizontal reference plane;
and obtaining a guideboard space plane where the first guideboard feature point set is located according to the vertical plane error equation.
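A minimal sketch of one way to fit the vertical plane of claim 4, assuming the camera-relative space coordinates have been expressed in a frame whose z-axis is normal to the horizontal reference plane. A plane perpendicular to that reference plane has no z term, a·x + b·y + d = 0, so the least-squares fit reduces to a total-least-squares line fit in the x-y projection. This parameterisation and the SVD solution are the editor's illustration, not a formula taken from the patent.

```python
import numpy as np

def fit_vertical_plane(points_xyz):
    """Least-squares fit of a plane perpendicular to the horizontal reference plane
    (z-axis assumed vertical) through a set of 3D feature points.

    Returns (a, b, d) of the plane a*x + b*y + d = 0 with a**2 + b**2 == 1;
    z is unconstrained, so the plane is vertical by construction."""
    xy = np.asarray(points_xyz, dtype=float)[:, :2]
    centroid = xy.mean(axis=0)
    centred = xy - centroid

    # The plane normal (projected onto x-y) is the direction of least variance,
    # i.e. the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    a, b = vt[-1]
    d = -(a * centroid[0] + b * centroid[1])
    return a, b, d
```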
5. The method according to any one of claims 1 to 3, wherein the calculating the space coordinates of the second guideboard feature point set relative to the camera according to the guideboard space plane and the pixel coordinates of the second guideboard feature point set in one of the images comprises:
constructing an equation set for solving feature point space coordinates by utilizing the guideboard space plane and a preset calculation formula;
and substituting pixel coordinates of the second guideboard feature point set in one of the images into the equation set in turn to obtain space coordinates of the second guideboard feature point set relative to the camera.
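The equation set of claim 5 can be read as intersecting the viewing ray through a pixel with the fitted guideboard space plane. The sketch below assumes a pinhole camera with intrinsic matrix K and the vertical plane (a, b, d) from the previous sketch, expressed in the same camera frame; it illustrates the idea rather than reproducing the patent's exact formula.

```python
import numpy as np

def pixel_to_plane_point(K, plane, pixel):
    """Back-project one pixel of the second guideboard feature point set onto the
    guideboard space plane (illustrative sketch only).

    plane : (a, b, d) of the vertical plane a*x + b*y + d = 0 in camera coordinates
    pixel : (u, v) pixel coordinates of the feature point in one of the images"""
    a, b, d = plane
    u, v = pixel

    # Viewing ray through the pixel: X = s * K^-1 [u, v, 1]^T for some scale s > 0.
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])

    # Substitute X = s * ray into the plane equation and solve for s.
    s = -d / (a * ray[0] + b * ray[1])
    return s * ray  # space coordinates of the feature point relative to the camera
```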
6. The method according to any one of claims 1 to 3, wherein the preset positions comprise:
one of, or a combination of more than one of, corner points, center points, line segment intersection points, points on edge lines, and vertices of characters of the guideboard.
7. A traffic sign generating apparatus for automatic driving, comprising:
the acquisition unit is used for acquiring two images containing the same guideboard and acquiring geographic position information of a camera when the two images are respectively shot;
the first calculation unit is used for calculating a rotation matrix and a translation matrix between the two images;
the identification unit is used for carrying out guideboard identification on the two images, and acquiring pixel coordinates of a first guideboard feature point set in the two images respectively, wherein the first guideboard feature point set comprises at least three feature points in the guideboard;
the second calculation unit is used for calculating the space coordinates of the first guideboard feature point set relative to the camera according to the rotation matrix and the translation matrix between the two images and the pixel coordinates of the first guideboard feature point set in the two images respectively;
the determining unit is used for determining a guideboard space plane where the first guideboard feature point set is located by utilizing the space coordinates of the first guideboard feature point set relative to the camera and a horizontal reference plane, wherein the guideboard space plane is perpendicular to the horizontal reference plane;
the third calculation unit is used for calculating, according to the guideboard space plane and the pixel coordinates of a second guideboard feature point set in one of the images, the space coordinates of the second guideboard feature point set relative to the camera, wherein the second guideboard feature point set comprises at least two feature points at preset positions on the guideboard;
and the generation unit is used for generating the geographic coordinates of the guideboard by utilizing the space coordinates of the second guideboard feature point set relative to the camera and the geographic position information of the camera when the two images are shot.
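For the generation step (the last step of claim 1 and the generation unit above), one hedged sketch: if the camera-relative space coordinates of the second guideboard feature point set are first rotated into a local east-north-up frame (for example, using the camera heading), the guideboard's latitude and longitude can be obtained from the camera's GPS position with a small-offset spherical approximation. The function below is the editor's illustration of that final conversion only; nothing in it is taken verbatim from the patent.

```python
import math

def offset_to_latlon(cam_lat, cam_lon, east_m, north_m):
    """Shift the camera's geographic position by an offset given in metres east
    and north (illustrative sketch; small-offset spherical-earth approximation)."""
    earth_radius = 6378137.0  # WGS-84 equatorial radius, metres
    dlat = north_m / earth_radius
    dlon = east_m / (earth_radius * math.cos(math.radians(cam_lat)))
    return cam_lat + math.degrees(dlat), cam_lon + math.degrees(dlon)
```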
8. The apparatus of claim 7, wherein the preset positions comprise:
one of, or a combination of more than one of, corner points, center points, line segment intersection points, points on edge lines, and vertices of characters of the guideboard.
9. An electronic device, comprising:
a processor; and
a memory having executable code stored thereon which, when executed by the processor, causes the processor to perform the method of any one of claims 1 to 6.
10. A non-transitory machine-readable storage medium having executable code stored thereon which, when executed by a processor of an electronic device, causes the processor to perform the method of any one of claims 1 to 6.
CN202110541380.XA 2021-05-18 2021-05-18 Method and related device for generating traffic sign for automatic driving Active CN113139031B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110541380.XA CN113139031B (en) 2021-05-18 2021-05-18 Method and related device for generating traffic sign for automatic driving

Publications (2)

Publication Number Publication Date
CN113139031A (en) 2021-07-20
CN113139031B (en) 2023-11-03

Family

ID=76817561

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110541380.XA Active CN113139031B (en) 2021-05-18 2021-05-18 Method and related device for generating traffic sign for automatic driving

Country Status (1)

Country Link
CN (1) CN113139031B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113408509B (en) * 2021-08-20 2021-11-09 智道网联科技(北京)有限公司 Signboard recognition method and device for automatic driving
CN114119963A (en) * 2021-11-19 2022-03-01 智道网联科技(北京)有限公司 Method and device for generating high-precision map guideboard
CN114419594A (en) * 2022-01-17 2022-04-29 智道网联科技(北京)有限公司 Method and device for identifying intelligent traffic guideboard

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018196391A1 (en) * 2017-04-28 2018-11-01 华为技术有限公司 Method and device for calibrating external parameters of vehicle-mounted camera
WO2021026705A1 (en) * 2019-08-09 2021-02-18 华为技术有限公司 Matching relationship determination method, re-projection error calculation method and related apparatus
CN111932627A (en) * 2020-09-15 2020-11-13 蘑菇车联信息科技有限公司 Marker drawing method and system
CN111930877A (en) * 2020-09-18 2020-11-13 蘑菇车联信息科技有限公司 Map guideboard generation method and electronic equipment
CN112598743A (en) * 2021-02-08 2021-04-02 智道网联科技(北京)有限公司 Pose estimation method of monocular visual image and related device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Artificial landmark-assisted pose estimation method for autonomous flight of underground unmanned aerial vehicles; Shan Chunyan; Yang Wei; Geng Cuibo; Journal of China Coal Society (S1); full text *

Also Published As

Publication number Publication date
CN113139031A (en) 2021-07-20

Similar Documents

Publication Publication Date Title
CN113139031B (en) Method and related device for generating traffic sign for automatic driving
US10240934B2 (en) Method and system for determining a position relative to a digital map
JP4232167B1 (en) Object identification device, object identification method, and object identification program
CN109949365B (en) Vehicle designated position parking method and system based on road surface feature points
CN110617821B (en) Positioning method, positioning device and storage medium
CN111261016B (en) Road map construction method and device and electronic equipment
WO2020043081A1 (en) Positioning technique
CN101563581A (en) Method and apparatus for identification and position determination of planar objects in images
CN101842808A (en) Method of and apparatus for producing lane information
JP4978615B2 (en) Target identification device
CN110969592B (en) Image fusion method, automatic driving control method, device and equipment
CN111930877B (en) Map guideboard generation method and electronic equipment
CN111340877A (en) Vehicle positioning method and device
CN114088114B (en) Vehicle pose calibration method and device and electronic equipment
CN111932627A (en) Marker drawing method and system
CN112595335B (en) Intelligent traffic driving stop line generation method and related device
CN113838129B (en) Method, device and system for obtaining pose information
CN115205382A (en) Target positioning method and device
CN113284194A (en) Calibration method, device and equipment for multiple RS (remote sensing) equipment
CN114863347A (en) Map checking method, device and equipment
CN113009533A (en) Vehicle positioning method and device based on visual SLAM and cloud server
CN112991434B (en) Method for generating automatic driving traffic identification information and related device
CN114299469A (en) Traffic guideboard generation method, device and equipment
CN114863383A (en) Method for generating intelligent traffic circular guideboard and related device
CN117853644A (en) Map model rendering method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant