WO2021062581A1 - Road marking recognition method and apparatus - Google Patents


Info

Publication number
WO2021062581A1
Authority
WO
WIPO (PCT)
Prior art keywords
point cloud
dimensional point
road marking
cloud data
recognition result
Prior art date
Application number
PCT/CN2019/109290
Other languages
French (fr)
Chinese (zh)
Inventor
Li Ran
Li Xinchao
Wang Tao
Original Assignee
SZ DJI Technology Co., Ltd. (Shenzhen)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SZ DJI Technology Co., Ltd.
Priority to CN201980033738.9A (published as CN112204568A)
Priority to PCT/CN2019/109290 (published as WO2021062581A1)
Publication of WO2021062581A1

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/582Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of traffic signs
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06Systems determining position data of a target
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/50Systems of measurement based on relative movement of target
    • G01S17/58Velocity or trajectory determination systems; Sense-of-movement determination systems
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/89Lidar systems specially adapted for specific applications for mapping or imaging
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/4802Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/02Preprocessing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/12Classification; Matching
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10032Satellite or aerial image; Remote sensing
    • G06T2207/10044Radar image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle
    • G06T2207/30256Lane; Road marking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/09Recognition of logos

Definitions

  • This application relates to the field of automobile technology, and in particular to a road marking recognition method and device.
  • In the prior art, the recognition result of a road marking is obtained by processing images collected by an image collection device, which can be installed on a vehicle. Because image-based road marking recognition depends heavily on imaging quality, its accuracy varies greatly: when the imaging quality is good, the image contains more useful information and a more accurate recognition result can be obtained; when the imaging quality is poor, the image contains less useful information and the recognition accuracy is low. When the image quality is very low, the image can no longer be used for road marking recognition.
  • The embodiments of the present application provide a road marking recognition method and device, which are used to solve the problem that the prior-art method of image-based road marking recognition has relatively large usage limitations.
  • an embodiment of the present application provides a road marking recognition method, including:
  • obtaining three-dimensional point cloud data detected by a lidar, where the three-dimensional point cloud data includes reflection data of a road marking area; compressing the three-dimensional point cloud data into a two-dimensional point cloud feature map; and processing the two-dimensional point cloud feature map to obtain a road marking recognition result.
  • An embodiment of the present application provides a road marking recognition device, including a processor and a memory; the memory is used to store program code; the processor calls the program code, and when the program code is executed, the processor performs the following actions:
  • obtaining three-dimensional point cloud data detected by a lidar, where the three-dimensional point cloud data includes reflection data of a road marking area; compressing the three-dimensional point cloud data into a two-dimensional point cloud feature map; and processing the two-dimensional point cloud feature map to obtain a road marking recognition result.
  • An embodiment of the present application provides a computer-readable storage medium, the computer-readable storage medium stores a computer program, the computer program includes at least one piece of code, and the at least one piece of code can be executed by a computer to control the computer to execute the method described in any one of the above-mentioned first aspects.
  • An embodiment of the present application provides a computer program which, when executed by a computer, implements the method described in any one of the above-mentioned first aspects.
  • the embodiment of the application provides a road marking recognition method and device.
  • The three-dimensional point cloud data includes data of the road marking area; the three-dimensional point cloud data is compressed into a two-dimensional point cloud feature map, and the two-dimensional point cloud feature map is processed to obtain the road marking recognition result.
  • Because the reliability of lidar is high and the influence of environmental factors on it is very small, high-accuracy road marking recognition results can be obtained from the three-dimensional point cloud data obtained by lidar even in harsh imaging environments. Road marking recognition is therefore no longer limited by the imaging environment, which expands the usage scenarios of road marking recognition and solves the problem in traditional technology that image-based road marking recognition is limited to certain usage scenarios and can hardly meet users' needs.
  • FIGS. 1A-1B are schematic diagrams of application scenarios of the road marking recognition method provided by the embodiments of this application.
  • Figure 1C is a schematic diagram of the structure of the lidar.
  • Figure 2 is a schematic diagram of a coaxial optical path used by the lidar.
  • Fig. 3 is a schematic diagram of a scanning pattern of a lidar.
  • FIG. 4 is a schematic flowchart of a road marking recognition method provided by an embodiment of this application.
  • FIG. 5 is a schematic flowchart of a road marking recognition method provided by another embodiment of this application.
  • FIG. 6 is a schematic flowchart of a road marking recognition method provided by another embodiment of this application.
  • FIG. 7 is a schematic diagram of a preset neural network model provided by an embodiment of the application.
  • FIG. 8 is a schematic diagram 1 of the relationship between the target direction and the road distance provided by an embodiment of this application.
  • FIGS. 9A-9B are schematic diagrams showing how the shape of a road marking is compressed, provided by an embodiment of this application.
  • FIG. 10 is a second schematic diagram of the relationship between the target direction and the road distance provided by an embodiment of this application.
  • FIG. 11 is a schematic diagram of a target direction of a curved road surface provided by an embodiment of the application.
  • FIG. 12 is a schematic structural diagram of a road marking recognition device provided by an embodiment of the application.
  • the road marking recognition method provided in the embodiments of the present application can be applied to any scene where road marking recognition is required, and the road marking recognition method may be specifically executed by a road marking recognition device.
  • the road marking recognition device may be a device including lidar.
  • a schematic diagram of the application scenario of the road marking recognition method provided in the embodiment of the present application may be as shown in FIG. 1A.
  • The lidar of the road marking recognition device can obtain the three-dimensional point cloud data through detection, and the processor of the road marking recognition device can process the three-dimensional point cloud data obtained by the lidar using the road marking recognition method provided in the embodiments of the present application.
  • FIG. 1A is only a schematic diagram, and does not limit the structure of the road marking recognition device.
  • the road marking recognition device may also be a device that does not include lidar.
  • the application scenario schematic diagram of the road marking recognition method provided in the embodiment of the present application may be as shown in FIG. 1B.
  • The communication interface of the road marking recognition device can receive the three-dimensional point cloud data, obtained by lidar detection, sent by other devices or equipment, and the processor of the road marking recognition device can process the received three-dimensional point cloud data using the road marking recognition method provided in the embodiments of this application.
  • FIG. 1B is only a schematic diagram, and does not limit the structure of the road marking recognition device and the connection mode between the road marking recognition device and other devices or equipment.
  • The communication interface in the road marking recognition device can be replaced with a transceiver.
  • lidar is used to sense external environmental information, such as distance information, azimuth information, reflection intensity information, and speed information of environmental targets.
  • The lidar can detect the distance from the detection object to the lidar by measuring the time of light propagation between the lidar and the detection object, that is, the time of flight (TOF).
  • The lidar can also use other technologies to detect the distance from the detected object to the lidar, such as a ranging method based on phase shift measurement or a ranging method based on frequency shift measurement, which is not restricted here.
  • the lidar 100 may include a transmitting circuit 110, a receiving circuit 120, a sampling circuit 130, and an arithmetic circuit 140.
  • the transmitting circuit 110 may emit a light pulse sequence (for example, a laser pulse sequence).
  • the receiving circuit 120 may receive the light pulse sequence reflected by the object to be detected, and perform photoelectric conversion on the light pulse sequence to obtain an electrical signal. After processing the electrical signal, the electrical signal may be output to the sampling circuit 130.
  • the sampling circuit 130 may sample the electrical signal to obtain the sampling result.
  • the arithmetic circuit 140 may determine the distance between the lidar 100 and the object to be detected based on the sampling result of the sampling circuit 130.
  • the lidar 100 may further include a control circuit 150, which can control other circuits, for example, can control the working time of each circuit and/or set parameters for each circuit.
  • Although the lidar shown in FIG. 1C includes one transmitting circuit, one receiving circuit, one sampling circuit, and one arithmetic circuit for emitting one beam for detection, the number of any one of the transmitting circuit, the receiving circuit, the sampling circuit, and the arithmetic circuit may also be at least two, which are used to emit at least two light beams in the same direction or in different directions; the at least two light beams may be emitted at the same time or at different times.
  • the light-emitting chips in the at least two transmitting circuits are packaged in the same module.
  • each emitting circuit includes a laser emitting chip, and the dies in the laser emitting chips in the at least two emitting circuits are packaged together and housed in the same packaging space.
  • the laser radar 100 may further include a scanning module 160 for changing the propagation direction of at least one laser pulse sequence emitted by the transmitting circuit.
  • the module including the transmitting circuit 110, the receiving circuit 120, the sampling circuit 130, and the arithmetic circuit 140, or the module including the transmitting circuit 110, the receiving circuit 120, the sampling circuit 130, the arithmetic circuit 140, and the control circuit 150 may be referred to as the measurement circuit.
  • The measurement circuit can be independent of other modules, for example, the scanning module 160.
  • the coaxial optical path can be used in the lidar, that is, the beam emitted by the lidar and the reflected beam share at least part of the optical path in the lidar.
  • The laser pulse sequence reflected by the detection object passes through the scanning module and then enters the receiving circuit.
  • the laser radar can also use an off-axis optical path, that is, the beam emitted by the laser radar and the reflected beam are transmitted along different optical paths in the laser radar.
  • Fig. 2 shows a schematic diagram of an embodiment in which the laser radar of the present application adopts a coaxial optical path.
  • The lidar 200 includes a ranging module 210.
  • The ranging module 210 includes a transmitter 203 (which may include the above-mentioned transmitting circuit), a collimating element 204, a detector 205 (which may include the above-mentioned receiving circuit, sampling circuit, and arithmetic circuit), and an optical path changing element 206.
  • the ranging module 210 is used to emit a light beam, receive the return light, and convert the return light into an electrical signal.
  • the transmitter 203 can be used to emit a light pulse sequence.
  • the transmitter 203 may emit a sequence of laser pulses.
  • the laser beam emitted by the transmitter 203 is a narrow-bandwidth beam with a wavelength outside the visible light range.
  • The collimating element 204 is arranged on the exit light path of the transmitter 203 and is used to collimate the light beam emitted from the transmitter 203 into parallel light output to the scanning module.
  • The collimating element is also used to condense at least a part of the return light reflected by the detection object.
  • the collimating element 204 may be a collimating lens or other elements capable of collimating a light beam.
  • The light path changing element 206 is used to merge the transmitting light path and the receiving light path in the lidar before the collimating element 204, so that the transmitting light path and the receiving light path can share the same collimating element, making the light path more compact.
  • The transmitter 203 and the detector 205 may respectively use their own collimating elements, and the optical path changing element 206 is arranged on the optical path behind the collimating element.
  • The optical path changing element can use a small-area mirror to merge the transmitting light path and the receiving light path.
  • The light path changing element may also use a reflector with a through hole, where the through hole is used to transmit the emitted light of the transmitter 203 and the reflector is used to reflect the return light to the detector 205. In this way, the blocking of the return light by the support of the small mirror in the case of using a small mirror can be reduced.
  • the optical path changing element deviates from the optical axis of the collimating element 204.
  • the optical path changing element may also be located on the optical axis of the collimating element 204.
  • the lidar 200 also includes a scanning module 202.
  • The scanning module 202 is placed on the exit light path of the ranging module 210, and the scanning module 202 is used to change the transmission direction of the collimated beam 219 emitted by the collimating element 204 and project it to the external environment, and to project the return light to the collimating element 204.
  • The return light is collected on the detector 205 via the collimating element 204.
  • the scanning module 202 may include at least one optical element for changing the propagation path of the light beam, wherein the optical element may change the propagation path of the light beam by reflecting, refracting, or diffracting the light beam.
  • the scanning module 202 includes a lens, a mirror, a prism, a galvanometer, a grating, a liquid crystal, an optical phased array (Optical Phased Array), or any combination of the foregoing optical elements.
  • at least part of the optical element is moving, for example, the at least part of the optical element is driven to move by a driving module, and the moving optical element can reflect, refract, or diffract the light beam to different directions at different times.
  • the multiple optical elements of the scanning module 202 can rotate or vibrate around a common axis 209, and each rotating or vibrating optical element is used to continuously change the propagation direction of the incident light beam.
  • the multiple optical elements of the scanning module 202 may rotate at different speeds or vibrate at different speeds.
  • at least part of the optical elements of the scanning module 202 may rotate at substantially the same rotation speed.
  • the multiple optical elements of the scanning module may also rotate around different axes.
  • the multiple optical elements of the scanning module may also rotate in the same direction or in different directions; or vibrate in the same direction, or vibrate in different directions, which is not limited herein.
  • the scanning module 202 includes a first optical element 214 and a driver 216 connected to the first optical element 214.
  • The driver 216 is used to drive the first optical element 214 to rotate around the rotation axis 209, so that the first optical element 214 changes the direction of the collimated beam 219.
  • the first optical element 214 projects the collimated beam 219 to different directions.
  • The angle between the direction of the collimated beam 219 changed by the first optical element and the rotation axis 209 changes with the rotation of the first optical element 214.
  • the first optical element 214 includes a pair of opposing non-parallel surfaces through which the collimated light beam 219 passes.
  • the first optical element 214 includes a prism whose thickness varies along at least one radial direction.
  • The first optical element 214 includes a wedge prism that refracts the collimated beam 219.
  • the scanning module 202 further includes a second optical element 215, the second optical element 215 rotates around the rotation axis 209, and the rotation speed of the second optical element 215 is different from the rotation speed of the first optical element 214.
  • the second optical element 215 is used to change the direction of the light beam projected by the first optical element 214.
  • The second optical element 215 is connected to another driver 217, and the driver 217 drives the second optical element 215 to rotate.
  • The first optical element 214 and the second optical element 215 can be driven by the same or different drivers, so that their rotation speeds and/or rotation directions are different, thereby projecting the collimated light beam 219 to different directions in the outside space.
  • the controller 218 controls the drivers 216 and 217 to drive the first optical element 214 and the second optical element 215, respectively.
  • the rotational speeds of the first optical element 214 and the second optical element 215 can be determined according to the expected scanning area and pattern in actual applications.
  • the drivers 216 and 217 may include motors or other drivers.
  • The second optical element 215 includes a pair of opposite non-parallel surfaces through which the light beam passes. In one embodiment, the second optical element 215 includes a prism whose thickness varies in at least one radial direction. In one embodiment, the second optical element 215 includes a wedge prism.
  • The scanning module 202 further includes a third optical element (not shown) and a driver for driving the third optical element to move.
  • the third optical element includes a pair of opposite non-parallel surfaces, and the light beam passes through the pair of surfaces.
  • the third optical element includes a prism whose thickness varies in at least one radial direction.
  • the third optical element includes a wedge prism. At least two of the first, second, and third optical elements rotate at different rotation speeds and/or rotation directions.
  • FIG. 3 is a schematic diagram of a scanning pattern of the lidar 200. It is understandable that when the speed of the optical element in the scanning module changes, the scanning pattern will also change accordingly.
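The pattern in FIG. 3 arises because each rotating wedge prism deflects the beam by a roughly fixed angle whose azimuth follows the prism's rotation, so two prisms rotating at different speeds sum two rotating deflection vectors. A minimal first-order sketch of this idea follows; the rotation speeds, deflection angles, and small-angle model are illustrative assumptions, not values from this application:

```python
import math

def scan_pattern(n1_hz, n2_hz, delta1_deg, delta2_deg, duration_s, steps):
    """First-order model of a two-prism scan: each wedge deflects the beam
    by a fixed angle whose azimuth rotates with the prism's rotation."""
    points = []
    for i in range(steps):
        t = duration_s * i / steps
        a1 = 2 * math.pi * n1_hz * t   # azimuth of prism 1's deflection
        a2 = 2 * math.pi * n2_hz * t   # azimuth of prism 2's deflection
        x = delta1_deg * math.cos(a1) + delta2_deg * math.cos(a2)
        y = delta1_deg * math.sin(a1) + delta2_deg * math.sin(a2)
        points.append((x, y))          # angular offsets in degrees
    return points

# Unequal (here counter-rotating) speeds trace a rose-like figure
# rather than a circle; changing either speed changes the figure.
pattern = scan_pattern(n1_hz=110.0, n2_hz=-77.0, delta1_deg=9.0,
                       delta2_deg=9.0, duration_s=0.1, steps=2000)
```

This matches the observation above that the scanning pattern changes when the speed of the optical elements changes.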
  • When the light 211 projected by the scanning module 202 hits the detection object 201, a part of the light is reflected by the detection object 201 to the lidar 200 in a direction opposite to the projected light 211.
  • The return light 212 reflected by the detection object 201 is incident on the collimating element 204 after passing through the scanning module 202.
  • the detector 205 and the transmitter 203 are placed on the same side of the collimating element 204, and the detector 205 is used to convert at least part of the return light passing through the collimating element 204 into electrical signals.
  • an anti-reflection film is plated on each optical element.
  • The thickness of the antireflection coating is equal to or close to the wavelength of the light beam emitted by the transmitter 203, which can increase the intensity of the transmitted light beam.
  • A filter layer is plated on the surface of an element located on the beam propagation path in the lidar, or a filter is provided on the beam propagation path, for transmitting at least the wavelength band of the beam emitted by the transmitter and reflecting other bands, so as to reduce the noise caused by ambient light to the receiver.
  • the transmitter 203 may include a laser diode through which nanosecond laser pulses are emitted.
  • the laser pulse receiving time can be determined, for example, the laser pulse receiving time can be determined by detecting the rising edge time and/or the falling edge time of the electrical signal pulse.
  • the lidar 200 can calculate the TOF using the pulse receiving time information and the pulse sending time information, so as to determine the distance between the detection object 201 and the lidar 200.
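The TOF distance calculation above reduces to distance = speed of light × (receive time − send time) / 2, where the division by two accounts for the pulse's round trip. A minimal sketch with illustrative timestamps:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance(t_send_s, t_receive_s):
    """Distance from the lidar to the detection object, computed from the
    pulse sending time and pulse receiving time (both in seconds)."""
    tof = t_receive_s - t_send_s      # round-trip time of flight
    return C * tof / 2.0              # halve for the one-way distance

# A pulse returning 400 ns after emission corresponds to roughly 60 m.
d = tof_distance(0.0, 400e-9)
```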
  • the lidar of the embodiment of the present application can be applied to a mobile platform, and the lidar can be installed on the platform body of the mobile platform.
  • A mobile platform with a lidar can identify road markings.
  • the mobile platform includes at least one of an unmanned aerial vehicle, a car, a remote control car, a robot, and a camera.
  • the lidar is applied to an unmanned aerial vehicle
  • the platform body is the fuselage of the unmanned aerial vehicle.
  • the lidar is applied to a car
  • the platform body is the body of the car.
  • the car can be a self-driving car or a semi-automatic driving car, and there is no restriction here.
  • the lidar is applied to a remote control car
  • the platform body is the body of the remote control car.
  • when the lidar is applied to a robot, the platform body is the robot.
  • lidar is applied to a camera, the platform body is the camera itself.
  • the type of the device including the road marking recognition device may not be limited in the embodiment of the present application, and the device may be, for example, a server, an autonomous vehicle, or a semi-autonomous vehicle.
  • The road marking recognition method provided by the embodiments of this application compresses the three-dimensional point cloud data obtained by lidar detection into a two-dimensional point cloud feature map, and processes the two-dimensional point cloud feature map to obtain the road marking recognition result. Because the reliability of lidar is high and the influence of environmental factors on it is very small, high-accuracy road marking recognition results can be obtained from the lidar point cloud even in harsh imaging environments, so road marking recognition is no longer limited by the imaging environment. This expands the usage scenarios of road marking recognition and solves the problem in traditional technology that image-based road marking recognition is limited to certain usage scenarios and can hardly meet users' needs.
  • FIG. 4 is a schematic flowchart of a road marking recognition method provided by an embodiment of the application.
  • the execution subject of this embodiment may be a road marking recognition device, and specifically may be a processor of the road marking recognition device.
  • the method of this embodiment may include:
  • Step 401 Obtain three-dimensional point cloud data detected by lidar, where the three-dimensional point cloud data includes reflection data of a road marking area.
  • The laser radar can also be called a light detection and ranging (LiDAR) system, and the data obtained by the laser radar scanning the surrounding environment is the three-dimensional point cloud data.
  • the three-dimensional point cloud data includes the reflection data of the road marking area.
  • The reflection data may refer to the data carried by the return light reflected by the road marking area. It can be understood that the three-dimensional point cloud data is data obtained by the lidar scanning the road surface on which the road markings are set.
  • Each point in the 3D point cloud data can contain 3D coordinates and reflectance information.
  • the road marking area may refer to the location range where the road marking is located.
  • The road marking can specifically be any type of marking set on the road surface to indicate driving regulations.
  • the road surface markings may be various lane lines, such as double solid lines, single solid lines, discontinuous lines, or road diversion markings.
  • The road markings may also include left-turn arrows, right-turn arrows, straight-ahead arrows, and so on. It should be noted that the road markings that can be identified by the road marking recognition method provided in this application may be one or more of all road markings; the one or more road markings can be understood as specific road markings.
  • Step 402 Compress the three-dimensional point cloud data into a two-dimensional point cloud feature map.
  • the three-dimensional point cloud data in the three-dimensional space is compressed into a two-dimensional point cloud feature map in the plane space, and multiple points in the three-dimensional point cloud data can correspond to one pixel in the two-dimensional point cloud feature map.
  • the two-dimensional point cloud feature map contains feature information that can be used to identify road signs.
  • the number of two-dimensional point cloud feature maps can correspond to the number of types of feature information.
  • For example, if the number of types of feature information is 2, the number of two-dimensional point cloud feature maps can be 2: two-dimensional point cloud feature map 1 and two-dimensional point cloud feature map 2. The pixel values in two-dimensional point cloud feature map 1 can represent one type of feature information, and the pixel values in two-dimensional point cloud feature map 2 can represent the other type of feature information.
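The compression in step 402 can be sketched as projecting each 3D point onto a horizontal grid and aggregating one value per cell for each feature type, so that several 3D points map to one 2D pixel and each feature type yields its own map (e.g., a relative-height map and a reflectivity map). In the minimal sketch below, the (N, 4) point layout, 0.1 m cell size, and max-height / mean-reflectivity aggregation are illustrative assumptions, not specifics from this application:

```python
import numpy as np

def compress_point_cloud(points, cell_size=0.1, grid=(200, 200)):
    """Compress an (N, 4) array of [x, y, z, reflectivity] points into two
    2D feature maps: max height per cell and mean reflectivity per cell.
    Several 3D points may fall into the same 2D pixel."""
    h, w = grid
    height_map = np.full((h, w), -np.inf)
    refl_sum = np.zeros((h, w))
    refl_cnt = np.zeros((h, w))
    for x, y, z, r in points:
        col = int(x / cell_size) + w // 2   # center the grid on the sensor
        row = int(y / cell_size) + h // 2
        if 0 <= row < h and 0 <= col < w:
            height_map[row, col] = max(height_map[row, col], z)
            refl_sum[row, col] += r
            refl_cnt[row, col] += 1
    refl_map = np.divide(refl_sum, refl_cnt, out=np.zeros_like(refl_sum),
                         where=refl_cnt > 0)
    height_map[height_map == -np.inf] = 0.0  # cells with no points
    return height_map, refl_map

pts = np.array([[0.05, 0.05, -1.8, 0.9],   # two points in the same cell
                [0.06, 0.04, -1.7, 0.7],
                [1.00, 0.00, -1.8, 0.2]])
hmap, rmap = compress_point_cloud(pts)
```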
  • Step 403 Process the two-dimensional point cloud feature map to obtain a road marking recognition result.
  • the road surface mark recognition result can be obtained by processing the two-dimensional point cloud feature map.
  • The road marking recognition result may be whether the three-dimensional point cloud data contains a specific road marking; for example, the recognition result may be that the data includes a single solid line or that it does not include a single solid line.
  • the road surface marking recognition result may be a category of a specific road marking included in the three-dimensional point cloud data.
  • the road marking result may be a single solid line or a discontinuous line.
  • the three-dimensional point cloud data contains data of the road marking area
  • the three-dimensional point cloud data is compressed into a two-dimensional point cloud feature map
  • the two-dimensional point cloud feature map is processed
  • Fig. 5 is a schematic flow chart of a road marking recognition method provided by another embodiment of this application. On the basis of the embodiment shown in Fig. 4, this embodiment mainly describes an optional implementation of processing a two-dimensional point cloud feature map to obtain a road marking recognition result. As shown in Fig. 5, the method of this embodiment may include:
  • Step 501 Obtain three-dimensional point cloud data detected by lidar, where the three-dimensional point cloud data includes reflection data of a road marking area.
  • step 501 is similar to step 401, and will not be repeated here.
  • Step 502 Compress the three-dimensional point cloud data into a two-dimensional point cloud feature map.
  • the three-dimensional point cloud data is compressed according to the target direction to obtain a two-dimensional point cloud feature map containing feature information.
  • the feature information includes relative height information and/or reflectivity information. The target direction may be any direction along which feature information usable for identifying road markings can be extracted.
  • the target direction includes a vertical direction.
  • the two-dimensional point cloud feature map may be understood as a two-dimensional horizontal plane point cloud feature map, so as to simplify implementation.
  • the relative height information may refer to the height information of the object relative to the target reference object in the target direction
  • the target reference object may include the lidar or the mobile platform on which the lidar is mounted. Since road markings lie on the road surface, the relative height between the road surface and the target reference object usually satisfies certain conditions, so road markings can be identified based on the relative height information; that is, the relative height information can be used to identify road markings.
  • reflectivity information can refer to the percentage of the echo energy collected by the lidar relative to the energy emitted by the lidar. Since reflectivity mainly depends on the nature of the object itself, together with the incident wavelength and angle of incidence, objects can be identified based on reflectivity when the incident wavelength and angle of incidence are fixed; that is, the reflectivity information can be used to identify road markings.
  • by retaining the relative height information, the problem of information loss due to compression can be avoided, and feature information that includes both relative height information and reflectivity information helps improve the recognition accuracy of road markings.
  • compressing the three-dimensional point cloud data according to the target direction to obtain a two-dimensional point cloud feature map containing reflectance information and/or relative height information may specifically include the following steps A and B.
  • Step A Perform projection compression on the three-dimensional point cloud data along the target direction to obtain two-dimensional point cloud data.
  • each point in the two-dimensional point cloud data may include two-dimensional coordinates and reflectance information.
  • the two-dimensional point cloud data may also include relative height information to avoid the problem of information loss due to compression.
  • Step B Extract feature information from the two-dimensional point cloud data, and obtain a two-dimensional point cloud feature map containing the feature information.
  • the feature information is extracted for each point in the two-dimensional point cloud data, and a two-dimensional point cloud feature map containing the feature information can be obtained.
  • the pixels in the two-dimensional point cloud feature map can correspond one-to-one with the points in the two-dimensional point cloud data.
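Steps A and B above can be sketched as follows. The grid size, cell resolution, and the max-value aggregation when several 3D points map to one pixel are illustrative assumptions, not details from the application.

```python
def compress_to_feature_maps(points, cell=0.5, grid_w=8, grid_h=8):
    """Project 3D points (x, y, z, reflectivity) along the vertical (z) axis
    onto an x-y grid (step A), then fill one feature map per type of feature
    information (step B): reflectivity and relative height.  Several 3D points
    may fall into the same pixel; the maximum value per pixel is kept here."""
    refl_map = [[0.0] * grid_w for _ in range(grid_h)]
    height_map = [[-10.0] * grid_w for _ in range(grid_h)]  # -10.0 = empty sentinel
    for x, y, z, refl in points:
        col = min(int(x / cell), grid_w - 1)
        row = min(int(y / cell), grid_h - 1)
        refl_map[row][col] = max(refl_map[row][col], refl)
        height_map[row][col] = max(height_map[row][col], z)
    return refl_map, height_map

# Two points landing in the same pixel, and one point in another pixel.
points = [(0.1, 0.1, -1.8, 0.9), (0.2, 0.2, -1.7, 0.4), (3.0, 3.0, -1.8, 0.2)]
refl_map, height_map = compress_to_feature_maps(points)
print(refl_map[0][0], height_map[0][0])  # 0.9 -1.7
```

Keeping both maps illustrates how the relative height information survives the compression alongside reflectivity.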
  • Step 503 Output a road marking area based on the area reflectivity and height of the two-dimensional point cloud feature map.
  • the road marking area can be used as the road marking recognition result.
  • the two-dimensional point cloud feature map can be divided into multiple regions according to the reflectivity and height of each pixel, where one region can correspond to one object; further, the road marking area can be determined from the multiple regions based on the reflectivity and height of the object corresponding to each region, together with the target reflectivity and target height.
  • the target reflectivity can represent the reflectivity when the object is a road marking, and the target height can represent the height when the object is a road marking.
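A minimal sketch of this region selection, assuming the target reflectivity/height and the tolerances are known calibration values (the numbers below are invented for illustration):

```python
def find_marking_pixels(refl_map, height_map, target_refl=0.8, target_h=-1.8,
                        refl_tol=0.15, h_tol=0.1):
    """Keep the pixel positions whose reflectivity and relative height both
    fall within a tolerance of the target values expected for a road marking."""
    marking = set()
    for r, row in enumerate(refl_map):
        for c, refl in enumerate(row):
            if (abs(refl - target_refl) <= refl_tol
                    and abs(height_map[r][c] - target_h) <= h_tol):
                marking.add((r, c))
    return marking

refl_map = [[0.9, 0.2], [0.75, 0.9]]
height_map = [[-1.8, -1.8], [-1.75, -0.5]]
# (0,1) fails the reflectivity test; (1,1) fails the height test.
print(sorted(find_marking_pixels(refl_map, height_map)))  # [(0, 0), (1, 0)]
```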
  • the method may further include: performing clustering processing on the output road marking area; recognizing the result of the clustering processing, and outputting the road marking corresponding to the result.
  • the road marking area output in step 503 may be clustered by a clustering algorithm, so that road marking areas corresponding to the same road marking are divided into one cluster and road marking areas corresponding to different road markings are divided into different clusters.
  • for example, for road marking areas a and b that correspond to one road marking and road marking areas c and d that correspond to another road marking, the clustering process can divide areas a and b into one cluster and areas c and d into another cluster.
  • the recognizing the result of the clustering process and outputting the road marking corresponding to the result may specifically include: analyzing the two-dimensional point cloud feature map at the pixel level according to the clustering result to obtain the arrangement manner of the pixels belonging to each cluster, and determining the road marking corresponding to the cluster according to the arrangement manner. For example, assuming that the pixels belonging to a cluster in the two-dimensional point cloud feature map are arranged in a non-continuous manner along a straight line, the road marking corresponding to the cluster is a discontinuous line.
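The clustering-then-arrangement idea can be sketched with a one-dimensional toy: pixel positions along a line are grouped by proximity, and a cluster whose pixels leave gaps along the line reads as a discontinuous line. The gap thresholds are illustrative, and a real system would cluster in 2D.

```python
def cluster_positions(positions, max_gap=3):
    """Single-linkage clustering of sorted 1-D pixel positions: a new cluster
    starts whenever the gap to the previous position exceeds max_gap."""
    clusters, current = [], [positions[0]]
    for p in positions[1:]:
        if p - current[-1] <= max_gap:
            current.append(p)
        else:
            clusters.append(current)
            current = [p]
    clusters.append(current)
    return clusters

def classify_line(cluster):
    """A cluster of contiguous pixels reads as a single solid line; a cluster
    whose pixels leave gaps along the line reads as a discontinuous line."""
    gaps = [b - a for a, b in zip(cluster, cluster[1:])]
    return "discontinuous line" if any(g > 1 for g in gaps) else "single solid line"

positions = [0, 1, 2, 5, 6, 20, 21, 22]   # dashes near the origin, solid paint farther on
clusters = cluster_positions(positions)
print([classify_line(c) for c in clusters])  # ['discontinuous line', 'single solid line']
```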
  • in this embodiment, the three-dimensional point cloud data contains the data of the road marking area, the three-dimensional point cloud data is compressed into a two-dimensional point cloud feature map, and the road marking area is output based on the area reflectivity and height of the two-dimensional point cloud feature map, realizing a road marking recognition method based on area reflectivity and height, which is conducive to simplifying the implementation.
  • Fig. 6 is a schematic flow chart of a road marking recognition method provided by another embodiment of this application. On the basis of the embodiment shown in Fig. 4, this embodiment mainly describes another optional implementation of processing a two-dimensional point cloud feature map to obtain a road marking recognition result. As shown in Fig. 6, the method of this embodiment may include:
  • Step 601 Obtain three-dimensional point cloud data detected by lidar, where the three-dimensional point cloud data includes data of road marking areas.
  • step 601 is similar to step 401, and will not be repeated here.
  • Step 602 Compress the three-dimensional point cloud data into a two-dimensional point cloud feature map.
  • step 602 is similar to step 402 and step 502, and will not be repeated here.
  • Step 603 Input the two-dimensional point cloud feature map into a preset neural network model, and obtain a model output result of the preset neural network model.
  • the preset neural network model is used to determine the surface object category of each pixel in the two-dimensional point cloud feature map.
  • the preset neural network model may include a plurality of output channels, the plurality of output channels are in one-to-one correspondence with a plurality of surface object categories, and the plurality of surface object categories include at least one road marking category.
  • the output channel is used to output a confidence feature map corresponding to the category of the surface object, and the confidence feature map is used to characterize the probability that a pixel is the category of the corresponding surface object.
  • the pixels in the confidence feature map can correspond to the pixels in the two-dimensional point cloud feature map on a one-to-one basis.
  • for example, the pixel value in confidence feature map 1, output by the channel corresponding to the single solid line, can represent the probability that a pixel is a single solid line; the pixel value in confidence feature map 2 can represent the probability that a pixel is a discontinuous line; and the pixel value in confidence feature map 3 can represent the probability that a pixel is a left-turn arrow.
  • the preset neural network can be used to identify at least one road marking category, and a road marking category can represent a specific type of road marking, or can represent a collection of at least two types of road markings.
  • the multiple surface object categories may further include other surface object categories, and correspondingly, the model output result may also include the confidence characteristic maps of other surface object categories.
  • the model output result may also include a confidence characteristic map of the building, and the pixel value in the confidence characteristic map may represent the probability that the pixel is a building.
  • the multiple surface object categories may further include "other", which is used to indicate that the category of the surface object cannot be identified, so as to distinguish it from the category of the surface object that can be identified by the preset neural network model.
  • the output result of the model may be used as the road marking recognition result, thereby simplifying the realization of the road marking recognition method.
  • the method may further include: obtaining the road marking recognition result based on the model output result. In this way, more specific road marking recognition results can be obtained, which is beneficial to subsequent processing after the road marking recognition result is obtained.
  • the obtaining the road marking recognition result according to the model output result includes: taking, as the surface object category of a pixel position, the surface object category corresponding to the confidence feature map that has the largest pixel value at that pixel position among the respective confidence feature maps of the multiple surface object categories.
  • for example, the 4 confidence feature maps are confidence feature map 1 to confidence feature map 4, where confidence feature map 1 corresponds to the single solid line, confidence feature map 2 corresponds to the discontinuous line, confidence feature map 3 corresponds to the left-turn arrow, and confidence feature map 4 corresponds to "other".
  • when the pixel value at pixel position (100, 100) is 70 in confidence feature map 1, 50 in confidence feature map 2, 20 in confidence feature map 3, and 20 in confidence feature map 4, it can be determined that the surface object category at pixel position (100, 100) is the single solid line.
  • similarly, the surface object category at pixel position (100, 80) may be "other", that is, not any of a single solid line, a discontinuous line, or a left-turn arrow.
  • the surface object category of the pixel location can represent the surface object category of the pixel location in the two-dimensional point cloud feature map.
  • the ground surface object category at the pixel position in the two-dimensional point cloud feature map may be used as the road marking recognition result.
  • the method of this embodiment may further include using the surface object category of the pixel position as the surface object category of the corresponding point in the three-dimensional point cloud data, that is, using the surface object category of the point in the three-dimensional point cloud data as the road marking recognition result.
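The per-pixel argmax described above can be shown with 1×1 confidence feature maps that mirror the example pixel values (70, 50, 20, 20):

```python
def pixel_category(conf_maps, categories, r, c):
    """Return the surface object category whose confidence feature map has
    the largest pixel value at position (r, c); ties keep the first category."""
    best = max(range(len(conf_maps)), key=lambda i: conf_maps[i][r][c])
    return categories[best]

categories = ["single solid line", "discontinuous line", "left-turn arrow", "other"]
conf_maps = [[[70]], [[50]], [[20]], [[20]]]  # one 1x1 map per category
print(pixel_category(conf_maps, categories, 0, 0))  # single solid line
```

At a pixel where the "other" channel dominates, e.g. maps with values 10, 10, 10, 40, the same call returns "other".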
  • the preset neural network model may specifically be a convolutional neural network (Convolutional Neural Networks, CNN) model.
  • the structure of the preset neural network model may be as shown in FIG. 7, for example.
  • the preset neural network model can include multiple computing nodes, and each computing node can include a convolution (Conv) layer, batch normalization (BN), and an activation function ReLU; the computing nodes can be connected in a skip-connection manner.
  • the input data of K ⁇ H ⁇ W can be input into the preset neural network model, and after the preset neural network model is processed, the output data of C ⁇ H ⁇ W can be obtained.
  • K can represent the number of two-dimensional point cloud feature maps
  • H can represent the height of the two-dimensional point cloud feature maps
  • W can represent the width of the two-dimensional point cloud feature maps
  • C can represent the number of categories.
  • a two-dimensional point cloud feature map can also be cut into N sub-feature maps; in this case, the input data can be N×K×H'×W' and the output data can be N×C×H'×W', where H' can represent the height of a sub-feature map and W' can represent the width of a sub-feature map.
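The cutting of a K×H×W stack into N sub-feature maps of shape K×H'×W' is pure bookkeeping; nested lists are used below for a self-contained sketch (a real implementation would use tensors).

```python
def split_feature_map(fmap, n_rows, n_cols):
    """Cut a K x H x W feature map stack into N = n_rows * n_cols sub-stacks
    of shape K x H' x W', with H' = H / n_rows and W' = W / n_cols."""
    h = len(fmap[0]) // n_rows
    w = len(fmap[0][0]) // n_cols
    subs = []
    for i in range(n_rows):
        for j in range(n_cols):
            subs.append([[row[j * w:(j + 1) * w]
                          for row in ch[i * h:(i + 1) * h]] for ch in fmap])
    return subs

# K=2 maps of size H=4, W=4, cut into N=4 sub-maps of size 2x2.
fmap = [[[ch * 100 + r * 10 + c for c in range(4)] for r in range(4)] for ch in range(2)]
subs = split_feature_map(fmap, 2, 2)
print(len(subs), len(subs[0]), len(subs[0][0]), len(subs[0][0][0]))  # 4 2 2 2
```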
  • in this embodiment, the three-dimensional point cloud data contains data of the road marking area, the three-dimensional point cloud data is compressed into a two-dimensional point cloud feature map, the two-dimensional point cloud feature map is input into the preset neural network model, and the model output result of the preset neural network model is obtained; the semantics in the two-dimensional point cloud feature map are thereby distinguished, and the road marking recognition result is obtained through the preset neural network model.
  • compressing the three-dimensional point cloud data into a two-dimensional point cloud feature map may specifically include: filtering the three-dimensional point cloud data according to a filtering condition to obtain filtered three-dimensional point cloud data, and compressing the filtered three-dimensional point cloud data into a two-dimensional point cloud feature map.
  • the filtering conditions may include distance conditions and/or altitude conditions.
  • the distance condition may be, for example, that the distance is less than 100 meters, so as to retain only the three-dimensional point cloud data whose distance to the lidar is less than 100 meters, thereby avoiding unnecessary recognition of points that are too far away and saving computing resources.
  • the height condition can be, for example, that the relative height is greater than height threshold 1 and less than height threshold 2, where height threshold 1 is less than height threshold 2, so as to retain only the three-dimensional point cloud data whose relative height is greater than height threshold 1 and less than height threshold 2, thereby avoiding unnecessary recognition of points that cannot be road markings and saving computing resources.
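The two filtering conditions can be sketched together as below; the 100 m distance and the height band standing in for the two height thresholds are illustrative values only.

```python
import math

def filter_points(points, max_dist=100.0, h_min=-2.5, h_max=-1.0):
    """Keep only points (x, y, z, reflectivity) that lie within max_dist of
    the lidar at the origin (distance condition) and whose relative height z
    falls between the two height thresholds (height condition)."""
    return [(x, y, z, r) for x, y, z, r in points
            if math.hypot(x, y) < max_dist and h_min < z < h_max]

points = [(5.0, 5.0, -1.8, 0.9),    # kept
          (150.0, 0.0, -1.8, 0.9),  # dropped: too far away
          (5.0, 5.0, 3.0, 0.1)]     # dropped: too high to be a road marking
print(len(filter_points(points)))  # 1
```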
  • the obtaining three-dimensional point cloud data detected by lidar may specifically include: obtaining multiple frames of three-dimensional point cloud data detected by the lidar, and accumulating the multiple frames to obtain accumulated three-dimensional point cloud data. Through multi-frame accumulation, a denser point cloud can be obtained, avoiding the problem of a sparse point cloud caused by the small number of points detected in a single frame.
  • the compressing the three-dimensional point cloud data into a two-dimensional point cloud feature map may specifically include: compressing the accumulated three-dimensional point cloud data into a two-dimensional point cloud feature map.
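Multi-frame accumulation reduces to transforming each frame into a common coordinate frame and concatenating the points. The per-frame translation offset below stands in for full ego-motion compensation and is an assumption of this sketch.

```python
def accumulate_frames(frames, offsets):
    """Shift each frame's points by that frame's (dx, dy, dz) ego-motion
    offset and merge everything into one denser point cloud."""
    merged = []
    for frame, (dx, dy, dz) in zip(frames, offsets):
        merged.extend((x + dx, y + dy, z + dz, refl) for x, y, z, refl in frame)
    return merged

frame1 = [(0.0, 0.0, -1.8, 0.9)]
frame2 = [(0.0, 0.0, -1.8, 0.9)]  # same sensor reading, vehicle has moved 1 m in x
merged = accumulate_frames([frame1, frame2], [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)])
print(len(merged), merged[1][0])  # 2 1.0
```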
  • step C and step D may also be included.
  • Step C Determine whether the preset environmental conditions are met.
  • Step D If the preset environmental conditions are met, the image information collected by the image acquisition module is obtained, the image information includes the road marking area, and the image information is processed to obtain a road marking recognition result.
  • the road marking recognition device may include the image acquisition module, in which case acquiring the image information in step D may specifically include performing image acquisition with the road marking recognition device to obtain the image information; alternatively, the road marking recognition device may not include the image acquisition module, in which case acquiring the image information in step D may specifically include receiving image information that was collected by an image acquisition module and sent by another device.
  • the image acquisition module may be a camera, for example.
  • the preset environmental conditions may refer to imaging environmental conditions that need to be met by using the method of obtaining road sign recognition results based on image information.
  • when the preset environmental conditions are met, it can indicate that the imaging environment is good and the image quality of the obtained image information is high; therefore, the accuracy of the road marking recognition result obtained based on the image information can be considered high, and the image-information-based way of obtaining the road marking recognition result can be used.
  • when the preset environmental conditions are not met, it can indicate that the imaging environment is poor and the imaging quality of the obtained image information is low; therefore, the accuracy of the road marking recognition result obtained based on the image information can be considered low, and the image-information-based way of obtaining the road marking recognition result is not used.
  • the preset environmental conditions may include one or more of any environmental factors that affect the imaging quality.
  • the preset environmental conditions may include ambient light conditions and/or lens contamination degree conditions.
  • the ambient light condition may include that the intensity of the ambient light is greater than an intensity threshold.
  • the condition of the degree of lens contamination includes that the degree of lens contamination is less than a degree threshold.
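Steps C and D reduce to a simple gate over the two example conditions. The numeric thresholds for ambient light intensity and lens contamination degree are placeholders, not values from the application.

```python
def choose_recognition_source(ambient_light, lens_dirt,
                              light_threshold=200.0, dirt_threshold=0.3):
    """Use image-based recognition only when the imaging environment is good:
    ambient light intensity above the intensity threshold AND lens contamination
    below the degree threshold; otherwise fall back to lidar point cloud data."""
    if ambient_light > light_threshold and lens_dirt < dirt_threshold:
        return "image"
    return "point cloud"

print(choose_recognition_source(500.0, 0.1))  # image
print(choose_recognition_source(50.0, 0.1))   # point cloud (too dark)
print(choose_recognition_source(500.0, 0.9))  # point cloud (dirty lens)
```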
  • in this way, the road marking recognition device can support not only the road marking recognition function based on three-dimensional point cloud data but also the road marking recognition function based on image information, which improves the flexibility of the road marking recognition device in recognizing road markings.
  • the road marking recognition result obtained by processing the two-dimensional point cloud feature map (hereinafter referred to as the first road marking recognition result) and the road marking recognition result obtained by processing the image information (hereinafter referred to as the second road marking recognition result) may both be used in parallel as final road marking recognition results.
  • the first road marking recognition result can be used as the input of one function
  • the second road marking recognition result can be used as the input of another function.
  • the first road marking recognition result and the second road marking recognition result may be merged according to the fusion strategy to obtain the fused road marking recognition result, that is, the fused road marking recognition result can be used as the final road marking recognition result.
  • the recognition effect of the road marking can be improved by fusing the recognition result of the first road marking and the recognition result of the second road marking.
  • the fusion strategy may include a fusion strategy based on distance.
  • the distance may refer to the road distance.
  • fusing the first road marking recognition result and the second road marking recognition result to obtain the fused road marking recognition result may specifically include: for a field of view whose distance is greater than a distance threshold, using the first road marking recognition result as the fused road marking recognition result. Since the larger the distance, the larger the actual area corresponding to a single pixel in the image information, and the worse the accuracy of the distance determined according to the image information, using the first road marking recognition result for the field of view whose distance is greater than the distance threshold can avoid the inaccurate recognition results that would arise from using the second road marking recognition result there.
  • the fusing of the first road marking recognition result and the second road marking recognition result according to the fusion strategy to obtain the fused road marking recognition result may specifically include: for a field of view whose distance is less than the distance threshold, using the second road marking recognition result as the fused road marking recognition result. Since the cost of the lidar is much higher than that of the image acquisition module, using the second road marking recognition result for the field of view whose distance is less than the distance threshold is beneficial to cost saving.
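The distance-based fusion strategy reduces to a per-field-of-view selection between the two results. The 30 m threshold is an illustrative placeholder.

```python
def fuse(first_result, second_result, distance, threshold=30.0):
    """Beyond the threshold, keep the first (point-cloud-based) result, since
    per-pixel image accuracy degrades with range; within it, keep the second
    (image-based) result, which is cheaper to produce."""
    return first_result if distance > threshold else second_result

print(fuse("single solid line", "discontinuous line", 50.0))  # single solid line
print(fuse("single solid line", "discontinuous line", 10.0))  # discontinuous line
```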
  • obtaining the three-dimensional point cloud data detected by the lidar includes: if the preset environmental conditions are not satisfied, obtaining the three-dimensional point cloud data detected by the lidar.
  • the displaying the road surface mark recognition result includes: marking the road surface mark in a target image according to the road surface mark recognition result to obtain a marked image, and displaying the marked image.
  • different colors may be used to mark different types of road signs, for example, green represents a single solid line, yellow represents a discontinuous line, and purple represents an edge line.
  • the target image includes one or more of the following: an all-black image, an all-white image, or an image containing the road marking area.
  • the all-black image can be an image in which the red (Red, R) value, green (Green, G) value and blue (Blue, B) value of each pixel are all 0, and the all-white image can be the R value of each pixel, An image whose G value and B value are both 255.
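Marking recognized categories onto an all-black target image (every pixel initialized to R=G=B=0) might look like the sketch below; the exact RGB triples chosen for green, yellow, and purple are illustrative.

```python
COLORS = {
    "single solid line": (0, 255, 0),     # green
    "discontinuous line": (255, 255, 0),  # yellow
    "edge line": (128, 0, 128),           # purple
}

def mark_image(width, height, detections):
    """Start from an all-black image and paint the pixels of each recognized
    road marking in its category's color."""
    img = [[(0, 0, 0)] * width for _ in range(height)]
    for category, pixels in detections:
        for r, c in pixels:
            img[r][c] = COLORS[category]
    return img

img = mark_image(4, 4, [("single solid line", [(0, 0), (1, 0)])])
print(img[0][0], img[3][3])  # (0, 255, 0) (0, 0, 0)
```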
  • the target direction can be chosen so that the road distance between the lidar and a road marking, obtained from the compressed three-dimensional point cloud data (i.e., the two-dimensional point cloud feature map), is consistent with the actual road distance between the lidar and the road marking. When the target direction is the vertical direction, the accuracy of the road distance between a road marking and the lidar can be ensured as long as the road surface is flat.
  • for example, the road distance L1 between the lidar O and the road marking A1, obtained from the three-dimensional point cloud data compressed along the vertical direction d1, is the actual road distance between the lidar O and the road marking A1; however, the road distance L2 between the lidar O and the road marking A2, obtained from the three-dimensional point cloud data compressed along the vertical direction d1, is smaller than the actual road distance L3+L4 between the lidar O and the road marking A2.
  • the above takes a flat road surface as an example; for a scene where the road surface is curved, when the target direction is the vertical direction, the road distance between the lidar and the road marking is likewise inaccurate.
  • as shown in Fig. 9A, the relationship between the actual single solid line 901 and the single solid line 902 identified from the compressed three-dimensional point cloud data can be seen: the length of the recognized single solid line 902 is shorter than the actual length of the single solid line 901.
  • as shown in Fig. 9B, the relationship between the actual arrow 903 and the arrow 904 recognized from the compressed three-dimensional point cloud data can be seen: the length of the recognized arrow 904 is shorter than that of the actual arrow 903, and the triangle of the recognized arrow 904 is flatter than that of the actual arrow 903.
  • to solve this, the target direction can be dynamically determined according to the undulating state of the road surface, so that the road distance between the lidar and the road markings, obtained from the three-dimensional point cloud data compressed along the target direction, is consistent with the actual road distance.
  • the road surface undulation state can be obtained through three-dimensional modeling, and the three-dimensional point cloud data can be compressed according to the road surface undulation state.
  • the correspondence between different point cloud ranges and target directions can be determined according to the undulating state of the road surface, and the point cloud data in each point cloud range can be compressed along its corresponding target direction.
  • for example, according to the road surface undulation state, the target direction corresponding to the three-dimensional point cloud data within the range of the flat road surface can be determined as the vertical direction d1, and the target direction corresponding to the three-dimensional point cloud data within the range of the uphill road surface can be determined as the inclined direction d2 perpendicular to that road surface; the three-dimensional point cloud data of the flat road surface area can then be compressed along the vertical direction d1, and the three-dimensional point cloud data of the uphill road surface area can be compressed along the inclined direction d2.
  • in this way, the road distance L1 between the lidar and the road marking A1, obtained from the compressed three-dimensional point cloud data, is the actual road distance between them, and the road distance L3+L4 between the lidar and the road marking A2, obtained from the compressed three-dimensional point cloud data, is the actual road distance between them.
  • accordingly, the problem that the size or shape of the recognized road marking changes can be solved.
  • the foregoing takes a flat road surface as an example.
  • the road surface can be divided into multiple road surface ranges according to a certain granularity, and for each range the direction perpendicular to the local tangent plane can be used as the target direction of the three-dimensional point cloud data corresponding to that range. It is understandable that the smaller the granularity, the higher the accuracy of the road distance between the lidar and the road marking.
  • for example, the road surface in front of the vehicle can be divided into multiple road surface ranges, and the three-dimensional point cloud data of each road surface range can be compressed along the corresponding target direction.
  • in the figure, division into 13 road surface ranges is taken as an example, and the arrow above a road surface range indicates the target direction corresponding to that range.
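Why the per-range target direction preserves road distance can be seen with a small calculation: a sloped range compressed along the vertical direction contributes only its horizontal extent dx, while compressing along the direction perpendicular to the local tangent plane recovers dx / cos(theta), the true length along the slope. A sketch under that assumption:

```python
import math

def road_distance(ranges):
    """Sum the road-surface distance over ranges given as (horizontal_extent,
    slope_angle_radians): each range is compressed along its own target
    direction (perpendicular to the local tangent plane), so a sloped range
    contributes its true along-surface length dx / cos(theta)."""
    return sum(dx / math.cos(theta) for dx, theta in ranges)

flat = road_distance([(10.0, 0.0)])
flat_plus_slope = road_distance([(10.0, 0.0), (10.0, math.radians(30.0))])
print(flat, round(flat_plus_slope, 2))  # 10.0 21.55
```

Compressing everything vertically would report 20.0 for the second case, shortening the recognized markings on the slope, which matches the distortion described for Figs. 9A and 9B.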
  • FIG. 12 is a schematic structural diagram of a road marking recognition device provided by an embodiment of the application. As shown in FIG. 12, the device 1200 may include a processor 1201 and a memory 1202.
  • the memory 1202 is used to store program codes
  • the processor 1201 calls the program code, and when the program code is executed, is configured to perform the following operations: obtain three-dimensional point cloud data detected by a lidar, the three-dimensional point cloud data containing reflection data of a road marking area; compress the three-dimensional point cloud data into a two-dimensional point cloud feature map; and process the two-dimensional point cloud feature map to obtain a road marking recognition result.
  • the road marking recognition device provided in this embodiment can be used to implement the technical solutions of the foregoing method embodiments, and its implementation principles and technical effects are similar to those of the method embodiments, and will not be repeated here.
  • a person of ordinary skill in the art can understand that all or part of the steps in the foregoing method embodiments can be implemented by a program instructing relevant hardware.
  • the aforementioned program can be stored in a computer-readable storage medium; when the program is executed, the steps of the foregoing method embodiments are executed. The foregoing storage medium includes various media that can store program code, such as ROM, RAM, a magnetic disk, or an optical disc.


Abstract

A road marking recognition method and apparatus. The method comprises: obtaining three-dimensional point cloud data obtained by means of detection carried out by a laser radar, wherein the three-dimensional point cloud data includes reflection data of a road marking region (401); compressing the three-dimensional point cloud data into a two-dimensional point cloud characteristic pattern (402); and processing the two-dimensional point cloud characteristic pattern to obtain a road marking recognition result (403). By means of the method, road marking recognition is no longer limited by an imaging environment, thereby expanding the usage scenarios for road marking recognition.

Description

路面标识识别方法及装置Road marking recognition method and device 技术领域Technical field
本申请涉及汽车技术领域,尤其涉及一种路面标识识别方法及装置。This application relates to the field of automobile technology, and in particular to a road marking recognition method and device.
Background
With the continuous development of automobile technology, the demand for recognizing road markings keeps growing.
Conventionally, a road marking recognition result is obtained by processing images collected by an image acquisition device, which may be mounted on a vehicle. The accuracy of image-based road marking recognition depends heavily on imaging quality: when the imaging quality is good, the image contains abundant useful information and a highly accurate recognition result can be obtained; when the imaging quality is poor, the image contains little useful information and the recognition accuracy is low; and when the imaging quality is very low, the image can no longer be used for road marking recognition at all.
Therefore, the above image-based road marking recognition approach is limited to a narrow range of usage scenarios and can hardly meet users' needs.
Summary
Embodiments of the present application provide a road marking recognition method and apparatus, to solve the problem in the prior art that image-based road marking recognition suffers from significant usage limitations.
In a first aspect, an embodiment of the present application provides a road marking recognition method, including:
obtaining three-dimensional point cloud data detected by a lidar, where the three-dimensional point cloud data includes reflection data of a road marking area;
compressing the three-dimensional point cloud data into a two-dimensional point cloud feature map; and
processing the two-dimensional point cloud feature map to obtain a road marking recognition result.
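The "compress to a two-dimensional point cloud feature map" step above can be illustrated with a minimal sketch. This is not the patent's reference implementation; the grid ranges, cell size, and max-reflectance accumulation rule are assumptions chosen only for illustration. A bird's-eye-view projection drops the height axis and accumulates each point's reflectance into a 2-D grid, which is one common way to turn a 3-D point cloud into a 2-D feature map:

```python
import numpy as np

def compress_to_bev(points: np.ndarray, cell: float = 0.1,
                    x_range=(0.0, 40.0), y_range=(-10.0, 10.0)) -> np.ndarray:
    """Project an (N, 4) array of lidar points [x, y, z, reflectance]
    onto a 2-D grid on the road plane, keeping the maximum reflectance
    seen in each cell. The z (height) axis is simply dropped."""
    h = int(round((x_range[1] - x_range[0]) / cell))
    w = int(round((y_range[1] - y_range[0]) / cell))
    bev = np.zeros((h, w), dtype=np.float32)
    ix = ((points[:, 0] - x_range[0]) / cell).astype(int)
    iy = ((points[:, 1] - y_range[0]) / cell).astype(int)
    keep = (ix >= 0) & (ix < h) & (iy >= 0) & (iy < w)
    for r, c, refl in zip(ix[keep], iy[keep], points[keep, 3]):
        bev[r, c] = max(bev[r, c], refl)
    return bev

pts = np.array([[5.0, 0.00, -1.6, 0.9],   # bright marking return
                [5.0, 0.02, -1.6, 0.4],   # same cell, weaker echo
                [20.0, 3.0, -1.6, 0.2]])  # asphalt return
bev = compress_to_bev(pts)
```

A recognizer can then treat `bev` like a single-channel image, which is what makes the subsequent 2-D processing step possible.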
In a second aspect, an embodiment of the present application provides a road marking recognition apparatus, including a processor and a memory, where the memory is configured to store program code, and the processor calls the program code which, when executed, causes the processor to perform the following operations:
obtaining three-dimensional point cloud data detected by a lidar, where the three-dimensional point cloud data includes reflection data of a road marking area;
compressing the three-dimensional point cloud data into a two-dimensional point cloud feature map; and
processing the two-dimensional point cloud feature map to obtain a road marking recognition result.
In a third aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program, the computer program including at least one piece of code executable by a computer to control the computer to perform the method of any one of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer program which, when executed by a computer, implements the method of any one of the first aspect.
Embodiments of the present application provide a road marking recognition method and apparatus. Three-dimensional point cloud data detected by a lidar is obtained, where the three-dimensional point cloud data includes data of a road marking area; the three-dimensional point cloud data is compressed into a two-dimensional point cloud feature map; and the two-dimensional point cloud feature map is processed to obtain a road marking recognition result. Because a lidar is highly reliable and barely affected by environmental factors, a highly accurate road marking recognition result can be obtained from lidar-based three-dimensional point cloud data even in a harsh imaging environment. Road marking recognition is thus no longer constrained by the imaging environment, which expands its usage scenarios and solves the problem that the conventional image-based approach is limited in usage scenarios and can hardly meet users' needs.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present application or in the prior art more clearly, the accompanying drawings needed for the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show some embodiments of the present application, and those of ordinary skill in the art can derive other drawings from them without creative effort.
FIG. 1A and FIG. 1B are schematic diagrams of application scenarios of the road marking recognition method provided by embodiments of this application;
FIG. 1C is a schematic structural diagram of a lidar;
FIG. 2 is a schematic diagram of a lidar using a coaxial optical path;
FIG. 3 is a schematic diagram of a scanning pattern of a lidar;
FIG. 4 is a schematic flowchart of a road marking recognition method provided by an embodiment of this application;
FIG. 5 is a schematic flowchart of a road marking recognition method provided by another embodiment of this application;
FIG. 6 is a schematic flowchart of a road marking recognition method provided by yet another embodiment of this application;
FIG. 7 is a schematic diagram of a preset neural network model provided by an embodiment of this application;
FIG. 8 is a first schematic diagram of the relationship between the target direction and the road surface distance provided by an embodiment of this application;
FIG. 9A and FIG. 9B are schematic diagrams showing how the shape of a road marking is compressed, provided by an embodiment of this application;
FIG. 10 is a second schematic diagram of the relationship between the target direction and the road surface distance provided by an embodiment of this application;
FIG. 11 is a schematic diagram of the target direction of a curved road surface provided by an embodiment of this application;
FIG. 12 is a schematic structural diagram of a road marking recognition apparatus provided by an embodiment of this application.
Detailed Description
To make the objectives, technical solutions, and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are some rather than all of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.
The road marking recognition method provided by the embodiments of the present application can be applied in any scenario that requires road marking recognition, and may be specifically executed by a road marking recognition apparatus. The road marking recognition apparatus may be an apparatus that includes a lidar; correspondingly, an application scenario of the road marking recognition method provided by the embodiments of the present application may be as shown in FIG. 1A. Specifically, the lidar of the road marking recognition apparatus can obtain three-dimensional point cloud data through detection, and the processor of the road marking recognition apparatus can process the three-dimensional point cloud data obtained by the lidar using the road marking recognition method provided by the embodiments of the present application. It should be noted that FIG. 1A is only a schematic diagram and does not limit the structure of the road marking recognition apparatus.
Alternatively, the road marking recognition apparatus may be an apparatus that does not include a lidar; correspondingly, an application scenario of the road marking recognition method provided by the embodiments of the present application may be as shown in FIG. 1B. Specifically, the communication interface of the road marking recognition apparatus can receive three-dimensional point cloud data, obtained through lidar detection, sent by another apparatus or device, and the processor of the road marking recognition apparatus can process the received three-dimensional point cloud data using the road marking recognition method provided by the embodiments of the present application. It should be noted that FIG. 1B is only a schematic diagram and does not limit the structure of the road marking recognition apparatus or the connection between the road marking recognition apparatus and other apparatuses or devices; for example, the communication interface in the apparatus may be replaced with a transceiver.
The lidar is used to sense external environment information, such as distance information, azimuth information, reflection intensity information, and velocity information of environmental targets. In one implementation, the lidar can detect the distance from a detected object to the lidar by measuring the light propagation time between the lidar and the object, that is, the time of flight (TOF). Alternatively, the lidar may detect this distance using other techniques, such as a ranging method based on phase shift measurement or a ranging method based on frequency shift measurement, which is not limited here.
For ease of understanding, the ranging workflow is described below by way of example with reference to the lidar 100 shown in FIG. 1C. As shown in FIG. 1C, the lidar 100 may include a transmitting circuit 110, a receiving circuit 120, a sampling circuit 130, and an arithmetic circuit 140.
The transmitting circuit 110 may emit a light pulse sequence (for example, a laser pulse sequence). The receiving circuit 120 may receive the light pulse sequence reflected by the detected object, perform photoelectric conversion on it to obtain an electrical signal, and, after processing, output the electrical signal to the sampling circuit 130. The sampling circuit 130 may sample the electrical signal to obtain a sampling result. The arithmetic circuit 140 may determine the distance between the lidar 100 and the detected object based on the sampling result of the sampling circuit 130.
Optionally, the lidar 100 may further include a control circuit 150, which can control the other circuits, for example, control the working time of each circuit and/or set parameters for each circuit.
It should be understood that although the lidar shown in FIG. 1C includes one transmitting circuit, one receiving circuit, one sampling circuit, and one arithmetic circuit for emitting one light beam for detection, the embodiments of the present application are not limited to this. The number of any one of the transmitting circuit, the receiving circuit, the sampling circuit, and the arithmetic circuit may also be at least two, for emitting at least two light beams in the same direction or in different directions; the at least two light beams may be emitted simultaneously or at different times. In one example, the light-emitting chips of the at least two transmitting circuits are packaged in the same module. For example, each transmitting circuit includes one laser emitting chip, and the dies of the laser emitting chips of the at least two transmitting circuits are packaged together and housed in the same packaging space.
In some implementations, in addition to the circuits shown in FIG. 1C, the lidar 100 may further include a scanning module 160 for changing the propagation direction of at least one laser pulse sequence emitted by the transmitting circuit before it exits.
A module including the transmitting circuit 110, the receiving circuit 120, the sampling circuit 130, and the arithmetic circuit 140, or a module including the transmitting circuit 110, the receiving circuit 120, the sampling circuit 130, the arithmetic circuit 140, and the control circuit 150, may be referred to as a ranging module. The ranging module may be independent of other modules, for example, the scanning module 160.
The lidar may use a coaxial optical path, that is, the beam emitted by the lidar and the reflected return beam share at least part of the optical path inside the lidar. For example, after at least one laser pulse sequence emitted by the transmitting circuit exits through the scanning module with its propagation direction changed, the laser pulse sequence reflected by the detected object passes through the scanning module and then enters the receiving circuit. Alternatively, the lidar may use an off-axis optical path, that is, the emitted beam and the reflected return beam travel along different optical paths inside the lidar. FIG. 2 shows a schematic diagram of an embodiment in which the lidar of the present application uses a coaxial optical path.
The lidar 200 includes a ranging module 210. The ranging module 210 includes a transmitter 203 (which may include the above transmitting circuit), a collimating element 204, a detector 205 (which may include the above receiving circuit, sampling circuit, and arithmetic circuit), and an optical path changing element 206. The ranging module 210 is used to emit a light beam, receive the return light, and convert the return light into an electrical signal. The transmitter 203 can be used to emit a light pulse sequence. In one embodiment, the transmitter 203 may emit a laser pulse sequence. Optionally, the laser beam emitted by the transmitter 203 is a narrow-bandwidth beam with a wavelength outside the visible light range. The collimating element 204 is arranged on the exit optical path of the transmitter and is used to collimate the beam emitted by the transmitter 203 into parallel light that exits toward the scanning module. The collimating element is also used to converge at least part of the return light reflected by the detected object. The collimating element 204 may be a collimating lens or another element capable of collimating a light beam.
In the embodiment shown in FIG. 2, the optical path changing element 206 merges the transmitting optical path and the receiving optical path inside the lidar before the collimating element 204, so that the two paths can share the same collimating element, making the optical path more compact. In some other implementations, the transmitter 203 and the detector 205 may each use their own collimating element, with the optical path changing element 206 arranged on the optical path after the collimating element.
In the embodiment shown in FIG. 2, since the aperture of the beam emitted by the transmitter 203 is small while the aperture of the return light received by the lidar is large, the optical path changing element can use a small-area mirror to merge the transmitting optical path and the receiving optical path. In some other implementations, the optical path changing element may also use a mirror with a through hole, where the through hole transmits the outgoing light of the transmitter 203 and the mirror reflects the return light to the detector 205. This reduces the blocking of the return light by the mount of a small mirror in the case where a small mirror is used.
In the embodiment shown in FIG. 2, the optical path changing element is offset from the optical axis of the collimating element 204. In some other implementations, the optical path changing element may also be located on the optical axis of the collimating element 204.
The lidar 200 further includes a scanning module 202. The scanning module 202 is placed on the exit optical path of the ranging module 210 and is used to change the transmission direction of the collimated beam 219 emitted by the collimating element 204 and project it into the external environment, and to project the return light onto the collimating element 204. The return light is converged onto the detector 205 via the collimating element 204.
In one embodiment, the scanning module 202 may include at least one optical element for changing the propagation path of the light beam, where the optical element may change the propagation path by reflecting, refracting, or diffracting the beam. For example, the scanning module 202 includes a lens, a mirror, a prism, a galvanometer, a grating, a liquid crystal, an optical phased array, or any combination of the above optical elements. In one example, at least some of the optical elements are moving, for example driven by a driving module, and the moving optical elements can reflect, refract, or diffract the beam in different directions at different times. In some embodiments, multiple optical elements of the scanning module 202 may rotate or vibrate around a common axis 209, each rotating or vibrating optical element continuously changing the propagation direction of the incident beam. In one embodiment, the multiple optical elements of the scanning module 202 may rotate at different speeds or vibrate at different speeds. In another embodiment, at least some of the optical elements of the scanning module 202 may rotate at substantially the same speed. In some embodiments, the multiple optical elements of the scanning module may also rotate around different axes, and may rotate in the same direction or in different directions, or vibrate in the same direction or in different directions, which is not limited here.
In one embodiment, the scanning module 202 includes a first optical element 214 and a driver 216 connected to the first optical element 214. The driver 216 drives the first optical element 214 to rotate around the rotation axis 209, so that the first optical element 214 changes the direction of the collimated beam 219, projecting it in different directions. In one embodiment, the angle between the rotation axis 209 and the direction of the collimated beam 219 after being changed by the first optical element varies as the first optical element 214 rotates. In one embodiment, the first optical element 214 includes a pair of opposite non-parallel surfaces through which the collimated beam 219 passes. In one embodiment, the first optical element 214 includes a prism whose thickness varies along at least one radial direction. In one embodiment, the first optical element 214 includes a wedge prism that refracts the collimated beam 219.
In one embodiment, the scanning module 202 further includes a second optical element 215 that rotates around the rotation axis 209, with a rotation speed different from that of the first optical element 214. The second optical element 215 is used to change the direction of the beam projected by the first optical element 214. In one embodiment, the second optical element 215 is connected to another driver 217, which drives the second optical element 215 to rotate. The first optical element 214 and the second optical element 215 may be driven by the same or different drivers so that their rotation speeds and/or rotation directions differ, thereby projecting the collimated beam 219 in different directions in the outside space and allowing a large spatial range to be scanned. In one embodiment, a controller 218 controls the drivers 216 and 217 to drive the first optical element 214 and the second optical element 215, respectively. The rotation speeds of the first optical element 214 and the second optical element 215 can be determined according to the area and pattern expected to be scanned in a practical application. The drivers 216 and 217 may include motors or other drivers.
In one embodiment, the second optical element 215 includes a pair of opposite non-parallel surfaces through which the beam passes. In one embodiment, the second optical element 215 includes a prism whose thickness varies along at least one radial direction. In one embodiment, the second optical element 215 includes a wedge prism.
In one embodiment, the scanning module 202 further includes a third optical element (not shown) and a driver for driving the third optical element to move. Optionally, the third optical element includes a pair of opposite non-parallel surfaces through which the beam passes. In one embodiment, the third optical element includes a prism whose thickness varies along at least one radial direction. In one embodiment, the third optical element includes a wedge prism. At least two of the first, second, and third optical elements rotate at different speeds and/or in different directions.
The rotation of the optical elements in the scanning module 202 can project light in different directions, such as directions 211 and 213, thereby scanning the space around the lidar 200. FIG. 3 is a schematic diagram of a scanning pattern of the lidar 200. It can be understood that when the speeds of the optical elements in the scanning module change, the scanning pattern changes accordingly.
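The scanning behavior described above can be sketched numerically with a first-order model (illustrative only, not the patent's design; the deflection magnitudes and rotation rates below are arbitrary assumptions): each rotating wedge prism contributes a small deflection vector that spins with the prism, and the sum of the two vectors traces a rose-like pattern of the kind suggested by FIG. 3.

```python
import math

def risley_scan_direction(delta1, delta2, omega1, omega2, t):
    """Approximate far-field deflection (radians) of a beam passing
    through two wedge prisms spinning at omega1 and omega2 rad/s.
    Small-angle model: each prism contributes a deflection vector of
    fixed magnitude that rotates with the prism."""
    x = delta1 * math.cos(omega1 * t) + delta2 * math.cos(omega2 * t)
    y = delta1 * math.sin(omega1 * t) + delta2 * math.sin(omega2 * t)
    return x, y

# Sample the pattern over one second at 1 kHz; unequal, incommensurate
# speeds trace a dense rose-like figure, while changing either speed
# changes the pattern, matching the observation above.
pattern = [risley_scan_direction(0.02, 0.03,
                                 2 * math.pi * 50, 2 * math.pi * 37,
                                 i / 1000.0)
           for i in range(1000)]
```

The total deflection never exceeds the sum of the two prism deflections, which bounds the scanned field of view in this model.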
When the light 211 projected by the scanning module 202 hits the detected object 201, part of the light is reflected by the detected object 201 back toward the lidar 200 in the direction opposite to the projected light 211. The return light 212 reflected by the detected object 201 passes through the scanning module 202 and then enters the collimating element 204.
The detector 205 is placed on the same side of the collimating element 204 as the transmitter 203, and is used to convert at least part of the return light passing through the collimating element 204 into an electrical signal.
In one embodiment, each optical element is coated with an anti-reflection film. Optionally, the thickness of the anti-reflection film is equal or close to the wavelength of the beam emitted by the transmitter 203, which can increase the intensity of the transmitted beam.
In one embodiment, a filter layer is coated on the surface of an element located on the beam propagation path in the lidar, or a filter is arranged on the beam propagation path, to transmit at least the wavelength band of the beam emitted by the transmitter and reflect other bands, thereby reducing the noise that ambient light brings to the receiver.
In some embodiments, the transmitter 203 may include a laser diode that emits nanosecond-scale laser pulses. Further, the laser pulse reception time can be determined, for example, by detecting the rising edge time and/or the falling edge time of the electrical signal pulse. In this way, the lidar 200 can calculate the TOF from the pulse reception time and the pulse emission time, and thereby determine the distance between the detected object 201 and the lidar 200.
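A minimal sketch of the TOF calculation just described (illustrative only; taking the midpoint of the rising and falling edges as the reception time is an assumption, since the exact edge processing is left open above):

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def distance_from_tof(t_emit, t_rise, t_fall):
    """Estimate range from pulse timestamps (seconds).
    The pulse arrival time is taken as the midpoint of the detected
    rising and falling edges; the one-way distance is the round-trip
    time multiplied by the speed of light, divided by two."""
    t_receive = (t_rise + t_fall) / 2.0
    tof = t_receive - t_emit
    return SPEED_OF_LIGHT * tof / 2.0

# A pulse emitted at t=0 whose echo edges straddle 400 ns corresponds
# to a target roughly 60 m away.
d = distance_from_tof(0.0, 399e-9, 401e-9)
```

This division by two is why only the round-trip time needs to be measured: the light covers the lidar-to-object distance twice.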
In one implementation, the lidar of the embodiments of the present application can be applied to a mobile platform, where the lidar can be mounted on the platform body of the mobile platform. A mobile platform with a lidar can recognize road markings. In some implementations, the mobile platform includes at least one of an unmanned aerial vehicle, a car, a remote-control car, a robot, or a camera. When the lidar is applied to an unmanned aerial vehicle, the platform body is the fuselage of the unmanned aerial vehicle. When the lidar is applied to a car, the platform body is the body of the car; the car may be a self-driving car or a semi-autonomous car, which is not limited here. When the lidar is applied to a remote-control car, the platform body is the body of the remote-control car. When the lidar is applied to a robot, the platform body is the robot. When the lidar is applied to a camera, the platform body is the camera itself.
It should be noted that the embodiments of the present application do not limit the type of device that includes the road marking recognition apparatus; the device may be, for example, a server, a self-driving car, or a semi-autonomous car.
The road marking recognition method provided by the embodiments of the present application compresses the three-dimensional point cloud data obtained through lidar detection into a two-dimensional point cloud feature map and processes the two-dimensional point cloud feature map to obtain a road marking recognition result. Because a lidar is highly reliable and barely affected by environmental factors, a highly accurate road marking recognition result can be obtained from lidar-based three-dimensional point cloud data even in a harsh imaging environment. Road marking recognition is thus no longer constrained by the imaging environment, which expands its usage scenarios and solves the problem that the conventional image-based approach is limited in usage scenarios and can hardly meet users' needs.
Some embodiments of the present application are described in detail below with reference to the accompanying drawings. Where no conflict arises, the following embodiments and the features in the embodiments may be combined with one another.
FIG. 4 is a schematic flowchart of a road marking recognition method provided by an embodiment of this application. The execution subject of this embodiment may be a road marking recognition apparatus, and specifically may be a processor of the road marking recognition apparatus. As shown in FIG. 4, the method of this embodiment may include:
Step 401: Obtain three-dimensional point cloud data detected by a lidar, where the three-dimensional point cloud data includes reflection data of a road marking area.
In this step, the lidar may also be referred to as a light detection and ranging (LiDAR) system; the data acquired by the lidar scanning the surrounding environment is the three-dimensional point cloud data. The three-dimensional point cloud data includes reflection data of the road marking area, where the reflection data refers to the data carried by the return light reflected from the road marking area. In other words, the three-dimensional point cloud data is obtained by the lidar scanning a road surface on which road markings are provided. Each point in the three-dimensional point cloud data may include three-dimensional coordinates and reflectivity information.
The road marking area may refer to the location range where a road marking is located. The road marking may specifically be any type of marking provided on the road surface to indicate driving rules. Exemplarily, the road markings may be various lane lines, such as double solid lines, single solid lines, or broken lines, or road guidance markings. The road markings may also include left-turn arrows, right-turn arrows, straight-ahead arrows, and so on. It should be noted that the road markings recognizable by the road marking recognition method provided in this application may be one or more of all road markings; this one or more road markings may be understood as specific road markings.
Step 402: Compress the three-dimensional point cloud data into a two-dimensional point cloud feature map.
In this step, the three-dimensional point cloud data in three-dimensional space is compressed into a two-dimensional point cloud feature map in a plane, and multiple points in the three-dimensional point cloud data may correspond to one pixel in the two-dimensional point cloud feature map. The two-dimensional point cloud feature map contains feature information that can be used to recognize road markings.
It should be noted that the number of two-dimensional point cloud feature maps may correspond to the number of types of feature information. For example, if there are two types of feature information, there may be two two-dimensional point cloud feature maps, namely two-dimensional point cloud feature map 1 and two-dimensional point cloud feature map 2, where the pixel values in feature map 1 represent one type of feature information and the pixel values in feature map 2 represent the other type of feature information.
Step 403: Process the two-dimensional point cloud feature map to obtain a road marking recognition result.
In this step, since the two-dimensional point cloud feature map contains feature information that can be used to recognize road markings, a road marking recognition result can be obtained by processing the two-dimensional point cloud feature map. Exemplarily, the road marking recognition result may indicate whether the three-dimensional point cloud data contains a specific road marking; for example, the result may be that a single solid line is contained or is not contained. Exemplarily, the road marking recognition result may be the category of a specific road marking contained in the three-dimensional point cloud data; for example, the result may be a single solid line or a broken line.
In this embodiment, three-dimensional point cloud data detected by a lidar is obtained, where the three-dimensional point cloud data contains data of a road marking area; the three-dimensional point cloud data is compressed into a two-dimensional point cloud feature map, and the two-dimensional point cloud feature map is processed to obtain a road marking recognition result. Because lidar is highly reliable and only minimally affected by environmental factors, highly accurate road marking recognition results can be obtained from lidar point cloud data even in poor imaging environments. Road marking recognition is therefore no longer limited by the imaging environment, which broadens its applicable scenarios and solves the problem in the traditional technology that image-based road marking recognition is restricted to favorable imaging conditions and cannot meet user needs.
FIG. 5 is a schematic flowchart of a road marking recognition method provided by another embodiment of this application. On the basis of the embodiment shown in FIG. 4, this embodiment mainly describes an optional implementation of processing the two-dimensional point cloud feature map to obtain the road marking recognition result. As shown in FIG. 5, the method of this embodiment may include:
Step 501: Obtain three-dimensional point cloud data detected by a lidar, where the three-dimensional point cloud data includes reflection data of a road marking area.
It should be noted that step 501 is similar to step 401 and will not be repeated here.
Step 502: Compress the three-dimensional point cloud data into a two-dimensional point cloud feature map.
In this step, the three-dimensional point cloud data is compressed along a target direction to obtain a two-dimensional point cloud feature map containing feature information, where the feature information includes relative height information and/or reflectivity information, and the target direction is any direction along which feature information usable for recognizing road markings can be obtained. Exemplarily, the target direction includes the vertical direction; accordingly, the two-dimensional point cloud feature map may be understood as a two-dimensional horizontal-plane point cloud feature map, which simplifies the implementation.
The relative height information may refer to the height of an object in the target direction relative to a target reference, where the target reference may include the lidar or the mobile platform on which the lidar is mounted. Since road markings lie on the road surface, the relative height between the road surface and the target reference usually satisfies certain conditions, so road markings can be recognized based on relative height information; that is, relative height information can be used to recognize road markings.
The reflectivity information may refer to the percentage of the echo energy received by the lidar relative to the energy emitted by the lidar. Since reflectivity mainly depends on the properties of the object itself, as well as the incident wavelength and incident angle, objects can be distinguished by their reflectivity when the incident wavelength and angle are fixed; that is, reflectivity information can be used to recognize road markings.
Including relative height information in the feature information avoids the information loss caused by the compression; including both relative height information and reflectivity information in the feature information helps improve the recognition accuracy of road markings.
Exemplarily, compressing the three-dimensional point cloud data along the target direction to obtain a two-dimensional point cloud feature map containing reflectivity information and/or relative height information may specifically include the following steps A and B.
Step A: Project and compress the three-dimensional point cloud data along the target direction to obtain two-dimensional point cloud data.
Each point in the two-dimensional point cloud data may include two-dimensional coordinates and reflectivity information. Optionally, the two-dimensional point cloud data may also include relative height information, to avoid the information loss caused by the compression.
Step B: Extract feature information from the two-dimensional point cloud data to obtain a two-dimensional point cloud feature map containing the feature information.
Extracting feature information for each point in the two-dimensional point cloud data yields a two-dimensional point cloud feature map containing the feature information. The pixels in the two-dimensional point cloud feature map may correspond one-to-one with the points in the two-dimensional point cloud data.
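Steps A and B above can be sketched as follows. This is a minimal illustration, not the patented implementation: it assumes a uniform horizontal grid with a hypothetical cell size, and chooses maximum relative height and mean reflectivity as the two feature channels; the function and parameter names are invented for illustration.

```python
import numpy as np

def compress_to_feature_maps(points, cell=0.1, grid=(4, 4)):
    """Project 3-D points (x, y, z, reflectivity) vertically onto a
    horizontal grid and extract two feature maps per cell:
    the maximum relative height and the mean reflectivity."""
    h, w = grid
    height_map = np.zeros((h, w), dtype=np.float32)
    refl_sum = np.zeros((h, w), dtype=np.float32)
    counts = np.zeros((h, w), dtype=np.int32)
    for x, y, z, r in points:
        i, j = int(y // cell), int(x // cell)
        if 0 <= i < h and 0 <= j < w:
            # several 3-D points may fall into the same 2-D pixel
            height_map[i, j] = max(height_map[i, j], z)
            refl_sum[i, j] += r
            counts[i, j] += 1
    refl_map = np.divide(refl_sum, counts, out=np.zeros_like(refl_sum),
                         where=counts > 0)  # mean reflectivity per cell
    return height_map, refl_map
```

This matches the text in that the number of feature maps (two) corresponds to the number of feature-information types, and multiple 3-D points can map to a single pixel.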
Step 503: Output a road marking area based on the regional reflectivity and height of the two-dimensional point cloud feature map.
In this step, optionally, the road marking area may be used as the road marking recognition result.
Exemplarily, the two-dimensional point cloud feature map may be divided into multiple regions according to the reflectivity and height of each pixel, where one region may correspond to one object. Further, the road marking area may be determined from the multiple regions according to the reflectivity and height of the object corresponding to each region, together with a target reflectivity and a target height. The target reflectivity represents the reflectivity expected when the object is a road marking, and the target height represents the height expected when the object is a road marking.
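A minimal sketch of this region selection, assuming hypothetical target ranges for reflectivity and relative height (the specific threshold values below are illustrative, not from the patent):

```python
import numpy as np

def road_marking_mask(refl_map, height_map,
                      target_refl=(0.4, 1.0), target_height=(-0.05, 0.05)):
    """Mark pixels whose reflectivity and relative height both fall in the
    ranges expected for a painted road marking (ranges are illustrative)."""
    r_lo, r_hi = target_refl
    h_lo, h_hi = target_height
    return ((refl_map >= r_lo) & (refl_map <= r_hi) &
            (height_map >= h_lo) & (height_map <= h_hi))
```

Pixels passing both tests form the candidate road marking area; everything else (low-reflectivity asphalt, raised objects) is excluded.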
Alternatively, optionally, the road marking recognition result may be further determined from the road marking area. Exemplarily, it may be further determined which specific road marking the road marking area corresponds to. Exemplarily, after step 503, the method may further include: performing clustering on the output road marking areas; recognizing the clustering result, and outputting the road marking corresponding to the result. Exemplarily, the road marking areas output in step 503 may be clustered by a clustering algorithm, so that road marking areas corresponding to the same road marking are grouped into one cluster and road marking areas corresponding to different road markings are divided into different clusters. For example, suppose the road marking areas include area a corresponding to a single solid line, area b corresponding to a broken line, area c corresponding to the single solid line, and area d corresponding to the broken line; the clustering may then group areas a and c into one cluster and areas b and d into another cluster.
By clustering first and then recognizing based on the clustering result, road marking areas corresponding to the same road marking can be grouped into one cluster, so that when recognition is performed on the clustering result, the category of a road marking can be determined from all of the road marking areas belonging to that marking, which helps improve recognition accuracy.
Exemplarily, recognizing the clustering result and outputting the road marking corresponding to the result may specifically include: performing a pixel-level comparison on the two-dimensional point cloud feature map according to the result, to obtain the road marking corresponding to the result. Exemplarily, the arrangement of the pixels belonging to a single cluster in the two-dimensional point cloud feature map may be determined from the road marking areas in that cluster, and the road marking corresponding to the cluster may be determined from that arrangement. For example, if the pixels belonging to one cluster are arranged discontinuously along a straight line, the road marking corresponding to the cluster is a broken line. Performing a pixel-level comparison on the two-dimensional point cloud feature map according to the result helps improve the confidence of the obtained road marking.
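The clustering step can be sketched with a simple 8-connected component labeling over the candidate mask. The patent does not name a specific clustering algorithm, so this is one plausible stand-in; the function name is invented for illustration.

```python
from collections import deque
import numpy as np

def cluster_marking_pixels(mask):
    """Group road-marking pixels into 8-connected components, a simple
    stand-in for the clustering step. Returns a label map where 0 means
    background and 1..n identify the clusters."""
    labels = np.zeros(mask.shape, dtype=np.int32)
    next_label = 0
    for si in range(mask.shape[0]):
        for sj in range(mask.shape[1]):
            if mask[si, sj] and labels[si, sj] == 0:
                next_label += 1                      # start a new cluster
                queue = deque([(si, sj)])
                labels[si, sj] = next_label
                while queue:                          # flood-fill the cluster
                    i, j = queue.popleft()
                    for di in (-1, 0, 1):
                        for dj in (-1, 0, 1):
                            ni, nj = i + di, j + dj
                            if (0 <= ni < mask.shape[0] and 0 <= nj < mask.shape[1]
                                    and mask[ni, nj] and labels[ni, nj] == 0):
                                labels[ni, nj] = next_label
                                queue.append((ni, nj))
    return labels
```

Each resulting label groups the pixels of one marking; a subsequent pixel-level comparison can then examine the arrangement of the pixels within a cluster (for example, discontinuous segments along a line suggest a broken line).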
In this embodiment, three-dimensional point cloud data detected by a lidar is obtained, where the three-dimensional point cloud data contains data of a road marking area; the three-dimensional point cloud data is compressed into a two-dimensional point cloud feature map, and a road marking area is output based on the regional reflectivity and height of the two-dimensional point cloud feature map. This realizes a road marking recognition approach based on the regional reflectivity and height of the two-dimensional point cloud feature map, which helps simplify the implementation.
FIG. 6 is a schematic flowchart of a road marking recognition method provided by yet another embodiment of this application. On the basis of the embodiment shown in FIG. 4, this embodiment mainly describes another optional implementation of processing the two-dimensional point cloud feature map to obtain the road marking recognition result. As shown in FIG. 6, the method of this embodiment may include:
Step 601: Obtain three-dimensional point cloud data detected by a lidar, where the three-dimensional point cloud data includes data of a road marking area.
It should be noted that step 601 is similar to step 401 and will not be repeated here.
Step 602: Compress the three-dimensional point cloud data into a two-dimensional point cloud feature map.
It should be noted that step 602 is similar to step 402 and step 502 and will not be repeated here.
Step 603: Input the two-dimensional point cloud feature map into a preset neural network model, and obtain a model output result of the preset neural network model.
In this step, the preset neural network model is used to determine the surface object category of each pixel in the two-dimensional point cloud feature map. The preset neural network model may include multiple output channels, the multiple output channels corresponding one-to-one with multiple surface object categories, where the multiple surface object categories include at least one road marking category. Each output channel is used to output a confidence feature map for its corresponding surface object category; the confidence feature map represents the probability that each pixel belongs to that category. The pixels in a confidence feature map may correspond one-to-one with the pixels in the two-dimensional point cloud feature map.
For example, suppose there are three road marking categories, namely single solid line, broken line, and left-turn arrow, and that the output channel for the single solid line outputs confidence feature map 1, the output channel for the broken line outputs confidence feature map 2, and the output channel for the left-turn arrow outputs confidence feature map 3. Then the pixel values in confidence feature map 1 represent the probability that a pixel is a single solid line, the pixel values in confidence feature map 2 represent the probability that a pixel is a broken line, and the pixel values in confidence feature map 3 represent the probability that a pixel is a left-turn arrow. It should be noted that in the embodiments of this application, a pixel "being" a surface object category means that the pixel position of that pixel is recognized as belonging to that surface object category.
It should be noted that when the multiple surface object categories include a single road marking category, the preset neural network can be used to recognize that one road marking category, which may represent one specific type of road marking or a set of at least two types of road markings.
Exemplarily, the multiple surface object categories may also include other surface object categories; correspondingly, the model output result may also include confidence feature maps for those other categories. For example, the model output result may include a confidence feature map for buildings, where the pixel values represent the probability that a pixel is a building.
Exemplarily, the multiple surface object categories may also include an "other" category, used to represent surface objects whose category cannot be recognized, so as to distinguish them from surface objects whose categories the preset neural network model can recognize.
Exemplarily, the model output result may be used directly as the road marking recognition result, thereby simplifying the implementation of the road marking recognition method.
Alternatively, exemplarily, after step 603, the method may further include: obtaining the road marking recognition result based on the model output result. A more specific road marking recognition result can thus be obtained, which facilitates subsequent processing after the road marking recognition result is obtained.
Exemplarily, obtaining the road marking recognition result based on the model output result includes: for each pixel position, taking the surface object category whose confidence feature map has the largest pixel value at that position, among the confidence feature maps of the multiple surface object categories, as the surface object category of that pixel position.
Suppose the preset neural network model has four output channels, whose four confidence feature maps are confidence feature maps 1 to 4, where confidence feature map 1 corresponds to the single solid line, map 2 to the broken line, map 3 to the left-turn arrow, and map 4 to "other". For example, if the pixel values at pixel position (100, 100) in confidence feature maps 1 through 4 are 70, 50, 20, and 20 respectively, it can be determined that the surface object category at pixel position (100, 100) is a single solid line.
As another example, if the pixel values at pixel position (100, 80) in confidence feature maps 1 through 4 are 20, 30, 20, and 70 respectively, it can be determined that the surface object category at pixel position (100, 80) is "other", i.e., none of single solid line, broken line, or left-turn arrow.
Since the pixel positions in the confidence feature maps correspond one-to-one with the pixel positions in the two-dimensional point cloud feature map, the surface object category at a pixel position also represents the surface object category at the corresponding pixel position in the two-dimensional point cloud feature map.
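The per-pixel selection described above is a channel-wise argmax over the stacked confidence feature maps. A minimal sketch (the function name is invented for illustration):

```python
import numpy as np

def per_pixel_category(conf_maps):
    """conf_maps: array of shape (C, H, W), one confidence feature map per
    surface object category. Returns, for each pixel position, the index of
    the category whose map has the largest value there."""
    return np.argmax(conf_maps, axis=0)
```

With category order [single solid line, broken line, left-turn arrow, other], a pixel whose four confidence values are 70, 50, 20, 20 is assigned category 0 (single solid line), matching the worked example in the text.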
Exemplarily, the surface object categories at the pixel positions of the two-dimensional point cloud feature map may be used as the road marking recognition result.
Alternatively, exemplarily, the method of this embodiment may further include taking the surface object category at a pixel position as the surface object category of the points in the three-dimensional point cloud data that correspond to that pixel position; that is, the surface object categories of the points in the three-dimensional point cloud data may be used as the road marking recognition result.
Exemplarily, the preset neural network model may specifically be a convolutional neural network (CNN) model. The structure of the preset neural network model may be, for example, as shown in FIG. 7. As shown in FIG. 7, the preset neural network model may include multiple computing nodes; each computing node may include a convolution (Conv) layer, batch normalization (BN), and a ReLU activation function, and the computing nodes may be connected by skip connections. Input data of size K×H×W can be fed into the preset neural network model, and after processing by the model, output data of size C×H×W is obtained, where K is the number of two-dimensional point cloud feature maps, H is the height of the feature maps, W is their width, and C is the number of categories.
It should be noted that when the two-dimensional point cloud feature map is too large, one two-dimensional point cloud feature map can be split into N sub-feature maps. Correspondingly, the input data can be N×K×H'×W' and the output data can be N×C×H'×W', where H' is the height of a sub-feature map and W' is its width.
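One computing node of the structure described for FIG. 7 can be sketched in plain numpy. This is only an illustration of the Conv-BN-ReLU-plus-skip pattern, not the patented model: the weights are random, the normalization uses per-map statistics rather than learned batch-norm parameters, and the function names are invented.

```python
import numpy as np

def conv3x3(x, w):
    """'Same'-padded 3x3 convolution; x: (C_in, H, W), w: (C_out, C_in, 3, 3)."""
    c_in, H, W = x.shape
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros((w.shape[0], H, W))
    for o in range(w.shape[0]):
        for i in range(c_in):
            for di in range(3):
                for dj in range(3):
                    out[o] += w[o, i, di, dj] * xp[i, di:di + H, dj:dj + W]
    return out

def compute_node(x, w, eps=1e-5):
    """One Conv-BN-ReLU computing node with a skip connection."""
    y = conv3x3(x, w)
    mean = y.mean(axis=(1, 2), keepdims=True)   # per-channel normalization
    var = y.var(axis=(1, 2), keepdims=True)
    y = (y - mean) / np.sqrt(var + eps)
    y = np.maximum(y, 0.0)                       # ReLU
    return y + x if y.shape == x.shape else y    # skip connection when shapes match
```

Stacking such nodes maps a K×H×W input to a C×H×W output while preserving the spatial dimensions, which is what allows the per-pixel category decision described above.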
In this embodiment, three-dimensional point cloud data detected by a lidar is obtained, where the three-dimensional point cloud data contains data of a road marking area; the three-dimensional point cloud data is compressed into a two-dimensional point cloud feature map, and the two-dimensional point cloud feature map is input into a preset neural network model to obtain the model output result. The preset neural network model distinguishes the semantics in the two-dimensional point cloud feature map to obtain the road marking recognition result, thereby realizing road marking recognition by means of a preset neural network model.
Optionally, on the basis of the foregoing embodiments, compressing the three-dimensional point cloud data into a two-dimensional point cloud feature map may specifically include: filtering the three-dimensional point cloud data according to a filtering condition to obtain filtered three-dimensional point cloud data, and compressing the filtered three-dimensional point cloud data into a two-dimensional point cloud feature map. By filtering out irrelevant data before the two-dimensional point cloud feature map is processed, the data volume of the feature map can be reduced and irrelevant data can be prevented from degrading the accuracy of the road marking recognition result. Exemplarily, the filtering condition may include a distance condition and/or a height condition. The distance condition may be, for example, a distance of less than 100 meters, so that only three-dimensional points within 100 meters of the lidar are retained, avoiding unnecessary recognition of points that are too far away and saving computing resources. The height condition may be, for example, that the relative height is greater than height threshold 1 and less than height threshold 2 (with height threshold 1 less than height threshold 2), so that only three-dimensional points whose relative height falls between the two thresholds are retained, avoiding unnecessary recognition of points that cannot be road markings and saving computing resources.
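The distance and height conditions can be sketched as a vectorized filter. The 100-meter distance bound comes from the text; the height thresholds below are illustrative placeholders, since the patent does not give concrete values.

```python
import numpy as np

def filter_points(points, max_dist=100.0, h_lo=-2.0, h_hi=0.5):
    """Keep only points within `max_dist` meters of the lidar and with a
    relative height between the two height thresholds (threshold values
    are illustrative). points: (N, 4) array of x, y, z, reflectivity."""
    dist = np.linalg.norm(points[:, :2], axis=1)   # horizontal distance to sensor
    keep = (dist < max_dist) & (points[:, 2] > h_lo) & (points[:, 2] < h_hi)
    return points[keep]
```

Applying this filter before compression shrinks the point cloud and removes distant or elevated points that cannot be road markings.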
Optionally, on the basis of the foregoing embodiments, obtaining the three-dimensional point cloud data detected by the lidar may specifically include: obtaining multiple frames of three-dimensional point cloud data detected by the lidar, and accumulating the multiple frames of three-dimensional point cloud data to obtain accumulated three-dimensional point cloud data. Multi-frame accumulation yields a denser point cloud and avoids the sparsity caused by the small number of points detected in a single frame. Correspondingly, compressing the three-dimensional point cloud data into a two-dimensional point cloud feature map may specifically include: compressing the accumulated three-dimensional point cloud data into a two-dimensional point cloud feature map.
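Multi-frame accumulation can be sketched as transforming each frame into a common reference frame and concatenating. The patent does not specify how frames are registered, so the per-frame pose (rotation R, translation t) here is an assumption, e.g. supplied by odometry; the function name is invented.

```python
import numpy as np

def accumulate_frames(frames, poses):
    """Transform each frame of points (N_i, 3) into a common reference
    frame using its pose (R, t) and concatenate them into one denser
    cloud; the pose source (e.g. odometry) is an assumption here."""
    clouds = []
    for pts, (R, t) in zip(frames, poses):
        clouds.append(pts @ R.T + t)   # rotate then translate each point
    return np.vstack(clouds)
```

The accumulated cloud is then compressed into the two-dimensional point cloud feature map exactly as a single frame would be.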
Optionally, on the basis of the foregoing method embodiments, the following step C and step D may also be included.
Step C: determine whether a preset environmental condition is met.
Step D: if the preset environmental condition is met, obtain image information collected by an image acquisition module, the image information containing the road marking area, and process the image information to obtain a road marking recognition result.
Similar to the lidar described above, the road marking recognition apparatus may include the image acquisition module, in which case obtaining the image information in step D may specifically include the image acquisition module of the apparatus capturing the image information; alternatively, the apparatus may not include the image acquisition module, in which case obtaining the image information in step D may specifically include receiving, from another device or apparatus, image information collected by an image acquisition module. The image acquisition module may be, for example, a camera.
The preset environmental condition may refer to an imaging-environment condition that must be met in order to obtain the road marking recognition result based on image information. When the preset environmental condition is met, the imaging environment is good and the imaging quality of the obtained image information is good, so the road marking recognition result obtained from the image information can be considered highly accurate, and the image-based approach can be used. When the preset environmental condition is not met, the imaging environment is poor and the imaging quality of the obtained image information is poor, so the road marking recognition result obtained from the image information can be considered less accurate, and the image-based approach may be skipped.
The preset environmental condition may include one or more of any environmental factors that affect imaging quality. Exemplarily, the preset environmental condition may include an ambient light condition and/or a lens contamination condition. Exemplarily, the ambient light condition may include the ambient light intensity being greater than an intensity threshold. Exemplarily, the lens contamination condition may include the degree of lens contamination being less than a contamination threshold.
It should be noted that the specific manner of processing the image information to obtain the road marking recognition result is not limited in the embodiments of the present application.
Through step C and step D, the road marking recognition apparatus can support not only road marking recognition based on three-dimensional point cloud data but also road marking recognition based on image information, which improves the flexibility of the apparatus in recognizing road markings.
Exemplarily, the road marking recognition result obtained by processing the two-dimensional point cloud feature map (hereinafter, the first road marking recognition result) and the road marking recognition result obtained by processing the image information (hereinafter, the second road marking recognition result) may both be used side by side as final road marking recognition results; for example, the first result may serve as the input of one function and the second result as the input of another.
Alternatively, and exemplarily, the first and second road marking recognition results may be fused according to a fusion strategy to obtain a fused road marking recognition result, which then serves as the final road marking recognition result. Fusing the first and second road marking recognition results can improve the recognition performance.
Exemplarily, the fusion strategy may include a distance-based fusion strategy, where the distance may refer to the distance along the road surface.
Exemplarily, fusing the first and second road marking recognition results according to the fusion strategy to obtain the fused result may specifically include: for the portion of the field of view whose distance is greater than a distance threshold, using the first road marking recognition result as the fused road marking recognition result. The farther the distance, the larger the actual area covered by a single pixel of the image information and the worse the accuracy of a distance determined from it; using the first result for the far field therefore avoids the inaccuracy that would result from using the second result beyond the distance threshold.
And/or, exemplarily, fusing the first and second road marking recognition results according to the fusion strategy to obtain the fused result may specifically include: for the portion of the field of view whose distance is less than the distance threshold, using the second road marking recognition result as the fused road marking recognition result. Since a lidar is far more expensive than an image acquisition module, using the second result for the near field helps save cost.
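The distance-based fusion strategy described above can be sketched as a simple per-detection selection; the detection representation and threshold value are illustrative assumptions:

```python
def fuse_by_distance(lidar_dets, camera_dets, dist_threshold=50.0):
    """Distance-based fusion: lidar result beyond the threshold, camera result within it.

    Each detection is a (distance_m, label) pair; the 50 m threshold is illustrative.
    """
    fused = [d for d in lidar_dets if d[0] > dist_threshold]    # first (point cloud) result
    fused += [d for d in camera_dets if d[0] <= dist_threshold]  # second (image) result
    return sorted(fused)

lidar = [(20.0, "solid_line"), (80.0, "arrow")]
camera = [(20.0, "solid_line"), (80.0, "dashed_line")]
print(fuse_by_distance(lidar, camera))
# -> [(20.0, 'solid_line'), (80.0, 'arrow')]
```

The far detection is taken from the lidar branch and the near one from the camera branch, matching the strategy of claims 17 and 18.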
Alternatively, and exemplarily, the image-based approach to obtaining the road marking recognition result may be chosen when the imaging environment is good, and the point-cloud-based approach when the imaging environment is poor. Exemplarily, obtaining the three-dimensional point cloud data detected by the lidar may include: obtaining the three-dimensional point cloud data detected by the lidar if the preset environmental condition is not met.
Optionally, on the basis of the foregoing embodiments, the method may further include: displaying the road marking recognition result, which facilitates viewing by the user. Exemplarily, displaying the road marking recognition result includes: annotating the road markings in a target image according to the road marking recognition result to obtain an annotated image, and displaying the annotated image. Exemplarily, different colors may be used to mark different categories of road markings; for example, green for a single solid line, yellow for a dashed line, and purple for an edge line.
Exemplarily, the target image includes one or more of the following: an all-black image, an all-white image, or an image containing the road marking area. An all-black image may be an image in which the red (R), green (G) and blue (B) values of every pixel are all 0, and an all-white image may be an image in which the R, G and B values of every pixel are all 255.
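A minimal sketch of annotating a recognition result onto an all-black target image; the color map follows the green/yellow/purple example above, while the mask representation and image size are illustrative assumptions:

```python
import numpy as np

# Illustrative color map following the examples in the text (RGB).
CLASS_COLORS = {"solid_line": (0, 255, 0),
                "dashed_line": (255, 255, 0),
                "edge_line": (128, 0, 128)}

def annotate(mask, label, height=4, width=4):
    """Paint the labeled pixels of a boolean mask onto an all-black target image."""
    img = np.zeros((height, width, 3), dtype=np.uint8)  # all-black target image (R=G=B=0)
    img[mask] = CLASS_COLORS[label]
    return img

mask = np.zeros((4, 4), dtype=bool)
mask[1, :] = True  # pretend row 1 was recognized as a single solid line
print(annotate(mask, "solid_line")[1, 0].tolist())  # -> [0, 255, 0]
```

Starting from an all-white image would only change the `np.zeros` fill value to 255.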
Optionally, on the basis of the foregoing embodiments, if the road surface is flat, then when the target direction is the vertical direction, the road distance between the lidar and a road marking obtained from the compressed three-dimensional point cloud data (i.e., the two-dimensional point cloud feature map) matches the actual road distance between the lidar and the road marking. Therefore, taking the vertical direction as the target direction ensures the accuracy of the obtained lidar-to-marking road distance when the road surface is flat.
However, for a scene with an uneven road surface, if the target direction is the vertical direction, the road distance between the lidar and a road marking obtained from the compressed three-dimensional point cloud data may not match the actual road distance between them. In other words, on an uneven road surface, the length of a road marking obtained from the compressed data, such as the length of a lane line, will be shorter than its actual length. The same problem exists for other road markings, which can prevent the road marking information obtained from the laser point cloud from corresponding to the road marking information obtained by other sensors. For road markings that need to be recognized, the compression of their shape may also lead to misrecognition.
As shown in FIG. 8, for a flat road surface R1, the road distance L1 between the lidar O and the road marking A1 obtained from three-dimensional point cloud data compressed along the vertical direction d1 is exactly the actual road distance between them. For an uphill road surface R2, the road distance L2 between the lidar O and the road marking A2 obtained from data compressed along the vertical direction d1 is smaller than the actual road distance L3+L4. It should be noted that FIG. 8 takes planar road surfaces as an example; for a scene where the road surface is curved, the same inaccuracy in the lidar-to-marking road distance also arises when the target direction is vertical.
On the basis of FIG. 8, assuming the road marking A2 is a single solid line, the relationship between the actual single solid line 901 and the single solid line 902 recognized from the compressed three-dimensional point cloud data may be as shown in FIG. 9A: the recognized single solid line 902 is shorter than the actual single solid line 901.
On the basis of FIG. 8, assuming the road marking A2 is an arrow, the relationship between the actual arrow 903 and the arrow 904 recognized from the compressed three-dimensional point cloud data may be as shown in FIG. 9B: the recognized arrow 904 is shorter than the actual arrow 903, and the triangular head of the recognized arrow 904 is flatter than that of the actual arrow 903.
Therefore, to avoid inaccurate lidar-to-marking road distances, the target direction can be determined dynamically according to the undulation of the road surface, so that the road distance between the lidar and a road marking obtained from point cloud data compressed along the target direction matches the actual road distance.
Exemplarily, the undulation of the road surface can be obtained by three-dimensional modeling from the three-dimensional point cloud data detected by the lidar, and the point cloud data can then be compressed according to that undulation. Exemplarily, a correspondence between different point cloud ranges and target directions can be determined from the road undulation, and the point cloud data within each range compressed along its corresponding target direction. For example, as shown in FIG. 10, the target direction for the point cloud data within the flat road range can be determined to be the vertical direction d1, and the target direction for the point cloud data within the uphill road range to be the inclined direction d2 perpendicular to that road surface; the three-dimensional point cloud data of the flat range is then compressed along d1 and that of the uphill range along d2. With the approach of FIG. 10, the road distance L1 between the lidar and the road marking A1 obtained from the compressed data is the actual road distance between them, and
the road distance L3+L4 between the lidar and the road marking A2 obtained from the compressed point cloud data is likewise the actual road distance between them. At the same time, the method of this embodiment resolves the changes in recognized road marking size and shape described above.
It should be noted that FIG. 10 takes planar road surfaces as an example. For a scene where the road surface is curved, the road can be divided into multiple ranges at a certain granularity, and for each of the multiple ranges the direction perpendicular to its tangent plane can be used as the target direction for the point cloud data within it. Understandably, the smaller the granularity, the higher the accuracy of the lidar-to-marking road distance.
For a curved road surface, as shown in FIG. 11, the road in front of the vehicle can be divided into multiple ranges, and the three-dimensional point cloud data within each range compressed along its corresponding target direction. It should be noted that FIG. 11 takes a division into 13 ranges as an example; the arrow above each range indicates the target direction corresponding to that range.
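The per-range compression above can be sketched as projecting each range's points onto the plane perpendicular to its own target direction; treating projection as removal of the along-direction component is an illustrative simplification:

```python
import numpy as np

def project_range(points, direction):
    """Project points of one road-surface range along its target direction.

    points: (N, 3) xyz array; direction: the range's target direction (e.g. d1 or d2).
    """
    n = direction / np.linalg.norm(direction)
    return points - np.outer(points @ n, n)  # drop the along-direction component

flat_pts = np.array([[3.0, 0.0, 1.0]])
proj = project_range(flat_pts, np.array([0.0, 0.0, 1.0]))  # vertical target direction d1
print(proj.tolist())  # -> [[3.0, 0.0, 0.0]]
```

For an uphill range, passing the surface normal d2 instead of the vertical keeps along-road lengths intact, which is the point of the dynamic target direction.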
FIG. 12 is a schematic structural diagram of a road marking recognition apparatus provided by an embodiment of the present application. As shown in FIG. 12, the apparatus 1200 may include a processor 1201 and a memory 1202.
The memory 1202 is configured to store program code.
The processor 1201 calls the program code and, when the program code is executed, performs the following operations:
obtaining three-dimensional point cloud data detected by a lidar, the three-dimensional point cloud data containing reflection data of a road marking area;
compressing the three-dimensional point cloud data into a two-dimensional point cloud feature map;
processing the two-dimensional point cloud feature map to obtain a road marking recognition result.
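The three operations above can be sketched end to end as a toy pipeline; the grid resolution, the max-over-reflectivity reduction, and the thresholding "recognizer" are illustrative stand-ins, not the embodiment's actual recognition step:

```python
import numpy as np

def compress_to_feature_map(points, cell=1.0, size=4):
    """Project (x, y, z, reflectivity) points vertically onto a reflectivity grid."""
    fmap = np.zeros((size, size), dtype=np.float32)
    for x, y, z, refl in points:
        i, j = int(x // cell), int(y // cell)
        if 0 <= i < size and 0 <= j < size:
            fmap[i, j] = max(fmap[i, j], refl)  # keep the strongest return per cell
    return fmap

def recognize(fmap, refl_threshold=0.5):
    """Toy recognition: cells above the reflectivity threshold count as marking."""
    return fmap > refl_threshold

cloud = np.array([[0.5, 0.5, 0.0, 0.9],   # high-reflectivity paint return
                  [2.5, 2.5, 0.0, 0.1]])  # plain asphalt return
result = recognize(compress_to_feature_map(cloud))
print(int(result.sum()))  # -> 1
```

In the embodiments, the thresholding step would be replaced by the clustering or neural-network recognition described earlier.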
The road marking recognition apparatus provided in this embodiment can be used to implement the technical solutions of the foregoing method embodiments; its implementation principles and technical effects are similar to those of the method embodiments and are not repeated here.
A person of ordinary skill in the art can understand that all or part of the steps of the foregoing method embodiments can be implemented by hardware under the instruction of a program. The foregoing program can be stored in a computer-readable storage medium; when executed, the program performs the steps of the foregoing method embodiments. The foregoing storage medium includes various media that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disc.
Finally, it should be noted that the above embodiments are only intended to illustrate, not to limit, the technical solutions of the present application. Although the present application has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments can still be modified, or some or all of their technical features equivalently replaced, and that such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present application.

Claims (56)

  1. A road marking recognition method, characterized by comprising:
    obtaining three-dimensional point cloud data detected by a lidar, the three-dimensional point cloud data containing reflection data of a road marking area;
    compressing the three-dimensional point cloud data into a two-dimensional point cloud feature map;
    processing the two-dimensional point cloud feature map to obtain a road marking recognition result.
  2. The method according to claim 1, characterized in that the compressing the three-dimensional point cloud data into a two-dimensional point cloud feature map comprises:
    compressing the three-dimensional point cloud data along a target direction to obtain a two-dimensional point cloud feature map containing feature information, the feature information including reflectivity information and/or relative height information, and the target direction being a direction along which the feature information used to recognize road markings can be preserved.
  3. The method according to claim 2, characterized in that the compressing the three-dimensional point cloud data along the target direction to obtain a two-dimensional point cloud feature map containing reflectivity information and/or relative height information comprises:
    projecting and compressing the three-dimensional point cloud data along the target direction to obtain two-dimensional point cloud data;
    extracting feature information from the two-dimensional point cloud data to obtain a two-dimensional point cloud feature map containing the feature information.
  4. The method according to claim 2 or 3, characterized in that the processing the two-dimensional point cloud feature map to obtain a first road marking recognition result comprises:
    outputting a road marking area based on the regional reflectivity and height of the two-dimensional point cloud feature map.
  5. The method according to claim 4, characterized in that the processing the two-dimensional point cloud feature map to obtain a road marking recognition result further comprises:
    performing clustering on the output road marking area;
    recognizing the result of the clustering, and outputting the road marking corresponding to the result.
  6. The method according to claim 5, characterized in that the recognizing the result of the clustering and outputting the road marking corresponding to the result comprises:
    performing a pixel-level comparison on the two-dimensional point cloud feature map according to the result to obtain the road marking corresponding to the result.
  7. The method according to any one of claims 1-3, characterized in that the processing the two-dimensional point cloud feature map to obtain a road marking recognition result comprises:
    inputting the two-dimensional point cloud feature map into a preset neural network model to obtain a model output result of the preset neural network model;
    the model output result including a confidence feature map of at least one road marking category, the confidence feature map of a single road marking category being used to characterize the probability that a pixel belongs to that road marking category.
  8. The method according to claim 7, characterized in that the processing the two-dimensional point cloud feature map to obtain a road marking recognition result further comprises:
    obtaining the road marking recognition result based on the model output result.
  9. The method according to claim 8, characterized in that the preset neural network model comprises multiple output channels in one-to-one correspondence with multiple surface object categories, each output channel being used to output the confidence feature map of its corresponding surface object category, the multiple surface object categories including the at least one road marking category; and the obtaining the road marking recognition result based on the model output result comprises:
    for each pixel position, taking the surface object category whose confidence feature map has the largest pixel value at that position, among the confidence feature maps of the multiple surface object categories, as the surface object category of that pixel position.
  10. The method according to claim 9, characterized in that the obtaining the road marking recognition result based on the model output result further comprises:
    taking the surface object category of the pixel position as the surface object category of the point in the three-dimensional point cloud data corresponding to that pixel position.
  11. The method according to any one of claims 1-10, characterized in that the compressing the three-dimensional point cloud data into a two-dimensional point cloud feature map comprises:
    filtering the three-dimensional point cloud data according to a filtering condition to obtain filtered three-dimensional point cloud data;
    compressing the filtered three-dimensional point cloud data into a two-dimensional point cloud feature map.
  12. The method according to claim 11, characterized in that the filtering condition comprises a distance condition and/or a height condition.
  13. The method according to any one of claims 1-10, characterized in that the obtaining three-dimensional point cloud data detected by a lidar comprises:
    obtaining multiple frames of three-dimensional point cloud data detected by the lidar;
    accumulating the multiple frames of three-dimensional point cloud data to obtain accumulated three-dimensional point cloud data;
    and the compressing the three-dimensional point cloud data into a two-dimensional point cloud feature map comprises:
    compressing the accumulated three-dimensional point cloud data into a two-dimensional point cloud feature map.
  14. The method according to any one of claims 1-10, characterized in that the method further comprises:
    determining whether a preset environmental condition is met;
    if the preset environmental condition is met, obtaining image information collected by an image acquisition module, the image information containing the road marking area, and processing the image information to obtain a road marking recognition result.
  15. The method according to claim 14, characterized in that the method further comprises:
    fusing, according to a fusion strategy, a first road marking recognition result obtained by processing the two-dimensional point cloud feature map and a second road marking recognition result obtained by processing the image information, to obtain a fused road marking recognition result.
  16. The method according to claim 15, characterized in that the fusion strategy comprises a distance-based fusion strategy.
  17. The method according to claim 16, characterized in that the fusing, according to the fusion strategy, the first road marking recognition result and the second road marking recognition result to obtain a fused road marking recognition result comprises:
    for a field of view whose distance is greater than a distance threshold, using the first road marking recognition result as the fused road marking recognition result.
  18. The method according to claim 16, characterized in that the fusing, according to the fusion strategy, the first road marking recognition result and the second road marking recognition result to obtain a fused road marking recognition result comprises:
    for a field of view whose distance is less than the distance threshold, using the second road marking recognition result as the fused road marking recognition result.
  19. The method according to claim 14, characterized in that the obtaining three-dimensional point cloud data detected by a lidar comprises:
    obtaining the three-dimensional point cloud data detected by the lidar if the preset environmental condition is not met.
  20. The method according to claim 14, characterized in that the preset environmental condition comprises an ambient light condition and/or a lens contamination condition.
  21. The method according to claim 20, characterized in that the ambient light condition comprises the ambient light intensity being greater than an intensity threshold.
  22. The method according to claim 20, characterized in that the lens contamination condition comprises the degree of lens contamination being greater than a contamination threshold.
  23. The method according to any one of claims 1-10, characterized in that the method further comprises:
    displaying the road marking recognition result.
  24. The method according to claim 23, characterized in that the displaying the road marking recognition result comprises:
    annotating road markings in a target image according to the road marking recognition result to obtain an annotated image, and displaying the annotated image.
  25. The method according to claim 24, characterized in that the target image comprises one or more of the following:
    an all-black image, an all-white image, or an image containing the road marking area.
  26. The method according to any one of claims 1-10, characterized in that the lidar is arranged on a mobile platform.
  27. The method according to claim 26, characterized in that the mobile platform comprises an autonomous vehicle and/or a semi-autonomous vehicle.
  28. A road marking recognition apparatus, characterized by comprising a processor and a memory;
    the memory being configured to store program code;
    the processor calling the program code and, when the program code is executed, performing the following operations:
    obtaining three-dimensional point cloud data detected by a lidar, the three-dimensional point cloud data containing reflection data of a road marking area;
    compressing the three-dimensional point cloud data into a two-dimensional point cloud feature map;
    processing the two-dimensional point cloud feature map to obtain a road marking recognition result.
  29. 根据权利要求28所述的装置,其特征在于,所述处理器用于将所述三维点云数据压缩成二维点云特征图,具体包括:The device according to claim 28, wherein the processor is configured to compress the three-dimensional point cloud data into a two-dimensional point cloud feature map, which specifically comprises:
    根据目标方向压缩所述三维点云数据,得到包含特征信息的二维点云特征图,所述特征信息包括反射率信息和/或相对高度信息,所述目标方向为能够保留用于识别路面标识的特征信息的方向。Compress the three-dimensional point cloud data according to the target direction to obtain a two-dimensional point cloud feature map containing feature information, the feature information including reflectivity information and/or relative height information, and the target direction can be reserved for identifying road signs The direction of the feature information.
  30. 根据权利要求29所述的装置,其特征在于,所述处理器用于根据目标方向压缩所述三维点云数据,得到包含反射率信息和/或相对高度信息的二维点云特征图,具体包括:The device according to claim 29, wherein the processor is configured to compress the three-dimensional point cloud data according to the target direction to obtain a two-dimensional point cloud feature map containing reflectivity information and/or relative height information, which specifically includes :
    沿所述目标方向对所述三维点云数据进行投影压缩,得到二维点云数据;Performing projection compression on the three-dimensional point cloud data along the target direction to obtain two-dimensional point cloud data;
    从所述二维点云数据中提取出特征信息,得到包含所述特征信息的二维点云特征图。The feature information is extracted from the two-dimensional point cloud data, and a two-dimensional point cloud feature map containing the feature information is obtained.
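The projection compression described in claims 29 and 30 can be read as a bird's-eye-view rasterization along the vertical axis. A minimal Python sketch follows; the grid extent, cell size, and per-cell maximum aggregation are illustrative assumptions, not values taken from the patent:

```python
def compress_to_bev(points, cell=0.5, grid=(4, 4)):
    """Project 3D points (x, y, z, reflectivity) along the vertical axis
    onto a 2D grid, keeping per-cell maximum reflectivity and relative
    height. Cell size and grid extent are illustrative assumptions."""
    rows, cols = grid
    refl = [[0.0] * cols for _ in range(rows)]
    height = [[0.0] * cols for _ in range(rows)]
    ground = min(p[2] for p in points)  # ground reference for relative height
    for x, y, z, r in points:
        i, j = int(y / cell), int(x / cell)
        if 0 <= i < rows and 0 <= j < cols:
            refl[i][j] = max(refl[i][j], r)
            height[i][j] = max(height[i][j], z - ground)
    return refl, height

points = [(0.1, 0.1, 0.0, 0.9),   # lane paint: high reflectivity
          (0.2, 0.1, 0.0, 0.8),
          (1.6, 1.6, 0.1, 0.2)]   # plain asphalt: low reflectivity
refl_map, height_map = compress_to_bev(points)
```

The two returned maps correspond to the reflectivity and relative-height channels of the two-dimensional point cloud feature map.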
  31. The apparatus according to claim 29 or 30, wherein the processor being configured to process the two-dimensional point cloud feature map to obtain a first road marking recognition result specifically comprises:
    outputting a road marking area based on the area reflectivity and height of the two-dimensional point cloud feature map.
  32. The apparatus according to claim 31, wherein the processor is further configured to:
    perform clustering processing on the output road marking area;
    recognize the result of the clustering processing and output the road marking corresponding to the result.
  33. The apparatus according to claim 32, wherein the processor being configured to recognize the result of the clustering processing and output the road marking corresponding to the result specifically comprises:
    performing a pixel-level comparison on the two-dimensional point cloud feature map according to the result to obtain the road marking corresponding to the result.
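One way to realize the clustering step of claim 32 is connected-component labelling over the candidate road-marking pixels; each resulting cluster can then be compared pixel-by-pixel against the feature map as in claim 33. A sketch under assumptions not stated in the patent (4-connectivity, a binary candidate mask as input):

```python
def cluster_regions(mask):
    """4-connected component labelling of candidate marking pixels;
    a stand-in for the unspecified clustering step in the claims."""
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    n_clusters = 0
    for i in range(h):
        for j in range(w):
            if mask[i][j] and not labels[i][j]:
                n_clusters += 1
                stack = [(i, j)]       # flood-fill one connected region
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < h and 0 <= x < w and mask[y][x] and not labels[y][x]:
                        labels[y][x] = n_clusters
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return labels, n_clusters

mask = [[1, 1, 0, 0],
        [0, 0, 0, 1],
        [0, 0, 0, 1]]
labels, n = cluster_regions(mask)
```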
  34. The apparatus according to any one of claims 28-30, wherein the processor being configured to process the two-dimensional point cloud feature map to obtain a road marking recognition result specifically comprises:
    inputting the two-dimensional point cloud feature map into a preset neural network model to obtain a model output result of the preset neural network model;
    the model output result comprising a confidence feature map of at least one road marking category, the confidence feature map of a single road marking category being used to characterize the probability that a pixel belongs to that road marking category.
  35. The apparatus according to claim 34, wherein the processor is further configured to:
    obtain the road marking recognition result based on the model output result.
  36. The apparatus according to claim 35, wherein the preset neural network model comprises a plurality of output channels in one-to-one correspondence with a plurality of surface object categories, each output channel being used to output the confidence feature map of the corresponding surface object category, and the plurality of surface object categories comprising the at least one road marking category;
    the processor being configured to obtain the road marking recognition result according to the model output result specifically comprises:
    for each pixel position, taking the surface object category whose confidence feature map has the largest pixel value at that pixel position, among the confidence feature maps of the plurality of surface object categories, as the surface object category of the pixel position.
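The rule in claim 36 is a per-pixel argmax over the per-category confidence maps. The following illustration uses three invented example categories and confidence values; none of them come from the patent:

```python
# Per-class confidence maps from a hypothetical three-class model:
# index 0 = background, 1 = lane line, 2 = crosswalk (example classes).
conf_maps = [
    [[0.7, 0.2], [0.1, 0.3]],   # background confidences
    [[0.2, 0.7], [0.1, 0.1]],   # lane-line confidences
    [[0.1, 0.1], [0.8, 0.6]],   # crosswalk confidences
]

rows, cols = 2, 2
# For each pixel, pick the category whose map holds the largest value there.
class_map = [[max(range(len(conf_maps)), key=lambda c: conf_maps[c][i][j])
              for j in range(cols)] for i in range(rows)]
```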
  37. The apparatus according to claim 36, wherein the processor is further configured to:
    take the road marking category of the pixel position as the road marking category of the point data corresponding to the pixel position in the three-dimensional point cloud data.
  38. The apparatus according to any one of claims 28-37, wherein the processor being configured to compress the three-dimensional point cloud data into a two-dimensional point cloud feature map specifically comprises:
    filtering the three-dimensional point cloud data according to filtering conditions to obtain filtered three-dimensional point cloud data;
    compressing the filtered three-dimensional point cloud data into a two-dimensional point cloud feature map.
  39. The apparatus according to claim 38, wherein the filtering conditions comprise a distance condition and/or a height condition.
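A sketch of the filtering in claims 38-39, assuming the distance condition bounds the horizontal range and the height condition keeps near-ground points where markings lie; the 50 m and 0.5 m thresholds are illustrative, not values from the patent:

```python
import math

def filter_points(points, max_dist=50.0, max_height=0.5):
    """Keep only points within a horizontal distance and close to the
    ground. Thresholds are illustrative assumptions."""
    kept = []
    for x, y, z in points:
        if math.hypot(x, y) <= max_dist and z <= max_height:
            kept.append((x, y, z))
    return kept

points = [(10.0, 0.0, 0.1),   # near, on the ground: kept
          (60.0, 0.0, 0.1),   # beyond range: dropped
          (10.0, 0.0, 2.0)]   # elevated (e.g. a sign): dropped
filtered = filter_points(points)
```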
  40. The apparatus according to any one of claims 28-37, wherein the processor being configured to obtain three-dimensional point cloud data detected by the lidar specifically comprises:
    obtaining multiple frames of three-dimensional point cloud data detected by the lidar;
    accumulating the multiple frames of three-dimensional point cloud data to obtain accumulated three-dimensional point cloud data;
    the compressing of the three-dimensional point cloud data into a two-dimensional point cloud feature map comprising:
    compressing the accumulated three-dimensional point cloud data into a two-dimensional point cloud feature map.
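The multi-frame accumulation of claim 40 amounts to merging several successive clouds into one denser cloud before compression. A minimal sketch, under the assumption that the frames are already registered to a common reference frame (in practice each frame would first be transformed by the vehicle pose):

```python
def accumulate_frames(frames):
    """Merge several lidar frames (lists of (x, y, z, reflectivity)
    tuples) into one denser cloud before compression."""
    cloud = []
    for frame in frames:
        cloud.extend(frame)
    return cloud

frames = [[(0.0, 0.0, 0.0, 0.9)],
          [(0.5, 0.0, 0.0, 0.8), (1.0, 0.0, 0.0, 0.7)]]
accumulated = accumulate_frames(frames)
```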
  41. The apparatus according to any one of claims 28-37, wherein the processor is further configured to:
    determine whether preset environmental conditions are met;
    if the preset environmental conditions are met, obtain image information collected by an image acquisition module, the image information containing the road marking area, and process the image information to obtain a road marking recognition result.
  42. The apparatus according to claim 41, wherein the processor is further configured to:
    merge, according to a fusion strategy, a first road marking recognition result obtained by processing the two-dimensional point cloud feature map and a second road marking recognition result obtained by processing the image information, to obtain a fused road marking recognition result.
  43. The apparatus according to claim 42, wherein the fusion strategy comprises a distance-based fusion strategy.
  44. The apparatus according to claim 43, wherein the processor being configured to merge the first road marking recognition result and the second road marking recognition result according to the fusion strategy to obtain the fused road marking recognition result specifically comprises:
    for a field of view whose distance is greater than a distance threshold, taking the first road marking recognition result as the fused road marking recognition result.
  45. The apparatus according to claim 43, wherein the processor being configured to merge the first road marking recognition result and the second road marking recognition result according to the fusion strategy to obtain the fused road marking recognition result specifically comprises:
    for a field of view whose distance is less than the distance threshold, taking the second road marking recognition result as the fused road marking recognition result.
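Claims 44 and 45 together describe a distance-gated fusion: beyond a distance threshold the lidar-derived (first) result is kept, inside it the camera-derived (second) result is kept. A sketch under the assumption that each result maps a distance to a recognized label; the dict format and 30 m threshold are invented for illustration:

```python
def fuse_by_distance(first_result, second_result, threshold=30.0):
    """Distance-gated fusion: beyond the threshold keep the first
    (point-cloud) result, inside it keep the second (image) result.
    Result format and threshold are illustrative assumptions."""
    fused = {}
    for dist, label in first_result.items():
        if dist > threshold:          # far field: trust the lidar result
            fused[dist] = label
    for dist, label in second_result.items():
        if dist < threshold:          # near field: trust the camera result
            fused[dist] = label
    return fused

lidar_result = {40.0: "lane_line", 10.0: "uncertain"}
camera_result = {40.0: "uncertain", 10.0: "crosswalk"}
fused = fuse_by_distance(lidar_result, camera_result)
```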
  46. The apparatus according to claim 41, wherein the processor being configured to obtain the three-dimensional point cloud data detected by the lidar specifically comprises:
    if the preset environmental conditions are not met, performing the step of obtaining the three-dimensional point cloud data detected by the lidar.
  47. The apparatus according to claim 41, wherein the preset environmental conditions comprise an ambient light condition and/or a lens contamination degree condition.
  48. The apparatus according to claim 47, wherein the ambient light condition comprises the ambient light intensity being greater than an intensity threshold.
  49. The apparatus according to claim 47, wherein the lens contamination degree condition comprises the lens contamination degree being greater than a degree threshold.
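A literal reading of claims 46-49 as a sensor-selection rule: when the preset environmental conditions hold, the camera path is used; otherwise the lidar path. The threshold values and the or-combination of the two conditions are illustrative assumptions, not specified by the patent:

```python
def conditions_met(ambient_light, lens_dirt,
                   light_threshold=100.0, dirt_threshold=0.5):
    """Literal reading of claims 47-49: the preset environmental
    condition holds when ambient light intensity exceeds an intensity
    threshold and/or lens contamination exceeds a degree threshold.
    Thresholds are illustrative assumptions."""
    return ambient_light > light_threshold or lens_dirt > dirt_threshold

def select_source(ambient_light, lens_dirt):
    # Claims 41 and 46: conditions met -> camera path; otherwise lidar.
    return "camera" if conditions_met(ambient_light, lens_dirt) else "lidar"
```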
  50. The apparatus according to any one of claims 28-37, wherein the processor is further configured to:
    display the road marking recognition result.
  51. The apparatus according to claim 50, wherein the processor being configured to display the road marking recognition result specifically comprises:
    marking the road marking in a target image according to the road marking recognition result to obtain an annotated image, and displaying the annotated image.
  52. The apparatus according to claim 51, wherein the target image comprises one or more of the following:
    an all-black image, an all-white image, or an image containing the road marking area.
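Claims 51-52 describe annotating a target image with the recognition result; the target may be, among the listed options, an all-black image. A minimal sketch using nested lists as a grayscale image, with recognized marking pixels painted white (the 255 value and mask format are illustrative assumptions):

```python
def annotate(result_mask, target=None):
    """Draw the recognition result onto a target image (claim 51). By
    default the target is an all-black image, one of the options in
    claim 52; recognized marking pixels are painted white (255)."""
    rows, cols = len(result_mask), len(result_mask[0])
    if target is None:
        target = [[0] * cols for _ in range(rows)]  # all-black image
    out = [row[:] for row in target]
    for i in range(rows):
        for j in range(cols):
            if result_mask[i][j]:
                out[i][j] = 255
    return out

mask = [[1, 0], [0, 1]]
image = annotate(mask)
```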
  53. The apparatus according to any one of claims 28-37, wherein the lidar is arranged on a mobile platform.
  54. The apparatus according to claim 53, wherein the mobile platform comprises an autonomous vehicle and/or a semi-autonomous vehicle.
  55. A computer-readable storage medium storing a computer program, wherein the computer program contains at least one piece of code, and the at least one piece of code is executable by a computer to control the computer to perform the method according to any one of claims 1-27.
  56. A computer program which, when executed by a computer, implements the method according to any one of claims 1-27.
PCT/CN2019/109290 2019-09-30 2019-09-30 Road marking recognition method and apparatus WO2021062581A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201980033738.9A CN112204568A (en) 2019-09-30 2019-09-30 Pavement mark recognition method and device
PCT/CN2019/109290 WO2021062581A1 (en) 2019-09-30 2019-09-30 Road marking recognition method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/109290 WO2021062581A1 (en) 2019-09-30 2019-09-30 Road marking recognition method and apparatus

Publications (1)

Publication Number Publication Date
WO2021062581A1 true WO2021062581A1 (en) 2021-04-08

Family

ID=74004779

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/109290 WO2021062581A1 (en) 2019-09-30 2019-09-30 Road marking recognition method and apparatus

Country Status (2)

Country Link
CN (1) CN112204568A (en)
WO (1) WO2021062581A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112801036A (en) * 2021-02-25 2021-05-14 同济大学 Target identification method, training method, medium, electronic device and automobile

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105184852A (en) * 2015-08-04 2015-12-23 百度在线网络技术(北京)有限公司 Laser-point-cloud-based urban road identification method and apparatus
CN106683530A (en) * 2017-02-21 2017-05-17 南京多伦科技股份有限公司 Computerized judging system and method based on three-dimensional laser vision and high-precision lane model
CN106780735A (en) * 2016-12-29 2017-05-31 深圳先进技术研究院 A kind of semantic map constructing method, device and a kind of robot
US20180089536A1 (en) * 2016-09-27 2018-03-29 Beijing Baidu Netcom Science And Technology Co., Ltd. Method and apparatus for processing point cloud data
CN108932475A (en) * 2018-05-31 2018-12-04 中国科学院西安光学精密机械研究所 A kind of Three-dimensional target recognition system and method based on laser radar and monocular vision

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113408454A (en) * 2021-06-29 2021-09-17 上海高德威智能交通***有限公司 Traffic target detection method and device, electronic equipment and detection system
CN113408454B (en) * 2021-06-29 2024-02-06 上海高德威智能交通***有限公司 Traffic target detection method, device, electronic equipment and detection system
CN113591777A (en) * 2021-08-11 2021-11-02 宁波未感半导体科技有限公司 Laser radar signal processing method, electronic device, and storage medium
CN113591777B (en) * 2021-08-11 2023-12-08 宁波未感半导体科技有限公司 Laser radar signal processing method, electronic equipment and storage medium
CN113808142A (en) * 2021-08-19 2021-12-17 高德软件有限公司 Ground identifier identification method and device and electronic equipment
CN113808142B (en) * 2021-08-19 2024-04-26 高德软件有限公司 Ground identification recognition method and device and electronic equipment
CN117471433A (en) * 2023-12-28 2024-01-30 广东威恒输变电工程有限公司 Construction machinery laser point cloud real-time extraction method based on high reflection intensity target
CN117471433B (en) * 2023-12-28 2024-04-02 广东威恒输变电工程有限公司 Construction machinery laser point cloud real-time extraction method based on high reflection intensity target

Also Published As

Publication number Publication date
CN112204568A (en) 2021-01-08

Similar Documents

Publication Publication Date Title
WO2021062581A1 (en) Road marking recognition method and apparatus
US11987250B2 (en) Data fusion method and related device
WO2020243962A1 (en) Object detection method, electronic device and mobile platform
US20210255329A1 (en) Environment sensing system and movable platform
WO2022126427A1 (en) Point cloud processing method, point cloud processing apparatus, mobile platform, and computer storage medium
CN107092021B (en) Vehicle-mounted laser radar three-dimensional scanning method, and ground object classification method and system
WO2021072710A1 (en) Point cloud fusion method and system for moving object, and computer storage medium
CN114616489A (en) LIDAR image processing
WO2021051281A1 (en) Point-cloud noise filtering method, distance measurement device, system, storage medium, and mobile platform
WO2020124318A1 (en) Method for adjusting movement speed of scanning element, ranging device and movable platform
WO2020113475A1 (en) Ranging apparatus and scan field of view equalization method thereof, and mobile platform
WO2022198637A1 (en) Point cloud noise filtering method and system, and movable platform
WO2020215252A1 (en) Method for denoising point cloud of distance measurement device, distance measurement device and mobile platform
US20210255289A1 (en) Light detection method, light detection device, and mobile platform
WO2021232227A1 (en) Point cloud frame construction method, target detection method, ranging apparatus, movable platform, and storage medium
WO2020237663A1 (en) Multi-channel lidar point cloud interpolation method and ranging apparatus
US20230090576A1 (en) Dynamic control and configuration of autonomous navigation systems
US20210341588A1 (en) Ranging device and mobile platform
US20220082665A1 (en) Ranging apparatus and method for controlling scanning field of view thereof
WO2022170535A1 (en) Distance measurement method, distance measurement device, system, and computer readable storage medium
WO2022256976A1 (en) Method and system for constructing dense point cloud truth value data and electronic device
WO2021253429A1 (en) Data processing method and apparatus, and laser radar and storage medium
CN111830525B (en) Laser triangle ranging system
WO2020155142A1 (en) Point cloud resampling method, device and system
WO2021026766A1 (en) Motor rotation speed control method and device for scanning module, and distance measurement device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19947533

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19947533

Country of ref document: EP

Kind code of ref document: A1