CN112906449A - Dense disparity map-based road surface pothole detection method, system and equipment - Google Patents

Dense disparity map-based road surface pothole detection method, system and equipment

Info

Publication number
CN112906449A
Authority
CN
China
Prior art keywords
road
detection
information
point cloud
road surface
Prior art date
Legal status
Granted
Application number
CN202011390618.5A
Other languages
Chinese (zh)
Other versions
CN112906449B (en)
Inventor
裴姗姗
王欣亮
孙钊
李建
罗杰
Current Assignee
Beijing Smarter Eye Technology Co Ltd
Original Assignee
Beijing Smarter Eye Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Smarter Eye Technology Co Ltd filed Critical Beijing Smarter Eye Technology Co Ltd
Priority to CN202011390618.5A priority Critical patent/CN112906449B/en
Publication of CN112906449A publication Critical patent/CN112906449A/en
Application granted granted Critical
Publication of CN112906449B publication Critical patent/CN112906449B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30244Camera pose
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle
    • G06T2207/30256Lane; Road marking

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a dense disparity map-based road surface pothole detection method, system and equipment. The method comprises the following steps: acquiring left and right views of the same road scene, and calculating a dense disparity map of the road scene based on the left and right views; intercepting a detection area based on the obtained dense disparity map; calculating regional point cloud information based on the image information and disparity information of the detection area; modeling the road plane, and calculating the height information from each discrete point in the regional point cloud to the plane where the road surface equation lies; generating a height map according to the regional point cloud information and the height information; filtering and correcting the detection result by fusing the height maps of multiple frames of continuous images to obtain a fusion result; and detecting and confirming, according to the fusion result, the road undulation condition of the detection area ahead and any areas with obvious pothole characteristics while the vehicle is running. This solves the technical problems in the prior art of poor driving comfort and safety during automatic or assisted driving caused by the lack of pothole detection in the driving area.

Description

Dense disparity map-based road surface pothole detection method, system and equipment
Technical Field
The invention relates to the technical field of automatic driving, and in particular to a dense disparity map-based road surface pothole detection method, system and equipment.
Background
Currently, research on automobile driving assistance is progressing with the development of image processing technology; such research realizes functions such as detection of objects imaged ahead through a sensor's ability to perceive the environment and understand the scene. Monitoring the road ahead and assessing its dangerous conditions is a key link in the development of automatic driving technology. If the pothole conditions on the road ahead can be detected in real time, the vehicle can reduce the influence of the road by adjusting its suspension settings, ensuring driving stability and comfort, and reducing the occurrence of tire blowouts, wheel and vehicle damage, and road traffic accidents.
Therefore, providing a road surface pothole detection method that improves driving comfort and safety by monitoring the road conditions ahead in real time has become an urgent problem for those skilled in the art.
Disclosure of Invention
Therefore, the embodiments of the invention provide a dense disparity map-based road surface pothole detection method, system and equipment, so as to at least partially solve the technical problems in the prior art of poor driving comfort and safety during automatic or assisted driving caused by the lack of pothole detection in the driving area.
In order to achieve the above object, the embodiments of the present invention provide the following technical solutions:
a dense disparity map-based road pothole detection method, comprising:
acquiring left and right views of the same road scene, and calculating a dense disparity map of the road scene based on the left and right views;
intercepting a detection area based on the obtained dense disparity map;
calculating regional point cloud information based on image information and parallax information of the detection region;
modeling a road plane through the regional point cloud information to obtain a road surface equation, and calculating height information from each discrete point in the regional point cloud to the plane where the road surface equation is located;
generating a height map according to the area point cloud information and the height information;
filtering and correcting the detection result by fusing the height maps of the multi-frame continuous images to obtain a fusion result;
and detecting and confirming, according to the fusion result, the road undulation condition of the detection area ahead and any areas with obvious pothole characteristics while the vehicle is running.
Further, the calculating a dense disparity map of a road scene based on left and right views specifically includes:
and calculating a dense disparity map of the road scene by an SGM (sparse generalized minimum) matching algorithm, an image segmentation method or a deep learning method.
Further, the intercepting a detection area based on the obtained dense disparity map specifically includes:
calculating the detection area in the real-world coordinate system according to the detection depth requirement and the detection width requirement;
and converting the detection area in the real-world coordinate system into an image coordinate system area by affine transformation, thereby obtaining a detection area intercepted from the obtained dense disparity map with the real-world detection area as reference.
Further, the calculating of the point cloud information of the region based on the image information and the parallax information of the detection region specifically includes:
the image information is segmented through the detection area, only the image information in the detection area is processed, the image coordinate system is converted into a world coordinate system, and the 3D point cloud reconstruction of the detection area is completed through the following formula:
Z1 = b × F / disp
X1 = (Imgx - cx) × b / disp
Y1 = (Imgy - cy) × b / disp
wherein b is the baseline distance between the left and right cameras of the binocular stereoscopic vision imaging system;
F is the focal length of the camera;
cx and cy are the image coordinates of the camera principal point;
Imgx and Imgy are the image coordinates of a point within the detection area;
disp is the disparity value at the image point (Imgx, Imgy);
X1 is the lateral distance from the camera;
Y1 is the longitudinal distance from the camera;
Z1 is the depth from the camera.
Further, the modeling of the road plane through the regional point cloud information specifically includes:
fitting a pavement model based on the 3D point cloud reconstruction information;
the road surface model equation is:
cos α × X + cos β × Y + cos γ × Z + D = 0
wherein cos α is the direction cosine of the angle between the road surface normal vector and the X axis of the world coordinate system;
cos β is the direction cosine of the angle between the road surface normal vector and the Y axis of the world coordinate system;
cos γ is the direction cosine of the angle between the road surface normal vector and the Z axis of the world coordinate system;
and D is the distance from the origin of the world coordinate system to the road surface plane.
Further, the height from each discrete point in the regional point cloud to the plane where the road surface equation lies is obtained by the following formula:
A = cos α;
B = cos β;
C = cos γ;
h = |A × XO + B × YO + C × ZO + D| / sqrt(A² + B² + C²)
wherein cos α, cos β, cos γ and D are the parameters of the road surface model equation;
XO, YO and ZO are the coordinates of a discrete three-dimensional point in the world coordinate system;
and h is the height of the discrete three-dimensional point (XO, YO, ZO) above the road surface.
Further, filtering and correcting the detection result by fusing the height maps of the multiple frames of continuous images to obtain a fusion result, which specifically comprises:
acquiring multi-frame continuous images of the same road scene through a vehicle-mounted binocular stereo vision system, recording acquisition time information and speed information of the multi-frame continuous images, and calculating the moving distance between two adjacent frames;
and for the detection results of two adjacent frames, updating the position of the detection result in the former state according to the moving distance, and adding the detection result in the latter state as new detection data.
The invention also provides a dense disparity map-based road pothole detection system for implementing the method as described above, the system comprising:
the view acquisition unit is used for acquiring left and right views of the same road scene and calculating a dense disparity map of the road scene based on the left and right views;
a detection region acquisition unit for intercepting a detection region based on the obtained dense disparity map;
the point cloud information acquisition unit is used for calculating regional point cloud information based on the image information and the parallax information of the detection region;
the height information acquisition unit is used for modeling a road plane through the regional point cloud information to obtain a road surface equation and calculating the height information from each discrete point in the regional point cloud to the plane where the road surface equation is located;
the height map generating unit is used for generating a height map according to the area point cloud information and the height information;
the image fusion unit is used for filtering and correcting the detection result by fusing the height maps of the multi-frame continuous images to obtain a fusion result;
and the result output unit is used for detecting and confirming, according to the fusion result, the road undulation condition of the detection area ahead and any areas with obvious pothole characteristics while the vehicle is running.
The present invention also provides a dense disparity map-based road surface pothole detection apparatus, comprising: the device comprises a data acquisition device, a processor and a memory;
the data acquisition device is used for acquiring data; the memory is to store one or more program instructions; the processor is configured to execute one or more program instructions to perform the method as described above.
The present invention also provides a computer readable storage medium having embodied therein one or more program instructions for executing the method as described above.
According to the dense disparity map-based road surface pothole detection method provided by the invention, the left and right views of the same road scene are obtained through a binocular stereoscopic vision system and processed to calculate a dense disparity map of the road scene; a detection area is intercepted based on the obtained dense disparity map; regional point cloud information is calculated based on the image information and disparity information of the detection area; the road plane is modeled through the point cloud information, and the height information from each point to the real road plane is calculated; the height information is filtered and corrected through multi-frame information fusion, and the road undulation condition of the detection area ahead and any areas with obvious pothole characteristics are detected and confirmed according to the fusion result while the vehicle is running. Multi-frame detection over the detection area thus realizes fast and accurate detection of potholes in the driving area ahead, solving the technical problems in the prior art of poor driving comfort and safety during automatic or assisted driving caused by the lack of pothole detection in the driving area.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. The drawings in the following description are merely exemplary, and other implementation drawings can be derived from them by those of ordinary skill in the art without any creative effort.
The structures, proportions and sizes shown in this specification are used only to match the contents disclosed in the specification, for the understanding of those skilled in the art, and are not used to limit the conditions under which the present invention can be implemented; any structural modification, change of proportion or adjustment of size that does not affect the effects achievable by the present invention shall still fall within its scope.
Fig. 1 is a flowchart of a specific embodiment of a dense disparity map-based road depression detection method according to the present invention;
fig. 2 is a block diagram of a specific embodiment of the dense parallax map-based road surface pothole detection system provided by the present invention.
Detailed Description
The present invention is described below in terms of particular embodiments; other advantages and features of the invention will become apparent to those skilled in the art from the following disclosure. The described embodiments are merely exemplary of the invention and are not intended to limit it to the particular embodiments disclosed. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
The dense disparity map-based road surface pothole detection method, system and equipment provided by the invention realize early warning of pothole road sections by detecting and judging the pothole conditions in the driving area ahead, improving driving comfort and safety during automatic or assisted driving.
In one embodiment, as shown in fig. 1, the dense disparity map-based road surface pothole detection method provided by the invention comprises the following steps:
s1: the method comprises the steps of obtaining left and right views of the same road scene, calculating a dense disparity map of the road scene based on the left and right views, and specifically calculating the dense disparity map of the road scene through an SGM (sparse generalized minimum mean square) matching algorithm, an image segmentation method or a deep learning method. In an actual implementation process, a binocular device composed of two cameras, for example, a vehicle-mounted binocular stereo camera, is used for acquiring left and right views of the same structured road scene, and the acquired left and right views are processed to obtain a dense disparity map of the road scene. The road scene is a structured road scene with clear road mark lines, a single background environment of the road and obvious geometric characteristics of the road.
S2: and intercepting a detection area based on the obtained dense disparity map. Specifically, the detection area in the image is intercepted with the detection area in the real world coordinate system as a reference, that is, the detection area in the real world coordinate system is calculated according to the detection depth requirement and the detection width requirement; and converting the detection area under the real world coordinate system into an image coordinate system area in an affine transformation mode, so as to obtain a detection area which is based on the detection area under the real world coordinate system and is intercepted based on the obtained dense parallax image.
S3: calculating regional point cloud information based on image information and parallax information of the detection region; specifically, the image information is segmented through the detection area, only the image information in the detection area is processed, the image coordinate system is converted into a world coordinate system, and the 3D point cloud reconstruction of the detection area is completed through the following formula:
Z1 = b × F / disp
X1 = (Imgx - cx) × b / disp
Y1 = (Imgy - cy) × b / disp
wherein b is the baseline distance between the left and right cameras of the binocular stereoscopic vision imaging system;
F is the focal length of the camera;
cx and cy are the image coordinates of the camera principal point;
Imgx and Imgy are the image coordinates of a point within the detection area;
disp is the disparity value at the image point (Imgx, Imgy);
X1 is the lateral distance from the camera;
Y1 is the longitudinal distance from the camera;
Z1 is the depth from the camera.
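The three reconstruction formulas above can be applied to every valid-disparity pixel of the detection area at once; the following NumPy sketch (function and parameter names are my own, not the patent's) vectorizes them:

```python
import numpy as np

def reconstruct_point_cloud(disparity, roi, b, F, cx, cy):
    """3D point cloud reconstruction of the detection area, applying
    Z1 = b*F/disp, X1 = (Imgx-cx)*b/disp, Y1 = (Imgy-cy)*b/disp
    to every pixel of roi = (col0, row0, col1, row1) with positive disparity.
    Returns an (N, 3) array of (X, Y, Z) points in metres."""
    col0, row0, col1, row1 = roi
    region = disparity[row0:row1, col0:col1]
    rows, cols = np.mgrid[row0:row1, col0:col1]
    valid = region > 0                  # pixels without a disparity value are skipped
    d = region[valid]
    Z = b * F / d
    X = (cols[valid] - cx) * b / d
    Y = (rows[valid] - cy) * b / d
    return np.column_stack([X, Y, Z])
```

Restricting the computation to the intercepted region, as the patent describes, keeps the point cloud small and the later plane fit fast.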
S4: and modeling the road plane through the regional point cloud information to obtain a road surface equation, and calculating the height information from each discrete point in the regional point cloud to the plane of the road surface equation.
Specifically, the road plane is modeled through regional point cloud information, and the modeling method comprises the following steps:
fitting a pavement model based on the 3D point cloud reconstruction information;
the road surface model equation is:
cos α × X + cos β × Y + cos γ × Z + D = 0
wherein cos α is the direction cosine of the angle between the road surface normal vector and the X axis of the world coordinate system;
cos β is the direction cosine of the angle between the road surface normal vector and the Y axis of the world coordinate system;
cos γ is the direction cosine of the angle between the road surface normal vector and the Z axis of the world coordinate system;
and D is the distance from the origin of the world coordinate system to the road surface plane.
Further, the height from each discrete point in the regional point cloud to the plane where the road surface equation lies is obtained by the following formula:
A = cos α;
B = cos β;
C = cos γ;
h = |A × XO + B × YO + C × ZO + D| / sqrt(A² + B² + C²)
wherein cos α, cos β, cos γ and D are the parameters of the road surface model equation;
XO, YO and ZO are the coordinates of a discrete three-dimensional point in the world coordinate system;
and h is the height of the discrete three-dimensional point (XO, YO, ZO) above the road surface.
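One possible realization of the fitting and height computation (the patent does not fix the estimator; an SVD least-squares fit is assumed here, and RANSAC would be a robust alternative on noisy point clouds):

```python
import numpy as np

def fit_road_plane(points):
    """Fit cos(a)*X + cos(b)*Y + cos(g)*Z + D = 0 to an (N, 3) region point cloud.
    The unit normal (cos(a), cos(b), cos(g)) is the singular vector of the
    centred cloud with the smallest singular value; D follows from the centroid."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                      # already unit length
    D = -normal @ centroid
    return normal, D

def point_heights(points, normal, D):
    """h = |A*XO + B*YO + C*ZO + D| / sqrt(A^2 + B^2 + C^2) for each point."""
    return np.abs(points @ normal + D) / np.linalg.norm(normal)
```

Because the fitted normal is already a unit vector, the denominator equals 1; it is kept to mirror the patent's formula exactly.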
S5: and generating a height map according to the area point cloud information and the height information. That is, information on discrete points in the 3D point cloud of the detection area is projected onto an X0Z plane (top plane), and height information of the discrete points is stored at corresponding positions.
S6: and filtering and correcting the detection result by fusing the height maps of the multi-frame continuous images to obtain a fusion result. Specifically, a vehicle-mounted binocular stereo vision system is used for acquiring multiple frames of continuous images of the same road scene, recording acquisition time information and speed information of the multiple frames of continuous images, and calculating the moving distance between two adjacent frames; and for the detection results of two adjacent frames, updating the position of the detection result in the former state according to the movement distance, and adding the detection result in the latter state as new detection data.
During multi-frame fusion, the height information on the XOZ plane is continuously updated. A sliding window of physical size m is cut out, representing a small area within the road surface detection range; all height data within the window are sorted, and the median is taken as the road surface height value at the centre of the window.
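The sliding-window median step might be sketched as follows; the window size in cells stands in for the physical size m, which the patent leaves unspecified:

```python
import numpy as np

def median_filter_height_map(hmap, win=5):
    """Sliding-window median over the fused XOZ height map: the heights inside
    each win x win window are sorted and their median becomes the road height
    at the window centre. Edge padding is an implementation choice."""
    pad = win // 2
    padded = np.pad(hmap, pad, mode='edge')
    out = np.empty_like(hmap)
    for i in range(hmap.shape[0]):
        for j in range(hmap.shape[1]):
            out[i, j] = np.median(padded[i:i + win, j:j + win])
    return out
```

The median makes isolated disparity errors vanish while genuine, spatially extended depressions survive, which is presumably the reason the patent prefers it over a mean.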
S7: and detecting and confirming the road undulation condition of the front detection area and the area obviously having the hollow characteristics during the running of the vehicle according to the fusion result. In the process of multi-frame fusion, data within the nearest visible distance Z _ min of the binocular imaging system are further filtered, namely, in the distance range of 0-Z _ min, the position with the calculated height smaller than the preset height threshold value is set to be zero, and for the road section with the height larger than the preset height threshold value, the distance from the camera at the current moment and the concave-convex height of the road section are output.
In the above specific embodiment, the dense disparity map-based road surface pothole detection method provided by the invention acquires left and right views of the same road scene through a binocular stereoscopic vision system and processes them to calculate a dense disparity map of the road scene; intercepts a detection area based on the obtained dense disparity map; calculates regional point cloud information based on the image information and disparity information of the detection area; models the road plane through the point cloud information and calculates the height information from each point to the real road plane; and filters and corrects the height information through multi-frame information fusion, detecting and confirming, according to the fusion result, the road undulation condition of the detection area ahead and any areas with obvious pothole characteristics while the vehicle is running. Multi-frame detection over the detection area thus realizes fast and accurate detection of potholes in the driving area ahead, solving the technical problems in the prior art of poor driving comfort and safety during automatic or assisted driving caused by the lack of pothole detection in the driving area.
In addition to the above method, the present invention also provides a dense disparity map-based road pothole detection system for implementing the method as described above, as shown in fig. 2, the system comprising:
the view acquiring unit 100 is configured to acquire left and right views of the same road scene, and calculate a dense disparity map of the road scene based on the left and right views, and specifically, may calculate the dense disparity map of the road scene by using an SGM matching algorithm, an image segmentation method, or a depth learning method. In an actual implementation process, a binocular device composed of two cameras, for example, a vehicle-mounted binocular stereo camera, acquires left and right views of the same structured road scene, and processes the acquired left and right views to obtain a dense disparity map of the road scene. The road scene is a structured road scene with clear road sign lines, the background environment of the road is single, and the geometric characteristics of the road are obvious.
A detection region acquiring unit 200, configured to intercept a detection area based on the obtained dense disparity map. The detection region acquiring unit 200 is specifically configured to intercept the detection area in the image with the detection area in the real-world coordinate system as reference: the detection area in the real-world coordinate system is calculated according to the detection depth requirement and the detection width requirement, and is then converted into an image coordinate system area by affine transformation, thereby obtaining a detection area intercepted from the dense disparity map.
A point cloud information obtaining unit 300 for calculating region point cloud information based on the image information and the parallax information of the detection region. The point cloud information obtaining unit 300 is specifically configured to segment image information through the detection area, process only the image information in the detection area, convert an image coordinate system into a world coordinate system, and complete 3D point cloud reconstruction of the detection area according to the following formula:
Z1 = b × F / disp
X1 = (Imgx - cx) × b / disp
Y1 = (Imgy - cy) × b / disp
wherein b is the baseline distance between the left and right cameras of the binocular stereoscopic vision imaging system;
F is the focal length of the camera;
cx and cy are the image coordinates of the camera principal point;
Imgx and Imgy are the image coordinates of a point within the detection area;
disp is the disparity value at the image point (Imgx, Imgy);
X1 is the lateral distance from the camera;
Y1 is the longitudinal distance from the camera;
Z1 is the depth from the camera.
And the height information acquiring unit 400 is used for modeling the road plane through the regional point cloud information to obtain a road surface equation, and calculating the height information from each discrete point in the regional point cloud to the plane where the road surface equation is located. The height information obtaining unit 400 is specifically configured to fit a road surface model based on 3D point cloud reconstruction information; the road model equation is:
cos α × X + cos β × Y + cos γ × Z + D = 0
wherein cos α is the direction cosine of the angle between the road surface normal vector and the X axis of the world coordinate system;
cos β is the direction cosine of the angle between the road surface normal vector and the Y axis of the world coordinate system;
cos γ is the direction cosine of the angle between the road surface normal vector and the Z axis of the world coordinate system;
and D is the distance from the origin of the world coordinate system to the road surface plane.
The height information obtaining unit 400 specifically calculates the distance from each discrete point in the area point cloud to the plane where the road surface equation is located by the following formula:
A = cos α;
B = cos β;
C = cos γ;
h = |A × XO + B × YO + C × ZO + D| / sqrt(A² + B² + C²)
wherein cos α, cos β, cos γ and D are the parameters of the road surface model equation;
XO, YO and ZO are the coordinates of a discrete three-dimensional point in the world coordinate system;
and h is the height of the discrete three-dimensional point (XO, YO, ZO) above the road surface.
A height map generating unit 500, configured to generate a height map according to the regional point cloud information and the height information. That is, the height map generating unit 500 projects the information of the discrete points in the 3D point cloud of the detection area onto the XOZ plane (top view) and stores the height information of each discrete point at the corresponding position.
And an image fusion unit 600, configured to filter and correct the detection result by fusing the height maps of the multiple frames of continuous images to obtain a fusion result. Specifically, the image fusion unit 600 is configured to obtain multiple frames of continuous images of the same road scene through the vehicle-mounted binocular stereo vision system, record acquisition time information and speed information of the multiple frames of continuous images, and calculate a moving distance between two adjacent frames; and for the detection results of two adjacent frames, updating the position of the detection result in the former state according to the moving distance, and adding the detection result in the latter state as new detection data.
In the process of multi-frame fusion, the height information on the XOZ plane is continuously updated. A sliding window of physical size m, representing a small area within the road surface detection range, is moved over the map; all height data inside the window are sorted, and the median is taken as the road surface height value at the center of the window.
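The sliding-window median described above might be sketched as follows; the NaN convention for empty cells and the function name are assumptions of this sketch:

```python
import numpy as np

def window_median_height(height_map, row, col, m):
    # Median of all valid (non-NaN) height samples inside an m x m sliding
    # window centered at (row, col); the median is taken as the road surface
    # height at the window center, which suppresses isolated outlier points.
    half = m // 2
    patch = height_map[max(row - half, 0):row + half + 1,
                       max(col - half, 0):col + half + 1]
    vals = patch[~np.isnan(patch)]
    return float(np.median(vals)) if vals.size else float("nan")

hm = np.array([[1.0, 2.0, np.nan],
               [3.0, 4.0, 5.0],
               [6.0, 7.0, 8.0]])
center = window_median_height(hm, 1, 1, 3)  # median of 1..8 -> 4.5
```

The median, unlike a mean, is unaffected by a single spurious disparity spike inside the window, which is why it is a natural choice for fusing noisy stereo height samples.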
And a result output unit 700, configured to detect and confirm, according to the fusion result, the road undulation condition of the front detection area and any areas with obvious pothole characteristics while the vehicle is running. During multi-frame fusion, the data within the nearest visible distance Z_min of the binocular imaging system are further filtered: within the range 0 to Z_min, positions whose calculated height is smaller than a preset height threshold are set to zero, and for road sections whose height exceeds the preset height threshold, the distance from the camera at the current moment and the concave-convex height of the section are output.
In the above specific embodiment, the dense disparity map-based road surface pothole detection system provided by the invention acquires left and right views of the same road scene through a binocular stereo vision system, processes the left and right views, and calculates a dense disparity map of the road scene; intercepts a detection area based on the obtained dense disparity map; calculates area point cloud information based on the image information and parallax information of the detection area; models the road plane through the point cloud information and calculates the height information from each point to the real road plane; filters and corrects the height information through multi-frame information fusion; and, according to the fusion result, detects and confirms the road undulation condition of the front detection area and any areas with obvious pothole characteristics while the vehicle is running. The multi-frame detection based on the detection area thus realizes fast and accurate detection of potholes in the driving area ahead, and solves the technical problem in the prior art that driving comfort and safety during automatic or assisted driving suffer from the lack of pothole detection in the driving area.
The present invention also provides a dense disparity map-based road surface pothole detection apparatus, comprising: the device comprises a data acquisition device, a processor and a memory;
the data acquisition device is used for acquiring data; the memory is configured to store one or more program instructions; and the processor is configured to execute the one or more program instructions to perform the method described above.
In correspondence with the above embodiments, embodiments of the present invention also provide a computer storage medium containing one or more program instructions therein. Wherein the one or more program instructions are for executing the method as described above by a binocular camera depth calibration system.
In an embodiment of the invention, the processor may be an integrated circuit chip having signal processing capability. The Processor may be a general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete Gate or transistor logic device, discrete hardware component.
The various methods, steps and logic blocks disclosed in the embodiments of the present invention may be implemented or performed. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present invention may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium well known in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The processor reads the information in the storage medium and completes the steps of the method in combination with its hardware.
The storage medium may be a memory, for example, which may be volatile memory or nonvolatile memory, or which may include both volatile and nonvolatile memory.
The nonvolatile Memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash Memory.
The volatile Memory may be a Random Access Memory (RAM), which serves as an external cache. By way of example and not limitation, many forms of RAM are available, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), and Direct Rambus RAM (DR RAM).
The storage media described in connection with the embodiments of the invention are intended to comprise, without being limited to, these and any other suitable types of memory.
Those skilled in the art will appreciate that the functionality described in the present invention can be implemented in a combination of hardware and software in one or more of the examples described above. When implemented in software, the corresponding functions may be stored on, or transmitted as, one or more instructions or code on a computer-readable medium. Computer-readable media include both computer storage media and communication media, including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a general purpose or special purpose computer.
The above embodiments are only for illustrating the embodiments of the present invention and are not to be construed as limiting the scope of the present invention, and any modifications, equivalent substitutions, improvements and the like made on the basis of the embodiments of the present invention shall be included in the scope of the present invention.

Claims (10)

1. A dense disparity map-based road pothole detection method is characterized by comprising the following steps:
acquiring left and right views of the same road scene, and calculating a dense disparity map of the road scene based on the left and right views;
intercepting a detection area based on the obtained dense disparity map;
calculating regional point cloud information based on image information and parallax information of the detection region;
modeling a road plane through the regional point cloud information to obtain a road surface equation, and calculating height information from each discrete point in the regional point cloud to the plane where the road surface equation is located;
generating a height map according to the area point cloud information and the height information;
filtering and correcting the detection result by fusing the height maps of the multi-frame continuous images to obtain a fusion result;
and detecting and confirming the road undulation condition of the front detection area and the area obviously having the hollow characteristics during the running of the vehicle according to the fusion result.
2. The dense disparity map-based road pothole detection method according to claim 1, wherein the calculating of the dense disparity map of the road scene based on the left view and the right view specifically comprises:
and calculating the dense disparity map of the road scene by a Semi-Global Matching (SGM) algorithm, an image segmentation method or a deep learning method.
3. The dense disparity map-based road surface pothole detection method according to claim 1, wherein intercepting a detection area based on the obtained dense disparity map specifically comprises:
calculating a detection area under a real world coordinate system according to the detection depth requirement and the detection width requirement;
and converting the detection area in the real world coordinate system into an image coordinate system area by means of affine transformation, thereby obtaining the detection area intercepted from the obtained dense disparity map corresponding to the detection area in the real world coordinate system.
4. The method for detecting the road pits based on the dense parallax map as claimed in claim 1, wherein the calculating of the area point cloud information based on the image information and the parallax information of the detection area specifically comprises:
dividing image information through the detection area, processing only the image information in the detection area, converting an image coordinate system into a world coordinate system, and completing 3D point cloud reconstruction of the detection area through the following formula:
Z_1 = b × F / disp
X_1 = (Img_x − c_x) × b / disp
Y_1 = (Img_y − c_y) × b / disp
wherein b is the baseline distance between the left and right cameras of the binocular stereo vision imaging system;
F is the focal length of the camera;
c_x and c_y are the image coordinates of the camera principal point;
Img_x and Img_y are the image coordinates of a point within the detection area;
disp is the disparity value at the image point (Img_x, Img_y);
X_1 is the lateral distance from the camera;
Y_1 is the longitudinal distance from the camera;
Z_1 is the depth from the camera.
5. The method for detecting the road pits based on the dense parallax map as claimed in claim 4, wherein the modeling of the road plane through the regional point cloud information specifically comprises:
fitting a pavement model based on the 3D point cloud reconstruction information;
the road surface model equation is:
cosα*X+cosβ*Y+cosγ*Z+D=0
wherein cos alpha is the direction cosine of an included angle between a road surface normal vector and the X coordinate axis of a world coordinate system;
cos beta is the direction cosine of an included angle between the road surface normal vector and the Y coordinate axis of the world coordinate system;
cos gamma is the direction cosine of the included angle between the road surface normal vector and the coordinate axis Z of the world coordinate system;
and D is the distance from the origin of the world coordinate system to the plane of the road surface.
6. The method for detecting the road pits based on the dense parallax map as claimed in claim 1, wherein the distance between each discrete point in the area point cloud and the plane where the road surface equation is located is calculated by the following formula:
A=cosα;
B=cosβ;
C=cosγ;
h = |A·X_O + B·Y_O + C·Z_O + D| / √(A² + B² + C²)
wherein cos α, cos β, cos γ and D are the parameters of the road surface model equation;
X_O, Y_O and Z_O are the position coordinates of the discrete three-dimensional point in the world coordinate system;
and h is the height above the road surface of the discrete three-dimensional point with coordinates (X_O, Y_O, Z_O).
7. The method for detecting the road pits based on the dense parallax map as claimed in claim 1, wherein the step of filtering and correcting the detection result by fusing the height maps of the multiple frames of continuous images to obtain a fusion result specifically comprises:
acquiring multi-frame continuous images of the same road scene through a vehicle-mounted binocular stereo vision system, recording acquisition time information and speed information of the multi-frame continuous images, and calculating the moving distance between two adjacent frames;
and for the detection results of two adjacent frames, updating the position of the detection result in the former state according to the moving distance, and adding the detection result in the latter state as new detection data.
8. A dense disparity map based road pothole detection system for implementing the method of any one of claims 1-7, the system comprising:
the view acquisition unit is used for acquiring left and right views of the same road scene and calculating a dense disparity map of the road scene based on the left and right views;
a detection region acquisition unit for intercepting a detection region based on the obtained dense disparity map;
a point cloud information acquisition unit for calculating area point cloud information based on the image information and parallax information of the detection area;
the height information acquisition unit is used for modeling a road plane through the regional point cloud information to obtain a road surface equation and calculating the height information from each discrete point in the regional point cloud to the plane where the road surface equation is located;
the height map generating unit is used for generating a height map according to the area point cloud information and the height information;
the image fusion unit is used for filtering and correcting the detection result by fusing the height maps of the multi-frame continuous images to obtain a fusion result;
and the result output unit is used for detecting and confirming the road undulation condition of the front detection area and the area obviously having the hollow characteristics during the running of the vehicle according to the fusion result.
9. A dense disparity map-based road pothole detection device, comprising: the device comprises a data acquisition device, a processor and a memory;
the data acquisition device is used for acquiring data; the memory is to store one or more program instructions; the processor, configured to execute one or more program instructions to perform the method of any of claims 1-7.
10. A computer-readable storage medium having one or more program instructions embodied therein for performing the method of any of claims 1-7.
CN202011390618.5A 2020-12-02 2020-12-02 Road surface pothole detection method, system and equipment based on dense disparity map Active CN112906449B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011390618.5A CN112906449B (en) 2020-12-02 2020-12-02 Road surface pothole detection method, system and equipment based on dense disparity map

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011390618.5A CN112906449B (en) 2020-12-02 2020-12-02 Road surface pothole detection method, system and equipment based on dense disparity map

Publications (2)

Publication Number Publication Date
CN112906449A true CN112906449A (en) 2021-06-04
CN112906449B CN112906449B (en) 2024-04-16

Family

ID=76111380

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011390618.5A Active CN112906449B (en) 2020-12-02 2020-12-02 Road surface pothole detection method, system and equipment based on dense disparity map

Country Status (1)

Country Link
CN (1) CN112906449B (en)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113658240A (en) * 2021-07-15 2021-11-16 北京中科慧眼科技有限公司 Main obstacle detection method and device and automatic driving system
CN113674275A (en) * 2021-10-21 2021-11-19 北京中科慧眼科技有限公司 Dense disparity map-based road surface unevenness detection method and system and intelligent terminal
CN113689565A (en) * 2021-10-21 2021-11-23 北京中科慧眼科技有限公司 Road flatness grade detection method and system based on binocular stereo vision and intelligent terminal
CN113706622A (en) * 2021-10-29 2021-11-26 北京中科慧眼科技有限公司 Road surface fitting method and system based on binocular stereo vision and intelligent terminal
CN113763303A (en) * 2021-11-10 2021-12-07 北京中科慧眼科技有限公司 Real-time ground fusion method and system based on binocular stereo vision and intelligent terminal
CN113792707A (en) * 2021-11-10 2021-12-14 北京中科慧眼科技有限公司 Terrain environment detection method and system based on binocular stereo camera and intelligent terminal
CN113808103A (en) * 2021-09-16 2021-12-17 广州大学 Automatic road surface depression detection method and device based on image processing and storage medium
CN113838111A (en) * 2021-08-09 2021-12-24 北京中科慧眼科技有限公司 Road texture feature detection method and device and automatic driving system
CN115171030A (en) * 2022-09-09 2022-10-11 山东省凯麟环保设备股份有限公司 Multi-modal image segmentation method, system and device based on multi-level feature fusion
CN115205809A (en) * 2022-09-15 2022-10-18 北京中科慧眼科技有限公司 Method and system for detecting roughness of road surface
CN115871622A (en) * 2023-01-19 2023-03-31 重庆赛力斯新能源汽车设计院有限公司 Driving assistance method based on drop road surface, electronic device and storage medium
CN116363219A (en) * 2023-06-02 2023-06-30 中国科学技术大学 Binocular fire source image synthesis method, device and readable storage medium
WO2024060209A1 (en) * 2022-09-23 2024-03-28 深圳市速腾聚创科技有限公司 Method for processing point cloud, and radar
CN117808868A (en) * 2023-12-28 2024-04-02 上海保隆汽车科技股份有限公司 Vehicle control method, road concave-convex feature detection method based on binocular stereoscopic vision, detection system, detection equipment and computer readable storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106651836A (en) * 2016-11-04 2017-05-10 Shanghai Institute of Microsystem and Information Technology, Chinese Academy of Sciences Ground level detection method based on binocular vision
EP3246877A1 (en) * 2016-05-18 2017-11-22 Ricoh Company, Ltd. Road surface estimation based on vertical disparity distribution
CN108229366A (en) * 2017-12-28 2018-06-29 北京航空航天大学 Deep learning vehicle-installed obstacle detection method based on radar and fusing image data
CN108596899A (en) * 2018-04-27 2018-09-28 海信集团有限公司 Road flatness detection method, device and equipment
CN110060284A (en) * 2019-04-25 2019-07-26 王荩立 A kind of binocular vision environmental detecting system and method based on tactilely-perceptible

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3246877A1 (en) * 2016-05-18 2017-11-22 Ricoh Company, Ltd. Road surface estimation based on vertical disparity distribution
CN106651836A (en) * 2016-11-04 2017-05-10 Shanghai Institute of Microsystem and Information Technology, Chinese Academy of Sciences Ground level detection method based on binocular vision
CN108229366A (en) * 2017-12-28 2018-06-29 北京航空航天大学 Deep learning vehicle-installed obstacle detection method based on radar and fusing image data
CN108596899A (en) * 2018-04-27 2018-09-28 海信集团有限公司 Road flatness detection method, device and equipment
CN110060284A (en) * 2019-04-25 2019-07-26 王荩立 A kind of binocular vision environmental detecting system and method based on tactilely-perceptible

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
RUI FAN et al.: "Pothole Detection Based on Disparity Transformation and Road Surface Modeling", IEEE TRANSACTIONS ON IMAGE PROCESSING, vol. 29, pages 897-908, XP011750508, DOI: 10.1109/TIP.2019.2933750 *

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113658240A (en) * 2021-07-15 2021-11-16 北京中科慧眼科技有限公司 Main obstacle detection method and device and automatic driving system
CN113658240B (en) * 2021-07-15 2024-04-19 北京中科慧眼科技有限公司 Main obstacle detection method and device and automatic driving system
CN113838111A (en) * 2021-08-09 2021-12-24 北京中科慧眼科技有限公司 Road texture feature detection method and device and automatic driving system
CN113808103A (en) * 2021-09-16 2021-12-17 广州大学 Automatic road surface depression detection method and device based on image processing and storage medium
CN113674275A (en) * 2021-10-21 2021-11-19 北京中科慧眼科技有限公司 Dense disparity map-based road surface unevenness detection method and system and intelligent terminal
CN113689565A (en) * 2021-10-21 2021-11-23 北京中科慧眼科技有限公司 Road flatness grade detection method and system based on binocular stereo vision and intelligent terminal
CN113689565B (en) * 2021-10-21 2022-03-18 北京中科慧眼科技有限公司 Road flatness grade detection method and system based on binocular stereo vision and intelligent terminal
CN113706622B (en) * 2021-10-29 2022-04-19 北京中科慧眼科技有限公司 Road surface fitting method and system based on binocular stereo vision and intelligent terminal
CN113706622A (en) * 2021-10-29 2021-11-26 北京中科慧眼科技有限公司 Road surface fitting method and system based on binocular stereo vision and intelligent terminal
CN113763303A (en) * 2021-11-10 2021-12-07 北京中科慧眼科技有限公司 Real-time ground fusion method and system based on binocular stereo vision and intelligent terminal
CN113792707A (en) * 2021-11-10 2021-12-14 北京中科慧眼科技有限公司 Terrain environment detection method and system based on binocular stereo camera and intelligent terminal
CN115171030A (en) * 2022-09-09 2022-10-11 山东省凯麟环保设备股份有限公司 Multi-modal image segmentation method, system and device based on multi-level feature fusion
CN115171030B (en) * 2022-09-09 2023-01-31 山东省凯麟环保设备股份有限公司 Multi-modal image segmentation method, system and device based on multi-level feature fusion
CN115205809A (en) * 2022-09-15 2022-10-18 北京中科慧眼科技有限公司 Method and system for detecting roughness of road surface
WO2024060209A1 (en) * 2022-09-23 2024-03-28 深圳市速腾聚创科技有限公司 Method for processing point cloud, and radar
CN115871622A (en) * 2023-01-19 2023-03-31 重庆赛力斯新能源汽车设计院有限公司 Driving assistance method based on drop road surface, electronic device and storage medium
CN116363219A (en) * 2023-06-02 2023-06-30 中国科学技术大学 Binocular fire source image synthesis method, device and readable storage medium
CN116363219B (en) * 2023-06-02 2023-08-11 中国科学技术大学 Binocular fire source image synthesis method, device and readable storage medium
CN117808868A (en) * 2023-12-28 2024-04-02 上海保隆汽车科技股份有限公司 Vehicle control method, road concave-convex feature detection method based on binocular stereoscopic vision, detection system, detection equipment and computer readable storage medium

Also Published As

Publication number Publication date
CN112906449B (en) 2024-04-16

Similar Documents

Publication Publication Date Title
CN112906449B (en) Road surface pothole detection method, system and equipment based on dense disparity map
CN108647638B (en) Vehicle position detection method and device
CN103731652B (en) All-moving surface line of demarcation cognitive device and method and moving body apparatus control system
US20230144678A1 (en) Topographic environment detection method and system based on binocular stereo camera, and intelligent terminal
EP3671643A1 (en) Method and apparatus for calibrating the extrinsic parameter of an image sensor
US7623700B2 (en) Stereoscopic image processing apparatus and the method of processing stereoscopic images
CN114495043B (en) Method and system for detecting up-and-down slope road conditions based on binocular vision system and intelligent terminal
CN114323050B (en) Vehicle positioning method and device and electronic equipment
US10984258B2 (en) Vehicle traveling environment detecting apparatus and vehicle traveling controlling system
CN114509045A (en) Wheel area elevation detection method and system
CN108108667A (en) A kind of front vehicles fast ranging method based on narrow baseline binocular vision
CN112465831B (en) Bend scene sensing method, system and device based on binocular stereo camera
JP6768554B2 (en) Calibration device
CN115100621A (en) Ground scene detection method and system based on deep learning network
CN113140002B (en) Road condition detection method and system based on binocular stereo camera and intelligent terminal
CN113781543B (en) Binocular camera-based height limiting device detection method and system and intelligent terminal
CN113965742B (en) Dense disparity map extraction method and system based on multi-sensor fusion and intelligent terminal
CN114937255A (en) Laser radar and camera fusion detection method and device
CN111191538B (en) Obstacle tracking method, device and system based on binocular camera and storage medium
KR102003387B1 (en) Method for detecting and locating traffic participants using bird's-eye view image, computer-readerble recording medium storing traffic participants detecting and locating program
CN113689565B (en) Road flatness grade detection method and system based on binocular stereo vision and intelligent terminal
CN113763303B (en) Real-time ground fusion method and system based on binocular stereo vision and intelligent terminal
US11477371B2 (en) Partial image generating device, storage medium storing computer program for partial image generation and partial image generating method
CN115205809B (en) Method and system for detecting roughness of road surface
CN112070839A (en) Method and equipment for positioning and ranging rear vehicle transversely and longitudinally

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant