CN112906449B - Road surface pothole detection method, system and equipment based on dense disparity map - Google Patents

Road surface pothole detection method, system and equipment based on dense disparity map

Info

Publication number
CN112906449B
CN112906449B (Application number CN202011390618.5A)
Authority
CN
China
Prior art keywords
detection
information
road
height
point cloud
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011390618.5A
Other languages
Chinese (zh)
Other versions
CN112906449A (en)
Inventor
裴姗姗
王欣亮
孙钊
李建
罗杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Smarter Eye Technology Co Ltd
Original Assignee
Beijing Smarter Eye Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Smarter Eye Technology Co Ltd filed Critical Beijing Smarter Eye Technology Co Ltd
Priority to CN202011390618.5A priority Critical patent/CN112906449B/en
Publication of CN112906449A publication Critical patent/CN112906449A/en
Application granted granted Critical
Publication of CN112906449B publication Critical patent/CN112906449B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30244Camera pose
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle
    • G06T2207/30256Lane; Road marking

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a road surface pothole detection method, system and device based on a dense disparity map. The method comprises the following steps: acquiring left and right views of the same road scene, and calculating a dense disparity map of the road scene based on the left and right views; intercepting a detection area based on the obtained dense disparity map; calculating regional point cloud information based on the image information and disparity information of the detection area; modeling the road plane, and calculating the height from each discrete point in the regional point cloud to the plane where the road surface equation is located; generating a height map according to the regional point cloud information and the height information; filtering and correcting the detection result by fusing the height maps of multiple consecutive frames to obtain a fusion result; and detecting and confirming, according to the fusion result, the road undulation of the detection area ahead and of areas with obvious pothole features while the vehicle is travelling. This solves the technical problem in the prior art that the lack of pothole detection for the driving area leads to poor driving comfort and safety during automatic or assisted driving.

Description

Road surface pothole detection method, system and equipment based on dense disparity map
Technical Field
The invention relates to the technical field of automatic driving, in particular to a pavement pothole detection method, system and equipment based on a dense disparity map.
Background
At present, research on driver assistance continues to advance with the development of image processing technology; such systems detect objects ahead of the vehicle through the sensors' perception of the environment and their understanding of the scene. Monitoring the road ahead and evaluating hazardous conditions is a key link in the development of automatic driving technology. If potholes in the road ahead can be detected in real time, the vehicle can reduce their impact by adjusting the suspension, ensuring driving stability and comfort and thereby reducing tyre blowouts, damage to wheels and the vehicle, and road traffic accidents.
Therefore, providing a road surface pothole detection method that monitors the condition of the road ahead in real time and improves driving comfort and safety is an urgent problem to be solved by those skilled in the art.
Disclosure of Invention
Therefore, the embodiment of the invention provides a road surface pothole detection method, a system and equipment based on a dense parallax map, which at least partially solve the technical problems of poor driving comfort and safety in the automatic driving or auxiliary driving process caused by the lack of driving area pothole detection in the prior art.
In order to achieve the above object, the embodiment of the present invention provides the following technical solutions:
a method for detecting pavement indentations based on dense disparity maps, the method comprising:
acquiring left and right views of the same road scene, and calculating a dense parallax map of the road scene based on the left and right views;
intercepting a detection area based on the obtained dense disparity map;
calculating region point cloud information based on the image information and parallax information of the detection region;
modeling a road plane through the regional point cloud information to obtain a road surface equation, and calculating the height information from each discrete point in the regional point cloud to the plane where the road surface equation is located;
generating a height map according to the regional point cloud information and the height information;
filtering and correcting the detection result by fusing the height map of the multi-frame continuous images to obtain a fusion result;
and detecting and confirming road fluctuation conditions of a front detection area and an area with obvious pothole characteristics in the running process of the vehicle according to the fusion result.
Further, the calculating the dense disparity map of the road scene based on the left and right views specifically includes:
and calculating a dense disparity map of the road scene through an SGM matching algorithm, an image segmentation method or a deep learning method.
Further, the capturing a detection area based on the obtained dense disparity map specifically includes:
calculating a detection area under a real world coordinate system according to the detection depth requirement and the detection width requirement;
converting the detection area in the real-world coordinate system into an area in the image coordinate system through affine transformation, thereby obtaining the detection area intercepted from the obtained dense disparity map with the real-world detection area as reference.
Further, the calculating the regional point cloud information based on the image information and the parallax information of the detection region specifically includes:
the image information is segmented through the detection area, only the image information in the detection area is processed, an image coordinate system is converted into a world coordinate system, and 3D point cloud reconstruction of the detection area is completed through the following formula:
Z1 = b×F/disp
X1 = (Imgx - cx)×b/disp
Y1 = (Imgy - cy)×b/disp
wherein b is the baseline distance between the left and right cameras of the binocular stereoscopic imaging system;
F is the focal length of the camera;
cx and cy are the image coordinates of the camera's principal point;
Imgx and Imgy are the coordinates of an image point within the detection region;
disp is the disparity value of the image point (Imgx, Imgy);
X1 is the lateral distance from the camera;
Y1 is the vertical (height-direction) distance from the camera;
Z1 is the depth from the camera.
Further, the modeling of the road plane by the regional point cloud information specifically includes:
fitting a pavement model based on the 3D point cloud reconstruction information;
the road surface model equation is:
cosα*X + cosβ*Y + cosγ*Z + D = 0
wherein cosα is the direction cosine of the angle between the road surface normal vector and the X axis of the world coordinate system;
cosβ is the direction cosine of the angle between the road surface normal vector and the Y axis of the world coordinate system;
cosγ is the direction cosine of the angle between the road surface normal vector and the Z axis of the world coordinate system;
D is the distance from the origin of the world coordinate system to the road surface plane.
Further, the height from each discrete point in the regional point cloud to the plane where the road surface equation is located is calculated, specifically by the following formula:
H = |A*XO + B*YO + C*ZO + D| / √(A² + B² + C²)
A = cosα;
B = cosβ;
C = cosγ;
wherein cosα, cosβ, cosγ and D are parameters of the road surface model equation;
XO, YO and ZO are the coordinates of a discrete three-dimensional point in the world coordinate system;
H is the height of the point (XO, YO, ZO) relative to the road surface.
Further, filtering and correcting the detection result by fusing the height map of the multi-frame continuous image to obtain a fusion result, which specifically comprises:
acquiring multi-frame continuous images of the same road scene through a vehicle-mounted binocular stereoscopic vision system, recording acquisition time information and speed information of the multi-frame continuous images, and calculating the moving distance between two adjacent frames;
and for the detection results of two adjacent frames, carrying out position updating on the detection results in the former state according to the moving distance, and adding the detection results in the latter state as new detection data.
The invention also provides a road surface pothole detection system based on a dense disparity map, for implementing the method described above, the system comprising:
the view acquisition unit is used for acquiring left and right views of the same road scene and calculating a dense parallax map of the road scene based on the left and right views;
a detection region acquisition unit configured to intercept a detection region based on the obtained dense disparity map;
a point cloud information acquisition unit for calculating regional point cloud information based on the image information and parallax information of the detection region;
the height information acquisition unit is used for modeling a road plane through the regional point cloud information to obtain a road surface equation, and calculating the height information from each discrete point in the regional point cloud to the plane where the road surface equation is located;
the height map generation unit is used for generating a height map according to the regional point cloud information and the height information;
the image fusion unit is used for filtering and correcting the detection result through fusing the height images of the multi-frame continuous images so as to obtain a fusion result;
and the result output unit is used for detecting and confirming road fluctuation conditions of a front detection area and an area with obvious pothole characteristics in the running process of the vehicle according to the fusion result.
The invention also provides a road surface pothole detection device based on the dense disparity map, which comprises: the device comprises a data acquisition device, a processor and a memory;
the data acquisition device is used for acquiring data; the memory is used for storing one or more program instructions; the processor is configured to execute one or more program instructions to perform the method as described above.
The present invention also provides a computer readable storage medium having embodied therein one or more program instructions for performing the method as described above.
According to the road surface pothole detection method based on the dense parallax map, left and right views of the same road scene are obtained through the binocular stereoscopic vision system, the left and right views are processed, and the dense parallax map of the road scene is calculated; intercepting a detection area based on the obtained dense disparity map; calculating region point cloud information based on the image information and parallax information of the detection region; modeling a road plane through the point cloud information, and calculating the height information from the point to the real road plane; and filtering and correcting the height information through multi-frame information fusion, and detecting and confirming road fluctuation conditions of a front detection area and an area obviously with pothole characteristics in vehicle running according to fusion results. Therefore, based on multi-frame detection of the detection area, rapid and accurate detection of the hollows of the front driving area is realized, and the technical problem that driving comfort and safety are poor in automatic driving or auxiliary driving processes due to lack of detection of hollows of the driving area in the prior art is solved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It will be apparent to those of ordinary skill in the art that the drawings in the following description are exemplary only and that other implementations can be obtained from the extensions of the drawings provided without inventive effort.
The structures, proportions, sizes, etc. shown in the present specification are shown only for the purposes of illustration and description, and are not intended to limit the scope of the invention, which is defined by the claims, so that any structural modifications, changes in proportions, or adjustments of sizes, which do not affect the efficacy or the achievement of the present invention, should fall within the ambit of the technical disclosure.
FIG. 1 is a flowchart of an embodiment of a dense disparity map-based pavement pothole detection method provided by the invention;
fig. 2 is a block diagram of an embodiment of a dense parallax map-based road surface pothole detection system provided by the invention.
Detailed Description
Other aspects and advantages of the present invention will become apparent to those skilled in the art from the following detailed description, which describes certain specific embodiments, but not all embodiments, by way of illustration. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
According to the road surface pothole detection method, system and device based on a dense disparity map provided by the invention, pothole sections are warned of in advance by detecting and judging the pothole condition of the driving area ahead, which improves driving comfort and safety during automatic or assisted driving.
In a specific embodiment, as shown in fig. 1, the method for detecting the pavement pits based on the dense disparity map provided by the invention comprises the following steps:
s1: and acquiring left and right views of the same road scene, and calculating a dense parallax map of the road scene based on the left and right views, wherein the dense parallax map of the road scene can be calculated specifically through an SGM matching algorithm, an image segmentation method or a deep learning method. In the actual implementation process, binocular equipment consisting of two cameras, such as a vehicle-mounted binocular stereoscopic camera, acquires left and right views of the same structured road scene, and processes the acquired left and right views to obtain a dense parallax map of the road scene. The road scene is a structured road scene with clear road mark lines, a single background environment of the road and obvious geometric characteristics of the road.
S2: intercept a detection area based on the obtained dense disparity map. Specifically, the detection area in the image is intercepted with the detection area in the real-world coordinate system as reference: the detection area in the real-world coordinate system is calculated according to the required detection depth and detection width, and is then converted into an area in the image coordinate system through affine transformation, which yields the detection area intercepted from the dense disparity map.
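As a hedged illustration of step S2, the sketch below projects the corners of a world-space detection window (a depth range and a half-width on the road) into the image with a pinhole model; the helper name, the locally flat-road assumption and the camera-height parameter cam_h are assumptions added for the example rather than details given by the patent.

```python
# Sketch of S2: map a real-world detection window onto the image plane.
import numpy as np

def detection_region_in_image(z_min, z_max, half_width, F, cx, cy, cam_h):
    # Corner points (X, Y, Z) of the detection window on an assumed flat road
    # located cam_h metres below the camera (image Y axis pointing down).
    corners = np.array([
        [-half_width, cam_h, z_min], [half_width, cam_h, z_min],
        [-half_width, cam_h, z_max], [half_width, cam_h, z_max],
    ], dtype=np.float64)
    u = cx + F * corners[:, 0] / corners[:, 2]   # pinhole projection, consistent with
    v = cy + F * corners[:, 1] / corners[:, 2]   # the reconstruction formulas of S3
    # Bounding box of the projected corners = detection region in image coordinates.
    return int(u.min()), int(u.max()), int(v.min()), int(v.max())
```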
S3: calculating region point cloud information based on the image information and parallax information of the detection region; specifically, the image information is segmented through the detection area, only the image information in the detection area is processed, an image coordinate system is converted into a world coordinate system, and the 3D point cloud reconstruction of the detection area is completed through the following formula:
Z1 = b×F/disp
X1 = (Imgx - cx)×b/disp
Y1 = (Imgy - cy)×b/disp
wherein b is the baseline distance between the left and right cameras of the binocular stereoscopic imaging system;
F is the focal length of the camera;
cx and cy are the image coordinates of the camera's principal point;
Imgx and Imgy are the coordinates of an image point within the detection region;
disp is the disparity value of the image point (Imgx, Imgy);
X1 is the lateral distance from the camera;
Y1 is the vertical (height-direction) distance from the camera;
Z1 is the depth from the camera.
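A minimal sketch of the 3D reconstruction in step S3 is given below; it simply vectorises the three formulas above with NumPy. The region offset (u0, v0) and the function name are bookkeeping assumptions added for the example.

```python
# Sketch of S3: back-project every valid pixel of the detection region to a 3D point.
import numpy as np

def reconstruct_point_cloud(disp_roi, b, F, cx, cy, u0=0, v0=0):
    # disp_roi: disparity values inside the detection region; (u0, v0) is the
    # region's top-left corner in the full image (assumed bookkeeping detail).
    v, u = np.mgrid[0:disp_roi.shape[0], 0:disp_roi.shape[1]]
    img_x, img_y = u + u0, v + v0
    valid = np.isfinite(disp_roi) & (disp_roi > 0)
    d = disp_roi[valid]
    Z1 = b * F / d                                  # depth
    X1 = (img_x[valid] - cx) * b / d                # lateral offset
    Y1 = (img_y[valid] - cy) * b / d                # vertical offset
    return np.stack([X1, Y1, Z1], axis=1)           # N x 3 point cloud
```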
S4: modeling a road plane through the regional point cloud information to obtain a road surface equation, and calculating the height information from each discrete point in the regional point cloud to the plane where the road surface equation is located.
Specifically, modeling a road plane through regional point cloud information includes:
fitting a pavement model based on the 3D point cloud reconstruction information;
the road surface model equation is:
cosα*X + cosβ*Y + cosγ*Z + D = 0
wherein cosα is the direction cosine of the angle between the road surface normal vector and the X axis of the world coordinate system;
cosβ is the direction cosine of the angle between the road surface normal vector and the Y axis of the world coordinate system;
cosγ is the direction cosine of the angle between the road surface normal vector and the Z axis of the world coordinate system;
D is the distance from the origin of the world coordinate system to the road surface plane.
Further, the height from each discrete point in the regional point cloud to the plane where the road surface equation is located is calculated, specifically by the following formula:
H = |A*XO + B*YO + C*ZO + D| / √(A² + B² + C²)
A = cosα;
B = cosβ;
C = cosγ;
wherein cosα, cosβ, cosγ and D are parameters of the road surface model equation (since cosα, cosβ and cosγ are direction cosines, A² + B² + C² = 1 and the denominator reduces to 1);
XO, YO and ZO are the coordinates of a discrete three-dimensional point in the world coordinate system;
H is the height of the point (XO, YO, ZO) relative to the road surface.
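The patent does not name a particular plane-fitting algorithm for step S4; the sketch below uses a plain least-squares fit via SVD as one possible choice (a RANSAC fit would be a more robust alternative) and then evaluates the height formula above. Returning a signed value rather than the absolute distance is an illustrative choice so that bumps and potholes can be told apart.

```python
# Sketch of S4: fit the road plane and compute point heights relative to it.
import numpy as np

def fit_road_plane(points):                          # points: N x 3 array of (X, Y, Z)
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    normal = vt[-1]                                  # unit normal = (cos alpha, cos beta, cos gamma)
    D = -normal.dot(centroid)                        # plane: normal . p + D = 0
    return normal, D

def point_heights(points, normal, D):
    # A*Xo + B*Yo + C*Zo + D; the denominator of the height formula is 1 because
    # the direction cosines form a unit normal. Sign tells above/below the plane.
    return points @ normal + D
```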
S5: generate a height map according to the regional point cloud information and the height information. That is, the discrete points of the 3D point cloud of the detection area are projected onto the XOZ plane (top view), and the height of each discrete point is stored at the corresponding position.
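The sketch below rasterises the point heights of step S5 into an XOZ (top-view) grid; the grid extent and the 0.05 m cell size are illustrative assumptions, not values specified by the patent.

```python
# Sketch of S5: build the top-view height map from the point cloud and heights.
import numpy as np

def build_height_map(points, heights, x_range=(-5.0, 5.0), z_range=(0.0, 30.0), cell=0.05):
    nx = int((x_range[1] - x_range[0]) / cell)
    nz = int((z_range[1] - z_range[0]) / cell)
    hmap = np.full((nz, nx), np.nan, dtype=np.float32)    # NaN = no measurement yet
    ix = np.floor((points[:, 0] - x_range[0]) / cell).astype(int)
    iz = np.floor((points[:, 2] - z_range[0]) / cell).astype(int)
    ok = (ix >= 0) & (ix < nx) & (iz >= 0) & (iz < nz)
    hmap[iz[ok], ix[ok]] = heights[ok]                    # last write wins; averaging is another option
    return hmap
```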
S6: filter and correct the detection result by fusing the height maps of multiple consecutive frames to obtain a fusion result. Specifically, the vehicle-mounted binocular stereo vision system acquires multiple consecutive frames of the same road scene, the acquisition time and vehicle speed of each frame are recorded, and the moving distance between two adjacent frames is calculated; for the detection results of two adjacent frames, the detection result of the former state is shifted in position by the moving distance, and the detection result of the latter state is added as new detection data.
During multi-frame fusion the height information on the XOZ plane is continuously updated: a sliding window of physical size m is taken, representing a small area within the road detection range; all height data within that area are sorted, and the median is taken as the road surface height value at the centre of the sliding window. A minimal sketch of this fusion step is given below.
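The following sketch illustrates the fusion just described, under the assumptions that the height-map rows are indexed by depth Z at resolution cell and that SciPy is available for the window median; shifting by a whole number of cells and the "latest frame wins" merge rule are simplifications made for the example.

```python
# Sketch of S6: ego-motion-compensated fusion of two consecutive height maps.
import numpy as np
from scipy.ndimage import generic_filter

def fuse_height_maps(prev_map, new_map, speed_mps, dt_s, cell=0.05, m=5):
    # Shift the previous map toward the vehicle by the distance travelled.
    shift_cells = int(round(speed_mps * dt_s / cell))
    shifted = np.full_like(prev_map, np.nan)
    if 0 <= shift_cells < prev_map.shape[0]:
        shifted[:prev_map.shape[0] - shift_cells] = prev_map[shift_cells:]
    # New detections are added; where both frames have data, the latest frame wins.
    fused = np.where(np.isnan(new_map), shifted, new_map)
    # m x m sliding-window median; nanmedian ignores cells without data.
    return generic_filter(fused, np.nanmedian, size=m, mode='nearest')
```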
S7: detect and confirm, according to the fusion result, the road undulation of the detection area ahead and of areas with obvious pothole features while the vehicle is travelling. During multi-frame fusion the data within the closest visible distance Z_min of the binocular imaging system is further filtered: within the range 0-Z_min, positions whose calculated height is smaller than a preset height threshold are set to zero, and for road sections whose height exceeds the preset threshold, the distance from the camera at the current moment and the concave-convex height of the section are output.
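As a final illustrative sketch of step S7, the function below scans the fused height map within the range 0-Z_min (inside the closest visible distance of the stereo rig), zeroes out undulations below the threshold and reports the distance and signed height of the remaining cells; Z_min, the threshold value and the output format are assumptions for the example.

```python
# Sketch of S7: report pothole/bump candidates from the fused height map.
import numpy as np

def report_road_defects(fused_map, cell=0.05, z_origin=0.0, z_min=2.0, h_thresh=0.03):
    n_rows = min(int((z_min - z_origin) / cell), fused_map.shape[0])
    near = fused_map[:n_rows].copy()
    near[np.abs(near) < h_thresh] = 0.0                 # suppress small undulations
    rows, cols = np.nonzero(np.abs(near) >= h_thresh)
    results = []
    for r, c in zip(rows, cols):
        distance = z_origin + r * cell                  # range from the camera [m]
        results.append((distance, float(near[r, c])))   # (range, signed concave/convex height)
    return results
```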
In the specific embodiment, the road surface pothole detection method based on the dense parallax map provided by the invention obtains the left and right views of the same road scene through the binocular stereoscopic vision system, processes the left and right views, and calculates the dense parallax map of the road scene; intercepting a detection area based on the obtained dense disparity map; calculating region point cloud information based on the image information and parallax information of the detection region; modeling a road plane through the point cloud information, and calculating the height information from the point to the real road plane; and filtering and correcting the height information through multi-frame information fusion, and detecting and confirming road fluctuation conditions of a front detection area and an area obviously with pothole characteristics in vehicle running according to fusion results. Therefore, based on multi-frame detection of the detection area, rapid and accurate detection of the hollows of the front driving area is realized, and the technical problem that driving comfort and safety are poor in automatic driving or auxiliary driving processes due to lack of detection of hollows of the driving area in the prior art is solved.
In addition to the above method, the present invention also provides a road surface pothole detection system based on a dense disparity map, for implementing the method as described above, as shown in fig. 2, the system comprising:
the view obtaining unit 100 is configured to obtain left and right views of the same road scene, and calculate a dense disparity map of the road scene based on the left and right views, and specifically may calculate the dense disparity map of the road scene by an SGM matching algorithm, an image segmentation method, or a deep learning method. In the actual implementation process, binocular equipment consisting of two cameras, such as a vehicle-mounted binocular stereoscopic camera, acquires left and right views of the same structured road scene, and processes the acquired left and right views to obtain a dense parallax map of the road scene. The road scene is a structured road scene with clear road mark lines, a single background environment of the road and obvious geometric characteristics of the road.
A detection region acquisition unit 200 for intercepting a detection region based on the obtained dense disparity map. The detection region acquiring unit 200 is specifically configured to intercept a detection region in an image with reference to a detection region in a real world coordinate system, that is, calculate the detection region in the real world coordinate system according to a detection depth requirement and a detection width requirement; converting the detection area under the real world coordinate system into an image coordinate system area through affine transformation, so as to obtain a detection area which is based on the detection area under the real world coordinate system and is intercepted based on the obtained dense parallax map.
The point cloud information acquisition unit 300 is configured to calculate area point cloud information based on the image information and parallax information of the detection area. The point cloud information obtaining unit 300 is specifically configured to divide the image information through the detection area, process only the image information in the detection area, convert the image coordinate system into a world coordinate system, and complete the 3D point cloud reconstruction of the detection area through the following formula:
Z1 = b×F/disp
X1 = (Imgx - cx)×b/disp
Y1 = (Imgy - cy)×b/disp
wherein b is the baseline distance between the left and right cameras of the binocular stereoscopic imaging system;
F is the focal length of the camera;
cx and cy are the image coordinates of the camera's principal point;
Imgx and Imgy are the coordinates of an image point within the detection region;
disp is the disparity value of the image point (Imgx, Imgy);
X1 is the lateral distance from the camera;
Y1 is the vertical (height-direction) distance from the camera;
Z1 is the depth from the camera.
The height information acquisition unit 400 is configured to model the road plane through the regional point cloud information to obtain a road surface equation, and to calculate the height from each discrete point in the regional point cloud to the plane where the road surface equation is located. The height information acquisition unit 400 is specifically configured to fit a road surface model based on the 3D point cloud reconstruction information; the road surface model equation is:
cosα*X + cosβ*Y + cosγ*Z + D = 0
wherein cosα is the direction cosine of the angle between the road surface normal vector and the X axis of the world coordinate system;
cosβ is the direction cosine of the angle between the road surface normal vector and the Y axis of the world coordinate system;
cosγ is the direction cosine of the angle between the road surface normal vector and the Z axis of the world coordinate system;
D is the distance from the origin of the world coordinate system to the road surface plane.
The height information acquisition unit 400 specifically calculates the height from each discrete point in the regional point cloud to the plane where the road surface equation is located by the following formula:
H = |A*XO + B*YO + C*ZO + D| / √(A² + B² + C²)
A = cosα;
B = cosβ;
C = cosγ;
wherein cosα, cosβ, cosγ and D are parameters of the road surface model equation;
XO, YO and ZO are the coordinates of a discrete three-dimensional point in the world coordinate system;
H is the height of the point (XO, YO, ZO) relative to the road surface.
And a height map generation unit 500, configured to generate a height map according to the regional point cloud information and the height information. That is, the height map generation unit 500 projects the discrete points of the 3D point cloud of the detection area onto the XOZ plane (top view) and stores the height of each discrete point at the corresponding position.
The image fusion unit 600 is configured to filter and correct the detection result by fusing the height maps of the multiple frames of continuous images, so as to obtain a fusion result. Specifically, the image fusion unit 600 is configured to acquire multiple continuous images of the same road scene through the vehicle-mounted binocular stereoscopic vision system, record acquisition time information and speed information of the multiple continuous images, and calculate a movement distance between two adjacent frames; and for the detection results of two adjacent frames, carrying out position updating on the detection results in the former state according to the moving distance, and adding the detection results in the latter state as new detection data.
In the multi-frame fusion process, the height information of the XOZ plane is continuously updated, a sliding window with the size of m is intercepted according to the physical size, the sliding window represents a small area in the pavement detection range, all the height data in the area are ordered, and the median value is taken as the pavement height value at the central position of the sliding window.
And a result output unit 700 for detecting and confirming road undulation of the front detection area and an area having obvious pothole characteristics while the vehicle is traveling according to the fusion result. In the multi-frame fusion process, the data within the nearest visible distance Z_min of the binocular imaging system is further filtered, namely, the position with the calculated height smaller than the preset height threshold value is set to be zero in the distance range of 0-Z_min, and for the road section with the height larger than the preset height threshold value, the distance from the camera at the current moment and the concave-convex height of the road section are output.
In the specific embodiment, the road surface depression detection system based on the dense parallax map provided by the invention acquires left and right views of the same road scene through the binocular stereoscopic vision system, processes the left and right views, and calculates the dense parallax map of the road scene; intercepting a detection area based on the obtained dense disparity map; calculating region point cloud information based on the image information and parallax information of the detection region; modeling a road plane through the point cloud information, and calculating the height information from the point to the real road plane; and filtering and correcting the height information through multi-frame information fusion, and detecting and confirming road fluctuation conditions of a front detection area and an area obviously with pothole characteristics in vehicle running according to fusion results. Therefore, based on multi-frame detection of the detection area, rapid and accurate detection of the hollows of the front driving area is realized, and the technical problem that driving comfort and safety are poor in automatic driving or auxiliary driving processes due to lack of detection of hollows of the driving area in the prior art is solved.
The invention also provides a road surface pothole detection device based on the dense disparity map, which comprises: the device comprises a data acquisition device, a processor and a memory;
the data acquisition device is used for acquiring data; the memory is used for storing one or more program instructions; the processor is configured to execute one or more program instructions to perform the method as described above.
Corresponding to the above embodiments, the present invention also provides a computer readable storage medium, which contains one or more program instructions. Wherein the one or more program instructions are for performing the method as described above by a binocular camera depth calibration system.
In the embodiment of the invention, the processor may be an integrated circuit chip with signal processing capability. The processor may be a general-purpose processor, a digital signal processor (Digital Signal Processor, DSP for short), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC for short), a field-programmable gate array (Field Programmable Gate Array, FPGA for short) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
The disclosed methods, steps, and logic blocks in the embodiments of the present invention may be implemented or performed. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present invention may be embodied directly in the execution of a hardware decoding processor, or in the execution of a combination of hardware and software modules in a decoding processor. The software modules may be located in a random access memory, flash memory, read only memory, programmable read only memory, or electrically erasable programmable memory, registers, etc. as well known in the art. The processor reads the information in the storage medium and, in combination with its hardware, performs the steps of the above method.
The storage medium may be memory, for example, may be volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory.
The nonvolatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM) or a flash memory.
The volatile memory may be a Random Access Memory (RAM), which acts as an external cache. By way of example and not limitation, many forms of RAM are available, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM) and Direct Rambus RAM (DRRAM).
The storage media described in embodiments of the present invention are intended to comprise, without being limited to, these and any other suitable types of memory.
Those skilled in the art will appreciate that in one or more of the examples described above, the functions described in the present invention may be implemented in a combination of hardware and software. When the software is applied, the corresponding functions may be stored in a computer-readable medium or transmitted as one or more instructions or code on the computer-readable medium. Computer-readable media includes both computer-readable storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a general purpose or special purpose computer.
The foregoing detailed description of the invention has been presented for purposes of illustration and description, and it should be understood that the foregoing is by way of illustration and description only, and is not intended to limit the scope of the invention.

Claims (10)

1. A road surface pothole detection method based on a dense disparity map, characterized by comprising the following steps:
acquiring left and right views of the same road scene, and calculating a dense parallax map of the road scene based on the left and right views;
intercepting a detection area based on the obtained dense disparity map;
calculating region point cloud information based on the image information and parallax information of the detection region;
modeling a road plane through the regional point cloud information to obtain a road surface equation, and calculating the height information from each discrete point in the regional point cloud to the plane where the road surface equation is located;
generating a height map according to the regional point cloud information and the height information;
filtering and correcting the detection result by fusing the height map of the multi-frame continuous images to obtain a fusion result, wherein the method specifically comprises the following steps: acquiring multi-frame continuous images of the same road scene through a vehicle-mounted binocular stereoscopic vision system, recording acquisition time information and speed information of the multi-frame continuous images, and calculating the moving distance between two adjacent frames; for the detection results of two adjacent frames, carrying out position update on the detection result in the former state according to the moving distance, and adding the detection result in the latter state as new detection data; in the multi-frame fusion process, continuously updating the height information of the XOZ plane, intercepting a sliding window with the size of m according to the physical size, wherein the sliding window represents a small area in the pavement detection range, sequencing all height data in the area, and taking a median value as a pavement height value at the central position of the sliding window;
detecting and confirming road fluctuation conditions of a front detection area and an area obviously provided with a pothole feature in the running process of the vehicle according to the fusion result; in the multi-frame fusion process, the data within the nearest visible distance Z_min of the binocular imaging system are further filtered, the position with the calculated height smaller than the preset height threshold value is set to be zero in the distance range of 0-Z_min, and for the road section with the height larger than the preset height threshold value, the distance from the camera at the current moment and the concave-convex height of the road section are output.
2. The dense disparity map-based road surface pothole detection method according to claim 1, wherein the computing the dense disparity map of the road scene based on the left and right views specifically includes:
and calculating a dense disparity map of the road scene through an SGM matching algorithm, an image segmentation method or a deep learning method.
3. The method for detecting the pavement depression based on the dense disparity map according to claim 1, wherein the intercepting the detection area based on the obtained dense disparity map specifically comprises:
calculating a detection area under a real world coordinate system according to the detection depth requirement and the detection width requirement;
converting the detection area under the real world coordinate system into an image coordinate system of the binocular camera in an affine transformation mode to obtain the detection area of the image coordinate system;
and intercepting a dense disparity map in a detection area of an image coordinate system.
4. The dense disparity map-based pavement pothole detection method according to claim 1, wherein the computing area point cloud information based on the image information and the disparity information of the detection area specifically includes:
the image information is segmented through the detection area, only the image information in the detection area is processed, an image coordinate system is converted into a world coordinate system, and 3D point cloud reconstruction of the detection area is completed through the following formula:
Z1 = b×F/disp
X1 = (Imgx - cx)×b/disp
Y1 = (Imgy - cy)×b/disp
wherein b is the baseline distance between the left and right cameras of the binocular stereoscopic imaging system;
F is the focal length of the camera;
cx and cy are the image coordinates of the camera's principal point;
Imgx and Imgy are the coordinates of an image point within the detection region;
disp is the disparity value of the image point (Imgx, Imgy);
X1 is the lateral distance from the camera;
Y1 is the vertical (height-direction) distance from the camera;
Z1 is the depth from the camera.
5. The dense disparity map-based pavement pothole detection method according to claim 4, wherein the modeling the road plane by the regional point cloud information specifically includes:
fitting a pavement model based on the 3D point cloud reconstruction information;
the road surface model equation is:
cosα*X + cosβ*Y + cosγ*Z + D = 0
wherein cosα is the direction cosine of the angle between the road surface normal vector and the X axis of the world coordinate system;
cosβ is the direction cosine of the angle between the road surface normal vector and the Y axis of the world coordinate system;
cosγ is the direction cosine of the angle between the road surface normal vector and the Z axis of the world coordinate system;
D is the distance from the origin of the world coordinate system to the road surface plane.
6. The road surface pothole detection method based on a dense disparity map according to claim 1, wherein the height from each discrete point in the regional point cloud to the plane where the road surface equation is located is specifically obtained by the following formula:
H = |A*XO + B*YO + C*ZO + D| / √(A² + B² + C²)
A = cosα;
B = cosβ;
C = cosγ;
wherein cosα, cosβ, cosγ and D are parameters of the road surface model equation;
XO, YO and ZO are the coordinates of a discrete three-dimensional point in the world coordinate system;
H is the height of the point (XO, YO, ZO) relative to the road surface.
7. The method for detecting the pavement pits based on the dense parallax map according to claim 1, wherein the filtering and the correction are performed on the detection result by fusing the height maps of the multiple frames of continuous images to obtain the fusion result, and specifically comprises the following steps:
acquiring multi-frame continuous images of the same road scene through a vehicle-mounted binocular stereoscopic vision system, recording acquisition time information and speed information of the multi-frame continuous images, and calculating the moving distance between two adjacent frames;
and for the detection results of two adjacent frames, carrying out position updating on the detection results in the former state according to the moving distance, and adding the detection results in the latter state as new detection data.
8. A dense disparity map-based pavement pothole detection system for implementing the method of any of claims 1-7, the system comprising:
the view acquisition unit is used for acquiring left and right views of the same road scene and calculating a dense parallax map of the road scene based on the left and right views;
a detection region acquisition unit configured to intercept a detection region based on the obtained dense disparity map;
a point cloud information acquisition unit for calculating regional point cloud information based on the image information and parallax information of the detection region;
the height information acquisition unit is used for modeling a road plane through the regional point cloud information to obtain a road surface equation, and calculating the height information from each discrete point in the regional point cloud to the plane where the road surface equation is located;
the height map generation unit is used for generating a height map according to the regional point cloud information and the height information;
the image fusion unit is used for filtering and correcting the detection result through fusing the height images of the multi-frame continuous images so as to obtain a fusion result; acquiring multi-frame continuous images of the same road scene through a vehicle-mounted binocular stereoscopic vision system, recording acquisition time information and speed information of the multi-frame continuous images, and calculating the moving distance between two adjacent frames; for the detection results of two adjacent frames, carrying out position update on the detection result in the former state according to the moving distance, and adding the detection result in the latter state as new detection data; in the multi-frame fusion process, continuously updating the height information of the XOZ plane, intercepting a sliding window with the size of m according to the physical size, wherein the sliding window represents a small area in the pavement detection range, sequencing all height data in the area, and taking a median value as a pavement height value at the central position of the sliding window;
the result output unit is used for detecting and confirming road fluctuation conditions of a front detection area and an area with obvious pothole characteristics in the running process of the vehicle according to the fusion result; in the multi-frame fusion process, the data within the nearest visible distance Z_min of the binocular imaging system are further filtered, the position with the calculated height smaller than the preset height threshold value is set to be zero in the distance range of 0-Z_min, and for the road section with the height larger than the preset height threshold value, the distance from the camera at the current moment and the concave-convex height of the road section are output.
9. A dense disparity map-based pavement pothole detection apparatus, the apparatus comprising: the device comprises a data acquisition device, a processor and a memory;
the data acquisition device is used for acquiring data; the memory is used for storing one or more program instructions; the processor being configured to execute one or more program instructions for performing the method of any of claims 1-7.
10. A computer readable storage medium having one or more program instructions embodied therein for performing the method of any of claims 1-7.
CN202011390618.5A 2020-12-02 2020-12-02 Road surface pothole detection method, system and equipment based on dense disparity map Active CN112906449B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011390618.5A CN112906449B (en) 2020-12-02 2020-12-02 Road surface pothole detection method, system and equipment based on dense disparity map

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011390618.5A CN112906449B (en) 2020-12-02 2020-12-02 Road surface pothole detection method, system and equipment based on dense disparity map

Publications (2)

Publication Number Publication Date
CN112906449A CN112906449A (en) 2021-06-04
CN112906449B (en) 2024-04-16

Family

ID=76111380

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011390618.5A Active CN112906449B (en) 2020-12-02 2020-12-02 Road surface pothole detection method, system and equipment based on dense disparity map

Country Status (1)

Country Link
CN (1) CN112906449B (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113658240B (en) * 2021-07-15 2024-04-19 北京中科慧眼科技有限公司 Main obstacle detection method and device and automatic driving system
CN113838111A (en) * 2021-08-09 2021-12-24 北京中科慧眼科技有限公司 Road texture feature detection method and device and automatic driving system
CN113808103A (en) * 2021-09-16 2021-12-17 广州大学 Automatic road surface depression detection method and device based on image processing and storage medium
CN113674275B (en) * 2021-10-21 2022-03-18 北京中科慧眼科技有限公司 Dense disparity map-based road surface unevenness detection method and system and intelligent terminal
CN113689565B (en) * 2021-10-21 2022-03-18 北京中科慧眼科技有限公司 Road flatness grade detection method and system based on binocular stereo vision and intelligent terminal
CN113706622B (en) * 2021-10-29 2022-04-19 北京中科慧眼科技有限公司 Road surface fitting method and system based on binocular stereo vision and intelligent terminal
CN113792707A (en) * 2021-11-10 2021-12-14 北京中科慧眼科技有限公司 Terrain environment detection method and system based on binocular stereo camera and intelligent terminal
CN113763303B (en) * 2021-11-10 2022-03-18 北京中科慧眼科技有限公司 Real-time ground fusion method and system based on binocular stereo vision and intelligent terminal
CN115171030B (en) * 2022-09-09 2023-01-31 山东省凯麟环保设备股份有限公司 Multi-modal image segmentation method, system and device based on multi-level feature fusion
CN115205809B (en) * 2022-09-15 2023-03-24 北京中科慧眼科技有限公司 Method and system for detecting roughness of road surface
WO2024060209A1 (en) * 2022-09-23 2024-03-28 深圳市速腾聚创科技有限公司 Method for processing point cloud, and radar
CN115871622A (en) * 2023-01-19 2023-03-31 重庆赛力斯新能源汽车设计院有限公司 Driving assistance method based on drop road surface, electronic device and storage medium
CN116363219B (en) * 2023-06-02 2023-08-11 中国科学技术大学 Binocular fire source image synthesis method, device and readable storage medium
CN117808868A (en) * 2023-12-28 2024-04-02 上海保隆汽车科技股份有限公司 Vehicle control method, road concave-convex feature detection method based on binocular stereoscopic vision, detection system, detection equipment and computer readable storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106651836A (en) * 2016-11-04 2017-05-10 中国科学院上海微***与信息技术研究所 Ground level detection method based on binocular vision
EP3246877A1 (en) * 2016-05-18 2017-11-22 Ricoh Company, Ltd. Road surface estimation based on vertical disparity distribution
CN108229366A (en) * 2017-12-28 2018-06-29 北京航空航天大学 Deep learning vehicle-installed obstacle detection method based on radar and fusing image data
CN108596899A (en) * 2018-04-27 2018-09-28 海信集团有限公司 Road flatness detection method, device and equipment
CN110060284A (en) * 2019-04-25 2019-07-26 王荩立 A kind of binocular vision environmental detecting system and method based on tactilely-perceptible

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3246877A1 (en) * 2016-05-18 2017-11-22 Ricoh Company, Ltd. Road surface estimation based on vertical disparity distribution
CN106651836A (en) * 2016-11-04 2017-05-10 中国科学院上海微***与信息技术研究所 Ground level detection method based on binocular vision
CN108229366A (en) * 2017-12-28 2018-06-29 北京航空航天大学 Deep learning vehicle-installed obstacle detection method based on radar and fusing image data
CN108596899A (en) * 2018-04-27 2018-09-28 海信集团有限公司 Road flatness detection method, device and equipment
CN110060284A (en) * 2019-04-25 2019-07-26 王荩立 A kind of binocular vision environmental detecting system and method based on tactilely-perceptible

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Pothole Detection Based on Disparity Transformation and Road Surface Modeling; Rui Fan et al.; IEEE Transactions on Image Processing; Vol. 29; pp. 897-908 *

Also Published As

Publication number Publication date
CN112906449A (en) 2021-06-04

Similar Documents

Publication Publication Date Title
CN112906449B (en) Road surface pothole detection method, system and equipment based on dense disparity map
CN108647638B (en) Vehicle position detection method and device
US10354151B2 (en) Method of detecting obstacle around vehicle
JP6350374B2 (en) Road surface detection device
US20230144678A1 (en) Topographic environment detection method and system based on binocular stereo camera, and intelligent terminal
CN114495043B (en) Method and system for detecting up-and-down slope road conditions based on binocular vision system and intelligent terminal
CN103177439A (en) Automatically calibration method based on black and white grid corner matching
CN110766760B (en) Method, device, equipment and storage medium for camera calibration
CN114509045A (en) Wheel area elevation detection method and system
CN112149493B (en) Road elevation measurement method based on binocular stereo vision
CN112204614B (en) Motion segmentation in video from non-stationary cameras
CN112465831B (en) Bend scene sensing method, system and device based on binocular stereo camera
WO2016020718A1 (en) Method and apparatus for determining the dynamic state of a vehicle
CN112184792A (en) Road slope calculation method and device based on vision
CN115511974A (en) Rapid external reference calibration method for vehicle-mounted binocular camera
CN111382591A (en) Binocular camera ranging correction method and vehicle-mounted equipment
CN115100621A (en) Ground scene detection method and system based on deep learning network
CN113140002B (en) Road condition detection method and system based on binocular stereo camera and intelligent terminal
JP6768554B2 (en) Calibration device
KR102003387B1 (en) Method for detecting and locating traffic participants using bird's-eye view image, computer-readerble recording medium storing traffic participants detecting and locating program
CN113763303B (en) Real-time ground fusion method and system based on binocular stereo vision and intelligent terminal
CN113689565B (en) Road flatness grade detection method and system based on binocular stereo vision and intelligent terminal
US11477371B2 (en) Partial image generating device, storage medium storing computer program for partial image generation and partial image generating method
CN112070839A (en) Method and equipment for positioning and ranging rear vehicle transversely and longitudinally
CN115205809B (en) Method and system for detecting roughness of road surface

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant