CN114782447B - Road surface detection method, device, vehicle, storage medium and chip


Info

Publication number
CN114782447B
CN114782447B (application number CN202210712200.4A)
Authority
CN
China
Prior art keywords
image
target
road surface
determining
camera parameter
Prior art date
Legal status
Active
Application number
CN202210712200.4A
Other languages
Chinese (zh)
Other versions
CN114782447A (en)
Inventor
冷汉超
俞昆
Current Assignee
Xiaomi Automobile Technology Co Ltd
Original Assignee
Xiaomi Automobile Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Xiaomi Automobile Technology Co Ltd filed Critical Xiaomi Automobile Technology Co Ltd
Priority to CN202210712200.4A priority Critical patent/CN114782447B/en
Publication of CN114782447A publication Critical patent/CN114782447A/en
Application granted granted Critical
Publication of CN114782447B publication Critical patent/CN114782447B/en
Legal status: Active

Classifications

    • G06T 7/0002 — Image analysis; inspection of images, e.g. flaw detection
    • G06F 18/23 — Pattern recognition; analysing; clustering techniques
    • G06T 7/11 — Segmentation; edge detection; region-based segmentation
    • G06T 7/269 — Analysis of motion using gradient-based methods
    • G06T 7/70 — Determining position or orientation of objects or cameras
    • G06T 2207/20112 — Image segmentation details
    • G06T 2207/20164 — Salient point detection; corner detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)
  • Image Processing (AREA)

Abstract

The disclosure relates to the field of automatic driving, and in particular to a road surface detection method, a road surface detection device, a vehicle, a storage medium and a chip. The road surface detection method includes: obtaining a first image of a target detection road surface at a first moment and a second image of the target detection road surface at a second moment, together with a first camera parameter corresponding to the first image and a second camera parameter corresponding to the second image; performing ground position alignment processing on the first image according to the first camera parameter and the second camera parameter to obtain an aligned target image; determining a residual optical flow map corresponding to the first image according to the second image and the target image; and determining detection result information of the target detection road surface according to the residual optical flow map. Because the detection result information is determined from the residual optical flow map, unlabeled obstacles can be detected effectively and road surface conditions can be detected at a fine granularity, so that the recognition rate of road surface obstacles can be effectively improved.

Description

Road surface detection method, device, vehicle, storage medium and chip
Technical Field
The present disclosure relates to the field of automatic driving technologies, and in particular, to a road surface detection method and apparatus, a vehicle, a storage medium, and a chip.
Background
With the growing demand for automatic driving, fine-grained detection of road surface conditions is increasingly required. In the related art, road surface detection is mostly performed based on a neural network model. Training such a model usually requires a large amount of labeled data, and during actual detection only the object classes labeled in the training data can be detected; obstacles that were never labeled in the training data are therefore easy to miss.
Disclosure of Invention
To overcome the problems in the related art, the present disclosure provides a road surface detection method, apparatus, vehicle, storage medium, and chip.
According to a first aspect of the embodiments of the present disclosure, there is provided a road surface detection method including:
acquiring a first image of a target detection road surface at a first moment and a second image of the target detection road surface at a second moment, and a first camera parameter corresponding to the first image and a second camera parameter corresponding to the second image;
performing ground position alignment processing on the first image according to the first camera parameter and the second camera parameter to obtain an aligned target image;
acquiring a residual optical flow map according to the second image and the target image;
and determining the detection result information of the target detection road surface according to the residual optical flow map.
Optionally, the performing ground position alignment processing on the first image according to the first camera parameter and the second camera parameter to obtain an aligned target image includes:
determining a target homography matrix according to the first camera parameters and the second camera parameters;
and carrying out homography transformation on the first image according to the target homography matrix to obtain the target image.
Optionally, before the determining the detection result information of the target detection road surface according to the residual optical flow map, the method further includes:
determining a target offset of the camera along a designated direction from the first moment to the second moment according to the first camera parameter and the second camera parameter;
correspondingly, the determining the detection result information of the target detection road surface according to the residual optical flow map includes:
determining a target ratio according to the target homography matrix, the target offset and the residual optical flow map, wherein the target ratio is the ratio of the road surface height at each pixel in the first image to the depth corresponding to that pixel;
determining a dense height map corresponding to the first image according to the target ratio corresponding to the first image;
and determining the detection result information in the first image according to the dense height map.
Optionally, the determining a dense height map corresponding to the first image according to the target ratio corresponding to the first image includes:
determining a target depth map corresponding to the first image according to the target ratio and the first camera parameter;
and determining the road surface height in each pixel in the first image according to the target depth map and the target ratio corresponding to each pixel.
Optionally, the determining the detection result information in the first image according to the dense height map includes:
determining a plurality of target pixels of which the road height is greater than a first threshold and smaller than a second threshold according to the road height corresponding to each pixel in the dense height map, wherein the first threshold is smaller than the second threshold;
and clustering the target pixels to obtain concave-convex areas on the road surface in the first image.
Optionally, the detection result information includes a concave-convex height of each concave-convex area in the target detection road surface, and the determining the detection result information in the first image according to the dense height map further includes:
acquiring the maximum road surface height in each concave-convex area;
taking the maximum road surface height in the concave-convex area as the concave-convex height of the concave-convex area.
Optionally, the obtaining a residual optical flow map according to the second image and the target image includes:
taking the second image and the target image as the input of a preset optical flow estimation model to obtain the residual optical flow map output by the preset optical flow estimation model.
According to a second aspect of the embodiments of the present disclosure, there is provided a road surface detection device including:
the acquisition module is configured to acquire a first image of a target detection road surface at a first moment and a second image of the target detection road surface at a second moment, and a first camera parameter corresponding to the first image and a second camera parameter corresponding to the second image;
an alignment module configured to perform ground position alignment processing on the first image according to the first camera parameter and the second camera parameter to obtain an aligned target image;
a first determining module configured to obtain a residual optical flow map from the second image and the target image;
a second determination module configured to determine detection result information of the target detection road surface according to the residual optical flow map.
Optionally, the alignment module is configured to:
determining a target homography matrix according to the first camera parameters and the second camera parameters;
and carrying out homography transformation on the first image according to the target homography matrix to obtain the target image.
Optionally, the apparatus further comprises: a third determination module configured to determine a target offset of the camera in a specified direction from the first time to the second time according to the first camera parameter and the second camera parameter;
accordingly, the second determination module is configured to:
determining a target ratio according to the target homography matrix, the target offset and the residual optical flow map, wherein the target ratio is the ratio of the road surface height at each pixel in the first image to the depth corresponding to that pixel;
determining a dense height map corresponding to the first image according to the target ratio corresponding to the first image;
and determining the detection result information in the first image according to the dense height map.
Optionally, the second determining module is configured to:
determining a target depth map corresponding to the first image according to the target ratio and the first camera parameter;
and determining the road surface height in each pixel in the first image according to the target depth map and the target ratio corresponding to each pixel.
Optionally, the detection result information includes a concave-convex region, and the second determining module is configured to:
determining a plurality of target pixels of which the road height is greater than a first threshold and smaller than a second threshold according to the road height corresponding to each pixel in the dense height map, wherein the first threshold is smaller than the second threshold;
and clustering the target pixels to obtain concave-convex areas on the road surface in the first image.
Optionally, the detection result information includes a concave-convex height of a concave-convex area in the target detection road surface, and the second determination module is further configured to:
acquiring the maximum road surface height in each concave-convex area;
taking the maximum road surface height in the concave-convex area as the concave-convex height of the concave-convex area.
Optionally, the first determining module is configured to:
and taking the second image and the target image as the input of a preset optical flow estimation model to obtain the residual optical flow map output by the preset optical flow estimation model.
According to a third aspect of the embodiments of the present disclosure, there is provided a vehicle including:
a first processor;
a memory for storing processor-executable instructions;
wherein the first processor is configured to:
acquiring a first image of the target detection road surface at a first moment and a second image of the target detection road surface at a second moment, and a first camera parameter corresponding to the first image and a second camera parameter corresponding to the second image;
performing ground position alignment processing on the first image according to the first camera parameter and the second camera parameter to obtain an aligned target image;
acquiring a residual optical flow map according to the second image and the target image;
and determining the detection result information of the target detection road surface according to the residual optical flow map.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the steps of the method of the first aspect described above.
According to a fifth aspect of embodiments of the present disclosure, there is provided a chip comprising a second processor and an interface; the second processor is for reading instructions to perform the method of the first aspect above.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
the method comprises the steps that a first image of a road surface at a first moment and a second image of the road surface at a second moment can be detected through obtaining a target, and a first camera parameter corresponding to the first image and a second camera parameter corresponding to the second image; performing ground position alignment processing on the first image according to the first camera parameter and the second camera parameter to obtain an aligned target image; acquiring a residual light flow diagram according to the second image and the target image; the detection result information of the target detection road surface is determined according to the residual light flow diagram, so that the detection result information of the target detection road surface is determined according to the residual light flow diagram, effective detection can be realized for unmarked obstacles, fine granularity detection can also be realized for road surface conditions, and the identification rate of the road surface obstacles can be effectively improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flow chart illustrating a method of road surface detection according to an exemplary embodiment;
FIG. 2 is a flow chart of a method of detecting a road surface according to the embodiment shown in FIG. 1;
FIG. 3 is a schematic diagram of a dense height map shown in an exemplary embodiment of the present disclosure;
fig. 4 is a block diagram illustrating a road surface detection device according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. The following description refers to the accompanying drawings in which the same numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
It should be noted that all actions of acquiring signals, information or data in the present application are performed under the premise of complying with the corresponding data protection regulation policy of the country of the location and obtaining the authorization given by the owner of the corresponding device.
Fig. 1 is a flowchart illustrating a road surface detection method according to an exemplary embodiment, which may include the following steps, as shown in fig. 1.
In step 101, a first image of a target detection road surface at a first time and a second image of the target detection road surface at a second time are obtained, together with a first camera parameter corresponding to the first image and a second camera parameter corresponding to the second image.
The first time and the second time may be two adjacent sampling times, with the first time being the earlier sampling time and the second time the later one. The first camera parameter consists of the camera intrinsic and extrinsic parameters when the first image was acquired, and the second camera parameter consists of the camera intrinsic and extrinsic parameters when the second image was acquired. The intrinsic parameters describe attributes such as the camera focal length and optical center, while the extrinsic parameters describe the camera pose as a series of rotations and translations, generally comprising a rotation matrix and a translation vector.
In step 102, the first image is subjected to ground position alignment processing according to the first camera parameter and the second camera parameter, so as to obtain an aligned target image.
In this step, a target homography matrix may be determined according to the first camera parameter and the second camera parameter; and then carrying out homography transformation on the first image according to the target homography matrix to obtain the target image.
It should be noted that the target homography matrix can be calculated from the first camera parameters and the second camera parameters through the following formula (reconstructed here as the standard plane-induced homography, under the convention that a point X₁ in the first camera frame maps to X₂ = R·X₁ + t in the second, with ground plane nᵀ·X = d):

H = K · (R + t·nᵀ / d) · K⁻¹          (formula 1)

In formula 1, H is the target homography matrix, R is the rotation matrix of the camera pose from the first time to the second time, K is the camera intrinsic matrix, t is the translation vector of the camera pose from the first time to the second time, n is the normal vector of the ground, and d is the camera height.
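As an illustration, formula 1 can be sketched in a few lines of NumPy and checked on a synthetic ground point. This is a hedged sketch under the convention X₂ = R·X₁ + t with ground plane nᵀ·X = d; all function names and numeric values are illustrative, not from the patent:

```python
import numpy as np

def plane_homography(K, R, t, n, d):
    """Plane-induced homography mapping ground pixels of image 1 into image 2.

    Convention (illustrative assumption): X2 = R @ X1 + t, ground plane n.X = d,
    with d the camera height above the road.
    """
    return K @ (R + np.outer(t, n) / d) @ np.linalg.inv(K)

def project(K, X):
    """Pinhole projection to inhomogeneous pixel coordinates."""
    p = K @ X
    return p[:2] / p[2]

K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
R = np.eye(3)                    # pure forward motion between the two moments
t = np.array([0.0, 0.0, -0.5])
n = np.array([0.0, 1.0, 0.0])    # ground normal in camera coordinates (y down)
d = 1.5                          # camera height

H = plane_homography(K, R, t, n, d)

X1 = np.array([2.0, 1.5, 10.0])  # a ground point: n @ X1 == d
p1 = project(K, X1)
p2 = project(K, R @ X1 + t)      # the same point seen at the second moment

warped = H @ np.array([p1[0], p1[1], 1.0])
warped = warped[:2] / warped[2]
print(np.allclose(warped, p2))   # ground pixels align exactly after warping
```

A ground point warped by H lands exactly on its projection in the second image; a point above the road plane does not, and that mismatch is what the residual optical flow measures in the next step.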
In step 103, a residual optical flow map is obtained from the second image and the target image.
In this step, the second image and the target image may be used as inputs of a preset optical flow estimation model to obtain the residual optical flow map output by the preset optical flow estimation model.
The preset optical flow estimation model may be any existing optical flow estimation model; for example, it may be an optical flow estimation model based on the PWC-Net (Pyramid, Warping, and Cost volume) algorithm, or an optical flow estimation model based on the RAFT algorithm.
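The patent leaves the choice of optical flow estimator open. Purely as a stand-in for intuition (not the PWC-Net or RAFT models named above), a single-window Lucas-Kanade least-squares estimate over the brightness-constancy equation recovers one global flow vector; the synthetic data and names are illustrative:

```python
import numpy as np

def lucas_kanade_global(img1, img2):
    """One global (u, v) flow vector via least squares on Ix*u + Iy*v + It = 0."""
    Iy, Ix = np.gradient(img1)            # spatial gradients (rows=y, cols=x)
    It = img2 - img1                      # temporal difference
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v

# Synthetic pair: the pattern moves +0.5 pixel along x between the frames.
ys, xs = np.mgrid[0:64, 0:64].astype(float)
img1 = np.sin(xs / 10.0) + np.sin(ys / 13.0)
img2 = np.sin((xs - 0.5) / 10.0) + np.sin(ys / 13.0)

u, v = lucas_kanade_global(img1, img2)
print(round(u, 1), round(v, 1))           # u close to 0.5, v close to 0.0
```

A learned model such as RAFT would output a dense per-pixel field rather than a single vector, but the underlying quantity being estimated is the same.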
In step 104, the detection result information of the target detection road surface is determined according to the residual optical flow map.
The detection result information may include the positions of the depressions and the protrusions, the depth of each depression, and the height of each protrusion.
According to the above technical scheme, the detection result information of the target detection road surface can be determined according to the residual optical flow map, effective detection can be achieved for unlabeled obstacles, and fine-grained detection can be achieved for road surface conditions, so that the recognition rate of road surface obstacles can be effectively improved.
FIG. 2 is a flow chart of a method of detecting a road surface according to the embodiment shown in FIG. 1; as shown in fig. 2, the road surface detection method may further include step 1041;
in step 1041, a target offset of the camera in a designated direction from the first time to the second time is determined according to the first camera parameter and the second camera parameter.
The designated direction may be the Z-axis, X-axis or Y-axis direction of the world coordinate system. The camera extrinsic parameters in the first camera parameters include the position of the camera along the Z-axis, X-axis and Y-axis at the first time, and the camera extrinsic parameters in the second camera parameters include the position of the camera along these axes at the second time. The target offset of the camera along the designated direction is then obtained from the camera extrinsic parameters at the first time and the second time.
The step 104 shown in fig. 1 of determining the detection result information of the target detection road surface according to the residual optical flow map may be implemented through the following steps 1042 to 1044.
In step 1042, a target ratio is determined according to the target homography matrix, the target offset and the residual optical flow map.
The target ratio is the ratio of the road surface height at each pixel in the first image to the depth corresponding to that pixel.
It should be noted that the target ratio can be calculated from the residual optical flow. Formula 2 is reconstructed here as the standard plane-plus-parallax relation consistent with the variables listed below. Writing a = H·p, b = K·t/d, and dehom(·) for division of the first two homogeneous components by the third, a point at height h and depth Z satisfies

p₂ = dehom(H·p + γ·b),   w = dehom(H·p),   μ = p₂ − w,

and solving the x-component of the residual flow μ for the target ratio γ gives:

γ = μₓ · a₃² / (b₁·a₃ − a₁·b₃ − μₓ·a₃·b₃)          (formula 2)

In formula 2, γ is the target ratio (the road surface height at the pixel divided by the depth of the pixel), μ is the residual optical flow corresponding to the pixel, d is the height of the camera from the ground, t = (0, 0, t_z)ᵀ where t_z is the offset of the camera along the Z-axis from the first time to the second time (i.e. the target offset), and p = (x, y, 1)ᵀ is the homogeneous pixel coordinate corresponding to the pixel in the first image, with x = 0, 1, …, width₁ − 1 and y = 0, 1, …, height₁ − 1, where width₁ is the width of the first image and height₁ is its height; a₃ = H₃ᵀ·p, where H₃ = (H₃₁, H₃₂, H₃₃)ᵀ is the third row of the target homography matrix H.
In step 1043, a dense height map corresponding to the first image is determined according to the target ratio corresponding to the first image.
In this step, the dense height map can be obtained through steps S1 and S2 below.
And S1, determining a target depth map corresponding to the first image according to the target ratio and the first camera parameter.
The depth value corresponding to each pixel can be obtained through the following formula 3, so as to obtain the target depth map corresponding to the first image (reconstructed from the relations nᵀ·X = d − h and X = Z·K⁻¹·p):

Z = d / (nᵀ·K⁻¹·p + γ)          (formula 3)

In formula 3, Z is the depth value corresponding to each pixel in the first image, n is the normal vector of the ground, d is the camera height, K is the camera intrinsic matrix, γ is the target ratio, and p = (x, y, 1)ᵀ is the homogeneous pixel coordinate corresponding to the pixel in the second image, with x = 0, 1, …, width₂ − 1 and y = 0, 1, …, height₂ − 1, where width₂ is the width of the second image and height₂ is the height of the second image.
And S2, determining the road surface height in each pixel in the first image according to the target depth map and the target ratio corresponding to each pixel.
It should be noted that the road surface height at each pixel can be obtained through the following formula 4:

h_p = γ · Z          (formula 4)

where h_p is the road surface height corresponding to each pixel in the first image, Z is the depth value corresponding to the pixel, and γ is the target ratio. The dense height map may be obtained by computing the road surface height at each pixel in the first image and representing these heights as an image. As shown in fig. 3, which is a schematic diagram of a dense height map according to an exemplary embodiment of the present disclosure, the lower image is the dense height map of the upper image.
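Steps S1 and S2 can be sketched directly from formulas 3 and 4: depth from the target ratio plus the intrinsics, then height as the product γ·Z. The ground convention nᵀ·X = d and all numeric values are illustrative assumptions:

```python
import numpy as np

K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
n = np.array([0.0, 1.0, 0.0])    # ground normal in camera coordinates
d = 1.5                          # camera height above the road

# A point 1.0 m above the road at depth 10 m, with its pixel and target ratio.
X = np.array([2.0, 0.5, 10.0])
gamma = (d - n @ X) / X[2]               # target ratio gamma = h / Z
p = np.append((K @ X / X[2])[:2], 1.0)   # homogeneous pixel coordinate

# S1 / formula 3: depth from the target ratio and the camera intrinsics.
Z = d / (n @ np.linalg.inv(K) @ p + gamma)

# S2 / formula 4: road surface height at the pixel.
h = gamma * Z

print(np.isclose(Z, 10.0), np.isclose(h, 1.0))   # depth and height recovered
```

Applied per pixel, the same two lines turn the target ratio map into the target depth map and then into the dense height map.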
In step 1044, the detection result information in the first image is determined according to the dense height map.
In this step, the detection result information includes concave-convex areas, that is, positions of the protrusions and the depressions, and a plurality of target pixels whose road height is greater than a first threshold and smaller than a second threshold may be determined according to the road height corresponding to each pixel in the dense height map, where the first threshold is smaller than the second threshold; and clustering the target pixels to obtain concave-convex areas on the road surface in the first image.
It should be noted that the clustering process may use a clustering algorithm in the prior art, and the road height and position coordinates of each target pixel are used as input, so that the clustering algorithm outputs one or more clusters, and the concave-convex area is obtained according to the coordinate positions of the target pixels in the clusters.
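The patent only requires "a clustering algorithm in the prior art" here. One minimal stand-in is 4-connected component labeling on the thresholded dense height map: each component is one concave-convex area, and its concave-convex height can be read off as the maximum height inside the component. The toy height map and thresholds below are illustrative assumptions:

```python
import numpy as np
from collections import deque

def cluster_regions(height_map, lo, hi):
    """Group 4-connected pixels with height in (lo, hi) into regions;
    return (pixel list, maximum height) for each region."""
    mask = (height_map > lo) & (height_map < hi)
    seen = np.zeros_like(mask, dtype=bool)
    rows, cols = mask.shape
    regions = []
    for sy, sx in zip(*np.nonzero(mask)):
        if seen[sy, sx]:
            continue
        seen[sy, sx] = True
        queue, pixels = deque([(sy, sx)]), []
        while queue:
            y, x = queue.popleft()
            pixels.append((y, x))
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if 0 <= ny < rows and 0 <= nx < cols and mask[ny, nx] and not seen[ny, nx]:
                    seen[ny, nx] = True
                    queue.append((ny, nx))
        regions.append((pixels, max(height_map[y, x] for y, x in pixels)))
    return regions

# Toy dense height map (metres) with two raised areas.
hm = np.zeros((8, 8))
hm[1:3, 1:3] = 0.12                                       # bump A, peak 0.12
hm[5:7, 4:7] = [[0.05, 0.20, 0.05], [0.05, 0.05, 0.05]]   # bump B, peak 0.20
regions = cluster_regions(hm, lo=0.03, hi=0.5)
print(len(regions), sorted(peak for _, peak in regions))  # two regions found
```

A density-based algorithm such as DBSCAN over (height, position) features would serve the same role for irregular or noisy regions.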
According to the above technical scheme, a first image of the target detection road surface at a first moment and a second image of the target detection road surface at a second moment are acquired, together with a first camera parameter corresponding to the first image and a second camera parameter corresponding to the second image; ground position alignment processing is performed on the first image according to the first camera parameter and the second camera parameter to obtain an aligned target image; a residual optical flow map is acquired according to the second image and the target image; and the detection result information of the target detection road surface is determined according to the residual optical flow map. In this way, effective detection can be achieved for unlabeled obstacles and fine-grained detection can be achieved for road surface conditions, so that the recognition rate of road surface obstacles can be effectively improved.
Optionally, the detection result information may further include concave-convex heights of concave-convex regions in the target detection road surface, and after the concave-convex regions are obtained, the maximum road surface height in each concave-convex region may also be obtained; the maximum road surface height in the concave-convex area is taken as the concave-convex height of the concave-convex area.
The maximum road surface height may be the highest raised road surface height, or the deepest depression depth.
The above technical solution can not only effectively detect concave-convex areas at a finer granularity, but also effectively detect the concave-convex height of each concave-convex area in the road surface, thereby providing a reliable data basis for subsequent vehicle control.
FIG. 4 is a block diagram illustrating a road surface detecting device according to an exemplary embodiment; as shown in fig. 4, the road surface detection device may include:
an obtaining module 401 configured to obtain a first image of a target detection road surface at a first time and a second image of the target detection road surface at a second time, and a first camera parameter corresponding to the first image and a second camera parameter corresponding to the second image;
an alignment module 402 configured to perform ground position alignment processing on the first image according to the first camera parameter and the second camera parameter to obtain an aligned target image;
a first determining module 403 configured to obtain a residual optical flow map from the second image and the target image;
a second determining module 404 configured to determine detection result information of the target detection road surface according to the residual optical flow map.
According to the above technical scheme, the detection result information of the target detection road surface can be determined according to the residual optical flow map, effective detection can be achieved for unlabeled obstacles, and fine-grained detection can be achieved for road surface conditions, so that the recognition rate of road surface obstacles can be effectively improved.
Optionally, the alignment module 402 is configured to:
determining a target homography matrix according to the first camera parameters and the second camera parameters;
and performing homography transformation on the first image according to the target homography matrix to obtain the target image.
Optionally, the apparatus further comprises: a third determination module configured to determine a target offset of the camera in a specified direction from the first time to the second time according to the first camera parameter and the second camera parameter;
accordingly, the second determination module 404 is configured to: determine a target ratio according to the target homography matrix, the target offset and the residual optical flow map, wherein the target ratio is the ratio of the road surface height at each pixel in the first image to the depth corresponding to that pixel;
determining a dense height map corresponding to the first image according to the target ratio corresponding to the first image;
and determining the detection result information in the first image according to the dense height map.
Optionally, the second determining module 404 is configured to:
determining a target depth map corresponding to the first image according to the target ratio and the first camera parameter;
and determining the road surface height in each pixel in the first image according to the target depth map and the target ratio corresponding to each pixel.
Optionally, the detection result information includes an uneven area, and the second determining module 404 is configured to:
determining a plurality of target pixels whose road surface height is greater than a first threshold and smaller than a second threshold according to the road surface height corresponding to each pixel in the dense height map, wherein the first threshold is smaller than the second threshold;
and clustering the target pixels to obtain the uneven areas on the road surface in the first image.
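The patent does not fix a clustering algorithm; one simple realization of "threshold, then cluster" is 4-connected component grouping over the thresholded height map. The thresholds and function names below are assumptions for illustration:

```python
import numpy as np
from collections import deque

def uneven_regions(height_map, t1, t2):
    """Select pixels whose height lies in (t1, t2), then group them into
    4-connected clusters; each cluster is one uneven region.
    Returns a label map (0 = background) and the region count."""
    mask = (height_map > t1) & (height_map < t2)
    labels = np.zeros(mask.shape, dtype=int)
    h, w = mask.shape
    count = 0
    for i in range(h):
        for j in range(w):
            if mask[i, j] and labels[i, j] == 0:
                count += 1
                q = deque([(i, j)])
                labels[i, j] = count
                while q:  # BFS flood fill over the 4-neighborhood
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = count
                            q.append((ny, nx))
    return labels, count
```

A library routine such as `scipy.ndimage.label` would do the same job; the explicit BFS just makes the clustering step concrete.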
Optionally, the detection result information includes the unevenness height of an uneven area in the target detection road surface, and the second determining module 404 is further configured to:
acquiring the maximum road surface height in each uneven area;
and taking the maximum road surface height in the uneven area as the unevenness height of that uneven area.
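Given a clustered label map, the unevenness height of each region is simply the per-region maximum over the dense height map. A minimal sketch (the label-map convention, 0 for background and 1..n for regions, is an assumption):

```python
import numpy as np

def unevenness_heights(height_map, labels, n_regions):
    """For each uneven region (labels 1..n_regions), take the maximum
    road-surface height among its pixels as that region's unevenness height."""
    return {k: float(height_map[labels == k].max())
            for k in range(1, n_regions + 1)}
```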
Optionally, the first determining module 403 is configured to:
taking the second image and the target image as the input of a preset optical flow estimation model to obtain the residual optical flow map output by the preset optical flow estimation model.
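The patent leaves the optical flow estimation model itself unspecified. Whatever the architecture, its input is typically the second image and the ground-aligned target image stacked along the channel axis, so that the network only has to predict the flow left over after the homography warp. A hedged sketch of that input preparation (the layout is an assumption, not the patent's model):

```python
import numpy as np

def prepare_flow_input(second_image, target_image):
    """Stack the second image and the aligned target image channelwise.
    For ideal ground pixels the two images already coincide after the
    homography warp, so a flow model fed this pair outputs (near-)zero
    flow on the road plane and a nonzero residual wherever the surface
    deviates from the plane."""
    assert second_image.shape == target_image.shape
    return np.concatenate([second_image, target_image], axis=-1)
```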
The above technical solution can not only detect uneven areas at a finer granularity, but can also effectively detect the unevenness height of each uneven area in the road surface, thereby providing a reliable data basis for subsequent vehicle control.
With regard to the apparatus in the above embodiment, the specific manner in which each module performs its operations has been described in detail in the embodiments of the method and will not be elaborated here.
In another exemplary embodiment of the present disclosure, a vehicle is provided, including:
a first processor;
a memory for storing processor-executable instructions;
wherein the first processor is configured to:
acquiring a first image of a target detection road surface at a first moment and a second image of the target detection road surface at a second moment, and a first camera parameter corresponding to the first image and a second camera parameter corresponding to the second image;
performing ground position alignment processing on the first image according to the first camera parameter and the second camera parameter to obtain an aligned target image;
acquiring a residual optical flow map according to the second image and the target image;
and determining the detection result information of the target detection road surface according to the residual optical flow map.
The present disclosure also provides a computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the steps of the road surface detection method provided by the present disclosure.
In another exemplary embodiment, a computer program product is also provided. The computer program product comprises a computer program executable by a programmable apparatus, the computer program having code portions for performing the above road surface detection method when executed by the programmable apparatus; the computer program product may be a chip.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements that have been described above and shown in the drawings, and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (9)

1. A road surface detection method, characterized by comprising:
acquiring a first image of a target detection road surface at a first moment and a second image of the target detection road surface at a second moment, and a first camera parameter corresponding to the first image and a second camera parameter corresponding to the second image;
performing ground position alignment processing on the first image according to the first camera parameter and the second camera parameter to obtain an aligned target image;
acquiring a residual optical flow map according to the second image and the target image;
determining the detection result information of the target detection road surface according to the residual optical flow map;
the performing ground position alignment processing on the first image according to the first camera parameter and the second camera parameter to obtain an aligned target image includes:
determining a target homography matrix according to the first camera parameters and the second camera parameters;
performing homography transformation on the first image according to the target homography matrix to obtain the target image;
before the determining the detection result information of the target detection road surface according to the residual optical flow map, the method further comprises:
determining the target offset of the camera along the designated direction from the first moment to the second moment according to the first camera parameter and the second camera parameter;
correspondingly, the determining the detection result information of the target detection road surface according to the residual optical flow map includes:
determining a target ratio according to the target homography matrix, the target offset and the residual optical flow map, wherein the target ratio is the ratio of the road surface height at each pixel in the first image to the depth of that pixel;
determining a dense height map corresponding to the first image according to the target ratio corresponding to the first image;
and determining the detection result information in the first image according to the dense height map.
2. The method for detecting a road surface according to claim 1, wherein the determining a dense height map corresponding to the first image according to the target ratio corresponding to the first image includes:
determining a target depth map corresponding to the first image according to the target ratio and the first camera parameter;
and determining the road surface height at each pixel in the first image according to the target depth map and the target ratio corresponding to each pixel.
3. The road surface detection method according to claim 1, wherein the detection result information includes an uneven area, and the determining the detection result information in the first image from the dense height map includes:
determining a plurality of target pixels whose road surface height is greater than a first threshold and smaller than a second threshold according to the road surface height corresponding to each pixel in the dense height map, wherein the first threshold is smaller than the second threshold;
and clustering the target pixels to obtain the uneven areas on the road surface in the first image.
4. The road surface detection method according to claim 3, characterized in that the detection result information includes the unevenness height of an uneven area in the target detection road surface, and the determining the detection result information in the first image from the dense height map further includes:
acquiring the maximum road surface height in each uneven area;
and taking the maximum road surface height in the uneven area as the unevenness height of that uneven area.
5. A road surface detection method according to any one of claims 1 to 4, wherein said acquiring a residual optical flow map from said second image and said target image comprises:
taking the second image and the target image as the input of a preset optical flow estimation model to obtain the residual optical flow map output by the preset optical flow estimation model.
6. A road surface detecting device characterized by comprising:
the acquisition module is configured to acquire a first image of a target detection road surface at a first moment and a second image of the target detection road surface at a second moment, and a first camera parameter corresponding to the first image and a second camera parameter corresponding to the second image;
an alignment module configured to perform ground position alignment processing on the first image according to the first camera parameter and the second camera parameter to obtain an aligned target image;
a first determining module configured to acquire a residual optical flow map from the second image and the target image;
a second determining module configured to determine detection result information of the target detection road surface according to the residual optical flow map;
the alignment module is configured to:
determining a target homography matrix according to the first camera parameters and the second camera parameters;
performing homography transformation on the first image according to the target homography matrix to obtain the target image;
the device further comprises: a third determination module configured to determine a target offset of the camera in a specified direction from the first time to the second time according to the first camera parameter and the second camera parameter;
accordingly, the second determination module is configured to:
determining a target ratio according to the target homography matrix, the target offset and the residual light flow graph, wherein the target ratio is the ratio of the road surface height in each pixel in the first image to the corresponding depth of the pixel;
determining a dense height map corresponding to the first image according to the target ratio corresponding to the first image;
and determining the detection result information in the first image according to the dense height map.
7. A vehicle, characterized by comprising:
a first processor;
a memory for storing processor-executable instructions;
wherein the first processor is configured to:
acquiring a first image of a target detection road surface at a first moment and a second image of the target detection road surface at a second moment, and a first camera parameter corresponding to the first image and a second camera parameter corresponding to the second image;
performing ground position alignment processing on the first image according to the first camera parameter and the second camera parameter to obtain an aligned target image;
acquiring a residual optical flow map according to the second image and the target image;
determining detection result information of the target detection road surface according to the residual optical flow map;
the performing ground position alignment processing on the first image according to the first camera parameter and the second camera parameter to obtain an aligned target image includes:
determining a target homography matrix according to the first camera parameters and the second camera parameters;
performing homography transformation on the first image according to the target homography matrix to obtain the target image;
before the determining the detection result information of the target detection road surface according to the residual optical flow map, the method further includes:
determining the target offset of the camera along the designated direction from the first moment to the second moment according to the first camera parameter and the second camera parameter;
correspondingly, the determining the detection result information of the target detection road surface according to the residual optical flow map includes:
determining a target ratio according to the target homography matrix, the target offset and the residual optical flow map, wherein the target ratio is the ratio of the road surface height at each pixel in the first image to the depth of that pixel;
determining a dense height map corresponding to the first image according to the target ratio corresponding to the first image;
and determining the detection result information in the first image according to the dense height map.
8. A computer-readable storage medium, on which computer program instructions are stored, which program instructions, when executed by a processor, carry out the steps of the method according to any one of claims 1 to 5.
9. A chip comprising a second processor and an interface; the second processor is to read instructions to perform the method of any one of claims 1-5.
CN202210712200.4A 2022-06-22 2022-06-22 Road surface detection method, device, vehicle, storage medium and chip Active CN114782447B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210712200.4A CN114782447B (en) 2022-06-22 2022-06-22 Road surface detection method, device, vehicle, storage medium and chip

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210712200.4A CN114782447B (en) 2022-06-22 2022-06-22 Road surface detection method, device, vehicle, storage medium and chip

Publications (2)

Publication Number Publication Date
CN114782447A CN114782447A (en) 2022-07-22
CN114782447B true CN114782447B (en) 2022-09-09

Family

ID=82422520

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210712200.4A Active CN114782447B (en) 2022-06-22 2022-06-22 Road surface detection method, device, vehicle, storage medium and chip

Country Status (1)

Country Link
CN (1) CN114782447B (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008033781A (en) * 2006-07-31 2008-02-14 Toyota Motor Corp Road surface gradient detection device and image display device
JP2009139324A (en) * 2007-12-10 2009-06-25 Mazda Motor Corp Travel road surface detecting apparatus for vehicle
CN102999759A (en) * 2012-11-07 2013-03-27 东南大学 Light stream based vehicle motion state estimating method
CN103236160A (en) * 2013-04-07 2013-08-07 水木路拓科技(北京)有限公司 Road network traffic condition monitoring system based on video image processing technology
CN103366158A (en) * 2013-06-27 2013-10-23 东南大学 Three dimensional structure and color model-based monocular visual road face detection method
WO2019007258A1 (en) * 2017-07-07 2019-01-10 腾讯科技(深圳)有限公司 Method, apparatus and device for determining camera posture information, and storage medium
WO2019156072A1 (en) * 2018-02-06 2019-08-15 株式会社デンソー Attitude estimating device
CN110235026A (en) * 2017-01-26 2019-09-13 御眼视觉技术有限公司 The automobile navigation of image and laser radar information based on alignment
WO2019174377A1 (en) * 2018-03-14 2019-09-19 大连理工大学 Monocular camera-based three-dimensional scene dense reconstruction method
CN111595334A (en) * 2020-04-30 2020-08-28 东南大学 Indoor autonomous positioning method based on tight coupling of visual point-line characteristics and IMU (inertial measurement Unit)
CN112784671A (en) * 2019-11-08 2021-05-11 三菱电机株式会社 Obstacle detection device and obstacle detection method
CN113887400A (en) * 2021-09-29 2022-01-04 北京百度网讯科技有限公司 Obstacle detection method, model training method and device and automatic driving vehicle

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
MX2013008808A (en) * 2011-04-13 2013-10-03 Nissan Motor Driving assistance device and adjacent vehicle detection method therefor.
CN109685732B (en) * 2018-12-18 2023-02-17 重庆邮电大学 High-precision depth image restoration method based on boundary capture
CN112667837A (en) * 2019-10-16 2021-04-16 上海商汤临港智能科技有限公司 Automatic image data labeling method and device
CN112700486B (en) * 2019-10-23 2024-05-07 浙江菜鸟供应链管理有限公司 Method and device for estimating depth of road surface lane line in image
CN113819890B (en) * 2021-06-04 2023-04-14 腾讯科技(深圳)有限公司 Distance measuring method, distance measuring device, electronic equipment and storage medium
CN113822260B (en) * 2021-11-24 2022-03-22 杭州蓝芯科技有限公司 Obstacle detection method and apparatus based on depth image, electronic device, and medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Road vehicle detection based on multi-sensor fusion; Wu Guoxing et al.; Journal of Huazhong University of Science and Technology (Natural Science Edition); 31 October 2015; Vol. 43; pp. 250-254 *

Also Published As

Publication number Publication date
CN114782447A (en) 2022-07-22

Similar Documents

Publication Publication Date Title
CN108369650B (en) Method for identifying possible characteristic points of calibration pattern
CN108229475B (en) Vehicle tracking method, system, computer device and readable storage medium
CN110263714B (en) Lane line detection method, lane line detection device, electronic device, and storage medium
CN109583365A (en) Method for detecting lane lines is fitted based on imaging model constraint non-uniform B-spline curve
CN110490936A (en) Scaling method, device, equipment and the readable storage medium storing program for executing of vehicle camera
CN112598922A (en) Parking space detection method, device, equipment and storage medium
CN111553914A (en) Vision-based goods detection method and device, terminal and readable storage medium
CN116994236A (en) Low-quality image license plate detection method based on deep neural network
CN112784639A (en) Intersection detection, neural network training and intelligent driving method, device and equipment
CN112184723B (en) Image processing method and device, electronic equipment and storage medium
CN114782447B (en) Road surface detection method, device, vehicle, storage medium and chip
CN116863170A (en) Image matching method, device and storage medium
CN111178111A (en) Two-dimensional code detection method, electronic device, storage medium and system
CN114897987B (en) Method, device, equipment and medium for determining vehicle ground projection
CN111126286A (en) Vehicle dynamic detection method and device, computer equipment and storage medium
CN113723432B (en) Intelligent identification and positioning tracking method and system based on deep learning
CN114973203A (en) Incomplete parking space identification method and device and automatic parking method
CN103606146A (en) Corner point detection method based on circular target
CN113888740A (en) Method and device for determining binding relationship between target license plate frame and target vehicle frame
CN113643374A (en) Multi-view camera calibration method, device, equipment and medium based on road characteristics
CN110264531A (en) A kind of catching for X-comers takes method, apparatus, system and readable storage medium storing program for executing
CN110363235A (en) A kind of high-definition picture matching process and system
CN113689455B (en) Thermal fluid image processing method, system, terminal and medium
CN113218361B (en) Camera ranging method and device
CN112197747B (en) Method and apparatus for assisting target detection using wireless positioning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant