CN113808098A - Road disease identification method and device, electronic equipment and readable storage medium - Google Patents

Road disease identification method and device, electronic equipment and readable storage medium

Info

Publication number
CN113808098A
CN113808098A (application CN202111076677.XA)
Authority
CN
China
Prior art keywords: road, target, disease, image, images
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111076677.XA
Other languages
Chinese (zh)
Inventor
白军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fengtu Technology Shenzhen Co Ltd
Original Assignee
Fengtu Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Fengtu Technology Shenzhen Co Ltd
Priority to CN202111076677.XA
Publication of CN113808098A
Legal status: Pending


Classifications

    • G06T 7/0002 - Image analysis: inspection of images, e.g. flaw detection
    • G06F 18/214 - Pattern recognition: generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F 18/253 - Pattern recognition: fusion techniques of extracted features
    • G06N 3/045 - Neural networks: combinations of networks
    • G06N 3/084 - Neural network learning methods: backpropagation, e.g. using gradient descent
    • G06T 7/10 - Image analysis: segmentation; edge detection
    • G06T 2207/10032 - Image acquisition modality: satellite or aerial image; remote sensing
    • G06T 2207/10044 - Image acquisition modality: radar image
    • G06T 2207/20081 - Special algorithmic details: training; learning


Abstract

The application provides a road disease identification method and device, an electronic device, and a computer-readable storage medium. The road disease identification method comprises the following steps: acquiring multi-dimensional target images of a target road, wherein the multi-dimensional target images comprise at least two of a color image, a radar data image, an infrared image, and an ultraviolet image of the target road; acquiring a target speed parameter of an acquisition device of the multi-dimensional target images, wherein the acquisition device is a device moving on the target road; performing feature extraction based on the multi-dimensional target images to obtain target image features of the multi-dimensional target images; fusing the target image features with the target speed parameter to obtain a target fusion feature of the multi-dimensional target images; and performing identification based on the target fusion feature to determine the road diseases present in the target road. By combining multi-dimensional, multi-modal data for road disease identification, the method and device can improve the accuracy of road disease identification.

Description

Road disease identification method and device, electronic equipment and readable storage medium
Technical Field
The present application relates to the technical field of computer vision, and in particular to a road disease identification method and device, an electronic device, and a computer-readable storage medium.
Background
Road diseases include cracks, potholes, ruts, loosening, subsidence, surface damage, and the like. Road diseases not only affect the normal use of a road and shorten its service life, but also pose certain safety hazards. As road maintenance receives growing attention, early detection and treatment of diseases have become key to reducing pavement diseases and lowering the incidence of large-scale diseases.
In the prior art, automatic identification of road diseases and differentiation of disease grades are mainly performed on images through the image segmentation and classification techniques of computer vision.
However, in practical application, the inventor of the present application found that, because an image is single-dimension, single-modality data, images collected in special environments such as at night or on rainy days may be unrecognizable or misrecognized, so the identification accuracy of road diseases is relatively low.
Disclosure of Invention
The present application provides a road disease identification method and device, an electronic device, and a computer-readable storage medium, aiming to solve the problem that existing road disease identification methods based on single-modality image data have low identification accuracy.
In a first aspect, the present application provides a road disease identification method, the method comprising:
acquiring multi-dimensional target images of a target road, wherein the multi-dimensional target images comprise at least two of a color image, a radar data image, an infrared image, and an ultraviolet image of the target road;
acquiring a target speed parameter of an acquisition device of the multi-dimensional target images, wherein the acquisition device is a device moving on the target road;
performing feature extraction based on the multi-dimensional target images to obtain target image features of the multi-dimensional target images;
fusing the target image features with the target speed parameter to obtain a target fusion feature of the multi-dimensional target images;
and performing identification based on the target fusion feature to determine the road diseases present in the target road.
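The five-step flow of the first aspect can be sketched in miniature as follows. This is a hedged illustration only: the feature extractor and the final classifier are trivial placeholders (means and a threshold), not the neural network modules of the actual embodiment, and all numeric values are made up.

```python
# Placeholder five-step pipeline: acquire images, acquire speed, extract
# features, fuse, identify. All numeric choices are illustrative.

def extract_image_features(images):
    # One feature per image "dimension"; here just the mean pixel value.
    return [sum(img) / len(img) for img in images]

def fuse(image_features, speed):
    # Fuse by concatenating the image features with the speed parameter.
    return image_features + [speed]

def identify(fused, threshold=0.6):
    # Placeholder recognizer: flags a road disease when the mean score is high.
    score = sum(fused) / len(fused)
    return "disease" if score > threshold else "no disease"

images = [[0.9, 0.8, 0.7], [0.6, 0.5, 0.4]]  # e.g. color and radar "dimensions"
speed = 0.2                                   # speed parameter of the vehicle
result = identify(fuse(extract_image_features(images), speed))
```

In the embodiment itself, each stage is a trained neural module; the sketch only fixes the data flow between the stages.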
In a second aspect, the present application provides a road disease identification device, comprising:
an acquisition unit, configured to acquire multi-dimensional target images of a target road, wherein the multi-dimensional target images comprise at least two of a color image, a radar data image, an infrared image, and an ultraviolet image of the target road;
the acquisition unit being further configured to acquire a target speed parameter of an acquisition device of the multi-dimensional target images, wherein the acquisition device is a device moving on the target road;
a feature extraction unit, configured to perform feature extraction based on the multi-dimensional target images to obtain target image features of the multi-dimensional target images;
a feature fusion unit, configured to fuse the target image features with the target speed parameter to obtain a target fusion feature of the multi-dimensional target images;
and an identification unit, configured to perform identification based on the target fusion feature and determine the road diseases present in the target road.
In some embodiments of the present application, the feature fusion unit is specifically configured to:
acquire preset attention extraction parameters;
extract information from the target speed parameter through the attention extraction parameters to obtain a target speed feature of the target speed parameter;
and superpose the target image features and the target speed feature to obtain the target fusion feature.
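A minimal sketch of the fusion steps above, assuming the "attention extraction parameters" can be modeled as a small learned weight vector mapping the scalar speed to a feature the same size as the image feature, and "superposition" as element-wise addition. Both assumptions are illustrative, since the text does not fix the exact shapes.

```python
import math

# Illustrative attention-style mapping from a scalar speed parameter to a
# speed feature, followed by superposition with the image feature.

def speed_to_feature(speed, attention_params):
    # Sigmoid(w * speed) per learned parameter: a stand-in for the trained
    # attention module's information extraction.
    return [1.0 / (1.0 + math.exp(-w * speed)) for w in attention_params]

def superpose(image_feature, speed_feature):
    # Element-wise addition of image feature and speed feature.
    return [i + s for i, s in zip(image_feature, speed_feature)]

attention_params = [0.5, -1.0, 2.0]   # hypothetical learned parameters
image_feature = [0.2, 0.4, 0.6]       # hypothetical target image feature
target_fusion = superpose(image_feature,
                          speed_to_feature(30.0, attention_params))
```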
In some embodiments of the present application, the road disease identification device further comprises a training unit, and before the preset attention extraction parameters are acquired, the training unit is specifically configured to:
acquire a training sample, wherein the training sample comprises multi-dimensional sample images of a sample road and a sample speed parameter of the multi-dimensional sample images, and the training sample is labeled with actual road disease segmentation information of the sample road;
perform feature extraction based on the multi-dimensional sample images through a first feature extraction module in a disease identification model to be trained, to obtain sample image features of the multi-dimensional sample images;
extract information from the sample speed parameter through a second feature extraction module with an attention mechanism in the disease identification model to be trained, to obtain a sample speed feature of the sample speed parameter;
fuse the sample image features and the sample speed feature through a feature superposition module in the disease identification model to be trained, to obtain a sample fusion feature of the multi-dimensional sample images;
perform disease segmentation based on the sample fusion feature through a prediction module in the disease identification model to be trained, to obtain predicted road disease segmentation information of the sample road;
adjust model parameters of the disease identification model to be trained based on the predicted road disease segmentation information and the actual road disease segmentation information, until a trained disease identification model is obtained when a preset training stop condition is met;
and take the feature extraction parameters of the second feature extraction module in the trained disease identification model as the attention extraction parameters.
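The parameter-adjustment step above is the usual backpropagation-style gradient descent (cf. classification G06N 3/084). A toy single-weight model, standing in for the full network, illustrates the loop of predicting, comparing with the actual label, and adjusting until a stop condition; the data, loss, and fixed step count are all illustrative.

```python
# Toy one-parameter model trained by gradient descent on squared error.

def predict(w, x):
    return w * x  # stand-in for the disease identification model

def train(samples, lr=0.1, steps=100):
    w = 0.0
    for _ in range(steps):                      # preset training stop condition
        for x, y in samples:                    # (input, actual label)
            grad = 2 * (predict(w, x) - y) * x  # d/dw of (w*x - y)^2
            w -= lr * grad                      # adjust the model parameter
    return w

# Labels follow y = 2x, so training should recover w close to 2.
samples = [(1.0, 2.0), (2.0, 4.0), (0.5, 1.0)]
w_trained = train(samples)
```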
In some embodiments of the present application, the road disease identification device further comprises a classification unit; after the identification is performed based on the target fusion feature and the road diseases present in the target road are determined, the classification unit is specifically configured to:
acquire target road disease segmentation information of the road diseases;
and classify disease grades based on the target road disease segmentation information to obtain the disease severity grades of the road diseases present in the target road.
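As one way to illustrate grading severity from segmentation output, the sketch below grades by the fraction of pixels marked as defect; the grade names and thresholds are hypothetical, not values given by the text.

```python
# Grade severity by the fraction of segmented pixels marked as defect.
# Grade names and thresholds below are hypothetical.

def severity_grade(defect_mask):
    # defect_mask: 2-D 0/1 grid produced by the segmentation step.
    total = sum(len(row) for row in defect_mask)
    defect = sum(sum(row) for row in defect_mask)
    ratio = defect / total
    if ratio < 0.01:
        return "light"
    if ratio < 0.05:
        return "medium"
    return "severe"

mask = [[0, 0, 1, 1],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]   # 4 of 12 pixels marked as defect
grade = severity_grade(mask)
```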
In some embodiments of the present application, the acquisition unit is specifically configured to:
acquire radar point cloud data of the target road;
perform plane discretization on the radar point cloud data to obtain plane information of the radar point cloud data;
perform height discretization on the radar point cloud data to obtain height information of the radar point cloud data;
and determine the radar data image based on the plane information and the height information.
In some embodiments of the present application, the identification unit is specifically configured to:
detect a road area of the target road from the multi-dimensional target images;
perform road disease detection based on the target fusion feature to obtain the positions of preliminary road diseases in the multi-dimensional target images;
detect whether the position of a preliminary road disease is within the road area;
and when the position of the preliminary road disease is within the road area, determine the preliminary road disease to be a road disease present in the target road.
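The filtering logic above can be sketched as follows, with a bounding rectangle standing in for the road area; the embodiment does not specify the actual region representation, so the rectangle and the coordinates are assumptions.

```python
# Keep a preliminary detection only if its position lies inside the road area.

def in_road_area(position, road_area):
    # position: (x, y); road_area: (x_min, y_min, x_max, y_max) rectangle.
    x, y = position
    x_min, y_min, x_max, y_max = road_area
    return x_min <= x <= x_max and y_min <= y <= y_max

road_area = (100, 50, 500, 400)                    # hypothetical road region
candidates = [(120, 60), (600, 80), (450, 390)]    # preliminary defect positions
confirmed = [p for p in candidates if in_road_area(p, road_area)]
```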
In some embodiments of the present application, the road disease identification device further comprises a display unit, the display unit being specifically configured to:
display the road disease condition of the target road on a preset display platform, wherein the road disease condition comprises at least one of the road diseases present in the target road and the disease severity grades of those road diseases.
In a third aspect, the present application further provides an electronic device comprising a processor and a memory, wherein the memory stores a computer program, and the processor, when calling the computer program in the memory, executes the steps of any road disease identification method provided in the present application.
In a fourth aspect, the present application further provides a computer-readable storage medium on which a computer program is stored, the computer program being loaded by a processor to execute the steps of the road disease identification method.
The present application adds data for road disease identification in the following two ways, realizing road disease identification based on multi-dimensional, multi-modal data and thereby improving identification accuracy. First, because images of different dimensions carry different road information, identifying road diseases by combining multi-dimensional target images lets the road information of different images complement each other, avoiding the failed or erroneous identification caused by the incomplete information of single-dimension images in special environments; this can improve the accuracy of road disease identification to a certain extent. Second, because an ordinary image suffers substantial information loss in special environments, fusing the speed parameter of the acquisition device (such as a vehicle) moving on the target road with the multi-dimensional target images makes full use of the relationship between the speed parameter of the acquisition device and road diseases, which can also improve the accuracy of road disease identification to a certain extent.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application; those of ordinary skill in the art can obtain other drawings based on these drawings without creative effort.
Fig. 1 is a scene schematic diagram of a road disease identification system provided in an embodiment of the present application;
fig. 2 is a schematic flow chart of a road disease identification method provided in an embodiment of the present application;
FIG. 3 is a schematic diagram of a network structure of an image feature extraction module provided in an embodiment of the present application;
FIG. 4 is a schematic diagram of a network structure of a trained disease recognition model provided in an embodiment of the present application;
FIG. 5 is a schematic diagram of another network structure of a trained disease recognition model according to an embodiment of the present application;
FIG. 6 is a schematic diagram illustrating a training process of a disease identification model to be trained provided in an embodiment of the present application;
fig. 7 is a schematic structural diagram of an embodiment of a road disease identification device provided in an embodiment of the present application;
fig. 8 is a schematic structural diagram of an embodiment of an electronic device provided in the embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In the description of the embodiments of the present application, it should be understood that terms such as "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or the number of technical features indicated. Thus, a feature qualified by "first" or "second" may explicitly or implicitly include one or more such features. In the description of the embodiments of the present application, "a plurality" means two or more unless specifically defined otherwise.
The following description is presented to enable any person skilled in the art to make and use the application. In the following description, details are set forth for the purpose of explanation. It will be apparent to one of ordinary skill in the art that the present application may be practiced without these specific details. In other instances, well-known processes have not been described in detail so as not to obscure the description of the embodiments of the present application with unnecessary detail. Thus, the present application is not intended to be limited to the embodiments shown, but is to be accorded the widest scope consistent with the principles and features disclosed in the embodiments herein.
The execution subject of the road disease identification method in the embodiments of the present application may be the road disease identification device provided in the embodiments of the present application, or an electronic device of a different type integrated with the road disease identification device, such as a server, a physical host, or user equipment (UE). The road disease identification device may be implemented in hardware or software, and the UE may specifically be a terminal device such as a smartphone, tablet computer, notebook computer, palmtop computer, desktop computer, or personal digital assistant (PDA).
The electronic device may operate independently or as part of a device cluster.
Referring to fig. 1, fig. 1 is a scene schematic diagram of a road disease identification system provided in an embodiment of the present application. The road disease identification system may include an electronic device 100 in which a road disease identification device is integrated. For example, the electronic device may acquire multi-dimensional target images of a target road, the multi-dimensional target images including at least two of a color image, a radar data image, an infrared image, and an ultraviolet image of the target road; acquire a target speed parameter of the acquisition device of the multi-dimensional target images; perform feature extraction based on the multi-dimensional target images to obtain target image features of the multi-dimensional target images; fuse the target image features with the target speed parameter to obtain a target fusion feature of the multi-dimensional target images; and perform identification based on the target fusion feature to determine the road diseases present in the target road.
In addition, as shown in fig. 1, the road disease identification system may further include a memory 200 for storing data, such as image data and video data.
It should be noted that the scene schematic diagram of the road disease identification system shown in fig. 1 is merely an example; the road disease identification system and scene described in the embodiment of the present application are intended to more clearly illustrate the technical solution of the embodiment and do not constitute a limitation on the technical solution provided herein.
The road disease identification method provided in the embodiments of the present application is explained below with an electronic device as the execution subject; for simplicity of description, the execution subject is omitted in the following method embodiments.
Referring to fig. 2, fig. 2 is a schematic flow chart of a road disease identification method provided in the embodiment of the present application. It should be noted that, although a logical order is shown in the flow chart, in some cases the steps may be performed in a different order than shown or described here. The road disease identification method comprises the following steps 201 to 205:
201. Acquire multi-dimensional target images of a target road.
The multi-dimensional target images include at least two of a color image, a radar data image, an infrared image, and an ultraviolet image of the target road. The color image may be, for example, an RGB image.
The target road refers to a road to be checked for road diseases, for example, a certain mountain ring-road section or a certain bridge section.
The following describes how the color image, radar data image, infrared image, and ultraviolet image of the target road are acquired.
1. A color image of the target road.
(1) In practical application, a camera for capturing color images may be mounted on the acquisition device, video frames or images of the target road are captured in real time by this camera, and the electronic device establishes a network connection with the camera mounted on the acquisition device. Through this network connection, the electronic device obtains, online from the camera, the video frames or images of the target road captured by it, as the color image of the target road.
(2) The electronic device may also read, from a storage medium associated with a camera for capturing color images (including a camera integrated in the electronic device or a camera mounted on the acquisition device), the target road image captured by that camera, as the color image of the target road.
(3) A video frame or image of the target road, captured in advance by the camera of the acquisition device and stored in the electronic device, may be read as the color image of the target road.
The camera may capture images in a preset capture mode; for example, the capture height, capture direction, or capture distance may be set. The specific capture mode may be adjusted according to the camera and is not specifically limited here.
The acquisition devices mentioned above and herein are devices moving on the target road, for example, a moving vehicle.
The manner of acquiring the color image of the target road here is only an example and is not limited thereto.
2. An infrared image of the target road.
(1) In practical application, an infrared imager for forming infrared images may be mounted on the acquisition device, the infrared image of the target road is collected in real time by this infrared imager, and the electronic device establishes a network connection with the infrared imager mounted on the acquisition device. Through this network connection, the electronic device obtains, online from the infrared imager, the infrared image of the target road that it collected.
(2) The electronic device may also read, from a storage medium associated with an infrared imager for forming infrared images (including an infrared imager integrated in the electronic device or one mounted on the acquisition device), the target road image collected by that infrared imager, as the infrared image of the target road.
(3) A video frame or image of the target road, collected in advance by the infrared imager of the acquisition device and stored in the electronic device, may be read as the infrared image of the target road.
The manner of acquiring the infrared image of the target road here is only an example and is not limited thereto.
3. Ultraviolet image of the target road.
The manner of obtaining the ultraviolet image of the target road is similar to the manner of obtaining the infrared image of the target road, and specific reference may be made to the obtaining description of the infrared image of the target road, which is not described herein again.
4. Radar data image of the target road.
The point cloud collected by a lidar differs from image pixels as follows: the point cloud is continuous while an image is discrete; a point cloud reflects the shape and pose of real-world targets but lacks texture information; an image is a discretized representation of real-world targets but lacks their real size; and an image can be fed directly into a CNN (convolutional neural network), whereas a point cloud requires some preprocessing before it can serve as CNN input. To let the trained disease identification model make effective use of radar point cloud data, in the embodiment of the present application the point cloud is preprocessed through steps A1 to A4 to obtain a radar data image, which is then used for road disease identification.
For example, acquiring the radar data image of the target road may specifically include the following steps A1 to A4:
and A1, acquiring radar point cloud data of the target road.
The radar point cloud data refers to a point cloud space of a target road, and specifically refers to a mass point set of surface characteristics of the target road.
There are various ways to obtain radar point cloud data of a target road, which exemplarily includes:
(1) Real-time acquisition through a lidar.
In practical application, a lidar for collecting radar point cloud data may be mounted on the acquisition device, the radar point cloud data of the target road is collected in real time by this lidar, and the electronic device establishes a network connection with the lidar mounted on the acquisition device. Through this network connection, the electronic device obtains, online from the lidar, the radar point cloud data of the target road that the lidar collected.
(2) Direct reading from a preset database. The radar point cloud data of the target road is stored in a preset database and read directly from it. Specifically, before step 201, the radar point cloud data of the target road may first be collected in the real-time manner of (1) and stored in the preset database; in step 201, the radar point cloud data of the target road is then read directly from the preset database.
A2. Perform plane discretization based on the radar point cloud data to obtain plane information of the radar point cloud data.
The plane information refers to a bird's eye view (BEV) map obtained by discretizing the radar point cloud data of the target road in the rotation plane of the laser coordinate system (such as the xy coordinate plane) at a preset discretization resolution. The BEV map is a planar image.
The preset discretization resolution refers to the cuboid range (Δl, Δw, Δh) of point cloud space corresponding to each pixel (or each group of feature vectors) of the discretized BEV map; for example, a cuboid of point cloud space of 20 cm × 20 cm × Δh corresponds to one pixel of the discretized BEV map.
For example, in step A2, the radar point cloud data of the target road may be plane-discretized on the rotation plane (i.e., the xy coordinate plane of the laser coordinate system) at the preset discretization resolution, to obtain a BEV map as the plane information of the radar point cloud data.
For example, radar point cloud data of a target road covering a 32 m × 32 m area collected by the lidar can be converted, at a preset discretization resolution of 0.0625 m, into a BEV map of 512 × 512 pixels, thereby obtaining the plane information of the radar point cloud data.
The preset discretization resolution here is only an example; its specific value may be adjusted according to the requirements of the actual service scenario and is not limited here.
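The worked example above (a 32 m × 32 m area at a 0.0625 m resolution giving a 512 × 512 BEV map) can be reproduced with a simple occupancy-grid sketch. The per-pixel encoding (a 0/1 occupancy flag) is an assumption for illustration, since the text leaves the per-pixel feature open.

```python
# Discretize the xy plane of a point cloud into a BEV occupancy grid.
# 32 m extent / 0.0625 m per pixel = 512 pixels per side.

def bev_occupancy(points, extent_m=32.0, resolution_m=0.0625):
    size = int(extent_m / resolution_m)          # 512
    grid = [[0] * size for _ in range(size)]
    for x, y, _z in points:                      # points are (x, y, z) in metres
        col = int(x / resolution_m)
        row = int(y / resolution_m)
        if 0 <= row < size and 0 <= col < size:  # drop points outside the extent
            grid[row][col] = 1
    return grid

points = [(0.03, 0.03, 0.1), (1.0, 2.0, -0.2), (40.0, 1.0, 0.0)]  # last is out of range
grid = bev_occupancy(points)
```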
And A3, performing height discretization based on the radar point cloud data to obtain height information of the radar point cloud data.
Wherein the height information is used to indicate that the target road is located above or below the ground level with respect to the ground level. The height information refers to channel data above a ground plane, channel data below the ground plane or channel data of the height of the ground plane, which are obtained by discretizing radar point cloud data of a target road in height according to a preset discretization distance and a preset height conversion mode. The above-ground channel data are used for recording height data of the target road above the ground level, and the below-ground channel data are used for recording height data of the target road below the ground level. The ground level height channel data is used to record height data of the target road above and below the ground level simultaneously.
The preset discretization distance refers to the cuboid range (Δl, Δw, Δh) of the point cloud space that corresponds to one group of feature vectors in each discretized channel; for example, a 20 cm × 20 cm × Δh cuboid of the point cloud space corresponds to one group of feature vectors of each discretized channel.
The preset height conversion mode refers to the preset way of computing a height value within each discretized cuboid range (Δl, Δw, Δh) of the point cloud space; for example, taking the average height within a 20 cm × 20 cm × Δh cuboid as a group of feature vectors, or taking the maximum height within that cuboid as a group of feature vectors.
The following explains how the height information of the radar point cloud data is obtained, taking in turn the cases where the height information is the above-ground channel data, the below-ground channel data, and the ground-level height channel data.
1) The height information is the above-ground channel data.
At this time, in step a3, the radar point cloud data of the target road may be subjected to height discretization according to a preset discretization distance and a preset height transformation manner, so as to obtain channel data above a ground plane as height information of the radar point cloud data.
For example, according to the preset height conversion mode, the radar point cloud data within each cuboid range (Δl × Δw × Δh) of the point cloud space is converted into a group of feature vectors representing how far the target road lies above the ground plane, thereby discretizing the radar point cloud data of the target road in height and obtaining the above-ground channel data illustrated by matrix (1) below. Each value in matrix (1) represents a group of feature vectors of the channel data: a value equal to 0 means the target road is at or below ground level, and a value greater than 0 means the target road is above ground level, the value itself giving the height above the ground plane.
[Matrix (1), shown as an image in the original publication: an example above-ground channel data matrix whose entries are all ≥ 0.]
2) The height information is the below-ground channel data.
At this time, in step a3, the radar point cloud data of the target road may be subjected to height discretization according to a preset discretization distance and a preset height transformation manner, so as to obtain channel data below a ground plane as height information of the radar point cloud data.
For example, according to the preset height conversion mode, the radar point cloud data within each cuboid range (Δl × Δw × Δh) of the point cloud space is converted into a group of feature vectors representing how far the target road lies below the ground plane, thereby discretizing the radar point cloud data of the target road in height and obtaining the below-ground channel data illustrated by matrix (2) below. Each value in matrix (2) represents a group of feature vectors of the channel data: a value equal to 0 means the target road is at or above ground level, and a value less than 0 means the target road is below ground level, the value itself giving the depth below the ground plane.
[Matrix (2), shown as an image in the original publication: an example below-ground channel data matrix whose entries are all ≤ 0.]
3) The height information is the ground-level height channel data.
At this time, in step a3, the radar point cloud data of the target road may be subjected to height discretization according to a preset discretization distance and a preset height conversion mode, so as to obtain ground level height channel data, which is used as height information of the radar point cloud data.
For example, according to the preset height conversion mode, the radar point cloud data within each cuboid range (Δl × Δw × Δh) of the point cloud space is converted into a group of feature vectors representing how far the target road lies above the ground plane, or a group of feature vectors representing how far it lies below the ground plane, thereby discretizing the radar point cloud data of the target road in height and obtaining the ground-level height channel data illustrated by matrix (3) below. Each value in matrix (3) represents a group of feature vectors of the channel data: a value equal to 0 means the target road is at ground level; a value greater than 0 means the target road is above ground level, the value giving the height above the ground plane; and a value less than 0 means the target road is below ground level, the value giving the depth below the ground plane.
[Matrix (3), shown as an image in the original publication: an example ground-level height channel data matrix containing both positive and negative entries.]
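The three channel types can be sketched in one pass. This is a minimal NumPy sketch under assumptions: the per-cell mean height stands in for the "preset height conversion mode", and the cell size matches the plane resolution used earlier; both choices are illustrative.

```python
import numpy as np

def height_channels(points, extent=32.0, cell=0.0625):
    """Discretize an (N, 3) point cloud in height: per-cell mean z, split
    into above-ground (matrix (1)), below-ground (matrix (2)) and signed
    ground-level (matrix (3)) channel data."""
    size = int(extent / cell)
    total = np.zeros((size, size), dtype=np.float32)
    count = np.zeros((size, size), dtype=np.float32)
    ix = (points[:, 0] / cell).astype(int)
    iy = (points[:, 1] / cell).astype(int)
    keep = (ix >= 0) & (ix < size) & (iy >= 0) & (iy < size)
    np.add.at(total, (ix[keep], iy[keep]), points[keep, 2])
    np.add.at(count, (ix[keep], iy[keep]), 1.0)
    # Mean height per cell; empty cells stay at 0 (ground level).
    mean_h = np.divide(total, count, out=np.zeros_like(total), where=count > 0)
    above = np.clip(mean_h, 0.0, None)   # matrix (1): entries >= 0
    below = np.clip(mean_h, None, 0.0)   # matrix (2): entries <= 0
    signed = mean_h                      # matrix (3): both signs
    return above, below, signed
```

Swapping `mean` for a per-cell maximum would realize the other height conversion mode mentioned in the text.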
A4, determining the radar data image based on the plane information and the height information.
For example, the plane information of the radar point cloud data may be used as one channel and the height information as another channel, and the two may be directly spliced into multi-channel data that serves as the radar data image of the target road.
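The splicing of step A4 can be sketched in a few lines of NumPy. The two-channel layout is an illustrative assumption (the height information could itself contribute several channels), and the random maps stand in for real discretization outputs.

```python
import numpy as np

# Stand-ins for the (512, 512) plane and height maps produced by the
# discretization steps above; real inputs would come from the point cloud.
bev_plane = np.random.rand(512, 512).astype(np.float32)
bev_height = np.random.randn(512, 512).astype(np.float32)

# Splice plane information and height information into multi-channel data,
# channels-first, to serve as the radar data image of the target road.
radar_image = np.stack([bev_plane, bev_height], axis=0)
```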
In order for the subsequently trained disease recognition model to make effective use of the laser radar point cloud data, the embodiment of the application proceeds in two respects. On the one hand, plane discretization of the radar point cloud data yields its plane information, i.e., the radar point cloud data is converted into a plane image; the laser radar point cloud data is thereby converted into feature data that the trained disease recognition model can process, so that the model can effectively exploit the point cloud data when recognizing road diseases and the recognition accuracy is improved. On the other hand, because image formation suffers from occlusion, missing depth information and similar defects, road disease recognition based on ordinary images such as RGB images alone has low accuracy due to missing information; height discretization of the radar point cloud data, by contrast, effectively captures the deformation of the target road relative to the ground plane. Discretizing the radar point cloud data in both plane and height therefore captures the deformation of the target road relative to the ground plane fully and effectively while ensuring the feature data can be processed by the model, which improves the accuracy of road disease recognition.
Therefore, identifying the road diseases of the target road in combination with its radar data image can improve the recognition accuracy of road diseases.
202. And acquiring target speed parameters of the acquisition equipment of the target images with multiple dimensions.
Wherein the acquisition equipment is equipment moving on the target road.
The target speed parameter refers to speed-related information of an acquisition device of a plurality of dimensional target images when acquiring the plurality of dimensional target images. Such as the speed, acceleration, etc. of a vehicle traveling on a target road for capturing multiple dimensional target images.
Wherein the collecting device may be a device integrated with a camera for taking color images, an infrared imager, and/or an ultraviolet imager.
Furthermore, a speed sensor, such as a GNSS sensor, an IMU sensor, or the like, may be installed on the acquisition device, and when acquiring a multi-dimensional target image, information such as a speed, an acceleration, or the like of the acquisition device is acquired by the speed sensor on the acquisition device as a target speed parameter.
Because the target road has road diseases such as depressions, the acquisition device (e.g., a vehicle) may slow down or swerve when passing them, so target speed parameters such as speed and acceleration can reflect, to a certain extent, the disease condition of the target road.
203. And performing feature extraction based on the multiple dimension target images to obtain target image features of the multiple dimension target images.
The target image features refer to spatial features of a multi-dimensional target image.
For example, the image feature extraction module provided in the embodiment of the present application may perform feature extraction on a plurality of dimensional target images to obtain target image features. The weight parameter of the image feature extraction module can be obtained by learning through the first feature extraction module in the trained disease recognition model in the following step 606.
In some embodiments, the image feature extraction module exists independently; in that case, the weight parameters of the first feature extraction module in the trained disease recognition model may be extracted to form the image feature extraction module. Step 203 may then specifically include: splicing the multi-dimensional target images to form a multi-channel image; then inputting the spliced multi-channel image into the image feature extraction module, which performs operations such as convolution and pooling on it to obtain the target image features. As shown in fig. 3, the image feature extraction module may include a convolution layer and a pooling layer, so that convolution, pooling and similar operations can be performed on the multi-dimensional images, thereby implementing feature extraction on the multi-dimensional target images. Fig. 3 shows a color image (i.e., RGB image), a radar data image (i.e., BEV image), an infrared image, and an ultraviolet image of the target road being merged into a multi-channel image; the merged multi-channel image is input into the image feature extraction module and sequentially convolved and pooled by a convolution layer (e.g., "conv" in fig. 3) and a pooling layer (e.g., "pool" in fig. 3) to output the target image features.
In other embodiments, the first feature extraction module integrated in the trained disease recognition model in the embodiment of the present application may be used as an image feature extraction module. In this case, step 203 may specifically include: splicing the multiple dimensional target images to form a multi-channel image; and then inputting the spliced multi-channel image into a trained disease recognition model, and performing operations such as convolution, pooling and the like on the multi-channel image through a first feature extraction module of the trained disease recognition model to obtain the target image feature.
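The channel splicing and conv/pool extraction of step 203 can be sketched as follows. This is a minimal PyTorch sketch; the channel counts (RGB 3 + BEV 2 + infrared 1 + ultraviolet 1), the image size, and the single conv/pool pair are illustrative assumptions, not the patent's actual architecture.

```python
import torch
import torch.nn as nn

class ImageFeatureExtractor(nn.Module):
    """Minimal convolution + pooling feature extractor over the spliced
    multi-channel image, standing in for the first feature extraction module."""
    def __init__(self, in_channels=7, out_channels=32):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1)
        self.pool = nn.MaxPool2d(2)

    def forward(self, x):
        return self.pool(torch.relu(self.conv(x)))

# Splice the per-dimension target images along the channel axis.
rgb = torch.rand(1, 3, 64, 64)
bev = torch.rand(1, 2, 64, 64)          # radar data image (plane + height)
infrared = torch.rand(1, 1, 64, 64)
ultraviolet = torch.rand(1, 1, 64, 64)
multi_channel = torch.cat([rgb, bev, infrared, ultraviolet], dim=1)  # (1, 7, 64, 64)
features = ImageFeatureExtractor()(multi_channel)                    # (1, 32, 32, 32)
```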
204. And fusing the target image characteristics and the target speed parameters to obtain target fusion characteristics of the multiple dimensional target images.
The target fusion feature refers to an expression feature obtained after fusion of the target image feature and the target speed parameter.
In step 204, there are various ways to fuse the target image feature and the target speed parameter, which exemplarily includes:
1. and fusing in a simple characteristic combination mode. And directly splicing the target image characteristics and the target speed parameters to form a new characteristic vector as target fusion characteristics. For example, the target image feature and the target speed parameter may be combined in a serial or parallel manner to realize the fusion of the target image feature and the target speed parameter, so as to obtain the target fusion feature.
2. And carrying out fusion by a characteristic selection mode. And selecting the most favorable features for subsequent road disease identification to form a new feature vector as target fusion features. For example, information that is in the target speed parameter and is beneficial to road disease identification may be extracted first, and then the extracted information may be fused with the target image features. In this case, step 204 may specifically include steps 2041 to 2043, where:
2041. Preset attention extraction parameters are obtained.
In step 2041, there are various ways to obtain the preset attention-taking extraction parameter, which exemplarily include:
(1) The preset attention extraction parameters may be extracted based on the trained disease recognition model obtained in step 606 described below. Step 2041 may specifically include: acquiring the feature extraction parameters of the second feature extraction module in the trained disease recognition model as the preset attention extraction parameters.
(2) And directly calling a second feature extraction module in the trained disease recognition model to obtain the attention extraction parameters.
2042. And extracting information of the target speed parameter through the attention extraction parameter to obtain the target speed characteristic of the target speed parameter.
The target speed characteristic refers to characteristic information which is obtained after information extraction is carried out on the target speed parameter and is beneficial to subsequent road disease identification.
Illustratively, the target speed parameter may be input into a second feature extraction module of the trained disease recognition model, so as to extract information of the target speed parameter through the feature extraction parameter in the second feature extraction module, thereby obtaining a target speed feature of the target speed parameter.
2043. And superposing the target image characteristic and the target speed characteristic to obtain the target fusion characteristic.
In some embodiments, the target image feature and the target speed feature may be combined in a serial manner, so as to realize superposition of the target image feature and the target speed feature, thereby obtaining the target fusion feature.
In other embodiments, the target image feature and the target speed feature may be combined in a parallel manner, so as to superimpose the target image feature and the target speed feature, thereby obtaining the target fusion feature.
Since the attention extraction parameters are obtained by learning (as in steps 601 to 606 described below), they reflect the constraint relationship between the target speed parameter and the features favorable to road disease identification. Extracting information from the target speed parameter via the attention extraction parameters therefore yields the target speed feature; fusing the target speed feature with the target image feature then effectively extracts from the target speed parameter the features favorable to road disease identification, improving the accuracy of road disease identification.
205. And identifying based on the target fusion characteristics, and determining the road diseases existing in the target road.
For example, the road fault identification of the target road may be performed by a prediction module in the trained fault identification model in the embodiment of the present application. Step 205 may include: and inputting the target fusion characteristics into a prediction module of a trained disease recognition model, and performing road disease segmentation according to the target fusion characteristics through the prediction module of the trained disease recognition model to obtain target road disease segmentation information of a plurality of dimensional target images. The target road disease segmentation information is used for indicating road diseases existing in the target road.
Furthermore, in order to improve the accuracy of road disease identification, when the target road disease segmentation information of the multi-dimensional target images is obtained, whether each detected disease lies within the road area of the target road may be further checked, so as to determine whether the road disease really exists on the target road. If a detected disease lies outside the road area of the target road, it is not a road disease of the target road; if it lies within the road area, it is confirmed as a road disease of the target road. At this time, step 205 may further include: detecting the road area of the target road from the multi-dimensional target images; performing road disease detection based on the target fusion features to obtain the position of each preliminarily identified road disease in the multi-dimensional target images; detecting whether that position lies within the road area; when the position lies within the road area, determining the preliminarily identified road disease as a road disease of the target road; and when the position lies outside the road area, filtering out the preliminarily identified road disease.
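The road-area post-filter can be sketched as follows. This is a minimal sketch assuming the road area is given as a boolean mask and each preliminary disease detection is reduced to a (row, col) position; both representations are illustrative assumptions.

```python
import numpy as np

def filter_defects(defect_positions, road_mask):
    """Keep only preliminary disease detections whose (row, col) position
    falls inside the detected road region."""
    kept = []
    for r, c in defect_positions:
        in_bounds = 0 <= r < road_mask.shape[0] and 0 <= c < road_mask.shape[1]
        if in_bounds and road_mask[r, c]:
            kept.append((r, c))
    return kept

road_mask = np.zeros((100, 100), dtype=bool)
road_mask[20:80, 30:70] = True                 # detected road area
candidates = [(50, 50), (10, 10), (25, 65)]    # preliminary detections
print(filter_defects(candidates, road_mask))   # [(50, 50), (25, 65)]
```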
For ease of understanding, a network structure and a training mode of the trained disease recognition model are described below. As shown in fig. 4, the trained disease recognition model may include a feature extraction module, a feature superposition module, and a prediction module.
Firstly, a feature extraction module.
The feature extraction module is used for extracting features of the multi-dimensional target images to obtain the target image features. It takes the multi-dimensional target images as input, performs operations such as convolution and pooling on them to implement feature extraction, and outputs the target image features.
Further, as shown in fig. 5, the feature extraction module may further include a first feature extraction module and a second feature extraction module.
The first feature extraction module is used for extracting features of the multi-dimensional target images to obtain the target image features. It takes the multi-dimensional target images as input, performs operations such as convolution and pooling on them, and outputs the target image features.
The second feature extraction module is used for extracting information from the target speed parameter to obtain the target speed feature: it takes the target speed parameter as input and outputs the target speed feature. Further, in order to extract the effective information in the target speed parameter and fuse it with the target image features so as to improve the accuracy of road disease identification, the second feature extraction module uses a transformer model with an attention mechanism. The transformer model consists of a self-attention module built only from fully connected layers; its structure is simple and efficient, so the effective information in the target speed parameter can be efficiently extracted and fused into the target image features.
And secondly, a feature superposition module.
As shown in fig. 4, when the trained disease recognition model includes the feature extraction module, the feature superposition module, and the prediction module, the feature superposition module may be configured to splice the target image feature and the target speed parameter, for example, combine the target image feature and the target speed parameter in a serial or parallel manner, so as to realize fusion of the target image feature and the target speed parameter, and obtain a target fusion feature.
As shown in fig. 5, when the trained disease recognition model includes the first feature extraction module, the second feature extraction module, the feature superposition module, and the prediction module, the feature superposition module may be configured to splice the target image feature and the target speed feature, for example, combine the target image feature and the target speed feature in a serial or parallel manner, so as to realize fusion of the target image feature and the target speed feature, and obtain a target fusion feature.
And thirdly, a prediction module.
And the prediction module is used for outputting the road diseases existing in the target road according to the target fusion characteristics. And the prediction module takes the target fusion characteristics as input, performs segmentation processing according to the target fusion characteristics, and determines target road disease segmentation information of the target road, so as to determine the road diseases existing in the target road.
Referring to fig. 5, a training process of the trained disease recognition model is described below by taking the trained disease recognition model as an example, which includes a first feature extraction module, a second feature extraction module, a feature superposition module and a prediction module. As shown in fig. 6, the training process of the trained disease recognition model specifically includes the following steps 601 to 606:
601. training samples are obtained.
The training samples comprise a plurality of dimensional sample images of a sample road and sample speed parameters of the dimensional sample images, and the training samples are marked with actual road disease segmentation information of the sample road.
The actual road defect segmentation information refers to a pre-labeled area of a road defect existing in the sample road.
The multiple dimension sample images are similar to the multiple dimension target images, and reference may be made to the above description for details, which are not repeated here.
The sample speed parameter refers to the speed-related information of the acquisition device of the multi-dimensional sample images when acquiring those images, such as the speed, acceleration, etc. of a vehicle traveling on the sample road when capturing the multi-dimensional sample images.
Furthermore, in order to improve the target modeling capability of the disease identification model, the image data can be enhanced with regularization methods such as DropBlock erasing and Mosaic augmentation, and the data imbalance problem can be alleviated by methods such as randomly pasting minority-class samples into image background areas and using a Class-Balanced Loss. To strengthen the disease identification model's ability to capture useful signals in time series, time-series data enhancement such as noise enhancement, time-shift enhancement, and pitch enhancement is applied to the speed parameters and the like.
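The time-series enhancements named above can be sketched minimally. This NumPy sketch shows noise enhancement and time-shift enhancement on a speed sequence; the jitter scale, shift range, and example readings are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def noise_augment(seq, scale=0.05):
    """Noise enhancement: add small Gaussian jitter to a speed sequence."""
    return seq + rng.normal(0.0, scale, size=seq.shape)

def time_shift_augment(seq, max_shift=2):
    """Time-shift enhancement: circularly shift the sequence a few steps."""
    return np.roll(seq, rng.integers(-max_shift, max_shift + 1))

speeds = np.array([12.0, 12.2, 11.8, 9.5, 10.1])  # illustrative m/s readings
augmented = noise_augment(time_shift_augment(speeds))
```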
602. And performing feature extraction based on the multiple dimension sample images through a first feature extraction module in the disease identification model to be trained to obtain sample image features of the multiple dimension sample images.
Illustratively, splicing a plurality of dimensional sample images to form a multi-channel image; and then inputting the spliced multi-channel image into a disease identification model to be trained, and performing operations such as convolution, pooling and the like on the multi-channel image through a first feature extraction module of the disease identification model to be trained to obtain sample image features.
603. And extracting information of the sample speed parameter through a second characteristic extraction module with an attention mechanism in the disease identification model to be trained to obtain the sample speed characteristic of the sample speed parameter.
The sample speed characteristic refers to characteristic information which is obtained after information extraction is carried out on the sample speed parameter and is beneficial to road disease identification.
604. And fusing the sample image characteristics and the sample speed characteristics through a characteristic superposition module in the disease identification model to be trained to obtain the sample fusion characteristics of the multiple dimension sample images.
The sample fusion feature refers to an expression feature obtained by fusing a sample image feature and a sample speed feature.
605. And performing disease segmentation based on the sample fusion characteristics through a prediction module in the disease recognition model to be trained to obtain the predicted road disease segmentation information of the sample road.
The predicted road damage segmentation information is an area of a road damage present in the sample road obtained by prediction.
The implementation of steps 602 to 605 is similar to that of steps 203, 2042 to 2043, and 205; reference may be made to the foregoing description for details, which are not repeated here.
606. And adjusting model parameters of the disease recognition model to be trained based on the predicted road disease segmentation information and the actual road disease segmentation information until the trained disease recognition model is obtained when the preset training stopping condition is met.
In some embodiments, step 606, a training loss of the disease recognition model to be trained may be determined according to the predicted road disease segmentation information and the actual road disease segmentation information; and performing back propagation based on the training loss to adjust model parameters of the disease identification model to be trained, such as adjusting weight parameters of the second feature extraction module. And when the training condition is met, taking the disease recognition model to be trained after parameter adjustment as a trained disease recognition model. At this time, the step 203 may be implemented by using a first feature extraction module in a trained disease recognition model, the step 2042 may be implemented by using a second feature extraction module in the trained disease recognition model, the step 2043 may be implemented by using a feature superposition module in the trained disease recognition model, and the step 205 may be implemented by using a prediction module in the trained disease recognition model.
Wherein, the preset training stopping condition can be set according to actual requirements: for example, when the training loss is smaller than a preset value; when the training loss essentially stops changing, i.e., the difference between the training losses of adjacent iterations is smaller than a preset value; or when the number of iterations of the disease identification model to be trained reaches the maximum number of iterations.
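The parameter-adjustment loop of step 606, including the stopping conditions just listed, can be sketched as follows. This PyTorch sketch uses a trivial stand-in network and cross-entropy segmentation loss; the real model fuses image and speed features, so all sizes and the loss choice are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Stand-in segmentation head: 4 fused input channels, 2 output classes
# (road disease vs. background).
model = nn.Conv2d(4, 2, kernel_size=1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

images = torch.rand(2, 4, 32, 32)            # illustrative fused inputs
labels = torch.randint(0, 2, (2, 32, 32))    # actual road disease labels

prev_loss, max_iters, eps = float("inf"), 50, 1e-4
for step in range(max_iters):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)    # predicted vs. actual segmentation
    loss.backward()                          # back-propagate the training loss
    optimizer.step()                         # adjust the model parameters
    # Preset stopping condition: loss barely changing, or max iterations hit.
    if abs(prev_loss - loss.item()) < eps:
        break
    prev_loss = loss.item()
```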
The trained disease recognition model can fully learn the fused features of the images and the speed parameters, as well as the relationship between the pixels of the region where a road disease is located, so it can effectively recognize the road diseases in the target road. Recognizing road diseases with the trained disease recognition model has the following effects. First, the model can effectively fuse images and speed parameters for road disease recognition, which enlarges the data dimensions referenced for recognition and avoids the failure or misidentification that single-dimensional images suffer in special environments. Second, because the second feature extraction module carries an attention mechanism, it can fully learn the features favorable to road disease identification, which improves the accuracy of road disease recognition.
Further, in order to facilitate understanding of the severity of the road disease, the severity of the road disease existing in the target road may be classified. Namely, the road disease identification method can further comprise the following steps B1-B2:
and B1, acquiring target road disease segmentation information of the road disease.
Specifically, in step B1, the road damage segmentation information obtained when the road damage segmentation is performed in step 205 may be directly obtained.
And B2, classifying the disease grades based on the target road disease segmentation information to obtain the disease severity grade of the road diseases existing on the target road.
The target road disease segmentation information indicates the position of the road disease in the multi-dimensional target image. Exemplarily, a classification module can be further integrated in the trained disease recognition model, and the classification module is used for intercepting regional images of road diseases from a plurality of dimensional target images; carrying out feature extraction on the regional image based on the road disease to obtain the image features of the regional image; and classifying the disease severity grade based on the image characteristics of the area image to obtain the disease severity grade of the road disease existing in the target road.
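The region cropping in steps B1 to B2 can be sketched as follows. This is a minimal sketch in which a toy area-based rule stands in for the classification module (the real module classifies learned image features); the box coordinates and grade thresholds are illustrative assumptions.

```python
import numpy as np

def crop_defect_region(image, bbox):
    """Cut the disease's region image out of a target image, given the
    (row0, row1, col0, col1) box implied by the segmentation information."""
    r0, r1, c0, c1 = bbox
    return image[r0:r1, c0:c1]

def severity_grade(region):
    """Toy grading rule by defect area, standing in for the learned
    severity classification."""
    area = region.size
    return "severe" if area > 400 else "moderate" if area > 100 else "slight"

image = np.random.rand(256, 256)
region = crop_defect_region(image, (40, 70, 50, 75))   # 30 x 25 patch
print(severity_grade(region))                          # severe (area 750)
```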
Furthermore, in order to facilitate related road managers to view and know the road damage condition of the target road, the road damage condition of the target road can be displayed on a preset display platform. The road disease condition comprises at least one of road diseases existing on the target road and disease severity levels of the road diseases existing on the target road. The display platform can be a mobile phone, a computer, a television, a server terminal, a webpage and the like, and can be set according to the requirements of specific service scenes, and the specific representation form of the preset display platform is not limited. Through the preset display platform, relevant road management personnel can know the road disease condition in time without field visit and carry out manual verification and maintenance, and the cost of road maintenance is reduced to a certain extent.
In the embodiment of the application, data for road disease identification is enriched in the following two respects, realizing road disease identification based on multi-dimensional, multi-modal data and thereby improving recognition accuracy. First, because images of different dimensions carry different road information, identifying road diseases by combining multi-dimensional target images lets the road information of the different images complement one another; this avoids the failure or misidentification that occurs when a single-dimensional image captures incomplete information in a special environment, and thus improves recognition accuracy to a certain extent. Second, because ordinary images suffer from larger information loss in special environments, fusing the speed parameters of the acquisition device (e.g., a vehicle) moving on the target road with the multi-dimensional target images makes full use of the relationship between the device's speed parameters and road diseases, again improving recognition accuracy to a certain extent.
In addition, in order to verify the effect of the road disease identification provided by the embodiment of the present application, the scheme was tested experimentally, as follows:
Ten thousand images were used as a test set. The recall rate and accuracy rate of an existing road disease recognition model with only single-modality image data as input were compared against the recall rate and accuracy rate of the trained disease recognition model with the multi-modal data of the embodiment of the present application (including RGB images of the road, radar data images, infrared images, ultraviolet images, and the speed and acceleration of the acquisition vehicle) as input.
As shown in formula (1) below, the recall rate of a model indicates how many of the actual positive samples are predicted correctly by the model. As shown in formula (2) below, the accuracy rate of a model indicates how many of the samples that the model predicts as positive are actually positive samples.
R = TP / (TP + FN)    (1)

P = TP / (TP + FP)    (2)

In formulas (1) and (2), R represents the recall rate of the model, P represents the accuracy rate of the model, TP represents the number of actual positive samples predicted by the model as positive, FP represents the number of actual negative samples predicted by the model as positive, and FN represents the number of actual positive samples predicted by the model as negative.
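The two metrics can be computed directly from the prediction counts. A minimal sketch (the counts in the example call are illustrative, not taken from the experiments):

```python
def precision_recall(tp, fp, fn):
    """Compute accuracy rate (precision, P) and recall rate (R).

    tp: actual positives correctly predicted as positive
    fp: actual negatives wrongly predicted as positive
    fn: actual positives wrongly predicted as negative
    """
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Illustrative counts only: 90 true detections, 10 false alarms, 30 misses.
p, r = precision_recall(90, 10, 30)
```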
1. The recall rate and accuracy rate of the existing model for road disease recognition with single-modality image data as input are shown in Table 1 below.
TABLE 1
Recall rate: 87.7%
Accuracy rate: 34.6%
2. The recall rate and accuracy rate of the trained disease recognition model with the multi-modal data of the embodiment of the present application as input are shown in Table 2 below.
TABLE 2
Recall rate: 94.3%
Accuracy rate: 36.2%
These experimental data show that, when identifying road diseases, combining multi-dimensional images and speed information into multi-modal input data effectively improves the accuracy of road disease identification.
In order to better implement the road disease identification method of the embodiment of the present application, an embodiment of the present application further provides a road disease identification apparatus based on that method. As shown in fig. 7, which is a schematic structural diagram of an embodiment of the road disease identification apparatus, the road disease identification apparatus 700 includes:
an obtaining unit 701, configured to obtain multiple dimensional target images of a target road, where the multiple dimensional target images include at least two of a color image, a radar data image, an infrared image, and an ultraviolet image of the target road;
the acquiring unit 701 is further configured to acquire target speed parameters of acquiring devices of the multiple-dimensional target images, where the acquiring devices are devices that move on the target road;
a feature extraction unit 702, configured to perform feature extraction based on the multiple dimensional target images to obtain target image features of the multiple dimensional target images;
a feature fusion unit 703, configured to fuse the target image features and the target speed parameters to obtain target fusion features of the multiple dimensional target images;
and an identification unit 704, configured to perform identification based on the target fusion features and determine the road diseases existing on the target road.
In some embodiments of the present application, the feature fusion unit 703 is specifically configured to:
acquiring preset attention extraction parameters;
extracting information of the target speed parameter through the attention extraction parameter to obtain a target speed characteristic of the target speed parameter;
and superposing the target image characteristic and the target speed characteristic to obtain the target fusion characteristic.
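As an illustration of the fusion step described above, the sketch below projects the speed parameters through preset attention extraction parameters and superposes the result on the image features. The tanh activation, the vector shapes, and the linear form of the attention parameters are all assumptions for illustration; the embodiment does not specify them:

```python
import numpy as np

def fuse_features(image_feat, speed_params, attn_w, attn_b):
    """Fuse target image features with the acquisition device's speed parameters.

    image_feat:   (C,) feature vector extracted from the multi-dimensional images
    speed_params: (S,) raw speed/acceleration values of the acquisition device
    attn_w (C, S), attn_b (C,): the preset attention extraction parameters
                                (learned during training; shapes are assumptions)
    """
    # Information extraction: project the speed parameters through the
    # attention extraction parameters to obtain the target speed feature.
    speed_feat = np.tanh(attn_w @ speed_params + attn_b)
    # Superposition: element-wise addition yields the target fusion feature.
    return image_feat + speed_feat

# With zero attention parameters the speed branch contributes nothing.
fused = fuse_features(np.ones(4), np.array([2.0, 0.5]), np.zeros((4, 2)), np.zeros(4))
```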
In some embodiments of the present application, the road disease identification apparatus 700 further includes a training unit (not shown in the figure). Before the preset attention extraction parameters are obtained, the training unit is specifically configured to:
acquiring a training sample, wherein the training sample comprises a plurality of dimensional sample images of a sample road and sample speed parameters of the plurality of dimensional sample images, and the training sample is labeled with actual road disease segmentation information of the sample road;
performing feature extraction based on the multiple dimension sample images through a first feature extraction module in a disease identification model to be trained to obtain sample image features of the multiple dimension sample images;
extracting information of the sample speed parameter through a second characteristic extraction module with an attention mechanism in the disease identification model to be trained to obtain a sample speed characteristic of the sample speed parameter;
fusing the sample image features and the sample speed features through a feature superposition module in the disease recognition model to be trained to obtain sample fusion features of the multiple dimension sample images;
performing disease segmentation based on the sample fusion characteristics through a prediction module in the disease recognition model to be trained to obtain predicted road disease segmentation information of the sample road;
adjusting model parameters of the disease recognition model to be trained based on the predicted road disease segmentation information and the actual road disease segmentation information until a trained disease recognition model is obtained when a preset training stopping condition is met;
and taking the feature extraction parameters of the second feature extraction module in the trained disease recognition model as the attention extraction parameters.
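The training procedure above can be sketched as a toy gradient-descent loop. The linear/tanh stand-ins for the three modules, the scalar segmentation target, and the fixed step count as the training stop condition are all simplifying assumptions, not the embodiment's actual networks:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the model's modules; the real modules would be deep networks.
def first_feature_extraction(images):            # sample image features
    return images.mean(axis=0)

def second_feature_extraction(speed, attn_w):    # attention-based speed features
    return np.tanh(attn_w @ speed)

def prediction(fused, head_w):                   # disease segmentation logit (toy scalar)
    return fused @ head_w

attn_w = 0.1 * rng.normal(size=(4, 2))           # attention extraction parameters
head_w = 0.1 * rng.normal(size=4)                # prediction-module parameters
lr, speed, target = 0.1, np.array([1.0, 0.5]), 1.0

for step in range(300):                          # fixed steps as the stop condition
    images = rng.normal(size=(3, 4))             # multi-dimensional sample images
    fused = first_feature_extraction(images) + second_feature_extraction(speed, attn_w)
    err = prediction(fused, head_w) - target     # predicted vs. actual segmentation
    # Adjust model parameters of both modules by gradient descent on 0.5*err**2.
    grad_pre = err * head_w * (1.0 - np.tanh(attn_w @ speed) ** 2)
    head_w -= lr * err * fused
    attn_w -= lr * np.outer(grad_pre, speed)

# Kept afterwards as the "preset attention extraction parameters".
attention_extraction_parameters = attn_w
```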
In some embodiments of the present application, the road disease identification apparatus 700 further includes a classification unit (not shown in the figure). After the identification based on the target fusion features determines the road diseases existing on the target road, the classification unit is specifically configured to:
acquiring target road disease segmentation information of the road disease;
and classifying the disease grades based on the target road disease segmentation information to obtain the disease severity grade of the road diseases existing on the target road.
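One plausible way to grade severity from the segmentation information is by the disease's area ratio; the thresholds and level names below are purely illustrative assumptions, as the embodiment does not fix a grading rule:

```python
import numpy as np

def disease_severity_level(segmentation_mask, thresholds=(0.01, 0.05)):
    """Grade a road disease from its segmentation information.

    segmentation_mask: 2-D 0/1 array marking disease pixels.
    thresholds: hypothetical area-ratio cut-offs between severity levels.
    """
    area_ratio = segmentation_mask.sum() / segmentation_mask.size
    if area_ratio < thresholds[0]:
        return "minor"
    if area_ratio < thresholds[1]:
        return "moderate"
    return "severe"

# Example: a 20x10-pixel disease region in a 100x100 image (area ratio 0.02).
mask = np.zeros((100, 100))
mask[40:60, 40:50] = 1
level = disease_severity_level(mask)
```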
In some embodiments of the present application, the obtaining unit 701 is specifically configured to:
acquiring radar point cloud data of the target road;
performing plane discretization on the radar point cloud data to obtain plane information of the radar point cloud data;
performing height discretization on the radar point cloud data to obtain height information of the radar point cloud data;
determining the radar data image based on the plane information and the altitude information.
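The plane and height discretization steps can be sketched as a simple rasterization. The cell size, grid dimensions, and the choice of maximum height per cell are assumptions for illustration:

```python
import numpy as np

def point_cloud_to_image(points, cell=0.1, grid=(64, 64)):
    """Rasterize radar points into a 2-D 'radar data image'.

    points: (N, 3) array of (x, y, z) radar returns for the target road.
    Plane discretization:  (x, y) -> integer grid cell of side `cell` metres.
    Height discretization: each cell keeps the maximum z observed in it.
    """
    img = np.zeros(grid, dtype=np.float32)
    ix = np.clip((points[:, 0] / cell).astype(int), 0, grid[0] - 1)
    iy = np.clip((points[:, 1] / cell).astype(int), 0, grid[1] - 1)
    for r, c, z in zip(ix, iy, points[:, 2]):
        img[r, c] = max(img[r, c], z)      # cells with no returns stay at 0
    return img
```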
In some embodiments of the present application, the identifying unit 704 is specifically configured to:
detecting a road area of the target road from the plurality of dimensional target images;
performing road disease detection based on the target fusion features to obtain the positions of preliminarily identified road diseases in the multiple dimensional target images;
detecting whether the position of a preliminarily identified road disease is within the road area;
and when the position of a preliminarily identified road disease is within the road area, determining the preliminarily identified road disease as a road disease existing on the target road.
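The road-area check can be sketched as follows. Representing preliminary detections as pixel boxes, the road area as a boolean mask, and using the box centre as "the position" are all assumptions for illustration:

```python
import numpy as np

def filter_road_diseases(candidate_boxes, road_mask):
    """Keep preliminarily detected diseases whose centre lies in the road area.

    candidate_boxes: list of (x0, y0, x1, y1) pixel boxes from the detector
    road_mask: 2-D boolean array, True where the image shows road surface
    """
    confirmed = []
    for x0, y0, x1, y1 in candidate_boxes:
        cx, cy = (x0 + x1) // 2, (y0 + y1) // 2   # centre of the candidate box
        if road_mask[cy, cx]:                     # row index = y, column = x
            confirmed.append((x0, y0, x1, y1))
    return confirmed
```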
In some embodiments of the present application, the road damage identifying device 700 further includes a display unit (not shown in the figure), where the display unit is specifically configured to:
displaying a road disease condition of the target road on a preset display platform, wherein the road disease condition comprises at least one of a road disease existing in the target road and a disease severity level of the road disease existing in the target road.
In a specific implementation, the above units may be implemented as independent entities, or may be combined arbitrarily to be implemented as the same or several entities, and the specific implementation of the above units may refer to the foregoing method embodiments, which are not described herein again.
Since the road fault identification device can execute the steps in the road fault identification method in any embodiment corresponding to fig. 1 to 6, the beneficial effects that can be realized by the road fault identification method in any embodiment corresponding to fig. 1 to 6 can be realized, and the detailed description is omitted here.
In addition, in order to better implement the road disease identification method of the embodiment of the present application, an embodiment of the present application further provides an electronic device based on that method. Referring to fig. 8, which shows a schematic structural diagram of the electronic device in the embodiment of the present application, the electronic device includes a processor 801. The processor 801, when executing a computer program stored in a memory 802, implements the steps of the road disease identification method in any embodiment corresponding to fig. 1 to 6; alternatively, the processor 801, when executing the computer program stored in the memory 802, implements the functions of the units in the embodiment corresponding to fig. 7.
Illustratively, a computer program may be partitioned into one or more modules/units, which are stored in the memory 802 and executed by the processor 801 to implement the embodiments of the present application. One or more modules/units may be a series of computer program instruction segments capable of performing certain functions, the instruction segments being used to describe the execution of a computer program in a computer device.
The electronic device may include, but is not limited to, the processor 801 and the memory 802. Those skilled in the art will appreciate that the illustration is merely an example of an electronic device and does not constitute a limitation; the electronic device may include more or fewer components than those illustrated, combine some components, or have different components. For example, the electronic device may further include an input/output device, a network access device, and a bus, with the processor 801, the memory 802, the input/output device, the network access device, and the like connected via the bus.
The processor 801 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The processor is the control center of the electronic device and uses various interfaces and lines to connect the parts of the overall electronic device.
The memory 802 may be used to store the computer program and/or modules, and the processor 801 implements the various functions of the computer device by running or executing the computer program and/or modules stored in the memory 802 and invoking the data stored in the memory 802. The memory 802 may mainly include a program storage area and a data storage area, where the program storage area may store the operating system and the application programs required by at least one function (such as a sound playing function and an image playing function), and the data storage area may store data created through use of the electronic device (such as audio data and video data). In addition, the memory may include a high-speed random access memory, and may also include a non-volatile memory, such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash memory card, at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the description of the road fault identification device, the electronic device and the corresponding units thereof may refer to the description of the road fault identification method in any embodiment corresponding to fig. 1 to 6, and details are not repeated herein.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions or by associated hardware controlled by the instructions, which may be stored in a computer readable storage medium and loaded and executed by a processor.
For this reason, an embodiment of the present application provides a computer-readable storage medium, where a plurality of instructions are stored, where the instructions can be loaded by a processor to execute steps in the method for identifying a road fault in any embodiment corresponding to fig. 1 to 6 in the present application, and specific operations may refer to descriptions of the method for identifying a road fault in any embodiment corresponding to fig. 1 to 6, which are not described herein again.
Wherein the computer-readable storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
Since the instructions stored in the computer-readable storage medium can execute the steps in the method for identifying a road fault in any embodiment corresponding to fig. 1 to 6 in the present application, the beneficial effects that can be achieved by the method for identifying a road fault in any embodiment corresponding to fig. 1 to 6 in the present application can be achieved, for details, see the foregoing description, and are not repeated herein.
The method, the device, the electronic device and the computer-readable storage medium for identifying a road disease provided by the embodiment of the application are introduced in detail, a specific example is applied in the description to explain the principle and the implementation of the application, and the description of the embodiment is only used for helping to understand the method and the core idea of the application; meanwhile, for those skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (10)

1. A road disease identification method is characterized by comprising the following steps:
acquiring a plurality of dimensional target images of a target road, wherein the plurality of dimensional target images comprise at least two of a color image, a radar data image, an infrared image and an ultraviolet image of the target road;
acquiring target speed parameters of acquisition equipment of the multi-dimensional target images, wherein the acquisition equipment is equipment moving on the target road;
performing feature extraction based on the multiple dimension target images to obtain target image features of the multiple dimension target images;
fusing the target image features and the target speed parameters to obtain target fusion features of the multiple dimensional target images;
and identifying based on the target fusion characteristics, and determining the road diseases existing in the target road.
2. The method for identifying the road disease according to claim 1, wherein the fusing the target image features and the target speed parameters to obtain target fusion features of the target images with multiple dimensions comprises:
acquiring preset attention extraction parameters;
extracting information of the target speed parameter through the attention extraction parameter to obtain a target speed characteristic of the target speed parameter;
and superposing the target image characteristic and the target speed characteristic to obtain the target fusion characteristic.
3. The method for identifying the road disease according to claim 2, wherein before the obtaining of the preset attention extraction parameters, the method further comprises:
acquiring a training sample, wherein the training sample comprises a plurality of dimensional sample images of a sample road and sample speed parameters of the plurality of dimensional sample images, and the training sample is labeled with actual road disease segmentation information of the sample road;
performing feature extraction based on the multiple dimension sample images through a first feature extraction module in a disease identification model to be trained to obtain sample image features of the multiple dimension sample images;
extracting information of the sample speed parameter through a second characteristic extraction module with an attention mechanism in the disease identification model to be trained to obtain a sample speed characteristic of the sample speed parameter;
fusing the sample image features and the sample speed features through a feature superposition module in the disease recognition model to be trained to obtain sample fusion features of the multiple dimension sample images;
performing disease segmentation based on the sample fusion characteristics through a prediction module in the disease recognition model to be trained to obtain predicted road disease segmentation information of the sample road;
adjusting model parameters of the disease recognition model to be trained based on the predicted road disease segmentation information and the actual road disease segmentation information until a trained disease recognition model is obtained when a preset training stopping condition is met;
and taking the feature extraction parameters of the second feature extraction module in the trained disease recognition model as the attention extraction parameters.
4. The method for identifying road diseases according to claim 1, wherein after identifying based on the target fusion features and determining the road diseases existing in the target road, the method further comprises:
acquiring target road disease segmentation information of the road disease;
and classifying the disease grades based on the target road disease segmentation information to obtain the disease severity grade of the road diseases existing on the target road.
5. The method for identifying road diseases according to claim 1, characterized in that the radar data image is obtained by:
acquiring radar point cloud data of the target road;
performing plane discretization on the radar point cloud data to obtain plane information of the radar point cloud data;
performing height discretization on the radar point cloud data to obtain height information of the radar point cloud data;
determining the radar data image based on the plane information and the altitude information.
6. The method for identifying the road disease according to claim 1, wherein the identifying based on the target fusion feature and determining the road disease existing in the target road comprises:
detecting a road area of the target road from the plurality of dimensional target images;
performing road disease detection based on the target fusion features to obtain the positions of preliminarily identified road diseases in the multiple dimensional target images;
detecting whether the position of a preliminarily identified road disease is within the road area;
and when the position of a preliminarily identified road disease is within the road area, determining the preliminarily identified road disease as a road disease existing on the target road.
7. The method for identifying a road disease according to any one of claims 1 to 6, further comprising:
displaying a road disease condition of the target road on a preset display platform, wherein the road disease condition comprises at least one of a road disease existing in the target road and a disease severity level of the road disease existing in the target road.
8. A road disease recognition device, characterized in that, the road disease recognition device includes:
the system comprises an acquisition unit, a processing unit and a display unit, wherein the acquisition unit is used for acquiring a plurality of dimensional target images of a target road, and the plurality of dimensional target images comprise at least two of a color image, a radar data image, an infrared image and an ultraviolet image of the target road;
the acquisition unit is further used for acquiring target speed parameters of acquisition equipment of the multi-dimensional target images, wherein the acquisition equipment is equipment moving on the target road;
the feature extraction unit is used for performing feature extraction on the basis of the multiple dimension target images to obtain target image features of the multiple dimension target images;
the feature fusion unit is used for fusing the target image features and the target speed parameters to obtain target fusion features of the multiple dimensional target images;
and the identification unit is used for identifying based on the target fusion characteristics and determining the road diseases existing in the target road.
9. An electronic device, comprising a processor and a memory, wherein the memory stores a computer program, and the processor executes the road disease identification method according to any one of claims 1 to 7 when calling the computer program in the memory.
10. A computer-readable storage medium, having stored thereon a computer program which is loaded by a processor to carry out the steps of the method of identifying a road fault according to any one of claims 1 to 7.
CN202111076677.XA 2021-09-14 2021-09-14 Road disease identification method and device, electronic equipment and readable storage medium Pending CN113808098A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111076677.XA CN113808098A (en) 2021-09-14 2021-09-14 Road disease identification method and device, electronic equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN113808098A true CN113808098A (en) 2021-12-17

Family

ID=78895341

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111076677.XA Pending CN113808098A (en) 2021-09-14 2021-09-14 Road disease identification method and device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN113808098A (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
ITMI962154A1 (en) * 1996-10-17 1998-04-17 Sgs Thomson Microelectronics METHOD FOR THE IDENTIFICATION OF SIGN STRIPES OF ROAD LANES
EP0837378A2 (en) * 1996-10-17 1998-04-22 STMicroelectronics S.r.l. Method for identifying marking stripes of road lanes
CN108898085A (en) * 2018-06-20 2018-11-27 安徽大学 A kind of road disease intelligent detecting method based on mobile video
CN109685124A (en) * 2018-12-14 2019-04-26 斑马网络技术有限公司 Road disease recognition methods neural network based and device
CN109632822A (en) * 2018-12-25 2019-04-16 东南大学 A kind of quasi-static high-precision road surface breakage intelligent identification device and its method
CN110189317A (en) * 2019-05-30 2019-08-30 上海卡罗网络科技有限公司 A kind of road image intelligent acquisition and recognition methods based on deep learning
CN110910354A (en) * 2019-11-07 2020-03-24 安徽乐道信息科技有限公司 Road detection vehicle and road detection method and device
CN111080620A (en) * 2019-12-13 2020-04-28 中远海运科技股份有限公司 Road disease detection method based on deep learning
CN111767874A (en) * 2020-07-06 2020-10-13 中兴飞流信息科技有限公司 Pavement disease detection method based on deep learning
CN113066086A (en) * 2021-04-26 2021-07-02 深圳市商汤科技有限公司 Road disease detection method and device, electronic equipment and storage medium

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114895302A (en) * 2022-04-06 2022-08-12 广州易探科技有限公司 Method and device for rapidly detecting roadbed diseases of urban roads
CN115331190A (en) * 2022-09-30 2022-11-11 北京闪马智建科技有限公司 Road hidden danger identification method and device based on radar fusion
CN115331190B (en) * 2022-09-30 2022-12-09 北京闪马智建科技有限公司 Road hidden danger identification method and device based on radar vision fusion
CN115830032A (en) * 2023-02-13 2023-03-21 杭州闪马智擎科技有限公司 Road expansion joint lesion identification method and device based on old facilities
CN116630716A (en) * 2023-06-06 2023-08-22 云途信息科技(杭州)有限公司 Road greening damage identification method, device, computer equipment and storage medium
CN116630716B (en) * 2023-06-06 2024-05-24 云途信息科技(杭州)有限公司 Road greening damage identification method, device, computer equipment and storage medium
CN116881782A (en) * 2023-06-21 2023-10-13 清华大学 Pavement defect identification method, device, computer equipment and storage medium
CN117409328A (en) * 2023-12-14 2024-01-16 城云科技(中国)有限公司 Causal-free target detection model, causal-free target detection method and causal-free target detection application for road disease detection
CN117409328B (en) * 2023-12-14 2024-02-27 城云科技(中国)有限公司 Causal-free target detection model, causal-free target detection method and causal-free target detection application for road disease detection
CN117745537A (en) * 2024-02-21 2024-03-22 微牌科技(浙江)有限公司 Tunnel equipment temperature detection method, device, computer equipment and storage medium
CN117745537B (en) * 2024-02-21 2024-05-17 微牌科技(浙江)有限公司 Tunnel equipment temperature detection method, device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination