CN112966730A - Vehicle damage identification method, device, equipment and storage medium - Google Patents


Info

Publication number
CN112966730A
Authority
CN
China
Prior art keywords
damage
vehicle
picture
candidate regions
prediction
Prior art date
Legal status
Pending
Application number
CN202110226715.9A
Other languages
Chinese (zh)
Inventor
张发恩
郭慧娟
Current Assignee
Innovation Wisdom Shanghai Technology Co ltd
AInnovation Shanghai Technology Co Ltd
Original Assignee
Innovation Wisdom Shanghai Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Innovation Wisdom Shanghai Technology Co Ltd
Priority to CN202110226715.9A
Publication of CN112966730A
Legal status: Pending

Classifications

    • G06Q40/08 Insurance
    • G06F18/24 Pattern recognition; classification techniques
    • G06N3/045 Neural networks; combinations of networks
    • G06N3/048 Neural networks; activation functions
    • G06V10/267 Image segmentation by performing operations on regions
    • G06V10/44 Local feature extraction, e.g. edges, contours, corners
    • G06V10/56 Extraction of colour features
    • G06V2201/08 Detecting or categorising vehicles


Abstract

The application provides a vehicle damage identification method, device, equipment and storage medium. The vehicle damage identification method comprises the following steps: acquiring a damage picture of a vehicle to be monitored; extracting image features of the damage picture according to a neural network; determining a plurality of damage candidate regions in the damage picture according to the image features; and processing the damage picture with the damage candidate regions according to an instance segmentation model, so that the instance segmentation model outputs a damage detection result of the vehicle to be monitored, wherein the damage detection result comprises information of at least one damage, and the information of each damage comprises its category and position. The application can improve the accuracy of vehicle damage identification.

Description

Vehicle damage identification method, device, equipment and storage medium
Technical Field
The application relates to the field of computer vision, and in particular to a vehicle damage identification method, device, equipment and storage medium.
Background
Owing to the rapid development of deep learning, the commercial value of computer vision is gradually being realized in fields such as security, the internet, and industrial manufacturing. Artificial intelligence algorithms can likewise be transferred, adapted, and applied to assist vehicle damage assessment: AI can help improve the efficiency and accuracy of determining vehicle damage. In the current vehicle damage assessment process, the damage condition is identified and judged from pictures of the damage taken on site by the user, which can improve the user experience and reduce the insurance company's costs.
The biggest difficulty of existing intelligent damage assessment is that vehicle damage identification demands high precision: the damage position must be located accurately and the damage category judged correctly, whereas the precision of existing damage assessment schemes is not high.
Disclosure of Invention
The embodiments of the application aim to provide a vehicle damage identification method, device, equipment and storage medium for improving the accuracy of vehicle damage identification.
To this end, a first aspect of the present application discloses a vehicle damage identification method, the method comprising:
acquiring a damage picture of a vehicle to be monitored;
extracting image characteristics of the damage picture of the vehicle to be monitored according to a neural network;
determining a plurality of damage candidate regions in the damage picture according to the image characteristics;
processing the damage picture with the damage candidate regions according to an instance segmentation model, so that the instance segmentation model outputs a damage detection result of the vehicle to be monitored, wherein the damage detection result comprises information of at least one damage, and the information of the damage comprises the category and the position information of the damage.
In the first aspect of the application, by obtaining a damage picture of a vehicle to be monitored, image features of the damage picture can be extracted according to a neural network, a plurality of damage candidate regions in the damage picture can be determined according to the image features, and the damage picture with the plurality of damage candidate regions can be processed according to an instance segmentation model, so that the instance segmentation model outputs a damage detection result of the vehicle to be monitored, the damage detection result comprising information of at least one damage, and the damage information comprising the category and position information of the damage. Compared with the prior art, the embodiment of the application processes the damage picture with an instance segmentation model, so that damage can be located down to individual pixels of the image, further improving the accuracy of identifying and locating vehicle damage.
In the first aspect of the present application, as an optional implementation manner, the instance segmentation model includes a classification branch network, a bounding box regression branch network, and a mask prediction branch network;
and the processing of the damage picture with the plurality of damage candidate regions according to the instance segmentation model comprises the following steps:
classifying the plurality of damage candidate regions in the damage picture according to the classification branch network to obtain a first prediction result;
performing bounding box regression on the plurality of damage candidate regions in the damage picture according to the bounding box regression branch network to obtain a second prediction result;
performing mask prediction on the plurality of damage candidate regions in the damage picture according to the mask prediction branch network to obtain a third prediction result;
and outputting the damage detection result of the vehicle to be monitored according to the first, second and third prediction results.
In this optional embodiment, the plurality of damage candidate regions in the damage picture are classified by the classification branch network to obtain a first prediction result; bounding box regression is performed on the candidate regions by the bounding box regression branch network to obtain a second prediction result; and mask prediction is performed on the candidate regions by the mask prediction branch network to obtain a third prediction result, so that the damage detection result of the vehicle to be monitored is output according to the three prediction results.
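The combination of the three branch outputs into one detection result can be sketched as follows (a minimal illustration; the record field names are assumptions, not from the patent):

```python
def merge_predictions(first, second, third):
    """Combine per-region outputs of the three branches into damage records:
    first  - predicted damage category per candidate region,
    second - regressed box (x1, y1, x2, y2) per region,
    third  - binary mask per region."""
    return [{"category": c, "box": b, "mask": m}
            for c, b, m in zip(first, second, third)]

# One candidate region: a scratch with its box and a tiny 2x2 mask.
result = merge_predictions(["scratch"], [(10, 20, 80, 90)],
                           [[[0, 1], [1, 1]]])
assert result[0]["category"] == "scratch"
```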
In the first aspect of the present application, as an optional implementation manner, the mask prediction branch network includes a plurality of depth-separable convolutional networks and a deconvolution network;
and performing mask prediction on the plurality of damage candidate regions in the damage picture according to the mask prediction branch network to obtain a third prediction result comprises:
taking the plurality of damage candidate regions in the damage picture as an input of the deconvolution network, so that the deconvolution network outputs shallow features of the plurality of damage candidate regions in the damage picture;
processing the plurality of damage candidate regions in the damage picture according to the plurality of depth-separable convolutional networks to output deep features of the plurality of damage candidate regions in the damage picture;
and obtaining the third prediction result according to the shallow feature and the deep feature.
In this optional embodiment, the plurality of damage candidate regions in the damage picture are used as the input of the deconvolution network, which upsamples them to extract shallow features; deep features are extracted by the plurality of depth-separable convolutional networks. The shallow and deep features can then be fused, and identification and positioning performed on the fused output. Because the shallow features do not pass through many successive convolution layers, the transmission of spatial position information in the image is enhanced, which improves the accuracy of identifying and locating vehicle damage, especially small damage to the vehicle.
In the first aspect of the present application, as an optional implementation manner, the classification branch network includes a sigmoid function;
and classifying the plurality of damage candidate regions in the damage picture according to the classification branch network to obtain a first prediction result comprises:
classifying the plurality of damage candidate regions in the damage picture according to the sigmoid function to obtain the first prediction result.
In this optional embodiment, using the sigmoid function in the classification branch network avoids the inter-class competition caused by the softmax classification function: each damage class is predicted independently, which decouples the predictions of the damage classes and further improves the accuracy of damage identification and positioning.
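The decoupling described above can be shown with a small numeric sketch (the logits below are hypothetical, not from the patent): softmax forces the class probabilities to compete for a fixed budget of 1, while independent sigmoids score each damage class on its own, so two co-occurring damage types can both score high.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical logits for three damage classes of one candidate region.
logits = np.array([2.0, 1.5, -1.0])

# Softmax couples the classes: raising one probability lowers the others.
p_soft = softmax(logits)
assert abs(p_soft.sum() - 1.0) < 1e-9

# Independent sigmoids decouple them: each class gets its own score,
# and classes 0 and 1 can both be confidently predicted at once.
p_sig = sigmoid(logits)
assert p_sig[0] > 0.5 and p_sig[1] > 0.5
```

With sigmoids, for example, both "scratch" and "crack" can exceed a 0.5 threshold on the same region, which softmax's normalization would discourage.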
In the first aspect of the present application, as an optional implementation manner, the classification branch network further includes a loss function, calculated as:

L_cls = L + L_0-1

where L represents the cross-entropy loss function, L_0-1 represents the 0-1 loss function (its exact formula is given as an image in the original publication and is not reproduced here), σ(α_i) is the sigmoid function, and N represents the number of classes.
In the first aspect of the present application, as an optional implementation manner, the damage category is one of scratch, corner deformation, non-corner deformation, dead fold, crack, fracture, displacement, partial deletion, complete deletion, lamp damage, glass damage, and severe damage.
In this optional embodiment, the damage categories are scratch, corner deformation, non-corner deformation, dead fold, crack, fracture, displacement, partial deletion, complete deletion, lamp damage, glass damage, and severe damage; compared with the prior art, this optional embodiment can thus identify and locate more types of damage, further improving the precision of damage identification and positioning.
A second aspect of the present application discloses a vehicle damage identifying device, the device including:
the acquisition module is used for acquiring a damage picture of the vehicle to be monitored;
the extraction module is used for extracting the image characteristics of the damage picture of the vehicle to be monitored according to the neural network;
the determining module is used for determining a plurality of damage candidate regions in the damage picture according to the image characteristics;
the identification module is configured to process the damage picture with the plurality of damage candidate regions according to an instance segmentation model, so that the instance segmentation model outputs a damage detection result of the vehicle to be monitored, the damage detection result includes information of at least one damage, and the information of the damage includes category and position information of the damage.
By executing the vehicle damage identification method, the device of the second aspect of the application can obtain a damage picture of the vehicle to be monitored, extract image features of the damage picture according to the neural network, determine a plurality of damage candidate regions in the damage picture according to the image features, and process the damage picture with the plurality of damage candidate regions according to the instance segmentation model, so that the instance segmentation model outputs a damage detection result of the vehicle to be monitored, the damage detection result comprising information of at least one damage, and the damage information comprising the category and position information of the damage. Compared with the prior art, the embodiment of the application processes the damage picture with an instance segmentation model, so that damage can be located down to individual pixels of the image, further improving the accuracy of identifying and locating vehicle damage.
In the second aspect of the present application, as an optional implementation manner, the instance segmentation model includes a classification branch network, a bounding box regression branch network, and a mask prediction branch network;
and, the determining module comprises:
the classification submodule is used for classifying the plurality of damage candidate regions in the damage picture according to the classification branch network to obtain a first prediction result;
the bounding box regression submodule is used for performing bounding box regression on the plurality of damage candidate regions in the damage picture according to the bounding box regression branch network to obtain a second prediction result;
the prediction submodule is used for performing mask prediction on the plurality of damage candidate regions in the damage picture according to the mask prediction branch network to obtain a third prediction result;
and the output module is used for outputting the damage detection result of the vehicle to be monitored according to the first prediction result, the second prediction result and the third prediction result.
In this device, the plurality of damage candidate regions in the damage picture are classified by the classification branch network to obtain a first prediction result; bounding box regression is performed on the candidate regions by the bounding box regression branch network to obtain a second prediction result; and mask prediction is performed on the candidate regions by the mask prediction branch network to obtain a third prediction result, so that the damage detection result of the vehicle to be monitored is output according to the three prediction results.
A third aspect of the present application discloses a vehicle damage identifying apparatus, the apparatus including:
a processor; and
a memory configured to store machine-readable instructions which, when executed by the processor, cause the processor to perform the vehicle damage identification method of the first aspect of the present application.
By executing the vehicle damage identification method, the device of the third aspect of the present application can obtain a damage picture of the vehicle to be monitored, extract image features of the damage picture according to the neural network, determine a plurality of damage candidate regions in the damage picture according to the image features, and process the damage picture with the plurality of damage candidate regions according to the instance segmentation model, so that the instance segmentation model outputs a damage detection result of the vehicle to be monitored, the damage detection result comprising information of at least one damage, and the damage information comprising the category and position information of the damage. Compared with the prior art, the embodiment of the application processes the damage picture with an instance segmentation model, so that damage can be located down to individual pixels of the image, further improving the accuracy of identifying and locating vehicle damage.
A fourth aspect of the present application discloses a storage medium storing a computer program executed by a processor to perform the vehicle damage identification method of the first aspect of the present application.
By executing the vehicle damage identification method, the storage medium of the fourth aspect of the present application enables obtaining a damage picture of a vehicle to be monitored, extracting image features of the damage picture according to the neural network, determining a plurality of damage candidate regions in the damage picture according to the image features, and processing the damage picture with the plurality of damage candidate regions according to the instance segmentation model, so that the instance segmentation model outputs a damage detection result of the vehicle to be monitored, the damage detection result comprising information of at least one damage, and the damage information comprising the category and position information of the damage. Compared with the prior art, the embodiment of the application processes the damage picture with an instance segmentation model, so that damage can be located down to individual pixels of the image, further improving the accuracy of identifying and locating vehicle damage.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be considered as limiting its scope; those skilled in the art can also obtain other related drawings from these drawings without inventive effort.
FIG. 1 is a schematic flow chart of a vehicle damage identification method disclosed in an embodiment of the present application;
FIG. 2 is a schematic diagram of a mask prediction branch network in the prior art;
FIG. 3 is a schematic structural diagram of a mask prediction branch network disclosed in an embodiment of the present application;
FIG. 4 is a schematic structural diagram of a vehicle damage identification device disclosed in an embodiment of the present application;
FIG. 5 is a schematic structural diagram of a vehicle damage identification apparatus disclosed in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
Example one
Referring to fig. 1, fig. 1 is a schematic flow chart of a vehicle damage identification method according to an embodiment of the present application. As shown in fig. 1, the method of the embodiment of the present application includes the steps of:
101. acquiring a damage picture of a vehicle to be monitored;
102. extracting image characteristics of a damage picture of a vehicle to be monitored according to a neural network;
103. determining a plurality of damage candidate regions in a damage picture according to the image characteristics;
104. processing the damage picture with the plurality of damage candidate regions according to an instance segmentation model so that the instance segmentation model outputs a damage detection result of the vehicle to be monitored, wherein the damage detection result comprises at least one piece of damage information, and the damage information comprises the category and position information of the damage.
In the embodiment of the application, a damage picture of the vehicle to be monitored is obtained, image features of the damage picture are extracted according to the neural network, a plurality of damage candidate regions in the damage picture are determined according to the image features, and the damage picture with the plurality of damage candidate regions is processed according to the instance segmentation model, so that the instance segmentation model outputs a damage detection result of the vehicle to be monitored, the damage detection result comprising at least one piece of damage information, and the damage information comprising the category and position information of the damage. Compared with the prior art, the embodiment of the application processes the damage picture with an instance segmentation model, so that damage can be located down to individual pixels of the image, further improving the accuracy of identifying and locating vehicle damage.
In the embodiment of the present application, as an example, the image features may be one or a combination of color, texture, shape, and other features. A candidate region, on the other hand, is a region that has not yet been confirmed to be a damaged region.
In the embodiment of the application, nine candidate boxes with different sizes and aspect ratios are taken for each pixel point on the feature map, and the region enclosed by each candidate box is a damage candidate region.
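The nine candidate boxes per pixel can be sketched as anchor generation in the style of a region-proposal network (the concrete scales and aspect ratios below are illustrative assumptions, not values from the patent):

```python
import numpy as np

def make_anchors(cx, cy, scales=(32, 64, 128), ratios=(0.5, 1.0, 2.0)):
    """Return the 9 candidate boxes (x1, y1, x2, y2) centred on pixel (cx, cy):
    3 scales x 3 aspect ratios, as in a typical region-proposal network."""
    boxes = []
    for s in scales:
        for r in ratios:
            w = s * np.sqrt(r)   # width grows with the aspect ratio
            h = s / np.sqrt(r)   # height shrinks correspondingly, preserving area
            boxes.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return np.array(boxes)

anchors = make_anchors(100, 100)
assert anchors.shape == (9, 4)                # 9 candidate boxes per pixel
assert np.all(anchors[:, 2] > anchors[:, 0])  # every box has positive width
```

Each anchor at a given scale keeps the same area (w * h = s * s), so the three ratios trade width against height rather than changing box size.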
In the embodiment of the present application, the instance segmentation model is Mask R-CNN.
In the embodiment of the present application, as an optional implementation manner, the instance segmentation model includes a classification branch network, a bounding box regression branch network, and a mask prediction branch network;
and the step of processing the damage picture with the plurality of damage candidate regions according to the instance segmentation model includes:
classifying the plurality of damage candidate regions in the damage picture according to the classification branch network to obtain a first prediction result;
performing bounding box regression on the plurality of damage candidate regions in the damage picture according to the bounding box regression branch network to obtain a second prediction result;
performing mask prediction on the plurality of damage candidate regions in the damage picture according to the mask prediction branch network to obtain a third prediction result;
and outputting the damage detection result of the vehicle to be monitored according to the first, second and third prediction results.
In this optional embodiment, the plurality of damage candidate regions in the damage picture are classified by the classification branch network to obtain a first prediction result; bounding box regression is performed on the candidate regions by the bounding box regression branch network to obtain a second prediction result; and mask prediction is performed on the candidate regions by the mask prediction branch network to obtain a third prediction result, so that the damage detection result of the vehicle to be monitored is output according to the three prediction results.
In the embodiment of the present application, the plurality of damage candidate regions in the damage picture are classified according to the classification branch network to obtain the first prediction result as follows:
The classification branch network calculates, from the features of each candidate region (i.e., candidate box), a score for each category to which the region may belong, and the category with the highest score is taken as the category of that region. For example, the classification branch network calculates the score of each damage category from the color, texture, and shape of each candidate region, and the damage category with the highest score is taken as the damage category of the region.
It should be noted that the damage category of the candidate region refers to one of scratch, corner deformation, non-corner deformation, dead fold, crack, fracture, displacement, partial deletion, complete deletion, lamp damage, glass damage, and severe damage.
In the embodiment of the present application, the bounding box regression branch network performs bounding box regression on the plurality of damage candidate regions in the damage picture to obtain the second prediction result as follows:
The bounding box regression branch network determines which candidate boxes are closest to the box at the true damage location and continuously adjusts the positions of the candidate boxes toward it; the region enclosed by the best candidate box is finally taken as the second prediction result. Specifically, the bounding box regression branch network screens out redundant boxes by Non-Maximum Suppression (NMS) to determine which candidate boxes are closest to the box at the true damage location. The number of damage candidate regions may be, for example, 2 or 3.
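Non-Maximum Suppression as described above can be sketched as follows (a generic NMS over (x1, y1, x2, y2) boxes; the IoU threshold of 0.5 is an illustrative assumption):

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def nms(boxes, scores, thresh=0.5):
    """Keep the highest-scoring box, drop boxes overlapping it by more than
    `thresh`, and repeat, screening out the redundant candidate boxes."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size:
        best = order[0]
        keep.append(best)
        order = order[1:][[iou(boxes[best], boxes[o]) <= thresh
                           for o in order[1:]]]
    return keep

# Boxes 0 and 1 heavily overlap; box 2 is a separate detection.
boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]], float)
scores = np.array([0.9, 0.8, 0.7])
assert nms(boxes, scores) == [0, 2]   # the near-duplicate box 1 is suppressed
```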
In the embodiment of the application, as an optional implementation manner, the mask prediction branch network includes a plurality of depth-separable convolutional networks and a deconvolution network;
and performing mask prediction on the plurality of damage candidate regions in the damage picture according to the mask prediction branch network to obtain a third prediction result includes:
taking the plurality of damage candidate regions in the damage picture as the input of the deconvolution network, so that the deconvolution network outputs shallow features of the plurality of damage candidate regions;
processing the plurality of damage candidate regions in the damage picture according to the plurality of depth-separable convolutional networks to output deep features of the plurality of damage candidate regions;
and obtaining the third prediction result according to the shallow features and the deep features.
In this optional embodiment, the plurality of damage candidate regions in the damage picture are used as the input of the deconvolution network, which upsamples them to extract shallow features; deep features are extracted by the plurality of depth-separable convolutional networks. The shallow and deep features can then be fused, and identification and positioning performed on the fused output. Because the shallow features do not pass through many successive convolution layers, the transmission of spatial position information in the image is enhanced, improving the accuracy of identifying and locating vehicle damage, especially small damage to the vehicle.
In this alternative embodiment, as an example, as shown in fig. 2 and fig. 3, the mask prediction branch network in the prior art includes only several depth separable convolution networks, whereas the mask prediction branch network in the embodiment of the present application includes several depth separable convolution networks and one deconvolution network; specifically, it includes 4 depth separable convolution networks.
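A minimal PyTorch sketch of this two-path mask head follows. Channel counts, RoI feature size (14x14), class count, and the fusion by addition are illustrative assumptions; the filing only specifies the combination of 4 depth separable convolution networks with one deconvolution network:

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise 3x3 convolution followed by a pointwise 1x1 convolution."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

class MaskBranch(nn.Module):
    """Two-path mask head: a deconvolution path preserves shallow
    spatial detail, a stack of 4 depthwise separable convolutions
    extracts deep features; the paths are fused before predicting
    the per-pixel mask logits."""
    def __init__(self, in_ch=256, mid_ch=256, num_classes=12):
        super().__init__()
        # shallow path: one transposed convolution, up-sampling 2x
        self.shallow = nn.ConvTranspose2d(in_ch, mid_ch, 2, stride=2)
        # deep path: 4 depthwise separable convs, then 2x up-sample to match
        self.deep = nn.Sequential(
            *[DepthwiseSeparableConv(in_ch if i == 0 else mid_ch, mid_ch)
              for i in range(4)],
            nn.ConvTranspose2d(mid_ch, mid_ch, 2, stride=2),
        )
        self.predict = nn.Conv2d(mid_ch, num_classes, 1)  # mask logits

    def forward(self, roi_feats):
        # roi_feats: (N, in_ch, 14, 14) features of the damage candidate regions
        fused = self.shallow(roi_feats) + self.deep(roi_feats)  # (N, mid_ch, 28, 28)
        return self.predict(fused)
```

Because the shallow path applies only a single transposed convolution, the spatial position of fine damage detail reaches the fused output without being diluted by repeated convolutions.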
In the embodiment of the present application, as an optional implementation manner, the classification branch network includes a sigmoid function;
and, the step: classifying a plurality of damage candidate regions in the damage picture according to a classification branch network to obtain a first prediction result, wherein the method comprises the following steps:
and classifying a plurality of damage candidate regions in the damage picture according to a sigmoid function to obtain a first prediction result.
In this optional embodiment, the sigmoid function is used in the classification branch network, which avoids the inter-class competition caused by the softmax classification function: each damage category is predicted independently, so the predictions of the damage categories are decoupled, and the accuracy of damage identification and positioning is further improved.
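The contrast can be illustrated numerically. The logit values below are hypothetical, chosen for a region exhibiting two co-occurring damage types:

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

def sigmoid(logits):
    return 1.0 / (1.0 + np.exp(-logits))

# hypothetical logits for a region showing both a scratch and a crack
logits = np.array([3.0, 2.8, -4.0])  # [scratch, crack, dead fold]

# softmax forces the classes to compete: probabilities must sum to 1,
# so two co-occurring damage types suppress each other's scores
print(softmax(logits))   # scratch and crack split the mass (~0.55 / 0.45)

# sigmoid scores each class independently: both can be near 1
print(sigmoid(logits))   # scratch ~0.95, crack ~0.94, dead fold ~0.02
```

With sigmoid, both damage types are confidently detected; with softmax, neither score reaches a confident level even though both damages are present.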
In this embodiment of the present application, as an optional implementation manner, the classification branch network further includes a loss function, and a calculation formula of the loss function is:
L_cls = L + L_{0-1}

and,

[the expression for L_{0-1}, supplied as a formula image in the original filing, is not reproduced here]

where L represents the cross entropy loss function, L_{0-1} represents the 0-1 loss function, σ(α_i) is the sigmoid function, and N represents the number of classes.
In this alternative embodiment, by adding L_{0-1} to the cross entropy loss L as a regularization term, the final loss function L_cls guides the distinction between categories to become more obvious and improves the confidence of the classification.
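A sketch of the combined loss follows. The exact expression for L_{0-1} is given only as a formula image in the filing, so this assumes one common reading of a 0-1 loss: the fraction of the N classes whose hard (thresholded) decision is wrong. The function name and the 0.5 threshold are likewise assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def classification_loss(logits, targets, eps=1e-7):
    """Sketch of L_cls = L + L_{0-1}: per-class binary cross entropy
    plus a 0-1 term counting thresholded misclassifications."""
    p = np.clip(sigmoid(logits), eps, 1 - eps)
    # L: per-class binary cross entropy, averaged over the N classes
    ce = -np.mean(targets * np.log(p) + (1 - targets) * np.log(1 - p))
    # L_{0-1}: fraction of classes whose hard decision at 0.5 is wrong
    zero_one = np.mean((p >= 0.5).astype(float) != targets)
    return ce + zero_one
```

The 0-1 term adds a fixed penalty per misclassified class, so predictions hovering near the decision boundary are pushed away from it, sharpening the separation between categories.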
In the first aspect of the present application, as an alternative embodiment, the damage category is one of scratch, corner deformation, non-corner deformation, dead fold, crack, fracture, displacement, partial loss, complete loss, lamp damage, glass damage, and severe damage.
In this optional embodiment, the damage categories are scratch, corner deformation, non-corner deformation, dead fold, crack, fracture, displacement, partial loss, complete loss, lamp damage, glass damage and severe damage; that is, compared with the prior art, this optional embodiment can identify and position more types of damage, so as to further improve the precision of damage identification and positioning.
Example two
Referring to fig. 4, fig. 4 is a schematic structural diagram of a vehicle damage identification device disclosed in the embodiment of the present application. As shown in fig. 4, the apparatus of the embodiment of the present application includes:
the acquiring module 201 is used for acquiring a damage picture of a vehicle to be monitored;
the extraction module 202 is used for extracting image characteristics of a damage picture of a vehicle to be monitored according to the neural network;
the determining module 203 is used for determining a plurality of damage candidate regions in the damage picture according to the image characteristics;
the identifying module 204 is configured to process a damage picture with a plurality of damage candidate regions according to the example segmentation model, so that the example segmentation model outputs a damage detection result of the vehicle to be monitored, where the damage detection result includes information of at least one damage, and the damage information includes a category and location information of the damage.
The device of the embodiment of the application, by executing the vehicle damage identification method, can extract image features of the damage picture of the vehicle to be monitored according to the neural network, determine a plurality of damage candidate regions in the damage picture according to the image features, and process the damage picture with the plurality of damage candidate regions according to the example segmentation model, so that the example segmentation model outputs the damage detection result of the vehicle to be monitored, wherein the damage detection result includes at least one piece of damage information, and the damage information includes the category and the position information of the damage. Compared with the prior art, the embodiment of the application can process the damage picture by using the example segmentation model, so that the damage can be positioned to each pixel point of the image, thereby further improving the identification and positioning accuracy of the vehicle damage.
In the embodiment of the present application, as an optional implementation manner, the example segmentation model includes a classification branch network, a frame regression branch network, and a mask prediction branch network;
and, the determining module includes:
the classification submodule is used for classifying a plurality of damage candidate regions in the damage picture according to the classification branch network to obtain a first prediction result;
the frame regression processing submodule is used for carrying out frame regression processing on a plurality of damage candidate regions in the damage picture according to the frame regression branch network to obtain a second prediction result;
the prediction sub-module is used for performing mask prediction on a plurality of damage candidate regions in the damage picture according to a mask prediction branch network to obtain a third prediction result;
and the output module is used for outputting the damage detection result of the vehicle to be monitored according to the first prediction result, the second prediction result and the third prediction result.
The classification branch network classifies the plurality of damage candidate regions in the damage picture to obtain the first prediction result; the frame regression branch network performs frame regression processing on the plurality of damage candidate regions to obtain the second prediction result; and the mask prediction branch network performs mask prediction on the plurality of damage candidate regions to obtain the third prediction result, so that the damage detection result of the vehicle to be monitored is output according to the first prediction result, the second prediction result and the third prediction result.
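The final output step can be sketched as follows. The function, field names, and the 0.5 confidence threshold are illustrative assumptions about how the three branch outputs might be merged, not details specified in the filing:

```python
def assemble_detection(class_probs, refined_boxes, mask_logits, score_threshold=0.5):
    """Merge the three branch outputs into the damage detection result:
    one (category, score, box, mask) record per retained candidate region."""
    detections = []
    for probs, box, mask in zip(class_probs, refined_boxes, mask_logits):
        best = max(range(len(probs)), key=lambda k: probs[k])  # top damage class
        if probs[best] >= score_threshold:  # keep confident regions only
            detections.append({
                "category": best,     # index into the damage classes
                "score": probs[best],
                "box": box,           # refined bounding box (x1, y1, x2, y2)
                "mask": mask,         # per-pixel mask for the region
            })
    return detections
```

Each retained record thus carries both the damage category (first prediction result) and its location down to the pixel level (second and third prediction results).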
Please refer to the detailed description of the first embodiment of the present application for other descriptions of the embodiments of the present application, which are not repeated herein.
EXAMPLE III
Referring to fig. 5, fig. 5 is a schematic structural diagram of a vehicle damage identification device according to an embodiment of the present application. As shown in fig. 5, the apparatus of the embodiment of the present application includes:
a processor 301; and
the memory 302 is configured to store machine readable instructions, and when the instructions are executed by the processor 301, the processor 301 executes the vehicle damage identification method according to the first embodiment of the present application.
The device of the embodiment of the application, by executing the vehicle damage identification method, can determine a plurality of damage candidate regions in the damage picture according to the image features, and process the damage picture with the plurality of damage candidate regions according to the example segmentation model, so that the example segmentation model outputs the damage detection result of the vehicle to be monitored, wherein the damage detection result includes at least one piece of damage information, and the damage information includes the category and the position information of the damage. Compared with the prior art, the embodiment of the application can process the damage picture by using the example segmentation model, so that the damage can be positioned to each pixel point of the image, thereby further improving the identification and positioning accuracy of the vehicle damage.
Example four
The embodiment of the application discloses a storage medium, wherein a computer program is stored in the storage medium, and the computer program is executed by a processor to execute the vehicle damage identification method in the embodiment of the application.
The storage medium of the embodiment of the application, by executing the vehicle damage identification method, can extract image features of the damage picture of the vehicle to be monitored according to the neural network, determine a plurality of damage candidate regions in the damage picture according to the image features, and process the damage picture with the plurality of damage candidate regions according to the example segmentation model, so that the example segmentation model outputs the damage detection result of the vehicle to be monitored, wherein the damage detection result includes at least one piece of damage information, and the damage information includes the category and the position information of the damage. Compared with the prior art, the embodiment of the application can process the damage picture by using the example segmentation model, so that the damage can be positioned to each pixel point of the image, thereby further improving the identification and positioning accuracy of the vehicle damage.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, a division of a unit is merely a division of one logic function, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
In addition, units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
Furthermore, the functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
It should be noted that the functions, if implemented in the form of software functional modules and sold or used as independent products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
In this document, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
The above description is only an example of the present application and is not intended to limit the scope of the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (10)

1. A vehicle damage identification method, characterized in that the method comprises:
acquiring a damage picture of a vehicle to be monitored;
extracting image characteristics of the damage picture of the vehicle to be monitored according to a neural network;
determining a plurality of damage candidate regions in the damage picture according to the image characteristics;
processing the damage picture with the damage candidate regions according to an example segmentation model, so that the example segmentation model outputs a damage detection result of the vehicle to be monitored, wherein the damage detection result comprises information of at least one damage, and the information of the damage comprises the category and the position information of the damage.
2. The method of claim 1, wherein the instance segmentation model comprises a classification branch network, a bounding box regression branch network, a mask prediction branch network;
and determining a plurality of damage candidate regions in the damage picture according to the image features, wherein the determining comprises the following steps:
classifying the plurality of damage candidate regions in the damage picture according to the classification branch network to obtain a first prediction result;
performing frame regression processing on the plurality of damage candidate regions in the damage picture according to the frame regression branch network to obtain a second prediction result;
performing mask prediction on the plurality of damage candidate regions in the damage picture according to the mask prediction branch network to obtain a third prediction result;
and outputting the damage detection result of the vehicle to be monitored according to the first prediction result, the second prediction result and the third prediction result.
3. The method of claim 2, wherein said mask prediction branch network comprises a number of depth separable convolutional networks and a deconvolution network;
and performing mask prediction on the plurality of candidate damage regions in the damaged picture according to the mask prediction branch network to obtain a third prediction result, wherein the mask prediction comprises:
taking the plurality of damage candidate regions in the damage picture as an input of the deconvolution network, so that the deconvolution network outputs shallow features of the plurality of damage candidate regions in the damage picture;
processing the plurality of damage candidate regions in the damage picture according to the plurality of depth-separable convolutional networks to output deep features of the plurality of damage candidate regions in the damage picture;
and obtaining the third prediction result according to the shallow feature and the deep feature.
4. The method of claim 2, wherein the classification branching network comprises a sigmoid function;
and classifying the plurality of damage candidate regions in the damage picture according to the classification branch network to obtain a first prediction result, including:
and classifying the plurality of damage candidate regions in the damage picture according to the sigmoid function to obtain the first prediction result.
5. The method of claim 4, wherein the classification branching network further comprises a loss function calculated as:
L_cls = L + L_{0-1}

and,

[the expression for L_{0-1}, supplied as a formula image in the original filing, is not reproduced here]

wherein L represents the cross entropy loss function, L_{0-1} represents the 0-1 loss function, σ(α_i) is the sigmoid function, and N represents the number of classes.
6. The method of claim 1, wherein the damage is one of scratch, corner distortion, non-corner distortion, dead fold, crack, fracture, displacement, partial loss, complete loss, lamp failure, glass failure, and severe damage.
7. A vehicle damage identification device, characterized in that the device comprises:
the acquisition module is used for acquiring a damage picture of the vehicle to be monitored;
the extraction module is used for extracting the image characteristics of the damage picture of the vehicle to be monitored according to the neural network;
the determining module is used for determining a plurality of damage candidate regions in the damage picture according to the image characteristics;
the identification module is used for processing the damage picture with the damage candidate areas according to an example segmentation model, so that the example segmentation model outputs a damage detection result of the vehicle to be monitored, the damage detection result comprises at least one piece of damage information, and the damage information comprises the category and the position information of the damage.
8. The apparatus of claim 7, in which the instance segmentation model comprises a classification branch network, a bounding box regression branch network, a mask prediction branch network;
and, the determining module comprises:
the classification submodule is used for classifying the damage candidate areas in the damage picture according to the classification branch network to obtain a first prediction result;
the frame regression processing submodule is used for carrying out frame regression processing on the plurality of damage candidate regions in the damage picture according to the frame regression branch network to obtain a second prediction result;
the prediction sub-module is used for performing mask prediction on the plurality of damage candidate regions in the damage picture according to the mask prediction branch network to obtain a third prediction result;
and the output module is used for outputting the damage detection result of the vehicle to be monitored according to the first prediction result, the second prediction result and the third prediction result.
9. A vehicle damage identification device, characterized in that the device comprises:
a processor; and
a memory configured to store machine readable instructions that, when executed by the processor, cause the processor to perform the vehicle injury identification method of any of claims 1-6.
10. A storage medium characterized in that the storage medium stores a computer program which is executed by a processor to perform the vehicle damage identification method according to any one of claims 1 to 6.
CN202110226715.9A 2021-03-01 2021-03-01 Vehicle damage identification method, device, equipment and storage medium Pending CN112966730A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110226715.9A CN112966730A (en) 2021-03-01 2021-03-01 Vehicle damage identification method, device, equipment and storage medium


Publications (1)

Publication Number Publication Date
CN112966730A true CN112966730A (en) 2021-06-15

Family

ID=76276092

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110226715.9A Pending CN112966730A (en) 2021-03-01 2021-03-01 Vehicle damage identification method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112966730A (en)


Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107403424A (en) * 2017-04-11 2017-11-28 阿里巴巴集团控股有限公司 A kind of car damage identification method based on image, device and electronic equipment
CN109447169A (en) * 2018-11-02 2019-03-08 北京旷视科技有限公司 The training method of image processing method and its model, device and electronic system
CN109712118A (en) * 2018-12-11 2019-05-03 武汉三江中电科技有限责任公司 A kind of substation isolating-switch detection recognition method based on Mask RCNN
CN110136198A (en) * 2018-02-09 2019-08-16 腾讯科技(深圳)有限公司 Image processing method and its device, equipment and storage medium
CN110569837A (en) * 2018-08-31 2019-12-13 阿里巴巴集团控股有限公司 Method and device for optimizing damage detection result
CN110728236A (en) * 2019-10-12 2020-01-24 创新奇智(重庆)科技有限公司 Vehicle loss assessment method and special equipment thereof
CN110874594A (en) * 2019-09-23 2020-03-10 平安科技(深圳)有限公司 Human body surface damage detection method based on semantic segmentation network and related equipment
CN111488875A (en) * 2020-06-24 2020-08-04 爱保科技有限公司 Vehicle insurance claim settlement loss checking method and device based on image recognition and electronic equipment
CN111612104A (en) * 2020-06-30 2020-09-01 爱保科技有限公司 Vehicle loss assessment image acquisition method, device, medium and electronic equipment
CN111768425A (en) * 2020-07-23 2020-10-13 腾讯科技(深圳)有限公司 Image processing method, device and equipment
CN112017065A (en) * 2020-08-27 2020-12-01 中国平安财产保险股份有限公司 Vehicle loss assessment and claim settlement method and device and computer readable storage medium
CN112163449A (en) * 2020-08-21 2021-01-01 同济大学 Lightweight multi-branch feature cross-layer fusion image semantic segmentation method
CN112287905A (en) * 2020-12-18 2021-01-29 德联易控科技(北京)有限公司 Vehicle damage identification method, device, equipment and storage medium
CN112348011A (en) * 2020-09-10 2021-02-09 小灵狗出行科技有限公司 Vehicle damage assessment method and device and storage medium


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
KAIMING HE 等: "Mask R-CNN", 《ARXIV》 *
SAQIB MAMOON 等: "SPSSNet: a real-time network for image semantic segmentation", 《FRONT INFORM TECHNOL ELECTRON ENG》 *
SHAOQING REN 等: "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks", 《ARXIV》 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113743407A (en) * 2021-09-08 2021-12-03 平安科技(深圳)有限公司 Vehicle damage detection method, device, equipment and storage medium
WO2023035538A1 (en) * 2021-09-08 2023-03-16 平安科技(深圳)有限公司 Vehicle damage detection method, device, apparatus and storage medium
CN113743407B (en) * 2021-09-08 2024-05-10 平安科技(深圳)有限公司 Method, device, equipment and storage medium for detecting vehicle damage
CN114898155A (en) * 2022-05-18 2022-08-12 平安科技(深圳)有限公司 Vehicle damage assessment method, device, equipment and storage medium
CN114898155B (en) * 2022-05-18 2024-05-28 平安科技(深圳)有限公司 Vehicle damage assessment method, device, equipment and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210615