CN115063632A - Vehicle damage identification method, device, equipment and medium based on artificial intelligence

Vehicle damage identification method, device, equipment and medium based on artificial intelligence

Info

Publication number
CN115063632A
CN115063632A CN202210696940.3A CN202210696940A CN115063632A CN 115063632 A CN115063632 A CN 115063632A CN 202210696940 A CN202210696940 A CN 202210696940A CN 115063632 A CN115063632 A CN 115063632A
Authority
CN
China
Prior art keywords
damage
damage identification
type
vehicle
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210696940.3A
Other languages
Chinese (zh)
Inventor
康甲
刘莉红
刘玉宇
肖京
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN202210696940.3A priority Critical patent/CN115063632A/en
Publication of CN115063632A publication Critical patent/CN115063632A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N 3/02, 3/08: Computing arrangements based on biological models; neural networks; learning methods
    • G06V 10/26: Image preprocessing; segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; detection of occlusion
    • G06V 10/40: Extraction of image or video features
    • G06V 10/761: Image or video pattern matching; proximity, similarity or dissimilarity measures in feature spaces
    • G06V 10/764: Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using classification, e.g. of video objects
    • G06V 10/774: Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V 10/82: Arrangements for image or video recognition or understanding using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The application provides an artificial intelligence based vehicle damage identification method, apparatus, electronic device and storage medium. The artificial intelligence based vehicle damage identification method comprises the following steps: collecting vehicle images with label data as an annotation data set and dividing the annotation data set into a training set and a detection set, wherein the label data comprises the damage type of each pixel point in the vehicle image; building an initial damage identification network and training it on the training set to obtain a first damage identification network; obtaining a damage identification result for each vehicle image in the detection set with the first damage identification network, and obtaining the discrimination between different damage types from the damage identification results; constructing a discrimination loss function based on the discrimination, and training the first damage identification network with the discrimination loss function and the detection set to obtain a second damage identification network; and obtaining the damage identification result of a real-time vehicle image with the second damage identification network. The method and device can improve the accuracy of vehicle damage identification.

Description

Vehicle damage identification method, device, equipment and medium based on artificial intelligence
Technical Field
The present application relates to the field of artificial intelligence technologies, and in particular, to a vehicle damage identification method and apparatus based on artificial intelligence, an electronic device, and a storage medium.
Background
After a traffic accident happens, an insurance company needs to carry out vehicle damage assessment at the accident site. During the assessment, images of the vehicle are collected on site, and the type and degree of vehicle damage are determined from these images as a basis for the insurance company's claim settlement; the accuracy of vehicle damage identification therefore directly influences the final claim settlement result.
At present, the damage types at different positions in a vehicle image are generally obtained directly with a conventional image segmentation network. However, in a vehicle damage assessment scene there is a certain similarity between different damage types, so the accuracy of vehicle damage identification is low.
Disclosure of Invention
In view of the foregoing, it is necessary to provide an artificial intelligence based vehicle damage identification method and related apparatus to solve the technical problem of how to improve the accuracy of vehicle damage identification, wherein the related apparatus includes an artificial intelligence based vehicle damage identification device, an electronic apparatus and a storage medium.
The application provides a vehicle damage identification method based on artificial intelligence, which comprises the following steps:
collecting a vehicle image with label data as an annotation data set, and dividing the annotation data set into a training set and a detection set, wherein the label data comprises the damage type of each pixel point in the vehicle image;
building an initial damage recognition network, and training the initial damage recognition network based on the training set to obtain a first damage recognition network;
obtaining a damage identification result of each vehicle image in the detection set based on the first damage identification network, and obtaining the discrimination of different damage types based on the damage identification result;
constructing a discrimination loss function based on the discrimination, and training a first damage identification network based on the discrimination loss function and the detection set to obtain a second damage identification network;
and acquiring a damage identification result of the real-time vehicle image based on the second damage identification network.
In some embodiments, the acquiring a vehicle image with tag data as an annotation data set, and dividing the annotation data set into a training set and a detection set, where the tag data includes a damage type of each pixel point in the vehicle image, includes:
acquiring a large number of vehicle images in a vehicle damage assessment scene, and acquiring label data of each vehicle image;
storing all the vehicle images and the label data of all the vehicle images as an annotation data set;
and dividing the labeling data set into a training set and a detection set according to a preset proportion.
In some embodiments, the building an initial damage identification network, and training the initial damage identification network based on the training set to obtain a first damage identification network includes:
building an initial damage identification network, wherein the initial damage identification network comprises an encoder and a decoder;
training the initial damage identification network based on the training set and a cross entropy loss function to obtain a first damage identification network, wherein the input of the first damage identification network is a vehicle image, the output of the first damage identification network is a damage identification result of the vehicle image, the damage identification result comprises a type vector of each pixel point in the vehicle image, and the type vector comprises a probability value of the pixel point belonging to each damage type;
and selecting the damage type corresponding to the maximum probability value of the type vector of each pixel point in the damage identification result as the damage type of the pixel point in the vehicle image.
In some embodiments, the obtaining the discrimination of different damage types based on the damage identification result includes:
storing all the type vectors in each damage identification result to obtain a type vector set;
calculating the absolute value of the probability value difference between different damage types in a target type vector to serve as the initial discrimination between those damage types, wherein the target type vector is any type vector in the type vector set;
constructing an initial discrimination matrix of the target type vector based on the initial discrimination, wherein the value in the m-th row and n-th column of the initial discrimination matrix represents the initial discrimination between damage type m and damage type n;
traversing all the type vectors in the type vector set to obtain an initial discrimination matrix for each type vector;
and calculating the mean of all the initial discrimination matrices to obtain a target discrimination matrix, and normalizing all the values in the target discrimination matrix to obtain the discrimination between different damage types.
In some embodiments, the discrimination satisfies the relation:

$$\alpha_{mn}=\frac{\bar{d}_{mn}}{\max\limits_{p,q}\bar{d}_{pq}}$$

where $\bar{d}_{mn}$ is the value in the m-th row and n-th column of the target discrimination matrix, the maximum is taken over all values in the target discrimination matrix, and $\alpha_{mn}$, the discrimination between damage type m and damage type n, has value range [0, 1].
In some embodiments, said constructing a discrimination loss function based on said discrimination comprises:
dividing the vehicle image based on the label data to obtain a pixel point set of each damage type in the vehicle image;
sending the vehicle image to the first damage identification network to obtain a damage identification result, and calculating the mean value of all type vectors in the pixel point set of the same damage type in the damage identification result to obtain the average type vector of each damage type;
and constructing a discrimination loss function based on the average type vector and the discrimination between different damage types.
In some embodiments, the discrimination loss function satisfies the relation:

$$\mathrm{Loss}_1=\frac{1}{N(N-1)}\sum_{m=1}^{N}\sum_{\substack{n=1\\ n\neq m}}^{N}\left(1-\alpha_{mn}\right)\left(\mathrm{MAX}-\left\lVert \bar{v}_m-\bar{v}_n\right\rVert_1\right)$$

where $\alpha_{mn}$ is the discrimination between damage type m and damage type n, $\bar{v}_m$ is the average type vector of damage type m in the damage identification result, $\bar{v}_n$ is the average type vector of damage type n in the damage identification result, $\lVert \bar{v}_m-\bar{v}_n\rVert_1$ is the L1 distance between them, MAX is the maximum value of the L1 distance between average type vectors (MAX = 2), N is the number of all damage types, and $\mathrm{Loss}_1$ is the value of the discrimination loss function.
An embodiment of the present application further provides an artificial intelligence based vehicle damage identification device, and the device includes:
the acquisition unit, which is used for collecting vehicle images with label data as an annotation data set, and dividing the annotation data set into a training set and a detection set, wherein the label data comprises the damage type of each pixel point in the vehicle image;
the first training unit is used for building an initial damage recognition network and training the initial damage recognition network based on the training set to obtain a first damage recognition network;
the distinguishing unit is used for obtaining a damage identification result of each vehicle image in the detection set based on the first damage identification network and obtaining the distinguishing degree of different damage types based on the damage identification result;
the second training unit is used for constructing a discrimination loss function based on the discrimination and training the first damage identification network based on the discrimination loss function and the detection set to obtain a second damage identification network;
and the damage identification unit is used for acquiring a damage identification result of the real-time vehicle image based on the second damage identification network.
An embodiment of the present application further provides an electronic device, where the electronic device includes:
a memory storing at least one instruction;
and the processor executes the instructions stored in the memory to realize the artificial intelligence based vehicle damage identification method.
The embodiment of the application also provides a computer-readable storage medium, and at least one instruction is stored in the computer-readable storage medium and executed by a processor in an electronic device to implement the artificial intelligence based vehicle damage identification method.
In conclusion, the vehicle images with label data are divided into a training set and a detection set; a first training is completed on the training set to obtain the first damage identification network; the detection results on the detection set are then obtained with the first damage identification network, and the discrimination between different damage types is derived from these results; finally, a discrimination loss function is constructed and used to train the first damage identification network a second time, yielding the second damage identification network. The accuracy of vehicle damage identification is thereby improved.
Drawings
Fig. 1 is a flowchart of a preferred embodiment of an artificial intelligence based vehicle damage identification method according to the present application.
Fig. 2 is a functional block diagram of a preferred embodiment of an artificial intelligence based vehicle damage identification apparatus according to the present application.
Fig. 3 is a schematic structural diagram of an electronic device according to a preferred embodiment of the artificial intelligence based vehicle damage identification method.
Detailed Description
For a clearer understanding of the objects, features and advantages of the present application, reference will now be made in detail to the present application with reference to the accompanying drawings and specific examples. It should be noted that the embodiments and features of the embodiments of the present application may be combined with each other without conflict. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application, and the described embodiments are merely some, but not all embodiments of the present application.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of the described features. In the description of the present application, "a plurality" means two or more unless specifically limited otherwise.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used in the description of the present application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
The embodiment of the application provides a vehicle damage identification method based on artificial intelligence, which can be applied to one or more electronic devices. An electronic device is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions; its hardware includes, but is not limited to, a microprocessor, an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), an embedded device and the like.
The electronic device may be any electronic product capable of performing human-computer interaction with a client, for example, a Personal computer, a tablet computer, a smart phone, a Personal Digital Assistant (PDA), a game machine, an Internet Protocol Television (IPTV), an intelligent wearable device, and the like.
The electronic device may also include a network device and/or a client device. The network device includes, but is not limited to, a single network server, a server group consisting of a plurality of network servers, or a Cloud Computing (Cloud Computing) based Cloud consisting of a large number of hosts or network servers.
The Network where the electronic device is located includes, but is not limited to, the internet, a wide area Network, a metropolitan area Network, a local area Network, a Virtual Private Network (VPN), and the like.
Fig. 1 is a flowchart illustrating a preferred embodiment of the artificial intelligence based vehicle damage identification method of the present application. The order of the steps in the flowchart may be changed and some steps may be omitted according to different needs.
S10, collecting the vehicle image with the label data as an annotation data set, and dividing the annotation data set into a training set and a detection set, wherein the label data comprises the damage type of each pixel point in the vehicle image.
In an optional embodiment, the acquiring a vehicle image with tag data as an annotation data set, and dividing the annotation data set into a training set and a detection set, where the tag data includes a damage type of each pixel point in the vehicle image, includes:
acquiring a large number of vehicle images in a vehicle damage assessment scene, and acquiring label data of each vehicle image;
storing all vehicle images and tag data of all vehicle images as an annotation data set;
and dividing the labeling data set into a training set and a detection set according to a preset proportion.
In this optional embodiment, the tag data of the vehicle image is an image having a size equal to that of the vehicle image, each pixel point in the tag data corresponds to a tag vector of N rows and 1 columns, each row corresponds to a damage type, in the tag vector, the value of the row corresponding to the damage type of the pixel point is 1, and the values of all other rows are 0, where N represents the number of all damage types including a background class, and the background class is the type of the pixel point in the vehicle image that does not belong to the vehicle damage; all the vehicle images and the tag data of all the vehicle images are stored as an annotation data set.
For example, common damage types in a vehicle damage assessment scene include 7 types: scratch, scrape, dent, wrinkle, dead fold, tear and missing. Together with the background class for pixel points that do not belong to vehicle damage, each pixel point is assigned to one of 8 types, so the label vector of a pixel point has 8 rows and 1 column. If the damage type of pixel point (i, j) is scratch, the label vector of pixel point (i, j) is (0, 1, 0, 0, 0, 0, 0, 0)^T, and the label vectors of all pixel points in the vehicle image form the label data of the vehicle image.
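As a concrete illustration only, the following sketch (assuming NumPy, the hypothetical damage-type ordering above with background at index 0, and an annotation mask that already stores one damage-type index per pixel; none of the names are from the patent) shows how such per-pixel one-hot label data could be built:

```python
import numpy as np

# Assumed ordering: background first, then the 7 damage types.
DAMAGE_TYPES = ["background", "scratch", "scrape", "dent",
                "wrinkle", "dead_fold", "tear", "missing"]
N = len(DAMAGE_TYPES)  # N = 8 types including the background class

def to_label_data(type_index_mask: np.ndarray) -> np.ndarray:
    """Convert an (H, W) mask of damage-type indices into (H, W, N) one-hot label data.

    For each pixel, the entry of its damage type is set to 1 and all other entries to 0,
    matching the N-row, 1-column label vector described above.
    """
    h, w = type_index_mask.shape
    label_data = np.zeros((h, w, N), dtype=np.float32)
    label_data[np.arange(h)[:, None], np.arange(w)[None, :], type_index_mask] = 1.0
    return label_data

# A pixel annotated with damage-type index 1 yields the vector (0,1,0,0,0,0,0,0)^T.
mask = np.zeros((4, 4), dtype=np.int64)   # all background
mask[1, 2] = 1                            # pixel (1, 2) carries damage type 1
labels = to_label_data(mask)
assert labels[1, 2].tolist() == [0, 1, 0, 0, 0, 0, 0, 0]
```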
In this optional embodiment, the labeling data set is divided into a training set and a detection set according to a preset ratio, the preset ratio being 2:1; the training set is subsequently used for training the initial damage identification network, and the detection set is subsequently used for obtaining the discrimination between different damage types.
Therefore, a training set and a detection set with label data are obtained, and a data basis is provided for accurate identification of vehicle damage in the follow-up process.
S11, constructing an initial damage recognition network, and training the initial damage recognition network based on the training set to obtain a first damage recognition network.
In an optional embodiment, the building an initial damage identification network, and training the initial damage identification network based on the training set to obtain a first damage identification network includes:
building an initial damage identification network, wherein the initial damage identification network comprises an encoder and a decoder;
training the initial damage identification network based on the training set and a cross entropy loss function to obtain a first damage identification network, wherein the input of the first damage identification network is a vehicle image, the output of the first damage identification network is a damage identification result of the vehicle image, the damage identification result comprises a type vector of each pixel point in the vehicle image, and the type vector comprises a probability value of the pixel point belonging to each damage type;
and selecting the damage type corresponding to the maximum probability value of the type vector of each pixel point in the damage identification result as the damage type of the pixel point in the vehicle image.
In this optional embodiment, an initial damage identification network is established, an input of the initial damage identification network is a vehicle image, an expected output is a damage identification result of the vehicle image, the damage identification result is an image as large as the vehicle image, the damage identification result includes a type vector of each pixel point in the vehicle image, the type vector includes a probability value that the pixel point belongs to each damage type, and a sum of all probability values in the same pixel point type vector is 1.
In this optional embodiment, the initial damage identification network has an encoder-decoder structure: the encoder downsamples the input vehicle image with convolutional layers to obtain a feature map, and the decoder upsamples the feature map with deconvolution (transposed convolution) layers to obtain the damage identification result of the vehicle image. The initial damage identification network may use any existing image segmentation network with an encoder-decoder structure, such as DeepLabv3 or UNet; the application is not limited in this respect.
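For illustration, a minimal PyTorch sketch of such an encoder-decoder damage identification network is given below; the layer widths and depths are arbitrary assumptions, and in practice an existing segmentation network such as UNet or DeepLabv3 could be used instead:

```python
import torch
import torch.nn as nn

class InitialDamageNet(nn.Module):
    """Minimal encoder-decoder sketch of the initial damage identification network.

    The encoder downsamples the vehicle image with convolutional layers to a feature map;
    the decoder upsamples it back with transposed convolutions and outputs, for every pixel,
    an N-dimensional type vector of per-class probabilities (softmax over classes).
    """

    def __init__(self, num_classes: int = 8, in_channels: int = 3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, num_classes, 4, stride=2, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        logits = self.decoder(self.encoder(x))   # (B, N, H, W)
        return torch.softmax(logits, dim=1)      # per-pixel type vectors that sum to 1
```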
In this optional embodiment, to ensure that the output of the initial damage recognition network is the damage identification result of the vehicle image, the initial damage recognition network needs to be trained on the training set with a cross entropy loss function to obtain the first damage identification network. During training, vehicle images in the training set are continuously input into the initial damage recognition network to obtain output results, the value of the cross entropy loss function is calculated from the output result and the label data of the vehicle image, and the parameters of the initial damage recognition network are updated by gradient descent. When the value of the cross entropy loss function no longer changes, training stops and the first damage identification network is obtained; at this point the first damage identification network has learned the features of different damage types.
In this optional embodiment, a vehicle image is input to the first damage identification network to obtain a damage identification result of the vehicle image, and the damage type of each pixel point of the vehicle image can be obtained by selecting the damage type corresponding to the maximum probability value of the type vector of each pixel point in the damage identification result.
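A possible sketch of this first training stage and of the per-pixel argmax step is shown below, reusing the InitialDamageNet sketch above; train_loader, the learning rate and the epoch count are illustrative assumptions rather than values from the patent:

```python
import torch
import torch.nn as nn
import torch.optim as optim

model = InitialDamageNet(num_classes=8)
criterion = nn.NLLLoss()                             # cross entropy on the softmax output
optimizer = optim.SGD(model.parameters(), lr=1e-2)   # gradient descent update

def train_first_network(train_loader, epochs: int = 20):
    """Train the initial network with the cross entropy loss to obtain the first network."""
    model.train()
    for _ in range(epochs):
        for images, target_types in train_loader:    # (B, 3, H, W), (B, H, W) type indices
            probs = model(images)                    # per-pixel type vectors
            loss = criterion(torch.log(probs + 1e-8), target_types)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model                                     # the first damage identification network

def predict_damage_types(first_net, image: torch.Tensor) -> torch.Tensor:
    """Pick, for each pixel, the damage type with the largest probability in its type vector."""
    with torch.no_grad():
        probs = first_net(image.unsqueeze(0))        # (1, N, H, W)
    return probs.argmax(dim=1).squeeze(0)            # (H, W) damage-type indices
```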
Therefore, the initial damage identification network is trained based on a training set to obtain a first damage identification network, and the first damage identification network can learn the characteristics of different damage types in the vehicle image to obtain the damage identification result of the vehicle image.
And S12, obtaining the damage identification result of each vehicle image in the detection set based on the first damage identification network, and obtaining the discrimination of different damage types based on the damage identification result.
In an optional embodiment, after the first damage identification network is obtained, all the vehicle images in the detection set are input into the first damage identification network in turn to obtain the damage identification result of each vehicle image. The damage identification result contains the type vectors of all pixel points, and the absolute value of the difference between the probability values of different damage types in a type vector reflects the discrimination between those damage types: the smaller this absolute value, the lower the discrimination between the two damage types, the more easily they are confused, and the more likely the damage type is to be misidentified.
Illustratively, suppose the pixel points are divided into the 8 types scratch, scrape, dent, wrinkle, dead fold, tear, missing and background, and the type vector of pixel point (i, j) in the damage identification result is (0.4, 0.5, 0, 0, 0.1, 0, 0, 0)^T. The maximum probability value in the type vector is 0.5, and the damage type corresponding to 0.5 is scrape, so the damage type of pixel point (i, j) is judged to be scrape; however, the probability that pixel point (i, j) belongs to scratch is 0.4, which indicates that the discrimination between scratch and scrape is small and the two are easily confused.
In this optional embodiment, the obtaining the discrimination of different damage types based on the damage identification result includes:
storing all the type vectors in each damage identification result to obtain a type vector set;
calculating the absolute value of the probability value difference between different damage types in a target type vector to serve as the initial discrimination between those damage types, wherein the target type vector is any type vector in the type vector set;
constructing an initial discrimination matrix of the target type vector based on the initial discrimination, wherein the initial discrimination matrix is a square matrix of N rows and N columns, N is the number of all damage types in the target type vector, and the value in the m-th row and n-th column of the initial discrimination matrix represents the initial discrimination between damage type m and damage type n;
traversing all the type vectors in the type vector set to obtain an initial discrimination matrix for each type vector;
calculating the mean of all the initial discrimination matrices to obtain a target discrimination matrix, and normalizing all the values in the target discrimination matrix to obtain the discrimination between different damage types. Taking damage type m and damage type n as an example, the discrimination satisfies the relation:

$$\alpha_{mn}=\frac{\bar{d}_{mn}}{\max\limits_{p,q}\bar{d}_{pq}}$$

where $\bar{d}_{mn}$ is the value in the m-th row and n-th column of the target discrimination matrix, the maximum is taken over all values in the target discrimination matrix, and $\alpha_{mn}$, the discrimination between damage type m and damage type n, has value range [0, 1].
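As an illustration, the discrimination computation could be sketched as follows (NumPy; it assumes the type vectors of all pixels in the detection-set results have already been stacked into one array, and the small epsilon guard is an added assumption):

```python
import numpy as np

def discrimination_matrix(type_vectors: np.ndarray) -> np.ndarray:
    """Compute the discrimination alpha between damage types from detection-set type vectors.

    type_vectors: (P, N) array, one row per pixel, each row a type vector over N damage types.
    For every type vector the initial discrimination between types m and n is |p_m - p_n|;
    the initial discrimination matrices are averaged over all type vectors and then normalized.
    """
    initial = np.abs(type_vectors[:, :, None] - type_vectors[:, None, :])  # (P, N, N)
    target = initial.mean(axis=0)                     # target discrimination matrix, (N, N)
    alpha = target / (target.max() + 1e-12)           # normalize so alpha is in [0, 1]
    return alpha
```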
Therefore, the discrimination among different damage types can be obtained by means of the damage identification result of the first damage identification network, and the precise quantification of the discrimination is realized.
S13, constructing a discrimination loss function based on the discrimination, and training the first damage identification network based on the discrimination loss function and the detection set to obtain a second damage identification network.
In an optional embodiment, in order to enable the first damage identification network to learn the distinguishing features between different damage types, a distinguishing loss function needs to be constructed based on the distinguishing between different damage types, and the first damage identification network needs to be trained for the second time.
In this optional embodiment, the constructing a discrimination loss function based on the discrimination includes:
dividing the vehicle image based on the label data to obtain a pixel point set of each damage type in the vehicle image;
sending the vehicle image into the first damage identification network to obtain a damage identification result, and calculating the mean value of all type vectors in a pixel point set of the same damage type in the damage identification result to obtain an average type vector of each damage type;
and constructing a discrimination loss function based on the average type vector and the discrimination between different damage types.
In this optional embodiment, the discrimination loss function satisfies the relation:

$$\mathrm{Loss}_1=\frac{1}{N(N-1)}\sum_{m=1}^{N}\sum_{\substack{n=1\\ n\neq m}}^{N}\left(1-\alpha_{mn}\right)\left(\mathrm{MAX}-\left\lVert \bar{v}_m-\bar{v}_n\right\rVert_1\right)$$

where $\alpha_{mn}$ is the discrimination between damage type m and damage type n, $\bar{v}_m$ is the average type vector of damage type m in the damage identification result, $\bar{v}_n$ is the average type vector of damage type n in the damage identification result, $\lVert \bar{v}_m-\bar{v}_n\rVert_1$ is the L1 distance between them, MAX is the maximum value of the L1 distance between average type vectors (MAX = 2), N is the number of all damage types, and $\mathrm{Loss}_1$ is the value of the discrimination loss function; a smaller value of $\mathrm{Loss}_1$ indicates a more accurate damage identification result.

In the above discrimination loss function, the term $\mathrm{MAX}-\lVert \bar{v}_m-\bar{v}_n\rVert_1$ constrains the L1 distance between the average type vectors of different damage types in the damage identification result to approach its maximum value, which forces the first damage identification network to learn the distinguishing features between different damage types. Meanwhile, a smaller discrimination $\alpha_{mn}$ between two damage types means they are more easily confused, so the factor $(1-\alpha_{mn})$ assigns such pairs a larger weight, ensuring that the distinguishing features between any two damage types can be learned. The discrimination loss function is constructed in this way.
In this optional embodiment, the first damage identification network is trained on the detection set with the discrimination loss function to obtain the second damage identification network. During training, vehicle images in the detection set are continuously input into the first damage identification network to obtain output results, the value of the discrimination loss function is calculated from the output result and the label data of the vehicle image, and the parameters of the first damage identification network are updated by gradient descent. When the value of the discrimination loss function no longer changes, training stops and the second damage identification network is obtained; at this point the second damage identification network has learned the distinguishing features of different damage types.
Therefore, a discrimination loss function is constructed based on discrimination between different damage types, the discrimination loss function is used for carrying out secondary training on the first damage identification network to obtain a second damage identification network, the second damage identification network can learn the discrimination characteristics of different damage types, and the accuracy of damage type identification is improved.
And S14, acquiring the damage identification result of the real-time vehicle image based on the second damage identification network.
In an optional embodiment, a real-time vehicle image is collected, the real-time vehicle image is input into a second damage identification network to obtain a damage identification result of the real-time vehicle image, and damage types of all pixel points in the real-time vehicle image are obtained based on the damage identification result.
In this optional embodiment, after the damage identification result of the real-time vehicle image is obtained, the areas of the different damage types in the damage identification result may serve as a basis for claim settlement by the vehicle insurance company.
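For illustration, a small sketch of this real-time identification step, assuming the trained second damage identification network and an already preprocessed (3, H, W) image tensor:

```python
import torch

def identify_realtime_damage(second_net, realtime_image: torch.Tensor):
    """Feed a real-time vehicle image into the second damage identification network and
    read off the damage type of every pixel, plus the pixel area of each damage type."""
    second_net.eval()
    with torch.no_grad():
        probs = second_net(realtime_image.unsqueeze(0))   # (1, N, H, W) type vectors
    damage_map = probs.argmax(dim=1).squeeze(0)           # (H, W) per-pixel damage types
    areas = {int(c): int((damage_map == c).sum()) for c in damage_map.unique()}
    return damage_map, areas
```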
According to the technical scheme, the vehicle images with label data are divided into a training set and a detection set; a first training is completed on the training set to obtain the first damage identification network; the detection results on the detection set are then obtained with the first damage identification network, and the discrimination between different damage types is derived from these results; finally, a discrimination loss function is constructed and used to train the first damage identification network a second time, yielding the second damage identification network. The accuracy of vehicle damage identification is thereby improved.
Referring to fig. 2, fig. 2 is a functional block diagram of a preferred embodiment of the vehicle damage identification device based on artificial intelligence according to the present application. The artificial intelligence-based vehicle damage identification device 11 comprises an acquisition unit 110, a first training unit 111, a discrimination unit 112, a second training unit 113 and a damage identification unit 114. A module/unit as referred to herein is a series of computer readable instruction segments capable of being executed by the processor 13 and performing a fixed function, and is stored in the memory 12. In the present embodiment, the functions of the modules/units will be described in detail in the following embodiments.
In an optional embodiment, the collecting unit 110 is configured to collect a vehicle image with label data as an annotation data set, and divide the annotation data set into a training set and a detection set, where the label data includes a damage type of each pixel point in the vehicle image.
In an optional embodiment, the acquiring a vehicle image with tag data as an annotation data set, and dividing the annotation data set into a training set and a detection set, where the tag data includes a damage type of each pixel point in the vehicle image, includes:
acquiring a large number of vehicle images in a vehicle damage assessment scene, and acquiring label data of each vehicle image;
storing all the vehicle images and the label data of all the vehicle images as an annotation data set;
and dividing the labeling data set into a training set and a detection set according to a preset proportion.
In this optional embodiment, the label data of the vehicle image is an image having a size equal to that of the vehicle image, each pixel point in the label data corresponds to a label vector with N rows and 1 columns, each row corresponds to a damage type, in the label vector, the numerical value of the row corresponding to the damage type of the pixel point is 1, and the numerical values of all other rows are 0, where N represents the number of all damage types including a background type, and the background type is the type of the pixel point in the vehicle image that does not belong to the vehicle damage; all vehicle images and tag data of all vehicle images are stored as an annotation data set.
For example, common damage types in a vehicle damage assessment scene include 7 types: scratch, scrape, dent, wrinkle, dead fold, tear and missing. Together with the background class for pixel points that do not belong to vehicle damage, each pixel point is assigned to one of 8 types, so the label vector of a pixel point has 8 rows and 1 column. If the damage type of pixel point (i, j) is scratch, the label vector of pixel point (i, j) is (0, 1, 0, 0, 0, 0, 0, 0)^T, and the label vectors of all pixel points in the vehicle image form the label data of the vehicle image.
In this optional embodiment, the labeling data set is divided into a training set and a detection set according to a preset ratio, the preset ratio being 2:1; the training set is subsequently used for training the initial damage identification network, and the detection set is subsequently used for obtaining the discrimination between different damage types.
The first training unit 111 is configured to build an initial impairment recognition network, and train the initial impairment recognition network based on the training set to obtain a first impairment recognition network.
In an optional embodiment, the building an initial damage identification network, and training the initial damage identification network based on the training set to obtain a first damage identification network includes:
building an initial damage identification network, wherein the initial damage identification network comprises an encoder and a decoder;
training the initial damage identification network based on the training set and a cross entropy loss function to obtain a first damage identification network, wherein the input of the first damage identification network is a vehicle image, the output of the first damage identification network is a damage identification result of the vehicle image, the damage identification result comprises a type vector of each pixel point in the vehicle image, and the type vector comprises a probability value of the pixel point belonging to each damage type;
and selecting the damage type corresponding to the maximum probability value of the type vector of each pixel point in the damage identification result as the damage type of the pixel point in the vehicle image.
In this optional embodiment, an initial damage identification network is established, an input of the initial damage identification network is a vehicle image, an expected output is a damage identification result of the vehicle image, the damage identification result is an image as large as the vehicle image, the damage identification result includes a type vector of each pixel point in the vehicle image, the type vector includes a probability value that the pixel point belongs to each damage type, and a sum of all probability values in the same pixel point type vector is 1.
In this optional embodiment, the initial damage identification network has an encoder-decoder structure: the encoder downsamples the input vehicle image with convolutional layers to obtain a feature map, and the decoder upsamples the feature map with deconvolution (transposed convolution) layers to obtain the damage identification result of the vehicle image. The initial damage identification network may use any existing image segmentation network with an encoder-decoder structure, such as DeepLabv3 or UNet; the application is not limited in this respect.
In this optional embodiment, to ensure that the output of the initial damage recognition network is the damage identification result of the vehicle image, the initial damage recognition network needs to be trained on the training set with a cross entropy loss function to obtain the first damage identification network. During training, vehicle images in the training set are continuously input into the initial damage recognition network to obtain output results, the value of the cross entropy loss function is calculated from the output result and the label data of the vehicle image, and the parameters of the initial damage recognition network are updated by gradient descent. When the value of the cross entropy loss function no longer changes, training stops and the first damage identification network is obtained; at this point the first damage identification network has learned the features of different damage types.
In this optional embodiment, a vehicle image is input to the first damage identification network to obtain a damage identification result of the vehicle image, and the damage type of each pixel point of the vehicle image can be obtained by selecting the damage type corresponding to the maximum probability value of the type vector of each pixel point in the damage identification result.
The discrimination unit 112 is configured to obtain a damage identification result of each vehicle image in the detection set based on the first damage identification network, and obtain the discrimination of different damage types based on the damage identification result.
In an optional embodiment, after the first damage identification network is obtained, all the vehicle images in the detection set are input into the first damage identification network in turn to obtain the damage identification result of each vehicle image. The damage identification result contains the type vectors of all pixel points, and the absolute value of the difference between the probability values of different damage types in a type vector reflects the discrimination between those damage types: the smaller this absolute value, the lower the discrimination between the two damage types, the more easily they are confused, and the more likely the damage type is to be misidentified.
Illustratively, suppose the pixel points are divided into the 8 types scratch, scrape, dent, wrinkle, dead fold, tear, missing and background, and the type vector of pixel point (i, j) in the damage identification result is (0.4, 0.5, 0, 0, 0.1, 0, 0, 0)^T. The maximum probability value in the type vector is 0.5, and the damage type corresponding to 0.5 is scrape, so the damage type of pixel point (i, j) is judged to be scrape; however, the probability that pixel point (i, j) belongs to scratch is 0.4, which indicates that the discrimination between scratch and scrape is small and the two are easily confused.
In this optional embodiment, the obtaining the discrimination of different damage types based on the damage identification result includes:
storing all the type vectors in each damage identification result to obtain a type vector set;
calculating the absolute value of the probability value difference between different damage types in a target type vector to serve as the initial discrimination between those damage types, wherein the target type vector is any type vector in the type vector set;
constructing an initial discrimination matrix of the target type vector based on the initial discrimination, wherein the initial discrimination matrix is a square matrix of N rows and N columns, N is the number of all damage types in the target type vector, and the value in the m-th row and n-th column of the initial discrimination matrix represents the initial discrimination between damage type m and damage type n;
traversing all the type vectors in the type vector set to obtain an initial discrimination matrix for each type vector;
calculating the mean of all the initial discrimination matrices to obtain a target discrimination matrix, and normalizing all the values in the target discrimination matrix to obtain the discrimination between different damage types. Taking damage type m and damage type n as an example, the discrimination satisfies the relation:

$$\alpha_{mn}=\frac{\bar{d}_{mn}}{\max\limits_{p,q}\bar{d}_{pq}}$$

where $\bar{d}_{mn}$ is the value in the m-th row and n-th column of the target discrimination matrix, the maximum is taken over all values in the target discrimination matrix, and $\alpha_{mn}$, the discrimination between damage type m and damage type n, has value range [0, 1].
The second training unit 113 is configured to construct a discrimination loss function based on the discrimination, and train the first damage identification network based on the discrimination loss function and the detection set to obtain a second damage identification network.
In an optional embodiment, in order to enable the first damage identification network to learn the distinguishing features between different damage types, a discrimination loss function needs to be constructed based on the discrimination between different damage types, and the first damage identification network needs to be trained a second time.
In this optional embodiment, the constructing a discrimination loss function based on the discrimination includes:
dividing the vehicle image based on the label data to obtain a pixel point set of each damage type in the vehicle image;
sending the vehicle image to the first damage identification network to obtain a damage identification result, and calculating the mean value of all type vectors in the pixel point set of the same damage type in the damage identification result to obtain the average type vector of each damage type;
and constructing a discrimination loss function based on the average type vector and the discrimination between different damage types.
In this optional embodiment, the discrimination loss function satisfies the relation:

$$\mathrm{Loss}_1=\frac{1}{N(N-1)}\sum_{m=1}^{N}\sum_{\substack{n=1\\ n\neq m}}^{N}\left(1-\alpha_{mn}\right)\left(\mathrm{MAX}-\left\lVert \bar{v}_m-\bar{v}_n\right\rVert_1\right)$$

where $\alpha_{mn}$ is the discrimination between damage type m and damage type n, $\bar{v}_m$ is the average type vector of damage type m in the damage identification result, $\bar{v}_n$ is the average type vector of damage type n in the damage identification result, $\lVert \bar{v}_m-\bar{v}_n\rVert_1$ is the L1 distance between them, MAX is the maximum value of the L1 distance between average type vectors (MAX = 2), N is the number of all damage types, and $\mathrm{Loss}_1$ is the value of the discrimination loss function; a smaller value of $\mathrm{Loss}_1$ indicates a more accurate damage identification result.

In the above discrimination loss function, the term $\mathrm{MAX}-\lVert \bar{v}_m-\bar{v}_n\rVert_1$ constrains the L1 distance between the average type vectors of different damage types in the damage identification result to approach its maximum value, which forces the first damage identification network to learn the distinguishing features between different damage types. Meanwhile, a smaller discrimination $\alpha_{mn}$ between two damage types means they are more easily confused, so the factor $(1-\alpha_{mn})$ assigns such pairs a larger weight, ensuring that the distinguishing features between any two damage types can be learned. The discrimination loss function is constructed in this way.
In this optional embodiment, the first damage identification network is trained on the detection set with the discrimination loss function to obtain the second damage identification network. During training, vehicle images in the detection set are continuously input into the first damage identification network to obtain output results, the value of the discrimination loss function is calculated from the output result and the label data of the vehicle image, and the parameters of the first damage identification network are updated by gradient descent. When the value of the discrimination loss function no longer changes, training stops and the second damage identification network is obtained; at this point the second damage identification network has learned the distinguishing features of different damage types.
The damage identification unit 114 is configured to obtain a damage identification result of the real-time vehicle image based on the second damage identification network.
In an optional embodiment, a real-time vehicle image is collected, the real-time vehicle image is input into a second damage identification network to obtain a damage identification result of the real-time vehicle image, and damage types of all pixel points in the real-time vehicle image are obtained based on the damage identification result.
In this optional embodiment, after the damage identification result of the real-time vehicle image is obtained, the areas of the different damage types in the damage identification result may serve as a basis for claim settlement by the vehicle insurance company.
According to the technical scheme, the vehicle images with label data are divided into a training set and a detection set; a first training is completed on the training set to obtain the first damage identification network; the detection results on the detection set are then obtained with the first damage identification network, and the discrimination between different damage types is derived from these results; finally, a discrimination loss function is constructed and used to train the first damage identification network a second time, yielding the second damage identification network. The accuracy of vehicle damage identification is thereby improved.
Fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. The electronic device 1 comprises a memory 12 and a processor 13. The memory 12 is used for storing computer readable instructions, and the processor 13 is used for executing the computer readable instructions stored in the memory to implement the artificial intelligence based vehicle damage identification method according to any one of the above embodiments.
In an alternative embodiment, the electronic device 1 further comprises a bus, a computer program stored in said memory 12 and executable on said processor 13, such as an artificial intelligence based vehicle damage identification program.
Fig. 3 shows only the electronic device 1 with the memory 12 and the processor 13, and it will be understood by those skilled in the art that the structure shown in fig. 3 does not constitute a limitation of the electronic device 1, and may comprise fewer or more components than shown, or a combination of certain components, or a different arrangement of components.
Referring to fig. 1, the memory 12 in the electronic device 1 stores a plurality of computer-readable instructions to implement an artificial intelligence based vehicle damage identification method, and the processor 13 can execute the plurality of instructions to implement:
collecting a vehicle image with label data as an annotation data set, and dividing the annotation data set into a training set and a detection set, wherein the label data comprises the damage type of each pixel point in the vehicle image;
building an initial damage identification network, and training the initial damage identification network based on the training set to obtain a first damage identification network;
obtaining a damage identification result of each vehicle image in the detection set based on the first damage identification network, and obtaining the discrimination of different damage types based on the damage identification result;
constructing a discrimination loss function based on the discrimination, and training a first damage identification network based on the discrimination loss function and the detection set to obtain a second damage identification network;
and acquiring a damage identification result of the real-time vehicle image based on the second damage identification network.
Specifically, the processor 13 may refer to the description of the relevant steps in the embodiment corresponding to fig. 1 for a specific implementation method of the instruction, which is not described herein again.
It will be understood by those skilled in the art that the schematic diagram is only an example of the electronic device 1, and does not constitute a limitation to the electronic device 1, the electronic device 1 may have a bus-type structure or a star-shaped structure, the electronic device 1 may further include more or less hardware or software than those shown in the figures, or different component arrangements, for example, the electronic device 1 may further include an input and output device, a network access device, etc.
It should be noted that the electronic device 1 is only an example, and other existing or future electronic products, such as those that may be adapted to the present application, should also be included in the scope of protection of the present application, and are included by reference.
Memory 12 includes at least one type of readable storage medium, which may be non-volatile or volatile. The readable storage medium includes flash memory, removable hard disks, multimedia cards, card type memory (e.g., SD or DX memory, etc.), magnetic memory, magnetic disks, optical disks, etc. The memory 12 may in some embodiments be an internal storage unit of the electronic device 1, for example a removable hard disk of the electronic device 1. The memory 12 may also be an external storage device of the electronic device 1 in other embodiments, such as a plug-in mobile hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like provided on the electronic device 1. The memory 12 may be used not only to store application software installed in the electronic device 1 and various types of data, such as codes of an artificial intelligence-based vehicle damage recognition program, etc., but also to temporarily store data that has been output or is to be output.
In some embodiments, the processor 13 may be composed of an integrated circuit, for example a single packaged integrated circuit, or of a plurality of integrated circuits packaged with the same or different functions, including one or more central processing units (CPUs), microprocessors, digital processing chips, graphics processors, and combinations of various control chips. The processor 13 is the control unit of the electronic device 1: it connects the various components of the electronic device 1 through various interfaces and lines, and executes the functions of the electronic device 1 and processes its data by running or executing the programs or modules stored in the memory 12 (for example, the artificial intelligence based vehicle damage identification program) and calling the data stored in the memory 12.
The processor 13 executes the operating system of the electronic device 1 and the various installed application programs. By executing the application programs, the processor 13 implements the steps of each of the above-described embodiments of the artificial intelligence based vehicle damage identification method, such as the steps shown in fig. 1.
Illustratively, the computer program may be partitioned into one or more modules/units, which are stored in the memory 12 and executed by the processor 13 to accomplish the present application. The one or more modules/units may be a series of computer-readable instruction segments capable of performing certain functions, which are used to describe the execution of the computer program in the electronic device 1. For example, the computer program may be partitioned into an acquisition unit 110, a first training unit 111, a discrimination unit 112, a second training unit 113, and a damage identification unit 114.
The integrated unit implemented in the form of a software functional module may be stored in a computer-readable storage medium. The software functional module is stored in a storage medium and includes several instructions that enable a computer device (which may be a personal computer, another computer device, or a network device) or a processor (Processor) to execute parts of the artificial intelligence based vehicle damage identification method according to the embodiments of the present application.
The integrated modules/units of the electronic device 1 may be stored in a computer-readable storage medium if they are implemented in the form of software functional units and sold or used as separate products. Based on such understanding, all or part of the processes in the methods of the embodiments described above may be implemented by a computer program, which may be stored in a computer-readable storage medium and executed by a processor, to implement the steps of the embodiments of the methods described above.
The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory, and other memories.
Further, the computer-readable storage medium may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function, and the like; the storage data area may store data created according to the use of the blockchain node, and the like.
The blockchain referred to in the present application is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms, and encryption algorithms. A blockchain is essentially a decentralized database: a chain of data blocks associated with one another by cryptographic methods, where each data block contains the information of a batch of network transactions and is used to verify the validity (anti-counterfeiting) of that information and to generate the next block. The blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
The bus may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one arrow is shown in fig. 3, but this does not mean that there is only one bus or one type of bus. The bus is arranged to enable communication between the memory 12, the at least one processor 13, and other components.
The embodiment of the present application further provides a computer-readable storage medium (not shown), in which computer-readable instructions are stored, and the computer-readable instructions are executed by a processor in an electronic device to implement the artificial intelligence based vehicle damage identification method according to any of the above embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is only one logical functional division, and other divisions may be realized in practice.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, the functional modules in the embodiments of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit can be realized in the form of hardware, or in the form of hardware plus a software functional module.
Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the specification may also be implemented by one unit or means through software or hardware. The terms first, second, etc. are used to denote names, but not any particular order.
Finally, it should be noted that the above embodiments are intended only to illustrate, not to limit, the technical solutions of the present application. Although the present application has been described in detail with reference to the preferred embodiments, those skilled in the art should understand that modifications or equivalent substitutions may be made to the technical solutions of the present application without departing from their spirit and scope.

Claims (10)

1. An artificial intelligence based vehicle damage identification method, characterized in that the method comprises:
collecting a vehicle image with label data as an annotation data set, and dividing the annotation data set into a training set and a detection set, wherein the label data comprises the damage type of each pixel point in the vehicle image;
building an initial damage recognition network, and training the initial damage recognition network based on the training set to obtain a first damage recognition network;
obtaining a damage identification result of each vehicle image in the detection set based on the first damage identification network, and obtaining the discrimination of different damage types based on the damage identification result;
constructing a discrimination loss function based on the discrimination, and training the first damage identification network based on the discrimination loss function and the detection set to obtain a second damage identification network;
and acquiring a damage identification result of the real-time vehicle image based on the second damage identification network.
2. The artificial intelligence based vehicle damage identification method of claim 1, wherein the collecting a vehicle image with label data as an annotation data set and dividing the annotation data set into a training set and a detection set, the label data comprising the damage type of each pixel point in the vehicle image, comprises:
acquiring a large number of vehicle images in a vehicle damage assessment scene, and acquiring label data of each vehicle image;
storing all the vehicle images and the label data of all the vehicle images as an annotation data set;
and dividing the labeling data set into a training set and a detection set according to a preset proportion.
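A minimal Python sketch of the collection and splitting described in claim 2 follows. It assumes that an annotated sample simply pairs an image path with a per-pixel label map and that the preset proportion is 8:2; both assumptions are illustrative only.

    # Hypothetical helpers; record layout, field names and the 8:2 ratio are assumptions.
    import random
    from typing import List, Tuple

    def build_annotation_set(image_paths: List[str], label_maps: List[list]) -> List[dict]:
        """Store every vehicle image together with its label data as one annotated sample."""
        return [{"image": path, "labels": labels} for path, labels in zip(image_paths, label_maps)]

    def split_by_ratio(dataset: List[dict], train_ratio: float = 0.8, seed: int = 0) -> Tuple[List[dict], List[dict]]:
        """Divide the annotation data set into a training set and a detection set by a preset proportion."""
        shuffled = dataset[:]
        random.Random(seed).shuffle(shuffled)     # shuffle so the split is not order-dependent
        cut = int(len(shuffled) * train_ratio)
        return shuffled[:cut], shuffled[cut:]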
3. The artificial intelligence based vehicle damage identification method of claim 1, wherein the building an initial damage identification network and training the initial damage identification network based on the training set to obtain a first damage identification network comprises:
building an initial damage identification network, wherein the initial damage identification network comprises an encoder and a decoder;
training the initial damage identification network based on the training set and a cross entropy loss function to obtain a first damage identification network, wherein the input of the first damage identification network is a vehicle image, the output of the first damage identification network is a damage identification result of the vehicle image, the damage identification result comprises a type vector of each pixel point in the vehicle image, and the type vector comprises a probability value of the pixel point belonging to each damage type;
and selecting the damage type corresponding to the maximum probability value of the type vector of each pixel point in the damage identification result as the damage type of the pixel point in the vehicle image.
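The PyTorch sketch below illustrates the kind of encoder-decoder network, cross-entropy training step and per-pixel argmax described in claim 3. The layer sizes, the number of damage types and all function names are assumptions made for the example; the claim does not fix a particular architecture.

    import torch
    import torch.nn as nn

    NUM_DAMAGE_TYPES = 5  # assumed count, e.g. background / scratch / dent / crack / shatter

    class InitialDamageNet(nn.Module):
        """Encoder-decoder network that outputs one type vector (per-damage-type logits) per pixel."""
        def __init__(self, num_types: int = NUM_DAMAGE_TYPES):
            super().__init__()
            self.encoder = nn.Sequential(                       # downsample and extract features
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            )
            self.decoder = nn.Sequential(                       # upsample back to pixel resolution
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, num_types, 4, stride=2, padding=1),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:     # logits of shape (B, num_types, H, W)
            return self.decoder(self.encoder(x))

    def train_step(model, optimizer, images, pixel_labels):
        """One cross-entropy step; pixel_labels is a (B, H, W) long tensor of per-pixel damage types."""
        logits = model(images)
        loss = nn.functional.cross_entropy(logits, pixel_labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

    def predict_damage_types(model, images):
        """Softmax gives the type vectors; each pixel's damage type is the one with the largest probability."""
        with torch.no_grad():
            probs = torch.softmax(model(images), dim=1)          # (B, num_types, H, W)
        return probs, probs.argmax(dim=1)                        # per-pixel damage type map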
4. The artificial intelligence based vehicle damage identification method of claim 3, wherein the obtaining of the discrimination of different damage types based on the damage identification results comprises:
storing all the type vectors in each damage identification result to obtain a type vector set;
calculating the absolute value of the difference between the probability values of different damage types in a target type vector as the initial discrimination between the different damage types, wherein the target type vector is any type vector in the type vector set;
constructing an initial discrimination matrix of the target type vector based on the initial discriminations, wherein the value in the mth row and nth column of the initial discrimination matrix represents the initial discrimination between damage type m and damage type n;
traversing all the type vectors in the type vector set to obtain an initial discrimination matrix of each type vector;
and calculating the mean value of all the initial discrimination matrices to obtain a target discrimination matrix, and normalizing all the values in the target discrimination matrix to obtain the discrimination between different damage types.
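The NumPy sketch below follows claim 4 step by step: a pairwise |p_m − p_n| matrix per type vector, the mean over all type vectors, and a normalization into [0, 1]. Normalizing by the maximum value is an assumption made here (it coincides with min-max scaling because the diagonal of the matrix is zero).

    import numpy as np

    def initial_discrimination_matrix(type_vector: np.ndarray) -> np.ndarray:
        """Element (m, n) is |p_m - p_n|, the initial discrimination between damage types m and n."""
        return np.abs(type_vector[:, None] - type_vector[None, :])

    def target_discrimination(type_vectors: np.ndarray) -> np.ndarray:
        """Average the per-vector matrices, then scale so every value lies in [0, 1]."""
        matrices = np.stack([initial_discrimination_matrix(v) for v in type_vectors])
        target = matrices.mean(axis=0)                  # target discrimination matrix
        peak = target.max()
        return target / peak if peak > 0 else target

    # Toy usage: three pixels' type vectors over three damage types.
    vectors = np.array([[0.7, 0.2, 0.1],
                        [0.1, 0.8, 0.1],
                        [0.3, 0.3, 0.4]])
    alpha = target_discrimination(vectors)              # alpha[m, n]: discrimination of types m and n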
5. The artificial intelligence based vehicle damage identification method of claim 4, wherein the discrimination satisfies the relation:

α_mn = d_mn / max_{i,j}(d_ij)

wherein d_ij denotes the values in the target discrimination matrix, d_mn is the value in the mth row and nth column of the target discrimination matrix, and α_mn is the discrimination between damage type m and damage type n, with a value range of [0,1].
6. The artificial intelligence based vehicle damage identification method of claim 1, wherein the constructing a discrimination loss function based on the discrimination comprises:
dividing the vehicle image based on the label data to obtain a pixel point set of each damage type in the vehicle image;
sending the vehicle image into the first damage identification network to obtain a damage identification result, and calculating the mean value of all type vectors in a pixel point set of the same damage type in the damage identification result to obtain an average type vector of each damage type;
and constructing a discrimination loss function based on the average type vector and the discrimination between different damage types.
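The sketch below computes the average type vectors used in claim 6, assuming probs is the first network's softmax output for one image, shaped (num_types, H, W), and labels is the ground-truth damage type of every pixel, shaped (H, W); both names and shapes are assumptions for illustration.

    import torch

    def average_type_vectors(probs: torch.Tensor, labels: torch.Tensor, num_types: int) -> torch.Tensor:
        """Row m of the returned (num_types, num_types) tensor is the mean type vector over all
        pixels labelled with damage type m (left as zeros when a type is absent from the image)."""
        averages = torch.zeros(num_types, num_types)
        flat_probs = probs.reshape(num_types, -1)       # type vectors as columns, shape (num_types, H*W)
        flat_labels = labels.reshape(-1)
        for m in range(num_types):
            mask = flat_labels == m                     # pixel point set of damage type m
            if mask.any():
                averages[m] = flat_probs[:, mask].mean(dim=1)
        return averages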
7. The artificial intelligence based vehicle damage identification method of claim 6, wherein the discrimination loss function satisfies the relation:

Loss_1 = (1/N^2) Σ_{m=1}^{N} Σ_{n=1}^{N} α_mn ( MAX − ‖v̄_m − v̄_n‖_1 )

wherein α_mn is the discrimination between damage type m and damage type n, v̄_m is the average type vector of damage type m in the damage identification result, v̄_n is the average type vector of damage type n in the damage identification result, ‖v̄_m − v̄_n‖_1 denotes the L1 distance between v̄_m and v̄_n, MAX is the maximum value of the L1 distance between average type vectors and equals 2, N is the number of all damage types, and Loss_1 is the value of the discrimination loss function.
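As a hedged sketch of the discrimination loss, the function below penalizes pairs of damage types whose average type vectors stay close in L1 distance, weighted by their discrimination α_mn; MAX = 2 is the largest possible L1 distance between two probability vectors. The 1/N² averaging mirrors the reconstruction of the relation above and is an assumption, since the exact aggregation appears only as an image in the original filing.

    import torch

    def discrimination_loss(avg_vectors: torch.Tensor, alpha: torch.Tensor, max_l1: float = 2.0) -> torch.Tensor:
        """avg_vectors: (N, N), one average type vector per damage type; alpha: (N, N) discriminations."""
        n = avg_vectors.shape[0]
        l1 = torch.cdist(avg_vectors, avg_vectors, p=1)   # pairwise L1 distances between average type vectors
        return (alpha * (max_l1 - l1)).sum() / (n * n)    # high discrimination + small distance -> large loss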
8. An artificial intelligence-based vehicle damage identification device, characterized in that the device comprises:
an acquisition unit, used for collecting a vehicle image with label data as an annotation data set and dividing the annotation data set into a training set and a detection set, wherein the label data comprises the damage type of each pixel point in the vehicle image;
a first training unit, used for building an initial damage identification network and training the initial damage identification network based on the training set to obtain a first damage identification network;
a discrimination unit, used for obtaining a damage identification result of each vehicle image in the detection set based on the first damage identification network, and obtaining the discrimination of different damage types based on the damage identification result;
a second training unit, used for constructing a discrimination loss function based on the discrimination and training the first damage identification network based on the discrimination loss function and the detection set to obtain a second damage identification network;
and a damage identification unit, used for acquiring a damage identification result of the real-time vehicle image based on the second damage identification network.
9. An electronic device, characterized in that the electronic device comprises:
a memory storing computer readable instructions; and
a processor executing computer readable instructions stored in the memory to implement the artificial intelligence based vehicle damage identification method of any one of claims 1 to 7.
10. A computer readable storage medium having computer readable instructions stored thereon which, when executed by a processor, implement the artificial intelligence based vehicle damage identification method of any one of claims 1 to 7.
CN202210696940.3A 2022-06-20 2022-06-20 Vehicle damage identification method, device, equipment and medium based on artificial intelligence Pending CN115063632A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210696940.3A CN115063632A (en) 2022-06-20 2022-06-20 Vehicle damage identification method, device, equipment and medium based on artificial intelligence


Publications (1)

Publication Number Publication Date
CN115063632A true CN115063632A (en) 2022-09-16

Family

ID=83202916

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210696940.3A Pending CN115063632A (en) 2022-06-20 2022-06-20 Vehicle damage identification method, device, equipment and medium based on artificial intelligence

Country Status (1)

Country Link
CN (1) CN115063632A (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111062381A (en) * 2019-10-17 2020-04-24 安徽清新互联信息科技有限公司 License plate position detection method based on deep learning
US20210342997A1 (en) * 2019-12-16 2021-11-04 Insurance Services Office, Inc. Computer Vision Systems and Methods for Vehicle Damage Detection with Reinforcement Learning
CN111667011A (en) * 2020-06-08 2020-09-15 平安科技(深圳)有限公司 Damage detection model training method, damage detection model training device, damage detection method, damage detection device, damage detection equipment and damage detection medium
CN113111716A (en) * 2021-03-15 2021-07-13 中国科学院计算机网络信息中心 Remote sensing image semi-automatic labeling method and device based on deep learning
CN113705351A (en) * 2021-07-28 2021-11-26 中国银行保险信息技术管理有限公司 Vehicle damage assessment method, device and equipment

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116311104A (en) * 2023-05-15 2023-06-23 合肥市正茂科技有限公司 Training method, device, equipment and medium for vehicle refitting recognition model
CN116311104B (en) * 2023-05-15 2023-08-22 合肥市正茂科技有限公司 Training method, device, equipment and medium for vehicle refitting recognition model
CN116703837A (en) * 2023-05-24 2023-09-05 北京大学第三医院(北京大学第三临床医学院) MRI image-based rotator cuff injury intelligent identification method and device
CN116703837B (en) * 2023-05-24 2024-02-06 北京大学第三医院(北京大学第三临床医学院) MRI image-based rotator cuff injury intelligent identification method and device
CN116363462A (en) * 2023-06-01 2023-06-30 合肥市正茂科技有限公司 Training method, system, equipment and medium for road and bridge passing detection model
CN116363462B (en) * 2023-06-01 2023-08-22 合肥市正茂科技有限公司 Training method, system, equipment and medium for road and bridge passing detection model

Similar Documents

Publication Publication Date Title
CN115063632A (en) Vehicle damage identification method, device, equipment and medium based on artificial intelligence
CN115063589A (en) Knowledge distillation-based vehicle component segmentation method and related equipment
CN111476225B (en) In-vehicle human face identification method, device, equipment and medium based on artificial intelligence
CN115049878B (en) Target detection optimization method, device, equipment and medium based on artificial intelligence
CN112232203B (en) Pedestrian recognition method and device, electronic equipment and storage medium
CN111738212A (en) Traffic signal lamp identification method, device, equipment and medium based on artificial intelligence
CN111931729B (en) Pedestrian detection method, device, equipment and medium based on artificial intelligence
CN114972771B (en) Method and device for vehicle damage assessment and claim, electronic equipment and storage medium
CN115170869A (en) Repeated vehicle damage claim identification method, device, equipment and storage medium
CN115222427A (en) Artificial intelligence-based fraud risk identification method and related equipment
CN117611569A (en) Vehicle fascia detection method, device, equipment and medium based on artificial intelligence
CN113065947A (en) Data processing method, device, equipment and storage medium
CN115222944A (en) Tire damage detection method based on artificial intelligence and related equipment
CN113284137B (en) Paper fold detection method, device, equipment and storage medium
CN113486848B (en) Document table identification method, device, equipment and storage medium
CN115222943A (en) Method for detecting damage of rearview mirror based on artificial intelligence and related equipment
CN115131564A (en) Vehicle component damage detection method based on artificial intelligence and related equipment
CN115239958A (en) Wheel hub damage detection method based on artificial intelligence and related equipment
CN112102205A (en) Image deblurring method and device, electronic equipment and storage medium
CN114943908B (en) Vehicle body damage evidence obtaining method, device, equipment and medium based on artificial intelligence
CN115239960A (en) Vehicle grille component damage detection method based on artificial intelligence and related equipment
CN113283421B (en) Information identification method, device, equipment and storage medium
CN114972761B (en) Vehicle part segmentation method based on artificial intelligence and related equipment
CN113111833B (en) Safety detection method and device of artificial intelligence system and terminal equipment
CN114240935B (en) Space-frequency domain feature fusion medical image feature identification method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination