WO2021114809A1 - Vehicle damage feature detection method, apparatus, computer device and storage medium - Google Patents

Vehicle damage feature detection method, apparatus, computer device and storage medium

Info

Publication number
WO2021114809A1
WO2021114809A1 (application PCT/CN2020/116741; CN2020116741W)
Authority
WO
WIPO (PCT)
Prior art keywords
feature
adaptive
damage
model
vehicle
Prior art date
Application number
PCT/CN2020/116741
Other languages
English (en)
French (fr)
Inventor
康甲
刘莉红
刘玉宇
肖京
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司
Publication of WO2021114809A1

Classifications

    • G PHYSICS
      • G06 COMPUTING; CALCULATING OR COUNTING
        • G06F ELECTRIC DIGITAL DATA PROCESSING
          • G06F18/00 Pattern recognition
            • G06F18/20 Analysing
              • G06F18/24 Classification techniques
                • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
                  • G06F18/2415 Classification techniques relating to the classification model based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
        • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
          • G06N3/00 Computing arrangements based on biological models
            • G06N3/02 Neural networks
              • G06N3/04 Architecture, e.g. interconnection topology
                • G06N3/045 Combinations of networks
                • G06N3/048 Activation functions
              • G06N3/08 Learning methods
                • G06N3/084 Backpropagation, e.g. using gradient descent
        • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
          • G06Q40/00 Finance; Insurance; Tax strategies; Processing of corporate or income taxes
            • G06Q40/08 Insurance
        • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
          • G06V10/00 Arrangements for image or video recognition or understanding
            • G06V10/40 Extraction of image or video features
              • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
              • G06V10/56 Extraction of image or video features relating to colour

Definitions

  • This application relates to the field of image classification in artificial intelligence, and in particular to a vehicle damage feature detection method, apparatus, computer device and storage medium.
  • After a traffic accident, some parts of a vehicle carry traces of damage such as breakage and scratches. At present, insurance companies generally rely on manual identification of the images of the damaged vehicle taken by the owner or by business personnel after the traffic accident, that is, the damage type and damage area of each damaged part of the vehicle in the image are identified and determined manually.
  • Owing to differing understandings of the standards and insufficient observation experience, the manually identified damage type and damage area may not match the actual damage; for example, because dents and scrapes are difficult to distinguish by visually inspecting an image, a loss assessor can easily classify a dent as scrape damage.
  • Assessment errors caused by the above situations greatly reduce the accuracy of loss assessment; they may cause cost losses for the insurance company and also lower the satisfaction of car owners or customers. In addition, manual loss assessment involves an enormous workload and low efficiency, and when a certain assessment accuracy must be met, the workload increases further and work efficiency decreases.
  • The present application provides a vehicle damage feature detection method, apparatus, computer device and storage medium, which quickly and accurately identify, automatically, the damage type and damage area corresponding to each damaged part of the vehicle in a vehicle damage image to be detected, greatly shorten the processes of model construction and model training, improve the accuracy and reliability of determining the damage type and damage area, and improve loss assessment efficiency.
  • A vehicle damage feature detection method, including:
  • after the vehicle damage detection instruction is received, obtaining the vehicle damage image to be detected; the vehicle damage image to be detected contains at least one image of a damaged location of the vehicle;
  • inputting the vehicle damage image to be detected into an unsupervised domain adaptive network model; the unsupervised domain adaptive network model includes a pytorch-based transfer learning model, a strong local feature adaptive model, a weak global feature adaptive model and a regularization model;
  • extracting the vehicle features of the vehicle damage image to be detected through the pytorch-based transfer learning model and generating a local feature map and a global feature map; the vehicle features are features related to the vehicle obtained after transfer learning;
  • outputting a transfer feature vector group according to the vehicle features through the pytorch-based transfer learning model, and at the same time obtaining a first adaptive feature vector group through the strong local feature adaptive model and a second adaptive feature vector group through the weak global feature adaptive model; the first adaptive feature vector group is obtained and output by the strong local feature adaptive model according to the first vehicle damage feature extracted from the local feature map; the second adaptive feature vector group is obtained and output by the weak global feature adaptive model according to the second vehicle damage feature extracted from the global feature map;
  • inputting the transfer feature vector group, the first adaptive feature vector group and the second adaptive feature vector group into the regularization model, and performing regularization processing on them through the regularization model to obtain a recognition result containing the damage type and the damage area; the recognition result characterizes all the damage types contained in the vehicle damage image to be detected and the corresponding damage areas.
  • A vehicle damage feature detection apparatus, including:
  • a receiving module, configured to obtain the vehicle damage image to be detected after the vehicle damage detection instruction is received; the vehicle damage image to be detected contains at least one image of a damaged location of the vehicle;
  • an input module, configured to input the vehicle damage image to be detected into an unsupervised domain adaptive network model; the unsupervised domain adaptive network model includes a pytorch-based transfer learning model, a strong local feature adaptive model, a weak global feature adaptive model and a regularization model;
  • an extraction module, configured to extract the vehicle features of the vehicle damage image to be detected through the pytorch-based transfer learning model and generate a local feature map and a global feature map; the vehicle features are features related to the vehicle obtained after transfer learning;
  • an output module, configured to output a transfer feature vector group according to the vehicle features through the pytorch-based transfer learning model, and at the same time obtain a first adaptive feature vector group through the strong local feature adaptive model and a second adaptive feature vector group through the weak global feature adaptive model; the first adaptive feature vector group is obtained and output by the strong local feature adaptive model according to the first vehicle damage feature extracted from the local feature map; the second adaptive feature vector group is obtained and output by the weak global feature adaptive model according to the second vehicle damage feature extracted from the global feature map;
  • a recognition module, configured to input the transfer feature vector group, the first adaptive feature vector group and the second adaptive feature vector group into the regularization model, and perform regularization processing on them through the regularization model to obtain a recognition result containing the damage type and the damage area; the recognition result characterizes all the damage types contained in the vehicle damage image to be detected and the corresponding damage areas.
  • A computer device, including a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, the processor implementing the following steps when executing the computer-readable instructions:
  • after the vehicle damage detection instruction is received, obtaining the vehicle damage image to be detected; the vehicle damage image to be detected contains at least one image of a damaged location of the vehicle;
  • inputting the vehicle damage image to be detected into an unsupervised domain adaptive network model; the unsupervised domain adaptive network model includes a pytorch-based transfer learning model, a strong local feature adaptive model, a weak global feature adaptive model and a regularization model;
  • extracting the vehicle features of the vehicle damage image to be detected through the pytorch-based transfer learning model and generating a local feature map and a global feature map; the vehicle features are features related to the vehicle obtained after transfer learning;
  • outputting a transfer feature vector group according to the vehicle features through the pytorch-based transfer learning model, and at the same time obtaining a first adaptive feature vector group through the strong local feature adaptive model and a second adaptive feature vector group through the weak global feature adaptive model; the first adaptive feature vector group is obtained and output by the strong local feature adaptive model according to the first vehicle damage feature extracted from the local feature map; the second adaptive feature vector group is obtained and output by the weak global feature adaptive model according to the second vehicle damage feature extracted from the global feature map;
  • inputting the transfer feature vector group, the first adaptive feature vector group and the second adaptive feature vector group into the regularization model, and performing regularization processing on them through the regularization model to obtain a recognition result containing the damage type and the damage area; the recognition result characterizes all the damage types contained in the vehicle damage image to be detected and the corresponding damage areas.
  • One or more readable storage media storing computer-readable instructions, wherein when the computer-readable instructions are executed by one or more processors, the one or more processors execute the following steps:
  • after the vehicle damage detection instruction is received, obtaining the vehicle damage image to be detected; the vehicle damage image to be detected contains at least one image of a damaged location of the vehicle;
  • inputting the vehicle damage image to be detected into an unsupervised domain adaptive network model; the unsupervised domain adaptive network model includes a pytorch-based transfer learning model, a strong local feature adaptive model, a weak global feature adaptive model and a regularization model;
  • extracting the vehicle features of the vehicle damage image to be detected through the pytorch-based transfer learning model and generating a local feature map and a global feature map; the vehicle features are features related to the vehicle obtained after transfer learning;
  • outputting a transfer feature vector group according to the vehicle features through the pytorch-based transfer learning model, and at the same time obtaining a first adaptive feature vector group through the strong local feature adaptive model and a second adaptive feature vector group through the weak global feature adaptive model; the first adaptive feature vector group is obtained and output by the strong local feature adaptive model according to the first vehicle damage feature extracted from the local feature map; the second adaptive feature vector group is obtained and output by the weak global feature adaptive model according to the second vehicle damage feature extracted from the global feature map;
  • inputting the transfer feature vector group, the first adaptive feature vector group and the second adaptive feature vector group into the regularization model, and performing regularization processing on them through the regularization model to obtain a recognition result containing the damage type and the damage area; the recognition result characterizes all the damage types contained in the vehicle damage image to be detected and the corresponding damage areas.
  • The vehicle damage feature detection method, apparatus, computer device and storage medium provided in the present application obtain the vehicle damage image to be detected; the vehicle damage image to be detected is input into the unsupervised domain adaptive network model; the unsupervised domain adaptive network model includes a pytorch-based transfer learning model, a strong local feature adaptive model, a weak global feature adaptive model and a regularization model; the pytorch-based transfer learning model extracts the vehicle features of the vehicle damage image to be detected and, in the process of extracting the vehicle features, generates a local feature map and a global feature map; the vehicle features are features related to the vehicle obtained after transfer learning; the pytorch-based transfer learning model outputs a transfer feature vector group according to the vehicle features, while a first adaptive feature vector group is obtained through the strong local feature adaptive model and a second adaptive feature vector group is obtained through the weak global feature adaptive model; the transfer feature vector group, the first adaptive feature vector group and the second adaptive feature vector group are input into the regularization model and subjected to regularization processing to obtain a recognition result containing the damage type and the damage area.
  • In this way, an unsupervised domain adaptive network model suitable for vehicle damage detection is realized, built from a transfer-learned pytorch model, a strong local feature adaptive model that strengthens the first vehicle damage feature and a weak global feature adaptive model that extracts the second vehicle damage feature. It can quickly and accurately identify, automatically, the damage type and damage area corresponding to each damaged part of the vehicle in the vehicle damage image to be detected, greatly shortening the processes of model construction and model training, improving the accuracy and reliability of determining the damage type and damage area, and improving loss assessment efficiency.
  • FIG. 1 is a schematic diagram of an application environment of a method for detecting a vehicle damage feature in an embodiment of the present application
  • FIG. 2 is a flowchart of a method for detecting damage characteristics of a vehicle in an embodiment of the present application
  • FIG. 3 is a flowchart of step S20 of the vehicle damage feature detection method in an embodiment of the present application;
  • FIG. 4 is a flowchart of step S20 of the vehicle damage feature detection method in another embodiment of the present application;
  • FIG. 5 is a flowchart of step S40 of the vehicle damage feature detection method in an embodiment of the present application;
  • FIG. 6 is a flowchart of step S40 of the vehicle damage feature detection method in another embodiment of the present application;
  • Fig. 7 is a schematic block diagram of a vehicle damage feature detection device in an embodiment of the present application.
  • FIG. 8 is a functional block diagram of the output module 14 of the vehicle damage feature detection device in an embodiment of the present application.
  • Fig. 9 is a schematic diagram of a computer device in an embodiment of the present application.
  • the vehicle damage feature detection method provided by the present application can be applied in the application environment as shown in Fig. 1, in which the client (computer equipment) communicates with the server through the network.
  • the client includes, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers, cameras, and portable wearable devices.
  • the server can be implemented as an independent server or a server cluster composed of multiple servers.
  • a vehicle damage feature detection method is provided, and the technical solution mainly includes the following steps S10-S50:
  • S10: After the vehicle damage detection instruction is received, obtain the vehicle damage image to be detected; the vehicle damage image to be detected contains at least one image of a damaged location of the vehicle.
  • That is, the vehicle damage detection instruction is triggered to obtain the vehicle damage image to be detected, and the vehicle damage image to be detected contains at least one image of a damaged location of the vehicle; the damage covers seven damage conditions: scratches, scrapes, dents, wrinkles, dead folds, tears and missing parts.
  • The acquisition method can be set according to requirements; for example, the vehicle damage image to be detected contained in the vehicle damage detection instruction is acquired directly, or the vehicle damage detection instruction contains the storage path where the vehicle damage image to be detected is stored and the image is then obtained by accessing that storage path, and so on.
  • S20: Input the vehicle damage image to be detected into an unsupervised domain adaptive network model; the unsupervised domain adaptive network model includes a pytorch-based transfer learning model, a strong local feature adaptive model, a weak global feature adaptive model and a regularization model.
  • The unsupervised domain adaptive network model is a trained adaptive network model and includes a pytorch-based transfer learning model, a strong local feature adaptive model, a weak global feature adaptive model and a regularization model. The pytorch-based transfer learning model is a trained neural network model whose pytorch-based network structure has been migrated (that is, a trained pytorch model); the feature extraction of the trained pytorch model can be selected according to requirements, for example, the trained pytorch model is a pytorch model applied to vehicle lamp brightness detection, or a pytorch model applied to vehicle model detection, and so on.
  • The strong local feature adaptive model is a trained neural network model used to strengthen the first vehicle damage feature; the weak global feature adaptive model is a trained neural network model used to extract the second vehicle damage feature; the regularization model is a model that performs regularization processing on the received feature vectors.
  • In an embodiment, before step S20, that is, before the vehicle damage image to be detected is input into the unsupervised domain adaptive network model, the method includes:
  • S201: Obtain a car damage sample set; the car damage sample set includes car damage sample images, and each car damage sample image is associated with one damage label group; the damage label group includes at least one damage label type and at least one damage label area.
  • The car damage sample set is a collection of car damage samples and includes a plurality of the car damage sample images; the car damage sample images are historically captured images, each containing at least one damaged location of a vehicle; each car damage sample image is associated with one damage label group; the damage label group includes damage label types and damage label areas; the damage label types include scratches, scrapes, dents, wrinkles, dead folds, tears and missing parts; a damage label area is the coordinate area of the minimum-area rectangular frame that covers the damage location.
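  • For illustration only, the minimum-area covering rectangle of a labelled damage location can be made concrete with a short sketch; the binary-mask formulation below is an assumption introduced for the example and is not prescribed by the application.

```python
import numpy as np

def damage_label_area(damage_mask: np.ndarray):
    """Return the coordinate area (x_min, y_min, x_max, y_max) of the smallest
    axis-aligned rectangle that covers every damaged pixel in a binary mask."""
    ys, xs = np.nonzero(damage_mask)          # row/column indices of damaged pixels
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

# usage: a 1 marks a damaged pixel, a 0 an undamaged one
mask = np.zeros((100, 100), dtype=np.uint8)
mask[20:40, 55:80] = 1
print(damage_label_area(mask))                # (55, 20, 79, 39)
```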
  • S202: Input the car damage sample image into an adaptive network model containing initial parameters.
  • The adaptive network model is a deep convolutional neural network model containing the initial parameters; the initial parameters can be set according to requirements, for example, the initial parameters are randomly preset parameters, or preset fixed values, and so on; preferably, the initial parameters are all the parameters of a trained deep convolutional neural network model obtained through transfer learning.
  • In an embodiment, before step S202, that is, before the car damage sample image is input into the adaptive network model containing the initial parameters, the method includes:
  • S20201: Obtain all the migration parameters of a trained pytorch model through transfer learning, and determine all the migration parameters as the initial parameters in the adaptive network model.
  • Transfer learning means migrating model parameters that have already been trained to a new model to help train the new model. Since most data or tasks are related, transfer learning can be used to share the learned model parameters with the new model in a certain way, which speeds up and optimizes the learning efficiency of the model and avoids training and learning from scratch; in this way, a vehicle-related detection model can be optimized and accelerated through transfer learning.
  • The trained pytorch model is a vehicle-related detection model selected according to requirements, for example, a pytorch model applied to vehicle lamp brightness detection, or a pytorch model applied to vehicle model detection, and so on; the pytorch model contains the migration parameters, which are the parameters of the pytorch model, and all the migration parameters are determined as the initial parameters in the adaptive network model.
  • The characteristics of the pytorch model are dynamic graph computation and a simple model structure (that is, the progression from the data tensor to the abstraction level of the network), which improve the extraction efficiency and recognition accuracy of the model. Using a pytorch model whose training is completed through transfer learning, this application can build the model quickly, reduce the time for training the pytorch model, and reduce costs.
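  • A minimal sketch of this parameter-migration step is given below, assuming a torchvision backbone stands in for the trained pytorch model; the ResNet-50 choice, the layer split and the frozen backbone are illustrative assumptions, not requirements of the application.

```python
import torch
import torchvision

# Trained pytorch model whose parameters are migrated; ResNet-50 is an assumption.
trained_model = torchvision.models.resnet50(weights="IMAGENET1K_V1")

class AdaptiveNetwork(torch.nn.Module):
    """Adaptive network initialised with migrated parameters (sketch only)."""
    def __init__(self, trained):
        super().__init__()
        # migrate all trained backbone parameters as the initial parameters
        self.backbone = torch.nn.Sequential(*list(trained.children())[:-2])
        # hypothetical new head to be trained on the car damage sample set
        self.head = torch.nn.Conv2d(2048, 256, kernel_size=1)

    def forward(self, x):
        return self.head(self.backbone(x))

model = AdaptiveNetwork(trained_model)
# optionally freeze the migrated parameters so only the new head is updated
for p in model.backbone.parameters():
    p.requires_grad = False
```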
  • S203: Perform training feature extraction on the car damage sample image through the adaptive network model, and obtain the training result corresponding to the car damage sample image output by the adaptive network model according to the training features; the training features include the vehicle feature, the first vehicle damage feature and the second vehicle damage feature; the training result includes at least one sample damage type and at least one sample damage area.
  • The training features include the vehicle feature, the first vehicle damage feature and the second vehicle damage feature; the vehicle feature is a feature related to the vehicle obtained after transfer learning; the first vehicle damage feature is a feature of local texture and color depth in the image; the second vehicle damage feature is a feature of the common vector characteristics across all feature maps; the training result includes the sample damage types and the sample damage areas.
  • One sample damage area corresponds to one sample damage type, and one sample damage type can correspond to multiple sample damage areas; the sample damage types include the seven types of scratches, scrapes, dents, wrinkles, dead folds, tears and missing parts; a sample damage area is a damaged area range in the car damage sample image, that is, a rectangular coordinate range identifying the damage position in the car damage sample image.
  • S204: Input all the damage label types, all the damage label areas, all the sample damage types and all the sample damage areas corresponding to the car damage sample image into the loss model in the adaptive network model, and calculate a loss value through the loss function of the loss model.
  • That is, the loss value corresponding to the car damage sample image is calculated. The loss function can be set according to requirements; for example, the loss function is a weighted function of the logarithm of the difference between all the damage label types and all the sample damage types and the logarithm of the difference between all the damage label areas and all the sample damage areas. The loss value can be calculated through the loss function, and the loss value is an index measuring the sum of the gap between all the damage label types and all the sample damage types and the gap between all the damage label areas and all the sample damage areas; a sketch of one possible realization follows below.
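  • The application only states that the loss is a weighted combination of the type gap and the area gap; one common way to realise such a combination is sketched here, where the cross-entropy term, the smooth-L1 term and the weight lam are assumptions rather than the exact formula used by the application.

```python
import torch
import torch.nn.functional as F

def damage_loss(type_logits, label_types, area_preds, label_areas, lam=1.0):
    """Hypothetical weighted loss over damage types and damage areas.

    type_logits : (N, num_types) predicted sample damage type scores
    label_types : (N,)           damage label types
    area_preds  : (N, 4)         predicted sample damage areas (x1, y1, x2, y2)
    label_areas : (N, 4)         damage label areas (x1, y1, x2, y2)
    lam         : assumed weight balancing the two gap terms
    """
    type_gap = F.cross_entropy(type_logits, label_types)   # gap between types
    area_gap = F.smooth_l1_loss(area_preds, label_areas)   # gap between areas
    return type_gap + lam * area_gap

# usage with dummy tensors
loss = damage_loss(torch.randn(8, 7, requires_grad=True),
                   torch.randint(0, 7, (8,)),
                   torch.rand(8, 4, requires_grad=True),
                   torch.rand(8, 4))
loss.backward()
```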
  • The convergence condition may be that the loss value is less than a set threshold, that is, when the loss value is less than the set threshold, training stops, the converged adaptive network model is recorded as the unsupervised domain adaptive network model, and the unsupervised domain adaptive network model is stored in the blockchain.
  • The convergence condition may also be that the loss value is small and no longer decreases after 10,000 calculations, that is, when the loss value is small and does not decrease after 10,000 calculations, training stops, the converged adaptive network model is recorded as the unsupervised domain adaptive network model, and the unsupervised domain adaptive network model is stored in the blockchain.
  • the aforementioned unsupervised domain adaptive network model can also be stored in the nodes of the blockchain.
  • Blockchain is essentially a decentralized database, a chain of data blocks associated with one another using cryptographic methods; each data block contains a batch of network transaction information, which is used to verify the validity of the information (anti-counterfeiting) and to generate the next block.
  • The blockchain can include the underlying blockchain platform, the platform product service layer and the application service layer.
  • The decentralized, fully distributed DNS service provided by the blockchain can realize the query and resolution of domain names through point-to-point data transmission services between the nodes in the network, and can be used to ensure that the operating system and firmware of important infrastructure are not tampered with.
  • When the loss value does not reach the preset convergence condition, the initial parameters of the adaptive network model are iteratively updated until the loss value reaches the preset convergence condition; in this way the results keep moving closer to the accurate recognition result, and the accuracy of the recognition result becomes higher and higher.
  • This application obtains a car damage sample set; the car damage sample set includes car damage sample images, and each car damage sample image is associated with one damage label group; the damage label group includes at least one damage label type and at least one damage label area; the car damage sample image is input into an adaptive network model containing initial parameters; training feature extraction is performed on the car damage sample image through the adaptive network model, and the training result corresponding to the car damage sample image output by the adaptive network model according to the training features is obtained; the training features include the vehicle feature, the first vehicle damage feature and the second vehicle damage feature; the training result includes at least one sample damage type and at least one sample damage area; all the damage label types, all the damage label areas, all the sample damage types and all the sample damage areas corresponding to the car damage sample image are input into the loss model in the adaptive network model, and the loss value is calculated through the loss function of the loss model; when the loss value reaches the preset convergence condition, the converged adaptive network model is recorded as the unsupervised domain adaptive network model; when the loss value does not reach the preset convergence condition, the initial parameters are iteratively updated until the loss value reaches the preset convergence condition.
  • In this way, the car damage sample image containing the damage label types and the damage label areas is input into the adaptive network model; the car damage sample image is trained through the adaptive network model to extract the training features including the vehicle feature, the first vehicle damage feature and the second vehicle damage feature; the training result output by the adaptive network model is obtained; the loss value determined by all the damage label types, all the damage label areas, all the sample damage types and all the sample damage areas corresponding to the car damage sample image is obtained through the loss function in the loss model; training proceeds according to the loss value, and the converged adaptive network model is determined as the unsupervised domain adaptive network model. This provides a model training method for quickly identifying the damage in car damage sample images, which improves the accuracy and reliability of determining the damage type and damage area, improves loss assessment efficiency, shortens training time and saves training cost.
  • S30: Extract the vehicle features of the vehicle damage image to be detected through the pytorch-based transfer learning model and generate a local feature map and a global feature map; the vehicle features are features related to the vehicle obtained after transfer learning.
  • In the process of extracting the vehicle features, the local feature map and the global feature map are generated. The local feature map is a feature map output by one of the convolutional layers; preferably, a feature map output by a convolutional layer in the middle range is selected as the local feature map, because the convolutional layers in the middle range contain the feature vectors of the first vehicle damage feature, that is, local feature vectors extracted from the entire image. The global feature map is the feature map input into the RPN model in the pytorch-based transfer learning model, that is, a feature map obtained by convolving the vehicle damage image to be detected down to a preset minimum size, which is convenient for extracting the second vehicle damage feature.
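  • A sketch of how the two feature maps might be obtained in practice follows, assuming a torchvision Faster R-CNN stands in for the pytorch-based transfer learning model; the backbone, the choice of layer2 as the middle convolutional layer and the use of the smallest FPN map as the RPN input are assumptions made only for illustration.

```python
import torch
import torchvision

detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
detector.eval()
captured = {}

def keep_local(module, inputs, output):
    captured["local"] = output  # middle-layer map: carries local texture and colour cues

# hook a convolutional layer in the middle range of the backbone (assumed: layer2)
detector.backbone.body.layer2.register_forward_hook(keep_local)

image = torch.rand(1, 3, 512, 512)             # placeholder vehicle damage image
with torch.no_grad():
    fpn_maps = detector.backbone(image)        # feature maps that feed the RPN
captured["global"] = fpn_maps["pool"]          # smallest map, convolved to a minimum size

print(captured["local"].shape, captured["global"].shape)
```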
  • S40: Output a transfer feature vector group according to the vehicle features through the pytorch-based transfer learning model, and at the same time obtain a first adaptive feature vector group through the strong local feature adaptive model and a second adaptive feature vector group through the weak global feature adaptive model; the first adaptive feature vector group is obtained and output by the strong local feature adaptive model according to the first vehicle damage feature extracted from the local feature map; the second adaptive feature vector group is obtained and output by the weak global feature adaptive model according to the second vehicle damage feature extracted from the global feature map.
  • The first vehicle damage feature is a feature of local texture and color depth in the image; the second vehicle damage feature is a feature of the common vector characteristics across all feature maps. The function of the strong local feature adaptive model is to enhance the damage features in the local feature map, generate useful adaptation information by extracting the first vehicle damage feature, and output the enhanced first feature vectors, that is, to extract the first adaptive feature vector group; the function of the weak global feature adaptive model is to extract the second vehicle damage feature from the weaker feature vectors in all the global feature maps, prevent overfitting, and generate the second feature vectors, that is, to extract the second adaptive feature vector group.
  • In an embodiment, in step S40, the strong local feature adaptive model extracting the first vehicle damage feature from the local feature map and outputting the first adaptive feature vector group according to the first vehicle damage feature includes:
  • S401: Input the local feature map into the local convolutional layer in the strong local feature adaptive model, and extract the first vehicle damage feature in the local feature map through the local convolutional layer to obtain a local feature vector map.
  • The local convolutional layer includes a first convolutional layer, a second convolutional layer and a third convolutional layer. The first convolutional layer includes a first convolution with a 3×3×512 convolution kernel and a stride of 2, a first padding layer with padding of 1, a first batch normalization module, a first activation module (ReLU) and a first dropout module; the second convolutional layer includes a second convolution with a 3×3×128 convolution kernel and a stride of 2, a second padding layer with padding of 1, a second batch normalization module, a second activation module and a second dropout module; the third convolutional layer includes a third convolution with a 3×3×128 convolution kernel and a stride of 2, a third padding layer with padding of 1, a third batch normalization module, a third activation module and a third dropout module.
  • The first vehicle damage feature is a feature of local texture and color depth in the image. The first vehicle damage feature is extracted from the local feature map through the first convolutional layer, and the local feature map is processed to reduce the dimensionality of the feature map and expand its number of channels to obtain a first convolutional feature map; the first convolutional feature map is input into the second convolutional layer, the first vehicle damage feature is extracted from the first convolutional feature map through the second convolutional layer, and the first convolutional feature map is processed to reduce the dimensionality of the feature map and expand its number of channels to obtain a second convolutional feature map; the second convolutional feature map is input into the third convolutional layer, the first vehicle damage feature is extracted from the second convolutional feature map through the third convolutional layer, and the second convolutional feature map is processed to reduce the dimensionality of the feature map and expand its number of channels to obtain the local feature vector map. The dimensionality of the local feature vector map is smaller than that of the local feature map.
  • S402: Input the local feature vector map into the pooling layer in the strong local feature adaptive model, and perform pooling processing on the local feature vector map through the pooling layer to obtain a local pooling matrix.
  • The pooling method can be set according to requirements; for example, the pooling processing can be average pooling or maximum pooling. The function of the pooling processing is to reduce the dimensionality of the local feature vector map; the local pooling matrix is a one-dimensional matrix array.
  • S403: Input the local pooling matrix into the fully connected layer in the strong local feature adaptive model, and perform feature connection on the local pooling matrix through the fully connected layer to obtain a local connection matrix.
  • The feature connection maps the obtained feature vector values to the sample label space and performs weighted summation, connecting these feature vectors; the local pooling matrix is feature-connected through the fully connected layer to obtain the local connection matrix, which is a sorted one-dimensional matrix array. For example, convolution is performed with 300 1×1 convolution kernels and the results are connected into a one-dimensional group of 300 vectors.
  • S404: Input the local connection matrix into the Softmax layer in the strong local feature adaptive model, and perform regression processing on the local connection matrix through the Softmax layer to obtain the first adaptive feature vector group corresponding to the local feature map.
  • The regression processing performs a normalization operation after weighting the input to obtain the score of each category and then maps the scores to probabilities through softmax; the Softmax layer predicts and classifies the local connection matrix to obtain the first adaptive feature vector group.
  • In this way, the local feature map is input into the local convolutional layer, and the first vehicle damage feature in the local feature map is extracted through the local convolutional layer to obtain the local feature vector map; the local feature vector map is input into the pooling layer, and pooling processing is performed on the local feature vector map through the pooling layer to obtain the local pooling matrix; the local pooling matrix is input into the fully connected layer, and feature connection is performed on the local pooling matrix through the fully connected layer to obtain the local connection matrix; the local connection matrix is input into the Softmax layer, and regression processing is performed on the local connection matrix through the Softmax layer to obtain the first adaptive feature vector group corresponding to the local feature map. A minimal sketch of this model is given below.
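  • A minimal PyTorch sketch of the strong local feature adaptive model as described in S401 to S404; the input channel count, the dropout probability, the average-pooling choice and the number of damage classes are assumptions, while the 3×3 kernels, strides of 2, padding of 1, batch normalization, ReLU, dropout, the 300-wide fully connected layer and the Softmax layer follow the text above.

```python
import torch
import torch.nn as nn

class StrongLocalFeatureAdaptive(nn.Module):
    """Sketch of the strong local feature adaptive model (assumptions noted above)."""
    def __init__(self, in_channels=512, num_classes=7, p_drop=0.5):
        super().__init__()
        def conv_block(cin, cout):
            # 3x3 convolution with stride 2 and padding 1, then BN, ReLU and dropout
            return nn.Sequential(
                nn.Conv2d(cin, cout, kernel_size=3, stride=2, padding=1),
                nn.BatchNorm2d(cout),
                nn.ReLU(inplace=True),
                nn.Dropout2d(p_drop),
            )
        self.conv1 = conv_block(in_channels, 512)   # first convolutional layer (3x3x512)
        self.conv2 = conv_block(512, 128)           # second convolutional layer (3x3x128)
        self.conv3 = conv_block(128, 128)           # third convolutional layer (3x3x128)
        self.pool = nn.AdaptiveAvgPool2d(1)         # pooling layer (average pooling assumed)
        self.fc = nn.Linear(128, 300)               # fully connected layer: 300-dim connection
        self.classifier = nn.Linear(300, num_classes)

    def forward(self, local_feature_map):
        x = self.conv3(self.conv2(self.conv1(local_feature_map)))  # local feature vector map
        x = self.pool(x).flatten(1)                                 # local pooling matrix
        x = self.fc(x)                                              # local connection matrix
        return torch.softmax(self.classifier(x), dim=1)             # first adaptive feature vector group

# usage with a dummy local feature map of 512 channels
first_group = StrongLocalFeatureAdaptive()(torch.rand(1, 512, 64, 64))
```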
  • In an embodiment, in step S40, the weak global feature adaptive model extracting the second vehicle damage feature from the global feature map and outputting the second adaptive feature vector group according to the second vehicle damage feature includes: inputting the global feature map into the global convolutional layer in the weak global feature adaptive model, and extracting the second vehicle damage feature through the global convolutional layer.
  • The global convolutional layer includes a first global convolutional layer, a second global convolutional layer and a third global convolutional layer. The first global convolutional layer includes a first global convolution with a 1×1×256 convolution kernel and a stride of 2, a first global padding layer with padding of 0, and a first global activation module; the second global convolutional layer includes a second global convolution with a 1×1×128 convolution kernel and a stride of 2, a second global padding layer with padding of 0, and a second global activation module; the third global convolutional layer includes a third global convolution with a 1×1×1 convolution kernel and a stride of 1, and a third global padding layer with padding of 0.
  • The second vehicle damage feature is a feature of the common vector characteristics (also called commonality) across all feature maps. The second vehicle damage feature is extracted from the global feature map through the first global convolutional layer, that is, the feature vectors of the common vector characteristics are extracted from the global feature map to obtain a first global convolutional feature map; the first global convolutional feature map is input into the second global convolutional layer, and the second vehicle damage feature is extracted from the first global convolutional feature map through the second global convolutional layer to obtain a second global convolutional feature map; the second global convolutional feature map is input into the third global convolutional layer, and the second vehicle damage feature is extracted from the second global convolutional feature map through the third global convolutional layer to obtain the global feature vector map.
  • The first global padding layer, the second global padding layer and the third global padding layer pad the feature maps to a preset size while preventing the padding from introducing interfering common features.
  • The Sigmoid activation layer uses a Sigmoid function as the last layer in the weak global feature adaptive model; the Sigmoid function activates and classifies the global feature vectors, and the activation process maps a vector whose values lie in (-∞, +∞) into the range (0, 1), thereby obtaining the second adaptive feature vector group.
  • In this way, the global feature map is input into the first global convolutional layer, and the second vehicle damage feature of the global feature map is extracted through the global convolutional layers to obtain the global feature vector map; the global feature vector map is input into the Sigmoid activation layer, and the global feature vectors are activated through the Sigmoid activation layer to obtain the second adaptive feature vector group corresponding to the global feature map.
  • Thus the second vehicle damage feature is extracted from the weaker feature vectors in all the global feature maps, overfitting is prevented, high-quality second vehicle damage features are extracted, and the second adaptive feature vector group is provided, which improves the accuracy and reliability of recognition. A minimal sketch of this model follows below.
  • S50: Input the transfer feature vector group, the first adaptive feature vector group and the second adaptive feature vector group into the regularization model, and perform regularization processing on them through the regularization model to obtain a recognition result containing the damage type and the damage area. The regularization processing is regularization, that is, adding rule restrictions and constraining the optimization parameters, which prevents obvious features from being infinitely magnified and thereby erasing weakened features.
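  • The application does not spell out the exact regularization operation; one simple way to keep strongly activated features from swamping weak ones, sketched here purely as an assumption, is to L2-normalise each vector group before fusing them.

```python
import torch
import torch.nn.functional as F

def regularize_and_fuse(transfer_group, first_adaptive_group, second_adaptive_group):
    """Hypothetical regularization step: L2-normalise each feature vector group so
    that no single group of obvious features is infinitely magnified, then fuse."""
    groups = [transfer_group, first_adaptive_group, second_adaptive_group]
    normalised = [F.normalize(g.flatten(1), p=2, dim=1) for g in groups]
    return torch.cat(normalised, dim=1)   # fused representation used for recognition

fused = regularize_and_fuse(torch.rand(1, 256), torch.rand(1, 7), torch.rand(1, 64))
```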
  • The damage types include seven types: scratches, scrapes, dents, wrinkles, dead folds, tears and missing parts. A damage area is the area of a damaged position in the vehicle damage image to be detected, that is, the full set of coordinate ranges of the damaged position relative to the vehicle damage image to be detected.
  • One damage area corresponds to one damage type, and one damage type can correspond to multiple damage areas. In this way, after the vehicle damage image to be detected is input, all the damage types and the damage area ranges corresponding to the damage types can be automatically identified.
  • This application obtains the vehicle damage image to be detected; inputs the vehicle damage image to be detected into the unsupervised domain adaptive network model; the unsupervised domain adaptive network model includes a pytorch-based transfer learning model, a strong local feature adaptive model, a weak global feature adaptive model and a regularization model; the pytorch-based transfer learning model extracts the vehicle features of the vehicle damage image to be detected and, in the process of extracting the vehicle features, generates a local feature map and a global feature map; the vehicle features are features related to the vehicle obtained after transfer learning; the pytorch-based transfer learning model outputs the transfer feature vector group according to the vehicle features, while the first adaptive feature vector group is obtained through the strong local feature adaptive model and the second adaptive feature vector group is obtained through the weak global feature adaptive model; the transfer feature vector group, the first adaptive feature vector group and the second adaptive feature vector group are input into the regularization model and subjected to regularization processing through the regularization model to obtain the recognition result containing the damage type and the damage area.
  • In this way, an unsupervised domain adaptive network model suitable for vehicle damage detection is realized, built from a transfer-learned pytorch model, a strong local feature adaptive model that strengthens the first vehicle damage feature and a weak global feature adaptive model that extracts the second vehicle damage feature; it can quickly and accurately identify, automatically, the damage type and damage area corresponding to each damaged part of the vehicle in the vehicle damage image to be detected, greatly shortening the processes of model construction and model training, improving the accuracy and reliability of determining the damage type and damage area, and improving loss assessment efficiency.
  • a vehicle damage feature detection device is provided, and the vehicle damage feature detection device corresponds to the vehicle damage feature detection method in the above-mentioned embodiment in a one-to-one correspondence.
  • the vehicle damage feature detection device includes a receiving module 11, an input module 12, an extraction module 13, an output module 14 and an identification module 15.
  • the detailed description of each functional module is as follows:
  • the receiving module 11 is configured to obtain the vehicle damage image to be detected after the vehicle damage detection instruction is received; the vehicle damage image to be detected contains at least one image of a damaged location of the vehicle;
  • the input module 12 is used to input the vehicle damage image to be detected into an unsupervised domain adaptive network model; the unsupervised domain adaptive network model includes a pytorch-based transfer learning model, a strong local feature adaptive model, a weak global feature adaptive model and a regularization model;
  • the extraction module 13 is used to extract the vehicle features of the vehicle damage image to be detected through the pytorch-based transfer learning model and generate a local feature map and a global feature map; the vehicle features are features related to the vehicle obtained after transfer learning;
  • the output module 14 is configured to output a transfer feature vector group according to the vehicle features through the pytorch-based transfer learning model, and at the same time obtain a first adaptive feature vector group through the strong local feature adaptive model and a second adaptive feature vector group through the weak global feature adaptive model; the first adaptive feature vector group is obtained and output by the strong local feature adaptive model according to the first vehicle damage feature extracted from the local feature map; the second adaptive feature vector group is obtained and output by the weak global feature adaptive model according to the second vehicle damage feature extracted from the global feature map;
  • the recognition module 15 is configured to input the transfer feature vector group, the first adaptive feature vector group and the second adaptive feature vector group into the regularization model, and perform regularization processing on them through the regularization model to obtain a recognition result containing the damage type and the damage area; the recognition result characterizes all the damage types contained in the vehicle damage image to be detected and the corresponding damage areas.
  • the output module 14 includes:
  • the convolution unit 41 is configured to input the local feature map into the local convolutional layer in the strong local feature adaptive model, and extract the first vehicle damage feature in the local feature map through the local convolutional layer to obtain the local feature vector map;
  • the pooling unit 42 is configured to input the local feature vector map into the pooling layer in the strong local feature adaptive model, and perform pooling processing on the local feature vector map through the pooling layer to obtain the local pooling matrix;
  • the fully connected unit 43 is configured to input the local pooling matrix into the fully connected layer in the strong local feature adaptive model, and perform feature connection on the local pooling matrix through the fully connected layer to obtain the local connection matrix;
  • the regression unit 44 is configured to input the local connection matrix into the Softmax layer in the strong local feature adaptive model, and perform regression processing on the local connection matrix through the Softmax layer to obtain the first adaptive feature vector group corresponding to the local feature map.
  • each module in the above-mentioned vehicle damage feature detection device can be implemented in whole or in part by software, hardware, and a combination thereof.
  • the above-mentioned modules may be embedded in the form of hardware or independent of the processor in the computer equipment, or may be stored in the memory of the computer equipment in the form of software, so that the processor can call and execute the operations corresponding to the above-mentioned modules.
  • a computer device is provided.
  • the computer device may be a server, and its internal structure diagram may be as shown in FIG. 9.
  • the computer equipment includes a processor, a memory, a network interface, and a database connected through a system bus.
  • the processor of the computer device is used to provide calculation and control capabilities.
  • the memory of the computer device includes a readable storage medium and an internal memory.
  • the readable storage medium stores an operating system, computer readable instructions, and a database.
  • the internal memory provides an environment for the operation of the operating system and computer readable instructions in the readable storage medium.
  • the network interface of the computer device is used to communicate with an external terminal through a network connection. When the computer-readable instruction is executed by the processor, a method for detecting damage characteristics of a vehicle is realized.
  • the readable storage medium provided in this embodiment includes a non-volatile readable storage medium and a volatile readable storage medium.
  • In an embodiment, a computer device is provided, including a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor; when the processor executes the computer-readable instructions, the vehicle damage feature detection method in the foregoing embodiment is implemented.
  • In an embodiment, one or more readable storage media storing computer-readable instructions are provided; the readable storage media provided in this embodiment include non-volatile readable storage media and volatile readable storage media; the readable storage media store computer-readable instructions, and when the computer-readable instructions are executed by one or more processors, the one or more processors implement the vehicle damage feature detection method in the foregoing embodiment.
  • Non-volatile memory may include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory may include random access memory (RAM) or external cache memory.
  • RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDRSDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), etc.
  • the blockchain referred to in this application is a new application mode of computer technology such as distributed data storage, point-to-point transmission, consensus mechanism, and encryption algorithm.
  • Blockchain is essentially a decentralized database, a chain of data blocks associated with one another using cryptographic methods; each data block contains a batch of network transaction information, which is used to verify the validity of the information (anti-counterfeiting) and to generate the next block.
  • the blockchain can include the underlying platform of the blockchain, the platform product service layer, and the application service layer.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Finance (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Accounting & Taxation (AREA)
  • Evolutionary Biology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • Technology Law (AREA)
  • General Business, Economics & Management (AREA)
  • Image Analysis (AREA)

Abstract

A vehicle damage feature detection method, apparatus, computer device and storage medium. The method includes: acquiring a vehicle damage image to be detected (S10) and inputting it into an unsupervised domain adaptive network model (S20); extracting vehicle features through a pytorch-based transfer learning model and generating a local feature map and a global feature map (S30); outputting a transfer feature vector group according to the vehicle features, and at the same time acquiring a first adaptive feature vector group through a strong local feature adaptive model and a second adaptive feature vector group through a weak global feature adaptive model (S40); and performing regularization processing on the transfer feature vector group, the first adaptive feature vector group and the second adaptive feature vector group to obtain a recognition result (S50). The solution achieves automatic recognition of the damage types and damage areas in the vehicle damage image to be detected. The solution also relates to blockchain technology: the unsupervised domain adaptive model can be stored in a blockchain.

Description

Vehicle damage feature detection method, apparatus, computer device and storage medium
This application claims priority to the Chinese patent application filed with the Chinese Patent Office on May 27, 2020 under application number 202010462160.3 and entitled "Vehicle damage feature detection method, apparatus, computer device and storage medium", the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of image classification in artificial intelligence, and in particular to a vehicle damage feature detection method, apparatus, computer device and storage medium.
Background Art
The inventors found that after a traffic accident, certain parts of a vehicle carry traces of damage such as breakage and scratches. At present, insurance companies generally rely on manual identification of the images of the damaged vehicle taken by the owner or by business personnel after the traffic accident, that is, the damage type and damage area of each damaged part of the vehicle in the image are identified and determined manually. As a result, owing to differing understandings of the standards and insufficient observation experience, the manually identified damage type and damage area may not match the actual damage; for example, because dents and scrapes are difficult to distinguish by visually inspecting an image, a loss assessor can easily classify a dent as scrape damage. Assessment errors caused by the above situations greatly reduce the accuracy of loss assessment; they may cause cost losses for the insurance company and also lower the satisfaction of car owners or customers. In addition, manual loss assessment involves an enormous workload and low efficiency, and when a certain assessment accuracy must be met, the workload increases further and work efficiency decreases.
Summary of the Invention
This application provides a vehicle damage feature detection method, apparatus, computer device and storage medium, which quickly and accurately identify, automatically, the damage type and damage area corresponding to each damaged part of the vehicle in a vehicle damage image to be detected, greatly shorten the processes of model construction and model training, improve the accuracy and reliability of determining the damage type and damage area, and improve loss assessment efficiency.
A vehicle damage feature detection method, including:
after a vehicle damage detection instruction is received, acquiring a vehicle damage image to be detected; the vehicle damage image to be detected contains at least one image of a damaged location of the vehicle;
inputting the vehicle damage image to be detected into an unsupervised domain adaptive network model; the unsupervised domain adaptive network model includes a pytorch-based transfer learning model, a strong local feature adaptive model, a weak global feature adaptive model and a regularization model;
extracting the vehicle features of the vehicle damage image to be detected through the pytorch-based transfer learning model and generating a local feature map and a global feature map; the vehicle features are features related to the vehicle obtained after transfer learning;
outputting a transfer feature vector group according to the vehicle features through the pytorch-based transfer learning model, and at the same time acquiring a first adaptive feature vector group through the strong local feature adaptive model and a second adaptive feature vector group through the weak global feature adaptive model; the first adaptive feature vector group is acquired and output by the strong local feature adaptive model according to a first vehicle damage feature extracted from the local feature map; the second adaptive feature vector group is acquired and output by the weak global feature adaptive model according to a second vehicle damage feature extracted from the global feature map;
inputting the transfer feature vector group, the first adaptive feature vector group and the second adaptive feature vector group into the regularization model, and performing regularization processing on the transfer feature vector group, the first adaptive feature vector group and the second adaptive feature vector group through the regularization model to obtain a recognition result containing the damage type and the damage area; the recognition result characterizes all the damage types contained in the vehicle damage image to be detected and the corresponding damage areas.
一种车辆损伤特征检测装置,包括:
接收模块,用于接收车辆损伤检测指令之后,获取待检测车辆损伤图像;所述待检测车辆损伤图像包含至少一处车辆被损伤位置的图像;
输入模块,用于将所述待检测车辆损伤图像输入无监督领域自适应网络模型;所述无监督领域自适应网络模型包括基于pytorch的迁移学习模型、强局部特征自适应模型、弱全局特征自适应模型和正则化模型;
提取模块,用于通过所述基于pytorch的迁移学习模型提取所述待检测车辆损伤图像的车辆特征并生成局部特征图和全局特征图;所述车辆特征为通过迁移学习后与车辆相关的特征;
输出模块,用于通过所述基于pytorch的迁移学习模型根据所述车辆特征输出迁移特征向量组,同时通过所述强局部特征自适应模型获取第一自适应特征向量组,以及通过所述弱全局特征自适应模型获取第二自适应特征向量组;所述第一自适应特征向量组为所述强局部特征自适应模型根据自所述局部特征图中提取的第一车辆损伤特征获取并输出;所述第二自适应特征向量组为所述弱全局特征自适应模型根据自所述全局特征图中提取的第二车辆损伤特征获取并输出;
识别模块,用于将所述迁移特征向量组、所述第一自适应特征向量组和所述第二自适应特征向量组输入所述正则化模型,通过所述正则化模型对所述迁移特征向量组、所述第一自适应特征向量组和所述第二自适应特征向量组进行正则化处理,得到包含损伤类型和损伤区域的识别结果;所述识别结果表征了所述待检测车辆损伤图像中包含所有被损伤的类型及对应的损伤区域的结果。
一种计算机设备,包括存储器、处理器以及存储在所述存储器中并可在所述处理器上运行的计算机可读指令,所述处理器执行所述计算机可读指令时实现如下步骤:
接收车辆损伤检测指令之后,获取待检测车辆损伤图像;所述待检测车辆损伤图像包含至少一处车辆被损伤位置的图像;
将所述待检测车辆损伤图像输入无监督领域自适应网络模型;所述无监督领域自适应网络模型包括基于pytorch的迁移学习模型、强局部特征自适应模型、弱全局特征自适应模型和正则化模型;
通过所述基于pytorch的迁移学习模型提取所述待检测车辆损伤图像的车辆特征并生成局部特征图和全局特征图;所述车辆特征为通过迁移学习后与车辆相关的特征;
通过所述基于pytorch的迁移学习模型根据所述车辆特征输出迁移特征向量组,同时通过所述强局部特征自适应模型获取第一自适应特征向量组,以及通过所述弱全局特征自适应模型获取第二自适应特征向量组;所述第一自适应特征向量组为所述强局部特征自适应模型根据自所述局部特征图中提取的第一车辆损伤特征获取并输出;所述第二自适应特征向量组为所述弱全局特征自适应模型根据自所述全局特征图中提取的第二车辆损伤特征获取并输出;
将所述迁移特征向量组、所述第一自适应特征向量组和所述第二自适应特征向量组输入所述正则化模型,通过所述正则化模型对所述迁移特征向量组、所述第一自适应特征向量组和所述第二自适应特征向量组进行正则化处理,得到包含损伤类型和损伤区域的识别结果;所述识别结果表征了所述待检测车辆损伤图像中包含所有被损伤的类型及对应的损伤区域的结果。
一个或多个存储有计算机可读指令的可读存储介质,所述计算机可读指令被一个或多个处理器执行时,使得所述一个或多个处理器执行如下步骤:
接收车辆损伤检测指令之后,获取待检测车辆损伤图像;所述待检测车辆损伤图像包 含至少一处车辆被损伤位置的图像;
将所述待检测车辆损伤图像输入无监督领域自适应网络模型;所述无监督领域自适应网络模型包括基于pytorch的迁移学习模型、强局部特征自适应模型、弱全局特征自适应模型和正则化模型;
通过所述基于pytorch的迁移学习模型提取所述待检测车辆损伤图像的车辆特征并生成局部特征图和全局特征图;所述车辆特征为通过迁移学习后与车辆相关的特征;
通过所述基于pytorch的迁移学习模型根据所述车辆特征输出迁移特征向量组,同时通过所述强局部特征自适应模型获取第一自适应特征向量组,以及通过所述弱全局特征自适应模型获取第二自适应特征向量组;所述第一自适应特征向量组为所述强局部特征自适应模型根据自所述局部特征图中提取的第一车辆损伤特征获取并输出;所述第二自适应特征向量组为所述弱全局特征自适应模型根据自所述全局特征图中提取的第二车辆损伤特征获取并输出;
将所述迁移特征向量组、所述第一自适应特征向量组和所述第二自适应特征向量组输入所述正则化模型,通过所述正则化模型对所述迁移特征向量组、所述第一自适应特征向量组和所述第二自适应特征向量组进行正则化处理,得到包含损伤类型和损伤区域的识别结果;所述识别结果表征了所述待检测车辆损伤图像中包含所有被损伤的类型及对应的损伤区域的结果。
本申请提供的车辆损伤特征检测方法、装置、计算机设备及存储介质,通过获取待检测车辆损伤图像;将所述待检测车辆损伤图像输入无监督领域自适应网络模型;所述无监督领域自适应网络模型包括基于pytorch的迁移学习模型、强局部特征自适应模型、弱全局特征自适应模型和正则化模型;所述基于pytorch的迁移学习模型提取所述待检测车辆损伤图像的车辆特征,同时所述基于pytorch的迁移学习模型在提取所述车辆特征过程中,生成了局部特征图和全局特征图;所述车辆特征为通过迁移学习后与车辆相关的特征;通过所述基于pytorch的迁移学习模型根据所述车辆特征输出迁移特征向量组,同时通过所述强局部特征自适应模型获取第一自适应特征向量组,以及通过所述弱全局特征自适应模型获取第二自适应特征向量组;将所述迁移特征向量组、所述第一自适应特征向量组和所述第二自适应特征向量组输入所述正则化模型,通过所述正则化模型对所述迁移特征向量组、所述第一自适应特征向量组和所述第二自适应特征向量组进行正则化处理,得到包含损伤类型和损伤区域的识别结果,如此,实现了通过迁移学习pytorch模型、强局部特征自适应模型强化第一车辆损伤特征及弱全局特征自适应模型提取第二车辆损伤特征构架的适用于车辆损伤检测的无监督领域自适应网络模型,能够快速地、准确地自动识别出待检测车辆损伤图像中车辆损伤的部位对应的损伤类型及损伤区域,大大缩减了模型构架的过程及模型训练的过程,提升了对定损类型和定损区域进行确定的准确率及可靠性,提高了定损效率。
本申请的一个或多个实施例的细节在下面的附图和描述中提出,本申请的其他特征和优点将从说明书、附图以及权利要求变得明显。
附图说明
为了更清楚地说明本申请实施例的技术方案,下面将对本申请实施例的描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动性的前提下,还可以根据这些附图获得其他的附图。
图1是本申请一实施例中车辆损伤特征检测方法的应用环境示意图;
图2是本申请一实施例中车辆损伤特征检测方法的流程图;
图3是本申请一实施例中车辆损伤特征检测方法的步骤S20的流程图;
图4是本申请另一实施例中车辆损伤特征检测方法的步骤S20的流程图;
图5是本申请一实施例中车辆损伤特征检测方法的步骤S40的流程图;
图6是本申请另一实施例中车辆损伤特征检测方法的步骤S40的流程图;
图7是本申请一实施例中车辆损伤特征检测装置的原理框图;
图8是本申请一实施例中车辆损伤特征检测装置的输出模块14的原理框图;
图9是本申请一实施例中计算机设备的示意图。
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
本申请提供的车辆损伤特征检测方法,可应用在如图1的应用环境中,其中,客户端(计算机设备)通过网络与服务器进行通信。其中,客户端(计算机设备)包括但不限于为各种个人计算机、笔记本电脑、智能手机、平板电脑、摄像头和便携式可穿戴设备。服务器可以用独立的服务器或者是多个服务器组成的服务器集群来实现。
在一实施例中,如图2所示,提供一种车辆损伤特征检测方法,其技术方案主要包括以下步骤S10-S50:
S10,接收车辆损伤检测指令之后,获取待检测车辆损伤图像;所述待检测车辆损伤图像包含至少一处车辆被损伤位置的图像。
可理解地,在业务员拍摄完车辆的待检测车辆损伤图像后,触发所述车辆损伤检测指令,获取所述待检测车辆损伤图像,所述待检测车辆损伤图像包含有至少一处车辆被损伤位置的图像,所述损伤包括划痕、刮擦、凹陷、褶皱、死折、撕裂、缺失等7种损伤情况,所述获取方式可以根据需求进行设定,比如通过所述车辆损伤检测指令中含有的所述待检测车辆损伤图像进行直接获取,或者通过访问所述车辆损伤检测指令中含有存储所述待检测车辆损伤图像的存储路径,再通过访问的存储路径进行获取等等。
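下面给出一个仅作示意的图像获取草图(假设检测指令为含有image或image_path字段的字典,字段名与函数名均为示例性假设,并非本申请限定的实现):

```python
# 示意性草图:车辆损伤检测指令中直接含有图像,或含有图像的存储路径
from PIL import Image

def get_damage_image(instruction: dict) -> Image.Image:
    # 方式一:指令中直接携带待检测车辆损伤图像
    if instruction.get("image") is not None:
        return instruction["image"]
    # 方式二:通过指令中携带的存储路径访问并读取图像
    return Image.open(instruction["image_path"]).convert("RGB")
```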
S20,将所述待检测车辆损伤图像输入无监督领域自适应网络模型;所述无监督领域自适应网络模型包括基于pytorch的迁移学习模型、强局部特征自适应模型、弱全局特征自适应模型和正则化模型。
可理解地,所述无监督领域自适应网络模型为训练完成的自适应网络模型,所述无监督领域自适应网络模型包括基于pytorch的迁移学习模型、强局部特征自适应模型、弱全局特征自适应模型和正则化模型,所述基于pytorch的迁移学习模型为迁移学习一个基于pytorch的网络结构且训练完成的神经网络模型(即为训练完成的pytorch模型),所述训练完成的pytorch模型的特征提取可以根据需求进行选择,比如所述训练完成的pytorch模型为应用于车辆车灯亮度检测的pytorch模型,或者所述训练完成的pytorch模型为应用于车辆车型检测的pytorch模型等等;所述强局部特征自适应模型为训练完成的用于强化第一车辆损伤特征的神经网络模型;所述弱全局特征自适应模型为训练完成的用于提取第二车辆损伤特征的神经网络模型;所述正则化模型为将接收到的特征向量进行规范化处理的模型。
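为便于理解上述四个子模型的组合关系,下面给出一个基于PyTorch的结构性草图(类名与各子模块的注入方式均为示例性假设,具体结构以后文描述为准):

```python
import torch.nn as nn

class UnsupervisedDomainAdaptiveNet(nn.Module):
    """示意性草图:无监督领域自适应网络模型的四个组成部分。"""
    def __init__(self, backbone, local_adaptor, global_adaptor, regularizer):
        super().__init__()
        self.backbone = backbone              # 基于pytorch的迁移学习模型
        self.local_adaptor = local_adaptor    # 强局部特征自适应模型
        self.global_adaptor = global_adaptor  # 弱全局特征自适应模型
        self.regularizer = regularizer        # 正则化模型

    def forward(self, image):
        transfer_vecs, local_map, global_map = self.backbone(image)
        local_vecs = self.local_adaptor(local_map)     # 第一自适应特征向量组
        global_vecs = self.global_adaptor(global_map)  # 第二自适应特征向量组
        return self.regularizer(transfer_vecs, local_vecs, global_vecs)
```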
在一实施例中,如图3所示,所述步骤S20之前,即所述将所述待检测车辆损伤图像输入无监督领域自适应网络模型之前,包括:
S201,获取车损样本集;所述车损样本集包括车损样本图像,一个所述车损样本图像与一个损伤标签组关联;所述损伤标签组包括至少一个损伤标签类型和至少一个损伤标签区域。
可理解地,所述车损样本集包含有多个所述车损样本图像,所述车损样本集为所述车 损样本的集合,所述车损样本图像为历史拍摄且含有至少一处车辆被损伤位置的图像,一个所述车损样本图像与一个损伤标签组关联,所述损伤标签组包括损伤标签类型和损伤标签区域,所述损伤标签类型包括划痕、刮擦、凹陷、褶皱、死折、撕裂、缺失等7种损伤类型,所述损伤标签区域为通过一个最小面积的矩形框能覆盖损伤位置的坐标区域范围。
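下面以一个示意性的数据结构说明上述损伤标签组的组织方式(字段名与坐标取值均为示例假设):

```python
# 示意性草图:一个车损样本图像关联的损伤标签组,
# 每个损伤标签区域用能覆盖损伤位置的最小外接矩形坐标表示 (x1, y1, x2, y2)
damage_label_group = {
    "labels": [
        {"type": "刮擦", "region": (120, 86, 260, 155)},
        {"type": "凹陷", "region": (300, 210, 390, 310)},
    ]
}
```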
S202,将所述车损样本图像输入含有初始参数的自适应网络模型。
可理解地,将所述车损样本图像输入至所述自适应网络模型中,所述自适应网络模型为包含有所述初始参数的深度卷积神经网络模型,所述初始参数可以根据需求进行设定,比如所述初始参数为随机预设的参数,或者为预设的固定值等等,作为优选,所述初始参数通过迁移学习获取训练完成的深度卷积神经网络模型中的所有参数。
在一实施例中,所述步骤S202之前,即所述将所述车损样本图像输入含有初始参数的自适应网络模型之前,包括:
S20201,通过迁移学习,获取训练完成的pytorch模型的所有迁移参数,将所有所述迁移参数确定为所述自适应网络模型中的所述初始参数。
可理解地,所述迁移学习(Transfer learning)就是把已经训练完成的模型参数迁移到新的模型来帮助新模型训练,由于大部分数据或任务是存在相关性的,所以通过迁移学习可以将已经学到的模型参数通过某种方式分享给新模型,从而加快并优化新模型的学习效率,不需要从头开始训练学习,如此,通过迁移学习与车辆相关的检测模型能够优化及加快学习效率,所述训练完成的pytorch模型根据需求选择与车辆相关检测的模型,比如:所述训练完成的pytorch模型为应用于车辆车灯亮度检测的pytorch模型,或者所述训练完成的pytorch模型为应用于车辆车型检测的pytorch模型等等,所述pytorch模型中含有所述迁移参数,所述迁移参数为所述pytorch模型的各参数,将所有所述迁移参数确定为所述自适应网络模型中的所述初始参数,所述pytorch模型的特点为支持动态图计算且模型结构简单(即从数据张量到网络的抽象层次的递进),能够提高模型的提取效率和识别准确率。
本申请通过迁移学习训练完成的pytorch模型,能够快速构架模型并且减少了训练pytorch模型的时间,减少了成本。
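一个示意性的迁移参数初始化草图如下(骨干结构与权重文件名均为演示用假设,仅说明"载入已训练参数作为初始参数"这一做法):

```python
import torch
import torch.nn as nn

# 示意性草图:把已训练完成的pytorch模型(此处用一个假设的小型卷积骨干代替)的参数
# 迁移为新的自适应网络模型的初始参数;strict=False 允许新模型中新增分支保持随机初始化
pretrained = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU())
torch.save(pretrained.state_dict(), "pretrained_demo.pth")     # 假设已有训练完成的权重文件

new_model = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU())
migrated_params = torch.load("pretrained_demo.pth", map_location="cpu")
new_model.load_state_dict(migrated_params, strict=False)       # 所有迁移参数作为初始参数
```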
S203,通过所述自适应网络模型对所述车损样本图像进行训练特征提取,获取所述自适应网络模型根据所述训练特征输出的所述车损样本图像对应的训练结果;所述训练特征包括所述车辆特征、所述第一车辆损伤特征和所述第二车辆损伤特征;所述训练结果包括至少一个样本损伤类型和至少一个样本损伤区域。
可理解地,所述训练特征包括所述车辆特征、所述第一车辆损伤特征和所述第二车辆损伤特征,所述车辆特征为通过迁移学习后与车辆相关的特征,所述第一车辆损伤特征为图像中局部的纹理及颜色深浅的特征,所述第二车辆损伤特征为全部特征图中共同向量特性的特征,所述训练结果包括样本损伤类型和样本损伤区域,一个所述样本损伤区域对应一个所述样本损伤类型,一个所述样本损伤类型可以对应多个所述样本损伤区域,所述样本损伤类型包括划痕、刮擦、凹陷、褶皱、死折、撕裂、缺失等7种损伤类型,所述样本损伤区域为所述车损样本图像中被损伤的区域范围,即识别出所述车损样本图像中损伤位置的矩形坐标范围。
S204,将所述车损样本图像对应的所有所述损伤标签类型、所有所述损伤标签区域、所有所述样本损伤类型、所有所述样本损伤区域输入所述自适应网络模型中的损失模型,及通过所述损失模型的损失函数计算出损失值。
可理解地,将所述车损样本图像对应的所有所述损伤标签类型、所有所述损伤标签区域、所有所述样本损伤类型、所有所述样本损伤区域输入所述损失模型中的所述损失函数中,计算得出所述车损样本图像对应的所述损失值,所述损失函数可以根据需求进行设定,所述损失函数为所有所述损伤标签类型与所有所述样本损伤类型之间差值的对数和所有 所述损伤标签区域与所有所述样本损伤区域之间差值的对数的加权函数,通过所述损失函数可以计算出所述损失值,所述损失值为衡量所有所述损伤标签类型与所有所述样本损伤类型之间差距和所有所述损伤标签区域与所有所述样本损伤区域之间差距的总和的指标。
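由于所述损失函数可以根据需求进行设定,下面仅给出一个按"类型差距与区域差距加权求和"思路构造的损失草图(采用交叉熵与Smooth L1以及权重w1、w2均为示例假设):

```python
import torch
import torch.nn.functional as F

# 示意性草图:损失值为损伤类型差距与损伤区域差距的加权和,
# 此处用交叉熵衡量类型差距、Smooth L1衡量区域坐标差距
def damage_loss(pred_type_logits, label_types, pred_regions, label_regions, w1=1.0, w2=1.0):
    type_loss = F.cross_entropy(pred_type_logits, label_types)    # 损伤类型之间的差距
    region_loss = F.smooth_l1_loss(pred_regions, label_regions)   # 损伤区域坐标之间的差距
    return w1 * type_loss + w2 * region_loss

# 用法示例(形状仅为示意):3个预测框、7种损伤类型、矩形区域用4个坐标表示
loss = damage_loss(torch.randn(3, 7), torch.tensor([0, 2, 5]),
                   torch.rand(3, 4), torch.rand(3, 4))
```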
S205,在所述损失值达到预设的收敛条件时,将收敛之后的所述自适应网络模型记录为无监督领域自适应网络模型,并将所述无监督领域自适应网络模型存储在区块链中。
可理解地,所述收敛条件可以为所述损失值小于设定阈值的条件,即在所述损失值小于设定阈值时,将收敛之后的所述自适应网络模型记录为无监督领域自适应网络模型,并且将所述无监督领域自适应网络模型存储在区块链中。
在一实施例中,如图4所示,所述步骤S204之后,即所述通过所述损失模型的损失函数计算出损失值之后,还包括:
S206,在所述损失值未达到预设的收敛条件时,迭代更新所述自适应网络模型的初始参数,直至所述损失值达到所述预设的收敛条件时,将收敛之后的所述自适应网络模型记录为无监督领域自适应网络模型,并将所述无监督领域自适应网络模型存储在区块链中。
可理解地,所述收敛条件也可以为所述损失值经过了10000次计算后值为很小且不会再下降的条件,即在所述损失值经过10000次计算后值为很小且不会再下降时,停止训练,并将收敛之后的所述自适应网络模型记录为无监督领域自适应网络模型,并且将所述无监督领域自适应网络模型存储在区块链中。
需要强调的是,为进一步保证上述无监督领域自适应网络模型的私密和安全性,上述无监督领域自适应网络模型还可以存储于区块链的节点中。
其中,本申请所指区块链是分布式数据存储、点对点传输、共识机制、加密算法等计算机技术的新型应用模式。区块链(Blockchain),本质上是一个去中心化的数据库,是一串使用密码学方法相关联产生的数据块,每一个数据块中包含了一批次网络交易的信息,用于验证其信息的有效性(防伪)和生成下一个区块。区块链可以包括区块链底层平台、平台产品服务层以及应用服务层等。区块链提供的去中心化的完全分布式DNS服务通过网络中各个节点之间的点对点数据传输服务就能实现域名的查询和解析,可用于确保某个重要的基础设施的操作系统和固件没有被篡改,可以监控软件的状态和完整性,发现不良的篡改,并确保所传输的数据没有经过篡改,将所述无监督领域自适应网络模型存储在区块链中,能够确保无监督领域自适应网络模型的私密和安全性。
如此,在所述损失值未达到预设的收敛条件时,不断更新迭代所述自适应网络模型的初始参数,可以不断向准确的识别结果靠拢,让识别结果的准确率越来越高。
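结合上述收敛条件,一个示意性的迭代训练草图如下(优化器、学习率、阈值与最大迭代次数均为示例假设):

```python
import torch

# 示意性草图:当损失值未达到预设收敛条件时迭代更新模型参数,
# 直至损失值小于设定阈值(或达到最大迭代次数)后停止训练
def train_until_converged(model, loss_fn, data_loader, threshold=1e-3, max_steps=10000):
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
    step = 0
    for images, label_types, label_regions in data_loader:
        pred_types, pred_regions = model(images)
        loss = loss_fn(pred_types, label_types, pred_regions, label_regions)
        optimizer.zero_grad()
        loss.backward()        # 反向传播,迭代更新自适应网络模型的参数
        optimizer.step()
        step += 1
        if loss.item() < threshold or step >= max_steps:   # 预设的收敛条件
            break
    return model
```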
本申请通过获取车损样本集;所述车损样本集包括车损样本图像,一个所述车损样本图像与一个损伤标签组关联;所述损伤标签组包括至少一个损伤标签类型和至少一个损伤标签区域;将所述车损样本图像输入含有初始参数的自适应网络模型;通过所述自适应网络模型对所述车损样本图像进行训练特征提取,获取所述自适应网络模型根据所述训练特征输出的所述车损样本图像对应的训练结果;所述训练特征包括所述车辆特征、所述第一车辆损伤特征和所述第二车辆损伤特征;所述训练结果包括至少一个样本损伤类型和至少一个样本损伤区域;将所述车损样本图像对应的所有所述损伤标签类型、所有所述损伤标签区域、所有所述样本损伤类型、所有所述样本损伤区域输入所述自适应网络模型中的损失模型,及通过所述损失模型的损失函数计算出损失值;在所述损失值达到预设的收敛条件时,将收敛之后的所述自适应网络模型记录为无监督领域自适应网络模型;在所述损失值未达到预设的收敛条件时,迭代更新所述自适应网络模型的初始参数,直至所述损失值达到所述预设的收敛条件时,将收敛之后的所述自适应网络模型记录为无监督领域自适应网络模型。
如此,实现了通过获取含有损伤标签类型和损伤标签区域的车损样本图像输入自适应网络模型,通过自适应网络模型对车损样本图像进行训练特征(包括所述车辆特征、所述 第一车辆损伤特征和所述第二车辆损伤特征)提取,获取自适应网络模型输出的训练结果,通过损失模型中的损失函数,获取所述车损样本图像对应的所有所述损伤标签类型、所有所述损伤标签区域、所有所述样本损伤类型、所有所述样本损伤区域确定出的损失值,并根据损失值进行训练,将收敛之后的自适应网络模型确定为无监督领域自适应网络模型,提供了一种快速识别车损样本图像中的损伤情况的模型训练方法,提升了对定损类型和定损区域进行确定的准确率及可靠性,提高了定损效率,并缩短了训练时间,减少了成本。
S30,通过所述基于pytorch的迁移学习模型提取所述待检测车辆损伤图像的车辆特征并生成局部特征图和全局特征图;所述车辆特征为通过迁移学习后与车辆相关的特征。
可理解地,在所述基于pytorch的迁移学习模型在提取所述待检测车辆损伤图像的所述车辆特征的过程中,生成所述局部特征图和所述全局特征图,所述局部特征图为在对所述待检测车辆损伤图像卷积缩小的过程中,取一个卷积层输出的特征图作为所述局部特征图,优选地,选取中间的一个卷积层输出的特征图作为所述局部特征图,是由于中间范围的卷积层含有丰富的第一车辆损伤特征的特征向量,即在整个图像中提取出局部的显现的特征向量,所述全局特征图为输入所述基于pytorch的迁移学习模型中的RPN模型的特征图,所述全局特征图为将所述待检测车辆损伤图像卷积缩小至预设的最小尺寸的特征图,方便于提取第二车辆损伤特征。
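下面给出一个从骨干网络中间卷积层取局部特征图、从末端取全局特征图的示意性草图(骨干层数、通道数与输入尺寸均为示例假设):

```python
import torch
import torch.nn as nn

# 示意性草图:卷积缩小过程中,取中间某一卷积层的输出作为局部特征图,
# 取卷积缩小到最小尺寸的末端输出作为全局特征图
class FeatureBackbone(nn.Module):
    def __init__(self):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU())
        self.stage2 = nn.Sequential(nn.Conv2d(64, 256, 3, stride=2, padding=1), nn.ReLU())   # 中间卷积层
        self.stage3 = nn.Sequential(nn.Conv2d(256, 512, 3, stride=2, padding=1), nn.ReLU())  # 末端卷积层

    def forward(self, x):
        x = self.stage1(x)
        local_map = self.stage2(x)           # 局部特征图:中间卷积层输出,含丰富的第一车辆损伤特征
        global_map = self.stage3(local_map)  # 全局特征图:缩小至预设最小尺寸,便于提取第二车辆损伤特征
        return local_map, global_map

local_map, global_map = FeatureBackbone()(torch.randn(1, 3, 512, 512))
```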
S40,通过所述基于pytorch的迁移学习模型根据所述车辆特征输出迁移特征向量组,同时通过所述强局部特征自适应模型获取第一自适应特征向量组,以及通过所述弱全局特征自适应模型获取第二自适应特征向量组;所述第一自适应特征向量组为所述强局部特征自适应模型根据自所述局部特征图中提取的第一车辆损伤特征获取并输出;所述第二自适应特征向量组为所述弱全局特征自适应模型根据自所述全局特征图中提取的第二车辆损伤特征获取并输出。
可理解地,所述第一车辆损伤特征为图像中局部的纹理及颜色深浅的特征,所述第二车辆损伤特征为全部特征图中共同向量特性的特征,所述强局部特征自适应模型的作用为增强所述局部特征图中的损伤的特征,通过提取所述第一车辆损伤特征进行生成有用的适应信息,输出增强后的第一特征向量,即提取出所述第一自适应特征向量组;所述弱全局特征自适应模型的作用为在所有所述全局特征图中的较弱的特征向量中提取出所述第二车辆损伤特征,防止过拟合,生成第二特征向量,即提取出所述第二自适应特征向量组。
在一实施例中,如图5所示,所述步骤S40中,即所述强局部特征自适应模型对所述局部特征图进行第一车辆损伤特征的提取,所述强局部特征自适应模型根据所述第一车辆损伤特征输出第一自适应特征向量组,包括:
S401,将所述局部特征图输入所述强局部特征自适应模型中的局部卷积层,通过所述局部卷积层提取所述局部特征图中的所述第一车辆损伤特征,得到局部特征向量图。
可理解地,所述局部卷积层包括第一卷积层、第二卷积层和第三卷积层,优选地,所述第一卷积层包括一个3×3×512卷积核及步长为2的第一卷积、一个以1填充的第一填充层、一个第一批标准化模块(batch normalization)、一个第一激活模块(ReLU)和一个第一丢弃模块(dropout),所述第二卷积层包括一个3×3×128卷积核及步长为2的第二卷积、一个以1填充的第二填充层、一个第二批标准化模块(batch normalization)、一个第二激活模块和一个第二丢弃模块(dropout),所述第三卷积层包括一个3×3×128卷积核及步长为2的第三卷积、一个以1填充的第三填充层、一个第三批标准化模块(batch normalization)、一个第三激活模块和一个第三丢弃模块(dropout)。
其中,所述第一车辆损伤特征为图像中局部的纹理及颜色深浅的特征,通过所述第一卷积层对所述局部特征图进行所述第一车辆损伤特征提取,并对所述局部特征图进行减少特征图的维度和扩充特征图的通道数处理,得到第一卷积特征图,将所述第一卷积特征图输入所述第二卷积层,通过所述第二卷积层对所述第一卷积特征图进行所述第一车辆损伤 特征提取,并对所述第一卷积特征图进行减少特征图的维度和扩充特征图的通道数处理,得到第二卷积特征图,将所述第二卷积特征图输入所述第三卷积层,通过所述第三卷积层对所述第二卷积特征图进行所述第一车辆损伤特征提取,并对所述第二卷积特征图进行减少特征图的维度和扩充特征图的通道数处理,得到所述局部特征向量图,所述局部特征向量图的维度比所述局部特征图的维度少,所述局部特征向量图的通道数比所述局部特征图的通道数多。
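按上述优选结构,局部卷积层可以用如下示意性草图表示(输入通道数与dropout比例为示例假设):

```python
import torch
import torch.nn as nn

# 示意性草图:三个局部卷积层,每层为3×3卷积(步长2、填充1)+ 批标准化 + ReLU激活 + dropout,
# 输出通道数依次为512、128、128;输入通道数in_ch取决于局部特征图,此处为示例假设
def make_local_conv_layers(in_ch=64):
    return nn.Sequential(
        nn.Conv2d(in_ch, 512, kernel_size=3, stride=2, padding=1),  # 第一卷积层 3×3×512
        nn.BatchNorm2d(512), nn.ReLU(inplace=True), nn.Dropout2d(0.5),
        nn.Conv2d(512, 128, kernel_size=3, stride=2, padding=1),    # 第二卷积层 3×3×128
        nn.BatchNorm2d(128), nn.ReLU(inplace=True), nn.Dropout2d(0.5),
        nn.Conv2d(128, 128, kernel_size=3, stride=2, padding=1),    # 第三卷积层 3×3×128
        nn.BatchNorm2d(128), nn.ReLU(inplace=True), nn.Dropout2d(0.5),
    )

# 用法示例:对局部特征图逐层提取第一车辆损伤特征,得到局部特征向量图
feature_vec_map = make_local_conv_layers()(torch.randn(1, 64, 64, 64))
```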
S402,将所述局部特征向量图输入所述强局部特征自适应模型中的池化层,通过所述池化层对所述局部特征向量图进行池化处理,得到局部池化矩阵。
可理解地,所述池化处理的方法可以根据需求进行设定,比如池化处理可以为平均池化,也可以为最大池化等等,所述池化处理的作用为对所述局部特征向量图降维处理,局部池化矩阵为一个一维矩阵数组。
S403,将所述局部池化矩阵输入所述强局部特征自适应模型中的全连接层,通过所述全连接层对所述局部池化矩阵进行特征连接,得到局部连接矩阵。
可理解地,所述特征连接为将获得的特征向量值映射到样本标记空间的位置,并且进行加权汇总,将这些特征向量进行连接,通过所述全连接层对所述局部池化矩阵进行特征连接,得到所述局部连接矩阵,所述局部连接矩阵为排序后的一维矩阵数组。例如:通过300个1×1的卷积核进行卷积后连接成一个含300个元素的一维向量组。
S404,将所述局部连接矩阵输入所述强局部特征自适应模型中的Softmax层,通过所述Softmax层对所述局部连接矩阵进行回归处理,得到所述局部特征图对应的第一自适应特征向量组。
可理解地,所述回归处理为对输入进行加权后进行归一化操作,得到每个类别的分数,再经过Softmax映射为概率的处理过程,所述Softmax层对所述局部连接矩阵进行预测分类得到所述第一自适应特征向量组。
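将上述池化层、全连接层与Softmax层串联起来,可得到如下示意性草图(全连接维度300与类别数7按前文描述设置,输入通道数等其余取值均为示例假设):

```python
import torch
import torch.nn as nn

# 示意性草图:池化层 + 全连接层 + Softmax层,输出局部特征图对应的第一自适应特征向量组
class LocalAdaptiveHead(nn.Module):
    def __init__(self, in_ch=128, hidden=300, num_classes=7):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)     # 池化处理,对局部特征向量图降维
        self.fc = nn.Linear(in_ch, hidden)      # 全连接层,特征连接为含300个元素的一维向量组
        self.softmax_layer = nn.Sequential(nn.Linear(hidden, num_classes), nn.Softmax(dim=1))

    def forward(self, local_feature_vec_map):
        pooled = self.pool(local_feature_vec_map).flatten(1)   # 局部池化矩阵(一维矩阵数组)
        connected = self.fc(pooled)                            # 局部连接矩阵
        return self.softmax_layer(connected)                   # 回归处理,得到第一自适应特征向量组

first_adaptive_vecs = LocalAdaptiveHead()(torch.randn(1, 128, 8, 8))
```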
本申请通过将所述局部特征图输入所述局部卷积层,通过所述局部卷积层提取所述局部特征图中的所述第一车辆损伤特征,得到局部特征向量图;将所述局部特征向量图输入所述池化层,通过所述池化层对所述局部特征向量图进行池化处理,得到局部池化矩阵;将所述局部池化矩阵输入所述全连接层,通过所述全连接层对所述局部池化矩阵进行特征连接,得到局部连接矩阵;将所述局部连接矩阵输入所述Softmax层,通过所述Softmax层对所述局部连接矩阵进行回归处理,得到所述局部特征图对应的第一自适应特征向量组。
如此,实现了增强所述局部特征图中的损伤的特征,便于提取出有用的第一车辆损伤特征,提供第一自适应特征向量组的功能,提升了对定损类型和定损区域进行确定的准确率及可靠性,提高了定损效率。
在一实施例中,如图6所示,所述步骤S40中,即所述弱全局特征自适应模型对所述全局特征图进行第二车辆损伤特征的提取,所述弱全局特征自适应模型根据所述第二车辆损伤特征输出第二自适应特征向量组,包括:
S405,将所述全局特征图输入所述弱全局特征自适应模型中的第一全局卷积层,通过所述第一全局卷积层提取所述全局特征图的第二车辆损伤特征,得到全局特征向量图。
可理解地,所述全局卷积层包括第一全局卷积层、第二全局卷积层和第三全局卷积层,优选地,所述第一全局卷积层包括一个1×1×256卷积核及步长为2的第一全局卷积、一个以0填充的第一全局填充层和一个第一全局激活模块,所述第二全局卷积层包括一个1×1×128卷积核及步长为2的第二全局卷积、一个以0填充的第二全局填充层和一个第二全局激活模块,所述第三全局卷积层包括一个1×1×1卷积核及步长为1的第三全局卷积和一个以0填充的第三全局填充层。
其中,所述第二车辆损伤特征为全部特征图中共同向量特性(也可以称为共性)的特征,通过所述第一全局卷积层对所述全局特征图进行所述第二车辆损伤特征提取,即从所 述全局特征图中提取出共同向量特征的特征向量,得到第一全局卷积特征图,将所述第一全局卷积特征图输入所述第二全局卷积层,通过所述第二全局卷积层对所述第一全局卷积特征图进行所述第二车辆损伤特征提取,得到第二全局卷积特征图,将所述第二全局卷积特征图输入所述第三全局卷积层,通过所述第三全局卷积层对所述第二全局卷积特征图进行所述第二车辆损伤特征提取,得到所述全局特征向量图,在提取所述第二车辆损伤特征过程中,通过所述第一全局填充层、所述第二全局填充层和所述第三全局填充层是为了不引入共性的特征进行干扰但是能进行填充至预设的尺寸大小。
S406,将所述全局特征向量图输入所述弱全局特征自适应模型中的Sigmoid激活层,通过所述Sigmoid激活层对所述全局特征向量进行激活处理,得到所述全局特征图对应的第二自适应特征向量组。
可理解地,所述Sigmoid激活层为通过Sigmoid函数作为所述弱全局特征自适应模型中的最后一层,所述Sigmoid函数为对所述全局特征向量进行激活处理并进行分类的函数,所述激活处理为通过所述Sigmoid函数将取值为(-∞,+∞)的向量映射到(0,1)之间的处理过程,从而得到所述第二自适应特征向量组。
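按上述优选结构,弱全局特征自适应模型可以用如下示意性草图表示(输入通道数与输入尺寸为示例假设,填充方式做了简化):

```python
import torch
import torch.nn as nn

# 示意性草图:依次为1×1×256(步长2)、1×1×128(步长2)、1×1×1(步长1)的全局卷积,
# 最后接Sigmoid激活层,输出全局特征图对应的第二自适应特征向量组
class WeakGlobalAdaptor(nn.Module):
    def __init__(self, in_ch=512):
        super().__init__()
        self.conv1 = nn.Sequential(nn.Conv2d(in_ch, 256, 1, stride=2), nn.ReLU(inplace=True))
        self.conv2 = nn.Sequential(nn.Conv2d(256, 128, 1, stride=2), nn.ReLU(inplace=True))
        self.conv3 = nn.Conv2d(128, 1, 1, stride=1)
        self.sigmoid = nn.Sigmoid()   # 将取值为(-∞,+∞)的向量映射到(0,1)之间

    def forward(self, global_feature_map):
        x = self.conv3(self.conv2(self.conv1(global_feature_map)))  # 全局特征向量图
        return self.sigmoid(x)                                       # 第二自适应特征向量组

second_adaptive_vecs = WeakGlobalAdaptor()(torch.randn(1, 512, 16, 16))
```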
本申请通过将所述全局特征图输入所述第一全局卷积层,通过所述第一全局卷积层提取所述全局特征图的第二车辆损伤特征,得到全局特征向量图;将所述全局特征向量图输入所述Sigmoid激活层,通过所述Sigmoid激活层对所述全局特征向量进行激活处理,得到所述全局特征图对应的第二自适应特征向量组。
如此,实现了在所有所述全局特征图中的较弱的特征向量中提取出所述第二车辆损伤特征,防止过拟合,提取出高质量的第二车辆损伤特征,提供第二自适应特征向量组的功能,提高了识别准确性和可靠性。
S50,将所述迁移特征向量组、所述第一自适应特征向量组和所述第二自适应特征向量组输入所述正则化模型,通过所述正则化模型对所述迁移特征向量组、所述第一自适应特征向量组和所述第二自适应特征向量组进行正则化处理,得到包含损伤类型和损伤区域的识别结果;所述识别结果表征了所述待检测车辆损伤图像中包含所有被损伤的类型及对应的损伤区域的结果。
可理解地,所述正则化处理为规则化,即为增加规则限制,约束优化参数,防止明显的特征无限放大导致弱化的特征被抹灭,所述损伤类型包括划痕、刮擦、凹陷、褶皱、死折、撕裂、缺失等7种损伤类型,所述损伤区域为所述待检测车辆损伤图像中被损伤位置的区域范围,即被损伤位置的相对于所述待检测车辆损伤图像的坐标范围的全集,一个所述损伤区域对应一个所述损伤类型,一个所述损伤类型可以对应多个所述损伤区域,如此,通过输入待检测车辆损伤图像后,能够自动识别出所有被损伤的类型,以及与被损伤的类型对应的损伤区域的范围。
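下面给出一个示意性的正则化处理草图(采用L2范数对各组特征向量做规范化后再拼接,仅为一种示例假设,并非本申请限定的正则化方式):

```python
import torch
import torch.nn.functional as F

# 示意性草图:对三组特征向量分别做L2规范化后再拼接,
# 约束各组向量的尺度,防止明显的特征被无限放大而抹灭较弱的特征
def regularize_and_fuse(transfer_vecs, first_adaptive_vecs, second_adaptive_vecs):
    groups = [transfer_vecs, first_adaptive_vecs, second_adaptive_vecs]
    normalized = [F.normalize(g.flatten(1), p=2, dim=1) for g in groups]
    return torch.cat(normalized, dim=1)   # 融合后的特征,用于输出损伤类型及损伤区域的识别结果

fused = regularize_and_fuse(torch.randn(1, 300), torch.randn(1, 7), torch.randn(1, 1, 4, 4))
```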
本申请通过获取待检测车辆损伤图像;将所述待检测车辆损伤图像输入无监督领域自适应网络模型;所述无监督领域自适应网络模型包括基于pytorch的迁移学习模型、强局部特征自适应模型、弱全局特征自适应模型和正则化模型;所述基于pytorch的迁移学习模型提取所述待检测车辆损伤图像的车辆特征,同时所述基于pytorch的迁移学习模型在提取所述车辆特征过程中,生成了局部特征图和全局特征图;所述车辆特征为通过迁移学习后与车辆相关的特征;通过所述基于pytorch的迁移学习模型根据所述车辆特征输出迁移特征向量组,同时通过所述强局部特征自适应模型获取第一自适应特征向量组,以及通过所述弱全局特征自适应模型获取第二自适应特征向量组;将所述迁移特征向量组、所述第一自适应特征向量组和所述第二自适应特征向量组输入所述正则化模型,通过所述正则化模型对所述迁移特征向量组、所述第一自适应特征向量组和所述第二自适应特征向量组进行正则化处理,得到包含损伤类型和损伤区域的识别结果。
如此,实现了通过迁移学习pytorch模型、强局部特征自适应模型强化第一车辆损伤 特征及弱全局特征自适应模型提取第二车辆损伤特征构架的适用于车辆损伤检测的无监督领域自适应网络模型,能够快速地、准确地自动识别出待检测车辆损伤图像中车辆损伤的部位对应的损伤类型及损伤区域,大大缩减了模型构架的过程及模型训练的过程,提升了对定损类型和定损区域进行确定的准确率及可靠性,提高了定损效率。
在一实施例中,提供一种车辆损伤特征检测装置,该车辆损伤特征检测装置与上述实施例中车辆损伤特征检测方法一一对应。如图7所示,该车辆损伤特征检测装置包括接收模块11、输入模块12、提取模块13、输出模块14和识别模块15。各功能模块详细说明如下:
接收模块11,用于接收车辆损伤检测指令之后,获取待检测车辆损伤图像;所述待检测车辆损伤图像包含至少一处车辆被损伤位置的图像;
输入模块12,用于将所述待检测车辆损伤图像输入无监督领域自适应网络模型;所述无监督领域自适应网络模型包括基于pytorch的迁移学习模型、强局部特征自适应模型、弱全局特征自适应模型和正则化模型;
提取模块13,用于通过所述基于pytorch的迁移学习模型提取所述待检测车辆损伤图像的车辆特征并生成局部特征图和全局特征图;所述车辆特征为通过迁移学习后与车辆相关的特征;
输出模块14,用于通过所述基于pytorch的迁移学习模型根据所述车辆特征输出迁移特征向量组,同时通过所述强局部特征自适应模型获取第一自适应特征向量组,以及通过所述弱全局特征自适应模型获取第二自适应特征向量组;所述第一自适应特征向量组为所述强局部特征自适应模型根据自所述局部特征图中提取的第一车辆损伤特征获取并输出;所述第二自适应特征向量组为所述弱全局特征自适应模型根据自所述全局特征图中提取的第二车辆损伤特征获取并输出;
识别模块15,用于将所述迁移特征向量组、所述第一自适应特征向量组和所述第二自适应特征向量组输入所述正则化模型,通过所述正则化模型对所述迁移特征向量组、所述第一自适应特征向量组和所述第二自适应特征向量组进行正则化处理,得到包含损伤类型和损伤区域的识别结果;所述识别结果表征了所述待检测车辆损伤图像中包含所有被损伤的类型及对应的损伤区域的结果。
在一实施例中,如图8所示,所述输出模块14包括:
卷积单元41,用于将所述局部特征图输入所述强局部特征自适应模型中的局部卷积层,通过所述局部卷积层提取所述局部特征图中的所述第一车辆损伤特征,得到局部特征向量图;
池化单元42,用于将所述局部特征向量图输入所述强局部特征自适应模型中的池化层,通过所述池化层对所述局部特征向量图进行池化处理,得到局部池化矩阵;
全连接单元43,用于将所述局部池化矩阵输入所述强局部特征自适应模型中的全连接层,通过所述全连接层对所述局部池化矩阵进行特征连接,得到局部连接矩阵;
回归单元44,用于将所述局部连接矩阵输入所述强局部特征自适应模型中的Softmax层,通过所述Softmax层对所述局部连接矩阵进行回归处理,得到所述局部特征图对应的第一自适应特征向量组。
关于车辆损伤特征检测装置的具体限定可以参见上文中对于车辆损伤特征检测方法的限定,在此不再赘述。上述车辆损伤特征检测装置中的各个模块可全部或部分通过软件、硬件及其组合来实现。上述各模块可以硬件形式内嵌于或独立于计算机设备中的处理器中,也可以以软件形式存储于计算机设备中的存储器中,以便于处理器调用执行以上各个模块对应的操作。
在一个实施例中,提供了一种计算机设备,该计算机设备可以是服务器,其内部结构图可以如图9所示。该计算机设备包括通过系统总线连接的处理器、存储器、网络接口和数据库。其中,该计算机设备的处理器用于提供计算和控制能力。该计算机设备的存储器包括可读存储介质、内存储器。该可读存储介质存储有操作系统、计算机可读指令和数据库。该内存储器为可读存储介质中的操作系统和计算机可读指令的运行提供环境。该计算机设备的网络接口用于与外部的终端通过网络连接通信。该计算机可读指令被处理器执行时实现一种车辆损伤特征检测方法。本实施例所提供的可读存储介质包括非易失性可读存储介质和易失性可读存储介质。
在一个实施例中,提供了一种计算机设备,包括存储器、处理器及存储在存储器上并可在处理器上运行的计算机可读指令,处理器执行计算机可读指令时实现上述实施例中车辆损伤特征检测方法。
在一个实施例中,提供了一个或多个存储有计算机可读指令的可读存储介质,本实施例所提供的可读存储介质包括非易失性可读存储介质和易失性可读存储介质;该可读存储介质上存储有计算机可读指令,该计算机可读指令被一个或多个处理器执行时,使得一个或多个处理器实现上述实施例中车辆损伤特征检测方法。
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,是可以通过计算机程序来指令相关的硬件来完成,所述的计算机程序可存储于一非易失性计算机可读取存储介质中,该计算机程序在执行时,可包括如上述各方法的实施例的流程。其中,本申请所提供的各实施例中所使用的对存储器、存储、数据库或其它介质的任何引用,均可包括非易失性和/或易失性存储器。非易失性存储器可包括只读存储器(ROM)、可编程ROM(PROM)、电可编程ROM(EPROM)、电可擦除可编程ROM(EEPROM)或闪存。易失性存储器可包括随机存取存储器(RAM)或者外部高速缓冲存储器。作为说明而非局限,RAM以多种形式可得,诸如静态RAM(SRAM)、动态RAM(DRAM)、同步DRAM(SDRAM)、双数据率SDRAM(DDRSDRAM)、增强型SDRAM(ESDRAM)、同步链路(Synchlink)DRAM(SLDRAM)、存储器总线(Rambus)直接RAM(RDRAM)、直接存储器总线动态RAM(DRDRAM)、以及存储器总线动态RAM(RDRAM)等。
本申请所指区块链是分布式数据存储、点对点传输、共识机制、加密算法等计算机技术的新型应用模式。区块链(Blockchain),本质上是一个去中心化的数据库,是一串使用密码学方法相关联产生的数据块,每一个数据块中包含了一批次网络交易的信息,用于验证其信息的有效性(防伪)和生成下一个区块。区块链可以包括区块链底层平台、平台产品服务层以及应用服务层等。
所属领域的技术人员可以清楚地了解到,为了描述的方便和简洁,仅以上述各功能单元、模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能单元、模块完成,即将所述装置的内部结构划分成不同的功能单元或模块,以完成以上描述的全部或者部分功能。
以上所述实施例仅用以说明本申请的技术方案,而非对其限制;尽管参照前述实施例对本申请进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本申请各实施例技术方案的精神和范围,均应包含在本申请的保护范围之内。

Claims (20)

  1. 一种车辆损伤特征检测方法,其中,包括:
    接收车辆损伤检测指令之后,获取待检测车辆损伤图像;所述待检测车辆损伤图像包含至少一处车辆被损伤位置的图像;
    将所述待检测车辆损伤图像输入无监督领域自适应网络模型;所述无监督领域自适应网络模型包括基于pytorch的迁移学习模型、强局部特征自适应模型、弱全局特征自适应模型和正则化模型;
    通过所述基于pytorch的迁移学习模型提取所述待检测车辆损伤图像的车辆特征并生成局部特征图和全局特征图;所述车辆特征为通过迁移学习后与车辆相关的特征;
    通过所述基于pytorch的迁移学习模型根据所述车辆特征输出迁移特征向量组,同时通过所述强局部特征自适应模型获取第一自适应特征向量组,以及通过所述弱全局特征自适应模型获取第二自适应特征向量组;所述第一自适应特征向量组为所述强局部特征自适应模型根据自所述局部特征图中提取的第一车辆损伤特征获取并输出;所述第二自适应特征向量组为所述弱全局特征自适应模型根据自所述全局特征图中提取的第二车辆损伤特征获取并输出;
    将所述迁移特征向量组、所述第一自适应特征向量组和所述第二自适应特征向量组输入所述正则化模型,通过所述正则化模型对所述迁移特征向量组、所述第一自适应特征向量组和所述第二自适应特征向量组进行正则化处理,得到包含损伤类型和损伤区域的识别结果;所述识别结果表征了所述待检测车辆损伤图像中包含所有被损伤的类型及对应的损伤区域的结果。
  2. 如权利要求1所述的车辆损伤特征检测方法,其中,所述将所述待检测车辆损伤图像输入无监督领域自适应网络模型之前,包括:
    获取车损样本集;所述车损样本集包括车损样本图像,一个所述车损样本图像与一个损伤标签组关联;所述损伤标签组包括至少一个损伤标签类型和至少一个损伤标签区域;
    将所述车损样本图像输入含有初始参数的自适应网络模型;
    通过所述自适应网络模型对所述车损样本图像进行训练特征提取,获取所述自适应网络模型根据所述训练特征输出的所述车损样本图像对应的训练结果;所述训练特征包括所述车辆特征、所述第一车辆损伤特征和所述第二车辆损伤特征;所述训练结果包括至少一个样本损伤类型和至少一个样本损伤区域;
    将所述车损样本图像对应的所有所述损伤标签类型、所有所述损伤标签区域、所有所述样本损伤类型、所有所述样本损伤区域输入所述自适应网络模型中的损失模型,及通过所述损失模型的损失函数计算出损失值;
    在所述损失值达到预设的收敛条件时,将收敛之后的所述自适应网络模型记录为无监督领域自适应网络模型,并将所述无监督领域自适应网络模型存储在区块链中。
  3. 如权利要求2所述的车辆损伤特征检测方法,其中,所述通过所述损失模型的损失函数计算出损失值之后,还包括:
    在所述损失值未达到预设的收敛条件时,迭代更新所述自适应网络模型的初始参数,直至所述损失值达到所述预设的收敛条件时,将收敛之后的所述自适应网络模型记录为无监督领域自适应网络模型,并将所述无监督领域自适应网络模型存储在区块链中。
  4. 如权利要求2所述的车辆损伤特征检测方法,其中,所述将所述车损样本图像输入含有初始参数的自适应网络模型之前,包括:
    通过迁移学习,获取训练完成的pytorch模型的所有迁移参数,将所有所述迁移参数确定为所述自适应网络模型中的所述初始参数。
  5. 如权利要求1所述的车辆损伤特征检测方法,其中,所述强局部特征自适应模型对 所述局部特征图进行第一车辆损伤特征的提取,所述强局部特征自适应模型根据所述第一车辆损伤特征输出第一自适应特征向量组,包括:
    将所述局部特征图输入所述强局部特征自适应模型中的局部卷积层,通过所述局部卷积层提取所述局部特征图中的所述第一车辆损伤特征,得到局部特征向量图;
    将所述局部特征向量图输入所述强局部特征自适应模型中的池化层,通过所述池化层对所述局部特征向量图进行池化处理,得到局部池化矩阵;
    将所述局部池化矩阵输入所述强局部特征自适应模型中的全连接层,通过所述全连接层对所述局部池化矩阵进行特征连接,得到局部连接矩阵;
    将所述局部连接矩阵输入所述强局部特征自适应模型中的Softmax层,通过所述Softmax层对所述局部连接矩阵进行回归处理,得到所述局部特征图对应的第一自适应特征向量组。
  6. 如权利要求1所述的车辆损伤特征检测方法,其中,所述弱全局特征自适应模型对所述全局特征图进行第二车辆损伤特征的提取,所述全局特征自适应模型根据所述第二车辆损伤特征输出第二自适应特征向量组,包括:
    将所述全局特征图输入所述弱全局特征自适应模型中的第一全局卷积层,通过所述第一全局卷积层提取所述全局特征图的第二车辆损伤特征,得到全局特征向量图;
    将所述全局特征向量图输入所述弱全局特征自适应模型中的Sigmoid激活层,通过所述Sigmoid激活层对所述全局特征向量进行激活处理,得到所述全局特征图对应的第二自适应特征向量组。
  7. 一种车辆损伤特征检测装置,其中,包括:
    接收模块,用于接收车辆损伤检测指令之后,获取待检测车辆损伤图像;所述待检测车辆损伤图像包含至少一处车辆被损伤位置的图像;
    输入模块,用于将所述待检测车辆损伤图像输入无监督领域自适应网络模型;所述无监督领域自适应网络模型包括基于pytorch的迁移学习模型、强局部特征自适应模型、弱全局特征自适应模型和正则化模型;
    提取模块,用于通过所述基于pytorch的迁移学习模型提取所述待检测车辆损伤图像的车辆特征并生成局部特征图和全局特征图;所述车辆特征为通过迁移学习后与车辆相关的特征;
    输出模块,用于通过所述基于pytorch的迁移学习模型根据所述车辆特征输出迁移特征向量组,同时通过所述强局部特征自适应模型获取第一自适应特征向量组,以及通过所述弱全局特征自适应模型获取第二自适应特征向量组;所述第一自适应特征向量组为所述强局部特征自适应模型根据自所述局部特征图中提取的第一车辆损伤特征获取并输出;所述第二自适应特征向量组为所述弱全局特征自适应模型根据自所述全局特征图中提取的第二车辆损伤特征获取并输出;
    识别模块,用于将所述迁移特征向量组、所述第一自适应特征向量组和所述第二自适应特征向量组输入所述正则化模型,通过所述正则化模型对所述迁移特征向量组、所述第一自适应特征向量组和所述第二自适应特征向量组进行正则化处理,得到包含损伤类型和损伤区域的识别结果;所述识别结果表征了所述待检测车辆损伤图像中包含所有被损伤的类型及对应的损伤区域的结果。
  8. 如权利要求7所述的车辆损伤特征检测装置,其中,所述输出模块包括:
    卷积单元,用于将所述局部特征图输入所述强局部特征自适应模型中的局部卷积层,通过所述局部卷积层提取所述局部特征图中的所述第一车辆损伤特征,得到局部特征向量图;
    池化单元,用于将所述局部特征向量图输入所述强局部特征自适应模型中的池化层,通过所述池化层对所述局部特征向量图进行池化处理,得到局部池化矩阵;
    全连接单元,用于将所述局部池化矩阵输入所述强局部特征自适应模型中的全连接层,通过所述全连接层对所述局部池化矩阵进行特征连接,得到局部连接矩阵;
    回归单元,用于将所述局部连接矩阵输入所述强局部特征自适应模型中的Softmax层,通过所述Softmax层对所述局部连接矩阵进行回归处理,得到所述局部特征图对应的第一自适应特征向量组。
  9. 一种计算机设备,包括存储器、处理器以及存储在所述存储器中并可在所述处理器上运行的计算机可读指令,其中,所述处理器执行所述计算机可读指令时实现如下步骤:
    接收车辆损伤检测指令之后,获取待检测车辆损伤图像;所述待检测车辆损伤图像包含至少一处车辆被损伤位置的图像;
    将所述待检测车辆损伤图像输入无监督领域自适应网络模型;所述无监督领域自适应网络模型包括基于pytorch的迁移学习模型、强局部特征自适应模型、弱全局特征自适应模型和正则化模型;
    通过所述基于pytorch的迁移学习模型提取所述待检测车辆损伤图像的车辆特征并生成局部特征图和全局特征图;所述车辆特征为通过迁移学习后与车辆相关的特征;
    通过所述基于pytorch的迁移学习模型根据所述车辆特征输出迁移特征向量组,同时通过所述强局部特征自适应模型获取第一自适应特征向量组,以及通过所述弱全局特征自适应模型获取第二自适应特征向量组;所述第一自适应特征向量组为所述强局部特征自适应模型根据自所述局部特征图中提取的第一车辆损伤特征获取并输出;所述第二自适应特征向量组为所述弱全局特征自适应模型根据自所述全局特征图中提取的第二车辆损伤特征获取并输出;
    将所述迁移特征向量组、所述第一自适应特征向量组和所述第二自适应特征向量组输入所述正则化模型,通过所述正则化模型对所述迁移特征向量组、所述第一自适应特征向量组和所述第二自适应特征向量组进行正则化处理,得到包含损伤类型和损伤区域的识别结果;所述识别结果表征了所述待检测车辆损伤图像中包含所有被损伤的类型及对应的损伤区域的结果。
  10. 如权利要求9所述的计算机设备,其中,所述将所述待检测车辆损伤图像输入无监督领域自适应网络模型之前,所述处理器执行所述计算机可读指令时还实现如下步骤:
    获取车损样本集;所述车损样本集包括车损样本图像,一个所述车损样本图像与一个损伤标签组关联;所述损伤标签组包括至少一个损伤标签类型和至少一个损伤标签区域;
    将所述车损样本图像输入含有初始参数的自适应网络模型;
    通过所述自适应网络模型对所述车损样本图像进行训练特征提取,获取所述自适应网络模型根据所述训练特征输出的所述车损样本图像对应的训练结果;所述训练特征包括所述车辆特征、所述第一车辆损伤特征和所述第二车辆损伤特征;所述训练结果包括至少一个样本损伤类型和至少一个样本损伤区域;
    将所述车损样本图像对应的所有所述损伤标签类型、所有所述损伤标签区域、所有所述样本损伤类型、所有所述样本损伤区域输入所述自适应网络模型中的损失模型,及通过所述损失模型的损失函数计算出损失值;
    在所述损失值达到预设的收敛条件时,将收敛之后的所述自适应网络模型记录为无监督领域自适应网络模型,并将所述无监督领域自适应网络模型存储在区块链中。
  11. 如权利要求10所述的计算机设备,其中,所述通过所述损失模型的损失函数计算出损失值之后,所述处理器执行所述计算机可读指令时还实现如下步骤:
    在所述损失值未达到预设的收敛条件时,迭代更新所述自适应网络模型的初始参数,直至所述损失值达到所述预设的收敛条件时,将收敛之后的所述自适应网络模型记录为无监督领域自适应网络模型,并将所述无监督领域自适应网络模型存储在区块链中。
  12. 如权利要求10所述的计算机设备,其中,所述将所述车损样本图像输入含有初始 参数的自适应网络模型之前,所述处理器执行所述计算机可读指令时还实现如下步骤:
    通过迁移学习,获取训练完成的pytorch模型的所有迁移参数,将所有所述迁移参数确定为所述自适应网络模型中的所述初始参数。
  13. 如权利要求9所述的计算机设备,其中,所述强局部特征自适应模型对所述局部特征图进行第一车辆损伤特征的提取,所述强局部特征自适应模型根据所述第一车辆损伤特征输出第一自适应特征向量组,包括:
    将所述局部特征图输入所述强局部特征自适应模型中的局部卷积层,通过所述局部卷积层提取所述局部特征图中的所述第一车辆损伤特征,得到局部特征向量图;
    将所述局部特征向量图输入所述强局部特征自适应模型中的池化层,通过所述池化层对所述局部特征向量图进行池化处理,得到局部池化矩阵;
    将所述局部池化矩阵输入所述强局部特征自适应模型中的全连接层,通过所述全连接层对所述局部池化矩阵进行特征连接,得到局部连接矩阵;
    将所述局部连接矩阵输入所述强局部特征自适应模型中的Softmax层,通过所述Softmax层对所述局部连接矩阵进行回归处理,得到所述局部特征图对应的第一自适应特征向量组。
  14. 如权利要求9所述的计算机设备,其中,所述弱全局特征自适应模型对所述全局特征图进行第二车辆损伤特征的提取,所述全局特征自适应模型根据所述第二车辆损伤特征输出第二自适应特征向量组,包括:
    将所述全局特征图输入所述弱全局特征自适应模型中的第一全局卷积层,通过所述第一全局卷积层提取所述全局特征图的第二车辆损伤特征,得到全局特征向量图;
    将所述全局特征向量图输入所述弱全局特征自适应模型中的Sigmoid激活层,通过所述Sigmoid激活层对所述全局特征向量进行激活处理,得到所述全局特征图对应的第二自适应特征向量组。
  15. 一个或多个存储有计算机可读指令的可读存储介质,其中,所述计算机可读指令被一个或多个处理器执行时,使得所述一个或多个处理器执行如下步骤:
    接收车辆损伤检测指令之后,获取待检测车辆损伤图像;所述待检测车辆损伤图像包含至少一处车辆被损伤位置的图像;
    将所述待检测车辆损伤图像输入无监督领域自适应网络模型;所述无监督领域自适应网络模型包括基于pytorch的迁移学习模型、强局部特征自适应模型、弱全局特征自适应模型和正则化模型;
    通过所述基于pytorch的迁移学习模型提取所述待检测车辆损伤图像的车辆特征并生成局部特征图和全局特征图;所述车辆特征为通过迁移学习后与车辆相关的特征;
    通过所述基于pytorch的迁移学习模型根据所述车辆特征输出迁移特征向量组,同时通过所述强局部特征自适应模型获取第一自适应特征向量组,以及通过所述弱全局特征自适应模型获取第二自适应特征向量组;所述第一自适应特征向量组为所述强局部特征自适应模型根据自所述局部特征图中提取的第一车辆损伤特征获取并输出;所述第二自适应特征向量组为所述弱全局特征自适应模型根据自所述全局特征图中提取的第二车辆损伤特征获取并输出;
    将所述迁移特征向量组、所述第一自适应特征向量组和所述第二自适应特征向量组输入所述正则化模型,通过所述正则化模型对所述迁移特征向量组、所述第一自适应特征向量组和所述第二自适应特征向量组进行正则化处理,得到包含损伤类型和损伤区域的识别结果;所述识别结果表征了所述待检测车辆损伤图像中包含所有被损伤的类型及对应的损伤区域的结果。
  16. 如权利要求15所述的可读存储介质,其中,所述将所述待检测车辆损伤图像输入无监督领域自适应网络模型之前,所述计算机可读指令被一个或多个处理器执行时,使得 所述一个或多个处理器还执行如下步骤:
    获取车损样本集;所述车损样本集包括车损样本图像,一个所述车损样本图像与一个损伤标签组关联;所述损伤标签组包括至少一个损伤标签类型和至少一个损伤标签区域;
    将所述车损样本图像输入含有初始参数的自适应网络模型;
    通过所述自适应网络模型对所述车损样本图像进行训练特征提取,获取所述自适应网络模型根据所述训练特征输出的所述车损样本图像对应的训练结果;所述训练特征包括所述车辆特征、所述第一车辆损伤特征和所述第二车辆损伤特征;所述训练结果包括至少一个样本损伤类型和至少一个样本损伤区域;
    将所述车损样本图像对应的所有所述损伤标签类型、所有所述损伤标签区域、所有所述样本损伤类型、所有所述样本损伤区域输入所述自适应网络模型中的损失模型,及通过所述损失模型的损失函数计算出损失值;
    在所述损失值达到预设的收敛条件时,将收敛之后的所述自适应网络模型记录为无监督领域自适应网络模型,并将所述无监督领域自适应网络模型存储在区块链中。
  17. 如权利要求16所述的可读存储介质,其中,所述通过所述损失模型的损失函数计算出损失值之后,所述计算机可读指令被一个或多个处理器执行时,使得所述一个或多个处理器还执行如下步骤:
    在所述损失值未达到预设的收敛条件时,迭代更新所述自适应网络模型的初始参数,直至所述损失值达到所述预设的收敛条件时,将收敛之后的所述自适应网络模型记录为无监督领域自适应网络模型,并将所述无监督领域自适应网络模型存储在区块链中。
  18. 如权利要求16所述的可读存储介质,其中,所述将所述车损样本图像输入含有初始参数的自适应网络模型之前,所述计算机可读指令被一个或多个处理器执行时,使得所述一个或多个处理器还执行如下步骤:
    通过迁移学习,获取训练完成的pytorch模型的所有迁移参数,将所有所述迁移参数确定为所述自适应网络模型中的所述初始参数。
  19. 如权利要求15所述的可读存储介质,其中,所述强局部特征自适应模型对所述局部特征图进行第一车辆损伤特征的提取,所述强局部特征自适应模型根据所述第一车辆损伤特征输出第一自适应特征向量组,包括:
    将所述局部特征图输入所述强局部特征自适应模型中的局部卷积层,通过所述局部卷积层提取所述局部特征图中的所述第一车辆损伤特征,得到局部特征向量图;
    将所述局部特征向量图输入所述强局部特征自适应模型中的池化层,通过所述池化层对所述局部特征向量图进行池化处理,得到局部池化矩阵;
    将所述局部池化矩阵输入所述强局部特征自适应模型中的全连接层,通过所述全连接层对所述局部池化矩阵进行特征连接,得到局部连接矩阵;
    将所述局部连接矩阵输入所述强局部特征自适应模型中的Softmax层,通过所述Softmax层对所述局部连接矩阵进行回归处理,得到所述局部特征图对应的第一自适应特征向量组。
  20. 如权利要求15所述的可读存储介质,其中,所述弱全局特征自适应模型对所述全局特征图进行第二车辆损伤特征的提取,所述全局特征自适应模型根据所述第二车辆损伤特征输出第二自适应特征向量组,包括:
    将所述全局特征图输入所述弱全局特征自适应模型中的第一全局卷积层,通过所述第一全局卷积层提取所述全局特征图的第二车辆损伤特征,得到全局特征向量图;
    将所述全局特征向量图输入所述弱全局特征自适应模型中的Sigmoid激活层,通过所述Sigmoid激活层对所述全局特征向量进行激活处理,得到所述全局特征图对应的第二自适应特征向量组。
PCT/CN2020/116741 2020-05-27 2020-09-22 车辆损伤特征检测方法、装置、计算机设备及存储介质 WO2021114809A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010462160.3A CN111666990A (zh) 2020-05-27 2020-05-27 车辆损伤特征检测方法、装置、计算机设备及存储介质
CN202010462160.3 2020-05-27

Publications (1)

Publication Number Publication Date
WO2021114809A1 true WO2021114809A1 (zh) 2021-06-17

Family

ID=72384809

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/116741 WO2021114809A1 (zh) 2020-05-27 2020-09-22 车辆损伤特征检测方法、装置、计算机设备及存储介质

Country Status (2)

Country Link
CN (1) CN111666990A (zh)
WO (1) WO2021114809A1 (zh)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111666990A (zh) * 2020-05-27 2020-09-15 平安科技(深圳)有限公司 车辆损伤特征检测方法、装置、计算机设备及存储介质
CN112633373A (zh) * 2020-12-22 2021-04-09 东软睿驰汽车技术(沈阳)有限公司 一种车辆工况预测方法及装置
CN112907576B (zh) * 2021-03-25 2024-02-02 平安科技(深圳)有限公司 车辆损伤等级检测方法、装置、计算机设备及存储介质
CN113657409A (zh) * 2021-08-16 2021-11-16 平安科技(深圳)有限公司 车辆损失检测方法、装置、电子设备及存储介质
CN115115611B (zh) * 2022-07-21 2023-04-07 明觉科技(北京)有限公司 车辆损伤识别方法、装置、电子设备和存储介质

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108446618A (zh) * 2018-03-09 2018-08-24 平安科技(深圳)有限公司 车辆定损方法、装置、电子设备及存储介质
CN108734702A (zh) * 2018-04-26 2018-11-02 平安科技(深圳)有限公司 车损判定方法、服务器及存储介质
CN110570316A (zh) * 2018-08-31 2019-12-13 阿里巴巴集团控股有限公司 训练损伤识别模型的方法及装置
CN111666990A (zh) * 2020-05-27 2020-09-15 平安科技(深圳)有限公司 车辆损伤特征检测方法、装置、计算机设备及存储介质

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
KUNIAKI SAITO, YOSHITAKA USHIKU, TATSUYA HARADA, KATE SAENKO: "Strong-Weak Distribution Alignment for Adaptive Object Detection", 2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 15 June 2019 (2019-06-15), pages 6956 - 6965, XP033687423, DOI: 10.1109/CVPR.2019.00712 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113723356A (zh) * 2021-09-15 2021-11-30 北京航空航天大学 异质特征关系互补的车辆重识别方法和装置
CN113723356B (zh) * 2021-09-15 2023-09-19 北京航空航天大学 异质特征关系互补的车辆重识别方法和装置
CN116797533A (zh) * 2023-03-24 2023-09-22 东莞市冠锦电子科技有限公司 电源适配器的外观缺陷检测方法及其***
CN116797533B (zh) * 2023-03-24 2024-01-23 东莞市冠锦电子科技有限公司 电源适配器的外观缺陷检测方法及其***
CN117447068A (zh) * 2023-10-26 2024-01-26 浙江欧诗漫晶体纤维有限公司 多晶莫来石纤维生产线及方法
CN117593301A (zh) * 2024-01-18 2024-02-23 深圳市奥斯珂科技有限公司 基于机器视觉的内存条损伤快速检测方法及***
CN117593301B (zh) * 2024-01-18 2024-04-30 深圳市奥斯珂科技有限公司 基于机器视觉的内存条损伤快速检测方法及***

Also Published As

Publication number Publication date
CN111666990A (zh) 2020-09-15

Similar Documents

Publication Publication Date Title
WO2021114809A1 (zh) 车辆损伤特征检测方法、装置、计算机设备及存储介质
WO2021135499A1 (zh) 损伤检测模型训练、车损检测方法、装置、设备及介质
CN111860670B (zh) 域自适应模型训练、图像检测方法、装置、设备及介质
WO2021017261A1 (zh) 识别模型训练方法、图像识别方法、装置、设备及介质
WO2021135500A1 (zh) 车损检测模型训练、车损检测方法、装置、设备及介质
CN110807491A (zh) 车牌图像清晰度模型训练方法、清晰度检测方法及装置
CN109840524B (zh) 文字的类型识别方法、装置、设备及存储介质
CN107886082B (zh) 图像中数学公式检测方法、装置、计算机设备及存储介质
WO2022134354A1 (zh) 车损检测模型训练、车损检测方法、装置、设备及介质
CN111144285B (zh) 胖瘦程度识别方法、装置、设备及介质
CN113705685B (zh) 疾病特征识别模型训练、疾病特征识别方法、装置及设备
CN115797735A (zh) 目标检测方法、装置、设备和存储介质
CN110728680A (zh) 行车记录仪检测方法、装置、计算机设备和存储介质
CN109101984B (zh) 一种基于卷积神经网络的图像识别方法及装置
CN110276802B (zh) 医学图像中病症组织定位方法、装置与设备
CN112241705A (zh) 基于分类回归的目标检测模型训练方法和目标检测方法
CN110751623A (zh) 基于联合特征的缺陷检测方法、装置、设备及存储介质
CN116091596A (zh) 一种自下而上的多人2d人体姿态估计方法及装置
CN115713769A (zh) 文本检测模型的训练方法、装置、计算机设备和存储介质
CN111666973B (zh) 车辆损伤图片处理方法、装置、计算机设备及存储介质
CN114332915A (zh) 人体属性检测方法、装置、计算机设备及存储介质
CN110956102A (zh) 银行柜台监控方法、装置、计算机设备和存储介质
CN112347893B (zh) 用于视频行为识别的模型训练方法、装置和计算机设备
CN116952954B (zh) 一种基于条纹光的凹凸检测方法、装置、设备及存储介质
CN111428679B (zh) 影像识别方法、装置和设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20899826

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20899826

Country of ref document: EP

Kind code of ref document: A1