WO2017124990A1 - Method, system, device and readable storage medium for implementing insurance claim anti-fraud based on multiple picture consistency - Google Patents

Method, system, device and readable storage medium for implementing insurance claim anti-fraud based on multiple picture consistency

Info

Publication number
WO2017124990A1
WO2017124990A1 (PCT/CN2017/071318)
Authority
WO
WIPO (PCT)
Prior art keywords
photos
photo
loss
vehicle
fixed
Prior art date
Application number
PCT/CN2017/071318
Other languages
English (en)
French (fr)
Inventor
王健宗
李虹杰
肖京
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司 filed Critical 平安科技(深圳)有限公司
Priority to JP2018524765A priority Critical patent/JP6452186B2/ja
Priority to SG11201800342XA priority patent/SG11201800342XA/en
Priority to KR1020187019505A priority patent/KR102138082B1/ko
Priority to US15/736,352 priority patent/US10410292B2/en
Priority to AU2017209231A priority patent/AU2017209231B2/en
Priority to EP17741021.4A priority patent/EP3407229A4/en
Publication of WO2017124990A1 publication Critical patent/WO2017124990A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q40/08Insurance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/243Classification techniques relating to the number of classes
    • G06F18/2431Multiple classes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/10Office automation; Time management
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757Matching configurations of points or features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761Proximity, similarity or dissimilarity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • G06T2207/10012Stereo images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30236Traffic on road, railway or crossing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior

Definitions

  • The invention relates to the field of financial service technology, and in particular to a method, system, device and readable storage medium for implementing insurance claim anti-fraud based on multiple picture consistency.
  • At present, vehicle damage pictures have to be checked manually for tampering, which is costly in time, inefficient, and offers no guarantee of accuracy.
  • Moreover, many modified pictures are difficult to detect quickly with the naked eye, especially when facing multiple pictures and being unsure of the location of the falsified region.
  • A method for implementing insurance claim anti-fraud based on multiple picture consistency, comprising:
  • the fixed-loss photos of each photo set are grouped in pairs, and, according to a key point matching algorithm, the key point features corresponding to each set are matched against the photos in each pair of that set, so that at least one group of related key points is matched for the fixed-loss photos in each pair;
  • according to the related key points corresponding to each pair, a linear equation is used to calculate the feature point transformation matrix corresponding to each pair, and the corresponding feature point transformation matrix is used to convert one of the photos in each pair into a photo to be verified having the same shooting angle as the other photo of that pair; the photo to be verified is matched against the other photo in the pair by feature parameters; and
  • when the feature parameters of the photo to be verified and the other photo in the pair do not match, reminder information is generated to indicate that the received pictures involve fraudulent behavior.
  • An insurance claims anti-fraud system comprising:
  • a photo receiving module configured to receive a plurality of fixed-loss photos of the vehicle, taken from different shooting angles and uploaded by the user through a terminal;
  • a classification module configured to analyze a vehicle part corresponding to each fixed-loss photo by using an analysis model, and classify the fixed-loss photos to divide the fixed-loss photos of the same vehicle part into the same photo collection;
  • a key point detecting module configured to perform key point detection on the fixed loss photos in each photo collection, and obtain key point features of the vehicle parts corresponding to the respective photo sets;
  • a reconstruction module configured to group the fixed-loss photos of each photo set in pairs, to match, according to a key point matching algorithm, the key point features corresponding to each set against the photos in each pair of that set so that at least one group of related key points is matched for the fixed-loss photos in each pair, and to calculate, according to the related key points corresponding to each pair, the feature point transformation matrix corresponding to each pair using a linear equation; and
  • a verification module configured to convert, using the corresponding feature point transformation matrix, one of the photos in each pair into a photo to be verified having the same shooting angle as the other photo of that pair, to match the photo to be verified against the other photo in the pair by feature parameters, and, when the feature parameters of the photo to be verified and the other photo in the pair do not match, to generate reminder information to indicate that the received pictures involve fraudulent behavior.
  • An insurance claim anti-fraud device comprising a processing unit, and an insurance claims anti-fraud system, an input/output unit, a communication unit and a storage unit connected to the processing unit;
  • the input/output unit is configured to input a user instruction and output response data of the insurance claim anti-fraud device to the input user instruction;
  • the communication unit is configured to communicate with a predetermined terminal or a background server;
  • the storage unit is configured to store the insurance claims anti-fraud system and the operation data of the insurance claims anti-fraud system;
  • the processing unit is configured to invoke and execute the insurance claims anti-fraud system to perform the following steps:
  • the fixed-loss photos of each photo set are grouped in pairs, and, according to a key point matching algorithm, the key point features corresponding to each set are matched against the photos in each pair of that set, so that at least one group of related key points is matched for the fixed-loss photos in each pair;
  • according to the related key points corresponding to each pair, a linear equation is used to calculate the feature point transformation matrix corresponding to each pair, and the corresponding feature point transformation matrix is used to convert one of the photos in each pair into a photo to be verified having the same shooting angle as the other photo of that pair; the photo to be verified is matched against the other photo in the pair by feature parameters; and
  • when the feature parameters of the photo to be verified and the other photo in the pair do not match, reminder information is generated to indicate that the received pictures involve fraudulent behavior.
  • a computer readable storage medium storing one or more programs, the one or more programs being used by one or more processors to perform the steps of:
  • the fixed-loss photos of each photo set are grouped in pairs, and, according to a key point matching algorithm, the key point features corresponding to each set are matched against the photos in each pair of that set, so that at least one group of related key points is matched for the fixed-loss photos in each pair;
  • according to the related key points corresponding to each pair, a linear equation is used to calculate the feature point transformation matrix corresponding to each pair, and the corresponding feature point transformation matrix is used to convert one of the photos in each pair into a photo to be verified having the same shooting angle as the other photo of that pair; the photo to be verified is matched against the other photo in the pair by feature parameters; and
  • when the feature parameters of the photo to be verified and the other photo in the pair do not match, reminder information is generated to indicate that the received pictures involve fraudulent behavior.
  • With the method, system, device and readable storage medium for implementing insurance claim anti-fraud based on multiple picture consistency according to the present invention, when a car accident occurs and the loss is assessed at a repair shop, the vehicle owner and/or the repair shop are required to take multiple pictures of the vehicle part from different angles; by comparing the pictures taken from different angles, performing a spatial transformation and checking whether the parts are consistent, the owner and the repair shop are prevented from tampering with damage pictures and exaggerating the degree of loss.
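The end-to-end flow summarized above can be pictured with a short orchestration sketch. This is a hypothetical outline in Python: every helper passed in (classify_part, detect_keypoints, match_keypoints, estimate_transform, warp_to_other_view, parameters_match) is a placeholder name standing in for one of the modules described later in the text, not an interface taken from the patent.

```python
# Hypothetical outline of the claimed flow; all helper callables are placeholders.
def anti_fraud_check(photos, classify_part, detect_keypoints, match_keypoints,
                     estimate_transform, warp_to_other_view, parameters_match):
    alerts = []
    # 1. Put fixed-loss photos of the same vehicle part into the same set.
    by_part = {}
    for photo in photos:
        by_part.setdefault(classify_part(photo), []).append(photo)
    # 2. For every pair of photos of the same part, reconstruct and compare.
    for part, group in by_part.items():
        for i in range(len(group)):
            for j in range(i + 1, len(group)):
                a, b = group[i], group[j]
                pairs = match_keypoints(detect_keypoints(a), detect_keypoints(b))
                transform = estimate_transform(pairs)         # feature point transformation matrix
                candidate = warp_to_other_view(a, transform)  # photo to be verified
                if not parameters_match(candidate, b):        # color / texture check
                    alerts.append((part, a, b))               # possible tampering
    return alerts
```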
  • FIG. 1A and FIG. 1B are flowcharts of a preferred embodiment of the method for implementing insurance claim anti-fraud based on multiple picture consistency according to the present invention.
  • FIG. 2 is a flow chart of a method for analyzing an analysis model of photos of various parts of a vehicle in a preferred embodiment of the method for implementing insurance claims anti-fraud based on multiple picture consistency.
  • FIG. 3 is a hardware environment diagram of a first embodiment of an insurance claims anti-fraud device for implementing insurance claims and anti-fraud based on multiple picture consistency according to the present invention.
  • FIG. 4 is a hardware environment diagram of a second embodiment of the insurance claims anti-fraud device for implementing insurance claims and anti-fraud based on multiple picture consistency according to the present invention.
  • FIG. 5 is a functional block diagram of a preferred embodiment of the system for implementing insurance claims anti-fraud based on multiple picture consistency.
  • Referring to FIG. 1, which is a flowchart of a preferred embodiment of the method for implementing insurance claim anti-fraud based on multiple picture consistency.
  • The method for implementing insurance claim anti-fraud based on multiple picture consistency in this embodiment is not limited to the steps shown in the flowchart; in addition, among the steps shown in the flowchart, some steps may be omitted and the order between the steps may be changed.
  • In step S10, when the vehicle has been in a car accident and the loss is being assessed at the repair shop, the photo receiving module 101 receives the fixed-loss photos uploaded through a terminal by users such as the vehicle owner and the repair shop.
  • In step S11, the photo receiving module 101 analyzes whether the shooting angles of the uploaded fixed-loss photos are the same.
  • The shooting angle of a photo can be analyzed as follows: the shadow of an object in the photo is recognized; the direction directly ahead of the object's shadow is the lens direction; and the angle between the lens direction and the plane of the object is taken as the shooting angle.
  • When the shooting angles are the same, step S12 is performed, and when the shooting angles are not the same, step S13 is performed.
  • In step S12, the photo receiving module 101 generates and sends to the terminal reminder information for continuing to collect fixed-loss photos from different angles.
  • The reminder information may be, for example: Y of the currently uploaded fixed-loss photos have the same shooting angle; please continue to collect Y-1 fixed-loss photos from other angles.
  • In step S13, the classification module 102 uses an analysis model to determine the vehicle part corresponding to each fixed-loss photo, and classifies the fixed-loss photos so that fixed-loss photos of the same vehicle part are placed in the same photo set.
  • In step S14, the classification module 102 determines whether the number of fixed-loss photos in any one photo set is less than a preset number, such as three. When the number of fixed-loss photos in a photo set is less than the preset number, step S15 is performed, and when the number of fixed-loss photos in every photo set is not less than the preset number, step S16 is performed.
  • In step S15, the classification module 102 generates and sends to the terminal reminder information for continuing to collect, from different angles, fixed-loss photos of the vehicle part corresponding to that photo set.
  • The reminder information may be, for example: Z fixed-loss photos of the current fixed-loss part X are missing; please continue to collect Z fixed-loss photos of part X from other angles.
  • step S16 the key point detecting module 103 performs key point detection on the fixed loss photos in the respective photo sets, and obtains key point features of the vehicle parts corresponding to the respective photo sets.
  • The key point detection may adopt the SIFT (Scale-invariant feature transform) key point feature detection method. SIFT is a local feature descriptor; SIFT key point features are local features of an image that remain invariant to rotation, scale change and brightness change, and remain stable to a certain degree under viewpoint changes, affine transformations and noise.
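As a concrete illustration of this SIFT step, a minimal sketch using OpenCV follows; the choice of OpenCV and the file name are assumptions made for the example only, since the patent does not prescribe any particular library.

```python
import cv2

# Load one fixed-loss photo in grayscale; SIFT operates on single-channel images.
image = cv2.imread("damage_photo.jpg", cv2.IMREAD_GRAYSCALE)

# Detect SIFT key points and compute their 128-dimensional descriptors.
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(image, None)

print(f"detected {len(keypoints)} SIFT key points")
```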
  • In step S17, the reconstruction module 104 groups the fixed-loss photos of each photo set in pairs.
  • In step S18, the reconstruction module 104, according to a key point matching algorithm, matches the key point features corresponding to each set against the photos in each pair of that set, and matches at least one group of related key points for the fixed-loss photos in each pair.
  • the key point matching algorithm may be, for example, a RANSAC (Random Sample Consensus) algorithm.
  • In this embodiment, the reconstruction module 104 matches, for the fixed-loss photos in each pair, at least one group of a preset number (e.g., 8) of related key points. For example, two photos B1 and B2 are placed in one pair, and at least one group of the preset number of key points is matched for each of B1 and B2; the key points matched in B1 are related to, and in one-to-one correspondence with, the key points matched in B2; for example, key points corresponding to the same location are related to each other and correspond one to one.
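The pairwise key point matching could be realized as in the following sketch, again assuming OpenCV: SIFT descriptors of the two photos in a pair (B1, B2) are matched with a ratio test, and RANSAC keeps only the geometrically consistent, i.e. related, key points while also estimating the fundamental matrix.

```python
import cv2
import numpy as np

def related_keypoints(img1, img2, min_pairs=8):
    """Return RANSAC-filtered corresponding key points and the matrix F."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Nearest-neighbour matching with Lowe's ratio test.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = []
    for pair in matcher.knnMatch(des1, des2, k=2):
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            good.append(pair[0])
    if len(good) < min_pairs:
        return None  # not enough related key points for this pair of photos

    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

    # RANSAC keeps only matches consistent with a single epipolar geometry.
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 3.0, 0.99)
    inliers = mask.ravel() == 1
    return pts1[inliers], pts2[inliers], F
```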
  • In step S19, the reconstruction module 104 calculates, according to the related key points corresponding to each pair, the feature point transformation matrix corresponding to each pair using a linear equation. For example, the feature point transformation matrix for converting photo B1 into photo B2 is calculated from the related key points of the two photos B1 and B2, so that stereo reconstruction (Stereo Reconstruction) can be completed.
  • The feature point transformation matrix may be a Fundamental Matrix. The role of the Fundamental Matrix is to map the feature points of one image, through a matrix transformation, to the related feature points of the other image.
  • The linear equation, and the condition that the feature point transformation matrix F must satisfy, are given in the published application (they appear there as embedded images). The linear equation can be solved using the eight related key points matched above, thereby obtaining the spatial transformation relationship F between the two images.
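Because the application publishes the equation and the condition on F only as images, they are not reproduced here. The standard eight-point formulation that the surrounding text appears to describe is sketched below as an illustration; it is the usual textbook form, not a transcription of the patent's own figures.

```latex
% Epipolar constraint between matched key points x_i = (u_i, v_i, 1)^T in one
% photo and x_i' = (u_i', v_i', 1)^T in the other, with fundamental matrix F:
\[
  x_i'^{\top} F \, x_i = 0
\]
% Expanding for a single correspondence gives one linear equation in the
% entries of F:
\[
  u_i' u_i f_{11} + u_i' v_i f_{12} + u_i' f_{13}
  + v_i' u_i f_{21} + v_i' v_i f_{22} + v_i' f_{23}
  + u_i f_{31} + v_i f_{32} + f_{33} = 0
\]
% Eight (or more) matched key points therefore give a solvable linear system;
% enforcing the usual rank-2 condition det(F) = 0 yields the spatial
% transformation relationship F between the two images.
```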
  • step S20 the verification module 105 selects one of the groups, and uses the corresponding feature point transformation matrix to convert one of the photos in the group into a photo to be verified having the same shooting angle as the other photo of the group.
  • step S21 the verification module 105 matches the to-be-verified photo with another photo in the group.
  • the parameters include features such as color, texture, and the like.
  • In step S22, the verification module 105 determines whether any parameter fails to match. For example, if the difference between the color values of the same feature is greater than a preset color threshold, the color feature parameters are judged not to match; if the similarity of the texture of the same feature is less than a preset similarity threshold (e.g., 90%), the texture feature parameters are judged not to match; and so on.
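A possible realization of the warp-and-compare check in steps S20 to S22 is sketched below in Python with OpenCV. Note one substitution: the patent converts the photo with the feature point transformation matrix, whereas this sketch warps with a homography estimated from the same matched key points, which is the common practical choice when the compared surface is roughly planar; the color-histogram comparison and the threshold value are likewise only illustrative of the "feature parameter" matching.

```python
import cv2
import numpy as np

def verify_pair(img1, img2, pts1, pts2, color_thresh=0.9):
    """Warp img1 toward img2's viewpoint and compare color statistics.

    Sketch only: the warp uses a homography estimated from the matched key
    points (a practical stand-in for the patent's feature point transformation
    matrix), and the threshold is illustrative.
    """
    H, _ = cv2.findHomography(pts1, pts2, cv2.RANSAC, 3.0)
    h, w = img2.shape[:2]
    candidate = cv2.warpPerspective(img1, H, (w, h))  # the photo to be verified

    # Color feature parameter: correlation of HSV histograms of the two photos.
    hist1 = cv2.calcHist([cv2.cvtColor(candidate, cv2.COLOR_BGR2HSV)],
                         [0, 1], None, [50, 60], [0, 180, 0, 256])
    hist2 = cv2.calcHist([cv2.cvtColor(img2, cv2.COLOR_BGR2HSV)],
                         [0, 1], None, [50, 60], [0, 180, 0, 256])
    cv2.normalize(hist1, hist1)
    cv2.normalize(hist2, hist2)
    similarity = cv2.compareHist(hist1, hist2, cv2.HISTCMP_CORREL)

    # Below the (illustrative) threshold the pair is flagged as a forgery risk.
    return similarity >= color_thresh
```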
  • When a parameter does not match, step S23 is performed; when no parameter fails to match, step S24 is performed.
  • step S23 the verification module 105 generates fraud risk reminding information, and sends the fraud risk reminding information to the predetermined terminal.
  • the fraud risk reminding information may be: the photos B1 and B2 uploaded by the terminal fail to pass the verification, please pay attention to the forgery risk.
  • In step S24, the verification module 105 determines whether there are grouped photos that have not been verified. If there are grouped photos that have not been verified, the flow returns to step S20 above; otherwise, the flow ends.
  • Referring to FIG. 2, which is a flowchart of the method of building the analysis model for analyzing photos of the various vehicle parts in the preferred embodiment of the method for implementing insurance claim anti-fraud based on multiple picture consistency.
  • The method of building the analysis model in this embodiment is not limited to the steps shown in the flowchart; in addition, among the steps shown in the flowchart, some steps may be omitted and the order between the steps may be changed.
  • In step S00, the model training module 100 acquires a preset number of photos of each vehicle part from a car insurance claim database.
  • In this embodiment, according to the preset vehicle part classification (for example, the preset vehicle part classification includes the front, the side, the rear, the whole vehicle, and the like), the model training module 100 acquires, from the car insurance claim database (for example, the claim database stores a mapping relationship, or label data, between the preset vehicle part classification and fixed-loss photos, a fixed-loss photo being a photo taken by the repair shop during loss assessment), a preset number (for example, 100,000) of photos for each preset part (for example, 100,000 photos taken of the front of cars).
  • In step S01, the model training module 100 generates, according to a preset model generation rule and based on the acquired photos of the various vehicle parts, an analysis model for analyzing photos of the various vehicle parts. For example, based on the preset number of fixed-loss photos corresponding to the front of the vehicle, an analysis model is generated for determining that the damaged part contained in a fixed-loss photo is the front of the vehicle; based on the preset number of fixed-loss photos corresponding to the side, an analysis model is generated for determining that the damaged part contained in a fixed-loss photo is the side; based on the preset number of fixed-loss photos corresponding to the rear of the vehicle, an analysis model is generated for determining that the damaged part contained in a fixed-loss photo is the rear of the vehicle; and based on the preset number of fixed-loss photos corresponding to the whole vehicle, an analysis model is generated for determining that the damaged part contained in a fixed-loss photo is the whole vehicle.
  • the analysis model is a convolutional neural network (CNN) model
  • The preset model generation rule is: pre-process the acquired preset number of photos of each vehicle part so as to convert the format of the acquired photos into a preset format (for example, the leveldb format), and then train the CNN model with the format-converted photos.
  • The specific training process is as follows. Before training starts, the initial values of the weights in the CNN network are generated randomly and uniformly (for example, between -0.05 and 0.05); the CNN model is then trained by the stochastic gradient descent method. The whole training process can be divided into two phases, forward propagation and backward propagation.
  • In the forward propagation phase, the model training module 100 randomly extracts samples from the training data set, inputs them into the CNN network for calculation, and obtains the actual calculation results. In the backward propagation phase, the model training module 100 calculates the difference between the actual results and the expected results (i.e., the label values), then adjusts the values of the weights in reverse using an error-minimization method, while calculating the effective error produced by the adjustment.
  • The training process is iterated a number of times (for example, 100 times); when the overall effective error of the model is less than a preset threshold (for example, within plus or minus 0.01), training ends.
  • Preferably, in order to ensure the recognition accuracy of the CNN model, the model structure is divided into six layers: a feature extraction layer for extracting basic features (e.g., lines, colors) from photos, a feature combination layer for extracting structural features, a feature sampling layer for identifying displacement, scaling and distortion of two-dimensional graphic features, and three sub-sampling layers for reducing the actual feature calculation scale by sampling; the feature combination layer is arranged after the feature extraction layer, the feature sampling layer is arranged after the feature combination layer, and the sub-sampling layers are arranged after the feature extraction layer, the feature combination layer and the feature sampling layer, respectively.
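For illustration, a minimal PyTorch sketch of a small convolutional classifier in the spirit of the six-layer structure just described is given below. The exact layer sizes, the four part classes and the use of PyTorch are assumptions; the patent only specifies the roles of the layers and that training uses stochastic gradient descent.

```python
import torch
import torch.nn as nn

class PartClassifier(nn.Module):
    """Illustrative CNN: three convolutional stages, each followed by a
    sub-sampling (pooling) layer, loosely mirroring the six-layer description."""

    def __init__(self, num_parts=4):  # e.g. front, side, rear, whole vehicle
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 5), nn.ReLU(), nn.MaxPool2d(2),   # feature extraction + sub-sampling
            nn.Conv2d(16, 32, 5), nn.ReLU(), nn.MaxPool2d(2),  # feature combination + sub-sampling
            nn.Conv2d(32, 64, 3), nn.ReLU(), nn.MaxPool2d(2),  # feature sampling + sub-sampling
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_parts),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Training skeleton using stochastic gradient descent, as the text describes.
model = PartClassifier()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

def train_step(images, labels):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)  # forward propagation
    loss.backward()                        # backward propagation
    optimizer.step()                       # weight adjustment
    return loss.item()
```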
  • step S02 the model training module 100 stores the above analysis model.
  • Referring to FIG. 3, which is a hardware environment diagram of a first embodiment of the insurance claim anti-fraud device for implementing insurance claim anti-fraud based on multiple picture consistency.
  • the system for implementing insurance claims and anti-fraud based on multiple picture consistency (hereinafter referred to as "insurance claim anti-fraud system") 10 can be installed and run in an insurance claim anti-fraud device 1.
  • the insurance claim anti-fraud device 1 can be a claim server.
  • The insurance claims anti-fraud device 1 includes a processing unit 11, and an insurance claims anti-fraud system 10, an input/output unit 12, a communication unit 13, and a storage unit 14 connected to the processing unit 11.
  • the input/output unit 12 may be one or more physical buttons and/or a mouse and/or an operating lever for inputting user instructions and outputting response data of the insurance claims anti-fraud device to the input user command;
  • The communication unit 13 is communicatively connected to one or more terminals (e.g., mobile phones, tablets) or to a back-end server, to receive the fixed-loss photos of the damaged vehicle parts submitted by terminal users such as the vehicle owner and the repair shop.
  • The communication unit 13 may include a wifi module (through which it can communicate with the back-end server via the mobile internet), a Bluetooth module (through which it can communicate with mobile phones at close range), and/or a GPRS module (through which it can communicate with the back-end server via the mobile internet).
  • the storage unit 14 can be one or more non-volatile storage units such as a ROM, an EPROM or a Flash Memory.
  • the storage unit 14 may be built in or external to the insurance claims anti-fraud device 1.
  • the processing unit 11 is a Core Unit and a Control Unit of the insurance claims anti-fraud device 1 for interpreting computer instructions and processing data in the computer software.
  • In this embodiment, the insurance claims anti-fraud system 10 may be computer software comprising computer-executable program code. The program code may be stored in the storage unit 14 and, when executed by the processing unit 11, implements the following function: based on multiple fixed-loss photos of the damaged vehicle part, taken from different angles and sent by users such as the vehicle owner and/or the repair shop, the pictures taken from different angles are compared, a spatial transformation is performed, and whether the damaged parts are consistent is checked, so as to test whether the fixed-loss photos have been tampered with.
  • the processing unit 11 is configured to invoke and execute the insurance claims anti-fraud system 10 to perform the following steps:
  • the fixed-loss photos of each photo set are grouped in pairs, and, according to a key point matching algorithm, the key point features corresponding to each set are matched against the photos in each pair of that set, so that at least one group of related key points is matched for the fixed-loss photos in each pair;
  • according to the related key points corresponding to each pair, a linear equation is used to calculate the feature point transformation matrix corresponding to each pair, and the corresponding feature point transformation matrix is used to convert one of the photos in each pair into a photo to be verified having the same shooting angle as the other photo of that pair; the photo to be verified is matched against the other photo in the pair by feature parameters; and
  • when the feature parameters of the photo to be verified and the other photo in the pair do not match, reminder information is generated to indicate that the received pictures involve fraudulent behavior.
  • the insurance claims anti-fraud system 10 is comprised of a series of program code or code instructions that can be invoked by the processing unit 11 and perform functions corresponding to the included program code or code instructions.
  • When the processing unit 11 invokes and executes the insurance claim anti-fraud system 10 and performs the step of receiving a plurality of fixed-loss photos of the vehicle taken from different shooting angles, the following is also performed: whether the shooting angles of the uploaded fixed-loss photos are the same is analyzed, and, when the shooting angles are the same, reminder information for continuing to collect fixed-loss photos from different angles is generated and sent to the terminal.
  • the processing unit 11 invokes and executes the insurance claims anti-fraud system 10, and also performs the generation of the analysis model by the following method:
  • labeled pictures of specific vehicle parts are collected, the vehicle parts including a front end, a rear end, and left and right sides;
  • a convolutional neural network is used to train on the labeled pictures of specific vehicle parts, so as to obtain an analysis model capable of accurately determining the specific vehicle part shown in a picture, wherein, during model training, cross-validation is adopted: training and evaluation are performed multiple times, and each time a preset number of pictures is extracted from the labeled pictures of specific vehicle parts as test data, with another number of pictures used as training data.
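The repeated train/evaluate split described here corresponds to ordinary k-fold cross-validation; a small sketch using scikit-learn (an assumed tool, with placeholder data) is shown below.

```python
import numpy as np
from sklearn.model_selection import KFold

labelled_photos = np.arange(1000)  # stand-ins for the labeled part pictures
kfold = KFold(n_splits=5, shuffle=True, random_state=0)

for fold, (train_idx, test_idx) in enumerate(kfold.split(labelled_photos)):
    train_data = labelled_photos[train_idx]  # pictures used for training
    test_data = labelled_photos[test_idx]    # preset number held out as test data
    # train the CNN on train_data and evaluate it on test_data here
    print(f"fold {fold}: {len(train_data)} training / {len(test_data)} test pictures")
```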
  • the processing unit 11 invokes and executes the insurance claims anti-fraud system 10, and the key point detection adopts a SIFT key point feature detecting method.
  • the processing unit 11 invokes and executes the insurance claims anti-fraud system 10, and the key point matching algorithm is a RANSAC algorithm.
  • FIG. 4 is a hardware structural diagram of a second embodiment of an insurance claims fraudulent device according to the present invention.
  • The insurance claims anti-fraud device of the second embodiment is basically the same as that of the first embodiment; the main difference is that a touch input/display unit 17 is used instead of the input/output unit 12 of the first embodiment.
  • The touch input/display unit 17 is configured to provide a human-computer interaction interface, so that the user can input instructions based on the human-computer interaction interface, and to output the response data of the insurance claim anti-fraud device to the user instructions.
  • the touch input/display unit 17 includes a touch input unit and a display unit, and the touch input unit is used for touch input in the touch sensing area of the human-machine interaction interface.
  • the display unit is a display unit with a touch panel.
  • the human-computer interaction interface includes one or more virtual keys (not shown), which have the same functions as the physical keys in the first embodiment of the present invention, and are not described here. Additionally, it will be appreciated that any of the physical keys and/or mouse and/or joysticks in the first embodiment may be replaced with virtual keys on the touch input/display unit 17.
  • In this embodiment, the insurance claim anti-fraud system 10 needs to implement the following functions: picture collection and labeling, deep learning training, classification of photos of the same part, key point detection, stereo reconstruction, and comparison of damaged parts with feedback.
  • the picture collection and labeling need to collect different vehicle pictures and mark related parts, such as the front, the rear, the left and right sides and other major categories.
  • different vehicle pictures may be collected from a car insurance claims database connected to the insurance claim anti-fraud device 1.
  • the auto insurance claim database may store photos taken by each repair shop when the vehicle is determined to be damaged, and store a mapping relationship or tag data of the vehicle preset part classification and the fixed loss photo.
  • The deep learning training mainly uses a convolutional neural network (CNN) to train on the labeled pictures of specific vehicle parts, so that the specific vehicle part to which a picture belongs can be accurately determined.
  • Cross-validation ensures that more objective evaluation metrics are obtained with relatively small amounts of data.
  • The classification of photos of the same part means that, when the fixed-loss photos transmitted by the user are received, the analysis model trained by the above deep learning is used to determine the specific vehicle part to which each picture belongs, and pictures of the same part are classified together.
  • The key point detection is SIFT (Scale-invariant feature transform) key point detection. SIFT is a local feature descriptor that is independent of size, orientation and illumination. Because images are taken at different angles and distances, their size and orientation characteristics differ; with SIFT key points, the same parts in different photos, such as lights and doors, can be detected effectively without being affected by illumination, shooting angle and other factors.
  • The stereo reconstruction (Stereo Reconstruction) first groups the photos of each vehicle part in pairs, then uses the detected SIFT key points for matching, selects the best-matching key points for each vehicle part, and then calculates a transformation matrix (Fundamental Matrix F) from the correlation of the key points.
  • Comparing the damaged parts and giving feedback means using the transformation matrix calculated above to convert one of the two photos in a pair to the viewing angle of the other photo, and matching the color, texture and other features of the two photos; if a large inconsistency is found, it shows that at least one of the two pictures has been edited (e.g., with Photoshop), and feedback is given to the staff so as to avoid car-accident insurance fraud.
  • Referring to FIG. 5, which is a functional block diagram of a preferred embodiment of the insurance claim anti-fraud system.
  • the program code of the insurance claim anti-fraud system 10 of the present invention can be divided into a plurality of functional modules according to different functions thereof.
  • the insurance claim anti-fraud system 10 may include a model training module 100, a photo receiving module 101, a classification module 102, a key point detecting module 103, a reconstruction module 104, and a verification module 105.
  • The model training module 100 is configured to acquire a preset number of photos of each vehicle part from a vehicle insurance claim database, to generate, according to a preset model generation rule and based on the acquired photos of each vehicle part, an analysis model for analyzing photos of the various vehicle parts, and to store the analysis model.
  • In this embodiment, according to the preset vehicle part classification (for example, the preset vehicle part classification includes the front, the side, the rear, the whole vehicle, and the like), the model training module 100 acquires, from the vehicle insurance claim database (for example, the claim database stores a mapping relationship, or label data, between the preset vehicle part classification and fixed-loss photos, a fixed-loss photo being a photo taken by the repair shop during loss assessment), a preset number (for example, 100,000) of photos for each preset part (for example, 100,000 photos taken of the front of cars).
  • The model training module 100 generates, according to the preset model generation rules and based on the acquired photos of the preset vehicle parts, an analysis model for analyzing photos of the various vehicle parts. For example, based on the preset number of fixed-loss photos corresponding to the front of the vehicle, an analysis model is generated for determining that the damaged part contained in a fixed-loss photo is the front of the vehicle; based on the preset number of fixed-loss photos corresponding to the side, an analysis model is generated for determining that the damaged part contained in a fixed-loss photo is the side; based on the preset number of fixed-loss photos corresponding to the rear of the vehicle, an analysis model is generated for determining that the damaged part contained in a fixed-loss photo is the rear of the vehicle; and based on the preset number of fixed-loss photos corresponding to the whole vehicle, an analysis model is generated for determining that the damaged part contained in a fixed-loss photo is the whole vehicle.
  • the analysis model is a convolutional neural network (CNN) model
  • The preset model generation rule is: pre-process the acquired preset number of photos of each vehicle part so as to convert the format of the acquired photos into a preset format (for example, the leveldb format), and then train the CNN model with the format-converted photos.
  • The specific training process is as follows. Before training starts, the initial values of the weights in the CNN network are generated randomly and uniformly (for example, between -0.05 and 0.05); the CNN model is then trained by the stochastic gradient descent method. The whole training process can be divided into two phases, forward propagation and backward propagation.
  • In the forward propagation phase, the model training module 100 randomly extracts samples from the training data set, inputs them into the CNN network for calculation, and obtains the actual calculation results. In the backward propagation phase, the model training module 100 calculates the difference between the actual results and the expected results (i.e., the label values), then adjusts the values of the weights in reverse using an error-minimization method, while calculating the effective error produced by the adjustment.
  • The training process is iterated a number of times (for example, 100 times); when the overall effective error of the model is less than a preset threshold (for example, within plus or minus 0.01), training ends.
  • Preferably, in order to ensure the recognition accuracy of the CNN model, the model structure is divided into six layers: a feature extraction layer for extracting basic features (e.g., lines, colors) from photos, a feature combination layer for extracting structural features, a feature sampling layer for identifying displacement, scaling and distortion of two-dimensional graphic features, and three sub-sampling layers for reducing the actual feature calculation scale by sampling; the feature combination layer is arranged after the feature extraction layer, the feature sampling layer is arranged after the feature combination layer, and the sub-sampling layers are arranged after the feature extraction layer, the feature combination layer and the feature sampling layer, respectively.
  • The photo receiving module 101 is configured to, when a vehicle has been in a car accident and the loss is being assessed at a repair shop, receive the fixed-loss photos uploaded through a terminal by users such as the vehicle owner and the repair shop, analyze whether the shooting angles of the uploaded fixed-loss photos are the same, and, when the shooting angles are the same, generate and send to the terminal reminder information for continuing to collect fixed-loss photos from different angles.
  • The reminder information may be, for example: Y of the currently uploaded fixed-loss photos have the same shooting angle; please continue to collect Y-1 fixed-loss photos from other angles.
  • The shooting angle of a photo can be analyzed as follows: the shadow of an object in the photo is recognized; the direction directly ahead of the object's shadow is the lens direction; and the angle between the lens direction and the plane of the object is taken as the shooting angle.
  • The classification module 102 is configured to analyze the vehicle part corresponding to each fixed-loss photo using the analysis model trained by the model training module 100, to classify the fixed-loss photos so that fixed-loss photos of the same vehicle part are placed in the same photo set, and, when the number of fixed-loss photos in a photo set is less than the preset number, to generate and send to the terminal reminder information for continuing to collect, from different angles, fixed-loss photos of the vehicle part corresponding to that photo set.
  • The reminder information may be, for example: Z fixed-loss photos of the current fixed-loss part X are missing; please continue to collect Z fixed-loss photos of part X from other angles.
  • the key point detecting module 103 is configured to perform key point detection on the fixed loss photos in each photo collection, and obtain key point features of the vehicle parts corresponding to the respective photo sets.
  • The key point detection may adopt the SIFT (Scale-invariant feature transform) key point feature detection method. SIFT is a local feature descriptor; SIFT key point features are local features of an image that remain invariant to rotation, scale change and brightness change, and remain stable to a certain degree under viewpoint changes, affine transformations and noise.
  • The reconstruction module 104 is configured to group the fixed-loss photos of each photo set in pairs using a preset reconstruction method, to match, according to a key point matching algorithm, the key point features corresponding to each set against the photos in each pair of that set so that at least one group of related key points is matched for the fixed-loss photos in each pair, and to calculate, according to the related key points corresponding to each pair, the feature point transformation matrix corresponding to each pair using a linear equation.
  • the reconstruction method may be a Stereo Reconstruction method.
  • the key point matching algorithm may be, for example, a RANSAC (Random Sample Consensus) algorithm.
  • In this embodiment, the reconstruction module 104 matches, for the fixed-loss photos in each pair, at least one group of a preset number (e.g., 8) of related key points. For example, two photos B1 and B2 are placed in one pair, and at least one group of the preset number of key points is matched for each of B1 and B2; the key points matched in B1 are related to, and in one-to-one correspondence with, the key points matched in B2; for example, key points corresponding to the same location are related to each other and correspond one to one.
  • The reconstruction module 104 calculates, according to the groups of related key points corresponding to each pair, the feature point transformation matrix corresponding to each pair using a preset linear equation. For example, the feature point transformation matrix for converting photo B1 into photo B2 is calculated from the related key points of the two photos B1 and B2, so that stereo reconstruction (Stereo Reconstruction) can be completed. The feature point transformation matrix may be a Fundamental Matrix; the role of the Fundamental Matrix is to map the feature points of one image, through a matrix transformation, to the related feature points of the other image.
  • The linear equation, and the condition that the feature point transformation matrix F must satisfy, are given in the published application (they appear there as embedded images). The linear equation can be solved using the eight related key points matched above, thereby obtaining the spatial transformation relationship F between the two images.
  • the verification module 105 is configured to perform parameter verification on two fixed loss photos of each group.
  • The parameter verification includes: selecting a pair; converting, using the feature point transformation matrix corresponding to that pair, one of the photos in the pair into a photo to be verified having the same shooting angle as the other photo of the pair; and matching the photo to be verified against the other photo in the pair by feature parameters.
  • The parameters include features such as color and texture.
  • When the feature parameters do not match, the verification fails, and a fraud risk reminder message is generated and sent to the predetermined terminal.
  • the fraud risk reminding information may be: the photos B1 and B2 uploaded by the terminal fail to pass the verification, please pay attention to the forgery risk.
  • The above photo receiving module 101, classification module 102, key point detecting module 103, reconstruction module 104, verification module 105 and the like may be embedded, in hardware form, in the insurance claim anti-fraud device or be independent of it, or may be stored, in software form, in the memory of the insurance claim anti-fraud device, so that the processor can invoke them and perform the operations corresponding to each of the above modules.
  • the processor can be a central processing unit (CPU), a microprocessor, a microcontroller, or the like.
  • the present invention provides a computer readable storage medium storing one or more programs for execution by one or more processors to implement the following steps:
  • the fixed-loss photos of each photo set are grouped in pairs, and, according to a key point matching algorithm, the key point features corresponding to each set are matched against the photos in each pair of that set, so that at least one group of related key points is matched for the fixed-loss photos in each pair;
  • according to the related key points corresponding to each pair, a linear equation is used to calculate the feature point transformation matrix corresponding to each pair, and the corresponding feature point transformation matrix is used to convert one of the photos in each pair into a photo to be verified having the same shooting angle as the other photo of that pair; the photo to be verified is matched against the other photo in the pair by feature parameters; and
  • when the feature parameters of the photo to be verified and the other photo in the pair do not match, reminder information is generated to indicate that the received pictures involve fraudulent behavior.
  • When the step of receiving a plurality of fixed-loss photos of the vehicle taken from different shooting angles and uploaded through the terminal is performed, the following is also performed: whether the shooting angles of the uploaded fixed-loss photos are the same is analyzed, and, when the shooting angles are the same, reminder information for continuing to collect fixed-loss photos from different angles is generated and sent to the terminal.
  • the one or more programs are used by one or more processors to perform the following steps to generate the analysis model:
  • labeled pictures of specific vehicle parts are collected, the vehicle parts including a front end, a rear end, and left and right sides;
  • cross-validation is used to train and evaluate the model multiple times, each time extracting a preset number of pictures from the labeled pictures of specific vehicle parts as test data, with another number of pictures used as training data.
  • the key point detection adopts a SIFT key point feature detection method.
  • the key point matching algorithm is a RANSAC algorithm.
  • A person skilled in the art may understand that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing related hardware; the program may be stored in a computer readable storage medium.
  • The storage medium mentioned may be a read-only memory, a magnetic disk, an optical disk, or the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Business, Economics & Management (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • Medical Informatics (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Strategic Management (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • General Business, Economics & Management (AREA)
  • Development Economics (AREA)
  • Technology Law (AREA)
  • Quality & Reliability (AREA)
  • Human Resources & Organizations (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Operations Research (AREA)
  • Tourism & Hospitality (AREA)
  • Financial Or Insurance-Related Operations Such As Payment And Settlement (AREA)
  • Image Analysis (AREA)

Abstract

A method, system, device and readable storage medium for implementing insurance claim anti-fraud based on multiple picture consistency. The method includes: classifying fixed-loss photos of the same vehicle part into the same set; obtaining the key point features of each set, grouping the fixed-loss photos of each photo set, and matching a number of related key points for the fixed-loss photos in each group; calculating, from the related key points of each group, the feature point transformation matrix of each group, and using the corresponding feature point transformation matrix to convert one of the photos in each group into a photo to be verified having the same shooting angle as the other photo of that group; matching feature parameters between the photo to be verified and the other photo in the group; and, when the feature parameters do not match, generating reminder information to indicate that the received pictures involve fraudulent behavior. The present invention can automatically identify fraudulent claim behavior.

Description

Method, system, device and readable storage medium for implementing insurance claim anti-fraud based on multiple picture consistency
Technical Field
The present invention relates to the technical field of financial services, and in particular to a method, system, device and readable storage medium for implementing insurance claim anti-fraud based on multiple picture consistency.
Background
At present, in the vehicle insurance industry, pictures of vehicle damage have to be checked manually for tampering, which is costly in time, inefficient, and offers no guarantee of accuracy. Moreover, given the current development of photo-editing (PS) technology, many modified pictures are difficult to notice quickly with the naked eye, especially when there are multiple pictures and the location of the tampered region is uncertain.
Summary of the Invention
In view of the above, it is necessary to provide a method, system, device and readable storage medium for implementing insurance claim anti-fraud based on multiple picture consistency, which can automatically identify fraudulent claim behavior.
A method for implementing insurance claim anti-fraud based on multiple picture consistency, the method comprising:
receiving a plurality of fixed-loss photos of a vehicle, taken from different shooting angles, uploaded by a user through a terminal;
using an analysis model to determine the vehicle part corresponding to each fixed-loss photo, and classifying the fixed-loss photos so that fixed-loss photos of the same vehicle part are placed in the same photo set;
performing key point detection on the fixed-loss photos in each photo set, to obtain the key point features of the vehicle part corresponding to each photo set;
grouping the fixed-loss photos of each photo set in pairs, and, according to a key point matching algorithm, matching the key point features corresponding to each set against the photos in each pair of that set, so as to match at least one group of related key points for the fixed-loss photos in each pair;
calculating, according to the related key points corresponding to each pair, the feature point transformation matrix corresponding to each pair using a linear equation, and using the corresponding feature point transformation matrix to convert one of the photos in each pair into a photo to be verified having the same shooting angle as the other photo of that pair;
matching feature parameters between the photo to be verified and the other photo in the pair; and
when the feature parameters of the photo to be verified and the other photo in the pair do not match, generating reminder information to indicate that the received pictures involve fraudulent behavior.
An insurance claim anti-fraud system, the system comprising:
a photo receiving module, configured to receive a plurality of fixed-loss photos of a vehicle, taken from different shooting angles, uploaded by a user through a terminal;
a classification module, configured to use an analysis model to determine the vehicle part corresponding to each fixed-loss photo, and to classify the fixed-loss photos so that fixed-loss photos of the same vehicle part are placed in the same photo set;
a key point detection module, configured to perform key point detection on the fixed-loss photos in each photo set, to obtain the key point features of the vehicle part corresponding to each photo set;
a reconstruction module, configured to group the fixed-loss photos of each photo set in pairs, to match, according to a key point matching algorithm, the key point features corresponding to each set against the photos in each pair of that set so as to match at least one group of related key points for the fixed-loss photos in each pair, and to calculate, according to the related key points corresponding to each pair, the feature point transformation matrix corresponding to each pair using a linear equation; and
a verification module, configured to convert, using the corresponding feature point transformation matrix, one of the photos in each pair into a photo to be verified having the same shooting angle as the other photo of that pair, to match feature parameters between the photo to be verified and the other photo in the pair, and, when the feature parameters of the photo to be verified and the other photo in the pair do not match, to generate reminder information to indicate that the received pictures involve fraudulent behavior.
An insurance claim anti-fraud device, the insurance claim anti-fraud device comprising a processing unit, and an insurance claim anti-fraud system, an input/output unit, a communication unit and a storage unit connected to the processing unit;
the input/output unit is configured to input user instructions and to output the response data of the insurance claim anti-fraud device to the input user instructions;
the communication unit is configured to communicate with a predetermined terminal or a back-end server;
the storage unit is configured to store the insurance claim anti-fraud system and the operating data of the insurance claim anti-fraud system;
the processing unit is configured to invoke and execute the insurance claim anti-fraud system to perform the following steps:
receiving a plurality of fixed-loss photos of a vehicle, taken from different shooting angles, uploaded by a user through a terminal;
using an analysis model to determine the vehicle part corresponding to each fixed-loss photo, and classifying the fixed-loss photos so that fixed-loss photos of the same vehicle part are placed in the same photo set;
performing key point detection on the fixed-loss photos in each photo set, to obtain the key point features of the vehicle part corresponding to each photo set;
grouping the fixed-loss photos of each photo set in pairs, and, according to a key point matching algorithm, matching the key point features corresponding to each set against the photos in each pair of that set, so as to match at least one group of related key points for the fixed-loss photos in each pair;
calculating, according to the related key points corresponding to each pair, the feature point transformation matrix corresponding to each pair using a linear equation, and using the corresponding feature point transformation matrix to convert one of the photos in each pair into a photo to be verified having the same shooting angle as the other photo of that pair;
matching feature parameters between the photo to be verified and the other photo in the pair; and
when the feature parameters of the photo to be verified and the other photo in the pair do not match, generating reminder information to indicate that the received pictures involve fraudulent behavior.
A computer readable storage medium storing one or more programs, the one or more programs being executed by one or more processors to implement the following steps:
receiving a plurality of fixed-loss photos of a vehicle, taken from different shooting angles, uploaded by a user through a terminal;
using an analysis model to determine the vehicle part corresponding to each fixed-loss photo, and classifying the fixed-loss photos so that fixed-loss photos of the same vehicle part are placed in the same photo set;
performing key point detection on the fixed-loss photos in each photo set, to obtain the key point features of the vehicle part corresponding to each photo set;
grouping the fixed-loss photos of each photo set in pairs, and, according to a key point matching algorithm, matching the key point features corresponding to each set against the photos in each pair of that set, so as to match at least one group of related key points for the fixed-loss photos in each pair;
calculating, according to the related key points corresponding to each pair, the feature point transformation matrix corresponding to each pair using a linear equation, and using the corresponding feature point transformation matrix to convert one of the photos in each pair into a photo to be verified having the same shooting angle as the other photo of that pair;
matching feature parameters between the photo to be verified and the other photo in the pair; and
when the feature parameters of the photo to be verified and the other photo in the pair do not match, generating reminder information to indicate that the received pictures involve fraudulent behavior.
With the method, system, device and readable storage medium for implementing insurance claim anti-fraud based on multiple picture consistency according to the present invention, when a car accident occurs and the loss is assessed at a repair shop, the vehicle owner and/or the repair shop are required to take multiple pictures of the vehicle part from different angles; by comparing the pictures taken from different angles, performing a spatial transformation and checking whether the vehicle parts are consistent, the situation in which the owner and the repair shop defraud the insurer by tampering with damage pictures and exaggerating the degree of loss is prevented.
Brief Description of the Drawings
FIG. 1, consisting of FIG. 1A and FIG. 1B, is an implementation flowchart of a preferred embodiment of the method for implementing insurance claim anti-fraud based on multiple picture consistency according to the present invention.
FIG. 2 is a flowchart of the method of building the analysis model for analyzing photos of the various vehicle parts in the preferred embodiment of the method for implementing insurance claim anti-fraud based on multiple picture consistency according to the present invention.
FIG. 3 is a hardware environment diagram of a first embodiment of the insurance claim anti-fraud device for implementing insurance claim anti-fraud based on multiple picture consistency according to the present invention.
FIG. 4 is a hardware environment diagram of a second embodiment of the insurance claim anti-fraud device for implementing insurance claim anti-fraud based on multiple picture consistency according to the present invention.
FIG. 5 is a functional block diagram of a preferred embodiment of the system for implementing insurance claim anti-fraud based on multiple picture consistency according to the present invention.
Detailed Description of the Embodiments
Referring to FIG. 1, which is an implementation flowchart of a preferred embodiment of the method for implementing insurance claim anti-fraud based on multiple picture consistency according to the present invention. The method of this embodiment is not limited to the steps shown in the flowchart; in addition, among the steps shown in the flowchart, some steps may be omitted and the order between the steps may be changed.
Step S10: when the vehicle has been in a car accident and the loss is being assessed at a repair shop, the photo receiving module 101 receives the fixed-loss photos uploaded through a terminal by users such as the vehicle owner and the repair shop.
Step S11: the photo receiving module 101 analyzes whether the shooting angles of the uploaded fixed-loss photos are the same. In this embodiment, the shooting angle of a photo can be analyzed as follows: the shadow of an object in the photo is recognized; the direction directly ahead of the object's shadow is the lens direction; and the angle between the lens direction and the plane of the object is taken as the shooting angle.
When the shooting angles are the same, step S12 is performed, and when the shooting angles are not the same, step S13 is performed.
Step S12: the photo receiving module 101 generates and sends to the terminal reminder information for continuing to collect fixed-loss photos from different angles. The reminder information may be, for example: Y of the currently uploaded fixed-loss photos have the same shooting angle; please continue to collect Y-1 fixed-loss photos from other angles.
Step S13: the classification module 102 uses an analysis model to determine the vehicle part corresponding to each fixed-loss photo, and classifies the fixed-loss photos so that fixed-loss photos of the same vehicle part are placed in the same photo set.
Step S14: the classification module 102 determines whether the number of fixed-loss photos in any one photo set is less than a preset number, such as 3. When the number of fixed-loss photos in a photo set is less than the preset number, step S15 is performed, and when the number of fixed-loss photos in every photo set is not less than the preset number, step S16 is performed.
Step S15: the classification module 102 generates and sends to the terminal reminder information for continuing to collect, from different angles, fixed-loss photos of the vehicle part corresponding to that photo set. The reminder information may be, for example: Z fixed-loss photos of the current fixed-loss part X are missing; please continue to collect Z fixed-loss photos of part X from other angles.
Step S16: the key point detection module 103 performs key point detection on the fixed-loss photos in each photo set and obtains the key point features of the vehicle part corresponding to each photo set.
In this embodiment, the key point detection may adopt the SIFT (Scale-invariant feature transform) key point feature detection method. SIFT is a local feature descriptor; SIFT key point features are local features of an image that remain invariant to rotation, scale change and brightness change, and remain stable to a certain degree under viewpoint changes, affine transformations and noise.
步骤S17,重建模块104对各个照片集合的定损照片分别进行两两分组。
步骤S18,重建模块104根据一关键点匹配算法,将各个集合对应的该关键点特征与该集合的各分组中的照片进行关键点匹配,为各个分组中的定损照片分别匹配出至少一组相关关键点。
所述关键点匹配算法可以是,例如,RANSAC(Random Sample Consensus)算法。
实施例中,所述重建模块104为各个分组中的定损照片分别对应匹配出至少一组预设数量(例如,8个)的相关关键点。例如,B1和B2两张照片被分为一组,B1和B2各有至少一组预设数量的关键点被匹配出来,B1被匹配出的关键点与B2被匹配出的关键点相关且一一对应,例如,对应同一位置的多个关键点相互之间是相关关系且一一对应。
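以下为步骤S17、S18的一个示意性代码草图（Python与OpenCV；先对同一照片集合内的照片两两分组，再用最近邻比值测试初筛SIFT描述子匹配，并以RANSAC剔除误匹配得到一一对应的相关关键点；比值、阈值等参数均为示意性假设）：

```python
import itertools
import cv2
import numpy as np

def pairwise_groups(photo_paths):
    """对同一照片集合内的定损照片两两分组（步骤S17）。"""
    return list(itertools.combinations(photo_paths, 2))

def match_related_keypoints(kp1, des1, kp2, des2, ratio=0.75):
    """返回两张照片上经RANSAC筛选后一一对应的相关关键点坐标（步骤S18）。"""
    bf = cv2.BFMatcher(cv2.NORM_L2)
    good = []
    for pair in bf.knnMatch(des1, des2, k=2):
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])
    if len(good) < 8:
        return None, None  # 相关关键点不足一组（例如8对）
    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])
    # 用RANSAC估计基础矩阵的同时得到内点掩码，内点即视为相关关键点
    _, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 3.0, 0.99)
    if mask is None:
        return None, None
    inliers = mask.ravel().astype(bool)
    return pts1[inliers], pts2[inliers]
```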
步骤S19,重建模块104根据各个分组对应的相关关键点,利用一线性方程计算出各个分组对应的特征点变换矩阵。例如,根据B1,B2两张照片相关关键点计算出从照片B1转换到照片B2对应的特征点变换矩阵,从而能够完成立体重构(例如,Stereo Reconstruction)。
所述特征点变换矩阵可以是Fundamental Matrix。Fundamental Matrix的作用是将一幅图像的特征点通过矩阵变换，转换成为另一幅图像的相关特征点。
本实施例中,所述线性方程可以是:
$$\begin{bmatrix} u' & v' & 1 \end{bmatrix} \, F \, \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = 0$$

其中，(u, v)与(u', v')为同一分组中两张照片上相互对应的一对相关关键点的像素坐标，F为3×3的特征点变换矩阵。
展开可得:
$$u'u\,f_{11} + u'v\,f_{12} + u'\,f_{13} + v'u\,f_{21} + v'v\,f_{22} + v'\,f_{23} + u\,f_{31} + v\,f_{32} + f_{33} = 0$$
经过数学变换,可得特征点变换矩阵F,特征点变换矩阵F需满足以下条件:
$$\det(F) = 0,\qquad \operatorname{rank}(F) = 2$$
该线性方程可以通过上述匹配出来的8对相关关键点解出，从而求得两幅图像之间的空间变换关系F。
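以下为用经典八点法求解上述线性方程的一个示意性代码草图（Python与NumPy；由至少8对相关关键点构造齐次线性方程组并用奇异值分解求解，再施加秩2约束；实际工程中常先做坐标归一化或直接调用OpenCV的findFundamentalMat，此处仅为说明求解思路）：

```python
import numpy as np

def solve_fundamental_matrix(pts1, pts2):
    """pts1、pts2为形状(N, 2)的相关关键点坐标，按行一一对应，N >= 8。
    按展开后的线性方程 u'u*f11 + u'v*f12 + ... + f33 = 0 构造 A·f = 0 并求解。"""
    assert len(pts1) >= 8 and len(pts1) == len(pts2)
    u, v = pts1[:, 0], pts1[:, 1]
    up, vp = pts2[:, 0], pts2[:, 1]
    A = np.stack([up * u, up * v, up,
                  vp * u, vp * v, vp,
                  u, v, np.ones(len(pts1))], axis=1)
    # 齐次方程组的最小二乘解：A的最小奇异值对应的右奇异向量
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # 施加 det(F) = 0 的秩2约束：把最小的奇异值置零
    U, S, Vt2 = np.linalg.svd(F)
    S[2] = 0.0
    F = U @ np.diag(S) @ Vt2
    return F / F[2, 2]  # 按 f33 归一化（假设 f33 不为零）
```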
步骤S20,验证模块105选择其中一分组,利用对应的特征点变换矩阵,将该分组中的其中一张照片转换成与该组另一张照片具有相同拍摄角度的待验证照片。
步骤S21,验证模块105将所述待验证照片与该分组中的另一张照片进行特征参数匹配。所述参数包括颜色、纹理等特征。
步骤S22，验证模块105判断是否有参数不匹配。例如，若有相同特征的颜色值差异大于预设颜色阈值，则判定颜色特征参数不匹配；若有相同特征的纹理的相似度小于预设相似度阈值（例如，90%），则判定纹理特征参数不匹配等。
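以下为步骤S20至S22的一个示意性代码草图（Python与OpenCV）。需要说明的是，基础矩阵描述的是点到对极线的约束，若要把整幅照片“转换成相同拍摄角度”，工程上常以相关关键点估计出的单应矩阵（Homography）作为近似；颜色以HSV直方图相关性、纹理以梯度幅值直方图的余弦相似度作粗略度量，这些做法与阈值均为示意性假设，并非本发明限定的实现：

```python
import cv2
import numpy as np

def warp_to_same_view(img1, img2, pts1, pts2):
    """用相关关键点估计单应矩阵，把img1变换到img2的拍摄角度，得到待验证照片（步骤S20）。"""
    H, _ = cv2.findHomography(pts1, pts2, cv2.RANSAC, 5.0)
    h, w = img2.shape[:2]
    return cv2.warpPerspective(img1, H, (w, h))

def hsv_histogram(img):
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], None, [8, 8, 8], [0, 180, 0, 256, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def texture_histogram(img):
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    hist, _ = np.histogram(cv2.magnitude(gx, gy), bins=32, range=(0, 255))
    hist = hist.astype(np.float32)
    return hist / (hist.sum() + 1e-6)

def has_mismatch(warped, img2, color_threshold=0.6, texture_threshold=0.9):
    """步骤S21、S22：比较颜色与纹理特征参数，任一低于阈值即认为不匹配，存在欺诈风险。"""
    color_sim = cv2.compareHist(hsv_histogram(warped), hsv_histogram(img2), cv2.HISTCMP_CORREL)
    t1, t2 = texture_histogram(warped), texture_histogram(img2)
    texture_sim = float(np.dot(t1, t2) / (np.linalg.norm(t1) * np.linalg.norm(t2) + 1e-6))
    return color_sim < color_threshold or texture_sim < texture_threshold
```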
当有参数不匹配时,执行步骤S23。当没有参数不匹配时,执行步骤S24。
步骤S23,验证模块105生成欺诈风险提醒信息,并将该欺诈风险提醒信息发送给预先确定的终端。例如,所述欺诈风险提醒信息可以为:该终端上传的照片B1和B2验证不通过,请注意伪造风险。
步骤S24，验证模块105判断是否存在没有进行验证的分组照片。若存在没有进行验证的分组照片，则返回上述的步骤S20。否则，若不存在没有进行验证的分组照片，则结束流程。
参阅图2所示，是本发明基于多张图片一致性实现保险理赔反欺诈的方法较佳实施例中分析车辆各个部位照片的分析模型的方法流程图。本实施例所述分析车辆各个部位照片的分析模型的方法并不限于流程图中所示步骤，此外流程图中所示步骤中，某些步骤可以省略、步骤之间的顺序可以改变。
步骤S00，模型训练模块100从一车险理赔数据库获取车辆各个部位的预设数量的照片。本实施例中，所述模型训练模块100根据车辆预设部位分类（例如，所述车辆预设部位分类包括车前方、侧面、车尾、整体等），从车险理赔数据库（例如，所述车险理赔数据库存储有车辆预设部位分类与定损照片的映射关系或标签数据，所述定损照片指的是修理厂在定损时拍摄的照片）获取各个预设部位对应的预设数量（例如，10万张）的照片（例如，获取10万张车前方的照片）。
步骤S01,模型训练模块100按照预设的模型生成规则,基于获取的车辆各个部位的照片,生成用于分析车辆各个部位照片的分析模型。例如,基于车前方对应的预设数量的定损照片,生成用于分析定损照片包含的车损部位为车前方的分析模型;基于侧面对应的预设数量定损照片,生成用于分析定损照片包含车损部位为侧面的分析模型;基于车尾对应的预设数量定损照片,生成用于分析定损照片包含车损部位为车尾的分析模型;基于整车对应的预设数量定损照片,生成用于分析定损照片包含车损部位为整车的分析模型等。
其中,所述分析模型为卷积神经网络(CNN)模型,所述预设的模型生成规则为:对获取的车辆各个部位的预设数量的照片进行预处理,以将获取的照片的格式转化为预设格式(例如,leveldb格式);利用格式转化后的照片,训练CNN模型。
具体的训练过程如下：训练开始前，随机且均匀地生成CNN网络内各权重的初始值（例如-0.05至0.05）；采用随机梯度下降法对CNN模型进行训练。整个训练过程可分为向前传播和向后传播两个阶段。在向前传播阶段，模型训练模块100从训练数据集中随机提取样本，输入CNN网络进行计算，并得到实际计算结果。在向后传播过程中，模型训练模块100计算实际结果与期望结果（即标签值）的差值，然后利用误差最小化定位方法反向调整各权重的值，同时计算该调整产生的有效误差。训练过程反复迭代若干次（例如，100次），当模型整体有效误差小于预先设定的阈值（例如正负0.01）时，训练结束。
优选地,为了保证CNN模型的识别精度,所述模型结构分为六层,分别是用于对照片进行基本特征(例如,线条、颜色等)提取的特征提取层,用于结构特征提取的特征组合层,用于识别位移、缩放及扭曲的二维图形特征的特征采样层,及三层用于通过采样降低实际特征计算规模的子抽样层;所述特征组合层设于所述特征提取层的后面,特征采样层设于特征组合层后面,所述子抽样层分别设于所述特征提取层、特征组合层和特征采样层的后面。
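以下为按上述层次结构搭建分析模型并用随机梯度下降训练的一个示意性代码草图（此处假设使用PyTorch；通道数、学习率、类别数、输入尺寸等均为示意性假设，原文基于leveldb格式的数据管线此处以普通的数据加载器代替）：

```python
import torch
import torch.nn as nn

NUM_PARTS = 4  # 例如：车前方、侧面、车尾、整体

class PartClassifier(nn.Module):
    """按“特征提取层、子抽样层、特征组合层、子抽样层、特征采样层、子抽样层”的顺序堆叠的示意网络。"""
    def __init__(self, num_classes=NUM_PARTS):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 5, padding=2), nn.ReLU(),   # 特征提取层：线条、颜色等基本特征
            nn.MaxPool2d(2),                             # 子抽样层：通过采样降低特征计算规模
            nn.Conv2d(16, 32, 5, padding=2), nn.ReLU(),  # 特征组合层：结构特征
            nn.MaxPool2d(2),                             # 子抽样层
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),  # 特征采样层：位移、缩放及扭曲的二维图形特征
            nn.MaxPool2d(2),                             # 子抽样层
        )
        self.classifier = nn.Linear(64 * 28 * 28, num_classes)  # 假设输入为224x224的照片，分类头为示意性补充

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def init_weights(module):
    """训练开始前随机且均匀地生成各权重的初始值（例如-0.05至0.05）。"""
    if isinstance(module, (nn.Conv2d, nn.Linear)):
        nn.init.uniform_(module.weight, -0.05, 0.05)
        nn.init.zeros_(module.bias)

def train(model, loader, epochs=100, lr=0.01, error_threshold=0.01):
    """随机梯度下降训练：向前传播得到实际结果，向后传播按误差调整权重，
    当整体有效误差小于预先设定的阈值时结束训练。"""
    model.apply(init_weights)
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        total_loss, count = 0.0, 0
        for images, labels in loader:  # loader产出(N, 3, 224, 224)的照片张量及其部位标签
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            optimizer.step()
            total_loss += loss.item() * len(images)
            count += len(images)
        if total_loss / max(count, 1) < error_threshold:
            break
```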
步骤S02,模型训练模块100存储上述分析模型。
参阅图3所示,是本发明基于多张图片一致性实现保险理赔反欺诈的保险理赔反欺诈设备第一实施例的硬件环境图。
本实施例所述基于多张图片一致性实现保险理赔反欺诈的***（以下简称为“保险理赔反欺诈***”）10可以安装并运行于一个保险理赔反欺诈设备1中。该保险理赔反欺诈设备1可以是一个理赔服务器。所述保险理赔反欺诈设备1包括处理单元11，及与该处理单元11连接的保险理赔反欺诈***10、输入/输出单元12、通信单元13、存储单元14。
该输入/输出单元12可以是一个或多个物理按键和/或鼠标和/或操作杆,用于输入用户指令,并输出保险理赔反欺诈设备对输入的用户指令的响应数据;
该通信单元13与一个或者多个终端（例如，手机、平板电脑等）或后台服务器通信连接，以接收终端的用户，如车主和修理厂，提交的车辆受损部位的定损照片。该通信单元13可以包括wifi模块（通过wifi模块可以经移动互联网与后台服务器通信）、蓝牙模块（通过蓝牙模块可以与手机进行近距离通信）及/或GPRS模块（通过GPRS模块可以经移动互联网与后台服务器通信）。
该存储单元14可以是一个或者多个非易失性存储单元,如ROM、EPROM或Flash Memory(快闪存储单元)等。所述存储单元14可以内置或者外接于保险理赔反欺诈设备1。所述处理单元11是保险理赔反欺诈设备1的运算核心(Core Unit)和控制核心(Control Unit),用于解释计算机指令以及处理计算机软件中的数据。
本实施例中,所述保险理赔反欺诈***10可以是一种计算机软件,其包括计算机可执行的程序代码,该程序代码可以存储于所述存储单元14中,在处理单元11的执行下,实现下述功能:根据用户,如车主及/或修理厂,拍摄及发送的车辆受损部位的多张不同角度的定损照片,通过对比该多张不同角度的图片,并进行空间变换,对比受损部位是否一致,从而执行定损照片是否被篡改的检验。
该处理单元11用于调用并执行该保险理赔反欺诈***10,以执行如下步骤:
接收用户通过终端上传的多张从不同拍摄角度拍摄的车辆的定损照片;
利用一分析模型,分析出各个定损照片对应的车辆部位,并对所述定损照片进行分类,以将相同车辆部位的定损照片分为同一照片集合;
对各个照片集合中的定损照片执行关键点检测,获得各个照片集合对应的车辆部位的关键点特征;
对各个照片集合的定损照片分别进行两两分组,根据一关键点匹配算法,将各个集合对应的该关键点特征与该集合的各分组中的照片进行关键点匹配,为各个分组中的定损照片分别匹配出至少一组相关关键点;
根据各个分组对应的相关关键点,利用一线性方程计算出各个分组对应的特征点变换矩阵,并利用对应的特征点变换矩阵,将每一个分组中的其中一张照片转换成与该组另一张照片具有相同拍摄角度的待验证照片;
将所述待验证照片与该分组中的另一张照片进行特征参数匹配;及
在待验证照片与该分组中的另一张照片的特征参数不匹配时，生成提醒信息以提醒接收的图片存在欺诈行为。
其中,该保险理赔反欺诈***10由一系列程序代码或者代码指令组成,其可以被处理单元11调用并执行与所包含的程序代码或者代码指令对应的功能。
优选地,该处理单元11调用并执行该保险理赔反欺诈***10,在执行所述接收用户通过终端上传的多张从不同拍摄角度拍摄的车辆的定损照片的步骤时还包括:
识别出所接收的车辆的定损照片中物体的阴影,根据该阴影分析出照片的拍摄角度,其中,物体阴影方向的正前方即是镜头方向,镜头方向与物体平面所呈的夹角为拍摄角度;及
当所接收的车辆的定损照片的拍摄角度相同时,生成并发送从不同角度继续采集定损照片的提醒信息给所述终端。
优选地,该处理单元11调用并执行该保险理赔反欺诈***10,还执行通过下述方法生成所述分析模型:
收集车辆不同部位的照片并标注相关部位,其中,所述车辆部位包括车头、车尾、及左右侧面;及
利用卷积神经网络对已经标注出的汽车具体部位的图片进行训练,得到所述能够准确判断出一张图片属于车辆的具体部位的分析模型,其中,在模型训练过程中,采用cross-validation的方法,分多次进行训练和评估,每次从已经标注出的汽车具体部位的图片中抽取预设数量的图片作为测试数据,另外的数量的图片作为训练数据。
优选地,该处理单元11调用并执行该保险理赔反欺诈***10,所述关键点检测采用SIFT关键点特征检测方法。
优选地,该处理单元11调用并执行该保险理赔反欺诈***10,所述关键点匹配算法为RANSAC算法。
如图4所示,图4为本发明保险理赔反欺诈设备第二实施例的硬件结构图,在本实施例中,该保险理赔反欺诈设备与第一实施例中的保险理赔反欺诈设备基本相似,主要区别在于:采用触控输入/显示单元17来代替保险理赔反欺诈设备中的输入/输出单元12。
该触控输入/显示单元17用于提供人机交互界面，以供用户基于该人机交互界面触控式输入指令，且输出显示该保险理赔反欺诈设备对用户指令的响应数据。在本实施例中，该触控输入/显示单元17包括触控输入单元和显示单元，所述触控输入单元用于在所述人机交互界面的触控感应区的触控式输入，所述显示单元为带触控面板的显示单元。该人机交互界面包括一个或多个虚拟按键（图中未示出），所述虚拟按键与本发明第一实施例中的物理按键的功能相同，在此不再赘述。另外，可以理解，所述第一实施例中的任何物理按键和/或鼠标和/或操作杆均可采用触控输入/显示单元17上的虚拟按键替代。
本实施例中,为了进行针对多张定损照片中的受损部位的篡改检测,所述保险理赔反欺诈***10需要实现以下功能,即图片收集和标注,深度学习训练,照片相同部位归类,关键点检测,立体重建,对比损失部位并给出反馈。
所述图片收集和标注需要收集不同的车辆图片并标注相关部位,如车头、车尾、左右侧面等几个大类。本实施例中,可以从一与保险理赔反欺诈设备1连接的车险理赔数据库中收集不同的车辆图片。所述车险理赔数据库可以存储各个修理厂在对车辆进行定损时拍摄的照片,并存储有车辆预设部位分类与定损照片的映射关系或标签数据。
所述深度学习训练主要利用卷积神经网络(CNN)对已经标注出的汽车具体部位的图片进行训练,从而能够准确判断出一张图片属于车辆的具体部位。 在训练过程中,可以采用cross-validation的方法,分多次,如五次,进行训练和评估,每次从已经标注出的汽车具体部位的图片中抽取20%作为测试数据,另外的80%作为训练数据。通过cross-validation,可以确保在数据量相对较少的情况下,获得更为客观的评价指标。
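以下为上述cross-validation做法的一个示意性代码草图（Python；以五折为例，每折约20%样本作测试数据、其余80%作训练数据，train_and_evaluate为示意性的训练评估函数名）：

```python
import random

def cross_validate(samples, train_and_evaluate, k=5, seed=42):
    """samples为已标注车辆具体部位的图片样本列表；
    train_and_evaluate(train_set, test_set)返回一次评估指标（例如分类准确率）。"""
    shuffled = list(samples)
    random.Random(seed).shuffle(shuffled)
    fold_size = len(shuffled) // k
    scores = []
    for i in range(k):
        test_set = shuffled[i * fold_size:(i + 1) * fold_size]                  # 约20%作为测试数据
        train_set = shuffled[:i * fold_size] + shuffled[(i + 1) * fold_size:]   # 其余约80%作为训练数据
        scores.append(train_and_evaluate(train_set, test_set))
    return sum(scores) / len(scores)  # 取多次评估的平均值，作为更客观的评价指标
```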
所述照片相同部位归类是对照片相同部位的归类,即当接收到用户传送的定损照片时,利用上述通过深度学习训练好的分析模型,判断每张图片属于车辆的具体部位,并将相同的部位分类在一起。
所述关键点检测是SIFT（Scale-invariant feature transform）关键点检测。SIFT是一个局部特征描述子，具备大小、方向、光照特征无关性。由于图像的拍摄角度、距离不同，会导致图片大小、方向特征不同，利用SIFT关键点，可以有效地检测出不同照片的相同部位，如车灯、车门等，而不受光照、拍摄角度等因素影响。
所述立体重建(Stereo Reconstruction)首先将每个车辆部位的照片两两分组,然后利用上述检测出的SIFT关键点进行匹配,选出与每个车辆部位最匹配的关键点,然后根据所述关键点的相关性计算出一转换矩阵Fundamental Matrix F。
所述对比损失部位并给出反馈是利用上述计算出来的转换矩阵,将两两分组中的一张图片转换到另一张图片的角度,并且匹配两张照片的颜色、纹理等特征,如果发现较大程度的不吻合,则证明两张图片中至少一张是PS过的,将其反馈给工作人员,从而避免车险欺诈事件发生。
参阅图5所示，是本发明保险理赔反欺诈***较佳实施例的功能模块图。
本发明所述保险理赔反欺诈***10的程序代码根据其不同的功能,可以划分为多个功能模块。本发明较佳实施例中,所述保险理赔反欺诈***10可以包括模型训练模块100、照片接收模块101、分类模块102、关键点检测模块103、重建模块104及验证模块105。
所述模型训练模块100用于从一车险理赔数据库获取车辆各个部位的预设数量的照片,按照预设的模型生成规则,基于获取的车辆各个部位的照片,生成用于分析车辆各个部位照片的分析模型,并存储上述分析模型。
本实施例中,所述模型训练模块100根据车辆预设部位分类(例如,所述车辆预设部位分类包括车前方、侧面、车尾、整体等),从车险理赔数据库(例如,所述车险理赔数据库存储有车辆预设部位分类与定损照片的映射关系或标签数据,所述定损照片指的是修理厂在定损时拍摄的照片)获取各个预设部位对应的预设数量(例如,10万张)的照片(例如,获取10万张车前方的照片)。
进一步,所述模型训练模块100按照预设的模型生成规则,基于获取的车辆各个预设部位分类对应的照片,生成用于分析车辆各个部位照片的分析模型(例如,基于车前方对应的预设数量的定损照片,生成用于分析定损照片包含的车损部位为车前方的分析模型;基于侧面对应的预设数量定损照片,生成用于分析定损照片包含车损部位为侧面的分析模型;基于车尾对应的预设数量定损照片,生成用于分析定损照片包含车损部位为车尾的分析模型;基于整车对应的预设数量定损照片,生成用于分析定损照片包含车损部位为整车的分析模型)。
其中,所述分析模型为卷积神经网络(CNN)模型,所述预设的模型生成规则为:对获取的车辆各个部位的预设数量的照片进行预处理,以将获取的照片的格式转化为预设格式(例如,leveldb格式);利用格式转化后的照片,训练CNN模型。
具体的训练过程如下：训练开始前，随机且均匀地生成CNN网络内各权重的初始值（例如-0.05至0.05）；采用随机梯度下降法对CNN模型进行训练。整个训练过程可分为向前传播和向后传播两个阶段。在向前传播阶段，模型训练模块100从训练数据集中随机提取样本，输入CNN网络进行计算，并得到实际计算结果。在向后传播过程中，模型训练模块100计算实际结果与期望结果（即标签值）的差值，然后利用误差最小化定位方法反向调整各权重的值，同时计算该调整产生的有效误差。训练过程反复迭代若干次（例如，100次），当模型整体有效误差小于预先设定的阈值（例如正负0.01）时，训练结束。
优选地,为了保证CNN模型的识别精度,所述模型结构分为六层,分别是用于对照片进行基本特征(例如,线条、颜色等)提取的特征提取层,用于结构特征提取的特征组合层,用于识别位移、缩放及扭曲的二维图形特征的特征采样层,及三层用于通过采样降低实际特征计算规模的子抽样层;所述特征组合层设于所述特征提取层的后面,特征采样层设于特征组合层后面,所述子抽样层分别设于所述特征提取层、特征组合层和特征采样层的后面。
所述照片接收模块101用于在车辆发生车祸在修理厂进行损失核定时,接收用户,如车主和修理厂,通过终端上传的定损照片,分析上传的各个定损照片的拍摄角度是否相同,并在角度相同时,生成并发送从不同角度继续采集定损照片的提醒信息给所述终端。所述提醒信息可以是,例如,当前上传的定损照片有Y张拍摄角度相同,请继续从其他角度采集Y-1张定损照片。本实施例可以通过下述方法分析照片的拍摄角度:识别出照片中物体的阴影,物体阴影方向的正前方即是镜头方向,镜头方向与物体平面所呈的夹角即作为拍摄角度。
所述分类模块102用于利用上述模型训练模块100训练出来的分析模型,分析出各个定损照片对应的车辆部位,并对所述定损照片进行分类,以将相同车辆部位的定损照片分为同一照片集合,并在一照片集合中的定损照片的数量小于预设数量时,生成并发送从不同角度继续采集该照片集合对应的车辆部位的定损照片的提醒信息给所述终端。所述提醒信息可以是,例如,当前定损部位X的定损照片缺少Z张,请继续从其他角度采集Z张定损部位X的定损照片。
所述关键点检测模块103用于对各个照片集合中的定损照片执行关键点检测，获得各个照片集合对应的车辆部位的关键点特征。本实施例中，所述关键点检测可以采用SIFT（Scale-invariant feature transform，尺度不变特征变换）关键点特征检测方法。所述SIFT是一个局部特征描述子，SIFT关键点特征是图像的局部特征，其对旋转、尺度缩放、亮度变化保持不变性，对视角变化、仿射变换、噪声也保持一定程度的稳定性。
所述重建模块104用于利用预设的重建方法，对各个照片集合的定损照片分别进行两两分组，根据一关键点匹配算法，将各个集合对应的该关键点特征与该集合的各分组中的照片进行关键点匹配，为各个分组中的定损照片分别匹配出至少一组相关关键点，并根据各个分组对应的相关关键点，利用一线性方程计算出各个分组对应的特征点变换矩阵。
本实施例中,所述重建方法可以是Stereo Reconstruction(立体重构)方法。所述关键点匹配算法可以是,例如,RANSAC(Random Sample Consensus)算法。
实施例中,所述重建模块104为各个分组中的定损照片分别对应匹配出至少一组预设数量(例如,8个)的相关关键点。例如,B1和B2两张照片被分为一组,B1和B2各有至少一组预设数量的关键点被匹配出来,B1被匹配出的关键点与B2被匹配出的关键点相关且一一对应,例如,对应同一位置的多个关键点相互之间是相关关系且一一对应。
实施例中，所述重建模块104根据各个分组对应的各组相关关键点，利用预设的线性方程计算出各个分组对应的特征点变换矩阵。例如，根据B1，B2两张照片相关关键点计算出从照片B1转换到照片B2对应的特征点变换矩阵，从而能够完成立体重构（例如，Stereo Reconstruction）。所述特征点变换矩阵可以是Fundamental Matrix。Fundamental Matrix的作用是将一幅图像的特征点通过矩阵变换，转换成为另一幅图像的相关特征点。
本实施例中,所述线性方程可以是:
$$\begin{bmatrix} u' & v' & 1 \end{bmatrix} \, F \, \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = 0$$

其中，(u, v)与(u', v')为同一分组中两张照片上相互对应的一对相关关键点的像素坐标，F为3×3的特征点变换矩阵。
展开可得:
$$u'u\,f_{11} + u'v\,f_{12} + u'\,f_{13} + v'u\,f_{21} + v'v\,f_{22} + v'\,f_{23} + u\,f_{31} + v\,f_{32} + f_{33} = 0$$
经过数学变换,可得特征点变换矩阵F,特征点变换矩阵F需满足以下条件:
$$\det(F) = 0,\qquad \operatorname{rank}(F) = 2$$
该线性方程可以通过上述匹配出来的8对相关关键点解出，从而求得两幅图像之间的空间变换关系F。
所述验证模块105用于对每一分组的两张定损照片进行参数验证。所述参数验证包括：选择一分组，利用该分组对应的特征点变换矩阵，将该分组中的其中一张照片转换成与该组另一张照片具有相同拍摄角度的待验证照片，将所述待验证照片与该分组中的另一张照片进行特征参数匹配，所述参数包括颜色、纹理等特征；当有参数不匹配时（例如，若有相同特征的颜色值差异大于预设颜色阈值，则判定颜色特征参数不匹配；若有相同特征的纹理的相似度小于预设相似度阈值（例如，90%），则判定纹理特征参数不匹配等），则该分组中的两张照片验证不通过，生成欺诈风险提醒信息并发送给预先确定的终端。例如，所述欺诈风险提醒信息可以为：该终端上传的照片B1和B2验证不通过，请注意伪造风险。
在硬件实现上,以上照片接收模块101、分类模块102、关键点检测模块103、重建模块104及验证模块105等可以以硬件形式内嵌于或独立于保险理赔反欺诈设备中,也可以以软件形式存储于保险理赔反欺诈设备的存储器中,以便于处理器调用执行以上各个模块对应的操作。该处理器可以为中央处理单元(CPU)、微处理器、单片机等。
本发明提供了一种计算机可读存储介质,所述计算机可读存储介质存储有一个或者一个以上程序,所述一个或者一个以上程序被一个或者一个以上的处理器用来执行,以实现以下步骤:
接收用户通过终端上传的多张从不同拍摄角度拍摄的车辆的定损照片;
利用一分析模型,分析出各个定损照片对应的车辆部位,并对所述定损照片进行分类,以将相同车辆部位的定损照片分为同一照片集合;
对各个照片集合中的定损照片执行关键点检测,获得各个照片集合对应的车辆部位的关键点特征;
对各个照片集合的定损照片分别进行两两分组,根据一关键点匹配算法,将各个集合对应的该关键点特征与该集合的各分组中的照片进行关键点匹配,为各个分组中的定损照片分别匹配出至少一组相关关键点;
根据各个分组对应的相关关键点,利用一线性方程计算出各个分组对应的特征点变换矩阵,并利用对应的特征点变换矩阵,将每一个分组中的其中一张照片转换成与该组另一张照片具有相同拍摄角度的待验证照片;
将所述待验证照片与该分组中的另一张照片进行特征参数匹配;及
在待验证照片与该分组中的另一张照片的特征参数不匹配时，生成提醒信息以提醒接收的图片存在欺诈行为。
优选地,所述接收用户通过终端上传的多张从不同拍摄角度拍摄的车辆的定损照片的步骤包括:
识别出所接收的车辆的定损照片中物体的阴影,根据该阴影分析出照片的拍摄角度,其中,物体阴影方向的正前方即是镜头方向,镜头方向与物体平面所呈的夹角为拍摄角度;及
当所接收的车辆的定损照片的拍摄角度相同时,生成并发送从不同角度继续采集定损照片的提醒信息给所述终端。
优选地,所述一个或者一个以上程序被一个或者一个以上的处理器用来执行,以实现以下步骤来生成所述分析模型:
收集车辆不同部位的照片并标注相关部位,其中,所述车辆部位包括车头、车尾、及左右侧面;及
利用卷积神经网络对已经标注出的汽车具体部位的图片进行训练，得到所述能够准确判断出一张图片属于车辆的具体部位的分析模型，其中，在模型训练过程中，采用cross-validation的方法，分多次进行训练和评估，每次从已经标注出的汽车具体部位的图片中抽取预设数量的图片作为测试数据，另外的数量的图片作为训练数据。
优选地,所述关键点检测采用SIFT关键点特征检测方法。
优选地,所述关键点匹配算法为RANSAC算法。
本领域普通技术人员可以理解实现上述实施例的全部或部分步骤可以通过硬件来完成,也可以通过程序来指令相关的硬件完成,所述的程序可以存储于一种计算机可读存储介质中,上述提到的存储介质可以是只读存储器,磁盘或光盘等。
最后所应说明的是,以上实施例仅用以说明本发明的技术方案而非限制,尽管参照较佳实施例对本发明进行了详细说明,本领域的普通技术人员应当理解,可以对本发明的技术方案进行修改或等同替换,而不脱离本发明技术方案的精神和范围。

Claims (20)

  1. 一种基于多张图片一致性实现保险理赔反欺诈的方法,其特征在于,该方法包括:
    接收用户通过终端上传的多张从不同拍摄角度拍摄的车辆的定损照片;
    利用一分析模型,分析出各个定损照片对应的车辆部位,并对所述定损照片进行分类,以将相同车辆部位的定损照片分为同一照片集合;
    对各个照片集合中的定损照片执行关键点检测,获得各个照片集合对应的车辆部位的关键点特征;
    对各个照片集合的定损照片分别进行两两分组,根据一关键点匹配算法,将各个集合对应的该关键点特征与该集合的各分组中的照片进行关键点匹配,为各个分组中的定损照片分别匹配出至少一组相关关键点;
    根据各个分组对应的相关关键点,利用一线性方程计算出各个分组对应的特征点变换矩阵,并利用对应的特征点变换矩阵,将每一个分组中的其中一张照片转换成与该组另一张照片具有相同拍摄角度的待验证照片;
    将所述待验证照片与该分组中的另一张照片进行特征参数匹配;及
    在待验证照片与该分组中的另一张照片的特征参数不匹配时，生成提醒信息以提醒接收的图片存在欺诈行为。
  2. 如权利要求1所述的方法,其特征在于,所述接收用户通过终端上传的多张从不同拍摄角度拍摄的车辆的定损照片的步骤包括:
    识别出所接收的车辆的定损照片中物体的阴影,根据该阴影分析出照片的拍摄角度,其中,物体阴影方向的正前方即是镜头方向,镜头方向与物体平面所呈的夹角为拍摄角度;及
    当所接收的车辆的定损照片的拍摄角度相同时,生成并发送从不同角度继续采集定损照片的提醒信息给所述终端。
  3. 如权利要求1所述的方法,其特征在于,该方法还包括通过下述方法生成所述分析模型:
    收集车辆不同部位的照片并标注相关部位,其中,所述车辆部位包括车头、车尾、及左右侧面;及
    利用卷积神经网络对已经标注出的汽车具体部位的图片进行训练,得到所述能够准确判断出一张图片属于车辆的具体部位的分析模型,其中,在模型训练过程中,采用cross-validation的方法,分多次进行训练和评估,每次从已经标注出的汽车具体部位的图片中抽取预设数量的图片作为测试数据,另外的数量的图片作为训练数据。
  4. 如权利要求1所述的方法,其特征在于,所述关键点检测采用SIFT关键点特征检测方法。
  5. 如权利要求1所述的方法,其特征在于,所述关键点匹配算法为RANSAC算法。
  6. 一种保险理赔反欺诈***,其特征在于,该***包括:
    照片接收模块,用于接收用户通过终端上传的多张从不同拍摄角度拍摄的车辆的定损照片;
    分类模块,用于利用一分析模型,分析出各个定损照片对应的车辆部位,并对所述定损照片进行分类,以将相同车辆部位的定损照片分为同一照片集合;
    关键点检测模块,用于对各个照片集合中的定损照片执行关键点检测,获得各个照片集合对应的车辆部位的关键点特征;
    重建模块,用于对各个照片集合的定损照片分别进行两两分组,根据一关键点匹配算法,将各个集合对应的该关键点特征与该集合的各分组中的照片进行关键点匹配,为各个分组中的定损照片分别匹配出至少一组相关关键点;根据各个分组对应的相关关键点,利用一线性方程计算出各个分组对应的特征点变换矩阵;
    验证模块，用于利用对应的特征点变换矩阵，将每一个分组中的其中一张照片转换成与该组另一张照片具有相同拍摄角度的待验证照片，将所述待验证照片与该分组中的另一张照片进行特征参数匹配；及在待验证照片与该分组中的另一张照片的特征参数不匹配时，生成提醒信息以提醒接收的图片存在欺诈行为。
  7. 如权利要求6所述的保险理赔反欺诈***，其特征在于，所述照片接收模块还用于：
    识别出所接收的车辆的定损照片中物体的阴影,根据该阴影分析出照片的拍摄角度,其中,物体阴影方向的正前方即是镜头方向,镜头方向与物体平面所呈的夹角为拍摄角度;及当所接收的车辆的定损照片的拍摄角度相同时,生成并发送从不同角度继续采集定损照片的提醒信息给所述终端。
  8. 如权利要求6所述的保险理赔反欺诈***，其特征在于，还包括模型训练模块，所述模型训练模块通过下述方法生成所述分析模型：
    收集车辆不同部位的照片并标注相关部位,其中,所述车辆部位包括车头、车尾、及左右侧面;及
    利用卷积神经网络对已经标注出的汽车具体部位的图片进行训练,得到所述能够准确判断出一张图片属于车辆的具体部位的分析模型,其中,在模型训练过程中,采用cross-validation的方法,分多次进行训练和评估,每次从已经标注出的汽车具体部位的图片中抽取预设数量的图片作为测试数据,另外的数量的图片作为训练数据。
  9. 如权利要求6所述的保险理赔反欺诈***，其特征在于，所述关键点检测采用SIFT关键点特征检测方法。
  10. 如权利要求6所述的保险理赔反欺诈***，其特征在于，所述关键点匹配算法为RANSAC算法。
  11. 一种保险理赔反欺诈设备,其特征在于,该保险理赔反欺诈设备包括处理单元,及与该处理单元连接的保险理赔反欺诈***、输入/输出单元、通信单元及存储单元;
    该输入/输出单元用于输入用户指令,并输出保险理赔反欺诈设备对输入的用户指令的响应数据;
    该通信单元用于与预先确定的终端或后台服务器通信连接;
    该存储单元用于存储该保险理赔反欺诈***,及该保险理赔反欺诈***的运行数据;
    该处理单元用于调用并执行该保险理赔反欺诈***,以执行如下步骤:
    接收用户通过终端上传的多张从不同拍摄角度拍摄的车辆的定损照片;
    利用一分析模型,分析出各个定损照片对应的车辆部位,并对所述定损照片进行分类,以将相同车辆部位的定损照片分为同一照片集合;
    对各个照片集合中的定损照片执行关键点检测,获得各个照片集合对应的车辆部位的关键点特征;
    对各个照片集合的定损照片分别进行两两分组,根据一关键点匹配算法,将各个集合对应的该关键点特征与该集合的各分组中的照片进行关键点匹配,为各个分组中的定损照片分别匹配出至少一组相关关键点;
    根据各个分组对应的相关关键点,利用一线性方程计算出各个分组对应的特征点变换矩阵,并利用对应的特征点变换矩阵,将每一个分组中的其中一张照片转换成与该组另一张照片具有相同拍摄角度的待验证照片;
    将所述待验证照片与该分组中的另一张照片进行特征参数匹配;及
    在待验证照片与该分组中的另一张照片的特征参数不匹配时，生成提醒信息以提醒接收的图片存在欺诈行为。
  12. 如权利要求11所述的保险理赔反欺诈设备,其特征在于,该处理单元调用并执行该保险理赔反欺诈***,在执行所述接收用户通过终端上传的多张从不同拍摄角度拍摄的车辆的定损照片的步骤时还包括:
    识别出所接收的车辆的定损照片中物体的阴影,根据该阴影分析出照片的拍摄角度,其中,物体阴影方向的正前方即是镜头方向,镜头方向与物体平面所呈的夹角为拍摄角度;及
    当所接收的车辆的定损照片的拍摄角度相同时,生成并发送从不同角度继续采集定损照片的提醒信息给所述终端。
  13. 如权利要求11所述的保险理赔反欺诈设备,其特征在于,该处理单元调用并执行该保险理赔反欺诈***,还执行通过下述方法生成所述分析模型:
    收集车辆不同部位的照片并标注相关部位，其中，所述车辆部位包括车头、车尾、及左右侧面；及
    利用卷积神经网络对已经标注出的汽车具体部位的图片进行训练,得到所述能够准确判断出一张图片属于车辆的具体部位的分析模型,其中,在模型训练过程中,采用cross-validation的方法,分多次进行训练和评估,每次从已经标注出的汽车具体部位的图片中抽取预设数量的图片作为测试数据,另外的数量的图片作为训练数据。
  14. 如权利要求11所述的保险理赔反欺诈设备,其特征在于,该处理单元调用并执行该保险理赔反欺诈***,所述关键点检测采用SIFT关键点特征检测方法。
  15. 如权利要求11所述的保险理赔反欺诈设备,其特征在于,该处理单元调用并执行该保险理赔反欺诈***,所述关键点匹配算法为RANSAC算法。
  16. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质存储有一个或者一个以上程序,所述一个或者一个以上程序被一个或者一个以上的处理器用来执行,以实现以下步骤:
    接收用户通过终端上传的多张从不同拍摄角度拍摄的车辆的定损照片;
    利用一分析模型,分析出各个定损照片对应的车辆部位,并对所述定损照片进行分类,以将相同车辆部位的定损照片分为同一照片集合;
    对各个照片集合中的定损照片执行关键点检测,获得各个照片集合对应的车辆部位的关键点特征;
    对各个照片集合的定损照片分别进行两两分组,根据一关键点匹配算法,将各个集合对应的该关键点特征与该集合的各分组中的照片进行关键点匹配,为各个分组中的定损照片分别匹配出至少一组相关关键点;
    根据各个分组对应的相关关键点,利用一线性方程计算出各个分组对应的特征点变换矩阵,并利用对应的特征点变换矩阵,将每一个分组中的其中一张照片转换成与该组另一张照片具有相同拍摄角度的待验证照片;
    将所述待验证照片与该分组中的另一张照片进行特征参数匹配;及
    在待验证照片与该分组中的另一张照片的特征参数不匹配时，生成提醒信息以提醒接收的图片存在欺诈行为。
  17. 如权利要求16所述的计算机可读存储介质,其特征在于,所述接收用户通过终端上传的多张从不同拍摄角度拍摄的车辆的定损照片的步骤包括:
    识别出所接收的车辆的定损照片中物体的阴影,根据该阴影分析出照片的拍摄角度,其中,物体阴影方向的正前方即是镜头方向,镜头方向与物体平面所呈的夹角为拍摄角度;及
    当所接收的车辆的定损照片的拍摄角度相同时,生成并发送从不同角度继续采集定损照片的提醒信息给所述终端。
  18. 如权利要求16所述的计算机可读存储介质，其特征在于，所述一个或者一个以上程序被一个或者一个以上的处理器用来执行，以实现以下步骤来生成所述分析模型：
    收集车辆不同部位的照片并标注相关部位,其中,所述车辆部位包括车头、车尾、及左右侧面;及
    利用卷积神经网络对已经标注出的汽车具体部位的图片进行训练,得到所述能够准确判断出一张图片属于车辆的具体部位的分析模型,其中,在模型训练过程中,采用cross-validation的方法,分多次进行训练和评估,每次从已经标注出的汽车具体部位的图片中抽取预设数量的图片作为测试数据,另外的数量的图片作为训练数据。
  19. 如权利要求16所述的计算机可读存储介质,其特征在于,所述关键点检测采用SIFT关键点特征检测方法。
  20. 如权利要求16所述的计算机可读存储介质,其特征在于,所述关键点匹配算法为RANSAC算法。