CN110991558A - Accident processing method and device based on image recognition and computer equipment

Info

Publication number
CN110991558A
Authority
CN
China
Prior art keywords
accident
target
image
equal part
detail
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911309634.4A
Other languages
Chinese (zh)
Other versions
CN110991558B (en)
Inventor
陈德森 (Chen Desen)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Property and Casualty Insurance Company of China Ltd
Original Assignee
Ping An Property and Casualty Insurance Company of China Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Property and Casualty Insurance Company of China Ltd filed Critical Ping An Property and Casualty Insurance Company of China Ltd
Priority to CN201911309634.4A priority Critical patent/CN110991558B/en
Publication of CN110991558A publication Critical patent/CN110991558A/en
Application granted granted Critical
Publication of CN110991558B publication Critical patent/CN110991558B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00 Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q40/08 Insurance
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Finance (AREA)
  • Accounting & Taxation (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Economics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Development Economics (AREA)
  • Evolutionary Computation (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • Technology Law (AREA)
  • General Business, Economics & Management (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Traffic Control Systems (AREA)
  • Financial Or Insurance-Related Operations Such As Payment And Settlement (AREA)

Abstract

The application relates to the technical field of image recognition and provides an accident handling method, an accident handling device, computer equipment and a storage medium based on image recognition. The method comprises the following steps: receiving a minor traffic accident reporting request sent by a vehicle owner's user terminal, the reporting request carrying accident image information and owner certificate information of the accident vehicle; verifying whether the owner certificate information is legal; if it is legal, performing feature extraction through a feature extraction layer of a preset accident classification model, wherein the feature extraction layer is obtained by training with different fused Attention models; performing classification calculation in a classification output layer of the accident classification model to obtain a classification result; determining an accident responsibility confirmation according to the classification result; and sending the accident responsibility confirmation to the owner's user terminal. In this way, the corresponding accident responsibility confirmation can be made once the owner's user terminal remotely sends a reporting request and uploads the corresponding photos, so that minor traffic accidents can be handled rapidly.

Description

Accident processing method and device based on image recognition and computer equipment
Technical Field
The present application relates to the field of image recognition technologies, and in particular, to an accident handling method and apparatus based on image recognition, a computer device, and a storage medium.
Background
Minor traffic accidents are the most common type of urban traffic accident; in a first-tier city there can be four or five hundred minor accidents in a single day, and their incidence rises markedly during commuting rush hours and holiday peaks.
In the traditional way of handling a minor traffic accident, the owners of the accident vehicles wait at the scene for a traffic police officer or an insurance company claims adjuster to arrive. After recording the information of both owners, the traffic police officer confirms the form of the accident and issues a responsibility confirmation to both owners, and the owners then report the accident to their insurance companies. After the claims adjuster arrives at the scene, the accident responsibility and the vehicle damage are confirmed, and the owner files a claim.
This way of handling minor accidents requires the owner to wait in place for the traffic police or the insurance adjuster, and responsibility determination and handling of the traffic accident cannot be carried out remotely, so the accident scene causes congestion and other problems and disturbs the smooth operation of urban traffic.
Disclosure of Invention
The main purpose of the present application is to provide an accident handling method, an accident handling device, a computer device and a storage medium based on image recognition, aiming to solve the current technical problem that the responsibility confirmation of a minor traffic accident cannot be carried out remotely.
In order to achieve the above object, the present application provides an accident handling method based on image recognition, including:
receiving a minor traffic accident reporting request sent by a vehicle owner's user terminal; the reporting request carries accident image information and owner certificate information of the accident vehicle, wherein the owner certificate information comprises a vehicle license image and a driver's license image of the owner;
verifying whether the owner certificate information is legal;
if it is legal, inputting the accident image information into a preset accident classification model, and performing feature extraction through a feature extraction layer of the accident classification model; the feature extraction layer is obtained by training with different fused Attention models;
inputting the feature extraction result into a classification output layer of the preset accident classification model for classification calculation to obtain a classification result;
determining an accident responsibility confirmation according to the classification result;
and sending the accident responsibility confirmation to the owner's user terminal.
Further, the step of extracting features through a feature extraction layer of the accident classification model includes:
dividing the accident image information into M equal parts and into N equal parts to obtain M first equal parts and N second equal parts, wherein M is not equal to N;
performing dimension-raising and dimension-reducing processing on the M first equal parts and the N second equal parts respectively to obtain M first target equal parts and N second target equal parts;
inputting the M first target equal parts into corresponding first Attention models respectively, and extracting the first detail features corresponding to each first target equal part; inputting the N second target equal parts into corresponding second Attention models respectively, and extracting the second detail features corresponding to each second target equal part;
and adding the extracted first detail features and second detail features to obtain the feature extraction result of the accident image information.
Further, M is 2 and N is 3; the first Attention model is Hard Attention, and the second Attention model is Soft Attention;
the step of adding the extracted first detail features and second detail features to obtain the feature extraction result of the accident image information includes:
adding the first detail feature extracted by inputting the first of the first target equal parts into the corresponding Hard Attention and the second detail feature extracted by inputting the first of the second target equal parts into the corresponding Soft Attention, to obtain a first target detail feature;
adding the first detail feature extracted by inputting the second of the first target equal parts into the corresponding Hard Attention and the second detail feature extracted by inputting the second of the second target equal parts into the corresponding Soft Attention, to obtain a second target detail feature;
and adding the second detail feature extracted by inputting the third of the second target equal parts into the corresponding Soft Attention, the first target detail feature and the second target detail feature, to obtain the feature extraction result of the accident image information.
Further, the step of verifying whether the owner certificate information is legal includes:
cropping the blank areas of the vehicle license image and the driver's license image, adjusting the cropped vehicle license image into a first picture with a preset resolution and a preset size, and adjusting the cropped driver's license image into a second picture with a preset resolution and a preset size;
filling the first picture into a preset first area, and filling the second picture into a preset second area;
intercepting a first image at a first designated position in the first area, and intercepting a second image at a second designated position in the second area;
recognizing, with a character recognition algorithm, the vehicle license number included in the first image and the driver's license number included in the second image;
and calling a traffic police platform interface, querying through the traffic police platform interface, according to the vehicle license number and the driver's license number, whether the states of the vehicle license and the driver's license are normal, and if so, determining that the owner certificate information is legal.
Further, the step of determining an accident responsibility confirmation according to the classification result includes:
calling a preset confirmation template, adding the classification result to a first designated area of the confirmation template, and adding the accident image information to a second designated area of the confirmation template, to generate an initial responsibility confirmation;
acquiring, in the initial responsibility confirmation, the first region area of the first designated area and the second region area of the second designated area;
calculating a first ratio between the first region area and the second region area;
dividing a preset confirmation seal image into a first seal and a second seal, wherein the area ratio of the first seal to the second seal is the first ratio;
and compositing the first seal into the first designated area and the second seal into the second designated area, to generate the accident responsibility confirmation.
The present application also provides an accident handling device based on image recognition, including:
a receiving unit, used for receiving a minor traffic accident reporting request sent by a vehicle owner's user terminal; the reporting request carries accident image information and owner certificate information of the accident vehicle, wherein the owner certificate information comprises a vehicle license image and a driver's license image of the owner;
a verification unit, used for verifying whether the owner certificate information is legal;
an extraction unit, used for inputting the accident image information into a preset accident classification model if the owner certificate information is legal, and performing feature extraction through a feature extraction layer of the accident classification model; the feature extraction layer is obtained by training with different fused Attention models;
a classification unit, used for inputting the feature extraction result into a classification output layer of the preset accident classification model for classification calculation to obtain a classification result;
a determining unit, used for determining an accident responsibility confirmation according to the classification result;
and a sending unit, used for sending the accident responsibility confirmation to the owner's user terminal.
Further, the extraction unit includes:
the dividing subunit is used for dividing the accident image information into M equal parts and into N equal parts to obtain M first equal parts and N second equal parts, wherein M is not equal to N;
the processing subunit is used for performing dimension-raising and dimension-reducing processing on the M first equal parts and the N second equal parts respectively to obtain M first target equal parts and N second target equal parts;
the extraction subunit is used for inputting the M first target equal parts into corresponding first Attention models respectively and extracting the first detail features corresponding to each first target equal part, and for inputting the N second target equal parts into corresponding second Attention models respectively and extracting the second detail features corresponding to each second target equal part;
and the adding subunit is used for adding the extracted first detail features and second detail features to obtain the feature extraction result of the accident image information.
Further, M is 2 and N is 3; the first Attention model is Hard Attention, and the second Attention model is Soft Attention; the adding subunit includes:
the first adding module is used for adding the first detail feature extracted by inputting the first of the first target equal parts into the corresponding Hard Attention and the second detail feature extracted by inputting the first of the second target equal parts into the corresponding Soft Attention, to obtain a first target detail feature;
the second adding module is used for adding the first detail feature extracted by inputting the second of the first target equal parts into the corresponding Hard Attention and the second detail feature extracted by inputting the second of the second target equal parts into the corresponding Soft Attention, to obtain a second target detail feature;
and the third adding module is used for adding the second detail feature extracted by inputting the third of the second target equal parts into the corresponding Soft Attention, the first target detail feature and the second target detail feature, to obtain the feature extraction result of the accident image information.
The present application further provides a computer device comprising a memory and a processor, wherein the memory stores a computer program, and the processor implements the steps of any one of the above methods when executing the computer program.
The present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, carries out the steps of the method of any of the above.
With the accident handling method and device, the computer device and the storage medium based on image recognition, a minor traffic accident reporting request sent by a vehicle owner's user terminal is received, the reporting request carrying accident image information and owner certificate information of the accident vehicle, the owner certificate information comprising a vehicle license image and a driver's license image of the owner; whether the owner certificate information is legal is verified; if it is legal, the accident image information is input into a preset accident classification model and feature extraction is performed through a feature extraction layer of the accident classification model, the feature extraction layer being obtained by training with different fused Attention models; the feature extraction result is input into a classification output layer of the preset accident classification model for classification calculation to obtain a classification result; an accident responsibility confirmation is determined according to the classification result and sent to the owner's user terminal. In this way, the corresponding accident responsibility confirmation can be made once the owner's user terminal remotely sends a reporting request and uploads the corresponding photos, so that a minor traffic accident can be handled rapidly; no police officer needs to be dispatched to the accident scene and the owners do not need to wait in place, which avoids congesting the road.
Drawings
FIG. 1 is a schematic diagram illustrating steps of an accident handling method based on image recognition according to an embodiment of the present application;
FIG. 2 is a block diagram of an accident handling apparatus based on image recognition according to an embodiment of the present application;
fig. 3 is a block diagram illustrating a structure of a computer device according to an embodiment of the present application.
The implementation, functional features and advantages of the objectives of the present application will be further explained with reference to the accompanying drawings.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
Referring to fig. 1, an embodiment of the present application provides an accident handling method based on image recognition, including:
step S1, receiving a minor traffic accident reporting request sent by a vehicle owner's user terminal; the reporting request carries accident image information and owner certificate information of the accident vehicle, wherein the owner certificate information comprises a vehicle license image and a driver's license image of the owner;
step S2, verifying whether the owner certificate information is legal;
step S3, if it is legal, inputting the accident image information into a preset accident classification model, and performing feature extraction through a feature extraction layer of the accident classification model; the feature extraction layer is obtained by training with different fused Attention models;
step S4, inputting the feature extraction result into the classification output layer of the preset accident classification model for classification calculation to obtain a classification result;
step S5, determining an accident responsibility confirmation according to the classification result;
and step S6, sending the accident responsibility confirmation to the owner's user terminal.
In this embodiment, the above method is applied to the scenario of automatically handling a minor traffic accident. When a traffic accident occurs, the owner can select a basic accident form to report according to the accident situation; if the accident involves personal injury or serious vehicle damage, the system automatically notifies the traffic police platform to dispatch nearby police officers to the scene after the owner reports the accident, and informs the owner to wait at the scene for the accident to be handled. The present embodiment is mainly directed to the handling of minor traffic accidents.
When a minor traffic accident occurs, the owner can file a report on the owner's user terminal, that is, send the minor traffic accident reporting request. Specifically, the owner can choose to report in the form of photos or in the form of video. If the report is made in the form of video, a video agent of the traffic police platform connects with the owner through an online video call; the owner films the accident scene and the certificate information of both owners according to the agent's instructions, the agent records and stores the other information of the owners by asking questions, and the owner is then informed that the report succeeded and may leave the scene. In this case, the images in the video are recognized to obtain the accident image information and the owner certificate information of the accident vehicle carried in the owner's reporting request.
If the report is made in the form of photos, when sending the reporting request the owner photographs the accident scene from different angles according to the system prompts, uploads the certificate photos of both owners, fills in the owners' basic information, and finally uploads the photos and the basic information to the system platform, i.e. the server, through the owner's user terminal.
As described in step S1, the server receives the minor traffic accident reporting request from the owner's user terminal. Whether the owner reports in the form of photos or video, the server can obtain the accident image information and the owner certificate information of the accident vehicle from the corresponding reporting request. The owner certificate information comprises the vehicle license image and the driver's license image of the owner and is used to verify the owner's identity. At this point, the server can also send a prompt reminding the owners to clear the scene.
As described in step S2, when receiving the owner certificate information, the server needs to verify whether it is legal; specifically, it mainly verifies whether the owner's vehicle license and driver's license are valid.
If the owner certificate information is legal, the accident image information can be recognized to produce an accident responsibility confirmation. In this embodiment, the accident image information is input into the preset accident classification model and recognized with image recognition technology, which mainly identifies the collision positions of the accident vehicles, their driving directions and the road on which they were driving. The party mainly responsible for the traffic accident is judged comprehensively from the recognized information, and the accident responsibility confirmation is finally generated as a PDF document through the iText component.
In this embodiment, an accident classification model obtained by training in advance is used to divide the responsibilities for a traffic accident. The accident classification model takes the accident image information as input and outputs a classification result, which contains the division of vehicle responsibilities in the accident image information, i.e. which vehicle bears the primary responsibility for the accident and which vehicle bears the secondary responsibility.
As described in step S3, the preset accident classification model includes a feature extraction layer and a classification layer. The feature extraction layer is configured to extract the image features in the accident image information, and the classification layer is configured to perform classification calculation on the extracted image features and output the classification result. The feature extraction layer is trained by fusing different Attention models; that is, it is not built from a single Attention model but trained by fusing several different Attention models, and different Attention models focus on different feature points when extracting features. Extracting features after fusing different Attention models makes it possible to fuse the features of several different points of attention, reflect the image features in the accident image information more comprehensively, avoid missing features in the accident image information, and improve the accuracy of the subsequent classification result.
As described in step S4, the classification output layer of the preset accident classification model is used to perform classification calculation on the feature extraction result to obtain the corresponding classification result. Because the feature extraction result fuses features from several different points of attention, it reflects the image features in the accident image information more comprehensively, which helps improve the classification accuracy when the classification layer performs the classification calculation.
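By way of illustration only, a minimal sketch of what such a classification output layer could look like is given below in PyTorch; the feature dimension and the three responsibility classes are assumptions for the sketch, not values defined by the present application.

```python
import torch
import torch.nn as nn

class ClassificationHead(nn.Module):
    """Maps the fused feature extraction result to responsibility classes."""
    def __init__(self, feature_dim: int = 512, num_classes: int = 3):
        super().__init__()
        # assumed classes: vehicle A primarily responsible, vehicle B primarily responsible, shared responsibility
        self.fc = nn.Linear(feature_dim, num_classes)

    def forward(self, fused_features: torch.Tensor) -> torch.Tensor:
        logits = self.fc(fused_features)
        # softmax turns the logits into probabilities over the responsibility classes
        return torch.softmax(logits, dim=-1)

# usage: class probabilities for one accident image's fused feature vector
head = ClassificationHead()
probs = head(torch.randn(1, 512))
```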
As described in step S5, the corresponding accident responsibility confirmation can be determined according to the classification result. The accident responsibility confirmation contains the classification result, i.e. the division of vehicle responsibilities in the accident image information, and may also contain the accident image information and the owner certificate information carried in the reporting request, which facilitates subsequent operations such as review.
As described in step S6, the accident responsibility confirmation is sent to the owner's user terminal; the owner can view it there and, in case of disagreement, apply for a review. In this embodiment, when a minor traffic accident occurs, the owner only needs to upload the corresponding photo information to the server, and the minor traffic accident can be handled automatically through the server platform without dispatching a police officer to the accident scene or making the owners wait in place, which avoids congesting the road.
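As a rough sketch of this server-side flow, the following code strings steps S1 to S6 together; all the helper functions are hypothetical placeholders for the components described above, not APIs defined by the present application.

```python
from dataclasses import dataclass

@dataclass
class ReportRequest:
    accident_images: list          # photos, or frames extracted from a report video
    vehicle_license_image: bytes   # owner certificate information
    driver_license_image: bytes
    owner_terminal_id: str

# The four helpers below are hypothetical stand-ins for the components described in steps S2-S6.
def verify_certificates(vehicle_license_image, driver_license_image) -> bool:
    return True        # placeholder: OCR the license numbers and query the traffic police platform

def classify_accident(accident_images) -> str:
    return "vehicle A bears primary responsibility"   # placeholder: fused-Attention model + classifier

def build_confirmation(classification, accident_images) -> bytes:
    return classification.encode()   # placeholder: fill the template and composite the split seal

def send_to_terminal(terminal_id, payload) -> None:
    print(f"sent to {terminal_id}: {payload!r}")       # placeholder: push to the owner's terminal

def handle_minor_accident_report(request: ReportRequest) -> None:
    """Sketch of steps S1-S6 of the method."""
    # S2: verify the owner certificate information
    if not verify_certificates(request.vehicle_license_image, request.driver_license_image):
        send_to_terminal(request.owner_terminal_id, "certificate verification failed")
        return
    # S3-S4: feature extraction and classification of the accident images
    classification = classify_accident(request.accident_images)
    # S5: generate the accident responsibility confirmation
    confirmation = build_confirmation(classification, request.accident_images)
    # S6: send the confirmation back to the owner's user terminal
    send_to_terminal(request.owner_terminal_id, confirmation)
```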
In an embodiment, the step S3 of extracting features through the feature extraction layer of the accident classification model includes:
step S31, performing M equal division and N equal division on the accident image information to obtain M first equal divisions and N second equal divisions; wherein M is not equal to N; carrying out different time sharing on the accident image information, wherein the thinning degrees are different; generally, the larger the number of equally divided numbers, the higher the degree of refinement of the accident image information.
step S32, performing dimension-raising and dimension-reducing processing on the M first equal parts and the N second equal parts respectively, to obtain M first target equal parts and N second target equal parts. In this embodiment, the M first equal parts and the N second equal parts can be processed with convolution kernels to raise and reduce their dimensions, so as to ensure that the feature maps of each layer are spatially consistent and that no detail features are lost when extracting features from each equal part. Specifically, if the feature map obtained from a layer A and the feature map obtained from a layer B have the same dimensions, they can be added without affecting the spatial detail features.
step S33, inputting the M first target equal parts into corresponding first Attention models respectively, and extracting the first detail features corresponding to each first target equal part; inputting the N second target equal parts into corresponding second Attention models respectively, and extracting the second detail features corresponding to each second target equal part;
In this embodiment, each first target equal part and each second target equal part obtained by dimension raising and reducing is input into a corresponding Attention model. Dividing the accident image information into different numbers of equal parts gives different degrees of refinement, and for the different degrees of refinement the corresponding first or second target equal parts are input into different Attention models for detail feature extraction. An Attention model is a single depth model that in effect learns a weight distribution; according to the learned weight distribution, i.e. the learned weight values, the features of the accident image receive different degrees of attention. Different Attention models pay attention to different detail features, so the detail features they extract differ; for example, one Attention model may focus on the relative positions of the accident vehicles while another focuses on the collision traces of the accident vehicles.
And step S34, adding the extracted first detail features and second detail features to obtain the feature extraction result of the accident image information.
As described in step S34, because dimension raising and reducing is performed on each of the M equal parts through the same network, and on each of the N equal parts through the same network, the detail features extracted after the first target equal parts and the second target equal parts are input into their corresponding Attention models have the same dimensions and can be added directly. When the detail features are added, the detail features of every part of the accident image information and the features attended to by every Attention model are integrated, so the final result of the addition is more comprehensive, all detail features are kept from being weakened, and a stronger basis is provided for dividing the accident responsibility. In general, detail features of the accident image such as the collision traces, the relative positions of the vehicles and their driving directions can all be retained, rather than only a part of them.
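The sketch below shows one possible way to realize steps S31-S34 in PyTorch; the horizontal splitting, the 1x1 convolution plus global pooling used for dimension raising/reducing, and the feature dimension are illustrative assumptions rather than the network actually used by the present application.

```python
import torch
import torch.nn as nn

class PartFeature(nn.Module):
    """Dimension raising/reducing for one equal part: a 1x1 convolution followed by global
    pooling, so every part yields a detail feature of the same size and the features can
    later be added directly."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.raise_dim = nn.Conv2d(3, dim, kernel_size=1)   # dimension raising
        self.reduce_dim = nn.AdaptiveAvgPool2d(1)           # dimension reducing

    def forward(self, part: torch.Tensor) -> torch.Tensor:
        return self.reduce_dim(torch.relu(self.raise_dim(part))).flatten(1)

class FusedAttentionExtractor(nn.Module):
    """Steps S31-S34: split the accident image M ways and N ways, run each target equal
    part through its own Attention model, and add all detail features."""
    def __init__(self, first_attention: nn.ModuleList, second_attention: nn.ModuleList, dim: int = 64):
        super().__init__()
        self.first_attention = first_attention      # M first Attention models
        self.second_attention = second_attention    # N second Attention models
        self.first_features = nn.ModuleList(PartFeature(dim) for _ in first_attention)
        self.second_features = nn.ModuleList(PartFeature(dim) for _ in second_attention)

    @staticmethod
    def split(image: torch.Tensor, parts: int) -> list:
        return list(torch.chunk(image, parts, dim=2))   # horizontal equal parts

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        m, n = len(self.first_attention), len(self.second_attention)
        first_details = [att(feat(p)) for p, feat, att in
                         zip(self.split(image, m), self.first_features, self.first_attention)]
        second_details = [att(feat(p)) for p, feat, att in
                          zip(self.split(image, n), self.second_features, self.second_attention)]
        # all detail features have the same dimension, so they can be added directly
        return torch.stack(first_details + second_details).sum(dim=0)

# usage with placeholder attention modules (nn.Identity stands in for real Attention models)
extractor = FusedAttentionExtractor(
    nn.ModuleList(nn.Identity() for _ in range(2)),
    nn.ModuleList(nn.Identity() for _ in range(3)))
features = extractor(torch.randn(1, 3, 224, 224))   # feature extraction result, shape (1, 64)
```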
In a specific embodiment, M is 2 and N is 3; the first Attention model is Hard Attention and the second Attention model is Soft Attention. Hard Attention selects part of the components of the distribution with a certain sampling strategy, while Soft Attention keeps all components and weights them. Both Hard Attention and Soft Attention can, to some extent, be understood as single depth models, but they attend to different detail features. After being divided into 2 equal parts, the accident image information is processed by Hard Attention; after being divided into 3 equal parts, it is processed by Soft Attention. The detail features extracted by Hard Attention and Soft Attention therefore also differ.
The step S34 of adding the extracted first detail features and second detail features to obtain the feature extraction result of the accident image information includes:
step S341, adding the first detail feature extracted by inputting the first of the first target equal parts into the corresponding Hard Attention and the second detail feature extracted by inputting the first of the second target equal parts into the corresponding Soft Attention, to obtain a first target detail feature;
step S342, adding the first detail feature extracted by inputting the second of the first target equal parts into the corresponding Hard Attention and the second detail feature extracted by inputting the second of the second target equal parts into the corresponding Soft Attention, to obtain a second target detail feature;
step S343, adding the second detail features extracted by inputting the third one of the second target halves into the corresponding Hard attention, and the first target detail features and the second target detail features to obtain the feature extraction result in the accident image information.
In this embodiment, the first target equal parts comprise two equal parts and the second target equal parts comprise three equal parts, and each equal part yields one detail feature when it is input into its corresponding Attention model. The output of Hard Attention for the first equal part is connected to the output of Soft Attention for the first equal part, and the output of Hard Attention for the second equal part is connected to the output of Soft Attention for the second equal part. The first detail feature extracted by inputting the first of the first target equal parts into the corresponding Hard Attention is added to the second detail feature extracted by inputting the first of the second target equal parts into the corresponding Soft Attention to obtain the first target detail feature; the first detail feature extracted by inputting the second of the first target equal parts into the corresponding Hard Attention is added to the second detail feature extracted by inputting the second of the second target equal parts into the corresponding Soft Attention to obtain the second target detail feature. The first target detail feature and the second target detail feature retain the detail features of each layer.
Finally, the second detail feature extracted by inputting the third of the second target equal parts into the corresponding Soft Attention is added to the first target detail feature and the second target detail feature to obtain the feature extraction result of the accident image information. The feature extraction result retains all the detail features of the accident image information, and no feature of the accident image is lost.
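As a rough illustration of the Hard/Soft Attention distinction and of the pairwise additions in steps S341-S343, the sketch below implements Soft Attention as a softmax weighting over all feature components and Hard Attention as a top-k selection; the specific weighting scheme, the value of k and the feature dimension are assumptions for the sketch, not the Attention models actually trained by the present application.

```python
import torch
import torch.nn as nn

class SoftAttention(nn.Module):
    """Keeps all components and weights them (softmax over learned scores)."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.score = nn.Linear(dim, dim)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        return feat * torch.softmax(self.score(feat), dim=-1)

class HardAttention(nn.Module):
    """Keeps only a subset of components, here chosen with a top-k sampling strategy."""
    def __init__(self, dim: int = 64, k: int = 16):
        super().__init__()
        self.score = nn.Linear(dim, dim)
        self.k = k

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        idx = torch.topk(self.score(feat), self.k, dim=-1).indices
        mask = torch.zeros_like(feat).scatter_(-1, idx, 1.0)
        return feat * mask

# detail features of the 2 first target equal parts (Hard Attention) and the
# 3 second target equal parts (Soft Attention); random tensors stand in for real parts
hard = [HardAttention()(torch.randn(1, 64)) for _ in range(2)]
soft = [SoftAttention()(torch.randn(1, 64)) for _ in range(3)]

first_target_detail = hard[0] + soft[0]          # step S341
second_target_detail = hard[1] + soft[1]         # step S342
feature_extraction_result = soft[2] + first_target_detail + second_target_detail   # step S343
```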
In another embodiment, the step S2 of verifying whether the owner certificate information is legal includes:
step S21, cutting the blank area of the driving license image and the driving license image, and adjusting the cut driving license image into a first picture with preset resolution and preset size; adjusting the cut driver license image into a second picture with a preset resolution and a preset size; in this embodiment, the blank areas of the driving license image and the driving license image are cut out, and only the area with the image information is reserved, so that the driving license image and the driving license image can be conveniently filled into the corresponding area in the following process. The adjustment to the predetermined resolution and the predetermined size is to facilitate the adjustment of the first picture and the second picture to the standard picture, which can be just filled in the corresponding first area or the second area.
step S22, filling the first picture into a preset first area, and filling the second picture into a preset second area. The first area and the second area are two preset areas of fixed size, and the resolution and size requirements for the pictures filled into them are preset; a picture with the preset resolution and preset size exactly fills the first area or the second area.
step S23, intercepting a first image at a first designated position in the first area, and intercepting a second image at a second designated position in the second area. The first designated position and the second designated position are small regions designated in advance within the first area and the second area respectively: the first designated position is the region where the vehicle license number is located in the vehicle license image, and the second designated position is the region where the driver's license number is located in the driver's license image.
Since the pictures filled into the first area and the second area have already been standardized in the previous steps, the first designated position in the first area necessarily contains the vehicle license number and the second designated position in the second area necessarily contains the driver's license number. By intercepting the regions where the vehicle license number and the driver's license number are located step by step, character recognition subsequently only needs to be performed on those regions rather than on the whole vehicle license and driver's license images, which significantly reduces the amount of computation and improves recognition efficiency. It should be understood that although the above processes of cropping blank regions, adjusting the resolution and so on also require a small amount of computation, the total amount of computation is still significantly lower than performing character recognition on the full images.
step S24, recognizing, with a character recognition algorithm, the vehicle license number included in the first image and the driver's license number included in the second image;
and step S25, calling the traffic police platform interface and querying, according to the vehicle license number and the driver's license number, whether the states of the vehicle license and the driver's license are normal; if so, the owner certificate information is determined to be legal. Specifically, the traffic police platform stores the state information of each vehicle license and driver's license, which indicates whether the license is normal. The traffic police platform interface is a police communication interface, so it can be called with the vehicle license number to query whether the state of the vehicle license is normal, and with the driver's license number to query whether the state of the driver's license is normal.
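By way of illustration only, a minimal sketch of steps S21-S25 is shown below; the crop boxes, the target resolution, the use of pytesseract for character recognition, and in particular the traffic police platform URL and its response format are all assumptions made for the sketch.

```python
from PIL import Image
import pytesseract
import requests

TARGET_SIZE = (800, 500)               # assumed preset resolution/size of the standard picture
LICENSE_NO_BOX = (40, 60, 460, 110)    # assumed designated position of the license number

def standardize(license_image_path: str) -> Image.Image:
    """S21-S22: crop the blank border and scale to the preset size, so the license
    number always lands at the same designated position."""
    img = Image.open(license_image_path)
    box = img.getbbox() or (0, 0, img.width, img.height)   # rough stand-in for blank-area cropping
    return img.crop(box).resize(TARGET_SIZE)

def read_license_number(license_image_path: str) -> str:
    """S23-S24: intercept only the number region and run character recognition on it,
    instead of recognizing the whole certificate image."""
    number_region = standardize(license_image_path).crop(LICENSE_NO_BOX)
    return pytesseract.image_to_string(number_region).strip()

def certificates_are_legal(vehicle_license_path: str, driver_license_path: str) -> bool:
    """S25: query the traffic police platform (hypothetical endpoint) for both numbers."""
    vehicle_no = read_license_number(vehicle_license_path)
    driver_no = read_license_number(driver_license_path)
    resp = requests.get("https://traffic-police.example/api/license_status",
                        params={"vehicle_license": vehicle_no, "driver_license": driver_no})
    return resp.json().get("status") == "normal"
```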
In another embodiment, the step S5 of determining the accident responsibility confirmation according to the classification result includes:
step S51, calling a preset admission book template, adding the classification result to a first designated area of the admission book template, and adding the accident image information to a second designated area of the admission book template to generate an initial responsibility admission book; the preset recognition book template is provided with at least two designated areas, namely the first designated area and the second designated area, and the size of the designated areas can be adjusted according to the content added in the designated areas.
step S52, acquiring, in the initial responsibility confirmation, the first region area of the first designated area and the second region area of the second designated area; since the added contents differ, the final areas of the first designated area and the second designated area also differ.
step S53, calculating the first ratio between the first region area and the second region area;
step S54, dividing a preset confirmation seal image into a first seal and a second seal, wherein the area ratio of the first seal to the second seal is the first ratio;
step S55, compositing the first seal into the first designated area and the second seal into the second designated area, to generate the accident responsibility confirmation.
The preset confirmation seal image is used to stamp the initial responsibility confirmation; only an initial responsibility confirmation bearing the seal becomes a valid accident responsibility confirmation. In this embodiment, to improve the security of the accident responsibility confirmation and prevent it from being counterfeited, the preset confirmation seal image is not added onto the initial responsibility confirmation as one complete image, but is divided into several seals which are added onto the initial responsibility confirmation separately. Moreover, the division rule of the preset confirmation seal image is such that the division ratio changes with the first ratio between the first region area and the second region area rather than being fixed; that is, the size ratio of the seals may differ in each accident responsibility confirmation produced, which prevents counterfeiting by others and further ensures security and validity.
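The following sketch shows one way of splitting the confirmation seal in the first ratio and compositing the two pieces into the designated areas (steps S52-S55); the vertical split, the PIL-based compositing and the stand-in images are illustrative assumptions.

```python
from PIL import Image

def split_seal(seal: Image.Image, first_ratio: float) -> tuple:
    """S54: split the seal vertically so that area(first seal) / area(second seal) == first_ratio."""
    w, h = seal.size
    cut = round(w * first_ratio / (1 + first_ratio))   # splitting the width gives the same area ratio
    return seal.crop((0, 0, cut, h)), seal.crop((cut, 0, w, h))

def stamp_confirmation(confirmation: Image.Image, seal: Image.Image,
                       first_area_box: tuple, second_area_box: tuple) -> Image.Image:
    """S52-S55: derive the first ratio from the two designated areas, split the seal
    accordingly, and composite each piece at the top-left corner of its area."""
    def area(box):
        return (box[2] - box[0]) * (box[3] - box[1])
    first_ratio = area(first_area_box) / area(second_area_box)   # S53
    first_seal, second_seal = split_seal(seal, first_ratio)      # S54
    confirmation.paste(first_seal, first_area_box[:2])           # S55
    confirmation.paste(second_seal, second_area_box[:2])
    return confirmation

# usage with blank stand-in images; in practice these would be the rendered initial
# responsibility confirmation page and the preset confirmation seal image
page = stamp_confirmation(Image.new("RGB", (1200, 1600), "white"),
                          Image.new("RGB", (300, 300), "red"),
                          first_area_box=(100, 200, 700, 800),
                          second_area_box=(100, 900, 1100, 1500))
```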
In an embodiment, after the step S6 of sending the accident responsibility confirmation to the owner's user terminal, the method includes:
step S7, acquiring the license plate number of the accident vehicle from the owner certificate information, and searching a database for the policy information corresponding to the accident vehicle according to the license plate number;
and step S8, acquiring the corresponding insurance platform according to the policy information, and sending the accident image information, the owner certificate information and the accident responsibility confirmation to the insurance platform and to the traffic police accident platform.
In this embodiment, after a user insures a vehicle, the database stores the policy information corresponding to the vehicle, and the policy information generally corresponds to the vehicle's license plate number. Therefore, as described in step S7, the license plate number of the accident vehicle contained in the owner certificate information is recognized with a character recognition (OCR) algorithm, and the policy information corresponding to the accident vehicle can be found in the database according to the recognized license plate number. In other embodiments, the vehicle license number, driver's license number and so on can likewise be recognized from the owner certificate information with a character recognition algorithm.
As described in step S8, each piece of policy information includes the corresponding insurance platform, i.e. the platform that underwrote the policy, so the corresponding insurance platform can be identified from the policy information, and the accident image information, the owner certificate information and the accident responsibility confirmation can be sent to the insurance platform to facilitate the owner's insurance claim; alternatively, the information may also be sent to the insured person. Meanwhile, the accident image information, the owner certificate information and the accident responsibility confirmation can be sent to the traffic police accident platform for the record.
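A sketch of steps S7-S8 is given below; the sqlite table, the platform URLs and the payload format are hypothetical and only illustrate looking up the policy by license plate number and forwarding the accident materials.

```python
import sqlite3
from typing import Optional
import requests

def find_policy(db_path: str, plate_number: str) -> Optional[dict]:
    """S7: look up the policy information for the accident vehicle by license plate number."""
    conn = sqlite3.connect(db_path)
    row = conn.execute(
        "SELECT policy_no, insurer_url FROM policies WHERE plate_number = ?",
        (plate_number,)).fetchone()
    conn.close()
    return {"policy_no": row[0], "insurer_url": row[1]} if row else None

def forward_accident_materials(policy: dict, materials: dict) -> None:
    """S8: send the accident images, certificates and responsibility confirmation to the
    insurance platform of the policy and to the traffic police accident platform."""
    requests.post(policy["insurer_url"] + "/claims", json=materials)           # hypothetical endpoint
    requests.post("https://traffic-accident.example/api/records", json=materials)
```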
Referring to fig. 2, an embodiment of the present application further provides an accident handling apparatus based on image recognition, including:
a receiving unit 10, configured to receive a minor traffic accident reporting request sent by a vehicle owner's user terminal; the reporting request carries accident image information and owner certificate information of the accident vehicle, wherein the owner certificate information comprises a vehicle license image and a driver's license image of the owner;
a verification unit 20, configured to verify whether the owner certificate information is legal;
an extraction unit 30, configured to, if it is legal, input the accident image information into a preset accident classification model and perform feature extraction through a feature extraction layer of the accident classification model; the feature extraction layer is obtained by training with different fused Attention models;
a classification unit 40, configured to input the feature extraction result into a classification output layer of the preset accident classification model for classification calculation to obtain a classification result;
a determining unit 50, configured to determine an accident responsibility confirmation according to the classification result;
and a sending unit 60, configured to send the accident responsibility confirmation to the owner's user terminal.
In one embodiment, the extraction unit 30 includes:
the dividing subunit is used for dividing the accident image information into M equal parts and into N equal parts to obtain M first equal parts and N second equal parts, wherein M is not equal to N;
the processing subunit is used for performing dimension-raising and dimension-reducing processing on the M first equal parts and the N second equal parts respectively to obtain M first target equal parts and N second target equal parts;
the extraction subunit is used for inputting the M first target equal parts into corresponding first Attention models respectively and extracting the first detail features corresponding to each first target equal part, and for inputting the N second target equal parts into corresponding second Attention models respectively and extracting the second detail features corresponding to each second target equal part;
and the adding subunit is used for adding the extracted first detail features and second detail features to obtain the feature extraction result of the accident image information.
In a specific embodiment, M is 2 and N is 3; the first Attention model is Hard Attention, and the second Attention model is Soft Attention; the adding subunit includes:
the first adding module is used for adding the first detail feature extracted by inputting the first of the first target equal parts into the corresponding Hard Attention and the second detail feature extracted by inputting the first of the second target equal parts into the corresponding Soft Attention, to obtain a first target detail feature;
the second adding module is used for adding the first detail feature extracted by inputting the second of the first target equal parts into the corresponding Hard Attention and the second detail feature extracted by inputting the second of the second target equal parts into the corresponding Soft Attention, to obtain a second target detail feature;
and the third adding module is used for adding the second detail feature extracted by inputting the third of the second target equal parts into the corresponding Soft Attention, the first target detail feature and the second target detail feature, to obtain the feature extraction result of the accident image information.
In another embodiment, the verification unit 20 includes:
the cropping subunit is used for cropping the blank areas of the vehicle license image and the driver's license image, adjusting the cropped vehicle license image into a first picture with a preset resolution and a preset size, and adjusting the cropped driver's license image into a second picture with a preset resolution and a preset size;
the filling subunit is used for filling the first picture into a preset first area and filling the second picture into a preset second area;
the intercepting subunit is used for intercepting a first image at a first designated position in the first area and intercepting a second image at a second designated position in the second area;
the recognition subunit is used for recognizing, with a character recognition algorithm, the vehicle license number included in the first image and the driver's license number included in the second image;
and the verification subunit is used for calling the traffic police platform interface, querying through the traffic police platform interface, according to the vehicle license number and the driver's license number, whether the states of the vehicle license and the driver's license are normal, and if so, determining that the owner certificate information is legal.
In still another embodiment, the determining unit 50 includes:
a calling subunit, configured to call a preset confirmation template, add the classification result to a first designated area of the confirmation template, and add the accident image information to a second designated area of the confirmation template, so as to generate an initial responsibility confirmation;
an acquisition subunit, configured to acquire, in the initial responsibility confirmation, the first region area of the first designated area and the second region area of the second designated area;
a calculating subunit, configured to calculate the first ratio between the first region area and the second region area;
a segmentation subunit, configured to divide a preset confirmation seal image into a first seal and a second seal, wherein the area ratio of the first seal to the second seal is the first ratio;
and a compositing subunit, configured to composite the first seal into the first designated area and the second seal into the second designated area, so as to generate the accident responsibility confirmation.
In an embodiment, the apparatus further includes:
the searching unit is used for acquiring the license plate number of the accident vehicle from the owner certificate information and searching a database for the policy information corresponding to the accident vehicle according to the license plate number;
and the first sending unit is used for acquiring the corresponding insurance platform according to the policy information and sending the accident image information, the owner certificate information and the accident responsibility confirmation to the insurance platform and the traffic police accident platform.
For the specific implementation of each unit and module in the foregoing embodiments, please refer to the description in the foregoing method embodiments, which is not repeated herein.
The present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, carries out the steps of the method of any of the above.
Referring to fig. 3, an embodiment of the present application also provides a computer device, which may be a server and whose internal structure may be as shown in fig. 3. The computer device includes a processor, a memory, a network interface and a database connected by a system bus. The processor of the computer device is configured to provide computation and control capabilities. The memory of the computer device comprises a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program and a database. The internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The database of the computer device is used to store image data and the like. The network interface of the computer device is used to communicate with an external terminal through a network connection. The computer program, when executed by the processor, implements the accident handling method based on image recognition.
Those skilled in the art will appreciate that the architecture shown in fig. 3 is only a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects may be applied.
In summary, with the accident handling method and apparatus, the computer device and the storage medium based on image recognition provided in the embodiments of the present application, a minor traffic accident reporting request sent by a vehicle owner's user terminal is received, the reporting request carrying accident image information and owner certificate information of the accident vehicle, the owner certificate information comprising a vehicle license image and a driver's license image of the owner; whether the owner certificate information is legal is verified; if it is legal, the accident image information is input into a preset accident classification model and feature extraction is performed through a feature extraction layer of the accident classification model, the feature extraction layer being obtained by training with different fused Attention models; the feature extraction result is input into a classification output layer of the preset accident classification model for classification calculation to obtain a classification result; an accident responsibility confirmation is determined according to the classification result and sent to the owner's user terminal. In this way, the corresponding accident responsibility confirmation can be made once the owner's user terminal remotely sends a reporting request and uploads the corresponding photos, so that minor traffic accidents can be handled rapidly.
It will be understood by those skilled in the art that all or part of the processes of the methods in the above embodiments may be implemented by a computer program instructing the relevant hardware; the computer program may be stored on a non-volatile computer-readable storage medium and, when executed, may include the processes of the above method embodiments. Any reference to memory, storage, a database or another medium provided herein and used in the embodiments may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM) and direct Rambus dynamic RAM (DRDRAM).
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, apparatus, article, or method that comprises a list of elements includes not only those elements but may also include other elements not expressly listed, or elements inherent to such a process, apparatus, article, or method. Without further limitation, an element defined by the phrase "comprising a(n) …" does not exclude the presence of other identical elements in the process, apparatus, article, or method that includes that element.
The above description covers only preferred embodiments of the present application and is not intended to limit its scope; all equivalent structural or process modifications made using the contents of the specification and drawings of the present application, whether applied directly or indirectly in other related technical fields, are likewise intended to fall within the scope of protection of the present application.

Claims (10)

1. An accident handling method based on image recognition is characterized by comprising the following steps:
receiving a minor traffic accident reporting request sent by a vehicle owner user terminal; wherein the reporting request carries accident image information of the accident vehicle and owner certificate information, the owner certificate information comprising a vehicle license image and a driver's license image of the owner;
verifying whether the owner certificate information is legal or not;
if the owner certificate information is legal, inputting the accident image information into a preset accident classification model, and performing feature extraction through a feature extraction layer of the accident classification model; wherein the feature extraction layer is trained by fusing different Attention models;
inputting the feature extraction result into a classification output layer of the preset accident classification model for classification calculation to obtain a classification result;
determining an accident liability confirmation according to the classification result;
and sending the accident liability confirmation to the owner user terminal.
2. The accident handling method based on image recognition according to claim 1, wherein the step of performing feature extraction through a feature extraction layer of the accident classification model comprises:
dividing the accident image information into M equal parts and into N equal parts respectively, to obtain M first equal parts and N second equal parts; wherein M is not equal to N;
performing dimension increasing and dimension reducing processing on the M first equal parts and the N second equal parts respectively to obtain M first target equal parts and N second target equal parts;
respectively inputting the M first target equal parts into corresponding first Attention models, and respectively extracting first detail characteristics corresponding to each first target equal part; respectively inputting the N second target equal parts into corresponding second Attention models, and respectively extracting second detail characteristics corresponding to each second target equal part;
and adding the extracted first detail features and the extracted second detail features to obtain a feature extraction result in the accident image information.
3. The image recognition-based accident handling method according to claim 2, wherein M is 2 and N is 3; the first Attention model is Hard Attention, and the second Attention model is Soft Attention;
the step of adding the extracted first detail features and the extracted second detail features to obtain a feature extraction result in the accident image information includes:
adding a first detail feature, extracted by inputting the first equal part of the first target equal parts into the corresponding Hard Attention, to a second detail feature, extracted by inputting the first equal part of the second target equal parts into the corresponding Soft Attention, to obtain a first target detail feature;
adding a first detail feature, extracted by inputting the second equal part of the first target equal parts into the corresponding Hard Attention, to a second detail feature, extracted by inputting the second equal part of the second target equal parts into the corresponding Soft Attention, to obtain a second target detail feature;
and adding a second detail feature, extracted by inputting the third equal part of the second target equal parts into the corresponding Hard Attention, to the first target detail feature and the second target detail feature to obtain the feature extraction result in the accident image information.
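The split-and-fuse scheme of claims 2 and 3 can be pictured with the following minimal PyTorch sketch, assuming M = 2 and N = 3. The soft_attention and hard_attention helpers are simplified stand-ins (a softmax reweighting and a single-position selection), not the patent's trained Attention models, and the dimension increasing and reducing step is omitted for brevity; the pairing of equal parts follows the wording of claim 3.

```python
import torch
import torch.nn.functional as F

def soft_attention(x: torch.Tensor) -> torch.Tensor:
    # x: (C, H, W) region; weight every spatial position by a softmax score
    scores = x.mean(dim=0).flatten()                        # (H*W,)
    weights = F.softmax(scores, dim=0).view(1, *x.shape[1:])
    return (x * weights).sum(dim=(1, 2))                    # (C,) detail feature

def hard_attention(x: torch.Tensor) -> torch.Tensor:
    # keep only the single highest-scoring spatial position
    scores = x.mean(dim=0)                                  # (H, W)
    h, w = divmod(int(scores.argmax()), scores.shape[1])
    return x[:, h, w]                                       # (C,) detail feature

def extract_features(image: torch.Tensor) -> torch.Tensor:
    # image: (C, H, W); split into M = 2 halves and N = 3 thirds along the width
    halves = torch.chunk(image, 2, dim=2)                   # first target equal parts
    thirds = torch.chunk(image, 3, dim=2)                   # second target equal parts

    # pairwise sums as in claim 3 (the third equal part goes through Hard Attention)
    first_target_detail = hard_attention(halves[0]) + soft_attention(thirds[0])
    second_target_detail = hard_attention(halves[1]) + soft_attention(thirds[1])
    third_detail = hard_attention(thirds[2])
    return first_target_detail + second_target_detail + third_detail

# Example: a 3-channel 224x224 accident photo yields a 3-dimensional fused feature here;
# in practice the channels would come from a convolutional backbone instead.
features = extract_features(torch.rand(3, 224, 224))
```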
4. The accident handling method based on image recognition according to claim 1, wherein the step of verifying whether the owner certificate information is legal comprises:
cropping the blank areas of the vehicle license image and the driver's license image, adjusting the cropped vehicle license image into a first picture with a preset resolution and a preset size, and adjusting the cropped driver's license image into a second picture with a preset resolution and a preset size;
filling the first picture into a preset first area, and filling the second picture into a preset second area;
intercepting a first image at a first designated position in the first area, and intercepting a second image at a second designated position in the second area;
recognizing, by a character recognition algorithm, the vehicle license number contained in the first image and the driver's license number contained in the second image;
and calling a traffic police platform interface, inquiring through the traffic police platform interface, according to the vehicle license number and the driver's license number, whether the states of the vehicle license and the driver's license are normal, and if so, determining that the owner certificate information is legal.
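As an illustration of the cropping, filling and recognition steps in claim 4, the following Python sketch uses Pillow and Tesseract OCR as stand-ins for the character recognition algorithm. The canvas size, the number-box coordinates and the query_traffic_police_platform helper are hypothetical values and placeholders, not taken from the patent.

```python
import re
from PIL import Image, ImageOps
import pytesseract

CANVAS = (800, 600)                    # preset resolution and size of the filled area (assumed)
NUMBER_BOX = (120, 60, 560, 120)       # designated position holding the licence number (assumed)

def read_license_number(image_path: str) -> str:
    img = Image.open(image_path).convert("RGB")
    mask = ImageOps.invert(img.convert("L"))       # white (blank) border becomes zero
    box = mask.getbbox()                           # bounding box of the non-blank content
    if box:
        img = img.crop(box)                        # cut away the blank area
    img = img.resize(CANVAS)                       # adjust to the preset resolution and size
    canvas = Image.new("RGB", CANVAS, "white")     # preset area
    canvas.paste(img, (0, 0))                      # fill the picture into the area
    region = canvas.crop(NUMBER_BOX)               # intercept the designated position
    text = pytesseract.image_to_string(region)     # character recognition
    match = re.search(r"[A-Z0-9]{6,}", text)
    return match.group(0) if match else ""

def query_traffic_police_platform(vehicle_no: str, driver_no: str) -> bool:
    # placeholder for the traffic police platform interface call
    return bool(vehicle_no) and bool(driver_no)

def owner_certificates_are_legal(vehicle_license_path: str, driver_license_path: str) -> bool:
    vehicle_no = read_license_number(vehicle_license_path)
    driver_no = read_license_number(driver_license_path)
    return query_traffic_police_platform(vehicle_no, driver_no)
```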
5. The accident handling method based on image recognition according to claim 1, wherein the step of determining the accident liability confirmation according to the classification result comprises:
calling a preset confirmation template, adding the classification result to a first designated area of the confirmation template, and adding the accident image information to a second designated area of the confirmation template to generate an initial liability confirmation;
acquiring, in the initial liability confirmation, a first region area of the first designated area and a second region area of the second designated area;
calculating a first ratio between the first region area and the second region area;
dividing a preset confirmation seal image into a first seal and a second seal, wherein the area ratio of the first seal to the second seal is the first ratio;
and compositing the first seal onto the first designated area and the second seal onto the second designated area to generate the accident liability confirmation.
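A minimal Pillow sketch of the seal-splitting step in claim 5 follows. The region coordinates, the vertical cut of the seal and the file-based interface are assumptions for illustration only; the patent does not fix how the seal image is divided.

```python
from PIL import Image

FIRST_AREA = (50, 400, 350, 520)     # (left, top, right, bottom) of the classification region (assumed)
SECOND_AREA = (50, 540, 550, 900)    # region holding the accident images (assumed)

def region_area(box: tuple) -> int:
    left, top, right, bottom = box
    return (right - left) * (bottom - top)

def stamp_confirmation(confirmation_path: str, seal_path: str, out_path: str) -> None:
    doc = Image.open(confirmation_path).convert("RGBA")
    seal = Image.open(seal_path).convert("RGBA")

    # first ratio between the two designated regions
    ratio = region_area(FIRST_AREA) / region_area(SECOND_AREA)

    # split the seal so the two parts' areas follow that ratio (vertical cut, assumed)
    cut = int(seal.width * ratio / (1 + ratio))
    first_seal = seal.crop((0, 0, cut, seal.height))
    second_seal = seal.crop((cut, 0, seal.width, seal.height))

    # composite each seal part onto its designated region
    doc.paste(first_seal, FIRST_AREA[:2], first_seal)
    doc.paste(second_seal, SECOND_AREA[:2], second_seal)
    doc.save(out_path)   # out_path should be a PNG to preserve transparency
```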
6. An accident handling device based on image recognition, comprising:
the receiving unit is used for receiving a minor traffic accident reporting request sent by a vehicle owner user terminal; wherein the reporting request carries accident image information of the accident vehicle and owner certificate information, the owner certificate information comprising a vehicle license image and a driver's license image of the owner;
the verification unit is used for verifying whether the owner certificate information is legal or not;
the extraction unit is used for inputting the accident image information into a preset accident classification model if the owner certificate information is legal, and performing feature extraction through a feature extraction layer of the accident classification model; wherein the feature extraction layer is trained by fusing different Attention models;
the classification unit is used for inputting the feature extraction result into a classification output layer of the preset accident classification model for classification calculation to obtain a classification result;
the determining unit is used for determining an accident liability confirmation according to the classification result;
and the sending unit is used for sending the accident liability confirmation to the owner user terminal.
7. The image recognition-based accident handling device of claim 6, wherein the extraction unit comprises:
the dividing subunit is used for dividing the accident image information into M equal parts and into N equal parts respectively, to obtain M first equal parts and N second equal parts, wherein M is not equal to N;
the processing subunit is used for performing dimension increasing and dimension reducing processing on the M first equal parts and the N second equal parts respectively, to obtain M first target equal parts and N second target equal parts;
the extraction subunit is used for respectively inputting the M first target equal parts into corresponding first Attention models and respectively extracting first detail characteristics corresponding to each first target equal part; respectively inputting the N second target equal parts into corresponding second Attention models, and respectively extracting second detail characteristics corresponding to each second target equal part;
and the adding subunit is used for adding the extracted first detail features and the second detail features to obtain a feature extraction result in the accident image information.
8. The image recognition-based accident handling device of claim 7, wherein M is 2 and N is 3; the first Attention model is Hard Attention, and the second Attention model is Soft Attention; the addition subunit comprises:
the first adding module is used for adding a first detail feature, extracted by inputting the first equal part of the first target equal parts into the corresponding Hard Attention, to a second detail feature, extracted by inputting the first equal part of the second target equal parts into the corresponding Soft Attention, to obtain a first target detail feature;
the second adding module is used for adding a first detail feature, extracted by inputting the second equal part of the first target equal parts into the corresponding Hard Attention, to a second detail feature, extracted by inputting the second equal part of the second target equal parts into the corresponding Soft Attention, to obtain a second target detail feature;
and the third adding module is used for adding a second detail feature, extracted by inputting the third equal part of the second target equal parts into the corresponding Hard Attention, to the first target detail feature and the second target detail feature to obtain the feature extraction result in the accident image information.
9. A computer device comprising a memory and a processor, the memory having stored therein a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method according to any of claims 1 to 5.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 5.
CN201911309634.4A 2019-12-18 2019-12-18 Accident handling method and device based on image recognition and computer equipment Active CN110991558B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911309634.4A CN110991558B (en) 2019-12-18 2019-12-18 Accident handling method and device based on image recognition and computer equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911309634.4A CN110991558B (en) 2019-12-18 2019-12-18 Accident handling method and device based on image recognition and computer equipment

Publications (2)

Publication Number Publication Date
CN110991558A true CN110991558A (en) 2020-04-10
CN110991558B CN110991558B (en) 2023-04-28

Family

ID=70095273

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911309634.4A Active CN110991558B (en) 2019-12-18 2019-12-18 Accident handling method and device based on image recognition and computer equipment

Country Status (1)

Country Link
CN (1) CN110991558B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160323741A1 (en) * 2015-04-30 2016-11-03 Research & Business Foundation Sungkyunkwan University Method and apparatus for transmitting vehicle accident information based on interaction between devices and method and vehicle accident information collection apparatus
CN107240025A (en) * 2017-05-22 2017-10-10 深圳市中车数联科技有限公司 Traffic accident treatment method, system and computer-readable recording medium
CN107909113A (en) * 2017-11-29 2018-04-13 北京小米移动软件有限公司 Traffic-accident image processing method, device and storage medium
CN109740416A (en) * 2018-11-19 2019-05-10 深圳市华尊科技股份有限公司 Method for tracking target and Related product
CN109741602A (en) * 2019-01-11 2019-05-10 福建工程学院 A kind of method and system of fender-bender auxiliary fix duty
CN109754326A (en) * 2019-01-11 2019-05-14 福建工程学院 A kind of fender-bender assists the method and system of quick setting loss
CN109919140A (en) * 2019-04-02 2019-06-21 浙江科技学院 Vehicle collision accident responsibility automatic judging method, system, equipment and storage medium
CN109961056A (en) * 2019-04-02 2019-07-02 浙江科技学院 Traffic accident responsibility identification, system and equipment based on decision Tree algorithms

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LIANG Minxiong: "Foshan: Online Direct Claim Settlement for Minor Traffic Accidents", Road Traffic Management *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112233421A (en) * 2020-10-15 2021-01-15 胡歆柯 Intelligent city intelligent traffic monitoring system based on machine vision
CN114301943A (en) * 2021-12-30 2022-04-08 合众新能源汽车有限公司 Self-service case reporting method and self-service case reporting system based on bicycle accidents

Also Published As

Publication number Publication date
CN110991558B (en) 2023-04-28

Similar Documents

Publication Publication Date Title
KR102151365B1 (en) Image-based vehicle loss evaluation method, apparatus and system, and electronic device
CN111667011B (en) Damage detection model training and vehicle damage detection method, device, equipment and medium
US11373249B1 (en) Automobile monitoring systems and methods for detecting damage and other conditions
CN113538714B (en) Parking lot control method, system and computer readable storage medium
CN109784170B (en) Vehicle risk assessment method, device, equipment and storage medium based on image recognition
CN115605889A (en) Method for determining damage to vehicle parts
CN107862340A (en) A kind of model recognizing method and device
US20140316825A1 (en) Image based damage recognition and repair cost estimation
CN109741602A (en) A kind of method and system of fender-bender auxiliary fix duty
CN110287971A (en) Data verification method, device, computer equipment and storage medium
CN110991558A (en) Accident processing method and device based on image recognition and computer equipment
CN111079751B (en) Method and device for identifying authenticity of license plate, computer equipment and storage medium
CN109800984B (en) Driving level evaluation method, driving level evaluation device, computer device, and storage medium
WO2021184564A1 (en) Image-based accident liability determination method and apparatus, computer device, and storage medium
US20230334987A1 (en) Systems and methods for fraud prevention based on video analytics
CN110298454A (en) Checking method, device, computer equipment and the storage medium of operation image
CN112241127B (en) Automatic driving safety scoring method, automatic driving safety scoring device, computer equipment and storage medium
CN108876633B (en) Method and device for processing insurance of foreign policy, computer equipment and storage medium
CN111709413A (en) Certificate verification method and device based on image recognition, computer equipment and medium
CN110807630A (en) Payment method and device based on face recognition, computer equipment and storage medium
CN110310018B (en) License plate information processing method and device, electronic equipment and storage medium
CN111985448A (en) Vehicle image recognition method and device, computer equipment and readable storage medium
CN111415150B (en) Mobile payment method and device based on vehicle-mounted terminal and computer equipment
CN109360137A (en) A kind of car accident appraisal procedure, computer readable storage medium and server
CN113837170A (en) Automatic auditing processing method, device and equipment for vehicle insurance claim settlement application

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant