CN111046212A - Traffic accident processing method and device and electronic equipment - Google Patents

Traffic accident processing method and device and electronic equipment

Info

Publication number
CN111046212A
CN111046212A
Authority
CN
China
Prior art keywords
accident
target
image data
responsibility
traffic accident
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911227447.1A
Other languages
Chinese (zh)
Inventor
樊太飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ant Shengxin (Shanghai) Information Technology Co.,Ltd.
Original Assignee
Alipay Hangzhou Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alipay Hangzhou Information Technology Co Ltd filed Critical Alipay Hangzhou Information Technology Co Ltd
Priority to CN201911227447.1A priority Critical patent/CN111046212A/en
Publication of CN111046212A publication Critical patent/CN111046212A/en
Pending legal-status Critical Current


Classifications

    • G06F16/583 — Information retrieval of still image data; retrieval characterised by using metadata automatically derived from the content
    • G06F16/55 — Information retrieval of still image data; clustering; classification
    • G06N20/00 — Machine learning
    • G06Q50/26 — ICT specially adapted for business processes of specific sectors; government or public services
    • G08G1/0104 — Traffic control systems for road vehicles; measuring and analysing of parameters relative to traffic conditions
    • G08G1/0125 — Traffic data processing
    • G08G1/0175 — Detecting movement of traffic; identifying vehicles by photographing vehicles, e.g. when violating traffic rules

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Analytical Chemistry (AREA)
  • Chemical & Material Sciences (AREA)
  • Software Systems (AREA)
  • Tourism & Hospitality (AREA)
  • Library & Information Science (AREA)
  • Databases & Information Systems (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Evolutionary Computation (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Educational Administration (AREA)
  • Medical Informatics (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Traffic Control Systems (AREA)

Abstract

One or more embodiments of the present specification provide a traffic accident handling method and apparatus, and an electronic device. The method includes: acquiring image data of an accident scene of a target traffic accident; extracting feature data based on the image data, wherein the feature data is data related to the responsibility determination performed for a target accident vehicle in the target traffic accident; inputting the feature data into a prediction model, so that the prediction model predicts a responsibility confirmation result of the target accident vehicle in the target traffic accident based on the feature data, the prediction model being a machine learning model trained on a plurality of feature data samples labelled with responsibility confirmation results; and outputting the predicted responsibility confirmation result of the target accident vehicle in the target traffic accident.

Description

Traffic accident processing method and device and electronic equipment
Technical Field
One or more embodiments of the present disclosure relate to the field of computer application technologies, and in particular, to a method and an apparatus for processing a traffic accident, and an electronic device.
Background
At present, after a traffic accident occurs, the traffic management department generally needs to dispatch traffic police to the accident scene for investigation, so that the department can determine traffic accident responsibility for each accident vehicle according to the actual situation at the scene, that is, determine the share of responsibility that the driver of each vehicle should bear in the traffic accident, for example: full responsibility, no responsibility, and so on. However, such manual responsibility determination for accident vehicles is generally inefficient, and makes it inconvenient to subsequently perform the corresponding business processing on the accident vehicles according to the responsibility confirmation result.
Disclosure of Invention
The specification provides a traffic accident handling method, which is applied to a server and comprises the following steps:
acquiring image data of an accident scene of a target traffic accident;
extracting feature data based on the image data; wherein the feature data is data related to the responsibility determination performed for a target accident vehicle in the target traffic accident;
inputting the feature data into a prediction model, so that the prediction model predicts a responsibility confirmation result of the target accident vehicle in the target traffic accident based on the feature data; the prediction model is a machine learning model trained on a plurality of feature data samples labelled with responsibility confirmation results;
and outputting the predicted responsibility confirmation result of the target accident vehicle in the target traffic accident.
Optionally, the image data comprises: vehicle image data; vehicle driving track image data; road surface image data; traffic light image data.
Optionally, the acquiring image data of the accident scene of the target traffic accident includes:
acquiring accident scene image data of the target traffic accident sent by a client; wherein the image data is photographed by the client invoking a camera.
Optionally, the extracting feature data based on the image data includes:
determining the damaged part and driving direction of the target accident vehicle, and the road surface condition and traffic light condition at the accident scene of the target traffic accident, based on the image data;
and determining the damaged part and driving direction of the target accident vehicle, and the road surface condition and traffic light condition at the accident scene of the target traffic accident, as feature data.
Optionally, the responsibility confirmation result is responsible or not responsible, and the machine learning model is a binary classification model;
or, the responsibility confirmation result is one of the following: full responsibility, principal responsibility, secondary responsibility, equal responsibility, and no responsibility, and the machine learning model is a multi-classification model.
The present specification also provides a traffic accident handling apparatus, the apparatus comprising:
the acquisition module acquires image data of an accident scene of a target traffic accident;
an extraction module that extracts feature data based on the image data; wherein the feature data is data related to the responsibility determination performed for a target accident vehicle in the target traffic accident;
a prediction module that inputs the feature data into a prediction model, so that the prediction model predicts a responsibility confirmation result of the target accident vehicle in the target traffic accident based on the feature data; the prediction model is a machine learning model trained on a plurality of feature data samples labelled with responsibility confirmation results;
and the output module is used for outputting the predicted responsibility confirmation result of the target accident vehicle in the target traffic accident.
Optionally, the image data comprises: vehicle image data; vehicle driving track image data; road surface image data; traffic light image data.
Optionally, the obtaining module:
acquiring accident scene image data of the target traffic accident sent by a client; wherein the image data is photographed by the client invoking a camera.
Optionally, the extraction module:
determining the damaged part and driving direction of the target accident vehicle, and the road surface condition and traffic light condition at the accident scene of the target traffic accident, based on the image data;
and determining the damaged part and driving direction of the target accident vehicle, and the road surface condition and traffic light condition at the accident scene of the target traffic accident, as feature data.
Optionally, the responsibility confirmation result is responsible or not responsible, and the machine learning model is a binary classification model;
or, the responsibility confirmation result is one of the following: full responsibility, principal responsibility, secondary responsibility, equal responsibility, and no responsibility, and the machine learning model is a multi-classification model.
This specification also proposes an electronic device including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor implements the steps of the above method by executing the executable instructions.
The present specification also provides a computer-readable storage medium having stored thereon computer instructions which, when executed by a processor, implement the steps of the above method.
In the above technical solution, the responsibility confirmation result of an accident vehicle in a traffic accident can be predicted based on image data of the accident scene, the predicted responsibility confirmation result can be output, and the corresponding business processing can be performed based on that result. Responsibility determination for accident vehicles is thereby automated and no longer needs to be carried out manually, which improves the efficiency of traffic accident responsibility determination and facilitates the subsequent business processing performed according to the responsibility confirmation result.
Drawings
FIG. 1 is a schematic view of a traffic accident handling system shown in an exemplary embodiment of the present description;
FIG. 2 is a flow chart of a traffic accident handling method shown in an exemplary embodiment of the present description;
FIG. 3 is a hardware structure diagram of an electronic device in which a traffic accident handling apparatus according to an exemplary embodiment of the present specification is located;
FIG. 4 is a block diagram of a traffic accident handling apparatus according to an exemplary embodiment of the present specification.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with one or more embodiments of the present specification. Rather, they are merely examples of apparatus and methods consistent with certain aspects of one or more embodiments of the specification, as detailed in the claims which follow.
The terminology used in the description herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the description. As used in this specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various information, the information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, without departing from the scope of the present specification, the first information may also be referred to as second information, and similarly, the second information may also be referred to as first information. The word "if" as used herein may be interpreted as "upon", "when", or "in response to determining", depending on the context.
The present specification aims to provide a technical solution for predicting a responsibility confirmation result of an accident vehicle in a traffic accident based on image data of an accident scene of the traffic accident that has occurred, and outputting the predicted responsibility confirmation result.
In a specific implementation, a machine learning model may be trained in advance on a plurality of feature data samples labelled with responsibility confirmation results, and the trained machine learning model may be used as the prediction model for predicting responsibility confirmation results; the feature data may be data related to responsibility determination, extracted from image data of the accident scene of a traffic accident.
For a target traffic accident that has occurred, image data of the accident scene of the target traffic accident may be acquired; data related to the responsibility determination performed for a target accident vehicle in the target traffic accident may be extracted as feature data based on the acquired image data; and the extracted feature data may be input into the prediction model, so that the prediction model predicts the responsibility confirmation result of the target accident vehicle in the target traffic accident based on the feature data.
Subsequently, the predicted responsibility confirmation result may be output to perform a corresponding business process based on the responsibility confirmation result.
In the above technical solution, the responsibility confirmation result of an accident vehicle in a traffic accident can be predicted based on image data of the accident scene, the predicted responsibility confirmation result can be output, and the corresponding business processing can be performed based on that result. Responsibility determination for accident vehicles is thereby automated and no longer needs to be carried out manually, which improves the efficiency of traffic accident responsibility determination and facilitates the subsequent business processing performed according to the responsibility confirmation result.
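The acquire–extract–predict–output flow described above can be sketched as a minimal pipeline. All names below (`handle_traffic_accident`, `extract_features`, the stand-in `model`) are illustrative assumptions, not part of the disclosed implementation:

```python
# Minimal sketch of the disclosed pipeline (all names are illustrative).

def handle_traffic_accident(image_data, extract_features, model):
    """Predict a responsibility confirmation result from accident-scene images.

    image_data       -- images of the accident scene (vehicle, track, road, lights)
    extract_features -- callable producing responsibility-related feature data
    model            -- machine learning model pre-trained on labelled samples
    """
    features = extract_features(image_data)  # extract feature data from the images
    result = model.predict(features)         # predict via the pre-trained model
    return result                            # output the responsibility result

# Example run with trivial stand-ins for the extractor and the model:
label = handle_traffic_accident(
    image_data={"vehicle": "...", "road": "..."},
    extract_features=lambda imgs: [1, 0, 1, 0],
    model=type("M", (), {"predict": staticmethod(lambda f: "full responsibility")}),
)
print(label)  # full responsibility
```

The stand-ins only demonstrate the interface; the specification leaves the extractor and the model architecture open.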
Referring to fig. 1, fig. 1 is a schematic diagram of a traffic accident handling system according to an exemplary embodiment of the present disclosure.
In practical applications, for a traffic accident that has occurred, the traffic management department may predict the responsibility confirmation result of an accident vehicle in the traffic accident based on image data of the accident scene and, according to the predicted responsibility confirmation result, perform business processes such as responsibility pursuit, for example: penalizing the driver of the accident vehicle.
Alternatively, the insurance company may predict the responsibility confirmation result of the accident vehicle in the traffic accident based on the image data of the accident scene of the traffic accident, and perform the business process such as settlement of the accident vehicle according to the predicted responsibility confirmation result.
That is, in the traffic accident handling system shown in fig. 1, the service execution side electronic device may be an electronic device used by a service executor, such as a traffic management department or an insurance company, that needs to determine the responsibility confirmation result of an accident vehicle in a traffic accident; the electronic device may be a server, a computer, a mobile phone, a tablet device, a notebook computer, a personal digital assistant (PDA), or the like, which is not limited in this specification.
The user of the service execution side may bring a camera-equipped user electronic device to the accident scene of the traffic accident and photograph the scene through that device, for example: a client running on the user electronic device may invoke the device's built-in camera to photograph the accident scene. Subsequently, the photographed image data of the accident scene may be uploaded from the user electronic device to the service execution side electronic device, or the user may connect the user electronic device to the service execution side electronic device so that the latter can read the photographed image data from a storage medium of the user electronic device.
Alternatively, the image data of the accident site of the traffic accident, which is captured by the image capturing device for monitoring installed near the accident site of the traffic accident, may be uploaded to the service execution side electronic device. Specifically, the electronic device on the service execution side may send a notification message to a camera device for monitoring installed near an accident site of the traffic accident to trigger the camera device to shoot the accident site of the traffic accident, and upload image data of the accident site of the traffic accident obtained by shooting to the electronic device on the service execution side.
The service execution side electronic device may further predict a responsibility confirmation result of the accident vehicle in the traffic accident based on the image data, and output the predicted responsibility confirmation result.
Referring to fig. 2, fig. 2 is a flowchart illustrating a traffic accident handling method according to an exemplary embodiment of the present disclosure.
The traffic accident handling method can be applied to the electronic device of the service execution party shown in fig. 1, and comprises the following steps:
step 202, acquiring image data of an accident scene of a target traffic accident;
step 204, extracting feature data based on the image data; wherein the feature data is data related to the responsibility determination performed for a target accident vehicle in the target traffic accident;
step 206, inputting the feature data into a prediction model, so that the prediction model predicts the responsibility confirmation result of the target accident vehicle in the target traffic accident based on the feature data; the prediction model is a machine learning model trained on a plurality of feature data samples labelled with responsibility confirmation results;
step 208, outputting the predicted responsibility confirmation result of the target accident vehicle in the target traffic accident.
In the present embodiment, for a traffic accident that has occurred (referred to as the target traffic accident), the image data of the accident scene of the target traffic accident may first be acquired by the above-described service execution side electronic device.
In practical applications, the user of the service execution side may bring a camera-equipped user electronic device to the accident scene, photograph the accident scene through that device, and upload the photographed image data to the service execution side electronic device; alternatively, the user may connect the user electronic device to the service execution side electronic device so that the latter can read the photographed image data of the accident scene from a storage medium of the user electronic device.
Or, the electronic device on the service execution side may send a notification message to a camera device for monitoring installed near the accident site of the traffic accident, so as to trigger the camera device to shoot the accident site of the traffic accident, and upload the image data of the accident site of the traffic accident obtained by shooting to the electronic device on the service execution side.
In one embodiment, the image data of the accident scene of the traffic accident may include: vehicle image data, that is, image data containing an accident vehicle in the traffic accident (in particular, the damaged part of the accident vehicle); vehicle driving track image data, that is, image data containing the driving track of an accident vehicle in the traffic accident; road surface image data, that is, image data containing the road surface condition at the accident scene; and traffic light image data, that is, image data containing the traffic light condition at the accident scene.
In this embodiment, after acquiring the image data of the accident scene of the target traffic accident, the service execution side electronic device may extract, based on the image data, data related to the responsibility determination performed for one accident vehicle in the target traffic accident (referred to as the target accident vehicle) as feature data.
In one illustrated embodiment, in a first aspect, the service execution side electronic device may determine the damaged part of the target accident vehicle based on the vehicle image data in the image data, and use the damaged part as feature data.
In a second aspect, the service execution side electronic device may determine the driving direction of the target accident vehicle before the target traffic accident occurred based on the vehicle driving track image data in the image data, and use the driving direction as feature data.
For example, if the driving track of the target accident vehicle determined from the vehicle driving track image data indicates a left turn, it may be further determined that the driving direction of the target accident vehicle was a left turn.
In a third aspect, the service execution side electronic device may determine the road surface condition at the accident scene of the target traffic accident based on the road surface image data in the image data, and likewise use the road surface condition as feature data.
Specifically, the road surface condition may include: whether the target accident vehicle crossed a traffic marking at the accident scene of the target traffic accident; and whether the driving direction of the target accident vehicle matched the traffic sign corresponding to the driving lane in which the target accident vehicle was located at the accident scene.
For example, if it is determined based on the road surface image data that the driving direction of the target accident vehicle was a left turn, but the traffic sign corresponding to the driving lane in which the target accident vehicle was located is straight-ahead only, it may be further determined that the driving direction of the target accident vehicle does not match the traffic sign corresponding to its driving lane, that is, the target accident vehicle was driving in the wrong lane.
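The lane-matching check in the example above reduces to a direct comparison between the vehicle's driving direction and the sign of the lane it occupied. A minimal sketch, with direction labels assumed for illustration only:

```python
def direction_matches_lane_sign(driving_direction: str, lane_sign: str) -> bool:
    """Return True if the vehicle's driving direction matches the traffic sign
    of the lane it occupied; e.g. a left-turning vehicle in a straight-only
    lane does not match, meaning it was driving in the wrong lane."""
    return driving_direction == lane_sign

# The example from the text: the vehicle turned left in a straight-only lane.
print(direction_matches_lane_sign("left_turn", "straight"))  # False
```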
In a fourth aspect, the service execution side electronic device may determine the traffic light condition at the accident scene of the target traffic accident based on the traffic light image data in the image data, and use the traffic light condition as feature data.
Specifically, the traffic light condition may be the display state of the traffic light at the accident scene, that is, the traffic signal indicated by the traffic light at the time the target traffic accident occurred.
In practical applications, the service execution side electronic device may also determine vehicle information, such as the vehicle type and model of the target accident vehicle, based on the vehicle image data in the image data, and use this vehicle information as feature data.
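Taken together, the aspects above yield one piece of feature data per target accident vehicle. One possible numeric encoding is sketched below; the category names and codes are assumptions for illustration, not prescribed by the specification:

```python
# Hypothetical categorical encodings for the extracted attributes.
DAMAGED_PARTS = {"front": 0, "rear": 1, "left": 2, "right": 3}
DIRECTIONS = {"straight": 0, "left_turn": 1, "right_turn": 2}
LIGHTS = {"green": 0, "yellow": 1, "red": 2}

def build_feature_vector(damaged_part, direction, crossed_marking,
                         lane_sign_matched, light):
    """Assemble the feature data for one target accident vehicle."""
    return [
        DAMAGED_PARTS[damaged_part],
        DIRECTIONS[direction],
        int(crossed_marking),    # road surface condition: traffic marking crossed?
        int(lane_sign_matched),  # road surface condition: lane sign matched?
        LIGHTS[light],           # traffic signal shown when the accident occurred
    ]

print(build_feature_vector("front", "left_turn", False, False, "red"))
# [0, 1, 0, 0, 2]
```

A real system might use richer encodings (e.g. one-hot vectors or learned embeddings); the fixed-length vector here only illustrates the idea of turning the extracted attributes into model input.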
In this embodiment, after the feature data is extracted, the service execution side electronic device may input the feature data to a prediction model trained in advance, so that the prediction model predicts a result of responsibility confirmation of the target accident vehicle in the target traffic accident based on the feature data.
It should be noted that the prediction model may be a machine learning model trained on a plurality of feature data samples labelled with responsibility confirmation results.
In practical applications, the machine learning model may be a binary classification model, and the responsibility determination result may be responsible or not responsible. That is, the responsibility determination result of the target accident vehicle in the target traffic accident predicted by the prediction model based on the feature data may be responsible or not responsible.
Alternatively, the machine learning model may be a multi-classification model (e.g., a deep neural network model), and the responsibility confirmation result may be one of the following: full responsibility, principal responsibility, secondary responsibility, equal responsibility, and no responsibility. That is, the responsibility confirmation result of the target accident vehicle in the target traffic accident predicted by the prediction model based on the feature data may be one of: full responsibility, principal responsibility, secondary responsibility, equal responsibility, and no responsibility.
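As a purely illustrative stand-in for such a multi-classification model, the toy nearest-neighbour predictor below shows the input/output contract: feature data in, one of the five responsibility labels out. A production system would use a trained model such as the deep neural network mentioned above, and the label strings here are assumed translations:

```python
# Toy 1-nearest-neighbour multi-class predictor (illustrative only).

LABELS = ["full responsibility", "principal responsibility",
          "secondary responsibility", "equal responsibility", "no responsibility"]

def predict(samples, labels, features):
    """Return the label of the training sample closest to `features`
    (squared Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best = min(range(len(samples)), key=lambda i: dist(samples[i], features))
    return labels[best]

# Two labelled feature data samples (hypothetical values).
samples = [[0, 1, 0, 0, 2], [1, 0, 1, 1, 0]]
labels = ["full responsibility", "no responsibility"]
print(predict(samples, labels, [0, 1, 0, 1, 2]))  # full responsibility
```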
In this embodiment, after the responsibility confirmation result of the target accident vehicle in the target traffic accident is predicted by the prediction model, the service execution side electronic device may output the responsibility confirmation result to execute the corresponding service process based on the responsibility confirmation result.
In practical applications, on the one hand, the service execution side electronic device may directly perform business processing such as responsibility pursuit or claim settlement based on the output responsibility confirmation result and a business processing policy preset by the user of the service execution side. On the other hand, the electronic device may output the predicted responsibility confirmation result to a display screen, that is, display the result on the screen for the user of the service execution side to view, so that the user can perform business processing such as responsibility pursuit or claim settlement according to the responsibility confirmation result.
The process of training the machine learning model to obtain the above-mentioned predictive model is described below.
The step of training the machine learning model may be performed by the electronic device of the service executing party, or may be performed by another electronic device, and the user of the service executing party may transfer the trained prediction model to the electronic device of the service executing party, so that the electronic device of the service executing party may perform the prediction of the responsibility confirmation result through the prediction model.
In practical application, an appropriate number of feature data samples (the number can be set by the user of the service executing party) can be obtained from historical traffic accident records on file; each feature data sample may include the feature data of an accident vehicle in a historical traffic accident.
The data types of the feature data samples used to train the machine learning model must match the data types of the feature data that the prediction model later uses to predict responsibility confirmation results.
For example, if the feature data samples used in training include four types of data (the damaged portion and traveling direction of the accident vehicle, and the road surface condition and traffic light condition at the accident site), then the feature data used by the prediction model should also include the same four types of data for the target accident vehicle and the target traffic accident.
In another example, if the feature data samples used in training include only two types of data (the damaged portion and traveling direction of the accident vehicle), the feature data used for prediction should likewise include only those two types of data.
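The constraint above (prediction-time feature types must match training-time feature types) can be enforced with a simple schema check. The feature names and record format are assumptions for illustration.

```python
# Feature types fixed at training time; the two-type example from the text.
TRAINING_FEATURES = ("damaged_portion", "traveling_direction")

def build_feature_vector(record):
    """Extract exactly the training-time feature types, in a fixed order.

    Raises ValueError if any training-time feature type is absent, so
    mismatched inputs are rejected before reaching the prediction model.
    """
    missing = [k for k in TRAINING_FEATURES if k not in record]
    if missing:
        raise ValueError(f"missing feature types: {missing}")
    return [record[k] for k in TRAINING_FEATURES]

vec = build_feature_vector({"damaged_portion": "front-left",
                            "traveling_direction": "straight",
                            "extra": "ignored"})  # extra keys are dropped
```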
After the feature data samples are obtained, a corresponding responsibility confirmation result can be labeled for each sample. For example, if a feature data sample includes the feature data of accident vehicle A in historical traffic accident A, the label for that sample is the responsibility confirmation result of accident vehicle A in historical traffic accident A.
Subsequently, the feature data samples labeled with responsibility confirmation results can be input into a machine learning model preset by the user of the service executing party for computation, and the model parameters can be adjusted according to the computed results so as to reduce the value of the model's loss function. When the loss falls below an expected threshold (which can be set by the user of the service executing party), the machine learning model can be considered trained; the trained model can then serve as the prediction model for predicting responsibility confirmation results.
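The training loop just described can be sketched with a logistic-regression stand-in for the machine learning model (the patent leaves the model type open). The toy features, learning rate, and threshold value are assumptions for illustration.

```python
import math

def train(samples, lr=0.5, threshold=0.3, max_epochs=2000):
    """samples: list of (feature_vector, label), label 1=responsible, 0=not.

    Adjusts parameters to reduce the log loss; stops when the mean loss
    falls below the expected threshold, mirroring the text's stopping rule.
    """
    w = [0.0] * len(samples[0][0])
    b = 0.0
    for _ in range(max_epochs):
        loss = 0.0
        for x, y in samples:
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))          # predicted probability
            loss += -(y * math.log(p + 1e-12) + (1 - y) * math.log(1 - p + 1e-12))
            g = p - y                                # gradient of the log loss
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
        if loss / len(samples) < threshold:          # expected threshold reached
            break
    return w, b

# Toy labeled samples: [damage_severity, ran_red_light] -> responsible?
model = train([([0.9, 1.0], 1), ([0.1, 0.0], 0),
               ([0.8, 1.0], 1), ([0.2, 0.0], 0)])
```

In practice the binary or multi-class classifier, loss function, and optimizer would be chosen by the service executing party; this sketch only shows the adjust-until-threshold loop.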
In the technical scheme described above, the responsibility confirmation result of an accident vehicle in a traffic accident can be predicted from image data of the accident scene, the predicted result can be output, and the corresponding service processing can be executed based on it. Responsibility confirmation for the accident vehicle is thus performed automatically rather than manually, which improves the efficiency of traffic accident responsibility confirmation and facilitates the subsequent service processing performed according to the result.
Corresponding to the embodiment of the traffic accident handling method, the specification also provides an embodiment of a traffic accident handling device.
The embodiment of the traffic accident handling apparatus can be applied to an electronic device. The apparatus embodiment may be implemented by software, by hardware, or by a combination of the two. Taking a software implementation as an example, the apparatus, as a logical device, is formed by the processor of the electronic device where it is located reading corresponding computer program instructions from nonvolatile storage into memory for execution. In terms of hardware, fig. 3 shows a hardware structure diagram of the electronic device where the traffic accident handling apparatus of this specification is located; besides the processor, memory, network interface, and nonvolatile storage shown in fig. 3, the electronic device may further include other hardware according to the actual functions of traffic accident handling, which is not described again here.
Referring to fig. 4, fig. 4 is a block diagram of a traffic accident handling apparatus according to an exemplary embodiment of the present disclosure. The traffic accident handling apparatus 40 may be applied to the electronic device shown in fig. 3, and includes:
an acquisition module 401, which acquires image data of an accident scene of a target traffic accident;
an extraction module 402 that extracts feature data based on the image data; wherein the feature data is data related to the responsibility determination performed on a target accident vehicle in the target traffic accident;
a prediction module 403 that inputs the feature data into a prediction model so that the prediction model predicts a responsibility confirmation result of the target accident vehicle in the target traffic accident based on the feature data; wherein the prediction model is a machine learning model trained on a plurality of feature data samples labeled with responsibility confirmation results;
an output module 404 for outputting the predicted responsibility confirmation result of the target accident vehicle in the target traffic accident.
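The four modules (401 to 404) above form a simple pipeline. The following sketch wires them together; the module interfaces, class name, and placeholder implementations are assumptions for illustration, not the patent's apparatus.

```python
class TrafficAccidentHandler:
    """Pipeline over the four modules: acquire -> extract -> predict -> output."""

    def __init__(self, acquire, extract, predict, output):
        self.acquire, self.extract = acquire, extract
        self.predict, self.output = predict, output

    def handle(self, accident_id):
        image_data = self.acquire(accident_id)   # acquisition module 401
        features = self.extract(image_data)      # extraction module 402
        result = self.predict(features)          # prediction module 403
        return self.output(result)               # output module 404

# Placeholder callables standing in for the real module implementations.
handler = TrafficAccidentHandler(
    acquire=lambda aid: {"images": [f"{aid}.jpg"]},
    extract=lambda img: ["front-left", "straight"],
    predict=lambda feats: "secondary responsibility",
    output=lambda r: f"result: {r}",
)
print(handler.handle("accident-001"))  # result: secondary responsibility
```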
In the present embodiment, the image data includes: vehicle image data, vehicle driving track image data, road surface image data, and traffic light image data.
In this embodiment, the acquisition module 401:
acquires accident scene image data of the target traffic accident sent by a client, wherein the image data is captured by a camera invoked by the client.
In this embodiment, the extraction module 402:
determines, based on the image data, the damaged portion and traveling direction of the target accident vehicle, and the road surface condition and traffic light condition at the accident scene of the target traffic accident;
and determines the damaged portion and traveling direction of the target accident vehicle, together with the road surface condition and traffic light condition at the accident scene of the target traffic accident, as the feature data.
In this embodiment, the responsibility confirmation result is responsible or not responsible, and the machine learning model is a binary classification model;
or, the responsibility confirmation result is one of the following: full responsibility, principal responsibility, secondary responsibility, equal responsibility, or no responsibility, and the machine learning model is a multi-class classification model.
For details of how the functions and roles of each module in the above apparatus are implemented, refer to the implementation of the corresponding steps in the above method; they are not repeated here.
Since the apparatus embodiments substantially correspond to the method embodiments, refer to the partial description of the method embodiments for relevant details. The apparatus embodiments described above are merely illustrative: the modules described as separate parts may or may not be physically separate, and the parts shown as modules may or may not be physical modules; they may be located in one place or distributed across a plurality of network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the solution in this specification. One of ordinary skill in the art can understand and implement it without inventive effort.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. A typical implementation device is a computer, which may take the form of a personal computer, laptop computer, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email messaging device, game console, tablet computer, wearable device, or a combination of any of these devices.
In a typical configuration, a computer includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, quantum memory, graphene-based storage media, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed, or elements inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a(n) …" does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The terminology used in the description of the one or more embodiments is for the purpose of describing the particular embodiments only and is not intended to be limiting of the description of the one or more embodiments. As used in one or more embodiments of the present specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in one or more embodiments of this specification to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of one or more embodiments herein. The word "if" as used herein may be interpreted as "when," "upon," or "in response to determining," depending on the context.
The above description is only for the purpose of illustrating the preferred embodiments of the one or more embodiments of the present disclosure, and is not intended to limit the scope of the one or more embodiments of the present disclosure, and any modifications, equivalent substitutions, improvements, etc. made within the spirit and principle of the one or more embodiments of the present disclosure should be included in the scope of the one or more embodiments of the present disclosure.

Claims (12)

1. A traffic accident handling method, applied to a server, the method comprising:
acquiring image data of an accident scene of a target traffic accident;
extracting feature data based on the image data; wherein the feature data is data related to the responsibility determination performed on a target accident vehicle in the target traffic accident;
inputting the feature data into a prediction model to predict a responsibility confirmation result of the target accident vehicle in the target traffic accident based on the feature data by the prediction model; wherein the prediction model is a machine learning model trained on a plurality of feature data samples labeled with responsibility confirmation results;
and outputting the predicted responsibility confirmation result of the target accident vehicle in the target traffic accident.
2. The method of claim 1, wherein the image data comprises: vehicle image data, vehicle driving track image data, road surface image data, and traffic light image data.
3. The method of claim 2, wherein the acquiring image data of an accident scene of a target traffic accident comprises:
acquiring accident scene image data of the target traffic accident sent by a client, wherein the image data is captured by a camera invoked by the client.
4. The method of claim 2, wherein the extracting feature data based on the image data comprises:
determining, based on the image data, a damaged portion and a traveling direction of the target accident vehicle, and a road surface condition and a traffic light condition at the accident scene of the target traffic accident; and
determining the damaged portion and the traveling direction of the target accident vehicle, and the road surface condition and the traffic light condition at the accident scene of the target traffic accident, as the feature data.
5. The method of claim 1, wherein the responsibility confirmation result is responsible or not responsible, and the machine learning model is a binary classification model;
or the responsibility confirmation result is one of the following: full responsibility, principal responsibility, secondary responsibility, equal responsibility, or no responsibility, and the machine learning model is a multi-class classification model.
6. A traffic accident handling apparatus, the apparatus comprising:
an acquisition module that acquires image data of an accident scene of a target traffic accident;
an extraction module that extracts feature data based on the image data; wherein the feature data is data related to the responsibility determination performed on a target accident vehicle in the target traffic accident;
a prediction module that inputs the feature data into a prediction model to predict a responsibility confirmation result of the target accident vehicle in the target traffic accident based on the feature data by the prediction model; wherein the prediction model is a machine learning model trained on a plurality of feature data samples labeled with responsibility confirmation results;
and an output module that outputs the predicted responsibility confirmation result of the target accident vehicle in the target traffic accident.
7. The apparatus of claim 6, wherein the image data comprises: vehicle image data, vehicle driving track image data, road surface image data, and traffic light image data.
8. The apparatus of claim 7, wherein the acquisition module:
acquires accident scene image data of the target traffic accident sent by a client, wherein the image data is captured by a camera invoked by the client.
9. The apparatus of claim 7, wherein the extraction module:
determines, based on the image data, a damaged portion and a traveling direction of the target accident vehicle, and a road surface condition and a traffic light condition at the accident scene of the target traffic accident; and
determines the damaged portion and the traveling direction of the target accident vehicle, and the road surface condition and the traffic light condition at the accident scene of the target traffic accident, as the feature data.
10. The apparatus of claim 6, wherein the responsibility confirmation result is responsible or not responsible, and the machine learning model is a binary classification model;
or the responsibility confirmation result is one of the following: full responsibility, principal responsibility, secondary responsibility, equal responsibility, or no responsibility, and the machine learning model is a multi-class classification model.
11. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor implements the method of any one of claims 1 to 5 by executing the executable instructions.
12. A computer readable storage medium having stored thereon computer instructions which, when executed by a processor, carry out the steps of the method according to any one of claims 1 to 5.
CN201911227447.1A 2019-12-04 2019-12-04 Traffic accident processing method and device and electronic equipment Pending CN111046212A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911227447.1A CN111046212A (en) 2019-12-04 2019-12-04 Traffic accident processing method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN111046212A true CN111046212A (en) 2020-04-21

Family

ID=70234614

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911227447.1A Pending CN111046212A (en) 2019-12-04 2019-12-04 Traffic accident processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN111046212A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111814625A (en) * 2020-06-29 2020-10-23 广东荣文科技集团有限公司 Intelligent traffic accident handling method and related device
CN112241974A (en) * 2020-05-29 2021-01-19 北京新能源汽车技术创新中心有限公司 Traffic accident detection method, processing method, system and storage medium
CN112749661A (en) * 2021-01-14 2021-05-04 金陵科技学院 Traffic accident responsibility judging model based on block chain and IVggNet
WO2021139347A1 (en) * 2020-05-21 2021-07-15 平安科技(深圳)有限公司 Artificial intelligence open platform and method for intelligent transportation, and medium and electronic device
CN113538193A (en) * 2021-06-30 2021-10-22 东莞市绿灯网络科技有限公司 Traffic accident handling method and system based on artificial intelligence and computer vision

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018186625A1 (en) * 2017-04-06 2018-10-11 삼성전자주식회사 Electronic device, warning message providing method therefor, and non-transitory computer-readable recording medium
CN109919140A (en) * 2019-04-02 2019-06-21 浙江科技学院 Vehicle collision accident responsibility automatic judging method, system, equipment and storage medium
CN109961056A (en) * 2019-04-02 2019-07-02 浙江科技学院 Traffic accident responsibility identification, system and equipment based on decision Tree algorithms


Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021139347A1 (en) * 2020-05-21 2021-07-15 平安科技(深圳)有限公司 Artificial intelligence open platform and method for intelligent transportation, and medium and electronic device
CN112241974A (en) * 2020-05-29 2021-01-19 北京新能源汽车技术创新中心有限公司 Traffic accident detection method, processing method, system and storage medium
CN112241974B (en) * 2020-05-29 2024-05-10 北京国家新能源汽车技术创新中心有限公司 Traffic accident detection method, processing method, system and storage medium
CN111814625A (en) * 2020-06-29 2020-10-23 广东荣文科技集团有限公司 Intelligent traffic accident handling method and related device
CN111814625B (en) * 2020-06-29 2024-04-19 广东荣文科技集团有限公司 Intelligent traffic accident handling method and related device
CN112749661A (en) * 2021-01-14 2021-05-04 金陵科技学院 Traffic accident responsibility judging model based on block chain and IVggNet
CN113538193A (en) * 2021-06-30 2021-10-22 东莞市绿灯网络科技有限公司 Traffic accident handling method and system based on artificial intelligence and computer vision

Similar Documents

Publication Publication Date Title
CN111046212A (en) Traffic accident processing method and device and electronic equipment
JP6873237B2 (en) Image-based vehicle damage assessment methods, equipment, and systems, as well as electronic devices
JP6859505B2 (en) Image-based vehicle damage determination methods, devices and electronic devices
US20190213804A1 (en) Picture-based vehicle loss assessment
CN110033386B (en) Vehicle accident identification method and device and electronic equipment
WO2018191421A1 (en) Image-based vehicle damage determining method, apparatus, and electronic device
CN112329702B (en) Method and device for rapid face density prediction and face detection, electronic equipment and storage medium
CN110348392B (en) Vehicle matching method and device
CN109756760A (en) Generation method, device and the server of video tab
CN112633255B (en) Target detection method, device and equipment
CN109102026B (en) Vehicle image detection method, device and system
CN114663871A (en) Image recognition method, training method, device, system and storage medium
CN111929688B (en) Method and equipment for determining radar echo prediction frame sequence
US9805272B1 (en) Storage system of original frame of monitor data and storage method thereof
CN111161533B (en) Traffic accident processing method and device and electronic equipment
CN110223320B (en) Object detection tracking method and detection tracking device
Pang et al. Real-time detection of road manhole covers with a deep learning model
CN116580390A (en) Price tag content acquisition method, price tag content acquisition device, storage medium and computer equipment
US20110161443A1 (en) Data management systems and methods for mobile devices
EP4332910A1 (en) Behavior detection method, electronic device, and computer readable storage medium
US11720969B2 (en) Detecting vehicle identity and damage status using single video analysis
Singh et al. Evaluating the Performance of Ensembled YOLOv8 Variants in Smart Parking Applications for Vehicle Detection and License Plate Recognition under Varying Lighting Conditions
CN111047861A (en) Traffic accident processing method and device and electronic equipment
US20240233327A1 (en) Method and system for training a machine learning model with a subclass of one or more predefined classes of visual objects
CN117312598B (en) Evidence obtaining method, device, computer equipment and storage medium for fee evasion auditing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20211229

Address after: Room 610, floor 6, No. 618, Wai Road, Huangpu District, Shanghai 200010

Applicant after: Ant Shengxin (Shanghai) Information Technology Co.,Ltd.

Address before: 310000 801-11 section B, 8th floor, 556 Xixi Road, Xihu District, Hangzhou City, Zhejiang Province

Applicant before: Alipay (Hangzhou) Information Technology Co.,Ltd.