WO2022142319A1 - False insurance claim report processing method and apparatus, and computer device and storage medium - Google Patents

False insurance claim report processing method and apparatus, and computer device and storage medium

Info

Publication number
WO2022142319A1
Authority
WO
WIPO (PCT)
Prior art keywords
evaluation value
user
scene
accident
information
Prior art date
Application number
PCT/CN2021/109443
Other languages
French (fr)
Chinese (zh)
Inventor
徐财应
Original Assignee
深圳壹账通智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳壹账通智能科技有限公司
Publication of WO2022142319A1 publication Critical patent/WO2022142319A1/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/70: Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F 16/78: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/783: Retrieval characterised by using metadata automatically derived from the content
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/70: Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F 16/78: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/787: Retrieval characterised by using metadata using geographical or spatial information, e.g. location
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 40/00: Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q 40/08: Insurance

Definitions

  • the present application relates to the technical field of artificial intelligence, and in particular, to a method, device, computer equipment and storage medium for processing false insurance reports.
  • Auto insurance, a branch of property insurance, is one of the insurance products most widely purchased by the general public: it has a broad audience, a low entry threshold and a high incident rate.
  • surveyors or loss assessors can only take pictures and manually collect evidence at the accident site.
  • The inventor realized that policyholders may file false reports or exaggerate losses, and that organized groups may even deliberately stage fake cases, causing insurance company surveyors and loss assessors to misreport or miss cases; a large number of false cases eventually appear, so the false case rate is high.
  • Embodiments of the present application provide a method, device, computer equipment, and storage medium for processing false insurance reports, so as to solve the problem of a high false case rate.
  • a method for handling false insurance reports including:
  • the accident scene video includes the oral audio of the accident details entered by the user and the accident scene image, and the accident scene image includes the user's facial image;
  • acquiring the dialogue data fed back by the user terminal, the dialogue data being the answers the user gives to preset questions, and acquiring the user's position when the user answers the preset questions;
  • determining, according to the facial expression evaluation value, the sound evaluation value, the scene evaluation value, the lying evaluation value and the position evaluation value, whether the user has made a false report, and storing the determination result in association with the certificate information.
  • a device for processing false insurance reports comprising:
  • a response module used to respond to an insurance report request triggered by a user, so as to send a material submission request to the client;
  • the first acquisition module is configured to acquire the certificate information and the accident scene video uploaded by the user according to the material submission request, the accident scene video containing the oral audio of the accident details entered by the user and the accident scene image, and the accident scene image including an image of the user's face;
  • the second acquisition module is configured to acquire the dialogue data fed back by the user terminal, the dialogue data being the answers the user gives to the preset questions, and to acquire the user's position when the user answers the preset questions;
  • the first analysis module is used to analyze the facial image of the user, to obtain the change of the user's facial micro-expression, and obtain a facial expression evaluation value according to the change of the facial micro-expression;
  • a second analysis module configured to analyze the oral audio of the accident details to extract the accident details and the change of the user's voice
  • a third analysis module configured to analyze the sound change to obtain a sound evaluation value, and analyze the accident detail information to obtain a scene evaluation value;
  • a third obtaining module configured to obtain a lying evaluation value according to the accident detail information and the dialogue data
  • a fourth acquiring module configured to acquire a location evaluation value according to the accident detail information and the location of the user
  • the storage module is configured to store the judgment result in association with the certificate information.
  • a computer device comprising a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, wherein the processor implements the following steps when executing the computer-readable instructions:
  • the accident scene video includes the oral audio of the accident details entered by the user and the accident scene image, and the accident scene image includes the user's facial image;
  • acquiring the dialogue data fed back by the user terminal, the dialogue data being the answers the user gives to preset questions, and acquiring the user's position when the user answers the preset questions;
  • the accident scene video includes the oral audio of the accident details entered by the user and the accident scene image
  • the accident scene image includes the user's facial image
  • determining, according to the facial expression evaluation value, the sound evaluation value, the scene evaluation value, the lying evaluation value and the position evaluation value, whether the user has made a false report, and storing the determination result in association with the certificate information.
  • FIG. 1 is a schematic diagram of a system framework of a method for processing false insurance reports in an embodiment of the present application
  • FIG. 7 is a schematic structural diagram of a computer device in an embodiment of the present application.
  • users who have purchased the corresponding insurance can trigger an insurance report request through the insurance official application.
  • an insurance report request can be triggered through the insurance official application installed on the client.
  • the user terminal may refer to a terminal device such as a user's mobile phone, which is not limited here.
  • After receiving the insurance report request, the report processing terminal responds to it, specifically by sending a material submission request to the user terminal.
  • The material submission request carries instruction information, which instructs the user to upload the user's certificate information and an accident scene video according to a preset operation.
  • the user can upload the certificate information and accident scene video according to the instructions of the material submission request.
  • The instruction information also specifies the content that must appear when recording the accident scene video, including the user's facial image; that is, the user must appear in the recorded accident scene video so that it can be analyzed further later.
  • the accident scene video includes the accident details oral audio and accident scene images entered by the user.
  • the accident details oral audio refers to the audio about the current accident details entered by the user according to the instructions of the material submission request.
  • the circumstances include but are not limited to the time of the accident, the location of the accident, the scene of the accident, the persons involved, the loss, etc.
  • the material submission request can be configured according to the subsequent false determination requirements, so that the user can enter the required accident details oral audio according to the requirements.
  • The accident scene image refers to the image the user shoots of the accident scene and of himself or herself. For example, in a vehicle collision accident, the people, vehicles and collision location at the scene need to be recorded on video so that the accident scene video contains images of the accident scene.
  • the certificate information refers to the information used to indicate the above-mentioned unique identity information of the user, for example, it may refer to an ID card, a driver's license, a social security card, and the like.
  • the accident scene video includes the oral audio of the accident details entered by the user and the accident scene image, and the accident scene image includes the user's facial image.
  • the certificate information and accident scene video uploaded by the user through the user terminal can be obtained.
  • The dialogue data is the data containing the user's answers to the preset questions.
  • the human-machine dialogue system is triggered to access the user terminal, so as to conduct human-machine dialogue with the user through the user terminal.
  • Preset questions are configured in advance, and the man-machine dialogue system asks the user these preset questions to obtain the user's answers, thereby obtaining the dialogue data in which the user answers the preset questions and feeding it back to the report processing terminal.
  • The preset questions can come from a preset template, for example directly asking the user for information such as the location, time and scene of the incident; or they can be generated from the accident detail information obtained by parsing the oral audio of the accident details entered by the user. There is no limitation here, as long as the questions have reference value for the subsequent determination of false cases.
  • the accident scene video contains the audio of the accident details entered by the user and the accident scene image
  • the accident scene image also includes the user's facial image.
  • Obtain the change of the user's facial micro-expression and obtain the facial expression evaluation value according to the change of the facial micro-expression. That is to say, the facial expressions of the user when recording the video of the accident scene are evaluated to determine the authenticity of the user's oral audio.
  • Specifically, the report processing terminal uses micro-expression processing technology to analyze the facial image and obtain the user's micro-expressions, and the recognized micro-expressions are evaluated against scores assigned in advance by anti-fraud experts.
  • The anti-fraud experts pre-score the 30 known micro-expressions (system presets that depend on the micro-expression processing technology and are not detailed here).
  • The micro-expressions recognized for the user therefore map to corresponding scores, that is, the facial expression evaluation value, as in the sketch below.
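  • The following is a minimal sketch of how recognized micro-expressions could be mapped to a facial expression evaluation value; the label names and expert scores in the table are hypothetical placeholders, not the 30 presets referred to above.

```python
# Hypothetical preset scores assigned by anti-fraud experts to recognized
# micro-expression labels (illustrative values, not the patent's presets).
PRESET_EXPRESSION_SCORES = {
    "lip_press": 6.0,
    "gaze_aversion": 7.0,
    "brow_raise": 3.0,
    "neutral": 1.0,
}

def facial_expression_evaluation(recognized_expressions):
    """Average the preset scores of the micro-expressions recognized in the
    accident scene video; unknown labels fall back to a neutral score."""
    if not recognized_expressions:
        return 0.0
    scores = [PRESET_EXPRESSION_SCORES.get(label, 1.0) for label in recognized_expressions]
    return sum(scores) / len(scores)

print(facial_expression_evaluation(["lip_press", "gaze_aversion"]))  # 6.5
```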
  • S50 Analyze the oral audio of the accident details to extract the accident details and the changes of the user's voice.
  • the accident scene video contains the oral audio of the accident details entered by the user.
  • the corresponding oral audio of the accident details can be extracted through the audio analysis technology.
  • Accident details and changes in the user's voice can be extracted.
  • the accident details information is related to the above-mentioned user recording the scene accident video and dictating the accident details.
  • As noted above, the oral audio of the accident details is the audio about the current accident details that the user records according to the instructions in the material submission request, where the details include but are not limited to the time of the accident, the location of the accident, the scene of the accident, the persons involved, and the loss. By parsing this audio, accident detail information such as the time, location, scene, persons involved and loss can therefore be extracted.
  • Specifically, the content spoken by the user in the oral audio can be taken, the keywords in that content can be identified with Chinese word segmentation technology, and the accident detail information can then be obtained, as in the sketch below.
  • Sound changes refer to the audio changes in the oral audio of the accident details, reflecting the sound changes during the user's dictation.
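  • The following is a minimal sketch of keyword extraction from a transcript of the oral audio; it assumes the audio has already been transcribed to Chinese text and that the jieba segmentation library is available, and the keyword rules are illustrative only.

```python
import re

import jieba  # assumed third-party Chinese word segmentation library

# Illustrative suffixes used to pick out location-like tokens.
LOCATION_SUFFIXES = ("路", "街", "高速", "村", "市")

def extract_accident_details(transcript: str) -> dict:
    """Segment the transcribed dictation and pull out rough accident details."""
    tokens = jieba.lcut(transcript)
    return {
        # e.g. "2020年5月3日17点" style time expressions
        "time": re.findall(r"\d{4}年\d{1,2}月\d{1,2}日\d{1,2}[点时]", transcript),
        "location": [t for t in tokens if t.endswith(LOCATION_SUFFIXES)],
    }
```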
  • S60 Analyze the sound change to obtain the sound evaluation value, and analyze the accident detail information to obtain the scene evaluation value.
  • Sound change can, to a certain extent, reflect the user's psychological state, so a sound evaluation value can be obtained by analyzing the detected sound change. If the sound change suggests a higher likelihood that the user's spoken content is false, the corresponding sound evaluation value is higher; if it suggests a lower likelihood, the sound evaluation value is lower. In terms of implementation, a sound evaluation algorithm can analyze the change of the sound to estimate the likelihood that the spoken content is false and thereby obtain the sound evaluation value, which is not described in detail here; one possible heuristic is sketched below.
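  • The patent does not specify the sound evaluation algorithm; the following is a minimal heuristic sketch that assumes larger frame-to-frame loudness swings during the dictation indicate a higher likelihood of falsehood.

```python
import numpy as np

def sound_evaluation(waveform: np.ndarray, frame_len: int = 2048) -> float:
    """Map the variability of per-frame RMS loudness onto a 0-10 score."""
    if len(waveform) < 2 * frame_len:
        return 0.0
    frames = [waveform[i:i + frame_len] for i in range(0, len(waveform) - frame_len, frame_len)]
    rms = np.array([np.sqrt(np.mean(f ** 2)) for f in frames])
    variability = np.std(rms) / (np.mean(rms) + 1e-9)  # coefficient of variation
    return float(min(10.0, variability * 10.0))
```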
  • the accident detail information can be further analyzed to obtain a scene evaluation value, wherein the scene evaluation value is an evaluation value used to represent the authenticity of the current accident scene.
  • the accident details information includes the accident location and the accident scene information, as shown in FIG. 3 , in S60 , that is, analyzing the accident details information to obtain the scene evaluation value, which specifically includes the following steps:
  • S61: Use the incident location to obtain the surrounding environment information of the incident location.
  • The accident detail information may include the incident location and the incident scene information, where the incident location is the location where the accident occurred and the incident scene information describes the scene where the accident occurred. After the incident location is obtained, the surrounding environment information of that location can therefore be retrieved.
  • The surrounding environment information of the incident location includes information such as the buildings and road conditions around the incident location.
  • S62: Perform scene matching between the surrounding environment information and the incident scene information to obtain a scene matching degree.
  • S63 Obtain a scene evaluation value according to the scene matching degree, wherein the scene evaluation value is positively correlated with the scene matching degree.
  • For example, when dictating, the user may say that at location A a sheep ran onto the road and caused the vehicle to accidentally hit the roadside guardrail.
  • The surrounding environment information then shows whether there is a village near location A: if there is not, the scene matching degree is low and the case is likely to be fraudulent; if there is, the scene matching degree is high. Likewise, if the road condition information shows, for example, a pit in the road at the location, a traffic accident is more plausible and the scene matching degree is high; otherwise the scene matching degree is low.
  • The scene evaluation value is obtained according to the scene matching degree and is positively correlated with it: the higher the scene matching degree, the higher the scene evaluation value. Through this scene verification it can be determined whether the content spoken by the user may be false, which adds an effective basis for the subsequent false-report judgment and improves the accuracy of the scheme (a minimal sketch of the matching step follows).
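  • The following is a minimal sketch of the scene check; it assumes the surrounding environment information has already been reduced to a keyword set (for example, fetched from a map service), and the scaling to a 0-10 score is an assumption.

```python
def scene_matching_degree(surroundings: set, scene_keywords: set) -> float:
    """Fraction of dictated scene keywords confirmed by the surroundings."""
    if not scene_keywords:
        return 0.0
    return len(surroundings & scene_keywords) / len(scene_keywords)

def scene_evaluation(surroundings: set, scene_keywords: set) -> float:
    # Positively correlated with the matching degree, scaled to [0, 10].
    return 10.0 * scene_matching_degree(surroundings, scene_keywords)

print(scene_evaluation({"village", "guardrail", "two-lane road"},
                       {"village", "guardrail", "sheep"}))  # ~6.7
```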
  • the lying evaluation value is further obtained based on the accident details and dialogue data.
  • the lying evaluation value is also an evaluation value used to determine the false component of the user's oral content.
  • In this embodiment, the lying evaluation value is obtained with reference to the dialogue data as well, rather than being based only on the accident detail information.
  • the accident detail information includes the time of the accident, the location of the accident, and the scene of the accident.
  • In S70, that is, obtaining the lying evaluation value according to the accident detail information and the dialogue data, the following steps are included:
  • S71: Generate the preset questions from the incident time, the incident location and the incident scene information.
  • S72: Send the preset questions to the man-machine dialogue system, so that the man-machine dialogue system initiates dialogue processing with the user through the preset questions.
  • In this embodiment, the accident detail information is obtained by parsing the oral audio of the accident details, and the incident time, incident location and incident scene information are used to generate the preset questions. For example, if the incident occurred at 17:00 on May 3, 2020 in city A, and the incident scene information is that the vehicle collided with the vehicle in front and the front window shattered, the following preset questions can be generated: "Did your accident happen at 17:00 on May 3, 2020?"; "Is your accident location in city A or city B?"; "What happened in your accident: did you hit another car or were you hit by another car, and where is the damage?".
  • That is, the preset questions are generated from the user's own spoken content, which makes it easier to find logical inconsistencies in the user's subsequent answers, determine the false components of the spoken content, and thus assess the possibility of a false report.
  • The preset questions are sent to the man-machine dialogue system, so that the man-machine dialogue system initiates dialogue processing with the user through the preset questions.
  • Having a dedicated system execute the dialogue process reduces the processing burden on the report processing terminal and improves efficiency (a sketch of the question-generation step follows).
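  • The following is a minimal sketch of turning parsed accident details into preset questions; the wording mirrors the examples above, and how the questions are actually handed to the man-machine dialogue system is left open.

```python
def build_preset_questions(incident_time: str, incident_location: str, incident_scene: str) -> list:
    """Generate preset questions from the details parsed out of the dictation."""
    return [
        f"Did your accident happen at {incident_time}?",
        f"Is your accident location {incident_location}?",
        f"What happened in your accident, and where is the damage? (reported: {incident_scene})",
    ]

print(build_preset_questions("17:00 on May 3, 2020", "city A",
                             "collided with the vehicle in front, front window shattered"))
```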
  • S73: Receive, from the man-machine dialogue system, the dialogue data containing the user's answers to the preset questions; the dialogue data includes the incident time, incident location and incident scene information answered by the user.
  • Specifically, the man-machine dialogue system accesses the user terminal with the preset questions and executes the man-machine dialogue process.
  • the dialogue data fed back by the man-machine dialogue system is the dialogue data answered by the user according to the preset questions, and the dialogue data includes the incident time, the incident location and the incident scene information answered by the user.
  • S74: Match the incident time, incident location and incident scene information in the accident detail information, respectively, against the incident time, incident location and incident scene information answered by the user in the dialogue data, to obtain the answer information matching degree.
  • Through steps S71-S73, two sets of content are obtained: on the one hand, the incident time, incident location and incident scene information extracted from the oral audio of the accident details uploaded by the user; on the other hand, the incident time, incident location and incident scene information obtained again through the dialogue.
  • Therefore, the two sets are matched item by item to obtain the answer information matching degree. For example, if the incident locations do not match, the user's statements contradict each other and a false report is possible. In practical applications, more matching items can be configured and several rounds of dialogue can be combined to obtain the required answer information matching degree (see the sketch below).
  • What the incident scene information contains depends on the scenario to which this solution is applied; in the vehicle example above it may refer to the number of people in the vehicle, the damaged part of the vehicle, and so on, and is not specifically limited here.
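  • The following is a minimal sketch of the answer-matching step: the time, location and scene fields extracted from the dictated audio are compared field by field with the same fields answered in the dialogue, and the matching degree is the fraction of fields that agree. Exact string equality is an assumption; fuzzier comparisons could equally be used.

```python
def answer_matching_degree(details_from_audio: dict, details_from_dialogue: dict) -> float:
    """Fraction of the compared fields on which the two sources agree."""
    fields = ("time", "location", "scene")
    matched = sum(
        1 for f in fields
        if details_from_audio.get(f) and details_from_audio.get(f) == details_from_dialogue.get(f)
    )
    return matched / len(fields)

print(answer_matching_degree(
    {"time": "17:00 2020-05-03", "location": "city A", "scene": "rear-end collision"},
    {"time": "17:00 2020-05-03", "location": "city B", "scene": "rear-end collision"},
))  # ~0.67
```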
  • S76 Obtain a lying evaluation value according to the matching degree of the answer information and the pause duration, wherein the lying evaluation value is positively correlated with the answer information matching degree, and the lying evaluation value is negatively correlated with the pause duration.
  • For steps S75-S76: in addition to the answer information matching degree, the dialogue data is parsed further to obtain the pause duration before the user answers each preset question, and fraud is also judged from the length of these "pauses" in the conversation. Understandably, if the pause before answering a particular question is too long, the user is hesitating over that question and a false report is possible; this is why the pause duration for each preset question is obtained. Finally, the lying evaluation value is obtained from the answer information matching degree and the pause duration, being positively correlated with the answer information matching degree and negatively correlated with the pause duration.
  • Positive correlation with the answer information matching degree means that the higher the matching degree, the higher the lying evaluation value, and vice versa; negative correlation with the pause duration means that the longer the pause, the lower the lying evaluation value, and vice versa.
  • The final lying evaluation value may be the accumulation of the evaluation values corresponding to the answer information matching degree and the pause duration, as in the sketch below.
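  • The following is a minimal sketch of combining the two signals into a lying evaluation value, following the stated correlations (positive with the answer information matching degree, negative with the pause duration); the scaling constants and the equal weighting of the two components are assumptions.

```python
def lying_evaluation(answer_match: float, avg_pause_seconds: float) -> float:
    """Accumulate a match component and a pause component into one 0-10 score."""
    match_component = 10.0 * answer_match                  # higher match -> higher value
    pause_component = max(0.0, 10.0 - avg_pause_seconds)   # longer pauses -> lower value
    return min(10.0, 0.5 * match_component + 0.5 * pause_component)

print(lying_evaluation(answer_match=0.67, avg_pause_seconds=6.0))  # 5.35
```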
  • S80 Obtain a location evaluation value according to the accident detail information and the location of the user.
  • a location evaluation value needs to be further obtained according to the accident detailed information and the user's location, where the location evaluation value is an evaluation value used to characterize the accuracy of the accident occurrence.
  • the accident details information includes the location of the incident.
  • the location evaluation value is obtained according to the accident details information and the user's location, which specifically includes the following steps:
  • S81: Obtain, in advance, the user's authorization to query the location of the user terminal.
  • S82 Query the location of the user terminal from the operator corresponding to the user terminal according to the certificate information, so as to obtain the location of the user when the dialogue data is generated.
  • the user when reporting a case, the user will also upload certificate information, and the report processing end can query the location of the user terminal from the operator corresponding to the user terminal through the certificate information.
  • The operator corresponding to the user terminal first determines whether the report processing terminal is authorized.
  • If it is, the operator feeds back the corresponding terminal location according to the certificate information in response to the query request of the report processing terminal.
  • After the user's location is obtained, the queried location is compared with the incident location extracted by parsing the oral accident details, and the location evaluation value is obtained from the distance comparison result.
  • The location evaluation value is negatively correlated with the distance comparison result: the farther the user's location is from the incident location, the lower the location evaluation value, and vice versa (a minimal sketch follows).
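  • The following is a minimal sketch of the location check: the distance between the user's queried position and the dictated incident location is computed with the haversine formula, and the location evaluation value falls as that distance grows. The linear mapping from distance to score is an assumption.

```python
import math

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two latitude/longitude points, in km."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def location_evaluation(user_pos: tuple, incident_pos: tuple) -> float:
    # Negatively correlated with the distance between the two positions.
    distance = haversine_km(*user_pos, *incident_pos)
    return max(0.0, 10.0 - distance)
```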
  • S90 Determine whether the user reports falsely according to the facial expression evaluation value, the voice evaluation value, the scene evaluation value, the lying evaluation value, and the location evaluation value, and store the determination result in association with the certificate information.
  • the facial expression evaluation value, voice evaluation value, scene evaluation value, lying evaluation value and position evaluation value can be obtained.
  • The above facial expression evaluation value, sound evaluation value, scene evaluation value, lying evaluation value and location evaluation value are all determined from objective facts.
  • Whether the user has falsely reported the case is therefore determined according to the facial expression evaluation value, sound evaluation value, scene evaluation value, lying evaluation value and location evaluation value.
  • The determination result is stored in association with the certificate information, and the user's report processing result is finally obtained.
  • step S90 that is, according to the evaluation value of facial expression, the evaluation value of voice, the evaluation value of scene, the evaluation value of lying and the evaluation value of location, it is determined whether there is a false report of the user, which specifically includes the following steps:
  • the target evaluation value is calculated as totalScore = \sum_{i=1}^{5} a_i x_i, where totalScore represents the target evaluation value, a_i is the weight value corresponding to each evaluation value, and x_i is each evaluation value;
  • the scores of the above five dimensions lie in the range [0, 10]; each dimension's score is then assigned a corresponding weight a_i, with the weights of all dimensions summing to 1, and the final total score is calculated with the formula above.
  • The default weight values are (0.1, 0.1, 0.2, 0.3, 0.3). When totalScore is greater than 8, the risk is judged to be high; when totalScore lies in [5, 8), there is a certain risk; below 5, the risk is very low (see the sketch below).
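  • The following is a minimal sketch of the weighted total score with the default weights (0.1, 0.1, 0.2, 0.3, 0.3) and the risk bands quoted above; the order in which the five dimensions map to the weights is an assumption.

```python
# Assumed order: facial expression, sound, scene, lying, location.
DEFAULT_WEIGHTS = (0.1, 0.1, 0.2, 0.3, 0.3)

def total_score(scores, weights=DEFAULT_WEIGHTS) -> float:
    """totalScore = sum(a_i * x_i) over the five evaluation values."""
    return sum(a * x for a, x in zip(weights, scores))

def risk_band(total: float) -> str:
    if total > 8:
        return "high risk"
    if total >= 5:
        return "certain risk"
    return "very low risk"

print(risk_band(total_score((9, 8, 7, 9, 9))))  # high risk (totalScore = 8.5)
```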
  • the above weight value can be readjusted according to the final feedback risk conclusion so as to be closer to the final running result.
  • The rules for adjusting the weights are confirmed mainly by the final case conclusions. Specifically, once the number of cases reaches 2,000, if the cases in which the final case conclusion agrees with the conclusion given by the system's score account for less than 50%, the cases with inconsistent conclusions need to be reviewed.
  • Those cases are classified by conclusion to see which dimension's score affected the final score. For example, if the number of cases reaches 2,000 but only 800 of the final conclusions are consistent with the conclusions given by the system, i.e. only 40%, the remaining 1,200 cases are classified by conclusion and the count attributable to each dimension is tallied.
  • The adjusted weight value of each dimension is then calculated by dividing that dimension's count by the total of 1,200 inconsistent cases (a minimal sketch of this recalibration follows).
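  • The following is a minimal sketch of the weight recalibration rule described above: the inconsistent cases are bucketed by the dimension judged to have driven the wrong score, and each adjusted weight is that dimension's bucket count divided by the number of inconsistent cases. The bucket counts in the example are illustrative.

```python
def recalibrate_weights(dimension_counts: dict, inconsistent_total: int) -> dict:
    """New weight per dimension = its inconsistent-case count / total inconsistent cases."""
    return {dim: count / inconsistent_total for dim, count in dimension_counts.items()}

# e.g. 1,200 inconsistent cases out of 2,000 closed cases
print(recalibrate_weights(
    {"facial": 120, "sound": 180, "scene": 240, "lying": 300, "location": 360}, 1200))
```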
  • storing the determination result in association with the credential information refers to storing the determination result in association with the credential information on the blockchain of the blockchain system.
  • Storing the determination result and the certificate information in association on the blockchain of the blockchain system facilitates the traceability of false reporters and ensures that the report results cannot be tampered with (an illustrative sketch follows).
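  • The following is an illustrative toy sketch of associating the determination result with the certificate information and appending it to a hash-linked ledger; it stands in for whatever blockchain platform is actually used and is not a real blockchain client.

```python
import hashlib
import json
import time

class TinyLedger:
    """A toy hash-linked ledger; each record stores the previous record's hash."""

    def __init__(self):
        self.blocks = []

    def append(self, certificate_id: str, determination: str) -> dict:
        prev_hash = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        payload = {
            "certificate_id": certificate_id,
            "determination": determination,
            "timestamp": time.time(),
            "prev_hash": prev_hash,
        }
        payload["hash"] = hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()).hexdigest()
        self.blocks.append(payload)
        return payload

ledger = TinyLedger()
ledger.append("ID-123456", "no false report detected")  # placeholder certificate ID
```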
  • Blockchain is a new application mode of computer technology such as distributed data storage, point-to-point transmission, consensus mechanism, and encryption algorithm.
  • A blockchain is essentially a decentralized database: a chain of data blocks linked by cryptographic methods, in which each data block contains a batch of network transaction information used to verify the validity of the information (anti-counterfeiting) and to generate the next block.
  • the blockchain can include the underlying platform of the blockchain, the platform product service layer, and the application service layer.
  • the underlying platform of the blockchain can include processing modules such as user management, basic services, smart contracts, and operation monitoring.
  • the user management module is responsible for the identity information management of all blockchain participants, including maintenance of public and private key generation (account management), key management, and maintenance of the corresponding relationship between the user's real identity and blockchain address (authority management), etc.
  • The basic service module is deployed on all blockchain node devices to verify the validity of business requests and, after reaching consensus on valid requests, record them in storage.
  • For a new business request, the basic service first performs interface adaptation, parsing and authentication (interface adaptation), then encrypts the business information through the consensus algorithm (consensus management), transmits it completely and consistently to the shared ledger after encryption (network communication), and stores the record; the smart contract module is responsible for contract registration and issuance, contract triggering and contract execution.
  • Developers can define contract logic in a programming language and publish it to the blockchain (contract registration); according to the logic of the contract terms, a key or another event triggers execution to complete the contract logic, and functions for upgrading and cancelling contracts are also provided.
  • The operation monitoring module is mainly responsible for deployment during product release, configuration modification, contract settings and cloud adaptation, as well as visualized output of real-time status during product operation, such as alarms, network condition monitoring, and node equipment health monitoring.
  • The embodiment of the present application provides a method for handling false insurance reports that gives insurance companies anti-fraud and anti-leakage risk identification and control for quickly screening cases: judgment information is read from multiple dimensions of the video recorded by the user, and the report result is determined comprehensively from that information and stored. This reduces the errors caused by large amounts of manual judgment at insurance companies, saves considerable investigation manpower and social resources, improves the timeliness of case closure, and improves users' experience of the insurance company's case handling capability and timeliness, thereby genuinely reducing costs and increasing efficiency for insurance companies, greatly reducing the false case rate, and improving the efficiency of case determination.
  • a device for processing a false insurance report corresponds one-to-one with the method for processing a false insurance report in the above embodiment.
  • The false insurance report processing device includes a response module 101, a first acquisition module 102, a second acquisition module 103, a first analysis module 104, a second analysis module 105, a third analysis module 106, a third acquisition module 107, a fourth acquisition module 108, a determination module 109 and a storage module 110. The detailed description of each functional module is as follows:
  • a response module used to respond to an insurance report request triggered by a user, so as to send a material submission request to the client;
  • the first acquisition module is configured to acquire the certificate information and the accident scene video uploaded by the user according to the material submission request, the accident scene video containing the oral audio of the accident details entered by the user and the accident scene image, and the accident scene image including an image of the user's face;
  • the second acquisition module is configured to acquire the dialogue data fed back by the user terminal, the dialogue data being the answers the user gives to the preset questions, and to acquire the user's position when the user answers the preset questions;
  • the first analysis module is used to analyze the facial image of the user, to obtain the change of the user's facial micro-expression, and obtain a facial expression evaluation value according to the change of the facial micro-expression;
  • a second analysis module configured to analyze the oral audio of the accident details to extract the accident details and the change of the user's voice
  • a third analysis module configured to analyze the sound change to obtain a sound evaluation value, and analyze the accident detail information to obtain a scene evaluation value;
  • a third obtaining module configured to obtain a lying evaluation value according to the accident detail information and the dialogue data
  • a fourth acquiring module configured to acquire a location evaluation value according to the accident detail information and the location of the user
  • a determination module used for determining whether the user has a false report according to the facial expression evaluation value, the sound evaluation value, the scene evaluation value, the lying evaluation value and the position evaluation value;
  • the storage module is configured to store the judgment result in association with the certificate information.
  • The embodiment of this application provides a false insurance report processing device that gives insurance companies anti-fraud and anti-leakage risk identification and control for quickly screening cases: judgment information is read from multiple dimensions of the video recorded by the user, and the report result is determined comprehensively from that information and stored. This reduces the errors caused by large amounts of manual judgment at insurance companies, saves considerable investigation manpower and social resources, improves the timeliness of case closure, and improves users' experience of the insurance company's case handling capability and timeliness, thereby genuinely reducing costs and increasing efficiency for insurance companies, greatly reducing the false case rate, and improving the efficiency of case determination.
  • modules in the device for processing false insurance reports can be implemented by software, hardware and combinations thereof.
  • the above modules can be embedded in or independent of the processor in the computer device in the form of hardware, or stored in the memory in the computer device in the form of software, so that the processor can call and execute the operations corresponding to the above modules.
  • a computer device in one embodiment, is provided, and the computer device can be a server, and its internal structure diagram can be as shown in FIG. 7 .
  • the computer device includes a processor, memory, a network interface, and a database connected by a system bus. Among them, the processor of the computer device is used to provide computing and control capabilities.
  • the memory of the computer device includes a non-volatile storage medium, a volatile storage medium, and an internal memory.
  • the non-volatile storage medium stores an operating system, computer readable instructions and a database.
  • the internal memory provides an environment for the execution of the operating system and computer-readable instructions in the non-volatile storage medium.
  • the network interface of the computer device is used to communicate with an external terminal through a network connection.
  • the computer readable instructions when executed by the processor, implement a method for processing false insurance reports.
  • A computer apparatus is provided, comprising a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, wherein the processor, when executing the computer-readable instructions, implements the following steps: responding to an insurance report request triggered by the user, to send a material submission request to the user terminal; acquiring the certificate information and the accident scene video uploaded by the user according to the material submission request, the accident scene video containing the oral audio of the accident details entered by the user and the accident scene image, and the accident scene image including the user's facial image; acquiring the dialogue data fed back by the user terminal, the dialogue data being the answers the user gives to the preset questions, and acquiring the user's position when the user answers the preset questions; analyzing the user's facial image to obtain the change in the user's facial micro-expressions and obtaining the facial expression evaluation value from that change; analyzing the oral audio of the accident details to extract the accident detail information and the change in the user's voice; analyzing the voice change to obtain the sound evaluation value, and analyzing the accident detail information to obtain the scene evaluation value; obtaining the lying evaluation value according to the accident detail information and the dialogue data; obtaining the location evaluation value according to the accident detail information and the user's position; and determining, according to the facial expression evaluation value, the sound evaluation value, the scene evaluation value, the lying evaluation value and the location evaluation value, whether the user has made a false report, and storing the determination result in association with the certificate information.
  • the processor implements the following steps when executing the computer-readable instructions: configuring corresponding weight values for the facial expression evaluation value, the sound evaluation value, the scene evaluation value, the lying evaluation value and the position evaluation value respectively; calculating the target evaluation value according to the formula totalScore = \sum_{i=1}^{5} a_i x_i, where totalScore represents the target evaluation value, a_i is the weight value corresponding to each evaluation value, and x_i is each evaluation value; judging whether the target evaluation value is greater than a preset threshold; when the target evaluation value is greater than the preset threshold, determining that the user has made a false report; and when the target evaluation value is less than or equal to the preset threshold, determining that the user has not made a false report.
  • the processor implements the following steps when executing the computer-readable instructions: obtaining the surrounding environment information of the incident location by using the incident location; performing scene matching between the surrounding environment information and the incident scene information to obtain a scene matching degree; and obtaining the scene evaluation value according to the scene matching degree, wherein the scene evaluation value is positively correlated with the scene matching degree.
  • the processor implements the following steps when executing the computer-readable instructions: generating the preset questions from the incident time, the incident location and the incident scene information; sending the preset questions to the man-machine dialogue system, so that the man-machine dialogue system initiates dialogue processing with the user through the preset questions;
  • receiving, from the man-machine dialogue system, the dialogue data containing the user's answers to the preset questions, the dialogue data including the incident time, incident location and incident scene information answered by the user; matching the incident time, incident location and incident scene information in the accident detail information, respectively, against the incident time, incident location and incident scene information answered by the user in the dialogue data, to obtain the answer information matching degree; parsing the dialogue data to obtain the pause duration when the user answers each preset question; and obtaining the lying evaluation value according to the answer information matching degree and the pause duration, wherein the lying evaluation value is positively correlated with the answer information matching degree and negatively correlated with the pause duration.
  • the processor implements the following steps when executing the computer-readable instructions: obtaining, in advance, the user's authorization to query the location of the user terminal; querying the location of the user terminal from the operator corresponding to the user terminal according to the certificate information, to obtain the user's location when the dialogue data is generated; comparing the distance between the user's location and the incident location; and obtaining the location evaluation value according to the distance comparison result, wherein the location evaluation value is negatively correlated with the distance comparison result.
  • the processor implements the following steps when executing the computer-readable instructions: storing the determination result in association with the certificate information on the blockchain of the blockchain system.
  • One or more readable storage media are provided, storing computer-readable instructions that, when executed by one or more processors, cause the one or more processors to perform the following steps: responding to an insurance report request triggered by a user, to send a material submission request to the user terminal; obtaining the certificate information and accident scene video uploaded by the user according to the material submission request, the accident scene video containing the oral audio of the accident details entered by the user and an accident scene image, and the accident scene image including the user's facial image; obtaining the dialogue data fed back by the user terminal, the dialogue data being the answers the user gives to preset questions, and obtaining the user's position when the user answers the preset questions; analyzing the user's facial image to obtain the change in the user's facial micro-expressions, and obtaining a facial expression evaluation value according to that change; analyzing the oral audio of the accident details to extract the accident detail information and the change in the user's voice; analyzing the voice change to obtain a sound evaluation value, and analyzing the accident detail information to obtain a scene evaluation value; obtaining a lying evaluation value according to the accident detail information and the dialogue data; obtaining a location evaluation value according to the accident detail information and the user's position; and determining, according to the facial expression evaluation value, the sound evaluation value, the scene evaluation value, the lying evaluation value and the location evaluation value, whether the user has made a false report, and storing the determination result in association with the certificate information.
  • when the computer-readable instructions are executed by one or more processors, the one or more processors further perform the following steps: configuring corresponding weight values for the facial expression evaluation value, the sound evaluation value, the scene evaluation value, the lying evaluation value and the position evaluation value respectively; calculating the target evaluation value according to the formula totalScore = \sum_{i=1}^{5} a_i x_i, where totalScore represents the target evaluation value, a_i is the weight value corresponding to each evaluation value, and x_i is each evaluation value; judging whether the target evaluation value is greater than a preset threshold; when the target evaluation value is greater than the preset threshold, determining that the user has made a false report; and when the target evaluation value is less than or equal to the preset threshold, determining that the user has not made a false report.
  • when the computer-readable instructions are executed by one or more processors, the one or more processors further perform the following steps: obtaining the surrounding environment information of the incident location by using the incident location; performing scene matching between the surrounding environment information and the incident scene information to obtain a scene matching degree; and obtaining the scene evaluation value according to the scene matching degree, wherein the scene evaluation value is positively correlated with the scene matching degree.
  • when the computer-readable instructions are executed by one or more processors, the one or more processors further perform the following steps: generating the preset questions from the incident time, the incident location and the incident scene information; sending the preset questions to the man-machine dialogue system, so that the man-machine dialogue system initiates dialogue processing with the user through the preset questions; receiving, from the man-machine dialogue system, the dialogue data containing the user's answers to the preset questions, the dialogue data including the incident time, incident location and incident scene information answered by the user; matching the incident time, incident location and incident scene information in the accident detail information, respectively, against the incident time, incident location and incident scene information answered by the user in the dialogue data, to obtain the answer information matching degree; parsing the dialogue data to obtain the pause duration when the user answers each preset question; and obtaining the lying evaluation value according to the answer information matching degree and the pause duration, wherein the lying evaluation value is positively correlated with the answer information matching degree and negatively correlated with the pause duration.
  • when the computer-readable instructions are executed by one or more processors, the one or more processors further perform the following steps: obtaining, in advance, the user's authorization to query the location of the user terminal; querying the location of the user terminal from the operator corresponding to the user terminal according to the certificate information, to obtain the user's location when the dialogue data is generated; comparing the distance between the user's location and the incident location; and obtaining the location evaluation value according to the distance comparison result, wherein the location evaluation value is negatively correlated with the distance comparison result.
  • when the computer-readable instructions are executed by one or more processors, the one or more processors further perform the following step: storing the determination result in association with the certificate information on the blockchain of the blockchain system.
  • Nonvolatile memory may include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory may include random access memory (RAM) or external cache memory.
  • RAM is available in various forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Library & Information Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • Technology Law (AREA)
  • General Business, Economics & Management (AREA)
  • Financial Or Insurance-Related Operations Such As Payment And Settlement (AREA)

Abstract

A false insurance claim report processing method and apparatus, and a computer device and a storage medium, which relate to the field of artificial intelligence. The method comprises: acquiring identity information and an accident scene video, which are uploaded by a user, wherein the accident scene video includes accident detail speech audio and an accident scene image, which are input by the user, and the accident scene image comprises a facial image of the user; acquiring dialogue data, and acquiring the position of the user when the user answers pre-set questions; acquiring a facial micro-expression change condition, and acquiring a facial expression evaluation value according to the facial micro-expression change condition; extracting accident detail information and a voice change condition of the user, performing analysis to obtain a voice evaluation value, and analyzing the accident detail information to obtain a scenario evaluation value; acquiring a lying evaluation value; acquiring a position evaluation value according to the accident detail information and the position of the user; and according to the facial expression evaluation value, the voice evaluation value, the scenario evaluation value, the lying evaluation value and the position evaluation value, determining whether a false claim is reported.

Description

False insurance report processing method, device, computer equipment and storage medium
This application claims priority to the Chinese patent application filed with the Chinese Patent Office on December 28, 2020 under application number 202011582853.2 and entitled "False insurance report processing method, device, computer equipment and storage medium", the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the technical field of artificial intelligence, and in particular to a method, device, computer equipment and storage medium for processing false insurance reports.
Background
The most important task in the insurance industry is to identify, prevent and control risk. For example, auto insurance, a branch of property insurance, is widely purchased by the general public: it has a broad audience, a low entry threshold and a high incident rate. Traditional insurance companies can only send surveyors or loss assessors to the accident scene to take photos and collect evidence manually.
Technical Problem
The inventor realized that policyholders may file false reports or exaggerate losses, and organized groups may even deliberately stage fake cases, causing insurance company surveyors and loss assessors to misreport or miss cases; a large number of false cases eventually appear, so the false case rate is high.
Technical Solution
Embodiments of the present application provide a method, device, computer equipment and storage medium for processing false insurance reports, so as to solve the problem of a high false case rate.
A method for handling false insurance reports, comprising:
responding to an insurance report request triggered by a user, to send a material submission request to the user terminal;
acquiring the certificate information and accident scene video uploaded by the user according to the material submission request, the accident scene video containing the oral audio of the accident details entered by the user and an accident scene image, and the accident scene image including the user's facial image;
acquiring dialogue data fed back by the user terminal, the dialogue data being the answers the user gives to preset questions, and acquiring the user's position when the user answers the preset questions;
analyzing the user's facial image to obtain the change in the user's facial micro-expressions, and obtaining a facial expression evaluation value according to that change;
analyzing the oral audio of the accident details to extract the accident detail information and the change in the user's voice;
analyzing the voice change to obtain a sound evaluation value, and analyzing the accident detail information to obtain a scene evaluation value;
obtaining a lying evaluation value according to the accident detail information and the dialogue data;
obtaining a location evaluation value according to the accident detail information and the user's position;
determining, according to the facial expression evaluation value, the sound evaluation value, the scene evaluation value, the lying evaluation value and the position evaluation value, whether the user has made a false report, and storing the determination result in association with the certificate information.
A device for processing false insurance reports, comprising:
a response module, configured to respond to an insurance report request triggered by a user, to send a material submission request to the user terminal;
a first acquisition module, configured to acquire the certificate information and accident scene video uploaded by the user according to the material submission request, the accident scene video containing the oral audio of the accident details entered by the user and an accident scene image, and the accident scene image including the user's facial image;
a second acquisition module, configured to acquire the dialogue data fed back by the user terminal, the dialogue data being the answers the user gives to preset questions, and to acquire the user's position when the user answers the preset questions;
a first analysis module, configured to analyze the user's facial image to obtain the change in the user's facial micro-expressions, and to obtain a facial expression evaluation value according to that change;
a second analysis module, configured to analyze the oral audio of the accident details to extract the accident detail information and the change in the user's voice;
a third analysis module, configured to analyze the voice change to obtain a sound evaluation value, and to analyze the accident detail information to obtain a scene evaluation value;
a third acquisition module, configured to obtain a lying evaluation value according to the accident detail information and the dialogue data;
a fourth acquisition module, configured to obtain a location evaluation value according to the accident detail information and the user's position;
a determination module, configured to determine, according to the facial expression evaluation value, the sound evaluation value, the scene evaluation value, the lying evaluation value and the position evaluation value, whether the user has made a false report;
a storage module, configured to store the determination result in association with the certificate information.
A computer device includes a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, where the processor implements the following steps when executing the computer-readable instructions:
responding to an insurance report request triggered by a user, so as to send a material submission request to the user terminal;
acquiring certificate information and an accident scene video uploaded by the user according to the material submission request, where the accident scene video contains dictated accident-detail audio recorded by the user and accident scene images, and the accident scene images include a facial image of the user;
acquiring dialogue data fed back by the user terminal, where the dialogue data is dialogue data in which the user answers preset questions, and acquiring the user's location when the user answers the preset questions;
analyzing the facial image of the user to obtain changes in the user's facial micro-expressions, and obtaining a facial expression evaluation value according to the changes in the facial micro-expressions;
analyzing the dictated accident-detail audio to extract accident detail information and changes in the user's voice;
analyzing the voice changes to obtain a voice evaluation value, and analyzing the accident detail information to obtain a scene evaluation value;
obtaining a lying evaluation value according to the accident detail information and the dialogue data;
obtaining a location evaluation value according to the accident detail information and the user's location; and
determining, according to the facial expression evaluation value, the voice evaluation value, the scene evaluation value, the lying evaluation value, and the location evaluation value, whether the user has filed a false report, and storing the determination result in association with the certificate information.
One or more readable storage media storing computer-readable instructions which, when executed by one or more processors, cause the one or more processors to perform the following steps:
responding to an insurance report request triggered by a user, so as to send a material submission request to the user terminal;
acquiring certificate information and an accident scene video uploaded by the user according to the material submission request, where the accident scene video contains dictated accident-detail audio recorded by the user and accident scene images, and the accident scene images include a facial image of the user;
acquiring dialogue data fed back by the user terminal, where the dialogue data is dialogue data in which the user answers preset questions, and acquiring the user's location when the user answers the preset questions;
analyzing the facial image of the user to obtain changes in the user's facial micro-expressions, and obtaining a facial expression evaluation value according to the changes in the facial micro-expressions;
analyzing the dictated accident-detail audio to extract accident detail information and changes in the user's voice;
analyzing the voice changes to obtain a voice evaluation value, and analyzing the accident detail information to obtain a scene evaluation value;
obtaining a lying evaluation value according to the accident detail information and the dialogue data;
obtaining a location evaluation value according to the accident detail information and the user's location; and
determining, according to the facial expression evaluation value, the voice evaluation value, the scene evaluation value, the lying evaluation value, and the location evaluation value, whether the user has filed a false report, and storing the determination result in association with the certificate information.
In this solution, determination information of multiple dimensions is read from a video recorded by the user, and the report result is comprehensively determined and stored in association according to the determination information. This reduces errors caused by extensive manual determination by the insurance company, saves substantial investigation manpower and social resources, shortens the time needed to close cases, and gives users a better experience of the insurance company's case-handling capability and timeliness, thereby genuinely reducing costs and increasing efficiency for the insurance company and greatly lowering the false case rate.
The details of one or more embodiments of the present application are set forth in the following drawings and description, and other features and advantages of the present application will become apparent from the description, the drawings, and the claims.
Description of Drawings
To describe the technical solutions in the embodiments of the present application more clearly, the following briefly introduces the drawings required for describing the embodiments. Apparently, the drawings in the following description show only some embodiments of the present application, and a person of ordinary skill in the art may derive other drawings from these drawings without creative effort.
Fig. 1 is a schematic diagram of a system framework of a false insurance report processing method according to an embodiment of the present application;
Fig. 2 is a schematic flowchart of a false insurance report processing method according to an embodiment of the present application;
Fig. 3 is a schematic diagram of a specific implementation of step S60 in Fig. 2;
Fig. 4 is a schematic diagram of a specific implementation of step S70 in Fig. 2;
Fig. 5 is a schematic diagram of a specific implementation of step S80 in Fig. 2;
Fig. 6 is a schematic structural diagram of a false insurance report processing apparatus according to an embodiment of the present application;
Fig. 7 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings in the embodiments of the present application. Apparently, the described embodiments are only some rather than all of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort shall fall within the protection scope of the present application.
The present application relates to the technical fields of artificial intelligence and blockchain, and provides a false insurance report processing method. For ease of understanding, the false insurance report processing system to which the method is applied is described first. As shown in Fig. 1, the false insurance report processing system of this solution mainly includes a report processing terminal, a user terminal, and a human-machine dialogue system, and may further include a blockchain system. The user terminal may be, but is not limited to, a personal computer, a notebook computer, a smartphone, a tablet computer, or a portable wearable device. The human-machine dialogue system may be implemented by an independent server or by a server cluster composed of multiple servers.
The report processing terminal, the user terminal, and the human-machine dialogue system in the embodiments of the present application may all acquire and process related data based on artificial intelligence technology. Artificial intelligence (AI) is a theory, method, technology, and application system that uses digital computers, or machines controlled by digital computers, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use the knowledge to obtain optimal results.
Basic artificial intelligence technologies generally include sensors, dedicated AI chips, cloud computing, distributed storage, big data processing, operation/interaction systems, and mechatronics. AI software technologies mainly include computer vision, robotics, biometrics, speech processing, natural language processing, and machine learning/deep learning.
The false insurance report processing method of this solution is described below with reference to the above false insurance report processing system. As shown in Fig. 2, the method mainly includes the following steps:
S10: Respond to an insurance report request triggered by a user, so as to send a material submission request to the user terminal.
When an insurance accident occurs, a user who has purchased the corresponding insurance can trigger an insurance report request through the official insurance application. Taking auto insurance as an example, a user may purchase auto insurance for a vehicle; when an accident such as a collision occurs while driving, the user needs to file an auto insurance claim. The insurance report request can be triggered through the official insurance application installed on the user terminal. The user terminal may be a terminal device such as the user's mobile phone, which is not limited here. After the user triggers the insurance report request through the official insurance application on the user terminal, the user terminal forwards the request to the report processing terminal.
After receiving the insurance report request, the report processing terminal responds to it; specifically, it sends a material submission request to the user terminal. The material submission request contains instruction information that instructs the user to upload the user's certificate information and an accident scene video according to preset operations.
After the user terminal receives the material submission request, the user can upload the certificate information and the accident scene video as instructed. The instruction information also indicates the various information that must be captured when recording the accident scene video, including the user's facial image; in other words, the user must appear in the recorded accident scene video so that it can be analyzed further later.
The accident scene video contains dictated accident-detail audio recorded by the user and accident scene images. The dictated accident-detail audio is the audio recorded by the user, as instructed by the material submission request, describing the details of the current accident, including but not limited to the time of the accident, the location, the scene, the persons involved, and the losses. Specifically, the material submission request can be configured according to the subsequent false-report determination requirements, so that the user records the required dictated accident-detail audio accordingly. The accident scene images are images the user captures of the accident scene and of themselves; for example, in a vehicle collision, the people, vehicles, impact position, and collision location at the scene must be recorded on video, so that the accident scene video contains accident scene images.
The certificate information is information that uniquely identifies the user, such as an ID card, a driver's license, or a social security card.
S20: Acquire the certificate information and the accident scene video uploaded by the user according to the material submission request, where the accident scene video contains dictated accident-detail audio recorded by the user and accident scene images, and the accident scene images include the user's facial image.
The report processing terminal can acquire the certificate information and the accident scene video uploaded by the user through the user terminal.
S30: Acquire dialogue data fed back by the user terminal, where the dialogue data is dialogue data in which the user answers preset questions, and acquire the user's location when the user answers the preset questions.
After the certificate information and the accident scene video uploaded by the user according to the material submission request are acquired, the first stage of information entry is complete. In this solution, to improve the detection of false cases, the dialogue data fed back by the user terminal is further acquired; this is the dialogue data in which the user answers the preset questions.
Specifically, in practice, after the certificate information and the accident scene video are acquired, the human-machine dialogue system is triggered to connect to the user terminal so as to conduct a human-machine dialogue with the user through the user terminal. Preset questions are configured in this solution, and the human-machine dialogue system asks the user these questions to obtain the user's answers, thereby obtaining the dialogue data in which the user answers the preset questions, which is then fed back to the report processing terminal. It should be noted that the preset questions may be a predefined template, for example directly asking the user about the location, time, and scene of the incident, or they may be generated from the accident detail information obtained by parsing the dictated accident-detail audio recorded by the user. This is not limited here, as long as the questions have reference value for the subsequent determination of false cases.
S40: Analyze the user's facial image to obtain changes in the user's facial micro-expressions, and obtain a facial expression evaluation value according to the changes in the facial micro-expressions.
As described above, the accident scene video contains the dictated accident-detail audio recorded by the user and accident scene images, and the accident scene images include the user's facial image. To improve the handling of false insurance reports, this solution also analyzes the user's facial image to obtain changes in the user's facial micro-expressions and obtains a facial expression evaluation value accordingly. That is, the user's facial expressions while recording the accident scene video are evaluated to determine the truthfulness of the dictated audio.
Specifically, in one application scenario, the report processing terminal analyzes the facial image using micro-expression processing technology to obtain the user's micro-expressions and feeds the obtained micro-expressions back to anti-fraud experts for evaluation. For the 30 currently known micro-expressions (preset by the system and depending on the micro-expression processing technology; not described individually here), the corresponding anti-fraud experts assign scores in the range of 0 to 10, so that the expression recognized from the user's micro-expressions receives a corresponding score, namely the facial expression evaluation value.
Alternatively, in another application scenario, the report processing terminal analyzes the facial image using micro-expression processing technology to obtain the user's micro-expressions and calculates the facial expression evaluation value directly from the obtained micro-expressions. This solution is not limited in this respect.
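For illustration only, the following Python sketch shows one way such a direct calculation could look. It assumes a hypothetical upstream micro-expression classifier that outputs a label per video frame and a preconfigured expression-to-score table; the labels, table values, and aggregation rule are assumptions and are not taken from the disclosure.

```python
# Illustrative sketch only: the classifier output format and the score table
# below are assumptions, not part of the original disclosure.
from collections import Counter
from typing import List

# Hypothetical mapping from recognized micro-expressions to 0-10 scores,
# e.g. as preconfigured by anti-fraud experts.
EXPRESSION_SCORES = {
    "contempt": 8.0,
    "fear": 7.0,
    "surprise": 4.0,
    "sadness": 3.0,
    "neutral": 2.0,
}

def facial_expression_score(detected_expressions: List[str]) -> float:
    """Aggregate per-frame micro-expression labels into one 0-10 evaluation value."""
    if not detected_expressions:
        return 0.0
    counts = Counter(detected_expressions)
    total = sum(counts.values())
    # Frequency-weighted average of the per-expression scores (unknown labels -> 5.0).
    score = sum(EXPRESSION_SCORES.get(label, 5.0) * n for label, n in counts.items()) / total
    return round(min(max(score, 0.0), 10.0), 2)

# Example: labels produced per video frame by some micro-expression model.
print(facial_expression_score(["neutral", "neutral", "fear", "contempt"]))
```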
S50: Analyze the dictated accident-detail audio to extract accident detail information and changes in the user's voice.
As described above, the accident scene video contains the dictated accident-detail audio recorded by the user. After the accident scene video is obtained, the corresponding dictated audio can be extracted using audio parsing technology, and audio analysis technology can then be used to extract the accident detail information and the changes in the user's voice.
The accident detail information relates to the user's dictation of the accident details while recording the accident scene video. As described above, the dictated accident-detail audio is the audio the user records, as instructed by the material submission request, about the details of the current accident, including but not limited to the time of the accident, the location, the scene, the persons involved, and the losses. Therefore, by analyzing the dictated accident-detail audio, accident detail information such as the time, location, scene, persons involved, and losses can be extracted, according to the false-report determination requirements. In implementation, keywords can be identified in the content spoken by the user in the dictated audio using Chinese word segmentation technology, from which the accident detail information is obtained.
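For illustration only, the following sketch shows keyword extraction from a transcript using Chinese word segmentation. It assumes the dictated audio has already been converted to text by a separate speech-to-text step, and it uses the jieba library purely as one example of a segmentation/keyword-extraction tool; the sample transcript is invented.

```python
# Illustrative sketch: assumes the dictated accident audio has already been
# transcribed to text; jieba is used only as one example of a Chinese word
# segmentation / keyword extraction library.
import jieba.analyse

transcript = "2020年5月3日17时，我在A市某路段撞到前方车辆，前挡风玻璃碎裂。"

# Extract the most salient keywords from the dictated content.
keywords = jieba.analyse.extract_tags(transcript, topK=8)
print(keywords)  # candidate values for incident time, location, and scene details
```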
The voice changes refer to the changes in the user's voice during dictation, as reflected in the dictated accident-detail audio.
S60: Analyze the voice changes to obtain a voice evaluation value, and analyze the accident detail information to obtain a scene evaluation value.
Voice changes can reflect the user's psychological state to a certain extent, so the obtained voice changes can be analyzed to derive a voice evaluation value: the higher the likelihood, as judged from the voice changes, that the user's dictation is false, the higher the voice evaluation value; the lower that likelihood, the lower the voice evaluation value. It should be noted that, in implementation, a voice evaluation algorithm can be used to analyze the voice changes and estimate the likelihood that the user's dictation is false, yielding the voice evaluation value; this is not described in detail here.
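For illustration only, one possible voice evaluation heuristic is sketched below. It assumes a per-frame pitch contour (in Hz) has already been extracted from the dictated audio by an upstream tool, and it maps pitch variability to a 0-10 value; the normalization constant is an arbitrary assumption, not part of the disclosed algorithm.

```python
# Illustrative heuristic only: assumes a per-frame pitch contour (Hz) extracted
# upstream; the full-scale coefficient of variation is an arbitrary choice.
from statistics import mean, pstdev
from typing import List

def voice_evaluation_value(pitch_contour: List[float], full_scale_cv: float = 0.35) -> float:
    """Map pitch variability to a 0-10 value: larger variation -> higher score."""
    voiced = [p for p in pitch_contour if p > 0]          # drop unvoiced frames
    if len(voiced) < 2:
        return 0.0
    cv = pstdev(voiced) / max(mean(voiced), 1e-6)         # coefficient of variation
    return round(min(cv / full_scale_cv, 1.0) * 10.0, 2)  # clamp to [0, 10]

print(voice_evaluation_value([180.0, 210.0, 0.0, 250.0, 190.0, 320.0]))
```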
In addition, the accident detail information can be further analyzed to obtain a scene evaluation value, which characterizes the authenticity of the current accident scene.
As an example, the accident detail information contains the incident location and incident scene information. As shown in Fig. 3, analyzing the accident detail information in S60 to obtain the scene evaluation value specifically includes the following steps:
S61: Use the incident location to obtain information about the surroundings of the incident location.
As described above, the accident detail information obtained by parsing may include the incident location and the incident scene information, where the incident location is the place where the accident occurred and the incident scene information describes the scene in which the accident occurred. Therefore, once the incident location is obtained, it can be used to obtain information about its surroundings, such as the buildings and road conditions around it.
S62: Perform scene matching between the surrounding environment information and the incident scene information to obtain a scene matching degree.
S63: Obtain the scene evaluation value according to the scene matching degree, where the scene evaluation value is positively correlated with the scene matching degree.
For example, the user may state during dictation that the accident happened because a sheep appeared on the road at location A, causing the vehicle to hit a roadside guardrail. The surrounding environment information of location A can then be obtained; it indicates, for instance, whether there is a village near location A. If there is not, the scene matching degree is low and the case may be fraudulent; otherwise the scene matching degree is higher. Likewise, if the road at the incident location is uneven and potholed, a traffic accident is more plausible and the scene matching degree is higher; otherwise it is lower. The scene evaluation value is obtained from the scene matching degree, with which it is positively correlated: the higher the scene matching degree, the higher the scene evaluation value. It should be noted that scene verification makes it possible to judge whether the user's dictation may be false, providing an additional effective basis for the subsequent false-report determination and improving the accuracy of the solution.
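For illustration only, the sketch below shows one simple realization of S62-S63. It assumes the surrounding environment information is available as a set of descriptive tags (for example from a map or point-of-interest lookup) and treats the scene matching degree as a keyword-overlap ratio; both assumptions go beyond what the disclosure specifies.

```python
# Illustrative sketch: surrounding-environment tags are assumed to come from some
# map / POI lookup; the overlap-based matching degree is only one possible choice.
from typing import Iterable, List

def scene_matching_degree(environment_tags: Iterable[str], scene_keywords: List[str]) -> float:
    """Fraction of dictated scene keywords that are supported by the surroundings."""
    env = set(environment_tags)
    if not scene_keywords:
        return 0.0
    hits = sum(1 for kw in scene_keywords if kw in env)
    return hits / len(scene_keywords)

def scene_evaluation_value(matching_degree: float) -> float:
    """Scene evaluation value is positively correlated with the matching degree (S63)."""
    return round(matching_degree * 10.0, 2)

environment = {"rural road", "village nearby", "uneven surface"}
dictated = ["village nearby", "sheep on road", "uneven surface"]
print(scene_evaluation_value(scene_matching_degree(environment, dictated)))
```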
S70: Obtain a lying evaluation value according to the accident detail information and the dialogue data.
In this solution, a lying evaluation value is further obtained according to the accident detail information and the dialogue data. The lying evaluation value is also used to judge the false components of the user's dictation; unlike the scene evaluation value, it also takes the dialogue data into account rather than relying solely on the accident detail information.
As an example, the accident detail information contains the incident time, the incident location, and the incident scene information. As shown in Fig. 4, obtaining the lying evaluation value in S70 according to the accident detail information and the dialogue data specifically includes the following steps:
S71: Generate preset questions using the incident time, the incident location, and the incident scene information.
S72: Send the preset questions to the human-machine dialogue system, so that the human-machine dialogue system initiates dialogue processing with the user through the preset questions.
After the dictated accident-detail audio is parsed to obtain the accident detail information, the incident time, incident location, and incident scene information are used to generate preset questions. For example, if the incident occurred at 17:00 on May 3, 2020 in city A, and the incident scene information indicates that the vehicle hit the vehicle in front and the front window shattered, the following preset questions can be generated: "Did your accident occur at 17:00 on May 3, 2020?"; "Did your accident occur in city A or city B?"; "Did you hit another vehicle or were you hit by another vehicle, and where is the damage located?"
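For illustration only, the following sketch shows template-based generation of such preset questions from the parsed incident time, location, and scene summary; the function name, parameters, and the decoy city are illustrative assumptions.

```python
# Illustrative sketch: template-based generation of preset questions from the
# parsed accident details (names and the decoy city are assumptions).
from typing import List

def generate_preset_questions(incident_time: str, incident_place: str,
                              scene_summary: str, decoy_place: str = "city B") -> List[str]:
    return [
        f"Did your accident occur at {incident_time}?",
        f"Did your accident occur in {incident_place} or {decoy_place}?",
        f"Regarding the accident ({scene_summary}): did you hit another vehicle "
        f"or were you hit, and where is the damage located?",
    ]

questions = generate_preset_questions(
    "17:00 on May 3, 2020", "city A", "front-end collision, shattered front window")
for q in questions:
    print(q)
```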
It can be seen that in this embodiment the preset questions are generated from the user's own dictation, which makes it easy to find logical inconsistencies in the user's subsequent answers, thereby judging the false components of the dictation and, in turn, the possibility of a false report.
Specifically, the preset questions are sent to the human-machine dialogue system so that the system initiates dialogue processing with the user based on them. It should be noted that delegating the dialogue process to a dedicated human-machine dialogue system reduces the processing burden on the report processing terminal and improves efficiency.
S73: Receive the dialogue data, fed back by the human-machine dialogue system, in which the user answers the preset questions, where the dialogue data contains the incident time, incident location, and incident scene information answered by the user.
After receiving the preset questions, the human-machine dialogue system connects to the user terminal and carries out the human-machine dialogue. Specifically, the dialogue data fed back by the system is the data in which the user answers the preset questions, and it contains the incident time, incident location, and incident scene information given in the user's answers.
S74: Match the incident time, incident location, and incident scene information in the accident detail information against the incident time, incident location, and incident scene information answered by the user in the dialogue data, respectively, to obtain an answer information matching degree.
It can be understood that after steps S71-S73 two sets of information are available: on the one hand, the incident time, location, and scene information obtained from the dictated accident-detail audio uploaded by the user; on the other hand, the incident time, location, and scene information obtained through the subsequent dialogue. In this step, the incident time, location, and scene information in the accident detail information are matched against the corresponding items answered by the user in the dialogue data to obtain the answer information matching degree. For example, if the incident locations do not match, the user has contradicted themselves and a false report is possible. It should be noted that, in practice, more matching information items can be configured and multiple dialogues can be conducted to obtain the answer information matching degree for the required answers.
The incident scene information depends on the scenario to which this solution is applied. As a simple example, if the current incident is a vehicle accident, the incident scene information may refer to the number of people involved, the damaged parts of the vehicle, and so on; this is not specifically limited.
S75: Parse the dialogue data to obtain the pause duration when the user answers each preset question.
S76: Obtain the lying evaluation value according to the answer information matching degree and the pause durations, where the lying evaluation value is positively correlated with the answer information matching degree and negatively correlated with the pause duration.
Regarding steps S75-S76, it can be understood that, in addition to the answer information matching degree, the dialogue data must be further parsed to obtain the pause duration when the user answers each preset question, and whether fraud exists is judged from the length of the pauses in the dialogue. For a given question, an excessively long pause indicates that the user hesitated over it and that a false report is possible, which is why the pause duration for each preset question is obtained. Finally, the lying evaluation value is obtained from the answer information matching degree and the pause durations: it is positively correlated with the answer information matching degree (the higher the matching degree, the higher the lying evaluation value, and vice versa) and negatively correlated with the pause duration (the longer the pause, the lower the lying evaluation value, and vice versa). The final lying evaluation value may be the sum of the evaluation values corresponding to the answer information matching degree and the pause durations.
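For illustration only, the sketch below combines the two components as described in S76, with the lying evaluation value rising with the answer information matching degree and falling with the average pause duration; the even split of the 0-10 scale and the pause cap are assumptions.

```python
# Illustrative sketch following S76: positive correlation with the answer
# information matching degree, negative correlation with the pause duration.
# The 5 + 5 split of the scale and the pause cap are assumptions.
from typing import List

def lying_evaluation_value(answer_match_degree: float,
                           pause_seconds: List[float],
                           max_pause: float = 10.0) -> float:
    """answer_match_degree in [0, 1]; pause_seconds holds per-question pause times."""
    match_component = answer_match_degree * 5.0                             # 0..5
    avg_pause = sum(pause_seconds) / len(pause_seconds) if pause_seconds else max_pause
    pause_component = (1.0 - min(avg_pause, max_pause) / max_pause) * 5.0   # 0..5
    return round(match_component + pause_component, 2)

print(lying_evaluation_value(0.9, [1.2, 0.8, 2.5]))
```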
S80: Obtain a location evaluation value according to the accident detail information and the user's location.
In this solution, a location evaluation value is further obtained according to the accident detail information and the user's location; this evaluation value characterizes the accuracy of the reported accident location.
As an example, as shown in Fig. 5, the accident detail information contains the incident location, and obtaining the location evaluation value in step S80 according to the accident detail information and the user's location specifically includes the following steps:
S81: Obtain the user's authorization in advance to look up the location of the user terminal.
In this solution, to facilitate the determination of false reports, the user's authorization to look up the location of the user terminal must be obtained first, so as to avoid privacy issues. Specifically, the location of the user's terminal can be obtained only after the user grants authorization.
S82: Query the location of the user terminal from the operator corresponding to the user terminal according to the certificate information, so as to obtain the user's location when the dialogue data was generated.
As described above, when reporting the case the user also uploads certificate information, and the report processing terminal can use the certificate information to query the location of the user terminal from the corresponding operator. The operator determines whether the report processing terminal is authorized; only after confirming that the report processing terminal has the user's authorization does the operator, in response to the query request, return the corresponding user terminal location based on the certificate information. In this way, the user's location when the dialogue data was generated, that is, the longitude and latitude of the user during the dialogue, can be obtained.
S83: Compare the distance between the user's location and the incident location.
S84: Obtain the location evaluation value according to the distance comparison result, where the location evaluation value is negatively correlated with the distance comparison result.
After the user's location is obtained, the queried location is compared with the incident location extracted from the dictated accident details, and the location evaluation value is obtained from the distance comparison result, with which it is negatively correlated: the farther the user's location is from the incident location, the lower the location evaluation value, and vice versa.
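For illustration only, the sketch below computes such a distance-based location evaluation value. It assumes both the queried user position and the incident location are available as latitude/longitude pairs; the haversine great-circle distance and the linear fall-off over a fixed radius are illustrative choices, not mandated by the disclosure.

```python
# Illustrative sketch: both positions are assumed to be (latitude, longitude) pairs;
# the linear fall-off over `max_km` is arbitrary (S84: negative correlation with distance).
from math import asin, cos, radians, sin, sqrt
from typing import Tuple

def haversine_km(p1: Tuple[float, float], p2: Tuple[float, float]) -> float:
    """Great-circle distance between two lat/lon points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (*p1, *p2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 6371.0 * 2 * asin(sqrt(a))

def location_evaluation_value(user_position: Tuple[float, float],
                              incident_position: Tuple[float, float],
                              max_km: float = 50.0) -> float:
    """0-10 value that decreases as the user moves farther from the incident location."""
    distance = haversine_km(user_position, incident_position)
    return round(max(0.0, 1.0 - distance / max_km) * 10.0, 2)

print(location_evaluation_value((22.543, 114.057), (22.620, 114.120)))
```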
S90: Determine, according to the facial expression evaluation value, the voice evaluation value, the scene evaluation value, the lying evaluation value, and the location evaluation value, whether the user has filed a false report, and store the determination result in association with the certificate information.
It should be noted that the preceding steps yield the facial expression evaluation value, the voice evaluation value, the scene evaluation value, the lying evaluation value, and the location evaluation value, all of which are determined from objectively existing facts. Finally, whether the user has filed a false report is determined from these five evaluation values, and the determination result is stored in association with the certificate information, yielding the processing result of the user's report.
As an example, determining in step S90 whether the user has filed a false report according to the facial expression evaluation value, the voice evaluation value, the scene evaluation value, the lying evaluation value, and the location evaluation value specifically includes the following steps:
S91: Configure corresponding weight values for the facial expression evaluation value, the voice evaluation value, the scene evaluation value, the lying evaluation value, and the location evaluation value, respectively.
S92: Calculate a target evaluation value according to the following formula:
totalScore = \sum_{i=1}^{5} a_i x_i
where totalScore denotes the target evaluation value, a_i denotes the weight value corresponding to each evaluation value, and x_i denotes each evaluation value.
S93: Determine whether the target evaluation value is greater than a preset threshold.
S94: When the target evaluation value is greater than the preset threshold, determine that the user has filed a false report.
S95: When the target evaluation value is less than or equal to the preset threshold, determine that the user has not filed a false report.
Finally, the scores of the above five dimensions all fall in the range [0, 10]. Each dimension's score is then assigned a corresponding weight, with the weights summing to 1, and the final total score is calculated according to the following formula:
totalScore = \sum_{i=1}^{5} \alpha_i x_i
where α_i is the weight value of each dimension, with default values of (0.1, 0.1, 0.2, 0.3, 0.3) in order. When totalScore is greater than 8, the risk can be considered high; when totalScore falls within [5, 8), there is some risk; below 5, the risk is extremely low.
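For illustration only, the sketch below applies the weighted aggregation of S92 with the default weights (0.1, 0.1, 0.2, 0.3, 0.3) and the risk bands given above; mapping the weights to the five dimensions in the order they are enumerated is an assumption.

```python
# Sketch of the weighted aggregation (S92) with the default weights and the risk
# bands described above; the dimension order below is assumed to follow the text.
from typing import Sequence

DEFAULT_WEIGHTS = (0.1, 0.1, 0.2, 0.3, 0.3)  # face, voice, scene, lying, location

def total_score(scores: Sequence[float], weights: Sequence[float] = DEFAULT_WEIGHTS) -> float:
    """scores: five values in [0, 10]; weights are expected to sum to 1."""
    assert len(scores) == len(weights) == 5
    return sum(a * x for a, x in zip(weights, scores))

def risk_level(score: float) -> str:
    if score > 8:
        return "high risk"
    if score >= 5:
        return "some risk"
    return "very low risk"

s = total_score([6.0, 7.5, 8.0, 9.0, 8.5])
print(round(s, 2), risk_level(s))
```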
The above weight values can be readjusted according to the risk conclusions ultimately fed back, so as to track the final operating results more closely.
As an example, the rule for configuring the weights is mainly confirmed through the final case conclusions. Specifically, once the number of cases reaches 2000, if the proportion of cases in which the final conclusion matches the conclusion given by the system's score falls below 50%, the conclusions of the inconsistent cases are categorized to see which dimensions' scores affect the final score. For example, if there are 2000 cases but only 800 of the final conclusions match the system's conclusions, i.e. only 40%, then the conclusions of the remaining 1200 cases are categorized, the number attributed to each dimension is counted, and the adjusted weight value is obtained by dividing each dimension's count by the total of 1200.
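For illustration only, the sketch below follows the re-weighting example above: each mismatched case is attributed to one dimension, and the adjusted weight of a dimension is its attribution count divided by the total number of mismatched cases; how each case is attributed to a dimension is assumed to be decided elsewhere.

```python
# Sketch of the re-weighting example: new weight of a dimension = its attribution
# count among the mismatched cases divided by the total number of mismatches.
from collections import Counter
from typing import Dict, List

DIMENSIONS = ["face", "voice", "scene", "lying", "location"]

def readjust_weights(mismatch_attributions: List[str]) -> Dict[str, float]:
    """mismatch_attributions: one dimension name per mismatched case (e.g. 1200 entries)."""
    counts = Counter(mismatch_attributions)
    total = sum(counts.values())
    return {dim: counts.get(dim, 0) / total for dim in DIMENSIONS}

# Hypothetical attribution of 1200 mismatched cases.
sample = ["lying"] * 420 + ["location"] * 360 + ["scene"] * 240 + ["voice"] * 120 + ["face"] * 60
print(readjust_weights(sample))
```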
As an example, storing the determination result in association with the certificate information refers to storing the determination result and the certificate information in association on the blockchain of the blockchain system. Storing the certificate information on the blockchain makes it easy to trace false reporters and ensures that the report results cannot be tampered with.
A blockchain is a new application mode of computer technologies such as distributed data storage, peer-to-peer transmission, consensus mechanisms, and encryption algorithms. A blockchain is essentially a decentralized database, a chain of data blocks generated in association using cryptographic methods; each data block contains a batch of network transaction information used to verify the validity of the information (anti-counterfeiting) and to generate the next block. A blockchain may include an underlying blockchain platform, a platform product service layer, and an application service layer.
The underlying blockchain platform may include processing modules such as user management, basic services, smart contracts, and operation monitoring. The user management module is responsible for managing the identity information of all blockchain participants, including maintaining public/private key generation (account management), key management, and the correspondence between users' real identities and blockchain addresses (permission management), and, where authorized, supervising and auditing the transactions of certain real identities and providing rule configuration for risk control (risk control audit). The basic service module is deployed on all blockchain node devices to verify the validity of service requests and, after consensus is reached on valid requests, record them to storage; for a new service request, the basic service first performs interface adaptation, parsing, and authentication (interface adaptation), then encrypts the service information through a consensus algorithm (consensus management), transmits it completely and consistently to the shared ledger after encryption (network communication), and records and stores it. The smart contract module is responsible for contract registration and issuance as well as contract triggering and execution; developers can define contract logic in a programming language and publish it to the blockchain (contract registration), and, according to the logic of the contract terms, a key or another event triggers execution to complete the contract logic, with contract upgrade and cancellation also supported. The operation monitoring module is mainly responsible for deployment during product release, configuration modification, contract settings, and cloud adaptation, as well as visualized output of real-time status during product operation, such as alarms, monitoring of network conditions, and monitoring of node device health.
It can be seen that the embodiments of the present application provide a false insurance report processing method that mainly provides insurance companies with anti-fraud and anti-leakage risk identification and control for quickly screening cases. By having the user record a video, determination information of multiple dimensions is read, and the report result is comprehensively determined and stored in association according to this information. This reduces errors caused by extensive manual determination by the insurance company, saves substantial investigation manpower and social resources, shortens the time needed to close cases, and gives users a better experience of the insurance company's case-handling capability and timeliness, thereby genuinely reducing costs and increasing efficiency for the insurance company, greatly lowering the false case rate, and improving the efficiency of case determination.
It should be understood that the sequence numbers of the steps in the above embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
In an embodiment, a false insurance report processing apparatus is provided, which corresponds one-to-one to the false insurance report processing method in the above embodiments. As shown in Fig. 6, the false insurance report processing apparatus includes a response module 101, a first acquisition module 102, a second acquisition module 103, a first analysis module 104, a second analysis module 105, a third analysis module 106, a third acquisition module 107, a fourth acquisition module 108, a determination module 109, and a storage module 110. The functional modules are described in detail as follows:
a response module, configured to respond to an insurance report request triggered by a user, so as to send a material submission request to the user terminal;
a first acquisition module, configured to acquire certificate information and an accident scene video uploaded by the user according to the material submission request, where the accident scene video contains dictated accident-detail audio recorded by the user and accident scene images, and the accident scene images include a facial image of the user;
a second acquisition module, configured to acquire dialogue data fed back by the user terminal, where the dialogue data is dialogue data in which the user answers preset questions, and to acquire the user's location when the user answers the preset questions;
a first analysis module, configured to analyze the facial image of the user to obtain changes in the user's facial micro-expressions, and to obtain a facial expression evaluation value according to the changes in the facial micro-expressions;
a second analysis module, configured to analyze the dictated accident-detail audio to extract accident detail information and changes in the user's voice;
a third analysis module, configured to analyze the voice changes to obtain a voice evaluation value, and to analyze the accident detail information to obtain a scene evaluation value;
a third acquisition module, configured to obtain a lying evaluation value according to the accident detail information and the dialogue data;
a fourth acquisition module, configured to obtain a location evaluation value according to the accident detail information and the user's location;
a determination module, configured to determine, according to the facial expression evaluation value, the voice evaluation value, the scene evaluation value, the lying evaluation value, and the location evaluation value, whether the user has filed a false report; and
a storage module, configured to store the determination result in association with the certificate information.
It can be seen that the embodiments of the present application provide a false insurance report processing apparatus that mainly provides insurance companies with anti-fraud and anti-leakage risk identification and control for quickly screening cases. By having the user record a video, determination information of multiple dimensions is read, and the report result is comprehensively determined and stored in association according to this information. This reduces errors caused by extensive manual determination by the insurance company, saves substantial investigation manpower and social resources, shortens the time needed to close cases, and gives users a better experience of the insurance company's case-handling capability and timeliness, thereby genuinely reducing costs and increasing efficiency for the insurance company, greatly lowering the false case rate, and improving the efficiency of case determination.
For the specific limitations of the false insurance report processing apparatus, reference may be made to the limitations of the false insurance report processing method above; details are not repeated here. Each module in the above false insurance report processing apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. The modules may be embedded in, or independent of, a processor in a computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can invoke and execute the operations corresponding to the modules.
在一个实施例中,提供了一种计算机设备,该计算机设备可以是服务器,其内部结构图可以如图7所示。该计算机设备包括通过***总线连接的处理器、存储器、网络接口和数据库。其中,该计算机设备的处理器用于提供计算和控制能力。该计算机设备的存储器包括非易失性存储介质和易失性存储介质、内存储器。该非易失性存储介质存储有操作***、计算机可读指令和数据库。该内存储器为非易失性存储介质中的操作***和计算机可读指令的运行提供环境。该计算机设备的网络接口用于与外部的终端通过网络连接通信。该计算机可读指令被处理器执行时以实现一种虚假保险报案处理方法。In one embodiment, a computer device is provided, and the computer device can be a server, and its internal structure diagram can be as shown in FIG. 7 . The computer device includes a processor, memory, a network interface, and a database connected by a system bus. Among them, the processor of the computer device is used to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium, a volatile storage medium, and an internal memory. The non-volatile storage medium stores an operating system, computer readable instructions and a database. The internal memory provides an environment for the execution of the operating system and computer-readable instructions in the non-volatile storage medium. The network interface of the computer device is used to communicate with an external terminal through a network connection. The computer readable instructions, when executed by the processor, implement a method for processing false insurance reports.
在一个实施例中，提供一种计算机设备，包括存储器、处理器以及存储在所述存储器中并可在所述处理器上运行的计算机可读指令，其中，所述处理器执行所述计算机可读指令时实现如下步骤：响应用户触发的保险报案请求，以向所述用户端发送材料提交请求；获取所述用户根据所述材料提交请求上传的证件信息和事故现场视频，所述事故现场视频包含所述用户录入的事故详情口述音频和事故现场图像，所述事故现场图像包括用户的面部图像；获取所述用户端所反馈的对话数据，所述对话数据为所述用户按照预设问题进行回答的对话数据，并获取所述用户按照预设问题进行回答时所述用户的位置；分析所述用户的面部图像，以获取所述用户的面部微表情变化情况，并根据所述面部微表情变化情况获取面部表情评估值；分析所述事故详情口述音频，以提取事故详情信息和所述用户的声音变化情况；分析所述声音变化情况得到声音评估值，并分析所述事故详情信息获取场景评估值；根据所述事故详情信息和所述对话数据获取说谎评估值；根据所述事故详情信息和所述用户的位置获取位置评估值；根据所述面部表情评估值、声音评估值、场景评估值、说谎评估值和位置评估值判定所述用户是否存在虚假报案，并将判定结果与所述证件信息关联存储。In one embodiment, a computer device is provided, including a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, wherein the processor, when executing the computer-readable instructions, implements the following steps: responding to an insurance report request triggered by a user, so as to send a material submission request to the user terminal; obtaining certificate information and an accident scene video uploaded by the user according to the material submission request, the accident scene video including oral accident-detail audio recorded by the user and accident scene images, the accident scene images including a facial image of the user; obtaining dialogue data fed back by the user terminal, the dialogue data being dialogue data in which the user answers preset questions, and obtaining the position of the user when the user answers the preset questions; analyzing the facial image of the user to obtain changes in the user's facial micro-expressions, and obtaining a facial expression evaluation value according to the changes in the facial micro-expressions; analyzing the oral accident-detail audio to extract accident detail information and changes in the user's voice; analyzing the voice changes to obtain a sound evaluation value, and analyzing the accident detail information to obtain a scene evaluation value; obtaining a lying evaluation value according to the accident detail information and the dialogue data; obtaining a location evaluation value according to the accident detail information and the position of the user; and determining, according to the facial expression evaluation value, the sound evaluation value, the scene evaluation value, the lying evaluation value and the location evaluation value, whether the user has made a false report, and storing the determination result in association with the certificate information.
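The steps above reference several distinct inputs and five derived evaluation values. Purely as an editorial illustration (none of the identifiers below appear in the application), the collected materials and derived scores of such an embodiment might be modelled as follows in Python:

```python
# Hypothetical data model for the materials collected in this embodiment and the
# five evaluation values derived from them. All names are illustrative only.
from dataclasses import dataclass
from typing import Dict, List, Tuple


@dataclass
class ReportMaterials:
    certificate_info: Dict[str, str]      # credential information uploaded with the report
    accident_frames: List[bytes]          # accident-scene images, including the user's face
    narration_audio: bytes                # oral account of the accident details
    dialogue_answers: Dict[str, str]      # answers to the preset questions
    answer_position: Tuple[float, float]  # user's position (lat, lon) while answering


@dataclass
class EvaluationValues:
    facial_expression: float  # from facial micro-expression changes
    sound: float              # from voice changes in the narration
    scene: float              # from matching accident details against the surroundings
    lying: float              # from answer consistency and pause durations
    location: float           # from the distance between the user and the incident location

    def as_dict(self) -> Dict[str, float]:
        return {
            "facial_expression": self.facial_expression,
            "sound": self.sound,
            "scene": self.scene,
            "lying": self.lying,
            "location": self.location,
        }
```

Grouping the five values in one structure makes the weighted combination and threshold comparison described next straightforward to express.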
结合前述计算机设备，在一实施例中，所述处理器执行所述计算机可读指令时实现如下步骤：分别为所述面部表情评估值、声音评估值、场景评估值、说谎评估值和位置评估值配置对应的权重值；按照如下公式计算目标评估值：
totalScore = Σ a_i · x_i
其中，所述totalScore表示所述目标评估值，所述a_i对应表示各所述评估值对应的权重值，x_i对应表示各所述评估值；判断所述目标评估值是否大于预设阈值；当所述目标评估值大于预设阈值，则判定所述用户为存在虚假报案；当所述目标评估值小于或等于预设阈值，则判定所述用户为非存在虚假报案。
In combination with the aforementioned computer device, in one embodiment, the processor, when executing the computer-readable instructions, implements the following steps: configuring corresponding weight values for the facial expression evaluation value, the sound evaluation value, the scene evaluation value, the lying evaluation value and the location evaluation value respectively; and calculating a target evaluation value according to the following formula:
totalScore = Σ a_i · x_i
where totalScore represents the target evaluation value, a_i represents the weight value corresponding to each evaluation value, and x_i represents each evaluation value; determining whether the target evaluation value is greater than a preset threshold; if the target evaluation value is greater than the preset threshold, determining that the user has made a false report; and if the target evaluation value is less than or equal to the preset threshold, determining that the user has not made a false report.
结合前述计算机设备，在一实施例中，所述处理器执行所述计算机可读指令时实现如下步骤：利用所述案发地点获取所述案发地点的周围环境信息；将所述周围环境信息与所述案发场景信息进行场景匹配，以获取场景匹配度；根据所述场景匹配度获取所述场景评估值，其中，所述场景评估值与所述场景匹配度正相关。
In combination with the aforementioned computer device, in one embodiment, the processor, when executing the computer-readable instructions, implements the following steps: obtaining surrounding environment information of the incident location by using the incident location; performing scene matching between the surrounding environment information and the incident scene information to obtain a scene matching degree; and obtaining the scene evaluation value according to the scene matching degree, wherein the scene evaluation value is positively correlated with the scene matching degree.
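A minimal sketch of the weighted-sum rule totalScore = Σ a_i · x_i and the threshold comparison described above is given below; the weight values and the threshold are configuration parameters that the embodiments leave open, so the numbers used here are placeholders rather than values from the application.

```python
# Weighted-sum decision rule: totalScore = sum(a_i * x_i), compared with a preset threshold.
from typing import Dict


def total_score(values: Dict[str, float], weights: Dict[str, float]) -> float:
    # x_i are the five evaluation values, a_i the weights configured for them.
    return sum(weights[name] * value for name, value in values.items())


def is_false_report(values: Dict[str, float], weights: Dict[str, float],
                    threshold: float) -> bool:
    # A false report is determined when the target evaluation value exceeds the threshold.
    return total_score(values, weights) > threshold


# Placeholder configuration for illustration only.
weights = {"facial_expression": 0.25, "sound": 0.15, "scene": 0.20,
           "lying": 0.25, "location": 0.15}
values = {"facial_expression": 0.7, "sound": 0.4, "scene": 0.3,
          "lying": 0.6, "location": 0.5}
print(is_false_report(values, weights, threshold=0.5))  # True for these placeholder numbers
```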
结合前述计算机设备，在一实施例中，所述处理器执行所述计算机可读指令时实现如下步骤：利用所述案发时间、案发地点和案发场景信息，生成所述预设问题；将所述预设问题发送至人机对话系统，以使所述人机对话系统通过所述预设问题向所述用户发起对话处理；接收所述人机对话系统所反馈的所述用户按照所述预设问题所回答的对话数据，所述对话数据中包含所述用户所回答的案发时间、案发地点和案发场景信息；将所述事故详情信息中的案发时间、案发地点和案发场景信息，分别与所述对话数据中的所述用户回答的案发时间、案发地点和案发场景信息进行信息匹配，得到回答信息匹配度；解析所述对话数据，以获取所述用户对每个所述预设问题进行回答时的停顿时长；根据所述回答信息匹配度和所述停顿时长获取所述说谎评估值，其中，所述说谎评估值与所述回答信息匹配度正相关，所述说谎评估值与所述停顿时长负相关。In combination with the aforementioned computer device, in one embodiment, the processor, when executing the computer-readable instructions, implements the following steps: generating the preset questions by using the incident time, the incident location and the incident scene information; sending the preset questions to a man-machine dialogue system, so that the man-machine dialogue system initiates dialogue processing with the user through the preset questions; receiving, from the man-machine dialogue system, the dialogue data in which the user answers the preset questions, the dialogue data including the incident time, incident location and incident scene information answered by the user; matching the incident time, incident location and incident scene information in the accident detail information against the incident time, incident location and incident scene information answered by the user in the dialogue data, respectively, to obtain an answer information matching degree; parsing the dialogue data to obtain the pause duration when the user answers each preset question; and obtaining the lying evaluation value according to the answer information matching degree and the pause duration, wherein the lying evaluation value is positively correlated with the answer information matching degree and negatively correlated with the pause duration.
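As an illustrative reading of the lying-evaluation step, the sketch below matches the user's answers against the accident detail information and penalises long pauses. The application fixes only the direction of the two correlations; the exact matching rule and scoring formula here are assumptions.

```python
# Illustrative lying-evaluation step: answer consistency raises the value, pauses lower it.
from typing import Dict, List


def answer_match_degree(details: Dict[str, str], answers: Dict[str, str]) -> float:
    # Compare the incident time, location and scene extracted from the narration
    # with the same fields answered during the man-machine dialogue.
    keys = ("incident_time", "incident_location", "incident_scene")
    hits = sum(1 for k in keys if details.get(k) and details.get(k) == answers.get(k))
    return hits / len(keys)


def lying_evaluation(match_degree: float, pause_seconds: List[float],
                     pause_scale: float = 5.0) -> float:
    # Positively correlated with the match degree, negatively with the average pause.
    avg_pause = sum(pause_seconds) / len(pause_seconds) if pause_seconds else 0.0
    return match_degree / (1.0 + avg_pause / pause_scale)


details = {"incident_time": "2020-12-20 18:30", "incident_location": "G4 km 12",
           "incident_scene": "rear-end collision in rain"}
answers = {"incident_time": "2020-12-20 18:30", "incident_location": "G4 km 12",
           "incident_scene": "side collision"}
print(lying_evaluation(answer_match_degree(details, answers), pause_seconds=[1.2, 4.0, 0.8]))
```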
结合前述计算机设备，在一实施例中，所述处理器执行所述计算机可读指令时实现如下步骤：预先获得所述用户的授权查阅所述用户端地点的授权；依据所述证件信息从所述用户端对应的运营商查询所述用户端地点，以获取产生所述对话数据时所述用户的位置；将所述用户的位置与所述案发地点进行距离比较；根据所述距离比较结果获取所述位置评估值，其中，所述位置评估值与所述距离比较结果负相关。In combination with the aforementioned computer device, in one embodiment, the processor, when executing the computer-readable instructions, implements the following steps: obtaining in advance the user's authorization to look up the location of the user terminal; querying the location of the user terminal from the operator corresponding to the user terminal according to the certificate information, so as to obtain the position of the user when the dialogue data was generated; comparing the distance between the user's position and the incident location; and obtaining the location evaluation value according to the distance comparison result, wherein the location evaluation value is negatively correlated with the distance comparison result.
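The location-evaluation step can likewise be sketched as a distance comparison whose score falls as the distance grows. The haversine distance and the exponential decay below are assumptions chosen for illustration; the embodiment requires only that the location evaluation value be negatively correlated with the distance comparison result.

```python
# Illustrative location-evaluation step: the score decays as the distance between the
# user's position while answering and the incident location grows.
from math import asin, cos, exp, radians, sin, sqrt


def haversine_km(p1, p2):
    lat1, lon1, lat2, lon2 = map(radians, (*p1, *p2))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(h))  # great-circle distance in kilometres


def location_evaluation(user_pos, incident_pos, decay_km: float = 10.0) -> float:
    # Negatively correlated with distance: 1.0 at the incident location, approaching 0 far away.
    return exp(-haversine_km(user_pos, incident_pos) / decay_km)


print(location_evaluation((22.54, 114.06), (22.55, 114.10)))  # ~4 km away -> roughly 0.65
```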
结合前述计算机设备，在一实施例中，所述处理器执行所述计算机可读指令时实现如下步骤：将所述判定结果与所述证件信息关联存储在区块链系统的区块链上。In combination with the aforementioned computer equipment, in one embodiment, the processor implements the following steps when executing the computer-readable instructions: storing the determination result in association with the certificate information on the blockchain of the blockchain system.
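The embodiment only states that the determination result is stored on a blockchain in association with the certificate information; it does not name a platform. The append-only, hash-chained log below is a deliberately minimal stand-in used to illustrate the idea of tamper-evident associated storage, not the application's actual storage layer.

```python
# Minimal stand-in for storing the determination result in association with the
# certificate information on a blockchain: an append-only, hash-chained log.
# A production system would use a real blockchain platform; this is illustrative only.
import hashlib
import json
import time


class TinyLedger:
    def __init__(self):
        self.blocks = [{"index": 0, "prev_hash": "0" * 64, "payload": "genesis",
                        "timestamp": 0.0}]

    @staticmethod
    def _hash(block: dict) -> str:
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode("utf-8")).hexdigest()

    def append(self, certificate_info: dict, determination: dict) -> dict:
        payload = json.dumps({"certificate_info": certificate_info,
                              "determination": determination}, sort_keys=True)
        block = {"index": self.blocks[-1]["index"] + 1,
                 "prev_hash": self._hash(self.blocks[-1]),
                 "payload": payload,
                 "timestamp": time.time()}
        self.blocks.append(block)
        return block


ledger = TinyLedger()
ledger.append({"id_number": "placeholder"}, {"verdict": "no false report", "total_score": 0.42})
```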
在一实施例中，提供一个或多个存储有计算机可读指令的可读存储介质，所述计算机可读指令被一个或多个处理器执行时，使得所述一个或多个处理器执行如下步骤：响应用户触发的保险报案请求，以向所述用户端发送材料提交请求；获取所述用户根据所述材料提交请求上传的证件信息和事故现场视频，所述事故现场视频包含所述用户录入的事故详情口述音频和事故现场图像，所述事故现场图像包括用户的面部图像；获取所述用户端所反馈的对话数据，所述对话数据为所述用户按照预设问题进行回答的对话数据，并获取所述用户按照预设问题进行回答时所述用户的位置；分析所述用户的面部图像，以获取所述用户的面部微表情变化情况，并根据所述面部微表情变化情况获取面部表情评估值；分析所述事故详情口述音频，以提取事故详情信息和所述用户的声音变化情况；分析所述声音变化情况得到声音评估值，并分析所述事故详情信息获取场景评估值；根据所述事故详情信息和所述对话数据获取说谎评估值；根据所述事故详情信息和所述用户的位置获取位置评估值；根据所述面部表情评估值、声音评估值、场景评估值、说谎评估值和位置评估值判定所述用户是否存在虚假报案，并将判定结果与所述证件信息关联存储。In one embodiment, one or more readable storage media storing computer-readable instructions are provided, the computer-readable instructions, when executed by one or more processors, causing the one or more processors to perform the following steps: responding to an insurance report request triggered by a user, so as to send a material submission request to the user terminal; obtaining certificate information and an accident scene video uploaded by the user according to the material submission request, the accident scene video including oral accident-detail audio recorded by the user and accident scene images, the accident scene images including a facial image of the user; obtaining dialogue data fed back by the user terminal, the dialogue data being dialogue data in which the user answers preset questions, and obtaining the position of the user when the user answers the preset questions; analyzing the facial image of the user to obtain changes in the user's facial micro-expressions, and obtaining a facial expression evaluation value according to the changes in the facial micro-expressions; analyzing the oral accident-detail audio to extract accident detail information and changes in the user's voice; analyzing the voice changes to obtain a sound evaluation value, and analyzing the accident detail information to obtain a scene evaluation value; obtaining a lying evaluation value according to the accident detail information and the dialogue data; obtaining a location evaluation value according to the accident detail information and the position of the user; and determining, according to the facial expression evaluation value, the sound evaluation value, the scene evaluation value, the lying evaluation value and the location evaluation value, whether the user has made a false report, and storing the determination result in association with the certificate information.
结合前述可读存储介质，在一实施例中，所述计算机可读指令被一个或多个处理器执行时，使得所述一个或多个处理器执行如下步骤：分别为所述面部表情评估值、声音评估值、场景评估值、说谎评估值和位置评估值配置对应的权重值；按照如下公式计算目标评估值：
totalScore = Σ a_i · x_i
其中，所述totalScore表示所述目标评估值，所述a_i对应表示各所述评估值对应的权重值，x_i对应表示各所述评估值；判断所述目标评估值是否大于预设阈值；当所述目标评估值大于预设阈值，则判定所述用户为存在虚假报案；当所述目标评估值小于或等于预设阈值，则判定所述用户为非存在虚假报案。
In combination with the aforementioned readable storage medium, in one embodiment, the computer-readable instructions, when executed by one or more processors, cause the one or more processors to perform the following steps: configuring corresponding weight values for the facial expression evaluation value, the sound evaluation value, the scene evaluation value, the lying evaluation value and the location evaluation value respectively; and calculating a target evaluation value according to the following formula:
totalScore = Σ a_i · x_i
where totalScore represents the target evaluation value, a_i represents the weight value corresponding to each evaluation value, and x_i represents each evaluation value; determining whether the target evaluation value is greater than a preset threshold; if the target evaluation value is greater than the preset threshold, determining that the user has made a false report; and if the target evaluation value is less than or equal to the preset threshold, determining that the user has not made a false report.
结合前述可读存储介质，在一实施例中，所述计算机可读指令被一个或多个处理器执行时，使得所述一个或多个处理器执行如下步骤：利用所述案发地点获取所述案发地点的周围环境信息；将所述周围环境信息与所述案发场景信息进行场景匹配，以获取场景匹配度；根据所述场景匹配度获取所述场景评估值，其中，所述场景评估值与所述场景匹配度正相关。In combination with the aforementioned readable storage medium, in one embodiment, the computer-readable instructions, when executed by one or more processors, cause the one or more processors to perform the following steps: obtaining surrounding environment information of the incident location by using the incident location; performing scene matching between the surrounding environment information and the incident scene information to obtain a scene matching degree; and obtaining the scene evaluation value according to the scene matching degree, wherein the scene evaluation value is positively correlated with the scene matching degree.
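The scene-evaluation step repeated here reduces to computing a matching degree between the surroundings retrieved for the incident location and the scene the user describes, then mapping it to a score that rises with the match. The keyword-overlap matcher and linear mapping below are illustrative assumptions; a real system might compare map or street-view data instead.

```python
# Toy stand-in for the scene-matching step: compare descriptors of the surroundings
# retrieved for the incident location with descriptors of the scene the user describes.
def scene_match_degree(surroundings: set, described_scene: set) -> float:
    # Jaccard overlap as an assumed matching measure; real systems might use map or image data.
    if not surroundings or not described_scene:
        return 0.0
    return len(surroundings & described_scene) / len(surroundings | described_scene)


def scene_evaluation(match_degree: float) -> float:
    # The embodiment requires only a positive correlation; the identity map is one choice.
    return match_degree


print(scene_evaluation(scene_match_degree({"intersection", "traffic light", "wet road"},
                                          {"intersection", "wet road", "night"})))  # 0.5
```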
结合前述可读存储介质，在一实施例中，所述计算机可读指令被一个或多个处理器执行时，使得所述一个或多个处理器执行如下步骤：利用所述案发时间、案发地点和案发场景信息，生成所述预设问题；将所述预设问题发送至人机对话系统，以使所述人机对话系统通过所述预设问题向所述用户发起对话处理；接收所述人机对话系统所反馈的所述用户按照所述预设问题所回答的对话数据，所述对话数据中包含所述用户所回答的案发时间、案发地点和案发场景信息；将所述事故详情信息中的案发时间、案发地点和案发场景信息，分别与所述对话数据中的所述用户回答的案发时间、案发地点和案发场景信息进行信息匹配，得到回答信息匹配度；解析所述对话数据，以获取所述用户对每个所述预设问题进行回答时的停顿时长；根据所述回答信息匹配度和所述停顿时长获取所述说谎评估值，其中，所述说谎评估值与所述回答信息匹配度正相关，所述说谎评估值与所述停顿时长负相关。In combination with the aforementioned readable storage medium, in one embodiment, the computer-readable instructions, when executed by one or more processors, cause the one or more processors to perform the following steps: generating the preset questions by using the incident time, the incident location and the incident scene information; sending the preset questions to a man-machine dialogue system, so that the man-machine dialogue system initiates dialogue processing with the user through the preset questions; receiving, from the man-machine dialogue system, the dialogue data in which the user answers the preset questions, the dialogue data including the incident time, incident location and incident scene information answered by the user; matching the incident time, incident location and incident scene information in the accident detail information against the incident time, incident location and incident scene information answered by the user in the dialogue data, respectively, to obtain an answer information matching degree; parsing the dialogue data to obtain the pause duration when the user answers each preset question; and obtaining the lying evaluation value according to the answer information matching degree and the pause duration, wherein the lying evaluation value is positively correlated with the answer information matching degree and negatively correlated with the pause duration.
结合前述可读存储介质，在一实施例中，所述计算机可读指令被一个或多个处理器执行时，使得所述一个或多个处理器执行如下步骤：预先获得所述用户的授权查阅所述用户端地点的授权；依据所述证件信息从所述用户端对应的运营商查询所述用户端地点，以获取产生所述对话数据时所述用户的位置；将所述用户的位置与所述案发地点进行距离比较；根据所述距离比较结果获取所述位置评估值，其中，所述位置评估值与所述距离比较结果负相关。In combination with the aforementioned readable storage medium, in one embodiment, the computer-readable instructions, when executed by one or more processors, cause the one or more processors to perform the following steps: obtaining in advance the user's authorization to look up the location of the user terminal; querying the location of the user terminal from the operator corresponding to the user terminal according to the certificate information, so as to obtain the position of the user when the dialogue data was generated; comparing the distance between the user's position and the incident location; and obtaining the location evaluation value according to the distance comparison result, wherein the location evaluation value is negatively correlated with the distance comparison result.
结合前述可读存储介质，在一实施例中，所述计算机可读指令被一个或多个处理器执行时，使得所述一个或多个处理器执行如下步骤：将所述判定结果与所述证件信息关联存储在区块链系统的区块链上。In combination with the aforementioned readable storage medium, in one embodiment, the computer-readable instructions, when executed by one or more processors, cause the one or more processors to perform the following step: storing the determination result in association with the certificate information on the blockchain of a blockchain system.
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程，是可以通过计算机可读指令来指令相关的硬件来完成，所述的计算机可读指令可存储于一非易失性计算机可读取存储介质中，该计算机可读指令在执行时，可包括如上述各方法的实施例的流程。其中，本申请所提供的各实施例中所使用的对存储器、存储、数据库或其它介质的任何引用，均可包括非易失性和/或易失性存储器。非易失性存储器可包括只读存储器(ROM)、可编程ROM(PROM)、电可编程ROM(EPROM)、电可擦除可编程ROM(EEPROM)或闪存。易失性存储器可包括随机存取存储器(RAM)或者外部高速缓冲存储器。作为说明而非局限，RAM以多种形式可得，诸如静态RAM(SRAM)、动态RAM(DRAM)、同步DRAM(SDRAM)、双数据率SDRAM(DDRSDRAM)、增强型SDRAM(ESDRAM)、同步链路(Synchlink)DRAM(SLDRAM)、存储器总线(Rambus)直接RAM(RDRAM)、直接存储器总线动态RAM(DRDRAM)、以及存储器总线动态RAM(RDRAM)等。Those of ordinary skill in the art will understand that all or part of the processes in the methods of the above embodiments can be implemented by computer-readable instructions instructing the relevant hardware, and that the computer-readable instructions can be stored in a non-volatile computer-readable storage medium; when executed, the computer-readable instructions may include the processes of the embodiments of the above methods. Any reference to memory, storage, a database or other media used in the embodiments provided in this application may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
所属领域的技术人员可以清楚地了解到，为了描述的方便和简洁，仅以上述各功能单元、模块的划分进行举例说明，实际应用中，可以根据需要而将上述功能分配由不同的功能单元、模块完成，即将所述装置的内部结构划分成不同的功能单元或模块，以完成以上描述的全部或者部分功能。Those skilled in the art can clearly understand that, for convenience and brevity of description, only the above division of functional units and modules is used as an example. In practical applications, the above functions may be allocated to and completed by different functional units or modules as required; that is, the internal structure of the apparatus may be divided into different functional units or modules to complete all or part of the functions described above.
以上所述实施例仅用以说明本申请的技术方案，而非对其限制；尽管参照前述实施例对本申请进行了详细的说明，本领域的普通技术人员应当理解：其依然可以对前述各实施例所记载的技术方案进行修改，或者对其中部分技术特征进行等同替换；而这些修改或者替换，并不使相应技术方案的本质脱离本申请各实施例技术方案的精神和范围，均应包含在本申请的保护范围之内。The above embodiments are only used to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments, or make equivalent replacements of some of the technical features therein; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application, and shall all fall within the protection scope of the present application.

Claims (20)

  1. 一种虚假保险报案处理方法,其中,包括:A method for handling false insurance reports, including:
    响应用户触发的保险报案请求,以向所述用户端发送材料提交请求;Responding to the insurance report request triggered by the user, to send a material submission request to the client;
    获取所述用户根据所述材料提交请求上传的证件信息和事故现场视频,所述事故现场视频包含所述用户录入的事故详情口述音频和事故现场图像,所述事故现场图像包括用户的面部图像;Acquiring the certificate information and the accident scene video uploaded by the user according to the material submission request, the accident scene video includes the oral audio of the accident details entered by the user and the accident scene image, and the accident scene image includes the user's facial image;
    获取所述用户端所反馈的对话数据,所述对话数据为所述用户按照预设问题进行回答的对话数据,并获取所述用户按照预设问题进行回答时所述用户的位置;acquiring dialogue data fed back by the user terminal, where the dialogue data is dialogue data that the user answers according to preset questions, and acquiring the user's position when the user answers the preset questions;
    分析所述用户的面部图像,以获取所述用户的面部微表情变化情况,并根据所述面部微表情变化情况获取面部表情评估值;Analyzing the facial image of the user to obtain the change of the facial micro-expression of the user, and obtain a facial expression evaluation value according to the change of the facial micro-expression;
    分析所述事故详情口述音频,以提取事故详情信息和所述用户的声音变化情况;analyzing the accident detail spoken audio to extract accident detail information and changes in the user's voice;
    分析所述声音变化情况得到声音评估值,并分析所述事故详情信息获取场景评估值;Analyzing the sound change to obtain a sound evaluation value, and analyzing the accident detail information to obtain a scene evaluation value;
    根据所述事故详情信息和所述对话数据获取说谎评估值;obtaining a lying assessment value according to the accident detail information and the dialogue data;
    根据所述事故详情信息和所述用户的位置获取位置评估值;Obtain a location evaluation value according to the accident detail information and the location of the user;
    根据所述面部表情评估值、声音评估值、场景评估值、说谎评估值和位置评估值判定所述用户是否存在虚假报案,并将判定结果与所述证件信息关联存储。According to the facial expression evaluation value, the sound evaluation value, the scene evaluation value, the lying evaluation value and the position evaluation value, it is determined whether the user has made a false report, and the determination result is stored in association with the certificate information.
  2. 如权利要求1所述的虚假保险报案处理方法,其中,所述根据所述面部表情评估值、声音评估值、场景评估值、说谎评估值和位置评估值判定所述用户是否存在虚假报案,包括:The method for processing a false insurance report according to claim 1, wherein determining whether the user has a false report according to the facial expression evaluation value, the sound evaluation value, the scene evaluation value, the lying evaluation value and the location evaluation value, comprising: :
    分别为所述面部表情评估值、声音评估值、场景评估值、说谎评估值和位置评估值配置对应的权重值;Configure corresponding weight values for the facial expression evaluation value, the voice evaluation value, the scene evaluation value, the lying evaluation value and the position evaluation value respectively;
    按照如下公式计算目标评估值:Calculate the target evaluation value according to the following formula:
    totalScore = Σ a_i · x_i
    其中，所述totalScore表示所述目标评估值，所述a_i对应表示各所述评估值对应的权重值，x_i对应表示各所述评估值；Wherein, the totalScore represents the target evaluation value, a_i corresponds to the weight value corresponding to each of the evaluation values, and x_i corresponds to each of the evaluation values;
    判断所述目标评估值是否大于预设阈值;judging whether the target evaluation value is greater than a preset threshold;
    当所述目标评估值大于预设阈值,则判定所述用户为存在虚假报案;When the target evaluation value is greater than the preset threshold, it is determined that the user has a false report;
    当所述目标评估值小于或等于预设阈值,则判定所述用户为非存在虚假报案。When the target evaluation value is less than or equal to a preset threshold, it is determined that the user does not have a false report.
  3. 如权利要求1所述的虚假保险报案处理方法，其中，所述事故详情信息包含案发地点和案发场景信息，所述分析所述事故详情信息获取场景评估值，包括：The method for processing a false insurance report according to claim 1, wherein the accident detail information includes an incident location and incident scene information, and the analyzing the accident detail information to obtain the scene evaluation value includes:
    利用所述案发地点获取所述案发地点的周围环境信息；Obtaining surrounding environment information of the incident location by using the incident location;
    将所述周围环境信息与所述案发场景信息进行场景匹配，以获取场景匹配度；Performing scene matching between the surrounding environment information and the incident scene information to obtain a scene matching degree;
    根据所述场景匹配度获取所述场景评估值,其中,所述场景评估值与所述场景匹配度正相关。The scene evaluation value is obtained according to the scene matching degree, wherein the scene evaluation value is positively correlated with the scene matching degree.
  4. 如权利要求1所述的虚假保险报案处理方法，其中，所述事故详情信息包含案发时间、案发地点和案发场景信息，所述根据所述事故详情信息和所述对话数据获取说谎评估值，包括：The method for processing a false insurance report according to claim 1, wherein the accident detail information includes an incident time, an incident location and incident scene information, and the obtaining the lying evaluation value according to the accident detail information and the dialogue data includes:
    利用所述案发时间、案发地点和案发场景信息，生成所述预设问题；Generating the preset question by using the incident time, the incident location and the incident scene information;
    将所述预设问题发送至人机对话系统，以使所述人机对话系统通过所述预设问题向所述用户发起对话处理；Sending the preset question to the man-machine dialogue system, so that the man-machine dialogue system initiates dialogue processing with the user through the preset question;
    接收所述人机对话系统所反馈的所述用户按照所述预设问题所回答的对话数据，所述对话数据中包含所述用户所回答的案发时间、案发地点和案发场景信息；Receiving the dialogue data, fed back by the man-machine dialogue system, in which the user answers the preset question, the dialogue data including the incident time, incident location and incident scene information answered by the user;
    将所述事故详情信息中的案发时间、案发地点和案发场景信息，分别与所述对话数据中的所述用户回答的案发时间、案发地点和案发场景信息进行信息匹配，得到回答信息匹配度；Matching the incident time, incident location and incident scene information in the accident detail information with the incident time, incident location and incident scene information answered by the user in the dialogue data, respectively, to obtain an answer information matching degree;
    解析所述对话数据,以获取所述用户对每个所述预设问题进行回答时的停顿时长;Parsing the dialogue data to obtain the pause duration when the user answers each of the preset questions;
    根据所述回答信息匹配度和所述停顿时长获取所述说谎评估值,其中,所述说谎评估值与所述回答信息匹配度正相关,所述说谎评估值与所述停顿时长负相关。The lying evaluation value is obtained according to the answer information matching degree and the pause duration, wherein the lying evaluation value is positively correlated with the answer information matching degree, and the lying evaluation value is negatively correlated with the pause duration.
  5. 如权利要求1所述的虚假保险报案处理方法，其中，所述事故详情信息包含案发地点，所述根据所述事故详情信息和所述用户的位置获取位置评估值，包括：The method for processing a false insurance report according to claim 1, wherein the accident detail information includes an incident location, and the obtaining a location evaluation value according to the accident detail information and the user's location includes:
    预先获得所述用户的授权查阅所述用户端地点的授权；Obtaining in advance the user's authorization to look up the location of the user terminal;
    依据所述证件信息从所述用户端对应的运营商查询所述用户端地点,以获取产生所述对话数据时所述用户的位置;querying the location of the client from the operator corresponding to the client according to the certificate information to obtain the location of the user when the dialog data is generated;
    将所述用户的位置与所述案发地点进行距离比较；Comparing the distance between the user's location and the incident location;
    根据所述距离比较结果获取所述位置评估值,其中,所述位置评估值与所述距离比较结果负相关。The location evaluation value is obtained according to the distance comparison result, wherein the location evaluation value is negatively correlated with the distance comparison result.
  6. 如权利要求1所述的虚假保险报案处理方法,其中,所述将判定结果与所述证件信息关联存储,包括:The method for processing a false insurance report according to claim 1, wherein the storing the judgment result in association with the certificate information comprises:
    将判定结果与所述证件信息关联存储在区块链系统的区块链上。The determination result is associated with the certificate information and stored on the blockchain of the blockchain system.
  7. 一种虚假保险报案处理装置,包括:A device for processing false insurance reports, comprising:
    响应模块,用于响应用户触发的保险报案请求,以向所述用户端发送材料提交请求;A response module, used to respond to an insurance report request triggered by a user, so as to send a material submission request to the client;
    第一获取模块，用于获取所述用户根据所述材料提交请求上传的证件信息和事故现场视频，所述事故现场视频包含所述用户录入的事故详情口述音频和事故现场图像，所述事故现场图像包括用户的面部图像；The first acquisition module is configured to acquire the certificate information and the accident scene video uploaded by the user according to the material submission request, the accident scene video including the oral audio of the accident details entered by the user and the accident scene image, and the accident scene image including a facial image of the user;
    第二获取模块，用于获取所述用户端所反馈的对话数据，所述对话数据为所述用户按照预设问题进行回答的对话数据，并获取所述用户按照预设问题进行回答时所述用户的位置；The second acquisition module is configured to acquire the dialogue data fed back by the user terminal, the dialogue data being the dialogue data in which the user answers the preset questions, and to acquire the user's location when the user answers the preset questions;
    第一分析模块,用于分析所述用户的面部图像,以获取所述用户的面部微表情变化情况,并根据所述面部微表情变化情况获取面部表情评估值;The first analysis module is used to analyze the facial image of the user, to obtain the change of the user's facial micro-expression, and obtain a facial expression evaluation value according to the change of the facial micro-expression;
    第二分析模块,用于分析所述事故详情口述音频,以提取事故详情信息和所述用户的声音变化情况;a second analysis module, configured to analyze the oral audio of the accident details to extract the accident details and the change of the user's voice;
    第三分析模块,用于分析所述声音变化情况得到声音评估值,并分析所述事故详情信息获取场景评估值;a third analysis module, configured to analyze the sound change to obtain a sound evaluation value, and analyze the accident detail information to obtain a scene evaluation value;
    第三获取模块,用于根据所述事故详情信息和所述对话数据获取说谎评估值;a third obtaining module, configured to obtain a lying evaluation value according to the accident detail information and the dialogue data;
    第四获取模块,用于根据所述事故详情信息和所述用户的位置获取位置评估值;a fourth acquiring module, configured to acquire a location evaluation value according to the accident detail information and the location of the user;
    判定模块,用于根据所述面部表情评估值、声音评估值、场景评估值、说谎评估值和位置评估值判定所述用户是否存在虚假报案;A determination module, used for determining whether the user has a false report according to the facial expression evaluation value, the sound evaluation value, the scene evaluation value, the lying evaluation value and the position evaluation value;
    存储模块,用于将判定结果与所述证件信息关联存储。The storage module is configured to store the judgment result in association with the certificate information.
  8. 如权利要求7所述的虚假保险报案处理装置,其中,所述判定模块具体用于:The device for processing false insurance reports according to claim 7, wherein the determination module is specifically used for:
    分别为所述面部表情评估值、声音评估值、场景评估值、说谎评估值和位置评估值配置对应的权重值;Configure corresponding weight values for the facial expression evaluation value, the voice evaluation value, the scene evaluation value, the lying evaluation value and the position evaluation value respectively;
    按照如下公式计算目标评估值:Calculate the target evaluation value according to the following formula:
    totalScore = Σ a_i · x_i
    其中，所述totalScore表示所述目标评估值，所述a_i对应表示各所述评估值对应的权重值，x_i对应表示各所述评估值；Wherein, the totalScore represents the target evaluation value, a_i corresponds to the weight value corresponding to each of the evaluation values, and x_i corresponds to each of the evaluation values;
    判断所述目标评估值是否大于预设阈值;judging whether the target evaluation value is greater than a preset threshold;
    当所述目标评估值大于预设阈值,则判定所述用户为存在虚假报案;When the target evaluation value is greater than the preset threshold, it is determined that the user has a false report;
    当所述目标评估值小于或等于预设阈值,则判定所述用户为非存在虚假报案。When the target evaluation value is less than or equal to a preset threshold, it is determined that the user does not have a false report.
  9. 一种计算机设备,包括存储器、处理器以及存储在所述存储器中并可在所述处理器上运行的计算机可读指令,其中,所述处理器执行所述计算机可读指令时实现如下步骤:A computer device comprising a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, wherein the processor implements the following steps when executing the computer-readable instructions:
    响应用户触发的保险报案请求,以向所述用户端发送材料提交请求;Responding to the insurance report request triggered by the user, to send a material submission request to the client;
    获取所述用户根据所述材料提交请求上传的证件信息和事故现场视频,所述事故现场视频包含所述用户录入的事故详情口述音频和事故现场图像,所述事故现场图像包括用户的面部图像;Acquiring the certificate information and the accident scene video uploaded by the user according to the material submission request, the accident scene video includes the oral audio of the accident details entered by the user and the accident scene image, and the accident scene image includes the user's facial image;
    获取所述用户端所反馈的对话数据,所述对话数据为所述用户按照预设问题进行回答的对话数据,并获取所述用户按照预设问题进行回答时所述用户的位置;acquiring dialogue data fed back by the user terminal, where the dialogue data is dialogue data that the user answers according to preset questions, and acquiring the user's position when the user answers the preset questions;
    分析所述用户的面部图像,以获取所述用户的面部微表情变化情况,并根据所述面部微表情变化情况获取面部表情评估值;Analyzing the facial image of the user to obtain the change of the facial micro-expression of the user, and obtain a facial expression evaluation value according to the change of the facial micro-expression;
    分析所述事故详情口述音频,以提取事故详情信息和所述用户的声音变化情况;analyzing the accident detail spoken audio to extract accident detail information and changes in the user's voice;
    分析所述声音变化情况得到声音评估值,并分析所述事故详情信息获取场景评估值;Analyzing the sound change to obtain a sound evaluation value, and analyzing the accident detail information to obtain a scene evaluation value;
    根据所述事故详情信息和所述对话数据获取说谎评估值;obtaining a lying assessment value according to the accident detail information and the dialogue data;
    根据所述事故详情信息和所述用户的位置获取位置评估值;Obtain a location evaluation value according to the accident detail information and the location of the user;
    根据所述面部表情评估值、声音评估值、场景评估值、说谎评估值和位置评估值判定所述用户是否存在虚假报案,并将判定结果与所述证件信息关联存储。According to the facial expression evaluation value, the sound evaluation value, the scene evaluation value, the lying evaluation value and the location evaluation value, it is determined whether the user has made a false report, and the determination result is stored in association with the certificate information.
  10. 如权利要求9所述的计算机设备,其中,所述处理器执行所述计算机可读指令时实现如下步骤:The computer device of claim 9, wherein the processor, when executing the computer-readable instructions, implements the steps of:
    分别为所述面部表情评估值、声音评估值、场景评估值、说谎评估值和位置评估值配置对应的权重值;Configure corresponding weight values for the facial expression evaluation value, the voice evaluation value, the scene evaluation value, the lying evaluation value and the position evaluation value respectively;
    按照如下公式计算目标评估值:Calculate the target evaluation value according to the following formula:
    totalScore = Σ a_i · x_i
    其中，所述totalScore表示所述目标评估值，所述a_i对应表示各所述评估值对应的权重值，x_i对应表示各所述评估值；Wherein, the totalScore represents the target evaluation value, a_i corresponds to the weight value corresponding to each of the evaluation values, and x_i corresponds to each of the evaluation values;
    判断所述目标评估值是否大于预设阈值;judging whether the target evaluation value is greater than a preset threshold;
    当所述目标评估值大于预设阈值,则判定所述用户为存在虚假报案;When the target evaluation value is greater than the preset threshold, it is determined that the user has a false report;
    当所述目标评估值小于或等于预设阈值,则判定所述用户为非存在虚假报案。When the target evaluation value is less than or equal to a preset threshold, it is determined that the user does not have a false report.
  11. 如权利要求9所述的计算机设备,其中,所述处理器执行所述计算机可读指令时实现如下步骤:The computer device of claim 9, wherein the processor, when executing the computer-readable instructions, implements the steps of:
    利用所述案发地点获取所述案发地点的周围环境信息；Obtaining surrounding environment information of the incident location by using the incident location;
    将所述周围环境信息与所述案发场景信息进行场景匹配，以获取场景匹配度；Performing scene matching between the surrounding environment information and the incident scene information to obtain a scene matching degree;
    根据所述场景匹配度获取所述场景评估值,其中,所述场景评估值与所述场景匹配度正相关。The scene evaluation value is obtained according to the scene matching degree, wherein the scene evaluation value is positively correlated with the scene matching degree.
  12. 如权利要求9所述的计算机设备,其中,所述处理器执行所述计算机可读指令时实现如下步骤:The computer device of claim 9, wherein the processor, when executing the computer-readable instructions, implements the steps of:
    利用所述案发时间、案发地点和案发场景信息，生成所述预设问题；Generating the preset question by using the incident time, the incident location and the incident scene information;
    将所述预设问题发送至人机对话系统，以使所述人机对话系统通过所述预设问题向所述用户发起对话处理；Sending the preset question to the man-machine dialogue system, so that the man-machine dialogue system initiates dialogue processing with the user through the preset question;
    接收所述人机对话系统所反馈的所述用户按照所述预设问题所回答的对话数据，所述对话数据中包含所述用户所回答的案发时间、案发地点和案发场景信息；Receiving the dialogue data, fed back by the man-machine dialogue system, in which the user answers the preset question, the dialogue data including the incident time, incident location and incident scene information answered by the user;
    将所述事故详情信息中的案发时间、案发地点和案发场景信息，分别与所述对话数据中的所述用户回答的案发时间、案发地点和案发场景信息进行信息匹配，得到回答信息匹配度；Matching the incident time, incident location and incident scene information in the accident detail information with the incident time, incident location and incident scene information answered by the user in the dialogue data, respectively, to obtain an answer information matching degree;
    解析所述对话数据,以获取所述用户对每个所述预设问题进行回答时的停顿时长;Parsing the dialogue data to obtain the pause duration when the user answers each of the preset questions;
    根据所述回答信息匹配度和所述停顿时长获取所述说谎评估值,其中,所述说谎评估值与所述回答信息匹配度正相关,所述说谎评估值与所述停顿时长负相关。The lying evaluation value is obtained according to the answer information matching degree and the pause duration, wherein the lying evaluation value is positively correlated with the answer information matching degree, and the lying evaluation value is negatively correlated with the pause duration.
  13. 如权利要求9所述的计算机设备,其中,所述处理器执行所述计算机可读指令时实现如下步骤:The computer device of claim 9, wherein the processor, when executing the computer-readable instructions, implements the steps of:
    预先获得所述用户的授权查阅所述用户端地点的授权；Obtaining in advance the user's authorization to look up the location of the user terminal;
    依据所述证件信息从所述用户端对应的运营商查询所述用户端地点,以获取产生所述对话数据时所述用户的位置;querying the location of the client from the operator corresponding to the client according to the certificate information to obtain the location of the user when the dialog data is generated;
    将所述用户的位置与所述案发地点进行距离比较；Comparing the distance between the user's location and the incident location;
    根据所述距离比较结果获取所述位置评估值,其中,所述位置评估值与所述距离比较结果负相关。The location evaluation value is obtained according to the distance comparison result, wherein the location evaluation value is negatively correlated with the distance comparison result.
  14. 如权利要求9所述的计算机设备,其中,所述处理器执行所述计算机可读指令时实现如下步骤:The computer device of claim 9, wherein the processor, when executing the computer-readable instructions, implements the steps of:
    将所述判定结果与所述证件信息关联存储在区块链系统的区块链上。The determination result is associated with the certificate information and stored on the blockchain of the blockchain system.
  15. 一个或多个存储有计算机可读指令的可读存储介质,所述计算机可读指令被一个或多个处理器执行时,使得所述一个或多个处理器执行如下步骤:One or more readable storage media storing computer-readable instructions that, when executed by one or more processors, cause the one or more processors to perform the following steps:
    响应用户触发的保险报案请求,以向所述用户端发送材料提交请求;Responding to the insurance report request triggered by the user, to send a material submission request to the client;
    获取所述用户根据所述材料提交请求上传的证件信息和事故现场视频,所述事故现场视频包含所述用户录入的事故详情口述音频和事故现场图像,所述事故现场图像包括用户的面部图像;Acquiring the certificate information and the accident scene video uploaded by the user according to the material submission request, the accident scene video includes the oral audio of the accident details entered by the user and the accident scene image, and the accident scene image includes the user's facial image;
    获取所述用户端所反馈的对话数据,所述对话数据为所述用户按照预设问题进行回答的对话数据,并获取所述用户按照预设问题进行回答时所述用户的位置;acquiring dialogue data fed back by the user terminal, where the dialogue data is dialogue data that the user answers according to preset questions, and acquiring the user's position when the user answers the preset questions;
    分析所述用户的面部图像,以获取所述用户的面部微表情变化情况,并根据所述面部微表情变化情况获取面部表情评估值;Analyzing the facial image of the user to obtain the change of the facial micro-expression of the user, and obtain a facial expression evaluation value according to the change of the facial micro-expression;
    分析所述事故详情口述音频,以提取事故详情信息和所述用户的声音变化情况;analyzing the accident detail spoken audio to extract accident detail information and changes in the user's voice;
    分析所述声音变化情况得到声音评估值,并分析所述事故详情信息获取场景评估值;Analyzing the sound change to obtain a sound evaluation value, and analyzing the accident detail information to obtain a scene evaluation value;
    根据所述事故详情信息和所述对话数据获取说谎评估值;obtaining a lying assessment value according to the accident detail information and the dialogue data;
    根据所述事故详情信息和所述用户的位置获取位置评估值;Obtain a location evaluation value according to the accident detail information and the location of the user;
    根据所述面部表情评估值、声音评估值、场景评估值、说谎评估值和位置评估值判定所述用户是否存在虚假报案,并将判定结果与所述证件信息关联存储。According to the facial expression evaluation value, the sound evaluation value, the scene evaluation value, the lying evaluation value and the location evaluation value, it is determined whether the user has made a false report, and the determination result is stored in association with the certificate information.
  16. 如权利要求15所述的可读存储介质,其中,所述计算机可读指令被一个或多个处理器执行时,使得所述一个或多个处理器执行如下步骤:16. The readable storage medium of claim 15, wherein the computer-readable instructions, when executed by one or more processors, cause the one or more processors to perform the steps of:
    分别为所述面部表情评估值、声音评估值、场景评估值、说谎评估值和位置评估值配置对应的权重值;Configure corresponding weight values for the facial expression evaluation value, voice evaluation value, scene evaluation value, lying evaluation value and position evaluation value respectively;
    按照如下公式计算目标评估值:Calculate the target evaluation value according to the following formula:
    totalScore = Σ a_i · x_i
    其中，所述totalScore表示所述目标评估值，所述a_i对应表示各所述评估值对应的权重值，x_i对应表示各所述评估值；Wherein, the totalScore represents the target evaluation value, a_i corresponds to the weight value corresponding to each of the evaluation values, and x_i corresponds to each of the evaluation values;
    判断所述目标评估值是否大于预设阈值;judging whether the target evaluation value is greater than a preset threshold;
    当所述目标评估值大于预设阈值,则判定所述用户为存在虚假报案;When the target evaluation value is greater than the preset threshold, it is determined that the user has a false report;
    当所述目标评估值小于或等于预设阈值,则判定所述用户为非存在虚假报案。When the target evaluation value is less than or equal to a preset threshold, it is determined that the user does not have a false report.
  17. 如权利要求15所述的可读存储介质,其中,所述计算机可读指令被一个或多个处理器执行时,使得所述一个或多个处理器执行如下步骤:16. The readable storage medium of claim 15, wherein the computer-readable instructions, when executed by one or more processors, cause the one or more processors to perform the steps of:
    利用所述案发地点获取所述案发地点的周围环境信息；Obtaining surrounding environment information of the incident location by using the incident location;
    将所述周围环境信息与所述案发场景信息进行场景匹配，以获取场景匹配度；Performing scene matching between the surrounding environment information and the incident scene information to obtain a scene matching degree;
    根据所述场景匹配度获取所述场景评估值,其中,所述场景评估值与所述场景匹配度正相关。The scene evaluation value is obtained according to the scene matching degree, wherein the scene evaluation value is positively correlated with the scene matching degree.
  18. 如权利要求15所述的可读存储介质,其中,所述计算机可读指令被一个或多个处理器执行时,使得所述一个或多个处理器执行如下步骤:16. The readable storage medium of claim 15, wherein the computer-readable instructions, when executed by one or more processors, cause the one or more processors to perform the steps of:
    利用所述案发时间、案发地点和案发场景信息，生成所述预设问题；Generating the preset question by using the incident time, the incident location and the incident scene information;
    将所述预设问题发送至人机对话系统，以使所述人机对话系统通过所述预设问题向所述用户发起对话处理；Sending the preset question to the man-machine dialogue system, so that the man-machine dialogue system initiates dialogue processing with the user through the preset question;
    接收所述人机对话系统所反馈的所述用户按照所述预设问题所回答的对话数据，所述对话数据中包含所述用户所回答的案发时间、案发地点和案发场景信息；Receiving the dialogue data, fed back by the man-machine dialogue system, in which the user answers the preset question, the dialogue data including the incident time, incident location and incident scene information answered by the user;
    将所述事故详情信息中的案发时间、案发地点和案发场景信息，分别与所述对话数据中的所述用户回答的案发时间、案发地点和案发场景信息进行信息匹配，得到回答信息匹配度；Matching the incident time, incident location and incident scene information in the accident detail information with the incident time, incident location and incident scene information answered by the user in the dialogue data, respectively, to obtain an answer information matching degree;
    解析所述对话数据,以获取所述用户对每个所述预设问题进行回答时的停顿时长;Parsing the dialogue data to obtain the pause duration when the user answers each of the preset questions;
    根据所述回答信息匹配度和所述停顿时长获取所述说谎评估值,其中,所述说谎评估值与所述回答信息匹配度正相关,所述说谎评估值与所述停顿时长负相关。The lying evaluation value is obtained according to the answer information matching degree and the pause duration, wherein the lying evaluation value is positively correlated with the answer information matching degree, and the lying evaluation value is negatively correlated with the pause duration.
  19. 如权利要求15所述的可读存储介质,其中,所述计算机可读指令被一个或多个处理器执行时,使得所述一个或多个处理器执行如下步骤:16. The readable storage medium of claim 15, wherein the computer-readable instructions, when executed by one or more processors, cause the one or more processors to perform the steps of:
    预先获得所述用户的授权查阅所述用户端地点的授权；Obtaining in advance the user's authorization to look up the location of the user terminal;
    依据所述证件信息从所述用户端对应的运营商查询所述用户端地点,以获取产生所述对话数据时所述用户的位置;querying the location of the client from the operator corresponding to the client according to the certificate information to obtain the location of the user when the dialog data is generated;
    将所述用户的位置与所述案发地点进行距离比较；Comparing the distance between the user's location and the incident location;
    根据所述距离比较结果获取所述位置评估值,其中,所述位置评估值与所述距离比较结果负相关。The location evaluation value is obtained according to the distance comparison result, wherein the location evaluation value is negatively correlated with the distance comparison result.
  20. 如权利要求15所述的可读存储介质,其中,所述计算机可读指令被一个或多个处理器执行时,使得所述一个或多个处理器执行如下步骤:16. The readable storage medium of claim 15, wherein the computer-readable instructions, when executed by one or more processors, cause the one or more processors to perform the steps of:
    将所述判定结果与所述证件信息关联存储在区块链系统的区块链上。The determination result is associated with the certificate information and stored on the blockchain of the blockchain system.
PCT/CN2021/109443 2020-12-28 2021-07-30 False insurance claim report processing method and apparatus, and computer device and storage medium WO2022142319A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011582853.2 2020-12-28
CN202011582853.2A CN112667854A (en) 2020-12-28 2020-12-28 False insurance application processing method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
WO2022142319A1 true WO2022142319A1 (en) 2022-07-07

Family

ID=75411151

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/109443 WO2022142319A1 (en) 2020-12-28 2021-07-30 False insurance claim report processing method and apparatus, and computer device and storage medium

Country Status (2)

Country Link
CN (1) CN112667854A (en)
WO (1) WO2022142319A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112667854A (en) * 2020-12-28 2021-04-16 深圳壹账通智能科技有限公司 False insurance application processing method and device, computer equipment and storage medium
CN114170030B (en) * 2021-12-08 2023-09-26 北京百度网讯科技有限公司 Method, apparatus, electronic device and medium for remote damage assessment of vehicle
TWI801082B (en) * 2022-01-06 2023-05-01 國立清華大學 Predicting method and device of car accident severity, computer-readable medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150170638A1 (en) * 2002-11-12 2015-06-18 David Bezar User intent analysis extent of speaker intent analysis system
CN109829358A (en) * 2018-12-14 2019-05-31 深圳壹账通智能科技有限公司 Micro- expression loan control method, device, computer equipment and storage medium
CN111192150A (en) * 2019-12-23 2020-05-22 中国平安财产保险股份有限公司 Method, device and equipment for processing vehicle insurance agent business and storage medium
CN112667854A (en) * 2020-12-28 2021-04-16 深圳壹账通智能科技有限公司 False insurance application processing method and device, computer equipment and storage medium


Also Published As

Publication number Publication date
CN112667854A (en) 2021-04-16

Similar Documents

Publication Publication Date Title
WO2022142319A1 (en) False insurance claim report processing method and apparatus, and computer device and storage medium
WO2020211388A1 (en) Behavior prediction method and device employing prediction model, apparatus, and storage medium
WO2021159689A1 (en) Electronic contract signing double-recording method and apparatus, and computer device and storage medium
CN107945015B (en) Man-machine question and answer auditing method, device, equipment and computer readable storage medium
CN111355781B (en) Voice information communication management method, device and storage medium
CN109815803B (en) Face examination risk control method and device, computer equipment and storage medium
CN111861732A (en) Risk assessment system and method
US20110224986A1 (en) Voice authentication systems and methods
CN104935438A (en) Method and apparatus for identity verification
US20140310786A1 (en) Integrated interactive messaging and biometric enrollment, verification, and identification system
CN111353925A (en) Block chain-based fraud prevention system and method
CN112464117A (en) Request processing method and device, computer equipment and storage medium
WO2019174073A1 (en) Method and device for modifying client information in conversation, computer device and storage medium
CN110766340A (en) Business auditing method, device and equipment
CN111899100A (en) Service control method, device and equipment and computer storage medium
US11790638B2 (en) Monitoring devices at enterprise locations using machine-learning models to protect enterprise-managed information and resources
CN111489175A (en) Online identity authentication method, device, system and storage medium
CN113873088B (en) Interactive method and device for voice call, computer equipment and storage medium
US10380687B2 (en) Trade surveillance and monitoring systems and/or methods
CN110807630B (en) Payment method and device based on face recognition, computer equipment and storage medium
CN111192150A (en) Method, device and equipment for processing vehicle insurance agent business and storage medium
CN116776857A (en) Customer call key information extraction method, device, computer equipment and medium
WO2021208939A1 (en) Person information storage and verification methods and systems, and storage medium
WO2021042905A1 (en) Vehicle information processing method and apparatus, and computer device and storage medium
CN113642462A (en) Driving behavior assessment method and device, terminal equipment and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21913068

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 28.09.2023)

122 Ep: pct application non-entry in european phase

Ref document number: 21913068

Country of ref document: EP

Kind code of ref document: A1