CN116311313A - Medical record report form detection method, device, equipment and medium based on artificial intelligence

Info

Publication number
CN116311313A
Authority
CN
China
Prior art keywords
text
report
medical record
record report
coordinate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211566597.7A
Other languages
Chinese (zh)
Inventor
刁梁
陈少琼
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Property and Casualty Insurance Company of China Ltd
Original Assignee
Ping An Property and Casualty Insurance Company of China Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Property and Casualty Insurance Company of China Ltd filed Critical Ping An Property and Casualty Insurance Company of China Ltd
Priority to CN202211566597.7A priority Critical patent/CN116311313A/en
Publication of CN116311313A publication Critical patent/CN116311313A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/40 Document-oriented image-based pattern recognition
    • G06V30/41 Analysis of document content
    • G06V30/413 Classification of content, e.g. text, photographs or tables
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00 Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q40/08 Insurance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 Character recognition
    • G06V30/19 Recognition using electronic means
    • G06V30/191 Design or setup of recognition systems or techniques; Extraction of features in feature space; Clustering techniques; Blind source separation
    • G06V30/19173 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/40 Document-oriented image-based pattern recognition
    • G06V30/41 Analysis of document content
    • G06V30/418 Document matching, e.g. of document images
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Business, Economics & Management (AREA)
  • Evolutionary Computation (AREA)
  • Finance (AREA)
  • Accounting & Taxation (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • Technology Law (AREA)
  • General Business, Economics & Management (AREA)
  • Medical Treatment And Welfare Office Work (AREA)

Abstract

The application provides an artificial intelligence-based medical record report detection method and device, electronic equipment and a storage medium. The method comprises the following steps: carrying out standardized processing on acquired medical record report images to obtain a standard report image set; performing text detection on the standard report image set to obtain a text box coordinate feature set; calculating the text box coordinate feature set to construct a text feature data set; training a preset neural network model based on the text feature data set to obtain a report risk identification model; and detecting a medical record report image to be detected based on the report risk recognition model to obtain a wind control recognition result. By extracting features from the acquired medical record report images and training a neural network model on them, the application can effectively improve the accuracy of identifying medical record report fraud risk in the insurance claim settlement link.

Description

Medical record report form detection method, device, equipment and medium based on artificial intelligence
Technical Field
The application relates to the technical field of artificial intelligence, in particular to an artificial intelligence-based medical record report form detection method, an artificial intelligence-based medical record report form detection device, electronic equipment and a storage medium.
Background
A medical record report summarizes the patient's actual condition and the treatment given from before admission to after discharge, and records the patient's past medical history, current illness, diagnosis and treatment data and the like in detail and objectively, so it is very important evidence in the insurance claim settlement link of risk management and control.
However, the insurance field currently has few wind control algorithms for pictures, and there are few image-level template verification methods for medical records and examination reports. Although the examination report templates of each hospital are relatively fixed over a period of time, examination reports uploaded by users are generally found to be suspected of forgery only after a risk case has been detected. If whether a report uploaded by a user is suspected of being forged could be accurately judged at the early stage of claim settlement, losses could be effectively reduced.
Disclosure of Invention
In view of the above, it is necessary to provide a medical record report detection method, device, electronic equipment and storage medium based on artificial intelligence, so as to solve the technical problem of how to accurately identify the fraud risk of a medical record report.
The application provides an artificial intelligence-based medical record report detection method, which comprises the following steps:
Carrying out standardized processing on the acquired medical record report images to obtain a standard report image set;
performing text detection on the standard report image set to obtain a text box coordinate feature set;
calculating the text box coordinate feature set to construct a text feature data set;
training a preset neural network model based on the text feature data set to obtain a report risk identification model;
and detecting the medical record report image to be detected based on the report risk recognition model to obtain a wind control recognition result.
In some embodiments, the normalizing the acquired medical record report images to obtain a standard report image set includes:
acquiring medical record report images of a hospital to obtain a hospital report image set, wherein the hospital report image set corresponds to the hospital one by one;
extracting text keywords on the medical record report sheet image;
and matching the text keywords with medical record report templates of the corresponding hospitals to obtain standard report images, and taking all the standard report images as a standard report image set of the corresponding hospitals.
In some embodiments, the text detecting the standard report image set to obtain a text box coordinate feature set comprises:
Performing text detection on the standard report images according to a preset target detection model to obtain text boxes corresponding to the standard report images;
counting the vertex coordinates of each text box;
and constructing the text box coordinate features of the corresponding text box based on the vertex coordinates, and taking all the text box coordinate features as a text box coordinate feature set.
In some embodiments, the computing the text box coordinate feature set to construct a text feature data set includes:
screening elements in the text box coordinate features to obtain a plurality of first coordinate features;
constructing a plurality of second coordinate features based on the first coordinate features;
constructing a plurality of third coordinate features based on the second coordinate features;
And constructing a text feature box corresponding to the text box based on the second coordinate feature and the third coordinate feature, and taking text data included in all the text feature boxes as a text feature data set.
In some embodiments, training the preset neural network model based on the text feature data set to obtain the report risk identification model includes:
extracting semantic information of the text feature data set according to a preset language characterization model to obtain a text semantic data set;
Training a preset neural network model based on the text semantic data set to obtain a report risk identification model.
In some embodiments, training the preset neural network model based on the text semantic data set to obtain the report risk recognition model includes:
different hospitals are assigned different hospital IDs;
a text semantic tag set is obtained by setting tags on the text semantic data set based on the hospital ID;
and inputting the text semantic data set as a training set into a preset neural network model to obtain an output result, and calculating an error between the output result and the text semantic label set according to a cross entropy loss function to iteratively train the neural network model to obtain a report risk recognition model.
In some embodiments, the detecting the medical record report image to be detected based on the report risk recognition model to obtain the wind control recognition result includes:
detecting a medical record report image to be detected based on the report risk identification model to obtain medical record similarity between the medical record report to be detected and medical record report templates of all hospitals;
selecting the hospital with the largest medical record similarity as the home hospital of the medical record report to be detected;
Judging whether the home hospital is consistent with the real hospital to which the medical record report to be detected belongs, if so, the wind control identification result is safe, and if not, the wind control identification result is high risk, and further investigation and verification are required according to a preset mode.
The embodiment of the application also provides a medical record report form detection device based on artificial intelligence, which comprises an acquisition module, an obtaining module, a calculation module, a training module and a detection module:
the acquisition module is used for carrying out standardized processing on the acquired medical record report images to obtain a standard report image set;
the obtaining module is used for performing text detection on the standard report image set to obtain a text box coordinate feature set;
the computing module is used for computing the text box coordinate feature set to construct a text feature data set;
the training module is used for training a preset neural network model based on the text characteristic data set to obtain a report risk identification model;
and the detection module is used for detecting the medical record report image to be detected based on the report risk identification model to obtain a wind control identification result.
The embodiment of the application also provides electronic equipment, which comprises:
a memory storing at least one instruction;
and the processor executes the instructions stored in the memory to realize the medical record report form detection method based on the artificial intelligence.
Embodiments of the present application also provide a computer-readable storage medium having at least one instruction stored therein, the at least one instruction being executed by a processor in an electronic device to implement the artificial intelligence based medical record report detection method.
According to the medical record report form fraud risk identification method and device, the text feature data set is constructed by extracting the text box features of the acquired medical record report form images and training is carried out by combining the neural network model, so that identification accuracy of the medical record report form fraud risk in the insurance claim settlement link can be effectively improved.
Drawings
FIG. 1 is a flow chart of a preferred embodiment of an artificial intelligence based medical record report detection method in accordance with the present application.
FIG. 2 is a functional block diagram of a preferred embodiment of an artificial intelligence based medical record report form detection device in accordance with the present application.
FIG. 3 is a schematic structural diagram of an electronic device of a preferred embodiment of an artificial intelligence based medical record report detection method according to the present application.
Detailed Description
In order that the objects, features and advantages of the present application may be more clearly understood, a more particular description is given below with reference to specific embodiments illustrated in the appended drawings. It should be noted that, where there is no conflict, the embodiments of the present application and the features of the embodiments may be combined with each other. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application; the described embodiments are merely some, rather than all, of the embodiments of the present application.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more of the described features. In the description of the present application, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein in the description of the application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
The embodiment of the application provides an artificial intelligence-based medical record report detection method, which can be applied to one or more electronic devices. An electronic device can automatically perform numerical calculation and/or information processing according to preset or stored instructions, and its hardware includes, but is not limited to, a microprocessor, an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA), a digital signal processor (Digital Signal Processor, DSP), an embedded device and the like.
The electronic device may be any electronic product that can interact with a customer in a human-machine manner, such as a personal computer, tablet, smart phone, personal digital assistant (Personal Digital Assistant, PDA), gaming machine, interactive web television (Internet Protocol Television, IPTV), smart wearable device, etc.
The electronic device may also include a network device and/or a client device. The network device includes, but is not limited to, a single network server, a server group composed of a plurality of network servers, or a cloud composed of a large number of hosts or network servers based on cloud computing (Cloud Computing).
The network in which the electronic device is located includes, but is not limited to, the internet, a wide area network, a metropolitan area network, a local area network, a virtual private network (Virtual Private Network, VPN), and the like.
FIG. 1 is a flow chart of a preferred embodiment of the medical record report detection method based on artificial intelligence. The order of the steps in the flowchart may be changed and some steps may be omitted according to various needs.
S10, carrying out standardized processing on the acquired medical record report images to obtain a standard report image set.
In an alternative embodiment, the normalizing the acquired medical record report image to obtain the standard report image set includes:
acquiring medical record report images of a hospital to obtain a hospital report image set, wherein the hospital report image set corresponds to the hospital one by one;
extracting text keywords on the medical record report sheet image;
and matching the text keywords with medical record report templates of the corresponding hospitals to obtain standard report images, and taking all the standard report images as a standard report image set of the corresponding hospitals.
In this optional embodiment, the medical record report images uploaded by users can be collected for different hospitals, the corresponding medical record reports are stored in different data tables of the same medical record report database according to hospital, and all images of each hospital in the medical record report database are used as the hospital report image set corresponding to that hospital. The medical record report database may be an existing database such as MySQL, PostgreSQL or InfluxDB, which is not particularly limited in this scheme.
In this alternative embodiment, the medical record report can include patient basic information, medical record information, test information, examination information and prescription information. The patient basic information may include age, sex, height, weight, time of visit, visiting department, attending doctor, etc.; the medical record information may include past medical history, personal history, medication history, final diagnosis, current illness history, etc.; the test information may include test order name, test order time, test item name, test result, reference range, etc.; the examination information may include department, diagnosis, examination result, report date, examination order name, etc.; and the prescription information may include prescription name, specification, usage and dosage, remarks, etc.
In this alternative embodiment, the format of the medical record report image may be jpg, bmp, png, tif or psd. The text keywords in the medical record report image can be identified and extracted through an OCR (optical character recognition) algorithm; the OCR algorithm may use end-to-end text recognition models such as FOTS and TextBoxes++, which this scheme does not particularly limit.
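As a rough illustration of this keyword-extraction step only, the sketch below uses pytesseract as a stand-in OCR engine; the models actually named above (FOTS, TextBoxes++) are end-to-end recognition models rather than Tesseract, so the function name and interface here are assumptions, not the described implementation.

```python
# Hypothetical sketch of extracting text keywords from a medical record report
# image. pytesseract is used only as a stand-in for the OCR step; the scheme
# itself names end-to-end models such as FOTS and TextBoxes++.
from PIL import Image
import pytesseract

def extract_text_keywords(image_path):
    image = Image.open(image_path).convert("RGB")
    raw_text = pytesseract.image_to_string(image, lang="chi_sim")  # simplified Chinese
    # Split the OCR output into non-empty tokens as candidate keywords.
    return [token for token in raw_text.split() if token.strip()]
```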
In this optional embodiment, since text content in images of different medical record report forms is complex and various, unified standardized processing needs to be performed on different medical record report forms belonging to the same hospital so as to convert the medical record report forms into standard medical record report forms that can be stored in a unified manner in the medical record report form database. The standard medical record report form at least comprises a hospital name, disease type information, examination information, prescription information, a medicine code, a standard medicine name and a disease type code.
In this alternative embodiment, each hospital has a corresponding medical record report template, which can be prepared in advance and is used to convert the text data in the medical record report into standard medical names and standard codes; for example, J18.900 corresponds to pneumonia, J45.900 corresponds to asthma, and K59.000 corresponds to constipation.
In this optional embodiment, the text keywords may be matched against the medical record report templates of the corresponding hospitals through a text similarity matching algorithm such as TF-IDF, BM25 or SimHash, and the text keywords on the medical record report images may be replaced with the standard names and standard codes of the matched medical record report templates, so as to obtain the matched standard report images.
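A minimal sketch of this template-matching step under stated assumptions: TF-IDF over character n-grams with cosine similarity is chosen purely as one of the similarity algorithms mentioned above, and the template structure (standard name plus standard code pairs) is an illustrative guess rather than the scheme's fixed format.

```python
# Hedged sketch: match OCR keywords against one hospital's medical record
# report template using TF-IDF cosine similarity (BM25 or SimHash would also
# fit the description). Template entries reuse the example codes in the text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def standardize_keywords(keywords, template_entries):
    """keywords: OCR keywords from one report image.
    template_entries: (standard_name, standard_code) pairs for one hospital."""
    names = [name for name, _ in template_entries]
    vectorizer = TfidfVectorizer(analyzer="char", ngram_range=(1, 2))
    matrix = vectorizer.fit_transform(names + keywords)
    template_vecs, keyword_vecs = matrix[: len(names)], matrix[len(names):]
    standardized = []
    for i, keyword in enumerate(keywords):
        scores = cosine_similarity(keyword_vecs[i], template_vecs)[0]
        best = int(scores.argmax())
        # Replace the raw keyword with the matched standard name and code.
        standardized.append((keyword, names[best], template_entries[best][1]))
    return standardized

template = [("pneumonia", "J18.900"), ("asthma", "J45.900"), ("constipation", "K59.000")]
print(standardize_keywords(["pneumonia dx", "bronchial asthma"], template))
```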
Therefore, the acquired medical record report forms of all hospitals can be subjected to unified standardized processing, so that accurate data support is provided for the follow-up process.
And S11, performing text detection on the standard report image set to obtain a text box coordinate feature set.
In an alternative embodiment, said text detecting said standard report image set to obtain a text box coordinate feature set comprises:
performing text detection on the standard report images according to a preset target detection model to obtain text boxes corresponding to the standard report images;
counting the vertex coordinates of each text box;
and constructing the text box coordinate features of the corresponding text box based on the vertex coordinates, and taking all the text box coordinate features as a text box coordinate feature set.
In this optional embodiment, text detection may be performed on the images in the standard report image set by using a preset target detection model, so as to obtain the text boxes corresponding to each standard report image. The target detection model may use PSENet, which can locate text of arbitrary shape; each text box on the standard report image is selected with a bounding box.
In this alternative embodiment, a text box refers to a detection box obtained by selecting a group of continuous text on a standard report image. It can be understood that a text box is the smallest circumscribed box of the corresponding group of continuous text; that is, the size of the text box is related to, and proportional to, the amount of text the group contains. The groups of continuous text and the text boxes correspond one to one, i.e. the number of groups of continuous text in a standard report image is the same as the number of text boxes. In actual operation, a group of continuous text may be selected with boxes of different shapes, for example rectangles or ellipses.
In this alternative embodiment, the continuous text contained in each text box may include one of the following types: characters, numbers, symbols or letters. Of course, the continuous text contained in a text box may also combine several of these types, for example characters and numbers, characters and letters, or numbers and letters, which is not limited herein.
In this alternative embodiment, taking a single text box as an example, it has four vertex coordinates (x_i1, y_i1), (x_i2, y_i2), (x_i3, y_i3), (x_i4, y_i4), where x_i and y_i denote the vertex abscissas and ordinates of the i-th text box, respectively. In this embodiment, the four vertex coordinates of each text box are put together as the text box coordinate feature of the corresponding text box, so the text box coordinate feature of the i-th text box is [x_i1, y_i1, x_i2, y_i2, x_i3, y_i3, x_i4, y_i4], and all the text box coordinate features are taken together as the text box coordinate feature set.
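The coordinate-feature assembly just described translates directly into code. The detector interface below is an assumption (PSENet outputs polygons, approximated here as four-vertex boxes); the feature layout follows the description above.

```python
# Sketch of step S11: collect the four vertex coordinates of each detected
# text box into an 8-element text box coordinate feature
# [x_i1, y_i1, x_i2, y_i2, x_i3, y_i3, x_i4, y_i4].
def box_coordinate_feature(vertices):
    """vertices: four (x, y) vertex tuples of one detected text box."""
    assert len(vertices) == 4
    feature = []
    for x, y in vertices:
        feature.extend([x, y])
    return feature

def build_coordinate_feature_set(detected_boxes):
    """detected_boxes: list of 4-vertex boxes for one standard report image."""
    return [box_coordinate_feature(box) for box in detected_boxes]

boxes = [[(10, 20), (110, 20), (110, 45), (10, 45)]]
print(build_coordinate_feature_set(boxes))  # [[10, 20, 110, 20, 110, 45, 10, 45]]
```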
Therefore, the text boxes on the standard report images in the standard report image set can be detected through the target detection model, so that the coordinate features of the text boxes are preliminarily constructed according to the vertex coordinates of the text boxes, and the quick positioning of the text data on the standard report images is realized.
And S12, calculating the text box coordinate feature set to construct a text feature data set.
In an alternative embodiment, said computing the text box coordinate feature set to construct a text feature data set comprises:
screening elements in the text box coordinate features to obtain a plurality of first coordinate features;
constructing a plurality of second coordinate features based on the first coordinate features;
Constructing a plurality of third coordinate features based on the second coordinate features;
and constructing a text feature box corresponding to the text box based on the second coordinate feature and the third coordinate feature, and taking text data included in all the text feature boxes as a text feature data set.
In this alternative embodiment, taking the text box coordinate feature of the i-th text box as an example, from the abscissas x_i1, x_i2, x_i3, x_i4 the smallest value is selected as c_i1 and the largest value as c_i2; then from the ordinates y_i1, y_i2, y_i3, y_i4 the smallest value is selected as c_i3 and the largest value as c_i4. That is, c_i1, c_i2, c_i3 and c_i4 satisfy the relations:
c_i1 = min(x_i1, x_i2, x_i3, x_i4); c_i2 = max(x_i1, x_i2, x_i3, x_i4);
c_i3 = min(y_i1, y_i2, y_i3, y_i4); c_i4 = max(y_i1, y_i2, y_i3, y_i4)
In this scheme, the obtained c_i1, c_i2, c_i3 and c_i4 are all taken as first coordinate features.
In this alternative embodiment, difference calculations are performed on the first coordinate features to construct f_i1, f_i2, f_i3 and f_i4 as second coordinate features; the specific relations they satisfy are given as formula images in the original publication.
In this alternative embodiment, difference calculations are performed on the second coordinate features to construct f_i5, f_i6, f_i7 and f_i8 as third coordinate features. The relations for f_i5 and f_i6 are likewise given as a formula image in the original publication; the remaining relations are:
f_i7 = f_i2 - f_i1; f_i8 = f_i4 - f_i3
In this alternative embodiment, the obtained second coordinate features and third coordinate features may be put together to form the text feature box of the i-th text box, i.e. the coordinate values of the text feature box of the i-th text box are, in order, [f_i1, f_i2, f_i3, f_i4, f_i5, f_i6, f_i7, f_i8]. In this scheme, the text data corresponding to all the text feature boxes are taken as the text feature data set.
In this alternative embodiment, each image in the standard report image set may be processed with a fixed quota of 100 text boxes, i.e. each image in the standard report image set corresponds to 100 text boxes and therefore to 100 text feature boxes. Images with fewer than 100 text boxes are brought up to this number through zero padding, that is, text feature boxes whose text data is 0 are added until the corresponding image contains 100 text feature boxes, so that every standard report is guaranteed to have enough text feature boxes to fully characterize the text data in the image.
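A small sketch of the zero-padding rule described above; the fixed quota of 100 boxes and the 8-value feature box come from the text, while the container types and the truncation behaviour are assumptions.

```python
# Pad (or truncate) the text feature boxes of one image to a fixed quota of
# 100, adding feature boxes whose coordinate values and text data are zero/empty.
NUM_FEATURE_BOXES = 100
FEATURE_DIM = 8  # [f_i1, ..., f_i8]

def pad_feature_boxes(feature_boxes, text_data):
    """feature_boxes: list of 8-value text feature boxes; text_data: matching strings."""
    feature_boxes = list(feature_boxes)[:NUM_FEATURE_BOXES]
    text_data = list(text_data)[:NUM_FEATURE_BOXES]
    while len(feature_boxes) < NUM_FEATURE_BOXES:
        feature_boxes.append([0.0] * FEATURE_DIM)  # zero-padded feature box
        text_data.append("")                       # empty text data
    return feature_boxes, text_data
```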
Therefore, more accurate text feature data can be obtained through calculation of the coordinate features of each text box in the coordinate feature set of the text box, and accuracy of training the neural network in a subsequent process is improved.
And S13, training a preset neural network model based on the text characteristic data set to obtain a report risk identification model.
In an optional embodiment, training the preset neural network model based on the text feature data set to obtain the report risk identification model includes:
Extracting semantic information of the text feature data set according to a preset language characterization model to obtain a text semantic data set;
training a preset neural network model based on the text semantic data set to obtain a report risk identification model.
In this optional embodiment, a large number of medical record report images may be collected first, and the text data corresponding to their text feature boxes, obtained through steps S10, S11 and S12, may be used as a training set to pretrain a preset language characterization model. The preset language characterization model may be a BERT model; BERT is a language characterization model pretrained on a Transformer encoder that can be trained on large-scale unlabeled text data, and semantic data containing rich semantic information can be obtained from the text feature data set according to this language characterization model.
In this optional embodiment, pretraining means that the BERT model is trained first, before the preset neural network model, so that the semantic information of the text feature data set is extracted. When the preset neural network is subsequently trained, only the trained parameters of the BERT model need to be loaded and the network structure and parameters of the preset neural network model fine-tuned, so the training of the preset neural network model can be completed quickly; meanwhile, a more accurate report risk recognition model can be obtained with the resulting text semantic data set.
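For the semantic-extraction step, a hedged sketch using the Hugging Face transformers library is given below; the model id bert-base-chinese and the [CLS] pooling are assumptions made for illustration, since the text only names BERT.

```python
# Hypothetical sketch: encode the text data of each text feature box into a
# semantic vector with a pretrained BERT encoder (model id and pooling assumed).
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
encoder = BertModel.from_pretrained("bert-base-chinese")

def text_semantic_vectors(texts):
    """texts: text data of the (padded) feature boxes of one report image."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        outputs = encoder(**batch)
    return outputs.last_hidden_state[:, 0, :]  # one [CLS] vector per text box
```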
In this alternative embodiment, different hospitals may be assigned different hospital IDs, which may be letters, numbers or symbols, respectively, and the solution is not limited in particular. Setting corresponding labels for text semantic data in text semantic data sets corresponding to different hospitals according to the IDs of the hospitals to which the text semantic data sets belong in a manual labeling mode, and taking the text semantic data with the labels as a text semantic label set.
In this alternative embodiment, a preset neural network model may be trained based on the text semantic data set and the text semantic label set to obtain a report risk recognition model, where the preset neural network model may be an MLP (Multi-layer Perceptron), and the MLP model includes multiple hidden layers, so that accurate classification of data may be achieved.
In this optional embodiment, the patient basic information and medical record information in the text semantic data set may be used as Key, the test information and examination information as Query, and the prescription information as Value, which are input into the preset neural network model, while the data in the text semantic label set are used as the expected output to train the neural network model. Specifically, the correlation between Query and Key is first calculated through a relation given as a formula image in the original publication; the calculated result is then further processed in the neural network model to obtain an output result, and the output result is normalized to obtain a normalized result, which represents the probability that the corresponding text semantic data belongs to each hospital; the hospital ID to which the corresponding text semantic data should belong is then judged from these probability values. The normalized result z_i of the i-th text semantic data satisfies the relation:
z_i = softmax(Sim_i) = exp(Sim_i) / (exp(Sim_1) + exp(Sim_2) + ... + exp(Sim_N))
where Sim_i refers to the output result of the neural network model, N represents the total number of text semantic data, and softmax, also called the normalized exponential function, maps the outputs of multiple neurons to classification probabilities in the (0, 1) interval so as to perform multi-classification.
In this optional embodiment, the error loss between the output of the neural network model and the corresponding label may be calculated by the cross entropy loss function, and the error loss obtained by each training may be gradually reduced by optimizing the structure and parameters of the neural network model, and finally, when the error loss is 0, a trained neural network model may be obtained.
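A minimal training sketch under stated assumptions: an MLP classifier over mean-pooled text semantic vectors with cross-entropy loss, as described above. The layer sizes, pooling, optimizer and number of hospitals are illustrative choices, and the Query/Key/Value correlation step is omitted because its relation is given only as a formula image in the original.

```python
# Hedged sketch of step S13: train an MLP over pooled text semantic vectors to
# predict the hospital ID, using cross-entropy loss. Dimensions are assumptions.
import torch
import torch.nn as nn

class ReportRiskModel(nn.Module):
    def __init__(self, in_dim=768, hidden=256, num_hospitals=10):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_hospitals),  # one logit per hospital ID
        )

    def forward(self, semantic_vectors):       # (batch, num_boxes, in_dim)
        pooled = semantic_vectors.mean(dim=1)  # pool over the 100 feature boxes
        return self.mlp(pooled)

def train_step(model, optimizer, semantic_vectors, hospital_labels):
    criterion = nn.CrossEntropyLoss()          # error between output and labels
    logits = model(semantic_vectors)
    loss = criterion(logits, hospital_labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```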
Therefore, the report risk recognition model for accurately detecting the medical record report can be trained by pre-training the text characteristic data set and combining with a preset neural network model.
S14, detecting the medical record report image to be detected based on the report risk recognition model to obtain a wind control recognition result.
In an optional embodiment, the detecting the medical record report image to be detected based on the report risk recognition model to obtain the wind control recognition result includes:
detecting a medical record report image to be detected based on the report risk identification model to obtain medical record similarity between the medical record report to be detected and medical record report templates of all hospitals;
selecting the hospital with the largest medical record similarity as the home hospital of the medical record report to be detected;
judging whether the home hospital is consistent with the real hospital to which the medical record report to be detected belongs, if so, the wind control identification result is safe, and if not, the wind control identification result is high risk, and further investigation and verification are required according to a preset mode.
In this optional embodiment, for the medical record report image to be detected, the medical record similarity between the report to be detected and the medical record report templates of all hospitals can be obtained by extracting corresponding text semantic data and inputting all obtained text semantic data into the report risk recognition model, and finally, the hospital with the largest medical record similarity is selected as the home hospital of the medical record report to be detected.
For example, if the medical record similarity between the medical record report image a to be detected and the medical record report templates of the three hospitals B, C, D is 0.8, 0.6 and 0.3, the medical record report image a is detected to be attributed to the hospital B.
In this optional embodiment, whether the home hospital is consistent with the real hospital to which the medical record report to be detected belongs or not may be determined according to the obtained home hospital, if so, the wind control identification result is safe, and may pass the security verification, and if not, the wind control identification result is high risk, and may notify the staff of the insurance claim to perform further investigation and verification for the insurance claim of the client, so as to prevent the occurrence of the insurance fraud event.
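A sketch of the detection step under the same assumptions as the training sketch: the softmax over the model's per-hospital logits is treated as the medical record similarity, the argmax hospital is taken as the home hospital, and a mismatch with the claimed hospital is flagged as high risk.

```python
# Hedged sketch of step S14: score one report image against all hospitals and
# compare the home hospital with the hospital it claims to come from.
import torch

def detect_report(model, semantic_vectors, claimed_hospital_id):
    """semantic_vectors: (num_boxes, in_dim) tensor for the report to be detected."""
    with torch.no_grad():
        logits = model(semantic_vectors.unsqueeze(0))
        similarities = torch.softmax(logits, dim=-1)[0]  # per-hospital similarity
    home_hospital_id = int(similarities.argmax())
    if home_hospital_id == claimed_hospital_id:
        return "safe", home_hospital_id
    return "high risk: further investigation and verification required", home_hospital_id
```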
Therefore, the report risk identification model can be used for realizing the rapid detection of the report image of the medical record to be detected, and further responding to the detection result, thereby effectively preventing the occurrence of insurance fraud events.
Referring to fig. 2, fig. 2 is a functional block diagram of a preferred embodiment of the medical record report form detection device based on artificial intelligence. The medical record report detection device 11 based on artificial intelligence comprises an acquisition module 110, an obtaining module 111, a calculation module 112, a training module 113 and a detection module 114. The units/modules referred to herein are series of computer readable instructions, stored in the memory 12, that can be executed by the processor 13 and perform fixed functions. In the present embodiment, the functions of the respective units/modules will be described in detail in the following embodiments.
In an alternative embodiment, the acquisition module 110 is configured to perform normalization processing on the acquired medical record report images to obtain a standard report image set.
In an alternative embodiment, the normalizing the acquired medical record report image to obtain the standard report image set includes:
acquiring medical record report images of a hospital to obtain a hospital report image set, wherein the hospital report image set corresponds to the hospital one by one;
extracting text keywords on the medical record report sheet image;
and matching the text keywords with medical record report templates of the corresponding hospitals to obtain standard report images, and taking all the standard report images as a standard report image set of the corresponding hospitals.
In an alternative embodiment, the obtaining module 111 is configured to perform text detection on the standard report image set to obtain a text box coordinate feature set.
In an alternative embodiment, said text detecting said standard report image set to obtain a text box coordinate feature set comprises:
performing text detection on the standard report images according to a preset target detection model to obtain text boxes corresponding to the standard report images;
Counting the vertex coordinates of each text box;
and constructing the text box coordinate features of the corresponding text box based on the vertex coordinates, and taking all the text box coordinate features as a text box coordinate feature set.
In an alternative embodiment, the computing module 112 is configured to compute the text box coordinate feature set to construct a text feature data set.
In an alternative embodiment, said computing the text box coordinate feature set to construct a text feature data set comprises:
screening elements in the text box coordinate features to obtain a plurality of first coordinate features;
constructing a plurality of second coordinate features based on the first coordinate features;
constructing a plurality of third coordinate features based on the second coordinate features;
and constructing a text feature box corresponding to the text box based on the second coordinate feature and the third coordinate feature, and taking text data included in all the text feature boxes as a text feature data set.
In this alternative embodiment, taking the text box coordinate feature of the i-th text box as an example, from the abscissas x_i1, x_i2, x_i3, x_i4 the smallest value is selected as c_i1 and the largest value as c_i2; then from the ordinates y_i1, y_i2, y_i3, y_i4 the smallest value is selected as c_i3 and the largest value as c_i4. That is, c_i1, c_i2, c_i3 and c_i4 satisfy the relations:
c_i1 = min(x_i1, x_i2, x_i3, x_i4); c_i2 = max(x_i1, x_i2, x_i3, x_i4);
c_i3 = min(y_i1, y_i2, y_i3, y_i4); c_i4 = max(y_i1, y_i2, y_i3, y_i4)
In this scheme, the obtained c_i1, c_i2, c_i3 and c_i4 are all taken as first coordinate features.
In this alternative embodiment, difference calculations are performed on the first coordinate features to construct f_i1, f_i2, f_i3 and f_i4 as second coordinate features; the specific relations they satisfy are given as formula images in the original publication.
In this alternative embodiment, difference calculations are performed on the second coordinate features to construct f_i5, f_i6, f_i7 and f_i8 as third coordinate features. The relations for f_i5 and f_i6 are likewise given as a formula image in the original publication; the remaining relations are:
f_i7 = f_i2 - f_i1; f_i8 = f_i4 - f_i3
In this alternative embodiment, the obtained second coordinate features and third coordinate features may be put together to form the text feature box of the i-th text box, i.e. the coordinate values of the text feature box of the i-th text box are, in order, [f_i1, f_i2, f_i3, f_i4, f_i5, f_i6, f_i7, f_i8]. In this scheme, the text data corresponding to all the text feature boxes are taken as the text feature data set.
In this alternative embodiment, each image in the standard report image set may be processed with a fixed quota of 100 text boxes, i.e. each image in the standard report image set corresponds to 100 text boxes and therefore to 100 text feature boxes. Images with fewer than 100 text boxes are brought up to this number through zero padding, that is, text feature boxes whose text data is 0 are added until the corresponding image contains 100 text feature boxes, so that every standard report is guaranteed to have enough text feature boxes to fully characterize the text data in the image.
In an alternative embodiment, the training module 113 is configured to train a preset neural network model based on the text feature data set to obtain a report risk identification model.
In an optional embodiment, training the preset neural network model based on the text feature data set to obtain the report risk identification model includes:
extracting semantic information of the text feature data set according to a preset language characterization model to obtain a text semantic data set;
training a preset neural network model based on the text semantic data set to obtain a report risk identification model.
In this optional embodiment, a large number of medical record report images may be collected first, and the text data corresponding to their text feature boxes, obtained through steps S10, S11 and S12, may be used as a training set to pretrain a preset language characterization model. The preset language characterization model may be a BERT model; BERT is a language characterization model pretrained on a Transformer encoder that can be trained on large-scale unlabeled text data, and semantic data containing rich semantic information can be obtained from the text feature data set according to this language characterization model.
In this optional embodiment, pretraining means that the BERT model is trained first, before the preset neural network model, so that the semantic information of the text feature data set is extracted. When the preset neural network is subsequently trained, only the trained parameters of the BERT model need to be loaded and the network structure and parameters of the preset neural network model fine-tuned, so the training of the preset neural network model can be completed quickly; meanwhile, a more accurate report risk recognition model can be obtained with the resulting text semantic data set.
In this alternative embodiment, different hospitals may be assigned different hospital IDs, which may be letters, numbers or symbols, respectively, and the solution is not limited in particular. Setting corresponding labels for text semantic data in text semantic data sets corresponding to different hospitals according to the IDs of the hospitals to which the text semantic data sets belong in a manual labeling mode, and taking the text semantic data with the labels as a text semantic label set.
In this alternative embodiment, a preset neural network model may be trained based on the text semantic data set and the text semantic label set to obtain a report risk recognition model, where the preset neural network model may be an MLP (Multi-layer Perceptron), and the MLP model includes multiple hidden layers, so that accurate classification of data may be achieved.
In this optional embodiment, the patient basic information and medical record information in the text semantic data set may be used as Key, the test information and examination information as Query, and the prescription information as Value, which are input into the preset neural network model, while the data in the text semantic label set are used as the expected output to train the neural network model. Specifically, the correlation between Query and Key is first calculated through a relation given as a formula image in the original publication; the calculated result is then further processed in the neural network model to obtain an output result, and the output result is normalized to obtain a normalized result, which represents the probability that the corresponding text semantic data belongs to each hospital; the hospital ID to which the corresponding text semantic data should belong is then judged from these probability values. The normalized result z_i of the i-th text semantic data satisfies the relation:
z_i = softmax(Sim_i) = exp(Sim_i) / (exp(Sim_1) + exp(Sim_2) + ... + exp(Sim_N))
where Sim_i refers to the output result of the neural network model, N represents the total number of text semantic data, and softmax, also called the normalized exponential function, maps the outputs of multiple neurons to classification probabilities in the (0, 1) interval so as to perform multi-classification.
In this optional embodiment, the error loss between the output of the neural network model and the corresponding label may be calculated by the cross entropy loss function, and the error loss obtained by each training may be gradually reduced by optimizing the structure and parameters of the neural network model, and finally, when the error loss is 0, a trained neural network model may be obtained.
In an alternative embodiment, the detection module 114 is configured to detect a medical record report image to be detected based on the report risk recognition model to obtain a wind control recognition result.
In an optional embodiment, the detecting the medical record report image to be detected based on the report risk recognition model to obtain the wind control recognition result includes:
detecting a medical record report image to be detected based on the report risk identification model to obtain medical record similarity between the medical record report to be detected and medical record report templates of all hospitals;
selecting the hospital with the largest medical record similarity as the home hospital of the medical record report to be detected;
judging whether the home hospital is consistent with the real hospital to which the medical record report to be detected belongs, if so, the wind control identification result is safe, and if not, the wind control identification result is high risk, and further investigation and verification are required according to a preset mode.
In this optional embodiment, for the medical record report image to be detected, the medical record similarity between the report to be detected and the medical record report templates of all hospitals can be obtained by extracting corresponding text semantic data and inputting all obtained text semantic data into the report risk recognition model, and finally, the hospital with the largest medical record similarity is selected as the home hospital of the medical record report to be detected.
For example, if the medical record similarity between the medical record report image a to be detected and the medical record report templates of the three hospitals B, C, D is 0.8, 0.6 and 0.3, the medical record report image a is detected to be attributed to the hospital B.
In this optional embodiment, whether the home hospital is consistent with the real hospital to which the medical record report to be detected belongs or not may be determined according to the obtained home hospital, if so, the wind control identification result is safe, and may pass the security verification, and if not, the wind control identification result is high risk, and may notify the staff of the insurance claim to perform further investigation and verification for the insurance claim of the client, so as to prevent the occurrence of the insurance fraud event.
According to the technical scheme, the text feature data set can be constructed by extracting the text box features of the acquired medical record report images, and training is carried out by combining with the neural network model, so that the identification accuracy of the medical record report fraud risk in the insurance claim settlement link can be effectively improved.
Fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device 1 comprises a memory 12 and a processor 13. The memory 12 is configured to store computer readable instructions and the processor 13 is configured to execute the computer readable instructions stored in the memory to implement the artificial intelligence based medical record report detection method according to any of the above embodiments.
In an alternative embodiment, the electronic device 1 further comprises a bus, a computer program stored in said memory 12 and executable on said processor 13, such as an artificial intelligence based medical record report detection program.
Fig. 3 shows only an electronic device 1 with a memory 12 and a processor 13, it being understood by a person skilled in the art that the structure shown in fig. 3 does not constitute a limitation of the electronic device 1, and may comprise fewer or more components than shown, or may combine certain components, or a different arrangement of components.
In connection with fig. 1, the memory 12 in the electronic device 1 stores a plurality of computer readable instructions to implement an artificial intelligence based medical record report form detection method, the processor 13 being executable to implement:
Carrying out standardized processing on the acquired medical record report images to obtain a standard report image set;
performing text detection on the standard report image set to obtain a text box coordinate feature set;
calculating the text box coordinate feature set to construct a text feature data set;
training a preset neural network model based on the text feature data set to obtain a report risk identification model;
and detecting the medical record report image to be detected based on the report risk recognition model to obtain a wind control recognition result.
Specifically, the specific implementation method of the above instructions by the processor 13 may refer to the description of the relevant steps in the corresponding embodiment of fig. 1, which is not repeated herein.
It will be appreciated by those skilled in the art that the schematic diagram is merely an example of the electronic device 1 and does not constitute a limitation of the electronic device 1, the electronic device 1 may be a bus type structure, a star type structure, the electronic device 1 may further comprise more or less other hardware or software than illustrated, or a different arrangement of components, e.g. the electronic device 1 may further comprise an input-output device, a network access device, etc.
It should be noted that the electronic device 1 is only used as an example, and other electronic products that may be present in the present application or may be present in the future are also included in the scope of the present application and are incorporated herein by reference.
The memory 12 includes at least one type of readable storage medium, which may be non-volatile or volatile. The readable storage medium includes flash memory, a removable hard disk, a multimedia card, a card type memory (e.g., SD or DX memory, etc.), a magnetic memory, a magnetic disk, an optical disk, etc. The memory 12 may in some embodiments be an internal storage unit of the electronic device 1, such as a mobile hard disk of the electronic device 1. The memory 12 may in other embodiments also be an external storage device of the electronic device 1, such as a plug-in mobile hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card) or the like, which are provided on the electronic device 1. The memory 12 may be used not only for storing application software installed in the electronic device 1 and various types of data, such as codes of medical record report sheet detection programs based on artificial intelligence, but also for temporarily storing data that has been output or is to be output.
The processor 13 may be comprised of integrated circuits in some embodiments, for example, a single packaged integrated circuit, or may be comprised of multiple integrated circuits packaged with the same or different functions, including one or more central processing units (Central Processing unit, CPU), microprocessors, digital processing chips, graphics processors, a combination of various control chips, and the like. The processor 13 is a Control Unit (Control Unit) of the electronic device 1, connects the individual components of the entire electronic device 1 with various interfaces and lines, executes or executes programs or modules stored in the memory 12 (e.g., executes an artificial intelligence-based medical record report sheet detection program, etc.), and invokes data stored in the memory 12 to perform various functions of the electronic device 1 and process data.
The processor 13 executes the operating system of the electronic device 1 and various types of applications installed. The processor 13 executes the application program to implement the steps of the various embodiments of the artificial intelligence based medical record report detection method described above, such as the steps shown in FIG. 1.
Illustratively, the computer program may be split into one or more units/modules, which are stored in the memory 12 and executed by the processor 13 to complete the present application. The one or more units/modules may be a series of computer readable instruction segments capable of performing specified functions, which instruction segments describe the execution of the computer program in the electronic device 1. For example, the computer program may be divided into an acquisition module 110, an obtaining module 111, a calculation module 112, a training module 113 and a detection module 114.
The integrated units implemented in the form of software functional modules described above may be stored in a computer readable storage medium. The software functional modules are stored in a storage medium and include instructions for causing a computer device (which may be a personal computer, a computer device, or a network device, etc.) or a processor (processor) to perform portions of the artificial intelligence-based medical record report detection method according to various embodiments of the present application.
The integrated units/modules of the electronic device 1, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the present application may implement all or part of the flow of the methods of the above embodiments by instructing the relevant hardware through a computer program, where the computer program may be stored in a computer-readable storage medium, and the computer program, when executed by a processor, may implement the steps of each of the method embodiments described above.
The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory, other memories, and the like.
Further, the computer-readable storage medium may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function, and the like; the storage data area may store data created from the use of blockchain nodes, and the like.
The blockchain referred to in this application is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms and encryption algorithms. A blockchain is essentially a decentralized database: a chain of data blocks generated in sequence and linked by cryptographic methods, where each data block contains a batch of network transaction information used to verify the validity of that information (anti-counterfeiting) and to generate the next block. The blockchain may include a blockchain underlying platform, a platform product services layer, an application services layer, and the like.
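By way of illustration, the chained structure described above can be sketched in a few lines of Python; the block fields and the SHA-256 hash shown here are generic assumptions and are not taken from the application itself.

import hashlib
import json
import time

def make_block(transactions, previous_hash):
    """Create one data block whose hash covers its contents and the previous block's hash."""
    block = {
        "timestamp": time.time(),
        "transactions": transactions,      # a batch of network transaction information
        "previous_hash": previous_hash,
    }
    payload = json.dumps(block, sort_keys=True).encode("utf-8")
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

# Each new block is generated from the hash of the block before it, so tampering
# with any earlier block invalidates every block that follows it.
genesis = make_block(["genesis"], previous_hash="0" * 64)
second = make_block(["tx-1", "tx-2"], previous_hash=genesis["hash"])
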
The bus may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one arrow is shown in FIG. 3, but this does not mean that there is only one bus or only one type of bus. The bus is arranged to enable connection and communication between the memory 12, the at least one processor 13, and other components.
The embodiment of the present application further provides a computer-readable storage medium (not shown) storing computer-readable instructions which, when executed by a processor in an electronic device, implement the artificial intelligence-based medical record report detection method according to any one of the above embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is merely a logical function division, and there may be other manners of division when actually implemented.
The modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical modules, i.e., may be located in one place, or may be distributed over a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional module in each embodiment of the present application may be integrated into one processing unit, or each module may exist alone physically, or two or more modules may be integrated into one unit. The integrated units may be implemented in the form of hardware, or in the form of hardware plus software functional modules.
Furthermore, it is evident that the word "comprising" does not exclude other modules or steps, and the singular does not exclude the plural. A plurality of modules or means recited in the specification may also be implemented by a single module or means through software or hardware. Terms such as first and second are used to denote names and do not denote any particular order.
Finally, it should be noted that the above embodiments are merely intended to illustrate the technical solution of the present application and not to limit it. Although the present application has been described in detail with reference to the preferred embodiments, those skilled in the art should understand that the technical solution of the present application may be modified or equivalently substituted without departing from the spirit and scope of the technical solution of the present application.

Claims (10)

1. An artificial intelligence-based medical record report detection method, which is characterized by comprising the following steps:
carrying out standardized processing on the acquired medical record report images to obtain a standard report image set;
performing text detection on the standard report image set to obtain a text box coordinate feature set;
calculating the text box coordinate feature set to construct a text feature data set;
training a preset neural network model based on the text feature data set to obtain a report risk identification model;
and detecting the medical record report image to be detected based on the report risk recognition model to obtain a wind control recognition result.
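For illustration only, the five steps of claim 1 can be written as a single Python flow; the claim does not fix any concrete implementation, so every callable passed in below is a hypothetical placeholder.

def detect_medical_record_report(raw_images, image_to_check,
                                 standardize, detect_text_boxes,
                                 build_text_features, train_risk_model,
                                 run_risk_detection):
    """Compose the five claimed steps; the callables are hypothetical placeholders."""
    standard_image_set = standardize(raw_images)                          # standardized processing
    box_coordinate_features = detect_text_boxes(standard_image_set)       # text detection
    text_feature_data_set = build_text_features(box_coordinate_features)  # feature construction
    risk_model = train_risk_model(text_feature_data_set)                  # model training
    return run_risk_detection(risk_model, image_to_check)                 # wind control recognition result
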
2. The medical record report detection method based on artificial intelligence according to claim 1, wherein performing standardized processing on the acquired medical record report images to obtain a standard report image set comprises:
acquiring medical record report images of each hospital to obtain hospital report image sets, wherein the hospital report image sets correspond to the hospitals one to one;
extracting text keywords from the medical record report images;
and matching the text keywords with the medical record report templates of the corresponding hospital to obtain standard report images, and taking all the standard report images as the standard report image set of the corresponding hospital.
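For illustration, the template matching in this claim might be realized as a simple keyword-overlap score; the claim does not specify the matching rule, so the scoring and the template contents below are assumptions.

def match_report_template(extracted_keywords, hospital_templates):
    """Pick the report template whose expected keyword set best overlaps the extracted keywords.

    hospital_templates maps a template name to a set of keywords expected on that template.
    """
    keywords = set(extracted_keywords)
    best_name, best_score = None, -1
    for name, template_keywords in hospital_templates.items():
        score = len(keywords & set(template_keywords))
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Hypothetical usage with made-up templates of one hospital:
templates = {
    "lab_report": {"specimen", "reference range", "result"},
    "discharge_summary": {"diagnosis", "chief complaint", "treatment"},
}
print(match_report_template(["chief complaint", "diagnosis", "advice"], templates))  # -> "discharge_summary"
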
3. The artificial intelligence based medical record report detection method of claim 1, wherein the text detecting the standard report image set to obtain a text box coordinate feature set comprises:
performing text detection on the standard report images according to a preset target detection model to obtain text boxes corresponding to the standard report images;
collecting statistics on the vertex coordinates of each text box;
and constructing the text box coordinate features of the corresponding text box based on the vertex coordinates, and taking all the text box coordinate features as a text box coordinate feature set.
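For illustration, one way to turn the four vertex coordinates of a detected text box into a flat coordinate feature is sketched below; the exact feature layout is an assumption, since the claim does not fix it.

def box_coordinate_feature(vertices):
    """Build a coordinate feature for one detected text box.

    vertices holds the four (x, y) vertex coordinates of the detected quadrilateral,
    e.g. [(x1, y1), (x2, y2), (x3, y3), (x4, y4)]. The feature keeps the four vertices
    plus the axis-aligned extremes as one flat vector.
    """
    xs = [x for x, _ in vertices]
    ys = [y for _, y in vertices]
    flat = [coordinate for point in vertices for coordinate in point]
    return flat + [min(xs), min(ys), max(xs), max(ys)]

# Hypothetical usage on one text box returned by the target detection model:
feature = box_coordinate_feature([(120, 40), (380, 42), (382, 78), (118, 76)])
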
4. The artificial intelligence based medical record report detection method as set forth in claim 3, wherein the computing the text box coordinate feature set to construct a text feature data set includes:
screening elements in the text box coordinate features to obtain a plurality of first coordinate features;
constructing a plurality of second coordinate features based on the first coordinate features;
constructing a plurality of third coordinate features based on the second coordinate features;
and constructing a text feature box corresponding to the text box based on the second coordinate feature and the third coordinate feature, and taking text data included in all the text feature boxes as a text feature data set.
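The claim does not define the first, second and third coordinate features, so the sketch below shows only one plausible reading: screening the x and y elements out of the flat coordinate feature, deriving the axis-aligned rectangle from them, and then its size and centre. Every choice here is an assumption.

def build_text_feature_box(box_feature):
    """Derive one text feature box from a flat coordinate feature (a list alternating x and y values)."""
    # First coordinate features: screen the x elements and the y elements out of the flat feature.
    xs, ys = box_feature[0::2], box_feature[1::2]

    # Second coordinate features: the axis-aligned rectangle built from the screened elements.
    x_min, x_max, y_min, y_max = min(xs), max(xs), min(ys), max(ys)

    # Third coordinate features: width, height and centre derived from the second features.
    width, height = x_max - x_min, y_max - y_min
    center = ((x_min + x_max) / 2.0, (y_min + y_max) / 2.0)

    # The text feature box combines the second and third coordinate features.
    return {"rect": (x_min, y_min, x_max, y_max), "size": (width, height), "center": center}
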
5. The medical record report detection method based on artificial intelligence according to claim 1, wherein training a preset neural network model based on the text feature data set to obtain a report risk recognition model comprises:
extracting semantic information of the text feature data set according to a preset language characterization model to obtain a text semantic data set;
training a preset neural network model based on the text semantic data set to obtain a report risk identification model.
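For illustration, the semantic extraction could be performed with a BERT-style encoder; the claim only names a preset language characterization model, so the use of the Hugging Face transformers library and the bert-base-chinese checkpoint below are assumptions made for this sketch.

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")   # assumed checkpoint
encoder = AutoModel.from_pretrained("bert-base-chinese")

def extract_semantics(text_feature_data_set):
    """Encode each text snippet of the text feature data set into one semantic vector."""
    semantic_vectors = []
    with torch.no_grad():
        for text in text_feature_data_set:
            inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=128)
            outputs = encoder(**inputs)
            # Mean-pool the token embeddings into a fixed-size vector per snippet.
            semantic_vectors.append(outputs.last_hidden_state.mean(dim=1).squeeze(0))
    return torch.stack(semantic_vectors)   # the text semantic data set, shape (N, hidden_size)
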
6. The medical record report detection method based on artificial intelligence according to claim 5, wherein training a preset neural network model based on the text semantic data set to obtain a report risk recognition model comprises:
assigning different hospital IDs to different hospitals;
setting labels on the text semantic data set based on the hospital IDs to obtain a text semantic label set;
and inputting the text semantic data set as a training set into a preset neural network model to obtain an output result, and calculating an error between the output result and the text semantic label set according to a cross entropy loss function to iteratively train the neural network model to obtain a report risk recognition model.
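A minimal PyTorch sketch of this training step follows; the two-layer network stands in for the preset neural network model, whose architecture the claim does not specify, while the cross-entropy loss over hospital-ID labels follows the claim.

import torch
from torch import nn

def train_report_risk_model(text_semantic_data_set, hospital_id_labels, num_hospitals,
                            hidden_dim=256, epochs=10, lr=1e-3):
    """Iteratively train a classifier over hospital IDs with a cross-entropy loss.

    text_semantic_data_set: float tensor of shape (N, feature_dim).
    hospital_id_labels: long tensor of shape (N,) holding each sample's hospital ID.
    """
    feature_dim = text_semantic_data_set.shape[1]
    model = nn.Sequential(
        nn.Linear(feature_dim, hidden_dim),
        nn.ReLU(),
        nn.Linear(hidden_dim, num_hospitals),
    )
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)

    for _ in range(epochs):
        optimizer.zero_grad()
        logits = model(text_semantic_data_set)
        loss = criterion(logits, hospital_id_labels)   # error against the hospital-ID label set
        loss.backward()
        optimizer.step()
    return model
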
7. The medical record report detection method based on artificial intelligence according to claim 1, wherein the detecting the medical record report image to be detected based on the report risk recognition model to obtain the wind control recognition result comprises:
detecting a medical record report image to be detected based on the report risk identification model to obtain medical record similarity between the medical record report to be detected and medical record report templates of all hospitals;
selecting the hospital with the largest medical record similarity as the home hospital of the medical record report to be detected;
judging whether the home hospital is consistent with the real hospital to which the medical record report to be detected belongs; if so, the wind control identification result is safe; if not, the wind control identification result is high risk, and further investigation and verification are required according to a preset mode.
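For illustration, the selection and consistency check reduce to an argmax over the per-hospital similarities followed by a comparison; the hospital names, scores and result strings below are hypothetical.

def wind_control_check(similarity_by_hospital, real_hospital):
    """Pick the hospital with the highest medical record similarity and compare it with the real one.

    similarity_by_hospital maps a hospital name to the similarity score produced by the
    report risk identification model.
    """
    home_hospital = max(similarity_by_hospital, key=similarity_by_hospital.get)
    if home_hospital == real_hospital:
        return {"home_hospital": home_hospital, "result": "safe"}
    return {"home_hospital": home_hospital, "result": "high risk"}   # needs further investigation

# Hypothetical usage:
scores = {"Hospital A": 0.91, "Hospital B": 0.42, "Hospital C": 0.13}
print(wind_control_check(scores, real_hospital="Hospital B"))        # -> high risk
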
8. An artificial intelligence-based medical record report detection device, characterized in that the device comprises a collection module, an acquisition module, a calculation module, a training module and a detection module:
the acquisition module is used for carrying out standardized processing on the acquired medical record report images to obtain a standard report image set;
the acquisition module is used for carrying out text detection on the standard report image set to acquire a text box coordinate feature set;
the computing module is used for computing the text box coordinate feature set to construct a text feature data set;
the training module is used for training a preset neural network model based on the text characteristic data set to obtain a report risk identification model;
and the detection module is used for detecting the medical record report image to be detected based on the report risk identification model to obtain a wind control identification result.
9. An electronic device, the electronic device comprising:
a memory storing computer readable instructions; and
a processor executing the computer readable instructions stored in the memory to implement the artificial intelligence based medical record report detection method of any one of claims 1 to 7.
10. A computer readable storage medium having computer readable instructions stored thereon, which when executed by a processor, implement the artificial intelligence based medical record report detection method of any of claims 1 to 7.
CN202211566597.7A 2022-12-07 2022-12-07 Medical record report form detection method, device, equipment and medium based on artificial intelligence Pending CN116311313A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211566597.7A CN116311313A (en) 2022-12-07 2022-12-07 Medical record report form detection method, device, equipment and medium based on artificial intelligence

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211566597.7A CN116311313A (en) 2022-12-07 2022-12-07 Medical record report form detection method, device, equipment and medium based on artificial intelligence

Publications (1)

Publication Number Publication Date
CN116311313A true CN116311313A (en) 2023-06-23

Family

ID=86796584

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211566597.7A Pending CN116311313A (en) 2022-12-07 2022-12-07 Medical record report form detection method, device, equipment and medium based on artificial intelligence

Country Status (1)

Country Link
CN (1) CN116311313A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116562271A (en) * 2023-07-10 2023-08-08 之江实验室 Quality control method and device for electronic medical record, storage medium and electronic equipment
CN116562271B (en) * 2023-07-10 2023-10-10 之江实验室 Quality control method and device for electronic medical record, storage medium and electronic equipment

Similar Documents

Publication Publication Date Title
CN110910976A (en) Medical record detection method, device, equipment and storage medium
CN112257578B (en) Face key point detection method and device, electronic equipment and storage medium
CN112988963B (en) User intention prediction method, device, equipment and medium based on multi-flow nodes
CN111860377A (en) Live broadcast method and device based on artificial intelligence, electronic equipment and storage medium
CN111986744B (en) Patient interface generation method and device for medical institution, electronic equipment and medium
CN113435202A (en) Product recommendation method and device based on user portrait, electronic equipment and medium
CN114462412B (en) Entity identification method, entity identification device, electronic equipment and storage medium
CN114037545A (en) Client recommendation method, device, equipment and storage medium
CN112885423A (en) Disease label detection method and device, electronic equipment and storage medium
CN115471775A (en) Information verification method, device and equipment based on screen recording video and storage medium
CN116311313A (en) Medical record report form detection method, device, equipment and medium based on artificial intelligence
CN115222443A (en) Client group division method, device, equipment and storage medium
CN113470775B (en) Information acquisition method, device, equipment and storage medium
CN116741358A (en) Inquiry registration recommendation method, inquiry registration recommendation device, inquiry registration recommendation equipment and storage medium
CN113838579B (en) Medical data abnormality detection method, device, equipment and storage medium
CN113850260B (en) Key information extraction method and device, electronic equipment and readable storage medium
CN114722146A (en) Supply chain asset checking method, device, equipment and medium based on artificial intelligence
CN114780688A (en) Text quality inspection method, device and equipment based on rule matching and storage medium
CN114385815A (en) News screening method, device, equipment and storage medium based on business requirements
Dahl et al. Applications of machine learning in tabular document digitisation
CN112530585A (en) Data processing method and device based on medical institution, computer equipment and medium
Alsawwaf et al. In your face: Person identification through ratios of distances between facial features
CN111696637A (en) Quality detection method and related device for medical record data
JP7355303B2 (en) Receipt data significance determination program, receipt data significance determination method, and information processing device
CN113420677B (en) Method, device, electronic equipment and storage medium for determining reasonable image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination